
Data Engineer, Cloud

Job in New York, New York County, New York, 10261, USA
Listing for: Karbone
Full Time position
Listed on 2026-02-04
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Job Description
Location: New York

Karbone Inc. is an award-winning liquidity services provider for energy transition and environmental commodity markets. Since 2008, we have offered integrated and innovative revenue hedging, risk management, and market advisory solutions to a global suite of clients across energy markets. Our teams are proudly ranked first among their peers and are dedicated to our core mission: providing our clients and partners with the market access, liquidity solutions, and commercial insight they need to succeed in the energy transition.


Position Overview:

We are seeking a Cloud Data Engineer to help build our cloud-based IT infrastructure, data foundation, and internal applications from the ground up. In this role, you will shape a clean, scalable GCP data environment and influence tooling and architecture decisions from day one. You will work on a focused set of data pipelines with significant flexibility in implementation, bringing hands-on expertise in cloud technologies and data engineering, and contributing to the introduction of AI/ML capabilities over time.


Responsibilities:

  • Build and optimize cloud-native data pipelines within a clean, scalable GCP environment.
  • Manage PostgreSQL and TimescaleDB systems to support complex geospatial and high-velocity time-series datasets (see the first sketch after this list).
  • Develop Python-based ETL/ELT workflows using GCP-native tools like Dataplex, Cloud Run, or Dataflow (second sketch below).
  • Implement monitoring, alerting, and dashboards to maintain data infrastructure health and uptime.
  • Drive the transition to "Infrastructure as Code" using Terraform for reproducible and version-controlled environments.
  • Set up automated CI/CD pipelines via Jenkins or GitHub Actions to replace manual deployment processes.
  • Architect the foundation for Retrieval-Augmented Generation (RAG) using BigQuery’s vector search and Vertex AI (third sketch below).
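
The sketches below illustrate the kind of work these responsibilities involve; they are minimal examples under stated assumptions, not descriptions of Karbone's actual systems. First, a time-series sketch: managing TimescaleDB typically means converting a table to a hypertable and aggregating with time_bucket(). The market_ticks table and its columns are hypothetical placeholders.

    # Minimal TimescaleDB sketch: partition a tick table into a hypertable
    # and roll high-velocity ticks up into hourly aggregates.
    # Table/column names (market_ticks, ts, price) are hypothetical.
    import psycopg2

    conn = psycopg2.connect("dbname=markets user=etl host=localhost")
    with conn, conn.cursor() as cur:
        cur.execute(
            "CREATE TABLE IF NOT EXISTS market_ticks ("
            "  ts TIMESTAMPTZ NOT NULL,"
            "  price DOUBLE PRECISION NOT NULL)"
        )
        # create_hypertable() is TimescaleDB's time-partitioning entry point;
        # if_not_exists keeps the call idempotent across re-runs.
        cur.execute(
            "SELECT create_hypertable('market_ticks', 'ts', if_not_exists => TRUE)"
        )
        # time_bucket() groups raw ticks into fixed hourly windows.
        cur.execute(
            """
            SELECT time_bucket('1 hour', ts) AS bucket,
                   avg(price) AS avg_price,
                   count(*)   AS n_ticks
            FROM market_ticks
            GROUP BY bucket
            ORDER BY bucket DESC
            LIMIT 24
            """
        )
        for bucket, avg_price, n_ticks in cur.fetchall():
            print(bucket, avg_price, n_ticks)
    conn.close()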
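
Second, an ELT loading sketch: a Cloud Run job or Dataflow step often reduces to moving files from Cloud Storage into BigQuery. The bucket, dataset, and table names are placeholders, and schema autodetection is used only to keep the example short.

    # Minimal ELT sketch: load a CSV from Cloud Storage into BigQuery.
    # gs://example-bucket/... and example_dataset.daily_prices are placeholders.
    from google.cloud import bigquery

    client = bigquery.Client()

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,          # skip the header row
        autodetect=True,              # infer schema; pin a schema in production
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )

    load_job = client.load_table_from_uri(
        "gs://example-bucket/prices/latest.csv",
        "example_dataset.daily_prices",
        job_config=job_config,
    )
    load_job.result()  # block until the load job completes
    table = client.get_table("example_dataset.daily_prices")
    print(f"Loaded {table.num_rows} rows")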
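
Third, a retrieval sketch for the RAG foundation: embed a question with a Vertex AI embedding model, then find the nearest stored chunks with BigQuery's VECTOR_SEARCH function. The project, dataset, table, and column names are assumptions, as is the premise that document chunks were embedded and stored beforehand.

    # Minimal RAG retrieval sketch: Vertex AI embeddings + BigQuery VECTOR_SEARCH.
    # example-project and example_dataset.doc_embeddings are hypothetical names.
    import vertexai
    from vertexai.language_models import TextEmbeddingModel
    from google.cloud import bigquery

    vertexai.init(project="example-project", location="us-central1")

    # Embed the user's question with a Vertex AI text-embedding model.
    model = TextEmbeddingModel.from_pretrained("text-embedding-004")
    question = "What hedging products does the desk offer?"
    query_vec = model.get_embeddings([question])[0].values

    # Nearest-neighbour search over pre-computed chunk embeddings.
    client = bigquery.Client()
    sql = """
        SELECT base.doc_id, base.chunk, distance
        FROM VECTOR_SEARCH(
            TABLE example_dataset.doc_embeddings,
            'embedding',
            (SELECT @q AS embedding),
            top_k => 5)
        ORDER BY distance
    """
    job = client.query(
        sql,
        job_config=bigquery.QueryJobConfig(
            query_parameters=[bigquery.ArrayQueryParameter("q", "FLOAT64", query_vec)]
        ),
    )
    for row in job:
        print(row.doc_id, row.distance)  # retrieved chunks feed the generator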


Qualifications:

  • 2–4+ years of data engineering experience on GCP or AWS, with a readiness to focus on GCP.
  • Bachelor’s degree in Computer Science, Data Science, Engineering, Information Systems/IT, or a related technical field.
  • Proficiency in Python and SQL for ETL automation and PostgreSQL/TimescaleDB orchestration.
  • Familiarity with modern engineering practices, including infrastructure-as-code with Terraform and GCP-native tools like Dataflow, or experience in hybrid DevOps/data roles.
  • Skill in cloud observability using tools like Grafana, CloudWatch, or Cloud Monitoring.
  • Independent troubleshooter capable of managing workflows in an early-stage environment.