Data Engineer II - Critical Industries
Listed on 2025-10-31
IT/Tech
Data Engineer, AI Engineer
Location: New York
Data Engineer II - QuantumBlack, AI by McKinsey (Critical Industries)
Who You’ll Work With

Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high-performance, high-reward culture: doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward.
In return for your drive, determination, and curiosity, we provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues—at all levels—will invest deeply in your development, just as much as they invest in delivering exceptional results for clients. Every day, you’ll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won’t find anywhere else.
You’ll have:
- Continuous learning: Our learning and apprenticeship culture, backed by structured programs, helps you grow while creating an environment where feedback is clear, actionable, and focused on your development.
- A voice that matters: From day one, we value your ideas and contributions. You’ll make a tangible impact by offering innovative ideas and practical solutions.
- Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm’s diversity fuels creativity and helps us come up with the best solutions for clients.
- World-class benefits: On top of a competitive salary (based on your location, experience, and skills), we provide a comprehensive benefits package to enable holistic well-being for you and your family.
As a Data Engineer II, you will design, build, and optimize modern data platforms that power advanced analytics and AI solutions.
Your work will drive lasting impact by ensuring data is accurate, accessible, and production‑ready, enabling clients to accelerate digital transformations, adopt AI responsibly, and achieve measurable business outcomes.
In a given year you might:
- Develop a streaming data platform to integrate telemetry for predictive maintenance in aerospace systems.
- Implement secure data pipelines that reduce time‑to‑insight for a Fortune 500 utility company.
- Optimize large‑scale batch and streaming workflows for a global financial services client, cutting infrastructure costs while improving performance.
- Build pipelines for embeddings and vector databases to enable retrieval-augmented generation (RAG) for a global defense client (see the sketch after this list).
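For illustration, here is a minimal sketch of the kind of embedding pipeline the last item describes, assuming sentence-transformers for the embedding model and FAISS as the vector index; the library choices, model name, and sample corpus are illustrative assumptions, not requirements of the role.

```python
# Minimal RAG indexing-and-retrieval sketch (assumed stack: sentence-transformers + FAISS).
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative corpus; in practice this would come from a document store or data lake.
documents = [
    "Turbine vibration exceeded threshold during test run 42.",
    "Scheduled maintenance completed on compressor unit B.",
    "Sensor telemetry shows gradual bearing temperature drift.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")

# Embed and L2-normalize so inner product equals cosine similarity.
embeddings = model.encode(documents, convert_to_numpy=True)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings.astype(np.float32))

# Retrieve the most relevant documents for a query; in a full RAG system these
# would be passed to an LLM as grounding context for generation.
query = model.encode(["Which component shows early signs of wear?"], convert_to_numpy=True)
query /= np.linalg.norm(query, axis=1, keepdims=True)
scores, ids = index.search(query.astype(np.float32), k=2)
for score, doc_id in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {documents[doc_id]}")
```

A production pipeline would add document chunking, metadata, batching, incremental index updates, and the generation step itself.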
You’ll work in cross‑functional Agile teams with Data Scientists, Machine Learning Engineers, Designers, and domain experts to deliver high‑quality analytics solutions. Partnering closely with clients—from data owners to C‑level executives—you’ll shape data ecosystems that drive innovation and long‑term resilience.
You should expect this role to include at least some work in critical industries (Government, Defense, Aerospace, Utilities, Oil and Gas), but you will also serve other industries.
Your Qualifications and Skills

- U.S. Citizenship is required (you must be able to be staffed on Critical Industries work, which includes Government, Defense, Aerospace, Utilities, etc.).
- Degree in Computer Science, Business Analytics, Engineering, Mathematics, or a related field.
- 2+ years of professional experience in data engineering, software engineering, or adjacent technical roles.
- Proficiency in Python, Scala, or Java for production‑grade pipelines, with strong skills in SQL and PySpark.
- Hands-on experience with cloud platforms such as AWS, GCP, Azure, or Oracle Cloud, and with modern data storage/warehouse solutions such as Snowflake, BigQuery, Redshift, and Delta Lake.
- Practical experience with Databricks, AWS Glue, and transformation frameworks like dbt, Dataform, or Databricks Asset Bundles.
- Knowledge of distributed systems such as Spark, Dask, and Flink, and of streaming platforms (Kafka, Kinesis, Pulsar).
- Familiarity with workflow orchestration tools such as Airflow, Dagster, or Prefect (see the sketch after this list), along with CI/CD for data workflows and infrastructure-as-code (Terraform, CloudFormation).
- Understanding of DataOps principles, including pipeline monitoring, testing, and automation, with exposure to observability tools such as Datadog, Prometheus, and Great Expectations.
- Exposure to ML platforms such as Databricks, SageMaker, and Vertex AI; MLOps best practices; and GenAI toolkits (LangChain, LlamaIndex, Hugging Face).
- Willingness to travel as required.
- Strong communication, time management, and resilience, with the ability to align technical solutions to business value.
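To make the orchestration item above concrete, here is a minimal Airflow DAG sketch; the DAG id, schedule, and placeholder extract/transform/validate callables are illustrative assumptions rather than anything this role prescribes.

```python
# Minimal daily batch pipeline orchestrated with Airflow (2.4+ for the `schedule` argument).
# The three callables are placeholders; a production DAG would trigger Spark, dbt, or
# warehouse jobs and emit monitoring/observability metrics.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw data from a source system into staging.
    print("extracting raw data")


def transform():
    # Placeholder: apply business logic, e.g. a dbt run or Spark job.
    print("transforming staged data")


def validate():
    # Placeholder: run data-quality checks (e.g. Great Expectations) before publishing.
    print("validating outputs")


with DAG(
    dag_id="daily_batch_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)

    # Linear dependency chain: extract -> transform -> validate.
    extract_task >> transform_task >> validate_task
```

The same structure translates directly to Dagster or Prefect: the orchestration concerns (dependencies, scheduling, retries) stay the same even as the execution engines change.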