
Data Engineer; Associate

Job in Van Buren, Crawford County, Arkansas, 72957, USA
Listing for: Huron
Full Time position
Listed on 2025-12-31
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst
Job Description & How to Apply Below
Position: Data Engineer (Associate)

Huron is a global consultancy that collaborates with clients to drive strategic growth, ignite innovation and navigate constant change. Through a combination of strategy, expertise and creativity, we help clients accelerate operational, digital and cultural transformation, enabling the change they need to own their future. Join our team as the expert you are now and create your future.

We’re seeking a Data Engineer to join the Data Science & Machine Learning team in our Commercial Digital practice, where you’ll design, build, and optimize the data infrastructure that powers intelligent systems across Financial Services, Manufacturing, Energy & Utilities, and other commercial industries.

This isn’t a maintenance role or a ticket queue—you’ll own the full data lifecycle from source integration through analytics‑ready delivery. You’ll build pipelines that matter: real‑time data architectures that feed mission‑critical ML models, transformation layers that turn messy enterprise data into trusted datasets, and orchestration systems that ensure reliability. Our clients are Fortune 500 companies looking for partners who can engineer solutions, not just write SQL.

The variety is real. In your first year, you might architect a lakehouse solution for a global manufacturer’s IoT data, build a real‑time streaming pipeline for a financial services firm’s trading analytics, and design a data mesh implementation for a utility company’s distribution systems. If you thrive on solving complex data challenges and shipping production systems that ML teams and analysts depend on, this role is for you.

What You’ll Do
  • Design and build end‑to‑end data pipelines (batch and streaming)—from source extraction and ingestion through transformation, quality validation, and delivery. You own the data infrastructure, not just a piece of it (see the illustrative sketch after this list).
  • Develop modern data transformation layers using dbt, implementing modular SQL models, testing frameworks, documentation, and CI/CD practices that ensure data quality and maintainability.
  • Build and orchestrate workflows using Microsoft Fabric, Apache Airflow, Dagster, Databricks Workflows, or similar tools to automate complex data processing at scale.
  • Architect lakehouse solutions using open table formats (Delta Lake, Apache Iceberg) on Microsoft Fabric, Snowflake, and Databricks—designing schemas, optimizing performance, and implementing governance frameworks.
  • Ensure data quality and observability—implementing testing frameworks (dbt tests, Great Expectations), monitoring, alerting, and lineage tracking that maintain trust in data assets.
  • Collaborate directly with clients to understand business requirements, translate data needs into technical solutions, and communicate architecture decisions to both technical and executive audiences.
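To make the pipeline, quality, and lakehouse responsibilities above concrete, here is a minimal illustrative PySpark sketch; it is not part of the role description, and the paths, column names, and quality threshold are hypothetical placeholders. It assumes a Spark environment with Delta Lake (delta-spark) configured.

```python
# Illustrative sketch only: a minimal PySpark batch step that ingests raw events,
# normalizes them, applies a simple quality gate, and writes a Delta table.
# All names (paths, columns, thresholds) are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Extract: read raw source data from a hypothetical landing zone.
raw = spark.read.json("/landing/orders/2025-12-31/")

# Transform: normalize types, derive a partition column, and deduplicate.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropDuplicates(["order_id"])
)

# Quality gate: fail the run rather than deliver untrusted data downstream.
null_keys = orders.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows missing order_id; aborting load")

# Load: append to a Delta table partitioned by date (assumes delta-spark is configured).
(orders.write.format("delta")
       .mode("append")
       .partitionBy("order_date")
       .save("/lakehouse/analytics/orders"))
```

In practice this logic would sit inside an orchestrated workflow (Microsoft Fabric, Airflow, Dagster, or Databricks Workflows) with scheduling, retries, and alerting around it.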
Required Qualifications
  • 2+ years of hands‑on experience building and deploying data pipelines in production—not just ad‑hoc queries and exports. You’ve built ETL/ELT systems that run reliably and scale.
  • Strong SQL and Python programming skills with experience in PySpark for distributed data processing. SQL for analytics and data modeling; Python/PySpark for pipeline development and large‑scale transformations.
  • Experience building data pipelines that serve AI/ML systems, including feature engineering workflows, vector embeddings for retrieval‑augmented generation (RAG), and data quality frameworks that ensure model reproducibility. Familiarity with emerging agent integration standards such as MCP (Model Context Protocol) and A2A (Agent‑to‑Agent), and the ability to design data services and APIs that can be discovered and consumed by autonomous AI agents (see the embedding sketch after this list).
  • Experience with modern data transformation tools, particularly dbt (data build tool). You understand modular SQL development, testing, and documentation practices.
  • Experience with cloud data platforms and lakehouse architectures—Snowflake, Databricks, and familiarity with open table formats (Delta Lake, Apache Iceberg). We’re platform‑flexible but Microsoft‑preferred.
  • Familiarity with workflow…
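As a hedged illustration of the RAG‑oriented qualification above, the sketch below chunks documents, computes embeddings, and retrieves the most similar chunks; the model name, chunking scheme, and sample data are assumptions rather than a prescribed stack.

```python
# Illustrative sketch only: chunk documents, embed the chunks, and look up the
# most similar chunks for retrieval-augmented generation (RAG).
# The model choice and chunk sizes are assumptions, not a prescribed stack.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed to be installed

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

# Hypothetical source documents keyed by id.
documents = {"policy_doc": "…source text…", "runbook": "…source text…"}

# Build the index: one embedding per chunk, kept alongside its source id.
model = SentenceTransformer("all-MiniLM-L6-v2")
chunks = [(doc_id, c) for doc_id, text in documents.items() for c in chunk(text)]
vectors = model.encode([c for _, c in chunks], normalize_embeddings=True)

def retrieve(query: str, k: int = 3) -> list[tuple[str, str]]:
    """Return the k most similar (doc_id, chunk) pairs by cosine similarity."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = vectors @ q          # cosine similarity, since vectors are unit-normalized
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]
```

In production these embeddings would typically land in a governed vector store or lakehouse table so that downstream ML services and agents can discover and query them.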
Position Requirements
10+ Years work experience