Data Engineer

Job in Snowflake, Navajo County, Arizona, 85937, USA
Listing for: MindTech, LLC
Full Time position
Listed on 2026-01-01
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Job Description & How to Apply Below

About Mindtech
Mindtech is your gateway to exciting and impactful tech projects. We specialize in end-to-end software outsourcing, linking Latin American talent with global opportunities. Our fast, cost-effective approach ensures that our clients receive exceptional service and innovative solutions. With a diverse team of over 70 skilled professionals across Latin America and the US, we are committed to delivering software that drives success.

About the Role

We're looking for a Data Engineer to design, build, and maintain scalable data infrastructure that powers decision-making across the business. You'll work closely with data scientists, analysts, and software engineers to ensure clean, reliable data pipelines and access to high-quality datasets for analytics, machine learning, and product features.

This role is ideal for someone who loves turning messy, complex data into clean, structured systems and who thrives in environments where speed, quality, and scale go hand in hand.

Responsibilities

  • Design, implement, and maintain robust ETL/ELT pipelines using tools like Airflow, dbt, or similar.
  • Build and optimize data models in our data warehouse (e.g., BigQuery, Redshift, Snowflake).
  • Ensure data quality, lineage, and observability with appropriate monitoring and alerting.
  • Collaborate with product, engineering, and business teams to understand data needs and deliver solutions.
  • Implement best practices around data governance, security, and compliance (e.g., GDPR, HIPAA if applicable).
  • Automate data validation, anomaly detection, and data backfill processes.
  • Contribute to internal data platform development and tooling.

Requirements

Must-Have:

  • 3+ years of experience as a data engineer or in a similar backend/data infrastructure role.
  • Proficiency in Python or Scala, plus SQL.
  • Experience with orchestration tools (e.g., Airflow, Prefect).
  • Deep understanding of data warehousing concepts and distributed data systems.
  • Experience with cloud platforms (AWS, GCP, or Azure) and tools like S3, EMR, Lambda, etc.
  • Strong communication skills; ability to explain technical details to non-technical stakeholders.

Nice-to-Have:
  • Experience with real-time data processing (e.g., Kafka, Spark Streaming, Flink).
  • Familiarity with modern data stack tools like dbt, Fivetran, Snowplow, etc.
  • Exposure to DevOps practices (CI/CD, infrastructure as code).
  • Interest in or experience with machine learning pipelines.

Growth & Impact

In this role, you'll have the opportunity to shape our data infrastructure from the ground up, scale systems for future growth, and directly impact the performance of business-critical decisions. As we grow, your career can evolve toward architecture, leadership, or specialized tracks (e.g., MLOps, platform engineering).
