Job Title: Senior QA Automation Data Engineer (Remote CAN)
Listed on 2026-02-16
Category: IT/Tech

Company Overview:
Atreides helps organizations transform large and complex multi‑modal datasets into information‑rich geo‑spatial data subscriptions that can be used across a wide spectrum of use cases. Currently, Atreides focuses on providing high‑fidelity data solutions to enable customers to derive insights quickly.
We are a fast‑moving, high‑performance startup. We value a diverse team and believe inclusion drives better performance. We trust our team with autonomy, believing it leads to better results and job satisfaction. With a mission‑driven mindset and entrepreneurial spirit, we are building something new and helping unlock the power of massive‑scale data to make the world safer, stronger, and more prosperous.
Team Overview:
We are a passionate team of technologists, data scientists, and analysts with backgrounds in operational intelligence, law enforcement, large multinationals, and cybersecurity operations. We obsess about designing products that will change the way global companies, governments and nonprofits protect themselves from external threats and global adversaries.
Position Overview:
We are seeking a QA Automation Data Engineer to ensure the correctness, performance, and reliability of our data pipelines, data lakes, and enrichment systems. In this role, you will design, implement, and maintain automated validation frameworks for our large‑scale data workflows. You will work closely with data engineers, analysts, and platform engineers to embed test coverage and data quality controls directly into the CI/CD lifecycle of our ETL and geospatial data pipelines.
You should be deeply familiar with test automation in data contexts, including schema evolution validation, edge case generation, null/duplicate detection, statistical drift analysis, and pipeline integration testing. This is not a manual QA role — you will write code, define test frameworks, and help enforce reliability through automation.
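To give a concrete flavor of the null/duplicate detection mentioned above, here is a minimal sketch in plain Python; the helper names and record shapes are illustrative only, not part of Atreides' actual stack:

```python
# Illustrative row-level quality checks on plain Python records.
# Real pipelines would run equivalents over Spark DataFrames.

def find_nulls(rows, required):
    """Return indices of rows missing any required field."""
    return [i for i, r in enumerate(rows)
            if any(r.get(f) is None for f in required)]

def find_duplicates(rows, key):
    """Return values of `key` that occur more than once."""
    seen, dupes = set(), set()
    for r in rows:
        k = r[key]
        (dupes if k in seen else seen).add(k)
    return dupes

rows = [
    {"id": 1, "geom": "POINT(0 0)"},
    {"id": 2, "geom": None},          # null geometry
    {"id": 1, "geom": "POINT(1 1)"},  # duplicate join key
]
print(find_nulls(rows, ["id", "geom"]))   # [1]
print(find_duplicates(rows, "id"))        # {1}
```

In practice, checks like these would be expressed through a validation library such as Great Expectations or Deequ rather than hand-rolled, but the underlying assertions are the same.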
Team Principles:
- Remain curious and passionate in all aspects of our work
- Promote clear, direct, and transparent communication
- Embrace the 'measure twice, cut once' philosophy
- Value and encourage diverse ideas and technologies
- Lead with empathy in all interactions
Responsibilities:
- Develop automated test harnesses for validating Spark pipelines, Iceberg table transformations, and Python‑based data flows.
- Implement validation suites for data schema enforcement, contract testing, and null/duplication/anomaly checks.
- Design test cases for validating geospatial data processing pipelines (e.g., geometry validation, bounding box edge cases).
- Integrate data pipeline validation with CI/CD tooling.
- Monitor and alert on data quality regressions using metric‑driven validation (e.g., row count deltas, join key sparsity, referential integrity).
- Write and maintain mock data generators and property‑based test cases for data edge cases and corner conditions.
- Contribute to team standards for testing strategy, coverage thresholds, and release readiness gates.
- Collaborate with data engineers on pipeline observability and reproducibility strategies.
- Participate in root cause analysis and post‑mortems for failed data releases or quality incidents.
- Document infrastructure design, data engineering processes, and maintain comprehensive documentation.
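As a sketch of the kind of geospatial edge-case testing described above, the snippet below hand-picks bounding-box corner conditions that a property-based suite (e.g. hypothesis) would also generate; the validator and its cases are assumptions for illustration, not the team's actual code:

```python
# Illustrative bounding-box validator plus edge cases of the kind a
# property-based test suite would exercise.

def valid_bbox(min_lon, min_lat, max_lon, max_lat):
    """True if the box is well-ordered and lies within WGS84 bounds."""
    return (-180.0 <= min_lon <= max_lon <= 180.0
            and -90.0 <= min_lat <= max_lat <= 90.0)

EDGE_CASES = [
    ((-180.0, -90.0, 180.0, 90.0), True),   # whole globe
    ((0.0, 0.0, 0.0, 0.0), True),           # degenerate point box
    ((10.0, 0.0, -10.0, 0.0), False),       # min_lon > max_lon
    ((0.0, 0.0, 0.0, 91.0), False),         # latitude out of range
]

for args, expected in EDGE_CASES:
    assert valid_bbox(*args) == expected
print("all bounding-box edge cases pass")
```

A property-based framework generalizes this by generating thousands of such tuples automatically and shrinking any failure to a minimal counterexample.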
Qualifications:
- 5+ years of experience in data engineering or data QA roles with automation focus.
- Strong proficiency in Python and PySpark, including writing testable, modular data code.
- Experience with Apache Iceberg, Delta Lake, or Hudi, including schema evolution and partitioning.
- Familiarity with data validation libraries (e.g., Great Expectations, Deequ, Soda SQL) or homegrown equivalents.
- Understanding of geospatial formats (e.g., GeoParquet, GeoJSON, Shapefiles) and related edge cases.
- Experience with test automation frameworks such as pytest, hypothesis, unittest, and integration with CI pipelines.
- Familiarity with cloud‑native data infrastructure, especially AWS (Glue, S3, Athena, EMR).
- Knowledge of data lineage, data contracts, and observability tools is a plus.
- Strong communication skills and the ability to work…
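For candidates gauging the expected flavor of work, a schema-contract check of the kind a pytest suite might enforce in CI could look like the sketch below; the contract and helper names are hypothetical:

```python
# Illustrative schema-contract check: verify each record matches an
# expected field/type contract and report violations.

EXPECTED_SCHEMA = {"id": int, "lon": float, "lat": float}

def check_schema(rows, contract=EXPECTED_SCHEMA):
    """Return a list of (row_index, field, problem) violations."""
    violations = []
    for i, row in enumerate(rows):
        for field, typ in contract.items():
            if field not in row:
                violations.append((i, field, "missing"))
            elif not isinstance(row[field], typ):
                violations.append((i, field, "wrong type"))
    return violations

good = [{"id": 1, "lon": 0.5, "lat": 2.0}]
bad = [{"id": "x", "lat": 1.0}]
print(check_schema(good))  # []
print(check_schema(bad))   # [(0, 'id', 'wrong type'), (0, 'lon', 'missing')]
```

In a real pipeline this contract would typically live alongside the table definition (e.g. an Iceberg schema) so that CI fails before a breaking change ships.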