
Data Tester

Job in Louisville, Jefferson County, Kentucky, 40201, USA
Listing for: Compunnel, Inc.
Full Time position
Listed on 2025-11-27
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst, Big Data, Data Science Manager
Salary/Wage Range: USD 90,000 – 110,000 per year
Job Description & How to Apply Below

We are seeking an experienced Data Tester with strong expertise in Databricks, PySpark, and Big Data ecosystems.

This role focuses on validating data pipelines, ETL workflows, and analytical models to ensure data integrity, accuracy, and performance across distributed systems.

The ideal candidate will have hands-on experience in cloud environments and automation frameworks, with a deep understanding of data lake testing and SQL-based validation.

Key Responsibilities
  • Validate end-to-end data pipelines developed in Databricks and PySpark.
  • Develop and execute test plans, test cases, and automated scripts for ETL and data quality validation.
  • Perform data validation, reconciliation, and regression testing using SQL, Python, and PySpark DataFrame APIs.
  • Verify data transformations, aggregations, and schema consistency across raw, curated, and presentation layers.
  • Test Delta Lake tables for schema evolution, partitioning, versioning, and performance.
  • Collaborate with data engineers, analysts, and DevOps teams to ensure high-quality data delivery.
  • Analyze Databricks job logs, Spark execution plans, and cluster metrics to troubleshoot issues.
  • Participate in Agile/Scrum ceremonies and contribute to sprint planning and defect triage.
  • Maintain documentation for test scenarios, execution reports, and data lineage verification.
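The reconciliation work listed above typically means comparing row counts and column aggregates between layers after each load. A minimal sketch of that logic in plain Python (in practice these checks would run over PySpark DataFrames read from the raw and curated layers; the column names and sample records here are hypothetical):

```python
# Sketch of source-to-target reconciliation: row counts plus a
# per-column aggregate "checksum" must match after the ETL load.
# Rows are modeled as plain dicts standing in for DataFrames.

def row_count(rows):
    """Total number of records in a dataset."""
    return len(rows)

def column_sum(rows, column):
    """Aggregate checksum for a numeric column (here: a plain sum)."""
    return sum(r[column] for r in rows)

def reconcile(source, target, numeric_columns):
    """Return a dict mapping check name -> pass/fail for basic reconciliation."""
    results = {"row_count": row_count(source) == row_count(target)}
    for col in numeric_columns:
        results[f"sum:{col}"] = column_sum(source, col) == column_sum(target, col)
    return results

# Hypothetical raw vs. curated extracts
raw = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 25.5}]
curated = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 25.5}]

checks = reconcile(raw, curated, ["amount"])
```

The same shape scales to Spark by swapping `len` for `DataFrame.count()` and the sum for an aggregate query, which is why reconciliation suites are usually written as small named check functions like these.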
Required Qualifications
  • 8+ years of experience in data testing or QA in enterprise data environments.
  • 5+ years of experience testing ETL/Big Data pipelines and validating data transformations.
  • 4+ years of hands-on experience with Databricks, including notebook execution and job scheduling.
  • 4+ years of experience in PySpark (DataFrame APIs, UDFs, joins, transformations).
  • 5+ years of strong proficiency in SQL for complex data validation.
  • 3+ years of experience with Delta Lake or data lake testing.
  • 3+ years of experience in Python scripting for automation.
  • 3+ years of experience with cloud platforms (Azure, AWS, or GCP).
  • 2+ years of experience in test automation using tools like pytest or custom Python utilities.
  • 4+ years of experience in data warehousing, data modeling, and data quality frameworks.
  • 4+ years of experience with Agile/SAFe methodologies.
  • 6+ years of analytical and debugging experience for data pipeline issues.
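The pytest-based automation called for above usually reduces to small assertion functions over loaded data, such as null checks on required columns and primary-key uniqueness. A hedged sketch using plain Python structures in place of real tables (the `customers` table and its columns are illustrative, not from this posting):

```python
# Sketch of pytest-style data quality checks. In a Databricks job
# these assertions would run over DataFrames; lists of dicts stand
# in for the table here.

def check_no_nulls(rows, column):
    """True if no row has a missing value in the given column."""
    return all(r.get(column) is not None for r in rows)

def check_unique(rows, column):
    """True if every value in the column is distinct (primary-key check)."""
    values = [r.get(column) for r in rows]
    return len(values) == len(set(values))

def test_customer_table_quality():
    # Hypothetical extract of a curated customer table
    customers = [
        {"customer_id": 1, "email": "a@example.com"},
        {"customer_id": 2, "email": "b@example.com"},
    ]
    assert check_no_nulls(customers, "email")
    assert check_unique(customers, "customer_id")
```

Because each rule is a named function, the same checks can be reused across tables and reported individually in a defect triage.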
Preferred Qualifications
  • Experience with CI/CD tools for Databricks or data testing (e.g., GitHub Actions, Jenkins, Azure DevOps).
  • Exposure to BI validation tools (e.g., Power BI, Tableau, Looker).
  • Knowledge of REST APIs for metadata or integration testing.
  • Familiarity with big data tools such as Hive, Spark SQL, Snowflake, and Airflow.
Certifications
  • Microsoft Azure Data Engineer Associate or AWS Big Data Specialty (preferred).