Senior Data Engineer
Listed on 2026-02-07
IT/Tech
Data Engineer, Cloud Computing
This role is for a Senior Data Engineer on a 10-month contract, paying $80-$85/hr, hybrid in Burbank, CA. It requires 7+ years in data engineering, hands-on experience with AWS services, SQL, and Python, and experience with modern data platforms.
United States
USD
Hybrid
W2 Contractor
Burbank, CA 91505
#Databricks #Lambda (AWS Lambda) #ML (Machine Learning) #SQL (Structured Query Language) #BI (Business Intelligence) #Redshift #Scripting #Data Catalog #Data Governance #Deployment #Data Engineering #Data Quality #PySpark #AI (Artificial Intelligence) #Airflow #Python #AWS (Amazon Web Services) #Data Pipeline #Batch #RDS (Amazon Relational Database Service) #Snowflake #Anomaly Detection #Monitoring #Data Science #S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load) #Metadata #Scala #Spark (Apache Spark) #DynamoDB #Informatica #Data Transformations
- Design & Build Scalable Data Pipelines
- Lead development of batch and streaming pipelines using AWS-native tools (Glue, Lambda, Step Functions, Kinesis) and modern orchestration frameworks.
- Implement best practices for monitoring, resilience, and cost optimization in high-scale pipelines.
- Collaborate with architects to translate canonical and semantic data models into physical implementations.
- Build pipelines that deliver clean, well-structured data to analysts, BI tools, and ML pipelines.
- Work with data scientists to enable feature engineering and deployment of ML models into production environments.
- Embed validation, lineage, and anomaly detection into pipelines.
- Contribute to the enterprise data catalog and enforce schema alignment across pods.
- Partner with governance teams to implement role-based access, tagging, and metadata standards.
- Guide junior data engineers, sharing best practices in pipeline design and coding standards.
- Participate in pod ceremonies (backlog refinement, sprint reviews) and program-level architecture syncs.
- Promote reusable services and reduce fragmentation by advocating platform-first approaches.
- 7+ years of experience in data engineering, with hands-on expertise in AWS services (Glue, Kinesis, Lambda, RDS, DynamoDB, S3) and orchestration tools (Airflow, Step Functions).
- 7+ years of experience and strong skills in SQL, Python, PySpark, and scripting for data transformations.
- 7+ years of experience working with modern data platforms (Snowflake, Databricks, Redshift, Informatica).
- Strong collaboration and mentoring skills; ability to influence across pods and domains.
- Knowledge of data governance practices, including lineage, validation, and cataloging.
- Proven ability to optimize pipelines for both batch and streaming use cases.
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
Job Type: Contract
Pay: $80.00 - $85.00 per hour
Expected hours: 40 per week
Application Question(s):
How many years of hands-on experience with AWS services do you have?
Years of experience with SQL, Python, PySpark, and scripting for data transformations?
Years of experience working with modern data platforms (Snowflake, Databricks, Redshift, Informatica)?
Work Location:
Hybrid remote in Burbank, CA 91505