Data Engineer / Senior Data Engineer: GenAI & Cloud Data Platforms
Listed on 2026-02-16
IT/Tech
Data Engineer, Data Science Manager, Data Analyst, Big Data
We are actively hiring mid- to senior-level Data Engineers and Data Analysts with hands-on experience building modern data pipelines, working on GenAI-enabled data workflows, and delivering analytics solutions on AWS, Azure, or GCP. These roles support multiple enterprise clients across different locations and projects. Positions may be remote, hybrid, or onsite depending on client requirements.
Summary
We're hiring a mid- to senior-level Data Engineer with strong analytical skills to build pipelines, prepare analytical datasets, and support GenAI projects. The role blends data engineering, analysis, and cross-team collaboration.
Responsibilities
- Design, build, and maintain scalable ETL/ELT data pipelines using Python and SQL
- Ingest data from APIs, cloud storage, databases, files, and streaming platforms
- Develop analytics-ready and ML-ready datasets for reporting and advanced use cases
- Implement data quality checks, validation, monitoring, and lineage
- Collaborate with business stakeholders, analysts, and ML teams to define metrics and ensure data accuracy
- Optimize data models to support dashboards and self-service analytics
- Prepare structured and unstructured data for GenAI use cases (embeddings, vector databases, RAG pipelines)
- Improve performance, reliability, scalability, and cost efficiency
- Document pipelines, data models, and operational processes clearly
Requirements
- Strong Python experience (Pandas, PySpark)
- Advanced SQL skills
- Hands-on experience designing ETL/ELT pipelines and data models
- Experience with cloud data platforms:
- AWS (Redshift, Glue, S3, Athena)
- Azure (ADF, Synapse, Databricks)
- GCP (BigQuery, Dataflow, Pub/Sub)
- Familiarity with orchestration and transformation tools such as:
- Airflow, dbt, Databricks, Glue, ADF, Dataflow
- Exposure to GenAI data workflows (vector embeddings, document ingestion, RAG pipelines)
- Experience with Git, CI/CD, and basic DevOps practices
- 5–15 years of experience in Data Engineering or hybrid Data Engineering/Data Analytics roles
- Bachelor's degree in Computer Science, Engineering, Data, or a related field (or equivalent experience)
We consider candidates across all visa categories. Work-authorized applicants, as well as candidates who may require visa sponsorship now or in the future, will be considered in accordance with applicable laws.
We are an Equal Opportunity Employer and do not discriminate on the basis of race, color, religion, sex, gender identity, sexual orientation, national origin, age, disability, veteran status, or any other protected characteristic.