Job Description
- Good hands-on experience with Google Cloud data services such as Dataflow, Cloud Storage, BigQuery, Cloud Composer, and Secret Manager.
- Strong understanding of ETL/ELT concepts and terabyte-scale data migration.
- Develop and optimize end-to-end data pipelines using Dataflow (see the sketch after this list).
- Develop and implement generic, reusable pipelines for integrating incremental data.
- Expertise in leading a data processing team and delivering high-quality results.
- Ensure data quality throughout the data pipeline.
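To illustrate the kind of Dataflow work described above, here is a minimal sketch of an Apache Beam pipeline in Python that reads CSV files from Cloud Storage and writes to BigQuery. The project ID, bucket, table name, and schema are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch of a Dataflow (Apache Beam) pipeline: GCS CSV -> BigQuery.
# All project, bucket, and table names below are hypothetical.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run():
    options = PipelineOptions(
        runner="DataflowRunner",             # use "DirectRunner" to test locally
        project="my-gcp-project",            # hypothetical project ID
        region="us-central1",
        temp_location="gs://my-bucket/tmp",  # hypothetical staging bucket
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadCsv" >> beam.io.ReadFromText(
                "gs://my-bucket/input/*.csv", skip_header_lines=1)
            | "Parse" >> beam.Map(
                lambda line: dict(zip(("id", "amount"), line.split(","))))
            | "DropEmpty" >> beam.Filter(lambda row: row["amount"] != "")
            | "WriteToBQ" >> beam.io.WriteToBigQuery(
                "my-gcp-project:analytics.transactions",  # hypothetical table
                schema="id:STRING,amount:NUMERIC",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )

if __name__ == "__main__":
    run()
```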
Roles & Responsibilities
- Proficiency in designing, implementing, and optimizing data engineering solutions over large volumes of data (TB to PB scale) using GCP data services.
- Proven expertise in GCP services including Dataflow, BigQuery, Cloud Storage, Cloud Composer, and Cloud Functions; experience building scalable data lakes and pipelines.
- Proficiency in PySpark, Python, Spark SQL, and workflow automation.
- Good exposure to writing optimized SQL (BigQuery SQL preferred; see the incremental-load sketch after this list).
- Good communication and problem-solving skills.
- Able to create proofs of concept (POCs) to validate solutions and to perform code reviews for the team.
- Understanding of GenAI technologies, with the ability to implement solutions using GenAI and code-assist tools (Copilot, Windsurf, etc.).
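To make the "reusable incremental pipelines" and "optimized BigQuery SQL" items concrete, here is a minimal sketch of an incremental load driven by a BigQuery MERGE, run from Python with the google-cloud-bigquery client. All project, dataset, and table names are hypothetical, and the staging table is assumed to be populated by an upstream pipeline such as the one sketched above.

```python
# Minimal sketch of an incremental (upsert-style) load in BigQuery, assuming a
# staging table refreshed by an upstream pipeline. All names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project ID

# MERGE applies only new or changed rows from staging to the target table,
# the core of a generic, reusable incremental-integration pattern.
merge_sql = """
MERGE `my-gcp-project.analytics.transactions` AS target
USING `my-gcp-project.analytics.transactions_staging` AS source
ON target.id = source.id
WHEN MATCHED THEN
  UPDATE SET amount = source.amount
WHEN NOT MATCHED THEN
  INSERT (id, amount) VALUES (source.id, source.amount)
"""

client.query(merge_sql).result()  # blocks until the DML job completes
```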