Data Engineer
Location: Phoenix, Maricopa County, Arizona, 85003, USA
Listing for: CoreAi Consulting
Full Time position, listed on 2026-02-08
Job specializations:
- IT/Tech: Data Engineer, Cloud Computing, Big Data
Job Description
We are seeking a skilled Data Engineer with 5+ years of hands-on experience designing, building, and maintaining scalable data pipelines and data platforms. The ideal candidate has strong experience with DAG-based orchestration, cloud technologies (preferably Google Cloud Platform), SQL-driven data processing, Apache Spark, and Python-based API development using FastAPI. You will play a key role in enabling reliable data ingestion, transformation, and quality assurance across enterprise systems.
Key Responsibilities:
- Design, develop, and maintain DAG-based data pipelines (Airflow or similar orchestration tools); a minimal orchestration sketch follows this list.
- Build and optimize SQL-based data transformations for analytics and reporting.
- Develop and manage batch and streaming data pipelines using Apache Spark.
- Implement Python-based REST APIs using FastAPI for data services and integrations.
- Perform data quality checks, validation, reconciliation, and anomaly detection.
- Work with cloud platforms (preferably Google Cloud Platform) for storage, compute, and orchestration.
- Architect and implement cloud-native data platforms on GCP, leveraging services such as BigQuery, Bigtable, Dataflow, Dataproc, Pub/Sub, and Cloud Storage.
- Monitor pipeline performance, troubleshoot failures, and optimize processing efficiency.
- Collaborate with analytics, application, and business teams to understand data requirements.
- Ensure best practices around security, scalability, and maintainability.
- Ensure data quality, reliability, security, governance, and compliance with enterprise standards.
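To make the orchestration responsibility above concrete, the following is a minimal illustrative sketch (not part of the posting's own materials) of an Airflow DAG wiring together ingestion, transformation, and a data-quality check. The DAG id and task callables (orders_pipeline, extract_orders, transform_orders, check_row_counts) are hypothetical placeholders, and the code assumes Apache Airflow 2.x.

# Illustrative sketch only: hypothetical task names stand in for the
# ingestion, transformation, and data-quality steps described above.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder: pull raw records from a source system into staging storage.
    print("extracting raw orders to staging")


def transform_orders(**context):
    # Placeholder: apply SQL-style transformations for analytics and reporting.
    print("transforming staged orders")


def check_row_counts(**context):
    # Placeholder: simple data-quality check comparing source and target counts.
    print("validating row counts between source and target")


with DAG(
    dag_id="orders_pipeline",          # hypothetical pipeline name
    start_date=datetime(2026, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform_orders", python_callable=transform_orders)
    validate = PythonOperator(task_id="check_row_counts", python_callable=check_row_counts)

    # Dependencies: extract, then transform, then validate.
    extract >> transform >> validate

In a real pipeline of the kind described here, each placeholder callable would typically delegate to Spark jobs, BigQuery SQL, or GCP-specific operators rather than plain Python functions.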
Required Qualifications:
- 5+ years of experience as a Data Engineer or similar role.
- Strong experience with DAG orchestration (e.g., Apache Airflow).
- Solid understanding of cloud technologies, preferably Google Cloud Platform (GCP).
- Advanced proficiency in SQL for data processing and transformations.
- Hands-on experience running and tuning Apache Spark jobs.
- Experience developing APIs using Python and FastAPI (a brief sketch follows this list).
- Strong understanding of data quality frameworks, checks, and validation techniques.
- Proficiency in Python, Java, Scala, or PySpark, with strong SQL expertise.
- Hands-on experience with GCP data services, including BigQuery, Bigtable, Dataproc, Dataflow, and cloud-native ETL patterns.
- Experience with software delivery methodologies such as Agile, Scrum, and CI/CD practices.
- Strong analytical and problem-solving skills.
- Ability to work independently and in cross-functional teams.
- Good communication and documentation skills.
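As a companion to the FastAPI qualification above, here is a minimal sketch of the kind of Python/FastAPI data-service endpoint the posting refers to. The endpoint path, response model, and in-memory ROW_COUNTS lookup are hypothetical stand-ins for whatever metadata store a real platform would use.

# Illustrative sketch only: a minimal FastAPI data service. The
# /datasets/{name}/row-count endpoint and ROW_COUNTS data are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="data-platform-api")

# Hypothetical row-count metadata, e.g. refreshed by a pipeline's quality checks.
ROW_COUNTS = {"orders": 1_250_000, "customers": 87_500}


class RowCountResponse(BaseModel):
    dataset: str
    row_count: int


@app.get("/datasets/{name}/row-count", response_model=RowCountResponse)
def get_row_count(name: str) -> RowCountResponse:
    # Return the latest recorded row count for a dataset, or 404 if unknown.
    if name not in ROW_COUNTS:
        raise HTTPException(status_code=404, detail=f"unknown dataset: {name}")
    return RowCountResponse(dataset=name, row_count=ROW_COUNTS[name])

# Run locally with:  uvicorn app:app --reload  (assuming this file is saved as app.py)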