Senior Big Data Engineer
Listed on 2026-02-16
IT/Tech
Data Engineer, Big Data, Database Administrator, Data Warehousing
Must Have Skills – Data Engineer with Scala
Skill 1 – Scala, Spark, Python, SQL, Big Data, Hadoop
GCP data tools:
BigQuery, Dataproc, Vertex AI, Pub/Sub, Cloud Functions
Skill 2 – PySpark, Python, Spark SQL, and data modeling
As a Senior Data Engineer, you will:
Pipeline Development:
Design, develop, and maintain ETL/ELT data pipelines for batch and real-time data ingestion, transformation, and loading using Spark (PySpark/Scala) and streaming technologies (Kafka, Flink).
Architecture:
Build and optimize scalable data architectures, including data lakes, data warehouses (BigQuery), and streaming platforms.
Performance Tuning:
Optimize Spark jobs, SQL queries, and data processing workflows for speed, efficiency, and cost-effectiveness.
Data Quality:
Implement data quality checks, monitoring, and alerting systems to ensure data accuracy and consistency.
Qualifications:
Programming:
Strong proficiency in Python and SQL; Scala or Java experience is a plus.
Big Data:
Expertise in Apache Spark (Spark SQL, DataFrames, Streaming).
Streaming:
Experience with message brokers such as Apache Kafka or Pub/Sub.
Cloud:
Familiarity with GCP and Azure data services.
Databases:
Knowledge of data warehousing (Snowflake, Redshift) and NoSQL databases.
Tools:
Experience with Airflow, Databricks, Docker, Kubernetes is a plus.
Total IT Experience – Minimum 8 years
GCP – 4+ years of recent GCP experience