
Senior Data Engineer

Job in Dubai, UAE
Listing for: Washmen
Full Time position
Listed on 2025-11-21
Job specializations:
  • IT/Tech
    Data Engineer, Data Science Manager, Big Data, Data Warehousing
Salary/Wage Range: AED 120,000 - 200,000 per year
Job Description & How to Apply Below

Position Overview

We're seeking a self-sufficient Senior Data Engineer to build and scale the data infrastructure supporting our product, engineering, and analytics teams. You'll architect data pipelines, optimize our data platform, and ensure teams have reliable, high-quality data to drive business decisions.

This is a hands‑on role for someone who can own the entire data engineering stack – from ingestion to transformation to orchestration. You'll work independently to solve complex data challenges and build scalable solutions.

Core Responsibilities

Data Pipeline Development & Optimization
  • Design, build, and maintain scalable data pipelines using Spark and Databricks
  • Develop ETL/ELT workflows to process large volumes of customer behavior data
  • Optimize Spark jobs for performance, cost efficiency, and reliability
  • Build real‑time and batch data processing solutions
  • Implement data quality checks and monitoring throughout pipelines
  • Ensure data freshness and SLA compliance for analytics workloads
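The quality checks and freshness SLAs above can be sketched as simple row-level validations. This is a minimal, illustrative sketch in plain Python; in production this logic would typically run inside a Spark job or a Databricks expectations framework, and the field names and two-hour SLA here are assumptions, not requirements from the posting.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical field names and SLA for illustration only.
REQUIRED_FIELDS = ("event_id", "customer_id", "event_time")
FRESHNESS_SLA = timedelta(hours=2)

def check_record(record, now=None):
    """Return a list of quality violations for one event record."""
    now = now or datetime.now(timezone.utc)
    violations = []
    # Completeness: every required field must be present and non-empty.
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            violations.append(f"missing:{field}")
    # Freshness: the event must have landed within the SLA window.
    ts = record.get("event_time")
    if isinstance(ts, datetime) and now - ts > FRESHNESS_SLA:
        violations.append("stale:event_time")
    return violations
```

In a real pipeline each violation would feed a metric or alert rather than a return value, so SLA breaches surface in monitoring before downstream consumers notice.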
AWS Data Infrastructure
  • Architect and manage data infrastructure on AWS (S3, Glue, EMR, Redshift)
  • Design and implement data lake architecture with proper partitioning and optimization
  • Configure and optimize AWS Glue for ETL jobs and data cataloging
  • Migrate Glue jobs to zero‑ETL integrations where possible
  • Implement security best practices for data access and governance
  • Monitor and optimize cloud costs related to data infrastructure
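"Proper partitioning" in the data lake bullet above usually means Hive-style key prefixes so that Glue, Athena, and Spark can prune partitions instead of scanning the whole bucket. A small sketch of that layout, with a made-up bucket and dataset name:

```python
from datetime import datetime

def partition_key(dataset: str, event_time: datetime,
                  bucket: str = "example-data-lake") -> str:
    """Build an S3 prefix partitioned by year/month/day (Hive style)."""
    return (
        f"s3://{bucket}/{dataset}/"
        f"year={event_time:%Y}/month={event_time:%m}/day={event_time:%d}/"
    )

# e.g. partition_key("orders", datetime(2025, 11, 21))
# -> "s3://example-data-lake/orders/year=2025/month=11/day=21/"
```

Queries that filter on `year`/`month`/`day` then read only the matching prefixes, which is where most of the cost and latency savings come from.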
Data Modeling & Architecture
  • Design and implement dimensional data models for analytics
  • Build star/snowflake schemas optimized for analytical queries
  • Create data marts for specific business domains (retention, campaigns, product)
  • Ensure data model scalability and maintainability
  • Document data lineage, dependencies, and business logic
  • Implement slowly changing dimensions and historical tracking
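The last bullet, slowly changing dimensions with historical tracking, can be sketched as SCD Type 2 logic: expire the old row and append a new current version whenever attributes change. This is a plain-Python illustration with made-up field names; a real warehouse would express it as a `MERGE` in Spark SQL or Delta Lake.

```python
from datetime import date

def apply_scd2(dimension_rows, incoming, today):
    """SCD Type 2 sketch: close out changed rows, append new versions.

    dimension_rows: list of dicts with keys
        key, attrs, valid_from, valid_to, is_current
    incoming: dict mapping business key -> latest attribute dict
    """
    out, seen = [], set()
    for row in dimension_rows:
        key = row["key"]
        if (row["is_current"] and key in incoming
                and incoming[key] != row["attrs"]):
            # Expire the old version...
            out.append({**row, "valid_to": today, "is_current": False})
            # ...and append the new current version.
            out.append({"key": key, "attrs": incoming[key],
                        "valid_from": today, "valid_to": None,
                        "is_current": True})
        else:
            out.append(row)
        seen.add(key)
    # Brand-new dimension members get a first current row.
    for key, attrs in incoming.items():
        if key not in seen:
            out.append({"key": key, "attrs": attrs,
                        "valid_from": today, "valid_to": None,
                        "is_current": True})
    return out
```

The `valid_from`/`valid_to` pair is what gives analysts point-in-time history: filtering on a date range reconstructs the dimension as it looked on that day.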
Orchestration & Automation
  • Build and maintain workflow orchestration using Airflow or similar tools
  • Implement scheduling, monitoring, and alerting for data pipelines
  • Create automated data quality validation frameworks
  • Design retry logic and error handling for production pipelines
  • Build CI/CD pipelines for data workflows
  • Automate infrastructure provisioning using Infrastructure as Code
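The retry-logic bullet above follows a standard pattern: retry a failing task with exponential backoff before surfacing the error to alerting. A minimal sketch in plain Python; in practice this is usually delegated to the orchestrator (Airflow tasks expose `retries`, `retry_delay`, and `retry_exponential_backoff` settings), so the helper below is illustrative only.

```python
import time

def run_with_retries(task, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call task() until it succeeds or attempts are exhausted."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise  # exhausted: surface the failure to alerting
            # Exponential backoff: 1s, 2s, 4s, ... between attempts.
            sleep(base_delay * 2 ** (attempt - 1))
```

Injecting `sleep` as a parameter keeps the backoff testable without real delays, the same reason orchestrators separate retry policy from task code.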
Cross‑Functional Collaboration
  • Partner with the Senior Data Analyst to understand analytics requirements
  • Work with the Growth Director and team to enable data‑driven decision making
  • Support the CRM Lead with data needs for campaign execution
  • Collaborate with Product and Engineering on event tracking and instrumentation
  • Document technical specifications and best practices for the team
  • Work closely with all squads and establish data contracts with engineers so that data lands in the most optimal way
Required Qualifications
Must‑Have Technical Skills
  • Apache Spark: Expert‑level proficiency in PySpark/Spark SQL for large‑scale data processing – this is non‑negotiable
  • Databricks: Strong hands‑on experience building and optimizing pipelines on the Databricks platform – this is non‑negotiable
  • AWS: Deep knowledge of AWS data services (S3, Glue, EMR, Redshift, Athena) – this is non‑negotiable
  • Data Modeling: Proven experience designing dimensional models and data warehouses – this is non‑negotiable
  • Orchestration: Strong experience with workflow orchestration tools (Airflow, Prefect, or similar) – this is non‑negotiable
  • SQL: Advanced SQL skills for complex queries and optimization
  • Python: Strong programming skills for data engineering tasks
Experience
  • 6‑10 years in data engineering with focus on building scalable data platforms
  • Proven track record architecting and implementing data infrastructure from scratch
  • Experience processing large volumes of event data (billions of records)
  • Background in high‑growth tech companies or consumer‑facing products
  • Experience with mobile/web analytics data preferred
Technical Requirements
  • Expert in Apache Spark (PySpark and Spark SQL) with performance tuning experience
  • Deep hands‑on experience with Databricks (clusters, jobs, notebooks, Delta Lake)
  • Strong AWS expertise: S3, Glue, EMR, Redshift, Athena, Lambda, CloudWatch
  • Proficiency with orchestration tools: Airflow, Prefect, Step Functions, or similar
  • Advanced data modeling skills: dimensional modeling, normalization, denormalization
  • Experience…
Position Requirements
10+ years of work experience