
Professional Data Engineer

Job in Dubai, UAE
Listing for: Property Finder
Full Time position
Listed on 2025-12-02
Job specializations:
  • IT/Tech
    Data Engineer, Data Science Manager, Data Analyst
Salary/Wage Range or Industry Benchmark: AED 120,000 - 200,000 yearly
Job Description & How to Apply Below


Property Finder is the leading property portal in the Middle East and North Africa (MENA) region, dedicated to shaping an inclusive future for real estate while spearheading the region's growing tech ecosystem. At its core is a clear and powerful purpose: to change living for good in the region. Founded on the value of great ambitions, Property Finder connects millions of property seekers with thousands of real estate professionals every day. The platform offers a seamless and enriching experience, empowering both buyers and renters to make informed decisions. Since its inception in 2007, Property Finder has evolved into a trusted partner for developers, brokers, and home seekers.

As a lighthouse tech company, it continues to create an environment where people can thrive and contribute meaningfully to the transformation of real estate in MENA.

Position Summary

We are looking for a Data Engineer to build reliable, scalable data pipelines and contribute to the core data ecosystem that powers analytics, AI/ML, and emerging Generative AI use cases. You will work closely with senior engineers and data scientists to deliver high-quality pipelines, models, and integrations that support business growth and internal AI initiatives.

Key Responsibilities

Core Engineering
  • Build and maintain batch and streaming data pipelines with a strong emphasis on reliability, performance, and cost efficiency.
  • Develop SQL, Python, and Spark/PySpark transformations to support analytics, reporting, and ML workloads.
  • Contribute to data model design and ensure datasets adhere to high standards of quality, structure, and governance.
  • Support integrations with internal and external systems, ensuring accuracy and resilience of data flows.
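The transformation and data-quality work described above can be illustrated with a minimal, hypothetical batch step in Python. The record shape and rules (deduplicating by `listing_id`, rejecting non-positive prices, normalising city names) are illustrative assumptions, not taken from the posting; in practice such logic would typically run as a Spark/PySpark or dbt transformation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Listing:
    listing_id: str
    city: str
    price_aed: float

def clean_listings(raw_rows):
    """Deduplicate by listing_id, drop rows with non-positive prices,
    and normalise city names -- a typical quality gate in a batch pipeline."""
    seen = set()
    cleaned = []
    for row in raw_rows:
        listing_id = row.get("listing_id")
        price = row.get("price_aed")
        if not listing_id or listing_id in seen:
            continue  # skip duplicates and rows missing the business key
        if price is None or price <= 0:
            continue  # basic data-quality rule: prices must be positive
        seen.add(listing_id)
        city = str(row.get("city", "")).strip().title()
        cleaned.append(Listing(listing_id, city, float(price)))
    return cleaned

rows = [
    {"listing_id": "a1", "city": " dubai ", "price_aed": 1200000},
    {"listing_id": "a1", "city": "Dubai", "price_aed": 1200000},  # duplicate key
    {"listing_id": "b2", "city": "abu dhabi", "price_aed": -5},   # invalid price
]
print([l.city for l in clean_listings(rows)])  # -> ['Dubai']
```

The same shape (filter, deduplicate, normalise, emit typed records) carries over directly to a PySpark DataFrame pipeline at scale.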
GenAI & Advanced Data Use Cases
  • Build and maintain data flows that support GenAI workloads (e.g., embedding generation, vector pipelines, data preparation for LLM training and inference).
  • Collaborate with ML/GenAI teams to enable high-quality training and inference datasets.
  • Contribute to the development of retrieval pipelines, enrichment workflows, or AI-powered data quality checks.
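The embedding and vector-pipeline responsibilities above can be sketched as a small data-preparation step: chunk a document, embed each chunk, and emit records shaped for a vector store. All names here (`embed_stub`, `to_vector_records`, the record layout) are hypothetical; a real pipeline would call an actual embedding model and a specific vector database client rather than the deterministic hash stand-in used below.

```python
import hashlib

def embed_stub(text, dim=8):
    """Deterministic stand-in for a real embedding model; returns a
    fixed-length vector derived from a hash of the text."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:dim]]

def chunk(text, max_chars=80):
    """Naive fixed-width chunking; production pipelines usually split on
    sentence or token boundaries instead."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def to_vector_records(doc_id, text):
    """Prepare (id, vector, payload) records suitable for loading into a
    vector store for retrieval workloads."""
    return [
        {"id": f"{doc_id}-{i}", "vector": embed_stub(part), "payload": {"text": part}}
        for i, part in enumerate(chunk(text))
    ]

records = to_vector_records("listing-42", "Spacious two-bedroom apartment with marina views. " * 3)
print(len(records), len(records[0]["vector"]))
```

Keeping this step deterministic and idempotent (stable chunk IDs, re-runnable loads) is what makes embedding generation fit naturally into a batch orchestration framework.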
Collaboration & Delivery
  • Work with Data Science, Analytics, Product, and Engineering teams to translate data requirements into reliable solutions.
  • Participate in design reviews and provide input toward scalable and maintainable engineering practices.
  • Uphold strong data quality, testing, and documentation standards.
  • Support deployments, troubleshooting, and operational stability of the pipelines you own.
Professional Growth & Team Contribution
  • Demonstrate ownership of well-scoped components of the data platform.
  • Share knowledge with peers and contribute to team learning through code reviews, documentation, and pairing.
  • Show strong execution skills — delivering high-quality work, on time, with clarity and reliability.
Impact of the Role

In this role, you will help extend and strengthen the data foundation that powers analytics, AI/ML, and GenAI initiatives across the company. Your contributions will improve data availability, tooling, and performance, enabling teams to build intelligent, data-driven experiences.

Tech Stack
  • Languages: Python, SQL, Java/Scala
  • Streaming: Kafka, Kinesis
  • Data Stores: Redshift, Snowflake, ClickHouse, S3
  • Orchestration: Dagster (Airflow legacy)
  • Platforms: Docker, Kubernetes
  • AWS: DMS, Glue, Athena, ECS/EKS, S3, Kinesis
  • ETL/ELT: Fivetran, dbt
  • IaC: Terraform + Terragrunt
Desired Qualifications
  • 5+ years of experience as a Data Engineer.
  • Strong SQL and Python skills; good understanding of Spark/PySpark.
  • Experience building and maintaining production data pipelines.
  • Practical experience working with cloud-based data warehouses and data lake architectures.
  • Experience with AWS services for data processing (Glue, Athena, Kinesis, Lambda, S3, etc.).
  • Familiarity with orchestration tools (Dagster, Airflow, Step Functions).
  • Solid understanding of data modeling and data quality best practices.
  • Experience working with CI/CD pipelines or basic automation for data workflows.
  • Exposure to Generative AI workflows or willingness to learn: embeddings, vector stores, enrichment…