
Data Scientist - Relocate to Saudi Arabia, Expat

Job in Houston, Harris County, Texas, 77246, USA
Listing for: Aramco
Full Time position
Listed on 2026-02-14
Job specializations:
  • IT/Tech
    Data Engineer, Big Data
Salary/Wage Range or Industry Benchmark: 100,000 - 125,000 USD yearly
Job Description
Position: Data Scientist - Relocate to Saudi Arabia (Permanent Expat Relocation)

Job Overview

This position requires full relocation to Saudi Arabia. It is a permanent, full-time role with an expat relocation package.

We are seeking a Data Engineering Specialist to join the Downstream Global Optimizer (GO) team. The Downstream Global Optimizer team drives commercial optimization across Saudi Aramco’s integrated downstream value chain, including domestic and international refining assets. The GO team identifies and captures optimization opportunities to enhance system netback and deliver shareholder value. By building robust data infrastructure, automating pipelines, and enabling real-time analytics, the team supports agile decision-making in dynamic market conditions.

Your primary role is to design, develop, and maintain scalable data platforms that power the iGO (intelligent Global Optimizer) ecosystem. Your work will ensure seamless integration of structured and unstructured data, enable high-performance analytics, and provide a reliable foundation for optimization models, machine learning pipelines, and enterprise reporting. You will focus on building and maintaining the data infrastructure that supports our data scientists: designing and implementing scalable data pipelines, ensuring data quality, and managing data storage solutions.

The specialist will work closely with data scientists and other teams to provide the necessary data infrastructure for advanced analytics and machine learning models.

Key Responsibilities

As the successful candidate, you will be required to perform the following:

  • Design and implement data architectures and efficient ETL (Extract, Transform, Load) processes to move and transform data from various sources into a format suitable for analysis (a minimal sketch follows this list).
  • Ensure the accuracy, completeness, and consistency of data by implementing data validation and cleansing processes.
  • Architect and manage data storage solutions, including data lakes and warehouses, to meet the needs of the organization.
  • Work with data scientists and other stakeholders to understand data requirements and provide appropriate data solutions.
  • Optimize data pipelines and architectures for scalability and performance, ensuring they can handle large volumes of data efficiently.
  • Maintain thorough documentation of data systems, pipelines, and processes for clarity and continuity.
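For orientation only, here is a minimal sketch in plain Python of the ETL-with-validation work described above; the file name, table name, column names, and data-quality rules are hypothetical and not part of this posting:

```python
# Illustrative ETL sketch: extract CSV rows, apply simple data-quality
# rules, and load the validated records into SQLite. All names are
# hypothetical placeholders.
import csv
import sqlite3


def extract(path):
    # Extract: read raw records from a CSV source system.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def transform(rows):
    # Transform + validate: drop incomplete records, normalize types.
    clean = []
    for row in rows:
        if not row.get("unit_id") or not row.get("throughput_bbl"):
            continue  # data-quality rule: skip incomplete records
        clean.append((row["unit_id"], float(row["throughput_bbl"])))
    return clean


def load(rows, db_path="warehouse.db"):
    # Load: upsert validated records into a warehouse table.
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS unit_throughput ("
            "unit_id TEXT PRIMARY KEY, throughput_bbl REAL)"
        )
        conn.executemany(
            "INSERT OR REPLACE INTO unit_throughput VALUES (?, ?)", rows
        )


if __name__ == "__main__":
    load(transform(extract("refinery_feed.csv")))
```
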

Minimum Requirements

  • Proficiency in languages such as Python, SQL, and Java or Scala.
  • Experience with big data technologies like Apache Spark, Hadoop, Kafka, and Airflow (a minimal orchestration sketch follows this list).
  • Knowledge of cloud services such as AWS, Azure, or GCP, including their data storage and processing services.
  • Understanding of data warehousing concepts and experience with tools like Databricks, Microsoft Fabric, and Cloudera.
  • Familiarity with ETL tools and processes for data transformation and migration.
  • Experience with CI/CD pipelines and version control systems like Git.
  • You will hold a Bachelor’s degree in Computer Science, Data Engineering, Software Engineering, or a related field from a recognized institution. A Master’s or advanced degree is preferred.
  • You will have at least 10 years of experience in data engineering, data architecture, or enterprise data platform development, including 5+ years in designing and managing enterprise-grade data pipelines.
  • Strong technical expertise in programming languages (Python, SQL, Scala) and experience with data engineering frameworks (Apache Spark, Kafka, Airflow, Flink).
  • Proven experience building data lakes, data warehouses, and ETL/ELT pipelines for large-scale, heterogeneous datasets.
  • In-depth knowledge of cloud platforms (AWS, GCP, Azure) and distributed data processing tools (e.g., Databricks, Cloudera, Snowflake).
  • Hands-on experience with data governance, metadata management, and data quality frameworks.
  • Ability to work in complex, cross-functional environments, translating business needs into scalable data architectures.
  • Strong communication skills to articulate technical designs to stakeholders and mentor junior engineers.
  • Familiarity with industrial/energy sector data ecosystems (e.g., OT/IT systems, SCADA, ERP, market data) is preferred.
  • Certifications in cloud platforms (AWS/GCP/Azure), data engineering (e.g., Google Professional…
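
For orientation only, a minimal sketch of how such a pipeline might be orchestrated with Airflow (one of the frameworks named above), using the Airflow 2.x TaskFlow API; the DAG name, schedule, and sample data are hypothetical:

```python
# Illustrative orchestration sketch: the same extract/transform/load
# steps expressed as an Airflow 2.x DAG. Task names, schedule, and data
# are hypothetical placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2026, 1, 1), catchup=False)
def downstream_etl():
    @task
    def extract():
        # Pull raw records from a hypothetical source system.
        return [{"unit_id": "U-101", "throughput_bbl": "12500"}]

    @task
    def transform(rows):
        # Validate and normalize; drop incomplete records.
        return [
            {"unit_id": r["unit_id"], "throughput_bbl": float(r["throughput_bbl"])}
            for r in rows
            if r.get("unit_id") and r.get("throughput_bbl")
        ]

    @task
    def load(rows):
        # A real pipeline would write to a data lake or warehouse here.
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


downstream_etl()
```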