This position requires full relocation to Saudi Arabia. It is a permanent, full-time expat position with an attractive relocation package. Please note that only qualified candidates will be contacted.
Overview:
We are seeking a Data Engineering Specialist to join the Downstream Global Optimizer (GO) team.
The Downstream Global Optimizer team drives commercial optimization across Saudi Aramco’s integrated downstream value chain, including domestic and international refining assets. The GO team identifies and captures optimization opportunities to enhance system netback and deliver shareholder value. By building robust data infrastructure, automating pipelines, and enabling real-time analytics, the team supports agile decision-making in dynamic market conditions.
Your primary role is to design, develop, and maintain scalable data platforms that power the iGO (intelligent Global Optimizer) ecosystem. Your work will ensure seamless integration of structured and unstructured data, enable high-performance analytics, and provide reliable foundations for optimization models, machine learning pipelines, and enterprise reporting. You will focus on the data infrastructure that supports our data scientists: designing and implementing scalable data pipelines, ensuring data quality, and managing data storage solutions.
The specialist will work closely with data scientists and other teams to provide the necessary data infrastructure for advanced analytics and machine learning models.
Key Responsibilities
As the successful candidate you will be required to perform the following:
- Design and implement data architectures and efficient ETL (Extract, Transform, Load) processes to move and transform data from various sources into a format suitable for analysis.
- Ensure the accuracy, completeness, and consistency of data by implementing data validation and cleansing processes.
- Architect and manage data storage solutions, including data lakes and warehouses, to meet the needs of the organization.
- Work with data scientists and other stakeholders to understand data requirements and provide appropriate data solutions.
- Optimize data pipelines and architectures for scalability and performance, ensuring they can handle large volumes of data efficiently.
- Maintain thorough documentation of data systems, pipelines, and processes for clarity and continuity.
Required Skills:
- Proficiency in programming languages such as Python, SQL, and Java or Scala.
- Experience with big data technologies like Apache Spark, Hadoop, Kafka, and Airflow.
- Knowledge of cloud services such as AWS, Azure, or GCP, including their data storage and processing services.
- Understanding of data warehousing concepts and experience with tools like Databricks, Microsoft Fabric, and Cloudera.
- Familiarity with ETL tools and processes for data transformation and migration.
- Experience with CI/CD pipelines and version control systems like Git.
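As a minimal, illustrative sketch only (not part of the role description), the extract-validate-transform-load work described above might look like the following; all data, column names, and the in-memory "warehouse" sink are hypothetical stand-ins:

```python
# Illustrative ETL-with-validation sketch: extract raw CSV records,
# drop incomplete rows, aggregate per asset, and load into a sink.
import csv
import io

# Hypothetical raw feed; note the missing volume on the second row.
RAW_CSV = """asset,product,volume_bbl
Refinery-A,diesel,1200
Refinery-B,gasoline,
Refinery-A,jet,800
"""

def extract(text):
    """Extract: parse raw CSV text into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def validate(rows):
    """Validate: keep only rows whose volume is a non-empty integer."""
    clean = []
    for row in rows:
        vol = row.get("volume_bbl", "") or ""
        if vol.strip().isdigit():
            clean.append({**row, "volume_bbl": int(vol)})
    return clean

def transform(rows):
    """Transform: aggregate total volume per asset."""
    totals = {}
    for row in rows:
        totals[row["asset"]] = totals.get(row["asset"], 0) + row["volume_bbl"]
    return totals

def load(totals, sink):
    """Load: write aggregates into a sink (stand-in for a warehouse table)."""
    sink.update(totals)

warehouse = {}
load(transform(validate(extract(RAW_CSV))), warehouse)
print(warehouse)  # {'Refinery-A': 2000}
```

In production this shape is typically expressed as orchestrated tasks (e.g., an Airflow DAG) over distributed engines such as Spark rather than in-process functions.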
Minimum Requirements:
- You will hold a Bachelor’s degree in Computer Science, Data Engineering, Software Engineering, or a related field from a recognized institution.
- A Master’s or advanced degree is preferred.
- You will have at least 10 years of experience in data engineering, data architecture, or enterprise data platform development, including 5+ years in designing and managing enterprise-grade data pipelines.
- Strong technical expertise in programming languages (Python, SQL, Scala) and experience with data engineering frameworks (Apache Spark, Kafka, Airflow, Flink).
- Proven experience building data lakes, data warehouses, and ETL/ELT pipelines for large-scale, heterogeneous datasets.
- In-depth knowledge of cloud platforms (AWS, GCP, Azure) and distributed data processing tools (e.g., Databricks, Cloudera, Snowflake).
- Hands-on experience with data governance, metadata management, and data quality frameworks.
- Ability to work in complex, cross-functional environments, translating business needs into scalable data architectures.
- Strong communication skills to articulate technical designs to stakeholders and mentor junior engineers.
- Familiarity with industrial/energy sector data ecosystems (e.g., OT/IT systems, SCADA, ERP, market data) is preferred.
- Certifications in cloud platforms (AWS/GCP/Azure) or data engineering (e.g., Google Professional Data Engineer, AWS Big Data Specialty) are a plus.