
Senior Data Engineer IRC279569

Job in Romania
Listing for: Hitachi Vantara Corporation
Full Time position
Listed on 2025-12-06
Job specializations:
  • IT/Tech
    Data Engineer, Big Data
Salary/Wage Range or Industry Benchmark: 110,000 - 150,000 USD per year
Job Description & How to Apply Below
Position: Senior Data Engineer IRC279569
Location: Romania

Description

Our client is a global technology and manufacturing company with a long history of innovation across multiple industries, including industrial solutions, worker safety, and consumer goods. Headquartered in the United States, the company develops and produces a wide range of products - from adhesives, abrasives, and protective materials to personal safety equipment, electronic components, and optical films. With tens of thousands of products in its portfolio and operations in markets around the world, it plays a key role in delivering high-quality, reliable solutions for both businesses and consumers.

Requirements

We are looking for a highly skilled and experienced Senior Data Engineer to join our team. In this role, you will be a key player in designing, building, and optimizing our data architecture and pipelines. You will be working on a complex data project, transforming raw data into reliable, high-quality assets ready for analytics, data science, and business intelligence. As a senior member of the team, you will also be expected to mentor junior and mid-level engineers, drive technical best practices, and contribute to the strategic direction of our data platform.

Required Qualifications & Skills
  • 5+ years of professional experience in data engineering or a related role.
  • A minimum of 3 years of deep, hands‑on experience using Python for data processing, automation, and building data pipelines.
  • A minimum of 3 years of strong, hands‑on experience with advanced SQL for complex querying, data manipulation, and performance tuning.
  • Proven experience with cloud data services, preferably Azure (Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Data Lake Storage).
  • Hands‑on experience with big data processing frameworks like Spark (PySpark) and platforms such as Databricks.
  • Solid experience working with large, complex data environments, including data processing, data integration, and data warehousing.
  • Proficiency in data quality assessment and improvement techniques.
  • Experience working with and cleansing a variety of data formats, including unstructured and semi‑structured data (e.g., CSV, JSON, Parquet, XML); a short illustrative sketch follows this list.
  • Familiarity with Agile and Scrum methodologies and project management tools (e.g., Azure DevOps, Jira).
  • Excellent problem‑solving skills and the ability to communicate complex technical concepts effectively to both technical and non‑technical audiences.
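To give a sense of the Python, Spark, and data-format skills listed above, here is a minimal PySpark sketch that ingests semi-structured JSON, applies basic cleansing, and writes Parquet to Azure Data Lake Storage. The storage paths, column names, and types are hypothetical and shown only for illustration.

  # Minimal illustrative sketch (hypothetical paths and column names): a small
  # PySpark job that ingests semi-structured JSON, applies basic cleansing,
  # and writes Parquet to Azure Data Lake Storage.
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("orders_ingest_sketch").getOrCreate()

  # Hypothetical source and destination paths on ADLS Gen2 (abfss://...)
  source_path = "abfss://raw@examplelake.dfs.core.windows.net/orders/*.json"
  target_path = "abfss://curated@examplelake.dfs.core.windows.net/orders_parquet"

  raw = spark.read.json(source_path)

  # Basic cleansing: drop records missing a key, normalise types, deduplicate
  cleaned = (
      raw.dropna(subset=["order_id"])
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
         .dropDuplicates(["order_id"])
  )

  cleaned.write.mode("overwrite").parquet(target_path)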
Preferred Qualifications & Skills
  • Knowledge of DevOps methodologies and CI/CD practices for data pipelines.
  • Experience with modern data platforms like Microsoft Fabric for data modeling and integration.
  • Experience with consuming data from REST APIs (see the short sketch after this list).
  • Experience with database design, optimization, and performance tuning for software application backends.
  • Knowledge of dimensional data modeling concepts (Star Schema, Snowflake Schema).
  • Familiarity with modern data architecture concepts such as Data Mesh.
  • Real‑world experience supporting and troubleshooting critical, end‑to‑end production data pipelines.
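As a brief illustration of the REST API consumption mentioned above, the following Python sketch pages through an API and lands the raw responses as JSON Lines for a downstream pipeline to pick up. The endpoint, field names, and pagination scheme are assumptions, not a specific client API.

  # Minimal illustrative sketch (hypothetical endpoint and field names): paging
  # through a REST API and landing the raw responses as JSON Lines.
  import json
  import requests

  BASE_URL = "https://api.example.com/v1/shipments"   # hypothetical endpoint
  OUTPUT_FILE = "shipments_raw.jsonl"

  def fetch_all(session: requests.Session, page_size: int = 100):
      """Yield records from a paginated endpoint until no pages remain."""
      page = 1
      while True:
          resp = session.get(
              BASE_URL, params={"page": page, "per_page": page_size}, timeout=30
          )
          resp.raise_for_status()
          records = resp.json().get("items", [])
          if not records:
              break
          yield from records
          page += 1

  with requests.Session() as session, open(OUTPUT_FILE, "w", encoding="utf-8") as out:
      for record in fetch_all(session):
          out.write(json.dumps(record) + "\n")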
Job responsibilities

Key Responsibilities
  • Architect & Build Data Pipelines: Design, develop, and maintain robust, scalable, and reliable data pipelines using Python, SQL, and Spark on the Azure cloud platform.
  • End‑to‑End Data Solutions: Architect and implement end‑to‑end data solutions, from data ingestion and processing to storage in our data lake (Azure Data Lake Storage, Delta Lake) and data warehouse.
  • Cloud Data Services Management: Utilize Azure services like Azure Data Factory, Databricks, and Azure SQL Database to build, orchestrate, and manage complex data workflows.
  • Data Quality & Governance: Implement and enforce comprehensive data quality frameworks, including data profiling, cleansing, and validation routines to ensure the highest levels of data integrity and trust (a small example follows this list).
  • Performance Optimization: Analyze and optimize data pipelines for performance, scalability, and cost‑efficiency, ensuring our systems can handle growing data volumes.
  • Mentorship & Best Practices: Mentor and provide technical guidance to junior and mid‑level data engineers. Lead code reviews and champion best practices in data engineering, coding standards, and data modeling.
  • Stak…
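As a brief illustration of the data quality and validation work described above, the following PySpark sketch profiles null counts and enforces two simple rules before data is published. The table path, column names, and rules are hypothetical.

  # Minimal illustrative sketch (hypothetical table and rules): simple data
  # quality checks on a Spark DataFrame - null-count profiling plus two
  # validation rules that fail the job when violated.
  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("dq_checks_sketch").getOrCreate()

  df = spark.read.parquet(
      "abfss://curated@examplelake.dfs.core.windows.net/orders_parquet"  # hypothetical path
  )

  # Profile: null counts per column
  null_profile = df.select(
      [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
  )
  null_profile.show()

  # Validation: order_id must be unique and amount must be non-negative
  duplicate_ids = df.groupBy("order_id").count().filter(F.col("count") > 1).count()
  negative_amounts = df.filter(F.col("amount") < 0).count()

  if duplicate_ids or negative_amounts:
      raise ValueError(
          f"Data quality check failed: {duplicate_ids} duplicate ids, "
          f"{negative_amounts} negative amounts"
      )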
Position Requirements
10+ Years work experience