
Data Engineer Lead

Job in New York, New York County, New York, 10261, USA
Listing for: Data Freelance Hub
Contract position
Listed on 2026-01-01
Job specializations:
  • IT/Tech
    Data Engineer, Big Data
Job Description & How to Apply Below
Location: New York


This role is for a Data Engineer Lead in New York City, NY, on a contract basis. It requires 8+ years of experience building big data solutions and proficiency in AWS, Python, Scala, SQL, and HR data warehousing; financial services experience is essential.

Country: United States
Currency: USD
Salary: Unknown
Posted: December 29, 2025
Work Arrangement: On-site
Location: New York, NY

Tags: #SQL #AWS #ETL #Data Warehouse #SNS #Big Data #Spark #Unix #Data Quality #AWS Glue #Automation #S3 #Databases #Workday #PostgreSQL #Snowflake #Linux #Datasets #Amazon Redshift #Programming #Data Integration #Oracle #Web Services #ML #MySQL #Data Pipeline #SAP #SQS #Data Modeling #Business Analysis #Shell Scripting #Scripting #Hadoop #Scala #Lambda #Data Lineage #Cloud #Compliance #GDPR #Data Engineering #Regression #Security #Apache Spark #Redshift #Python

Title: Data Engineer Lead

Location: New York City, NY (Onsite)

Type: Contract

An AWS Data Engineer role involves designing, building, and maintaining scalable data pipelines, architectures, and solutions on the Amazon Web Services (AWS) cloud platform, with additional focus on the HR Data Warehouse (HR DWH). The role includes developing secure, compliant, and high-performance data platforms to support enterprise analytics across HR, Finance, and Business domains. Key responsibilities include data integration, building ETL/ELT processes using services like AWS Glue and Redshift, data modeling, and ensuring data quality, governance, and security, particularly for sensitive HR and employee data.

This role requires strong proficiency in programming languages such as Python and Scala, as well as experience with SQL, Apache Spark, and serverless architectures.
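For illustration only, here is a minimal sketch of the kind of Glue-based ETL job described above: it reads an HR extract registered in the Glue Data Catalog, trims it to the needed columns, and loads it into Redshift. All database, table, connection, and bucket names are hypothetical placeholders, not details taken from this posting.

    import sys
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.transforms import ApplyMapping
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    # Standard Glue job boilerplate: resolve arguments and initialize contexts.
    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read a raw HR extract from the Glue Data Catalog
    # ("hr_raw" / "workday_employees" are hypothetical names).
    employees = glue_context.create_dynamic_frame.from_catalog(
        database="hr_raw",
        table_name="workday_employees",
    )

    # Keep and retype only the columns the warehouse model needs.
    mapped = ApplyMapping.apply(
        frame=employees,
        mappings=[
            ("employee_id", "string", "employee_id", "string"),
            ("hire_date", "string", "hire_date", "date"),
            ("cost_center", "string", "cost_center", "string"),
        ],
    )

    # Load into Redshift through a Glue connection; the S3 path is the
    # temporary staging area the Redshift COPY reads from (all names hypothetical).
    glue_context.write_dynamic_frame.from_jdbc_conf(
        frame=mapped,
        catalog_connection="redshift-hr-dwh",
        connection_options={"dbtable": "hr_dwh.dim_employee", "database": "analytics"},
        redshift_tmp_dir="s3://example-bucket/glue-tmp/",
    )

    job.commit()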

Key Responsibilities
  • Design, build, and maintain scalable data pipelines and ETL/ELT processes using AWS Glue, EMR, Lambda, and Redshift to support analytics and reporting.
  • Develop and manage HR Data Warehouse (HR DWH) solutions, integrating data from HR systems such as Workday, SAP HCM, Oracle HCM, ADP, payroll, benefits, recruiting, and learning platforms.
  • Create and maintain HR data models (employee, position, compensation, headcount, attrition, performance, time & attendance) optimized for reporting and analytics.
  • Integrate data from multiple structured and semi-structured sources across enterprise systems.
  • Ensure data quality, data lineage, security, and compliance, including handling PII and sensitive HR data in accordance with regulatory requirements (GDPR, SOC, internal controls).
  • Implement data validation, reconciliation, and audit checks for HR and enterprise datasets (see the sketch after this list).
  • Monitor, optimize, and tune data pipelines and Redshift/Snowflake performance.
  • Collaborate with HR stakeholders, business analysts, and reporting teams to understand People Analytics and workforce reporting requirements.
  • Maintain, support, and operationalize existing data solutions in production environments.
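For illustration only, a minimal sketch of the kind of validation and reconciliation check mentioned above, assuming a simple row-count comparison between a source extract and the warehouse load (table name and counts are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class ReconciliationResult:
        table: str
        source_rows: int
        target_rows: int

        @property
        def delta(self) -> int:
            return self.target_rows - self.source_rows

        @property
        def passed(self) -> bool:
            return self.delta == 0

    def reconcile(table: str, source_rows: int, target_rows: int) -> ReconciliationResult:
        """Compare row counts between a source extract and the warehouse load."""
        result = ReconciliationResult(table, source_rows, target_rows)
        if result.passed:
            print(f"[OK]   {table}: {source_rows} rows")
        else:
            # A production pipeline would alert here (e.g., publish to SNS)
            # rather than just print.
            print(f"[FAIL] {table}: source={source_rows}, "
                  f"target={target_rows}, delta={result.delta}")
        return result

    # Hypothetical nightly check on the employee dimension.
    reconcile("hr_dwh.dim_employee", source_rows=52340, target_rows=52340)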
Minimum Skills Required
  • 8+ years of experience in design, development, and end‑to‑end implementation of enterprise‑wide big data solutions.
  • Strong experience designing and developing big data solutions using Spark, Scala, AWS Glue, Lambda, SNS/SQS, and CloudWatch.
  • Strong application development experience in Scala and Python.
  • Strong SQL development experience, preferably with Amazon Redshift.
  • Hands‑on experience with HR Data Warehousing (HR DWH) and workforce analytics.
  • Experience integrating and modeling data from HR systems such as Workday, SAP HCM, Oracle HCM, payroll, benefits, recruiting, and learning systems.
  • Experience with ETL/ELT frameworks and best practices.
  • Strong background in AWS cloud services:
    Lambda, Glue, S3, EMR, SNS, SQS, CloudWatch, Redshift.
  • Expertise in SQL and relational databases such as Oracle, MySQL, PostgreSQL.
  • Experience with Snowflake is an added advantage.
  • Proficiency in Python for data engineering, automation, and orchestration.
  • Experience with shell scripting in Linux/Unix environments.
  • Experience with Big Data technologies:
    Hadoop, Spark.
  • Financial Services experience required.
  • Nice to have:
    Knowledge of Machine Learning models, regression, and validation techniques.
  • Nice to have:
    Experience with People Analytics, HR reporting, and workforce metrics.
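As a toy illustration of the workforce metrics mentioned in the last item, the following sketch computes point-in-time headcount and a simple attrition rate from an employee snapshot. The table layout and the attrition definition (terminations over average headcount) are assumptions for illustration, not requirements from this posting.

    import pandas as pd

    # Hypothetical employee snapshot; term_date is NaT for active employees.
    employees = pd.DataFrame({
        "employee_id": [1, 2, 3, 4],
        "hire_date": pd.to_datetime(["2023-01-10", "2023-03-02", "2024-02-15", "2024-06-01"]),
        "term_date": pd.to_datetime(["2024-03-31", None, None, "2024-08-15"]),
    })

    def headcount_on(date: pd.Timestamp) -> int:
        """Employees hired on or before `date` and not yet terminated."""
        active = (employees["hire_date"] <= date) & (
            employees["term_date"].isna() | (employees["term_date"] > date)
        )
        return int(active.sum())

    def attrition_rate(start: pd.Timestamp, end: pd.Timestamp) -> float:
        """Terminations in the window divided by average headcount over the window."""
        terms = int(employees["term_date"].between(start, end).sum())
        avg_headcount = (headcount_on(start) + headcount_on(end)) / 2
        return terms / avg_headcount if avg_headcount else 0.0

    print(headcount_on(pd.Timestamp("2024-07-01")))   # 3 active employees
    print(attrition_rate(pd.Timestamp("2024-01-01"), pd.Timestamp("2024-12-31")))  # 1.0 here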

