Data Architect
Job in Florham Park, Morris County, New Jersey, 07932, USA
Listed on 2026-02-14
Listing for: Qode
Full Time position
Job specializations:
- IT/Tech: Data Engineer, Cloud Computing
Job Description & How to Apply Below
**Hybrid | Full time**

## Description
**Job Title:** Senior Architect – AWS | Kafka | Glue Streaming | API Consumption
**Location:** Dallas, Texas (preferred) or Florham Park, New Jersey
**Employment Type:** Full time
**About the Role:**
We are seeking a Senior Technical Lead/Architect with strong expertise in AWS-based streaming data pipelines, Apache Kafka (MSK), AWS Glue, Flink, and PySpark to help design and implement a scalable framework for data ingestion, validation, enrichment, reconciliation processing, event logging, data observability, and operational KPI tracking.
You will play a key role in solutioning and building out event-driven capabilities, with control gates in place to measure, track, and improve operational SLAs, and in driving the data quality and reconciliation workflows for a high-impact data platform serving financial applications.
**Key Responsibilities:**
- Lead technical solution discovery for new capabilities and functionality.
- Assist the Product Owner with technical user stories to maintain a healthy feature backlog.
- Lead the development of real-time data pipelines using AWS DMS, MSK/Kafka, and Glue Streaming for CDC ingestion from multiple SQL Server sources (RDS and on-prem).
- Build and optimize streaming and batch data pipelines using AWS Glue (PySpark) to validate, transform, and normalize data into Iceberg and DynamoDB.
- Define and enforce data quality, lineage, and reconciliation logic for both streaming and batch use cases.
- Integrate with S3 Bronze/Silver layers and implement efficient schema evolution and partitioning strategies using Iceberg.
- Collaborate with architects, analysts, and downstream application teams to design API- and file-based egress layers.
- Implement monitoring, logging, and event-based alerting using CloudWatch, SNS, and EventBridge.
- Mentor junior developers and enforce best practices for modular, secure, and scalable data pipeline development.
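As a rough illustration of the CDC ingestion work described above, the core idea can be sketched without any AWS services: fold a stream of Debezium-style change events (create/update/delete) into a keyed table snapshot. The event shape and field names (`op`, `key`, `after`) are illustrative assumptions, not the actual pipeline's schema.

```python
# Minimal, dependency-free sketch of applying CDC (change data capture)
# events to a keyed table state, in the spirit of the Debezium event model.
# Field names ("op", "key", "after") are assumptions for illustration.

def apply_cdc_events(state: dict, events: list) -> dict:
    """Fold a stream of CDC events into a {key: row} snapshot."""
    for event in events:
        op = event["op"]          # "c" = create, "u" = update, "d" = delete
        key = event["key"]
        if op in ("c", "u"):
            state[key] = event["after"]   # upsert the new row image
        elif op == "d":
            state.pop(key, None)          # tolerate deletes of unseen keys
        else:
            raise ValueError(f"unknown CDC op: {op!r}")
    return state

events = [
    {"op": "c", "key": 1, "after": {"id": 1, "amount": 100}},
    {"op": "u", "key": 1, "after": {"id": 1, "amount": 150}},
    {"op": "c", "key": 2, "after": {"id": 2, "amount": 75}},
    {"op": "d", "key": 2, "after": None},
]
snapshot = apply_cdc_events({}, events)
print(snapshot)  # {1: {'id': 1, 'amount': 150}}
```

A real Glue/MSK pipeline would apply the same fold per key with transactional writes to Iceberg or DynamoDB; this sketch only shows the event semantics.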
**Required Skills:**
- 6+ years of hands-on, expert-level data engineering experience in cloud-based environments (AWS preferred) with event-driven implementations.
- Strong experience with Apache Kafka / AWS MSK, including topic design, partitioning, and Kafka Connect/Debezium.
- Proficiency in AWS Glue (PySpark) for both batch and streaming ETL.
- Working knowledge of AWS DMS, S3, Lake Formation, DynamoDB, and Iceberg.
- Solid grasp of schema evolution, CDC patterns, and data reconciliation frameworks.
- Experience with infrastructure-as-code (CDK/Terraform) and DevOps practices (CI/CD, Git).
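The "data reconciliation frameworks" called out in the skills above can likewise be sketched independently of the AWS stack: diff two keyed snapshots (e.g. the source CDC state versus the target copy) into missing, extra, and mismatched keys. All function and field names here are hypothetical.

```python
# Dependency-free sketch of the core of a data reconciliation check:
# diff two keyed snapshots into missing, extra, and mismatched keys.
# A real framework would add tolerances, lineage, and alerting on top.

def reconcile(source: dict, target: dict) -> dict:
    """Return a reconciliation report comparing two {key: row} snapshots."""
    missing = sorted(k for k in source if k not in target)    # in source only
    extra = sorted(k for k in target if k not in source)      # in target only
    mismatched = sorted(
        k for k in source if k in target and source[k] != target[k]
    )
    return {
        "missing": missing,
        "extra": extra,
        "mismatched": mismatched,
        "in_sync": not (missing or extra or mismatched),
    }

source = {1: {"amount": 100}, 2: {"amount": 75}, 3: {"amount": 50}}
target = {1: {"amount": 100}, 2: {"amount": 80}, 4: {"amount": 10}}
report = reconcile(source, target)
print(report)
# {'missing': [3], 'extra': [4], 'mismatched': [2], 'in_sync': False}
```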