Senior Service Designer
Job in San Jose, Santa Clara County, California, 95199, USA
Listed on 2025-12-02
Listing for: UST
Full Time position
Job specializations:
- IT/Tech: Data Engineer, Big Data
Job Description & How to Apply Below
Job Summary
As a Product Engineer - Big Data, you will design, build, and optimize large‑scale data processing pipelines using modern Big Data technologies. You will collaborate with data scientists, analysts, and product managers to ensure data accessibility, security, and reliability. Your work will focus on delivering scalable, high‑quality data solutions while driving continuous improvements across the data lifecycle.
Key Responsibilities
- ETL Pipeline Development & Optimization – Design and implement complex, end‑to‑end ETL pipelines for large‑scale data ingestion and processing. Optimize performance, scalability, and resilience of data pipelines.
- Big Data Processing – Develop and optimize real‑time and batch data workflows using Apache Spark, Scala/PySpark, and Apache Kafka. Ensure fault‑tolerant, high‑performance data processing. Knowledge of Java and NoSQL is a plus.
- Cloud Infrastructure Development – Build scalable, cost‑efficient cloud‑based data infrastructure leveraging AWS services. Ensure pipelines are resilient to variations in data volume, velocity, and variety.
- Data Analysis & Insights – Work with business teams and data scientists to deliver high‑quality datasets aligned with business needs. Perform data analysis to uncover trends, anomalies, and actionable insights. Present findings clearly to technical and non‑technical stakeholders.
- Real‑time & Batch Data Integration – Enable seamless integration of real‑time streaming and batch datasets from systems like AWS MSK. Ensure consistency and reliability across data ingestion sources and formats.
- CI/CD & Automation – Use Jenkins (or similar tools) to implement CI/CD pipelines. Automate testing, deployment, and monitoring of data solutions.
- Data Security & Compliance – Ensure pipelines comply with relevant data governance and regulatory frameworks (e.g., GDPR, HIPAA). Implement controls for data integrity, security, and traceability.
- Collaboration & Cross‑Functional Work – Partner with engineers, product managers, and data teams in an Agile environment. Contribute to sprint planning, architectural discussions, and solution design.
- Troubleshooting & Performance Tuning – Identify and resolve bottlenecks in data pipelines. Conduct performance tuning and adopt best practices for ingestion, processing, and storage.
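For illustration, the extract‑transform‑load flow described in the responsibilities above can be sketched as a minimal batch pipeline. This is a generic Python sketch, not the employer's actual stack; the record fields (`user_id`, `amount`) are hypothetical, and a production pipeline would use Spark/Kafka as the listing describes.

```python
# Minimal ETL sketch: extract raw records, transform (quality filter +
# normalization), and load into a target store. Purely illustrative.

def extract(source):
    """Extract: yield raw records from an iterable source."""
    for record in source:
        yield record

def transform(records):
    """Transform: drop malformed rows and normalize amounts to cents."""
    for r in records:
        if r.get("user_id") is None or r.get("amount") is None:
            continue  # skip records that fail basic data-quality checks
        yield {"user_id": r["user_id"],
               "amount_cents": int(round(r["amount"] * 100))}

def load(records, target):
    """Load: append transformed records to the target store."""
    target.extend(records)
    return target

raw = [
    {"user_id": 1, "amount": 9.99},
    {"user_id": None, "amount": 5.0},  # malformed: dropped in transform
    {"user_id": 2, "amount": 0.5},
]
warehouse = load(transform(extract(raw)), [])
```

The same extract/transform/load stages map directly onto a Spark job: `extract` becomes a source read (e.g. from Kafka), `transform` a set of DataFrame operations, and `load` a sink write.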
Requirements
- 4‑8 years of hands‑on experience in Big Data engineering, cloud data platforms, and large‑scale data processing.
- Proven experience delivering scalable data solutions in production environments.
- Familiarity with data governance frameworks and compliance standards.
- Experience with monitoring tools such as AWS CloudWatch, Splunk, or Dynatrace.
- Working knowledge of Java or NoSQL databases.
- Exposure to cost optimization strategies in cloud environments.
Key skills: Apache Spark, Scala, AWS, Big Data
Position Requirements
10+ Years work experience