Data Engineer - III
Job in San Francisco, San Francisco County, California, 94199, USA
Listed on 2025-12-01
Listing for: Compunnel, Inc.
Full Time position
Job Specializations:
- IT/Tech: Data Engineer, Big Data, Cloud Computing
Job Description & How to Apply Below
We are seeking a Data Engineer – III to design, build, and maintain scalable, secure, and repeatable data pipelines across multiple platforms.
This role involves working with large datasets to transform raw data into actionable insights and collaborating with cross-functional teams to support data-driven decision-making.
Key Responsibilities:
- Design, develop, and maintain robust data pipelines to ingest, transform, catalog, and deliver curated, trusted data from diverse sources.
- Participate in Agile ceremonies and follow Scaled Agile practices as defined by the program team.
- Deliver high-quality data products and services aligned with SAFe (Scaled Agile Framework) principles.
- Proactively identify and resolve issues related to data pipelines and analytical data stores.
- Implement monitoring and alerting for data pipelines and stores, including auto-remediation strategies.
- Apply a security-first, test-driven, and automation-focused approach to data engineering.
- Collaborate with product managers, data scientists, analysts, and business stakeholders to understand data needs and deliver appropriate infrastructure and tools.
- Stay current with emerging technologies and recommend tools and frameworks to enhance data engineering processes.
Qualifications:
- Bachelor’s degree in Computer Science, Information Systems, or a related field, or equivalent experience.
- 3+ years of experience with Python and PySpark.
- 2+ years of experience with Databricks, Collibra, and Starburst.
- Experience using Jupyter notebooks for coding and unit testing.
- Hands-on experience with relational and NoSQL databases, including star-schema and dimensional modeling.
- 2+ years of experience with modern data stacks (e.g., S3, Spark, Airflow, Lakehouse architectures, real-time databases).
- Experience with cloud data warehouses such as Redshift or Snowflake.
- Strong background in traditional ETL and Big Data engineering, both on-premises and in the cloud.
- Experience building end-to-end data pipelines for unstructured and semi-structured data using Spark.
- Data engineering experience in AWS, including familiarity with relevant services and tools.
- Experience working in secure environments with access to confidential supervisory information.
- Familiarity with compliance and data governance requirements in regulated industries.