Senior Data Engineer
Listed on 2026-01-05
IT/Tech
Data Engineer, Data Analyst
Overview
Job Title: Senior Data Engineer
Department: Digital
Reporting To: Manager, Data Engineering
Employment Type: Full-Time
Pay Transparency:
The anticipated starting salary range for Denver, CO based individuals expressing interest in this position is $140,000/yr. to $160,000/yr. Salary to be determined by the education, experience, knowledge, skills, abilities, and location of the applicant, as well as internal and external equity.
Audacy offers benefits eligible employees a comprehensive benefits package to include: health care coordinator, medical, dental, vision, telemedicine, flexible spending accounts, health savings account, disability, life insurance, critical illness, hospital indemnity, accident insurance, paid time off (sick, vacation, personal, parental, volunteer), 401(k) retirement plan, discounted employee stock purchase, student loan payment assistance program, legal assistance, life assistance program, identity theft protection, discounted home and auto insurance, and pet insurance.
Location: Denver, CO, New York, or Philadelphia
Work Arrangement: Hybrid
Audacy is looking for a Senior Data Engineer to join our dynamic Data Engineering team. In this role, you’ll be at the forefront of building and optimizing scalable data infrastructure that supports critical business insights, product innovation, and operational excellence. You’ll collaborate with other engineering teams, analytics, and product teams to design and drive data-forward solutions across a massive-scale digital audio ecosystem.
Whether you’re designing data pipelines, optimizing data models, or ensuring the reliability and performance of our data platform, you’ll play a key role in shaping how Audacy utilizes big data on the magnitude of billions of rows to unlock insights, enhance decision-making, and drive successful business outcomes. We are looking for a technical leader who values focus, curiosity, and collaboration and who can mentor other data engineers—and we offer an environment that supports your growth while respecting your work-life balance.
Responsibilities
What You’ll Do
Design and develop batch and streaming data pipelines using cloud technologies (GCP, AWS, BigQuery, Snowflake, etc.). GCP and BigQuery experience preferred.
Build, maintain, and optimize data models and dimensional warehouses to support analytics and BI tooling as well as other operational workflows.
Design and develop robust ETL/ELT processes for ingesting, transforming, and validating terabytes of structured data.
Troubleshoot complex data pipeline issues to determine root cause and corrective actions.
Research, analyze, recommend, and implement technical approaches for solving complex data transformation and integration problems within a Medallion data warehouse architecture.
Collaborate with business analysts, data scientists, and stakeholders to understand complex data problems and translate business requirements into technical solutions and a structured backlog.
Lead and mentor other data engineers to follow the team’s evolving data engineering standards and best practices.
Analyze, support and improve the performance, reliability, and scalability of our data infrastructure.
Define and implement data quality, data governance, privacy-first practices, and validation mechanisms to ensure accuracy and compliance (e.g., GDPR).
Document pipelines, systems, and workflows to improve our internal data catalog.
Stay current with emerging technologies and evaluate their relevance in our stack.
Required:
8+ years of professional data engineering and programming experience building scalable ETL/ELT systems.
Degree in Computer Science or related field, or equivalent practical experience.
6+ years of experience with Python and SQL.
Advanced analytical problem-solving and troubleshooting skills.
Extensive experience working with cloud data warehouses (e.g., BigQuery or Snowflake) and Google data tools (Dataflow, Dataform, Pub/Sub, GCS).
3+ years of experience with Airflow (Cloud Composer), data modeling (star/relational schema), and test case development.
Solid understanding of columnar vs. row-based storage solutions and when to…