
Senior Platform Engineer - GCP

Job in Denver, Denver County, Colorado, 80238, USA
Listing for: Audacy
Full Time position
Listed on 2026-01-06
Job specializations:
  • IT/Tech
    Cloud Computing, Systems Engineer
Salary/Wage Range: $140,000 - $150,000 USD per year
Job Description
**Overview:**

Audacy is looking for a self-starting and results-oriented Senior Platform Engineer in GCP with a pragmatic and intelligent approach to problem solving. You will join a team of like-minded engineers to design, develop, and support consumer desktop, web, and mobile applications that deliver streaming broadcast radio and podcasts for Audacy's 40M active monthly listeners. Evangelizing a "DevOps" culture by enabling and empowering developer teams to be as self-sufficient as possible is key to being successful at Audacy.

**Pay Transparency:**

The anticipated starting salary range for individuals expressing interest in this position is $140,000/yr. - $150,000/yr. Salary to be determined by specific location.

_Audacy offers employees who are eligible for benefits a comprehensive benefits package which includes: a health care coordinator, medical, dental, vision, mental health, telemedicine, flexible spending accounts, health savings account, disability, life insurance, critical illness, hospital indemnity, accident insurance, paid time off (sick, flex-time away/vacation days, personal, parental, volunteer), 401(k) retirement plan, student loan payment assistance program, legal assistance, life assistance program, identity theft protection, discounted home and auto insurance, and pet insurance._
**Responsibilities**

**What You'll Do:**

+ Deploy infrastructure to GCP using Terraform & Terragrunt.

+ Support hundreds of deployments over 10+ EKS clusters and 2+ GKE clusters.

+ Provide expert-level support to a streaming platform for broadcast radio and to a Data Team leveraging data and AI tools like BigQuery, Vertex AI, Jupyter Notebooks, etc. (see the illustrative sketch after this list).

+ Work with developer teams to automate their release pipelines using GitLab, empowering them to own the deployment lifecycle.

+ Contribute to project planning and influence solution architecture design that satisfies business goals while maintaining Platform Engineering standards.

+ Deliver 24/7 rotational on-call support of applications and infrastructure.

+ Monitor system and application performance and troubleshoot/resolve escalated issues.

+ Collaborate cross-functionally on the care and feeding of the existing Kubernetes architecture, including controller, cluster, and add-on upgrades.

+ Continually identify opportunities to optimize and improve all operational aspects of our technical solutions.
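
As a rough illustration of the kind of BigQuery support work described above, here is a minimal sketch using Google's `google-cloud-bigquery` Python client. The project ID and the `analytics.listening_events` table are hypothetical placeholders, not actual Audacy systems.

```python
# Minimal sketch: run a BigQuery aggregation query with the official Python client.
# Assumes `pip install google-cloud-bigquery` and Application Default Credentials
# (e.g. `gcloud auth application-default login`). Project/table names are placeholders.
from google.cloud import bigquery

def top_podcasts(project_id: str = "my-gcp-project") -> None:
    client = bigquery.Client(project=project_id)

    query = """
        SELECT podcast_id, COUNT(*) AS plays
        FROM `my-gcp-project.analytics.listening_events`  -- hypothetical table
        GROUP BY podcast_id
        ORDER BY plays DESC
        LIMIT 10
    """
    # client.query() submits the job; .result() blocks until the query finishes.
    for row in client.query(query).result():
        print(row.podcast_id, row.plays)

if __name__ == "__main__":
    top_podcasts()
```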

**Qualifications**

**More About You:**

Required Skills:

+ At least 3 years of experience in a DevOps or Platform Engineering role.

+ Proficiency in pipeline automation with GitLab; experience with Helm automation, such as pre-deploy hooks, will be helpful.

+ Practical experience with most core GCP services: BigQuery, Vertex AI, Jupyter Notebooks, Airflow, Google Transfer Services, Cloud Run, Cloud Storage, Cloud Spanner, etc.

+ Proven experience deploying and supporting GCP services: BigQuery, Vertex AI, GKE, Kubeflow Pipelines, Networking, etc.

+ Exposure to multiple cloud accounts that leverage VPC peering and Transit Gateway for systems that span accounts.

+ Hands-on experience managing Kubernetes in a production setting (a minimal sketch follows this list).

+ Demonstrated experience with GitLab, Bitbucket, or GitHub automation.

+ Experience writing infrastructure as code in Terraform.

+ Experience with logging, monitoring, and alerting solutions like Grafana, Datadog, Honeycomb, etc.

+ Practical experience with Python or Go, Ansible, and Bash.

+ Experience with Unix / Red Hat.

+ Experience troubleshooting and resolving front-end application performance, connectivity, and other issues in a production setting.

+ Demonstrated strong knowledge of engineering cloud-based solutions that are fault-tolerant and highly available in a production setting.
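
For the hands-on Kubernetes requirement above, a minimal sketch along these lines, using the official `kubernetes` Python client, shows the sort of day-to-day cluster inspection involved. The `streaming` namespace is a placeholder assumption.

```python
# Minimal sketch: report non-running pods in a namespace with the official
# `kubernetes` Python client (`pip install kubernetes`). Assumes a working
# kubeconfig; the namespace name is a placeholder.
from kubernetes import client, config

def report_unhealthy_pods(namespace: str = "streaming") -> None:
    config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()

    for pod in v1.list_namespaced_pod(namespace).items:
        if pod.status.phase != "Running":
            print(f"{pod.metadata.name}: {pod.status.phase}")

if __name__ == "__main__":
    report_unhealthy_pods()
```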

**Preferred:**

+ At least 3 years of experience with GitLab pipelines and managing runners.

+ Knowledge / familiarity with GCP services like BigQuery, Vertex AI, GKE, Kubeflow Pipelines, Networking, etc.

+ Experience working with Confluent Cloud and managing clusters/Kafka topics.

+ Experience automating pipelines for large language models in Google Cloud.

+ Experience using Grafana Alloy in both contexts of EC2…
Position Requirements
10+ Years work experience