
Sr. Platform Engineer, Kubernetes

Job in Olde West Chester, West Chester Township, Butler County, Ohio, USA
Listing for: Comcast
Full Time position
Listed on 2026-02-15
Job specializations:
  • IT/Tech
    Cloud Computing, SRE/Site Reliability
Salary/Wage Range or Industry Benchmark: 80,000 – 100,000 USD per year
Job Description & How to Apply Below
Location: Olde West Chester

Overview

Make your mark at Comcast -- a Fortune 30 global media and technology company. From the connectivity and platforms we provide, to the content and experiences we create, we reach hundreds of millions of customers, viewers, and guests worldwide. Become part of our award‑winning technology team that turns big ideas into cutting‑edge products, platforms, and solutions that our customers love. We create space to innovate, and we recognize, reward, and invest in your ideas, while ensuring you can proudly bring your authentic self to the workplace.

Join us. You’ll do the best work of your career right here at Comcast. (In most cases, Comcast prefers to have employees on‑site collaborating unless the team has been designated as virtual due to the nature of their work. If a position is listed with both office locations and virtual offerings, Comcast may be willing to consider candidates who live greater than 100 miles from the office for the remote option.)

Job Summary

As a Sr. Platform Engineer, you will be responsible for building, managing, and optimizing the underlying infrastructure and tools that enable efficient, scalable, and reliable execution of large‑scale data processing workloads. You will design systems for collecting metrics (Prometheus) and visualizing data (Grafana) to provide deep insight into application and infrastructure performance. This role is a specialized subset of data platform engineering, ensuring the environment where data engineers and data scientists run their Spark jobs is robust and cost‑efficient.

Responsibilities and Duties
  • Architecting and managing the platforms where Spark runs, such as Kubernetes clusters, or cloud services like AWS (EKS).
  • Packaging Spark workloads (often via Docker/Kubernetes) and integrating them with orchestration systems like Apache Flyte.
  • Deploying infrastructure via Terraform/Ansible.
  • Troubleshooting and resolving job failures, memory/resource issues, and execution anomalies. This includes optimizing Spark configurations to reduce cloud compute and storage costs.
  • Building automation and tools in languages like Python, Java, or Scala, as well as Linux scripting (Bash), to increase the productivity of development teams.
  • Writing medium‑to‑complex SQL queries as needed.
  • Implementing and maintaining systems for monitoring, logging, and alerting (e.g., Prometheus, Grafana) to ensure platform stability and reliability.
  • Developing and optimizing the data catalog platform (e.g., Apache Iceberg, Unity Catalog) for authorization, search, and lineage.
  • Automating workflows, monitoring, and incident resolution.
  • Collaborating with Data Stewards, Analysts, and Scientists to address data needs and issues.
  • Promoting best practices and assessing emerging technologies.
  • Working closely with data engineers, data scientists, and other engineering teams to define requirements, advise on best practices, and ensure successful delivery of data objectives.
  • Engaging with open‑source communities (like Apache Spark, Delta Lake, or Apache Iceberg) to discuss technical challenges and contribute improvements.
  • Creating and maintaining comprehensive documentation for Kubernetes infrastructure, processes, and procedures, and providing training and support to team members as needed.
Qualifications
  • Bachelor's degree in computer science or a related field, or equivalent experience; typically 7 years in a DevOps or Systems Engineering role.
  • Expertise in Apache Spark: deep understanding of Spark architecture, including RDDs, DataFrames, execution hierarchy, lazy evaluation, shuffling, and fault tolerance.
  • Proficiency in languages used for Spark development and automation, such as Python, PySpark, and Scala/Java.
  • Proficient in Linux Scripting (Bash).
  • Proficient in writing SQL.
  • Experience with CI/CD tools and GitHub.
  • Experience setting up and using observability tools such as Prometheus and Grafana.
  • Strong knowledge of networking protocols (TCP/IP, DNS, load balancing, etc.) and hardware components.
  • Automation via Terraform/Ansible.
  • Hands‑on experience with on‑prem and major cloud providers (AWS, Azure, GCP) and container orchestration tools like Docker and Kubernetes.
  • Hands‑on experience setting up IAM, VPC, EC2, etc.
  • Familiarity with related…