
Sr. Kafka Engineer

Job in Home, Indiana County, Pennsylvania, 15747, USA
Listing for: Broadridge
Full Time position
Listed on 2025-12-27
Job specializations:
  • IT/Tech
    Cloud Computing, Systems Engineer, Data Engineer, SRE/Site Reliability
Job Description
Location: Home

Overview

At Broadridge, we empower others to accomplish more. If you’re passionate about developing your career while helping others, come join the Broadridge team.

Broadridge is hiring a Sr. Kafka Engineer! In this role, you’ll lead the strategy, design, and operations of large-scale event streaming solutions built on Confluent Cloud and Kafka. You’ll drive automation, security, and performance across hybrid and multi-cloud environments, ensuring the platform is resilient, scalable, and future-ready. Partnering with cross-functional teams, you’ll power the real-time data streaming that fuels innovation and critical business insights.

Responsibilities:

  • Architecture & Design – Architect, design, and implement Kafka-based solutions using Confluent Cloud and Confluent Platform, ensuring they are highly scalable, resilient, and future-proof.
  • Architecture & Design – Provide technical leadership in designing event-driven architectures that integrate with on-prem systems and multiple cloud environments (AWS, Azure, or GCP).
  • Platform Management – Oversee administration and operational management of Confluent Platform components: Kafka brokers, Schema Registry, Kafka Connect, ksqlDB, and REST Proxy.
  • Platform Management – Develop and maintain Kafka producers, consumers, and streams applications to support real-time data streaming use cases (a minimal producer sketch appears after this list).
  • Deployment & Automation – Lead the deployment and configuration of Kafka topics, partitions, and replication strategies in both on-prem and cloud setups.
  • Deployment & Automation – Automate provisioning, deployment, and maintenance tasks with Terraform, Chef, Ansible, Jenkins, or similar CI/CD tools.
  • Monitoring & Troubleshooting – Implement robust monitoring, alerting, and observability frameworks using Splunk, Datadog, Prometheus, or similar tools for both Confluent Cloud and on-prem clusters.
  • Monitoring & Troubleshooting – Proactively troubleshoot Kafka clusters, diagnose performance issues, and conduct root cause analysis for complex, distributed environments.
  • Performance & Capacity Planning – Conduct capacity planning and performance tuning to optimize Kafka clusters; ensure they can handle current and future data volumes.
  • Performance & Capacity Planning – Define and maintain SLA/SLI metrics to track latency, throughput, and downtime (see the metrics sketch after this list).
  • Security & Compliance – Ensure secure configuration of all Kafka and Confluent components, implementing best practices for authentication (Kerberos/OAuth), encryption (SSL/TLS), and access control (RBAC).
  • Security & Compliance – Collaborate with InfoSec teams to maintain compliance with internal policies and industry regulations (GDPR, SOC, PCI, etc.).
  • Cross-Functional Collaboration – Work with DevOps, Cloud, Application, and Infrastructure teams to define and align on business requirements for data streaming solutions.
  • Cross-Functional Collaboration – Provide guidance and support during platform upgrades, expansions, and new feature rollouts.
  • Continuous Improvement – Stay current with Confluent Platform releases and Kafka community innovations.
  • Continuous Improvement – Drive continuous improvement by recommending new tools, frameworks, and processes to enhance reliability and developer productivity.
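
As a concrete illustration of the producer development and client-side security items above, here is a minimal sketch of a Java producer configured for SASL_SSL with an API key, the pattern typically used against Confluent Cloud. The bootstrap address, topic name, class name, and credentials are placeholders, not values taken from this posting.

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class OrdersProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder bootstrap endpoint; a real value comes from the cluster settings.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "pkc-XXXXX.us-east-1.aws.confluent.cloud:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            // SASL_SSL with the PLAIN mechanism and an API key/secret (placeholders below)
            // is the usual client configuration for Confluent Cloud.
            props.put("security.protocol", "SASL_SSL");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"<API_KEY>\" password=\"<API_SECRET>\";");

            // Durability settings relevant to resilient streaming.
            props.put(ProducerConfig.ACKS_CONFIG, "all");
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                ProducerRecord<String, String> record =
                    new ProducerRecord<>("orders", "order-123", "{\"amount\": 42.0}");
                producer.send(record, (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.printf("Wrote to %s-%d at offset %d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                    }
                });
                producer.flush();
            }
        }
    }

The only dependency is the kafka-clients library; the same Properties-based approach extends to Kerberos or OAuth mechanisms where those are required.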
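
For the SLA/SLI item above, the sketch below reads two of the Kafka client's built-in metrics, record-send-rate and request-latency-avg, in-process; in practice these would be scraped by Prometheus, Datadog, or Splunk as the posting notes. The broker address and class name are placeholders, and the values only populate once the producer is sending traffic.

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ProducerSliProbe {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Every Kafka client exposes metrics; throughput and latency gauges like these
                // are typical inputs for SLI dashboards and alerting thresholds.
                producer.metrics().forEach((metricName, metric) -> {
                    String name = metricName.name();
                    if (name.equals("record-send-rate") || name.equals("request-latency-avg")) {
                        System.out.printf("%s.%s = %s%n",
                            metricName.group(), name, metric.metricValue());
                    }
                });
            }
        }
    }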

Qualifications

  • 5+ years of hands-on experience with Apache Kafka, including 2+ years focused on Confluent Cloud and Confluent Platform.
  • Deep knowledge of Kafka Connect, Schema Registry, Control Center, ksqlDB, and other Confluent components.
  • Experience architecting and managing hybrid Kafka solutions in on-prem and cloud (AWS, Azure, GCP).
  • Advanced understanding of event-driven architecture and the real-time data integration ecosystem.
  • Strong programming/scripting skills (Java, Python, Scala) for Kafka-based application development and automation tasks (a short AdminClient sketch follows this list).
  • DevOps & Automation – Hands-on experience with Infrastructure as Code (Terraform, CloudFormation) for Kafka resource management in both cloud and on-prem environments.
  • Familiarity with Chef, Ansible, or similar configuration management tools to automate deployments.
  • Skilled in CI/CD pipelines (e.g., Jenkins) and version control (Git) for distributed systems.
  • Monitoring & Reliability – Proven ability to monitor and troubleshoot…
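
As an illustration of the "automation tasks" item above, here is a hypothetical sketch that creates a topic programmatically with Kafka's AdminClient, setting partition count, replication factor, retention, and minimum in-sync replicas. In this role such settings would more likely be managed through Terraform or similar IaC, so treat this purely as a small Java-level example; the topic name, counts, class name, and broker address are placeholders.

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.config.TopicConfig;

    public class CreateOrdersTopic {
        public static void main(String[] args) throws ExecutionException, InterruptedException {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

            try (AdminClient admin = AdminClient.create(props)) {
                // 6 partitions, replication factor 3, 7-day retention, and min ISR of 2:
                // the kinds of partition/replication settings described in the posting.
                NewTopic topic = new NewTopic("orders", 6, (short) 3)
                    .configs(Map.of(
                        TopicConfig.RETENTION_MS_CONFIG, "604800000",
                        TopicConfig.MIN_IN_SYNC_REPLICAS_CONFIG, "2"));
                admin.createTopics(Collections.singleton(topic)).all().get();
                System.out.println("Created topic 'orders'");
            }
        }
    }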