Kafka Engineer
Listed on 2026-02-16
IT/Tech
Data Engineer, Cloud Computing, Cybersecurity, Data Security
Overview
The Kafka Engineer / Administrator / Developer is a key member of the program technical team, supporting large-scale data streaming, system integration, and platform modernization initiatives. This role is responsible for designing, developing, administering, and optimizing Apache Kafka clusters and event-driven architectures that support high-volume, mission-critical data flows. The Kafka Engineer works closely with Federal Government stakeholders, architects, developers, DevOps teams, API Gateway (APIGW) teams, and backend system owners to ensure reliable, secure, and scalable event streaming pipelines.
This role plays a critical part in enabling real-time data integration, microservices communication, and operational resilience across complex enterprise systems.
Kafka Engineering & Administration
- Design, build, administer, and maintain Kafka clusters across development, test, and production environments.
- Manage Kafka topics, partitions, brokers, replication, retention policies, and access controls.
- Monitor Kafka performance, availability, throughput, and latency; proactively identify and resolve issues.
- Perform capacity planning, tuning, upgrades, patching, and disaster recovery planning for Kafka environments.
- Implement and maintain high availability and fault-tolerant Kafka configurations.
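The administration duties above can be illustrated with a broker/topic configuration sketch. All values below are placeholders for illustration only, not program standards; actual replication, retention, and in-sync-replica settings would be set per environment.

```properties
# Hypothetical broker-side defaults for a fault-tolerant cluster (placeholder values)
default.replication.factor=3
min.insync.replicas=2
unclean.leader.election.enable=false

# Hypothetical per-topic overrides for a high-volume topic (placeholder values)
# e.g. set via: kafka-configs.sh --alter --topic <topic> --add-config ...
retention.ms=604800000
cleanup.policy=delete
```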
Event Streaming & Integration
- Develop and support event streaming pipelines using Kafka for real-time and near-real-time data processing.
- Integrate Kafka with API Gateway (APIGW)–based microservices and downstream backend systems.
- Design and implement Kafka producers, consumers, and connectors (e.g., Kafka Connect) to support system integrations and ETL/data movement needs.
- Collaborate with application teams to define event schemas, topics, and data contracts.
- Ensure reliable message delivery, data integrity, and error handling across streaming workflows.
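As a hedged sketch of the "event schemas and data contracts" and "error handling" responsibilities above, the following stdlib-only Python example validates an event against a simple contract before serialization. The contract fields, topic semantics, and `ContractViolation` type are hypothetical; in a real pipeline the contract would live in a Schema Registry and the resulting bytes would be handed to a Kafka producer client.

```python
import json

# Hypothetical data contract for an order-events topic:
# required fields and their expected Python types.
ORDER_EVENT_CONTRACT = {"order_id": str, "amount": float, "status": str}

class ContractViolation(ValueError):
    """Raised when an event does not satisfy the topic's data contract."""

def validate_event(event: dict, contract: dict) -> None:
    """Check required fields and types before producing to the topic."""
    for field, expected_type in contract.items():
        if field not in event:
            raise ContractViolation(f"missing field: {field}")
        if not isinstance(event[field], expected_type):
            raise ContractViolation(f"bad type for {field}")

def serialize_event(event: dict, contract: dict) -> bytes:
    """Validate, then serialize to the bytes a Kafka producer would send."""
    validate_event(event, contract)
    return json.dumps(event, sort_keys=True).encode("utf-8")

# A valid event serializes cleanly.
good = {"order_id": "A-1001", "amount": 24.99, "status": "CREATED"}
payload = serialize_event(good, ORDER_EVENT_CONTRACT)

# An invalid event is rejected before it ever reaches the broker,
# keeping bad data off the topic.
bad = {"order_id": "A-1002", "status": "CREATED"}  # missing "amount"
try:
    serialize_event(bad, ORDER_EVENT_CONTRACT)
except ContractViolation as err:
    rejected = str(err)
```

Rejecting malformed events at the producer side, rather than in downstream consumers, is one common way to enforce data contracts across streaming workflows.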
Security, Compliance & Operations
- Implement Kafka security best practices, including authentication, authorization, encryption in transit, and auditing.
- Ensure Kafka implementations comply with CMS security, data governance, and operational standards.
- Support DevSecOps practices, CI/CD pipelines, and infrastructure-as-code approaches where applicable.
- Participate in incident response, root cause analysis, and operational readiness activities.
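The security duties above (authentication, authorization, encryption in transit) can be sketched as a client security configuration. Hostnames, mechanism, credentials, and file paths below are placeholders, not program values; credentials would come from a secrets manager, never from plaintext config.

```properties
# Hypothetical Kafka client security settings (placeholder values)
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="svc-app" password="<from-secrets-manager>";
ssl.truststore.location=/etc/kafka/secrets/client.truststore.jks
ssl.truststore.password=<from-secrets-manager>
```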
Collaboration & Documentation
- Work closely with architects, developers, DevOps engineers, and system administrators to support solution design and delivery.
- Document Kafka architectures, configurations, operational procedures, and integration patterns.
- Provide technical guidance, troubleshooting support, and knowledge transfer to internal teams.
Minimum Qualifications
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field.
- 3+ years of experience developing, administering, and supporting Apache Kafka in enterprise environments.
- Hands-on experience managing Kafka clusters, topics, partitions, and event streaming pipelines.
- Experience integrating Kafka with microservices, API Gateways (APIGW), and backend systems.
- Strong understanding of event-driven architectures, messaging patterns, and data streaming concepts.
- Experience with Linux-based environments and command-line administration.
- Strong troubleshooting and performance tuning skills.
- Ability to clearly communicate technical concepts to both technical and non-technical stakeholders.
Preferred Qualifications
- Experience supporting federal healthcare programs.
- Experience working in Agile, Scrum, and/or DevSecOps environments.
- Familiarity with cloud-based Kafka deployments (Amazon MSK or similar managed Kafka services).
- Experience with CI/CD pipelines and automation tools.
- Knowledge of cloud security concepts and secure data transmission.
- Experience with monitoring tools and observability platforms for Kafka (e.g., Prometheus, Grafana, CloudWatch).
- Familiarity with schema management tools (e.g., Schema Registry).
- Knowledge of containerized environments and orchestration tools (Docker, Kubernetes) is a plus.
Position Details
- Employment Type: Full-Time, W2
- Location: 100% Remote (US-based only)
- Hours: 40 hours/week, availability during core business hours
- Start Date: ASAP
- Eligibility: Must be eligible to obtain a Public Trust clearance
- Salary: $100,000 – $130,000 (commensurate with experience)