DevOps Engineer
Job in San Antonio, Bexar County, Texas, 78208, USA
Listing for: Mothership
Full Time position, listed on 2025-12-07
Job specializations:
- IT/Tech: Data Engineer, Cloud Computing
Job Description
Requirements:
- Proficiency in AWS services such as Amazon MSK (Managed Streaming for Kafka), Amazon Kinesis, AWS Lambda, Amazon S3, Amazon EC2, Amazon RDS, Amazon VPC, and AWS IAM.
- Ability to manage infrastructure as code with AWS CloudFormation or Terraform.
- Understanding of Apache Flink for real-time stream processing and batch data processing.
- Familiarity with Flink's integration with Kafka or other messaging services.
- Experience in managing Flink clusters on AWS (using EC2, EKS, or managed services).
- Deep knowledge of Kafka architecture, including brokers, topics, partitions, producers, consumers, and ZooKeeper.
- Proficiency with Kafka management, monitoring, scaling, and optimization.
- Hands-on experience with Amazon MSK (Managed Streaming for Kafka) or self-managed Kafka clusters on EC2.
- DevOps & Automation:
- Strong experience in automating deployments and infrastructure provisioning.
- Familiarity with CI/CD pipelines using tools like Jenkins, GitLab, GitHub Actions, CircleCI, etc.
- Experience with Docker and Kubernetes, especially for containerizing and orchestrating applications in cloud environments.
- Programming & Scripting:
- Strong scripting skills in Python, Bash, or Go for automation tasks.
- Ability to write and maintain code for integrating data pipelines with Kafka, Flink, and other data sources (see the Python sketch after this list).
- Monitoring & Performance Tuning:
- Knowledge of CloudWatch, Prometheus, Grafana, or similar monitoring tools to observe Kafka, Flink, and AWS service health.
- Expertise in optimizing real-time data pipelines for scalability, fault tolerance, and performance.
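Illustrative only: the Programming & Scripting bullets above call for glue code between data pipelines and Kafka on AWS. Below is a minimal Python sketch, assuming a hypothetical MSK cluster ARN, topic name, and plaintext broker access, that resolves the cluster's bootstrap brokers with boto3 and publishes a single test message with the kafka-python client; none of these identifiers come from the listing.

    import json

    import boto3
    from kafka import KafkaProducer  # kafka-python client

    # Hypothetical identifiers; replace with a real MSK cluster ARN and topic.
    CLUSTER_ARN = "arn:aws:kafka:us-east-1:123456789012:cluster/example/abc"
    TOPIC = "events"


    def get_bootstrap_brokers(cluster_arn: str) -> list[str]:
        """Resolve the plaintext bootstrap broker list for an MSK cluster."""
        msk = boto3.client("kafka")  # "kafka" is the boto3 service name for Amazon MSK
        resp = msk.get_bootstrap_brokers(ClusterArn=cluster_arn)
        return resp["BootstrapBrokerString"].split(",")


    def publish_test_event() -> None:
        """Send one JSON-encoded message to confirm producer connectivity."""
        producer = KafkaProducer(
            bootstrap_servers=get_bootstrap_brokers(CLUSTER_ARN),
            value_serializer=lambda v: json.dumps(v).encode("utf-8"),
        )
        producer.send(TOPIC, {"source": "healthcheck", "status": "ok"})
        producer.flush()  # block until the broker acknowledges the message


    if __name__ == "__main__":
        publish_test_event()

In practice the producer configuration (TLS, SASL/IAM authentication, serializers) depends on how the cluster is secured, which this listing does not specify.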
Responsibilities:
- Infrastructure Design & Implementation:
- Design and deploy scalable and fault-tolerant real-time data processing pipelines using Apache Flink and Kafka on AWS.
- Build highly available, resilient infrastructure for data streaming, including Kafka brokers and Flink clusters.
- Platform Management:
- Manage and optimize the performance and scaling of Kafka clusters (using MSK or self-managed).
- Configure, monitor, and troubleshoot Flink jobs on AWS infrastructure.
- Oversee the deployment of data processing workloads, ensuring low-latency, high-throughput processing.
- Automation & CI/CD:
- Automate infrastructure provisioning, deployment, and monitoring using Terraform, CloudFormation, or other tools.
- Integrate new applications and services into CI/CD pipelines for real-time processing.
- Collaboration with Data Engineering Teams:
- Work closely with Data Engineers, Data Scientists, and DevOps teams to ensure smooth integration of data systems and services.
- Ensure the data platform's scalability and performance meet the needs of real-time applications.
- Security and Compliance:
- Implement proper security mechanisms for Kafka and Flink clusters (e.g., encryption, access control, VPC configurations).
- Ensure compliance with organizational and regulatory standards, such as GDPR or HIPAA, where necessary.
- Optimization & Troubleshooting:
- Optimize Kafka and Flink deployments for performance, latency, and resource utilization.
- Troubleshoot issues related to Kafka message delivery, Flink job failures, or AWS service outages (see the monitoring sketch after this list).
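As a companion to the monitoring and troubleshooting items above, Amazon MSK publishes broker metrics to CloudWatch under the AWS/Kafka namespace. The Python sketch below, with a hypothetical cluster name and broker ID, pulls the last hour of BytesInPerSec averages via boto3; production observability would more likely be built on CloudWatch dashboards, Prometheus, or Grafana as the listing notes.

    from datetime import datetime, timedelta, timezone

    import boto3

    # Hypothetical identifiers; substitute a real MSK cluster name and broker ID.
    CLUSTER_NAME = "example-msk-cluster"
    BROKER_ID = "1"


    def recent_bytes_in(cluster_name: str, broker_id: str) -> list[dict]:
        """Fetch 5-minute averages of BytesInPerSec for one broker over the last hour."""
        cw = boto3.client("cloudwatch")
        now = datetime.now(timezone.utc)
        resp = cw.get_metric_statistics(
            Namespace="AWS/Kafka",  # namespace used by Amazon MSK broker metrics
            MetricName="BytesInPerSec",
            Dimensions=[
                {"Name": "Cluster Name", "Value": cluster_name},
                {"Name": "Broker ID", "Value": broker_id},
            ],
            StartTime=now - timedelta(hours=1),
            EndTime=now,
            Period=300,
            Statistics=["Average"],
        )
        # Datapoints are returned unordered; sort by timestamp for readability.
        return sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])


    if __name__ == "__main__":
        for point in recent_bytes_in(CLUSTER_NAME, BROKER_ID):
            print(point["Timestamp"], point["Average"])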