DevOps/Python Engineer; Onsite, Austin TX or Sunnyvale CA
Listed on 2025-12-24
IT/Tech
Data Engineer, Cloud Computing
DevOps/Python Engineer (Onsite, Austin TX or Sunnyvale CA)
We are looking for an experienced DevOps Engineer to design, build, and optimize our data infrastructure, enabling high-performance, reliable, and scalable data workflows. The ideal candidate has deep expertise in the modern data ecosystem (Druid, Databricks, dbt, Redshift, etc.), a strong understanding of distributed systems, and a proven track record in managing data pipelines and platforms. In addition, strong programming skills are essential for building automation, custom integrations, and advanced data solutions.
Key Responsibilities:
- Design, implement, and maintain highly available and scalable data pipelines leveraging tools such as Druid, Databricks, dbt, and Amazon Redshift
- Manage and optimize distributed data systems for real‑time, batch, and analytical workloads
- Develop custom scripts and applications using programming languages (Python, Scala, or Java) to enhance data workflows and automation
- Implement automation for deployment, monitoring, and alerting of data workflows
- Collaborate with data engineering, analytics, and platform teams to deliver reliable and performant data services
- Monitor data quality, reliability, and cost efficiency across platforms
- Build and enforce data governance, lineage, and observability practices
- Work with cloud platforms (AWS/Azure/GCP) to provision and maintain data infrastructure
- Apply CI/CD and Infrastructure-as-Code (IaC) principles to data workflows
Requirements:
- 5+ years of experience in DataOps, Data Engineering, DevOps Engineering, or related roles
- Strong hands‑on experience with Druid, Databricks, dbt, and Redshift (experience with Snowflake, BigQuery, or similar is a plus)
- Solid understanding of distributed systems architecture and data infrastructure at scale
- Proficiency in SQL and strong programming skills in at least one language (Python, Scala, or Java)
- Experience with orchestration tools (Airflow, Dagster, Prefect, etc.)
- Familiarity with cloud‑native services on AWS, Azure, or GCP
- Experience with CI/CD tools (GitHub Actions, GitLab CI, Jenkins, etc.)
- Strong problem‑solving, debugging, and performance‑tuning skills
- Experience with real‑time streaming platforms (Kafka, Kinesis, Pulsar)
- Knowledge of containerization/orchestration (Docker, Kubernetes)
- Experience with Infrastructure-as-Code (Terraform, CloudFormation)
Experience:
5‑8 Years.
The expected compensation for this role ranges from $60,000 to $135,000.
Final compensation will depend on various factors, including your geographical location, minimum wage obligations, skills, and relevant experience. The role is also eligible for Wipro's standard benefits including medical and dental benefits, disability insurance, paid time off (inclusive of sick leave), and other paid and unpaid leave options.
Applicants are advised that employment in some roles may be conditioned on successful completion of a post‑offer drug screening, subject to applicable state law.
Wipro provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. Applications from veterans and people with disabilities are explicitly welcome.
Seniority level:
Mid-Senior level
Employment type:
Full‑time
Job function:
Engineering and Information Technology