Senior Data Engineer - DevOps (GitLab, Terraform)
Location: Houston, Harris County, Texas, 77246, USA
Listed on: 2025-12-28
Listing for: First Citizens Bank
Position type: Full Time
Job specializations:
- IT/Tech: Cloud Computing, Data Engineer
Job Description & How to Apply Below
Overview
This is a remote role that may only be filled in the following locations: NC, AZ, TX.
We are seeking an experienced DevOps Engineer to design, build, and maintain CI/CD pipelines, infrastructure automation, and deployment workflows supporting our data engineering platform. This role focuses on infrastructure as code, configuration management, cloud operations, and enabling data engineers to deploy reliably and rapidly across AWS and Azure environments.
Responsibilities
CI/CD Pipeline & Deployment Automation
- Design and implement robust CI/CD pipelines using Azure DevOps or GitLab; automate build, test, and deployment processes for data applications, dbt Cloud jobs, and infrastructure changes.
- Build deployment orchestration for multi-environment (dev, qa, uat, production) workflows with approval gates, rollback mechanisms, and artifact management (see the promotion-gate sketch after this list).
- Implement GitOps practices for infrastructure and application deployments; maintain version control and audit trails for all changes.
- Optimize pipeline performance, reduce deployment times, and enable fast feedback loops for rapid iteration.
- Design and manage Snowflake, AWS and Azure infrastructure using Terraform; ensure modularity, reusability, and consistency across environments.
- Provision and manage cloud resources across AWS and Azure.
- Implement tagging strategies and resource governance (see the tag-audit sketch after this list); maintain Terraform state management and implement remote state backends.
- Support multi-cloud architecture patterns and ensure portability between AWS and Azure where applicable.
- Deploy and manage Ansible playbooks for configuration management, patching, and infrastructure orchestration across cloud environments.
- Utilize Puppet for infrastructure configuration, state management, and compliance enforcement; maintain Puppet modules and manifests for reproducible environments.
- Automate VM provisioning, OS hardening, and application stack deployment; reduce manual configuration and ensure environment consistency.
- Build automation for scaling, failover, and disaster recovery procedures.
- Automate Snowflake provisioning, warehouse sizing, and cluster management via Terraform; integrate Snowflake with CI/CD pipelines.
- Implement Infrastructure as Code patterns for Snowflake roles, permissions, databases, and schema management.
- Build automated deployment workflows for dbt Cloud jobs and Snowflake objects; integrate version control with Snowflake changes.
- Monitor Snowflake resource utilization, costs, and performance; implement auto-suspend/auto-resume policies and scaling strategies (see the auto-suspend sketch after this list).
- Develop Python scripts and tools for infrastructure automation, cloud operations, and deployment workflows.
- Build custom integrations between CI/CD systems, cloud platforms, and Snowflake; create monitoring and alerting automation.
- Integrate monitoring and logging solutions (Splunk, Dynatrace, CloudWatch, Azure Monitor) into CI/CD and infrastructure stacks.
- Build automated alerting for infrastructure health, deployment failures, and performance degradation.
- Implement centralized logging for applications, infrastructure, and cloud audit trails; maintain log retention and compliance requirements.
- Create dashboards and metrics for infrastructure utilization, deployment frequency, and change failure rates (see the metrics sketch after this list).
- Support deployment of data processing jobs, Airflow DAGs, and dbt Cloud transformations through automated pipelines.
- Implement blue-green or canary deployment patterns for zero-downtime updates to data applications (see the traffic-shift sketch after this list).
- Build artifact management workflows (Docker images, Python packages, dbt artifacts); integrate with Artifactory or cloud registries.
- Collaborate with data engineers on deployment best practices and production readiness reviews.
- Design backup and disaster recovery strategies for data infrastructure; automate backup provisioning and testing.
- Implement infrastructure redundancy and failover automation using AWS/Azure native services.
- Maintain comprehensive documentation for infrastructure architecture, CI/CD workflows, and operational procedures.
- Create runbooks and troubleshooting guides for common issues; document infrastructure changes and design decisions.
- Establish DevOps best practices and standards; share knowledge through documentation, lunch-and-learns, and mentoring.
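As a rough illustration of the approval-gated, multi-environment promotion described above, a minimal Python sketch; the dev/qa/uat/production order comes from the listing, while the function name and the passed-build bookkeeping are assumptions.

```python
# Hypothetical promotion gate: a pipeline step that refuses to deploy to an
# environment unless the build has already passed in every earlier one.
# Environment order matches the listing's dev -> qa -> uat -> production flow.

ENV_ORDER = ["dev", "qa", "uat", "production"]

def check_promotion(target_env: str, passed_envs: set[str]) -> None:
    """Raise if any environment earlier in the chain has not passed yet."""
    if target_env not in ENV_ORDER:
        raise ValueError(f"unknown environment: {target_env}")
    required = ENV_ORDER[:ENV_ORDER.index(target_env)]
    missing = [env for env in required if env not in passed_envs]
    if missing:
        raise RuntimeError(
            f"cannot deploy to {target_env}: no passing build in {missing}"
        )

if __name__ == "__main__":
    check_promotion("uat", {"dev", "qa"})  # ok: dev and qa have passed
    try:
        check_promotion("production", {"dev", "qa"})  # uat has not passed yet
    except RuntimeError as err:
        print(err)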
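For the tagging and resource-governance work, a minimal audit sketch using boto3 (the official AWS SDK for Python); the required tag keys here are an assumed policy, not taken from the listing.

```python
# Hypothetical tag-governance audit: report EC2 instances missing any
# required tag, so governance violations surface before cost reporting breaks.
import boto3

REQUIRED_TAGS = {"Owner", "Environment", "CostCenter"}  # assumed policy

def untagged_instances(region: str = "us-east-1") -> list[tuple[str, set]]:
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                missing = REQUIRED_TAGS - tags
                if missing:
                    findings.append((instance["InstanceId"], missing))
    return findings

if __name__ == "__main__":
    for instance_id, missing in untagged_instances():
        print(f"{instance_id}: missing tags {sorted(missing)}")
```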
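For the Snowflake auto-suspend policies, a minimal sketch using the snowflake-connector-python package; the 60-second threshold, the role, and the environment-variable names are assumptions.

```python
# Hypothetical cost-control script: enforce an auto-suspend policy on every
# warehouse so idle compute stops accruing credits.
import os
import snowflake.connector

AUTO_SUSPEND_SECONDS = 60  # assumed policy threshold

def enforce_auto_suspend() -> None:
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        role="SYSADMIN",  # assumed role with ALTER WAREHOUSE privileges
    )
    try:
        cur = conn.cursor()
        cur.execute("SHOW WAREHOUSES")
        names = [row[0] for row in cur.fetchall()]  # first column is the name
        for name in names:
            cur.execute(
                f'ALTER WAREHOUSE "{name}" SET AUTO_SUSPEND = {AUTO_SUSPEND_SECONDS}'
            )
            print(f"{name}: auto_suspend set to {AUTO_SUSPEND_SECONDS}s")
    finally:
        conn.close()

if __name__ == "__main__":
    enforce_auto_suspend()
```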
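For the deployment-frequency and change-failure-rate metrics, a minimal self-contained sketch; the deployment-record shape is an assumption chosen for illustration.

```python
# Hypothetical metrics helper for the dashboards described above: compute
# deployment frequency and change failure rate from a deployment log.
from dataclasses import dataclass
from datetime import date

@dataclass
class Deployment:
    day: date
    succeeded: bool

def deployment_frequency(deploys: list[Deployment], days_in_window: int) -> float:
    """Average deployments per day over the reporting window."""
    return len(deploys) / days_in_window if days_in_window else 0.0

def change_failure_rate(deploys: list[Deployment]) -> float:
    """Fraction of deployments that failed (0.0 when there are none)."""
    return (
        sum(1 for d in deploys if not d.succeeded) / len(deploys)
        if deploys else 0.0
    )

if __name__ == "__main__":
    log = [
        Deployment(date(2025, 12, 1), True),
        Deployment(date(2025, 12, 2), False),
        Deployment(date(2025, 12, 2), True),
    ]
    print(f"frequency: {deployment_frequency(log, 7):.2f}/day")
    print(f"failure rate: {change_failure_rate(log):.0%}")
```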
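For the blue-green/canary pattern, a minimal sketch that shifts ALB listener traffic between two target groups using boto3's weighted forwarding; the ARNs are placeholders and the 10%-then-100% rollout is an assumed policy.

```python
# Hypothetical blue-green traffic shift on AWS: move listener traffic between
# two ALB target groups by adjusting forward-action weights.
import boto3

def shift_traffic(listener_arn: str, blue_arn: str, green_arn: str,
                  green_weight: int) -> None:
    """Send green_weight percent of traffic to green, the rest to blue."""
    elbv2 = boto3.client("elbv2")
    elbv2.modify_listener(
        ListenerArn=listener_arn,
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": blue_arn, "Weight": 100 - green_weight},
                    {"TargetGroupArn": green_arn, "Weight": green_weight},
                ]
            },
        }],
    )

if __name__ == "__main__":
    # Canary-style rollout: 10% to green first, then full cutover (placeholder ARNs).
    shift_traffic("arn:aws:...listener", "arn:aws:...blue", "arn:aws:...green", 10)
    shift_traffic("arn:aws:...listener", "arn:aws:...blue", "arn:aws:...green", 100)
```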
Qualifications:
- Bachelor's degree and 4 years of experience in data engineering, big data technologies, and cloud platforms; OR
- High school diploma or GED and 8 years of experience in data engineering, big data technologies, and cloud platforms.
Preferred:
- CI/CD tools: Azure DevOps Pipelines or GitLab CI/CD (hands-on pipeline development)
- Infrastructure as Code: Terraform (AWS and Azure providers), production-grade experience
- Configuration Management: Ansible and/or Puppet
- …
Position Requirements
10+ years of work experience
To View & Apply for jobs on this site that accept applications from your location or country, tap the button below to make a Search.
(If this job is in fact in your jurisdiction, then you may be using a Proxy or VPN to access this site, and to progress further, you should change your connectivity to another mobile device or PC).
(If this job is in fact in your jurisdiction, then you may be using a Proxy or VPN to access this site, and to progress further, you should change your connectivity to another mobile device or PC).
Search for further Jobs Here:
×