
Sr DevOps Engineer

Remote / Online - Candidates ideally in Omaha, Douglas County, Nebraska, 68197, USA
Listing for: Kiewit
Full Time, Remote/Work from Home position
Listed on 2026-02-20
Job specializations:
  • IT/Tech
    Cloud Computing, Data Engineer, Systems Engineer
Salary/Wage Range or Industry Benchmark: 100,000 - 130,000 USD yearly
Job Description & How to Apply Below

Market: Corporate Home Office

Employment Type: Full Time

Position Overview

The Sr DevOps Engineer role focuses on helping build and improve the company’s cloud and data environments so they are reliable, secure, and easy for teams to use. You’ll work on modernizing core systems, streamlining how environments are set up, and supporting the tools that store and move large amounts of data. This role involves creating repeatable processes, improving automation, and collaborating across teams to ensure they have what they need to work efficiently.

You’ll also work closely with cloud, platform, and data engineering teams while documenting plans, improvements, and recommendations. The ideal candidate has experience working in cloud environments, supporting data systems, improving processes, and communicating clearly across teams.

District Overview

Kiewit Technology Group builds solutions to enable and support our company's expansive operations. Our mission is to deliver project schedule and cost certainty by employing technology designed by and for the construction industry. Our team utilizes systems and tools that manage every part of Kiewit's business and the project lifecycle to improve planning and day-to-day execution in the field. We give our people real-time data to make faster, smarter decisions.

Location

This is an in-office role located in Omaha, NE. Relocation assistance is not offered for this position.

Responsibilities
  • Design and automate the provisioning of Azure infrastructure using infrastructure-as-code (IaC) patterns (see the sketch after this list).
  • Modernize core infrastructure components including networking, compute, storage, and identity.
  • Improve environment standardization across development, test, and production landscapes.
  • Strengthen governance, security, and operational resilience within cloud environments.
  • Enhance provisioning and deployment automation for data platforms including data warehouses, data lakes, and analytics services.
  • Improve CI/CD patterns for data pipelines and data platform components.
  • Support containerized, serverless, and distributed data workloads.
  • Implement scalable infrastructure patterns that support high-volume data ingestion and transformation workloads.
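
As a rough illustration of the repeatable, infrastructure-as-code provisioning described above (not code from this posting), the sketch below uses Python and the Azure SDK to create a resource group and deploy an ARM template. The subscription ID, resource group name, template file, and parameter names are placeholder assumptions.

```python
# Minimal sketch: provision a resource group and deploy an ARM template with
# the Azure SDK for Python. All names, IDs, and paths are placeholders.
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "rg-dataplatform-dev"                     # placeholder
LOCATION = "centralus"

credential = DefaultAzureCredential()  # env vars, managed identity, or az login
client = ResourceManagementClient(credential, SUBSCRIPTION_ID)

# Idempotent: re-running converges the resource group to the desired state.
client.resource_groups.create_or_update(
    RESOURCE_GROUP,
    {"location": LOCATION, "tags": {"environment": "dev", "owner": "platform-team"}},
)

# Deploy an exported ARM template (Bicep compiles down to this JSON format).
with open("main.json") as f:
    template = json.load(f)

deployment = client.deployments.begin_create_or_update(
    RESOURCE_GROUP,
    "baseline-infra",
    {
        "properties": {
            "mode": "Incremental",  # only add or update resources in the template
            "template": template,
            "parameters": {"environment": {"value": "dev"}},
        }
    },
).result()

print(f"Deployment finished: {deployment.properties.provisioning_state}")
```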

Infrastructure as Code & Automation

  • Translate knowledge from AWS CloudFormation/CDK or Terraform into Azure-native patterns.
  • Develop reusable modules, templates, and patterns for infrastructure and data services.
  • Improve automation scripts using PowerShell, Bash, or Python (a sketch follows this list).
  • Partner with Cloud Architecture, Platform Engineering, and Data Engineering teams to advance standardized infrastructure and data platform patterns.
  • Produce high-quality documentation, architectural diagrams, execution plans, and process improvements.
  • Deliver clear, actionable recommendations aligned with cloud and infrastructure best practices.
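
One way to picture the "reusable modules and templates" and environment-standardization work above, purely as an illustration with made-up names and values: a small Python helper that renders environment-specific deployment parameters from a single shared definition, so dev, test, and production stay consistent instead of being hand-edited copies.

```python
# Illustrative sketch only: render standardized, per-environment deployment
# parameters from one shared catalog. Profile names and values are invented.
from dataclasses import dataclass


@dataclass(frozen=True)
class EnvironmentProfile:
    name: str           # dev / test / prod
    location: str       # Azure region
    sku: str            # compute or storage tier
    instance_count: int


# One shared catalog keeps environments standardized.
PROFILES = {
    "dev": EnvironmentProfile("dev", "centralus", "Standard_B2s", 1),
    "test": EnvironmentProfile("test", "centralus", "Standard_D2s_v5", 2),
    "prod": EnvironmentProfile("prod", "eastus2", "Standard_D4s_v5", 3),
}


def render_parameters(environment: str, workload: str) -> dict:
    """Build an ARM/Bicep-style parameters document for one environment."""
    profile = PROFILES[environment]  # raises KeyError for unknown environments
    return {
        "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
        "contentVersion": "1.0.0.0",
        "parameters": {
            "workloadName": {"value": workload},
            "location": {"value": profile.location},
            "sku": {"value": profile.sku},
            "instanceCount": {"value": profile.instance_count},
            "tags": {"value": {"environment": profile.name, "workload": workload}},
        },
    }


if __name__ == "__main__":
    import json
    print(json.dumps(render_parameters("dev", "data-lake"), indent=2))
```
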
Qualifications
  • 4+ years of experience in Cloud Infrastructure Engineering, DevOps, or Data Engineering roles.
  • Strong hands‑on experience provisioning and managing Azure or GCP cloud infrastructure.
  • Practical experience with infrastructure as code (Bicep, ARM, Terraform, CloudFormation, or CDK).
  • Experience supporting data platforms such as data lakes, data warehouses, or large‑scale analytics environments.
  • Experience building and maintaining CI/CD pipelines for infrastructure and data workloads.
  • Experience working with container‑based workloads and centralized registries (Azure Container Registry, JFrog, GitHub Container Registry).
  • Ability to assess current‑state infrastructure and deliver structured modernization recommendations.
  • Strong documentation and process improvement capabilities.

Technical Experience

  • Azure or GCP compute, networking, storage, and identity services.
  • Infrastructure as code (Ansible, Terraform, Bicep).
  • Automation scripting (PowerShell, Bash, Python).
  • Monitoring and observability (Azure Monitor, Application Insights, CloudWatch, or similar); see the sketch after this list.
  • Experience with data pipeline orchestration or ETL/ELT tooling is a plus.
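
As a small, assumed example of the observability tooling named above (not code from Kiewit), the azure-monitor-opentelemetry package can forward trace spans from an automation or ingestion script to Application Insights; the connection string and the ingestion step itself are placeholders for illustration.

```python
# Hedged sketch: emit a trace span from a data-ingestion step to Application
# Insights via the azure-monitor-opentelemetry distro. The connection string
# comes from an environment variable; the pipeline step is invented.
import os

from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

configure_azure_monitor(
    connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]
)

tracer = trace.get_tracer(__name__)


def ingest_batch(batch_id: str, row_count: int) -> None:
    """Placeholder ingestion step instrumented with an OpenTelemetry span."""
    with tracer.start_as_current_span("ingest-batch") as span:
        span.set_attribute("batch.id", batch_id)
        span.set_attribute("batch.rows", row_count)
        # ... actual load/transform work would go here ...


if __name__ == "__main__":
    ingest_batch("2026-02-20-001", 125_000)
```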

Other Requirements:

  • Work productively and meet deadlines on time.
  • Communicate and interact effectively and professionally with supervisors, employees, and others individually or in a team environment.
  • Perform work safely and effectively.…