
Google Cloud Data Engineer

Job in Virginia, St. Louis County, Minnesota, 55792, USA
Listing for: Linuxcareers
Full Time position
Listed on 2026-01-07
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing, Data Science Manager, Data Analyst
Job Description & How to Apply Below

Job Family

Data Science Consulting

Travel Required

Up to 10%

Clearance Required

Ability to Obtain Public Trust

What You Will Do

Guidehouse is seeking an experienced Data Engineer to join our Technology AI and Data practice within the Defense & Security segment. This individual will have a strong data engineering background and be a hands‑on technical contributor, responsible for designing, implementing, and maintaining scalable, cloud‑native data pipelines that power interactive dashboards, enabling federal clients to achieve mission outcomes, operational efficiency, and digital transformation.

This is an exciting opportunity for someone who thrives at the intersection of data engineering, Google Cloud technologies, and public sector modernization. The Data Engineer will collaborate with cross‑functional teams and client stakeholders to modernize legacy environments, implement scalable BigQuery‑centric data pipelines using Dataform and Python, and support advanced analytics initiatives for our federal client within the insurance space.

Client Leadership & Delivery
  • Collaborate with government clients to understand enterprise data architecture, ingestion, transformation, and reporting requirements within a Google Cloud Platform (GCP) environment.
  • Communicate technical designs, tradeoffs, and delivery timelines clearly to both technical and non‑technical audiences.
  • Lead the development of extract‑transform‑load (ETL) and extract‑load‑transform (ELT) pipelines using Cloud Composer (GCP‑hosted Airflow), Dataform, and BigQuery to support our analytical data warehouse powering downstream Looker dashboards.
  • Adhere to high‑quality delivery standards and promote measurable outcomes across data migration and visualization efforts.
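The ETL/ELT responsibilities above hinge on running pipeline steps in dependency order. In Cloud Composer that ordering is declared as an Airflow DAG; as a library-free sketch (the task names and sample record are hypothetical, not from the posting), the same extract → transform → load dependency graph can be expressed with Python's stdlib graphlib:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline steps; in Cloud Composer each would be an Airflow task.
def extract():
    # Pretend this pulls raw records from Cloud Storage.
    return [{"policy_id": "A1", "premium": 1200.0}]

def transform(rows):
    # Normalize field names before loading into the warehouse.
    return [{"policy_id": r["policy_id"], "premium_usd": r["premium"]} for r in rows]

def load(rows):
    # Pretend this writes to a BigQuery table; return the row count.
    return len(rows)

# Dependency graph: transform depends on extract, load depends on transform.
graph = {"transform": {"extract"}, "load": {"transform"}}
order = list(TopologicalSorter(graph).static_order())
print(order)  # ['extract', 'transform', 'load']
```

An Airflow DAG expresses the same edges with `extract >> transform >> load`; the scheduler, not the pipeline author, decides when each task actually runs.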
Solution Development & Innovation
  • Design, develop, and maintain scalable ETL/ELT pipelines using SQL (BigQuery), Dataform (SQLX), Cloud Storage, and Python (Cloud Composer/Airflow, Cloud Functions).
  • Apply modern ELT/ETL and analytics engineering practices using BigQuery and Dataform to enable version‑controlled, testable, and maintainable data transformations.
  • Leverage tools such as GitLab and GitHub to manage version control, merge requests, and promotion pipelines.
  • Optimize data pipelines and warehouse performance for large‑scale analytical workloads, including partitioning, clustering, incremental processing, and cost optimization to enable downstream BI utilizing Looker.
  • Validate compliance with federal data governance, security, and performance standards.
  • Design and document enterprise data models, metadata strategies, data lineage frameworks, and other relevant documentation, as needed.
  • Align data from multiple discrete datasets into a cohesive, interoperable architecture, identifying opportunities for linkages between datasets, normalization, field standardization, etc.
  • Assist with cleanup of existing data and models, including use of ETL.
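The warehouse-optimization bullet above leans on BigQuery's native partitioning and clustering, which prune the bytes scanned by analytical queries. As a minimal sketch (the dataset, table, and column names are hypothetical), a helper that assembles the corresponding BigQuery DDL:

```python
def partitioned_table_ddl(table, columns, partition_col, cluster_cols):
    """Build a BigQuery CREATE TABLE statement with date partitioning
    and clustering; partition pruning on the date column and cluster
    ordering on the listed columns reduce scan cost for BI queries."""
    cols = ",\n  ".join(f"{name} {typ}" for name, typ in columns)
    return (
        f"CREATE TABLE {table} (\n  {cols}\n)\n"
        f"PARTITION BY DATE({partition_col})\n"
        f"CLUSTER BY {', '.join(cluster_cols)}"
    )

# Hypothetical claims table for an insurance-sector warehouse.
ddl = partitioned_table_ddl(
    "analytics.claims",
    [("claim_id", "STRING"), ("claim_ts", "TIMESTAMP"), ("state", "STRING")],
    partition_col="claim_ts",
    cluster_cols=["state", "claim_id"],
)
print(ddl)
```

Incremental processing then becomes a matter of writing only the partitions touched since the last run, rather than rebuilding the full table.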
Practice & Team Leadership
  • Work closely with data architects, data scientists, data analysts, and cloud engineers to deliver integrated solutions.
  • Collaborate across Scaled Agile Framework (SAFe) teams and participate in Agile ceremonies including standups, retrospectives, and Program Increment (PI) planning.
  • Manage tasks and consistently document progress and outcomes using Confluence and Jira.
  • Support documentation, testing, and deployment of data products.
  • Mentor junior team members and contribute to reusable frameworks and accelerators.
  • Contribute to thought leadership, business development, and best practice development across the AI & Data team.
What You Will Need
  • US Citizenship and the ability to obtain and maintain a federal Public Trust clearance. Individuals with an active Public Trust clearance are preferred.
  • Bachelor's degree in computer science, engineering, mathematics, statistics, or a related technical field.
  • A minimum of three (3) years of experience in data engineering within cloud environments.
  • Strong proficiency in SQL for data modeling and data quality testing, and in Python for pipeline design. Comfortable with the Linux command line for git, deploying code to the cloud, and interacting with cloud files.
  • Experience with orchestration tools…
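The data quality testing called for above is the kind of check Dataform assertions encode against BigQuery tables (not-null columns, unique keys). A stdlib-only Python sketch of the same two checks, with hypothetical field names:

```python
def not_null(rows, field):
    """Return rows where `field` is missing or None — the failure set
    a not-null assertion would surface."""
    return [r for r in rows if r.get(field) is None]

def unique_key(rows, field):
    """Return values of `field` that occur more than once — the failure
    set a unique-key assertion would surface."""
    seen, dupes = set(), set()
    for r in rows:
        value = r.get(field)
        if value in seen:
            dupes.add(value)
        else:
            seen.add(value)
    return sorted(dupes)

rows = [
    {"policy_id": "A1", "state": "MN"},
    {"policy_id": "A2", "state": None},
    {"policy_id": "A1", "state": "VA"},
]
print(not_null(rows, "state"))      # [{'policy_id': 'A2', 'state': None}]
print(unique_key(rows, "policy_id"))  # ['A1']
```

In a pipeline, a non-empty failure set would fail the run (or quarantine the offending rows) before the data reaches downstream Looker dashboards.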