Software Engineer, Data Engineer

Job in Columbus, Franklin County, Ohio, 43224, USA
Listing for: Judge Group, Inc.
Full Time position
Listed on 2026-02-16
Job specializations:
  • IT/Tech
    Data Engineer
Salary/Wage Range: $69.00 – $74.00 USD per hour
Job Description

Location: Columbus, OH

Salary: $69.00 – $74.00 USD per hour

Description:

About the Role

Please note: this is a contingent role.

In this position, you will serve as a senior technical contributor supporting large-scale data engineering initiatives. You will design, build, and optimize modern data lake and data processing architectures on Google Cloud Platform (GCP). You will partner with cross-functional engineering teams to solve complex data challenges, advise on architectural decisions, and ensure solutions meet enterprise standards for scalability, reliability, and security.

This role is ideal for engineers with deep experience in cloud-native data platforms, large-scale distributed processing, and advanced analytics data models.

Responsibilities

Data Lake Architecture & Storage
  • Design and implement scalable data lake architectures (e.g., Bronze/Silver/Gold layered models).
  • Define Cloud Storage (GCS) architecture including bucket structures, naming standards, lifecycle policies, and IAM models.
  • Apply best practices for Hadoop/HDFS-like storage, distributed file systems, and data locality.
  • Work with columnar formats (Parquet, Avro, ORC) and compression for performance and cost optimization.
  • Develop effective partitioning strategies, data organization techniques, and backfill approaches (a brief sketch follows this list).
  • Build curated and analytical data models optimized for BI and visualization tools.
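
For illustration only, a minimal Python sketch of the Hive-style partitioning and columnar-format writing described above, using pandas and pyarrow. The dataset, column names, and output path are hypothetical; in practice the target would be a gs:// URI in Cloud Storage.

    import pandas as pd
    import pyarrow as pa
    import pyarrow.parquet as pq

    # Hypothetical curated ("silver") dataset with a partition column.
    df = pd.DataFrame({
        "event_date": ["2026-02-01", "2026-02-01", "2026-02-02"],
        "customer_id": [101, 102, 103],
        "amount_usd": [19.99, 5.00, 42.50],
    })

    table = pa.Table.from_pandas(df)

    # Write Snappy-compressed Parquet, Hive-partitioned by event_date so query
    # engines can prune partitions at read time. Swap the local root_path for
    # a gs:// URI to land the data in GCS.
    pq.write_to_dataset(
        table,
        root_path="silver/transactions",
        partition_cols=["event_date"],
        compression="snappy",
    )
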
Data Ingestion & Orchestration
  • Build batch and streaming ingestion pipelines using Google Cloud Platform-native tools.
  • Design event-driven architectures using Pub/Sub with well-defined schemas and versioning.
  • Implement incremental ingestion, CDC patterns, idempotency, and deduplication.
  • Develop workflows using Cloud Composer / Apache Airflow (a minimal DAG sketch follows this list).
  • Create mechanisms for error handling, monitoring, replay, and historical backfills.
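
As a rough illustration of the orchestration work above, a minimal Cloud Composer / Airflow DAG; the DAG id, schedule, and task bodies are hypothetical placeholders.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def ingest_batch(**context):
        # Placeholder: a real task would pull only records newer than the
        # last successful watermark (incremental / CDC-style ingestion).
        print(f"Ingesting data for {context['ds']}")

    def validate_batch(**context):
        # Placeholder: row-count and freshness checks before publishing.
        print(f"Validating data for {context['ds']}")

    with DAG(
        dag_id="daily_ingest",            # hypothetical DAG id
        start_date=datetime(2026, 1, 1),
        schedule="@daily",
        catchup=True,                     # allows historical backfills
    ) as dag:
        ingest = PythonOperator(task_id="ingest", python_callable=ingest_batch)
        validate = PythonOperator(task_id="validate", python_callable=validate_batch)
        ingest >> validate
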
Data Processing & Transformation
  • Build scalable batch and streaming data pipelines using Dataflow (Apache Beam) and/or Spark (Dataproc); a minimal Beam example follows this list.
  • Write optimized BigQuery SQL leveraging clustering, partitioning, and cost-efficient design.
  • Utilize Hadoop ecosystem tools (Hive, Pig, Sqoop) where applicable.
  • Write production-grade Python for data engineering with maintainable, testable code.
  • Manage schema evolution with minimized downstream disruption.
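
A minimal Apache Beam sketch of the batch-transformation pattern above; the bucket paths and record layout are hypothetical. The same pipeline runs on Dataflow by supplying DataflowRunner plus project/region options.

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def parse_line(line: str):
        # Hypothetical CSV layout: customer_id,amount_usd
        customer_id, amount = line.split(",")
        return customer_id, float(amount)

    options = PipelineOptions()  # defaults to the local DirectRunner for testing

    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "Read" >> beam.io.ReadFromText("gs://hypothetical-bucket/raw/transactions.csv")
            | "Parse" >> beam.Map(parse_line)
            | "SumPerCustomer" >> beam.CombinePerKey(sum)
            | "Format" >> beam.MapTuple(lambda cid, total: f"{cid},{total}")
            | "Write" >> beam.io.WriteToText("gs://hypothetical-bucket/curated/customer_totals")
        )
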
Analytics & Data Serving
  • Optimize BigQuery datasets for performance, governance, and cost.
  • Build semantic layers, governed metrics, and data serving patterns for BI consumption.
  • Integrate datasets with BI tools using compliant access controls and dashboarding standards.
  • Expose data through views, APIs, and curated analytics-ready datasets (see the example after this list).
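
One illustrative way to expose a curated, analytics-ready view with the google-cloud-bigquery client; the project, dataset, and table names are hypothetical.

    from google.cloud import bigquery

    client = bigquery.Client(project="my-analytics-project")  # hypothetical project

    # A governed view for BI consumption: dashboards query the view, so base
    # table changes can be absorbed here without breaking downstream tools.
    view = bigquery.Table("my-analytics-project.serving.customer_daily_totals")
    view.view_query = """
        SELECT customer_id,
               event_date,
               SUM(amount_usd) AS total_usd
        FROM `my-analytics-project.silver.transactions`
        GROUP BY customer_id, event_date
    """
    client.create_table(view, exists_ok=True)
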
Data Governance, Quality & Metadata
  • Implement metadata management, cataloging, and ownership standards.
  • Define lineage models to support auditing and troubleshooting.
  • Build data quality frameworks (validation, freshness, SLAs, alerting); a freshness-check sketch follows this list.
  • Establish and enforce data contracts, schema policies, and data reliability standards.
  • Work with audit logging and compliance readiness processes.
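
For the quality framework noted above, a minimal freshness check against BigQuery; the table name, timestamp column, and SLA threshold are all hypothetical.

    from datetime import datetime, timedelta, timezone

    from google.cloud import bigquery

    FRESHNESS_SLA = timedelta(hours=2)                    # hypothetical SLA
    TABLE = "my-analytics-project.silver.transactions"    # hypothetical table

    client = bigquery.Client()
    row = next(iter(client.query(
        f"SELECT MAX(ingested_at) AS latest FROM `{TABLE}`"
    ).result()))

    lag = datetime.now(timezone.utc) - row.latest
    if lag > FRESHNESS_SLA:
        # A production framework would emit a metric or page on-call instead.
        raise RuntimeError(f"{TABLE} is stale: last ingest was {lag} ago")
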
Cloud Platform Management
  • Manage Google Cloud Platform environments including project setup, resource boundaries, billing, quotas, and cost optimization.
  • Implement IAM best practices with least-privilege design and secure service account usage (see the sketch after this list).
  • Configure secure networking including VPCs, private access, and service connectivity.
  • Manage encryption strategies using KMS/CMEK and perform platform-level security audits.
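
A sketch of granting a service account least-privilege, read-only access on a bucket with the google-cloud-storage client; the bucket and service account names are hypothetical.

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("hypothetical-data-lake-bucket")

    # Request policy version 3, which supports conditional role bindings.
    policy = bucket.get_iam_policy(requested_policy_version=3)

    # Least privilege: the pipeline's service account only needs to read
    # objects, so grant objectViewer rather than a broader admin role.
    policy.bindings.append({
        "role": "roles/storage.objectViewer",
        "members": {"serviceAccount:reader@my-project.iam.gserviceaccount.com"},
    })
    bucket.set_iam_policy(policy)
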
DevOps, Platform & Reliability
  • Build and maintain CI/CD pipelines for data platform and pipeline deployments.
  • Manage secret storage with Google Cloud Platform Secret Manager (example after this list).
  • Build observability stacks including dashboards, SLOs, alerts, and runbooks.
  • Support logging and monitoring for pipeline health and platform reliability.
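
Reading a pipeline credential from Secret Manager, as described above, might look like the following; the project and secret names are hypothetical.

    from google.cloud import secretmanager

    client = secretmanager.SecretManagerServiceClient()

    # "latest" resolves to the newest enabled version of the secret.
    name = "projects/my-project/secrets/warehouse-db-password/versions/latest"
    response = client.access_secret_version(request={"name": name})

    db_password = response.payload.data.decode("utf-8")
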
Preferred Skills (Nice to Have)
Security, Privacy & Compliance
  • Implement fine-grained access controls for BigQuery and GCS.
  • Experience with VPC Service Controls, perimeter security, and data exfiltration prevention.
  • Understanding of PII protection, data masking, tokenization, and audit/compliance practices (a toy tokenization sketch follows this list).
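
A toy sketch of deterministic tokenization for PII columns. Key handling is deliberately simplified: a production system would keep the key in KMS or Secret Manager and might use Cloud DLP rather than hand-rolled hashing.

    import hashlib
    import hmac

    # Hypothetical key; in production this lives in KMS / Secret Manager.
    TOKEN_KEY = b"replace-with-managed-key"

    def tokenize(value: str) -> str:
        # Deterministic, keyed hash: the same input always yields the same
        # token (so joins still work), but the raw value is not recoverable
        # without the key.
        return hmac.new(TOKEN_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

    masked_email = tokenize("jane.doe@example.com")
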
Required Qualifications
  • 5 years of software engineering or data engineering experience, or equivalent experience gained through training, education, consulting, or military service.