
Software Dev Engineer II - Backend/Data

Job in Champaign, Champaign County, Illinois, 61825, USA
Listing for: Yahoo Holdings Inc.
Full Time position
Listed on 2026-02-16
Job specializations:
  • Software Development
    Data Engineer, Software Engineer
Salary/Wage Range or Industry Benchmark: 80,000 - 100,000 USD yearly
Job Description & How to Apply Below

Overview

It takes powerful technology to connect our brands and partners with an audience of hundreds of millions of people. Whether you're writing mobile app code, engineering the servers behind our massive ad tech stacks, or developing algorithms to help us process trillions of data points a day, what you do here will have a huge impact on our business—and the world.

Candidates must be located in the Eastern (EST) time zone (Reston, VA preferred).

Role

Software Development Engineer II - Backend/Data

The Software Development Engineer II - Backend/Data designs, builds, and maintains robust, scalable data pipelines and backend services. This role requires strong technical expertise in backend development, distributed systems, and cloud-based data platforms. You'll play a key role in ensuring the performance, reliability, and scalability of data solutions that power critical business operations and products.

Responsibilities
  • Design, develop, and maintain automated, cloud-native ETL/ELT pipelines using GCP services (e.g., Dataflow, Dataproc, Cloud Composer, Pub/Sub, BigQuery); a minimal sketch follows this list.
  • Ingest and process structured and unstructured data from diverse sources into BigQuery and other GCP data systems.
  • Transform, clean, and enrich datasets to support analytics, machine learning, and reporting use cases.
  • Implement data validation, monitoring, and alerting for pipeline reliability and data quality.
  • Collaborate with cross-functional teams using Git, Jira, and CI/CD pipelines.
  • Troubleshoot and resolve production data and backend issues, ensuring system reliability and scalability.
  • Document system design, data flows, and operational processes for transparency and maintainability.
  • Mentor junior engineers and contribute to engineering best practices for cloud-based data systems.
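To illustrate the kind of pipeline work described above, here is a minimal Cloud Composer (Airflow) DAG sketch that loads daily files from Cloud Storage into BigQuery and gates on a row-count check. All resource names (bucket, dataset, table, DAG id) are hypothetical placeholders, not details from this posting.

```python
# A minimal sketch of a daily GCS -> BigQuery load with a data-quality gate.
# Bucket, dataset, and table names below are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryCheckOperator,
)

with DAG(
    dag_id="daily_events_load",          # hypothetical pipeline name
    start_date=datetime(2026, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Load newline-delimited JSON for the run date into a date partition;
    # the "$" decorator plus WRITE_TRUNCATE replaces only that partition.
    load_events = GCSToBigQueryOperator(
        task_id="load_events",
        bucket="example-raw-events",                     # hypothetical
        source_objects=["events/{{ ds }}/*.json"],
        destination_project_dataset_table="analytics.events${{ ds_nodash }}",
        source_format="NEWLINE_DELIMITED_JSON",
        write_disposition="WRITE_TRUNCATE",
    )

    # Basic data-quality gate: fail the run if the partition is empty.
    validate_events = BigQueryCheckOperator(
        task_id="validate_events",
        sql="SELECT COUNT(*) > 0 FROM `analytics.events` "
            "WHERE _PARTITIONDATE = '{{ ds }}'",
        use_legacy_sql=False,
    )

    load_events >> validate_events
```

In practice, the validation step would be wired into the monitoring and alerting called out above, so that a failed check pages the on-call rather than silently skipping downstream consumers.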
Requirements
  • B.S. or M.S. in Computer Science, Engineering, or related field, or equivalent practical experience.
  • 3+ years of experience in backend or data engineering.
  • Proficiency in at least one backend language (Python, Java, or Go preferred).
  • Hands-on experience with GCP data services: BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, and Cloud Composer (Airflow); see the query example after this list.
  • Solid understanding of relational and distributed databases, data modeling, and performance optimization.
  • Familiarity with containerization and orchestration (Docker, Kubernetes/GKE).
  • Strong Unix/Linux and shell scripting skills.
  • Experience building and managing workflow orchestration (Airflow or Cloud Composer).
  • Knowledge of CI/CD, logging, and monitoring (Stackdriver, Cloud Logging, Prometheus, etc.).
  • Strong problem-solving, debugging, and communication skills with a focus on reliability and scalability.
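As a small illustration of the backend-language and BigQuery proficiency asked for above, the following sketch runs a parameterized query with the google-cloud-bigquery Python client. The dataset and table names are hypothetical.

```python
# A minimal sketch, assuming the google-cloud-bigquery client library;
# the analytics.events table is a hypothetical placeholder.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
    SELECT event_type, COUNT(*) AS event_count
    FROM `analytics.events`
    WHERE _PARTITIONDATE = @day
    GROUP BY event_type
    ORDER BY event_count DESC
"""
# Parameterized queries avoid string interpolation and SQL injection.
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("day", "DATE", "2026-02-16"),
    ]
)

for row in client.query(query, job_config=job_config).result():
    print(row.event_type, row.event_count)
```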
Nice to Have
  • Exposure to AI/ML workflows or tools such as Vertex AI.
  • Experience with streaming data processing (Dataflow, Kafka, or Pub/Sub); see the subscriber sketch after this list.
  • Understanding of concurrency, multithreading, and distributed systems design.
  • Familiarity with infrastructure-as-code (Terraform, Deployment Manager).
  • Experience with common development tools (e.g., Git, Maven, JIRA).
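For the streaming item above, a minimal google-cloud-pubsub subscriber sketch is shown below. The project and subscription IDs are hypothetical, and a production pipeline would typically run this kind of logic inside Dataflow rather than a bare client.

```python
# A minimal sketch of streaming consumption with the google-cloud-pubsub
# client; project and subscription IDs are hypothetical.
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(
    "example-project", "events-sub"      # hypothetical IDs
)

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    # A real pipeline would transform/enrich here and write downstream.
    print(f"Received: {message.data!r}")
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
try:
    streaming_pull.result(timeout=30)    # block while messages stream in
except TimeoutError:
    streaming_pull.cancel()
    streaming_pull.result()              # wait for shutdown to finish
```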
Additional Requirements
  • B.S. or M.S. in Computer Science (or equivalent experience).
  • 3+ years of experience in backend or data engineering.
  • Proficiency in at least one backend programming language (Java, Python, or JavaScript) and ETL frameworks/tools.
  • Hands-on experience with relational and distributed databases (e.g., Oracle, MySQL, Vertica, BigQuery, HBase).
  • Strong knowledge of Unix/Linux systems and shell scripting.
  • Experience with workflow/schedulers (e.g., Airflow, Oozie).
  • Solid understanding of database structures, principles, and performance optimization.
  • Proven expertise with the Hadoop ecosystem (Dataproc, HBase, Hive, Pig).
  • Cloud platform knowledge (AWS, GCP, or Azure).
  • Familiarity with version control tools (Git).
  • Strong problem-solving and debugging skills; ability to troubleshoot production issues.
  • Effective communication, collaboration, and customer focus.
