
Engineer II; Big Data Engineering

Job in Broomfield, Boulder County, Colorado, 80020, USA
Listing for: Magnite
Full Time position
Listed on 2025-12-31
Job specializations:
  • Software Development
    Data Engineer
Job Description
Position: Engineer II (Big Data Engineering)
Location: Broomfield

Engineer II (Big Data Engineering)

Locations: New York City, NY; Boston, MA; Los Angeles, CA; Broomfield, CO

Schedule: Hybrid (M/F remote, T/W/Th in-office)

At Magnite, we cultivate an environment of continuous growth and collaboration. Our work impacts what millions of people read, watch, and buy, and we’re looking for people to help us tackle that responsibility with creativity and focus. Magnite (NASDAQ: MGNI) is the world’s largest independent sell-side advertising platform. Publishers use our technology to monetize their content across all screens and formats including CTV / streaming, online video, display, and audio.

Our tech fuels billions of transactions per day!

Are you excited about high-performance Big Data implementation? If so, great! Magnite is growing, and we need software developers who are thorough and agile, capable of breaking down and solving problems, and driven to get things done. On the DV+ Data Engineering team, you will work on real-world problems in a big data stack where accuracy and speed are paramount, take end-to-end responsibility for your systems, and influence the direction of technology that impacts customers around the world.

About this team:

We own the data systems that process hundreds of billions of events per day for the DV+ platform. We are looking for a Data or Software Engineer with both conceptual and hands-on experience in big data development. This is a fully integrated environment that includes upstream data ingestion processes, proprietary and open-source DBMSs, and a large-scale data warehouse environment.

As a member of our data engineering team, you will be part of a service group responsible for the continued expansion of our data platform. Ideal candidates are excited about all aspects of big data development, including data transport, data processing, and data warehouse/ETL integration, and are quick learners and self-starters. This is a demanding role that requires hands-on experience with big data processing development on Linux.

You will be responsible for day-to-day operations as well as new development. We are seeking a candidate with solid software development life cycle skills and experience building data services with Java, Scala, and scripting languages such as Python. We are responsible for technological and operational excellence across our domain.

What you will be doing:

  • Design, develop, and support big data platform applications, including Hadoop, Kafka, ETL, and data warehouse integrations
  • Develop applications in Java and Spark alongside other big data technologies, using scripting languages (Python, shell, etc.) to support application execution
  • Design and implement the full lifecycle of data services, from data transport and processing through ETL to data delivery for reporting
  • Perform data analysis and troubleshooting to support day-to-day production operations
  • Proactively identify, troubleshoot, and resolve production data and performance issues

What we are looking for:

  • Proficiency with data engineering technologies: Hadoop, Spark, Kafka, Druid, Java, and scripting languages (UNIX shell, Perl, Python, etc.)
  • Familiarity with process, infrastructure, and application management tools such as JIRA, Jenkins, GitHub, etc.
  • Ability to follow standard development practices and understand concepts related to computer architecture, data structures, and programming
  • Experience with testing/debugging, data quality, and performance tuning of applications
  • Ability to communicate effectively to end users and work within a team
  • Bachelor’s degree in CS/EE or a related field

Nice To Have:

  • Experience with the ad-tech industry
  • Experience with Massively Parallel Processing architecture
  • Experience with MapR Hadoop
  • Experience with SQL related to data transformation, reporting & analysis

Perks and Benefits:

  • Comprehensive Healthcare Coverage from Day One
  • Generous Time Off
  • Holiday Breaks and Quarterly Wellness Days
  • Equity and Employee Stock Purchase Plan
  • Family-Focused Benefits and Parental Leave
  • 401k Retirement Savings Plan with Employer Match
  • Disability and Life Insurance
  • Cell Phone Subsidy
  • Fitness and Wellness…