
Data Engineer III - Irvine, CA (Hybrid)

Job in Irvine, Orange County, California, 92713, USA
Listing for: Taco Bell Corp.
Full-time position
Listed on 2026-02-16
Job specializations:
  • IT/Tech
    Data Engineer, Big Data
Salary/Wage Range or Industry Benchmark: USD 60,000 - 80,000 per year
Job Description
Position: Data Engineer III, Irvine, CA, United States (Hybrid). Posted on 01/23/2026.

Taco Bell was born and raised in California and has been around since 1962. We went from selling everyone’s favorite Crunchy Tacos on the West Coast to a global brand with 8,500+ restaurants and 350 franchise organizations that serve 42+ million fans each week around the globe. We’re not only the largest Mexican-inspired quick-service restaurant (QSR) brand in the world, we’re also part of the biggest restaurant group in the world: Yum! Brands.

Much of our fan love and authentic connection with our communities are rooted in being rebels with a cause. From ensuring we use high-quality, sustainable ingredients to elevating restaurant technology in ways that haven’t been done before, we will continue to be inclusive, bold, challenge the status quo, and push industry boundaries.

We’re a company that celebrates and advocates for different, has bold self-expression, strives for a better future, and brings the fun while we’re at it. We fuel our culture with real people who bring unique experiences. We inspire and enable our teams and the world to Live Más.

At Taco Bell, we’re Cultural Rebels. Want to join in on the passion-fueled fun? Learn more about the career below.

About the Job

Taco Bell is seeking a savvy Data Engineer to join our growing Data and Analytics team. We are looking for a self-driven Data Engineer who is proficient with SQL and ETL pipelines, familiar with cloud technology (preferably AWS), and experienced with scripting. You will work with cross-functional partners and third-party vendors to enrich our customer data assets by acquiring, organizing, and aggregating customer data from various sources to construct a full and accurate 360-degree view of our customers for use in direct/email marketing, targeted media campaigns, and analytics.

You will build data pipelines to source, analyze and validate data from internal and external customer data sources. This is a great opportunity to work on state-of-the-art data products in a friendly and fun environment.

Day to Day
  • Design and develop highly scalable and extensible data pipelines from internal and external sources using cloud technologies such as AWS, Airflow, Redshift, and EMR (see the Airflow sketch after this list).
  • Implement new source-of-truth datasets in partnership with analytics and business teams.
  • Collaborate with data product managers, data scientists, data analysts, and data engineers to document requirements and data specifications.
  • Develop, deploy, and maintain serverless data pipelines using EventBridge, Kinesis, AWS Lambda, S3, and Glue (a serverless sketch also follows this list).
  • Focus on performance tuning, optimization and scalability to ensure efficiency.
  • Build out a robust big data ingestion framework with automation, self-healing capabilities, and the ability to handle data drift.
  • Adopt automated and manual test strategies to ensure product quality.
  • Learn and understand how Taco Bell products work and help build end-to-end solutions.
  • Ensure high operational efficiency and quality of your solutions to meet SLAs.
  • Actively participate in code reviews and summarize complex data into usable, digestible datasets.
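
To give a concrete flavor of the batch side of this work, here is a minimal sketch of what a daily Airflow-to-Redshift pipeline along these lines might look like. It is written in Python against the apache-airflow-providers-amazon package; the DAG id, bucket, schema, table, and connection ids are hypothetical placeholders, not Taco Bell's actual setup.

    # Minimal sketch of a daily batch load: validate raw files in S3,
    # then COPY them into Redshift. All names are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator
    from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator

    def validate_source_files(**context):
        # Placeholder for row-count and schema checks on the raw drop zone.
        pass

    with DAG(
        dag_id="customer_360_daily",      # hypothetical name
        start_date=datetime(2026, 1, 1),
        schedule="@daily",                # Airflow 2.4+ keyword
        catchup=False,
    ) as dag:
        validate = PythonOperator(
            task_id="validate_source_files",
            python_callable=validate_source_files,
        )
        load = S3ToRedshiftOperator(
            task_id="load_customer_events",
            s3_bucket="example-raw-bucket",      # hypothetical bucket
            s3_key="customer_events/{{ ds }}/",  # partitioned by run date
            schema="analytics",
            table="customer_events",
            copy_options=["FORMAT AS PARQUET"],
            redshift_conn_id="redshift_default",
            aws_conn_id="aws_default",
        )
        validate >> load

In a real deployment the validation task would fail the run and alert on bad input, which is where the self-healing and SLA bullets above come into play.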
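
On the streaming side, a serverless consumer might look roughly like the following Lambda handler, which decodes records from a Kinesis trigger and lands the validated ones in S3 for Glue and Redshift to pick up downstream. The bucket name, key layout, and customer_id field are illustrative assumptions.

    # Rough sketch of a Kinesis-triggered Lambda: decode, lightly validate,
    # and stage records in S3. Names and fields are hypothetical.
    import base64
    import json

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-curated-bucket"  # hypothetical bucket

    def handler(event, context):
        accepted = []
        for record in event["Records"]:
            # Kinesis payloads arrive base64-encoded.
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            # Minimal validation: drop events with no customer id.
            if payload.get("customer_id"):
                accepted.append(payload)

        if accepted:
            s3.put_object(
                Bucket=BUCKET,
                Key=f"events/{context.aws_request_id}.jsonl",
                Body="\n".join(json.dumps(r) for r in accepted).encode("utf-8"),
            )
        return {"accepted": len(accepted)}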

Is This You?
  • Bachelor’s degree in analytics, statistics, engineering, math, economics, computer science, information technology, or a related discipline
  • 2+ years of professional experience in the big data space
  • 2-5 years of experience designing and delivering large-scale, 24/7, mission-critical data pipelines and features using modern big data architectures
  • 2+ years of hands-on coding experience with Python/PySpark/Spark and SQL
  • 3+ years of hands-on experience with ETL tools such as Informatica, AWS Glue, etc.
  • 3+ years of experience working with Redshift or other relevant databases
  • Expert knowledge of writing complex SQL and ETL development, with experience processing extremely large datasets
  • Demonstrated ability to analyze large datasets to identify gaps and inconsistencies, provide data insights, and advance effective product solutions
  • Experience integrating data using streaming technologies such as Kinesis Firehose and Kafka
  • Experience with the AWS ecosystem, especially Redshift, Athena, DynamoDB, Airflow, and S3
  • Experience integrating data from multiple data sources and file types such as JSON, Parquet, and Avro
  • Experience supporting and working with…