Data Engineer III
Irvine, CA (Hybrid)
Listed on 2026-02-16
IT/Tech
Data Engineer, Big Data
Taco Bell was born and raised in California and has been around since 1962. We went from selling everyone’s favorite Crunchy Tacos on the West Coast to a global brand with 8,500+ restaurants and 350 franchise organizations that serve 42+ million fans each week around the globe. We’re not only the largest Mexican-inspired quick service restaurant (QSR) brand in the world, we’re also part of the biggest restaurant group in the world: Yum! Brands.
Much of our fan love and authentic connection with our communities is rooted in being rebels with a cause. From ensuring we use high quality, sustainable ingredients to elevating restaurant technology in ways that haven’t been done before, we will continue to be inclusive, bold, challenge the status quo, and push industry boundaries.
We’re a company that celebrates and advocates for difference, encourages bold self-expression, strives for a better future, and brings the fun while we fuel our culture with real people who bring unique experiences. We inspire and enable our teams and the world to Live Más.
At Taco Bell, we’re Cultural Rebels. Want to join in on the passion-fueled fun? Learn more about the career below.
About the Job
Taco Bell is seeking a savvy Data Engineer to join our growing Data and Analytics team. We are looking for a self-driven Data Engineer who is proficient with SQL and ETL pipelines, familiar with cloud technology (preferably AWS), and experienced with scripting. You will work with cross-functional partners and third-party vendors to enrich our customer data assets by acquiring, organizing, and aggregating customer data from various sources to construct a full and accurate 360-degree view of our customer for use in direct/email marketing, targeted media campaigns, and analytics.
You will build data pipelines to source, analyze and validate data from internal and external customer data sources. This is a great opportunity to work on state-of-the-art data products in a friendly and fun environment.
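The posting is prose-only, but as a rough illustration of the kind of pipeline described above, here is a minimal PySpark sketch that sources raw order events, validates them, and aggregates them toward a 360-degree customer view. Every path, column name, and business rule in it is a hypothetical placeholder for illustration, not Taco Bell’s actual schema or infrastructure.

```python
# Minimal sketch: source -> validate -> aggregate -> publish.
# All S3 paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer_360_sketch").getOrCreate()

# Source: raw order events landed in S3 (hypothetical bucket/prefix).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Validate: drop records missing a customer key or with a non-positive total.
valid_orders = orders.filter(
    F.col("customer_id").isNotNull() & (F.col("order_total") > 0)
)

# Aggregate: roll orders up to one row per customer.
customer_360 = valid_orders.groupBy("customer_id").agg(
    F.count("*").alias("order_count"),
    F.sum("order_total").alias("lifetime_spend"),
    F.max("order_ts").alias("last_order_ts"),
)

# Publish: write the curated view back to S3 for downstream marketing and
# analytics consumers (e.g., loaded into Redshift via COPY from S3).
customer_360.write.mode("overwrite").parquet(
    "s3://example-bucket/curated/customer_360/"
)
```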
- Bachelor’s degree in analytics, statistics, engineering, math, economics, computer science, information technology or related discipline
- 2+ years professional experience in the big data space
- 2-5 years of experience designing and delivering large-scale, 24/7, mission-critical data pipelines and features using modern big data architectures
- 2+ years of hands-on experience and strong coding skills with Python/PySpark and SQL
- 3+ years of hands-on experience with ETL tools such as Informatica and AWS Glue
- 3+ years of experience working with Redshift or other relevant databases
- Expert knowledge of complex SQL and ETL development, with experience processing extremely large datasets
- Demonstrated ability to analyze large data sets to identify gaps and inconsistencies, provide data insights, and advance effective product solutions
- Experience integrating data using streaming technologies such as Kinesis Firehose and Kafka
- Experience with the AWS ecosystem, especially Redshift, Athena, DynamoDB, Airflow, and S3
- Experience integrating data from multiple data sources and file types such as JSON, Parquet, and Avro (see the sketch after this list)
- Experience supporting and working with…
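As a concrete illustration of the multi-format integration bullet above, here is a minimal PySpark sketch that unifies JSON, Parquet, and Avro feeds on a shared customer key and keeps the most recent record per customer. The paths, column names, and package version are assumptions made for the sketch, and the Avro reader assumes the spark-avro package is available on the cluster.

```python
# Minimal sketch: ingest three vendor feeds in different file formats,
# align them to a shared set of columns, and deduplicate per customer.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, Window, functions as F

spark = (
    SparkSession.builder
    .appName("multi_format_ingest_sketch")
    # Assumption: pull in Avro support if it is not already on the cluster.
    .config("spark.jars.packages", "org.apache.spark:spark-avro_2.12:3.5.0")
    .getOrCreate()
)

# Each vendor feed arrives in a different format (hypothetical paths).
json_feed = spark.read.json("s3://example-bucket/vendor_a/customers/")
parquet_feed = spark.read.parquet("s3://example-bucket/vendor_b/customers/")
avro_feed = spark.read.format("avro").load("s3://example-bucket/vendor_c/customers/")

# Align each feed to a shared set of columns before combining.
shared_cols = ["customer_id", "email", "updated_at"]
unified = (
    json_feed.select(shared_cols)
    .unionByName(parquet_feed.select(shared_cols))
    .unionByName(avro_feed.select(shared_cols))
)

# Deduplicate: keep only the most recent record per customer, using a
# window function so the "latest wins" rule is deterministic.
w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
latest = (
    unified.withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
)
```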