Data Engineer
Listed on 2026-02-18
Software Development
Founded by fans, Crunchyroll delivers the art and culture of anime to a passionate community. We super-serve over 100 million anime and manga fans across 200+ countries and territories and help them connect with the stories and characters they crave. Whether that experience is online or in‑person, streaming video, theatrical, games, merchandise, events and more, it’s powered by the anime content we all love.
Join our team, and help us shape the future of anime!
About the role
We are hiring a Staff Data Engineer to play a crucial role in our mission to establish a world‑class Data Engineering team within the Center for Data and Insights (CDI). You will be a key contributor, advancing our data engineering capabilities in the AWS and GCP ecosystems.
Your responsibilities include collaborating with partners, guiding and mentoring fellow data engineers, and working hands‑on in various domains such as data architecture, data lake infrastructure, data and ML job orchestration. Your contributions will ensure the consistency and reliability of data and insights, aligning with our objective of enabling well‑informed decision‑making.
You will demonstrate an empathetic and service‑oriented approach, fostering a thriving data and insights culture while enhancing and safeguarding our data infrastructure. You will have a unique opportunity to build and strengthen our data engineering platforms at a global level. If you are an experienced professional with a passion for impactful data engineering initiatives and a commitment to driving transformative changes, we encourage you to explore this role.
- Be a subject‑matter expert on critical datasets: field stakeholder questions, explain lineage/assumptions, and help partners interpret data accurately
- Own projects end‑to‑end: drive scoping, design, implementation, launch, and iteration of data products from raw sources to analytics‑ready datasets (requirements → modeling → pipelines → metrics/reporting enablement)
- Architect and build scalable pipelines on Databricks using Spark and SQL (batch and/or streaming as needed), focusing on correctness, performance, and cost
- Design strong data models (lakehouse/warehouse‑style) that serve analytics and self‑service use cases; define entities, grains, dimensions, metrics, and contracts
- Integrate diverse data sources (internal systems + vendor platforms), manage schema evolution, and produce clean, well‑documented curated datasets for the Analytics team
- Establish data quality and reliability standards: testing, reconciliation, anomaly detection, SLAs/SLOs, monitoring/alerting, and incident response; continuously improve time‑to‑detect and time‑to‑recover so high‑quality data is delivered reliably at scale
- Performance‑tune Spark and SQL: optimize joins, partitioning, file layout, clustering/z‑ordering, caching strategy, and job configuration; benchmark and remove bottlenecks
- Partner with stakeholders (Product, Engineering, Growth, Finance, Analytics) to translate ambiguous questions into concrete data deliverables; communicate trade‑offs and drive alignment
- Raise the engineering bar: set patterns for pipeline templates, CI/CD, code reviews, and operational playbooks; mentor other engineers through technical leadership and example (even without direct reports)
In the role of Staff Data Engineer, you will report to the Senior Director, Data Engineering. We are considering applicants for the location of Los Angeles or San Francisco.
About You
We get excited about candidates like you because...
- You have 12+ years of hands‑on experience in data engineering and/or software development
- You are highly skilled in Python and SQL, with deep hands‑on experience in Spark
- You are comfortable using BI tools like Tableau, Looker, Preset
- You are proficient in utilizing event data collection tools such as Snowplow, Segment, Google Tag Manager, Tealium, mParticle, and more
- You have comprehensive expertise across the entire lifecycle of implementing compute and orchestration tools like Databricks, Airflow, Talend, and others
- You are experienced in working with streaming OLAP engines like Druid, ClickHouse, and similar technologies
- You…