
Data Engineer

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: Crunchyroll
Full Time position
Listed on 2026-02-21
Job specializations:
  • IT/Tech
    Data Engineer
Salary/Wage Range: 80,000 – 100,000 USD per year
Job Description & How to Apply Below
Position: Staff Data Engineer

About Crunchyroll

Founded by fans, Crunchyroll delivers the art and culture of anime to a passionate community. We serve over 100 million anime and manga fans across 200+ countries and territories, helping them connect with the stories and characters they crave. Whether that experience is online or in‑person—streaming video, theatrical releases, games, merchandise, events and more—it’s powered by the anime content we all love.

About the Role

We are hiring a Staff Data Engineer to play a crucial role in establishing a world‑class Data Engineering team within the Center for Data and Insights (CDI). You will be a key contributor, advancing our data engineering capabilities in the AWS and GCP ecosystems. Your responsibilities include collaborating with partners, guiding and mentoring fellow data engineers, and working hands‑on in domains such as data architecture, data lake infrastructure, and data and ML job orchestration.

Your contributions will ensure the consistency and reliability of data and insights, aligning with our objective of enabling well‑informed decision‑making. You will demonstrate an empathetic and service‑oriented approach, fostering a thriving data and insights culture while enhancing and safeguarding our data infrastructure. You will have a unique opportunity to build and strengthen our data engineering platforms at a global level.

If you are an experienced professional with a passion for impactful data engineering initiatives and a commitment to driving transformative changes, we encourage you to explore this role.

  • Be a subject‑matter expert on critical datasets: field stakeholder questions, explain lineage/assumptions, and help partners interpret data accurately.
  • Own projects end‑to‑end: drive scoping, design, implementation, launch, and iteration of data products from raw sources to analytics‑ready datasets (requirements → modeling → pipelines → metrics/reporting enablement).
  • Architect and build scalable pipelines on Databricks using Spark + SQL (batch and/or streaming as needed), focusing on correctness, performance, and cost.
  • Design strong data models (lakehouse/warehouse‑style) that serve analytics and self‑service use cases; define entities, grains, dimensions, metrics, and contracts.
  • Integrate diverse data sources (internal systems + vendor platforms), manage schema evolution, and produce clean, well‑documented curated datasets for the Analytics team.
  • Establish data quality and reliability standards: testing, reconciliation, anomaly detection, SLAs/SLOs, monitoring/alerting, and incident response; continuously improve time‑to‑detect and time‑to‑recover, delivering high‑quality data reliably at scale.
  • Performance‑tune Spark + SQL: optimize joins, partitioning, file layout, clustering/z‑ordering, caching strategy, and job configuration; benchmark and remove bottlenecks.
  • Partner with stakeholders (Product, Engineering, Growth, Finance, Analytics) to translate ambiguous questions into concrete data deliverables; communicate tradeoffs and drive alignment.
  • Raise the engineering bar: set patterns for pipeline templates, CI/CD, code reviews, and operational playbooks; mentor other engineers via technical leadership and examples (even without direct reports).
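To make the data quality and reliability bullet concrete, here is a minimal, framework‑agnostic sketch of two checks of the kind described above: row‑count reconciliation between a raw source and its curated output, and a null‑rate threshold on a key column. The function and parameter names (`check_reconciliation`, `check_null_rate`, `tolerance`) are illustrative, not from any specific library or from Crunchyroll's stack.

```python
# Illustrative data-quality gates: reconciliation and null-rate checks.
# In practice these would run against Spark/Databricks tables; plain
# Python is used here to keep the sketch self-contained.

def check_reconciliation(source_count: int, curated_count: int,
                         tolerance: float = 0.01) -> bool:
    """Pass if the curated row count is within `tolerance` of the source count."""
    if source_count == 0:
        return curated_count == 0
    return abs(source_count - curated_count) / source_count <= tolerance

def check_null_rate(rows: list, column: str,
                    max_null_rate: float = 0.05) -> bool:
    """Pass if the share of NULLs in `column` stays under the threshold."""
    if not rows:
        return True
    nulls = sum(1 for r in rows if r.get(column) is None)
    return nulls / len(rows) <= max_null_rate

# Usage: gate publication of a curated dataset on both checks.
rows = [{"user_id": 1}, {"user_id": 2}, {"user_id": None}]
ok_counts = check_reconciliation(1000, 995)   # 0.5% drift, within 1%
ok_nulls = check_null_rate(rows, "user_id")   # 1/3 null, over 5%
print(ok_counts, ok_nulls)
```

A pipeline stage would typically run such checks after each load and alert (or block downstream consumers) on failure, which is what improves time‑to‑detect.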

In the role of Staff Data Engineer you will report to the Senior Director, Data Engineering. We are considering applicants based in Los Angeles or San Francisco.

About You

We get excited about candidates like you because:

  • You have 12+ years of hands‑on experience in data engineering and/or software development.
  • You are highly skilled in Python, SQL, and Spark.
  • You are comfortable using BI tools like Tableau, Looker, and Preset.
  • You are proficient in utilizing event data collection tools such as Snowplow, Segment, Google Tag Manager, Tealium, mParticle, and more.
  • You have comprehensive expertise across the entire lifecycle of implementing compute and orchestration tools like Databricks, Airflow, Talend, and others.
  • You are experienced in working with streaming OLAP engines like Druid, ClickHouse, and similar technologies.
  • You have experience using AWS services…