
Data Engineer

Job in Los Angeles, Los Angeles County, California, 90079, USA
Listing for: Scribd, Inc.
Full Time position
Listed on 2026-02-28
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Salary/Wage Range or Industry Benchmark: 80,000 - 100,000 USD yearly
Job Description & How to Apply Below
Position: Staff Data Engineer

Join to apply for the Staff Data Engineer role at Scribd, Inc.

About The Company:

At Scribd (pronounced “scribbed”), our mission is to spark human curiosity. Join our team as we create a world of stories and knowledge, democratize the exchange of ideas and information, and empower collective expertise through our four products:
Everand, Scribd, Slideshare, and Fable.

We support a culture where our employees can be real and bold; where we debate and commit as we embrace plot twists; and where every employee is empowered to take action as we prioritize the customer. Scribd Flex allows employees to choose a daily work-style in partnership with their manager, with occasional in-person attendance required for all employees, regardless of location.

We hire for GRIT:
Goals, Results, Innovation, and Team. We're looking for someone who sets and achieves goals, delivers results, contributes innovative ideas, and positively influences the broader team through collaboration and attitude.

Team and role context:
Scribd’s Data Platform builds data pipelines, storage, and developer tooling that power analytics, experimentation, ML, and product features across Scribd, Everand, and Slideshare. We are modernizing our data architecture for fully governed, properly modeled data that every team can trust and build upon. You’ll join a data engineering team tackling complex challenges across three brands serving over 200 million monthly visitors and 2 million paying subscribers.

What You’ll Do:
As a Staff Data Engineer, you’ll be a hands-on technical expert and strategic leader. You’ll drive the design of core data models and pipelines in our Databricks/Delta Lake lakehouse, setting standards for quality, reliability, and scalability across the platform. You’ll own end-to-end solutions—from architecture and implementation to operations and optimization—while guiding the long-term direction of Scribd’s data ecosystem. You’ll collaborate across teams to translate complex business problems into robust data solutions and mentor engineers to grow and deliver at a higher level.

You’ll help evolve toward a fully governed lakehouse with fine-grained access controls and consistent lineage.

You Will

  • Design and implement core data models and pipelines that power analytics, ML, and product experiences.
  • Implement modern data lake orchestration patterns, including medallion architectures.
  • Architect and evolve a scalable, cost-efficient, and reliable lakehouse foundation using Databricks, Delta Lake, and Airflow.
  • Define best practices and technical standards that improve data quality, governance, and performance across teams.
  • Mentor engineers and foster a culture of ownership, operational excellence, and continuous learning.
  • Shape the long-term technical vision and roadmap for Scribd’s data platform.
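The medallion architecture mentioned above can be sketched in miniature. The toy below uses plain Python structures in place of Delta tables (in the role described, these layers would be Delta Lake tables processed by Spark jobs orchestrated with Airflow); all record shapes and field names are invented for the example, not taken from Scribd's actual platform:

```python
# Illustrative medallion (bronze/silver/gold) flow. Hypothetical data only.
import json

# Bronze: raw events landed as-is, e.g. JSON strings from an ingest stream.
bronze = [
    '{"user_id": 1, "event": "read", "minutes": 12}',
    '{"user_id": 2, "event": "read", "minutes": 30}',
    '{"user_id": 1, "event": "read", "minutes": 8}',
    'not-valid-json',  # malformed records are expected at the bronze layer
]

# Silver: parsed, validated, conformed records.
silver = []
for raw in bronze:
    try:
        rec = json.loads(raw)
    except json.JSONDecodeError:
        continue  # in practice, quarantine bad rows rather than drop silently
    if {"user_id", "event", "minutes"} <= rec.keys():
        silver.append(rec)

# Gold: an aggregate modeled for downstream analytics
# (total reading minutes per user).
gold = {}
for rec in silver:
    gold[rec["user_id"]] = gold.get(rec["user_id"], 0) + rec["minutes"]

print(gold)  # {1: 20, 2: 30}
```

The point of the layering is that each stage has one job: bronze preserves raw input, silver enforces schema and quality, and gold serves a modeled, consumer-ready view.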

Required Skills

  • 8+ years of experience in data engineering, with a strong background in data architecture, data modeling, and distributed data systems.
  • Deep expertise in Databricks, Delta Lake, Spark, and modern lakehouse technologies.
  • Advanced proficiency in SQL and Python or Scala, including performance optimization and large-scale ETL design.
  • Proven experience designing data models and schemas that serve multiple downstream use cases (analytics, ML, APIs).
  • Experience implementing modern data orchestration patterns for big data use-cases, including batch and streaming workloads.
  • Demonstrated ability to lead technical initiatives, set standards, and influence decisions across teams.
  • Comfort owning systems end-to-end, including monitoring, reliability, and cost management.
  • Excellent communication skills with the ability to translate technical trade-offs to engineers and non-technical stakeholders.

Desired Skills

  • Experience with subscription, payments, or large-scale consumer data domains.
  • Familiarity with AWS data services (S3, Glue, EMR, Kinesis) and cloud cost optimization.
  • Knowledge of streaming architectures (Kafka, Kinesis, or similar).
  • Experience implementing data quality, governance, and observability standards at scale.
  • Contributions to open-source projects or thought leadership in the data engineering community.
  • Experience…