
Senior Data Engineer - Data Platform and Analytics

Remote / Online - Candidates ideally in Concord, Contra Costa County, California, 94527, USA
Listing for: Worldly
Remote/Work from Home position
Listed on 2026-02-12
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst, Data Science Manager
Salary/Wage Range or Industry Benchmark: USD 135,000 - 165,000 per year
Job Description

Location: Remote - US

About Worldly

Worldly is the world’s most comprehensive impact intelligence platform — delivering real data to businesses on impacts within their supply chain. Worldly is trusted by 40,000 global brands, retailers, and manufacturers to provide the single source of ESG intelligence they need to accelerate business and industry transformation.

Through strategic and meaningful customer relationships, Worldly provides key insights into supplier performance, product impact, trends analysis, and compliance. When a company wants to change how business is done, we enable that systemic shift.

Backed by a dedicated global team of individuals aligned by values, Worldly proudly operates as a public benefit corporation with backing from mission‑aligned investors. Want to learn more? Read our story.

About the Opportunity

Worldly is hiring a hands‑on data engineer with a passion for sustainability to join our dynamic team. You will take on a primary role in building, operating, maintaining, and evolving the systems that support our internal analytics and power our customer‑facing analytics platforms.

In this role:

  • You will collaborate with stakeholders across the organization to design and implement scalable, cloud‑based data solutions, integrating generative AI to drive innovation.

  • You will work closely with cross‑functional stakeholders (finance, product, marketing, customer support, tech, data science) to enable trusted data products for internal decision making and external‑facing tools.

  • You will have a leading role in the development of a data lake resource to complement our existing data warehouse, enabling greater flexibility in analytics and reporting.

  • You will work with AWS services, automation tools, machine learning, and generative AI to enhance efficiency, stability, security, and performance.

This role is expected to drive outcomes in day‑to‑day execution and operational stability, while partnering with senior engineering leadership on longer‑range architecture direction.

What You'll Do
SQL + Postgres Data Warehouse
  • Operate and evolve our Postgres data warehouse: schema design, performance tuning, indexing, access controls, and so on.

  • Build analytics‑ready datasets supporting sustainability measurement, supply‑chain insights, and business metrics.
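
For a concrete (purely illustrative) sense of this work, the sketch below builds an analytics-ready rollup and a supporting index in Postgres via psycopg2. The schema, table, and column names are placeholders, not Worldly's actual data model:

# Illustrative sketch: hypothetical schema, not Worldly's actual data model.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS analytics.facility_emissions_monthly AS
SELECT facility_id,
       date_trunc('month', measured_at) AS month,
       sum(co2e_kg)                     AS co2e_kg
FROM   raw.facility_measurements
GROUP  BY 1, 2;

-- Support the most common dashboard access path.
CREATE INDEX IF NOT EXISTS facility_emissions_monthly_facility_month_idx
    ON analytics.facility_emissions_monthly (facility_id, month);
"""

def build_rollup(dsn: str) -> None:
    """Create an analytics-ready rollup table and its supporting index."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(DDL)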

Semantic Layer Deployment
  • Deploy and maintain multiple instances of Cube.dev semantic layers with standardized configuration, CI/CD workflows, and governance practices (including documentation of processes, configurations, and troubleshooting).

  • Establish clear and consistent metric definitions and versioning across dashboards and analytics surfaces.
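
As a hedged example of "one governed metric definition, many consumers", the sketch below requests a metric from a Cube.dev deployment through its REST /load endpoint; the cube and member names are invented for illustration:

# Illustrative sketch: cube and member names are hypothetical.
import os
import requests

CUBE_API = os.environ["CUBE_API_URL"]      # e.g. https://cube.example.com/cubejs-api/v1
CUBE_TOKEN = os.environ["CUBE_API_TOKEN"]  # token issued for the semantic layer

def monthly_assessments_by_region() -> list:
    """Fetch a governed metric through the semantic layer rather than ad-hoc SQL."""
    query = {
        "measures": ["Assessments.completedCount"],
        "dimensions": ["Facilities.region"],
        "timeDimensions": [
            {"dimension": "Assessments.completedAt", "granularity": "month"}
        ],
    }
    resp = requests.post(
        f"{CUBE_API}/load",
        headers={"Authorization": CUBE_TOKEN},
        json={"query": query},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]

Routing dashboards and internal tools through the semantic layer like this is what keeps metric definitions consistent and versionable across surfaces.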

GenAI/NLP Enablement
  • Support integration and deployment of genAI‑enabled workflows, especially NLP‑based use cases (classification, extraction, normalization, embeddings/similarity).
    • Ensure that our data infrastructure is “AI‑ready”.
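
A minimal sketch of the normalization use case mentioned above, assuming an embedding function supplied elsewhere (the embed() callable and the canonical vocabulary are placeholders):

# Illustrative sketch: embed() stands in for whichever embedding model the team uses.
from typing import Callable, Sequence, Tuple
import numpy as np

def normalize_term(
    raw: str,
    canonical: Sequence[str],
    embed: Callable[[str], np.ndarray],
) -> Tuple[str, float]:
    """Map a free-text value to its nearest canonical term by cosine similarity."""
    target = embed(raw)
    best_term, best_score = "", -1.0
    for term in canonical:
        vec = embed(term)
        score = float(
            np.dot(target, vec) / (np.linalg.norm(target) * np.linalg.norm(vec))
        )
        if score > best_score:
            best_term, best_score = term, score
    return best_term, best_score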

Graph Data Enablement
  • In collaboration with data scientists, research and develop practical transition plans for evolving selected relational/warehouse data structures into a graph‑based knowledge base built on Neo4j, including candidate use cases, data modeling approach, migration sequencing, and operational considerations (performance, governance, lineage, and security).
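
By way of illustration, one common migration step is projecting selected relational rows into graph nodes and relationships with the Neo4j Python driver; the labels, relationship type, and row fields below are hypothetical:

# Illustrative sketch: labels, relationship type, and row fields are hypothetical.
from neo4j import GraphDatabase

CYPHER = """
MERGE (s:Supplier {id: $supplier_id})
MERGE (f:Facility {id: $facility_id})
MERGE (s)-[:OPERATES]->(f)
"""

def load_edges(uri: str, auth: tuple, rows: list) -> None:
    """Project relational supplier/facility rows into graph nodes and relationships."""
    with GraphDatabase.driver(uri, auth=auth) as driver:
        with driver.session() as session:
            for row in rows:
                session.run(
                    CYPHER,
                    supplier_id=row["supplier_id"],
                    facility_id=row["facility_id"],
                )

Using MERGE keeps the load idempotent, so the same relational extract can be replayed during migration without creating duplicate nodes or edges.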

DBT Pipelines and Automation
  • Maintain and stabilize existing DBT pipelines that underpin reporting and analytics, including automation for incremental processing/scheduling, data quality monitoring, and performance tuning.

  • Lead operational support and modernization planning: rapid triage and root‑cause resolution for pipeline issues, and evaluation/prototyping of next‑generation transformation approaches with clear, low‑risk transition plans in partnership with analytics and engineering stakeholders.
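
As a rough sketch of the automation described above, a scheduler task might wrap the dbt CLI so that incremental models and their tests run together and failures surface immediately for triage (the selector tag is a placeholder):

# Illustrative sketch: the selector tag is a placeholder.
import subprocess
import sys

def run_incremental_models() -> None:
    """Run incremental dbt models and their tests; surface failures to the scheduler."""
    result = subprocess.run(
        ["dbt", "build", "--select", "tag:incremental_daily", "--fail-fast"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Emit logs so a failed run can be triaged quickly.
        print(result.stdout)
        print(result.stderr, file=sys.stderr)
        raise RuntimeError("dbt build failed; see output above")

if __name__ == "__main__":
    run_incremental_models()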

AWS Data Pipelines
  • Build ingestion and ETL processes using S3, Glue, Lambda, and AppFlow.

  • Integrate data from third‑party systems and APIs (e.g., Zendesk, HubSpot, NetSuite, other platform data) with strong auditability and operational resilience.
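
For illustration, a lightweight ingestion path of this shape can be a Lambda handler that pulls records from a vendor API and lands them as timestamped raw objects in S3 via boto3; the endpoint, bucket, and environment variable names below are assumptions, not the actual integration:

# Illustrative sketch: bucket, endpoint, and secret names are assumptions.
import datetime as dt
import json
import os

import boto3
import requests

S3 = boto3.client("s3")
BUCKET = os.environ["RAW_BUCKET"]

def handler(event, context):
    """Lambda entry point: land one extract of vendor records as raw JSON in S3."""
    resp = requests.get(
        "https://api.example-vendor.com/v1/tickets",
        headers={"Authorization": f"Bearer {os.environ['VENDOR_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    records = resp.json()

    # A timestamped key keeps every extract, giving an auditable raw history.
    key = f"vendor/tickets/{dt.datetime.utcnow():%Y/%m/%d/%H%M%S}.json"
    S3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(records).encode("utf-8"))
    return {"records": len(records), "s3_key": key}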

We'd Like to See
  • 5+ years of professional experience in data engineering, analytics engineering,…
