
Software Engineer, Delivery

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: Troveo AI
Full Time position
Listed on 2026-02-28
Job specializations:
  • Software Development
    Data Engineer, AI Engineer
Salary/Wage Range: 120,000 – 160,000 USD yearly
Job Description & How to Apply Below

About Troveo

Troveo is building the next-generation data platform to train AI video models. We offer the world’s largest library of AI video training data—featuring millions of hours of licensed video content. Our end-to-end data pipeline connects creators, rights holders, and AI research labs, enabling scalable, compliant, and innovative uses of video for AI applications and model development.

We are an early-stage, high-growth venture backed by forward-thinking investors, and we’re seeking a deeply technical engineer to help build and optimize the backbone of our content delivery systems.

Role Overview

As a Software Engineer, Delivery, you’ll own the reliability, performance, and scalability of Troveo’s video content delivery infrastructure. This role is highly hands-on, blending systems engineering with data-centric development to ensure seamless transfer and processing of petabyte-scale video data.

You’ll work across data transport, distributed processing, and client integration layers, building efficient, fault-tolerant systems that power Troveo’s end-to-end AI data pipeline. Ideal candidates have a strong command of algorithms, concurrency, and network programming, paired with a pragmatic mindset for maintaining production-grade reliability.

Key Responsibilities

Core Delivery Engineering

  • Design, build, and maintain robust delivery pipelines that handle large-scale video ingestion, transformation, and distribution across distributed systems.

  • Optimize throughput, latency, and fault-tolerance across Troveo’s global data delivery layer.

  • Implement monitoring, redundancy, and recovery mechanisms to maintain system reliability at scale.

  • Collaborate with platform and ML teams to ensure smooth data handoffs into analytics, training, and indexing workflows.

Systems Design & Optimization

  • Apply strong fundamentals in algorithms, data structures, and concurrency to optimize data movement and task scheduling.

  • Develop and tune software for high-performance, parallel data processing and low-latency streaming workloads.

  • Implement and optimize both OLAP and OLTP integrations—bridging analytics warehouses and transactional databases for real-time delivery insights.

  • Leverage tools like Python, Go, or Node.js to build efficient services and automation frameworks.

Network & Distributed Systems

  • Build and maintain network-aware systems that support high-throughput video delivery using TCP/UDP, socket programming, and custom streaming protocols.

  • Profile, benchmark, and optimize data transmission across multi-region infrastructure.

  • Contribute to distributed coordination mechanisms to ensure system consistency and efficient data replication.

Reliability & Maintenance

  • Own production operations for delivery services—implement alerting, observability, and incident response workflows.

  • Partner with infrastructure engineers to scale compute and storage resources dynamically.

  • Drive continuous improvement in uptime, throughput, and cost efficiency.

Qualifications & Experience
  • 4–6 years of experience in software engineering, with focus areas in distributed systems, networking, or data infrastructure.

  • Deep understanding of algorithms, data structures, and concurrency control.

  • Proven experience building systems that interact with both OLAP (e.g., Snowflake, BigQuery, Redshift) and OLTP (e.g., Postgres, MySQL, DynamoDB) layers.

  • Strong proficiency in Python, Go, or Node.js for systems-level development.

  • Familiarity with network programming principles, including TCP/UDP protocols, sockets, and performance optimization for high-throughput data streams.

  • Experience operating within distributed, data-heavy production environments.

  • Clear, pragmatic communication skills; capable of collaborating closely with data, ML, and platform teams.

Nice to Have
  • Experience designing and implementing microservices architectures.

  • Familiarity with vector databases, Elasticsearch, or similar search/indexing technologies.

  • Exposure to modern streaming frameworks or distributed task queues (e.g., Kafka, Celery, Airflow).

  • Knowledge of cloud infrastructure operations (AWS preferred).

Location & Compensation

Location: Strong preference for candidates based in the San Francisco Bay Area.

Compensation: $120,000 – $160,000 base salary + meaningful equity participation.

Why Join Troveo?
  • Work at the cutting edge of AI, video, and distributed data infrastructure.

  • Build the systems that deliver and power the world’s largest AI video datasets.

  • Collaborate with a world-class team of engineers, researchers, and industry experts.

  • High autonomy, high impact—your work will directly shape Troveo’s core delivery platform.

  • Competitive compensation with significant equity upside.
