
Software Engineer - Fleet Management

Job in San Mateo, San Mateo County, California, 94409, USA
Listing for: Verkada
Full Time position
Listed on 2025-12-02
Job specializations:
  • IT/Tech
    Data Science Manager, Data Engineer, Data Analyst, Machine Learning/ML Engineer
Job Description

Who We Are

Verkada is transforming how organizations protect their people and places with an integrated, AI-powered platform. A leader in cloud physical security, Verkada helps organizations strengthen safety and efficiency through one connected software platform that includes solutions for video security, access control, air quality sensors, alarms, intercoms, and visitor management. Over 30,000 organizations worldwide, including more than 100 companies in the Fortune 500, trust Verkada as their physical security layer for easier management, intelligent control, and scalable deployments.

Founded in 2016, Verkada has expanded rapidly to 15 offices and 2,200+ full‑time employees.

About the Role

We’re looking for a backend software engineer with strong data analysis skills to join our camera fleet management team. You’ll build the data infrastructure and analytical tools that power our safe release operations across a million+ camera devices. This role combines traditional backend engineering with data pipeline development, log analysis, and metrics‑driven insights.

Camera firmware releases include critical updates like new AI models, and understanding their impact requires sophisticated data analysis. You’ll develop the pipelines, dashboards, and analytical tools that help us detect anomalies, measure release health, and ensure every deployment is successful. Your work will directly support data‑driven decision making for releases that impact our customers and our reputation.

Every release decision we make affects hundreds of thousands of cameras in the field. The data pipelines you build and the insights you surface directly determine whether we can release confidently or need to halt a problematic rollout. You’ll be the engineering force behind our data‑driven release culture.

You’ll work closely with the Systems Software Engineer leading the team to build robust data infrastructure, from ingestion pipelines that process high‑volume logs, to SQL queries that surface critical insights, to real‑time monitoring dashboards.

What You’ll Do
  • Build data pipelines: Design and implement data workflows using technologies like Kafka, Firehose, or Spark to process release metrics and device telemetry at scale
  • Develop analytical tools: Create Python‑based analysis tools using pandas and SQL to identify release issues, detect anomalies, and measure fleet health (see the sketch after this list)
  • High‑volume log analysis: Build systems to ingest, process, and analyze logs from millions of devices using technologies like OpenSearch, text clustering, and AI‑based techniques
  • Create monitoring infrastructure: Develop Grafana dashboards and alerts that surface critical metrics and anomalies in real time
  • Support release operations: Provide data‑driven insights during releases, helping the team make informed decisions about rollout speed and risk
  • Design test infrastructure: Build test bench setups and CI pipelines that validate releases before they reach production
  • Query and optimize: Write efficient SQL queries against time‑series databases to extract insights from large‑scale device data
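
For illustration, here is a minimal sketch of the kind of fleet-health analysis the bullets above describe, assuming a hypothetical hourly crash-count export. The file name, column names, and thresholds (crash_counts.csv, fw_version, crashes, the 3-sigma cutoff) are illustrative, not Verkada's actual schema or tooling:

```python
# Minimal sketch (hypothetical schema): flag release anomalies with pandas
# by comparing each firmware version's hourly crash count against its own
# trailing 24-hour baseline.
import pandas as pd

# Hourly crash counts per firmware version, e.g. exported from a
# time-series store.
df = pd.read_csv("crash_counts.csv", parse_dates=["hour"])
df = df.sort_values(["fw_version", "hour"])

grouped = df.groupby("fw_version")["crashes"]
baseline = grouped.transform(lambda s: s.rolling(24, min_periods=24).mean())
spread = grouped.transform(lambda s: s.rolling(24, min_periods=24).std())

# Flag hours where crashes exceed the rolling baseline by 3 standard
# deviations; NaNs from the warm-up window simply compare as False.
df["anomaly"] = (df["crashes"] - baseline) > 3.0 * spread

# In practice these rows would feed a Grafana alert or a decision to
# slow or halt a rollout.
print(df.loc[df["anomaly"], ["hour", "fw_version", "crashes"]])
```

The same rolling-baseline check could be pushed down into a SQL window query against the time-series store; the pandas version is the easiest to iterate on during an active release.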
Must‑Haves
  • BS/MS in Computer Science (or similar degree)
  • 3+ years of industry experience in distributed software engineering
  • Strong Python skills: Proficiency in Python for data analysis, particularly with libraries like pandas
  • SQL expertise: Experience writing complex SQL queries for time‑series analysis
  • Backend engineering fundamentals: Solid software engineering skills – this is a backend role that involves data, not a pure data engineering position
  • Data pipeline experience: Familiarity with pipeline technologies like Kafka, Firehose, or Spark
  • Log analysis at scale: Experience with high‑volume log analysis technologies such as OpenSearch, text clustering, or AI‑based log analysis techniques
  • Time series databases: Experience working with time series databases and temporal data
  • Metrics & observability: Hands‑on experience with Grafana or similar monitoring tools
  • Anomaly detection: Understanding of anomaly detection techniques and their practical application
  • Coding‑based analysis: Preference for solving problems through code rather than manual analysis
  • Must be willing and able to work onsite…