Data Platform Engineer
Location: Palo Alto, Santa Clara County, California, 94306, USA
Listed on: 2026-02-19
Company: BrightAI Inc.
Position type: Full Time
Job specializations:
- Software Development
- Data Engineer
Job Description
We are a high-growth company transforming how businesses operate by integrating AI, IoT, and cloud-native services into scalable, real-time platforms. As a Data Platform Engineer, you’ll play a critical role in building and maintaining the data infrastructure that powers our products, services, and insights.
You’ll join a multidisciplinary team focused on ingesting, processing, and managing massive streams of sensor and operational data across a wide array of devices—from drones and robots to industrial systems and smart environments.
Responsibilities:
- Design, build, and maintain scalable, reliable, and high-throughput data ingestion pipelines for structured and semi-structured data.
- Implement robust and secure data lake and SQL-based storage architectures optimized for performance and cost.
- Develop and maintain internal tools and frameworks for data ingestion using Python, Golang, and SQL.
- Collaborate cross-functionally with Cloud, Edge, Product, and AI teams to define data contracts, schemas, and retention policies.
- Use AWS cloud infrastructure (including Argo Workflows, S3, Lambda, Glue, Kinesis, Athena, and RDS) to support end-to-end data workflows.
- Employ Infrastructure-as-Code (IaC) practices using Terraform to manage data platform infrastructure.
- Monitor data pipelines for quality, latency, and failures using tools such as CloudWatch, Sumo Logic, or Datadog.
- Continuously optimize storage, partitioning, and query performance across large-scale datasets.
- Participate in architecture reviews and ensure the platform adheres to security, compliance, and best practice standards.
Requirements:
- 5+ years of professional experience in software engineering or data engineering.
- Strong programming skills in Python and Golang.
- Deep understanding of SQL and modern data lake architectures (e.g., using Parquet, Iceberg, or Delta Lake).
- Hands-on experience with AWS services including but not limited to: S3, Lambda, Glue, Kinesis, Athena, and RDS.
- Proficiency with Terraform for automating infrastructure deployment and management.
- Experience working with real-time or batch data ingestion at scale, and designing fault-tolerant ETL/ELT pipelines.
- Familiarity with event-driven architectures and messaging systems like Kafka or Kinesis.
- Strong debugging and optimization skills across cloud, network, and application layers.
- Excellent collaboration, communication, and documentation skills.
- Experience working with time-series or IoT sensor data at industrial scale.
- Familiarity with analytics tools and data warehouse integration (e.g., Redshift, Snowflake).
- Exposure to gRPC and protobuf-based data contracts.
- Experience supporting ML pipelines and feature stores.
- Working knowledge of Kubernetes concepts.
- Prior startup experience and/or comfort working in fast-paced, iterative environments.