Data Engineer; Forensic & Streaming
Job in Bellevue, King County, Washington, 98009, USA
Listed on 2026-02-12
Listing for: Oslitanditech
Full Time position
Job specializations:
- Software Development
- Data Engineer
Job Description
Primary Responsibilities (Operational Duties)
- ETL Pipeline Design: Design, build, and maintain fault-tolerant ETL pipelines to ingest, cleanse, transform, and normalize massive, multi-terabyte datasets of historical mission data (logs, transcripts, sensor records) using technologies such as Apache Spark or equivalent distributed processing frameworks (a minimal sketch follows this list).
- Real-Time Streaming Architecture: Implement and administer high-throughput, low-latency real-time data streaming architectures utilizing Apache Kafka or Apache Pulsar to handle live feeds from numerous sensor sources (see the consumer sketch after this list).
- Time-Series Database Management: Administer and optimize specialized time-series databases, such as TimescaleDB or InfluxDB, for high-speed storage and retrieval of time-stamped sensor data (see the hypertable sketch after this list).
- Data Governance & Lineage: Implement comprehensive data governance policies, metadata management, and data lineage tracking tools to ensure the integrity, quality, and auditability of data consumed by the AI/ML Squad.
- Query Optimization: Work closely with the Mission Software Squad to optimize complex SQL and distributed query performance for near real-time retrieval of historical mission data.
- API Integration: Configure, connect to, and pull data from third-party software APIs, and collaborate with separate engineering teams to configure data sources for pipeline integration.
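
As a reference for the ETL duty above, here is a minimal PySpark sketch of an ingest-cleanse-normalize-load pipeline. The bucket paths, column names, and schema are hypothetical placeholders for illustration, not details from this listing:

    # Minimal PySpark ETL sketch: ingest raw mission logs, cleanse,
    # normalize, and write partitioned output for downstream consumers.
    # All paths and column names below are assumed for illustration.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("mission-log-etl").getOrCreate()

    # Ingest: read raw logs (assumed newline-delimited JSON).
    raw = spark.read.json("s3://mission-archive/raw/logs/")

    # Cleanse: drop rows missing key fields, then deduplicate.
    clean = raw.dropna(subset=["timestamp", "sensor_id"]).dropDuplicates(["event_id"])

    # Normalize: parse timestamps and standardize identifiers.
    normalized = (
        clean
        .withColumn("event_time", F.to_timestamp("timestamp"))
        .withColumn("sensor_id", F.upper(F.trim("sensor_id")))
    )

    # Load: write Parquet partitioned by sensor for fast retrieval.
    normalized.write.mode("append").partitionBy("sensor_id").parquet(
        "s3://mission-archive/normalized/logs/"
    )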
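
The streaming duty maps onto a consumer loop like the following sketch, written against the confluent-kafka Python client; the broker address, topic, group id, and the process() hook are assumptions for illustration:

    # Minimal Kafka consumer sketch for a live sensor feed.
    from confluent_kafka import Consumer

    def process(payload: bytes) -> None:
        # Hypothetical downstream hook; real logic would normalize and store.
        print(payload[:80])

    consumer = Consumer({
        "bootstrap.servers": "broker:9092",   # placeholder broker
        "group.id": "sensor-ingest",          # placeholder group id
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["sensor-telemetry"])  # placeholder topic

    try:
        while True:
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                continue
            if msg.error():
                print(f"consumer error: {msg.error()}")
                continue
            # Hand the payload to the pipeline's normalization stage.
            process(msg.value())
    finally:
        consumer.close()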
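
For the time-series duty, TimescaleDB hypertables and time_bucket() aggregation are the typical storage and retrieval pattern. A sketch via psycopg2, with a placeholder DSN and an assumed sensor_readings schema:

    # TimescaleDB sketch: promote a table to a hypertable, then run a
    # time-bucketed aggregate, a common retrieval pattern for sensor data.
    import psycopg2

    conn = psycopg2.connect("dbname=missions user=etl")  # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS sensor_readings (
                ts        TIMESTAMPTZ NOT NULL,
                sensor_id TEXT        NOT NULL,
                value     DOUBLE PRECISION
            );
        """)
        # Partition on the time column; no-op if already a hypertable.
        cur.execute(
            "SELECT create_hypertable('sensor_readings', 'ts', if_not_exists => TRUE);"
        )

        # Per-minute averages over the last hour.
        cur.execute("""
            SELECT time_bucket('1 minute', ts) AS minute,
                   sensor_id,
                   avg(value) AS avg_value
            FROM sensor_readings
            WHERE ts > now() - interval '1 hour'
            GROUP BY minute, sensor_id
            ORDER BY minute;
        """)
        rows = cur.fetchall()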
Required Qualifications
- A minimum of 4 years of progressive experience in Data Engineering, with specific experience handling high-volume (terabyte-scale), high-velocity datasets.
- At least 3 years of experience designing and managing Apache Kafka or similar distributed messaging systems in production.
- Expert-level proficiency in advanced SQL and relational/time-series database optimization (e.g., TimescaleDB).
- Strong scripting and development skills (Python) for pipeline orchestration and data manipulation.
- Experience with distributed data processing frameworks such as Apache Spark.
- Proficiency in developing log ingestion and data normalization strategies and in implementing data models for complex datasets.
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related technical field.
- AWS Certified Big Data or Databricks Certified Data Engineer certification preferred.
- Must be eligible for a U.S. Government Secret Clearance.