Data Engineer
Listed on 2026-02-09
Job Title: Information Technology Data Engineer
FLSA Status: Exempt
Reports To: Manager, Data Engineering
Schedule: Full-time
Effective Date:
Location: Lynnwood, WA
A data engineer designs, builds, and maintains the infrastructure and pipelines that enable the collection, storage, and processing of large datasets. They ensure data is reliable, accessible, and optimized for analytics and business intelligence. Their role often involves working with databases, ETL processes, and cloud platforms to support data-driven decision-making.
Core Responsibilities:
- Data Pipeline Development: Design, build, and maintain scalable and reliable data pipelines to collect, process, and store data from various sources.
- API Creation and Consumption: Collect data from SaaS providers' published APIs and create internal APIs for publishing data.
- Data Modeling: Familiar with OLTP and OLAP modeling and when to use each; capable of working with flat files, tables, and JSON to transform data into easy-to-use structures.
- Data Quality and Validation: Implement data validation, cleansing, and monitoring processes to ensure high data quality and integrity.
- Collaboration with Data Consumers: Work closely with data analysts, data scientists, and business teams to understand data needs and deliver appropriate solutions.
- Tooling and Automation: Develop tools and scripts to automate repetitive tasks, improve data workflows, and support continuous integration and deployment of data solutions. Familiar with source-control tools and the typical software development lifecycle.
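As a flavor of the data-modeling work described above (transforming JSON into easy-to-use tabular structures), here is a minimal Python sketch; the record shape and field names are hypothetical, not part of Zumiez's actual systems:

```python
import json

def flatten_order(raw: str) -> list[dict]:
    """Flatten a nested JSON order (hypothetical shape) into one flat row per line item."""
    order = json.loads(raw)
    return [
        {
            "order_id": order["id"],
            "customer": order["customer"]["name"],
            "sku": item["sku"],
            "qty": item["qty"],
        }
        for item in order["items"]
    ]

# A nested document with two line items becomes two flat rows, ready to load into a table.
sample = '{"id": 1, "customer": {"name": "Ana"}, "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]}'
rows = flatten_order(sample)
```

The same flatten-then-load pattern applies whether the source is an API payload, a flat file, or a message queue.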
A typical day in the life of our data engineers includes the following:
- Morning Standups & Syncs: Participate in daily standup meetings with data teams and stakeholders to align on priorities, blockers, and progress updates.
- Pipeline Monitoring & Maintenance: Check the health of data pipelines, troubleshoot failures, and ensure data is flowing correctly across systems.
- Data Modeling & Architecture: Design or refine data models and schemas to support new analytics or application requirements.
- ETL Development: Build or update ETL (Extract, Transform, Load) processes to integrate new data sources or improve performance.
- Collaboration with Analysts & Scientists: Work closely with data analysts and data scientists to understand data needs and deliver clean, well-structured datasets.
- Documentation & Code Reviews: Document data workflows, update technical specs, and review code contributions from peers to maintain quality and consistency.
Partners with others to ensure Zumiez creates an empowered, fair & honest, teaching & learning-based, competitive, and fun work environment that recognizes the contributions of our employees including:
- Anchors all interactions and practices around Zumiez’ Cultural Values.
- Partners with data analysts, BI engineers, and data scientists to ensure components are secure, fast, stable, and easy to support.
- Seeks continual self-improvement through independent and relevant knowledge gathering and seeking internal and external training opportunities
- Humble, curious, and a voracious learner
- Forward thinking, creative, and collaborative
- Approachable, calm, and confident
- High degree of emotional intelligence
- Precise and effective in verbal and written communication
- Embraces risk, hates the status quo and rules, and fosters the idea that fair is almost never equal
- Seeks creative solutions, dives into the unknown, and feels comfortable out on limbs
- Thrives in the complexity of working through influence without authority
- A natural problem solver who can identify where in the technology stack an incident occurs.
The data engineering team is on a journey migrating a legacy on-premises solution to a cloud-native solution. This long-term project consists of the following main tasks:
- Data Inventory and Assessment - Audit existing data assets, schemas, and ETL processes to determine what should be migrated, transformed, or deprecated.
- ETL/ELT Pipeline Modernization - Rebuild or refactor legacy ETL pipelines using cloud-native tools (e.g., Azure Data Factory, AWS Glue, or Google Cloud Dataflow) to support scalable and efficient data movement.
- Data Quality and Validation Frameworks - Implement automated validation checks to ensure data integrity during and after migration, including schema matching, null checks, and reconciliation reports.
- Security and Compliance Configuration - Set up identity and access management, encryption, and audit logging to meet enterprise security standards and regulatory requirements.
- Performance Optimization and Cost Monitoring - Tune cloud resources for performance and cost-efficiency, including partitioning strategies, query optimization, and usage monitoring dashboards.
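The validation framework task above mentions schema matching, null checks, and reconciliation reports. A minimal sketch of what such checks might look like in Python; the function and its inputs are illustrative assumptions, not the team's actual framework:

```python
def validate_migration(source_rows: list[dict], target_rows: list[dict],
                       required_cols: list[str]) -> list[str]:
    """Hypothetical post-migration checks: schema match, null checks, row-count reconciliation."""
    issues = []
    # Schema matching: every required column must exist in the migrated data.
    target_cols = set(target_rows[0]) if target_rows else set()
    missing = [c for c in required_cols if c not in target_cols]
    if missing:
        issues.append(f"missing columns: {missing}")
    # Null checks: required columns must be populated in every migrated row.
    for i, row in enumerate(target_rows):
        for c in required_cols:
            if row.get(c) is None:
                issues.append(f"null {c} in row {i}")
    # Reconciliation: row counts must match between the legacy source and the cloud target.
    if len(source_rows) != len(target_rows):
        issues.append(f"row count mismatch: {len(source_rows)} source vs {len(target_rows)} target")
    return issues

# Example: one migrated row lost its id, so the check flags it.
src = [{"id": 1}, {"id": 2}]
tgt = [{"id": 1}, {"id": None}]
report = validate_migration(src, tgt, ["id"])
```

In practice checks like these run automatically after each migration batch, and a non-empty report blocks the cutover.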
Qualifications:
- Bachelor of Science in Computer Science, Computer Engineering, or equivalent
- 1-5 years of professional experience in a data engineering role
- Proficiency in SQL and data modeling
- Strong…