Senior Data Engineer
Listed on 2026-02-16
IT/Tech
Data Engineer, Cloud Computing
Job Description
Position Title: Senior Data Engineer
Salary: $114,668 to $154,216 annually, DOE
Benefits: Medical, dental, vision, 401k, flexible spending account, paid sick leave and paid time off, parental leave, quarterly performance bonus, training, career growth and education reimbursement programs.
Ziply Fiber is a local internet service provider dedicated to elevating the connected lives of the communities we serve. We offer the fastest home internet in the nation, a refreshingly great customer experience, and affordable plans that put customers in charge. As our state-of-the-art fiber network expands, so does our need for team members who can help us grow and realize our goals.
Our Company Values:
- Genuinely Caring: We treat customers and colleagues like neighbors, with empathy and full attention.
- Empowering You: We help customers choose what is best for them, and we support employees in implementing new ideas and solutions.
- Innovation and Improvement: We constantly seek ways to improve how we serve customers and each other.
- Earning Your Trust: We build trust through clear, honest, human communication.
The Senior Data Engineer will be responsible for designing, building, and maintaining scalable data pipelines, data models, and infrastructure that support business intelligence, analytics, and operational data needs. This role involves working with various structured and unstructured data sources, optimizing data workflows, and ensuring high data reliability and quality. The ideal candidate will be proficient in modern data engineering tools and cloud platforms, bringing innovative solutions to a fast-paced and diverse data infrastructure.
Essential Duties and Responsibilities
The Essential Duties and Responsibilities listed below are a range of duties performed by the employee and are not intended to reflect all duties performed.
Data Pipeline Engineering & Automation
- Design, develop, and maintain scalable data pipelines for ingestion, transformation, and storage of large datasets.
- Troubleshoot and resolve data pipeline and ETL failures, implementing robust monitoring and alerting systems.
- Automate data workflows to increase efficiency and reduce manual intervention.
- Optimize data models for analytics and business intelligence reporting.
- Build and maintain data infrastructure, ensuring performance, reliability, and scalability.
- Implement best practices for data governance, security, and compliance.
- Work with structured and unstructured data, integrating data from various sources including databases, APIs, and streaming platforms.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data needs and design appropriate solutions.
- Mentor and train junior engineers, fostering a culture of learning and innovation.
- Develop and maintain documentation for data engineering processes and workflows.
- Perform other duties as required to support the business and the evolving organization.
Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- Minimum of eight (8) years of experience in data engineering, ETL development, or related fields.
- Strong proficiency in SQL and database technologies (PostgreSQL, MySQL, Oracle, SQL Server, etc.).
- Familiarity with Linux/Unix and the scripting technologies used on them.
- Proficiency in programming languages such as Python for data engineering tasks.
- Hands-on experience with cloud platforms such as Microsoft Azure and its data services, including Azure Data Factory and Azure Synapse Analytics.
- Experience working with data warehouses such as Snowflake or Azure SQL Data Warehouse.
- Familiarity with workflow automation tools such as Autosys.
- Knowledge of data modeling, schema design, and data architecture best practices.
- Strong understanding of data governance, security, and compliance standards.
- Ability to work independently in a remote environment across different time zones and collaborate effectively across teams.
- Exposure to GraphQL and RESTful APIs for data retrieval and integration.
- Familiarity with NoSQL databases such as MongoDB.
- Experience with version control software such as GitLab.
- Proven aptitude for independently managing complex procedures, even when encountered infrequently.
- Proactive approach to learning and optimizing operational workflows.
- Familiarity with DevOps practices and CI/CD pipelines for data engineering, including Azure DevOps.
- Proficient in designing, writing, and maintaining complex stored procedures and stored procedure–based ETL workflows for robust data processing.
- Comfortable working in complex ecosystems with heterogeneous data sources and diverse end-user requirements, adapting solutions to fit unique contexts.
- Working knowledge of data wrangling and ETL tools, including Alteryx or similar technologies.
- Understanding of data privacy…