Senior Data Engineer/Architect
Listed on 2026-01-02
IT/Tech
Data Engineer
Who We Are
Textron Systems is part of Textron, a $14 billion, multi-industry company employing 35,000 talented makers, thinkers, creators and doers worldwide. We make things that fly, hover, zoom and launch. Things that move people. Protect soldiers. Power industries. We serve customers in industries spanning aerospace and defense, specialized vehicles, turf care and fuel systems.
This role is in Textron Systems’ Information Technology business area. Read more about who we are and the products we make!
About This Role
We are seeking an accomplished Senior Data Architect/Senior Data Engineer with deep expertise in architecting and delivering modern data platforms using Azure Databricks, Delta Lake, and Medallion Architecture. The ideal candidate has strong experience in data warehousing, lakehouse best practices, ACID transaction processing, and designing scalable data ecosystems that support advanced analytics, AI, and machine learning workloads.
Responsibilities
Data Architecture & Platform Engineering
- Design, develop, and implement large-scale data processing systems and solutions using Azure Databricks.
- Architect and build Medallion Architecture layers (Bronze, Silver, Gold) to ensure efficient data pipeline processing from raw data to cleaned and enriched datasets.
- Build ACID-compliant pipelines leveraging Delta Lake features such as transaction logs, schema enforcement, time travel, and versioning.
- Develop and optimize ELT (Extract, Load, Transform) processes using Databricks and other Azure services to support data warehousing, analytics, and reporting requirements.
- Create data architectures that support AI and machine learning use cases, including feature engineering pipelines, large-scale data preprocessing, and high-performance data retrieval for model training and inference.
- Apply data modeling best practices including Kimball and Inmon methodologies, 3NF normalization, denormalization, dimensional modeling, and hybrid lakehouse patterns.
- Architect solutions that support both OLTP and OLAP workloads with appropriate storage, compute, and performance strategies.
- Develop logical and physical data models that scale for BI, analytics, predictive modeling, and ML training pipelines.
- Apply principles such as normalization/denormalization, partitioning, indexing, constraints, and schema design to enable optimized data processing.
- Conduct query tuning using execution plans, join optimization, Spark performance tuning, caching strategies, and workload segregation.
- Review, optimize, and design complex transformations, including nested functions, window functions, CTEs, and ML feature creation logic.
- Implement end-to-end data governance including metadata management, data quality, lineage, and documentation.
- Apply security best practices including schema-based security, ACLs, role-based access control, and integration with Azure AD and Key Vault.
- Ensure reliability, auditability, and recoverability across all layers of the lakehouse platform.
- Monitor data pipelines and implement error handling, logging, and alerting mechanisms to ensure data quality and reliability.
- Partner with data science, analytics, business, and engineering teams to deliver high-quality datasets for BI, ML, and AI solutions.
- Mentor and provide technical guidance to junior data engineers and team members.
- Perform code reviews and ensure adherence to coding standards and best practices.
- Provide documentation and training for end-users and team members to ensure seamless adoption and usage of data solutions.
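To give candidates a concrete flavor of the "complex transformations" bullet above, here is a minimal sketch of a CTE combined with a window function. SQLite (via Python's standard library) stands in for Databricks SQL purely so the example is self-contained; the table, column names, and data are illustrative assumptions, but the CTE and window-function syntax carry over to Spark SQL largely unchanged.

```python
import sqlite3

# In-memory database with a toy orders table (names and values are
# illustrative, not from the posting).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('alice', '2024-01-01', 100.0),
        ('alice', '2024-01-05', 50.0),
        ('bob',   '2024-01-02', 75.0);
""")

# The CTE filters to January; the window function then computes a
# per-customer running total ordered by date.
rows = conn.execute("""
    WITH jan_orders AS (
        SELECT * FROM orders WHERE order_date LIKE '2024-01%'
    )
    SELECT customer, order_date,
           SUM(amount) OVER (
               PARTITION BY customer ORDER BY order_date
           ) AS running_total
    FROM jan_orders
    ORDER BY customer, order_date
""").fetchall()

for row in rows:
    print(row)
# → ('alice', '2024-01-01', 100.0)
#   ('alice', '2024-01-05', 150.0)
#   ('bob', '2024-01-02', 75.0)
```

On Databricks, the same pattern would typically run over Delta tables in the Silver or Gold layer, where partitioning and caching choices drive the performance-tuning work the role describes.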
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in data engineering, data architecture, or enterprise analytics platform development.
- Extensive hands‑on experience with Azure Databricks, including building and managing data pipelines.
- Strong expertise in Medallion Architecture development, including the creation and optimization of Bronze, Silver, and Gold layers.
- Proficiency in SQL, Python, PySpark, and other data processing/ETL languages and…