Sr. Databricks Architect and Developer
Job in Des Moines, Polk County, Iowa, 50319, USA
Listed on 2026-03-01
Listing for: Apexon Technology
Full Time position
Job specializations:
- IT/Tech: Data Engineer, Cloud Computing, Database Administrator, Data Analyst
Job Description & How to Apply Below
We are seeking a highly experienced Senior Databricks Architect and Developer to design, build, and optimize high-performance data migration and ETL solutions. The ideal candidate will bring deep expertise in Databricks architecture, AWS cloud services, and large-scale data migration from legacy systems to PostgreSQL environments. This role requires strong hands-on development experience along with architectural ownership of the Databricks platform setup, automation, and monitoring.
Location: Remote, with occasional travel to Des Moines, IA
Required Skills:
- Databricks platform architecture and administration
- PySpark and Pandas
- SQL and PL/SQL
- Spark Structured Streaming
- AWS services including S3, Glue, Lambda, Redshift, EMR, and overall cloud infrastructure
- ETL pipeline design and optimization
- Data validation and transformation
- SFTP, DoDSAFE, NIPRGPT
- Data visualization tools
- Optimization and monitoring, including cluster autoscaling, spot instances, and cost management
- Azure Monitor, CloudWatch, and Databricks logs
- Strong experience designing and building high-performance ETL pipelines using Databricks with PySpark, Delta Lake, and Databricks Workflows
- Proven expertise migrating data from multiple legacy sources including VSAM files to PostgreSQL
- Experience architecting and configuring Databricks Landing and Staging environments
- Job orchestration and automation design
- Performance monitoring and tuning tools implementation
- Advanced SQL, Databricks SQL, and PostgreSQL expertise for load optimization and large-volume cutovers
- Experience in data mapping, conceptual and technical design
- Application and technical testing using Databricks Notebooks
- Implementation of data masking techniques
- Experience with spider web and reverse spider web logic
- Strong defect analysis and remediation skills
Responsibilities:
- Develop scalable, high-performance ETL pipelines using Databricks, including PySpark, Python, Delta Lake, and Databricks Workflows
- Lead migration efforts from legacy sequential databases, VSAM files, and other structured sources into PostgreSQL
- Configure and manage Databricks Landing and Staging schemas ensuring secure and efficient data movement
- Optimize data loads and manage high-volume cutover activities
- Contribute to data mapping, architecture design, and technical validation
- Develop and execute technical test cases using Databricks Notebooks
- Implement data masking and transformation rules
- Support defect resolution and ensure high quality migration outcomes
Deliverables:
- Setup, configuration, and ongoing maintenance of the Databricks platform
- High-performance ETL workflows supporting large-scale data migration
- Documented architecture, automation workflows, and monitoring framework