
Sr. Databricks Architect and Developer

Job in Des Moines, Polk County, Iowa, 50319, USA
Listing for: Apexon Technology
Full Time position
Listed on 2026-03-01
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing, Database Administrator, Data Analyst
Salary/Wage Range or Industry Benchmark: USD 80,000 - 100,000 per year
Job Description & How to Apply Below

We are seeking a highly experienced Senior Databricks Architect and Developer to design, build, and optimize high-performance data migration and ETL solutions. The ideal candidate will bring deep expertise in Databricks architecture, AWS cloud services, and large-scale data migration from legacy systems to PostgreSQL environments. This role requires strong hands-on development experience along with architectural ownership of the Databricks platform setup, automation, and monitoring.

Role Title

Sr. Databricks Architect and Developer

Location

Remote with occasional travel to Des Moines, IA

Required Skills
  • Databricks platform architecture and administration
  • PySpark and Pandas
  • SQL and PL/SQL
  • Spark Structured Streaming
  • AWS services including S3, Glue, Lambda, Redshift, EMR, and overall cloud infrastructure
  • ETL pipeline design and optimization
  • Data validation and transformation
  • SFTP, DoDSAFE, NIPRGPT
  • Data visualization tools
  • Optimization and monitoring including cluster autoscaling, spot instances, cost management
  • Azure Monitor, AWS CloudWatch, and Databricks logs
Preferred Skills
  • Strong experience designing and building high-performance ETL pipelines using Databricks with PySpark, Delta Lake, and Databricks Workflows
  • Proven expertise migrating data from multiple legacy sources including VSAM files to PostgreSQL
  • Experience architecting and configuring Databricks Landing and Staging environments
  • Job orchestration and automation design
  • Performance monitoring and tuning tools implementation
  • Advanced SQL, Databricks SQL, and PostgreSQL expertise for load optimization and large-volume cutovers
  • Experience in data mapping, conceptual and technical design
  • Application and technical testing using Databricks Notebooks
  • Implementation of data masking techniques
  • Experience with spider web and reverse spider web logic
  • Strong defect analysis and remediation skills
Key Responsibilities
  • Develop scalable, high-performance ETL pipelines using Databricks, including PySpark, Python, Delta Lake, and Databricks Workflows
  • Lead migration efforts from legacy sequential databases, VSAM files, and other structured sources into PostgreSQL
  • Configure and manage Databricks Landing and Staging schemas ensuring secure and efficient data movement
  • Optimize data loads and manage high-volume cutover activities
  • Contribute to data mapping, architecture design, and technical validation
  • Develop and execute technical test cases using Databricks Notebooks
  • Implement data masking and transformation rules
  • Support defect resolution and ensure high quality migration outcomes
  • Own setup, configuration, and ongoing maintenance of the Databricks platform
  • Deliver high-performance ETL workflows supporting large-scale data migration
  • Document the architecture, automation workflows, and monitoring framework