Databricks Architect
Listed on 2025-11-20
IT/Tech
Data Engineer, Data Analyst
OUR CLIENT
Our client provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting-edge analytics techniques, and a consultative approach. Leveraging proprietary methodology and best-of-breed technology, the client's analytics team takes an industry-specific approach to transforming decision-making and embedding analytics more deeply into clients' business processes. Their global footprint of 2,000+ data scientists and analysts assists client organizations with complex risk-minimization methods; advanced marketing, pricing, and CRM strategies; internal cost analysis; and cost and resource optimization.
They serve the insurance, healthcare, banking, capital markets, utilities, retail and e-commerce, travel, and transportation and logistics industries.
RESPONSIBILITIES
- Develop and optimize ETL pipelines from various data sources using Databricks on cloud platforms (AWS, Azure, etc.)
- Implement standardized pipelines with automated testing, Airflow scheduling, Azure DevOps for CI/CD, Terraform for infrastructure as code, and Splunk for monitoring
- Continuously improve systems through performance enhancements and cost reductions in computing and storage
- Data processing and API integration: utilize Spark Structured Streaming for real-time data processing and integrate data outputs with REST APIs
- Lead data engineering projects to manage and implement data-driven communication systems
- Apply Scrum and Agile methodologies to coordinate global delivery teams, run Scrum ceremonies, manage backlog items, and handle escalations
- Integrate data across different systems and platforms
REQUIREMENTS
- Strong verbal and written communication skills to manage client discussions
- 8+ years of experience developing and implementing ETL pipelines from various data sources using Databricks on cloud platforms
- Some experience with the insurance domain and insurance data is a must
- Programming languages - SQL, Python
- Technologies - IaaS (AWS, Azure, or GCP), Databricks platform, Delta Lake storage, Spark (PySpark, Spark SQL)
- Good to have - Airflow, Splunk, Kubernetes, Power BI, Git, Azure DevOps
- B.S. degree in a data-centric field such as Mathematics, Economics, Computer Science, Information Systems, Information Processing, Engineering, or another science field
- Excellent communication & leadership skills, with the ability to lead and motivate team members
- Ability to work independently with some level of ambiguity and juggle multiple demands
This is a hybrid role requiring 2 days per week onsite in Dublin, OH. The client is currently considering candidates who are local to Dublin, OH, or who are open to relocating to the Dublin, OH area.