
Sr. Databricks Solution Architect Healthcare Domain; W2 or Selfcorp

Remote / Online - Candidates ideally in Washington, USA
Listing for: Data Freelance Hub
Remote/Work from Home position
Listed on 2026-01-01
Job specializations:
  • IT/Tech
    Cloud Computing, Data Engineer
Position: Sr. Databricks Solution Architect with Healthcare Domain (Only W2 or Selfcorp)

⭐ Featured Role | Apply directly with Data Freelance Hub

This role is for a Sr. Databricks Solution Architect with healthcare domain experience, offered as a contract-to-hire position at a competitive pay rate. Key skills include Azure Databricks, ADF, CI/CD, and Terraform. It requires 5+ years in cloud data engineering and healthcare domain experience.

Location: United States (Remote; Washington, DC office visits as needed)

Employment: Contract to Hire (W2 Contractor)

Job Description

Top Skills Needed:

  • Deep hands-on expertise with Databricks platform architecture and governance
  • Unity Catalog, workspaces, external locations, compute, access controls, cluster governance (an illustrative access-control sketch follows this list)
  • Reliability engineering, monitoring, and operational hardening of the Lakehouse
  • Observability, alerting, DR readiness, backup/restore, performance tuning, incident response
  • Strong experience with ADF, CI/CD, and Terraform for orchestrating and managing the Lakehouse
  • Pipeline orchestration, IaC, DevOps, environment promotion, compute policies
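
For illustration only, a minimal sketch of the kind of Unity Catalog access-control work the bullet above implies, written as PySpark SQL calls from a Databricks notebook or job. The catalog, schema, and group names (clinical_curated, data-engineers, healthcare-analysts) are hypothetical placeholders, not details from this posting.

    # Illustrative Unity Catalog access-control sketch; all object and group names are placeholders.
    # Assumes a UC-enabled Azure Databricks workspace where this runs as a notebook or job.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    CATALOG = "clinical_curated"            # hypothetical curated catalog
    SCHEMA = f"{CATALOG}.claims"            # hypothetical schema inside it
    ENGINEERS = "`data-engineers`"          # account group that builds pipelines (placeholder)
    ANALYSTS = "`healthcare-analysts`"      # read-only consumer group (placeholder)

    # Engineering group: can use the catalog and create tables in the schema.
    spark.sql(f"GRANT USE CATALOG ON CATALOG {CATALOG} TO {ENGINEERS}")
    spark.sql(f"GRANT USE SCHEMA, CREATE TABLE ON SCHEMA {SCHEMA} TO {ENGINEERS}")

    # Analyst group: read-only access to curated data.
    spark.sql(f"GRANT USE CATALOG ON CATALOG {CATALOG} TO {ANALYSTS}")
    spark.sql(f"GRANT USE SCHEMA ON SCHEMA {SCHEMA} TO {ANALYSTS}")
    spark.sql(f"GRANT SELECT ON SCHEMA {SCHEMA} TO {ANALYSTS}")

In practice such grants would typically be codified in Terraform rather than run as ad-hoc SQL, consistent with the IaC expectations listed above.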

Typical Day-to-Day:

  • Design how the Databricks Lakehouse should work including the structure, tools, standards, and best practices
  • Guide engineering teams on how to build pipelines and use Databricks correctly
  • Solve technical issues when data jobs fail or performance slows
  • Work with stakeholders to understand data needs and deliver solutions
  • Set standards for security, governance, naming conventions, and architecture
  • Ensure the Databricks platform is stable, reliable, and always available
  • Build and maintain monitoring, alerting, logging, and health dashboards
  • Strengthen and fix ingestion pipelines (ADF → landing → raw → curated); a promotion-and-quality-check sketch follows this list
  • Improve data quality checks, anomaly detection, and pipeline reliability
  • Manage CI/CD pipelines and deployment processes using Azure DevOps or GitHub
  • Use Terraform (IaC) to deploy and manage Databricks and Azure infrastructure
  • Partner with Security and FinOps on access controls, compliance, and cost governance
  • Mentor the Data Engineer and support distributed data engineering teams across the organization
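
As a concrete, hypothetical illustration of the raw-to-curated promotion and data quality checks referenced above, a PySpark sketch. Table names, columns, and the 5% rejection threshold are assumptions, not the client's actual pipeline.

    # Illustrative raw -> curated promotion with a simple quality gate (PySpark).
    # Table names, columns, and the rejection threshold are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Raw table assumed to be landed by ADF (placeholder name).
    raw = spark.read.table("clinical_raw.claims_daily")

    # Basic quality rules: required keys present, no future-dated service dates.
    valid = raw.filter(
        F.col("claim_id").isNotNull()
        & F.col("member_id").isNotNull()
        & (F.col("service_date") <= F.current_date())
    )

    total, good = raw.count(), valid.count()
    if total > 0 and (total - good) / total > 0.05:
        # Fail loudly instead of silently promoting bad data; job alerting picks this up.
        raise ValueError(f"Quality gate failed: {total - good} of {total} rows rejected")

    # Append the vetted rows into the curated Delta table.
    valid.write.mode("append").saveAsTable("clinical_curated.claims")
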
Key Responsibilities

1. Lakehouse Architecture & Platform Administration (Approximately 60% of role when combined with mentoring & code review)

  • Serve as the primary architect and administrator for the Azure Databricks Lakehouse (Unity Catalog, workspaces, external locations, compute, access controls).
  • Lead execution of a Minimal Viable Hardening Roadmap for the platform, prioritizing:
    • High availability and DR readiness
    • Backup/restore patterns for data and metadata
    • Platform observability and operational metrics
    • Secure and maintainable catalog/namespace structure
    • Robust and proactive data quality assurance
  • Implement and evolve naming conventions, environment strategies, and platform standards that enable long-term maintainability and safe scaling (a naming-convention sketch follows this list).
  • Act as the Lakehouse-facing counterpart to Enterprise Architecture and Security, collaborating on network architecture, identity & access, compliance controls, and platform guardrails.
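
A minimal sketch of what the naming conventions and environment strategy above could look like, bootstrapping an <env>_<domain> catalog layout with landing/raw/curated schemas. The convention, the environment and domain names, and the use of ad-hoc SQL rather than Terraform are assumptions for illustration only.

    # Illustrative catalog/schema bootstrap following a placeholder naming convention:
    #   <env>_<domain>.<layer>, e.g. prod_clinical.curated
    # In practice this would likely be driven by Terraform; requires CREATE CATALOG rights.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    ENVIRONMENTS = ["dev", "test", "prod"]     # assumed environment strategy
    DOMAINS = ["clinical", "claims"]           # hypothetical data domains
    LAYERS = ["landing", "raw", "curated"]     # mirrors the ingestion pattern in this posting

    for env in ENVIRONMENTS:
        for domain in DOMAINS:
            catalog = f"{env}_{domain}"
            spark.sql(f"CREATE CATALOG IF NOT EXISTS {catalog}")
            for layer in LAYERS:
                spark.sql(f"CREATE SCHEMA IF NOT EXISTS {catalog}.{layer}")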

2. Reliability, Monitoring, and Incident Management

  • Design, implement, and maintain comprehensive monitoring and alerting for Lakehouse platform components, ingestion jobs, key data assets, and system health indicators (an illustrative failed-run check follows this list).
  • Oversee end-to-end reliability engineering, including capacity planning, throughput tuning, job performance optimization, and preventative maintenance (e.g., IR updates, compute policy reviews).
  • Participate in — and help shape — the on-call rotation for high-priority incidents affecting production workloads, including rapid diagnosis and mitigation during off-hours as needed.
  • Develop and maintain incident response runbooks, escalation pathways, stakeholder communication protocols, and operational readiness checklists.
  • Lead or participate in post-incident Root Cause Analyses, ensuring durable remediation and preventing recurrence.
  • Conduct periodic DR and failover simulations, validating RPO/RTO and documenting improvements.
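
One hedged example of the kind of operational check that feeds the monitoring and alerting described above: polling the Databricks Jobs REST API (2.1 runs/list) for recent non-successful runs. The host and token handling and the alert sink (a print statement here) are placeholders for whatever the platform actually uses.

    # Illustrative failed-run check against the Databricks Jobs API (REST 2.1).
    # Host, token source, and the alert sink are placeholders.
    import os
    import time
    import requests

    HOST = os.environ["DATABRICKS_HOST"]      # e.g. https://adb-<id>.azuredatabricks.net
    TOKEN = os.environ["DATABRICKS_TOKEN"]    # token injected by the scheduler (placeholder)

    def failed_runs(hours: int = 24) -> list:
        """Return completed runs from the last `hours` whose result_state is not SUCCESS."""
        since_ms = int((time.time() - hours * 3600) * 1000)
        resp = requests.get(
            f"{HOST}/api/2.1/jobs/runs/list",
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"completed_only": "true", "start_time_from": since_ms, "limit": 25},
            timeout=30,
        )
        resp.raise_for_status()
        runs = resp.json().get("runs", [])
        return [r for r in runs if r.get("state", {}).get("result_state") != "SUCCESS"]

    if __name__ == "__main__":
        for run in failed_runs():
            # Placeholder: route to the real alerting channel (Teams, PagerDuty, etc.).
            print(f"ALERT: run {run.get('run_id')} ended {run['state'].get('result_state')}")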

3. Pipeline Reliability, Ingestion Patterns & Data Quality

  • Strengthen and standardize ingestion pipelines (ADF → landing → raw → curated); an anomaly-check sketch follows
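
As a final illustration tied to the data quality and anomaly detection themes above, a sketch of a simple row-count anomaly check on a curated table: compare the latest daily load against a trailing seven-day mean and fail the job if it deviates sharply. The table name, the ingest_date column, the window, and the 50% threshold are assumptions.

    # Illustrative row-count anomaly check for an ingestion pipeline.
    # Table name, partition column, window, and threshold are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    TABLE = "clinical_curated.claims"
    daily = (spark.read.table(TABLE)
                  .groupBy("ingest_date").count()
                  .orderBy(F.col("ingest_date").desc())
                  .limit(15)
                  .collect())

    if len(daily) >= 8:
        latest = daily[0]["count"]
        baseline = sum(r["count"] for r in daily[1:8]) / 7          # trailing 7-day mean
        if baseline > 0 and abs(latest - baseline) / baseline > 0.5:
            # Surface a loud failure so monitoring/on-call can triage the pipeline.
            raise ValueError(
                f"Row-count anomaly on {TABLE}: latest={latest}, 7-day mean={baseline:.0f}"
            )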