EDP Databricks Engineer
Listed on 2026-02-23
IT/Tech
Cloud Computing, Data Engineer, IT Support
Minimum Clearance Required
US Citizen
Responsibilities
I2X Technologies is a reputable technology services provider to the Federal Government. Whether the focus is space exploration, national security, cybersecurity, or cutting-edge engineering applications, I2X offers you the chance to make a real-world impact in your field and for your country. We provide long-term growth and development. Headquartered in Colorado, I2X is engaged in programs in more than 20 states across the country.
Our programs support multiple Federal agencies and the Department of Defense, and are often focused on the space initiatives of our government customers.
This position will be on-site in Washington, DC.
- Hands-on experience administering Databricks (workspace administration, clusters/compute policies, jobs, SQL warehouses, repos, runtime management) and expertise using Databricks CLI.
- Strong Unity Catalog administration: metastores; catalogs/schemas; grants; service principals; external locations; storage credentials; governed storage access.
- Identity & Access Management proficiency: SSO concepts, SCIM provisioning, group-based RBAC, service principals, least-privilege patterns.
- Security fundamentals: secrets management, secure connectivity, audit logging, access monitoring, and evidence-ready operations.
- Automation skills: scripting and/or IaC using Terraform/CLI/REST APIs for repeatable configuration and environment promotion.
- Experience implementing data governance controls (classification/tagging, lineage/metadata integrations) in partnership with governance teams.
- CI/CD practices for jobs/notebooks/config promotion across SDLC environments.
- Understanding of lakehouse concepts (e.g., Delta, table lifecycle management, separation of storage/compute).
- Strong troubleshooting and problem-solving; communicate clearly during incidents and changes.
- Experience administering Databricks serverless compute, workspace Git integrations (GitLab), Databricks Asset Bundles (DABs) for deployment automation, and modern workspace features supporting DevOps workflows.
- Bachelor’s degree in a related field or equivalent practical experience.
- 7+ years in cloud/data platform administration and operations, including 5+ years administering Databricks.
Highly valued (desirable, but not required) knowledge, skills, and experience:
- Cloud platform expertise (AWS): IAM roles/policies, object storage security patterns, networking basics (VPC concepts), logging/monitoring integration.
- SQL proficiency and data engineering fundamentals for troubleshooting query performance issues, understanding ETL/ELT workflow patterns, and debugging data pipeline failures; basic Python/Scala familiarity for notebook/code issue diagnosis.
- Experience with compliance and regulatory frameworks (FedRAMP, HIPAA, SOC2, or similar) including implementation of data residency requirements, retention policies, and audit-ready evidence collection.
- Hands-on experience with AWS security and networking services, including PrivateLink, Secrets Manager/Systems Manager integration, CloudWatch/CloudTrail integration, S3 bucket policies, cross-account access patterns, and KMS encryption key management.
- Demonstrated experience in Databricks and cloud FinOps and budget management.
- SLA/SLO management, incident management, and stakeholder communication skills; ability to define platform service levels, produce operational reports, translate technical issues to business stakeholders, and manage vendor relationships (Databricks account teams).
- 5+ years of demonstrated experience administering Databricks
- Databricks Platform Administrator/Databricks AWS Platform Architect
- Databricks Certified Data Engineer Associate/Professional
- AWS Certified Solutions Architect Associate or Professional
The Contractor shall deliver, but is not limited to, the following:
- Administer the Databricks account and workspaces across SDLC environments; standardize configuration, naming, and operational patterns.
- Configure and maintain clusters/compute, job compute, SQL warehouses, runtime versions, libraries, repos, and workspace settings.
- Implement platform monitoring/alerting, operational dashboards, and health…