
Database Developer - III

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: Compunnel, Inc.
Full Time position
Listed on 2025-12-02
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst, Data Science Manager, Big Data
Job Description

We are looking for a skilled Data Mesh Data Modeler with expertise in Databricks to join our team. The ideal candidate will design and implement scalable, efficient data models within a data mesh architecture, ensure reliable data processing, and collaborate with cross-functional teams. You will leverage Databricks for data engineering tasks, from data processing to orchestration, to meet the needs of our data-driven organization.

Job Responsibilities:

  • Design and implement scalable, efficient data models within a data mesh architecture, following principles such as domain-driven design and federated data governance.
  • Work closely with data architects, engineers, and business stakeholders to translate business requirements into technical solutions.
  • Communicate data model designs effectively to technical and non-technical teams.
  • Leverage Databricks for data engineering tasks, including data processing, data validation, and data orchestration.
  • Optimize data pipelines to ensure high performance, scalability, and efficient data processing.
  • Implement data validation rules and quality checks to ensure data integrity and consistency (a brief sketch of this kind of check follows this list).
  • Design, implement, and manage the lifecycle of data products within the Data Mesh architecture.
  • Collaborate with other teams to build, manage, and monitor efficient data pipelines and data products.
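For a concrete flavour of the validation and publication work described above, here is a minimal PySpark sketch of a quality check as it might look in a Databricks notebook. It is an illustration only, not part of this role's codebase: the table name sales.orders, the columns order_id and amount, and the output table sales.orders_validated are assumed for the example.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Databricks notebooks already provide a session; getOrCreate() reuses it.
    spark = SparkSession.builder.getOrCreate()

    # Hypothetical domain table; names are assumptions for illustration.
    orders = spark.read.table("sales.orders")

    # Example validation rules: the key must be present and amounts non-negative.
    invalid = orders.filter(F.col("order_id").isNull() | (F.col("amount") < 0))

    invalid_count = invalid.count()
    if invalid_count > 0:
        # A production pipeline might quarantine these rows or fail the run.
        raise ValueError(f"{invalid_count} rows failed validation")

    # Publish the validated data product back to the owning domain's schema.
    orders.write.mode("overwrite").saveAsTable("sales.orders_validated")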
Required Skills:
  • Experience in Data Mesh Architecture:
    Expertise in modeling data products and understanding the principles of data mesh, domain-driven design, and federated governance.
  • Databricks Expertise:
    Strong hands-on experience with Databricks, including using Spark for large-scale data processing, validation, and orchestration.
  • Programming Skills:
    Proficiency in SQL and Python for data processing, transformation, and validation (see the example after this list).
  • Data Pipeline Optimization:
    Experience in designing and optimizing data pipelines to ensure scalability, high performance, and efficient data flow.
  • Data Integrity:
    Implementing robust data validation and quality checks to ensure data consistency and accuracy across systems.
  • Collaboration:
    Strong ability to work closely with cross-functional teams, including business users, data architects, and engineers.
  • Problem-Solving:
    Strong troubleshooting and problem-solving skills to ensure the efficiency and reliability of data systems.
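As a rough example of the SQL and Python proficiency listed above, here is a hypothetical duplicate-key check run from the same kind of notebook; the crm.customers table and customer_id column are assumed names, not references to an actual system.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # reuses the Databricks session

    # Hypothetical consistency check: flag business keys that occur more than once.
    duplicates = spark.sql("""
        SELECT customer_id, COUNT(*) AS occurrences
        FROM crm.customers
        GROUP BY customer_id
        HAVING COUNT(*) > 1
    """)

    if duplicates.count() > 0:
        duplicates.show(20, truncate=False)  # surface the offending keys for the domain team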
Preferred Skills:
  • Experience in designing and managing the lifecycle of Data Products within a Data Mesh architecture.
  • Familiarity with data governance practices and tools for managing data quality and compliance.
  • Knowledge of cloud platforms like AWS, Azure, or GCP for deploying and managing data solutions.
Certifications:

Databricks Certification or other relevant data engineering certifications are a plus.
