Roles & Responsibilities
Design and implement enterprise data models (conceptual, logical and physical) to support analytical, transactional and AI/ML workloads.
Define and maintain data architecture standards, modeling conventions, integration patterns and best practices across the group.
Lead the design and optimization of data warehouse and data lake architectures.
Model and manage structured, semi-structured and unstructured data using relational and NoSQL databases.
Develop data integration strategies (ETL/ELT) and partner with data engineering teams to build reliable pipelines.
Optimize database design for performance, partitioning, indexing, and query efficiency.
Partner with data governance, security and compliance teams to implement data security, privacy, lineage, cataloging and access control frameworks.
Collaborate with data scientists, BI developers and data engineers to ensure architecture enables advanced analytics, AI/ML pipelines and self-service BI.
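The responsibilities above center on dimensional modeling and query-efficient database design. A minimal, purely illustrative sketch (not part of the posting; all table and column names are hypothetical) using stdlib SQLite shows a star schema with a fact table, a dimension table, and an index on the join column:

```python
import sqlite3

# Hypothetical example: a tiny star schema demonstrating dimensional
# modeling plus an index for query efficiency.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: one row per customer.
cur.execute("CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT)")

# Fact table: one row per sale, keyed to the dimension.
cur.execute("""CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES dim_customer(customer_id),
    amount REAL)""")

# Index on the join/filter column so analytical queries avoid full scans.
cur.execute("CREATE INDEX idx_sales_customer ON fact_sales (customer_id)")

cur.executemany("INSERT INTO dim_customer VALUES (?, ?)",
                [(1, "EMEA"), (2, "APAC")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 1, 100.0), (2, 1, 50.0), (3, 2, 75.0)])

# A typical analytical query: revenue by region.
rows = cur.execute("""SELECT d.region, SUM(f.amount)
                      FROM fact_sales f
                      JOIN dim_customer d USING (customer_id)
                      GROUP BY d.region ORDER BY d.region""").fetchall()
print(rows)  # [('APAC', 75.0), ('EMEA', 150.0)]
```

At enterprise scale the same separation of facts and dimensions applies, with partitioning and warehouse-specific clustering taking the place of a simple index.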
Qualifications, Experience, and Skills
QUALIFICATIONS
Bachelor’s degree in Computer Science, Information Systems, Data Engineering, or a related field.
Master’s degree and certifications in cloud data architecture (AWS, Azure, GCP) or enterprise architecture frameworks (TOGAF, DAMA-DMBOK) are a plus.
EXPERIENCE
7+ years of progressive experience in data architecture, data engineering or database design.
Proven experience designing enterprise-scale data models and leading architecture design.
Hands-on expertise with relational databases (PostgreSQL, MySQL, SQL Server, Oracle) and NoSQL technologies (MongoDB, Cassandra, DynamoDB, Cosmos DB).
Exposure to graph databases (Neo4j, Amazon Neptune) is a plus.
Experience with cloud data platforms such as Snowflake, BigQuery, Redshift or Azure Synapse.
Experience with ETL/ELT tools such as Informatica, Talend, dbt or Apache NiFi.
Experience designing real-time streaming pipelines with Kafka, Spark Streaming or Flink.
TECHNICAL AND INTERPERSONAL SKILLS
Expert in data modeling tools (Erwin, ER/Studio, SQL Power Architect).
Advanced SQL skills for schema design, optimization and performance tuning.
Proficiency in Python or Scala for scripting and pipeline prototyping.
Knowledge of orchestration frameworks (Apache Airflow, Luigi, Prefect).
Familiarity with containerization (Docker, Kubernetes) and DevOps & CI/CD practices.
Understanding of data governance, security and compliance frameworks.
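The skills list mentions orchestration frameworks such as Apache Airflow, Luigi and Prefect, all of which model a pipeline as a directed acyclic graph of tasks. A minimal sketch of that core idea (purely illustrative, not part of the posting; the task names are hypothetical) can be written with the stdlib `graphlib` module:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline DAG: each task maps to the set of upstream
# tasks it depends on, mirroring what Airflow-style schedulers track.
dag = {
    "extract_orders": set(),
    "extract_customers": set(),
    "transform_join": {"extract_orders", "extract_customers"},
    "load_warehouse": {"transform_join"},
}

# static_order() yields a valid execution order: every task appears
# only after all of its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Real orchestrators add scheduling, retries and parallel execution of ready tasks on top of exactly this dependency resolution.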