Data Platform Architect
Listed on 2026-01-01
-
IT/Tech
Data Engineer, Data Science Manager
Master Works is excited to invite applications for the position of Enterprise Data Platform Architect. In this strategic role, you will guide the design and implementation of enterprise-wide data platforms that facilitate effective data management, analytics, and governance. You will work collaboratively with stakeholders across the organization to develop data architecture strategies that empower the business while ensuring compliance with industry standards.
Your expertise will play a crucial role in optimizing data flow, storage, and accessibility, making data a valuable asset for decision-making. As a champion for best practices in data architecture, you will lead initiatives to promote data integrity, security, and scalability throughout the enterprise, ultimately transforming the way Master Works leverages its data assets for business success.
- Architect, implement, and maintain enterprise-scale data solutions, combining data virtualization (Denodo) and big data ecosystem technologies (Cloudera, Hadoop, Spark, Hive, Kafka, etc.).
- Integrate complex structured and unstructured data sources (SQL/NoSQL, cloud platforms, applications) into unified, high-performance data layers.
- Design, optimize, and monitor large-scale data pipelines, virtual views, and workflows for high-performance, low-latency access.
- Implement and enforce data governance, security, and access control policies across all data platforms.
- Collaborate with data engineers, analysts, and business stakeholders to translate requirements into scalable and robust solutions.
- Troubleshoot, monitor, and continuously improve system performance, reliability, and scalability.
- Maintain best practices, documentation, and knowledge sharing for enterprise data platforms.
- Extensive experience with Denodo Platform, Cloudera Hadoop ecosystem, and enterprise data virtualization.
- Strong expertise in SQL, data modeling, query optimization, and distributed computing concepts.
- Proficient in big data tools: Spark, Hive, Impala, HBase, Kafka, and Sqoop.
- Solid understanding of ETL processes, data integration, and cloud data services.
- Proven ability to manage complex, enterprise-scale data projects with high-quality results.
- Excellent problem-solving, analytical, and communication skills.
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
- Minimum of 7 years of experience in a related field.