Data Platform Engineer
Listed on 2026-01-26
IT/Tech
Data Engineer, Big Data, Cloud Computing
Data platforms are complex and require specialized skills to ensure they run securely, reliably, and cost-efficiently. This role focuses on building and managing the infrastructure that supports big data and analytics solutions, handling performance tuning, scaling, and compliance, so that data engineers and analysts can focus on delivering insights without being slowed down by infrastructure issues.
✍️ Highlights of the Job Description
- Big Data & Analytics Platforms – Hands‑on with Hadoop, Spark, Kafka, Flink, Presto/Trino, Elasticsearch, and cloud‑native big data services (AWS EMR, GCP Dataproc/BigQuery, Alibaba Cloud E‑MapReduce, Snowflake, etc.).
- Data Platform Architecture Design – Designing scalable, secure, and high‑availability infrastructure for data lakes, data warehouses, and streaming pipelines across multi‑cloud environments.
- Infrastructure as Code & Automation – Strong experience with Terraform, Ansible, Helm, and Kubernetes for automated provisioning, scaling, and configuration management.
- Cloud Services for Data – Skilled in AWS (S3, EMR, Redshift), GCP (BigQuery, Dataproc, Pub/Sub), and Alibaba Cloud (OSS, E‑MapReduce, MaxCompute, DataWorks), etc.
- Data Security – Protecting data across multiple sources and platforms with encryption (in‑transit and at‑rest), secure access control (IAM/RBAC), secret and key management, tokenization/masking for sensitive data, network isolation, and audit logging.
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field
- Proven experience as a Data Infrastructure Engineer, Data Engineer, or similar role
- Strong understanding of data infrastructure, data pipelines, and data architecture
- Experience with cloud platforms such as AWS, GCP, or Azure
- Proficiency in building and maintaining data pipelines using tools such as Airflow, Kafka, or similar
- Strong knowledge of SQL and experience with relational and non‑relational databases
- Experience with data warehousing solutions (e.g., BigQuery, Redshift, Snowflake)
- Familiarity with ETL/ELT processes and best practices
- Experience with containerization and orchestration tools such as Docker and Kubernetes is a plus
- Solid understanding of data security, data governance, and access control
- Ability to monitor, optimize, and troubleshoot data infrastructure performance and reliability
- Experience with version control systems (e.g., Git) and CI/CD pipelines
- Strong problem‑solving skills and attention to detail
- Good communication skills and ability to collaborate with cross‑functional teams
- Ability to work independently and manage multiple priorities in a fast‑paced environment