Data Engineer
Listed on 2026-02-07
-
IT/Tech
Data Engineer, Cloud Computing
About the Role
As a data engineer, you will work on all aspects of data, from platform and infrastructure build-out to pipeline engineering and writing tooling/services that augment and front the core platform.
You will be responsible for building and maintaining a state-of-the-art data lifecycle management platform, covering acquisition, storage, processing, integration, and consumption channels.
The team works closely with data scientists, product managers, legal, compliance, and business stakeholders across SEA to understand their needs and tailor the offerings accordingly.
As a member of the data organization, you will be an early adopter of and contributor to various big data technologies, and you are encouraged to think outside the box and have fun exploring the latest patterns and designs in software and data engineering.
Work Responsibilities
- Build and manage the data asset using some of the most scalable and resilient open-source big data technologies, like Airflow, Spark, DBT, Kafka, YARN/Kubernetes, Elasticsearch, Snowflake, the visualization layer, and more.
- Design and deliver the next-gen data lifecycle management suite of tools/frameworks, including ingestion, processing, integration, and consumption on top of the data lake, to support real-time, API-based, and serverless use cases, along with batch (mini/micro) as needed.
- Enable Data Science teams to train, test, and productionize various ML models, including propensity, risk, and fraud models, to better understand, serve, and protect our customers.
- Lead and/or participate in technical discussions across the organization through collaboration, including running RFC and architecture review sessions, tech talks on new technologies, and retrospectives.
- Apply core software engineering and design concepts to create operational as well as strategic technical roadmaps for business problems that are vague or not fully understood.
- Obsess over security by ensuring all components, from the platform and frameworks to the applications, are fully secure and compliant with the group's infosec policies.
Job Requirements
- 2+ years of relevant experience developing scalable, secure, fault-tolerant, resilient, and mission-critical big data platforms.
- Able to maintain and monitor the ecosystem with high availability
- Must have a sound understanding of all big data components and administration fundamentals, with hands-on experience building a complete data platform using various open-source technologies.
- Must have good fundamental, hands-on knowledge of Linux and of building a big data stack on top of AWS/GCP using Kubernetes.
- Strong understanding of big data and related technologies like Spark, Presto, Airflow, HDFS, YARN, Snowflake, etc.
- Good knowledge of Complex Event Processing (CEP) systems like Spark Streaming, Kafka, Apache Flink, Beam, etc.
- Experience with NoSQL databases (KV/document/graph and similar).
- Able to drive best practices like CI/CD, containerization, blue-green deployments, 12-factor apps, secrets management, etc. in the data ecosystem.
- Able to develop an agile platform with auto-scaling capability, both vertically and horizontally.
- Must be able to create a monitoring ecosystem for all components in use in the data ecosystem.
- Proficiency in at least one of Java, Scala, or Python, along with a fair understanding of runtime complexities.