Data Engineer
Job in South Tangerang, Banten, Indonesia
Listed on 2026-02-14
Listing for: SwiftMind Indonesia
Full Time position
Job specializations:
- IT/Tech: Data Engineer, Data Analyst, Data Science Manager, Data Warehousing
Job Description & How to Apply Below
Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes to support analytics and business intelligence initiatives
- Implement and optimize data storage solutions, ensuring data quality, accessibility, and security
- Collaborate with Data Scientists, Analytics teams, and other stakeholders to understand data requirements and deliver effective solutions
- Build and maintain data warehouses and data lakes, ensuring efficient data organization and retrieval
- Write clean, maintainable, and well-documented code following team standards and best practices
- Participate in code reviews and provide constructive feedback to team members
- Monitor and optimize data pipeline performance and efficiency
- Create and maintain comprehensive documentation for data processes and architectures
- Implement data validation and quality control measures
- Contribute to technical discussions and architectural planning sessions
- Share knowledge with team members and participate in mentoring activities
Requirements
- Bachelor's degree in Computer Science, Data Engineering, or related field
- Strong foundation in SQL and experience with relational databases (e.g., PostgreSQL, MySQL)
- Proficiency in at least one programming language (e.g., Python, Java, Scala)
- Experience with ETL tools and data pipeline development
- Understanding of data warehouse concepts and dimensional modeling
- Familiarity with version control systems (Git) and collaborative development workflows
- Basic knowledge of data security practices and compliance requirements
- Strong problem-solving skills and analytical thinking abilities
- Excellent communication skills in both technical and non-technical contexts
- Demonstrated interest in data engineering through projects or work experience
- Experience with big data technologies (e.g., BigQuery, Spark)
- Knowledge of stream processing frameworks (e.g., Kafka, RabbitMQ)
- Experience with data modeling and optimization techniques
- Knowledge of data governance principles
- Understanding of machine learning pipelines and requirements
- Experience with data visualization tools
- Understanding of CI/CD practices for data pipelines
- Knowledge of shell scripting for automation (e.g., Bash)
- Familiarity with Python web development (e.g., FastAPI, Flask) is a plus
- Proficiency with AI-powered coding assistants (e.g., GitHub Copilot, Cursor) is a plus
- Familiarity with data warehouse tools or relevant Apache frameworks (e.g., Airflow, Spark)