
Software Engineer - Data Storage & Data Lake; ByteDance

Job in Singapore, Singapore
Listing for: BYTEDANCE PTE. LTD.
Full Time position
Listed on 2026-03-10
Job specializations:
  • Software Development
    Data Engineer, Software Engineer
Salary/Wage Range or Industry Benchmark: SGD 80,000 - 100,000 per year
Job Description & How to Apply Below
Position: Software Engineer - Data Storage & Data Lake (ByteDance)

About Us

Founded in 2012, ByteDance's mission is to inspire creativity and enrich life. With a suite of more than a dozen products, including TikTok, Lemon8, CapCut, and Pico, as well as platforms specific to the China market, including Toutiao, Douyin, and Xigua, ByteDance has made it easier and more fun for people to connect with, consume, and create content.

Why Join ByteDance

Inspiring creativity is at the core of ByteDance's mission. Our innovative products are built to help people authentically express themselves, discover and connect – and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity and enrich life - a mission we work towards every day.

As ByteDancers, we strive to do great things with great people. We lead with curiosity, humility, and a desire to make an impact in a rapidly growing tech company. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our Company, and our users. When we create and grow together, the possibilities are limitless. Join us.

Diversity & Inclusion

ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe, and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach.

We are passionate about this and hope you are too.

Job highlights

Career growth opportunity, Paid leave, Flat organization

Responsibilities

About the team

The Data Ecosystem Team designs and operates the storage solution for offline data in our recommendation system, which serves more than a billion users. Its primary objectives are system reliability, uninterrupted service, and seamless performance. The team builds storage and computing infrastructure that adapts to the recommendation system's diverse data sources and storage needs.

Its ultimate goal is to deliver efficient, affordable data storage with easy-to-use data management tools for the recommendation, search, and advertising functions.

What you will be doing:
  • Design and develop components related to the distributed database HBase.
  • Design and develop components related to RocksDB, a single-node LSM storage engine.
  • Design and implement an offline/real-time data architecture for large-scale recommendation systems.
  • Design and implement a flexible, scalable, stable, and high-performance storage system and computation model.
  • Troubleshoot production systems, and design and implement necessary mechanisms and tools to ensure the overall stability of production systems.
  • Build industry-leading distributed systems such as offline and online storage, batch, and stream processing frameworks, providing reliable infrastructure for massive data and large-scale business systems.
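As context for the RocksDB work listed above, here is a minimal, illustrative sketch of the LSM-tree write path (an in-memory memtable flushed into immutable sorted runs). This is a toy model for orientation only, not ByteDance's implementation; a real engine like RocksDB adds write-ahead logs, bloom filters, on-disk SSTables, and compaction.

```python
# Toy LSM-tree write path: buffer writes in a memtable, flush it into
# immutable sorted runs, and answer reads newest-first so the latest
# write wins. Illustrative sketch only.
class TinyLSM:
    def __init__(self, memtable_limit=4):
        self.memtable = {}            # in-memory buffer for recent writes
        self.runs = []                # flushed, immutable sorted runs (newest first)
        self.memtable_limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.memtable_limit:
            self._flush()

    def _flush(self):
        # Sort the memtable contents and seal them as an immutable run.
        run = dict(sorted(self.memtable.items()))
        self.runs.insert(0, run)      # newest run is consulted first on reads
        self.memtable = {}

    def get(self, key):
        if key in self.memtable:      # most recent writes live in the memtable
            return self.memtable[key]
        for run in self.runs:         # then scan runs newest-to-oldest
            if key in run:
                return run[key]
        return None
```

Sequential flushes are what make LSM engines write-efficient: every write is an in-memory update plus an occasional bulk sorted flush, at the cost of reads touching several runs (which compaction later merges).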
Qualifications

Minimum Qualifications
  • Bachelor's degree or above in Computer Science or a related field, with 3+ years of experience building scalable systems;
  • Proficiency in common big data processing systems such as Spark/Flink at the source-code level is required, with a preference for experience customizing or extending these systems.

Preferred Qualifications
  • A deep understanding of the source code of at least one data lake technology, such as Hudi, Iceberg, or Delta Lake, is highly valuable and should be prominently showcased in your resume, especially if you have practical implementation or customization experience;
  • Knowledge of HDFS principles is expected, and familiarity with columnar storage formats like Parquet/ORC is an additional advantage;
  • Prior experience in data warehouse modeling;
  • Proficiency in programming languages such as Java, C++, and Scala, along with strong coding and troubleshooting skills;
  • Experience with other big data systems/frameworks like Hive, HBase, or Kudu is a plus;
  • A willingness to tackle challenging problems without clear solutions, a strong enthusiasm for learning new technologies, and prior experience managing large-scale data (petabyte range) are all advantageous.
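The Parquet/ORC mention above refers to columnar storage. The toy sketch below shows the core idea: pivoting row records into per-column arrays so an analytical scan touches only the fields it needs. The field names and data are invented for illustration; real formats add encoding, compression, and footer metadata.

```python
# Row-oriented vs column-oriented layout, the idea behind Parquet/ORC.
# Sketch only -- field names and values are made up for illustration.
rows = [
    {"user_id": 1, "country": "SG", "clicks": 12},
    {"user_id": 2, "country": "US", "clicks": 7},
    {"user_id": 3, "country": "SG", "clicks": 30},
]

# Pivot the row table into one contiguous list per column.
columns = {field: [r[field] for r in rows] for field in rows[0]}

# An aggregate like "total clicks" now reads one column, not every row.
total_clicks = sum(columns["clicks"])   # -> 49
```

Skipping unneeded columns is why columnar formats dominate analytical workloads: scans read less data, and per-column value runs compress far better than mixed row bytes.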