
Senior Data Engineer

Job in 600001, Chennai, Tamil Nadu, India
Listing for: Confidential
Full Time position
Listed on 2026-02-04
Job specializations:
  • IT/Tech
    Data Engineer, Database Administrator, Data Warehousing
Job Description
Skills: Apache Spark, AWS Redshift, Kafka, Python, SQL, ETL Tools, Hadoop, Data Warehousing

Job Title: Senior Data Engineer (Database Design & Optimization Expert)
Location: Chennai
Experience: 10+ years
Employment Type: Full-time
Work Model: In-office

About Linarc

Linarc is revolutionizing the construction industry. As the emerging leader in construction technology, we are redefining how projects are planned, executed, and delivered.

Built for general contractors, construction managers, and trade partners, Linarc is a next-generation platform that brings unmatched collaboration, automation, and real-time intelligence to construction projects. Our mission is to eliminate inefficiencies, streamline workflows, and drive profitability, helping teams deliver projects faster, smarter, and with greater control.

Our platform is built to scale, from mid-sized contractors to enterprise-level builders, and is backed by a robust, high-performance data infrastructure. As we grow, we're investing deeply in our data and analytics capabilities to power real-time decisions across field and office teams.

This is your chance to help shape the future of construction tech by building resilient, scalable, and analytics-ready data systems at the core of Linarc's product.

Join us and be part of a high-impact, fast-growing team that's shaping the future of construction tech. If you thrive in a dynamic environment and want to make a real difference in the industry, Linarc is the place to be. This is a full-time position, and you will be working out of our HQ in Chennai.

Key Responsibilities

• Architect and manage high-performance RDBMS systems (e.g., PostgreSQL, MySQL) with a deep focus on performance tuning, indexing, and partitioning (a short illustrative sketch follows this list).
• Design and optimize document databases (e.g., MongoDB, DynamoDB) for flexible and scalable data models.
• Implement and manage real-time databases (e.g., Firebase, Firestore) for event-driven or live-sync applications.
• Manage and tune in-memory and embedded databases (e.g., Redis, SQLite) for low-latency data access and offline sync scenarios.
• Integrate and optimize data warehouse solutions (e.g., Redshift, Snowflake, BigQuery) for analytics and reporting.
• Build scalable ETL/ELT pipelines to move and transform data across transactional and analytical systems.
• Implement and maintain Elasticsearch for fast, scalable search and log indexing.
• Collaborate with engineering teams to build and maintain data models optimized for analytics and operational use.
• Write complex SQL queries, stored procedures, and scripts to support reporting, data migration, and ad-hoc analysis.
• Work with BI and data lineage tools such as Dataedo, dbt, or similar for documentation and governance.
• Define and enforce data architecture standards, best practices, and design guidelines.
• Tune database configurations for high-throughput and low-latency scenarios under different load profiles.
• Manage data access controls and backup/recovery strategies, and ensure data security on AWS (RDS, DynamoDB, S3, etc.).
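By way of illustration, here is a minimal sketch of the partitioning and indexing work named in the first responsibility, assuming PostgreSQL; the table and column names are hypothetical and not taken from Linarc's actual schema:

    -- Hypothetical example: a monthly range-partitioned events table.
    CREATE TABLE project_events (
        project_id  bigint      NOT NULL,
        occurred_at timestamptz NOT NULL,
        payload     jsonb
    ) PARTITION BY RANGE (occurred_at);

    -- One partition per month keeps scans and maintenance bounded.
    CREATE TABLE project_events_2026_02 PARTITION OF project_events
        FOR VALUES FROM ('2026-02-01') TO ('2026-03-01');

    -- Indexing the parent table (PostgreSQL 11+) cascades the index
    -- to every partition, covering the common project + time lookup.
    CREATE INDEX idx_project_events_lookup
        ON project_events (project_id, occurred_at);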

Required Qualifications

• 10+ years of professional experience as a Data Engineer or Database Architect.
• 6+ years of hands-on experience in database design, optimization, and configuration.
• Deep knowledge of RDBMS performance tuning, query optimization, and system profiling.
• Strong experience with NoSQL, real-time, and in-memory databases (MongoDB, Firebase, Redis, SQLite).
• Hands-on experience with cloud-native data services (AWS RDS, Aurora, DynamoDB, Redshift).
• Strong proficiency in structured query design, data modeling, and analytics optimization.
• Experience with data documentation and lineage tools such as Dataedo, dbt, or equivalent.
• Proficiency with Elasticsearch cluster management, search optimization, and data ingestion.
• Solid foundation in data warehouse integration and performance-tuned ETL pipelines.
• Excellent understanding of data security, encryption, and access control in cloud environments.
• Familiarity with event-driven architecture, Kafka, or streaming systems.
• Experience with CI/CD for data pipelines and infrastructure-as-code (Terraform, CloudFormation).
• Programming or scripting experience (Python, Bash, etc.) for data automation and orchestration.
• Exposure to dashboarding tools (e.g., Power BI, Tableau) and building datasets for visualization.
