
Principal Engineer T500-22524

Remote / Online - Candidates ideally in
500001, Hyderabad, Telangana, India
Listing for: CIBC India
Remote/Work from Home position
Listed on 2026-02-22
Job specializations:
  • IT/Tech
    Data Engineer, Data Science Manager
Job Description & How to Apply Below
Position: Principal Engineer [T500-22524]
About CIBC India:

CIBC India is a technology and operations hub in Hyderabad, where you’ll be part of our highly engaged and connected global team, and play a central role in supporting our continued growth. Whether you’re driving innovation in digital banking or streamlining complex client onboarding, you’ll be part of a culture recognized for excellence and investment in its people. At CIBC India, people and progress are at the center of what we do: you’ll have opportunities to develop your skills, collaborate with industry leaders, and see your ideas come to life in a culture that values progress and belonging.

We’re committed to providing cutting-edge resources, ongoing learning, and a supportive environment where people come first. If you’re ready to create meaningful change and build your future with us, CIBC India is where your ambition meets opportunity.

What You’ll Be Doing (position summary):

The Principal Engineer – Databricks & Python is responsible for designing, building, and optimizing data pipelines within a large-scale capital markets Databricks Lakehouse environment. You will design and implement the bronze, silver, and gold medallion pipelines that power analytics, reporting, RAG, semantic layers, and downstream AI agents. The role transforms complex capital markets data into high-quality, governed, and query-ready datasets, and ensures robust data engineering practices that support business and technology objectives.

At CIBC India we enable the work environment most optimal for you to thrive in your role. Details on your work arrangement (including on-site and remote work) will be discussed at the time of your interview.

How You’ll Succeed (responsibilities):

- Build robust ETL/ELT pipelines using Databricks, Delta Lake, PySpark, Spark SQL, and Python to enable high-quality data processing. Ingest complex datasets into Bronze tables, apply data validation and transformations into Silver, and model Gold-layer datasets for analytics and business consumption.
- Implement schema evolution, data quality enforcement, expectations, and automated lineage tracking.
- Optimize Spark jobs for performance and cost efficiency, ensuring Service Level Agreements (SLAs) are consistently met.
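The Bronze → Silver → Gold flow described in these responsibilities can be sketched conceptually in plain Python. This is an illustrative sketch only: the field names (trade_id, desk, notional) are hypothetical, and a production pipeline would use PySpark DataFrames, Delta Lake tables, and governed schemas rather than dicts.

```python
# Conceptual sketch of a medallion (Bronze -> Silver -> Gold) flow using plain
# Python dicts in place of Spark DataFrames. Field names are hypothetical.
from collections import defaultdict

def to_silver(bronze_records):
    """Validate and clean raw Bronze records into the Silver layer."""
    silver = []
    for rec in bronze_records:
        # Data-quality expectations: drop records missing a key or a valid amount.
        if rec.get("trade_id") is None:
            continue
        try:
            notional = float(rec["notional"])
        except (KeyError, TypeError, ValueError):
            continue
        silver.append({
            "trade_id": rec["trade_id"],
            "desk": rec.get("desk", "UNKNOWN").upper(),  # normalize for consumption
            "notional": notional,
        })
    return silver

def to_gold(silver_records):
    """Model a Gold-layer aggregate for analytics: total notional per desk."""
    totals = defaultdict(float)
    for rec in silver_records:
        totals[rec["desk"]] += rec["notional"]
    return dict(totals)

bronze = [
    {"trade_id": "T1", "desk": "rates", "notional": "1000000"},
    {"trade_id": None, "desk": "fx", "notional": "5"},   # rejected: missing key
    {"trade_id": "T2", "desk": "rates", "notional": "2500000"},
    {"trade_id": "T3", "notional": "bad"},               # rejected: invalid amount
]
silver = to_silver(bronze)
gold = to_gold(silver)
```

In a real Databricks workload, the validation step would typically be expressed as Delta Live Tables expectations or Spark SQL constraints rather than hand-rolled loops.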

Collaboration & Integration:

- Work closely with Quant, Risk, and Trading Technology teams to integrate pricing, risk, RFQ, and PnL data.
- Collaborate with AI teams to ensure data is prepared for retrieval-augmented generation (RAG), semantic models, and agent workflows.

Governance & Orchestration:

- Apply Unity Catalog for governance, lineage, classification, and entitlements.
- Use Delta Live Tables, Workflows, or Databricks Jobs for production orchestration.
- Maintain and enhance metadata, documentation, and data contracts.

DevOps & Automation:

- Build and maintain CI/CD pipelines for Databricks code using GitHub Actions or Azure DevOps.

Experience:

12+ years of experience

Who You Are (skills/qualifications):

Must-Have Skills:

- Strong experience with Databricks Lakehouse, Delta Lake, and PySpark.
- Deep expertise in Python for data transformations, UDFs, notebooks, and workflow automation.
- Advanced SQL skills with experience modeling Gold-layer datasets for analytics.
- Proficiency with data quality frameworks (Expectations, Delta Live Tables tests, custom validators).
- Solid understanding of medallion architecture and best practices for data reliability.

- Experience with ETL/ELT patterns and structured/unstructured data ingestion.

- Experience with Azure (Data Lake, Blob Storage, Key Vault, Data Factory) or AWS equivalents.
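As a rough illustration of the "custom validators" mentioned above, the sketch below applies named predicate rules to records and reports pass/fail counts, in the spirit of Delta Live Tables expectations. The rule names and fields are hypothetical, not taken from this posting.

```python
# Minimal custom data-quality validator: each rule is a named predicate, and
# records failing a rule are counted rather than silently dropped, so quality
# can be monitored over time. Rule names and record fields are hypothetical.

def validate(records, rules):
    """Apply named predicate rules to each record; return per-rule pass/fail counts."""
    report = {name: {"passed": 0, "failed": 0} for name in rules}
    for rec in records:
        for name, predicate in rules.items():
            key = "passed" if predicate(rec) else "failed"
            report[name][key] += 1
    return report

rules = {
    "non_null_id": lambda r: r.get("id") is not None,
    "positive_qty": lambda r: isinstance(r.get("qty"), (int, float)) and r["qty"] > 0,
}
records = [{"id": 1, "qty": 10}, {"id": None, "qty": 5}, {"id": 2, "qty": -3}]
report = validate(records, rules)
```

A production variant would attach such rules to Silver-layer tables as expectations, quarantining or failing the pipeline on violations per the data contract.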

Good to Have:

- Financial markets experience (risk data, PnL, trades, positions, yield curves, vol surfaces).

- Experience with real-time ingestion (Kafka, Solace, Event Hub).
- Experience building semantic models (Cube.js, dbt, AtScale, Microsoft Fabric).
- Knowledge of feature stores, MLflow, or model training on Databricks.

What CIBC India Offers:

At CIBC India, your goals are a priority. We start with your strengths and ambitions and strive to create opportunities to tap into your potential. We aspire to give you a career that goes well beyond your compensation.

- We work to recognize you in…