TCS is inviting applications!!
Role:
Data Engineer (Azure Databricks, PySpark)
EXP: 5 - 9 YEARS
LOCATION:
Mumbai
**Virtual Interview**
Job Description:
Strong knowledge of Extraction, Transformation, and Loading (ETL) processes using frameworks such as Azure Data Factory, Synapse, or Databricks, including establishing cloud connectivity between systems such as ADLS, ADF, Synapse, and Databricks.
Candidates must possess hands-on Power BI skills.
Candidates must have a good understanding of Informatica.
Design and develop ETL processes in Python/PySpark on the Azure platform, based on functional and non-functional requirements (a minimal PySpark sketch follows this list).
A minimum of 5 years' experience with large SQL data marts and expert relational database experience. Candidates should demonstrate the ability to navigate massive volumes of data to deliver effective and efficient data extraction, design, load, and reporting solutions to business partners.
Experience in troubleshooting and supporting large databases and testing activities.
Identifying, reporting, and managing database security issues and user access/management.
Designing database backup, archiving, and storage; performance tuning; ETL imports of large volumes of data extracted from multiple systems; and capacity planning.
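
For illustration only, below is a minimal PySpark sketch of the kind of ETL work described above: reading raw data from an ADLS Gen2 location in Databricks, applying a simple transformation, and writing a curated Delta dataset. The storage account, container, paths, and column names are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# In Databricks a SparkSession is already provided as `spark`; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS Gen2 locations: abfss://<container>@<account>.dfs.core.windows.net/<path>
raw_path = "abfss://raw@examplestorageacct.dfs.core.windows.net/sales/2024/"
curated_path = "abfss://curated@examplestorageacct.dfs.core.windows.net/sales_daily/"

# Extract: read raw CSV files with headers, inferring the schema for brevity.
raw_df = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load(raw_path)
)

# Transform: basic cleansing plus a derived load-date column (assumed column names).
clean_df = (
    raw_df.dropDuplicates()
    .filter(F.col("amount").isNotNull())
    .withColumn("load_date", F.current_date())
)

# Load: write the curated data as Delta, partitioned by load date.
(
    clean_df.write.format("delta")
    .mode("overwrite")
    .partitionBy("load_date")
    .save(curated_path)
)
```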