Job Description
Experience Level: 3-14 Years
Location: Bangalore/Hyderabad
Mandatory Skills: Azure Databricks, PySpark, SQL
Notice Period: Immediate to 20 Days
Role & Responsibilities
Experience in data warehouse/ETL projects.
Deep understanding of Star and Snowflake dimensional modelling.
Strong knowledge of Data Management principles
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture
Should have hands-on experience in SQL, Python, and Spark (PySpark); see the brief sketch after this list
Candidate must have experience with the AWS/Azure stack
Desirable: experience with ETL for both batch and streaming data (e.g., Kinesis)
Experience in building ETL / data warehouse transformation processes
Experience with Apache Kafka for use with streaming data / event-based data
Experience with other open-source big data products such as Hadoop (incl. Hive, Pig, Impala)
Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j)
Experience working with structured and unstructured data including imaging & geospatial data.
Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
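
For illustration, here is a minimal PySpark / Delta Lake sketch of the kind of warehouse transformation work listed above. It is an assumption-laden example, not part of this posting: all paths, table names, and column names are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("warehouse-etl-sketch").getOrCreate()

    # Read raw events from a Delta table (hypothetical path).
    raw = spark.read.format("delta").load("/mnt/raw/events")

    # Warehouse-style transformation: roll raw events up into a daily
    # fact table (hypothetical columns event_ts and customer_id).
    daily = (
        raw.withColumn("event_date", F.to_date("event_ts"))
           .groupBy("event_date", "customer_id")
           .agg(F.count("*").alias("event_count"))
    )

    # Write the result back as a Delta table, overwriting on each run.
    daily.write.format("delta").mode("overwrite").save("/mnt/curated/daily_events")

On Databricks this runs as-is in a notebook (the delta format is built in); outside Databricks it would additionally need the delta-spark package configured.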
Position Requirements
10+ years of work experience