Data Engineer
Location: Pune
Experience: 6+ years
Specific Requirements:
Development, maintenance, and enhancement of data pipelines (ETL/ELT) and processes, with thorough knowledge of star/snowflake schemas (see the sketch after this list)
Hands-on experience with IDMC Data Integration
Experience in data warehouse (DW) production support and troubleshooting data or pipeline issues
Developing complex SQL queries and optimizing SQL performance
Development experience must cover the full life cycle, including business requirements gathering, data sourcing, testing/data reconciliation, and deployment within a Business Intelligence/Data Warehousing architecture.
Designing and implementing data security
Monitoring and optimizing data storage and data processing
Use of object-oriented and functional scripting languages, including Scala, C++, Java, and Python
Cloud data warehouse and data lake platforms (e.g., Snowflake, Databricks, Redshift, BigQuery, MS Synapse, MS ADLS, Apache Hadoop)
Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.
Understanding of data science concepts
Familiarity with AI/ML frameworks and libraries
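For illustration only, here is a minimal PySpark sketch of the kind of star-schema ETL step referenced above. The table names, columns, and the broadcast-join optimization are hypothetical assumptions for the example, not details taken from this posting.

```python
# Minimal star-schema ETL sketch in PySpark.
# Table and column names (fact_sales, dim_customer, dim_date, etc.)
# are hypothetical examples, not part of the job description.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star_schema_etl_sketch").getOrCreate()

# Load the fact table and two dimension tables from a warehouse/lake layer.
fact_sales = spark.table("warehouse.fact_sales")
dim_customer = spark.table("warehouse.dim_customer")
dim_date = spark.table("warehouse.dim_date")

# Join facts to dimensions on surrogate keys (the typical star-schema pattern),
# then aggregate. Broadcasting small dimensions is a common optimization.
daily_revenue = (
    fact_sales
    .join(F.broadcast(dim_customer), "customer_key")
    .join(F.broadcast(dim_date), "date_key")
    .groupBy("calendar_date", "customer_segment")
    .agg(F.sum("sales_amount").alias("total_revenue"))
)

# Write the derived mart table back for BI consumption.
daily_revenue.write.mode("overwrite").saveAsTable("marts.daily_revenue")
```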
Business Competencies:
• Developing data services that are fit for purpose, resilient, scalable, and future-proof to meet user needs
• Advanced SQL knowledge and experience working with relational databases, along with working familiarity with a variety of database platforms.
• Demonstrated understanding and experience using software and tools including ETL, relational SQL and NoSQL databases, and big data tools such as Kafka, Spark, and Hadoop (see the sketch after this list)
• Experience building and optimizing ‘big data’ data pipelines, architecture, and data sets.
• Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
• Strong analytic skills related to working with unstructured datasets.
• Experience and knowledge of project management best practices and agile software development methodologies
• A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
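For illustration only, a minimal sketch of the message queuing and stream processing skills referenced above, using Kafka with Spark Structured Streaming. The broker address, topic, schema, and output paths are hypothetical placeholders, and the Kafka source assumes the spark-sql-kafka connector package is available.

```python
# Minimal stream-processing sketch: consume events from a Kafka topic with
# Spark Structured Streaming and write windowed aggregates to a data lake.
# Broker, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("stream_processing_sketch").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read raw events from a Kafka topic (requires the spark-sql-kafka package).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "sales_events")
    .load()
)

# Parse the JSON payload and aggregate amounts over 5-minute event-time windows.
events = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)
windowed = (
    events
    .withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"))
    .agg(F.sum("amount").alias("total_amount"))
)

# Append results to a Parquet location in the data lake.
query = (
    windowed.writeStream
    .outputMode("append")
    .format("parquet")
    .option("path", "/lake/streaming/sales_5min")
    .option("checkpointLocation", "/lake/checkpoints/sales_5min")
    .start()
)
query.awaitTermination()
```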