About The Role:
Eucloid is looking for a Senior Data Engineer to join our Data Platform team supporting various business applications. The ideal candidate will support the development of data infrastructure for our clients by participating in activities ranging from upstream and downstream technology selection to designing and building the different components. The candidate will also work on projects such as integrating data from various sources and managing big data pipelines that are easily accessible, while optimizing the performance of the overall ecosystem.
The ideal candidate is an experienced data wrangler who will support our software developers, database architects, and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions.
Responsibilities:
Responsible for the design, deployment, configuration, and operations of a multi-node big data cluster.
This includes working with open-source and/or commercial stacks to support the full SDLC. The resource will deploy, manage, and maintain development, test, and production environments for the big data platform.
Develop scripts to automate and streamline operations and configurations in the infrastructure
Specify, design, build, and support BI solutions by working closely with the data lake team
Create dashboards and KPIs to present business performance to management.
Design and maintain data models used for reporting and analytics
Identify infrastructure needs and provide support to developers and business users
Research performance issues and optimize the platform for performance
Troubleshoot and resolve issues in all operational environments
Work with a cross-functional team delivering software deployments
Think forward by continuously adopting new ideas and technologies to solve business problems
Own the design and development of automated solutions for recurring reporting and in-depth analysis.
A problem solver and critical thinker.
Skills and Qualifications
Skills needed:
Strong experience with data lake technologies – Spark, distributed file systems, YARN, cloud services (preferably GCP / AWS).
Strong experience with SQL tools – Vertica, Dremio, or any big data SQL engine
Scripting knowledge – Shell, Python
Experience with ETL and OLAP concepts in building highly scalable data pipelines
Exposure to any visualization system is a plus (e.g., Apache Superset, Tableau)
Experience with Agile, data structures, data analysis and wrangling tools and technologies.
Familiar with version control and relational databases.
Strong experience in monitoring, debugging, and troubleshooting services.
Experience in providing on-call support.
Basic Qualifications:
Bachelor's / Master's degree in Computer Science or a related field from a reputed institution
5+ years of professional experience in software, most of it at a product company
Preferred Qualifications:
Proficient in one or more technologies like:
AWS, EMR
Hadoop, Spark
SQL
Python
Data Structures
Experience working in a Linux-based environment
Good communication and design skills
If interested, share your updated resume at cha
Position Requirements
10+ years of work experience