
Data Scientist

Job in Bengaluru, 560001, Bangalore, Karnataka, India
Listing for: HCLTech
Full Time position
Listed on 2026-03-06
Job specializations:
  • IT/Tech
    Machine Learning / ML Engineer, Data Scientist
Job Description & How to Apply Below
Location: Bengaluru

Data Science opening for Bengaluru/Chennai/Hyderabad/Noida (immediate joiners only).
Only candidates with 5+ years of experience should apply.

Job Overview:
We are seeking a skilled Machine Learning Engineer, Data Scientist, or Data Analyst to design, develop, and deploy machine learning models, conduct deep data analysis, and generate actionable insights. The ideal candidate will have experience in data preprocessing, feature engineering, model development, and performance optimization, working with large datasets and leveraging advanced machine learning frameworks.

Key Responsibilities:

Data Preparation & Analysis:
Gather, clean, and preprocess structured, semi-structured, and unstructured data from various sources.
Conduct exploratory data analysis (EDA) to identify trends, patterns, and outliers.
Apply data wrangling techniques using Pandas, NumPy, and SQL to transform raw data into usable formats.
Use statistical analysis to drive data-driven decision-making.
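The wrangling and EDA duties above might look like the following minimal Pandas sketch (the DataFrame, column names, and median-imputation strategy are invented for illustration, not taken from this posting):

```python
import pandas as pd

# Toy dataset with a missing value (hypothetical columns for illustration).
df = pd.DataFrame({
    "region": ["North", "South", "North", "South"],
    "sales":  [100.0, None, 300.0, 250.0],
})

# Impute the missing sale with the column median (one common choice).
df["sales"] = df["sales"].fillna(df["sales"].median())

# Simple EDA: per-region averages to surface trends.
summary = df.groupby("region")["sales"].mean()
print(summary)
```

In practice the imputation strategy (median, mean, forward-fill, or dropping rows) depends on why the data is missing.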
Machine Learning Model Development:
Build, train, and fine-tune machine learning models using Scikit-learn, TensorFlow, Keras, or PyTorch.
Develop predictive models, classification algorithms, clustering models, and recommendation systems.
Conduct hyperparameter optimization using techniques like grid search or random search.
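The grid-search step above can be sketched with Scikit-learn's `GridSearchCV` (the dataset, model, and parameter grid here are placeholders chosen for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Exhaustively evaluate each candidate C with 5-fold cross-validation.
grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```

Swapping `GridSearchCV` for `RandomizedSearchCV` gives the random-search variant when the grid is too large to enumerate.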
Model Evaluation & Optimization:
Evaluate model performance using metrics such as accuracy, precision, recall, F1-score, AUC-ROC, the confusion matrix, and cross-validation.
Improve model performance through techniques such as feature engineering, data augmentation, and regularization.
Deploy models into production environments, and monitor performance for continual improvement.
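The classification metrics named above all reduce to ratios over confusion-matrix counts; a dependency-free sketch with toy labels invented for illustration:

```python
# Toy ground-truth and predicted labels (binary classification).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Confusion-matrix counts.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

accuracy  = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)
```

In day-to-day work `sklearn.metrics` computes the same quantities, but the formulas are worth knowing when interpreting a confusion matrix.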
Data Visualization & Reporting:
Develop dashboards and reports using Tableau, Power BI, Matplotlib, Seaborn, or Plotly.
Present findings through clear visualizations and actionable insights to non-technical stakeholders.
Write detailed reports on data analysis and machine learning results, ensuring transparency and reproducibility.
Collaboration & Stakeholder Communication:
Work closely with cross-functional teams (e.g., engineering, product, business) to define data-driven solutions.
Communicate technical concepts clearly to non-technical stakeholders and provide insights that influence product and business strategy.
Data Pipeline & Automation:
Design and implement scalable data pipelines for model training and deployment using Airflow, Apache Kafka, or Celery.
Automate data collection, preprocessing, and feature extraction tasks.
Research & Continuous Learning:
Stay up to date with the latest trends in machine learning, deep learning, and data science methodologies.
Explore new tools, techniques, and frameworks to improve model accuracy and efficiency.

Required Skills:

Programming Languages:
Strong proficiency in Python, with experience in SQL.
Machine Learning:
Hands-on experience with Scikit-learn, TensorFlow, Keras, PyTorch, or similar ML libraries.
Data Analysis:
Strong skills in Pandas, NumPy, and Matplotlib for data manipulation and analysis.
Statistical Analysis:
Experience applying statistical methods to data, including hypothesis testing and regression analysis.
Cloud Platforms:
Familiarity with AWS, Azure, or Google Cloud for deploying models and using cloud-native data services (e.g., AWS SageMaker, Azure ML).
Data Visualization:
Experience using Tableau, Power BI, Matplotlib, Seaborn, or Plotly for creating visualizations.
SQL & Databases:
Proficiency in SQL for querying relational databases and working with NoSQL databases (e.g., MongoDB, BigQuery).
Version Control:
Experience using Git for version control.
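The SQL requirement above can be illustrated with Python's built-in sqlite3 module; the table and column names are invented for this sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("North", 100.0), ("South", 250.0), ("North", 300.0)],
)

# Aggregate query: total sales per region, largest first.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM orders GROUP BY region ORDER BY total DESC"
).fetchall()
print(rows)  # [('North', 400.0), ('South', 250.0)]
```

The same `GROUP BY` / `ORDER BY` pattern carries over to MySQL, PostgreSQL, or BigQuery with only dialect-level changes.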

Desirable Skills:

Big Data Technologies:
Familiarity with tools like Apache Hadoop, Spark, Dask, or Google BigQuery for processing large datasets.
Deep Learning:
Experience with deep learning frameworks such as TensorFlow, PyTorch, or MXNet.
NLP & Computer Vision:
Experience with natural language processing (NLP) using spaCy, NLTK, or transformers, and computer vision using OpenCV or TensorFlow.
MLOps:
Familiarity with MLOps tools like Kubeflow, MLflow, or DVC for managing model workflows.
Data Engineering:
Experience with ETL tools like Apache Airflow, Talend, AWS Glue, or Google Dataflow for data pipeline automation.

Tools & Technologies:
Machine Learning: Scikit-learn, TensorFlow, PyTorch, Keras, XGBoost.
Data Analysis: Pandas, NumPy, Matplotlib, Seaborn, Plotly.
Cloud Platforms: AWS, Google Cloud, Azure.
Databases: MySQL, PostgreSQL, MongoDB, BigQuery, Snowflake.
Data Visualization: Tableau, Power BI, Matplotlib, Seaborn, Plotly.
Version Control: Git.

Mail your resume to:
CTC:
ECTC:
Preferred location (Bangalore/Chennai/Hyderabad/Noida only):
Notice period:
Years of experience: