
Data Scientist

Job in Toronto, Ontario, C6A, Canada
Listing for: Cogency
Full Time position
Listed on 2026-02-16
Job specializations:
  • IT/Tech
    AI Engineer, Machine Learning/ ML Engineer, Data Engineer, Data Scientist
Salary/Wage Range or Industry Benchmark: CAD 100,000 – 125,000 yearly
Job Description & How to Apply Below

Cogency is a consulting and technology services firm delivering enterprise digital transformation solutions across financial services and regulated industries. We specialize in building scalable digital platforms, automating business processes, and enabling data-driven decision-making through modern cloud and low-code technologies.

Position Summary

We are seeking an experienced Data Scientist specializing in AI/ML and Generative AI to design, develop, deploy, and operationalize advanced machine learning solutions. This role requires deep expertise in model development, performance optimization, and production deployment using modern MLOps and LLMOps frameworks within cloud-based environments.

The ideal candidate combines strong research and modeling capabilities with hands‑on experience delivering scalable, enterprise‑grade AI systems leveraging AWS, Snowflake, and streaming technologies such as Kafka.

Key Responsibilities

AI/ML & Generative AI Development
  • Design, develop, and deploy machine learning and generative AI models to solve complex business problems.
  • Build deep learning and large language model (LLM)-based solutions using TensorFlow and PyTorch.
  • Develop custom training pipelines, feature engineering processes, and model evaluation frameworks.
  • Fine‑tune, optimize, and adapt pre‑trained models for domain‑specific use cases.
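To illustrate the "custom training pipelines, feature engineering" bullet above, here is a minimal plain-Python sketch of that flow; a real pipeline would use TensorFlow or PyTorch, and every name here (`engineer_features`, `train_logistic`, the toy data) is illustrative, not part of Cogency's actual stack.

```python
# Toy training pipeline: feature engineering -> training -> prediction.
import math

def engineer_features(raw):
    """Toy feature step: standardize a single numeric column."""
    mean = sum(raw) / len(raw)
    std = (sum((x - mean) ** 2 for x in raw) / len(raw)) ** 0.5 or 1.0
    return [(x - mean) / std for x in raw]

def train_logistic(xs, ys, lr=0.5, epochs=200):
    """Fit w, b for p(y=1|x) = sigmoid(w*x + b) by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

raw = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]   # raw input column
ys  = [0, 0, 0, 1, 1, 1]                   # binary labels
xs = engineer_features(raw)
w, b = train_logistic(xs, ys)
preds = [1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0 for x in xs]
```

The same shape (transform, fit, predict) carries over directly when the model is a fine-tuned pre-trained network rather than a hand-rolled classifier.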
Model Optimization & Performance Management
  • Establish model performance benchmarks and evaluation metrics (accuracy, precision, recall, F1, AUC, latency, throughput, etc.).
  • Conduct hyperparameter tuning and model optimization to improve scalability and efficiency.
  • Monitor model drift, bias, and degradation in production environments.
  • Implement automated retraining and performance monitoring strategies.
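The benchmark metrics named above have precise definitions worth keeping in mind; as a sketch, they can be computed from scratch (in practice they would come from a library such as scikit-learn's `sklearn.metrics`):

```python
# Classification benchmarks from confusion-matrix counts.
def classification_metrics(y_true, y_pred):
    """Return accuracy, precision, recall, and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": (tp + tn) / len(y_true),
            "precision": precision, "recall": recall, "f1": f1}

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
metrics = classification_metrics(y_true, y_pred)
```

Latency and throughput are measured at the serving layer rather than from labels, and drift monitoring compares these same statistics across time windows.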
MLOps & LLMOps Implementation
  • Design and implement scalable ML pipelines using MLOps best practices, including CI/CD for models.
  • Manage model versioning, artifact tracking, and reproducibility.
  • Automate model deployment workflows and rollback mechanisms.
  • Apply LLMOps principles for managing the large language model lifecycle, including prompt management, evaluation, and governance.
  • Ensure security, compliance, and governance standards for AI systems.
  • Deploy and manage ML solutions within AWS cloud environments.
  • Integrate models with Snowflake and other enterprise data platforms for scalable data access and inference.
  • Collaborate closely with data engineering teams to ensure efficient data pipelines and feature stores.
  • Optimize cloud resource utilization and cost efficiency for model training and inference workloads.
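The versioning, artifact-tracking, and rollback bullets above can be sketched with a toy in-memory registry; this is an assumption-laden stand-in (production teams typically use MLflow, a SageMaker Model Registry, or similar), and the `ModelRegistry` class and model name are hypothetical.

```python
# Toy model registry: immutable versions, artifact hashes, rollback.
import hashlib
from datetime import datetime, timezone

class ModelRegistry:
    def __init__(self):
        self._models = {}  # name -> ordered list of version records

    def register(self, name, artifact_bytes, metrics, params):
        """Record a new immutable version of a model artifact."""
        record = {
            "version": len(self._models.get(name, [])) + 1,
            "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
            "metrics": metrics,
            "params": params,
            "registered_at": datetime.now(timezone.utc).isoformat(),
        }
        self._models.setdefault(name, []).append(record)
        return record

    def latest(self, name):
        return self._models[name][-1]

    def rollback(self, name):
        """Drop the newest version, exposing the previous one again."""
        self._models[name].pop()
        return self.latest(name)

reg = ModelRegistry()
reg.register("churn-model", b"weights-v1", {"auc": 0.81}, {"lr": 0.1})
reg.register("churn-model", b"weights-v2", {"auc": 0.79}, {"lr": 0.3})
current = reg.rollback("churn-model")  # v2 underperformed; revert to v1
```

Hashing the artifact is what makes reproducibility checkable: a deployed model can always be verified against the exact bytes that were registered.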
Real‑Time & Streaming Use Cases (Preferred)
  • Support real‑time or near real‑time inference workflows using Kafka.
  • Design event‑driven architectures for streaming model predictions and analytics.
  • Ensure low‑latency, high‑availability model serving in distributed systems.
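The streaming-inference flow above follows a consume–deserialize–score–publish loop. As a runnable sketch, a `queue.Queue` stands in for the Kafka topic (a real consumer would poll a broker, e.g. via the `confluent-kafka` client), and the threshold model is purely hypothetical:

```python
# Event-driven inference loop with an in-memory stand-in for a Kafka topic.
import json
import queue

events = queue.Queue()   # stand-in for a Kafka topic
predictions = []         # stand-in for a downstream sink/topic

def score(event):
    """Hypothetical model: flag transactions above a fixed threshold."""
    return {"id": event["id"], "fraud": event["amount"] > 1000}

# Producer side: serialized events arrive on the topic.
for msg in [{"id": 1, "amount": 250}, {"id": 2, "amount": 5000}]:
    events.put(json.dumps(msg).encode("utf-8"))

# Consumer side: poll, deserialize, score, publish the prediction.
while not events.empty():
    event = json.loads(events.get().decode("utf-8"))
    predictions.append(score(event))
```

The low-latency and high-availability requirements mostly live outside this loop, in consumer-group sizing, partitioning, and model-server replication.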
Cross‑Functional Collaboration
  • Partner with business stakeholders to translate requirements into AI‑driven solutions.
  • Work closely with engineering teams to productionize models within enterprise applications.
  • Contribute to architecture decisions, technical documentation, and knowledge sharing.
Required Qualifications
  • Bachelor’s or Master’s degree in Data Science, Computer Science, Engineering, or related field (PhD preferred for advanced AI roles).
  • Strong hands‑on experience in AI/ML model development and deployment.
  • Proficiency in TensorFlow and PyTorch for deep learning and generative AI model development.
  • Experience working with AWS cloud services for model training, deployment, and scaling.
  • Strong experience integrating ML workflows with Snowflake or enterprise data warehouse platforms.
  • Experience implementing MLOps practices, including CI/CD, model versioning, monitoring, and automation.
  • Knowledge of LLMOps principles for managing the large language model lifecycle and governance.
  • Experience monitoring and optimizing model performance in production environments.
  • Familiarity with Kafka for real‑time data streaming (preferred).
Preferred Qualifications
  • Experience deploying large language models and generative AI applications in production.
  • Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
  • Experience building feature stores and scalable inference APIs.
  • Understanding of AI ethics, bias mitigation, and responsible AI practices.
  • Strong analytical and problem‑solving abilities.
  • Ability to translate business challenges into scalable AI solutions.
  • Excellent communication and stakeholder management skills.
  • Continuous learning mindset with strong research orientation.