Machine Learning Engineer
Listed on 2025-12-20
IT/Tech
Data Engineer, AI Engineer, Machine Learning / ML Engineer, Cloud Computing
About Anthelion
Anthelion is a next-generation credit investment firm building a proprietary AI and data platform that powers our investment lifecycle from underwriting to portfolio management. The platform integrates structured and unstructured data, advanced analytics, and automated workflows to drive superior, risk-adjusted returns in private credit and structured finance.
We are engineers and investors working together to redefine how institutional credit decisions are made: faster, smarter, and more transparent.
The Role
We’re seeking a Machine Learning Engineer to architect, build, and maintain robust pipelines for deploying ML and AI models in support of the investment process and multi-asset trading strategies. You will develop and maintain end-to-end systems that ensure models are deployed efficiently, consumed reliably, and consistently deliver value to the business.
This role sits at the intersection of data engineering, MLOps, and AI systems infrastructure. The engineer will play a critical role in operationalizing advanced machine learning and AI workflows, ensuring traceability, observability, and scalability from ingestion through inference.
You will also help spearhead the buildout of next-generation agentic workflows, integrating the Model Context Protocol (MCP) and managing the lifecycle of AI agents across production environments.
What You’ll Do
- Design, build, and maintain scalable ML/AI pipelines for both model retraining and live or batch inference, ensuring reliability, transparency, and traceability throughout the pipeline lifecycle.
- Develop and implement monitoring solutions to track model health, including systems for detecting data drift, monitoring model consumption, and assessing performance degradation over time.
- Collaborate closely with data scientists and the investment team to enable efficient workflows, including the creation and maintenance of feature stores, data pipelines, and model inference tools.
- Ensure that all deployed pipelines are highly available, scalable, and resilient against failures, supporting both real-time and offline use cases according to business requirements.
- Take ownership of key infrastructure that supports the data science team, including data pipelines, scalable virtual machines, storage solutions, automated retraining pipelines, and self-serve model deployment frameworks.
- Document all pipeline processes, data lineage, and usage protocols to provide full transparency and facilitate efficient troubleshooting, auditing, and knowledge sharing.
- Continuously optimize system performance and resource utilization, implement best practices for deployment, and evaluate new tools and technologies that improve team productivity and reliability.
- Spearhead the buildout of advanced agentic workflows, integrating the Model Context Protocol (MCP), orchestrating the deployment and management of AI agents, and ensuring robust agent hosting environments.
- Ensure observability, reliability, and transparency across all ML/AI pipeline components.
- Support broader data pipeline buildout and integration efforts.
What You’ll Bring
- Proficiency in developing and managing complex ML/AI deployment pipelines using modern orchestration tools.
- Experience with large-scale data systems, distributed storage, and cloud infrastructure (e.g., scalable VMs, feature stores, bulk data access solutions).
- Strong background in model monitoring, especially systems for data drift, prediction consumption, and pipeline health metrics.
- Solid understanding of both batch and real-time inference workflows, including the integration of ML models into production-grade APIs and services.
- Excellent documentation skills and the ability to communicate technical concepts to diverse audiences.
- Passion for building resilient, transparent, and scalable systems that empower data-driven decision making.
- Familiar with common ML models and their deployment workflows.
- Familiar with model-serving products (e.g., SageMaker, Vertex AI, BentoML, etc.).
- Experience building out pipelines for live and batch model inferencing.
- Skilled in managing large numbers of pipelines and implementing…