
Founding Machine Learning Engineer

Job in Zürich, 8058, Kanton Zürich, Switzerland
Listing for: Bjak
Full Time position
Listed on 2025-12-15
Job specializations:
  • IT/Tech
    AI Engineer, Machine Learning / ML Engineer, Data Engineer, Data Scientist
Salary: CHF 125,000 - 150,000 per year
Job Description & How to Apply Below
Location: Zürich

Transform language models into real-world, high-impact product experiences.

A1 is a self-funded AI group operating in full stealth. We’re building a new global consumer AI application focused on an important but underexplored use case.

You will shape the core technical direction of A1 - model selection, training strategy, infrastructure, and long-term architecture. This is a founding technical role: your decisions will define our model stack, our data strategy, and our product capabilities for years ahead.

You won’t just fine-tune models - you’ll design systems: training pipelines, evaluation frameworks, inference stacks, and scalable deployment architectures. You will have full autonomy to experiment with frontier models (LLaMA, Mistral, Qwen, Claude-compatible architectures) and build new approaches where existing ones fall short.

Why This Role Matters
  • You are creating the intelligence layer of A1’s first product, defining how it understands, reasons, and interacts with users.

  • Your decisions shape our entire technical foundation — model architectures, training pipelines, inference systems, and long-term scalability.

  • You will push beyond typical chatbot use cases, working on a problem space that requires original thinking, experimentation, and contrarian insight.

  • You influence not just how the product works, but what it becomes, helping steer the direction of our earliest use cases.

  • You are joining as a founding builder, setting engineering standards, contributing to culture, and helping create one of the most meaningful AI applications of this wave.

What You’ll Do
  • Build end-to-end training pipelines: data → training → eval → inference

  • Design new model architectures or adapt open-source frontier models

  • Fine-tune models using state-of-the-art methods (LoRA/QLoRA, SFT, DPO, distillation)

  • Architect scalable inference systems using vLLM / TensorRT-LLM / DeepSpeed

  • Build data systems for high-quality synthetic and real-world training data

  • Develop alignment, safety, and guardrail strategies

  • Design evaluation frameworks across performance, robustness, safety, and bias

  • Own deployment: GPU optimization, latency reduction, scaling policies

  • Shape early product direction, experiment with new use cases, and build AI-powered experiences from zero

  • Explore frontier techniques: retrieval-augmented training, mixture-of-experts, distillation, multi-agent orchestration, multimodal models
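To make the "data → training → eval → inference" flow concrete, here is a minimal, illustrative sketch of how those stages chain together. The `Pipeline` class and stage names are hypothetical, invented for this example; a real stack would swap the placeholder bodies for actual data processing, a fine-tuning job, and a serving deployment.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Hypothetical scaffold for the data -> training -> eval -> inference
# flow. Not a real framework; each stage body is a placeholder.

@dataclass
class Pipeline:
    stages: list = field(default_factory=list)

    def stage(self, fn: Callable) -> Callable:
        # Register a stage in execution order.
        self.stages.append(fn)
        return fn

    def run(self, payload: Any) -> Any:
        # Feed each stage's output into the next.
        for fn in self.stages:
            payload = fn(payload)
        return payload

pipeline = Pipeline()

@pipeline.stage
def prepare_data(raw):
    # Placeholder data stage: deduplicate and order the raw corpus.
    return sorted(set(raw))

@pipeline.stage
def train(corpus):
    # Placeholder training stage: a real one would launch a
    # (fine-)tuning job and return a checkpoint reference.
    return {"corpus_size": len(corpus), "checkpoint": "ckpt-0"}

@pipeline.stage
def evaluate(model):
    # Placeholder eval stage: attach a score before promotion.
    model["eval_score"] = 1.0 if model["corpus_size"] > 0 else 0.0
    return model

@pipeline.stage
def deploy(model):
    # Gate deployment on the evaluation result.
    return model if model["eval_score"] > 0.5 else None

result = pipeline.run(["a", "b", "a", "c"])
```

The point of the sketch is the gating structure: evaluation sits between training and inference, so a checkpoint only ships if it passes.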

What It’s Like to Work Here
  • You take ownership - you solve problems end-to-end rather than wait for perfect instructions

  • You learn through action - prototype → test → iterate → ship

  • You’re calm in ambiguity - zero-to-one building energises you

  • You bias toward speed with discipline - V1 now > perfect later

  • You see failures and feedback as essential to growth

  • You work with humility, curiosity, and a founder’s mindset

  • You lift the bar for yourself and your teammates every day

Requirements
  • Strong background in deep learning and transformer architectures

  • Hands-on experience training or fine-tuning large models (LLMs or vision models)

  • Proficiency with PyTorch, JAX, or TensorFlow

  • Experience with distributed training frameworks (DeepSpeed, FSDP, Megatron, ZeRO, Ray)

  • Strong software engineering skills — writing robust, production-grade systems

  • Experience with GPU optimization: memory efficiency, quantization, mixed precision

  • Comfortable owning ambiguous, zero-to-one technical problems end-to-end
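As a toy illustration of the quantization mentioned under GPU optimization: the sketch below does symmetric per-tensor int8 quantization in pure Python. It is for intuition only; production stacks use fused kernels in libraries such as TensorRT-LLM or bitsandbytes rather than anything like this.

```python
# Toy symmetric int8 quantization: one scale per tensor, values
# mapped to [-128, 127]. Illustrative only, not production code.

def quantize_int8(weights):
    # Scale so the largest magnitude maps to 127; fall back to 1.0
    # for an all-zero tensor to avoid dividing by zero.
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    # Recover approximate float values from int8 codes.
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.0, 1.27]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
```

The memory win is the point: each weight shrinks from 4 bytes (fp32) to 1 byte, at the cost of the rounding error visible in `restored`.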

Nice to Have
  • Experience with LLM inference frameworks (vLLM, TensorRT-LLM, FasterTransformer)

  • Contributions to open-source ML libraries

  • Background in scientific computing, compilers, or GPU kernels

  • Experience with RLHF pipelines (PPO, DPO, ORPO)

  • Experience training or deploying multimodal or diffusion models

  • Experience in large-scale data processing (Apache Arrow, Spark, Ray)

  • Prior work in a research lab (Google Brain, DeepMind, FAIR, Anthropic, OpenAI)

What You’ll Get
  • Extreme ownership and autonomy from day one - you define and build key model systems.

  • Founding-level influence over technical direction, model architecture, and product strategy.

  • Remote-first flexibility

  • High-impact scope—your work becomes core infrastructure of a global consumer AI product.

  • Competitive compensation and…
