Python/PyTorch Developer — Frontend Inference Compiler
Cerebras Systems builds the world's largest AI chip, 56 times larger than the largest GPU. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of compute capacity, transforming key workloads with ultra-high-speed inference.
Thanks to its groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
About the Role:
Would you like to help build the fastest generative model inference in the world? Join the Cerebras Inference Team to develop the unique software and hardware combination that delivers the best inference performance on the market while running the largest models available.
The Cerebras wafer-scale inference platform runs generative models at unprecedented speed thanks to a unique hardware architecture that provides the fastest access to local memory, an ultra-fast interconnect, and a huge amount of available compute.
You will be part of the team that works with the latest open and closed generative AI models to optimize them for the Cerebras inference platform. Your responsibilities will include working on the model representation, optimization, and compilation stack to produce the best results on Cerebras' current and future platforms.
Responsibilities:
- Analyze new generative AI models and understand their impact on the compilation stack.
- Develop and maintain a model definition framework of building blocks that represent large language models in PyTorch and Cerebras dialects, ready for deployment on Cerebras hardware.
- Develop and maintain the frontend compiler infrastructure that ingests PyTorch models and produces an intermediate representation (IR).
- Extend and optimize tooling based on PyTorch FX, TorchScript, and TorchDynamo for graph capture, transformation, and analysis (see the FX sketch after this list).
- Collaborate with other teams throughout feature implementation.
- Research new methods of model optimization to improve Cerebras inference.
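As a concrete (and hypothetical, not Cerebras-internal) illustration of the graph capture and transformation work described above, the sketch below traces a toy PyTorch module into an FX graph IR, walks its nodes, and applies a small rewrite pass; `TinyMLP` and the relu-to-gelu swap are illustrative placeholders only.

```python
import torch
import torch.fx as fx

class TinyMLP(torch.nn.Module):
    """Toy stand-in for a model block; a real frontend ingests full LLMs."""
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(16, 32)
        self.fc2 = torch.nn.Linear(32, 16)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# Graph capture: trace the module into an FX GraphModule (a graph IR).
traced = fx.symbolic_trace(TinyMLP())

# Analysis: each node is one operation in the computational graph.
for node in traced.graph.nodes:
    print(node.op, node.target)

# Transformation: a minimal rewrite pass (illustrative only) swapping relu for gelu.
for node in traced.graph.nodes:
    if node.op == "call_function" and node.target is torch.relu:
        node.target = torch.nn.functional.gelu

traced.graph.lint()   # sanity-check the mutated graph
traced.recompile()    # regenerate forward() from the rewritten graph
print(traced(torch.randn(2, 16)).shape)
```

TorchDynamo arrives at a similar GraphModule through bytecode-level capture, so the same node-walking pass style carries over.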
Requirements:
- Degree in Engineering, Computer Science, or equivalent experience and evidence of exceptional ability.
- Strong Python programming skills and in-depth experience with PyTorch internals (e.g., TorchScript, FX, or Dynamo).
- Solid understanding of computational graphs, tensor operations, and model tracing.
- Experience building or extending compilers, interpreters, or ML graph optimization frameworks.
- Experience working with PyTorch and the Hugging Face Transformers library.
- Knowledge of and experience with Large Language Models, including Transformer architecture variations and the generation cycle (a minimal generation loop is sketched after this list).
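For reference, the "generation cycle" above boils down to one forward pass per emitted token. Below is a minimal greedy-decoding sketch, assuming only that `model` maps token ids of shape [batch, seq] to logits of shape [batch, seq, vocab]; KV caching, sampling, and batching concerns that a production inference stack must handle are deliberately omitted.

```python
import torch

@torch.no_grad()
def greedy_decode(model, input_ids, max_new_tokens, eos_id):
    # Autoregressive loop: pick the most likely next token, append, repeat.
    for _ in range(max_new_tokens):
        logits = model(input_ids)                                # [batch, seq, vocab]
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy choice
        input_ids = torch.cat([input_ids, next_id], dim=-1)      # feed it back in
        if (next_id == eos_id).all():
            break
    return input_ids

# Toy usage: random logits stand in for a real traced LLM.
vocab_size = 100
toy_model = lambda ids: torch.randn(ids.shape[0], ids.shape[1], vocab_size)
print(greedy_decode(toy_model, torch.tensor([[1, 2, 3]]), max_new_tokens=5, eos_id=0))
```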
Preferred Qualifications:
- Strong C++ programming skills.
- Knowledge of MLIR-based compilation stacks.
- Prior experience contributing to PyTorch, TensorFlow XLA, TVM, ONNX Runtime, or similar compiler stacks.
- Knowledge of hardware accelerators, quantization, or runtime scheduling.
- Experience with multi-target inference compilation (e.g., CPU, GPU, custom ASICs).
- Understanding of numerical precision trade-offs and operator lowering (see the lowering sketch after this list).
- Contributions to open-source ML compiler projects.
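To make "operator lowering" concrete: the hypothetical FX pass below (again, not Cerebras code) decomposes torch.softmax into exp/sum/div primitives, the way a frontend might expand a composite op toward a hardware dialect. Dropping the usual max-subtraction step is exactly the kind of numerical precision trade-off the bullet above refers to.

```python
import torch
import torch.fx as fx

def lower_softmax(gm: fx.GraphModule) -> fx.GraphModule:
    """Illustrative lowering pass: softmax -> exp / sum / div."""
    for node in list(gm.graph.nodes):
        if node.op == "call_function" and node.target is torch.softmax:
            x = node.args[0]
            dim = node.kwargs.get("dim", node.args[1] if len(node.args) > 1 else -1)
            with gm.graph.inserting_before(node):
                e = gm.graph.call_function(torch.exp, (x,))
                s = gm.graph.call_function(torch.sum, (e,), {"dim": dim, "keepdim": True})
                out = gm.graph.call_function(torch.div, (e, s))
            node.replace_all_uses_with(out)
            gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm

class M(torch.nn.Module):
    def forward(self, x):
        return torch.softmax(x, dim=-1)

gm = lower_softmax(fx.symbolic_trace(M()))
x = torch.randn(2, 8)
# Agrees with the composite op on well-scaled inputs; large-magnitude inputs
# would overflow exp() -- the stability/precision trade-off of this lowering.
print(torch.allclose(gm(x), torch.softmax(x, dim=-1), atol=1e-6))
```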
This offer is contingent upon Cerebras successfully obtaining an export license from the U.S. Department of Commerce’s Bureau of Industry and Security authorizing the release to you of certain software source code and/or technology that is subject to the Export Administration Regulations. However, we can make no assurances with respect to the final disposition…