Location: Bengaluru
This role is open across our global offices, and successful candidates may be based in any of our international office locations.
Key Responsibilities
Develop and optimize compiler components for AI workloads (e.g., graph optimizations, operator fusion, scheduling).
Collaborate with hardware and software teams to design compiler backends for NPUs and custom accelerators.
Implement performance tuning techniques for deep learning models across diverse architectures.
Contribute to open-source compiler projects and maintain internal tool chains.
Analyze and improve compilation pipelines for frameworks such as PyTorch, TensorFlow, and ONNX.
Required Qualifications
Master’s degree or PhD in Computer Science, Electrical Engineering, or related field.
Strong proficiency in C++, Python, and compiler design principles.
At least two of the following three:
Knowledge of AI frameworks (PyTorch, TensorFlow).
Familiarity with hardware architectures (GPU, TPU, NPU) and parallel computing.
Understanding of graph-based optimizations.
Preferred Qualifications
Experience with quantization, kernel optimization, and code generation for AI accelerators.
Experience with MLIR, LLVM, or similar compiler infrastructures.
Contributions to open-source compiler or AI projects.
Understanding of performance profiling and benchmarking for AI workloads.