Model Optimization - Lead Engineer
Listed on 2025-12-27
Categories: Engineering, IT/Tech
Roles: AI Engineer, Machine Learning/ML Engineer, Systems Engineer
About Our Team
On-device AI is the future: it enables real-time, private, and always-available intelligence. You'll push the boundaries of what's possible on mobile hardware, delivering AI experiences that run locally with low latency and all-day battery life. Your optimizations will directly enable breakthrough product features.
Lenovo is hiring a Model Optimization Lead Engineer to lead the optimization and deployment of large models for edge devices. You will master technologies such as quantization frameworks (TensorRT, ONNX Runtime), edge AI runtimes (ExecuTorch, llama.cpp), NPU SDKs (Qualcomm QNN, Apple Core ML), model compression libraries, and profiling tools (NVIDIA Nsight, Snapdragon Profiler).
Chicago, IL
Hybrid (3 days on-site, 2 days remote)
Responsibilities
- Lead optimization and deployment of large models (LLMs, VLMs, diffusion) for edge devices using quantization (INT4/INT8), pruning, knowledge distillation, and LoRA.
- Partner with silicon teams to optimize model execution on heterogeneous hardware: NPUs (Qualcomm Hexagon, Google Edge TPU), GPUs, and CPUs.
- Implement and benchmark deployment frameworks: TensorRT-LLM, ONNX Runtime, ExecuTorch, llama.cpp, MLC-LLM.
- Drive hardware-software co-design, influencing sensor and silicon roadmaps to enable efficient AI inference.
- Build MLOps infrastructure: model serving, A/B testing, performance monitoring, continuous optimization.
- Lead a team of optimization engineers and collaborate with ML researchers, hardware teams, and product managers.
- Stay at the forefront of on-device AI: sub-10B parameter models, mixed precision, sparse attention, federated learning.
Qualifications
- 7+ years in ML engineering or systems, with 3+ years focused on model optimization and deployment.
- Bachelor's Degree in Engineering or Computer Science.
- Deep expertise in model compression: quantization (QAT, PTQ), pruning, distillation, low-rank adaptation.
- Hands-on experience with mobile/edge AI frameworks (TensorRT, ONNX, TFLite, Core ML).
- Understanding of hardware architectures: NPU/GPU/CPU characteristics, SIMD operations, memory hierarchies.
- Proficiency in C++/Python and performance optimization (CUDA, OpenCL, or NPU programming).
- Track record of shipping ML models to production on resource-constrained devices.
The budgeted base salary range for this position is $180K-$220K. Individuals may also be considered for a bonus and/or commission.
Lenovo's various benefits can be found on
We are an Equal Opportunity Employer and do not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, national origin, status as a veteran, basis of disability, or any other federal, state, or local protected class.