DESCRIPTION:
At Frontier AI & Robotics, we're not just advancing robotics - we're reimagining it from the ground up. Our team is building the future of intelligent robotics through innovative foundation models and end-to-end learned systems. We tackle some of the most challenging problems in AI and robotics, from developing sophisticated perception systems to creating adaptive manipulation strategies that work in complex, real-world scenarios.
What sets us apart is our unique combination of ambitious research vision and practical impact. We leverage Amazon's massive computational infrastructure and rich real-world datasets to train and deploy state-of-the-art foundation models. Our work spans the full spectrum of robotics intelligence - from multimodal perception using images, videos, and sensor data, to sophisticated manipulation strategies that can handle diverse real-world scenarios. We're building systems that don't just work in the lab, but scale to meet the demands of Amazon's global operations.
Join us if you're excited about pushing the boundaries of what's possible in robotics, working with world-class researchers, and seeing your innovations deployed at unprecedented scale.
Key job responsibilities
- Drive inference optimization strategies for large-scale foundation models using TensorRT, CUDA, and other NVIDIA tools
- Collaborate closely with scientists to influence model architectures for optimal hardware utilization
- Design and implement efficient compilation pipelines for complex transformer architectures
- Develop comprehensive benchmarking frameworks to measure and optimize model performance
- Build robust monitoring solutions to ensure reliable model serving at scale
- Explore and evaluate emerging optimization techniques including ONNX Runtime and other ML compilers
- Maintain high engineering standards through proper testing, documentation, and code review practices
A day in the life
- Optimize transformer blocks using custom CUDA kernels and TensorRT optimization techniques
- Partner with scientists to analyze model architectures and propose efficiency improvements
- Implement and benchmark various optimization strategies for large-scale models
- Debug performance bottlenecks using NVIDIA profiling tools
- Participate in technical discussions about new model architectures with the science team
- Design and maintain performance monitoring systems for production deployment
- Prototype new acceleration approaches using emerging compilation frameworks
BASIC QUALIFICATIONS:
- Bachelor's degree in computer science or equivalent
- 5+ years of non-internship professional software development experience
- 5+ years of experience programming with at least one software programming language
- 5+ years of experience leading the design or architecture (design patterns, reliability, and scaling) of new and existing systems
- Experience as a mentor, tech lead or leading an engineering team
- Strong expertise in Python, C++ and CUDA programming
- Experience with TensorRT or similar ML optimization frameworks
- Track record of optimizing ML models for production
PREFERRED QUALIFICATIONS:
- Expertise in NVIDIA's ML stack (cuDNN, CUDA Graph, etc.)
The base salary range for this position is listed below. Your Amazon package will include sign-on payments and restricted stock units (RSUs). Final compensation will be determined based on factors including experience, qualifications, and location. Amazon also offers comprehensive benefits including health insurance (medical, dental, vision, prescription, Basic Life & AD&D insurance and option for Supplemental life plans, EAP, Mental Health Support, Medical Advice Line, Flexible Spending Accounts, Adoption and Surrogacy Reimbursement coverage), 401(k) matching, paid time off, and parental leave. Learn more about our benefits at https://amazon.jobs/en/benefits.