DESCRIPTION:
We are seeking an Applied Scientist to develop and optimize Visual Inertial Odometry (VIO) and sensor fusion systems for our intelligent robots. In this role, you will design, implement, and deploy state estimation and tracking algorithms that enable robots to understand their position and motion in real time, even in challenging and dynamic environments.
You will own the full pipeline from algorithm development through embedded deployment, ensuring that perception systems run efficiently on resource-constrained robotic hardware. You will also leverage modern machine learning approaches to push the boundaries of classical perception methods, combining learned representations with geometric techniques to achieve robust, real-time performance.
This is a deeply hands-on role. You will work directly with sensors, hardware, and real-world data, while prototyping, testing, and iterating in physical environments. The ideal candidate has strong foundations in VIO and sensor fusion, practical experience optimizing algorithms for embedded platforms, and familiarity with how modern deep learning is transforming perception.
Key job responsibilities
- Design and implement Visual Inertial Odometry algorithms for robust real-time state estimation on robotic platforms like Sprout
- Develop multi-sensor fusion pipelines integrating cameras, IMUs, and other sensing modalities for accurate pose tracking
- Optimize perception and tracking algorithms for deployment on embedded hardware (e.g., ARM, GPU-accelerated edge devices) under strict latency and power constraints
- Apply modern ML-based perception techniques (learned features, depth estimation, neural odometry) to complement and improve classical geometric approaches
- Build and maintain calibration, evaluation, and benchmarking infrastructure for perception systems
- Collaborate with hardware, controls, and navigation teams to integrate perception outputs into the robot's autonomy stack
- Lead technical projects from research prototyping through production deployment
BASIC QUALIFICATIONS:
- PhD, or Master's degree and 3+ years of applied research experience
- Experience with at least one programming language such as Python, Java, or C++
- Hands-on experience developing and deploying Visual Inertial Odometry or visual-inertial SLAM systems
- Strong understanding of multi-sensor fusion (cameras, IMUs, odometry) and state estimation (EKF, factor graphs)
- Experience optimizing perception algorithms for embedded or resource-constrained hardware
- Demonstrated hands-on experience with real sensor data, calibration, and physical robot platforms
- Familiarity with modern ML approaches to perception (learned feature extraction, depth prediction, end-to-end odometry)
PREFERRED QUALIFICATIONS:
- Experience leading technical initiatives and key deliverables
The base salary range for this position is listed below. Your Amazon package will include sign-on payments and restricted stock units (RSUs). Final compensation will be determined based on factors including experience, qualifications, and location. Amazon also offers comprehensive benefits including health insurance (medical, dental, vision, prescription, Basic Life & AD&D insurance and the option for Supplemental Life plans, EAP, Mental Health Support, Medical Advice Line, Flexible Spending Accounts, and Adoption and Surrogacy Reimbursement coverage), 401(k) matching, paid time off, and parental leave. Learn more about our benefits at https://amazon.jobs/en/benefits.