ML Acceleration / Framework Engineer - Distributed Training & Inference, AWS Neuron, Annapurna Labs - Seattle, United States (Amazon)
Description
By applying to this position, your application will be considered for all locations we hire for in the United States.
Annapurna Labs designs silicon and software that accelerates innovation.
Customers choose us to create cloud solutions that solve challenges that were unimaginable a short time ago—even yesterday.
Our custom chips, accelerators, and software stacks enable us to take on technical challenges that have never been seen before, and deliver results that help our customers change the world.
AWS Neuron is the complete software stack for AWS Trainium (Trn1/Trn2) and Inferentia (Inf1/Inf2), our cloud-scale machine learning accelerators.
This role is for a Machine Learning Engineer on one of our AWS Neuron teams:
- The ML Distributed Training team works side by side with chip architects, compiler engineers and runtime engineers to create, build and tune distributed training solutions with Trainium instances.
Experience training large models using Python is a must.
Distributed training libraries such as FSDP (Fully Sharded Data Parallel), DeepSpeed, and NeMo are central to this work, and extending them to Neuron-based systems is key (a minimal FSDP sketch follows the team descriptions below).
- The ML Frameworks team partners with compiler, runtime, and research experts to make AWS Trainium and Inferentia feel native inside the tools builders already love—PyTorch, JAX, and the rapidly evolving vLLM ecosystem.
By weaving the Neuron SDK deep into these frameworks, optimizing operators, and crafting targeted extensions, we unlock every teraflop of Annapurna's AI chips for both training and lightning-fast inference.
Beyond kernels, we shape next-generation serving by upstreaming new features and driving scalable deployments with vLLM, Triton, and TensorRT—turning breakthrough ideas into production-ready AI for millions of customers (a brief vLLM sketch also follows the team descriptions below).
- The ML Inference team collaborates closely with hardware designers, software optimization experts, and systems engineers to develop and optimize high-performance inference solutions for Inferentia chips.
Proficiency in deploying and optimizing ML models for inference using frameworks like TensorFlow, PyTorch, and ONNX is essential.
The team focuses on techniques such as quantization, pruning, and model compression to enhance inference speed and efficiency.
Adapting and extending popular inference libraries and tools for Neuron-based systems is a key aspect of their work.
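To make the distributed-training work described above more concrete, here is a minimal sketch using the standard open-source PyTorch FSDP API. The model, data, backend choice, and hyperparameters are placeholders, and the Trainium-specific device placement, compiler, and Neuron SDK integration that the team actually builds is assumed rather than shown.

```python
# Minimal FSDP sketch with stock PyTorch; launch with torchrun so that
# RANK / WORLD_SIZE / MASTER_ADDR are set. All Neuron/Trainium specifics
# (device placement, XLA/Neuron backend) are intentionally omitted.
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main() -> None:
    dist.init_process_group(backend="gloo")  # backend choice is illustrative only

    model = nn.Sequential(                   # placeholder model
        nn.Linear(1024, 4096),
        nn.GELU(),
        nn.Linear(4096, 1024),
    )
    # FSDP shards parameters, gradients, and optimizer state across ranks,
    # so each worker materializes only its slice of the full model.
    model = FSDP(model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(3):
        batch = torch.randn(8, 1024)         # placeholder data
        loss = model(batch).pow(2).mean()    # placeholder loss
        loss.backward()                      # FSDP handles gradient reduction
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Extending patterns like this to Neuron-based systems is the part the team owns: making sharding, collectives, and checkpointing map efficiently onto Trainium instances.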
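Similarly, for the serving work mentioned in the ML Frameworks description, the following is a minimal sketch of the open-source vLLM offline-generation API. The model checkpoint is a small placeholder, and any Neuron/Trainium-specific backend configuration is an assumption not shown here.

```python
# Minimal sketch: offline batched generation with the open-source vLLM API.
# The checkpoint is a small placeholder; hardware-specific (Neuron) backend
# configuration is not shown.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")                     # placeholder model
params = SamplingParams(temperature=0.8, max_tokens=32)

outputs = llm.generate(["Deep learning accelerators are"], params)
for out in outputs:
    print(out.outputs[0].text)
```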
Key job responsibilities
You'll join one of our core ML teams - Frameworks, Distributed Training, or Inference - to enhance machine learning capabilities on AWS's specialized AI hardware.
Your responsibilities will include improving PyTorch and JAX for distributed training on Trainium chips, optimizing ML models for efficient inference on Inferentia processors, and collaborating with compiler and runtime teams to maximize hardware performance.
You'll also develop and integrate new features in ML frameworks to support AWS AI services.
We seek candidates with strong programming skills, eagerness to learn complex systems, and basic ML knowledge.
This role offers growth opportunities in ML infrastructure, bridging the gap between frameworks, distributed systems, and hardware acceleration.
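As one concrete example of the inference-optimization responsibilities above, here is a minimal sketch of post-training dynamic quantization using stock PyTorch. The model is a placeholder, and compiling or deploying the result for Inferentia through the Neuron SDK is a separate, hardware-specific step that is not shown.

```python
# Minimal sketch: post-training dynamic quantization with stock PyTorch.
# Linear layers get int8 weights and on-the-fly activation quantization,
# trading a little accuracy for lower memory use and faster matmuls.
import torch
import torch.nn as nn

model = nn.Sequential(        # placeholder model
    nn.Linear(512, 2048),
    nn.ReLU(),
    nn.Linear(2048, 512),
).eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    out = quantized(torch.randn(1, 512))
print(out.shape)  # torch.Size([1, 512])
```

Quantization, pruning, and model compression all make this kind of accuracy-for-efficiency trade; the Inference team tunes the same trade-off at production scale.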
About the team
Annapurna Labs was a startup acquired by AWS in 2015 and is now fully integrated.
If AWS is an infrastructure company, think of Annapurna Labs as the infrastructure provider of AWS.
Our org covers multiple disciplines including silicon engineering, hardware design and verification, software, and operations.
AWS Nitro, ENA, EFA, Graviton and F1 EC2 instances, AWS Neuron with the Inferentia and Trainium ML accelerators, and scalable NVMe storage are some of the products we have delivered over the last few years.
Basic Qualifications
- To qualify, applicants should have earned (or expect to earn) a Bachelor's or Master's degree between December 2022 and September 2025.
- Working knowledge of C++ and Python
- Experience with ML frameworks, particularly PyTorch, JAX, and/or vLLM
- Understanding of parallel computing concepts and CUDA programming
Preferred Qualifications
- Open source contributions to ML frameworks or tools
- Experience optimizing ML workloads for performance
- Direct experience with PyTorch internals or CUDA optimization
- Hands-on experience with LLM infrastructure tools (e.g., vLLM, TensorRT)
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Los Angeles County applicants: Job duties for this position include: work safely and cooperatively with other employees, supervisors, and staff; adhere to standards of excellence despite stressful conditions; communicate effectively and respectfully with employees, supervisors, and staff to ensure exceptional customer service; and follow all federal, state, and local laws and Company policies.
Criminal history may have a direct, adverse, and negative relationship with some of the material job duties of this position.
These include the duties and responsibilities listed above, as well as the abilities to adhere to company policies, exercise sound judgment, effectively manage stress and work safely and respectfully with others, exhibit trustworthiness and professionalism, and safeguard business operations and the Company’s reputation.
Pursuant to the Los Angeles County Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets.
The base pay for this position ranges from $99,500/year in our lowest geographic market up to $200,000/year in our highest geographic market.
Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience.
Amazon is a total compensation company.
Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits.
For more information, please visit https://www.aboutamazon.com/workplace/employee-benefits .
This position will remain posted until filled.
Applicants should apply via our internal or external career site.
Unlock Your ML Acceleration Potential: Insight & Career Growth Guide
Real-time ML Acceleration Jobs Trends in Seattle, United States (Graphical Representation)
Expertini's real-time analysis, shown in the graph below, tracks job market trends for ML Acceleration roles in Seattle, United States: a bar chart represents the number of jobs available and a trend line illustrates the trend over time. The graph currently shows 3,304 jobs across the United States and 260 in Seattle, highlighting market share and opportunities for professionals in ML Acceleration roles.
Amazon is currently hiring an ML Acceleration / Framework Engineer - Distributed Training & Inference, AWS Neuron, Annapurna Labs, and you can download the job details.
Interested in similar roles? Search: ML Acceleration / Framework Engineer - Distributed Training & Inference, AWS Neuron, Annapurna Labs Jobs Seattle.
An organization's rules and standards set how people should be treated in the office and how different situations should be handled. The work culture at Amazon adheres to the cultural norms as outlined by Expertini.
The average salary for ML Acceleration / Framework Engineer - Distributed Training & Inference, AWS Neuron, Annapurna Labs roles in the United States varies, but the pay scale is rated "Standard" in Seattle. Salary levels depend on your industry, experience, and skills, so research and negotiate effectively. We advise reading the full job specification before applying to understand the salary package.
Key qualifications for this role are listed in the job specification; be sure to check the specific job listing for detailed requirements.
To improve your chances of getting hired, consider enhancing your skills and checking your CV/Résumé score with our free Resume Scoring Tool, which gives you a matching score for each job once your CV/Résumé is uploaded. This can help you align your CV/Résumé with the job requirements and identify skills to develop.
Here are some tips to help you prepare for and ace your job interview:
Before the interview: To prepare for your ML Acceleration / Framework Engineer - Distributed Training & Inference, AWS Neuron, Annapurna Labs interview at Amazon, research the company, understand the job requirements, and practice common interview questions.
Highlight your achievements, problem-solving skills, and experience working as part of a team. Be prepared to discuss your hands-on work with ML frameworks and distributed systems, and review Amazon's products and services so you can explain how you would contribute to their success.
By following these tips, you can increase your chances of making a positive impression and landing the job!
Setting up job alerts for ML Acceleration / Framework Engineer - Distributed Training & Inference, AWS Neuron, Annapurna Labs is easy with Seattle Jobs | Expertini. Simply visit our job alerts page, enter your preferred job title and location, and choose how often you want to receive notifications. You'll get the latest job openings sent directly to your email for FREE!