Member of Engineering (Inference)
Role Overview
As a Member of Engineering focused on Inference at poolside, you will develop and enhance the inference capabilities of Large Language Models (LLMs), optimizing for performance and efficiency. This senior-level role requires strong engineering skills and knowledge of deep learning fundamentals, and you will work within a remote-first team dedicated to cutting-edge research and engineering practices.
Perks & Benefits
Enjoy the benefits of fully remote work with flexible hours and a generous 37 days per year of vacation and holidays. poolside fosters a diverse and inclusive culture, offering health insurance allowances, company-provided equipment, and allowances for wellbeing and home office setups. Team members come together for monthly in-person meetings and frequent gatherings, promoting a collaborative work environment.
Full Job Description
ABOUT POOLSIDE
In this decade, the world will create Artificial General Intelligence. There will only be a small number of companies that achieve this. Their ability to stack advantages and pull ahead will define the winners. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research, engineering, infrastructure and deployment at scale. They will continue to scale their training to larger & more capable models. They will be given the right to raise large amounts of capital along their journey to enable this. They will create powerful economic engines. They will obsess over the success of their users and customers.
poolside exists to be this company - to build a world where AI will be the engine behind economically valuable work and scientific progress.
ABOUT OUR TEAM
We are a remote-first team that sits across Europe and North America and comes together in person once a month for 3 days and for longer offsites twice a year.
Our R&D and production teams are a combination of more research-oriented and more engineering-oriented profiles; however, everyone deeply cares about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which allows us to compound our efforts.
ABOUT THE ROLE
You will focus on building out our multi-device inference of Large Language Models, covering both standard transformers and custom linear-attention architectures. You will work with lower-precision inference and tensor parallelism, and you should be comfortable diving into vLLM, Torch, and AWS libraries. You will drive improvements on both NVIDIA and AWS hardware, working on the bleeding edge of what's possible and hacking on and testing the latest vendor solutions. We are rewrite-in-Rust-friendly.
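To make the work concrete, here is a minimal single-process sketch of two of the ideas above: column-parallel (tensor-parallel) sharding of a linear layer and a reduced-precision cast. It is only an illustration under assumed shapes, shard count, and dtype, not a description of poolside's actual stack; a real deployment would place each shard on its own accelerator and use collective communication (e.g. torch.distributed) rather than a Python loop.

import torch

def column_parallel_linear(x: torch.Tensor, weight: torch.Tensor, num_shards: int) -> torch.Tensor:
    # Split the weight's output dimension across shards and concatenate the
    # partial results, mimicking a column-parallel layer's gather step.
    shards = torch.chunk(weight, num_shards, dim=0)   # each shard lives on its own device in practice
    partials = [x @ w.T for w in shards]              # independent per-shard matmuls
    return torch.cat(partials, dim=-1)                # "all-gather" along the hidden dimension

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(4, 1024)            # (batch, hidden_in) -- illustrative sizes
    weight = torch.randn(4096, 1024)    # (hidden_out, hidden_in)

    full = x @ weight.T
    sharded = column_parallel_linear(x, weight, num_shards=4)
    # Sharding itself is mathematically exact, up to float summation order.
    assert torch.allclose(full, sharded, atol=1e-3)

    # Lower-precision variant: cast to bfloat16 and compare against the fp32 reference.
    low = column_parallel_linear(x.bfloat16(), weight.bfloat16(), num_shards=4).float()
    print("max abs error at bf16:", (full - low).abs().max().item())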
YOUR MISSION
To develop and continuously improve the inference of LLMs for source code generation, optimizing for the lowest latency, the highest throughput, and the best hardware utilization.
RESPONSIBILITIES
Follow the latest research on LLMs, inference and source code generation
Propose and evaluate innovations in both the quality and the efficiency of inference
Monitor and implement LLM inference metrics in production
Write high-quality, high-performance Python, Cython, C/C++, Triton, ThunderKittens, native CUDA, and Amazon Neuron code
Work within the team: plan future steps, discuss, and always stay in touch
SKILLS & EXPERIENCE
Experience with Large Language Models (LLM)
Confident knowledge of the computational properties of transformers
Knowledge of or experience with cutting-edge inference techniques
Knowledge of or experience with distributed and lower-precision inference
Knowledge of deep learning fundamentals
Strong engineering background
Theoretical computer science knowledge is a must
Experience with programming for hardware accelerators
SIMD algorithms
Expert understanding of matrix multiplication bottlenecks (see the roofline sketch after this list)
Know hardware operation latencies by heart
Research experience
Nice to have but not required: authorship of scientific papers on topics such as applied deep learning, LLMs, or source code generation
Can freely discuss the latest papers and drill down into the fine details
You have strong opinions, weakly held
Programming experience
Linux
Git
Python with PyTorch or JAX
C/C++, CUDA, Triton, ThunderKittens
Use modern tools and are always looking to improve
Opinionated but reasonable, practical, and not afraid to ignore best practices
Strong critical thinking and ability to question code quality policies when applicable
Prior experience in non-ML programming is a nice-to-have
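For the matrix-multiplication-bottleneck and hardware-latency points above, the usual back-of-the-envelope tool is a roofline estimate: a GEMM of shape (M, K) x (K, N) performs 2·M·K·N FLOPs while moving roughly bytes_per_elem · (M·K + K·N + M·N) bytes, and comparing that arithmetic intensity to the hardware's FLOPs-per-byte ridge point tells you whether a kernel is compute- or memory-bound. The peak-throughput and bandwidth figures in this sketch are placeholder assumptions, not any specific NVIDIA or AWS accelerator's published numbers.

def gemm_roofline(m, k, n, bytes_per_elem=2, peak_tflops=300.0, mem_bw_gbps=2000.0):
    flops = 2 * m * k * n                                    # multiply-accumulate count
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)   # read A and B, write C once
    intensity = flops / bytes_moved                          # FLOPs per byte
    ridge = (peak_tflops * 1e12) / (mem_bw_gbps * 1e9)       # intensity where compute == bandwidth
    bound = "compute-bound" if intensity >= ridge else "memory-bound"
    return f"intensity={intensity:.1f} FLOP/B, ridge={ridge:.1f} FLOP/B -> {bound}"

# Decode-time GEMV (one new token) vs. a large prefill GEMM:
print("decode :", gemm_roofline(m=1, k=8192, n=8192))       # tiny M => memory-bound
print("prefill:", gemm_roofline(m=4096, k=8192, n=8192))    # large M => compute-bound

The same arithmetic is why single-token decoding tends to be dominated by weight loads (which motivates lower-precision weights and batching), while large prefill GEMMs can saturate the compute units.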
PROCESS
Intro call with one of our Founding Engineers
Technical Interview(s) with one of our Founding Engineers
Team fit call with the People team
Final interview with one of our Founding Engineers
BENEFITS
Fully remote work & flexible hours
37 days/year of vacation & holidays
Health insurance allowance for you and dependents
Company-provided equipment
Wellbeing, always-be-learning and home office allowances
Frequent team get-togethers
Great diverse & inclusive people-first culture