Member of Engineering (Scalability)
Role Overview
This senior-level engineering role focuses on enhancing fault tolerance for large-scale LLM pre-training and inference. Day-to-day responsibilities include troubleshooting hardware issues, minimizing GPU downtime, developing recovery tools, and improving checkpointing reliability using Python, C/C++, and CUDA. The hire will work in a remote-first, research-oriented team to accelerate training of foundational code generation models, directly impacting model performance and efficiency.
Perks & Benefits
The role offers fully remote work with flexible hours, 37 days of vacation and holidays annually, and health insurance for dependents. Benefits include company-provided equipment, allowances for wellbeing, learning, and home office, and frequent in-person team gatherings in Europe and North America. It features a diverse, inclusive culture with opportunities for growth through applied research and engineering at scale.
Full Job Description
ABOUT POOLSIDE
In this decade, the world will create Artificial General Intelligence. There will only be a small number of companies who will achieve this. Their ability to stack advantages and pull ahead will define the winners. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be on the forefront of applied research, engineering, infrastructure and deployment at scale. They will continue to scale their training to larger & more capable models. They will be given the right to raise large amounts of capital along their journey to enable this. They will create powerful economic engines. They will obsess over the success of their users and customers.
Poolside exists to be this company: to build a world where AI will be the engine behind economically valuable work and scientific progress. We believe the fastest way to reach AGI lies in accelerating software development itself, by reshaping the developer experience with agentic systems, coding assistants, and the frontier models that power them. We deploy these systems directly into the development environments of security-conscious enterprises.
ABOUT OUR TEAM
We were founded in the US and have our home there, but our team is distributed across Europe and North America. We get our fix of in-person collaboration (and croissants) in Paris each month for 3 days, always Monday-Wednesday, with an open invitation to stay the whole week. We also do longer off-sites once a year.
Our team is a multidisciplinary blend of research, engineering, and business experts. What unites us is our deep care for what we build together. We’re in a race that requires hard work, intellectual curiosity, and obsession; to balance this intensity, we’ve assembled a team of low ego and kind-hearted individuals who have built the special culture Poolside has. By building collaboratively and with intention, we create a compounding effect that moves the entire company forward towards our mission: reaching AGI through intelligence systems built for software development.
ABOUT THE ROLE
You would be working on our pre-training team, building out distributed training and inference of Large Language Models (LLMs). This is a hands-on role focused on software reliability and fault tolerance. You will work on cross-platform checkpointing, NCCL recovery, and hardware fault detection. You will build high-level tooling, but you should not be afraid of debugging Linux kernel modules. You will have access to thousands of GPUs to test your changes.
Strong engineering skills are a prerequisite. We assume solid knowledge of PyTorch, NVIDIA GPU architecture, reliability concepts, distributed systems, and coding best practices. A basic understanding of LLM training and inference principles is required. We look for fast learners who are prepared for a steep learning curve and are not afraid to step out of their comfort zone.
YOUR MISSION
To help train the best foundational models for source code generation in the world
RESPONSIBILITIES
Identify, study, and troubleshoot hardware problems during training at scale
Minimize GPU idle time during faults, both operationally and strategically
Design and develop tools and add-ons to accelerate training recovery
Improve the performance and reliability of checkpointing
Write high-quality Python (PyTorch), Cython, C/C++, CUDA API code
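To give a flavor of the checkpointing-reliability work described above, here is a minimal, hypothetical Python sketch (not Poolside's actual code; the function name `save_checkpoint_atomically` is invented for illustration) of one common fault-tolerance pattern: write-then-rename, so that a crash mid-save can never leave a corrupt checkpoint behind.

```python
import os
import pickle
import tempfile

def save_checkpoint_atomically(state: dict, path: str) -> None:
    """Persist a checkpoint so a crash mid-write never corrupts the last good file.

    The state is serialized to a temporary file in the same directory,
    flushed and fsync'ed, then atomically renamed over the target path.
    """
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            pickle.dump(state, f)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the rename
        # Atomic on POSIX: readers see either the old or the new checkpoint,
        # never a partially written one.
        os.replace(tmp_path, path)
    except BaseException:
        os.unlink(tmp_path)  # clean up the temp file on any failure
        raise
```

In a real training stack the serialized state would be model and optimizer shards (e.g. via PyTorch) rather than a pickled dict, but the durability pattern is the same.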
SKILLS & EXPERIENCE
Understanding of Large Language Models (LLM)
Basic knowledge of Transformers
Knowledge of deep learning fundamentals
Strong engineering background
Programming experience
Linux API, Linux kernel
Strong algorithmic skills
Python with NumPy, PyTorch, or JAX
C/C++
NCCL
Comfort with modern tooling and a drive to keep improving
Strong critical thinking and ability to question code quality policies when applicable
Distributed systems
Reliability
Observability
Fault-tolerance
K8s stack
PROCESS
Intro call with one of our Founding Engineers
Technical Interview(s) with one of our Founding Engineers
Team fit call with the People team
Final interview with one of our Founding Engineers
BENEFITS
Fully remote work & flexible hours
37 days/year of vacation & holidays
Health insurance allowance for you & dependents
Company-provided equipment
Well-being, always-be-learning & home office allowances
Frequent team get-togethers
Diverse & inclusive people-first culture