Member of Engineering (Pre-training / CUDA)


Role Overview

This senior-level role involves optimizing large-scale training of Large Language Models (LLMs) through custom kernel development and profiling on thousands of GPUs. Day-to-day responsibilities include identifying bottlenecks, enhancing training and inference codebases, and collaborating with researchers to scale novel ideas efficiently. The hire will impact the speed and performance of foundational models for source code generation, working in a hands-on, distributed pre-training team focused on applied research and engineering.

Perks & Benefits

The position offers fully remote work with flexible hours, requiring monthly in-person collaboration in Paris for 3 days (Monday-Wednesday) and annual off-sites. Benefits include 37 days of vacation and holidays, health insurance allowance for dependents, company-provided equipment, and allowances for well-being, learning, and home office. The culture emphasizes a low-ego, kind-hearted team with a diverse and inclusive environment, supporting career growth through continuous learning and collaborative projects.

Full Job Description

ABOUT POOLSIDE

In this decade, the world will create Artificial General Intelligence. There will only be a small number of companies who will achieve this. Their ability to stack advantages and pull ahead will define the winners. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be on the forefront of applied research, engineering, infrastructure and deployment at scale. They will continue to scale their training to larger & more capable models. They will be given the right to raise large amounts of capital along their journey to enable this. They will create powerful economic engines. They will obsess over the success of their users and customers.

Poolside exists to be this company: to build a world where AI will be the engine behind economically valuable work and scientific progress. We believe the fastest way to reach AGI lies in accelerating software development itself, by reshaping the developer experience with agentic systems, coding assistants, and the frontier models that power them. We deploy these systems directly into the development environments of security-conscious enterprises.

ABOUT OUR TEAM

We were founded in the US and have our home there, but our team is distributed across Europe and North America. We get our fix of in-person collaboration (and croissants) in Paris each month for 3 days, always Monday-Wednesday, with an open invitation to stay the whole week. We also do longer off-sites once a year.

Our team is a multidisciplinary blend of research, engineering, and business experts. What unites us is our deep care for what we build together. We're in a race that requires hard work, intellectual curiosity, and obsession; to balance this intensity, we've assembled a team of low-ego, kind-hearted individuals who have built the special culture Poolside has. By building collaboratively and with intention, we create a compounding effect that moves the entire company forward towards our mission: reaching AGI through intelligence systems built for software development.

ABOUT THE ROLE

You would be working in our pre-training team focused on building out distributed training of Large Language Models (LLMs). This is a hands-on role that focuses on optimizing large-scale training runs via custom kernel development. You will have access to thousands of GPUs to verify changes.

Strong engineering skills are a prerequisite. We expect deep, working knowledge of profiling tools, CUDA, and distributed training. We look for fast learners who are prepared for a steep learning curve and are not afraid to step out of their comfort zone.

YOUR MISSION

To make the training of the world's best foundational models for source code generation faster.

RESPONSIBILITIES

  • Profile large-scale training workloads and identify communication and computation bottlenecks

  • Develop custom kernels to improve training performance

  • Collaborate with researchers to make novel research ideas scale efficiently

  • Enhance and maintain our training and inference codebases

  • Write high-quality Python (PyTorch), Cython, and C/C++ code; perform refactoring where needed

  • Work closely with the team: plan next steps, discuss openly, and stay in touch

SKILLS & EXPERIENCE

  • Understanding of Large Language Models (LLMs)

    • Basic knowledge of Transformers

  • Knowledge of distributed training

  • Strong CUDA background/experience with GPU programming

    • Development experience with NCCL, CUTLASS, cuBLAS, etc.

    • Understanding of NVLink, NVSwitch, and NVSHMEM

  • Strong engineering background

  • Programming experience

    • Linux

    • Strong algorithmic skills

    • Python with PyTorch or JAX

    • C/C++

    • Fluency with modern tooling and a drive to keep improving

    • Strong critical thinking and the ability to question code-quality policies when appropriate
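
To give a flavor of the kernel-level work described above, here is a minimal, purely illustrative sketch (not Poolside's code): a fused bias-add + ReLU CUDA kernel, timed with CUDA events. Fusing two elementwise operations into one kernel saves a full pass over memory, which is exactly the kind of micro-optimization that matters at training scale. All names here are hypothetical.

```cuda
// Illustrative sketch only: fused bias-add + ReLU with cudaEvent timing.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void bias_relu(float* x, const float* bias, int n, int dim) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Fusing bias add and activation avoids a second pass over global memory.
        float v = x[i] + bias[i % dim];
        x[i] = v > 0.0f ? v : 0.0f;
    }
}

int main() {
    const int n = 1 << 24, dim = 1024;
    float *x, *bias;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&bias, dim * sizeof(float));

    // CUDA events give device-side timing without host synchronization overhead.
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    bias_relu<<<(n + 255) / 256, 256>>>(x, bias, n, dim);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("bias_relu: %.3f ms\n", ms);

    cudaFree(x);
    cudaFree(bias);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}
```

In practice, a profile from Nsight Systems or Nsight Compute, not a hand-rolled timer, would guide whether a fusion like this is worth doing.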

PROCESS

  • Intro call with one of our Founding Engineers

  • Technical Interview(s) with one of our Founding Engineers

  • Team fit call with the People team

  • Final interview with one of our Founding Engineers

BENEFITS

  • Fully remote work & flexible hours

  • 37 days/year of vacation & holidays

  • Health insurance allowance for you & dependents

  • Company-provided equipment

  • Well-being, always-be-learning & home office allowances

  • Frequent team get-togethers

  • Diverse & inclusive people-first culture
