Member of Engineering (Pre-training / Data)


Role Overview

This senior-level role involves hands-on work to improve the quality of pretraining datasets for large language models through synthetic data generation, data mix optimization, and training experiments. You will collaborate with teams like Pre-training and Fine-tuning, leveraging a distributed data pipeline and GPU cluster to deliver massive-scale datasets, directly impacting model performance and research initiatives.

Perks & Benefits

The position offers fully remote work with flexible hours, 37 days/year of vacation and holidays, and health insurance for you and dependents. It includes allowances for wellbeing, learning, and home office, along with company-provided equipment and frequent team gatherings, fostering a diverse, inclusive, and people-first culture across Europe and North America.


Full Job Description

ABOUT POOLSIDE

In this decade, the world will create Artificial General Intelligence. Only a small number of companies will achieve this. Their ability to stack advantages and pull ahead will define the winners. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be at the forefront of applied research, engineering, infrastructure and deployment at scale. They will continue to scale their training to larger & more capable models. They will be given the right to raise large amounts of capital along their journey to enable this. They will create powerful economic engines. They will obsess over the success of their users and customers.


poolside exists to be this company: to build a world where AI will be the engine behind economically valuable work and scientific progress.


ABOUT OUR TEAM

We are a remote-first team that sits across Europe and North America and comes together in person once a month for 3 days, and for longer offsites twice a year.

Our R&D and production teams combine research-oriented and engineering-oriented profiles; however, everyone deeply cares about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which allows us to compound our efforts.

ABOUT THE ROLE

You would be working on our data team, which is focused on the quality of the datasets delivered for training our models. This is a hands-on role where your #1 mission would be to improve the quality of the pretraining datasets by leveraging your previous experience, intuition and training experiments. This includes synthetic data generation and data mix optimization.

You would be closely collaborating with other teams like Pre-training, Fine-tuning and Product to define high-quality data both quantitatively and qualitatively.

Staying in sync with the latest research in dataset design and pretraining is key to success in this role: you would constantly be showing original research initiative through short, time-bounded experiments, along with highly technical engineering competence while deploying your solutions to production. Because the volumes of data to process are massive, you'll have a performant distributed data pipeline and a large GPU cluster at your disposal.

YOUR MISSION

To deliver massive-scale natural language and source code datasets of the highest quality for training poolside models.

RESPONSIBILITIES

  • Follow the latest research related to LLMs, and data quality in particular; be familiar with the most relevant open-source datasets and models

  • Work closely with other teams such as Pre-training, Fine-tuning or Product to ensure short feedback loops on the quality of the delivered models

  • Suggest, conduct and analyze data ablations and training experiments that aim to improve the quality of the generated datasets through quantitative insights (a toy mixture-planning sketch follows this list)
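
To give a concrete flavor of what a data-mixture experiment involves, here is a minimal, illustrative sketch in Python. All source names, token counts and weights below are hypothetical, not poolside's actual sources or budgets: given per-source token counts and candidate mixture weights, it computes how many tokens each source contributes to a training budget and how many epochs (repeats) of each source that implies.

```python
# Hypothetical per-source token counts (in billions) and candidate mixture
# weights for one ablation arm; all values are illustrative only.
available_tokens = {"web": 1200.0, "code": 400.0, "academic": 150.0, "synthetic": 40.0}
mixture_weights = {"web": 0.45, "code": 0.35, "academic": 0.10, "synthetic": 0.10}

def tokens_per_source(budget_b: float) -> dict[str, tuple[float, float]]:
    """Split a total training-token budget across sources by mixture weight
    and report how many epochs (repeats) of each source that implies."""
    plan = {}
    for src, weight in mixture_weights.items():
        wanted = budget_b * weight
        plan[src] = (wanted, wanted / available_tokens[src])
    return plan

for src, (wanted, epochs) in tokens_per_source(600.0).items():
    flag = "  <- repeats data" if epochs > 1 else ""
    print(f"{src:10s} {wanted:6.1f}B tokens ({epochs:.2f} epochs){flag}")
```

Comparing model quality across several such mixture candidates, each trained for a short, fixed budget, is the usual shape of a data-mixture ablation.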

SKILLS & EXPERIENCE

  • Strong machine learning and engineering background

  • Experience with Large Language Models (LLMs)

    • Good knowledge of Transformers is a must

    • Knowledge of or experience with cutting-edge training tricks

    • Knowledge of or experience with distributed training

    • Experience training LLMs from scratch

    • Knowledge of deep learning fundamentals

  • Experience in building trillion-scale pretraining datasets, in particular:

    • Ingesting, filtering and deduplicating large amounts of web and code data (a toy deduplication sketch appears after this list)

    • Familiarity with the concepts behind SOTA pretraining datasets: multi-linguality, curriculum learning, data augmentation, data packing, etc.

    • Running data ablations, tokenization and data-mixture experiments

    • Developing prompt engineering pipelines to generate synthetic data at scale (a prompt-pipeline sketch also follows this list)

    • Fine-tuning small models for data filtering purposes

  • Experience working with large-scale GPU clusters and distributed data pipelines

  • Strong obsession with data quality

  • Research experience

    • Authorship of scientific papers on topics such as applied deep learning, LLMs or source code generation is a nice-to-have

    • Can freely discuss the latest papers and drill down into the fine details

    • Is reasonably opinionated

  • Programming experience

    • Strong algorithmic skills

    • Linux

    • Git, Docker, k8s, managed cloud services

    • Data pipelines and queues

    • Python with PyTorch or Jax

    • Nice to have:

      • Prior experience in non-ML programming, especially in languages other than Python

      • C/C++, CUDA, Triton
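
To make the deduplication item above concrete, here is a minimal, self-contained sketch of MinHash-based near-deduplication over word shingles, a standard technique for web and code corpora. Everything here (shingle size, signature length, threshold, toy documents) is illustrative; a production pipeline at trillion-token scale would bucket signatures with locality-sensitive hashing in a distributed job rather than compare all pairs.

```python
import hashlib
from itertools import combinations

NUM_HASHES = 64   # signature length; more hashes = tighter similarity estimate
SHINGLE_N = 3     # word n-gram size for shingling

def shingles(text: str, n: int = SHINGLE_N) -> set[str]:
    """Split text into overlapping word n-grams."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def minhash(text: str) -> list[int]:
    """MinHash signature: for each of NUM_HASHES keyed hash functions,
    keep the minimum hash value over all shingles of the document."""
    sig = []
    for seed in range(NUM_HASHES):
        salt = str(seed).encode()
        sig.append(min(
            int.from_bytes(hashlib.blake2b(s.encode(), key=salt, digest_size=8).digest(), "big")
            for s in shingles(text)
        ))
    return sig

def jaccard_estimate(a: list[int], b: list[int]) -> float:
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(x == y for x, y in zip(a, b)) / NUM_HASHES

docs = {
    "d1": "the quick brown fox jumps over the lazy dog",
    "d2": "the quick brown fox jumps over the lazy dog again",
    "d3": "an entirely different document about source code",
}
sigs = {k: minhash(v) for k, v in docs.items()}
for (k1, s1), (k2, s2) in combinations(sigs.items(), 2):
    sim = jaccard_estimate(s1, s2)
    if sim > 0.7:  # near-duplicate threshold; tune per corpus
        print(f"{k1} ~ {k2}: estimated Jaccard {sim:.2f}")
```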
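
And for the synthetic-data item: the skeleton of a prompt engineering pipeline, with the model call stubbed out so the sketch stays self-contained. The template, stub and pairing scheme are hypothetical rather than poolside's actual pipeline; a real version would batch requests against an LLM endpoint and apply quality filtering to the outputs.

```python
from dataclasses import dataclass

# Hypothetical prompt template for turning seed code snippets into synthetic
# question/answer training pairs; illustrative only.
TEMPLATE = (
    "Here is a code snippet:\n{snippet}\n\n"
    "Write a short natural-language question that this snippet answers."
)

def call_model(prompt: str) -> str:
    """Stub standing in for a real LLM call; returns a canned response."""
    return "How do I reverse a list in Python?"

@dataclass
class SyntheticExample:
    prompt: str
    completion: str

def generate(seed_snippets: list[str]) -> list[SyntheticExample]:
    out = []
    for snippet in seed_snippets:
        question = call_model(TEMPLATE.format(snippet=snippet))
        # Pair the generated question with the original snippet as the answer.
        out.append(SyntheticExample(prompt=question, completion=snippet))
    return out

for ex in generate(["xs[::-1]"]):
    print(ex)
```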

PROCESS

  • Intro call with Eiso, our CTO & Co-Founder

  • Technical Interview(s) with one of our Founding Engineers

  • Team fit call with the People team

  • Final interview with one of our Founding Engineers

BENEFITS

  • Fully remote work & flexible hours

  • 37 days/year of vacation & holidays

  • Health insurance allowance for you and dependents

  • Company-provided equipment

  • Wellbeing, always-be-learning and home office allowances

  • Frequent team get-togethers

  • A great, diverse & inclusive, people-first culture
