Member of Engineering (Pre-training / Data Research)
Role Overview
This senior-level role involves hands-on work to improve the quality of pretraining datasets for AI models, focusing on synthetic data generation and data mix optimization. You will collaborate with teams like Pretraining and Product to define data needs, conduct research experiments, and deploy engineering solutions using distributed data pipelines and GPU clusters. The impact includes delivering large, high-quality datasets to enhance model capabilities and support coding agents.
Perks & Benefits
Fully remote work with flexible hours, requiring monthly in-person meetings for 3 days in Europe or North America and annual off-sites. Benefits include 37 days of vacation and holidays, health insurance allowance for dependents, company-provided equipment, and allowances for wellbeing, learning, and home office. The culture is diverse, inclusive, and people-first, with a remote-first team that values collaboration and fast-paced development.
Full Job Description
ABOUT POOLSIDE
In this decade, the world will create Artificial General Intelligence. There will only be a small number of companies who will achieve this. Their ability to stack advantages and pull ahead will define the winners. These companies will move faster than anyone else. They will attract the world's most capable talent. They will be on the forefront of applied research, engineering, infrastructure and deployment at scale. They will continue to scale their training to larger & more capable models. They will be given the right to raise large amounts of capital along their journey to enable this. They will create powerful economic engines. They will obsess over the success of their users and customers.
Poolside exists to be this company - to build a world where AI will be the engine behind economically valuable work and scientific progress.
ABOUT OUR TEAM
We are a remote-first team that sits across Europe and North America. We come together once a month in-person for 3 days, always Monday-Wednesday, with an open invitation to stay the whole week. We also do longer off-sites once a year.
Our team is a combination of more research-oriented and more engineering-oriented profiles; however, everyone deeply cares about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which allows us to compound our efforts.
ABOUT THE ROLE
You’ll be working on our data team, focused on the quality of the datasets delivered for training our models. This is a hands-on role where your #1 mission is to improve the quality of the pretraining datasets by leveraging your previous experience, intuition, and training experiments. This includes synthetic data generation and data mix optimization. You’ll closely collaborate with other teams like Pretraining, Post-training, Evals, and Product to define high-quality data needs that map to missing model capabilities and downstream use cases.
Staying in sync with the latest research in the fields of dataset design and pretraining is key to success in this role. You will regularly lead original research initiatives through short, time-bounded experiments while deploying highly technical engineering solutions into production. Because the volumes of data to process are massive, you'll have a performant distributed data pipeline and a large GPU cluster at your disposal.
YOUR MISSION
To deliver large, high-quality, and diverse datasets of natural language and source code for training poolside models and coding agents.
RESPONSIBILITIES
Follow the latest research related to LLMs and data quality in particular. Be familiar with the most relevant open-source datasets and models.
Design and implement complex pipelines that can generate large amounts of data while maintaining high diversity and making efficient use of the available resources.
Work closely with other teams such as Pretraining, Post-training, Evals, and Product to ensure short feedback loops on the quality of the models delivered.
Suggest, conduct, and analyze data ablations and training experiments that use quantitative insights to improve the quality of the generated datasets.
SKILLS & EXPERIENCE
Strong machine learning and engineering background
Experience with Large Language Models (LLM), including:
Understanding of transformer architectures and how LLMs learn
Data ablations and scaling laws
Mid-training and Post-training techniques
Training reasoning and agentic models
Experience with evals tracking model capabilities (general knowledge, reasoning, math, coding, long-context, etc.)
Experience in building trillion-token-scale pretraining datasets, and familiarity with concepts like data curation, deduplication, data mixing, tokenization, curriculum design, the impact of data repetition, etc.
Excellent programming skills in Python
Strong prompt engineering skills
Experience working with large-scale GPU clusters and distributed data pipelines
Strong obsession with data quality
Research experience:
Authorship of scientific papers on topics such as applied deep learning, LLMs, or source code generation is a nice-to-have
Can freely discuss the latest papers and dig into fine details
Is reasonably opinionated
PROCESS
Intro call with one of our Founding Engineers
Technical Interview(s) with one of our Members of Engineering
Team fit call with the People team
Final interview with one of our Founding Engineers
BENEFITS
Fully remote work & flexible hours
37 days/year of vacation & holidays
Health insurance allowance for you and dependents
Company-provided equipment
Wellbeing, always-be-learning, and home office allowances
Frequent team get-togethers
A great, diverse, and inclusive people-first culture