Research Engineering Manager - Model Training

Role Overview

This senior-level role leads a team of AI researchers and engineers developing state-of-the-art models for Perplexity's products, with a focus on training and fine-tuning large language models using techniques such as supervised learning and reinforcement learning. Day-to-day responsibilities include hands-on technical contributions, managing data and training pipelines, designing evaluations, and collaborating with engineering teams to integrate models into production. The role carries significant impact: driving model performance improvements that enhance user experience and support business growth.

Perks & Benefits

The job is fully remote, offering flexibility in location, with an expectation of collaboration across time zones in a fast-paced tech environment. It offers career growth through leadership in cutting-edge AI research and development, within a culture that values scientific rigor, iteration velocity, and tackling hard problems in AI quality and safety. Benefits are not itemized in the listing but may include typical tech perks such as professional development support.

Full Job Description

Perplexity is seeking a Research Engineering Manager to lead the team of all-star AI researchers and engineers responsible for developing the models that drive our products. Our team has developed some of the most advanced models for agentic research, query understanding, and other domains that require accuracy and depth. As we expand our user base and portfolio of product surfaces, our in-house models are increasingly critical to providing a premium, high-taste experience for the world’s most sophisticated users.

You will dive into our rich datasets of conversational and agentic queries, leveraging cutting‑edge training techniques to scale AI model performance. Through hands-on technical and organizational leadership, you will empower your team to develop SotA models for the use cases that matter most to our business and our users.

Responsibilities

  • Lead a team of researchers and engineers focused on training SotA models for Perplexity-relevant use cases, leveraging the latest supervised and reinforcement learning techniques.

  • Drive research and engineering efforts to develop production models through advanced model training and alignment techniques, including RL, SFT, and other approaches.

  • Become deeply familiar with the team’s technical stack, leading from the front through hands-on technical contributions.

  • Own the data, training, and evaluation pipelines required to train and continuously improve LLMs.

  • Design and iterate on model training and finetuning algorithms (e.g., preference‑based methods, reinforcement learning from human or AI feedback) through an approach that balances scientific rigor and iteration velocity.

  • Design evaluations and improve the production model training pipeline to reliably deliver models that lie on the Pareto frontier of speed and quality.

  • Work closely with engineering teams to integrate in-house models into our product and rapidly iterate based on real‑world usage.

  • Manage day‑to‑day execution, project planning, and prioritization for the model training team to hit ambitious quality and performance goals.

Qualifications

  • Proven experience with large-scale LLMs and Deep Learning systems.

  • Strong Python and PyTorch skills; versatility across languages and frameworks is a plus.

  • Experience leading or managing research or engineering teams working on large-scale AI model development, including driving complex projects from idea to production.

  • Self‑starter with a willingness to take ownership of tasks and navigate ambiguity in a fast‑moving environment.

  • Passion for tackling challenging problems in AI model quality, speed, safety, and reliability.

  • 10+ years of technical experience, with at least 2 of those years as a manager and at least 4 of those years working on large-scale AI model development.

Nice-to-have

  • PhD in Machine Learning or related areas.

  • Experience training very large Transformer-based models with techniques such as SFT, DPO, GRPO, RLHF‑style methods, or related preference‑based optimization approaches.

  • Prior experience designing evaluations and production training pipelines for large‑scale models in a high‑growth environment.
