AI Researcher — Distillation


Role Overview

This senior-level AI Researcher role focuses on model distillation: developing efficient, high-performance models by designing and evaluating techniques such as teacher–student training and representation matching. You will work closely with engineers on a small, highly technical team to productionize research, run large-scale experiments, and publish papers in top-tier venues, directly shaping the deployment of smaller, faster AI systems. The role involves researching tradeoffs between model size, latency, and accuracy, with real ownership over research direction and a tight feedback loop between research and real-world applications.

Perks & Benefits

The role is fully remote with no explicit time zone restrictions, and offers strong support for publishing and open research, including opportunities to contribute to internal notes, technical blogs, and open-source projects. You'll have access to meaningful compute resources and production-scale problems, working alongside a small, highly technical team that values deep ML and systems expertise and a culture focused on real-world impact. Career growth is supported through publishing at top-tier conferences and a Series A environment that encourages ownership and applied research.

Full Job Description

About the Role

We’re looking for an AI Researcher focused on model distillation to help us push the frontier of efficient, high-performance models. You’ll work on turning large, expensive models into smaller, faster, and more deployable systems—while maintaining or improving quality.

This role is ideal for someone who enjoys publishing research, working close to real systems, and seeing their ideas move from papers → code → production.

What You’ll Work On

  • Design and evaluate model distillation techniques (teacher–student training, self-distillation, layer-wise distillation, representation matching, etc.)

  • Research tradeoffs between model size, latency, memory, and accuracy

  • Develop novel distillation approaches for:

    • Large language models

    • Long-context or specialized architectures

    • Inference-constrained environments

  • Run large-scale experiments and ablations; analyze results rigorously

  • Collaborate with engineers to productionize research outcomes

  • Write and submit research papers to top-tier venues (NeurIPS, ICML, ICLR, COLM, etc.)

  • Contribute to internal research notes, technical blogs, and open-source projects when appropriate
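To make the teacher–student technique named above concrete, here is a minimal NumPy sketch of the classic softened-softmax distillation loss (Hinton et al.'s formulation). This is an illustrative assumption about the general approach, not the team's actual codebase; all function names here are hypothetical.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T produces a softer distribution.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Weighted sum of a soft-target term (match the teacher's
    temperature-softened distribution) and a hard-label cross-entropy
    term. `alpha` balances the two."""
    # Soft cross-entropy against the teacher's softened outputs.
    # (Equivalent to KL divergence up to the teacher's entropy,
    # which is constant with respect to the student.)
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature) + 1e-12)
    kd = -(p_teacher * log_p_student).sum(axis=-1).mean()
    # Scale by T^2 so gradient magnitudes stay comparable as T varies.
    kd *= temperature ** 2
    # Standard cross-entropy against ground-truth labels (T = 1).
    log_p_hard = np.log(softmax(student_logits) + 1e-12)
    ce = -log_p_hard[np.arange(len(labels)), labels].mean()
    return alpha * kd + (1 - alpha) * ce
```

In practice the same loss is usually written against framework primitives (e.g. a log-softmax plus a KL term in PyTorch), with the temperature and mixing weight tuned per task.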

What We’re Looking For

Required

  • Strong background in machine learning research

  • Hands-on experience with model distillation or closely related topics (compression, pruning, quantization, representation learning)

  • Publication experience (conference or journal papers, workshop papers, or arXiv preprints)

  • Solid understanding of deep learning fundamentals (optimization, training dynamics, generalization)

  • Fluency in PyTorch (or equivalent) and research-grade experimentation

  • Ability to clearly communicate research ideas, results, and limitations

Nice to Have

  • Experience distilling large language models

  • Work on efficiency-focused research (latency, memory, throughput)

  • Experience with long-context models or non-Transformer architectures

  • Open-source contributions in ML or research tooling

  • Prior startup or applied research experience

Why Join Us

  • Real ownership over research direction at a Series A stage

  • Strong support for publishing and open research

  • Tight feedback loop between research and real-world deployment

  • Access to meaningful compute and production-scale problems

  • Small, highly technical team with deep ML and systems expertise

Example Backgrounds

  • ML researchers from academia transitioning to industry

  • Research engineers with published work in model efficiency

  • Recent PhDs, postdocs, or industry researchers who still want to publish
