AI Researcher — Training Optimization


Role Overview

As an AI Researcher focused on training optimization, you will design and evaluate techniques that improve the efficiency and stability of large model training. In this senior-level role you will run large-scale experiments, collaborate closely with infrastructure teams, directly influence model training decisions, and publish your research findings.

Perks & Benefits

This remote position offers the flexibility to work from anywhere, alongside a small, senior team that values thoughtful innovation and deep thinking. You'll influence core training decisions, publish your research, and work with large-scale experiments in an environment built for career growth.

Full Job Description

About the Role

We’re looking for an AI Researcher focused on training optimization to help us push the efficiency, stability, and scalability of large-scale model training. You’ll work at the intersection of research and systems, developing novel techniques to reduce training cost, accelerate convergence, and improve model quality—while validating ideas through rigorous experiments and publications.

This role is ideal for someone who enjoys turning research insights into practical training wins and who has a track record of publishing applied ML research (or a strong ambition to build one).

What You’ll Work On

  • Design and evaluate training optimization techniques for large models (e.g. optimization algorithms, schedulers, normalization, curriculum strategies)

  • Improve training efficiency and stability across long runs and large datasets

  • Research and implement methods such as:

    • Optimizer and scheduler innovations

    • Mixed-precision, low-precision, and memory-efficient training (see the sketch after this list)

    • Gradient noise reduction, scaling laws, and convergence analysis

    • Training-time regularization and robustness techniques

  • Run large-scale experiments, analyze results, and translate findings into actionable improvements

  • Author or co-author research papers, technical reports, or blog posts

  • Collaborate closely with infrastructure and inference teams to ensure training decisions translate to real-world performance
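
To give a flavor of this work, here is a minimal sketch of one of the methods named above: mixed-precision training with PyTorch AMP. The model, batch, and hyperparameters are hypothetical placeholders, not our actual training setup.

    # Minimal mixed-precision loop with PyTorch AMP (hypothetical model/data).
    # Assumes a CUDA device is available.
    import torch
    from torch import nn

    model = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512)).cuda()
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
    scaler = torch.amp.GradScaler("cuda")  # scales the loss so fp16 grads don't underflow

    for step in range(100):
        x = torch.randn(32, 512, device="cuda")  # stand-in for a real batch
        optimizer.zero_grad(set_to_none=True)
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            loss = model(x).pow(2).mean()  # toy objective
        scaler.scale(loss).backward()  # backward pass on the scaled loss
        scaler.step(optimizer)         # unscales grads; skips the step on inf/nan
        scaler.update()                # adapts the scale factor for the next step

With bf16 autocast the GradScaler is unnecessary, since bf16 keeps fp32's exponent range; loss scaling matters mainly for fp16.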

What We’re Looking For

  • Strong background in machine learning research, with emphasis on training dynamics and optimization

  • Experience training large neural networks (LLMs, multimodal models, or large sequence models)

  • Publication experience in ML venues (e.g. NeurIPS, ICML, ICLR, ACL, EMNLP, COLM, arXiv) or equivalent high-quality open research

  • Solid understanding of:

    • Optimization theory and practice

    • Backpropagation, gradient flow, and training stability (a small sketch follows this list)

    • Distributed and large-batch training

  • Proficiency in Python and modern ML frameworks (PyTorch preferred)

  • Ability to independently design experiments and reason from data
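
As an illustration of the stability side of this work, here is a minimal sketch of gradient-flow instrumentation: tracking the global gradient norm and clipping it. The model, threshold, and toy objective are hypothetical.

    # Minimal gradient-flow check: log the global grad norm and clip it.
    import torch
    from torch import nn

    model = nn.Sequential(nn.Linear(256, 1024), nn.Tanh(), nn.Linear(1024, 256))
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

    for step in range(100):
        x = torch.randn(64, 256)          # stand-in for a real batch
        loss = model(x).pow(2).mean()     # toy objective
        optimizer.zero_grad(set_to_none=True)
        loss.backward()
        # clip_grad_norm_ returns the pre-clip global norm: a cheap stability signal
        grad_norm = nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        if step % 10 == 0:
            print(f"step {step}: loss={loss.item():.4f} grad_norm={grad_norm:.4f}")
        optimizer.step()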

Nice to Have

  • Experience with non-standard architectures (e.g. RNN variants, long-context models, hybrid systems)

  • Experience optimizing training on GPUs at scale (FSDP, ZeRO, custom kernels)

  • Contributions to open-source ML or research codebases

  • Comfort operating in fast-moving, ambiguous startup environments

Why This Role

  • Real influence over core model training decisions

  • Freedom to pursue and publish novel research

  • Direct access to large-scale experiments and real production constraints

  • A small, senior team that values thinking deeply and shipping thoughtfully
