Research Engineer – Benchmarking, Evals & Failure Analysis
Role Overview
This senior-level research engineer role involves designing and implementing benchmarking pipelines, evaluation systems, and failure analysis workflows for frontier language models. You'll work closely with AI researchers and applied teams to measure tool use, agentic behavior, and real-world reasoning, directly impacting model training and improvement. The position requires strong ownership in a fast-paced, in-person environment at the company's San Francisco headquarters.
Perks & Benefits
The role is in-person five days a week at the company's San Francisco headquarters, with a $10K housing bonus for living within 0.5 miles of the office. Benefits include a generous equity grant, a $1.5K monthly meal stipend, a free Equinox membership, and health insurance, reflecting a high-intensity, high-ownership culture focused on AI advancement.
Full Job Description
About Mercor
Mercor is defining the future of work. We partner with leading AI labs and enterprises to provide the human intelligence essential to AI development.
Our vast talent network trains frontier AI models in the same way teachers teach students: by sharing knowledge, experience, and context that can't be captured in code alone. Today, more than 30,000 experts in our network collectively earn over $2 million a day.
Mercor is creating a new category of work where expertise powers AI advancement. Achieving this requires an ambitious, fast-paced, and deeply committed team. You’ll work alongside researchers, operators, and AI companies at the forefront of shaping the systems that are redefining society.
Mercor is a profitable Series C company valued at $10 billion. We work in-person five days a week in our new San Francisco headquarters.
About the Role
As a Research Engineer at Mercor, you’ll work at the intersection of engineering and applied AI research. You’ll own benchmarking pipelines, evaluation systems, and failure analysis workflows that directly inform how we train and improve frontier language models.
Your work will define how we measure tool use, agentic behavior, and real-world reasoning. You’ll design and run evals, build rubrics and scorers, and turn failure analysis into actionable improvements for post-training, RLVR, and data pipelines.
What You’ll Do
Benchmarking: Design, implement, and maintain benchmarks and metrics for tool use, agentic behavior, and real-world reasoning; ensure benchmarks scale with training and stay aligned with product and research goals.
Evaluation systems: Build and operate LLM evaluation systems end to end (runs, scoring, dashboards, and reporting) so researchers and applied AI teams can track model performance and compare runs at scale; a minimal scoring sketch follows this list.
Failure analysis: Run systematic failure analysis on model outputs (e.g., wrong tool use, reasoning errors, safety/alignment issues); categorize failure modes, quantify prevalence, and feed findings into reward design, data curation, and benchmark design (see the prevalence-tally sketch after this list).
Rubrics and evaluators: Create and refine rubrics, automated evaluators, and scoring frameworks that drive training and evaluation decisions; balance rigor with scalability (human vs. model-as-judge, calibration, agreement; an agreement sketch also follows this list).
Data quality and usability: Quantify data usability, quality, and impact on key benchmarks; use evals and failure analysis to guide data generation, augmentation, and curation.
Cross-team collaboration: Work with AI researchers, applied AI teams, and data producers to align evals with training objectives and to prioritize benchmarks and failure analyses that matter most.
Ownership in a fast-paced environment: Operate in a high-iteration research setting with strong ownership of benchmarks, evals, and failure-analysis workflows.
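To make the evaluation-systems work above concrete, here is a minimal sketch of a model-as-judge scoring harness. Everything in it is an assumption for illustration: the rubric criteria and weights are placeholders, and `call_judge` stands in for whatever LLM API the team actually uses, not any specific vendor client.

```python
import json
from dataclasses import dataclass

# Hypothetical rubric: criteria and weights are illustrative placeholders.
RUBRIC = {
    "correct_tool_choice": 0.4,
    "valid_arguments": 0.3,
    "grounded_final_answer": 0.3,
}

@dataclass
class EvalResult:
    task_id: str
    scores: dict          # criterion -> 0/1 judgment from the judge model
    weighted_score: float

def judge(task_id: str, transcript: str, call_judge) -> EvalResult:
    """Score one transcript against the rubric with a model-as-judge.

    `call_judge` is a stand-in for any prompt -> JSON-string LLM call;
    it is an assumption, not a real API.
    """
    prompt = (
        "Judge the transcript on each criterion with 0 or 1. "
        f"Criteria: {list(RUBRIC)}. Reply as JSON.\n\n{transcript}"
    )
    scores = json.loads(call_judge(prompt))
    weighted = sum(RUBRIC[c] * scores.get(c, 0) for c in RUBRIC)
    return EvalResult(task_id, scores, weighted)

if __name__ == "__main__":
    # Stub judge for demonstration; a real harness would call an LLM here.
    stub = lambda p: '{"correct_tool_choice": 1, "valid_arguments": 1, "grounded_final_answer": 0}'
    print(judge("demo-task", "model transcript ...", stub))
```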
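Likewise, the failure-analysis bullet centers on quantifying how common each failure mode is. A minimal prevalence tally might look like the following; the failure taxonomy and input format are hypothetical, with labels assumed to come from human review or an automated classifier.

```python
from collections import Counter

def failure_prevalence(labeled_outputs):
    """Tally failure modes and report prevalence over all graded outputs.

    `labeled_outputs` is a list of (task_id, failure_mode_or_None) pairs;
    None means the output passed. The format is an assumption.
    """
    total = len(labeled_outputs)
    if total == 0:
        return {}
    counts = Counter(mode for _, mode in labeled_outputs if mode is not None)
    return {
        mode: {"count": n, "prevalence": n / total}
        for mode, n in counts.most_common()
    }

if __name__ == "__main__":
    sample = [
        ("t1", "wrong_tool"), ("t2", None), ("t3", "reasoning_error"),
        ("t4", "wrong_tool"), ("t5", None),
    ]
    print(failure_prevalence(sample))
```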
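Finally, the rubrics bullet mentions checking agreement between human graders and a model-as-judge. One standard way to do that for binary labels is Cohen's kappa, sketched below; the 0/1 label lists are assumed inputs.

```python
def cohens_kappa(human, model):
    """Cohen's kappa agreement between two equal-length lists of 0/1 labels,
    e.g. human grades vs. model-as-judge grades on the same transcripts."""
    assert len(human) == len(model) and human
    n = len(human)
    # Observed agreement rate.
    po = sum(h == m for h, m in zip(human, model)) / n
    # Chance agreement rate from each rater's marginal label frequencies.
    p_yes = (sum(human) / n) * (sum(model) / n)
    p_no = ((n - sum(human)) / n) * ((n - sum(model)) / n)
    pe = p_yes + p_no
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

if __name__ == "__main__":
    print(cohens_kappa([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))  # high agreement
```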
What We’re Looking For
Strong applied research background, with a focus on model evaluation, benchmarking, and/or failure analysis.
Strong coding skills and hands-on experience with ML models and evaluation code.
Solid grasp of data structures, algorithms, and backend systems.
Comfort with APIs, SQL/NoSQL, and cloud platforms for running and storing eval results.
Ability to reason about model behavior, experimental results, and data quality from evals and failure analyses.
Excitement to work in person in San Francisco five days a week in a high-intensity, high-ownership environment.
Nice To Have
Industry experience on a post-training or evaluation/benchmarking team (highest priority).
Publications at top-tier venues (NeurIPS, ICML, ACL), especially in evaluation or benchmarking.
Experience building or running LLM evaluations, benchmarks, or failure-analysis pipelines.
Experience with synthetic data generation, rubric design, or RL-style workflows that use evals for reward shaping.
Work samples or code (e.g., eval frameworks, benchmark suites, failure-analysis reports or tooling) that demonstrate relevant skills.
Benefits
Generous equity grant vested over 4 years
A $10K housing bonus (if you live within 0.5 miles of our office)
A $1.5K monthly stipend for meals
Free Equinox membership
Health insurance