Senior Code Reviewer for LLM Data Training (R)

Role Overview

This senior role involves reviewing and auditing data annotators' evaluations of AI-generated R code to ensure they adhere to quality guidelines for instruction-following, correctness, and functionality. Day-to-day work includes validating code snippets, identifying inaccuracies in ratings, and providing feedback to maintain high annotation standards within a remote team focused on AI training. The hire will safeguard the integrity and consistency of AI model evaluations by ensuring code assessments are accurate and reliable.

Perks & Benefits

The role is fully remote, with contributions paid weekly once approved, ensuring reliable income. It offers room for growth through performance-based rate re-evaluation and exposure to AI training projects, within a collaborative culture focused on quality and structured workflows. Hours appear flexible, as is typical of remote tech roles, though time zone expectations are not specified.

Full Job Description

About the Company

G2i connects subject-matter experts, students, and professionals with flexible, remote AI training opportunities, including annotation, evaluation, fact-checking, and content review. We partner with leading AI teams, and all contributions are paid weekly once approved, ensuring consistent and reliable compensation.

About the Role

We’re hiring a Code Reviewer with deep R expertise to review evaluations completed by data annotators assessing AI-generated R code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction-following, factual correctness, and code functionality.

Responsibilities

  • Review and audit annotator evaluations of AI-generated R code.

  • Assess whether the R code follows the prompt instructions, is functionally correct, and is secure.

  • Validate code snippets using a proof-of-work methodology (an illustrative sketch follows this list).

  • Identify inaccuracies in annotator ratings or explanations.

  • Provide constructive feedback to maintain high annotation standards.

  • Work within Project Atlas guidelines for evaluation integrity and consistency.
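
As a rough illustration of the kind of proof-of-work check this can involve, the sketch below shows a reviewer executing a snippet and verifying instruction-following and edge-case behaviour rather than relying on the annotator's rating alone. The function safe_divide(), the assumed prompt, and the test cases are hypothetical examples, not part of Project Atlas guidelines.

```r
# Hypothetical snippet under review: the (assumed) prompt asks for a division
# helper that returns NA for a zero or missing divisor.
safe_divide <- function(x, y) {
  if (is.na(y) || y == 0) return(NA_real_)
  x / y
}

# Reviewer's proof of work: run the snippet and check basic correctness plus
# the edge cases the annotator's rating should have covered.
stopifnot(
  safe_divide(10, 2) == 5,    # basic correctness
  is.na(safe_divide(1, 0)),   # stated edge case: division by zero
  is.na(safe_divide(1, NA))   # missing divisor handled as specified
)
```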

Required Qualifications

  • 5–7+ years of experience in R development, QA, or code review.

  • Strong knowledge of R syntax, debugging, edge cases, and testing.

  • Comfortable using code execution environments and testing tools.

  • Excellent written communication and documentation skills.

  • Experience working with structured QA or annotation workflows.

  • English proficiency at B2, C1, C2, or Native level.

Preferred Qualifications

  • Experience in AI training, LLM evaluation, or model alignment.

  • Familiarity with annotation platforms.

  • Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation

Hourly rates are personalized to your experience level, educational background, location, and industry expertise; you'll see your specific rate in your contract offer before signing. Rates for technical roles can vary significantly with these factors and may be re-evaluated for other projects based on your performance and experience.
