Code Reviewer for LLM Data Training (R)
Role Overview
This senior-level role involves reviewing and auditing evaluations of AI-generated R code to ensure annotators follow strict quality guidelines for instruction-following, factual correctness, and functionality. As a Code Reviewer, you will validate code snippets, identify inaccuracies in ratings, and provide constructive feedback to maintain high annotation standards within a remote team focused on AI training. Your work directly impacts the integrity and consistency of AI model evaluations, contributing to the development of reliable AI systems.
Perks & Benefits
This is a fully remote position with flexible scheduling, and all approved work is paid weekly for consistent, reliable income. Time zone expectations are not specified, though some overlap with team hours may be needed for collaboration. Hourly rates are personalized to experience and performance and can be re-evaluated on different projects, offering room for growth within a culture focused on AI innovation and quality assurance.
Full Job Description
10-min AI interview, project starts Jan 29, rare languages = higher placement rates
About the Company
G2i connects subject-matter experts, students, and professionals with flexible, remote AI training opportunities, including annotation, evaluation, fact-checking, and content review. We partner with leading AI teams, and all contributions are paid weekly once approved, ensuring consistent and reliable compensation.
About the Role
We’re hiring a Code Reviewer with deep R expertise to review evaluations completed by data annotators assessing AI-generated R code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction-following, factual correctness, and code functionality.
Responsibilities
Review and audit annotator evaluations of AI-generated R code.
Assess whether the R code follows the prompt instructions, is functionally correct, and is secure.
Validate code snippets using proof-of-work methodology.
Identify inaccuracies in annotator ratings or explanations.
Provide constructive feedback to maintain high annotation standards.
Work within Project Atlas guidelines for evaluation integrity and consistency.
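To give a flavor of the snippet validation described above, here is a minimal sketch in R. The function and checks are purely illustrative assumptions, not part of Project Atlas guidelines: a reviewer executes an AI-generated snippet against known values and edge cases so the verdict is backed by run evidence rather than inspection alone.

```r
# Hypothetical AI-generated snippet under review: a function claimed to
# return the n-th Fibonacci number (the name and spec are illustrative).
fib <- function(n) {
  if (n <= 0) return(0)
  if (n == 1) return(1)
  a <- 0
  b <- 1
  for (i in 2:n) {
    tmp <- a + b
    a <- b
    b <- tmp
  }
  b
}

# Proof-of-work: execute the snippet on known values and edge cases and
# record the evidence; stopifnot() aborts with an error if any check fails.
stopifnot(fib(0) == 0, fib(1) == 1, fib(2) == 1, fib(10) == 55)
cat("All checks passed\n")
```

In practice a reviewer would attach the executed output (or a test log) to the evaluation, so that a rating of "functionally correct" can be audited against concrete results.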
Required Qualifications
5–7+ years of experience in R development, QA, or code review.
Strong knowledge of R syntax, debugging, edge cases, and testing.
Comfortable using code execution environments and testing tools.
Excellent written communication and documentation skills.
Experience working with structured QA or annotation workflows.
English proficiency at B2, C1, C2, or Native level.
Preferred Qualifications
Experience in AI training, LLM evaluation, or model alignment.
Familiarity with annotation platforms.
Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.
Compensation
Hourly rates are personalized based on your experience level, educational background, location, and industry expertise. You’ll see your specific rate in your contract offer before signing. Rates for technical roles can vary significantly based on these factors and can be re-evaluated for different projects based on your performance and experience.