Sr. SWE for Code Reviewing LLM Data Training (Java)
Role Overview
As a Senior Software Engineer specializing in code reviewing for AI-generated Java code, you will audit and validate evaluations made by data annotators. Your expertise will ensure adherence to quality standards in instruction-following and code functionality, directly impacting the integrity of AI training data and the quality of output from AI models.
Perks & Benefits
This remote position offers flexible work arrangements, with compensation tailored to your experience and qualifications. You'll receive weekly payments upon approval of your contributions, ensuring financial stability. The role also encourages professional growth through exposure to cutting-edge AI technologies and collaborative projects with leading AI teams.
Full Job Description
About the Company
G2i connects subject-matter experts, students, and professionals with flexible, remote AI training work such as annotation, evaluation, fact-checking, and content review. We partner with leading AI teams, and all contributions are paid weekly upon approval, ensuring consistent, reliable compensation.
About the Role
We’re hiring Code Reviewers with deep Java expertise to review evaluations completed by data annotators assessing AI-generated Java code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction-following, factual correctness, and code functionality.
Responsibilities
Review and audit annotator evaluations of AI-generated Java code.
Assess whether the Java code follows the prompt instructions, is functionally correct, and is secure.
Validate code snippets using proof-of-work methodology.
Identify inaccuracies in annotator ratings or explanations.
Provide constructive feedback to maintain high annotation standards.
Work within Project Atlas guidelines for evaluation integrity and consistency.
Required Qualifications
5–7+ years of experience in Java development, QA, or code review.
Strong knowledge of Java syntax, debugging, edge cases, and testing.
Comfortable using code execution environments and testing tools.
Excellent written communication and documentation skills.
Experience working with structured QA or annotation workflows.
English proficiency at B2 level or higher (C1, C2, or Native).
Preferred Qualifications
Experience in AI training, LLM evaluation, or model alignment.
Familiarity with annotation platforms.
Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.
Compensation
Hourly rates are personalized based on your experience level, educational background, location, and industry expertise. You’ll see your specific rate in your contract offer before signing. Rates for technical roles can vary significantly based on these factors and can be re-evaluated for different projects based on your performance and experience.