Red Teaming Domain Expert - AI Training (Contract)


Role Overview

As a Red Teaming Domain Expert, you will stress-test AI models by designing creative, adversarial prompts that expose vulnerabilities such as unsafe content, bias, and broken guardrails. This mid-level contract role involves documenting experiments, collaborating with engineers and researchers, and working with potentially disturbing content in order to improve AI safety and model robustness. It requires strong ethical judgment and a focus on adversarial problem-solving in a remote team environment.

Perks & Benefits

This is a fully remote contract position with flexible scheduling, typically requiring 10–20 hours per week and some availability for evenings or weekends. You'll work with leading AI research labs, with opportunities for collaboration and feedback in a dynamic, fast-growing team. No visa sponsorship is available, and you must use your own device and have reliable smartphone access for the role.

Full Job Description

About Handshake AI

Handshake is building the career network for the AI economy. Our three-sided marketplace connects 18 million students and alumni, 1,500+ academic institutions across the U.S. and Europe, and 1 million employers to power how the next generation explores careers, builds skills, and gets hired.

Handshake AI is a human data labeling business that leverages the scale of the largest early career network. We work directly with the world’s leading AI research labs to build a new generation of human data products. From PhDs in physics to undergrads fluent in LLMs, Handshake AI is the trusted partner for domain-specific data and evaluation at scale.

This is a unique opportunity to join a fast-growing team shaping the future of AI through better data, better tools, and better systems—for experts, by experts.

Now’s a great time to join Handshake. Here’s why:

  • Leading the AI Career Revolution: Be part of the team redefining work in the AI economy for millions worldwide.

  • Proven Market Demand: Deep employer partnerships across Fortune 500s and the world’s leading AI research labs.

  • World-Class Team: Leadership from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir, just to name a few.

  • Capitalized & Scaling: $3.5B valuation from top investors including Kleiner Perkins, True Ventures, Notable Capital, and more.

About the Role

As a Red Teamer, you will stress-test AI models by intentionally trying to break them. Instead of checking whether an answer is correct, you’ll design creative, adversarial prompts that expose vulnerabilities—unsafe content, bias, broken guardrails, or unexpected behaviors. Your work directly supports AI safety and model robustness for leading research labs.

This role requires creativity, curiosity, and an ability to think like an adversary while operating with strong ethical judgment. No technical background is required. What matters most is how you think, how you write, and how you problem-solve.

This is a remote contract position with variable time commitments, typically 10–20 hours per week.

Day-to-day responsibilities include

  • Crafting creative prompts and scenarios to intentionally stress-test AI guardrails

  • Discovering ways around safety filters, restrictions, and defenses

  • Exploring edge cases to provoke disallowed, harmful, or incorrect outputs

  • Documenting experiments clearly, including what you tried and why

  • Reviewing and refining adversarial prompts generated by Fellows

  • Collaborating with engineers, tutors, and researchers to share findings and strengthen defenses

  • Working with potentially disturbing content, including violence, explicit topics, and hate speech

  • Staying current on jailbreaks, attack methods, and evolving model behaviors

Desired Capabilities

  • Strong hands-on experience using multiple LLMs

  • Intuition for crafting prompts; familiarity with jailbreak or evasion techniques is a plus

  • Creative, adversarial problem-solving skills

  • Clear and thoughtful written communication

  • Ability to tolerate emotionally heavy or graphic content

  • Curiosity, persistence, and comfort with frequent failure in experimentation

  • Strong ethical judgment and ability to separate adversarial thinking from personal values

  • Self-directed, collaborative, and comfortable in feedback-heavy environments

  • You go deep into unusual interests (fandoms, niche internet cultures, gaming exploits, Wikipedia rabbit holes, etc.)

  • You come from a creative background, such as writing or visual art

  • You are obsessed with AI and can’t stop talking about it

Extra Credit

  • Prior red teaming, moderation, or adversarial testing experience

  • Background in writing, gaming, improv, or niche internet subcultures

  • Experience documenting complex processes or research

  • Familiarity with safety, trust & safety, or digital security concepts

Additional Information

  • Engagement: Contract, remote, variable time commitment

  • Schedule: Flexibility required, with some evening or weekend availability

  • Location: Fully remote (no visa sponsorship available)

  • Technical Requirements: Personal device running Windows 10 or macOS Big Sur 11.0+ and reliable smartphone access
