Computer Vision PhD Intern (Summer 2026)
Role Overview
This internship is for a PhD student to conduct cutting-edge research in deepfake image and video detection, collaborating with the AI team to develop new methods and publish peer-reviewed papers. Day-to-day tasks include implementing and evaluating deep learning models using Python and PyTorch, and writing up research findings for internal reports and academic submissions. The role is at a junior level, focusing on high-impact research to advance the company's mission in cybersecurity.
Perks & Benefits
The internship is fully remote, with the option to work from the New York City HQ. It offers opportunities for career growth through publishing research in top-tier venues and collaborating with experienced researchers. The culture is mission-driven, focused on combating malicious AI-generated media, and the role includes access to modern deep learning tools and GPU-enabled cloud compute on AWS or GCP.
Full Job Description
Who we are.
Reality Defender is an award-winning cybersecurity company helping enterprises and governments detect deepfakes and AI-generated media. Utilizing a patented multi-model approach, Reality Defender is robust against the bleeding edge of generative platforms producing video, audio, imagery, and text media. Reality Defender's API-first deepfake detection platform empowers teams and developers alike to identify fraud, disinformation campaigns, and harmful deepfakes in real time.
Backed by world-class investors including DCVC, Illuminate Financial, Y Combinator, Booz Allen Hamilton, IBM, Accenture, Rackhouse, and Argon VC, Reality Defender works with leading enterprise clients, financial institutions, and governments to ensure AI-generated media is not used for malicious purposes.
YouTube: Reality Defender Wins RSA Most Innovative Startup
The Computer Vision Internship.
This 3-month internship is designed for current PhD students to partner with Reality Defender's AI team, conduct cutting-edge research, and publish peer-reviewed papers. Your primary collaborator will be Jacob Seidman, who will guide and advise your work on deepfake image and video detection. The internship can be performed remotely, although you're welcome to work from our HQ in New York City.
What you'll do.
Investigate new methods for generative image/video detection.
Collaborate with researchers on the team.
Conduct research on deepfake image/video detection.
Write up research results for internal reports and for submission to academic journals and workshops.
Independently implement and evaluate ideas on a modern deep learning stack: Python, PyTorch, and GPU-enabled cloud compute such as AWS/GCP.
Who you are.
PhD student in a relevant technical field.
Experience in computer vision.
Proficient in Python and in building deep learning models with PyTorch.
Published peer-reviewed research papers in reputable computer vision venues, e.g., CVPR, ICCV, or NeurIPS.
Excited about Reality Defender's mission to build a best-in-class and comprehensive deepfake and AI-generated media detection platform.
Available to start a research project in Summer of 2026.