GenAI Security Engineer
Role Overview
As a GenAI Security Engineer at Point72, you will design and implement advanced security measures for generative AI systems, focusing on model usage, API integrations, and incident response. This senior-level role requires expertise in AI threat modeling and risk assessment, ensuring the safety and integrity of AI applications. You will collaborate with various technology teams to maintain security standards and respond to AI-related security events.
Perks & Benefits
Point72 fosters a culture of professional development, encouraging team members to enhance their technical skills and contribute innovative ideas. While the role is based in New York, a flexible work environment typical of remote tech positions may be available. Employees can expect career growth opportunities and engagement in cutting-edge AI projects that deliver real impact for a multi-billion-dollar global business.
Full Job Description
A Career with Point72's Technology Group
As Point72 reimagines the future of investing, our Technology team is constantly evolving our firm's IT infrastructure and engineering capabilities, positioning us at the forefront of a rapidly evolving technology landscape. We're a team of experts who experiment and work to discover new ways to harness open-source solutions, modern cloud architectures, and sophisticated Artificial Intelligence (AI) solutions, while embracing enterprise agile methodologies. Our commitment to building and innovating in the AI space provides the framework intended to drive smarter decision making and enhance how we build and operate our platforms and applications. As a member of Point72's Technology team, we encourage and support your professional development from day one—helping you advance your technical skills, contribute innovative ideas, and satisfy your own intellectual curiosity—all while delivering real business impact for our multi-billion-dollar global business.

What you'll do
As a GenAI Security Engineer, you will develop and implement next-generation security controls to protect the firm's agentic and human-in-the-loop GenAI systems.
Specifically, you will:
- Build and run generative AI (GenAI) security controls for applications and platforms, including guardrails for model usage and API integrations.
- Secure agent/tool-calling and connector workflows, such as MCP or equivalent, to prevent tool abuse and data exfiltration.
- Lead AI threat modeling and risk assessments, maintaining threat models for prompt injection, jailbreaks, tool injection, data exfiltration, training data leakage, and supply chain risks, and driving mitigations.
- Define secure-by-default reference architectures for cloud-native and hybrid GenAI workloads, including network isolation and secrets handling.
- Develop and continuously improve monitoring and detection for anomalous AI behavior and unsafe outputs.
- Lead incident response and remediation for security events involving AI applications and/or data breaches.
- Translate policy and regulatory requirements into implementation, including governance artifacts, evidence collection, control testing, and audit-ready documentation.
- Act as the GenAI security SME with other internal Technology teams, Compliance, and business stakeholders, staying current on evolving threats.

What's required
- 6+ years of software engineering experience with strong coding experience in one or more general-purpose languages, such as Python, Go, and/or Java.
- Experience building containeriz