AI Security Engineer

Role Overview

This senior-level AI Security Engineer role involves designing and implementing security mechanisms for AI systems, including self-hosted models and LLM APIs, to protect against adversarial threats. Day-to-day responsibilities include hands-on remediation, threat modeling, and conducting security assessments in a cross-functional team environment, ensuring the AI stack is secure by default.

Perks & Benefits

The role is fully remote, offering flexibility in location. It provides opportunities for career growth through technical leadership and project ownership, with an emphasis on collaboration and continuous learning in AI security.

Full Job Description

Perplexity is seeking a highly skilled, experienced, and hands-on AI Security Engineer to join our security team, driving the protection of next-generation AI systems against adversarial threats. In this role, you'll design and implement robust mechanisms to secure self-hosted models, LLM APIs, agents, MCPs, and the core AI stack. You'll empower developers with tools, guidance, and direct technical contributions, enabling innovation while ensuring AI security is strong by default.

Our tech stack includes Python, Next.js, TypeScript, Docker, AWS, Kubernetes, and PostgreSQL.

Responsibilities

  • Define, build, and refine mechanisms to secure AI systems (including self-hosted models, LLM APIs, agents, MCPs, and other core components of the AI stack) against adversarial behavior of all kinds

  • Understand technically complex AI systems, identify potential weaknesses in their architecture, and implement improvements

  • Spend at least 50% of your time performing hands-on remediation, working closely with peer engineers to drive remediations

  • Plan and carry out threat modeling activities and realistic threat simulations across our offerings

  • Conduct cybersecurity evaluations and lead AI security assessments in a cross-functional environment

  • Develop initiatives that improve our capabilities to effectively evaluate AI systems and enhance the organization's prevention, detection, response, and threat hunting capabilities

  • Provide guidance and education to developers to help deter and prevent threats

Qualifications

  • Hands-on coding and prompting experience

  • Bachelor of Science or Master of Science in Computer Science or a related field, or equivalent experience

  • Technical and process subject-matter expertise in AI security services and attacker tactics, techniques, and procedures (TTPs)

  • Good understanding of LLMs, AI architecture patterns, machine learning models, and related technologies such as MCP

  • Understanding of application security principles and secure coding practices

  • Experience developing and implementing security procedures and policies

  • Strong problem-solving, project management, leadership, and communication skills

  • Self-motivated with a willingness to take ownership of tasks

  • 4+ years of industry experience
