Member of Technical Staff - Secure Intelligence Institute

Role Overview

This senior-level role involves conducting original research on security and privacy for frontier AI systems, focusing on threat modeling, developing defenses, and building evaluation frameworks. As a member of the Secure Intelligence Institute, you'll work independently and collaboratively to translate research into practical improvements for Perplexity's products, directly impacting user protection and system hardening.

Perks & Benefits

This is a fully remote position, likely with flexible hours, though some overlap with the team's time zones may be expected for collaboration. It offers opportunities for career growth through publishing at premier conferences, collaborating with top-tier researchers, and contributing to the broader security community in a fast-paced, innovative environment.

Full Job Description

Perplexity is seeking energetic researchers and engineers to join our Secure Intelligence Institute (SII), Perplexity's flagship research center for advancing security, privacy, and trust in frontier intelligence. SII’s goals are to advance frontier AI security research, translate those advances into concrete improvements in Perplexity's systems, and share knowledge and resources that strengthen the broader AI ecosystem.

As a member of SII, you'll conduct original and impactful research on improving the security and privacy of frontier intelligence systems. Your goal will be to conduct research that is not only rigorous in theory, but practical enough to improve the systems people rely on every day. This work will be informed by the realities of operating general-purpose AI systems used by millions of people and thousands of enterprises, and you'll be expected to translate both your own research and advances from the broader community to practical improvements that protect and defend Perplexity's users.

Responsibilities

  • Develop threat models for emerging attack surfaces in AI-native products, including browser, search, and autonomous agents.

  • Identify and analyze security and privacy threats across AI systems, infrastructure, and user-facing products.

  • Develop novel defenses, mitigations, and detection mechanisms for security and privacy in AI-native products.

  • Build security evaluation frameworks, benchmarks, and datasets to measure the effectiveness of different defense mechanisms.

  • Partner with Perplexity’s Security Engineering team to translate state-of-the-art research into shipped security features and hardened system architectures.

  • Collaborate with top-tier academic and industry researchers in SII's external research network.

  • Publish findings at premier venues and contribute to the broader security research community.

Qualifications

  • A PhD (or equivalent research experience) in Computer Science, Computer Engineering, or a related field, with a primary focus on security and/or privacy.

  • Experience publishing at top security conferences (IEEE S&P, USENIX Security, ACM CCS, NDSS), demonstrating original, impactful research contributions.

  • Deep expertise in one or more of: security of agentic systems, systems security, web and application security, program analysis, and software security.

  • Proficiency in Python (bonus points for TypeScript, Go, and/or Rust).

  • Ability to operate with high independence, willingness to dive in and take ownership, and comfort in a fast-paced environment where research directly informs product.

  • Clear and concise communication, translating complex attack narratives into actionable insights for engineering and leadership.
