AI Inference Engineer
Role Overview
As an AI Inference Engineer at Perplexity, you will develop the APIs for AI inference used by internal and external customers and ensure that machine learning models perform well in real time. This mid-level role centers on system reliability and optimization, giving you a direct impact on the efficiency of AI deployments and on cutting-edge LLM technologies.
Perks & Benefits
This remote position offers the flexibility to work from anywhere. You will join a dynamic team that values innovation and collaboration, with equity offered in addition to a competitive salary. The role offers room to grow in a fast-evolving sector, with encouragement to explore novel research and keep learning.
Full Job Description
We are looking for an AI Inference Engineer to join our growing team. We build and run the inference engine behind every Perplexity query and deploy dozens of model architectures at scale with tight latency and cost budgets. Our stack is Rust, Python, CUDA, and CuTe DSL.
Responsibilities:
New model support. Bring transformer-based retrieval, text-generation, and multimodal models into our inference infrastructure, from weight loading, request scheduling, and KV-cache management through to support in the API Gateway.
GPU kernel migration to CuTe DSL. Port our in-house CUDA kernels to NVIDIA's CuTe DSL so they run on GB200 today and are portable to Vera Rubin racks tomorrow.
Rust-native serving runtime. Develop our internal Rust-based inference server to eliminate Python's pain points and keep up with rapidly growing traffic.
Performance optimization. Profile and fix bottlenecks from network ingress through continuous batching and GPU kernel interleaving (a simplified scheduling sketch follows this list).
Reliability and observability. Build dashboards, alerts, and automated remediation so we catch regressions before users do. Respond to and learn from production incidents.
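To give a flavor of the batching and KV-cache work above, here is a minimal, illustrative Python sketch of a continuous-batching scheduler. The Request, Scheduler, and decode_fn names are hypothetical, the block accounting is deliberately simplified, and our actual runtime is Rust-based; treat this as a sketch of the idea rather than our implementation.

```python
from collections import deque
from dataclasses import dataclass, field

EOS = 0  # hypothetical end-of-sequence token id


@dataclass
class Request:
    prompt: list[int]
    max_new_tokens: int
    output: list[int] = field(default_factory=list)

    def finished(self) -> bool:
        return (len(self.output) >= self.max_new_tokens
                or (len(self.output) > 0 and self.output[-1] == EOS))


class Scheduler:
    """Admit requests while KV-cache blocks are free, decode one token per
    running request per step, and free blocks when requests finish."""

    def __init__(self, total_blocks: int, block_size: int = 16):
        self.free_blocks = total_blocks
        self.block_size = block_size
        self.waiting: deque[Request] = deque()
        self.running: list[Request] = []

    def _blocks(self, tokens: int) -> int:
        return -(-tokens // self.block_size)  # ceiling division

    def submit(self, req: Request) -> None:
        self.waiting.append(req)

    def step(self, decode_fn) -> list[Request]:
        # Admit waiting requests only if their worst-case KV footprint fits.
        while self.waiting:
            need = self._blocks(len(self.waiting[0].prompt)
                                + self.waiting[0].max_new_tokens)
            if need > self.free_blocks:
                break
            self.free_blocks -= need
            self.running.append(self.waiting.popleft())

        if not self.running:
            return []

        # decode_fn stands in for the batched forward pass: it returns one
        # next token per running request.
        for req, token in zip(self.running, decode_fn(self.running)):
            req.output.append(token)

        finished = [r for r in self.running if r.finished()]
        for req in finished:
            self.free_blocks += self._blocks(len(req.prompt) + req.max_new_tokens)
            self.running.remove(req)
        return finished
```

A real scheduler also handles prefill/decode interleaving, preemption, and paged KV blocks shared across sequences; this only shows the admit-decode-retire loop.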
Who we're looking for:
Deep experience with GPU programming and performance work (CUDA, Triton, CUTLASS, or similar). Any other deep systems programming experience is a plus.
You understand modern LLM architectures and are able to bring them up reliably in a production environment.
You've built and operated production distributed systems under real load, ideally performance-critical ones.
Comfortable working across languages and layers: Rust for the serving runtime, Python for model code, CUDA/CuTe DSL for kernels.
You own problems end-to-end. You can read a research paper on Monday, write a kernel on Wednesday, and debug a production incident on Friday.
Self-directed. You do well in fast-moving environments where the path forward isn't laid out for you.
Nice-to-have:
ML compilers and framework internals: PyTorch internals, torch.compile, custom operators.
Distributed GPU communication: NCCL, NVLink, InfiniBand, RDMA libraries, model/tensor parallelism.
Low-precision inference: INT8/FP8/FP4 quantization, mixed-precision serving (a simplified quantization sketch follows this list).
Profiling and debugging tools: Nsight Compute/Systems, CUDA-GDB, PTX/SASS analysis.
Container orchestration: Kubernetes, GPU scheduling, autoscaling inference workloads.
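As one concrete illustration of the low-precision item above, here is a minimal per-channel symmetric INT8 weight-quantization sketch in PyTorch. It is illustrative only: the function names are hypothetical, and a real serving path would use fused INT8/FP8 GEMM kernels rather than dequantizing back to floating point as done here.

```python
import torch


def quantize_weight_int8(w: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """w: [out_features, in_features] weight.
    Returns int8 weights plus one floating-point scale per output channel."""
    scale = w.abs().amax(dim=1, keepdim=True) / 127.0
    scale = scale.clamp(min=1e-8)  # avoid division by zero for all-zero rows
    q = torch.clamp((w / scale).round(), -128, 127).to(torch.int8)
    return q, scale


def int8_linear(x: torch.Tensor, q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Reference (unfused) INT8 linear: dequantize, then matmul."""
    w_deq = q.to(x.dtype) * scale.to(x.dtype)
    return x @ w_deq.t()


if __name__ == "__main__":
    w = torch.randn(1024, 1024)
    x = torch.randn(8, 1024)
    q, scale = quantize_weight_int8(w)
    err = (int8_linear(x, q, scale) - x @ w.t()).abs().mean()
    print(f"mean abs error after INT8 round-trip: {err.item():.4f}")
```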
Qualifications:
3+ years of professional software engineering experience with meaningful work on ML inference or high-performance systems.
Familiarity with at least one deep learning framework (PyTorch, JAX, TensorFlow).
Understanding of GPU architectures (memory hierarchy, warp scheduling, tensor cores).
Understanding of common LLM architectures and inference optimization techniques, e.g., quantization, speculative decoding, and prefill-decode disaggregation (a simplified speculative-decoding sketch follows this list).
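To make the speculative-decoding item above concrete, here is a minimal greedy speculative-decoding sketch in Python. draft_model and target_model are hypothetical callables that return next-token logits for a whole sequence; production implementations add KV-cache reuse and probabilistic acceptance rather than this greedy argmax check.

```python
import torch


def speculative_step(prefix: list[int], draft_model, target_model, k: int = 4) -> list[int]:
    # 1. Draft k tokens autoregressively with the cheap model.
    drafted = []
    seq = list(prefix)
    for _ in range(k):
        logits = draft_model(torch.tensor([seq]))       # [1, len(seq), vocab]
        tok = int(logits[0, -1].argmax())
        drafted.append(tok)
        seq.append(tok)

    # 2. Verify all drafted positions with one target-model forward pass.
    logits = target_model(torch.tensor([seq]))          # [1, len(seq), vocab]
    accepted = []
    for i, tok in enumerate(drafted):
        pos = len(prefix) + i - 1                       # logits at pos predict token pos + 1
        target_tok = int(logits[0, pos].argmax())
        if target_tok != tok:
            accepted.append(target_tok)                 # take the correction and stop
            break
        accepted.append(tok)
    else:
        # All drafts accepted: the target's final logits yield one bonus token.
        accepted.append(int(logits[0, -1].argmax()))
    return accepted
```

The win comes from verifying k drafted tokens with a single target-model forward pass instead of k sequential decode steps.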
Final offer amounts are determined by multiple factors including experience and expertise.
Equity: In addition to the base salary, equity may be part of the total compensation package.