AI Inference Engineer


Role Overview

As an AI Inference Engineer at Perplexity, you will develop APIs for AI inference, improve system reliability, and optimize machine learning model deployments for real-time applications. Working within a collaborative team, you will address system bottlenecks and improve observability, directly shaping how customers use AI models.

Perks & Benefits

This position is fully remote, offering flexibility in your work-life balance; specific time-zone expectations are not stated. You'll have the opportunity for career growth through hands-on experience with cutting-edge technologies and participation in innovative projects.

⚠️ This job was posted over 20 months ago and may no longer be open. We recommend checking the company's site for the latest status.

Full Job Description

We are looking for an AI Inference Engineer to join our growing team. Our current stack includes Python, Rust, C++, PyTorch, Triton, CUDA, and Kubernetes. You will have the opportunity to work on large-scale deployment of machine learning models for real-time inference.

Responsibilities

  • Develop APIs for AI inference that will be used by both internal and external customers

  • Benchmark and address bottlenecks throughout our inference stack

  • Improve the reliability and observability of our systems and respond to system outages

  • Explore novel research and implement LLM inference optimizations

Qualifications

  • Experience with ML systems and deep learning frameworks (e.g., PyTorch, TensorFlow, ONNX)

  • Familiarity with common LLM architectures and inference optimization techniques (e.g., continuous batching, quantization)

  • Understanding of GPU architectures or experience with GPU kernel programming using CUDA
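To give a sense of the kind of optimization named above, here is a minimal, illustrative sketch of symmetric per-tensor int8 weight quantization in NumPy. This is not code from Perplexity's stack; all names are hypothetical, and production systems would use library-provided kernels (e.g., in PyTorch or TensorRT) rather than this toy version.

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor quantization: map float32 weights into [-127, 127]."""
    scale = float(np.max(np.abs(w))) / 127.0  # one scale for the whole tensor
    q = np.round(w / scale).astype(np.int8)   # integer codes, 4x smaller than fp32
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

# Round-trip a small weight vector and check the reconstruction error,
# which is bounded by half the quantization step (scale / 2) per element.
w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)
max_err = float(np.max(np.abs(w_hat - w)))
```

The appeal for inference is the 4x reduction in weight memory and bandwidth at a bounded accuracy cost; real deployments typically pair this with per-channel scales and calibrated activation quantization.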

