AI Inference Engineer
Role Overview
As an AI Inference Engineer at Perplexity, you will develop APIs for AI inference that serve both internal and external clients. This mid-level role focuses on optimizing machine learning model deployments for real-time inference, improving system reliability, and addressing performance bottlenecks. Your work will directly shape the efficiency and effectiveness of AI solutions across the organization.
Perks & Benefits
This fully remote position offers flexibility in work hours, allowing you to manage your schedule effectively. You will be part of a dynamic team that values innovation and collaboration, providing ample opportunities for career growth and professional development. The company culture emphasizes a supportive environment where team members can explore novel research and implement cutting-edge technologies.
Full Job Description
We are looking for an AI Inference Engineer to join our growing team. Our current stack is Python, Rust, C++, PyTorch, Triton, CUDA, and Kubernetes. You will have the opportunity to work on large-scale deployment of machine learning models for real-time inference.
Responsibilities
Develop APIs for AI inference that will be used by both internal and external customers
Benchmark and address bottlenecks throughout our inference stack
Improve the reliability and observability of our systems and respond to system outages
Explore novel research and implement LLM inference optimizations
Qualifications
Experience with ML systems and deep learning frameworks (e.g. PyTorch, TensorFlow, ONNX)
Familiarity with common LLM architectures and inference optimization techniques (e.g., continuous batching, quantization)
Understanding of GPU architectures or experience with GPU kernel programming using CUDA
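To give a flavor of the optimization techniques named above, here is a toy sketch of symmetric int8 weight quantization in pure Python. This is purely illustrative and not part of Perplexity's stack; it assumes a single per-tensor scale and no zero point, whereas production inference stacks rely on library or custom-kernel implementations (e.g., in PyTorch or CUDA) with per-channel scales and fused dequantization.

```python
def quantize_int8(weights):
    """Map float weights to int8 using one per-tensor scale (symmetric scheme)."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored value is within one quantization step of the original;
# the memory footprint drops from 32 bits to 8 bits per weight.
```

The trade-off this sketch illustrates: int8 storage cuts memory bandwidth (often the bottleneck in LLM inference) at the cost of a bounded rounding error per weight.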