Backend Software Engineer - Search, Crawler Team
Role Overview
This is a senior-level Backend Software Engineer role on the Crawler team at Perplexity, focused on designing, developing, and operating web-scale data ingestion and processing systems. Day-to-day work includes building large-scale web crawlers, optimizing backend and frontend components of the data acquisition stack, and collaborating with the Search and Infrastructure teams to handle billions of web pages. The hire will have high impact, architecting scalable distributed systems and improving performance for advanced search technologies.
Perks & Benefits
The role is fully remote, offering flexibility in work location, though close collaboration with other teams may require some time zone overlap. It offers career growth through leading projects, experimenting with novel approaches, and working on high-impact, scalable systems in a fast-paced environment. Benefits are not listed but likely include a competitive salary, health insurance, and professional development support, within a culture that values innovation and reliability.
Full Job Description
We are seeking an experienced Backend Software Engineer to join our Crawler team. In this role, you will design, develop, and operate systems that ingest, process, and manage web-scale data in support of our next generation of advanced search technologies. This is a critical, high-impact engineering position, requiring expertise across both backend and frontend components of our data acquisition stack.
Responsibilities
Take ownership of and lead projects focused on developing large-scale web crawlers, ingestion pipelines, and data processing systems.
Build, maintain, and optimize core backend and frontend components for crawler services, including storage, retrieval, and UI dashboards for data management.
Collaborate closely with Search and Infrastructure teams to ensure the reliable, high-quality ingestion and processing of billions of web pages.
Architect and implement fullstack features and scalable distributed systems that handle high-load and real-time data operations.
Rapidly iterate, experiment with novel approaches, and continuously enhance system performance, usability, and reliability.
Qualifications
Minimum of 5 years of software development experience, with strong knowledge of data structures and algorithms and proficiency in at least one of the following languages: Python, C++, Rust, or Go.
Experience with large-scale web crawlers is highly desirable.
Proven experience building, deploying, and optimizing high-load, distributed, and hardware-adjacent services.
Deep understanding of cloud infrastructure, with hands-on experience in Kubernetes (K8s) and AWS.
Demonstrated passion for writing clean, efficient, and scalable systems.