Senior / Staff Site Reliability Engineer

Role Overview

This senior-level Site Reliability Engineer role involves ensuring the reliability and performance of Fluidstack's global GPU cloud infrastructure, working across software, hardware, and operations. Day-to-day responsibilities include deploying large-scale GPU clusters, debugging complex production issues, building internal tooling, and optimizing compute, storage, and networking systems. The hire will have a significant impact by improving platform stability and scalability to support demanding AI workloads.

Perks & Benefits

The role is fully remote with a competitive total compensation package including salary and equity, health, dental, and vision insurance, and a generous PTO policy. It includes an on-call rotation of up to one week per month. The company culture emphasizes customer-centricity, accountability, and a bias to action, fostering a dynamic and collaborative environment for career growth in AI infrastructure.

Full Job Description

About Fluidstack

At Fluidstack, we’re building the infrastructure for abundant intelligence. We partner with top AI labs, governments, and enterprises - including Mistral, Poolside, Black Forest Labs, Meta, and more - to unlock compute at the speed of light.

We’re working with urgency to make AGI a reality. As such, our team is highly motivated and committed to delivering world-class infrastructure. We treat our customers’ outcomes as our own, taking pride in the systems we build and the trust we earn. If you’re motivated by purpose, obsessed with excellence, and ready to work very hard to accelerate the future of intelligence, join us in building what's next.

About the Role

Senior / Staff SREs at Fluidstack sit at the core of our infrastructure, working across software, hardware, and operations to ensure the reliability and performance of our global GPU cloud.

They partner closely with teams including networking, platform engineering, and data center operations to build systems that scale with the demands of AI workloads.

SREs are hands-on and possess deep systems knowledge and strong communication skills. You’ll be responsible for tackling complex production issues, deploying resilient infrastructure, and continuously improving the stability and observability of our platform as we grow.

A typical day may involve:

  • Deploying clusters of 1,000+ GPUs using custom-written playbooks; modifying these tools as necessary to provide the perfect solution for a customer.

  • Validating correctness and performance of underlying compute, storage, and networking infrastructure, and working with providers to optimize these subsystems.

  • Migrating petabytes of data from public cloud platforms to local storage as quickly and cost-effectively as possible.

  • Debugging issues anywhere in the stack, from “this server’s fan is blocked by a plastic bag” to “optimizing S3 dataloaders from buckets in different regions”.

  • Building internal tooling to decrease deployment time and increase cluster reliability, including automation where the customer benefits clearly outweigh the implementation overhead.

This role will involve being part of an on-call rotation of up to one week per month.

Focus

  • A customer-centric attitude, an accountability mindset, and a bias to action.

  • A track record of shipping clean, well-documented code in complex environments.

  • An ability to create structure from chaos, navigate ambiguity, and adapt to the dynamic nature of the AI ecosystem.

  • Strong technical and interpersonal communication skills, a low ego, and a positive mental attitude.

An ideal candidate meets at least the following requirements:

  • 2+ years of SRE, DevOps, Sysadmin, and/or HPC engineering experience.

  • Great verbal and written communication skills in English.

  • Experience deploying and operating Kubernetes and/or SLURM clusters.

  • Experience writing Go, Python, and/or Bash.

  • Experience using Ansible, Terraform, and other automation or IaC tools.

  • Strong engineering background, preferably in Computer Science, Software Engineering, Math, Computer Engineering, or similar fields.

Exceptional candidates have one or more of the following experiences:

  • You have built and operated an AI workload at 1,000+ GPU scale.

  • You have built multi-tenant, hyperscale Kubernetes-based services.

  • You have physically deployed infrastructure in a data center, managed bare-metal hardware via MaaS or Netbox, etc.

  • You have deployed and managed multi-tenant InfiniBand or RoCE networks.

  • You have deployed and managed petabyte-scale all-flash storage systems, including DDN, VAST, and/or Weka, or open-source tools such as Ceph or Lustre.

Salary & Benefits

  • Competitive total compensation package (salary + equity).

  • Retirement or pension plan, in line with local norms.

  • Health, dental, and vision insurance.

  • Generous PTO policy, in line with local norms.

The base salary range for this position is $175,000 - $320,000 per year, depending on experience, skills, qualifications, and location. This range represents our good-faith estimate of the compensation for this role at the time of posting. Total compensation may also include equity in the form of stock options.

We are committed to pay equity and transparency.

Fluidstack is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Fluidstack will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
