Sr./Staff Backend Engineer - Java/Kafka

Role Overview

As a Senior/Staff Backend Engineer, you will design, implement, and optimize scalable backend services using Java and AWS, with a focus on Apache Kafka for real-time data streaming. You'll manage databases like Postgres, Redis, DynamoDB, and ClickHouse, and collaborate across teams to drive system scalability and reliability. This senior leadership role involves mentoring engineers and resolving performance bottlenecks in distributed systems.

Perks & Benefits

This is a fully remote role with the flexibility to work from anywhere, and it comes with real ownership and impact on a mission-driven team. You'll collaborate with industry veterans from top companies, with opportunities for career growth through mentorship and hands-on innovation in AI risk decisioning. The culture emphasizes extreme ownership, fast decision-making, and cutting-edge work in financial technology.

Full Job Description

Shape the future of trust in the age of AI
At Oscilar, we're building the most advanced AI Risk Decisioning™ Platform. Banks, fintechs, and digitally native organizations rely on us to manage their fraud, credit, and compliance risk with the power of AI. If you're passionate about solving complex problems and making the internet safer for everyone, this is your place.

Why join us:

  • Mission-driven teams: Work alongside industry veterans from Meta, Uber, Citi, and Confluent, all united by a shared goal to make the digital world safer.

  • Ownership and impact: We believe in extreme ownership. You'll be empowered to take responsibility, move fast, and make decisions that drive our mission forward.

  • Innovate at the cutting edge: Your work will shape how modern finance detects fraud and manages risk.

Job Description

We are looking for a Senior/Staff Backend Engineer with deep expertise in Java backend development. In this role, you will design, implement, and optimize services that leverage Apache Kafka to handle high-throughput, real-time data streams. You will also be responsible for scaling and maintaining databases such as Postgres, Redis, DynamoDB, and ClickHouse, all running on AWS infrastructure.

This is a senior technical leadership role, where you will collaborate across teams, mentor engineers, and drive the scalability, performance, and reliability of Oscilar’s backend systems.

Responsibilities

  • Design, develop, and maintain scalable backend services using Java and AWS technologies.

  • Lead the architecture, deployment, and optimization of Apache Kafka to support real-time data streaming across distributed systems.

  • Build and manage Kafka topics, brokers, producers, and consumers, ensuring optimal performance and data consistency (a minimal producer sketch follows this list).

  • Implement streaming solutions with Kafka Streams and Kafka Connect, focusing on high availability and low-latency processing.

  • Collaborate with product, frontend, and data engineering teams to define technical requirements and deliver reliable, performant services.

  • Design and maintain high-performance data storage solutions using Postgres, Redis, ClickHouse, and DynamoDB.

  • Optimize database performance through schema design, indexing strategies, and data partitioning.

  • Implement best practices for infrastructure security, performance monitoring, and data integrity.

  • Establish and maintain CI/CD pipelines for automated testing, deployment, and monitoring.

  • Provide mentorship to junior engineers, conduct code reviews, and promote best practices in software development.

  • Proactively identify and resolve performance bottlenecks and technical challenges in both streaming and database systems.
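
As context for the Kafka responsibilities above, here is a minimal, hedged sketch of the producer-side work, tuned for durability and throughput. The broker address, topic name, key, and payload are illustrative assumptions, not details from this listing.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class RiskEventProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address and topic name below are placeholders.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Durability and ordering: wait for all in-sync replicas and enable idempotence.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        // Throughput: batch records briefly and compress them before sending.
        props.put(ProducerConfig.LINGER_MS_CONFIG, "5");
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Key by account ID so all events for one account land on the same partition.
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("risk-events", "account-42", "{\"type\":\"login\",\"score\":0.87}");
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("Delivered to %s-%d@%d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        } // close() flushes any batched records before exiting.
    }
}
```

Keying records by a stable identifier (here, an assumed account ID) keeps related events on one partition, which preserves per-account ordering while still allowing the topic to scale across partitions.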

Requirements

  • Backend Development: 8+ years of experience with Java in large-scale, distributed environments.

  • Kafka Mastery: Extensive experience with Apache Kafka, including Kafka Streams, Kafka Connect, partitioning, replication, and consumer group management (see the consumer sketch after this list).

  • Cloud Infrastructure: Strong experience with AWS services (e.g., MSK, EC2, RDS, DynamoDB, S3, Lambda).

  • Distributed Systems: Solid understanding of distributed system design, messaging patterns, and eventual consistency.

  • Performance Optimization: Proven ability to diagnose and resolve bottlenecks in streaming and database systems.
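
To illustrate the consumer group management called out above, the following is a hedged Java sketch; the group ID, topic name, and processing step are assumptions for illustration only. Instances that share the same group.id divide the topic's partitions among themselves, which is how consumption scales horizontally.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RiskEventConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // All instances sharing this group.id split the topic's partitions between them.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "risk-decisioning-service");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Commit offsets manually, only after records are processed (at-least-once delivery).
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("risk-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Placeholder for real processing (scoring, persistence, etc.).
                    System.out.printf("partition=%d offset=%d key=%s%n",
                            record.partition(), record.offset(), record.key());
                }
                consumer.commitSync();
            }
        }
    }
}
```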

Nice-to-have

  • Experience integrating Kafka with analytics solutions like ClickHouse (a Kafka Streams routing sketch follows this list).

  • Knowledge of event-driven architecture and streaming patterns like CQRS and event sourcing.

  • Hands-on experience with monitoring tools (e.g., Prometheus, Grafana, Kafka Manager).

  • Experience automating infrastructure with tools like Terraform or CloudFormation.

  • Proficiency with Postgres, Redis, ClickHouse, and DynamoDB. Experience with data modeling, query optimization, and high-transaction databases.

  • Familiarity with encryption, role-based access control, and secure API development.
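
As a hedged illustration of the event-driven and analytics-integration items above, the sketch below uses Kafka Streams to filter high-signal events onto a dedicated topic that a downstream sink (for example, a Kafka Connect connector) could load into an analytics store such as ClickHouse. The topic names, application ID, and JSON filter are assumptions, not Oscilar specifics.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class SuspiciousEventTopology {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "fraud-signal-router");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("risk-events");

        // Route only high-signal events to a dedicated topic; a separate sink
        // (e.g. a Kafka Connect connector) could load that topic into ClickHouse.
        events.filter((accountId, json) -> json.contains("\"suspicious\":true"))
              .to("suspicious-events", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```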
