Software Engineer - Data Platform (India - Remote)


Role Overview

This senior-level software engineering role involves building backend APIs and scalable data pipelines with Python and PySpark, working with modern data technologies such as Iceberg and Snowflake, and orchestrating workflows with Airflow. The engineer will design and scale systems for data workflows and automation in a fast-paced startup environment, focusing on reliability, cost efficiency, and cross-team collaboration to solve complex data challenges.

Perks & Benefits

The role is fully remote and offers a competitive salary, meaningful equity, bonuses, flexible time off, and comprehensive health coverage for you and your family. It provides a high-trust, low-bureaucracy culture that supports research and technical exploration, with opportunities for real ownership in building foundational AI infrastructure at a generational company backed by major investors.

Full Job Description

Join Granica’s core engineering team to design and scale systems powering data workflows, automation, and analytics. This is a deep engineering role—not feature delivery.

What You’ll Do

  • Build backend APIs and scalable data pipelines (Python, PySpark).

  • Work with modern data lakehouse/warehouse tech (Iceberg, Delta Lake, Snowflake, Databricks).

  • Orchestrate workflows (Airflow) and optimize big data frameworks.

  • Manage infra as code (Terraform) and ensure reliability with monitoring/logging.

  • Collaborate across teams and with customers to solve complex data challenges and design seamless integration solutions.

  • Drive best practices in scalability, reliability, and cost efficiency.

What We're Looking For

  • 5+ years in software/data engineering or infrastructure roles

  • Strong Python skills (backend APIs a plus)

  • Proven ability to build scalable data pipelines from scratch

  • Hands-on with Apache Iceberg/Delta Lake + Snowflake/Databricks

  • Workflow orchestration expertise (Airflow, Luigi, etc.)

  • Big data frameworks experience (Spark, Hadoop)

  • Familiar with monitoring/analytics tools (Prometheus, Grafana, ELK, Datadog)

  • Skilled in designing scalable, reliable, cost-efficient systems

  • Experience with large-scale distributed data architectures

  • Thrives in fast-paced startup environments

  • Excellent problem-solving, communication, and customer-facing skills

Nice-to-Haves:

  • Hands-on experience with Terraform or other infrastructure-as-code tools.

  • Familiarity with security and privacy best practices in data processing pipelines.

  • Exposure to cloud platforms (AWS, GCP, Azure) and containerisation (Docker, Kubernetes).

Compensation & Benefits

  • Competitive salary, meaningful equity, and substantial bonus for top performers

  • Flexible time off plus comprehensive health coverage for you and your family

  • Support for research, publication, and deep technical exploration

At Granica, you will shape the fundamental infrastructure that makes intelligence itself efficient, structured, and enduring. Join us to build the foundational data systems that power the future of enterprise AI!
