(Senior) Data Engineer (m/w/d)


Role Overview

This senior-level data engineering role involves architecting and maintaining scalable data infrastructure to process real-time energy data from heat pumps and integrated systems. Day-to-day responsibilities include building data pipelines using tools like Kafka and Benthos, ensuring data quality, and collaborating with ML engineers to support AI models. The hire will work in a tech-driven team, impacting decarbonization efforts by enabling data-driven insights across a virtual power plant ecosystem.

Perks & Benefits

The role offers partial remote work, though the company values in-person collaboration at its Berlin office. Benefits include a competitive salary, flexible perks for sports, mobility, and learning, and the opportunity for high impact at a hypergrowth green-tech startup. The culture emphasizes direct communication, no corporate theater, and a focus on real-world environmental impact.

Full Job Description

At GALVANY, our goal is to make climate-neutral living a reality for everyone. We focus on execution – developing concrete, smart solutions and making heat pumps, battery storage, and smart metering easy to access, reliable, and affordable. GALVANY-Tech drives the energy transition through software. We're building an AI-based Operating System for sales, planning, installation and operation of heat pumps and integrated energy systems – from single-family homes to multi-unit buildings, and across our Energy Community as a Virtual Power Plant.

We are a profitable green-tech startup and believe that sustainable impact and long-term growth are only possible when grounded in a healthy business model. Driven by customer value, clarity, and responsibility, we stand for high-quality heating solutions and an environment where people take responsibility and drive lasting impact.

The Role

As a Data Engineer at GALVANY, you'll architect and maintain the data infrastructure that powers our AI-based Operating System. You'll process real-time energy data from heat pumps and integrated systems, build pipelines that fuel AI models and analytics, and ensure data quality across our entire ecosystem – from individual homes to our Virtual Power Plant.

Responsibilities:

  • Design and build scalable data pipelines using Kafka, Benthos, ClickPipes, and related tools.

  • Ensure data quality, consistency, and freshness across the energy ecosystem and internal processes.

  • Collaborate with ML engineers to deliver the right data in the right format for AI models.

  • Build monitoring and validation into pipelines from the start.

  • Translate business questions into data requirements and vice versa.

  • Leverage LLMs and AI tools to innovate data solutions and accelerate development.

Tech Stack: Data tooling with Kafka, Benthos (data pipelines), Python, and SQL; backend in Go; ML with Python, PyTorch, and LLMs; general tools including Azure, GitHub, Linear, and Notion.

Requirements

  • Experience. 4+ years of experience in high-performance environments (e.g. top-tier consulting, fast-scaling startups, or similar).

  • Track Record. Proven end-to-end responsibility in data engineering, from ideation to release.

  • Background. Strong background in relevant fields, e.g. Computer Science, Mathematics, Data Engineering, or similar.

  • Technical Skills. Strong coding skills with proficiency in Python and SQL. Experience with streaming architectures and data pipeline tooling. Fluent in spec-driven, AI-assisted development (e.g. Claude Code).

  • Mindset. Self-driven problem-solving mindset – no need for micromanagement or specific tickets.

  • Technical Aptitude. Technical mindset with a passion for understanding systems, data flows, and integrations, coupled with enthusiasm for continuous learning and problem solving.

  • Language. Fluent in English; German is a plus.

Skills and Qualities That Are Important to Us

  • Outcome-Led. You focus on value over volume. You embrace iteration, adapt quickly to new information, and prioritize what moves the needle for the business and its users.

  • Systems Thinker. You see the bigger picture. You understand how your work connects to the wider ecosystem – ensuring features contribute to a cohesive, scalable whole.

  • Pragmatic. You choose the simplest effective path to solve problems – especially by leveraging AI tools.

  • Customer Champion. You keep the end-user central in all decisions. You seek direct exposure to how customers experience the product.

  • Data Quality Guardian. You obsess over accuracy, freshness, and consistency. You build validation and monitoring into pipelines from the start.

  • Pipeline Architect. You design scalable data flows from source to insight. You balance real-time needs with batch efficiency.

Benefits

  • Strong Growth & High Impact. A unique opportunity to join during a hypergrowth phase and actively contribute to company success.

  • Compensation. Competitive salary and flexible perks (sports, mobility, learning) tailored to your needs.

  • Real-World Impact. Your work drives decarbonization – measurable in CO₂ savings, energy efficiency (kWh), and cost reductions (€).

  • Office. Prime location in Berlin Charlottenburg, regular company events and all-hands. We value in-person collaboration and connection, while partial remote work remains an option.

  • No Corporate Theater. Skip endless alignment meetings, politics and waiting for permission. You talk to the people who matter and ship.
