Data Engineer


Role Overview

As a Data Engineer at Playson, you will build and maintain scalable data pipelines that power gameplay, wallet, and promo analytics. This senior-level role involves optimizing ETL processes, ensuring data quality, and partnering with product owners and analysts on key performance indicators. You will play a crucial role in making data reliable and impactful for decision-making across the organization.

Perks & Benefits

Playson offers a fully remote work environment in EU-friendly time zones, with the option to work from its Bratislava office. Employees enjoy unlimited vacation, paid sick leave, and quarterly performance bonuses, along with a budget for conferences and professional growth. The culture is product-led, built on real ownership and impact rather than micromanagement.


Full Job Description

🎯 What You’ll Actually Do

  • Build and run scalable pipelines (batch + streaming) that power gameplay, wallet, and promo analytics.

  • Model data for decisions (star schemas, marts) that Product, BI, and Finance use daily.

  • Make things reliable: tests, lineage, alerts, SLAs. Fewer surprises, faster fixes.

  • Optimize ETL/ELT for speed and cost (partitioning, clustering, late arrivals, idempotency); see the sketch after this list.

  • Keep promo data clean and compliant (PII, GDPR, access controls).

  • Partner with POs and analysts on bets/wins/turnover KPIs, experiment readouts, and ROI.

  • Evaluate tools, migrate or deprecate with clear trade-offs and docs.

  • Handle prod issues without drama, then prevent the next one.
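
To make "idempotency" concrete: a minimal sketch of the delete-then-insert partition overwrite pattern, assuming a DB-API-style client. The table (fact_bets) and its columns are hypothetical stand-ins for illustration, not our actual schema.

```python
from datetime import date


def overwrite_partition(execute, ds: date, rows: list[tuple]) -> None:
    """Idempotent daily load: drop the target date partition, then re-insert it.

    Re-running the job for the same date yields the same end state, so
    retries and backfills are safe. `execute` stands in for any DB-API
    cursor or warehouse client; `fact_bets` is a hypothetical table.
    """
    execute("DELETE FROM fact_bets WHERE event_date = %s", (ds,))
    execute(
        "INSERT INTO fact_bets (event_date, player_id, bet, win) VALUES (%s, %s, %s, %s)",
        rows,  # a real client would batch this via executemany
    )


if __name__ == "__main__":
    # Stubbed client: print the SQL instead of hitting a warehouse.
    overwrite_partition(
        execute=lambda sql, params=None: print(sql, params),
        ds=date(2024, 1, 1),
        rows=[(date(2024, 1, 1), 42, 9.99, 0.0)],
    )
```

The payoff: a failed run can simply be re-run, and a month-long backfill is just the same job looped over dates.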

🧠 What You Bring

  • 4+ years building production data systems. You’ve shipped, broken, and fixed pipelines at scale.

  • SQL that sings and Python you’re proud of.

  • Real experience with OLAP and BI (Power BI / Tableau / Redash — impact > logo).

  • ETL/ELT orchestration (Airflow/Prefect or similar) and CI/CD for data; see the DAG sketch after this list.

  • Strong grasp of warehouses & lakes: incremental loads, SCDs, partitioning.

  • Data quality mindset: contracts, tests, lineage, monitoring.

  • Product sense: you care about player impact, not just rows processed.
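
For flavor, one plausible shape of the orchestration piece: a minimal daily DAG, assuming Airflow 2.4+. The DAG id and task bodies are placeholders, not our production code.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull the day's gameplay events from the source.
    print("extracting for", context["ds"])


def load(**context):
    # Placeholder: idempotent partition overwrite into the warehouse.
    print("loading for", context["ds"])


with DAG(
    dag_id="gameplay_daily",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```

On the CI/CD side, even a step that just imports the DAG file catches most deployment-breaking errors before they ship.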

✨ Nice to Have (tell us if you’ve got it)

  • Kafka (or similar streaming), ClickHouse (we like it), dbt (modular ELT); a ClickHouse example follows this list.

  • AWS data stack (S3, IAM, MSK/Glue/Lambda/Redshift) or equivalents.

  • Containers & orchestration (Docker/K8s), IaC (Terraform).

  • Familiarity with AI/ML data workflows (feature stores, reproducibility).

  • iGaming context: provider metrics (bets, wins, turnover), regulated markets, promo events.
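
Since we mentioned ClickHouse: a hedged sketch of what partitioning looks like there. The table and columns are invented for illustration, and it assumes the clickhouse-driver package and a reachable local server.

```python
from clickhouse_driver import Client  # pip install clickhouse-driver

# Hypothetical bets fact table: monthly partitions make backfills and
# retention cheap (drop a partition instead of deleting rows), and the
# ORDER BY key serves the common player/date lookups.
DDL = """
CREATE TABLE IF NOT EXISTS fact_bets
(
    event_date Date,
    player_id  UInt64,
    game_id    UInt32,
    bet        Decimal(18, 2),
    win        Decimal(18, 2)
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(event_date)
ORDER BY (player_id, event_date)
"""

if __name__ == "__main__":
    Client(host="localhost").execute(DDL)
```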

🔧 How We Work

  • Speed > perfection. Iterate, test, ship.

  • Impact > output. One rock-solid dataset beats five flaky ones.

  • Behavior > titles. Ownership matters more than hierarchy.

  • Direct > polite. Say what matters, early.

🔥 What We Offer

  • Fully remote (EU-friendly time zones) or Bratislava if you like offices.

  • Unlimited vacation + paid sick leave.

  • Quarterly performance bonuses.

  • No micromanagement. Real ownership, real impact.

  • Budget for conferences and growth.

  • Product-led culture with sharp people who care.

🧰 Our Day-to-Day Stack (representative)
Python, SQL, Airflow/Prefect, Kafka, ClickHouse/OLAP DBs, AWS (S3 + friends), dbt, Redash/Tableau, Docker/K8s, GitHub Actions.

👉 If you know how to make data boringly reliable and blisteringly fast — hit apply and let’s talk.
