DevOps Engineer (Croatia)
Role Overview
This mid-to-senior DevOps Engineer role involves designing and automating CI/CD pipelines and Infrastructure as Code on Google Cloud Platform for AI/ML and data engineering projects. The engineer will work in an agile team alongside data scientists and ML experts, providing technical leadership and mentoring while optimising DevOps functions for clients, making a tangible impact on cutting-edge projects by improving efficiency and consistency across client organisations.
Perks & Benefits
The role offers a hybrid working model with remote flexibility, 27 days of vacation, supplementary health insurance, and a 50%-covered MultiSport membership. Career growth is supported through certifications, structured learning, and an internal knowledge library, with opportunities to innovate and shape solutions in a dynamic, technology-driven scale-up environment.
Full Job Description
About the job
Shape the Future of AI & Data with Us
At Datatonic, we are Google Cloud's premier partner in AI, driving transformation for world-class businesses. We push the boundaries of technology with expertise in machine learning, data engineering, and analytics on Google Cloud. By partnering with us, clients future-proof their operations, unlock actionable insights, and stay ahead of the curve in a rapidly evolving world.
Your Mission
As a DevOps Engineer, you will play a key role in establishing, leading and enhancing DevOps practices within our clients' organisations.
We are looking for a proactive and skilled professional who thrives in a dynamic, technology-driven environment and is eager to contribute to the growth and innovation of an established scale-up. This role offers the chance to apply your expertise and grow your career while making a tangible impact on cutting-edge projects.
What You’ll Do
Pipeline Development: Design, implement, and automate scalable and resilient CI/CD & DevOps pipelines on Google Cloud Platform, tailored to client-specific AI/ML and data engineering needs.
Infrastructure as Code (IaC) Pipelines: Develop and maintain IaC pipelines using tools like Terraform to automate infrastructure provisioning and deployment on GCP, improving efficiency and consistency.
Technical Best Practices: Define and embed best practices into our internal processes to enhance the quality and consistency of our work
Technical Thought Leadership: Introduce innovative ideas and approaches to enhance Datatonic’s capabilities and methodologies
Sales Support: Collaborate with sales teams to provide technical expertise during client engagements and help scope solutions for requirements
Knowledge Sharing: Contribute to Datatonic’s internal knowledge base, including technical collateral and thought leadership materials
Agile Collaboration: Work in a dynamic, agile environment alongside data scientists, machine learning experts, data analysts, architects and data engineers
Tech Partner Collaboration: Collaborate closely with partners such as Google to leverage their technologies effectively
Mentorship and Leadership: Guide team members, fostering a culture of growth and innovation
Best Practices Advisory: Provide expert advice to customers on DevOps best practices
Function Optimisation: Establish and enhance DevOps functions within clients’ organisations
Stakeholder Engagement: Engage with customers and project teams throughout the project development lifecycle to ensure seamless delivery
What You’ll Bring
DevOps Experience: 4+ years of experience in the DevOps field or as a Platform Engineer
Programming Skills: Proficiency in Python, Java, or Go programming languages
CI/CD Expertise: In-depth knowledge of CI/CD tools and processes
Tech Enthusiasm: Strong passion for technology and a drive to continuously learn
Team Collaboration: Eagerness to join a diverse team of ML, AI, and DevOps enthusiasts
Cloud Platform: Experience with Google Cloud technologies such as Cloud Run, Cloud Functions, Cloud Scheduler, Workflows, and Cloud Composer for automating deployments and managing infrastructure
Containerisation/IaC Expertise: Proficiency with technologies such as Kubernetes and Terraform
Bonus Points If You Have
Start-Up/Scale-Up Experience: Previous experience working in a start-up or scale-up environment
Open-Source Contributions: History of contributions to open-source projects, showcasing collaboration and innovation
SRE Principles: Experience in implementing Site Reliability Engineering (SRE) principles
Other Cloud Platforms: Experience with or certifications in other cloud platforms such as AWS or Azure
Data Science Knowledge: Basic understanding of topics like machine learning, data mining, statistics, and data visualisation, with some practical experience preferred
SDN Knowledge: Understanding of Software Defined Networking (SDN) on Google Cloud
Why Join Us?
You’ll work on exciting, high-impact projects across various industries and have lots of opportunities to grow - whether through certifications, structured learning, or our internal knowledge library. You’ll also have the freedom to innovate, try new approaches, and actively shape how we build our solutions.
Benefits we offer include:
27 days of vacation
Supplementary and additional health insurance
50% covered MultiSport membership
Hybrid working model for flexibility and balance