
Senior Big Data Engineer

Brazil · Full-time

Role Summary

As a Senior Big Data Engineer, you will design, develop, and maintain data pipelines and ETL processes using Python, SQL, and PySpark. You will collaborate with backend developers and DevOps engineers within a cross-functional team, ensuring the performance and reliability of data solutions while mentoring mid-level engineers. Your work will have a significant impact on the development of a next-generation AdTech platform that enables smarter marketing decisions.

Benefits & Culture

This role offers a flexible remote work setup, allowing you to work from Brazil while collaborating with an international team. Sigma Software emphasizes career growth, with opportunities for mentorship and professional development. The company fosters a proactive and accountable culture, focusing on delivering high-quality results in a dynamic AdTech environment.

Full Job Description

Company Description

Join Sigma Software's AdTech Competence Center, a 300+ team of experts delivering innovative, high-load, and data-driven advertising technology solutions. Since 2008, we've been helping leading AdTech companies and startups design, build, and scale their technology products. We focus on fostering deep domain expertise, building long-term client partnerships, and growing together as a global team of professionals passionate about AdTech, data, and cloud-based solutions.

Does this sound like an exciting opportunity? Keep reading, and let's discuss your future role!

Customer

Our client is an international AdTech company developing modern, privacy-safe, and data-driven advertising platforms. The team works with AWS and cutting-edge data technologies to build scalable, high-performance systems.

Project

The project revolves around the development of a next-generation AdTech platform that powers real-time, data-driven advertising. It leverages AWS, Python, and distributed data frameworks to process large-scale datasets efficiently and securely, enabling businesses to make smarter, faster, and more informed marketing decisions.

Job Description

- Design, develop, and maintain robust data pipelines and ETL processes using Python, SQL, and PySpark
- Work with large-scale data storage on AWS (S3, DynamoDB, MongoDB)
- Ensure high-quality, consistent, and reliable data flows between systems
- Optimize performance, scalability, and cost efficiency of data solutions
- Collaborate with backend developers and DevOps engineers to integrate and deploy data components
- Implement monitoring, logging, and alerting for production data pipelines
- Participate in architecture design, propose improvements, and mentor mid-level engineers

Qualifications

- 5+ years of experience in data engineering or backend development
- Strong knowledge of Python and SQL
- Hands-on experience with AWS (S3, Glue, Lambda, DynamoDB)
- Practical knowledge of PySpark or other distributed processing frameworks
- Experience with NoSQL databases (MongoDB or DynamoDB)
- Good understanding of ETL principles, data modeling, and performance optimization
- Understanding of data security and compliance in cloud environments
- Fluent in English (Upper-Intermediate level or higher)

Additional Information

Personal Profile

- Strong communication and collaboration skills in cross-functional environments
- Proactive, accountable, and driven to deliver high-quality results
