Data Engineer
Role Overview
This senior-level Data Engineer role involves architecting and building scalable data pipelines using dbt, Python, and GCP services to support an AI platform. You will work closely with product and engineering teams to define data requirements, manage ETL/ELT processes, and ensure data reliability and performance. By delivering robust data infrastructure, you will have a direct, measurable impact on business outcomes and product velocity.
Perks & Benefits
The role is fully remote, offering flexibility in work location. It emphasizes a collaborative environment with direct interaction with the CTO and product teams, fostering high ownership and product-focused solutions. Career growth is supported by measurable impact on business outcomes, with clear deliverables and deadlines expected in a startup-like setting.
Full Job Description
Data Engineer | Matter
About the Role
We are seeking an experienced Senior Data Engineer. You’ll work closely with our CTO, CPO, and product teams to architect, build, and deliver robust data pipelines and transformations for our always-on AI platform. Success in this role will have a direct, measurable impact on business outcomes and future product velocity.
Key Responsibilities
Architect, build, and optimize scalable end-to-end data pipelines using dbt and our stack (Python microservices, Postgres, BigQuery, and GCP).
Design, implement, and maintain ETL/ELT processes to ingest, clean, and transform large datasets from a variety of sources.
Collaborate with product and engineering to define data requirements for new product features and analytics initiatives.
Manage our data pipeline by onboarding new clients, loading their data, and applying all required formulas.
Maintain and troubleshoot workflows by fixing broken formulas, adding new services, and ensuring reliable, high-throughput performance.
Ensure data reliability, integrity, and security at scale.
Troubleshoot, performance-tune, and document all pipelines and data workflows for smooth handoff.
About You
5+ years of hands-on experience building production-grade data pipelines, ETL/ELT workflows, and transformed datasets at scale.
Expert-level Python; strong experience with dbt, Postgres, BigQuery, and core GCP data services (e.g., Dataflow, Pub/Sub, Storage, Composer).
Demonstrated experience architecting, optimizing, and troubleshooting cloud-based data infrastructure.
Familiarity with analytics, BI tools, and data visualization platforms is a plus.
Startup or contract/consulting experience strongly preferred; you move quickly and commit to clear deliverables and deadlines.
Excellent communication skills: you read requirements carefully, keep stakeholders looped in, and document as you go. You can boil complex tasks down into plain English for people who aren't data engineers.
Low ego, high ownership, and a passion for building product-focused solutions.