We’re looking for a Data Platform Engineer to build a scalable, ML-ready data foundation that unifies systems such as PostgreSQL, Kafka, BigQuery, and GCS to power our AI initiatives.
Key Requirements:
Strong experience with GCP (BigQuery, Dataflow, Cloud Storage, Datastream, AlloyDB/Cloud SQL).
Advanced SQL skills.
Experience with dbt or a similar tool for data modeling and testing.
Hands-on with Kafka and CDC tools (e.g., Debezium).
Proven ability to build batch and real-time pipelines.
Experience with dimensional modeling for analytics/ML.
Knowledge of data governance (PII tagging, security, access control).
Experience with data quality monitoring, including automated checks and alerts.
Familiarity with data observability (freshness, schema changes, pipeline health).
Skilled in BigQuery optimization (partitioning, clustering, query tuning).
Nice to Have:
Experience with Feature Store design and ML pipelines.
Understanding of real-time vs. batch feature serving.
Exposure to regulated data environments.
Familiarity with Vertex AI and Apache Beam/Dataflow.
Experience collaborating with ML/Data Science teams.
Knowledge of vector databases or semantic search.
Benefits:
Competitive compensation and potential for equity participation.
Private health and dental insurance.
High-quality equipment to support your work.
30 days of paid vacation.
Flexibility to work fully remote.
Opportunity to work abroad for up to 4 months per year.
A high-impact role offering autonomy, ownership, and the ability to shape the data landscape.
Interview Process:
Introductory Call
Technical Interview
Team Interview
Final Interview
Apply now by sending your CV in English, including your contact details.
Only shortlisted candidates will be invited to an interview.