EGT Digital

Data Engineer, Core Platform

The job listing is published in the following categories:

  • Anywhere

    Tech Stack / Requirements

    EGT Digital is a next-generation tech company focused on all online gaming products, and its portfolio includes Casino Games, Sportsbook, and the all-in-one solution – a Gambling Platform.

    EGT Digital is a part of the Euro Games Technology (EGT) Group, headquartered in Sofia, Bulgaria. EGT Group is one of the fastest-growing enterprises in the gaming industry. Our global network includes offices in 25 countries and our products are installed in over 85 jurisdictions in Europe, Asia, Africa, and North, Central, and South America.

    Being a part of such a fast-moving industry as iGaming, the company knows no limits and is growing rapidly through its dedication to innovation and constant improvement. That is why we are expanding our Platform Department and are now looking for fresh and enthusiastic people to join us in the exciting digital world of iGaming.

    About the role:

    EGT Digital is developing a unified Data Platform that supports reporting, analytics, regulatory workloads, and the evolution toward automated insights and AI-driven decision support. The platform is built around a Lakehouse architecture, streaming and batch data pipelines, and modern tooling for data modeling, quality, governance, and orchestration.
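
    A minimal sketch of the kind of streaming-ingestion pipeline described above, assuming PySpark Structured Streaming with a Kafka source; the topic name, event schema, broker address, and storage paths are illustrative placeholders rather than details taken from the listing:

        from pyspark.sql import SparkSession
        from pyspark.sql.functions import col, from_json
        from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

        # Hypothetical event schema; the real platform's schemas are not described in the listing.
        event_schema = StructType([
            StructField("bet_id", StringType()),
            StructField("amount", DoubleType()),
            StructField("placed_at", TimestampType()),
        ])

        spark = SparkSession.builder.appName("bet-events-ingestion-sketch").getOrCreate()

        raw = (
            spark.readStream
            .format("kafka")                                  # requires the spark-sql-kafka connector package
            .option("kafka.bootstrap.servers", "kafka:9092")  # placeholder broker address
            .option("subscribe", "bet-events")                # placeholder topic name
            .load()
        )

        # Decode the Kafka value payload into typed columns.
        parsed = raw.select(
            from_json(col("value").cast("string"), event_schema).alias("e")
        ).select("e.*")

        # Land the stream in a raw/bronze layer; a lakehouse table format such as
        # Delta or Iceberg could replace plain Parquet here.
        query = (
            parsed.writeStream
            .format("parquet")
            .option("path", "s3a://lake/bronze/bet_events")              # placeholder object-storage path
            .option("checkpointLocation", "s3a://lake/_chk/bet_events")  # placeholder checkpoint path
            .trigger(processingTime="1 minute")
            .start()
        )
        query.awaitTermination()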

    Responsibilities:

    • Develop and maintain batch and streaming data pipelines for ingestion, transformation, and delivery into the lakehouse and warehouse layers
    • Implement dbt models, SQL transformations, and data quality checks for curated datasets
    • Build Python applications, utilities, and APIs that support data processing, metadata, and operational workflows
    • Write and maintain Spark jobs and other distributed processing scripts where needed
    • Develop deployment scripts, automation tools, and platform integration components
    • Work with Airflow DAGs for orchestration and scheduling of data workflows (a minimal DAG is sketched after this list)
    • Contribute to data modeling tasks: schema design, normalization/denormalization decisions, partitioning, and optimization
    • Implement data testing using SQL, Python, and platform tools
    • Participate in platform improvements related to observability, reliability, and efficiency
    • Collaborate with data engineers, architects, and analysts to understand requirements and translate them into clear technical tasks and implementations
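
    As a sketch of how the orchestration pieces above could fit together, here is a minimal Airflow DAG that chains a batch ingestion step with dbt builds and tests; the DAG id, task names, commands, and dbt selectors are hypothetical, not taken from the listing:

        from datetime import datetime

        from airflow import DAG
        from airflow.operators.bash import BashOperator

        with DAG(
            dag_id="daily_curated_refresh",       # hypothetical DAG name
            start_date=datetime(2024, 1, 1),
            schedule="@daily",                    # Airflow 2.4+ parameter; older versions use schedule_interval
            catchup=False,
        ) as dag:
            ingest = BashOperator(
                task_id="ingest_batch",
                bash_command="python -m pipelines.ingest_batch",  # placeholder ingestion entry point
            )
            dbt_run = BashOperator(
                task_id="dbt_run",
                bash_command="dbt run --select curated",          # build the curated dbt models
            )
            dbt_test = BashOperator(
                task_id="dbt_test",
                bash_command="dbt test --select curated",         # run data quality checks on the same models
            )

            # Ingestion must finish before models are rebuilt and tested.
            ingest >> dbt_run >> dbt_test

    BashOperator is used here only to keep the dbt CLI calls visible; container- or pod-based operators are an equally valid choice.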

    Requirements:

    • Python – 3+ years of experience building applications, scripts, data processes, or APIs
    • SQL – 3+ years of experience with joins, CTEs, window functions, query optimization, and analytical modeling concepts
    • Experience working with Kafka or another streaming/messaging system
    • Experience with Airflow or a similar orchestrator
    • Experience with dbt for SQL transformations and data modeling
    • Experience with Docker and an understanding of containerized application workflows
    • Familiarity with Kubernetes basics (deployments, services, jobs, configs)
    • Ability to understand and apply concepts of normalization, denormalization, and data modeling for analytical workloads
    • Ability to design and implement data quality checks and validation logic (see the sketch after this list)
    • Understanding of algorithms, data manipulation techniques, and general problem-solving in a data context
    • Ability to work with Git-based workflows and CI/CD pipelines
    • Clear communication in English for discussing technical tasks
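
    A minimal, framework-free sketch of the kind of data quality checks and validation logic mentioned above, using pandas purely for illustration; the column names and sample data are made up, and on the platform itself such checks would more likely live in dbt tests or a dedicated validation tool:

        from dataclasses import dataclass

        import pandas as pd

        @dataclass
        class CheckResult:
            name: str
            passed: bool
            details: str = ""

        def check_not_null(df: pd.DataFrame, column: str) -> CheckResult:
            nulls = int(df[column].isna().sum())
            return CheckResult(f"not_null:{column}", nulls == 0, f"{nulls} null value(s)")

        def check_unique(df: pd.DataFrame, column: str) -> CheckResult:
            dupes = int(df[column].duplicated().sum())
            return CheckResult(f"unique:{column}", dupes == 0, f"{dupes} duplicate value(s)")

        def check_non_negative(df: pd.DataFrame, column: str) -> CheckResult:
            negatives = int((df[column] < 0).sum())
            return CheckResult(f"non_negative:{column}", negatives == 0, f"{negatives} negative value(s)")

        if __name__ == "__main__":
            # Tiny made-up dataset with deliberate problems (a duplicate id and a negative amount).
            bets = pd.DataFrame({"bet_id": ["a1", "a2", "a2"], "amount": [10.0, -5.0, 7.5]})
            checks = [
                check_not_null(bets, "bet_id"),
                check_unique(bets, "bet_id"),
                check_non_negative(bets, "amount"),
            ]
            for result in checks:
                print("PASS" if result.passed else "FAIL", result.name, "-", result.details)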

    Nice to Have:

    • Experience with Spark or similar distributed data processing frameworks
    • Experience with ClickHouse, PostgreSQL, or other analytical/operational databases
    • Familiarity with lakehouse concepts (object storage, table formats, partitioning strategies)
    • Experience with Terraform, Helm, or other infrastructure-as-code tools
    • Experience building small backend services or internal tools using FastAPI or Flask (see the sketch after this list)
    • Understanding of metadata, data contracts, or data observability tools
    • Experience working in hybrid environments (on-prem + cloud) or AWS services
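
    A minimal sketch of the kind of small internal tool mentioned above, assuming FastAPI; the endpoint, dataset names, and fields are invented for illustration:

        from fastapi import FastAPI, HTTPException
        from pydantic import BaseModel

        app = FastAPI(title="dataset-metadata-sketch")

        class DatasetInfo(BaseModel):
            name: str
            owner: str
            row_count: int

        # Hypothetical in-memory catalog; a real service would read from a metadata store.
        _CATALOG = {
            "curated.bets_daily": DatasetInfo(
                name="curated.bets_daily", owner="data-platform", row_count=1_250_000
            ),
        }

        @app.get("/datasets/{name}", response_model=DatasetInfo)
        def get_dataset(name: str) -> DatasetInfo:
            if name not in _CATALOG:
                raise HTTPException(status_code=404, detail="unknown dataset")
            return _CATALOG[name]

    Such a service would typically be started with uvicorn (for example, uvicorn metadata_app:app, where the module name here is hypothetical).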

    What we offer:

    • Competitive salary
    • Performance-based annual bonus
    • Performance evaluation & salary review twice a year
    • 25 days paid annual leave
    • Work-from-home option (2 days weekly)
    • Flexible working schedule
    • Additional health insurance – premium package
    • Fully paid annual transportation card
    • Fully paid Sports card
    • Free company shuttle to the office
    • Sports Teams/Sports events
    • Professional development, supportive company culture, and challenging projects
    • Company-sponsored trainings
    • Tickets for conferences and seminars
    • Team building events and office parties
    • Referral Program
    • Free snacks, soft drinks, coffee, and fruit are always available
    • Birthday, newborn baby, and first-grader bonuses
    • Corporate discounts in various shops and restaurants
    • State-of-the-art modern office
    • Positive working environment and chill-out zone (PS4, foosball table, and lazy chairs)

    All applications will be treated in strict confidence, and only approved candidates will be invited to an interview.