Sportingtech Bulgaria

Data Engineer

The job listing is published in the following categories

  • Anywhere

    Tech Stack / Requirements

    About Us


    Sportingtech is a multi-award-winning provider of turnkey betting and gaming solutions designed for regulated and emerging markets around the world. With offices in Malta, Bulgaria, London, Brazil and Portugal, our iGaming platform covers sportsbook, casino and retail via a modular system and an intuitive back office, delivering a fully omni-channel solution. Our unparalleled ability to cater for local-market customization and operator preferences provides players with an optimal betting experience, resulting in proven growth for our rapidly expanding customer base.

    Who are You:

    You are passionate about leveraging data to drive business success. We are currently seeking a dynamic and experienced Mid-level Data Engineer to join the team leading our data-driven decision-making initiatives and to play a pivotal role in driving the growth and success of the organization.

    In this role you will be able to leverage your data engineering experience, working with massive volumes of data. You will work alongside senior team members with 10+ years of experience who are willing to share their knowledge and support new hires.

    Where you Fit In:

    The data team is responsible for maintaining and developing end-to-end data and business intelligence solutions. The team is made up of highly skilled professionals who are dedicated to ensuring that our organization’s data is accurate, reliable, and easily accessible. The DWH team works closely with other departments to understand their data needs and provide them with the information they require to make informed decisions.

    The Data Engineer function will focus on collecting, storing, processing and preparing datasets for analysing world-class products – feature adoption, client acquisition and behaviour, customer lifetime value, etc.

    You will oversee the ingestion, transformation, delivery, and movement of data throughout every part of the organization. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.

    The impact you will have:

    • Develop and maintain data pipelines to ensure the quality and accuracy of our product analytics, and build datasets for reports and visualizations for internal use and external customers.
    • Build the infrastructure required for optimal extraction, transformation, and loading of large volumes of data from a wide variety of data sources – both internal and external
    • Proactively identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
    • Develop ETL monitoring and testing in order to troubleshoot and resolve data-related issues, ensuring data quality and availability
    • Work on integrating all data into third-party systems, ensuring accessibility and ease of use for the relevant teams.

    What we’re looking for:

    • At least 2 years of experience in a data engineering role
    • Experience with feature engineering and data pipelines using Python, SQL, Scala or similar programming languages
    • Strong knowledge of distributed computing frameworks such as Apache Spark, Hadoop and Hadoop YARN for managing and processing big data sets
    • Experience in data acquisition (API calls, FTP downloads), ETL, transformation/normalization (from raw data to DB table schema), storage (raw files, database servers), and distribution & access (user entitlements, building APIs and access points for data)
    • Experience in data management – clean and commented code, version control, documentation, automated testing and deployment, etc.
    • Familiarity with the design (dimensional modeling and schema design) and optimization of databases or data warehouses – including error handling and logging, and system monitoring
    • Experience with event-driven technologies and streaming such as RMQ and/or Kafka is a big advantage

    Technologies Used:

    • Databases – PostgreSQL and Oracle
    • Programming languages – SQL / Python / Scala / Java
    • Computing frameworks – Apache Spark / Kafka / Hadoop and Hadoop YARN
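
    As a rough, non-authoritative illustration of how this stack fits together, the sketch below shows a minimal PySpark batch ETL step: extracting a table from PostgreSQL over JDBC, aggregating it, and writing the result to the warehouse layer as Parquet. The connection details, table and column names are invented for the example and are not part of the listing.

    # Minimal, illustrative PySpark batch ETL step.
    # All table names, columns and connection details are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = (
        SparkSession.builder
        .appName("bets-daily-etl")  # hypothetical job name
        .getOrCreate()
    )

    # Extract: read a source table from PostgreSQL over JDBC
    # (requires the PostgreSQL JDBC driver on the Spark classpath).
    bets = (
        spark.read.format("jdbc")
        .option("url", "jdbc:postgresql://db-host:5432/betting")  # hypothetical DSN
        .option("dbtable", "public.bets")                         # hypothetical table
        .option("user", "etl_user")
        .option("password", "etl_password")
        .option("driver", "org.postgresql.Driver")
        .load()
    )

    # Transform: aggregate bet counts and stakes per client per day.
    daily = (
        bets
        .withColumn("bet_date", F.to_date("placed_at"))
        .groupBy("client_id", "bet_date")
        .agg(
            F.count("*").alias("bet_count"),
            F.sum("stake").alias("total_stake"),
        )
    )

    # Load: write the aggregate to the warehouse layer as partitioned Parquet.
    daily.write.mode("overwrite").partitionBy("bet_date").parquet("/warehouse/daily_bets")

    spark.stop()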

    Technical Expertise:

    • Prepare ad-hoc reports using SQL queries (see the sketch after this list).
    • Improve existing data infrastructure by optimizing Spark cluster setup and related services
    • Have regular on-call duties to ensure flawless data collection and aggregation
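
    An ad-hoc report of the kind mentioned in the list above could, for example, be expressed as a plain SQL query run through Spark SQL; the dataset and column names below are hypothetical and simply reuse the names from the earlier ETL sketch.

    # Illustrative ad-hoc report: top 20 clients by turnover over the last 7 days.
    # The "daily_bets" dataset and its columns are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("adhoc-report").getOrCreate()

    # Register the (hypothetical) daily aggregate as a temporary view.
    spark.read.parquet("/warehouse/daily_bets").createOrReplaceTempView("daily_bets")

    report = spark.sql("""
        SELECT client_id,
               SUM(total_stake) AS turnover,
               SUM(bet_count)   AS bets
        FROM daily_bets
        WHERE bet_date >= date_sub(current_date(), 7)
        GROUP BY client_id
        ORDER BY turnover DESC
        LIMIT 20
    """)

    report.show(truncate=False)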

    Projects and Initiatives:

    • Collaborate with various teams to implement event-driven analytics (see the sketch after this list)

    • Establish a robust data governance framework, including data quality management processes and compliance protocols, to ensure accurate and reliable insights.
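
    As a hedged sketch of what the event-driven analytics mentioned in the first item above could look like, the snippet below consumes a hypothetical player-events topic from Kafka using the confluent-kafka client and keeps a running count of event types. The broker address, topic, consumer group and message schema are assumptions made for illustration only.

    # Minimal Kafka consumer sketch for event-driven analytics.
    # Topic name, group id and message schema are hypothetical.
    import json
    from collections import Counter

    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",  # hypothetical broker
        "group.id": "analytics-sketch",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["player-events"])  # hypothetical topic

    counts = Counter()
    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None:
                continue
            if msg.error():
                print(f"Consumer error: {msg.error()}")
                continue
            event = json.loads(msg.value())  # e.g. {"type": "bet_placed", ...}
            counts[event.get("type", "unknown")] += 1
            print(dict(counts))
    except KeyboardInterrupt:
        pass
    finally:
        consumer.close()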

    Strategic Contribution:

    • Contribute to the company’s growth, revenue generation, and competitive advantage by providing key insights, analysis, and new thinking based on data-driven decision-making.