Mobile Wave Solutions

The job listing is published in the following categories:

  • Anywhere

    Tech Stack / Requirements

    Mobile Wave Solutions is a professional services company specializing in software development as a service. With a team of over 120 engineers, we deliver scalable, high-quality software that empowers our global clients to innovate and grow. We value collaboration, technical excellence, and a pragmatic approach to solving complex problems.

     

    About the Role

    The Senior Data Engineer will design, build, and optimize a next-generation Lakehouse data platform. This is a hands-on role focused on transforming complex legacy datasets into structured, high-quality data products that power analytics and operational workflows. We seek a proactive, independent problem-solver who excels in a fast-paced rebuild environment, maintains code stability (including AI-generated code), and drives rapid iteration.

     

    Key Responsibilities

    • Design and implement robust, scalable data ingestion and transformation pipelines using Databricks, PySpark, and distributed processing. Utilize Airflow (or similar) for reliable workflow orchestration.
    • Implement Delta Lake principles, focusing on CDC and schema evolution (see the sketch after this list). Establish and integrate data quality frameworks (e.g., Great Expectations) within CI/CD pipelines for data integrity.
    • Develop and optimize complex SQL and Python scripts. Integrate diverse data sources (APIs, S3 files, etc.) and handle both structured and unstructured data.
    • Support the implementation of data governance and cataloguing solutions (e.g., Unity Catalog). Proactively investigate and improve inconsistent legacy datasets.
    • Guide and manage AI agents for code generation, ensuring quality and stability. Work pragmatically and collaboratively to drive technical solutions.
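
    For illustration only, the CDC and schema-evolution work described above typically reduces to a Delta Lake MERGE driven from PySpark. The sketch below is not part of the listing: the S3 paths, the customer_id key, the change_ts column, and the op flag convention are hypothetical, and it assumes the delta-spark package is available on the cluster.

    ```python
    # Illustrative CDC upsert into a Delta table; all paths and column names are hypothetical.
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window

    spark = (
        SparkSession.builder.appName("cdc-upsert-sketch")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
        # Allow new source columns to be added to the target during MERGE (schema evolution).
        .config("spark.databricks.delta.schema.autoMerge.enabled", "true")
        .getOrCreate()
    )

    TARGET = "s3://example-bucket/lakehouse/silver/customers"   # hypothetical Delta table path
    SOURCE = "s3://example-bucket/landing/customers_cdc/"       # hypothetical change-feed landing zone

    # Keep only the newest change per business key before merging.
    changes = spark.read.json(SOURCE)
    latest = (
        changes
        .withColumn("_rn", F.row_number().over(
            Window.partitionBy("customer_id").orderBy(F.col("change_ts").desc())))
        .filter("_rn = 1")
        .drop("_rn")
    )

    # One MERGE applies deletes, updates, and inserts from the change batch.
    target = DeltaTable.forPath(spark, TARGET)
    (
        target.alias("t")
        .merge(latest.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedDelete(condition="s.op = 'D'")
        .whenMatchedUpdateAll(condition="s.op <> 'D'")
        .whenNotMatchedInsertAll(condition="s.op <> 'D'")
        .execute()
    )
    ```

    Schema evolution is handled here via the autoMerge setting during the MERGE; the exact strategy in practice would depend on the platform's governance rules.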

     

    Qualifications

    • 5+ years of professional experience in data engineering, focused on cloud and distributed processing environments.
    • Strong experience with Databricks, PySpark, distributed processing, and Delta Lake. Deep knowledge of CDC and schema evolution.
    • Expert SQL optimization and Python skills. Hands-on experience with Airflow (or similar) for orchestration; see the sketch after this list.
    • Familiarity with relevant AWS components. Good understanding of CI/CD for data workflows and implementing data quality frameworks.
    • Knowledge of streaming (Kafka/Kinesis) and exposure to Unity Catalog or similar governance tools.
    • Proactive problem-solver; comfortable working with complex, inconsistent data without needing explicit specs.
    • Able to work independently in a fast-paced rebuild environment.
    • Collaborative, pragmatic, thrives on rapid iteration.
    • Skilled at directing AI agents to generate code while maintaining high quality standards.
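
    As a rough illustration of the orchestration experience listed above, a pipeline of this kind is commonly expressed as an Airflow DAG. The sketch below assumes Airflow 2.4+ and uses a hypothetical dag_id and placeholder task callables rather than anything specified in the listing.

    ```python
    # Illustrative Airflow DAG; dag_id, schedule, and task bodies are placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def ingest_raw(**_):
        # Placeholder: land source files / API extracts in object storage.
        print("ingesting raw data")


    def run_transformations(**_):
        # Placeholder: trigger the Databricks / PySpark transformation job.
        print("running transformations")


    def validate_outputs(**_):
        # Placeholder: run data quality checks (e.g., a Great Expectations suite).
        print("validating outputs")


    with DAG(
        dag_id="lakehouse_daily_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",   # 'schedule' requires Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        ingest = PythonOperator(task_id="ingest_raw", python_callable=ingest_raw)
        transform = PythonOperator(task_id="transform", python_callable=run_transformations)
        validate = PythonOperator(task_id="validate", python_callable=validate_outputs)

        # Linear dependency: ingest -> transform -> validate.
        ingest >> transform >> validate
    ```

    In practice each PythonOperator body would hand off to the real ingestion, Databricks transformation, and data quality jobs; the >> chaining only encodes the execution order.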

     

    Our Benefits

    • Remote Office – Flexible hybrid working model
    • Parking Space – We provide free parking spots
    • Fun Office Space – We offer a game zone and a relaxation area
    • Health Insurance – Additional private health insurance, including a dental care plan
    • Personal Development – Company-sponsored training budget to further develop your skills
    • Employee Referral Program – Receive a bonus for referring a friend
    • Holidays – Enjoy an extra 5 days after your 1st and 5th year
    • Social Events – We love to celebrate our success together
    • Family Insurance – Option to add a family member to your insurance
    • Sports Cards – 100% sponsored by the company