DataArt Bulgaria

Senior Data Engineer (with Python)

The job listing is published in the following categories:

  • Anywhere

    Tech Stack / Requirements

    *This position is fully remote, but only for candidates employed in Bulgaria. People can also work from one of our offices in Sofia or Varna if they prefer.

     

    About DataArt

    DataArt is a global software engineering firm and a trusted technology partner for market leaders and visionaries. Our world-class team designs and engineers data-driven, cloud-native solutions to deliver immediate and enduring business value.

    We promote a culture of radical respect, prioritizing your personal well-being as much as your expertise. We stand firmly against prejudice and inequality, valuing each of our employees equally.

    We respect the autonomy of others before all else, offering remote, onsite, and hybrid work options. Our learning and development centers, R&D labs, and mentorship programs encourage professional growth.

    Our long-term approach to collaboration with clients and colleagues alike focuses on building partnerships that extend beyond one-off projects. You can switch between projects and technology stacks, and our learning and networking systems create opportunities to explore new areas and advance your career.

     

    Position Overview

    We are looking for a Data Engineer to help build the next generation of our cloud-based data platform using AWS and Databricks. In this role, you will design and operate scalable, resilient, high-quality data pipelines and services that empower analytics, real-time streaming, and machine learning use cases across the organization.

     

    Responsibilities

    • Design, build, and operate robust, scalable, secure data pipelines across batch, streaming, and real-time workloads
    • Transform raw data into high-quality, reusable datasets and data products that power analytics and ML
    • Work hands-on with AWS, Databricks, PySpark/Spark SQL, and modern data tooling
    • Develop ETL/ELT processes, ingestion patterns, and streaming integrations using services such as Kafka, Kinesis, Glue, Lambda, EMR, DynamoDB, and Athena (a minimal batch example is sketched after this list)
    • Ensure data reliability and observability through monitoring, alerting, testing, and CI/CD best practices
    • Drive engineering best practices in performance tuning, cost optimization, security, metadata management, and data quality
    • Partner with Data Product Owners, ML teams, and business stakeholders to translate requirements into technical solutions
    • Lead technical design discussions, influence data platform decisions, and mentor other engineers
    • Operate services in production with a focus on uptime, data availability, and continuous improvement
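
    For illustration only, here is a minimal sketch of the kind of batch pipeline described above, using PySpark to turn raw JSON into a partitioned Parquet dataset. The S3 paths, column names, and app name are hypothetical placeholders, not details taken from this listing:

        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        # Minimal batch ETL sketch; all paths and columns below are
        # hypothetical placeholders, not details from the job ad.
        spark = SparkSession.builder.appName("orders-batch-etl").getOrCreate()

        raw = spark.read.json("s3://example-raw/orders/")  # hypothetical source path

        cleaned = (
            raw
            .filter(F.col("order_id").isNotNull())            # drop malformed rows
            .dropDuplicates(["order_id"])                     # de-duplicate on a business key
            .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
        )

        # Write a columnar, partition-pruned dataset for downstream analytics.
        (cleaned.write
            .mode("overwrite")
            .partitionBy("event_date")
            .parquet("s3://example-curated/orders/"))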

     

    Requirements

    • 4+ years of experience building data pipelines and large-scale ETL/ELT workflows
    • Strong hands-on experience with AWS cloud data services and the Databricks ecosystem
    • Deep proficiency in Python, PySpark/Spark SQL, SQL optimization, and performance tuning
    • Experience with streaming architectures: Kafka, Kinesis, or similar (see the streaming sketch after this list)
    • Familiarity with CI/CD, infrastructure-as-code, automation, and DevOps practices
    • Experience with data warehousing, structured and semi-structured data, and performance-optimized storage formats (Parquet/Delta)
    • Knowledge of Agile development and modern engineering practices
    • Collaborative communicator who works well with engineering, product, and business teams
    • An enabler mindset and a drive to support peers and function as one team
    • Able to translate business needs into scalable technical solutions
    • A mindset of ownership, accountability, and continuous improvement
    • Passion for data craftsmanship, clarity, and quality
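
    In the same spirit, a minimal structured-streaming sketch that reads from Kafka and appends to a Delta table. It assumes the Spark Kafka connector and Delta Lake packages are available to the cluster; the broker address, topic, schema, and paths are hypothetical:

        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F
        from pyspark.sql.types import StringType, StructField, StructType, TimestampType

        spark = SparkSession.builder.appName("orders-stream").getOrCreate()

        # Hypothetical event schema for the JSON payload.
        schema = StructType([
            StructField("order_id", StringType()),
            StructField("status", StringType()),
            StructField("event_ts", TimestampType()),
        ])

        events = (
            spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
            .option("subscribe", "orders")                     # hypothetical topic
            .load()
            .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
            .select("e.*")
        )

        # Append parsed events to a Delta table; the checkpoint location is
        # what gives the query its restart and exactly-once semantics.
        query = (events.writeStream
            .format("delta")
            .option("checkpointLocation", "s3://example-checkpoints/orders/")
            .outputMode("append")
            .start("s3://example-curated/orders_stream/"))

        query.awaitTermination()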

     

    Nice To Have

    • Experience with Machine Learning data pipelines, feature stores, or MLOps
    • Familiarity with data governance, data cataloging, lineage, and metadata tools
    • Experience with containerization and orchestration (Docker, ECS, Kubernetes, Airflow, Step Functions); a minimal Airflow sketch follows this list
    • Knowledge of scalable data warehousing technologies
    • Contributions to engineering communities, open-source, or internal tech groups
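
    On the orchestration side, a minimal Airflow DAG sketch (Airflow 2.4+ API) that schedules a daily Spark job. The DAG id, schedule, and spark-submit command are hypothetical placeholders:

        from datetime import datetime

        from airflow import DAG
        from airflow.operators.bash import BashOperator

        # Minimal daily orchestration sketch; ids and commands are hypothetical.
        with DAG(
            dag_id="orders_batch_etl",
            start_date=datetime(2024, 1, 1),
            schedule="@daily",
            catchup=False,
        ) as dag:
            run_etl = BashOperator(
                task_id="run_spark_job",
                bash_command="spark-submit etl_job.py",  # hypothetical job script
            )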

     

    What We Offer

    • Unique corporate culture – no micromanagement, friendly atmosphere, freedom, and mutual respect
    • Flexible schedule – the ability to change projects, work from home, and try out different roles
    • Professional Development Map – a comprehensive map of your professional development within DataArt
    • We hire people for the company, not for a single project. If a project (or your work on it) ends, you move to another project or to paid “Idle” time.
    • Social benefits – additional health insurance, life insurance, sports card, etc.
    • Opportunity to work from another DataArt office in a different city or country (temporarily or permanently)
    • Free English courses
    • Cozy office with a great atmosphere
    • Snacks, drinks, and fruits are always available

     

    Reference number: DE00231