• Design, develop and maintain scalable batch ETL and near-real-time data pipelines and architectures for various parts of our business, on fast and versatile data sources with hundreds of thousands of changes per day
• Ensure all data provided is of the highest quality, accuracy, and consistency
• Identify, design, and implement internal process improvements for optimizing data delivery and re-designing data pipelines for greater scalability
• Build out new API integrations to support continuing increases in data volume and complexity
• Communicate with data scientists, DevOps engineers, and BI analysts to understand business processes and data needs for specific features
Requirements for effective performance in the role:
• 2+ years of experience in data engineering, data platforms, BI, or data-centric applications, such as data warehouses, operational data stores, and data integration projects
• Experience with one or more ML workflow orchestration frameworks (Apache Airflow, Kubeflow, MLflow, etc.)
• Proficient in SQL and PL/SQL programming, working with Oracle databases
• Excellent coding skills in Python
• Experience with software development automation tools such as Jenkins, and with version control systems such as GitHub or GitLab
• Understanding of containerization and orchestration technologies such as Docker and Kubernetes
Become part of the Fibank family by sending a CV with a recent photo by 30 April 2025.
For additional information: contact the Human Capital Management Department, Internal Consultants Unit, phone 02/9100100.