We’re an AI-first global tech company with 25+ years of engineering leadership, 2,000+ team members, and 500+ active projects powering Fortune 500 clients, including HBO, Microsoft, Google, and Starbucks.
From AI platforms to digital transformation, we partner with enterprise leaders to build what’s next.
What powers it all? Our people are ambitious, collaborative, and constantly evolving.
About the Client
The client, headquartered in Chicago, is the leading provider of vehicle lifecycle solutions, enabling the companies that build, insure, and replace vehicles to power the next generation of transportation. Its platform delivers advanced mobile, AI, and automotive technologies, connecting a network of 350+ insurance companies, 24,000+ repair facilities, hundreds of parts suppliers, and dozens of third-party data and service providers. Together, these solutions enhance productivity and help clients deliver better experiences for end consumers.
What You’ll Do
Design, implement, and support scalable multi-tenant data infrastructure that integrates with multiple heterogeneous data sources; aggregate and retrieve data quickly and securely; curate data for use in reporting, analysis, machine learning models, and ad-hoc data requests
Design and implement complex ingestion and analysis pipelines and other BI solutions
Interface with other engineering and ML teams to extract, transform, and load data from a wide variety of sources using SQL, NoSQL, and big data technologies
Work with customers to understand, gather, and analyze their data sources and define/implement the ingestion strategy
What You Bring
5+ years of industry experience in software development, data engineering, business intelligence, or a related field with a solid track record of manipulating, processing, and extracting values from large datasets
Knowledge of AWS DataOps services (e.g., IAM, Lambda, Step Functions, EMR/Glue, and DynamoDB)
Expertise in SQL and NoSQL databases, data modeling, Python-based ETL development, and data warehousing
Proficiency with big data technologies (PostgreSQL, Hadoop, Hive, HBase, Spark)
Background in working with data streaming technologies (Kafka, Spark Streaming, etc.)
Expertise in data management and data storage best practices
Demonstrated capacity to clearly and concisely communicate about complex technical, architectural, and/or organizational problems and propose thorough iterative solutions
Ability to own a feature from concept to production, including proposal, discussion, and execution
Nice to have
Background in financial services, including banking, insurance, or an equivalent field
Degree in computer science, engineering, mathematics, or a related field
English level
Upper-Intermediate
Legal & Hiring Information
Exadel is proud to be an Equal Opportunity Employer committed to inclusion across minority, gender identity, sexual orientation, disability, age, and more
Reasonable accommodations are available to enable individuals with disabilities to perform essential functions
Please note: this job description is not exhaustive. Duties and responsibilities may evolve based on business needs
Benefits
International projects
In-office, hybrid, or remote flexibility
Medical healthcare
Recognition program
Ongoing learning & reimbursement
Well-being program
Team events & local benefits
Sports compensation
Referral bonuses
Top-tier equipment provision
Exadel Culture
We lead with trust, respect, and purpose. We believe in open dialogue, creative freedom, and mentorship that helps you grow, lead, and make a real difference. Ours is a culture where ideas are challenged, voices are heard, and your impact matters.