Integrated Data Analysis Pipelines for Large-Scale Data Management, HPC, and Machine Learning

Acronym: DAPHNE

Funding programme: EU H2020, EU.2.1.1. – INDUSTRIAL LEADERSHIP – Leadership in enabling and industrial technologies – Information and Communication Technologies (ICT)

  • Project website at the funder (CORDIS): https://cordis.europa.eu/project/id/957407
  • Reference number: 957407
  • Topic: ICT-51-2020 – Big Data technologies and extreme-scale analytics
  • Funding scheme: RIA – Research and Innovation Action
  • Call for proposal: H2020-ICT-2018-20
  • Fields of science:
    • /natural sciences/chemical sciences/analytical chemistry/quantitative analysis
    • /humanities/languages and literature/languages – general
    • /natural sciences/computer and information sciences/artificial intelligence/machine learning
    • /natural sciences/computer and information sciences/data science/data analysis
  • Time frame: 1 December 2020 – 30 November 2024
  • Total cost: 6,609,665.00 €
  • EU funding rate: 100 %
  • UM FERI share: 244,975.00 €

Project Coordinator: KNOW-CENTER GMBH RESEARCH CENTER FOR DATA-DRIVEN BUSINESS & BIG DATA ANALYTICS (Austria)

Project Partners:

Project Summary

Modern data-driven applications leverage large, heterogeneous data collections to find interesting patterns and to build robust machine learning (ML) models for accurate predictions. Large data sizes and advanced analytics spurred the development and adoption of data-parallel computation frameworks like Apache Spark or Flink, as well as distributed ML systems like MLlib, TensorFlow, or PyTorch. A key observation is that these new systems share many techniques with traditional high-performance computing (HPC), and that the architecture of the underlying hardware clusters is converging.

Yet, the programming paradigms, cluster resource management, and data formats and representations differ substantially across data management, HPC, and ML software stacks. There is, however, a trend toward complex data analysis pipelines that combine these different systems. Examples are workflows of distributed data pre-processing, tuned HPC libraries, and dedicated ML systems, but also HPC applications that leverage ML models for more cost-effective simulation. Major obstacles are (1) limited development productivity for integrated analysis pipelines due to different programming models and separated cluster environments, (2) unnecessary data movement overhead and underutilization due to separate, statically provisioned clusters, and (3) the lack of a common system infrastructure with good interoperability.

For these reasons, DAPHNE's overall objective is the definition of an open and extensible system infrastructure for integrated data analysis pipelines. We aim to build a reference implementation of language abstractions (i.e., APIs and a domain-specific language), an intermediate representation, and compilation and runtime techniques with support for integrating and scheduling heterogeneous accelerator and storage devices. A variety of real-world, high-impact use cases, datasets, and a new benchmark will be used for qualitative and quantitative analysis compared to the state of the art.
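To make the idea of an "integrated data analysis pipeline" concrete, the following is a minimal, self-contained sketch in plain NumPy of the kind of workflow the project targets: a data-management step (cleaning and standardization) feeding an ML step (least-squares model training). This is an illustration only, not DAPHNE code or its API; all function names (`preprocess`, `train_linreg`, `predict`) are hypothetical.

```python
import numpy as np

def preprocess(raw):
    # Data-management step: drop rows with missing values, then
    # standardize the feature columns (zero mean, unit variance).
    clean = raw[~np.isnan(raw).any(axis=1)]
    X, y = clean[:, :-1], clean[:, -1]
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    return X, y

def train_linreg(X, y):
    # ML step: ordinary least squares with an intercept column.
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return np.linalg.lstsq(Xb, y, rcond=None)[0]

def predict(w, X):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return Xb @ w

# End-to-end pipeline on synthetic data with a known linear target.
rng = np.random.default_rng(0)
raw = rng.normal(size=(100, 3))
raw[:, -1] = 2.0 * raw[:, 0] - raw[:, 1] + 0.5  # exact linear ground truth
raw[5, 0] = np.nan                              # inject a missing value
X, y = preprocess(raw)
w = train_linreg(X, y)
print(predict(w, X[:3]))  # predictions for the first three cleaned rows
```

In a conventional setup, each of these steps might live in a different system (e.g., a data-parallel framework for cleaning, an ML system for training), with data serialized between them; DAPHNE's premise is that a common intermediate representation and runtime let such steps be compiled and scheduled together.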

UM FERI Activities

UM FERI contributes across the whole project: project management; system architecture; compilation and the abstractions of the domain-specific language; runtime and integration; use-case preparation; benchmarking and analysis; and the dissemination and exploitation of project results.

DAPHNE Project Research Work Group Members at UM FERI

Project data on SICRIS: https://cris.cobiss.net/ecris/si/sl/project/22513


