
In exascale systems, massive amounts of data must be queried, transferred, and analyzed in near real-time, which requires very large amounts of memory and storage.
The goal of the ASPIDE project is to propose solutions to these challenges and thereby improve the performance and efficiency of extreme-data processing applications. To this end, the project designs, implements, and tests programming models and tools for data-intensive applications. It also integrates and evaluates methods and tools for application and infrastructure monitoring in large-scale systems.
The ASPIDE project contributes new programming paradigms, APIs, runtime tools, and methodologies for expressing data-intensive tasks on exascale systems, paving the way for the exploitation of massive parallelism over a simplified model of the system architecture.
Solutions developed by the project are evaluated on applications in domains such as healthcare and Industry 4.0. The technologies applied include deep learning.
The project is implemented by a scientific and industrial consortium: University Carlos III of Madrid (project leader), Poznan Supercomputing and Networking Center, Institute e-Austria Timisoara, Alpen-Adria-Universität-Klagenfurt, Servicio Madrileño de Salud, Integris S.p.A. and BULL/ATOS.
Potential adopters of the project's results include computing centers and users of data-intensive applications, including deep learning applications.