WP 1. Survey of existing scalable scientific software, algorithms and techniques

The objective of this WP is to identify current developments in the software made available by the partners and under development by third parties, together with the associated algorithms and techniques, in order to assess their scalability towards exascale simulation codes.

At the end of this WP, a detailed map of the most representative ingredients addressed to the exascale is expected: relevant codes and libraries in progress (e.g. Trilinos), algorithms, techniques and trends in the recommended use of accelerators, GPUs and ARM processors, linked to other research projects. Among FP7 projects these include the already mentioned DEEP and MONTBLANC, attending to their hardware-related goals, and CRESTA on the software side; non-European initiatives, such as Interoperable Technologies for Advanced Petascale Simulations (ITAPS, http://www.itaps.org), will also be considered.

Task 1.1 Design of scalability tests

In this task we will design scalability tests for the parallel algorithms on parallel architectures, as a measure of their capability to effectively utilize an increasing number of processors. The most relevant parameters will be defined for the benchmarking that will be used throughout the whole project: effective use of many-core hardware, flexibility when working with hybrid architectures, load transfer, speed, the effect of software strategies on power saving, and error propagation (an important issue when working with such huge amounts of data).
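Purely as an illustrative sketch (the field names, units and grouping are assumptions, not quantities fixed by the project), the parameters recorded for each benchmark run could be collected in a small structure such as:

```python
from dataclasses import dataclass

@dataclass
class BenchmarkRun:
    """One benchmark measurement; field names and units are illustrative only."""
    code: str              # e.g. "KRATOS", "FEMPAR"
    architecture: str      # e.g. "CPU", "hybrid CPU+GPU", "ARM"
    processes: int         # number of MPI ranks / cores used
    wall_time_s: float     # time to solution, for speed and speedup
    energy_j: float        # energy to solution, for power-saving comparisons
    error_norm: float      # accumulated numerical error, to track error propagation
```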

The numerical and parallel scalability of an algorithm on a parallel architecture will be studied under different sets of conditions. The scalability analysis will be used to select the best algorithm/architecture combination for a problem under different constraints related to the problem size and the number of processors. It will also be used to predict the performance of a parallel algorithm and a parallel architecture for a large number of processors from the known performance on fewer processors.
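As an example of how performance on a large number of processors can be predicted from measurements on fewer processors, a strong-scaling model such as Amdahl's law can be fitted to the observed speedups and then extrapolated. The sketch below uses purely illustrative timings and assumes this simple model rather than any model prescribed by the project:

```python
import numpy as np
from scipy.optimize import curve_fit

def amdahl_speedup(p, serial_fraction):
    """Amdahl's-law speedup S(p) = 1 / (f + (1 - f) / p) for serial fraction f."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

# Hypothetical strong-scaling measurements: processor counts and wall-clock times.
procs = np.array([1, 2, 4, 8, 16, 32])
times = np.array([100.0, 52.0, 27.5, 15.0, 8.8, 5.6])   # seconds, illustrative

measured_speedup = times[0] / times
(serial_fraction,), _ = curve_fit(amdahl_speedup, procs, measured_speedup,
                                  p0=[0.05], bounds=(0.0, 1.0))

# Extrapolate the fitted model to processor counts not yet measured.
for p in (1024, 16384, 262144):
    print(f"p = {p:6d}  predicted speedup = {amdahl_speedup(p, serial_fraction):8.1f}")
```

Weak-scaling (Gustafson-style) fits can be handled analogously when the problem size grows with the number of processors.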

In the scalability analysis we will introduce hardware cost factors (in addition to speedup and efficiency) so that the overall cost-effectiveness can be determined.
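A minimal sketch of how speedup, efficiency and a cost-weighted figure could be combined, assuming a single hypothetical per-processor cost factor and an optional node power rating (neither is a value defined by the project):

```python
def scalability_metrics(t1, tp, p, proc_cost=1.0, proc_power_w=None):
    """Speedup, parallel efficiency and a simple cost-effectiveness figure.

    proc_cost is a hypothetical per-processor hardware cost factor; when
    proc_power_w is given, an energy-to-solution estimate is added as well.
    """
    speedup = t1 / tp
    efficiency = speedup / p
    metrics = {
        "speedup": speedup,
        "efficiency": efficiency,
        "cost_effectiveness": speedup / (p * proc_cost),  # performance per cost unit
    }
    if proc_power_w is not None:
        metrics["energy_to_solution_J"] = p * proc_power_w * tp
    return metrics

# Illustrative run: 100 s on 1 processor, 8.8 s on 16 processors at 250 W each.
print(scalability_metrics(t1=100.0, tp=8.8, p=16, proc_cost=1.0, proc_power_w=250.0))
```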

Task leader: CESCA. Partners involved: CIMNE, CESCA, LUH-HLRN, NTUA

D1.1 Report with scalability tests

Lead beneficiary: CESCA

Task 1.2 Analysis of existing codes

In this task we will look in detail at the parallelization features of the codes made available by the partners, as well as the exascale-oriented features expected to be implemented in them.

Although the codes to be surveyed have already been briefly described in section 1.3.3, below are some additional comments on their specific role in the project and towards the exascale:

  • GiD will be used as the main graphics interface to integrate the rest of the codes, with pre- and post-processing capabilities. GiD supports most of the standard file formats, including the Visualization Toolkit (VTK) data model used by ParaView, one of the reference packages for large-scale visualization of scientific data (a minimal example of such a VTK exchange file is sketched after this list). New features in GiD related to the exascale will be developed mainly in WPs 3 and 7;
  • KRATOS will be used as a general framework to build generic parallel simulation programs, and already integrates libraries such as Trilinos in its kernel layers. KRATOS also includes some modules that can be executed on GPUs. New KRATOS modules will be prepared for the exascale in WPs 4, 5 and 9;
  • FEMPAR will be used in WPs 5 and 9, for multiphysics-related problems and, especially, for magnetohydrodynamics. FEMPAR has already been run on some of the top supercomputers in the world (CURIE, HERMIT, HPC-FF, MARENOSTRUM, MINOTAURO, HELIOS);
  • DEMPACK and DEMFLOW, as discrete element codes, will be further developed in WP 4, where significant progress is expected compared with the more conventional codes, given the parallel nature of their formulation;
  • SOLVERIZE, as a general-purpose code with implicit parallel solvers that is flexible enough to integrate different simulation methods, will be used as the base to develop the implicit parallel solvers for structural, fluid and coupled problems, as well as the optimisation solver of WP 6 that accounts for uncertainties (an important matter also at the hardware level for the exascale); these solvers will be used in WPs 4, 5 and 6;
  • STAMPACK, VULCAN and Click2Cast are industrial codes strongly focused on engineering end users; they will gain scalability through the new features of the other, more generic and scientific codes, and will serve to identify problems and to validate the practical use by industry of simulation codes at the exascale. This integration will be performed in WP 9.
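Regarding the GiD/ParaView exchange mentioned in the first item above, the sketch below writes a minimal legacy VTK unstructured-grid file (a single tetrahedron with one nodal scalar field) of the kind ParaView reads directly; the mesh, field name and values are illustrative only and do not correspond to any project data:

```python
# Write a minimal legacy VTK unstructured grid: one tetrahedron with a nodal
# scalar field. ParaView can open the resulting file directly; values are
# purely illustrative.
with open("example_tet.vtk", "w") as f:
    f.write("# vtk DataFile Version 3.0\n")
    f.write("Illustrative exchange file\n")
    f.write("ASCII\n")
    f.write("DATASET UNSTRUCTURED_GRID\n")
    f.write("POINTS 4 float\n")
    f.write("0 0 0\n1 0 0\n0 1 0\n0 0 1\n")
    f.write("CELLS 1 5\n")           # 1 cell, 5 integers follow (count + 4 node ids)
    f.write("4 0 1 2 3\n")
    f.write("CELL_TYPES 1\n")
    f.write("10\n")                  # 10 = VTK_TETRA
    f.write("POINT_DATA 4\n")
    f.write("SCALARS temperature float 1\n")
    f.write("LOOKUP_TABLE default\n")
    f.write("300.0\n310.0\n305.0\n320.0\n")
```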

Task leader: NTUA. Partners involved: CIMNE, LUH-IKM, NTUA, QUANTECH

D1.2 Report with the scalability assessment of available codes

Lead beneficiary: NTUA


The research leading to these results has received funding from the European Community's Seventh Framework Programme under grant agreement n° 611636