HORIZON-EUROHPC-JU-2024-BENCHMARK-05
A European HPC-centric Benchmark Framework
Call text (as on F&T portal)
Expected Outcome:
The Project is expected to contribute to the following outcomes:
- Enhanced decision-making through comprehensive system comparisons that improve the procurement process for exascale and post-exascale supercomputers and supercomputers with dedicated AI capabilities. This will enable more informed choices regarding the acquisition of new systems and upgrades of existing ones
- HPC application developers and end-users competent in selecting systems that best meet their needs, balancing quality factors such as accuracy against cost considerations such as time-to-solution
- Overall improved operation and fine-tuning of HPC and HPC-AI systems leading to improved performance, throughput and energy optimization, and improved end-user experience
- A unified, extensible and well-documented benchmarking framework to easily accommodate new, community-contributed benchmarks with common standards, versioning and control
- A well-maintained and continuously updated benchmarking suite for exascale and post-exascale HPC, including a set of applications as well as AI models.
Scope:
A. Deployment of a benchmarking framework for designing, developing and executing exascale HPC and HPC-AI benchmarks. The envisioned benchmarking framework will:
- offer a fine-grained and fair comparison methodology among different HPC systems, i.e. all benchmarks, benchmark run rules[1] and benchmark submission rules must be designed to ensure reproducibility, repeatability and replicability of metrics on the same system, as well as fairness and comparability of metrics across different systems
- define precise performance metrics with a clear focus on energy-related performance indicators
- standardise all benchmarking input and output formats
- collect and report all benchmarking results while offering statistically sound result analyses
- ensure that all benchmarks are executable on the respective target environment(s)
- offer a standardised, structured workflow capturing and streamlining the entire benchmarking process
- offer a standardised repository with transparent version control
- provide a reference implementation for each benchmark
- use a EuroHPC reference system, where applicable, to normalize the performance metrics produced by the benchmarking suite, i.e. each benchmark is run and measured on this system to establish a reference value for that benchmark[2]; the normalized performance is then the quotient of the performance value attained on the EuroHPC reference machine and the one on the system under test
- be of production quality and ready to assess all EuroHPC supercomputers and supercomputers with AI capabilities
- provide all required templates with relevant input data to properly execute the benchmarking suite on every EuroHPC system.
The benchmarking framework along with its workflows will be realised in a software implementation that offers to the end-user a dynamic workspace for the entire workflow.
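The normalization scheme described above, together with the invariance requirement of footnote [2], can be sketched as follows. This is an illustrative sketch only: the function name and the numeric performance values are our assumptions, not part of the call text.

```python
def normalized_performance(ref_perf: float, sut_perf: float) -> float:
    """Quotient of the performance attained on the EuroHPC reference
    machine and the performance on the system under test, as defined
    in the call text."""
    return ref_perf / sut_perf

# Two systems under test measured with the same benchmark
# (illustrative figures, e.g. in GFLOP/s):
perf_a, perf_b = 120.0, 80.0

# Footnote [2]: the performance of system A relative to system B must
# stay invariant even if a different reference machine is used.
for ref_perf in (100.0, 250.0):
    norm_a = normalized_performance(ref_perf, perf_a)
    norm_b = normalized_performance(ref_perf, perf_b)
    assert abs(norm_a / norm_b - perf_b / perf_a) < 1e-12
```

The invariance holds because the reference value cancels in the quotient of two normalized scores, which is why any reference machine yields the same system ranking.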
B. Establishing a comprehensive exascale HPC and HPC-AI benchmarking suite utilizing the framework developed in the first objective. This benchmarking suite, with its associated performance metrics, will be designed to measure and assess the performance of HPC as well as HPC-AI[3] systems at various levels of granularity.
The envisioned benchmarking suite is expected to:
- be generally hardware agnostic
- provide documentation for developers and end-users
- catalogue well-established benchmarks of both technical areas
- continuously update the portfolio with novel benchmarks of both technical areas
- ensure that each benchmark produces at least one metric; examples are time-to-solution (under a quality constraint), throughput or utilization
- define reliable and appropriate common metrics to compare the different architectures based on pre-defined criteria (e.g. efficiency)
- ensure that all benchmarks and associated metrics comprehensively cover all relevant workloads and performance aspects, meeting the diverse needs of the European HPC-AI community in a future-proof manner
- offer a comprehensive coverage of contemporary and upcoming architectures, utilizing current representative and upcoming workloads from the HPC and HPC-AI domains
- be application oriented, reflecting actual use-patterns, use-cases and diverse workloads in both technical areas (exascale HPC as well as HPC-AI), ensuring that the genuine capabilities and limitations of each system are well captured
- ensure the scalability of each benchmark by identifying relevant scale parameters[4].
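To make the requirements on standardised output formats and per-benchmark metrics concrete, here is a minimal, hypothetical result record. All field names and values are our assumptions for illustration; the call does not prescribe a schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class BenchmarkResult:
    """Hypothetical standardised output record for one benchmark run."""
    benchmark: str                                # benchmark identifier
    system: str                                   # system under test
    version: str                                  # benchmark version (for version control)
    metrics: dict = field(default_factory=dict)   # at least one metric per benchmark
    quality_constraint_met: bool = True           # e.g. an accuracy threshold was reached

    def to_json(self) -> str:
        # A machine-readable, stable serialisation eases collection and
        # statistically sound analysis of results across systems.
        return json.dumps(asdict(self), sort_keys=True)

# Illustrative run (benchmark and system names are made up):
run = BenchmarkResult(
    benchmark="hpcg",
    system="EuroHPC-SystemX",
    version="1.0.0",
    metrics={"time_to_solution_s": 432.1, "energy_kJ": 980.0},
)
record = run.to_json()
```

A common record like this is one way the suite could keep results comparable across systems while still letting each benchmark define its own metrics.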
Proposals should provide a thorough justification for the selection of each benchmark and performance metric, clearly explaining how they align with the specific requirements and priorities of the European HPC-AI landscape. The inclusion or integration of existing benchmarks under the umbrella of this initiative is encouraged, provided there are prior agreements with the benchmark owners and compatibility with licensing conditions.
Proposals must outline a strategy for ensuring the sustainability and availability of the benchmarking suite beyond the duration of the action, specifically focusing on how to transform it into a community-driven effort. The proposal must also outline a clear IP plan targeting industry needs.
The consortium will actively coordinate with international collaborators to establish common and objective benchmarking standards.
The project will also propose and maintain a detailed strategic development roadmap for the action.
The consortium will actively engage with industry and research communities through workshops, working groups and feedback loops, ensuring that all benchmarks remain relevant and up to date.
Requirements:
[1] Run rules define required and forbidden hardware, software, optimization, tuning, and procedures.
[2] When two different systems are compared with the same benchmark, their performance relative to each other must be invariant, even if different reference machines are used.
[3] We shall refer to conventional HPC and HPC-AI systems and benchmarks collectively as HPC-AI systems and benchmarks.
[4] For example, the scale parameter for an FFT benchmark is the window size and the scale parameters for AI model training applications include the size of the dataset, model size, and, in some cases, the number of models being trained simultaneously (e.g., in bagging scenarios).
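The scale parameters of footnote [4] could be captured per benchmark family as in the sketch below. The benchmark names and parameter values are hypothetical, chosen only to mirror the footnote's examples.

```python
# Hypothetical scale-parameter declarations for two benchmark families;
# names and values are illustrative, not taken from the call text.
scale_parameters = {
    "fft": {"window_size": [2**20, 2**24, 2**28]},
    "ai_training": {
        "dataset_size": [10**6, 10**8],
        "model_parameters": [10**9, 10**11],
        "concurrent_models": [1, 8],   # e.g. bagging scenarios
    },
}

def scaling_points(bench: str) -> int:
    """Number of distinct scale settings to sweep for one benchmark:
    the size of the Cartesian product of its scale-parameter values."""
    n = 1
    for values in scale_parameters[bench].values():
        n *= len(values)
    return n
```

Declaring scale parameters explicitly lets the framework check that each benchmark actually exercises the scalability range it claims.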
News flashes
Opening date: 2025-12-11 (4 months ago)
Closing date: 2026-04-14 (2 days from now)
Procedure: single-stage
Budget: 2,000,000
Expected grants: 1
Contribution: 500,000 - 1,000,000
The EC has appended news to this call topic 2 times:
- 2026-04-11: FAQ for the call: horizon-eurohpc-ju-202...
- 2026-04-11: The submission session is now available...
Call topics are often grouped together in a call. Sometimes this is for a thematic reason, but often it is also for practical reasons.
There is 1 other topic in this call:
Showing the latest information. Found 10 versions of this call topic in the F&T portal.
Information from the following snapshots:
- 2026-03-26_06-33-26
- 2026-03-25_06-34-20
- 2026-03-19_06-34-15
- 2026-03-03_06-33-53
- 2026-01-06_06-31-09
- 2025-12-24_06-31-06
- 2025-12-23_06-31-13
- 2025-12-17_06-30-58
- 2025-12-16_06-31-03
- 2025-12-12_06-30-43
Annotations
Events
Events are added by the ideal-ist NCP community and are hand-picked. If you would like to suggest an event, please contact idealist@ffg.at.
Call topic timeline
- Work programme available (6 months ago): The call topics are published first in the Work Programme, which is available a while before the call opens. By following the Work Programme publications, you can get a head start.
- Opening date (4 months ago): The call opened for submissions.
- Publication date (3 months ago): The call was first imported in TopicTree.
- Today
- Closing date (2 days from now): Deadline for submitting a project.
- Time to inform applicants, estimate (5 months from now): The maximum time to inform applicants (TTI) of the outcome of the evaluation is five months from the call closure date.
- Sign grant agreement, estimate (8 months from now): The maximum time to sign grant agreements (TTG) is three months from the date of informing applicants.
Funded Projects
Project information comes from CORDIS (for Horizon 2020 and Horizon Europe) and will be sourced from the F&T Portal (for Digital Europe projects).