Principles for Automated and Reproducible Benchmarking

Koskela, Tuomas and Christidi, Ilektra and Giordano, Mosè and Dubrovska, Emily and Quinn, Jamie and Maynard, Christopher and Case, Dave and Olgu, Kaan and Deakin, Tom

First International Workshop on HPC Testing and Evaluation of Systems, Tools, and Software (HPCTESTS), held in conjunction with Supercomputing, 2023

Abstract

The diversity of processor technology used by High Performance Computing (HPC) facilities is growing, and so applications must be written in such a way that they can attain high levels of performance across a range of different CPUs, GPUs, and other accelerators. Measuring application performance across this wide range of platforms becomes crucial, but there are significant challenges to doing so rigorously and in a time-efficient way, while ensuring results are scientifically meaningful, reproducible, and actionable. We present a methodology for measuring and analyzing the performance portability of a parallel application, and share a software framework that combines and extends adopted technologies to provide a usable benchmarking tool. We demonstrate the flexibility and effectiveness of the methodology and benchmarking framework by showcasing a variety of benchmarking case studies that utilize a stable of supercomputing resources at a national scale.

In press

@inproceedings{hpctests23,
  author = {Koskela, Tuomas and Christidi, Ilektra and Giordano, Mosè and Dubrovska, Emily and Quinn, Jamie and Maynard, Christopher and Case, Dave and Olgu, Kaan and Deakin, Tom},
  title = {{Principles for Automated and Reproducible Benchmarking}},
  booktitle = {{First International Workshop on HPC Testing and Evaluation of Systems, Tools, and Software (HPCTESTS), held in conjunction with Supercomputing}},
  year = {2023},
  publisher = {{ACM}},
  keywords = {Conferences and Workshops},
  doi = {10.1145/3624062.3624133},
  note = {In press}
}