IXPUG Workshop at HPC Asia 2023
Conference Dates: February 27 – March 2, 2023
Workshop Date: February 27, 2023, 13:30-17:00 SGT (Singapore Time, GMT+8). Room: Garnet 216.
Location: HPC Asia 2023 – International Conference on HPC in Asia-Pacific Region; co-located with the SupercomputingAsia (SCA) conference in Singapore
Registration: To attend, register via HPC Asia 2023
Event Description:
The Intel eXtreme Performance Users Group (IXPUG) is an active, community-led forum for sharing industry best practices, techniques, and tools for maximizing efficiency on Intel platforms and products. The IXPUG Workshop at HPC Asia 2023 is an open workshop on high-performance computing applications, systems, and architecture with Intel technologies. This is a half-day workshop with invited talks and contributed papers. The workshop aims to bring together software developers and technology experts to share challenges, experiences, and best-practice methods for optimizing HPC, machine learning, and data analytics workloads on Intel® Xeon® Scalable processors, Intel® Xeon Phi™ processors, Intel® FPGAs, and related hardware/software platforms. The workshop covers application performance and scalability challenges at all levels, from intra-node performance up to large-scale compute systems. Any research related to Intel HPC products is welcome.
Workshop Agenda:
All times are shown in SGT (Singapore Time, GMT+8). Final presentations will be available for download at https://www.ixpug.org/resources after the workshop.
13:30 | Opening Remarks | Toshihiro Hanawa (Workshop Chair), The University of Tokyo
Session 1:
13:40-14:30 | Keynote: Porting Simulation, Data-Intensive, and AI Applications to the Aurora Exascale System | Timothy Williams and Venkat Vishwanath, Argonne National Laboratory
Abstract: Aurora is an exascale supercomputer in the final stages of assembly at the Argonne Leadership Computing Facility (ALCF) in the U.S. Through the ALCF Aurora Early Science Program (ESP) and the U.S. Department of Energy Exascale Computing Project (ECP), scientists at Argonne National Laboratory and a variety of other institutions, in collaboration with Intel, are preparing dozens of applications and workflows to run at scale on the Aurora system. The ESP applications emphasize projects with workflows including AI training and inference and data-intensive computing, as well as conventional simulation. We will present an overview of the Aurora system, models for programming it, and salient experience from preparing ESP and ECP applications for the system. We will also present promising early performance measurements on Intel Data Center GPU Max Series devices (a.k.a. PVC).
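Background for the programming-model discussion: SYCL, delivered through Intel's oneAPI toolchain, is one of the models available for Aurora's Intel GPUs. The sketch below is a minimal, generic SYCL 2020 vector addition, not code from the talk; it should compile with a SYCL 2020 compiler such as Intel's icpx with -fsycl.

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q{sycl::default_selector_v};  // picks a GPU if one is available
    {
        sycl::buffer<float> A(a.data(), sycl::range<1>(n));
        sycl::buffer<float> B(b.data(), sycl::range<1>(n));
        sycl::buffer<float> C(c.data(), sycl::range<1>(n));
        q.submit([&](sycl::handler& h) {
            sycl::accessor in1(A, h, sycl::read_only);
            sycl::accessor in2(B, h, sycl::read_only);
            sycl::accessor out(C, h, sycl::write_only, sycl::no_init);
            // One work-item per element; the runtime maps this onto the device.
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                out[i] = in1[i] + in2[i];
            });
        });
    }  // buffer destructors copy results back to the host here
    std::cout << "c[0] = " << c[0] << "\n";  // expected: 3
    return 0;
}
```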
14:30-15:00 | Contributed Paper: Understanding DAOS Storage Performance Scaling | Michael Hennecke, Intel Corporation
Abstract: High-performance scale-out storage systems are a critical component of modern HPC and AI clusters. However, characterizing their performance remains challenging: different client I/O patterns have very different performance scaling behavior, and bottlenecks in the HPC storage software may also limit performance scaling. This paper investigates the performance scaling behavior of the Distributed Asynchronous Object Storage (DAOS) scale-out storage system for typical IOR and mdtest workloads, as a function of the size of the storage server hardware, the client-side parallelism, and the HPC networking stack (libfabric/verbs and UCX).
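For readers who want a feel for the methodology: the paper's measurements are driven by IOR and mdtest, which sweep client counts and I/O parameters automatically. The sketch below is a simplified MPI-IO analogue of a single IOR data point (aggregate write bandwidth at one client count); the block size and file name are illustrative assumptions, not the paper's configuration.

```cpp
// Minimal MPI-IO write-scaling probe, illustrating the kind of
// client-parallelism measurement that IOR automates. Illustrative only.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, nprocs = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int block = 64 << 20;                       // 64 MiB per rank (assumption)
    std::vector<char> buf(block, 'x');

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "ior_like_testfile",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    MPI_Barrier(MPI_COMM_WORLD);                      // start the clock together
    double t0 = MPI_Wtime();
    // Each rank writes its own disjoint block of the shared file.
    MPI_File_write_at(fh, (MPI_Offset)rank * block, buf.data(), block,
                      MPI_CHAR, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);                              // close flushes the data
    MPI_Barrier(MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        std::printf("%d clients: %.1f MiB/s aggregate write\n",
                    nprocs, (double)nprocs * block / (1 << 20) / (t1 - t0));

    MPI_Finalize();
    return 0;
}
```

Running it repeatedly with increasing rank counts (mpirun -np 4, 8, 16, ...) traces the kind of client-scaling curve the paper analyzes.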
15:00-15:30 | Invited Talk: Persistent Memory Supercomputer Pegasus for Data-driven and AI-driven Science | Prof. Osamu Tatebe, Center for Computational Sciences, University of Tsukuba
Abstract: The Pegasus supercomputer was introduced at the University of Tsukuba last December. Each compute node combines an Intel Sapphire Rapids processor, an NVIDIA H100 PCIe GPU, and Intel Optane persistent memory. Together with our system software development, Pegasus will strongly drive big data and AI.
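As background on the persistent-memory component: on Linux, applications typically access Optane persistent memory by memory-mapping files on a DAX filesystem. The sketch below shows that standard pattern; the mount point and size are hypothetical, and this is not Pegasus system software.

```cpp
// Sketch: mapping a file on a DAX filesystem (Linux >= 4.15) so that CPU
// stores go directly to persistent memory. Path and size are hypothetical.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    const size_t len = 1 << 20;                       // 1 MiB (assumption)
    int fd = open("/mnt/pmem/data.bin", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, len) != 0) { perror("ftruncate"); return 1; }

    // MAP_SYNC (paired with MAP_SHARED_VALIDATE) makes the mapping
    // synchronous: no msync() is needed, though production code would still
    // flush CPU cache lines (e.g. libpmem's pmem_persist) for durability.
    void* p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    std::memcpy(p, "hello, pmem", 12);                // store directly to PMem
    munmap(p, len);
    close(fd);
    return 0;
}
```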
15:30-16:00 | Break (30 min.)
Session 2:
16:00-16:30 | Contributed Paper: Implementation and Performance Evaluation of Collective Communications Using CIRCUS on Multiple FPGAs | Kohei Kikuchi, University of Tsukuba
Abstract: Field-programmable gate arrays (FPGAs) are attracting attention as novel accelerators in the HPC domain. FPGAs have strong interconnect interfaces and can achieve communication performance superior to other accelerators. The Center for Computational Sciences at the University of Tsukuba has developed the Communication Integrated Reconfigurable Computing System (CIRCUS) framework, which enables the description of inter-FPGA communication in OpenCL. Currently, this framework does not support the collective communications widely used in HPC applications. In this study, in order to provide high-performance and user-friendly collective communication APIs to HPC users on CIRCUS, we implement allreduce communication using CIRCUS and evaluate its performance.
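For readers unfamiliar with the collective being implemented: an allreduce combines a contribution from every process and delivers the combined result to all of them. CIRCUS implements this over direct FPGA-to-FPGA links in OpenCL; the MPI sketch below is only a semantic reference for what an allreduce does, not CIRCUS code.

```cpp
// Reference semantics of allreduce: every rank contributes a value and
// every rank receives the reduction result.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = rank + 1.0;   // each rank's contribution
    double global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    std::printf("rank %d sees sum %.1f\n", rank, global);  // same on all ranks
    MPI_Finalize();
    return 0;
}
```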
16:30-17:00 | Invited Talk: Benchmarking Omni-Path Express | James Erwin, Cornelis Networks
Abstract: HPC application benchmarking is a complex journey. While performance measurements are the primary objective, there are a multitude of details regarding software configuration and the strategy to obtain and properly document the measurements. In this presentation, the latest efforts toward HPC application benchmarking at Cornelis Networks are presented. With a focus on test methodology, select performance results are also shown using first-generation Omni-Path Architecture hardware, comparing the latest Omni-Path Express (OPX) libfabric provider with the traditional Performance Scaled Messaging (PSM2) provider.
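One practical detail behind such comparisons: the libfabric provider is selected at runtime, commonly by setting the FI_PROVIDER environment variable (FI_PROVIDER=opx or FI_PROVIDER=psm2) before launching a benchmark. The sketch below shows the programmatic equivalent, querying libfabric for a named provider; the API version and error handling are illustrative.

```cpp
// Sketch: asking libfabric which fabrics a specific provider exposes.
// Swap "opx" for "psm2" to target the PSM2 provider instead.
#include <rdma/fabric.h>
#include <cstdio>
#include <cstring>

int main() {
    struct fi_info* hints = fi_allocinfo();
    hints->fabric_attr->prov_name = strdup("opx");    // request the OPX provider

    struct fi_info* info = nullptr;
    int ret = fi_getinfo(FI_VERSION(1, 14), nullptr, nullptr, 0, hints, &info);
    if (ret == 0) {
        for (struct fi_info* cur = info; cur != nullptr; cur = cur->next)
            std::printf("provider: %s, fabric: %s\n",
                        cur->fabric_attr->prov_name, cur->fabric_attr->name);
        fi_freeinfo(info);
    } else {
        std::printf("OPX provider not available (fi_getinfo returned %d)\n", ret);
    }
    fi_freeinfo(hints);                               // also frees prov_name
    return 0;
}
```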
Paper Topics of Interest:
- Artificial Intelligence (Machine Learning / Deep Learning)
- Application porting and performance optimization
- Vectorization, memory, communications, thread, and process management
- Multi-node application experiences
- Programming models, algorithms, and methods
- Software environment and tools
- Benchmarking and profiling tools
- Visualization development
- FPGA applications and system software
Paper Submission:
Paper submissions are welcome via EasyChair by December 23, 2022 (AOE; extended from the original December 16 deadline). All papers must be original and not simultaneously submitted to another journal or conference. The following paper categories are welcome:
- Regular papers: up to 18 pages, single column, in PDF format, including figures and references
- Short papers: up to 10 pages, single column, in PDF format, including figures and references
The paper format is described in the “Paper Submission” section of the HPC Asia 2023 website.
Important Dates:
- Paper due: December 23, 2022 (AOE, extended)
- Notification: January 6, 2023 (tentative)
- Camera ready due: January 13, 2023
Workshop Date: February 27, 2023 (Day 1 of HPC Asia 2023: February 27 – March 2, 2023)
Conference Venue: Singapore, in-person
Publication: All accepted papers will be included in ACM Digital Library as a part of the HPC Asia 2023 Workshop Proceedings
Organizing Committee:
Chair: Toshihiro Hanawa (The University of Tokyo)
Program Committee:
- Aksel Alpay (Heidelberg University)
- R. Glenn Brook (Cornelis Networks, Inc.)
- Melyssa Fratkin (Texas Advanced Computing Center, The University of Texas at Austin)
- Clay Hughes (Sandia National Laboratories)
- David Keyes (King Abdullah University of Science & Technology)
- Nalini Kumar (Intel Corporation)
- James Lin (Shanghai Jiao Tong University)
- Hatem Ltaief (King Abdullah University of Science & Technology)
- David Martin (Argonne National Laboratory)
- Christopher Mauney (Los Alamos National Laboratory)
- Amit Ruhela (Texas Advanced Computing Center, The University of Texas at Austin)
- Thomas Steinke (Zuse Institute Berlin)
General questions should be sent to the workshop organizers.