Invited Speakers
Dr. Jack Dongarra
Emeritus Professor, University of Tennessee Department of Electrical Engineering and Computer Science.
Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his Ph.D. in Applied Mathematics from the University of New Mexico in 1980.
He worked at Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee. He also holds the title of Distinguished Research Staff in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), is a Turing Fellow at the University of Manchester, and is an Adjunct Professor in the Computer Science Department at Rice University. He is the director of the Innovative Computing Laboratory at the University of Tennessee and also directs the university's Center for Information Technology Research, which coordinates and facilitates IT research efforts across the university.
He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced computer architectures, programming methodology, and tools for parallel computers. His research includes the development, testing, and documentation of high-quality mathematical software. He has contributed to the design and implementation of the following open-source software packages and systems: EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, Top500, ATLAS, and PAPI.
He has published over 400 articles, papers, reports, and technical memoranda, and he is coauthor of several books. He was awarded the IEEE Sid Fernbach Award in 2004 for his contributions in the application of high-performance computers using innovative approaches; in 2008 he was the recipient of the first IEEE Medal of Excellence in Scalable Computing; in 2010 he was the first recipient of the SIAM Special Interest Group on Supercomputing's award for Career Achievement; in 2011 he received the IEEE Charles Babbage Award; in 2013 he received the ACM/IEEE Ken Kennedy Award for his leadership in designing and promoting standards for mathematical software used to solve numerical problems common to high-performance computing; in 2019 he was awarded the SIAM/ACM Prize in Computational Science and Engineering; in 2020 he received the IEEE Computer Pioneer Award for leadership in the area of high-performance mathematical software; and in 2021 he received the ACM A.M. Turing Award.
He is a Fellow of the AAAS, ACM, IEEE, and SIAM, a Foreign Member of the Royal Society (UK), and a Member of the US National Academy of Sciences and the National Academy of Engineering.
Theme: Innovation Driven by Advances in High-Performance Computing
Abstract: This talk will explore the transformative role of high-performance computing (HPC) as a key driver in scientific discovery, addressing complex, large-scale challenges across multiple domains. We will illustrate how HPC has evolved from early vector supercomputers to today's GPU-based systems, and onward toward AI-integrated computing, significantly enhancing computational speed, scalability, and precision. By examining real-world applications such as climate modeling and digital twins, the talk highlights HPC's impact on pivotal fields, including healthcare, finance, and energy. Additionally, we will discuss emerging trends and challenges in HPC, such as power consumption, data management, and the integration of AI and edge computing, ultimately projecting a future where HPC remains central to innovation, enabling new discoveries and advancements across science and industry.
Dr. Alex Tuo-hung Hou
Director General of the Taiwan Semiconductor Research Institute.
Dr. Hou received his Ph.D. degree in electrical and computer engineering from Cornell University in 2008. From 2000 to 2004, he was with the Taiwan Semiconductor Manufacturing Company (TSMC). In 2008, he joined the Department of Electronics Engineering at National Chiao Tung University (NCTU, merged into National Yang Ming Chiao Tung University (NYCU) in 2021), where he is currently a Chair Professor. Dr. Hou is also the Program Director of the Angstrom Semiconductor Initiative, one of the largest national research programs for advanced semiconductors. In 2022, Dr. Hou became the Director General of the Taiwan Semiconductor Research Institute (TSRI), one of the National Applied Research Laboratories in Taiwan, focusing on semiconductor technology. He was also the Associate Vice President for Research & Development at NYCU. His research interests include emerging nonvolatile memory for embedded and high-density data storage, electronic synaptic devices and neuromorphic computing systems, and the heterogeneous integration of silicon electronics with low-dimensional and low-temperature nanomaterials.
Dr. Hou is a recipient of the CIEE Outstanding Electrical Engineering Professor Award, the Micron Teacher Award, the Micron Chair Professor Award, the MOST Ta-You Wu Memorial Award, and the MOST Outstanding Research Award (twice). He has served on the technical program committees of major conferences, including VLSI, IEDM, IRPS, DRC, EDTM, and ISCAS. He is currently the Regional Editor of the IEEE EDS Newsletter and a Member of the Board of Directors of the IEEE Taipei Section.
Theme: Intelligent Memory in Future High-Performance Computing
Abstract: Memory technology is not only the pillar of the present semiconductor industry; it also plays a critical role at various innovation forefronts of the future, such as big-data storage and processing, AI acceleration, neuromorphic computing, hardware security, and combinatorial optimization. Memory-centric architectures and computing systems promise to provide unprecedented parallelism, energy efficiency, and density beyond conventional von Neumann architectures. In this intelligent-memory era, breakthroughs in memory devices, circuits, and architectures are required. We will discuss the latest ferroelectric and magnetic memory developments at NYCU, including BEOL-compatible ferroelectric transistors, ferroelectric tunnel junctions, and compact STT-MRAM neuron devices. Demonstrations of highly energy-efficient in-memory computing and in-memory annealing will also be highlighted, and the challenges and potential solutions in computing precision, device/circuit variation, and area/energy efficiency will be addressed. Our approach is showcased with a recent SRAM-based in-memory computing macro achieving an unprecedentedly high energy efficiency of 20943 TOPS/W.
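To make the in-memory annealing and combinatorial optimization thread above concrete, here is a minimal, self-contained software sketch of the simulated-annealing loop that such hardware accelerates. The Ising-model formulation, couplings, and cooling schedule are illustrative assumptions for exposition, not details from Dr. Hou's work; an in-memory annealer evaluates the local energy updates inside the memory array rather than in software.

```python
import math
import random

# Minimal simulated-annealing sketch for an Ising-model combinatorial
# optimization problem. Couplings J, biases h, and the cooling schedule
# are illustrative placeholders, not values from the talk.

def local_field(spins, J, h, i):
    """Effective field on spin i: h[i] + sum_j J[i][j] * s_j (J symmetric, J[i][i] = 0)."""
    return h[i] + sum(J[i][j] * spins[j] for j in range(len(spins)) if j != i)

def anneal(J, h, steps=20000, t_start=5.0, t_end=0.01, seed=0):
    """Minimize E(s) = -sum_{i<j} J[i][j] s_i s_j - sum_i h[i] s_i over s_i in {-1, +1}."""
    rng = random.Random(seed)
    n = len(h)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    for step in range(steps):
        # Geometric cooling from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        i = rng.randrange(n)
        # Energy change if spin i flips; this local update is what an
        # in-memory annealer computes in place via analog summation.
        delta = 2 * spins[i] * local_field(spins, J, h, i)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            spins[i] = -spins[i]
    energy = (-sum(h[i] * spins[i] for i in range(n))
              - sum(J[i][j] * spins[i] * spins[j]
                    for i in range(n) for j in range(i + 1, n)))
    return spins, energy

if __name__ == "__main__":
    # Tiny 4-spin example with hypothetical symmetric couplings.
    J = [[0, 1, -1, 0],
         [1, 0, 0, -1],
         [-1, 0, 0, 1],
         [0, -1, 1, 0]]
    h = [0.1, -0.2, 0.0, 0.3]
    print(anneal(J, h))
```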
Dr. Xian-He Sun
University Distinguished Professor at Illinois Institute of Technology
Dr. Xian-He Sun is a University Distinguished Professor, the Ron Hochsprung Endowed Chair of Computer Science, and the director of the Gnosis Research Center for accelerating data-driven discovery at the Illinois Institute of Technology (Illinois Tech). Before joining Illinois Tech, he worked at the DoE Ames National Laboratory, at ICASE, NASA Langley Research Center, and at Louisiana State University, Baton Rouge, and was an ASEE fellow at the Naval Research Laboratory. Dr. Sun is an IEEE Fellow and is known for his memory-bounded speedup model, also called Sun-Ni's Law, for scalable computing. His research interests include high-performance data processing, memory and I/O systems, and performance evaluation and optimization. He has over 350 publications and 7 patents in these areas and is currently leading multiple large national software development projects in high-performance I/O systems. Dr. Sun is the Editor-in-Chief of the IEEE Transactions on Parallel and Distributed Systems and a former chair of the Computer Science Department at Illinois Tech. He received the Golden Core Award from the IEEE Computer Society in 2017, the ACM Karsten Schwan Best Paper Award from ACM HPDC in 2019, the Ron Hochsprung Endowed Chair from Illinois Tech in 2020, and the first-prize best paper award from ACM/IEEE CCGrid in 2021. More information about Dr. Sun can be found on his website: https://www.cs.iit.edu/~scs/sun/
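For context on the memory-bounded speedup model (Sun-Ni's Law) mentioned above, a commonly cited normalized form is sketched below; the notation is assumed here for exposition and may differ from Dr. Sun's original papers.

```latex
% Memory-bounded speedup (Sun-Ni's Law), normalized form.
%   f    : fraction of the workload that must run sequentially
%   g(p) : factor by which the parallel workload grows when memory
%          scales with p processors
\[
  S_{\mathrm{MB}}(p) \;=\; \frac{f + (1-f)\,g(p)}{f + \frac{(1-f)\,g(p)}{p}}
\]
% g(p) = 1 recovers Amdahl's Law (fixed problem size);
% g(p) = p recovers Gustafson's Law (fixed execution time).
```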
Theme: Dataflow under the von Neumann Machine: A Disruptive New Paradigm under Existing Systems
Abstract: While the success of deep learning hinges on its ability to process vast amounts of data, computing systems struggle to keep up with the unprecedented demand of ever-increasing data, leading researchers back to the notorious memory-wall problem. In this talk, we introduce the concept of dataflow under the von Neumann machine to address this issue. We begin by presenting the C-AMAT model, which quantifies the benefits of concurrent data access and reveals the relationship between data locality and concurrency. Next, we introduce the LPM (Layer Performance Matching) framework to optimize memory system performance and formally introduce the concept of dataflow under the von Neumann machine. We then discuss our recent work in I/O systems, focusing on the Hermes multi-tiered I/O buffering system. Hermes optimizes data movement based on the LPM framework and has been a significant success. While Hermes is a software solution for I/O systems, what is the practical hardware solution for memory systems? Finally, we will address some fundamental issues and present forward-thinking computer system designs for AI and big data applications, aimed at mitigating the memory-wall problem.
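As a rough reference for the C-AMAT model named in the abstract, one widely presented form is sketched below; the parameter naming follows common expositions of C-AMAT and is an assumption rather than a quotation from the talk.

```latex
% Concurrent Average Memory Access Time (C-AMAT): a concurrency-aware
% extension of the classic AMAT = H + MR * AMP.
%   H    : hit time              C_H  : hit concurrency
%   pMR  : pure miss rate        pAMP : average pure miss penalty
%   C_M  : pure-miss concurrency
\[
  \text{C-AMAT} \;=\; \frac{H}{C_H} \;+\; pMR \cdot \frac{pAMP}{C_M}
\]
```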
About Us
On behalf of the HPC Asia 2025 committee and the National Center for High-performance Computing (NCHC), it is our pleasure to welcome you to HPC Asia 2025, hosted in the vibrant city of Hsinchu, Taiwan. This year's theme, “Chip-based Exploration and Innovation for HPC,” resonates deeply with Hsinchu's dual legacy of rich cultural heritage and cutting-edge technological advancements.