Slot 1

Performance Evaluation and Benchmarking
Lizy John, University of Texas at Austin, USA

Abstract

The complexity of hardware and software has made the design of microprocessors and computer systems extremely challenging and interesting, whether they are for the smartphone, the desktop, or the high-end server. The microprocessors in modern systems often contain billions of transistors and operate at multi-GHz frequencies. These processors are deeply pipelined, issue multiple instructions per cycle, execute instructions out of order, employ significant amounts of speculation, and contain multiple levels of large caches. They have hundreds of instructions in flight at any time and are truly marvels of engineering. Several new types of non-volatile memories and solid-state disks are emerging, making the design space even more interesting. The workloads that run on these processors involve a deep software stack and contain products from several vendors, interconnected in intricate ways with many layers of virtualization.
The amount of analysis needed to understand bottlenecks in these systems is overwhelming. Consider that one second of program execution involves several billion instructions, so analyzing that one second may mean dealing with hundreds of billions of pieces of information. The large number of potential designs and the constantly evolving nature of workloads have made performance evaluation an overwhelming task. Once power consumption and energy enter the picture, designing energy-efficient computers becomes daunting. Decades ago, microprocessors and computer systems could be designed using rules of thumb and intuition; it is no longer possible to design them based on designer intuition alone.
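To make that scale concrete, here is a back-of-the-envelope sketch in Python; the clock rate, IPC, and bytes-per-record figures are illustrative assumptions, not measurements of any particular machine:

    # Estimate the data volume behind one second of program execution.
    # All parameters below are illustrative assumptions.
    clock_hz = 3e9        # assumed 3 GHz clock
    ipc = 2.0             # assumed average instructions per cycle
    trace_bytes = 16      # assumed bytes per instruction trace record
                          # (PC, opcode, memory address, ...)

    instructions = clock_hz * ipc                        # ~6 billion instructions
    trace_volume_gb = instructions * trace_bytes / 1e9   # ~96 GB of raw trace
    print(f"instructions: {instructions:.2e}, trace: {trace_volume_gb:.0f} GB")

Under these assumptions, a single second of execution produces on the order of a hundred gigabytes of raw trace data, which is why naive exhaustive analysis is impractical.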

Good analysis leads to good design. It is important to perform design space analysis in the early design stages to identify good design points. It is also important to analyze existing designs to understand bottlenecks and identify changes for future designs. Usually, early design analysis is accomplished using simulation models, whereas later stages of analysis can be based on measurements on actual systems. Simulators are orders of magnitude slower than real hardware and limit the amount of analysis that can be done; hence, efficient techniques and methodologies need to be devised to enable meaningful performance analysis.
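As a rough illustration of why simulator speed limits analysis, the sketch below compares a hypothetical native machine with a hypothetical detailed simulator; both speeds are assumed figures, not measurements of any particular tool:

    # Illustrative simulator slowdown. Both speeds are assumptions;
    # detailed cycle-accurate simulators commonly run several orders
    # of magnitude slower than native hardware.
    native_ips = 6e9   # assumed native execution: ~6e9 instructions/sec
    sim_ips = 2e5      # assumed simulator speed: ~200K instructions/sec

    slowdown = native_ips / sim_ips            # ~30,000x
    hours_per_target_second = slowdown / 3600  # wall-clock hours to simulate
                                               # one second of target time
    print(f"slowdown: {slowdown:,.0f}x "
          f"(~{hours_per_target_second:.1f} hours per simulated second)")

Under these assumptions, simulating even one second of target execution takes most of a working day, which is why efficient evaluation methodologies matter.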

This course presents various topics in microprocessor and computer system performance evaluation, such as:

(*) Issues in Evaluating Performance and Power/Energy of Computers
(*) Performance Evaluation Tools and Techniques
(*) Verification of Performance Models
(*) Workload Characterization
(*) CPU-Intensive, Commercial and Database, Web Server, Embedded, Cloud, and Big-Data Benchmarks
(*) Statistical Techniques for Performance Evaluation
(*) Design of Experiments
(*) Introduction to Analytical Modeling of Processors

Bio

Lizy Kurian John is the B. N. Gafford Professor in Electrical and Computer Engineering at UT Austin and an IEEE Fellow (Class of 2009). Her research interests include workload characterization, performance evaluation, architectures with emerging memory technologies such as die-stacked DRAM, and high-performance processor architectures for emerging workloads. She is the recipient of the NSF CAREER Award, the UT Austin Engineering Foundation Faculty Award, the Halliburton, Brown and Root Engineering Foundation Young Faculty Award (2001), the University of Texas Alumni Association (Texas Exes) Teaching Award (2004), and the Pennsylvania State University Outstanding Engineering Alumnus Award (2011), among other honors.

She has coauthored a book on Digital Systems Design using VHDL (Cengage Publishers, 2007) and a book on Digital Systems Design using Verilog (Cengage Publishers, 2014), and has edited four books, including one on Computer Performance Evaluation and Benchmarking. She holds 9 US patents. She received her Ph.D. in Computer Engineering from the Pennsylvania State University.
