Compilers are one of the most fundamental building blocks of computer science. They serve as bridges between humans and computers, translating high-level programming languages into low-level binary code. Given the expressiveness of modern programming languages and the complexity of current hardware, compiler developers face very challenging tasks. And yet, learning how this software works can be a tremendously fun and rewarding experience, for several reasons. First, compilers touch several fields of computer science: graph theory, algebra, computer architecture, parallel algorithms and so forth. Second, knowing how compilers work improves our coding skills, dispelling misconceptions and naturally leading to better programs. Finally, there are many good job opportunities for compiler writers, whether in startups or in large companies such as Intel, Google, Apple and Facebook. In this course we shall briefly cover four topics related to program analysis and optimization: the monotone dataflow framework, the LLVM compilation infrastructure, the design and implementation of just-in-time compilers, and the craft of analyses and optimizations for graphics processing units. A list of subjects covered in each topic is given below:
- Introduction to Dataflow Analyses: in this class we review the basic principles that support dataflow analyses. We take a look at the different kinds of analyses, e.g., may/must and forward/backward, and we also discuss how to implement them. The class is mostly based on examples that illustrate these principles and differences.
- Introduction to LLVM: in this class we learn about LLVM, a compilation infrastructure that has been widely adopted in industry and academia. We shall see how to use it, which tools are available, how to read its intermediate representation, how to generate machine code, how to invoke common optimizations, and how to write an LLVM pass.
- Just-In-Time Compilers: here we introduce the student to the concept of just-in-time compilation. A JIT compiler generates code for a program while that program is being executed. In this class we go over speculation and specialization, two key techniques used to optimize code at runtime. We analyze two such techniques, class inference and parameter specialization. We close the class by discussing the notion of a trace compiler.
- Code Optimizations for GPUs: this class introduces the student to general-purpose graphics processing units, and describes a static code analysis that we call Divergence Analysis. Divergence Analysis identifies variables that are guaranteed to hold the same value for every GPU thread. The class then moves on to describe a suite of divergence-aware compiler optimizations, with emphasis on Register Allocation.
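To give a flavor of the first class, the sketch below implements liveness, a classic backward "may" dataflow analysis, as a fixed-point iteration over a control-flow graph. The graph and its use/def sets are hypothetical, invented just for this illustration:

```python
# A minimal sketch of a backward "may" dataflow analysis: liveness.
# The CFG below is hypothetical: each block maps to its def set, its
# use set (variables read before being written in the block), and its
# successors.
cfg = {
    "entry": {"defs": {"a", "b"}, "uses": set(), "succ": ["loop"]},
    "loop":  {"defs": {"b"}, "uses": {"a", "b"}, "succ": ["loop", "exit"]},
    "exit":  {"defs": set(), "uses": {"b"}, "succ": []},
}

def liveness(cfg):
    live_in = {b: set() for b in cfg}
    live_out = {b: set() for b in cfg}
    changed = True
    while changed:  # iterate until a fixed point is reached
        changed = False
        for b, info in cfg.items():
            # "May" analysis: take the union over all successors.
            out = set()
            for s in info["succ"]:
                out |= live_in[s]
            # Backward transfer function: IN = uses U (OUT - defs).
            inp = info["uses"] | (out - info["defs"])
            if inp != live_in[b] or out != live_out[b]:
                live_in[b], live_out[b] = inp, out
                changed = True
    return live_in, live_out
```

For this toy graph, the fixed point tells us, for instance, that both `a` and `b` are live at the entry of the loop, since the loop reads them before overwriting `b`.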
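The parameter specialization mentioned in the third class can be sketched as follows. This is not how a real JIT works internally; it is a toy illustration of generating a faster version of a function once one of its parameters is observed to be a runtime constant. The names `power` and `specialize` are made up for this example:

```python
def power(base, exp):
    # Generic version: loops over exp on every call.
    r = 1
    for _ in range(exp):
        r *= base
    return r

def specialize(exp):
    # Build a version of power with exp folded in: the loop is fully
    # unrolled into a product of exp factors. A real JIT would emit
    # machine code; here we simply generate and compile Python source.
    body = " * ".join(["base"] * exp) if exp > 0 else "1"
    src = f"def specialized(base):\n    return {body}\n"
    env = {}
    exec(src, env)
    return env["specialized"]

power_3 = specialize(3)  # specialized for exp == 3, loop-free
```

The specialized version computes the same results as the generic one, but with the loop (and the test on `exp`) removed, which is exactly the kind of payoff runtime specialization is after.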
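Finally, the notion of divergence studied in the fourth class can be simulated on an ordinary CPU. In the hypothetical kernel below, a variable is uniform if every thread in the warp computes the same value for it, and divergent otherwise; all names are illustrative:

```python
WARP_SIZE = 4  # a hypothetical, small warp

def run_warp(kernel):
    # Simulate a warp: run the kernel for every thread id and record
    # the value each thread computes for each variable.
    traces = [kernel(tid) for tid in range(WARP_SIZE)]
    return {var: [t[var] for t in traces] for var in traces[0]}

def is_uniform(values):
    # A variable is uniform if all threads agree on its value.
    return len(set(values)) == 1

def kernel(tid):
    n = 10        # same for every thread: uniform
    x = tid % 2   # depends on the thread id: divergent
    # A branch on a divergent value splits the warp in two:
    y = n + 1 if x == 0 else n - 1
    return {"n": n, "x": x, "y": y}
```

Running `run_warp(kernel)` shows that `n` is uniform while `x` and `y` are divergent; a divergence analysis reaches the same verdict statically, without executing the kernel.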
Fernando Magno Quintão Pereira has been, since 2010, a professor in the Department of Computer Science of the Federal University of Minas Gerais, Brazil. He received his PhD from UCLA in early 2009, under the supervision of Jens Palsberg. Together they developed the notion of Register Allocation by Puzzle Solving, for which a patent was filed in 2013. Since then, he has been doing research on the design and implementation of compiler optimizations and static analyses at UFMG's Compilers Lab. Together with his students, he has produced code that is nowadays available in Firefox, LLVM and Ocelot. He has published over 50 papers, four of which have received best paper awards.