
This year, the class will focus heavily on how the combination of
hardware and software achieves performance through parallelism:
pipelining, multicore CPUs in shared-memory systems, data-parallel
programming as exemplified by GPUs, and distributed-memory
message-passing systems such as supercomputers, as programmed with MPI.