May 4 - Shared-Memory Parallelism Can Be Simple, Fast and Scalable

Join us at 2:30 PM on May 4 for one of the last talks in CS 2311!

Julian Shun, Carnegie Mellon University

Abstract: With the growth of data in recent years, processing large data sets efficiently has become crucial. As such, many have turned to parallel computing to deliver high performance. One source of parallelism is shared-memory multicore processors, which have become prevalent, appearing in personal computers and even cellular phones. Shun's research focuses on developing frameworks and tools that simplify shared-memory programming, and on designing large-scale shared-memory algorithms that are efficient both in practice and in theory. In this talk, he will first discuss Ligra, a shared-memory graph processing framework that simplifies the programming of shared-memory graph algorithms. He will then present his work on developing large-scale shared-memory algorithms that are efficient both in theory and in practice. Finally, he will discuss his work on designing tools for deterministic parallel programming.

Bio: Julian Shun is a Ph.D. candidate in Computer Science at Carnegie Mellon University. He is interested in developing large-scale shared-memory algorithms for graph processing, as well as parallel text algorithms and data structures. He is also interested in designing methods for writing deterministic shared-memory programs, benchmarking parallel programs, and developing external-memory and cache-efficient algorithms. His work has been supported by a Facebook Graduate Fellowship. Julian received his undergraduate degree in Computer Science from UC Berkeley.