Parallel Data Laboratory Talks

June 15, 2023, 12:00 pm to 2:00 pm

Location:
Virtual Presentations (ET), Remote Access via Zoom

Speaker:
PAT HELLAND, MICHAEL KUCHNIK

12:00 pm: PAT HELLAND, Principal Architect, Salesforce

I'm SO Glad I'm Uncoordinated!

In my 45 years building software, technology trends have dramatically changed what's hard and what's easy. In 1978, CPU, storage, and memory were precious and expensive, but coordinating across work was effectively free: everything ran on a single server, and networking was infinitely expensive because we had none. Now there's an abundance of computation, memory, storage, and network, with even more on the way! The only remaining challenge is coordination, and year after year its cost grows in terms of the instruction opportunities lost while waiting. The first half of the talk explains these changes and their impact on our systems. In response, there are many approaches to avoiding or minimizing the pain of coordination. We taxonomize these solutions and discuss how our systems are evolving, and are likely to evolve, as the world changes around us. I am, indeed, a person who's uncoordinated and very likely to drop and/or break stuff. I've adapted to that in my personal life, and I spend a great deal of my professional life looking for ways our systems can avoid the need to coordinate.
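To make the idea concrete, here is a minimal sketch (an illustration, not material from the talk) of one well-known coordination-avoidance pattern, the grow-only counter (G-Counter CRDT): each replica writes only to its own slot, so increments need no locks or cross-replica round trips, and states merge later in any order.

```python
# Sketch of a grow-only counter (G-Counter), a classic coordination-free
# data structure. Each replica increments only its own slot; reads merge
# states by taking per-replica maxima, so no write ever waits on a peer.

class GCounter:
    def __init__(self, replica_id: int, n_replicas: int):
        self.replica_id = replica_id
        self.counts = [0] * n_replicas

    def increment(self) -> None:
        # Local-only write: no locks, no round trips to other replicas.
        self.counts[self.replica_id] += 1

    def merge(self, other: "GCounter") -> None:
        # Merging is commutative, associative, and idempotent, so
        # replicas can exchange state in any order, any number of times.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    def value(self) -> int:
        return sum(self.counts)

a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(); a.increment()  # replica A observes two events
b.increment()                 # replica B observes one event
a.merge(b)                    # coordination deferred to a cheap merge
assert a.value() == 3
```

The design choice is the one the abstract points at: pay nothing at write time and reconcile later, rather than coordinating on every operation.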

Pat Helland has been building distributed systems, database systems, high-performance messaging systems, and multiprocessors since 1978, shortly after dropping out of UC Irvine without a bachelor's degree. That hasn't stopped him from having a passion for academics and publication. From 1982 to 1990, Pat was the chief architect for TMF (Transaction Monitoring Facility), the transaction logging and recovery system for NonStop SQL, a message-based fault-tolerant system providing high-availability solutions for business-critical applications. In 1991, he moved to HaL Computers, where he was chief architect for the Mercury Interconnect Architecture, a cache-coherent non-uniform memory architecture multiprocessor.

In 1994, Pat moved to Microsoft to help the company develop a business providing enterprise software solutions. He was chief architect for MTS (Microsoft Transaction Server) and DTC (Distributed Transaction Coordinator). Starting in 2000, Pat began the SQL Service Broker project, a high-performance, transactional, exactly-once, in-order message-processing and app-execution engine built deeply into Microsoft SQL Server 2005. From 2005 to 2007, he worked at Amazon on scalable enterprise solutions, scale-out user-facing services, integrating product catalog feeds from millions of sellers, and highly available, eventually consistent storage.

From 2007 to 2011, Pat was back at Microsoft working on a number of projects, including Structured Streams in Cosmos. Structured streams kept metadata within "big data" streams that were typically tens of terabytes in size; this metadata allowed affinitized placement within the cluster as well as efficient joins across multiple streams. On launch, this doubled the work performed within the 250PB store. Pat also did the initial design for Baja, the distributed transaction support for a distributed event-processing engine, implemented as an LSM atop structured streams. It provided transactional updates targeting the ingestion of "the entire web in one table" with changes visible in seconds.

Since 2012, Pat has worked at Salesforce on database technology running within cloud environments. His current interests include latency bounding of online enterprise-grade transaction systems in the face of jitter, the management of metastability in complex environments, and zero-downtime upgrades to databases and stateful applications. In his spare time, Pat regularly writes for ACM Queue, Communications of the ACM, and various conferences. He has been deeply involved in the organization of the HPTS (High Performance Transaction Systems) workshop since 1985. He blogs and parsimoniously tweets with the handle @pathelland.

1:00 pm: MICHAEL KUCHNIK, 2023 Ph.D. Graduate, Computer Science Department, Carnegie Mellon University

Optimizing Deep Learning Training and Validation Pipelines

The past decade has shown that deep learning can tackle a variety of fundamental problems, ranging from vision tasks to dialogue. While deep learning's advances have been predominantly driven by algorithms and accelerators, the field is now re-examining the role of data in the deep learning pipeline. In this talk, I will discuss two works I completed during my Ph.D. that improve deep learning through the lens of data systems. The first, Plumber, achieves up to 47x training speedups with a single line of code by inserting parallelism, prefetching, and caching into data pipelines. The second, ReLM, obtains up to 15x speedups, 2.5x data efficiency, and stronger validation semantics by executing validation queries as standard regular expressions. I'll conclude by outlining future directions in the data systems for deep learning space.
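For a feel of what Plumber automates, the sketch below shows the manual tf.data tuning (parallel map, caching, prefetching) that such a tool injects for you. This is ordinary tf.data usage written for illustration under the assumption of a CPU-bound preprocessing stage; it is not Plumber's actual API or a claim about its internals.

```python
import tensorflow as tf

# Hand-tuned tf.data input pipeline illustrating the knobs a tool like
# Plumber inserts automatically: parallelism, caching, and prefetching.
# (Plain tf.data for illustration; not Plumber's API.)

def decode(x):
    # Stand-in for an expensive per-record transform (decode/augment).
    return tf.cast(x, tf.float32) / 255.0

ds = (tf.data.Dataset.from_tensor_slices(tf.range(10_000))
      .map(decode, num_parallel_calls=tf.data.AUTOTUNE)  # parallelize CPU work
      .cache()                                           # reuse decoded records across epochs
      .batch(256)
      .prefetch(tf.data.AUTOTUNE))                       # overlap input prep with training

for batch in ds.take(1):
    print(batch.shape)  # (256,)
```

The point of the talk's result is that finding where to place these operators is itself a bottleneck-analysis problem, which Plumber solves automatically.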

Michael is a newly minted Ph.D. from the Computer Science Department of Carnegie Mellon University, where he was advised by George Amvrosiadis and Virginia Smith. He received his B.S. from Georgia Tech, and he was fortunate to be supported by an NDSEG fellowship. He is interested in systems for machine learning, with a particular emphasis on data pipelines and data-centric thinking.

Event Website:
https://pdl.cmu.edu/talk-series/2023/061523.shtml

