---
layout: default
---
- When: Mondays and Fridays from 2:00 to 3:30
- Where: Soda 310
- Instructor: Joseph E. Gonzalez
- Office Hours: Wednesdays from 4:00 to 5:00 in 773 Soda Hall.
- Announcements: Piazza
- Sign-up to Present: Google Spreadsheet. Every student should sign up to present in at least three rows, taking a different role each time. Note that the Backup/Scribe presenter may be asked to fill in for one of the other roles with little notice.
- If you have reading suggestions, please send a pull request to this course website on GitHub by modifying the index.md file.
The recent success of AI has been due in large part to advances in hardware and software systems. These systems have enabled training increasingly complex models on ever larger datasets. In the process, they have also simplified model development, enabling the rapid growth of the machine learning community. These new hardware and software systems include a new generation of GPUs and hardware accelerators (e.g., TPU and Nervana), open source frameworks such as Theano, TensorFlow, PyTorch, MXNet, Apache Spark, Clipper, Horovod, and Ray, and a myriad of systems deployed internally at companies, just to name a few. At the same time, we are witnessing a flurry of ML/RL applications to improve hardware and system designs, job scheduling, program synthesis, and circuit layouts.
In this course, we will describe the latest trends in systems design to better support the next generation of AI applications, and applications of AI to optimize the architecture and performance of systems. The format of this course will be a mix of lectures, seminar-style discussions, and student presentations. Students will be responsible for paper readings and completing a hands-on project. For projects, we will strongly encourage teams that contain both AI and systems students.
A previous version of this course was offered in Spring 2019. The format of this second offering is slightly different. Each week will cover a different research area in AI-Systems. The Monday lecture will be presented by Professor Gonzalez and will cover the context of the topic as well as a high-level overview of the reading for the week. The Friday lecture will be organized around a mini program committee meeting for the week's readings. Students will be required to submit detailed reviews for a subset of the papers and lead the paper review discussions. The goal of this new format is to build mastery of the material, develop a deeper understanding of how to evaluate and review research, and hopefully provide insight into how to write better papers.
{% capture dates %} 8/30/19 9/2/19 9/6/19 9/9/19 9/13/19 9/16/19 9/20/19 9/23/19 9/27/19 9/30/19 10/4/19 10/7/19 10/11/19 10/14/19 10/18/19 10/21/19 10/25/19 10/28/19 11/1/19 11/4/19 11/8/19 11/11/19 11/15/19 11/18/19 11/22/19 11/25/19 11/29/19 12/2/19 12/6/19 12/9/19 12/13/19 12/16/19 12/20/19 {% endcapture %} {% assign dates = dates | split: " " %}
This is a tentative schedule. Specific readings are subject to change as new material is published.
{% include syllabus_entry %}
This lecture will be an overview of the class, requirements, and an introduction to the history of machine learning and systems research.
- How to read a paper provides some pretty good advice on how to read papers effectively.
- Timothy Roscoe's writing reviews for systems conferences will also help you in the reviewing process.
{% include syllabus_entry %}
There will be no class, but please sign up for the weekly discussion slots.
{% include syllabus_entry %}
- Submit your review before 1:00PM.
- Lecture slides: [pdf, pptx]
- SysML: The New Frontier of Machine Learning Systems
- Read Chapter 1 of Principles of Computer System Design. You will need to be on campus or use the Library VPN to obtain a free PDF.
- A Few Useful Things to Know About Machine Learning
- A Berkeley View of Systems Challenges for AI
- Kevin Murphy's Textbook Introduction to Machine Learning. This provides a very high-level overview of machine learning. You should probably know all of this.
- Stanford CS231n Tutorial on Neural Networks. I recommend reading Module 1 for a quick crash course in machine learning and some of the techniques used in this class.
- Rich Sutton's Post on Compute in ML and the corresponding Shimon Whiteson Twitter debate
{% include syllabus_entry %}
This lecture will discuss the machine learning life-cycle, spanning model development, training, and serving. It will outline some of the technical machine learning and systems challenges at each stage and how these challenges interact.
- Lecture slides: [pdf, pptx]
- Template Slide Format for PC Meeting [Google Drive]
{% include syllabus_entry %}
- Submit your review before 1:00PM.
- Slides and scribe notes from the PC Meeting. (These are only accessible to students enrolled in the class.)
{% include syllabus_entry %}
In the previous lecture we saw that data and feature engineering is often the dominant hurdle in model development. Database systems are often the source of data and the platform in which feature engineering takes place. This lecture will cover some of the big ideas in database systems and how they relate to work on machine learning in databases.
- Lecture slides: [pdf, pptx]
- Project Proposal Sign-up doc. You must be enrolled in the class or on the waitlist to access this document. Please add any projects you are thinking about starting and list yourself as interested in anyone else's projects.
{% include syllabus_entry %}
- Submit your review before 1:00PM.
- Slides for PC Meeting posted. (These slides will only be accessible to students enrolled in the class.)
{% include syllabus_entry %}
This week we will discuss recent developments in model development and training frameworks. While there is a long history of machine learning frameworks, we will focus on frameworks for deep learning and automatic differentiation. In class we will review some of the big trends in machine learning framework design and the basic ideas in forward- and reverse-mode automatic differentiation; a small sketch of the reverse-mode idea appears below.
Project proposals are due next Monday
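To make the reverse-mode idea concrete before the readings, here is a minimal sketch of reverse-mode automatic differentiation in plain Python. It is a toy illustration under simplifying assumptions (scalars only, two operators); the `Var` class and `backward` function are hypothetical and are not the API of any framework covered this week.

```python
# Toy reverse-mode automatic differentiation (scalars only, illustrative).
class Var:
    def __init__(self, value, parents=()):
        self.value = value        # forward value
        self.parents = parents    # (parent Var, local gradient) pairs
        self.grad = 0.0           # accumulated adjoint dL/d(this node)

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

def backward(output):
    """Topologically order the graph, then push adjoints back (chain rule)."""
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(output)
    output.grad = 1.0
    for node in reversed(order):
        for parent, local_grad in node.parents:
            parent.grad += node.grad * local_grad

# f(x, y) = x*y + x  =>  df/dx = y + 1 = 4, df/dy = x = 2
x, y = Var(2.0), Var(3.0)
f = x * y + x
backward(f)
print(x.grad, y.grad)  # 4.0 2.0
```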
{% include syllabus_entry %}
Update: Two of the readings were changed to reflect a focus on deep learning frameworks. The previous readings on SystemML and KeystoneML have been moved to optional reading.
- Submit your review before 1:00PM.
- Slides for PC Meeting (These slides will only be accessible to students enrolled in the class.)
- KeystoneML: Optimizing Pipelines for Large-Scale Advanced Analytics
- SystemML: Declarative Machine Learning on Spark
- Automatic Differentiation in Machine Learning: a Survey
- Roger Grosse's Lecture Notes on Automatic Differentiation
- A Differentiable Programming System to Bridge Machine Learning and Scientific Computing
- Caffe: Convolutional Architecture for Fast Feature Embedding
- Theano: A Python Framework for Fast Computation of Mathematical Expressions and Theano: A CPU and GPU Math Compiler in Python
- Automatic differentiation in PyTorch
- MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems
- TensorFlow Eager: A Multi-Stage, Python-Embedded DSL for Machine Learning
- TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
{% include syllabus_entry %}
This week we will discuss developments in distributed training. We will quickly review the statistical query model pushed by early MapReduce machine learning frameworks and then discuss advances in parameter servers and distributed neural network training; a short data-parallel SGD sketch appears below.
- One-page project description due at 11:59 PM. Check out the suggested projects. Submit a link to your one-page Google document containing your project description to this Google form. You only need one submission per team, but please list all the team members' email addresses. You can also update your submission if needed.
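As a concrete reference point for the parameter server discussion, here is a minimal single-process sketch of synchronous data-parallel SGD: each simulated worker computes a gradient on its data shard and a server-style step averages them. This is a toy NumPy illustration; the `worker_gradient` helper, the shard layout, and the least-squares objective are made up for the example and are not taken from any of the readings.

```python
import numpy as np

def worker_gradient(w, X_shard, y_shard):
    """Least-squares gradient computed on one worker's data shard."""
    return 2.0 * X_shard.T @ (X_shard @ w - y_shard) / len(y_shard)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1024, 10)), rng.normal(size=1024)
shards = np.array_split(np.arange(1024), 4)   # simulate 4 workers

w, lr = np.zeros(10), 0.1
for step in range(100):
    # Workers "push" gradients; the server averages them and updates w,
    # which the workers then "pull" before the next step.
    grads = [worker_gradient(w, X[idx], y[idx]) for idx in shards]
    w -= lr * np.mean(grads, axis=0)
```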
{% include syllabus_entry %}
- Submit your review before 1:00PM.
- Slides for PC Meeting (These slides will only be accessible to students enrolled in the class.)
- Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
- Integrated Model, Batch, and Domain Parallelism in Training Neural Networks
- Effect of batch size on training dynamics
- Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
- Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent [pdf]
- Large Scale Distributed Deep Networks
- Scaling Distributed Machine Learning with In-Network Aggregation
{% include syllabus_entry %}
Until recently, much of the focus of systems research was on model training. However, there is now growing interest in addressing the challenges of prediction serving. This lecture will frame the challenges of prediction serving and cover some of the recent advances.
{% include syllabus_entry %}
Unfortunately, class was canceled and so the PC Meeting has been moved to Monday. Note that early project presentations are also due next Friday.
{% include syllabus_entry %}
- Submit your review before 1:00PM.
- Slides for PC Meeting (These slides will only be accessible to students enrolled in the class.)
- Prediction-Serving Systems: What happens when we wish to actually deploy a machine learning model to production? This ACM Queue article provides a nice overview.
- Live Video Analytics at Scale with Approximation and Delay-Tolerance
- LASER: A Scalable Response Prediction Platform For Online Advertising
- TensorFlow-Serving: Flexible, High-Performance ML Serving
- Clipper: A Low-Latency Online Prediction Serving System
- Deep Learning Inference in Facebook Data Centers: Characterization, Performance Optimizations and Hardware Implications
- The Missing Piece in Complex Analytics: Low Latency, Scalable Model Management and Serving with Velox
- The Case for Predictive Database Systems: Opportunities and Challenges.
- Paul Viola and Michael Jones, Rapid Object Detection using a Boosted Cascade of Simple Features, CVPR 2001.
{% include syllabus_entry %}
{% include syllabus_entry %}
This week we will explore the process of compiling and optimizing deep neural network computation graphs. The reading will span both graph-level optimization and the compilation and optimization of individual tensor operations. A toy example of one such optimization, operator fusion, appears below.
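To make the graph-level side concrete, below is a toy illustration of operator fusion: two elementwise kernels are merged into a single loop so the intermediate tensor is never materialized. The `unfused` and `fused` functions are hypothetical pure-Python stand-ins, not code generated by TVM, XLA, or any system in the readings.

```python
# Unfused graph: y = relu(2 * x) executed as two separate "kernels",
# with the intermediate t = 2 * x materialized between them.
def unfused(x):
    t = [2.0 * v for v in x]           # kernel 1: scale, writes t
    return [max(v, 0.0) for v in t]    # kernel 2: relu, reads t back

# Fused kernel: a graph compiler can merge the two elementwise ops into a
# single loop, eliminating the intermediate and its memory traffic.
def fused(x):
    return [max(2.0 * v, 0.0) for v in x]

assert unfused([-1.0, 0.5]) == fused([-1.0, 0.5])  # same result, one pass
```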
{% include syllabus_entry %}
- Submit your review before 1:00PM.
- Slides for PC Meeting (These slides will only be accessible to students enrolled in the class.)
- Learning to Optimize Tensor Programs: The TVM story is twofold. There is a Systems-for-ML story (the paper above), and this paper is their ML-for-Systems story.
- Exploring Hidden Dimensions in Parallelizing Convolutional Neural Networks
- Tensor Comprehensions
- Supporting Very Large Models using Automatic Dataflow Graph Partitioning
{% include syllabus_entry %}
Unfortunately, due to the power outage, lecture is canceled today. To make up for lost lecture(s) and accommodate our guest speakers, we will skip the overview lecture this week and start with the PC meeting on Machine Learning Applied to Systems. However, this will put a little extra pressure on the neutral presenters to provide additional context. We will then cover the discussion on machine learning hardware the following Monday.
{% include syllabus_entry %}
- Submit your review before 1:00PM.
- Slides for PC Meeting (These slides will only be accessible to students enrolled in the class.)
{% include syllabus_entry %}
This lecture will be presented by Kurt Keutzer and Suresh Krishna who are experts in processor design as well as network and architecture co-design.
{% include syllabus_entry %}
- Submit your review before 1:00PM.
- Slides for PC Meeting (These slides will only be accessible to students enrolled in the class.)
- Efficient Processing of Deep Neural Networks: A Tutorial and Survey
- A great spreadsheet analysis of the power and performance characteristics of all the publicly available hardware accelerators for deep learning (GPUs, CPUs, TPUs).
- Nvidia post comparing different GPUs across a wide range of networks.
{% include syllabus_entry %}
{% include syllabus_entry %}
- Submit your review before 1:00PM.
- Slides for PC Meeting coming soon. (These slides will only be accessible to students enrolled in the class.)
{% include syllabus_entry %}
This week we will discuss machine learning in adversarial settings. This includes secure federated learning, differential privacy, and adversarial examples.
{% include syllabus_entry %}
- Submit your review before 1:00PM.
- Slides for PC Meeting coming soon. (These slides will only be accessible to students enrolled in the class.)
- Helen: Maliciously Secure Coopetitive Learning for Linear Models
- Faster CryptoNets: Leveraging Sparsity for Real-World Encrypted Inference
- Rendered Insecure: GPU Side Channel Attacks are Practical
- The Algorithmic Foundations of Differential Privacy
- Federated Learning: Collaborative Machine Learning without Centralized Training Data
- Federated Learning at Google ... A comic strip?
- SecureML: A System for Scalable Privacy-Preserving Machine Learning
- More reading coming soon ...
{% include syllabus_entry %}
Autonomous vehicles will likely transform society in the next decade and are fundamentally AI-enabled systems. In this lecture we will discuss the AI-Systems challenges around autonomous driving.
{% include syllabus_entry %}
{% include syllabus_entry %}
Everyone must do one of the readings (you pick).
- Submit your review before 1:00PM.
- Slides for PC Meeting coming soon. (These slides will only be accessible to students enrolled in the class.)
- Self-Driving Cars: A Survey. This is a slightly longer survey, so focus on the first few pages, which give an overview and framing of the autonomous driving problem and common solutions.
- The Architectural Implications of Autonomous Driving: Constraints and Acceleration
- ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst
{% include syllabus_entry %}
{% include syllabus_entry %}
{% include syllabus_entry %}
{% include syllabus_entry %}
{% include syllabus_entry %}
Don't forget to submit your final reports. As noted on Piazza, the final report should be 6 pages plus references (2-column, 10pt font, unlimited appendix). Please submit your report using this form:
You only need one submission per team. The write-up should discuss the problem formulation, related work, your approach, and your results.
Week | Date (Lec.) | Topic
---|---|---
Detailed candidate project descriptions will be posted shortly. However, students are encouraged to find projects that relate to their ongoing research.
Grades will be largely based on class participation and projects. In addition, we will require weekly paper summaries submitted before class.
- Projects: 60%
- Weekly Summaries: 20%
- Class Participation: 20%