Hardware Accelerators for Machine Learning
Course Description
This course provides in-depth coverage of the architectural techniques used to design accelerators for training and inference in machine learning systems. It covers classical ML algorithms such as linear regression and support vector machines, as well as DNN models such as convolutional and recurrent neural networks. We consider both training and inference for these models and discuss the impact of parameters such as batch size, precision, sparsity, and compression on model accuracy. Students will become familiar with hardware implementation techniques that exploit parallelism, locality, and low precision to implement the core computational kernels used in ML, and will develop the intuition to trade off ML model parameters against hardware implementation choices when designing energy-efficient accelerators. Students will read recent research papers and complete a design project. Prerequisites: CS 149 or EE 180. CS 229 is recommended but not required.
Grading Basis
Letter or Credit/No Credit
Units
3 (minimum) - 4 (maximum)
Course Repeatable for Degree Credit?
No
Course Component
Lecture
Enrollment Optional?
No
Programs
CS217 is a completion requirement for:
- (from the following course set: )