The language and logic of probability allow us to coherently and algorithmically model information and uncertainty. This course will discuss how to specify probabilistic models, implement them, learn them from data, and perform inference with them. These models let us generate novel images and text, find meaningful latent representations of data, take advantage of large unlabelled datasets, and more. This course will present the basic building blocks of these models, the techniques required to compose them, and the computational tools available to realize them.
- An introduction to probability as a means of representing and reasoning with uncertain knowledge.
- Qualitative and quantitative specification of probability distributions using graphical models.
- Algorithms for inference and probabilistic reasoning with probabilistic graphical models.
- Statistical approaches and algorithms for learning probability models from empirical data.
- Applications of these models in artificial intelligence and machine learning.
- Lectures: Tuesdays 11:10-13:00 EST
- Tutorials: Thursdays 11:10-12:00 EST
Discussion Board: https://probml-forum.jessebett.com/
- Jesse Bettencourt
e-mail: csc412prof [at] cs toronto edu
- Chowdhury, Sayantan
- Creager, Elliot
- Ebrahimi, Mohammad Reza
- Gowda, Sindhu C. M.
- Jia, Sheng
- Xu, Haoping
- Zhang, Haoran
e-mail: csc412tas [at] cs toronto edu
Fridays 11:10-12:00 EST
Link and passcode: https://probml-forum.jessebett.com/t/instructor-office-hours-fridays-11-10-12/1479
- A1: 15% (Feb 12)
- A2: 15% (March 12)
- A3: 15% (April 2)
- Project Proposal: 10% (Feb 24)
- Project Report: 30% (April 9)
- Computer Science Communication: 15% (March 30)
For the assignments, you will complete problem sets requiring mathematical derivations, coded solutions, and a written summary of your results and figures. This will be done individually.
For the project component, you will carry out a small research project relating to the course content. This will be done in groups of 2-4.
For the Computer Science Communication (ComSciCom) component, you will produce an original work of scientific communication related to the content of the course through your choice of one or more of the following artifacts:
- Blog post
- Technical illustrations
- Animations
- Twitter thread
- Memes
- Video
- Podcast
All assessments are officially due by 23:59 on the specified due date.
All assessments that are submitted prior to collection will be graded without penalty. Assessments will be collected no sooner than 48 hours after the due date. Assessments submitted after collection will not be accepted without special accommodations.
If you find yourself in a situation that requires you to submit your work after the due date, please contact the instructor by email to discuss accommodations. Be proactive and inform me as soon as you are aware of any special circumstances that may prevent you from submitting on time. Examples of special circumstances include:
- Physical health
- Mental health
- Participation in an academic conference
- Job / Graduate School Interviews (good luck!)
After attempting the problems individually, you may discuss and work together on the assignments with up to two classmates. However, you must write your own solutions, including code, mathematical derivations, figures, and written answers. Explicitly name any collaborators at the top of your submission.
Assignment solutions will require a combination of written responses, mathematical derivations, code, output from evaluated code, and figures. We expect all of these elements to be contained in a single PDF writeup included in your submission. You will also be expected to submit your source code, though graders will not be expected to run your code to assess your work.
Any requests to have work re-evaluated must be made via email within one week of the date the work was returned. The request must contain a justification for the remark.
Graduate students will be evaluated at the graduate level according to the University Assessment and Grading Practices Policy.
This course, including your participation, will be recorded on video and will be available to students in the course for viewing remotely and after each session.
Course videos and materials belong to your instructor, the University, and/or other sources depending on the specific facts of each situation, and are protected by copyright. Do not download, copy, or share any course or student materials or videos without the explicit permission of the instructor.
For questions about recording and use of videos in which you appear please contact your instructor.
If a pet enters the camera frame during lecture, we will pause our discussion for an introduction to that pet and admiration by all. Dogs, cats, rabbits, hamsters, birds, snakes, iguanas, etc. are all welcome.
You are responsible for knowing the content of the University of Toronto’s Code of Behaviour on Academic Matters. If you have any questions about what is or is not permitted in this course, please do not hesitate to contact your instructor.
If you require accommodations for a disability, or have any accessibility concerns about the course, the means of delivery, or course materials, please contact Accessibility Services as soon as possible: http://accessibility.utoronto.ca.
- Probabilistic Representation and Inference
- Bayesian Linear Regression (see the sketch after this list)
- Probabilistic Generative and Discriminative Models
- Connection to Regularization Methods
- Probabilistic Graphical Models
- Exact Inference
- Sampling-Based Approximate Inference
- Variational Inference
- Stochastic Variational Inference with Automatic Differentiation
- Time Series and Recurrent Models
- Variational Auto-Encoders (VAEs)
- Generative Adversarial Networks (GANs)
- Normalizing Flows
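To give a concrete taste of the Bayesian linear regression topic, here is a minimal sketch in NumPy (illustrative code, not course material; the conjugate Gaussian prior, noise level, and function names are assumptions made for this example):

```python
import numpy as np

def posterior(X, y, alpha=1.0, sigma2=0.1):
    """Gaussian posterior over weights for y = X @ w + noise,
    assuming prior w ~ N(0, (1/alpha) * I) and noise variance sigma2."""
    d = X.shape[1]
    # Conjugacy: the posterior precision is the prior precision
    # plus the precision contributed by the data.
    Sigma_inv = alpha * np.eye(d) + X.T @ X / sigma2
    Sigma = np.linalg.inv(Sigma_inv)
    mu = Sigma @ X.T @ y / sigma2
    return mu, Sigma

# Toy usage: recover an intercept of 0.5 and a slope of -2.0 from noisy data.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(-1.0, 1.0, 50)])
y = X @ np.array([0.5, -2.0]) + 0.1 * rng.standard_normal(50)
mu, Sigma = posterior(X, y)
print(mu)  # close to [0.5, -2.0]; Sigma quantifies the remaining uncertainty
```

Much of the course concerns what to do when, unlike here, the posterior has no closed form: exact inference in graphical models, sampling-based approximations, and variational methods.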
There is no required textbook for this class. Readings will be suggested from various sources, primarily the following excellent texts:
- Edwin Jaynes (2003) Probability Theory: The Logic of Science.
- David MacKay (2003) Information Theory, Inference, and Learning Algorithms. http://www.inference.phy.cam.ac.uk/mackay/itila/
- Kevin Murphy (2012) Machine Learning: A Probabilistic Perspective.