Here are some curated exercises from Bishop's book. They might be insightful for the SL lecture.
Chapter 2
- 2.21 Good exercise for entropy.
- 2.22 Constrained optimization applied to entropy maximization; quite a classic exercise (a brief sketch follows this list).
- 2.23-2.25 Good combo exercise for entropy.
- 2.26 Good exercise for KL-Divergence.
- 2.41 Regularized linear regression from a Bayesian perspective.
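For 2.22, a minimal sketch of the classic result it builds on (generic notation, assuming a discrete distribution $p_1, \dots, p_M$ with only the normalization constraint): a single Lagrange multiplier shows that the uniform distribution maximizes the entropy.

```latex
% Maximize H(p) = -\sum_i p_i \ln p_i  subject to  \sum_i p_i = 1.
\begin{align}
  \mathcal{L}(p, \lambda)
    &= -\sum_{i=1}^{M} p_i \ln p_i + \lambda \Big( \sum_{i=1}^{M} p_i - 1 \Big) \\
  \frac{\partial \mathcal{L}}{\partial p_i}
    &= -\ln p_i - 1 + \lambda = 0
    \;\Rightarrow\; p_i = e^{\lambda - 1} \quad \text{(identical for every } i\text{)} \\
  \sum_{i=1}^{M} p_i = 1
    &\;\Rightarrow\; p_i = \frac{1}{M}, \qquad H_{\max} = \ln M .
\end{align}
```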
Chapter 4
- 4.3 Exercise for logistic regression.
- 4.5 Exercise for linear regression.
- 4.7 Multi-output linear regression.
- 4.8 Exercise for multi-output regression.
- 4.11 Exercise on MLE vs. linear regression (see the sketch after this list).
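For 4.11, a minimal sketch (assuming i.i.d. Gaussian noise with fixed variance $\sigma^2$; generic notation, not necessarily the book's) of why maximum likelihood for the linear model reduces to least squares:

```latex
% Model: y_n = w^\top x_n + \epsilon_n  with  \epsilon_n \sim \mathcal{N}(0, \sigma^2)  i.i.d.
\begin{align}
  \ln p(\mathbf{y} \mid \mathbf{X}, w)
    &= \sum_{n=1}^{N} \ln \mathcal{N}\!\big(y_n \mid w^\top x_n, \sigma^2\big) \\
    &= -\frac{1}{2\sigma^2} \sum_{n=1}^{N} \big(y_n - w^\top x_n\big)^2
       - \frac{N}{2} \ln\!\big(2\pi\sigma^2\big).
\end{align}
% The second term is constant in w, so maximizing the log-likelihood over w
% is exactly minimizing the sum of squared errors.
```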
Chapter 5
- 5.7 Risk minimization for classification (see the sketch after this list).
- 5.10 Risk minimization for classification.
- 5.13-5.15 MLE and Naive Bayes.
- 5.16-5.17 Good combo exercise for MLE.
- 5.20 Exercise for logistic regression.
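For 5.7 and 5.10, a minimal sketch of the risk-minimization argument under 0-1 loss (assuming classes $C_1, \dots, C_K$ and a deterministic decision rule $c(x)$): minimizing the expected loss pointwise assigns each $x$ to the class with the largest posterior.

```latex
% Expected 0-1 loss of a decision rule c(x):
\begin{align}
  \mathbb{E}[L]
    &= \int \sum_{k=1}^{K} \mathbb{I}\big(c(x) \neq C_k\big)\, p(C_k, x)\, \mathrm{d}x
     = \int \big(1 - p\big(C_{c(x)} \mid x\big)\big)\, p(x)\, \mathrm{d}x \\
  c^\ast(x)
    &= \arg\max_{k}\; p(C_k \mid x)
    \qquad \text{(minimizes the integrand at every } x\text{).}
\end{align}
```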
Chapter 9
- 9.2 Relation between weight decay and ridge regression (see the sketch after this list).
- 9.6 Possibly useful for second-order optimization techniques in the optimization course.
- 9.18 Linear regression with dropout regularization.
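For 9.2, a minimal numerical sketch (plain NumPy on toy data; `lam`, `lr`, and the toy model are illustrative choices, not from the book) checking that one gradient step on a ridge-penalized squared error coincides with one plain gradient step followed by multiplicative weight decay:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                 # toy design matrix
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

w = rng.normal(size=3)                        # shared starting point
lam, lr = 0.1, 0.01                           # illustrative penalty strength and step size

grad_mse = X.T @ (X @ w - y) / len(y)         # gradient of the data term (1 / (2N)) * ||X w - y||^2

# (a) one gradient step on the ridge objective: data term + (lam / 2) * ||w||^2
w_ridge = w - lr * (grad_mse + lam * w)

# (b) one plain gradient step combined with multiplicative weight decay
w_decay = (1.0 - lr * lam) * w - lr * grad_mse

print(np.allclose(w_ridge, w_decay))          # True: identical updates
```

The identity only holds for vanilla gradient descent; with adaptive optimizers the two updates differ, which is the usual motivation for decoupled weight decay.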