Slide improvements #27
In the first slide deck, the slide that shows the categories of ML has two typos that need to be fixed.
@FunnyPhantom Thank you for your suggestion. Please provide the exact name of the slide and the page numbers.
@mahsayazdani the file that needs improvement resides here:
There is another issue here: https://github.com/asharifiz/Introduction_to_Machine_Learning/blob/3a595142161801b224e9fd06b1e447de7dfb0749/Slides/Chapter_02_Classical_Models/Loss/Loss.tex#L269
There is also another improvement that could be made. Ideally, after this slide:
In this slide, there are multiple possible improvements:
The sample size should be removed from the following picture.
Also, for the Xavier initialization slides, please denote whether the weights are initialized with a normal distribution or a uniform distribution.
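For reference, a minimal sketch of the two Xavier variants the slides should distinguish (assuming NumPy; the function names are illustrative):

```python
import numpy as np

def xavier_normal(fan_in, fan_out):
    # Xavier/Glorot normal: W ~ N(0, 2 / (fan_in + fan_out))
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return np.random.normal(0.0, std, size=(fan_in, fan_out))

def xavier_uniform(fan_in, fan_out):
    # Xavier/Glorot uniform: W ~ U(-limit, limit), limit = sqrt(6 / (fan_in + fan_out))
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return np.random.uniform(-limit, limit, size=(fan_in, fan_out))
```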
In this figure, "loss" should be changed to "training loss".
In this figure, "testing error" should be changed to "validation error".
Also, the early stopping slides should be placed before the L1/L2 regularization slides.
Also, the solution to dropout causing hyperactivation is missing.
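If the intended fix is the standard scaling trick, a minimal sketch of inverted dropout (assuming NumPy; this is a suggestion, not necessarily what the slides use) could illustrate it:

```python
import numpy as np

def inverted_dropout(activations, keep_prob=0.8):
    # Drop units with probability (1 - keep_prob), then rescale by
    # 1 / keep_prob so the expected activation stays unchanged and
    # no extra correction is needed at test time.
    mask = np.random.rand(*activations.shape) < keep_prob
    return activations * mask / keep_prob
```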
The wrong picture is being used for the result of the CNN training; it is the same picture as the FCN one.
In the same context as the comment above, the number of epochs for each network should be explicitly specified. Moreover, the epoch numbering should start from 1, not 0 (since zero would indicate that gradient descent has not been run even once).
Pages 20-22 can be aggregated into one slide. Moreover, providing a famous kernel with a well-known function (such as horizontal or vertical edge detection; see the sketch below) would be more helpful for instruction.
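For instance, a minimal sketch using the classic Sobel kernels (assuming NumPy and SciPy; the image here is just a random stand-in):

```python
import numpy as np
from scipy.signal import convolve2d

# Sobel kernels: well-known detectors for vertical and horizontal edges.
sobel_vertical = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]])
sobel_horizontal = sobel_vertical.T

image = np.random.rand(8, 8)  # stand-in for a grayscale image
edges_v = convolve2d(image, sobel_vertical, mode="same")
edges_h = convolve2d(image, sobel_horizontal, mode="same")
```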
On page 87, the number of dimensions is written one way in the text, but at the bottom of the same page it is written differently; the two should be made consistent.
**VERY IMPORTANT CHANGE** All the slides for the
This needs to change from `E(y ŷ)` to `E(2y ŷ)`.
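Assuming this refers to the usual expansion of the squared error (the surrounding slides suggest so), the corrected step would read:

```latex
\mathbb{E}\big[(y - \hat{y})^2\big]
  = \mathbb{E}\big[y^2\big] - \mathbb{E}\big[2y\hat{y}\big] + \mathbb{E}\big[\hat{y}^2\big]
```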
Very important! Please add the following images for the GAN slides to convey the concept better. (Based on Dr. Sharifi's comments, it would be best to revise the GAN slides rigorously and let him review the results.)
Page 22/65 of the RNN slides: a simpler example would convey the meaning better, preferably with the same dimensions and a nonlinear activation function; see the sketch below.
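A minimal sketch of what such an example could look like (assuming NumPy; the dimension `d` and the weights are illustrative):

```python
import numpy as np

d = 3                              # same dimension for input and hidden state
W_h = np.random.randn(d, d) * 0.1  # hidden-to-hidden weights
W_x = np.random.randn(d, d) * 0.1  # input-to-hidden weights
b = np.zeros(d)

def rnn_step(h_prev, x_t):
    # One step of a vanilla RNN with a tanh nonlinearity.
    return np.tanh(W_h @ h_prev + W_x @ x_t + b)

h = np.zeros(d)
for x_t in np.random.randn(5, d):  # a toy sequence of length 5
    h = rnn_step(h, x_t)
```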
Page 37 of the RNN slides: there are two activation functions; one can only be tanh, but the other can be tanh or sigmoid. This should be explicitly specified.
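If the slide shows the standard hidden update plus an output layer, the distinction could be written as follows (a sketch, assuming the hidden activation is the tanh-only one):

```latex
h_t = \tanh\big(W_h h_{t-1} + W_x x_t + b_h\big)                     % hidden update: tanh only
y_t = \phi\big(W_y h_t + b_y\big), \quad \phi \in \{\tanh, \sigma\}  % output: tanh or sigmoid
```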
Page 38 of the RNN slides: the gated recurrent unit should have more details about its architecture (compare with the previous slide, which shows the architecture of a simple RNN unit).
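For reference, the standard GRU equations that could supply that detail (a sketch; the notation may need adjusting to match the slides):

```latex
z_t = \sigma\big(W_z x_t + U_z h_{t-1} + b_z\big)                     % update gate
r_t = \sigma\big(W_r x_t + U_r h_{t-1} + b_r\big)                     % reset gate
\tilde{h}_t = \tanh\big(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\big)  % candidate state
h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t                 % new hidden state
```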
In the introduction of the RNN slides, the limitations of the previous models should be specified first, to give more context about the problem RNNs solve.
The limitations of RNNs should be specified before moving on to GRUs, for the same reason explained above.
The limitations of GRUs should be specified before moving on to LSTMs, for the same reason as above.
Transformers should be introduced via the limitations of LSTMs: BPTT (which must be sequential), vanishing or exploding gradients, and long-range dependencies.
These slides are also being used as a reference for teaching the ML for Bioinformatics course.
During the course, some points of improvement were found; this issue serves as a thread for conveying them.