> This principle is closely related to what we call in machine learning “the curse of dimensionality.” Each dimension we add into a search space exponentially blows up the number of samples we require to get good generalization for any model learned from it. The curse of dimensionality is more often applied to datasets; simply put, the more columns or variables a dataset is represented with, the exponentially more samples in that dataset we need to understand it. In our case, we are thinking about the weights rather than the inputs, but the principle remains the same; high-dimensional space is enormous!
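As a rough illustration of the quoted claim (a minimal sketch of mine, not from the chapter): covering the unit hypercube [0, 1]^d with a grid of spacing 0.1 takes 10 points per axis, so 10^d points in total, which is exponential in the dimension d.

```python
# Sketch: number of grid points needed to cover [0, 1]^d at spacing 0.1.
# 10 points per axis => 10**d points in total, exponential in d.
for d in (1, 2, 3, 10, 100):
    print(f"dimension {d:>3}: needs {10 ** d:.3g} grid points")
```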
Hi @genekogan, in this line of chapter 4:
https://github.com/ml4a/ml4a.github.io/blame/master/_chapters/how_neural_networks_are_trained.md#L62
(sorry for using blame mode; it's just that GitHub doesn't allow linking to a specific line of rendered Markdown)
The first 'samples' refers to sampling the space, while the second refers to elements of the dataset (if I understand correctly). Since the former usage is more common in this chapter, I suggest replacing the latter with 'data' / 'elements' etc. to avoid confusion.