Minor typos in documentation/comments. #1014

Open · wants to merge 7 commits into master
doc/fundamentals.tex (1 addition, 1 deletion)
@@ -450,7 +450,7 @@ \subsection{Backpropagation in DAGs}\label{s:dag}
d\bx_{t} \leftarrow d\bx_{t}
+ \frac{d\langle \bp_L, f_{\pi_L}(\bx_0,\dots,\bx_{t-1})\rangle}{d\bx_t}.
\]
-Here, for uniformity with the other iterations, we use the fact that $d\bx_l$ are initialized to zero an\emph{accumulate} the values instead of storing them. In practice, the update operation needs to be carried out only for the variables $\bx_l$ that are actual inputs to $f_{\pi_L}$, which is often a tiny fraction of all the variables in the DAG.
+Here, for uniformity with the other iterations, we use the fact that $d\bx_l$ are initialized to zero and \emph{accumulate} the values instead of storing them. In practice, the update operation needs to be carried out only for the variables $\bx_l$ that are actual inputs to $f_{\pi_L}$, which is often a tiny fraction of all the variables in the DAG.

After the update, each $d\bx_t$ contains the projected derivative of function $h_L$ with respect to the corresponding variable:
\[
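The accumulation rule shown in this hunk is simple to mirror in code. The sketch below is a hypothetical illustration, not MatConvNet's implementation; every name in it (x, dx, order, inputsOf, backwardOf) is invented for the example:

% Hypothetical sketch: backward pass over a DAG with gradient accumulation.
% x{t} holds variable t; dx{t} accumulates its projected derivative.
dx = cellfun(@(v) zeros(size(v), 'like', v), x, 'UniformOutput', false) ;
for l = numel(order):-1:1                  % visit functions in reverse order
  t = order(l) ;                           % function f_t writes variable t
  din = backwardOf{t}(x(inputsOf{t}), dx{t}) ;  % derivatives w.r.t. its inputs
  for k = 1:numel(inputsOf{t})
    j = inputsOf{t}(k) ;
    dx{j} = dx{j} + din{k} ;               % accumulate, do not overwrite
  end
end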
doc/site/docs/wrappers.md (1 addition, 1 deletion)
@@ -27,7 +27,7 @@ cellarray `net.layers` with a list of layers. For example:
net.layers{1} = struct(...
'name', 'conv1', ...
'type', 'conv', ...
-'weights', {{randn(10,10,3,2,'single'), randn(2,1,'single')}}, ...
+'weights', {randn(10,10,3,2,'single'), randn(2,1,'single')}, ...
'pad', 0, ...
'stride', 1) ;
net.layers{2} = struct(...
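A detail worth keeping in mind when editing this example: in MATLAB, struct() treats a cell-array value as a request to build a struct array, so the number of braces changes the result. A minimal sketch of the distinction:

% One scalar struct whose 'weights' field is a 1x2 cell array:
s1 = struct('weights', {{ones(3), ones(2,1)}}) ;   % size(s1) is [1 1]
% A 1x2 struct array with one matrix per element:
s2 = struct('weights', {ones(3), ones(2,1)}) ;     % size(s2) is [1 2]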
matlab/simplenn/vl_simplenn_display.m (1 addition, 1 deletion)
@@ -13,7 +13,7 @@
% `inputSize`:: auto
% Specifies the size of the input tensor X that will be passed to
% the network as input. This information is used in order to
-%    estiamte the memory required to process the network. When this
+%    estimate the memory required to process the network. When this
% option is not used, VL_SIMPLENN_DISPLAY() tires to use values
% in the NET structure to guess the input size:
% NET.META.INPUTSIZE and NET.META.NORMALIZATION.IMAGESIZE
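A minimal usage sketch for the option documented in this hunk, assuming a simplenn network already sits in net and a single 224x224 RGB input:

% Pass the input size explicitly so the memory estimate need not be
% guessed from NET.META:
vl_simplenn_display(net, 'inputSize', [224 224 3 1]) ;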
matlab/vl_nnbilinearsampler.m (3 additions, 3 deletions)
@@ -16,18 +16,18 @@
% For output image n, GRID(1,:,:,n) specifies the vertical location
% v of a sample in the input image X and GRID(2,:,:,n) the
% horizontal location u. The convention follows standard
-%   impelemntations of this operator in the literature. Namely:
+%   impelementations of this operator in the literature. Namely:
%
% 1. The grid coordinates are normalized in the range [-1,1]. This
% means that (-1,-1) is the center of the upper-left pixel in the
% input image and (+1,+1) the center of the bottom-right pixel.
%
-%   2. The V,U coordiante planes are stacked in the fisrt dimension of
+%   2. The V,U coordinate planes are stacked in the first dimension of
% GRID instead of in the third, as it would be more natural in
% MatConvNet (as these could be interpreted as 'channels' in
% GRID).
%
-%   Further, No can be a multiple of N; in this case, it is assumed
+%   Further, No shall be a multiple of N; in this case, it is assumed
% that there are No/N transforms per input image, hence, the
% transforms [1 ... No/N] are applied to the first image, [No/N+1
% ... 2*No/N] are applied to the second image, etc.
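To make these conventions concrete, here is a minimal sketch (all sizes assumed) that builds the identity grid, vertical plane first as described above, and samples with it:

x = randn(32, 32, 3, 1, 'single') ;         % input image, H x W x C x N
[u, v] = meshgrid(linspace(-1, 1, 32)) ;    % normalized coordinates in [-1,1]
grid = zeros(2, 32, 32, 1, 'single') ;      % V,U planes stacked first
grid(1,:,:,1) = reshape(v, [1 32 32]) ;     % vertical locations v
grid(2,:,:,1) = reshape(u, [1 32 32]) ;     % horizontal locations u
y = vl_nnbilinearsampler(x, grid) ;         % identity grid, so y is close to x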
matlab/vl_nnloss.m (1 addition, 1 deletion)
@@ -25,7 +25,7 @@
%
% In the third form, C has dimension H x W x D x N and specifies
% attributes rather than categories. Here elements in C are either
-%   +1 or -1 and C, where +1 denotes that an attribute is present and
+%   +1 or -1, where +1 denotes that an attribute is present and
% -1 that it is not. The key difference is that multiple attributes
% can be active at the same time, while categories are mutually
% exclusive. By default, the loss is *summed* across attributes
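A minimal sketch of this attribute form, with assumed sizes and assuming the binary 'logistic' loss option:

x = randn(1, 1, 8, 4, 'single') ;           % scores, H x W x D x N
c = sign(randn(1, 1, 8, 4, 'single')) ;     % attribute labels in {-1,+1}
y = vl_nnloss(x, c, 'loss', 'logistic') ;   % summed across attributes by default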
matlab/vl_tmove.m (1 addition, 1 deletion)
@@ -42,7 +42,7 @@
% format = {'single', [1 1], 'x0' ;
% 'double', [10 5], 'x1' }
%
-%   As ane extension, it is possible to declare all or some of the
+%   As an extension, it is possible to declare all or some of the
% tensors as GPU ones, by adding a fourth column to FORMAT:
%
% format = {'single', [1 1], 'x0', 'cpu' ;
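The in-file example is cut off by the diff view; as a self-contained illustration with assumed shapes and names, a FORMAT declaring one CPU and one GPU tensor might look like:

format = {'single', [1 1],  'x0', 'cpu' ;
          'double', [10 5], 'x1', 'gpu' } ;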