
Deployment bug-fix #1036

Open · wants to merge 148 commits into base: devel
10ba7ae
Added AdaGrad and AdaDelta solvers. First implementation of solver in…
Jan 27, 2016
88d4b63
Merge branch 'master' into new-solvers
Jan 27, 2016
26e7172
Added nnsolvers unit test, to check the convergence of new solvers.
Jan 27, 2016
2b6f2ed
Added solver interface to cnn_train_dag.
Jan 27, 2016
dc6f749
Improved nnsolvers unit test, added DAG tests.
Jan 27, 2016
85cc7f7
Merge branch 'master' into new-solvers
Jan 27, 2016
e723803
Fixed nnsolvers in GPU mode.
Jan 27, 2016
6d4c944
Added RMSProp. Tweaked the epsilon position in AdaGrad (literature va…
Jan 28, 2016
7fef669
Merge branch 'master' into new-solvers
May 6, 2016
0c429d8
documentation, backwards-compatible momentum option
May 6, 2016
1c21b66
update unit tests
May 6, 2016
bbe444d
different default hyper-params for each solver, same as in torch. imp…
May 8, 2016
394f639
renamed momentum and state to solverState in cnn_train/dag; fixed uni…
May 11, 2016
b3598e0
remove unneded code
May 11, 2016
0ec8114
add a moving average option to AdaGrad, which avoids the monotonic de…
May 12, 2016
09c771e
AdaGrad: running sum with decay instead of moving average (which does…
May 12, 2016
fb0b626
correct default hyperparam
May 12, 2016
f866715
fix options for custom solvers
May 13, 2016
8088635
Merge branch 'master-bb' into devel
lenck Jul 13, 2016
9ad2008
Merge branch 'devel' into new-solvers
lenck Jul 13, 2016
0711d24
Revamped the custom solvers. Properly merged to the new cnn_train.
lenck Jul 13, 2016
8ccdc96
Merge branch 'master' of bitbucket.org:ovl/matconvnet
Sep 17, 2016
05c36ab
Makefile: bug fix: make sure that specified CUDA libraries are linked…
vedaldi Sep 16, 2016
5bdb22a
fast r-cnn: adds region of interest pooling
Sep 25, 2016
3eb8102
fast r-cnn: adds demo
Sep 25, 2016
041e8a9
fast r-cnn: adds import script
Sep 25, 2016
d579313
cnn_train_*: skips void updates
Sep 25, 2016
86b7a1c
Xcode: sync
vedaldi Sep 25, 2016
c2cb3a9
build: adds `-D_FORCE_INLINES` flag
vedaldi Sep 25, 2016
733b083
fast-rcnn: cleanup
vedaldi Sep 25, 2016
b098d06
beta22->beta23
vedaldi Sep 25, 2016
b62b96d
roipooling: fix recent compute capability
vedaldi Sep 25, 2016
e791bae
Merge branch 'master' of bitbucket.org:ovl/matconvnet
Sep 26, 2016
7990df5
Fix names
Sep 26, 2016
ab371b2
Add xVOCap
Sep 26, 2016
0be8088
Change eval to FRCNN style
Sep 27, 2016
eedfc01
Add comments to eval
Sep 27, 2016
2d7bcee
Fix bbox regression
Sep 29, 2016
64e9b45
Remove ap_auc
Sep 29, 2016
b9415b3
switch to torch resnet sampling factor
vedaldi Sep 27, 2016
86f3af7
Merge branch 'master' of bitbucket.org:ovl/matconvnet
Sep 29, 2016
e0004ac
minor typo fixes to vl_imreadjpeg docs
albanie Sep 30, 2016
3336632
Incorporate bbox meanstd
Sep 30, 2016
afe1be7
Clean up
Sep 30, 2016
8262097
Fix bbox reg
Sep 30, 2016
df7d476
more on default models
vedaldi Sep 26, 2016
5372113
fixed model filename: *-pascal07-*.mat
thelinuxmaniac Sep 28, 2016
1846751
Fix a minor bug in roipool
Sep 28, 2016
b428184
Minor improvement
Sep 28, 2016
72389fe
Demo shows on original
Sep 29, 2016
381905a
fixed missing *-pascal07-*.mat
thelinuxmaniac Sep 29, 2016
cf1c043
fast-rcnn: fixes evaluation script paths
vedaldi Sep 29, 2016
bff8e8c
utils/import-caffe.py: save input sizes in metadata
vedaldi Sep 29, 2016
6337e3f
dagnn.ROIPooling: adds missing `getOuptutSizes` method
vedaldi Sep 29, 2016
6ebaa6e
dagnn.getVarSizes: correctly expand empty variable sizes to [0 0 0 0]
vedaldi Sep 29, 2016
7ca8f06
model2dot.m: works with Fast R-CNN by using new metadata
vedaldi Sep 29, 2016
17f802f
doc/pretrained.md: adds Fast R-CNN performance
vedaldi Sep 29, 2016
9b9f1bb
pretrained.md: fixes
vedaldi Sep 29, 2016
6aba90b
utils/import-caffe.py: fix normalization metadata
vedaldi Sep 30, 2016
f9154a1
DagNN.fromSimpleNN: deals properly with `net.meta.inputs`
vedaldi Sep 30, 2016
1d3dae9
models2dot.m: compatible with latest models
vedaldi Sep 30, 2016
f0f1d4c
fast-rcnn: updates pretrained models
vedaldi Sep 30, 2016
4ef4e52
mode2dot.m: bugfix
vedaldi Sep 30, 2016
01242b3
doc: pretrained.md: fixes
vedaldi Sep 30, 2016
452f087
fast_rcnn_evaluate.m: fix bug introduced with last merge
vedaldi Sep 30, 2016
4ce2871
doc: pretrained.md: updates MD5s
vedaldi Sep 30, 2016
32bf09c
Merge remote-tracking branch 'origin/master' into smallTypos
albanie Oct 3, 2016
e347c75
fixed small typos in lrn normalization cuda
albanie Oct 3, 2016
35a23bd
COPYING: fixes
vedaldi Oct 4, 2016
26eed46
Merge remote-tracking branch 'origin/master' into smallTypos
albanie Oct 4, 2016
a55ceb2
fixes small typo in vl_nnpool
albanie Oct 4, 2016
ad3ed7a
more minor doc typo fixes to vl_nnpool
albanie Oct 4, 2016
c820607
fixed pooling comment in doc
albanie Oct 4, 2016
ed214c5
rename a parameter in dagnn
Oct 5, 2016
2f4a815
fixes summations in spatial normalization derivatives
albanie Oct 6, 2016
3a9afa6
Merge branch 'smallTypos' of https://bitbucket.org/ovl/matconvnet int…
albanie Oct 6, 2016
effc875
added opts.postEpochFn=@myfunc option to cnn_train, which may be used…
Oct 24, 2016
ff513ed
fix vl_nnloss doc ('logisticlog' -> 'logistic')
Nov 8, 2016
3ac854b
Fixed #722 #743
lenck Nov 11, 2016
a5db5e9
Fix for linux compilation.
lenck Nov 11, 2016
563edcd
Added more info about the linux dependencies.
lenck Nov 11, 2016
f90ed2d
Fixes #694, libjpeg not aborted once a libjpeg error found.
lenck Nov 22, 2016
829798d
Fixes #744.
lenck Nov 22, 2016
7ee8d8a
Fixes once more the issue #552.
lenck Nov 22, 2016
d248531
Merge branch 'master' into master-bb
lenck Nov 22, 2016
efa978b
Fixed the `vl_imreadjpeg.m` help string.
lenck Nov 24, 2016
e53837a
Merge branch 'master'
Nov 30, 2016
c8d650d
fix some errors introduced when merging
Nov 30, 2016
ecf6e00
fix subtle bug in Nesterov momentum (by using standard instead of in-…
Nov 30, 2016
784967a
further improvements to solvers
Nov 30, 2016
661b149
formatting
Nov 30, 2016
110092b
fix plotDiagnostics for solvers with non-numeric state
Nov 30, 2016
5b4af28
same random seed for consistent unit tests
Nov 30, 2016
7b1712a
ADAM solver
aravindhm Nov 30, 2016
d8c173d
Unit tests Adam solver
aravindhm Dec 1, 2016
4e2f662
Merge branch 'master'
Dec 2, 2016
4adcf54
Merged in new-solvers (pull request #21)
Dec 2, 2016
237b5dd
Update semilogy and plot
oya163 Dec 9, 2016
ef68d3d
Update cnn_mnist_experiment
oya163 Dec 9, 2016
5cb496f
Minor
Dec 18, 2016
4d73baf
Merge branch 'master' of bitbucket.org:ovl/matconvnet
Dec 18, 2016
48c7aa9
added dilation to caffe Conv import (fixes #816)
albanie Dec 18, 2016
663d545
Merge pull request #805 from oya163/patch-1
lenck Dec 19, 2016
0a235f3
Fixed spelling mistake.
lenck Jan 18, 2017
f3f3a7f
Added a fix for the new version vl_imreadjpeg.
lenck Jan 18, 2017
3bd1306
Small improvements of cnn_train:
lenck Jan 18, 2017
c0381d0
Merge branch 'master' of github.com:vlfeat/matconvnet
lenck Jan 18, 2017
d6518b5
Merge branch 'master' into master-bb
lenck Jan 18, 2017
0f6412c
Bugfix.
lenck Jan 18, 2017
c038636
Merged in caffe-dilate-fix (pull request #26)
lenck Jan 18, 2017
b570d83
fixed mnist bnorm initialization, where the previous conv layer's bia…
Jan 20, 2017
4eec6fa
Merge branch 'master' of bitbucket.org:ovl/matconvnet
Feb 8, 2017
b4c57a0
Fix bug in support for empty train/val sets
Feb 9, 2017
a9798bf
fix bug that caused setting a solver to be ignored
Feb 13, 2017
5681a0b
fix random seed in mnist unit tests, for more reproducible tests
Feb 13, 2017
1a6a0e7
Merged in smallTypos (pull request #23)
albanie Feb 15, 2017
1505d50
Fixed a bug introduced in b4c57a0, isequal(~,nan) is always false.
lenck Feb 22, 2017
bf310fc
Added epochSize parameter to dagnn too.
lenck Mar 13, 2017
19e5057
Merge branch 'master' of ssh://bitbucket.org/ovl/matconvnet
lenck Mar 13, 2017
32aee3b
fixed small issue ignoring doubles on MNIST unit test
Mar 13, 2017
69d07fd
Small bugfix.
lenck Mar 15, 2017
ece81ef
Merge branch 'master' of ssh://bitbucket.org/ovl/matconvnet
lenck Mar 15, 2017
48ef227
Added one more output to vl_imreadjpeg.cu verbose.
lenck Mar 16, 2017
cfc2c47
Added a 'display' parameter to dagnn.Loss. Moved accummulation to a s…
lenck Mar 16, 2017
e46fb0a
Added a PDist wrapper...
lenck Mar 16, 2017
c04415d
refactoring.
lenck Mar 17, 2017
269d9bf
Changed a bit the PDist interface - aggregates outputs by default.
lenck Mar 17, 2017
198faec
Merged in loss-improvements (pull request #30)
lenck Mar 17, 2017
fbe2a45
Fixed interface quirk of vl_nnloss and vl_nnrelu (different from othe…
Mar 22, 2017
2695827
Fixed bug in vl_taccum when argument b is a scalar
Mar 22, 2017
c19748a
Allow redirecting vl_testnn to different test suite dirs
Mar 22, 2017
677e51f
Removed unnecessary try/catch.
lenck Mar 23, 2017
4987ded
Small bug fixes for loss.
lenck Mar 23, 2017
5dba39e
Merge branch 'master' of ssh://bitbucket.org/ovl/matconvnet
lenck Mar 23, 2017
5cdfa2f
Fix of the numerical derivatives for the vl_nnpdist.
lenck Mar 23, 2017
8420b7e
Merged in toy-data-example (pull request #28)
Mar 23, 2017
23e9d96
Merged in vl_argparse-improvements (pull request #31)
Mar 23, 2017
049e258
Merged in docFix (pull request #25)
albanie Mar 23, 2017
03442ce
prepare for beta24
vedaldi Mar 23, 2017
2e5d60e
doc: fixes
vedaldi Mar 23, 2017
3387ca9
Fixed compilation warning on MSVC/GPU compilation.
lenck Mar 24, 2017
24b8182
Commented out unused bool variable which causes "Unused" warnings.
lenck Mar 24, 2017
1b5cc3f
doc: minor fixes
vedaldi Mar 24, 2017
0461a96
Fixed the nnroipool bug for older MSVC compilers.
lenck Mar 24, 2017
0ea06fd
Merge branch 'master' of bitbucket.org:ovl/matconvnet into master-bb
lenck Mar 24, 2017
0c70ada
xtest/nnconv: reduce memory usage for a test
vedaldi Mar 24, 2017
6221423
doc: typo
vedaldi Mar 24, 2017
f4ea5c3
Biases update bug-fix in mergeBatchNorm
sosiris Aug 15, 2017
20 changes: 10 additions & 10 deletions COPYING
@@ -1,14 +1,14 @@
Copyright (c) 2014 The MatConvNet team.
Copyright (c) 2014-16 The MatConvNet Team.
All rights reserved.

Redistribution and use in source and binary forms are permitted
provided that the above copyright notice and this paragraph are
duplicated in all such forms and that any documentation,
advertising materials, and other materials related to such
distribution and use acknowledge that the software was developed
by the <organization>. The name of the
<organization> may not be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
duplicated in all such forms and that any documentation, advertising
materials, and other materials related to such distribution and use
acknowledge that the software was developed by the MatConvNet
Team. The name of the MatConvNet Team may not be used to endorse or
promote products derived from this software without specific prior
written permission. THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.
14 changes: 9 additions & 5 deletions Makefile
@@ -20,10 +20,10 @@ DEBUG ?=
ARCH ?= maci64

# Configure MATLAB
MATLABROOT ?= /Applications/MATLAB_R2015a.app
MATLABROOT ?= /Applications/MATLAB_R2017a.app

# Configure CUDA and CuDNN. CUDAMETHOD can be either 'nvcc' or 'mex'.
CUDAROOT ?= /Developer/NVIDIA/CUDA-6.5
CUDAROOT ?= /Developer/NVIDIA/CUDA-8.0
CUDNNROOT ?= $(CURDIR)/local/
CUDAMETHOD ?= $(if $(ENABLE_CUDNN),nvcc,mex)

@@ -38,7 +38,7 @@ CUDAMETHOD ?= $(if $(ENABLE_CUDNN),nvcc,mex)

# Maintenance
NAME = matconvnet
VER = 1.0-beta22
VER = 1.0-beta24
DIST = $(NAME)-$(VER)
LATEST = $(NAME)-latest
RSYNC = rsync
@@ -82,7 +82,7 @@ LDFLAGS =
LDOPTIMFLAGS =
LINKLIBS = -lmwblas

NVCCFLAGS_PASS = -gencode=arch=compute_30,code=\"sm_30,compute_30\"
NVCCFLAGS_PASS = -D_FORCE_INLINES -gencode=arch=compute_30,code=\"sm_30,compute_30\"
NVCCVER = $(shell $(NVCC) --version | \
sed -n 's/.*V\([0-9]*\).\([0-9]*\).\([0-9]*\).*/\1 \2 \3/p' | \
xargs printf '%02d%02d%02d')
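The `NVCCVER` recipe above packs the `nvcc --version` string into a zero-padded number that can be compared numerically. Run standalone (the version string below is illustrative, not a claim about any particular CUDA release):

```shell
# Extract "V<major>.<minor>.<patch>" and pack it as a 6-digit number,
# exactly as the Makefile's sed/xargs pipeline does.
echo "Cuda compilation tools, release 8.0, V8.0.61" | \
  sed -n 's/.*V\([0-9]*\).\([0-9]*\).\([0-9]*\).*/\1 \2 \3/p' | \
  xargs printf '%02d%02d%02d'
# prints 080061
```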
@@ -159,12 +159,14 @@ cpp_src+=matlab/src/bits/nnpooling.$(ext)
cpp_src+=matlab/src/bits/nnnormalize.$(ext)
cpp_src+=matlab/src/bits/nnbnorm.$(ext)
cpp_src+=matlab/src/bits/nnbilinearsampler.$(ext)
cpp_src+=matlab/src/bits/nnroipooling.$(ext)
mex_src+=matlab/src/vl_nnconv.$(ext)
mex_src+=matlab/src/vl_nnconvt.$(ext)
mex_src+=matlab/src/vl_nnpool.$(ext)
mex_src+=matlab/src/vl_nnnormalize.$(ext)
mex_src+=matlab/src/vl_nnbnorm.$(ext)
mex_src+=matlab/src/vl_nnbilinearsampler.$(ext)
mex_src+=matlab/src/vl_nnroipool.$(ext)
mex_src+=matlab/src/vl_taccummex.$(ext)
mex_src+=matlab/src/vl_tmove.$(ext)
ifdef ENABLE_IMREADJPEG
@@ -180,6 +182,7 @@ cpp_src+=matlab/src/bits/impl/pooling_cpu.cpp
cpp_src+=matlab/src/bits/impl/normalize_cpu.cpp
cpp_src+=matlab/src/bits/impl/bnorm_cpu.cpp
cpp_src+=matlab/src/bits/impl/bilinearsampler_cpu.cpp
cpp_src+=matlab/src/bits/impl/roipooling_cpu.cpp
cpp_src+=matlab/src/bits/impl/tinythread.cpp
ifdef ENABLE_IMREADJPEG
cpp_src+=matlab/src/bits/impl/imread_$(IMAGELIB).cpp
@@ -195,6 +198,7 @@ cpp_src+=matlab/src/bits/impl/pooling_gpu.cu
cpp_src+=matlab/src/bits/impl/normalize_gpu.cu
cpp_src+=matlab/src/bits/impl/bnorm_gpu.cu
cpp_src+=matlab/src/bits/impl/bilinearsampler_gpu.cu
cpp_src+=matlab/src/bits/impl/roipooling_gpu.cu
cpp_src+=matlab/src/bits/datacu.cu
mex_src+=matlab/src/vl_cudatool.cu
ifdef ENABLE_CUDNN
@@ -255,7 +259,7 @@ CXXOPTIMFLAGS='$$CXXOPTIMFLAGS $(call nvcc-quote,$(CXXOPTIMFLAGS))'
MEXFLAGS_LD := $(MEXFLAGS) \
LDFLAGS='$$LDFLAGS $(LDFLAGS)' \
LDOPTIMFLAGS='$$LDOPTIMFLAGS $(LDOPTIMFLAGS)' \
LINKLIBS='$$LINKLIBS $(LINKLIBS)' \
LINKLIBS='$(LINKLIBS) $$LINKLIBS' \

NVCCFLAGS = $(CXXFLAGS) $(NVCCFLAGS_PASS) \
-I"$(MATLABROOT)/extern/include" \
2 changes: 2 additions & 0 deletions doc/Makefile
@@ -28,6 +28,7 @@ vl_nnnormalizelp.m \
vl_nnpdist.m \
vl_nnpool.m \
vl_nnrelu.m \
vl_nnroipool.m \
vl_nnsigmoid.m \
vl_nnsoftmax.m \
vl_nnsoftmaxloss.m \
@@ -38,6 +39,7 @@ vl_imreadjpeg.m \
vl_imreadjpeg.m \
vl_taccum.m \
vl_tmove.m \
vl_tshow.m \
simplenn/vl_simplenn.m \
simplenn/vl_simplenn_diagnose.m \
simplenn/vl_simplenn_tidy.m \
40 changes: 40 additions & 0 deletions doc/blocks.tex
@@ -214,6 +214,46 @@ \section{Spatial bilinear resampling}\label{s:spatial-sampler}

See \cref{s:impl-sampler} for implementation details.

% ------------------------------------------------------------------
\section{Region of interest pooling}\label{s:roi-pooling}
% ------------------------------------------------------------------

The \emph{region of interest (ROI) pooling} block applies max or average pooling to specified subwindows of a tensor. A region is a rectangular region $R = (u_-,v_-,u_+,v_+)$. The region itself is partitioned into $(H',W')$ tiles along the vertical and horizontal directions. The edges of the tiles have coordinates
\begin{align*}
v_{i'} &= v_- + \frac{v_+ - v_- + 1}{H'} (i' - 1), \quad i' = 1,\dots,H'+1,\\
u_{j'} &= u_- + \frac{u_+ - u_- + 1}{W'} (j' - 1), \quad j' = 1,\dots,W'+1.
\end{align*}
Following the implementation of~\cite{girshick15fast}, the $H'\times W'$ pooling tiles are given by
\[
\Omega_{i'j'} =
\{\lfloor v_{i'} \rfloor + 1, \dots, \lceil v_{i'+1} \rceil\}
\times
\{\lfloor u_{j'} \rfloor + 1, \dots, \lceil u_{j'+1} \rceil\}.
\]
Then the input and output tensors are as follows:
\[
\bx \in \mathbb{R}^{H \times W \times C},
\qquad
\by \in \mathbb{R}^{H' \times W' \times C},
\]
where
\[
y_{i'j'c} = \operatornamewithlimits{max}_{(i,j) \in \Omega_{i'j'}} x_{ijc}.
\]
Alternatively, $\max$ can be replaced by the averaging operator.

The extent of each region is defined by four coordinates as specified above; however, differently from tensor indexes, these use $(0,0)$ as the coordinate of the top-left pixel. In fact, if there is a single tile ($H'=W'=1$), then the region $(0,0,W-1,H-1)$ covers the whole input image:
\[
\Omega_{11} =
\{1, \dots, H\}
\times
\{1, \dots, W\}.
\]

In more detail, the input of the block is a sequence of $K$ regions. Each region pools one of the $T$ images in the batch stored in $\bx \in \mathbb{R}^{H\times W\times C\times T}$. Regions are therefore specified as a tensor $R \in \mathbb{R}^{5 \times K}$, where the first coordinate is the index of the pooled image in the batch. The output is a $\by \in \mathbb{R}^{H' \times W' \times C \times K}$ tensor.

For compatibility with~\cite{girshick15fast}, furthermore, the region coordinates are rounded to the nearest integer before the definitions above are used. Note also that, due to the discretization details, 1) tiles always contain at least one pixel, 2) there can be a pixel of overlap between them and 3) the discretization has a slight bias towards left-top pixels.
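The tiling and pooling rules above can be sketched in a few lines of NumPy. This is an illustrative reimplementation of the formulas for a single image and a single region, not MatConvNet's `vl_nnroipool` itself; the function name and signature are hypothetical.

```python
import numpy as np

def roipool(x, region, Hp, Wp, method="max"):
    """Pool an H x W x C tensor over one region, per the formulas above.

    region = (u0, v0, u1, v1) in 0-based image coordinates; the output
    has Hp x Wp tiles. Illustrative sketch, not vl_nnroipool.
    """
    u0, v0, u1, v1 = [round(c) for c in region]  # coordinates are rounded first
    # Tile edges v_{i'}, u_{j'} (fractional, 0-based).
    v = v0 + (v1 - v0 + 1) * np.arange(Hp + 1) / Hp
    u = u0 + (u1 - u0 + 1) * np.arange(Wp + 1) / Wp
    y = np.zeros((Hp, Wp, x.shape[2]), dtype=x.dtype)
    op = np.max if method == "max" else np.mean
    for i in range(Hp):
        for j in range(Wp):
            # Omega_{i'j'}: rows floor(v_i')..ceil(v_{i'+1})-1 in 0-based terms.
            r0, r1 = int(np.floor(v[i])), int(np.ceil(v[i + 1]))
            c0, c1 = int(np.floor(u[j])), int(np.ceil(u[j + 1]))
            y[i, j, :] = op(x[r0:r1, c0:c1, :], axis=(0, 1))
    return y
```

With a single tile and the full-image region, the slice covers the whole input, matching the $\Omega_{11}$ example above.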

% ------------------------------------------------------------------
\section{Normalization}\label{s:normalization}
% ------------------------------------------------------------------
10 changes: 5 additions & 5 deletions doc/impl.tex
@@ -167,7 +167,7 @@ \section{Spatial pooling}\label{s:impl-pooling}
\frac{d z}{d (\vv \by)^\top}
S(\bx),
$
for all but a null set of points, where the operator is not differentiable (this usually does not pose problems in optimization by stochastic gradient). For max-pooling, similar relations exist with two differences: $S$ does not depend on the input $\bx$ and it is not binary, in order to account for the normalization factors. In summary, we have the expressions:
for all but a null set of points, where the operator is not differentiable (this usually does not pose problems in optimization by stochastic gradient). For average pooling, similar relations exist with two differences: $S$ does not depend on the input $\bx$ and it is not binary, in order to account for the normalization factors. In summary, we have the expressions:
\begin{equation}\label{e:max-mat}
\boxed{
\vv\by = S(\bx) \vv \bx,
@@ -429,12 +429,12 @@ \subsection{Spatial normalization}\label{s:impl-spnorm}
The derivative of spatial normalization can be obtained as follows:
\begin{align*}
\frac{dz}{dx_{ijd}}
&= \sum_{i''j''d}
&= \sum_{i''j''}
\frac{dz}{d y_{i''j''d}}
\frac{d y_{i''j''d}}{d x_{ijd}}
\\
&=
\sum_{i''j''d}
\sum_{i''j''}
\frac{dz}{d y_{i''j''d}}
(1 + \alpha n_{i''j''d}^2)^{-\beta}
\frac{dx_{i''j''d}}{d x_{ijd}}
@@ -450,7 +450,7 @@ \subsection{Spatial normalization}\label{s:impl-spnorm}
(1 + \alpha n_{ijd}^2)^{-\beta}
-2\alpha\beta x_{ijd}
\left[
\sum_{i''j''d}
\sum_{i''j''}
\frac{dz}{d y_{i''j''d}}
(1 + \alpha n_{i''j''d}^2)^{-\beta-1}
x_{i''j''d}
@@ -462,7 +462,7 @@ \subsection{Spatial normalization}\label{s:impl-spnorm}
(1 + \alpha n_{ijd}^2)^{-\beta}
-2\alpha\beta x_{ijd}
\left[
\sum_{i''j''d}
\sum_{i''j''}
\eta_{i''j''d}
\frac{dn_{i''j''d}^2}{d (x_{ijd}^2)}
\right],
4 changes: 2 additions & 2 deletions doc/intro.tex
@@ -25,8 +25,8 @@ \section{Getting started}\label{s:getting-statrted}
\begin{lstlisting}[escapechar=!]
% install and compile MatConvNet (run once)
untar(['http://www.vlfeat.org/matconvnet/download/' ...
'matconvnet-1.0-beta12.tar.gz']) ;
cd matconvnet-1.0-beta12
'matconvnet-1.0-beta24.tar.gz']) ;
cd matconvnet-1.0-beta24
run matlab/vl_compilenn

% download a pre-trained CNN from the web (run once)
2 changes: 1 addition & 1 deletion doc/matdocparser.py
@@ -51,7 +51,7 @@
import re

__mpname__ = 'MatDocParser'
__version__ = '1.0-beta15'
__version__ = '1.0-beta24'
__date__ = '2015-09-20'
__description__ = 'MatDoc MATLAB inline function description interpreter.'
__long_description__ = __doc__
22 changes: 22 additions & 0 deletions doc/site/docs/about.md
@@ -23,6 +23,28 @@ here.
<a name='changes'></a>
# Changes

- 1.0-beta24 (March 2017).

**New features**

* New toy example `cnn_toy_data.m` demonstrating using a
customized `imdb`.
* `vl_argparse.m` now supports dot paths and ignoring missing
defaults.
* Support for different example solvers (AdaGrad, Adam, AdaDelta,
RMSProp) and ability to add new ones.
* A new function `vl_tshow.m` to glance at tensors.
* Bugfixes.
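The solver support listed above boils down to per-parameter update rules. Below is a minimal NumPy sketch of the AdaGrad and Adam updates; function names, defaults, and the epsilon placement are illustrative (one commit in this PR notes that the epsilon position in AdaGrad varies in the literature), not the exact interface of the solvers shipped in `examples/`.

```python
import numpy as np

def adagrad(w, g, state, lr=0.1, eps=1e-10):
    """One AdaGrad step: accumulate squared gradients, scale the step down.

    `state` holds the running sum of g^2; epsilon outside the sqrt here
    (placements differ across the literature).
    """
    state = state + g * g
    w = w - lr * g / (np.sqrt(state) + eps)
    return w, state

def adam(w, g, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step with bias-corrected first/second moment estimates."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    mhat = m / (1 - b1 ** t)   # bias correction, t = 1, 2, ...
    vhat = v / (1 - b2 ** t)
    w = w - lr * mhat / (np.sqrt(vhat) + eps)
    return w, m, v
```

For a quadratic loss $f(w) = w^2$ (gradient $2w$), a few hundred of either update shrink $|w|$ toward zero, which is the kind of convergence the `nnsolvers` unit tests in this PR check.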

- 1.0-beta23 (September 2016).

**New features**

* A new function `vl_nnroipool.m` for region of interest pooling,
supporting networks such as Fast-RCNN.
* Imported Fast-RCNN models from Caffe.
* An example Fast-RCNN implementation, training and testing.

- 1.0-beta22 (September 2016).

* Bugfixes.
2 changes: 1 addition & 1 deletion doc/site/docs/css/fixes.css
@@ -66,7 +66,7 @@ a { color: #00438E ; }

#Functions .dropdown-menu {
color: #000;
max-height: 400px;
max-height: 800px;
width: 342px;
}

2 changes: 2 additions & 0 deletions doc/site/docs/functions.md
@@ -33,6 +33,7 @@ showing how to train CNNs.
- [`vl_nnpdist`](mfiles/vl_nnpdist.md) Pairwise distances.
- [`vl_nnpool`](mfiles/vl_nnpool.md) Max and sum pooling.
- [`vl_nnrelu`](mfiles/vl_nnrelu.md) Rectified Linear Unit.
- [`vl_nnroipool`](mfiles/vl_nnroipool.md) Region of interest pooling.
- [`vl_nnsigmoid`](mfiles/vl_nnsigmoid.md) Sigmoid.
- [`vl_nnsoftmax`](mfiles/vl_nnsoftmax.md) Channel soft-max.
- [`vl_nnsoftmaxloss`](mfiles/vl_nnsoftmaxloss.md) *Deprecated*
@@ -70,3 +71,4 @@ showing how to train CNNs.
- [`vl_imreadjpeg`](mfiles/vl_imreadjpeg.md) Quickly load a batch of JPEG images.
- [`vl_taccum`](mfiles/vl_taccum.md) Accumulate tensors operating in-place when possible.
- [`vl_tmove`](mfiles/vl_tmove.md) Exchange tensors between MATLAB processes and GPUs.
- [`vl_tshow`](mfiles/vl_tshow.md) Show a tensor on screen.
10 changes: 8 additions & 2 deletions doc/site/docs/index.md
@@ -2,7 +2,7 @@

<div class="row" style="white-space: nowrap;">
<div class="col-sm-3">
<a href="download/matconvnet-1.0-beta21.tar.gz">
<a href="download/matconvnet-1.0-beta24.tar.gz">
<div class="menuicon"><span class="fa fa-download fa-2x"></span></div>
Download</a>
</div>
@@ -31,6 +31,12 @@ efficient, and can run and learn state-of-the-art CNNs. Many
pre-trained CNNs for image classification, segmentation, face
recognition, and text detection are available.

> **New:** [1.0-beta24](about.md#changes) released with bugfixes, new
> examples, and utility functions.
>
> **New:** [1.0-beta23](about.md#changes) released with
> [`vl_nnroipool`](mfiles/vl_nnroipool) and a Fast-RCNN demo.
>
> **New:** [1.0-beta22](about.md#changes) released with a few bugfixes.
>
> **New:** [1.0-beta21](about.md#changes) provides two new tools,
@@ -52,7 +58,7 @@ recognition, and text detection are available.
> numerous other improvements and bugfixes.

## Obtaining MatConvNet
- <span class="fa fa-file-archive-o"></span>&nbsp;Tarball for [version 1.0-beta22](download/matconvnet-1.0-beta22.tar.gz); [older versions](download/) (<span class="fa fa-apple"/> <span class="fa fa-windows"/> <span class="fa fa-linux"/>)
- <span class="fa fa-file-archive-o"></span>&nbsp;Tarball for [version 1.0-beta24](download/matconvnet-1.0-beta24.tar.gz); [older versions](download/) (<span class="fa fa-apple"/> <span class="fa fa-windows"/> <span class="fa fa-linux"/>)
- <span class="fa fa-github"></span>&nbsp;[GIT repository](http://www.github.com/vlfeat/matconvnet.git)
- <span class="fa fa-pencil-square-o"></span>&nbsp;<a href="javascript:void(0);"
onclick="toggle_visibility('citation');">Citation</a>