diff --git a/README.md b/README.md
new file mode 100644
index 00000000..52a2de9c
--- /dev/null
+++ b/README.md
@@ -0,0 +1,29 @@
+
+# Torch Package Reference Manual #
+
+__Torch__ is the main package in [Torch7](http://torch.ch) where data
+structures for multi-dimensional tensors and mathematical operations
+over these are defined. Additionally, it provides utilities for
+accessing files, serializing objects of arbitrary types, and other
+useful tasks.
+
+
+## Torch Packages ##
+
+ * Tensor Library
+ * [Tensor](doc/tensor.md) defines the _all powerful_ tensor object that provides multi-dimensional numerical arrays with type templating.
+ * [Mathematical operations](doc/maths.md) that are defined for the tensor object types.
+ * [Storage](doc/storage.md) defines a simple storage interface that controls the underlying storage for any tensor object.
+ * File I/O Interface Library
+ * [File](doc/file.md) is an abstract interface for common file operations.
+ * [Disk File](doc/diskfile.md) defines operations on files stored on disk.
+ * [Memory File](doc/memoryfile.md) defines operations on files stored in RAM.
+ * [Pipe File](doc/pipefile.md) defines operations for using piped commands.
+ * [High-Level File operations](doc/serialization.md) defines higher-level serialization functions.
+ * Useful Utilities
+ * [Timer](doc/timer.md) provides functionality for _measuring time_.
+ * [Tester](doc/tester.md) is a generic tester framework.
+ * [CmdLine](doc/cmdline.md) is a command line argument parsing utility.
+ * [Random](doc/random.md) defines a random number generator package with various distributions.
+ * Finally, useful [utility](doc/utility.md) functions are provided for easy handling of torch tensor types and class inheritance.
+
diff --git a/dok/cmdline.dok b/doc/cmdline.md
similarity index 65%
rename from dok/cmdline.dok
rename to doc/cmdline.md
index 680babf7..4d406010 100644
--- a/dok/cmdline.dok
+++ b/doc/cmdline.md
@@ -1,5 +1,5 @@
-====== CmdLine ======
-{{anchor:torch.CmdLine.dok}}
+
+# CmdLine #
This class provides a parameter parsing framework which is very
useful when one needs to run several experiments that rely on
@@ -7,10 +7,10 @@ different parameter settings that are passed in the command line.
This class will also override the default print function to direct
all the output to a log file as well as screen at the same time.
-A sample ''lua'' file is given below that makes use of ''CmdLine''
+A sample `lua` file is given below that makes use of `CmdLine`
class.
-
+```lua
cmd = torch.CmdLine()
cmd:text()
@@ -31,16 +31,16 @@ params.rundir = cmd:string('experiment', params, {dir=true})
-- create log file
cmd:log(params.rundir .. '/log', params)
-
+```
When this file is run on the lua command line as follows
-
+```shell
# lua myscript.lua
-
+```
It will produce the following output:
-
+```
[program started on Tue Jan 10 15:33:49 2012]
[command line arguments]
booloption false
@@ -52,20 +52,20 @@ booloption false
seed 123
rundir experiment
stroption mystring
-
+```
The same output will also be written to file
-''experiment/log''. Whenever one of the options are passed on the
-command line and is different than the default value, the ''rundir''
+`experiment/log`. Whenever one of the options is passed on the
+command line and differs from the default value, the `rundir`
name is produced to reflect the parameter setting.
-
+```shell
# lua myscript.lua -seed 456 -stroption mycustomstring
-
+```
This will produce the following output:
-
+```
[program started on Tue Jan 10 15:36:55 2012]
[command line arguments]
booloption false
@@ -77,69 +77,70 @@ booloption false
seed 456
rundir experiment,seed=456,stroption=mycustomstring
stroption mycustomstring
-
+```
and the output will be logged in
-''experiment,seed=456,stroption=mycustomstring/log''
+`experiment,seed=456,stroption=mycustomstring/log`
-==== addTime([name] [,format]) ====
-{{anchor:torch.CmdLine.addtime}}
+
+### addTime([name] [,format]) ###
Adds a prefix to every line in the log file with the date/time in the
given format with an optional name argument. The date/time format is
-the same as ''os.date()''. Note that the prefix is only added to the
+the same as `os.date()`. Note that the prefix is only added to the
log file, not the screen output. The default value for name is empty
and the default format is '%F %T'.
The final produced output for the following command is:
-
+```lua
> cmd:addTime('your project name','%F %T')
> print('Your log message')
-
+```
-
+```
2012-02-07 08:21:56[your project name]: Your log message
-
+```
-==== log(filename, parameter_table) ====
-{{anchor:torch.CmdLine.log}}
+
+### log(filename, parameter_table) ###
-It sets the log filename to ''filename'' and prints the values of
-parameters in the ''parameter_table''.
+Sets the log filename to `filename` and prints the values of the
+parameters in `parameter_table`.
-==== option(name, default, help) ====
-{{anchor:torch.CmdLine.option}}
+
+### option(name, default, help) ###
Stores an option argument. The name should always start with '-'.
-==== [table] parse(arg) ====
-{{anchor:torch.CmdLine.parse}}
+
+### [table] parse(arg) ###
-Parses a given table, ''arg'' is by default the argument table that
-is created by ''lua'' using the command line arguments passed to the
+Parses a given table; `arg` is by default the argument table that
+is created by `lua` from the command line arguments passed to the
executable. Returns a table of option values.
-==== silent() ====
-{{anchor:torch.CmdLine.silent}}
+
+### silent() ###
Silences the output to standard output. The only output is written to
the log file.
-==== [string] string(prefix, params, ignore) ====
-{{anchor:torch.CmdLine.string}}
+
+### [string] string(prefix, params, ignore) ###
Returns a string representation of the options by concatenating the
-non-default options. ''ignore'' is a table ''{dir=true}'', which will
-ensure that option named ''dir'' will be ignored while creating the
+non-default options. `ignore` is a table such as `{dir=true}`, which will
+ensure that the option named `dir` is ignored while creating the
string representation.
This function is useful for creating unique experiment directories that
depend on the parameter settings.
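+
+Recapping the sample script above, a minimal sketch of how `cmd:string` builds such a directory name (the resulting names follow the log output shown earlier):
+
+```lua
+-- ignore the 'dir' option when building the directory name
+params.rundir = cmd:string('experiment', params, {dir=true})
+-- with default options only, this yields 'experiment';
+-- after `-seed 456` it yields 'experiment,seed=456'
+```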
-==== text(string) ====
-{{anchor:torch.CmdLine.text}}
+
+### text(string) ###
Logs a custom text message.
+
diff --git a/doc/diskfile.md b/doc/diskfile.md
new file mode 100644
index 00000000..827ecb65
--- /dev/null
+++ b/doc/diskfile.md
@@ -0,0 +1,65 @@
+
+# DiskFile #
+
+Parent class: [File](file.md)
+
+A `DiskFile` is a particular `File` which is able to perform basic read/write operations
+on a file stored on disk. It implements all methods described in [File](file.md), and
+some additional methods related to _endian_ encoding.
+
+By default, a `DiskFile` is in [ASCII](file.md#torch.File.binary) mode. If changed to
+the [binary](file.md#torch.File.binary) mode, the default endian encoding is the native
+computer one.
+
+The file may be opened in read, write, or read-write mode, depending on the parameter
+`mode` (which can take the value `"r"`, `"w"` or `"rw"` respectively)
+given to the [torch.DiskFile(fileName, mode)](#torch.DiskFile) constructor.
+
+
+### torch.DiskFile(fileName, [mode], [quiet]) ###
+
+_Constructor_ which opens `fileName` on disk, using the given `mode`. Valid values for `mode` are
+`"r"` (read), `"w"` (write) or `"rw"` (read-write). The default is read mode.
+
+In read-write mode, the file _will be created_ if it does not exist. If it
+exists, it will be positioned at the beginning of the file after opening.
+
+If (and only if) `quiet` is `true`, no error will be raised in case of a
+problem opening the file; instead, `nil` will be returned.
+
+The file is opened in [ASCII](file.md#torch.File.ascii) mode by default.
+
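+A minimal usage sketch (file names hypothetical):
+
+```lua
+-- open for writing; the file is created if absent
+f = torch.DiskFile('data.asc', 'w')
+f:writeInt(42)
+f:close()
+
+-- quiet mode: returns nil instead of raising an error
+f = torch.DiskFile('missing.bin', 'r', true)
+if not f then print('could not open file') end
+```
+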
+
+### bigEndianEncoding() ###
+
+In [binary](file.md#torch.File.binary) mode, force encoding in _big endian_.
+(_big end first_: decreasing numeric significance with increasing memory
+addresses)
+
+
+### [boolean] isBigEndianCPU() ###
+
+Returns `true` if, and only if, the computer CPU operates in _big endian_.
+_Big end first_: decreasing numeric significance with increasing
+memory addresses.
+
+
+### [boolean] isLittleEndianCPU() ###
+
+Returns `true` if, and only if, the computer CPU operates in _little endian_.
+_Little end first_: increasing numeric significance with increasing
+memory addresses.
+
+
+### littleEndianEncoding() ###
+
+In [binary](file.md#torch.File.binary) mode, force encoding in _little endian_.
+(_little end first_: increasing numeric significance with increasing memory
+addresses)
+
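+For instance, to write a binary file in a fixed byte order regardless of the host CPU, one could sketch (file name hypothetical):
+
+```lua
+f = torch.DiskFile('data.bin', 'w')
+f:binary()
+f:littleEndianEncoding() -- force little endian on any host
+f:writeInt(1234)
+f:close()
+```
+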
+
+### nativeEndianEncoding() ###
+
+In [binary](file.md#torch.File.binary) mode, force encoding in _native endian_.
+
+
diff --git a/doc/file.md b/doc/file.md
new file mode 100644
index 00000000..637a3d0f
--- /dev/null
+++ b/doc/file.md
@@ -0,0 +1,348 @@
+
+# File #
+
+This is an _abstract_ class. It defines most methods implemented by its
+child classes, like [DiskFile](diskfile.md),
+[MemoryFile](memoryfile.md) and [PipeFile](pipefile.md).
+
+Methods defined here are intended for basic read/write functionalities.
+Read/write methods might write in [ASCII](#torch.File.ascii) mode or
+[binary](#torch.File.binary) mode.
+
+In [ASCII](#torch.File.ascii) mode, numbers are converted in human readable
+format (characters). Booleans are converted into `0` (false) or `1` (true).
+In [binary](#torch.File.binary) mode, numbers and booleans are directly encoded
+as they are represented in a register of the computer. While not human
+readable and less portable, the binary mode is obviously faster.
+
+In [ASCII](#torch.File.ascii) mode, if the default option
+[autoSpacing()](#torch.File.autoSpacing) is chosen, a space will be generated
+after each written number or boolean. A carriage return will also be added
+after each call to a write method. With this option, the spaces are
+supposed to exist while reading. This option can be deactivated with
+[noAutoSpacing()](#torch.File.noAutoSpacing).
+
+A `Lua` error might or might not be generated in case of a read/write error
+or a problem in the file. This depends on the choice made between the
+[quiet()](#torch.File.quiet) and [pedantic()](#torch.File.pedantic) options. It
+is possible to query whether an error occurred in the last operation by calling
+[hasError()](#torch.File.hasError).
+
+
+## Read methods ##
+
+
+There are three types of reading methods:
+ - `[number] readTYPE()`
+ - `[TYPEStorage] readTYPE(n)`
+ - `[number] readTYPE(TYPEStorage)`
+
+where `TYPE` can be either `Byte`, `Char`, `Short`, `Int`, `Long`, `Float` or `Double`.
+
+A convenience method also exists for boolean types: `[boolean] readBool()`. It reads
+a value from the file with `readInt()` and returns `true` if and only if this value is `1`. It is not possible
+to read storages of booleans.
+
+All these methods depend on the encoding choice: [ASCII](#torch.File.ascii)
+or [binary](#torch.File.binary) mode. In [ASCII](#torch.File.ascii) mode, the
+options [autoSpacing()](#torch.File.autoSpacing) and
+[noAutoSpacing()](#torch.File.noAutoSpacing) also have an effect on these
+methods.
+
+If no parameter is given, one element is returned. This element is
+converted to a `Lua` number when reading.
+
+If `n` is given, `n` values of the specified type are read
+and returned in a new [Storage](storage.md) of that particular type.
+The storage size corresponds to the number of elements actually read.
+
+If a `Storage` is given, the method will attempt to read a number of elements
+equal to the size of the given storage, and fill up the storage with these elements.
+The number of elements actually read is returned.
+
+In case of read error, these methods will call the `Lua` error function using the default
+[pedantic](#torch.File.pedantic) option, or stay quiet with the [quiet](#torch.File.quiet)
+option. In the latter case, one can check if an error occurred with
+[hasError()](#torch.File.hasError).
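+
+The three forms can be sketched as follows (assuming `numbers.asc` is an existing ASCII file containing enough numbers):
+
+```lua
+f = torch.DiskFile('numbers.asc', 'r')
+n = f:readInt()        -- one element, returned as a Lua number
+s = f:readInt(5)       -- five elements, returned in a new IntStorage
+t = torch.IntStorage(3)
+count = f:readInt(t)   -- fills t, returns the number of elements read
+f:close()
+```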
+
+
+## Write methods ##
+
+
+There are two types of writing methods:
+ - `[number] writeTYPE(number)`
+ - `[number] writeTYPE(TYPEStorage)`
+
+where `TYPE` can be either `Byte`, `Char`, `Short`, `Int`, `Long`, `Float` or `Double`.
+
+A convenience method also exists for boolean types: `writeBool(value)`. If `value` is `nil` or
+not `true`, it is equivalent to a `writeInt(0)` call, else to `writeInt(1)`. It is not possible
+to write storages of booleans.
+
+All these methods depend on the encoding choice: [ASCII](#torch.File.ascii)
+or [binary](#torch.File.binary) mode. In [ASCII](#torch.File.ascii) mode, the
+options [autoSpacing()](#torch.File.autoSpacing) and
+[noAutoSpacing()](#torch.File.noAutoSpacing) also have an effect on these
+methods.
+
+If one `Lua` number is given, this number is converted according to the
+name of the method when writing (e.g. `writeInt(3.14)` will write `3`).
+
+If a `Storage` is given, the method will attempt to write all the elements contained
+in the storage.
+
+These methods return the number of elements actually written.
+
+In case of write error, these methods will call the `Lua` error function using the default
+[pedantic](#torch.File.pedantic) option, or stay quiet with the [quiet](#torch.File.quiet)
+option. In the latter case, one can check if an error occurred with
+[hasError()](#torch.File.hasError).
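+
+A corresponding writing sketch (file name hypothetical):
+
+```lua
+f = torch.DiskFile('numbers.asc', 'w')
+f:writeInt(3.14)                 -- writes 3 (converted per the method name)
+s = torch.DoubleStorage(3):fill(1.5)
+f:writeDouble(s)                 -- writes all 3 elements, returns 3
+f:close()
+```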
+
+
+## Serialization methods ##
+
+These methods allow the user to save any serializable object on disk and
+reload it later in its original state. In other words, they can perform a
+_deep_ copy of an object into a given `File`.
+
+Serializable objects are `Torch` objects having a `read()` and
+`write()` method. `Lua` objects such as `table`, `number` or
+`string` or _pure Lua_ functions are also serializable.
+
+If the object to save contains several other objects (say, a tree
+of objects), then objects appearing several times in this tree will be
+_saved only once_. This saves disk space, speeds up loading/saving and
+respects the dependencies between objects.
+
+Interestingly, if the `File` is a [MemoryFile](memoryfile.md), it allows
+the user to easily make a _clone_ of any serializable object:
+```lua
+file = torch.MemoryFile() -- creates a file in memory
+file:writeObject(object) -- writes the object into file
+file:seek(1) -- comes back at the beginning of the file
+objectClone = file:readObject() -- gets a clone of object
+```
+
+
+### readObject() ###
+
+Returns the next [serializable](#torch.File.serialization) object saved beforehand
+in the file with [writeObject()](#torch.File.writeObject).
+
+Note that objects which were [written](#torch.File.writeObject) with the same
+reference have still the same reference after loading.
+
+Example:
+```lua
+-- creates an array which contains twice the same tensor
+array = {}
+x = torch.Tensor(1)
+table.insert(array, x)
+table.insert(array, x)
+
+-- array[1] and array[2] refer to the same address
+-- x[1] == array[1][1] == array[2][1] == 3.14
+array[1][1] = 3.14
+
+-- write the array on disk
+file = torch.DiskFile('foo.asc', 'w')
+file:writeObject(array)
+file:close() -- make sure the data is written
+
+-- reload the array
+file = torch.DiskFile('foo.asc', 'r')
+arrayNew = file:readObject()
+
+-- arrayNew[1] and arrayNew[2] refer to the same address!
+-- arrayNew[1][1] == arrayNew[2][1] == 3.14
+-- so if we do now:
+arrayNew[1][1] = 2.72
+-- arrayNew[1][1] == arrayNew[2][1] == 2.72 !
+```
+
+
+### writeObject(object) ###
+
+Writes `object` into the file. This object can be read later using
+[readObject()](#torch.File.readObject). Serializable objects are `Torch`
+objects having a `read()` and `write()` method. `Lua` objects such as
+`table`, `number` or `string` or pure Lua functions are also serializable.
+
+If the object has already been written in the file, only a _reference_ to
+this already saved object will be written: this saves space and speeds up
+writing; it also keeps the dependencies between objects intact.
+
+Consequently, if one writes an object, modifies one of its members, and writes the
+object again in the same file, the modifications will not be recorded
+in the file, as only a reference to the original will be written. See
+[readObject()](#torch.File.readObject) for an example.
+
+
+### [string] readString(format) ###
+
+If `format` starts with `"*l"` then the next line in the `File` is returned. The end-of-line character is skipped.
+
+If `format` starts with `"*a"` then all the remaining contents of the `File` are returned.
+
+If no data is available, then an error is raised, except if `File` is in [quiet()](#torch.File.quiet) mode where
+it then returns `nil`.
+
+Because Torch is more precise about number typing, the `Lua` format `"*n"` is not supported:
+instead use one of the [number read methods](#torch.File.read).
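+
+For example (a sketch, assuming `log.txt` is a readable text file):
+
+```lua
+f = torch.DiskFile('log.txt', 'r')
+line = f:readString('*l') -- next line, end-of-line character skipped
+rest = f:readString('*a') -- all remaining contents
+f:close()
+```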
+
+
+### [number] writeString(str) ###
+
+Writes the string `str` to the `File`. If the string cannot be written completely, an error is raised, except
+if `File` is in [quiet()](#torch.File.quiet) mode, where it returns the number of characters actually written.
+
+## General Access and Control Methods ##
+
+
+### ascii() [default] ###
+
+The data read or written will be in `ASCII` mode: all numbers are converted
+to characters (human readable format) and booleans are converted to `0`
+(false) or `1` (true). The input-output format in this mode depends on the
+options [autoSpacing()](#torch.File.autoSpacing) and
+[noAutoSpacing()](#torch.File.noAutoSpacing).
+
+
+### autoSpacing() [default] ###
+
+In [ASCII](#torch.File.ascii) mode, write additional spaces around the elements
+written on disk: if writing a [Storage](storage.md), a space will be
+generated between each _element_ and a _return line_ after the last
+element. If only writing one element, a _return line_ will be generated
+after this element.
+
+Those spaces are supposed to exist while reading in this mode.
+
+This is the default behavior. You can de-activate this option with the
+[noAutoSpacing()](#torch.File.noAutoSpacing) method.
+
+
+### binary() ###
+
+The data read or written will be in binary mode: the representation in the
+`File` is the same as the one in the computer memory/register (not human
+readable). This mode is faster than [ASCII](#torch.File.ascii) but less
+portable.
+
+
+### clearError() ###
+
+Clears the error flag returned by [hasError()](#torch.File.hasError).
+
+
+### close() ###
+
+Close the file. Any subsequent operation will generate a `Lua` error.
+
+
+### noAutoSpacing() ###
+
+In [ASCII](#torch.File.ascii) mode, do not put extra spaces between elements
+written on disk. This is the opposite of the option
+[autoSpacing()](#torch.File.autoSpacing).
+
+
+### synchronize() ###
+
+If the child class buffers the data while writing, this ensures that the data
+is actually written.
+
+
+
+### pedantic() [default] ###
+
+If this mode is chosen (which is the default), a `Lua` error will be
+generated in case of error (which will cause the program to stop).
+
+It is possible to use [quiet()](#torch.File.quiet) to avoid `Lua` error generation
+and set a flag instead.
+
+
+### [number] position() ###
+
+Returns the current position (in bytes) in the file.
+The first position is `1` (following Lua standard indexing).
+
+
+### quiet() ###
+
+If this mode is chosen instead of [pedantic()](#torch.File.pedantic), no `Lua`
+error will be generated in case of read/write error. Instead, a flag will
+be raised, readable through [hasError()](#torch.File.hasError). This flag can
+be cleared with [clearError()](#torch.File.clearError).
+
+Checking if a file is quiet can be performed using [isQuiet()](#torch.File.isQuiet).
+
+
+### seek(position) ###
+
+Jump into the file at the given `position` (in bytes). Might raise
+an error in case of a problem. The first position is `1` (following Lua standard indexing).
+
+
+### seekEnd() ###
+
+Jump to the end of the file. Might raise an error in case of a
+problem.
+
+## File state query ##
+
+These methods allow the user to query the state of the given `File`.
+
+
+### [boolean] hasError() ###
+
+Returns whether an error has occurred since the last [clearError()](#torch.File.clearError) call, or since
+the opening of the file if `clearError()` has never been called.
+
+
+### [boolean] isQuiet() ###
+
+Returns a boolean which tells if the file is in [quiet](#torch.File.quiet) mode or not.
+
+
+### [boolean] isReadable() ###
+
+Tells if one can read the file or not.
+
+
+### [boolean] isWritable() ###
+
+Tells if one can write in the file or not.
+
+
+### [boolean] isAutoSpacing() ###
+
+Return `true` if [autoSpacing](#torch.File.autoSpacing) has been chosen.
+
+
+### referenced(ref) ###
+
+Sets the referenced property of the File to `ref`, which must be `true` or `false`. By default it is `true`, which means that a File object keeps track of objects written using the [writeObject](#torch.File.writeObject) method. When one needs to push the same tensor repeatedly into a file while changing its contents each time, calling `referenced(false)` ensures the desired behaviour.
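+
+For instance, a sketch of the repeated-write case (file name hypothetical):
+
+```lua
+f = torch.DiskFile('tensors.bin', 'w')
+f:referenced(false) -- do not collapse repeated writes into references
+x = torch.Tensor(10)
+for i = 1, 3 do
+   x:fill(i)
+   f:writeObject(x) -- each write stores the current contents of x
+end
+f:close()
+```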
+
+
+### isReferenced() ###
+
+Return the state set by [referenced](#torch.File.referenced).
+
+
+
diff --git a/doc/maths.md b/doc/maths.md
new file mode 100644
index 00000000..536820ad
--- /dev/null
+++ b/doc/maths.md
@@ -0,0 +1,1652 @@
+
+
+# Math Functions #
+
+Torch provides Matlab-like functions for manipulating
+[Tensor](README.md#Tensor) objects. Functions fall into several types of
+categories:
+ * [constructors](#torch.construction.dok) like [zeros](#torch.zeros), [ones](#torch.ones)
+ * extractors like [diag](#torch.diag) and [triu](#torch.triu),
+ * [Element-wise](#torch.elementwise.dok) mathematical operations like [abs](#torch.abs) and [pow](#torch.pow),
+ * [BLAS](#torch.basicoperations.dok) operations,
+ * [column or row-wise operations](#torch.columnwise.dok) like [sum](#torch.sum) and [max](#torch.max),
+ * [matrix-wide operations](#torch.matrixwide.dok) like [trace](#torch.trace) and [norm](#torch.norm).
+ * [Convolution and cross-correlation](#torch.conv.dok) operations like [conv2](#torch.conv2).
+ * [Basic linear algebra operations](#torch.linalg.dok) like [eigen value/vector calculation](#torch.eig).
+ * [Logical Operations on Tensors](#torch.logical.dok).
+
+By default, all operations allocate a new tensor to return the
+result. However, all functions also support passing the target
+tensor(s) as the first argument(s), in which case the target tensor(s)
+will be resized accordingly and filled with the result. This property is
+especially useful when one wants to have tight control over when memory
+is allocated.
+
+The torch package adopts the same concept, so that calling a function
+directly on the tensor itself using an object-oriented syntax is
+equivalent to passing the tensor as the optional resulting tensor. The
+following two calls are equivalent.
+
+```lua
+torch.log(x,x)
+x:log()
+```
+
+Similarly, `torch.conv2` function can be used in the following manner.
+```lua
+
+x = torch.rand(100,100)
+k = torch.rand(10,10)
+res1 = torch.conv2(x,k)
+
+res2 = torch.Tensor()
+torch.conv2(res2,x,k)
+
+=res2:dist(res1)
+0
+
+```
+
+The advantage of the second case is that the same `res2` tensor can be used successively in a loop without any new allocation.
+
+```lua
+-- no new memory allocations...
+for i=1,100 do
+ torch.conv2(res2,x,k)
+end
+=res2:dist(res1)
+0
+```
+
+
+## Construction or extraction functions ##
+
+
+### [res] torch.cat( [res,] x_1, x_2, [dimension] ) ###
+
+`x=torch.cat(x_1,x_2,[dimension])` returns a tensor `x` which is the concatenation of tensors `x_1` and `x_2` along dimension `dimension`.
+
+If `dimension` is not specified, it is the last dimension.
+
+The other dimensions of `x_1` and `x_2` have to be equal.
+
+Examples:
+```lua
+> print(torch.cat(torch.ones(3),torch.zeros(2)))
+
+ 1
+ 1
+ 1
+ 0
+ 0
+[torch.Tensor of dimension 5]
+
+
+> print(torch.cat(torch.ones(3,2),torch.zeros(2,2),1))
+
+ 1 1
+ 1 1
+ 1 1
+ 0 0
+ 0 0
+[torch.DoubleTensor of dimension 5x2]
+
+
+> print(torch.cat(torch.ones(2,2),torch.zeros(2,2),1))
+ 1 1
+ 1 1
+ 0 0
+ 0 0
+[torch.DoubleTensor of dimension 4x2]
+
+> print(torch.cat(torch.ones(2,2),torch.zeros(2,2),2))
+ 1 1 0 0
+ 1 1 0 0
+[torch.DoubleTensor of dimension 2x4]
+
+
+> print(torch.cat(torch.cat(torch.ones(2,2),torch.zeros(2,2),1),torch.rand(3,2),1))
+
+ 1.0000 1.0000
+ 1.0000 1.0000
+ 0.0000 0.0000
+ 0.0000 0.0000
+ 0.3227 0.0493
+ 0.9161 0.1086
+ 0.2206 0.7449
+[torch.DoubleTensor of dimension 7x2]
+
+```
+
+
+
+### [res] torch.diag([res,] x [,k]) ###
+
+
+`y=torch.diag(x)` when `x` is of dimension 1 returns a diagonal matrix with diagonal elements constructed from `x`.
+
+`y=torch.diag(x)` when `x` is of dimension 2 returns a tensor of dimension 1
+with elements constructed from the diagonal of `x`.
+
+`y=torch.diag(x,k)` returns the k-th diagonal of `x`,
+where `k=0` is the main diagonal, `k>0` is above the main diagonal and `k<0`
+is below the main diagonal.
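+
+For example:
+
+```lua
+> x = torch.range(1,3)
+> print(torch.diag(x))
+
+ 1 0 0
+ 0 2 0
+ 0 0 3
+[torch.DoubleTensor of dimension 3x3]
+```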
+
+
+### [res] torch.eye([res,] n [,m]) ###
+
+
+`y=torch.eye(n)` returns the n-by-n identity matrix.
+
+`y=torch.eye(n,m)` returns an n-by-m matrix with ones on the diagonal and zeros elsewhere.
+
+
+
+### [res] torch.linspace([res,] x1, x2 [,n]) ###
+
+
+`y=torch.linspace(x1,x2)` returns a one-dimensional tensor of 100 equally spaced points between `x1` and `x2`.
+
+`y=torch.linspace(x1,x2,n)` returns a one-dimensional tensor of `n` equally spaced points between `x1` and `x2`.
+
+
+
+### [res] torch.logspace([res,] x1, x2 [,n]) ###
+
+
+`y=torch.logspace(x1,x2)` returns a one-dimensional tensor of 50 logarithmically equally spaced points between `x1` and `x2`.
+
+`y=torch.logspace(x1,x2,n)` returns a one-dimensional tensor of `n` logarithmically equally spaced points between `x1` and `x2`.
+
+
+### [res] torch.ones([res,] m [,n...]) ###
+
+
+`y=torch.ones(n)` returns a one-dimensional tensor of size n filled with ones.
+
+`y=torch.ones(m,n)` returns a mxn tensor filled with ones.
+
+For more than 4 dimensions, you can use a storage as argument:
+`y=torch.ones(torch.LongStorage{m,n,k,l,o})`.
+
+
+### [res] torch.rand([res,] m [,n...]) ###
+
+
+`y=torch.rand(n)` returns a one-dimensional tensor of size n filled with random numbers from a uniform distribution on the interval (0,1).
+
+`y=torch.rand(m,n)` returns a mxn tensor of random numbers from a uniform distribution on the interval (0,1).
+
+For more than 4 dimensions, you can use a storage as argument:
+`y=torch.rand(torch.LongStorage{m,n,k,l,o})`
+
+
+### [res] torch.randn([res,] m [,n...]) ###
+
+
+`y=torch.randn(n)` returns a one-dimensional tensor of size n filled with random numbers from a normal distribution with mean zero and variance one.
+
+`y=torch.randn(m,n)` returns a mxn tensor of random numbers from a normal distribution with mean zero and variance one.
+
+For more than 4 dimensions, you can use a storage as argument:
+`y=torch.randn(torch.LongStorage{m,n,k,l,o})`
+
+
+### [res] torch.range([res,] x, y [,step]) ###
+
+
+`y=torch.range(x,y)` returns a tensor of size `(int)(y-x)+1` with values
+from `x` to `y` with step 1. You can modify the default step with:
+`y=torch.range(x,y,step)`
+
+```lua
+> print(torch.range(2,5))
+
+ 2
+ 3
+ 4
+ 5
+[torch.Tensor of dimension 4]
+```
+
+`y=torch.range(n,m,incr)` returns a tensor with values from `n` to `m` in increments of `incr`.
+```lua
+print(torch.range(2,5,1.2))
+ 2.0000
+ 3.2000
+ 4.4000
+[torch.DoubleTensor of dimension 3]
+```
+
+
+### [res] torch.randperm([res,] n) ###
+
+
+`y=torch.randperm(n)` returns a random permutation of integers from 1 to n.
+
+
+### [res] torch.reshape([res,] x, m [,n...]) ###
+
+
+`y=torch.reshape(x,m,n)` returns a new mxn tensor y whose elements
+are taken rowwise from x, which must have m*n elements. The elements are copied into the new tensor.
+
+For more than 4 dimensions, you can use a storage:
+`y=torch.reshape(x,torch.LongStorage{m,n,k,l,o})`
+
+
+### [res] torch.tril([res,] x [,k]) ###
+
+
+`y=torch.tril(x)` returns the lower triangular part of x, the other elements of y are set to 0.
+
+`torch.tril(x,k)` returns the elements on and below the k-th diagonal of x as non-zero. k=0 is the main diagonal, k>0 is above the main diagonal and k<0
+is below the main diagonal.
+
+
+### [res] torch.triu([res,] x [,k]) ###
+
+
+`y=torch.triu(x)` returns the upper triangular part of x,
+the other elements of y are set to 0.
+
+`torch.triu(x,k)` returns the elements on and above the k-th diagonal of x as non-zero. k=0 is the main diagonal, k>0 is above the main diagonal and k<0
+is below the main diagonal.
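+
+For example:
+
+```lua
+> x = torch.ones(3,3)
+> print(torch.tril(x))
+
+ 1 0 0
+ 1 1 0
+ 1 1 1
+[torch.DoubleTensor of dimension 3x3]
+```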
+
+
+### [res] torch.zeros([res,] m [,n...]) ###
+
+
+`y=torch.zeros(n)` returns a one-dimensional tensor of size n filled with zeros.
+
+`y=torch.zeros(m,n)` returns a mxn tensor filled with zeros.
+
+For more than 4 dimensions, you can use a storage:
+`y=torch.zeros(torch.LongStorage{m,n,k,l,o})`
+
+
+## Element-wise Mathematical Operations ##
+
+
+### [res] torch.abs([res,] x) ###
+
+
+`y=torch.abs(x)` returns a new tensor with the absolute values of the elements of x.
+
+`x:abs()` replaces all elements in-place with the absolute values of the elements of x.
+
+
+### [res] torch.acos([res,] x) ###
+
+
+`y=torch.acos(x)` returns a new tensor with the arccosine of the elements of x.
+
+`x:acos()` replaces all elements in-place with the arccosine of the elements of x.
+
+
+### [res] torch.asin([res,] x) ###
+
+
+`y=torch.asin(x)` returns a new tensor with the arcsine of the elements of x.
+
+`x:asin()` replaces all elements in-place with the arcsine of the elements of x.
+
+
+### [res] torch.atan([res,] x) ###
+
+
+`y=torch.atan(x)` returns a new tensor with the arctangent of the elements of x.
+
+`x:atan()` replaces all elements in-place with the arctangent of the elements of x.
+
+
+### [res] torch.ceil([res,] x) ###
+
+
+`y=torch.ceil(x)` returns a new tensor with the values of the elements of x rounded up to the nearest integers.
+
+`x:ceil()` replaces all elements in-place with the values of the elements of x rounded up to the nearest integers.
+
+
+### [res] torch.cos([res,] x) ###
+
+
+`y=torch.cos(x)` returns a new tensor with the cosine of the elements of x.
+
+`x:cos()` replaces all elements in-place with the cosine of the elements of x.
+
+
+### [res] torch.cosh([res,] x) ###
+
+
+`y=torch.cosh(x)` returns a new tensor with the hyperbolic cosine of the elements of x.
+
+`x:cosh()` replaces all elements in-place with the hyperbolic cosine of the elements of x.
+
+
+### [res] torch.exp([res,] x) ###
+
+
+`y=torch.exp(x)` returns, for each element in x, e (the base of natural logarithms) raised to the power of the element in x.
+
+`x:exp()` replaces, in-place, each element in x with e (the base of natural logarithms) raised to the power of that element.
+
+
+### [res] torch.floor([res,] x) ###
+
+
+`y=torch.floor(x)` returns a new tensor with the values of the elements of x rounded down to the nearest integers.
+
+`x:floor()` replaces all elements in-place with the values of the elements of x rounded down to the nearest integers.
+
+
+### [res] torch.log([res,] x) ###
+
+
+`y=torch.log(x)` returns a new tensor with the natural logarithm of the elements of x.
+
+`x:log()` replaces all elements in-place with the natural logarithm of the elements of x.
+
+
+### [res] torch.log1p([res,] x) ###
+
+
+`y=torch.log1p(x)` returns a new tensor with the natural logarithm of the elements of x+1.
+
+`x:log1p()` replaces all elements in-place with the natural logarithm of the elements of x+1.
+This function is more accurate than [log()](#torch.log) for small values of x.
+
+
+### [res] torch.pow([res,] x, n) ###
+
+
+`y=torch.pow(x,n)` returns a new tensor with the elements of x to the power of n.
+
+`x:pow(n)` replaces all elements in-place with the elements of x to the power of n.
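+
+For example:
+
+```lua
+> x = torch.range(1,4)
+> print(torch.pow(x,2))
+
+  1
+  4
+  9
+ 16
+[torch.DoubleTensor of dimension 4]
+```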
+
+
+### [res] torch.sin([res,] x) ###
+
+
+`y=torch.sin(x)` returns a new tensor with the sine of the elements of x.
+
+`x:sin()` replaces all elements in-place with the sine of the elements of x.
+
+
+### [res] torch.sinh([res,] x) ###
+
+
+`y=torch.sinh(x)` returns a new tensor with the hyperbolic sine of the elements of x.
+
+`x:sinh()` replaces all elements in-place with the hyperbolic sine of the elements of x.
+
+
+### [res] torch.sqrt([res,] x) ###
+
+
+`y=torch.sqrt(x)` returns a new tensor with the square root of the elements of x.
+
+`x:sqrt()` replaces all elements in-place with the square root of the elements of x.
+
+
+### [res] torch.tan([res,] x) ###
+
+
+`y=torch.tan(x)` returns a new tensor with the tangent of the elements of x.
+
+`x:tan()` replaces all elements in-place with the tangent of the elements of x.
+
+
+### [res] torch.tanh([res,] x) ###
+
+
+`y=torch.tanh(x)` returns a new tensor with the hyperbolic tangent of the elements of x.
+
+`x:tanh()` replaces all elements in-place with the hyperbolic tangent of the elements of x.
+
+
+## Basic operations ##
+
+In this section, we explain basic mathematical operations for Tensors.
+
+
+### [res] torch.add([res,] tensor, value) ###
+
+
+Add the given value to all elements in the tensor.
+
+`y=torch.add(x,value)` returns a new tensor.
+
+`x:add(value)` adds `value` to all elements in-place.
+
+
+### [res] torch.add([res,] tensor1, tensor2) ###
+
+
+Add `tensor1` to `tensor2` and put result into `res`. The number
+of elements must match, but sizes do not matter.
+
+```
+> x = torch.Tensor(2,2):fill(2)
+> y = torch.Tensor(4):fill(3)
+> x:add(y)
+> = x
+
+ 5 5
+ 5 5
+[torch.Tensor of dimension 2x2]
+```
+
+`y=torch.add(a,b)` returns a new tensor.
+
+`torch.add(y,a,b)` puts `a+b` in `y`.
+
+`a:add(b)` accumulates all elements of `b` into `a`.
+
+`y:add(a,b)` puts `a+b` in `y`.
+
+
+### [res] torch.add([res,] tensor1, value, tensor2) ###
+
+
+Multiply elements of `tensor2` by the scalar `value` and add it to
+`tensor1`. The number of elements must match, but sizes do not
+matter.
+
+```
+> x = torch.Tensor(2,2):fill(2)
+> y = torch.Tensor(4):fill(3)
+> x:add(2, y)
+> = x
+
+ 8 8
+ 8 8
+[torch.Tensor of dimension 2x2]
+```
+
+`x:add(value,y)` multiply-accumulates values of `y` into `x`.
+
+`z:add(x,value,y)` puts the result of `x + value*y` in `z`.
+
+`torch.add(x,value,y)` returns a new tensor `x + value*y`.
+
+`torch.add(z,x,value,y)` puts the result of `x + value*y` in `z`.
+
+
+### [res] torch.mul([res,] tensor1, value) ###
+
+
+Multiply all elements in the tensor by the given `value`.
+
+`z=torch.mul(x,2)` will return a new tensor with the result of `x*2`.
+
+`torch.mul(z,x,2)` will put the result of `x*2` in `z`.
+
+`x:mul(2)` will multiply all elements of `x` with `2` in-place.
+
+`z:mul(x,2)` will put the result of `x*2` in `z`.
+
+
+### [res] torch.cmul([res,] tensor1, tensor2) ###
+
+
+Element-wise multiplication of `tensor1` by `tensor2`. The number
+of elements must match, but sizes do not matter.
+
+```
+> x = torch.Tensor(2,2):fill(2)
+> y = torch.Tensor(4):fill(3)
+> x:cmul(y)
+> = x
+
+ 6 6
+ 6 6
+[torch.Tensor of dimension 2x2]
+```
+
+`z=torch.cmul(x,y)` returns a new tensor.
+
+`torch.cmul(z,x,y)` puts the result in `z`.
+
+`y:cmul(x)` multiplies all elements of `y` with corresponding elements of `x`.
+
+`z:cmul(x,y)` puts the result in `z`.
+
+
+### [res] torch.addcmul([res,] x [,value], tensor1, tensor2) ###
+
+
+Performs the element-wise multiplication of `tensor1` by `tensor2`,
+multiplies the result by the scalar `value` (1 if not present) and adds it
+to `x`. The number of elements must match, but sizes do not matter.
+
+```
+> x = torch.Tensor(2,2):fill(2)
+> y = torch.Tensor(4):fill(3)
+> z = torch.Tensor(2,2):fill(5)
+> x:addcmul(2, y, z)
+> = x
+
+ 32 32
+ 32 32
+[torch.Tensor of dimension 2x2]
+```
+
+`z:addcmul(value,x,y)` accumulates the result in `z`.
+
+`torch.addcmul(z,value,x,y)` returns a new tensor with the result.
+
+`torch.addcmul(z,z,value,x,y)` puts the result in `z`.
+
+
+### [res] torch.div([res,] tensor, value) ###
+
+
+Divide all elements in the tensor by the given `value`.
+
+`z=torch.div(x,2)` will return a new tensor with the result of `x/2`.
+
+`torch.div(z,x,2)` will put the result of `x/2` in `z`.
+
+`x:div(2)` will divide all elements of `x` with `2` in-place.
+
+`z:div(x,2)` will put the result of `x/2` in `z`.
+
+
+### [res] torch.cdiv([res,] tensor1, tensor2) ###
+
+
+Performs the element-wise division of `tensor1` by `tensor2`. The
+number of elements must match, but sizes do not matter.
+
+```
+> x = torch.Tensor(2,2):fill(1)
+> y = torch.Tensor(4)
+> for i=1,4 do y[i] = i end
+> x:cdiv(y)
+> = x
+
+ 1.0000 0.3333
+ 0.5000 0.2500
+[torch.Tensor of dimension 2x2]
+```
+
+`z=torch.cdiv(x,y)` returns a new tensor.
+
+`torch.cdiv(z,x,y)` puts the result in `z`.
+
+`y:cdiv(x)` divides all elements of `y` with corresponding elements of `x`.
+
+`z:cdiv(x,y)` puts the result in `z`.
+
+
+### [res] torch.addcdiv([res,] x [,value], tensor1, tensor2) ###
+
+
+Performs the element-wise division of `tensor1` by `tensor2`,
+multiplies the result by the scalar `value` and adds it to `x`.
+The number of elements must match, but sizes do not matter.
+
+```
+> x = torch.Tensor(2,2):fill(1)
+> y = torch.Tensor(4)
+> z = torch.Tensor(2,2):fill(5)
+> for i=1,4 do y[i] = i end
+> x:addcdiv(2, y, z)
+> = x
+
+ 1.4000 2.2000
+ 1.8000 2.6000
+[torch.Tensor of dimension 2x2]
+```
+
+`z:addcdiv(value,x,y)` accumulates the result in `z`.
+
+`torch.addcdiv(z,value,x,y)` returns a new tensor with the result.
+
+`torch.addcdiv(z,z,value,x,y)` puts the result in `z`.
+
+
+### [number] torch.dot(tensor1,tensor2) ###
+
+
+Performs the dot product between `tensor1` and `tensor2`. The number of
+elements must match: both tensors are seen as 1D vectors.
+
+```
+> x = torch.Tensor(2,2):fill(2)
+> y = torch.Tensor(4):fill(3)
+> = x:dot(y)
+24
+```
+
+`torch.dot(x,y)` returns dot product of `x` and `y`.
+`x:dot(y)` returns dot product of `x` and `y`.
+
+
+### [res] torch.addmv([res,] [v1,] vec1, [v2,] mat, vec2) ###
+
+
+Performs a matrix-vector multiplication between `mat` (2D tensor)
+and `vec2` (1D tensor) and adds it to `vec1`. In other words,
+
+```
+res = v1 * vec1 + v2 * mat*vec2
+```
+
+Sizes must respect the matrix-multiplication operation: if `mat` is
+a `n x m` matrix, `vec2` must be vector of size `m` and `vec1` must
+be a vector of size `n`.
+
+```
+> x = torch.Tensor(3):fill(0)
+> M = torch.Tensor(3,2):fill(3)
+> y = torch.Tensor(2):fill(2)
+> x:addmv(M, y)
+> = x
+
+ 12
+ 12
+ 12
+[torch.Tensor of dimension 3]
+```
+
+`torch.addmv(x,y,z)` returns a new tensor with the result.
+
+`torch.addmv(r,x,y,z)` puts the result in `r`.
+
+`x:addmv(y,z)` accumulates `y*z` into `x`.
+
+`r:addmv(x,y,z)` puts the result of `x+y*z` into `r`.
+
+Optional values `v1` and `v2` are scalars that multiply
+`vec1` and `mat*vec2` respectively.
+
+
+### [res] torch.addr([res,] [v1,] mat, [v2,] vec1, vec2) ###
+
+
+Performs the outer-product between `vec1` (1D tensor) and `vec2` (1D tensor).
+In other words,
+
+```
+res_ij = v1 * mat_ij + v2 * vec1_i * vec2_j
+```
+
+If `vec1` is a vector of size `n` and `vec2` is a vector of size `m`,
+then mat must be a matrix of size `n x m`.
+
+```
+> x = torch.Tensor(3)
+> y = torch.Tensor(2)
+> for i=1,3 do x[i] = i end
+> for i=1,2 do y[i] = i end
+> M = torch.Tensor(3, 2):zero()
+> M:addr(x, y)
+> = M
+
+ 1 2
+ 2 4
+ 3 6
+[torch.Tensor of dimension 3x2]
+```
+
+`torch.addr(M,x,y)` returns the result in a new tensor.
+
+`torch.addr(r,M,x,y)` puts the result in `r`.
+
+`M:addr(x,y)` puts the result in `M`.
+
+`r:addr(M,x,y)` puts the result in `r`.
+
+Optional values `v1` and `v2` are scalars that multiply
+`M` and the outer product of `vec1` and `vec2` respectively.
+
+
+
+### [res] torch.addmm([res,] [v1,] M, [v2,] mat1, mat2) ###
+
+
+Performs a matrix-matrix multiplication between `mat1` (2D tensor)
+and `mat2` (2D tensor). In other words,
+
+```
+res = v1 * M + v2 * mat1*mat2
+```
+
+If `mat1` is a `n x m` matrix, `mat2` a `m x p` matrix,
+`M` must be a `n x p` matrix.
+
+`torch.addmm(M,mat1,mat2)` returns the result in a new tensor.
+
+`torch.addmm(r,M,mat1,mat2)` puts the result in `r`.
+
+`M:addmm(mat1,mat2)` puts the result in `M`.
+
+`r:addmm(M,mat1,mat2)` puts the result in `r`.
+
+Optional values `v1` and `v2` are scalars that multiply
+`M` and `mat1*mat2` respectively.
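+
+For example (exact print formatting may vary):
+
+```
+> M = torch.Tensor(2,2):fill(1)
+> A = torch.Tensor(2,3):fill(2)
+> B = torch.Tensor(3,2):fill(3)
+> M:addmm(A, B)
+> = M
+
+ 19 19
+ 19 19
+[torch.Tensor of dimension 2x2]
+```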
+
+
+### [res] torch.mv([res,] mat, vec) ###
+
+
+Matrix-vector product of `mat` and `vec`. Sizes must respect
+the matrix-multiplication operation: if `mat` is a `n x m` matrix,
+`vec` must be a vector of size `m` and `res` must be a vector of size `n`.
+
+`torch.mv(x,y)` puts the result in a new tensor.
+
+`torch.mv(M,x,y)` puts the result in `M`.
+
+`M:mv(x,y)` puts the result in `M`.
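+
+For example (exact print formatting may vary):
+
+```
+> M = torch.Tensor(2,3):fill(2)
+> x = torch.Tensor(3):fill(4)
+> = torch.mv(M, x)
+
+ 24
+ 24
+[torch.Tensor of dimension 2]
+```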
+
+
+### [res] torch.mm([res,] mat1, mat2) ###
+
+
+Matrix-matrix product of `mat1` and `mat2`. If `mat1` is a
+`n x m` matrix and `mat2` a `m x p` matrix, `res` must be a
+`n x p` matrix.
+
+
+`torch.mm(x,y)` puts the result in a new tensor.
+
+`torch.mm(M,x,y)` puts the result in `M`.
+
+`M:mm(x,y)` puts the result in `M`.
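+
+For example (exact print formatting may vary):
+
+```
+> A = torch.Tensor(2,3):fill(2)
+> B = torch.Tensor(3,4):fill(3)
+> = torch.mm(A, B)
+
+ 18 18 18 18
+ 18 18 18 18
+[torch.Tensor of dimension 2x4]
+```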
+
+
+### [res] torch.ger([res,] vec1, vec2) ###
+
+
+Outer product of `vec1` and `vec2`. If `vec1` is a vector of
+size `n` and `vec2` is a vector of size `m`, then `res` must
+be a matrix of size `n x m`.
+
+
+`torch.ger(x,y)` puts the result in a new tensor.
+
+`torch.ger(M,x,y)` puts the result in `M`.
+
+`M:ger(x,y)` puts the result in `M`.
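+
+For example (exact print formatting may vary):
+
+```
+> x = torch.Tensor({1, 2})
+> y = torch.Tensor({3, 4, 5})
+> = torch.ger(x, y)
+
+ 3 4 5
+ 6 8 10
+[torch.Tensor of dimension 2x3]
+```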
+
+
+## Overloaded operators ##
+
+It is possible to use basic mathematical operators like `+`, `-`, `/` and `*`
+with tensors. These operators are provided as a convenience. While they
+might be handy, they create and return a new tensor containing the
+results. They are thus not as fast as the operations available in the
+[previous section](#torch.Tensor.BasicOperations.dok).
+
+### Addition and subtraction ###
+
+You can add a tensor to another one with the `+` operator. Subtraction is done with `-`.
+The number of elements in the tensors must match, but the sizes do not matter. The size
+of the returned tensor will be the size of the first tensor.
+```
+> x = torch.Tensor(2,2):fill(2)
+> y = torch.Tensor(4):fill(3)
+> = x+y
+
+ 5 5
+ 5 5
+[torch.Tensor of dimension 2x2]
+
+> = y-x
+
+ 1
+ 1
+ 1
+ 1
+[torch.Tensor of dimension 4]
+```
+
+A scalar might also be added to or subtracted from a tensor. The scalar might be on the right or left of the operator.
+```
+> x = torch.Tensor(2,2):fill(2)
+> = x+3
+
+ 5 5
+ 5 5
+[torch.Tensor of dimension 2x2]
+
+> = 3-x
+
+ 1 1
+ 1 1
+[torch.Tensor of dimension 2x2]
+```
+
+### Negation ###
+
+A tensor can be negated with the `-` operator placed in front:
+```
+> x = torch.Tensor(2,2):fill(2)
+> = -x
+
+-2 -2
+-2 -2
+[torch.Tensor of dimension 2x2]
+```
+
+### Multiplication ###
+
+Multiplication between two tensors is supported with the `*` operator. The result of the multiplication
+depends on the sizes of the tensors.
+
+ * 1D and 1D: returns the dot product between the two tensors (scalar).
+ * 2D and 1D: returns the matrix-vector product between the two tensors (1D tensor).
+ * 2D and 2D: returns the matrix-matrix product between the two tensors (2D tensor).
+ * 4D and 2D: returns a tensor product (2D tensor).
+
+Sizes must be compatible with the corresponding operation.
+
+A tensor might also be multiplied by a scalar. The scalar might be on the right or left of the operator.
+
+Examples:
+```
+> M = torch.Tensor(2,2):fill(2)
+> N = torch.Tensor(2,4):fill(3)
+> x = torch.Tensor(2):fill(4)
+> y = torch.Tensor(2):fill(5)
+> = x*y -- dot product
+40
+> = M*x -- matrix-vector
+
+ 16
+ 16
+[torch.Tensor of dimension 2]
+
+> = M*N -- matrix-matrix
+
+ 12 12 12 12
+ 12 12 12 12
+[torch.Tensor of dimension 2x4]
+```
+
+
+### Division ###
+
+Only the division of a tensor by a scalar is supported with the operator `/`.
+Example:
+```
+> x = torch.Tensor(2,2):fill(2)
+> = x/3
+
+ 0.6667 0.6667
+ 0.6667 0.6667
+[torch.Tensor of dimension 2x2]
+```
+
+
+
+## Column or row-wise operations (dimension-wise operations) ##
+
+
+### [res] torch.cross([res,] a, b [,n]) ###
+
+`y=torch.cross(a,b)` returns the cross product of the tensors `a` and `b`.
+`a` and `b` must be 3-element vectors.
+
+For higher-dimensional tensors, `y=torch.cross(a,b)` returns the cross
+product of `a` and `b` along the first dimension of length 3.
+
+`y=torch.cross(a,b,n)` returns the cross product of the vectors in
+dimension `n` of `a` and `b`. `a` and `b` must have the same size,
+and both `a:size(n)` and `b:size(n)` must be 3.
+
+
+
+### [res] torch.cumprod([res,] x [,dim]) ###
+
+`y=torch.cumprod(x)` returns the cumulative product of the elements
+of x, performing the operation over the last dimension.
+
+`y=torch.cumprod(x,n)` returns the cumulative product of the
+elements of x, performing the operation over dimension n.
+
+
+### [res] torch.cumsum([res,] x [,dim]) ###
+
+`y=torch.cumsum(x)` returns the cumulative sum of the elements
+of x, performing the operation over the first dimension.
+
+`y=torch.cumsum(x,n)` returns the cumulative sum of the elements
+of x, performing the operation over dimension n.
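+
+For example (exact print formatting may vary):
+
+```
+> x = torch.Tensor({1, 2, 3, 4})
+> = torch.cumsum(x)
+
+  1
+  3
+  6
+ 10
+[torch.Tensor of dimension 4]
+```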
+
+
+### torch.max([resval, resind,] x [,dim]) ###
+
+`y=torch.max(x)` returns the single largest element of x.
+
+`y,i=torch.max(x,1)` returns the largest element in each column
+(across rows) of x, and a tensor i of their corresponding indices in
+x.
+
+`y,i=torch.max(x,2)` returns the largest element in each row (across
+columns) of x, and a tensor i of their corresponding indices in x.
+
+`y,i=torch.max(x,n)` performs the max operation over the dimension n.
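+
+For example (exact print formatting may vary):
+
+```
+> x = torch.Tensor({{1, 5}, {4, 2}})
+> = torch.max(x)
+5
+> y, i = torch.max(x, 1)
+> = y
+
+ 4 5
+[torch.Tensor of dimension 1x2]
+```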
+
+
+
+### [res] torch.mean([res,] x [,dim]) ###
+
+`y=torch.mean(x)` returns the mean of all elements of x.
+
+`y=torch.mean(x,1)` returns a tensor y of the mean of the elements in
+each column of x.
+
+`y=torch.mean(x,2)` performs the mean operation for each row.
+
+`y=torch.mean(x,n)` performs the mean operation over the dimension n.
+
+
+### torch.min([resval, resind,] x [,dim]) ###
+
+`y=torch.min(x)` returns the single smallest element of x.
+
+`y,i=torch.min(x,1)` returns the smallest element in each column
+(across rows) of x, and a tensor i of their corresponding indices in
+x.
+
+`y,i=torch.min(x,2)` returns the smallest element in each row (across
+columns) of x, and a tensor i of their corresponding indices in x.
+
+`y,i=torch.min(x,n)` performs the min operation over the dimension n.
+
+
+
+### [res] torch.prod([res,] x [,n]) ###
+
+`y=torch.prod(x)` returns a tensor y of the product of all elements in `x`.
+
+`y=torch.prod(x,2)` performs the prod operation for each row.
+
+`y=torch.prod(x,n)` performs the prod operation over the dimension n.
+
+
+### torch.sort([resval, resind,] x [,d] [,flag]) ###
+
+`y,i=torch.sort(x)` returns a tensor `y` where all entries
+are sorted along the last dimension, in __ascending__ order. It also returns a tensor
+`i` that provides the corresponding indices from `x`.
+
+`y,i=torch.sort(x,d)` performs the sort operation along
+a specific dimension `d`.
+
+`y,i=torch.sort(x)` is therefore equivalent to
+`y,i=torch.sort(x,x:dim())`
+
+`y,i=torch.sort(x,d,true)` performs the sort operation along
+a specific dimension `d`, in __descending__ order.
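+
+For example (exact print formatting may vary):
+
+```
+> y, i = torch.sort(torch.Tensor({3, 1, 2}))
+> = y
+
+ 1
+ 2
+ 3
+[torch.Tensor of dimension 3]
+
+> = i
+
+ 2
+ 3
+ 1
+[torch.Tensor of dimension 3]
+```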
+
+
+### [res] torch.std([res,] x [,dim] [,flag]) ###
+
+`y=torch.std(x)` returns the standard deviation of the elements of x.
+
+`y=torch.std(x,dim)` performs the std operation over the dimension dim.
+
+`y=torch.std(x,dim,false)` performs the std operation normalizing by n-1 (this is the default).
+
+`y=torch.std(x,dim,true)` performs the std operation normalizing by n instead of n-1.
+
+
+### [res] torch.sum([res,] x [,dim]) ###
+
+`y=torch.sum(x)` returns the sum of the elements of x.
+
+`y=torch.sum(x,2)` performs the sum operation for each row.
+
+`y=torch.sum(x,n)` performs the sum operation over the dimension n.
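+
+For example (exact print formatting may vary):
+
+```
+> x = torch.Tensor(2,2):fill(3)
+> = torch.sum(x)
+12
+> = torch.sum(x, 2)
+
+ 6
+ 6
+[torch.Tensor of dimension 2x1]
+```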
+
+
+### [res] torch.var([res,] x [,dim] [,flag]) ###
+
+`y=torch.var(x)` returns the variance of the elements of x.
+
+`y=torch.var(x,dim)` performs the var operation over the dimension dim.
+
+`y=torch.var(x,dim,false)` performs the var operation normalizing by n-1 (this is the default).
+
+`y=torch.var(x,dim,true)` performs the var operation normalizing by n instead of n-1.
+
+
+## Matrix-wide operations (tensor-wide operations) ##
+
+
+### torch.norm(x) ###
+
+`y=torch.norm(x)` returns the 2-norm of the tensor x.
+
+`y=torch.norm(x,p)` returns the p-norm of the tensor x.
+
+`y=torch.norm(x,p,dim)` returns the p-norms of the tensor x computed over the dimension dim.
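+
+For example (exact print formatting may vary):
+
+```
+> x = torch.Tensor({3, 4})
+> = torch.norm(x)   -- 2-norm: sqrt(3^2 + 4^2)
+5
+> = torch.norm(x, 1) -- 1-norm: |3| + |4|
+7
+```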
+
+
+### torch.dist(x,y) ###
+
+`y=torch.dist(x,y)` returns the 2-norm of (x-y).
+
+`y=torch.dist(x,y,p)` returns the p-norm of (x-y).
+
+
+### torch.numel(x) ###
+
+`y=torch.numel(x)` returns the count of the number of elements in the matrix x.
+
+
+### torch.trace(x) ###
+
+`y=torch.trace(x)` returns the trace (sum of the diagonal elements)
+of a matrix x. This is equal to the sum of the eigenvalues of x.
+The returned value `y` is a number, not a tensor.
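+
+For example (exact print formatting may vary):
+
+```
+> x = torch.eye(3) * 2
+> = torch.trace(x)
+6
+```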
+
+
+## Convolution Operations ##
+
+These functions implement convolution or cross-correlation of an input
+image (or set of input images) with a kernel (or set of kernels). The
+convolution functions in Torch can handle different types of
+input/kernel dimensions and produce corresponding outputs. The
+general form of the operations always remains the same.
+
+
+### [res] torch.conv2([res,] x, k, ['F' or 'V']) ###
+
+
+This function computes 2 dimensional convolutions between ` x ` and ` k `. These operations are similar to BLAS operations when number of dimensions of input and kernel are reduced by 2.
+
+ * ` x ` and ` k ` are 2D : convolution of a single image with a single kernel (2D output). This operation is similar to multiplication of two scalars.
+ * ` x ` and ` k ` are 3D : convolution of each input slice with corresponding kernel (3D output).
+ * ` x (p x m x n) ` 3D, ` k (q x p x ki x kj)` 4D : convolution of all input slices with the corresponding slice of kernel. Output is 3D ` (q x m x n) `. This operation is similar to matrix vector product of matrix ` k ` and vector ` x `.
+
+The last argument controls if the convolution is a full ('F') or valid ('V') convolution. The default is 'valid' convolution.
+
+```lua
+x=torch.rand(100,100)
+k=torch.rand(10,10)
+c = torch.conv2(x,k)
+=c:size()
+
+ 91
+ 91
+[torch.LongStorage of size 2]
+
+c = torch.conv2(x,k,'F')
+=c:size()
+
+ 109
+ 109
+[torch.LongStorage of size 2]
+
+```
+
+
+### [res] torch.xcorr2([res,] x, k, ['F' or 'V']) ###
+
+
+This function operates with same options and input/output
+configurations as [torch.conv2](#torch.conv2), but performs
+cross-correlation of the input with the kernel ` k `.
+
+
+### [res] torch.conv3([res,] x, k, ['F' or 'V']) ###
+
+
+This function computes 3 dimensional convolutions between ` x ` and ` k `. These operations are similar to BLAS operations when number of dimensions of input and kernel are reduced by 3.
+
+ * ` x ` and ` k ` are 3D : convolution of a single image with a single kernel (3D output). This operation is similar to multiplication of two scalars.
+ * ` x ` and ` k ` are 4D : convolution of each input slice with corresponding kernel (4D output).
+ * ` x (p x m x n x o) ` 4D, ` k (q x p x ki x kj x kk)` 5D : convolution of all input slices with the corresponding slice of kernel. Output is 4D ` (q x m x n x o) `. This operation is similar to matrix vector product of matrix ` k ` and vector ` x `.
+
+The last argument controls if the convolution is a full ('F') or valid ('V') convolution. The default is 'valid' convolution.
+
+```lua
+x=torch.rand(100,100,100)
+k=torch.rand(10,10,10)
+c = torch.conv3(x,k)
+=c:size()
+
+ 91
+ 91
+ 91
+[torch.LongStorage of size 3]
+
+c = torch.conv3(x,k,'F')
+=c:size()
+
+ 109
+ 109
+ 109
+[torch.LongStorage of size 3]
+
+```
+
+
+### [res] torch.xcorr3([res,] x, k, ['F' or 'V']) ###
+
+
+This function operates with same options and input/output
+configurations as [torch.conv3](#torch.conv3), but performs
+cross-correlation of the input with the kernel ` k `.
+
+
+## Eigenvalues, SVD, Linear System Solution ##
+
+Functions in this section are implemented with an interface to LAPACK
+libraries. If LAPACK libraries are not found during compilation step,
+then these functions will not be available.
+
+
+### torch.gesv([resb, resa,] b,a) ###
+
+Solution of `AX=B`, where `A` has to be square and non-singular.
+`A` is `m x m`, `X` is `m x k` and `B` is `m x k`.
+
+If `resb` and `resa` are given, then they will be used for
+temporary storage and returning the result.
+
+ * `resa` will contain L and U factors for `LU` factorization of `A`.
+ * `resb` will contain the solution.
+
+```lua
+a=torch.Tensor({{6.80, -2.11, 5.66, 5.97, 8.23},
+ {-6.05, -3.30, 5.36, -4.44, 1.08},
+ {-0.45, 2.58, -2.70, 0.27, 9.04},
+ {8.32, 2.71, 4.35, -7.17, 2.14},
+ {-9.67, -5.14, -7.26, 6.08, -6.87}}):t()
+
+b=torch.Tensor({{4.02, 6.19, -8.22, -7.57, -3.03},
+ {-1.56, 4.00, -8.67, 1.75, 2.86},
+ {9.81, -4.09, -4.57, -8.61, 8.99}}):t()
+
+ =b
+ 4.0200 -1.5600 9.8100
+ 6.1900 4.0000 -4.0900
+-8.2200 -8.6700 -4.5700
+-7.5700 1.7500 -8.6100
+-3.0300 2.8600 8.9900
+[torch.DoubleTensor of dimension 5x3]
+
+=a
+ 6.8000 -6.0500 -0.4500 8.3200 -9.6700
+-2.1100 -3.3000 2.5800 2.7100 -5.1400
+ 5.6600 5.3600 -2.7000 4.3500 -7.2600
+ 5.9700 -4.4400 0.2700 -7.1700 6.0800
+ 8.2300 1.0800 9.0400 2.1400 -6.8700
+[torch.DoubleTensor of dimension 5x5]
+
+
+x=torch.gesv(b,a)
+ =x
+-0.8007 -0.3896 0.9555
+-0.6952 -0.5544 0.2207
+ 0.5939 0.8422 1.9006
+ 1.3217 -0.1038 5.3577
+ 0.5658 0.1057 4.0406
+[torch.DoubleTensor of dimension 5x3]
+
+=b:dist(a*x)
+1.1682163181673e-14
+
+```
+
+
+### torch.gels([resb, resa,] b,a) ###
+
+Solution of least squares and least norm problems for a full rank ` A ` that is ` m x n`.
+ * If `n <= m`, then solve `||AX-B||_F`.
+ * If `n > m`, then solve `min ||X||_F s.t. AX=B`.
+
+On return, first ` n ` rows of ` X ` matrix contains the solution
+and the rest contains residual information. Square root of sum squares
+of elements of each column of ` X ` starting at row ` n + 1 ` is
+the residual for corresponding column.
+
+```lua
+
+a=torch.Tensor({{ 1.44, -9.96, -7.55, 8.34, 7.08, -5.45},
+ {-7.84, -0.28, 3.24, 8.09, 2.52, -5.70},
+ {-4.39, -3.24, 6.27, 5.28, 0.74, -1.19},
+ {4.53, 3.83, -6.64, 2.06, -2.47, 4.70}}):t()
+
+b=torch.Tensor({{8.58, 8.26, 8.48, -5.28, 5.72, 8.93},
+ {9.35, -4.43, -0.70, -0.26, -7.36, -2.52}}):t()
+
+=a
+ 1.4400 -7.8400 -4.3900 4.5300
+-9.9600 -0.2800 -3.2400 3.8300
+-7.5500 3.2400 6.2700 -6.6400
+ 8.3400 8.0900 5.2800 2.0600
+ 7.0800 2.5200 0.7400 -2.4700
+-5.4500 -5.7000 -1.1900 4.7000
+[torch.DoubleTensor of dimension 6x4]
+
+=b
+ 8.5800 9.3500
+ 8.2600 -4.4300
+ 8.4800 -0.7000
+-5.2800 -0.2600
+ 5.7200 -7.3600
+ 8.9300 -2.5200
+[torch.DoubleTensor of dimension 6x2]
+
+x = torch.gels(b,a)
+=x
+ -0.4506 0.2497
+ -0.8492 -0.9020
+ 0.7066 0.6323
+ 0.1289 0.1351
+ 13.1193 -7.4922
+ -4.8214 -7.1361
+[torch.DoubleTensor of dimension 6x2]
+
+=b:dist(a*x:narrow(1,1,4))
+17.390200628863
+
+=math.sqrt(x:narrow(1,5,2):pow(2):sumall())
+17.390200628863
+
+```
+
+
+### torch.symeig([rese, resv,] a [, 'N' or 'V'] [, 'U' or 'L']) ###
+
+Eigen values and eigen vectors of a symmetric real matrix ` A ` of
+size ` m x m `. This function calculates all eigenvalues (and
+vectors) of ` A ` such that ` A = V' diag(e) V `. Since the input
+matrix ` A ` is supposed to be symmetric, only upper triangular
+portion is used by default. If the 4th argument is 'L', then lower
+triangular portion is used.
+
+The third argument defines whether eigenvectors are computed as well as
+eigenvalues. If `'N'`, only eigenvalues are computed. If `'V'`, both
+eigenvalues and eigenvectors are computed.
+
+```lua
+
+a=torch.Tensor({{ 1.96, 0.00, 0.00, 0.00, 0.00},
+ {-6.49, 3.80, 0.00, 0.00, 0.00},
+ {-0.47, -6.39, 4.17, 0.00, 0.00},
+ {-7.20, 1.50, -1.51, 5.70, 0.00},
+ {-0.65, -6.34, 2.67, 1.80, -7.10}}):t()
+
+=a
+ 1.9600 -6.4900 -0.4700 -7.2000 -0.6500
+ 0.0000 3.8000 -6.3900 1.5000 -6.3400
+ 0.0000 0.0000 4.1700 -1.5100 2.6700
+ 0.0000 0.0000 0.0000 5.7000 1.8000
+ 0.0000 0.0000 0.0000 0.0000 -7.1000
+[torch.DoubleTensor of dimension 5x5]
+
+e = torch.symeig(a)
+=e
+-11.0656
+ -6.2287
+ 0.8640
+ 8.8655
+ 16.0948
+[torch.DoubleTensor of dimension 5]
+
+e,v = torch.symeig(a,'V')
+=e
+-11.0656
+ -6.2287
+ 0.8640
+ 8.8655
+ 16.0948
+[torch.DoubleTensor of dimension 5]
+
+=v
+-0.2981 -0.6075 0.4026 -0.3745 0.4896
+-0.5078 -0.2880 -0.4066 -0.3572 -0.6053
+-0.0816 -0.3843 -0.6600 0.5008 0.3991
+-0.0036 -0.4467 0.4553 0.6204 -0.4564
+-0.8041 0.4480 0.1725 0.3108 0.1622
+[torch.DoubleTensor of dimension 5x5]
+
+=v*torch.diag(e)*v:t()
+ 1.9600 -6.4900 -0.4700 -7.2000 -0.6500
+-6.4900 3.8000 -6.3900 1.5000 -6.3400
+-0.4700 -6.3900 4.1700 -1.5100 2.6700
+-7.2000 1.5000 -1.5100 5.7000 1.8000
+-0.6500 -6.3400 2.6700 1.8000 -7.1000
+[torch.DoubleTensor of dimension 5x5]
+
+=a:dist(torch.triu(v*torch.diag(e)*v:t()))
+1.0219480822443e-14
+
+```
+
+
+### torch.eig([rese, resv,] a [, 'N' or 'V']) ###
+
+Eigen values and eigen vectors of a general real matrix ` A ` of
+size ` m x m `. This function calculates all right eigenvalues (and
+vectors) of ` A ` such that ` A = V' diag(e) V `.
+
+The third argument defines whether eigenvectors are computed as well as
+eigenvalues. If `'N'`, only eigenvalues are computed. If `'V'`, both
+eigenvalues and eigenvectors are computed.
+
+```lua
+
+a=torch.Tensor({{ 1.96, 0.00, 0.00, 0.00, 0.00},
+ {-6.49, 3.80, 0.00, 0.00, 0.00},
+ {-0.47, -6.39, 4.17, 0.00, 0.00},
+ {-7.20, 1.50, -1.51, 5.70, 0.00},
+ {-0.65, -6.34, 2.67, 1.80, -7.10}}):t()
+
+=a
+ 1.9600 -6.4900 -0.4700 -7.2000 -0.6500
+ 0.0000 3.8000 -6.3900 1.5000 -6.3400
+ 0.0000 0.0000 4.1700 -1.5100 2.6700
+ 0.0000 0.0000 0.0000 5.7000 1.8000
+ 0.0000 0.0000 0.0000 0.0000 -7.1000
+[torch.DoubleTensor of dimension 5x5]
+
+b = a+torch.triu(a,1):t()
+=b
+
+ 1.9600 -6.4900 -0.4700 -7.2000 -0.6500
+ -6.4900 3.8000 -6.3900 1.5000 -6.3400
+ -0.4700 -6.3900 4.1700 -1.5100 2.6700
+ -7.2000 1.5000 -1.5100 5.7000 1.8000
+ -0.6500 -6.3400 2.6700 1.8000 -7.1000
+[torch.DoubleTensor of dimension 5x5]
+
+e = torch.eig(b)
+=e
+ 16.0948 0.0000
+-11.0656 0.0000
+ -6.2287 0.0000
+ 0.8640 0.0000
+ 8.8655 0.0000
+[torch.DoubleTensor of dimension 5x2]
+
+e,v = torch.eig(b,'V')
+=e
+ 16.0948 0.0000
+-11.0656 0.0000
+ -6.2287 0.0000
+ 0.8640 0.0000
+ 8.8655 0.0000
+[torch.DoubleTensor of dimension 5x2]
+
+=v
+-0.4896 0.2981 -0.6075 -0.4026 -0.3745
+ 0.6053 0.5078 -0.2880 0.4066 -0.3572
+-0.3991 0.0816 -0.3843 0.6600 0.5008
+ 0.4564 0.0036 -0.4467 -0.4553 0.6204
+-0.1622 0.8041 0.4480 -0.1725 0.3108
+[torch.DoubleTensor of dimension 5x5]
+
+=v*torch.diag(e:select(2,1))*v:t()
+ 1.9600 -6.4900 -0.4700 -7.2000 -0.6500
+-6.4900 3.8000 -6.3900 1.5000 -6.3400
+-0.4700 -6.3900 4.1700 -1.5100 2.6700
+-7.2000 1.5000 -1.5100 5.7000 1.8000
+-0.6500 -6.3400 2.6700 1.8000 -7.1000
+[torch.DoubleTensor of dimension 5x5]
+
+=b:dist(v*torch.diag(e:select(2,1))*v:t())
+3.5423944346685e-14
+
+```
+
+
+### torch.svd([resu, ress, resv,] a [, 'S' or 'A']) ###
+
+Singular value decomposition of a real matrix `A` of size `n x m`,
+such that `A = U*S*V'`. The call to `svd` returns `U,S,V`.
+
+The last argument, if it is a string, represents the number of singular
+values to be computed. `'S'` stands for 'some' and `'A'` stands for 'all'.
+
+```lua
+
+a=torch.Tensor({{8.79, 6.11, -9.15, 9.57, -3.49, 9.84},
+ {9.93, 6.91, -7.93, 1.64, 4.02, 0.15},
+ {9.83, 5.04, 4.86, 8.83, 9.80, -8.99},
+ {5.45, -0.27, 4.85, 0.74, 10.00, -6.02},
+ {3.16, 7.98, 3.01, 5.80, 4.27, -5.31}}):t()
+=a
+ 8.7900 9.9300 9.8300 5.4500 3.1600
+ 6.1100 6.9100 5.0400 -0.2700 7.9800
+ -9.1500 -7.9300 4.8600 4.8500 3.0100
+ 9.5700 1.6400 8.8300 0.7400 5.8000
+ -3.4900 4.0200 9.8000 10.0000 4.2700
+ 9.8400 0.1500 -8.9900 -6.0200 -5.3100
+
+u,s,v = torch.svd(a)
+
+=u
+-0.5911 0.2632 0.3554 0.3143 0.2299
+-0.3976 0.2438 -0.2224 -0.7535 -0.3636
+-0.0335 -0.6003 -0.4508 0.2334 -0.3055
+-0.4297 0.2362 -0.6859 0.3319 0.1649
+-0.4697 -0.3509 0.3874 0.1587 -0.5183
+ 0.2934 0.5763 -0.0209 0.3791 -0.6526
+[torch.DoubleTensor of dimension 6x5]
+
+=s
+ 27.4687
+ 22.6432
+ 8.5584
+ 5.9857
+ 2.0149
+[torch.DoubleTensor of dimension 5]
+
+=v
+-0.2514 0.8148 -0.2606 0.3967 -0.2180
+-0.3968 0.3587 0.7008 -0.4507 0.1402
+-0.6922 -0.2489 -0.2208 0.2513 0.5891
+-0.3662 -0.3686 0.3859 0.4342 -0.6265
+-0.4076 -0.0980 -0.4933 -0.6227 -0.4396
+[torch.DoubleTensor of dimension 5x5]
+
+=u*torch.diag(s)*v:t()
+ 8.7900 9.9300 9.8300 5.4500 3.1600
+ 6.1100 6.9100 5.0400 -0.2700 7.9800
+ -9.1500 -7.9300 4.8600 4.8500 3.0100
+ 9.5700 1.6400 8.8300 0.7400 5.8000
+ -3.4900 4.0200 9.8000 10.0000 4.2700
+ 9.8400 0.1500 -8.9900 -6.0200 -5.3100
+[torch.DoubleTensor of dimension 6x5]
+
+ =a:dist(u*torch.diag(s)*v:t())
+2.8923773593204e-14
+
+```
+
+
+### torch.inverse([res,] x) ###
+
+Computes the inverse of square matrix `x`.
+
+`=torch.inverse(x)` returns the result as a new matrix.
+
+`torch.inverse(y,x)` puts the result in `y`.
+
+```lua
+x=torch.rand(10,10)
+y=torch.inverse(x)
+z=x*y
+print(z)
+ 1.0000 -0.0000 0.0000 -0.0000 0.0000 0.0000 0.0000 -0.0000 0.0000 0.0000
+ 0.0000 1.0000 -0.0000 -0.0000 0.0000 0.0000 -0.0000 -0.0000 -0.0000 0.0000
+ 0.0000 -0.0000 1.0000 -0.0000 0.0000 0.0000 -0.0000 -0.0000 0.0000 0.0000
+ 0.0000 -0.0000 -0.0000 1.0000 -0.0000 0.0000 0.0000 -0.0000 -0.0000 0.0000
+ 0.0000 -0.0000 0.0000 -0.0000 1.0000 0.0000 0.0000 -0.0000 -0.0000 0.0000
+ 0.0000 -0.0000 0.0000 -0.0000 0.0000 1.0000 0.0000 -0.0000 -0.0000 0.0000
+ 0.0000 -0.0000 0.0000 -0.0000 0.0000 0.0000 1.0000 -0.0000 0.0000 0.0000
+ 0.0000 -0.0000 -0.0000 -0.0000 0.0000 0.0000 0.0000 1.0000 0.0000 0.0000
+ 0.0000 -0.0000 -0.0000 -0.0000 0.0000 0.0000 -0.0000 -0.0000 1.0000 0.0000
+ 0.0000 -0.0000 0.0000 -0.0000 0.0000 0.0000 0.0000 -0.0000 0.0000 1.0000
+[torch.DoubleTensor of dimension 10x10]
+
+print('Max nonzero = ', torch.max(torch.abs(z-torch.eye(10))))
+Max nonzero = 2.3092638912203e-14
+
+```
+
+
+## Logical Operations on Tensors ##
+
+These functions implement logical comparison operators that take a
+tensor as input and another tensor or a number as the comparison
+target. They return a `ByteTensor` in which each element is 0 or 1
+indicating if the comparison for the corresponding element was
+`false` or `true` respectively.
+
+
+### torch.lt(a, b) ###
+
+Implements the `<` operator comparing each element in `a` with `b`
+(if `b` is a number) or each element in `a` with corresponding element in `b`.
+
+
+### torch.le(a, b) ###
+
+Implements the `<=` operator comparing each element in `a` with `b`
+(if `b` is a number) or each element in `a` with corresponding element in `b`.
+
+
+### torch.gt(a, b) ###
+
+Implements the `>` operator comparing each element in `a` with `b`
+(if `b` is a number) or each element in `a` with corresponding element in `b`.
+
+
+### torch.ge(a, b) ###
+
+Implements the `>=` operator comparing each element in `a` with `b`
+(if `b` is a number) or each element in `a` with corresponding element in `b`.
+
+
+### torch.eq(a, b) ###
+
+Implements the `==` operator comparing each element in `a` with `b`
+(if `b` is a number) or each element in `a` with corresponding element in `b`.
+
+
+### torch.ne(a, b) ###
+
+Implements the `!=` operator comparing each element in `a` with `b`
+(if `b` is a number) or each element in `a` with corresponding element in `b`.
+
+
+```lua
+
+> a = torch.rand(10)
+> b = torch.rand(10)
+> =a
+ 0.5694
+ 0.5264
+ 0.3041
+ 0.4159
+ 0.1677
+ 0.7964
+ 0.0257
+ 0.2093
+ 0.6564
+ 0.0740
+[torch.DoubleTensor of dimension 10]
+
+> =b
+ 0.2950
+ 0.4867
+ 0.9133
+ 0.1291
+ 0.1811
+ 0.3921
+ 0.7750
+ 0.3259
+ 0.2263
+ 0.1737
+[torch.DoubleTensor of dimension 10]
+
+> =torch.lt(a,b)
+ 0
+ 0
+ 1
+ 0
+ 1
+ 0
+ 1
+ 1
+ 0
+ 1
+[torch.ByteTensor of dimension 10]
+
+> return torch.eq(a,b)
+0
+0
+0
+0
+0
+0
+0
+0
+0
+0
+[torch.ByteTensor of dimension 10]
+
+> return torch.ne(a,b)
+ 1
+ 1
+ 1
+ 1
+ 1
+ 1
+ 1
+ 1
+ 1
+ 1
+[torch.ByteTensor of dimension 10]
+
+> return torch.gt(a,b)
+ 1
+ 1
+ 0
+ 1
+ 0
+ 1
+ 0
+ 0
+ 1
+ 0
+[torch.ByteTensor of dimension 10]
+
+> a[torch.gt(a,b)] = 10
+> =a
+ 10.0000
+ 10.0000
+ 0.3041
+ 10.0000
+ 0.1677
+ 10.0000
+ 0.0257
+ 0.2093
+ 10.0000
+ 0.0740
+[torch.DoubleTensor of dimension 10]
+
+> a[torch.gt(a,1)] = -1
+> =a
+-1.0000
+-1.0000
+ 0.3041
+-1.0000
+ 0.1677
+-1.0000
+ 0.0257
+ 0.2093
+-1.0000
+ 0.0740
+[torch.DoubleTensor of dimension 10]
+
+
+```
+
diff --git a/doc/memoryfile.md b/doc/memoryfile.md
new file mode 100644
index 00000000..63bd1851
--- /dev/null
+++ b/doc/memoryfile.md
@@ -0,0 +1,37 @@
+
+# MemoryFile #
+
+Parent classes: [File](file.md)
+
+A `MemoryFile` is a particular `File` which is able to perform basic
+read/write operations on a buffer in `RAM`. It implements all methods
+described in [File](file.md).
+
+The data of this `File` is contained in a `NULL`-terminated
+[CharStorage](storage.md).
+
+
+### torch.MemoryFile([mode]) ###
+
+_Constructor_ which returns a new `MemoryFile` object using `mode`. Valid
+values for `mode` are `"r"` (read), `"w"` (write) or `"rw"` (read-write). Default is `"rw"`.
+
+
+
+### torch.MemoryFile(storage, mode) ###
+
+_Constructor_ which returns a new `MemoryFile` object, using the given
+[storage](storage.md) (which must be a `CharStorage`) and `mode`. Valid
+values for `mode` are `"r"` (read), `"w"` (write) or `"rw"` (read-write). The last character
+in this storage _must_ be `NULL` or an error will be generated. This allows
+reading existing memory. If used for writing, note that the `storage` might
+be resized by this class if needed.
+
+
+### [CharStorage] storage() ###
+
+Returns the [storage](storage.md) which contains all the data of the
+`File` (note: this is _not_ a copy, but a _reference_ on this storage). The
+size of the storage is the size of the data in the `File`, plus one, the
+last character being `NULL`.
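+
+As a quick illustration, here is a sketch combining the constructor with
+read/write methods inherited from [File](file.md) (method behavior assumed
+from that interface):
+```lua
+-- write into the in-RAM buffer, rewind, read back, then inspect the storage
+f = torch.MemoryFile()       -- read-write mode by default
+f:writeInt(7)
+f:seek(1)                    -- rewind to the first position
+print(f:readInt())           -- 7
+print(f:storage())           -- the underlying NULL-terminated CharStorage
+f:close()
+```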
+
diff --git a/doc/pipefile.md b/doc/pipefile.md
new file mode 100644
index 00000000..ddc95b27
--- /dev/null
+++ b/doc/pipefile.md
@@ -0,0 +1,22 @@
+
+# PipeFile #
+
+Parent classes: [DiskFile](diskfile.md)
+
+A `PipeFile` is a particular `File` which is able to perform basic read/write operations
+on a command pipe. It implements all methods described in [DiskFile](diskfile.md) and [File](file.md).
+
+The file might be open in read or write mode, depending on the parameter
+`mode` (which can take the value `"r"` or `"w"`)
+given to the [torch.PipeFile(fileName, mode)](#torch.PipeFile). Read-write mode is not allowed.
+
+
+### torch.PipeFile(command, [mode], [quiet]) ###
+
+_Constructor_ which executes `command` by opening a pipe in read or write
+`mode`. Valid values for `mode` are `"r"` (read) or `"w"` (write). Default is read
+mode.
+
+If (and only if) `quiet` is `true`, no error will be raised in case of
+problem opening the file: instead `nil` will be returned.
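+
+For instance, a sketch reading the output of a shell command (assumes a POSIX
+`ls` command and the `readString` method described in [File](file.md)):
+```lua
+-- capture everything the command prints on its standard output
+f = torch.PipeFile('ls -1')   -- read mode by default
+print(f:readString('*a'))     -- '*a' reads until the end of the stream
+f:close()
+```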
+
diff --git a/doc/random.md b/doc/random.md
new file mode 100644
index 00000000..1c1317e8
--- /dev/null
+++ b/doc/random.md
@@ -0,0 +1,106 @@
+
+# Random Numbers #
+
+Torch provides accurate mathematical random generation, based on the
+[Mersenne Twister](http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/emt.html)
+random number generator.
+
+
+## Seed Handling ##
+
+If no seed is provided to the random generator (via
+[seed()](#torch.seed) or [manualSeed()](#torch.manualSeed)), a
+random seed will be set automatically, as by [seed()](#torch.seed), the
+first time a random number is generated.
+
+The initial seed can be obtained using [initialSeed()](#torch.initialSeed).
+
+Setting a particular seed allows the user to (re)-generate a particular series of
+random numbers. Example:
+```lua
+> torch.manualSeed(123)
+> = torch.uniform()
+0.69646918727085
+> return torch.uniform()
+0.71295532141812
+> return torch.uniform()
+0.28613933874294
+> torch.manualSeed(123)
+> return torch.uniform()
+0.69646918727085
+> return torch.uniform()
+0.71295532141812
+> return torch.uniform()
+0.28613933874294
+> torch.manualSeed(torch.initialSeed())
+> return torch.uniform()
+0.69646918727085
+> return torch.uniform()
+0.71295532141812
+> return torch.uniform()
+0.28613933874294
+```
+
+
+### [number] seed() ###
+
+Set the seed of the random number generator according to the time of the
+computer. Granularity is seconds. Returns the seed obtained.
+
+
+### manualSeed(number) ###
+
+Set the seed of the random number generator to the given `number`.
+
+
+### [number] initialSeed() ###
+
+Returns the initial seed used to initialize the random generator.
+
+
+### [number] random() ###
+
+Returns a 32-bit integer random number.
+
+
+### [number] uniform([a],[b]) ###
+
+Returns a random real number according to the uniform distribution on `[a,b)`. By default `a` is 0 and `b` is 1.
+
+
+### [number] normal([mean],[stdv]) ###
+
+Returns a random real number according to a normal distribution with the given `mean` and standard deviation `stdv`.
+`stdv` must be positive.
+
+
+### [number] exponential(lambda) ###
+
+Returns a random real number according to the exponential distribution
+`p(x) = lambda * exp(-lambda * x)`.
+
+
+### [number] cauchy(median, sigma) ###
+
+Returns a random real number according to the Cauchy distribution
+`p(x) = sigma/(pi*(sigma^2 + (x-median)^2))`.
+
+
+### [number] logNormal(mean, stdv) ###
+
+Returns a random real number according to the log-normal distribution, with
+the given `mean` and standard deviation `stdv`.
+`stdv` must be positive.
+
+
+### [number] geometric(p) ###
+
+Returns a random integer number according to a geometric distribution
+`p(i) = (1-p) * p^(i-1)`. `p` must satisfy `0 < p < 1`.
+
+
+### [number] bernoulli([p]) ###
+
+Returns `1` with probability `p` and `0` with probability `1-p`. `p` must satisfy `0 <= p <= 1`.
+By default `p` is equal to `0.5`.
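+
+Putting a few of these generators together (a sketch; the exact values drawn
+depend on the generator state):
+```lua
+torch.manualSeed(1234)       -- fix the seed so the run is reproducible
+print(torch.uniform(0, 10))  -- real number in [0, 10)
+print(torch.normal(0, 1))    -- sample from a standard normal
+print(torch.geometric(0.5))  -- integer i >= 1, p(i) = (1-p) * p^(i-1)
+print(torch.bernoulli(0.9))  -- 1 with probability 0.9, else 0
+```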
+
diff --git a/dok/serialization.dok b/doc/serialization.md
similarity index 60%
rename from dok/serialization.dok
rename to doc/serialization.md
index 6d0d530c..42bde46f 100644
--- a/dok/serialization.dok
+++ b/doc/serialization.md
@@ -1,35 +1,35 @@
-====== Serialization ======
-{{anchor:torch.serialization.dok}}
+
+# Serialization #
Torch provides 4 high-level methods to serialize/deserialize arbitrary Lua/Torch objects.
-These functions are just abstractions over the [[#torch.File|File]] object, and were created
+These functions are just abstractions over the [File](#torch.File) object, and were created
for convenience (these are very common routines).
The first two functions are useful to serialize/deserialize data to/from files:
- - ''torch.save(filename, object [, format])''
- - ''[object] torch.load(filename [, format])''
+ - `torch.save(filename, object [, format])`
+ - `[object] torch.load(filename [, format])`
The next two functions are useful to serialize/deserialize data to/from strings:
- - ''[str] torch.serialize(object)''
- - ''[object] torch.deserialize(str)''
+ - `[str] torch.serialize(object)`
+ - `[object] torch.deserialize(str)`
Serializing to files is useful to save arbitrary data structures, or share them with other people.
Serializing to strings is useful to store arbitrary data structures in databases, or 3rd party
software.
-==== torch.save(filename, object [, format]) ====
-{{anchor:torch.save}}
+
+### torch.save(filename, object [, format]) ###
-Writes ''object'' into a file named ''filename''. The ''format'' can be set
-to ''ascii'' or ''binary'' (default is binary). Binary format is platform
+Writes `object` into a file named `filename`. The `format` can be set
+to `ascii` or `binary` (default is binary). Binary format is platform
dependent, but typically more compact and faster to read/write. The ASCII
format is platform-independent, and should be used to share data structures
across platforms.
-
+```
-- arbitrary object:
obj = {
mat = torch.randn(10,10),
@@ -41,18 +41,18 @@ obj = {
-- save to disk:
torch.save('test.dat', obj)
-
+```
-==== [object] torch.load(filename [, format]) ====
-{{anchor:torch.load}}
+
+### [object] torch.load(filename [, format]) ###
-Reads ''object'' from a file named ''filename''. The ''format'' can be set
-to ''ascii'' or ''binary'' (default is binary). Binary format is platform
+Reads `object` from a file named `filename`. The `format` can be set
+to `ascii` or `binary` (default is binary). Binary format is platform
dependent, but typically more compact and faster to read/write. The ASCII
format is platform-independent, and should be used to share data structures
across platforms.
-
+```
-- given serialized object from section above, reload:
obj = torch.load('test.dat')
@@ -61,14 +61,14 @@ print(obj)
-- {[mat] = DoubleTensor - size: 10x10
-- [name] = string : "10"
-- [test] = table - size: 0}
-
+```
-==== [str] torch.serialize(object) ====
-{{anchor:torch.serialize}}
+
+### [str] torch.serialize(object) ###
-Serializes ''object'' into a string.
+Serializes `object` into a string.
-
+```
-- arbitrary object:
obj = {
mat = torch.randn(10,10),
@@ -80,14 +80,14 @@ obj = {
-- serialize:
str = torch.serialize(obj)
-
+```
-==== [object] torch.deserialize(str) ====
-{{anchor:torch.deserialize}}
+
+### [object] torch.deserialize(str) ###
-Deserializes ''object'' from a string.
+Deserializes `object` from a string.
-
+```
-- given serialized object from section above, deserialize:
obj = torch.deserialize(str)
@@ -96,4 +96,5 @@ print(obj)
-- {[mat] = DoubleTensor - size: 10x10
-- [name] = string : "10"
-- [test] = table - size: 0}
-
+```
+
diff --git a/doc/storage.md b/doc/storage.md
new file mode 100644
index 00000000..6cbceb86
--- /dev/null
+++ b/doc/storage.md
@@ -0,0 +1,225 @@
+
+# Storage #
+
+
+
+
+
+
+
+
+_Storages_ are basically a way for `Lua` to access the memory of a `C` pointer
+or array. _Storages_ can also [map the contents of a file to memory](#__torch.StorageMap).
+A `Storage` is an array of _basic_ `C` types. For arrays of `Torch` objects,
+use Lua tables.
+
+Several `Storage` classes for all the basic `C` types exist and have the
+following self-explanatory names: `ByteStorage`, `CharStorage`, `ShortStorage`,
+`IntStorage`, `LongStorage`, `FloatStorage`, `DoubleStorage`.
+
+Note that `ByteStorage` and `CharStorage` both represent arrays of bytes. `ByteStorage` represents an array of
+_unsigned_ chars, while `CharStorage` represents an array of _signed_ chars.
+
+Conversions between two `Storage` types can be done using `copy`:
+```lua
+x = torch.IntStorage(10):fill(1)
+y = torch.DoubleStorage(10):copy(x)
+```
+
+[Classical storages](#torch.Storage) are [serializable](file.md#torch.File.serialization).
+[Storages mapping a file](#__torch.StorageMap) are also [serializable](#FileSerialization),
+but _will be saved as a normal storage_.
+
+An alias `torch.Storage()` is made over your preferred Storage type,
+controlled by the
+[torch.setdefaulttensortype](utility.md#torch.setdefaulttensortype)
+function. By default, this "points" to `torch.DoubleStorage`.
+
+## Constructors and Access Methods ##
+
+
+### torch.TYPEStorage([size]) ###
+
+Returns a new `Storage` of type `TYPE`. Valid `TYPE` are `Byte`, `Char`, `Short`,
+`Int`, `Long`, `Float`, and `Double`. If `size` is given, resize the
+`Storage` accordingly, else create an empty `Storage`.
+
+Example:
+```lua
+-- Creates a Storage of 10 doubles:
+x = torch.DoubleStorage(10)
+```
+
+The data in the `Storage` is _uninitialized_.
+
+
+### torch.TYPEStorage(table) ###
+
+The argument is assumed to be a Lua array of numbers. The constructor returns a new storage of the specified `TYPE`,
+of the size of the table, containing all the table elements converted to that type.
+
+Example:
+```lua
+> = torch.IntStorage({1,2,3,4})
+
+ 1
+ 2
+ 3
+ 4
+[torch.IntStorage of size 4]
+```
+
+
+### torch.TYPEStorage(filename [, shared]) ###
+
+
+Returns a new kind of `Storage` which maps the contents of the given
+`filename` to memory. Valid `TYPE` are `Byte`, `Char`, `Short`, `Int`, `Long`,
+`Float`, and `Double`. If the optional boolean argument `shared` is `true`,
+the mapped memory is shared amongst all processes on the computer.
+
+When `shared` is `true`, the file must be accessible in read-write mode. Any
+changes on the storage will be written in the file. The changes might be written
+only after destruction of the storage.
+
+When `shared` is `false` (or not provided), the file must be at least
+readable. Any changes on the storage will not affect the file. Note:
+changes made on the file after creation of the storage have an unspecified
+effect on the storage contents.
+
+The [size](#torch.Storage.size) of the returned `Storage` will be
+```lua
+(size of file in bytes)/(size of TYPE)
+```
+
+Example:
+```lua
+$ echo "Hello World" > hello.txt
+$ lua
+Lua 5.1.3 Copyright (C) 1994-2008 Lua.org, PUC-Rio
+> require 'torch'
+> x = torch.CharStorage('hello.txt')
+> = x
+ 72
+ 101
+ 108
+ 108
+ 111
+ 32
+ 87
+ 111
+ 114
+ 108
+ 100
+ 10
+[torch.CharStorage of size 12]
+
+> = x:string()
+Hello World
+
+> = x:fill(42):string()
+____________
+>
+$ cat hello.txt
+Hello World
+$ lua
+Lua 5.1.3 Copyright (C) 1994-2008 Lua.org, PUC-Rio
+> require 'torch'
+> x = torch.CharStorage('hello.txt', true)
+> = x:string()
+Hello World
+
+> x:fill(42)
+>
+$ cat hello.txt
+____________
+```
+
+
+### [number] #self ###
+
+Returns the number of elements in the storage. Equivalent to [size()](#torch.Storage.size).
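+
+For example (a minimal sketch):
+```lua
+x = torch.IntStorage(7)
+print(#x)        -- 7
+print(x:size())  -- 7, same value as #x
+```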
+
+
+### [number] self[index] ###
+
+Returns or sets the element at position `index` in the storage. Valid range
+of `index` is 1 to [size()](#torch.Storage.size).
+
+Example:
+```lua
+x = torch.DoubleStorage(10)
+x[5] = 1
+print(x[5]) -- prints 1
+```
+
+
+### [self] copy(storage) ###
+
+Copy another `storage`. The types of the two storages might be different: in that case
+a type conversion occurs (which might result, of course, in loss of precision or rounding).
+This method returns self, allowing things like:
+```lua
+x = torch.IntStorage(10):fill(1)
+y = torch.DoubleStorage(10):copy(x) -- y won't be nil!
+```
+
+
+### [self] fill(value) ###
+
+Fill the `Storage` with the given value. This method returns self, allowing things like:
+```lua
+x = torch.IntStorage(10):fill(0) -- x won't be nil!
+```
+
+
+### [self] resize(size) ###
+
+Resize the storage to the provided `size`. _The new contents are undetermined_.
+
+This function returns self, allowing things like:
+```lua
+x = torch.DoubleStorage(10):fill(1)
+y = torch.DoubleStorage():resize(x:size()):copy(x) -- y won't be nil!
+```
+
+
+### [number] size() ###
+
+Returns the number of elements in the storage. Equivalent to [#](#__torch.StorageSharp).
+
+
+### [self] string(str) ###
+
+This function is available only on `ByteStorage` and `CharStorage`.
+
+This method resizes the storage to the length of the provided
+string `str`, and copies the contents of `str` into the storage. The `NULL` terminating character is not copied,
+but `str` might contain `NULL` characters. The method returns the `Storage`.
+```lua
+> x = torch.CharStorage():string("blah blah")
+> print(x)
+ 98
+ 108
+ 97
+ 104
+ 32
+ 98
+ 108
+ 97
+ 104
+[torch.CharStorage of size 9]
+```
+
+
+### [string] string() ###
+
+This function is available only on `ByteStorage` and `CharStorage`.
+
+The contents of the storage viewed as a string are returned. The string might contain
+`NULL` characters.
+```lua
+> x = torch.CharStorage():string("blah blah")
+> print(x:string())
+blah blah
+```
+
diff --git a/dok/tensor.dok b/doc/tensor.md
similarity index 57%
rename from dok/tensor.dok
rename to doc/tensor.md
index 4a092a8f..02d85fdd 100644
--- a/dok/tensor.dok
+++ b/doc/tensor.md
@@ -1,36 +1,36 @@
-====== Tensor ======
-{{anchor:torch.Tensor.dok}}
+
+# Tensor #
-The ''Tensor'' class is probably the most important class in
-''Torch''. Almost every package depends on this class. It is ***the***
+The `Tensor` class is probably the most important class in
+`Torch`. Almost every package depends on this class. It is *__the__*
class for handling numeric data. As with pretty much anything in
-[[.:..:index|Torch7]], tensors are
-[[File#torch.File.serialization|serializable]].
+[Torch7](./../index.md), tensors are
+[serializable](file.md#torch.File.serialization).
-**Multi-dimensional matrix**
+__Multi-dimensional matrix__
-A ''Tensor'' is a potentially multi-dimensional matrix. The number of
+A `Tensor` is a potentially multi-dimensional matrix. The number of
dimensions is unlimited; tensors with many dimensions can be created using a
-[[Storage|LongStorage]] with more dimensions.
+[LongStorage](storage.md).
Example:
-
+```lua
--- creation of a 4D-tensor 4x5x6x2
z = torch.Tensor(4,5,6,2)
--- for more dimensions, (here a 6D tensor) one can do:
s = torch.LongStorage(6)
s[1] = 4; s[2] = 5; s[3] = 6; s[4] = 2; s[5] = 7; s[6] = 3;
x = torch.Tensor(s)
-
+```
-The number of dimensions of a ''Tensor'' can be queried by
-[[#torch.Tensor.nDimension|nDimension()]] or
-[[#torch.Tensor.dim|dim()]]. Size of the ''i-th'' dimension is
-returned by [[#torch.Tensor.size|size(i)]]. A [[Storage|LongStorage]]
+The number of dimensions of a `Tensor` can be queried by
+[nDimension()](#torch.Tensor.nDimension) or
+[dim()](#torch.Tensor.dim). Size of the `i-th` dimension is
+returned by [size(i)](#torch.Tensor.size). A [LongStorage](storage.md)
containing all the dimensions can be returned by
-[[#torch.Tensor.size|size()]].
+[size()](#torch.Tensor.size).
-
+```lua
> print(x:nDimension())
6
> print(x:size())
@@ -41,37 +41,37 @@ containing all the dimensions can be returned by
7
3
[torch.LongStorage of size 6]
-
-
-**Internal data representation**
-
-The actual data of a ''Tensor'' is contained into a
-[[Storage|Storage]]. It can be accessed using
-[[#torch.Tensor.storage|''storage()'']]. While the memory of a
-''Tensor'' has to be contained in this unique ''Storage'', it might
-not be contiguous: the first position used in the ''Storage'' is given
-by [[#torch.Tensor.storageOffset|''storageOffset()'']] (starting at
-''1''). And the //jump// needed to go from one element to another
-element in the ''i-th'' dimension is given by
-[[#torch.Tensor.stride|''stride(i)'']]. In other words, given a 3D
+```
+
+__Internal data representation__
+
+The actual data of a `Tensor` is contained into a
+[Storage](storage.md). It can be accessed using
+[`storage()`](#torch.Tensor.storage). While the memory of a
+`Tensor` has to be contained in this unique `Storage`, it might
+not be contiguous: the first position used in the `Storage` is given
+by [`storageOffset()`](#torch.Tensor.storageOffset) (starting at
+`1`). And the _jump_ needed to go from one element to another
+element in the `i-th` dimension is given by
+[`stride(i)`](#torch.Tensor.stride). In other words, given a 3D
tensor
-
+```lua
x = torch.Tensor(7,7,7)
-
-accessing the element ''(3,4,5)'' can be done by
-
+```
+accessing the element `(3,4,5)` can be done by
+```lua
= x[3][4][5]
-
+```
or equivalently (but slowly!)
-
+```lua
= x:storage()[x:storageOffset()
+(3-1)*x:stride(1)+(4-1)*x:stride(2)+(5-1)*x:stride(3)]
-
-One could say that a ''Tensor'' is a particular way of //viewing// a
-''Storage'': a ''Storage'' only represents a chunk of memory, while the
-''Tensor'' interprets this chunk of memory as having dimensions:
-
+```
+One could say that a `Tensor` is a particular way of _viewing_ a
+`Storage`: a `Storage` only represents a chunk of memory, while the
+`Tensor` interprets this chunk of memory as having dimensions:
+```lua
> x = torch.Tensor(4,5)
> s = x:storage()
> for i=1,s:size() do -- fill up the Storage
@@ -83,11 +83,11 @@ One could say that a ''Tensor'' is a particular way of //viewing// a
11 12 13 14 15
16 17 18 19 20
[torch.DoubleTensor of dimension 4x5]
-
+```
-Note also that in Torch7 **//elements in the same row//** [elements along the **last** dimension]
+Note also that in Torch7 ___elements in the same row___ [elements along the __last__ dimension]
are contiguous in memory for a matrix [tensor]:
-
+```lua
> x = torch.Tensor(4,5)
> i = 0
>
@@ -107,43 +107,43 @@ are contiguous in memory for a matrix [tensor]:
5
1 -- element in the last dimension are contiguous!
[torch.LongStorage of size 2]
-
-This is exactly like in C (and not ''Fortran'').
+```
+This is exactly like in C (and not `Fortran`).
-**Tensors of different types**
+__Tensors of different types__
-Actually, several types of ''Tensor'' exists:
-
+Actually, several types of `Tensor` exist:
+```lua
ByteTensor -- contains unsigned chars
CharTensor -- contains signed chars
ShortTensor -- contains shorts
IntTensor -- contains ints
FloatTensor -- contains floats
DoubleTensor -- contains doubles
-
+```
-Most numeric operations are implemented //only// for ''FloatTensor'' and ''DoubleTensor''.
+Most numeric operations are implemented _only_ for `FloatTensor` and `DoubleTensor`.
Other Tensor types are useful if you want to save memory space.
-**Default Tensor type**
+__Default Tensor type__
-For convenience, //an alias// ''torch.Tensor'' is provided, which allows the user to write
+For convenience, _an alias_ `torch.Tensor` is provided, which allows the user to write
type-independent scripts, which can then be run after choosing the desired Tensor type with
a call like
-
+```lua
torch.setdefaulttensortype('torch.FloatTensor')
-
-See [[Utility#torch.setdefaulttensortype|torch.setdefaulttensortype]] for more details.
-By default, the alias "points" on ''torch.DoubleTensor''.
+```
+See [torch.setdefaulttensortype](utility.md#torch.setdefaulttensortype) for more details.
+By default, the alias "points" to `torch.DoubleTensor`.
-**Efficient memory management**
+__Efficient memory management__
-//All// tensor operations in this class do //not// make any memory copy. All
+_All_ tensor operations in this class do _not_ make any memory copy. All
these methods transform the existing tensor, or return a new tensor
-referencing //the same storage//. This magical behavior is internally
-obtained by good usage of the [[#torch.Tensor.stride|stride()]] and
-[[#torch.Tensor.storageOffset|storageOffset()]]. Example:
-
+referencing _the same storage_. This magical behavior is internally
+obtained by good usage of the [stride()](#torch.Tensor.stride) and
+[storageOffset()](#torch.Tensor.storageOffset). Example:
+```lua
> x = torch.Tensor(5):zero()
> print(x)
0
@@ -161,48 +161,48 @@ obtained by good usage of the [[#torch.Tensor.stride|stride()]] and
1
0
[torch.Tensor of dimension 5]
-
+```
-If you really need to copy a ''Tensor'', you can use the [[#torch.Tensor.copy|copy()]] method:
-
+If you really need to copy a `Tensor`, you can use the [copy()](#torch.Tensor.copy) method:
+```lua
> y = torch.Tensor(x:size()):copy(x)
-
+```
Or the convenience method
-
+```lua
> y = x:clone()
-
+```
-We now describe all the methods for ''Tensor''. If you want to specify the Tensor type,
-just replace ''Tensor'' by the name of the Tensor variant (like ''CharTensor'').
+We now describe all the methods for `Tensor`. If you want to specify the Tensor type,
+just replace `Tensor` by the name of the Tensor variant (like `CharTensor`).
-===== Tensor constructors =====
-{{anchor:torch.Tensor}}
+
+## Tensor constructors ##
Tensor constructors create new Tensor objects, optionally allocating
new memory. By default the elements of newly allocated memory are
not initialized and might therefore contain arbitrary numbers. Here are
-several ways to construct a new ''Tensor''.
+several ways to construct a new `Tensor`.
-==== torch.Tensor() ====
-{{anchor:torch.Tensor}}
+
+### torch.Tensor() ###
Returns an empty tensor.
-==== torch.Tensor(tensor) ====
-{{anchor:torch.Tensor}}
+
+### torch.Tensor(tensor) ###
Returns a new tensor which references the same
-[[#torch.Tensor.storage|Storage]] than the given ''tensor''. The
-[[#torch.Tensor.size|size]], [[#torch.Tensor.stride|stride]], and
-[[#torch.Tensor.storageOffset|storage offset]] are the same than the
+[Storage](#torch.Tensor.storage) as the given `tensor`. The
+[size](#torch.Tensor.size), [stride](#torch.Tensor.stride), and
+[storage offset](#torch.Tensor.storageOffset) are the same as for the
given tensor.
-The new ''Tensor'' is now going to "view" the same [[Storage|storage]]
-as the given ''tensor''. As a result, any modification in the elements
-of the ''Tensor'' will have a impact on the elements of the given
-''tensor'', and vice-versa. No memory copy!
+The new `Tensor` is now going to "view" the same [storage](storage.md)
+as the given `tensor`. As a result, any modification in the elements
+of the `Tensor` will have an impact on the elements of the given
+`tensor`, and vice-versa. No memory copy!
-
+```lua
> x = torch.Tensor(2,5):fill(3.14)
> print(x)
@@ -223,33 +223,33 @@ of the ''Tensor'' will have a impact on the elements of the given
0 0 0 0 0
0 0 0 0 0
[torch.DoubleTensor of dimension 2x5]
-
+```
-==== torch.Tensor(sz1 [,sz2 [,sz3 [,sz4]]]]) ====
-{{anchor:torch.Tensor}}
+
+### torch.Tensor(sz1 [,sz2 [,sz3 [,sz4]]]) ###
-Create a tensor up to 4 dimensions. The tensor size will be ''sz1 x sz2 x sx3 x sz4''.
+Create a tensor with up to 4 dimensions. The tensor size will be `sz1 x sz2 x sz3 x sz4`.
-==== torch.Tensor(sizes, [strides]) ====
-{{anchor:torch.Tensor}}
+
+### torch.Tensor(sizes, [strides]) ###
Create a tensor of any number of dimensions. The
-[[Storage|LongStorage]] ''sizes'' gives the size in each dimension of
-the tensor. The optional [[Storage|LongStorage]] ''strides'' gives the
+[LongStorage](storage.md) `sizes` gives the size in each dimension of
+the tensor. The optional [LongStorage](storage.md) `strides` gives the
jump necessary to go from one element to the next one in the each
-dimension. Of course, ''sizes'' and ''strides'' must have the same
-number of elements. If not given, or if some elements of ''strides''
-are //negative//, the [[#torch.Tensor.stride|stride()]] will be
+dimension. Of course, `sizes` and `strides` must have the same
+number of elements. If not given, or if some elements of `strides`
+are _negative_, the [stride()](#torch.Tensor.stride) will be
computed such that the tensor is as contiguous as possible in memory.
Example, create a 4D 4x4x3x2 tensor:
-
+```lua
x = torch.Tensor(torch.LongStorage({4,4,3,2}))
-
+```
Playing with the strides can give some interesting things:
-
+```lua
x = torch.Tensor(torch.LongStorage({4}), torch.LongStorage({0})):zero() -- zeroes the tensor
x[1] = 1 -- all elements point to the same address!
print(x)
@@ -259,34 +259,34 @@ print(x)
1
1
[torch.DoubleTensor of dimension 4]
-
+```
-Note that //negative strides are not allowed//, and, if given as
+Note that _negative strides are not allowed_, and, if given as
argument when constructing the Tensor, will be interpreted as _choose
the right stride such that the Tensor is contiguous in memory_.
-==== torch.Tensor(storage, [storageOffset, sizes, [strides]]) ====
-{{anchor:torch.Tensor}}
+
+### torch.Tensor(storage, [storageOffset, sizes, [strides]]) ###
-Returns a tensor which uses the existing [[Storage|Storage]]
-''storage'', starting at position ''storageOffset'' (>=1). The size
+Returns a tensor which uses the existing [Storage](storage.md)
+`storage`, starting at position `storageOffset` (>=1). The size
of each dimension of the tensor is given by the
-[[Storage|LongStorage]] ''sizes''.
+[LongStorage](storage.md) `sizes`.
-If only ''storage'' is provided, it will create a 1D Tensor viewing
+If only `storage` is provided, it will create a 1D Tensor viewing
the whole Storage.
The jump necessary to go from one element to the next one in each
-dimension is given by the optional argument [[Storage|LongStorage]]
-''strides''. If not given, or if some elements of ''strides'' are
-negative, the [[#torch.Tensor.stride|stride()]] will be computed such
+dimension is given by the optional argument [LongStorage](storage.md)
+`strides`. If not given, or if some elements of `strides` are
+negative, the [stride()](#torch.Tensor.stride) will be computed such
that the tensor is as contiguous as possible in memory.
-Any modification in the elements of the ''Storage'' will have an
-impact on the elements of the new ''Tensor'', and vice-versa. There is
+Any modification in the elements of the `Storage` will have an
+impact on the elements of the new `Tensor`, and vice-versa. There is
no memory copy!
-
+```lua
-- creates a storage with 10 elements
> s = torch.Storage(10):fill(1)
@@ -311,39 +311,39 @@ no memory copy!
0
0
[torch.DoubleStorage of size 10]
-
+```
-==== torch.Tensor(storage, [storageOffset, sz1 [, st1 ... [, sz4 [, st4]]]]) ====
-{{anchor:torch.Tensor}}
+
+### torch.Tensor(storage, [storageOffset, sz1 [, st1 ... [, sz4 [, st4]]]]) ###
Convenience constructor (for the previous constructor) assuming a
-number of dimensions inferior or equal to 4. ''szi'' is the size in
-the ''i-th'' dimension, and ''sti'' it the stride in the ''i-th''
+number of dimensions less than or equal to 4. `szi` is the size in
+the `i-th` dimension, and `sti` is the stride in the `i-th`
dimension.
-==== torch.Tensor(table) =====
-{{anchor:torch.Tensor}}
+
+### torch.Tensor(table) ###
The argument is assumed to be a Lua array of numbers. The constructor
returns a new Tensor of the size of the table, containing all the table
elements. The table might be multi-dimensional.
Example:
-
+```lua
> = torch.Tensor({{1,2,3,4}, {5,6,7,8}})
1 2 3 4
5 6 7 8
[torch.DoubleTensor of dimension 2x4]
-
+```
-===== Cloning =====
+## Cloning ##
-==== [Tensor] clone() ====
-{{anchor:torch.Tensor.clone}}
+
+### [Tensor] clone() ###
Returns a clone of a tensor. The memory is copied.
-
+```lua
i = 0
x = torch.Tensor(5):apply(function(x)
i = i + 1
@@ -390,15 +390,15 @@ y:fill(1)
4
5
[torch.DoubleTensor of dimension 5]
-
+```
-==== [Tensor] contiguous ====
-{{anchor:torch.Tensor.contiguous}}
+
+### [Tensor] contiguous ###
* If the given Tensor contents are contiguous in memory, returns the exact same Tensor (no memory copy).
- * Otherwise (//not contiguous in memory//), returns a [[#torch.Tensor.clone|clone]] (memory //copy//).
+ * Otherwise (_not contiguous in memory_), returns a [clone](#torch.Tensor.clone) (memory _copy_).
-
+```lua
x = torch.Tensor(2,3):fill(1)
= x
@@ -436,24 +436,24 @@ z = x:t():contiguous():fill(3.14)
2 2 2
2 2 2
[torch.DoubleTensor of dimension 2x3]
-
+```
-==== [Tensor or string] type(type) ====
-{{anchor:torch.Tensor.type}}
+
+### [Tensor or string] type(type) ###
-**If ''type'' is ''nil''**, returns atring containing the type name of
+__If `type` is `nil`__, returns a string containing the type name of
the given tensor.
-
+```lua
= torch.Tensor():type()
torch.DoubleTensor
-
+```
-**If ''type'' is a string** describing a Tensor type, and is equal to
+__If `type` is a string__ describing a Tensor type, and is equal to
the given tensor typename, returns the exact same tensor (_no memory
copy_).
-
+```lua
x = torch.Tensor(3):fill(3.14)
= x
@@ -481,15 +481,15 @@ y:zero()
0
[torch.DoubleTensor of dimension 3]
-
+```
-**If ''type'' is a string** describing a Tensor type, different from
+__If `type` is a string__ describing a Tensor type, different from
the type name of the given Tensor, returns a new Tensor of the
specified type, whose contents correspond to the contents of the
original Tensor, cast to the given type (_a memory copy occurs, with
possible loss of precision_).
-
+```lua
x = torch.Tensor(3):fill(3.14)
= x
@@ -506,28 +506,28 @@ y = x:type('torch.IntTensor')
3
[torch.IntTensor of dimension 3]
-
+```
-==== [Tensor] typeAs(tensor) ====
-{{anchor:torch.Tensor.typeAs}}
+
+### [Tensor] typeAs(tensor) ###
-Convenience method for the [[#torch.Tensor.type|type]] method. Equivalent to
-
+Convenience method for the [type](#torch.Tensor.type) method. Equivalent to
+```lua
type(tensor:type())
-
+```
-==== [Tensor] byte(), char(), short(), int(), long(), float(), double() ====
-{{anchor:torch.Tensor.byte}}
-{{anchor:torch.Tensor.char}}
-{{anchor:torch.Tensor.short}}
-{{anchor:torch.Tensor.int}}
-{{anchor:torch.Tensor.long}}
-{{anchor:torch.Tensor.float}}
-{{anchor:torch.Tensor.double}}
+
+### [Tensor] byte(), char(), short(), int(), long(), float(), double() ###
+
+
+
+
+
+
-Convenience methods for the [[#torch.Tensor.type|type]] method. For e.g.,
-
+Convenience methods for the [type](#torch.Tensor.type) method. For example,
+```lua
x = torch.Tensor(3):fill(3.14)
= x
@@ -552,30 +552,30 @@ x = torch.Tensor(3):fill(3.14)
3
3
[torch.IntTensor of dimension 3]
-
+```
-===== Querying the size and structure =====
+## Querying the size and structure ##
-==== [number] nDimension() ====
-{{anchor:torch.Tensor.nDimension}}
+
+### [number] nDimension() ###
-Returns the number of dimensions in a ''Tensor''.
-
+Returns the number of dimensions in a `Tensor`.
+```lua
> x = torch.Tensor(4,5) -- a matrix
> = x:nDimension()
2
-
+```
-==== [number] dim() ====
-{{anchor:torch.Tensor.dim}}
+
+### [number] dim() ###
-Same as [[#torch.Tensor.nDimension|nDimension()]].
+Same as [nDimension()](#torch.Tensor.nDimension).
-==== [number] size(dim) ====
-{{anchor:torch.Tensor.size}}
+
+### [number] size(dim) ###
-Returns the size of the specified dimension ''dim''. Example:
-
+Returns the size of the specified dimension `dim`. Example:
+```lua
> x = torch.Tensor(4,5):zero()
> print(x)
@@ -587,14 +587,14 @@ Returns the size of the specified dimension ''dim''. Example:
> return x:size(2) -- gets the number of columns
5
-
+```
-==== [LongStorage] size() ====
-{{anchor:torch.Tensor.size}}
+
+### [LongStorage] size() ###
-Returns a [[Storage|LongStorage]] containing the size of each dimension
+Returns a [LongStorage](storage.md) containing the size of each dimension
of the tensor.
-
+```lua
> x = torch.Tensor(4,5):zero()
> print(x)
@@ -608,19 +608,19 @@ of the tensor.
4
5
[torch.LongStorage of size 2]
-
+```
-==== [LongStorage] #self ====
-{{anchor:torch.Tensor.size}}
+
+### [LongStorage] #self ###
-Same as [[#torch.Tensor.size|size()]] method.
+Same as [size()](#torch.Tensor.size) method.
-==== [number] stride(dim) ====
-{{anchor:torch.Tensor.stride}}
+
+### [number] stride(dim) ###
Returns the jump necessary to go from one element to the next one in the
-specified dimension ''dim''. Example:
-
+specified dimension `dim`. Example:
+```lua
> x = torch.Tensor(4,5):zero()
> print(x)
@@ -638,16 +638,16 @@ specified dimension ''dim''. Example:
--- we need here to jump the size of the row
> return x:stride(1)
5
-
+```
-Note also that in ''Torch'' //elements in the same row// [elements along the **last** dimension]
+Note also that in `Torch` _elements in the same row_ [elements along the __last__ dimension]
are contiguous in memory for a matrix [tensor].
-==== [LongStorage] stride() ====
-{{anchor:torch.Tensor.stride}}
+
+### [LongStorage] stride() ###
Returns the jump necessary to go from one element to the next one in each dimension. Example:
-
+```lua
> x = torch.Tensor(4,5):zero()
> print(x)
@@ -661,17 +661,17 @@ Returns the jump necessary to go from one element to the next one in each dimens
5
1 -- elements are contiguous in a row [last dimension]
[torch.LongStorage of size 2]
-
+```
-Note also that in ''Torch'' //elements in the same row// [elements along the **last** dimension]
+Note also that in `Torch` _elements in the same row_ [elements along the __last__ dimension]
are contiguous in memory for a matrix [tensor].
-==== [Storage] storage() ====
-{{anchor:torch.Tensor.storage}}
+
+### [Storage] storage() ###
-Returns the [[Storage|Storage]] used to store all the elements of the ''Tensor''.
-Basically, a ''Tensor'' is a particular way of //viewing// a ''Storage''.
-
+Returns the [Storage](storage.md) used to store all the elements of the `Tensor`.
+Basically, a `Tensor` is a particular way of _viewing_ a `Storage`.
+```lua
> x = torch.Tensor(4,5)
> s = x:storage()
> for i=1,s:size() do -- fill up the Storage
@@ -684,13 +684,13 @@ Basically, a ''Tensor'' is a particular way of //viewing// a ''Storage''.
11 12 13 14 15
16 17 18 19 20
[torch.DoubleTensor of dimension 4x5]
-
+```
-==== [boolean] isContiguous() ====
-{{anchor:torch.Tensor.isContiguous}}
+
+### [boolean] isContiguous() ###
-Returns ''true'' iff the elements of the ''Tensor'' are contiguous in memory.
-
+Returns `true` iff the elements of the `Tensor` are contiguous in memory.
+```lua
-- normal tensors are contiguous in memory
> x = torch.Tensor(4,5):zero()
> = x:isContiguous()
@@ -706,43 +706,43 @@ false
> = y:stride()
5
[torch.LongStorage of size 1]
-
+```
-==== [number] nElement() ====
-{{anchor:torch.Tensor.nElement}}
+
+### [number] nElement() ###
Returns the number of elements of a tensor.
-
+```lua
> x = torch.Tensor(4,5)
> = x:nElement() -- 4x5 = 20!
20
-
+```
-==== [number] storageOffset() ====
-{{anchor:torch.Tensor.storageOffset}}
+
+### [number] storageOffset() ###
-Return the first index (starting at 1) used in the tensor's [[#torch.Tensor.storage|storage]].
+Returns the first index (starting at 1) used in the tensor's [storage](#torch.Tensor.storage).
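+
+A minimal sketch of how the offset relates to sub-tensor views (using [narrow](#torch.Tensor.narrow), described below):
+```lua
+> x = torch.Tensor(5):zero()
+> = x:storageOffset()
+1
+> y = x:narrow(1, 3, 2) -- a view starting at the third element of x
+> = y:storageOffset()
+3
+```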
-===== Querying elements =====
-{{anchor:torch.Tensor.__index__}}
+
+## Querying elements ##
-Elements of a tensor can be retrieved with the ''[index]'' operator.
+Elements of a tensor can be retrieved with the `[index]` operator.
-If ''index'' is a number, ''[index]'' operator is equivalent to a
-[[#torch.Tensor.select|''select(1, index)'']] if the tensor has more
+If `index` is a number, `[index]` operator is equivalent to a
+[`select(1, index)`](#torch.Tensor.select) if the tensor has more
than one dimension. If the tensor is a 1D tensor, it returns the value
-at ''index'' in this tensor.
+at `index` in this tensor.
-If ''index'' is a table, the table must contain //n// numbers, where
-//n// is the [[#torch.Tensor.nDimension|number of dimensions]] of the
+If `index` is a table, the table must contain _n_ numbers, where
+_n_ is the [number of dimensions](#torch.Tensor.nDimension) of the
Tensor. It will return the element at the given position.
-In the same spirit, ''index'' might be a [[Storage|LongStorage]],
+In the same spirit, `index` might be a [LongStorage](storage.md),
specifying the position (in the Tensor) of the element to be
retrieved.
Example:
-
+```lua
> x = torch.Tensor(3,3)
> i = 0; x:apply(function() i = i + 1; return i end)
> = x
@@ -768,35 +768,35 @@ Example:
> = x[torch.LongStorage{2,3}] -- yet another way to return row 2, column 3
6
-
+```
-===== Referencing a tensor to an existing tensor or chunk of memory =====
-{{anchor:torch.Tensor.set}}
+
+## Referencing a tensor to an existing tensor or chunk of memory ##
-A ''Tensor'' being a way of //viewing// a [[Storage|Storage]], it is
-possible to "set" a ''Tensor'' such that it views an existing [[Storage|Storage]].
+A `Tensor` being a way of _viewing_ a [Storage](storage.md), it is
+possible to "set" a `Tensor` such that it views an existing [Storage](storage.md).
-Note that if you want to perform a set on an empty ''Tensor'' like
-
+Note that if you want to perform a set on an empty `Tensor` like
+```lua
y = torch.Storage(10)
x = torch.Tensor()
x:set(y, 1, 10)
-
-you might want in that case to use one of the [[#torch.Tensor|equivalent constructor]].
-
+```
+you might instead want to use one of the [equivalent constructors](#torch.Tensor):
+```lua
y = torch.Storage(10)
x = torch.Tensor(y, 1, 10)
-
+```
-==== [self] set(tensor) ====
-{{anchor:torch.Tensor.set}}
+
+### [self] set(tensor) ###
-The ''Tensor'' is now going to "view" the same [[#torch.Tensor.storage|storage]]
-as the given ''tensor''. As the result, any modification in the elements of
-the ''Tensor'' will have an impact on the elements of the given ''tensor'', and
+The `Tensor` is now going to "view" the same [storage](#torch.Tensor.storage)
+as the given `tensor`. As a result, any modification in the elements of
+the `Tensor` will have an impact on the elements of the given `tensor`, and
vice-versa. This is an efficient method, as there is no memory copy!
-
+```lua
> x = torch.Tensor(2,5):fill(3.14)
> print(x)
@@ -817,22 +817,22 @@ vice-versa. This is an efficient method, as there is no memory copy!
0 0 0 0 0
0 0 0 0 0
[torch.DoubleTensor of dimension 2x5]
-
+```
-==== [self] set(storage, [storageOffset, sizes, [strides]]) ====
-{{anchor:torch.Tensor.set}}
+
+### [self] set(storage, [storageOffset, sizes, [strides]]) ###
-The ''Tensor'' is now going to "view" the given
-[[Storage|''storage'']], starting at position ''storageOffset'' (>=1)
-with the given [[#torch.Tensor.size|dimension ''sizes'']] and the optional given
-[[#torch.Tensor.stride|''strides'']]. As the result, any modification in the
-elements of the ''Storage'' will have a impact on the elements of the
-''Tensor'', and vice-versa. This is an efficient method, as there is no
+The `Tensor` is now going to "view" the given
+[`storage`](storage.md), starting at position `storageOffset` (>=1)
+with the given [dimension `sizes`](#torch.Tensor.size) and the optional given
+[`strides`](#torch.Tensor.stride). As a result, any modification in the
+elements of the `Storage` will have an impact on the elements of the
+`Tensor`, and vice-versa. This is an efficient method, as there is no
memory copy!
-If only ''storage'' is provided, the whole storage will be viewed as a 1D Tensor.
+If only `storage` is provided, the whole storage will be viewed as a 1D Tensor.
-
+```lua
-- creates a storage with 10 elements
> s = torch.Storage(10):fill(1)
@@ -859,25 +859,25 @@ If only ''storage'' is provided, the whole storage will be viewed as a 1D Tensor
0
0
[torch.DoubleStorage of size 10]
-
+```
-==== [self] set(storage, [storageOffset, sz1 [, st1 ... [, sz4 [, st4]]]]) ====
-{{anchor:torch.Tensor.set}}
+
+### [self] set(storage, [storageOffset, sz1 [, st1 ... [, sz4 [, st4]]]]) ###
This is a "shortcut" for the previous method.
-It works up to 4 dimensions. ''szi'' is the size of the ''i''-th dimension of the tensor.
-''sti'' is the stride in the ''i''-th dimension.
+It works up to 4 dimensions. `szi` is the size of the `i`-th dimension of the tensor.
+`sti` is the stride in the `i`-th dimension.
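+
+A minimal sketch of the shortcut form (the sizes and strides here are illustrative):
+```lua
+> s = torch.Storage(6)
+> x = torch.Tensor()
+> x:set(s, 1, 2, 3, 3, 1) -- view s as a 2x3 tensor: sz1=2, st1=3, sz2=3, st2=1
+> = x:size(1)
+2
+```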
-===== Copying and initializing =====
+## Copying and initializing ##
-==== [self] copy(tensor) ====
-{{anchor:torch.Tensor.copy}}
+
+### [self] copy(tensor) ###
-Copy the elements of the given ''tensor''. The
-[[#torch.Tensor.nElement|number of elements]] must match, but the
+Copy the elements of the given `tensor`. The
+[number of elements](#torch.Tensor.nElement) must match, but the
sizes might be different.
-
+```lua
> x = torch.Tensor(4):fill(1)
> y = torch.Tensor(2,2):copy(x)
> print(x)
@@ -893,16 +893,16 @@ sizes might be different.
1 1
1 1
[torch.DoubleTensor of dimension 2x2]
-
+```
-If a different type of ''tensor'' is given, then a type conversion occurs,
+If a different type of `tensor` is given, then a type conversion occurs,
which, of course, might result in loss of precision.
-==== [self] fill(value) ====
-{{anchor:torch.Tensor.fill}}
+
+### [self] fill(value) ###
-Fill the tensor with the given ''value''.
-
+Fill the tensor with the given `value`.
+```lua
> = torch.DoubleTensor(4):fill(3.14)
3.1400
@@ -910,13 +910,13 @@ Fill the tensor with the given ''value''.
3.1400
3.1400
[torch.DoubleTensor of dimension 4]
-
+```
-==== [self] zero() ====
-{{anchor:torch.Tensor.zero}}
+
+### [self] zero() ###
Fill the tensor with zeros.
-
+```lua
> = torch.Tensor(4):zero()
0
@@ -924,49 +924,49 @@ Fill the tensor with zeros.
0
0
[torch.DoubleTensor of dimension 4]
-
+```
-===== Resizing =====
-{{anchor:torch.Tensor.resize.dok}}
+
+## Resizing ##
-**When resizing to a larger size**, the underlying [[Storage|Storage]] is resized to fit
-all the elements of the ''Tensor''.
+__When resizing to a larger size__, the underlying [Storage](storage.md) is resized to fit
+all the elements of the `Tensor`.
-**When resizing to a smaller size**, the underlying [[#Storage|Storage]] is not resized.
+__When resizing to a smaller size__, the underlying [Storage](storage.md) is not resized.
-**Important note:** the content of a ''Tensor'' after resizing is //undertermined// as [[#torch.Tensor.stride|strides]]
-might have been completely changed. In particular, //the elements of the resized tensor are contiguous in memory//.
+__Important note:__ the content of a `Tensor` after resizing is _undetermined_ as [strides](#torch.Tensor.stride)
+might have been completely changed. In particular, _the elements of the resized tensor are contiguous in memory_.
-==== [self] resizeAs(tensor) ====
-{{anchor:torch.Tensor.resizeAs}}
+
+### [self] resizeAs(tensor) ###
-Resize the ''tensor'' as the given ''tensor'' (of the same type).
+Resize the tensor to the same size as the given `tensor` (of the same type).
-==== [self] resize(sizes) ====
-{{anchor:torch.Tensor.resize}}
+
+### [self] resize(sizes) ###
-Resize the ''tensor'' according to the given [[Storage|LongStorage]] ''size''.
+Resize the `tensor` according to the given [LongStorage](storage.md) `sizes`.
-==== [self] resize(sz1 [,sz2 [,sz3 [,sz4]]]]) ====
-{{anchor:torch.Tensor.resize}}
+
+### [self] resize(sz1 [, sz2 [, sz3 [, sz4]]]) ###
Convenience form of the previous method, working for up to 4 dimensions.
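+
+For example (remember that the values of the resized tensor are undetermined):
+```lua
+> x = torch.Tensor(2,2):fill(1)
+> x:resize(3,3) -- the underlying storage grows to hold 9 elements
+> = x:size(1)
+3
+> = x:nElement()
+9
+```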
-===== Extracting sub-tensors =====
+## Extracting sub-tensors ##
-Each of these methods returns a ''Tensor'' which is a sub-tensor of the given
-tensor, //with the same ''Storage''//. Hence, any modification in the memory of
+Each of these methods returns a `Tensor` which is a sub-tensor of the given
+tensor, _with the same `Storage`_. Hence, any modification in the memory of
the sub-tensor will have an impact on the primary tensor, and vice-versa.
These methods are very fast, as they do not involve any memory copy.
-==== [Tensor] narrow(dim, index, size) ====
-{{anchor:torch.Tensor.narrow}}
+
+### [Tensor] narrow(dim, index, size) ###
-Returns a new ''Tensor'' which is a narrowed version of the current one: the dimension ''dim'' is narrowed
-from ''index'' to ''index+size-1''.
+Returns a new `Tensor` which is a narrowed version of the current one: the dimension `dim` is narrowed
+from `index` to `index+size-1`.
-
+```lua
> x = torch.Tensor(5, 6):zero()
> print(x)
@@ -994,19 +994,19 @@ from ''index'' to ''index+size-1''.
1 1 1 1 1 1
0 0 0 0 0 0
[torch.DoubleTensor of dimension 5x6]
-
+```
-==== [Tensor] sub(dim1s, dim1e ... [, dim4s [, dim4e]]) ====
-{{anchor:torch.Tensor.sub}}
+
+### [Tensor] sub(dim1s, dim1e ... [, dim4s [, dim4e]]) ###
This method is equivalent to doing a series of
-[[#torch.Tensor.narrow|narrow]] up to the first 4 dimensions. It
-returns a new ''Tensor'' which is a sub-tensor going from index
-''dimis'' to ''dimie'' in the ''i''-th dimension. Negative values are
-interpreted index starting from the end: ''-1'' is the last index,
-''-2'' is the index before the last index, ...
+[narrow](#torch.Tensor.narrow) up to the first 4 dimensions. It
+returns a new `Tensor` which is a sub-tensor going from index
+`dimis` to `dimie` in the `i`-th dimension. Negative values are
+interpreted as indexes starting from the end: `-1` is the last index,
+`-2` is the index before the last index, ...
-
+```lua
> x = torch.Tensor(5, 6):zero()
> print(x)
@@ -1055,19 +1055,19 @@ interpreted index starting from the end: ''-1'' is the last index,
2 2
[torch.DoubleTensor of dimension 1x2]
-
+```
-==== [Tensor] select(dim, index) ====
-{{anchor:torch.Tensor.select}}
+
+### [Tensor] select(dim, index) ###
-Returns a new ''Tensor'' which is a tensor slice at the given ''index'' in the
-dimension ''dim''. The returned tensor has one less dimension: the dimension
-''dim'' is removed. As a result, it is not possible to ''select()'' on a 1D
+Returns a new `Tensor` which is a tensor slice at the given `index` in the
+dimension `dim`. The returned tensor has one less dimension: the dimension
+`dim` is removed. As a result, it is not possible to `select()` on a 1D
tensor.
-Note that "selecting" on the first dimension is equivalent to use the [[#torch.Tensor.__index__ |[] operator]]
+Note that "selecting" on the first dimension is equivalent to using the [[] operator](#torch.Tensor.__index__).
-
+```lua
> x = torch.Tensor(5,6):zero()
> print(x)
@@ -1116,16 +1116,16 @@ Note that "selecting" on the first dimension is equivalent to use the [[#torch.T
0 0 0 0 5 0
0 0 0 0 5 0
[torch.DoubleTensor of dimension 5x6]
-
+```
-==== [Tensor] [{ dim1,dim2,... }] or [{ {dim1s,dim1e}, {dim2s,dim2e} }] ====
-{{anchor:torch.Tensor.index}}
+
+### [Tensor] [{ dim1,dim2,... }] or [{ {dim1s,dim1e}, {dim2s,dim2e} }] ###
The indexing operator [] can be used to combine narrow/sub and
select in a concise and efficient way. It can also be used
to copy, and fill (sub) tensors.
-
+```lua
> x = torch.Tensor(5, 6):zero()
> print(x)
@@ -1175,14 +1175,14 @@ to copy, and fill (sub) tensors.
0 4 0 -1 0 0
0 5 0 -1 0 0
[torch.DoubleTensor of dimension 5x6]
-
+```
-==== [Tensor] index(dim, index) ====
-{{anchor:torch.Tensor.index}}
+
+### [Tensor] index(dim, index) ###
-Returns a new ''Tensor'' which indexes the given tensor along dimension ''dim'' and using the entries in ''torch.LongTensor'' ''index''. The returned tensor has the same number of dimensions as the original tensor. The returned tensor does **not** use the same storage as the original tensor.
+Returns a new `Tensor` which indexes the given tensor along dimension `dim` using the entries in the `torch.LongTensor` `index`. The returned tensor has the same number of dimensions as the original tensor. The returned tensor does __not__ use the same storage as the original tensor.
-
+```lua
t7> x = torch.rand(5,5)
t7> =x
0.8020 0.7246 0.1204 0.3419 0.4385
@@ -1211,16 +1211,16 @@ t7> =x
0.1412 0.6784 0.1624 0.8113 0.3949
[torch.DoubleTensor of dimension 5x5]
-
+```
-Note the explicit ''index'' function is different than the indexing operator ''[]''. The indexing operator ''[]'' is a syntactic shortcut for a series of select and narrow operations, therefore it always returns a new view on the original tensor that shares the same storage. However, he explicit ''index'' function can not use the same storage.
+Note that the explicit `index` function is different from the indexing operator `[]`. The indexing operator `[]` is a syntactic shortcut for a series of select and narrow operations, so it always returns a new view on the original tensor that shares the same storage. The explicit `index` function, however, cannot use the same storage.
-==== [Tensor] indexCopy(dim, index, tensor) ====
-{{anchor:torch.Tensor.indexCopy}}
+
+### [Tensor] indexCopy(dim, index, tensor) ###
-Copies the elements of ''tensor'' into itself by selecting the indices in the order defined by the order given in ''index''.
+Copies the elements of `tensor` into itself by selecting, along the given dimension, the indices in the order given in `index`.
-
+```lua
t7> =x
0.8020 0.7246 0.1204 0.3419 0.4385
0.0369 0.4158 0.0985 0.3024 0.8186
@@ -1248,14 +1248,14 @@ t7> =x
-2.0000 0.6784 0.1624 0.8113 -1.0000
[torch.DoubleTensor of dimension 5x5]
-
+```
-==== [Tensor] indexFill(dim, index, val) ====
-{{anchor:torch.Tensor.indexFill}}
+
+### [Tensor] indexFill(dim, index, val) ###
-Fills the elements of itself with value ''val'' by selecting the indices in the order defined by the order given in ''index''.
+Fills the elements of itself with the value `val` by selecting, along the given dimension, the indices in the order given in `index`.
-
+```lua
t7> x=torch.rand(5,5)
t7> =x
0.8414 0.4121 0.3934 0.5600 0.5403
@@ -1274,22 +1274,22 @@ t7> =x
0.8739 -10.0000 0.4271 -10.0000 0.9116
[torch.DoubleTensor of dimension 5x5]
-
+```
-===== Expanding/Replicating Tensors =====
+## Expanding/Replicating Tensors ##
-These methods returns a ''Tensor'' which is created by replications of the
+These methods return a `Tensor` which is created by replicating the
original tensor.
-=== [Tensor] expand(sizes) ===
-{{anchor:torch.Tensor.expand}}
+
+#### [Tensor] expand(sizes) ####
-''sizes'' can either be a ''torch.LongStorage'' or numbers. Expanding a tensor
+`sizes` can either be a `torch.LongStorage` or numbers. Expanding a tensor
does not allocate new memory, but only creates a new view on the existing tensor where
-singleton dimensions can be expanded to multiple ones by setting the ''stride'' to 0.
+singleton dimensions can be expanded to multiple ones by setting the `stride` to 0.
Any dimension that is 1 can be expanded to an arbitrary value without any new memory allocation.
-
+```lua
t7> x=torch.rand(10,1)
t7> =x
0.3837
@@ -1372,20 +1372,20 @@ t7> =x
20
[torch.DoubleTensor of dimension 10x1]
-
+```
-=== [Tensor] expandAs(tensor) ===
-{{anchor:torch.Tensor.expandAs}}
+
+#### [Tensor] expandAs(tensor) ####
This is equivalent to `self:expand(tensor:size())`.
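+
+A small sketch of the equivalence (no new memory is allocated):
+```lua
+> x = torch.Tensor(3,1):fill(1)
+> y = torch.Tensor(3,4)
+> z = x:expandAs(y) -- same view as x:expand(3,4)
+> = z:size(2)
+4
+```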
-=== [Tensor] repeatTensor(sizes) ===
-{{anchor:torch.Tensor.repeatTensor}}
+
+#### [Tensor] repeatTensor(sizes) ####
-''sizes'' can either be a ''torch.LongStorage'' or numbers. Repeating a tensor allocates
- new memory. ''sizes'' specify the number of times the tensor is repeated in each dimension.
+`sizes` can either be a `torch.LongStorage` or numbers. Repeating a tensor allocates
+new memory. `sizes` specifies the number of times the tensor is repeated in each dimension.
-
+```lua
t7> x=torch.rand(5)
t7> =x
0.7160
@@ -1415,24 +1415,24 @@ t7> return torch.repeatTensor(x,3,2,1)
0.7160 0.6514 0.0704 0.7856 0.7452
[torch.DoubleTensor of dimension 3x2x5]
-
+```
-===== Manipulating the tensor view =====
+## Manipulating the tensor view ##
-Each of these methods returns a ''Tensor'' which is another way of viewing
-the ''Storage'' of the given tensor. Hence, any modification in the memory of
+Each of these methods returns a `Tensor` which is another way of viewing
+the `Storage` of the given tensor. Hence, any modification in the memory of
the sub-tensor will have an impact on the primary tensor, and vice-versa.
These methods are very fast, as they do not involve any memory copy.
-==== [Tensor] transpose(dim1, dim2) ====
-{{anchor:torch.Tensor.transpose}}
+
+### [Tensor] transpose(dim1, dim2) ###
-Returns a tensor where dimensions ''dim1'' and ''dim2'' have been swapped. For 2D tensors,
-the convenience method of [[#torch.Tensor.t|t()]] is available.
-
+Returns a tensor where dimensions `dim1` and `dim2` have been swapped. For 2D tensors,
+the convenience method of [t()](#torch.Tensor.t) is available.
+```lua
> x = torch.Tensor(3,4):zero()
> x:select(2,3):fill(7) -- fill column 3 with 7
> print(x)
@@ -1466,15 +1466,15 @@ the convenience method of [[#torch.Tensor.t|t()]] is available.
0 0 7 0
8 8 8 8
[torch.DoubleTensor of dimension 3x4]
-
+```
-==== [Tensor] t() ====
-{{anchor:torch.Tensor.t}}
+
+### [Tensor] t() ###
-Convenience method of [[#torch.Tensor.transpose|transpose()]] for 2D
+Convenience method of [transpose()](#torch.Tensor.transpose) for 2D
tensors. The given tensor must be 2 dimensional. Swap dimensions 1 and 2.
-
+```lua
> x = torch.Tensor(3,4):zero()
> x:select(2,3):fill(7)
> y = x:t()
@@ -1492,20 +1492,20 @@ tensors. The given tensor must be 2 dimensional. Swap dimensions 1 and 2.
0 0 7 0
0 0 7 0
[torch.DoubleTensor of dimension 3x4]
-
+```
-==== [Tensor] unfold(dim, size, step) ====
-{{anchor:torch.Tensor.unfold}}
+
+### [Tensor] unfold(dim, size, step) ###
-Returns a tensor which contains all slices of size ''size'' in the dimension ''dim''. Step between
-two slices is given by ''step''.
+Returns a tensor which contains all slices of size `size` in the dimension `dim`. Step between
+two slices is given by `step`.
-If ''sizedim'' is the original size of dimension ''dim'', the size of dimension
-''dim'' in the returned tensor will be ''(sizedim - size) / step + 1''
+If `sizedim` is the original size of dimension `dim`, the size of dimension
+`dim` in the returned tensor will be `(sizedim - size) / step + 1`
-An additional dimension of size ''size'' is appended in the returned tensor.
+An additional dimension of size `size` is appended in the returned tensor.
-
+```lua
> x = torch.Tensor(7)
> for i=1,7 do x[i] = i end
> print(x)
@@ -1535,17 +1535,17 @@ An additional dimension of size ''size'' is appended in the returned tensor.
3 4
5 6
[torch.DoubleTensor of dimension 3x2]
-
+```
-===== Applying a function to a tensor =====
+## Applying a function to a tensor ##
These functions apply a function to each element of the tensor on which the
-method is called (self). These methods are much faster than using a ''for''
-loop in ''Lua''. The results is stored in ''self'' (if the function returns
+method is called (self). These methods are much faster than using a `for`
+loop in `Lua`. The result is stored in `self` (if the function returns
something).
-==== [self] apply(function) ====
-{{anchor:torch.Tensor.apply}}
+
+### [self] apply(function) ###
Apply the given function to all elements of self.
@@ -1553,7 +1553,7 @@ The function takes a number (the current element of the tensor) and might return
a number, in which case it will be stored in self.
Examples:
-
+```lua
> i = 0
> z = torch.Tensor(3,3)
> z:apply(function(x)
@@ -1583,19 +1583,19 @@ Examples:
1.9552094821074
> = z:sum() -- it is indeed correct!
1.9552094821074
-
+```
-==== [self] map(tensor, function(xs, xt)) ====
-{{anchor:torch.Tensor.map}}
+
+### [self] map(tensor, function(xs, xt)) ###
-Apply the given function to all elements of self and ''tensor''. The number of elements of both tensors
+Apply the given function to all elements of self and `tensor`. The number of elements of both tensors
must match, but sizes do not matter.
-The function takes two numbers (the current element of self and ''tensor'') and might return
+The function takes two numbers (the current element of self and `tensor`) and might return
a number, in which case it will be stored in self.
Example:
-
+```lua
> x = torch.Tensor(3,3)
> y = torch.Tensor(9)
> i = 0
@@ -1629,19 +1629,19 @@ Example:
16 25 36
49 64 81
[torch.DoubleTensor of dimension 3x3]
-
+```
-==== [self] map2(tensor1, tensor2, function(x, xt1, xt2)) ====
-{{anchor:torch.Tensor.map2}}
+
+### [self] map2(tensor1, tensor2, function(x, xt1, xt2)) ###
-Apply the given function to all elements of self, ''tensor1'' and ''tensor2''. The number of elements of all tensors
+Apply the given function to all elements of self, `tensor1` and `tensor2`. The number of elements of all tensors
must match, but sizes do not matter.
-The function takes three numbers (the current element of self, ''tensor1'' and ''tensor2'') and might return
+The function takes three numbers (the current element of self, `tensor1` and `tensor2`) and might return
a number, in which case it will be stored in self.
Example:
-
+```lua
> x = torch.Tensor(3,3)
> y = torch.Tensor(9)
> z = torch.Tensor(3,3)
@@ -1686,5 +1686,6 @@ Example:
16.4272 25.0805 36.9219
49.5684 64.0212 81.8302
[torch.DoubleTensor of dimension 3x3]
-
+```
+
diff --git a/doc/tester.md b/doc/tester.md
new file mode 100644
index 00000000..2b7f7e8a
--- /dev/null
+++ b/doc/tester.md
@@ -0,0 +1,151 @@
+
+# Tester #
+
+This class provides a generic unit testing framework. It is already
+being used in the [nn](../nn/index.md) package to verify the correctness of classes.
+
+The framework is generally used as follows.
+
+```lua
+mytest = {}
+
+tester = torch.Tester()
+
+function mytest.TestA()
+ local a = 10
+ local b = 10
+ tester:asserteq(a,b,'a == b')
+ tester:assertne(a,b,'a ~= b')
+end
+
+function mytest.TestB()
+ local a = 10
+ local b = 9
+ tester:assertlt(a,b,'a < b')
+ tester:assertgt(a,b,'a > b')
+end
+
+tester:add(mytest)
+tester:run()
+
+```
+
+Running this code will report 2 errors in 2 test functions. Generally it is
+better to put a single test case in each test function, unless several closely related
+test cases exist. The error report includes the message and line number of the error.
+
+```
+
+Running 2 tests
+** ==> Done
+
+Completed 2 tests with 2 errors
+
+--------------------------------------------------------------------------------
+TestB
+a < b
+ LT(<) violation val=10, condition=9
+ ...y/usr.t7/local.master/share/lua/5.1/torch/Tester.lua:23: in function 'assertlt'
+ [string "function mytest.TestB()..."]:4: in function 'f'
+
+--------------------------------------------------------------------------------
+TestA
+a ~= b
+ NE(~=) violation val=10, condition=10
+ ...y/usr.t7/local.master/share/lua/5.1/torch/Tester.lua:38: in function 'assertne'
+ [string "function mytest.TestA()..."]:5: in function 'f'
+
+--------------------------------------------------------------------------------
+
+```
+
+
+
+### torch.Tester() ###
+
+Returns a new instance of `torch.Tester` class.
+
+
+### add(f, 'name') ###
+
+Adds the function `f` as a new test with name `name`.
+The function is supposed to run without any arguments and not return any values.
+
+
+### add(ftable) ###
+
+Recursively adds all function entries of the table `ftable` as tests. This table
+can only have functions or nested tables of functions.
+
+
+### assert(condition [, message]) ###
+
+Saves an error if `condition` is not true, with the optional message.
+
+
+### assertlt(val, condition [, message]) ###
+
+Saves an error if `val < condition` is not true with the optional message.
+
+
+### assertgt(val, condition [, message]) ###
+
+Saves an error if `val > condition` is not true with the optional message.
+
+
+### assertle(val, condition [, message]) ###
+
+Saves an error if `val <= condition` is not true with the optional message.
+
+
+### assertge(val, condition [, message]) ###
+
+Saves an error if `val >= condition` is not true with the optional message.
+
+
+### asserteq(val, condition [, message]) ###
+
+Saves an error if `val == condition` is not true with the optional message.
+
+
+### assertne(val, condition [, message]) ###
+
+Saves an error if `val ~= condition` is not true with the optional message.
+
+
+### assertTensorEq(ta, tb, condition [, message]) ###
+
+Saves an error if `max(abs(ta-tb)) < condition` is not true with the optional message.
+
+
+### assertTensorNe(ta, tb, condition [, message]) ###
+
+Saves an error if `max(abs(ta-tb)) >= condition` is not true with the optional message.
+
+
+### assertTableEq(ta, tb, condition [, message]) ###
+
+Saves an error if the tables `ta` and `tb` are not equal (compared element by element), with the optional message.
+
+
+### assertTableNe(ta, tb, condition [, message]) ###
+
+Saves an error if the tables `ta` and `tb` are equal (compared element by element), with the optional message.
+
+
+### assertError(f [, message]) ###
+
+Saves an error if calling the function f() does not return an error, with the optional message.
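+
+For example, continuing the `mytest`/`tester` setup sketched above:
+```lua
+function mytest.TestC()
+   -- passes: the wrapped call does raise an error
+   tester:assertError(function() error('boom') end, 'expected an error')
+end
+```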
+
+
+### run() ###
+
+Runs all the test functions that were stored using the [add()](#torch.Tester.add) function.
+While running it reports progress and at the end gives a summary of all errors.
+
+
+
+
+
+
+
diff --git a/doc/timer.md b/doc/timer.md
new file mode 100644
index 00000000..f88f99d4
--- /dev/null
+++ b/doc/timer.md
@@ -0,0 +1,47 @@
+
+# Timer #
+
+This class is able to measure time (in seconds) elapsed in a particular period. Example:
+```lua
+ timer = torch.Timer() -- the Timer starts to count now
+ x = 0
+ for i=1,1000000 do
+ x = x + math.sin(x)
+ end
+ print('Time elapsed for 1,000,000 sin: ' .. timer:time().real .. ' seconds')
+```
+
+
+## Timer Class Constructor and Methods ##
+
+
+### torch.Timer() ###
+
+Returns a new `Timer`. The timer starts to count the time now.
+
+
+### [self] reset() ###
+
+Reset the timer accumulated time to `0`. If the timer was running, the timer
+restarts to count the time now. If the timer was stopped, it stays stopped.
+
+
+### [self] resume() ###
+
+Resume a stopped timer. The timer restarts counting the time, adding to
+the accumulated time the time already counted before the timer was stopped.
+
+
+### [self] stop() ###
+
+Stop the timer. The accumulated time counted until now is stored.
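+
+A small sketch combining `stop()` and `resume()` (the elapsed values depend on the machine):
+```lua
+timer = torch.Timer()
+-- ... timed work ...
+timer:stop()   -- freeze the accumulated time
+-- ... untimed work: not counted ...
+timer:resume() -- continue counting from the accumulated time
+-- ... more timed work ...
+print(timer:time().real)
+```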
+
+
+### [table] time() ###
+
+Returns a table reporting the accumulated time elapsed until now. Following the UNIX shell `time` command,
+there are three fields in the table:
+ * `real`: the wall-clock elapsed time.
+ * `user`: the elapsed CPU time. Note that the CPU time of a threaded program sums time spent in all threads.
+ * `sys`: the elapsed system time (time spent in system calls).
+
diff --git a/doc/utility.md b/doc/utility.md
new file mode 100644
index 00000000..a285e06c
--- /dev/null
+++ b/doc/utility.md
@@ -0,0 +1,254 @@
+
+# Torch utility functions #
+
+These functions are used throughout the Torch package for creating and handling classes.
+The most interesting function is probably [torch.class()](#torch.class), which allows
+the user to easily create new classes. [torch.typename()](#torch.typename) is also
+useful for checking the class of a given Torch object.
+
+The other functions are more for advanced users.
+
+
+### [metatable] torch.class(name, [parentName]) ###
+
+Creates a new `Torch` class called `name`. If `parentName` is provided, the class will inherit
+the methods of `parentName`. A class is a table with a particular metatable.
+
+If `name` is of the form `package.className` then the class `className` will be added to the specified `package`.
+In that case, `package` has to be a valid (and already loaded) package. If `name` does not contain any `"."`,
+then the class will be defined in the global environment.
+
+One (meta)table is returned, or two if `parentName` was given. These tables contain
+all the methods provided by the class (and by its parent class, if one was provided).
+After a call to `torch.class()` you have to fill up the metatable with the class methods.
+
+After the class definition is complete, a new object of class `name` is constructed by calling `name()`.
+This call first invokes the method `__init()` if it exists, passing along all arguments of `name()`.
+
+```lua
+ require "torch"
+
+ -- for naming convenience
+ do
+ --- creates a class "Foo"
+ local Foo = torch.class('Foo')
+
+ --- the initializer
+ function Foo:__init()
+ self.contents = "this is some text"
+ end
+
+ --- a method
+ function Foo:print()
+ print(self.contents)
+ end
+
+ --- another one
+ function Foo:bip()
+ print('bip')
+ end
+
+ end
+
+ --- now create an instance of Foo
+ foo = Foo()
+
+ --- try it out
+ foo:print()
+
+ --- create a class torch.Bar which
+ --- inherits from Foo
+ do
+ local Bar, parent = torch.class('torch.Bar', 'Foo')
+
+ --- the initializer
+ function Bar:__init(stuff)
+ --- call the parent initializer on ourself
+ parent.__init(self)
+
+ --- do some stuff
+ self.stuff = stuff
+ end
+
+ --- a new method
+ function Bar:boing()
+ print('boing!')
+ end
+
+ --- override parent's method
+ function Bar:print()
+ print(self.contents)
+ print(self.stuff)
+ end
+ end
+
+ --- create a new instance and use it
+ bar = torch.Bar("ha ha!")
+ bar:print() -- overridden method
+ bar:boing() -- child method
+ bar:bip() -- parent's method
+
+```
+
+For advanced users, it is worth mentioning that `torch.class()` actually
+calls [torch.newmetatable()](#torch.newmetatable) with a particular
+constructor. The constructor creates a Lua table, sets the right
+metatable on it, and then calls `__init()` if it exists in the
+metatable. It also sets a [factory](#torch.factory) field `__factory` such that it
+is possible to create an empty object of this class.
+
+
+### [string] torch.typename(object) ###
+
+Checks if `object` has a metatable. If it does, and if it corresponds to a
+`Torch` class, returns a string containing the name of the
+class. Returns `nil` in all other cases.
+
+A Torch class is a class created with [torch.class()](#torch.class) or
+[torch.newmetatable()](#torch.newmetatable).
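+
+For instance, a quick sketch (the exact type name printed depends on the
+default tensor type):
+```lua
+x = torch.Tensor(5)
+print(torch.typename(x))   -- e.g. 'torch.DoubleTensor'
+print(torch.typename({}))  -- nil: a plain Lua table is not a Torch class
+```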
+
+
+### [userdata] torch.typename2id(string) ###
+
+Given a Torch class name specified by `string`, returns a unique
+corresponding id (defined by a `lightuserdata` pointing to the internal
+structure of the class). This can be useful for a _fast_ check of the
+class of an object (when used with [torch.id()](#torch.id)), avoiding string
+comparisons.
+
+Returns `nil` if `string` does not specify a Torch object.
+
+
+### [userdata] torch.id(object) ###
+
+Returns a unique id corresponding to the _class_ of the given Torch object.
+The id is defined by a `lightuserdata` pointing to the internal structure
+of the class.
+
+Returns `nil` if `object` is not a Torch object.
+
+This is different from the _object_ id returned by [torch.pointer()](#torch.pointer).
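+
+Combining `torch.typename2id()` with `torch.id()` gives a fast class test
+(a sketch; `isDoubleTensor` is a hypothetical helper, and the id values
+themselves are opaque):
+```lua
+local doubleTensorId = torch.typename2id('torch.DoubleTensor')  -- computed once
+
+local function isDoubleTensor(obj)
+   -- comparing lightuserdata ids is cheaper than comparing type-name strings
+   return torch.id(obj) == doubleTensorId
+end
+
+print(isDoubleTensor(torch.DoubleTensor(3)))  -- true
+print(isDoubleTensor({}))                     -- false: torch.id() returns nil here
+```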
+
+
+### [table] torch.newmetatable(name, parentName, constructor) ###
+
+Register a new metatable as a Torch type with the given string `name`. The new metatable is returned.
+
+If the string `parentName` is not `nil` and is a valid Torch type (previously created
+by `torch.newmetatable()`), then the metatable registered under `parentName` is set as
+the metatable of the returned new metatable, implementing inheritance.
+
+If the given `constructor` function is not `nil`, then the constructor is assigned to the variable `name`.
+The given `name` might be of the form `package.className`, in which case the `className` will be local to the
+specified `package`. In that case, `package` must be a valid and already loaded package.
+
+
+### [function] torch.factory(name) ###
+
+Returns the factory function of the Torch class `name`. If the class name is invalid or if the class
+has no factory, then returns `nil`.
+
+A Torch class is a class created with [torch.class()](#torch.class) or
+[torch.newmetatable()](#torch.newmetatable).
+
+A factory function is able to return a new (empty) object of its corresponding class. This is helpful for
+[object serialization](file.md#torch.File.serialization).
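+
+A minimal sketch of using a factory to build an empty instance (note that no
+`__init()` is called):
+```lua
+local factory = torch.factory('torch.DoubleTensor')
+if factory then
+   local empty = factory()         -- empty object, no constructor arguments
+   print(torch.typename(empty))    -- 'torch.DoubleTensor'
+end
+```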
+
+
+### [table] torch.getmetatable(string) ###
+
+Given a `string`, returns a metatable corresponding to the Torch class described
+by `string`. Returns `nil` if the class does not exist.
+
+A Torch class is a class created with [torch.class()](#torch.class) or
+[torch.newmetatable()](#torch.newmetatable).
+
+Example:
+```lua
+> for k,v in pairs(torch.getmetatable("torch.CharStorage")) do print(k,v) end
+__index__ function: 0x1a4ba80
+__typename torch.CharStorage
+write function: 0x1a49cc0
+__tostring__ function: 0x1a586e0
+__newindex__ function: 0x1a4ba40
+string function: 0x1a4d860
+__version 1
+copy function: 0x1a49c80
+read function: 0x1a4d840
+__len__ function: 0x1a37440
+fill function: 0x1a375c0
+resize function: 0x1a37580
+__index table: 0x1a4a080
+size function: 0x1a4ba20
+```
+
+
+### [boolean] torch.isequal(object1, object2) ###
+
+If the two objects given as arguments are `Lua` tables (or Torch objects), returns `true` if and only if the
+tables (or Torch objects) have the same address in memory. Returns `false` in all other cases.
+
+A Torch class is a class created with [torch.class()](#torch.class) or
+[torch.newmetatable()](#torch.newmetatable).
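+
+For instance (a minimal sketch):
+```lua
+a = {1, 2}
+b = {1, 2}
+print(torch.isequal(a, a))  -- true: same address
+print(torch.isequal(a, b))  -- false: equal contents but different addresses
+```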
+
+
+### [string] torch.getdefaulttensortype() ###
+
+Returns a string representing the default tensor type currently in use
+by Torch7.
+
+
+### [table] torch.getenv(function or userdata) ###
+
+Returns the Lua `table` environment of the given `function` or the given
+`userdata`. To know more about environments, please read the documentation
+of [lua_setfenv()](http://www.lua.org/manual/5.1/manual.html#lua_setfenv)
+and [lua_getfenv()](http://www.lua.org/manual/5.1/manual.html#lua_getfenv).
+
+
+### [number] torch.version(object) ###
+
+Returns the field `__version` of a given object. This can be
+helpful for handling different versions of a class over time.
+
+
+### [number] torch.pointer(object) ###
+
+Returns a unique id (pointer) of the given `object`, which can be a Torch
+object, a table, a thread or a function.
+
+This is different from the _class_ id returned by [torch.id()](#torch.id).
+
+
+### torch.setdefaulttensortype([typename]) ###
+
+Sets the default tensor type for all the tensors allocated from this
+point on. Valid types are:
+ * `ByteTensor`
+ * `CharTensor`
+ * `ShortTensor`
+ * `IntTensor`
+ * `FloatTensor`
+ * `DoubleTensor`
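+
+A sketch of switching the default type (in Torch7 the type name is passed with
+its `torch.` prefix):
+```lua
+torch.setdefaulttensortype('torch.FloatTensor')
+x = torch.Tensor(3)               -- allocated as a FloatTensor
+print(torch.typename(x))          -- 'torch.FloatTensor'
+torch.setdefaulttensortype('torch.DoubleTensor')  -- restore the usual default
+```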
+
+
+### torch.setenv(function or userdata, table) ###
+
+Assign `table` as the Lua environment of the given `function` or the given
+`userdata`. To know more about environments, please read the documentation
+of [lua_setfenv()](http://www.lua.org/manual/5.1/manual.html#lua_setfenv)
+and [lua_getfenv()](http://www.lua.org/manual/5.1/manual.html#lua_getfenv).
+
+
+### [object] torch.setmetatable(table, classname) ###
+
+Sets the metatable of the given `table` to the metatable of the Torch
+class named `classname`. This function must be used with great care.
+
+
+### [table] torch.getconstructortable(string) ###
+
+_This function is known to be buggy._
+Returns the constructor table of the Torch class specified by `string`.
+
diff --git a/dok/diskfile.dok b/dok/diskfile.dok
deleted file mode 100644
index 87c372a1..00000000
--- a/dok/diskfile.dok
+++ /dev/null
@@ -1,64 +0,0 @@
-====== DiskFile ======
-{{anchor:torch.DiskFile.dok}}
-
-Parent classes: [[File|File]]
-
-A ''DiskFile'' is a particular ''File'' which is able to perform basic read/write operations
-on a file stored on disk. It implements all methods described in [[File|File]], and
-some additional methods relative to //endian// encoding.
-
-By default, a ''DiskFile'' is in [[File#torch.File.binary|ASCII]] mode. If changed to
-the [[File#torch.File.binary|binary]] mode, the default endian encoding is the native
-computer one.
-
-The file might be open in read, write, or read-write mode, depending on the parameter
-''mode'' (which can take the value ''"r"'', ''"w"'' or ''"rw"'' respectively)
-given to the [[#torch.DiskFile|torch.DiskFile(fileName, mode)]].
-
-==== torch.DiskFile(fileName, [mode], [quiet]) ====
-{{anchor:torch.DiskFile}}
-
-//Constructor// which opens ''fileName'' on disk, using the given ''mode''. Valid ''mode'' are
-''"r"'' (read), ''"w"'' (write) or ''"rw"'' (read-write). Default is read mode.
-
-If read-write mode, the file //will be created// if it does not exists. If it
-exists, it will be positionned at the beginning of the file after opening.
-
-If (and only if) ''quiet'' is ''true'', no error will be raised in case of
-problem opening the file: instead ''nil'' will be returned.
-
-The file is opened in [[File#torch.File.ascii|ASCII]] mode by default.
-
-==== bigEndianEncoding() ====
-{{anchor:torch.DiskFile.bigEndianEncoding}}
-
-In [[file#torch.File.binary|binary]] mode, force encoding in //big endian//.
-(//big end first//: decreasing numeric significance with increasing memory
-addresses)
-
-==== [boolean] isBigEndianCPU() ====
-{{anchor:torch.DiskFile.isBigEndianCPU}}
-
-Returns ''true'' if, and only if, the computer CPU operates in //big endian//.
-//Big end first//: decreasing numeric significance with increasing
-memory addresses.
-
-==== [boolean] isLittleEndianCPU() ====
-{{anchor:torch.DiskFile.isLittleEndianCPU}}
-
-Returns ''true'' if, and only if, the computer CPU operates in //little endian//.
-//Little end first//: increasing numeric significance with increasing
-memory addresses.
-
-==== littleEndianEncoding() ====
-{{anchor:torch.DiskFile.littleEndianEncoding}}
-
-In [[file#torch.File.binary|binary]] mode, force encoding in //little endian//.
-(//little end first//: increasing numeric significance with increasing memory
-addresses)
-
-==== nativeEndianEncoding() ====
-{{anchor:torch.DiskFile.nativeEndianEncoding}}
-
-In [[file#torch.File.binary|binary]] mode, force encoding in //native endian//.
-
diff --git a/dok/file.dok b/dok/file.dok
deleted file mode 100644
index bed446d6..00000000
--- a/dok/file.dok
+++ /dev/null
@@ -1,347 +0,0 @@
-====== File ======
-{{anchor:torch.File.dok}}
-
-This is an //abstract// class. It defines most methods implemented by its
-child classes, like [[DiskFile|DiskFile]],
-[[MemoryFile|MemoryFile]] and [[PipeFile|PipeFile]].
-
-Methods defined here are intended for basic read/write functionalities.
-Read/write methods might write in [[#torch.File.ascii|ASCII]] mode or
-[[#torch.File.binary|binary]] mode.
-
-In [[#torch.File.ascii|ASCII]] mode, numbers are converted in human readable
-format (characters). Booleans are converted into ''0'' (false) or ''1'' (true).
-In [[#torch.File.binary|binary]] mode, numbers and boolean are directly encoded
-as represented in a register of the computer. While not being human
-readable and less portable, the binary mode is obviously faster.
-
-In [[#torch.File.ascii|ASCII]] mode, if the default option
-[[#torch.File.autoSpacing|autoSpacing()]] is chosen, a space will be generated
-after each written number or boolean. A carriage return will also be added
-after each call to a write method. With this option, the spaces are
-supposed to exist while reading. This option can be deactivated with
-[[#torch.File.noAutoSpacing|noAutoSpacing()]].
-
-A ''Lua'' error might or might be not generated in case of read/write error
-or problem in the file. This depends on the choice made between
-[[#torch.File.quiet|quiet()]] and [[#torch.File.pedantic|pedantic()]] options. It
-is possible to query if an error occured in the last operation by calling
-[[#torch.File.hasError|hasError()]].
-
-===== Read methods =====
-{{anchor:torch.File.read}}
-{{anchor:torch.File.readBool}}
-{{anchor:torch.File.readByte}}
-{{anchor:torch.File.readChar}}
-{{anchor:torch.File.readShort}}
-{{anchor:torch.File.readInt}}
-{{anchor:torch.File.readLong}}
-{{anchor:torch.File.readFloat}}
-{{anchor:torch.File.readDouble}}
-
-They are three types of reading methods:
- - ''[number] readTYPE()''
- - ''[TYPEStorage] readTYPE(n)''
- - ''[number] readTYPE(TYPEStorage)''
-
-where ''TYPE'' can be either ''Byte'', ''Char'', ''Short'', ''Int'', ''Long'', ''Float'' or ''Double''.
-
-A convenience method also exist for boolean types: ''[boolean] readBool()''. It reads
-a value on the file with ''readInt()'' and returns ''true'' if and only if this value is ''1''. It is not possible
-to read storages of booleans.
-
-All these methods depends on the encoding choice: [[#torch.File.ascii|ASCII]]
-or [[#torch.File.binary|binary]] mode. In [[#torch.File.ascii|ASCII]] mode, the
-option [[#torch.File.autoSpacing|autoSpacing()]] and
-[[#torch.File.noAutoSpacing|noAutoSpacing()]] have also an effect on these
-methods.
-
-If no parameter is given, one element is returned. This element is
-converted to a ''Lua'' number when reading.
-
-If ''n'' is given, ''n'' values of the specified type are read
-and returned in a new [[Storage|Storage]] of that particular type.
-The storage size corresponds to the number of elements actually read.
-
-If a ''Storage'' is given, the method will attempt to read a number of elements
-equals to the size of the given storage, and fill up the storage with these elements.
-The number of elements actually read is returned.
-
-In case of read error, these methods will call the ''Lua'' error function using the default
-[[#torch.File.pedantic|pedantic]] option, or stay quiet with the [[#torch.File.quiet|quiet]]
-option. In the latter case, one can check if an error occurred with
-[[#torch.File.hasError|hasError()]].
-
-===== Write methods =====
-{{anchor:torch.File.write}}
-{{anchor:torch.File.writeBool}}
-{{anchor:torch.File.writeByte}}
-{{anchor:torch.File.writeChar}}
-{{anchor:torch.File.writeShort}}
-{{anchor:torch.File.writeInt}}
-{{anchor:torch.File.writeLong}}
-{{anchor:torch.File.writeFloat}}
-{{anchor:torch.File.writeDouble}}
-
-They are two types of reading methods:
- - ''[number] writeTYPE(number)''
- - ''[number] writeTYPE(TYPEStorage)''
-
-where ''TYPE'' can be either ''Byte'', ''Char'', ''Short'', ''Int'', ''Long'', ''Float'' or ''Double''.
-
-A convenience method also exist for boolean types: ''writeBool(value)''. If ''value'' is ''nil'' or
-not ''true'' a it is equivalent to a ''writeInt(0)'' call, else to ''writeInt(1)''. It is not possible
-to write storages of booleans.
-
-All these methods depends on the encoding choice: [[#torch.File.ascii|ASCII]]
-or [[#torch.File.ascii|binary]] mode. In [[#torch.File.ascii|ASCII]] mode, the
-option [[#torch.File.autoSpacing|autoSpacing()]] and
-[[#torch.File.noAutoSpacing|noAutoSpacing()]] have also an effect on these
-methods.
-
-If one ''Lua'' number is given, this number is converted according to the
-name of the method when writing (e.g. ''writeInt(3.14)'' will write ''3'').
-
-If a ''Storage'' is given, the method will attempt to write all the elements contained
-in the storage.
-
-These methods return the number of elements actually written.
-
-In case of read error, these methods will call the ''Lua'' error function using the default
-[[#torch.File.pedantic|pedantic]] option, or stay quiet with the [[#torch.File.quiet|quiet]]
-option. In the latter case, one can check if an error occurred with
-[[#torch.File.hasError|hasError()]].
-
-===== Serialization methods =====
-{{anchor:torch.File.serialization}}
-
-These methods allow the user to save any serializable objects on disk and
-reload it later in its original state. In other words, it can perform a
-//deep// copy of an object into a given ''File''.
-
-Serializable objects are ''Torch'' objects having a ''read()'' and
-''write()'' method. ''Lua'' objects such as ''table'', ''number'' or
-''string'' or //pure Lua// functions are also serializable.
-
-If the object to save contains several other objects (let say it is a tree
-of objects), then objects appearing several times in this tree will be
-//saved only once//. This saves disk space, speedup loading/saving and
-respect the dependencies between objects.
-
-Interestingly, if the ''File'' is a [[MemoryFile|MemoryFile]], it allows
-the user to easily make a //clone// of any serializable object:
-
-file = torch.MemoryFile() -- creates a file in memory
-file:writeObject(object) -- writes the object into file
-file:seek(1) -- comes back at the beginning of the file
-objectClone = file:readObject() -- gets a clone of object
-
-
-==== readObject() ====
-{{anchor:torch.File.readObject}}
-
-Returns the next [[#torch.File.serialization|serializable]] object saved beforehand
-in the file with [[#torch.File.writeObject|writeObject()]].
-
-Note that objects which were [[#torch.File.writeObject|written]] with the same
-reference have still the same reference after loading.
-
-Example:
-
--- creates an array which contains twice the same tensor
-array = {}
-x = torch.Tensor(1)
-table.insert(array, x)
-table.insert(array, x)
-
--- array[1] and array[2] refer to the same address
--- x[1] == array[1][1] == array[2][1] == 3.14
-array[1][1] = 3.14
-
--- write the array on disk
-file = torch.DiskFile('foo.asc', 'w')
-file:writeObject(array)
-file:close() -- make sure the data is written
-
--- reload the array
-file = torch.DiskFile('foo.asc', 'r')
-arrayNew = file:readObject()
-
--- arrayNew[1] and arrayNew[2] refer to the same address!
--- arrayNew[1][1] == arrayNew[2][1] == 3.14
--- so if we do now:
-arrayNew[1][1] = 2.72
--- arrayNew[1][1] == arrayNew[2][1] == 2.72 !
-
-
-==== writeObject(object) ====
-{{anchor:torch.File.writeObject}}
-
-Writes ''object'' into the file. This object can be read later using
-[[#torch.File.readObject|readObject()]]. Serializable objects are ''Torch''
-objects having a ''read()'' and ''write()'' method. ''Lua'' objects such as
-''table'', ''number'' or ''string'' or pure Lua functions are also serializable.
-
-If the object has been already written in the file, only a //reference// to
-this already saved object will be written: this saves space an speed-up
-writing; it also allows to keep the dependencies between objects intact.
-
-In returns, if one writes an object, modify its member, and write the
-object again in the same file, the modifications will not be recorded
-in the file, as only a reference to the original will be written. See
-[[#torch.File.readObject|readObject()]] for an example.
-
-==== [string] readString(format) ====
-{{anchor:torch.File.readString}}
-
-If ''format'' starts with ''"*l"'' then returns the next line in the ''File''. The end-of-line character is skipped.
-
-If ''format'' starts with ''"*a"'' then returns all the remaining contents of the ''File''.
-
-If no data is available, then an error is raised, except if ''File'' is in [[#torch.File.quiet|quiet()]] mode where
-it then returns ''nil''.
-
-Because Torch is more precised on number typing, the ''Lua'' format ''"*n"'' is not supported:
-instead use one of the [[#torch.File.read|number read methods]].
-
-==== [number] writeString(str) ====
-{{anchor:torch.File.writeString}}
-
-Writes the string ''str'' in the ''File''. If the string cannot be written completely an error is raised, except
-if ''File'' is in [[#torch.File.quiet|quiet()]] mode where it returns the number of character actually written.
-
-===== General Access and Control Methods =====
-
-==== ascii() [default] ====
-{{anchor:torch.File.ascii}}
-
-The data read or written will be in ''ASCII'' mode: all numbers are converted
-to characters (human readable format) and boolean are converted to ''0''
-(false) or ''1'' (true). The input-output format in this mode depends on the
-options [[#torch.File.autoSpacing|autoSpacing()]] and
-[[#torch.File.noAutoSpacing|noAutoSpacing()]].
-
-==== autoSpacing() [default] ====
-{{anchor:torch.File.autoSpacing}}
-
-In [[#torch.File.ascii|ASCII]] mode, write additional spaces around the elements
-written on disk: if writing a [[Storage|Storage]], a space will be
-generated between each //element// and a //return line// after the last
-element. If only writing one element, a //return line// will be generated
-after this element.
-
-Those spaces are supposed to exist while reading in this mode.
-
-This is the default behavior. You can de-activate this option with the
-[[#torch.File.noAutoSpacing|noAutoSpacing()]] method.
-
-==== binary() ====
-{{anchor:torch.File.binary}}
-
-The data read or written will be in binary mode: the representation in the
-''File'' is the same that the one in the computer memory/register (not human
-readable). This mode is faster than [[#torch.File.ascii|ASCII]] but less
-portable.
-
-==== clearError() ====
-{{anchor:torch.File.clearError}}
-
-Clear the error.flag returned by [[#torch.File.hasError|hasError()]].
-
-==== close() ====
-{{anchor:torch.File.close}}
-
-Close the file. Any subsequent operation will generate a ''Lua'' error.
-
-==== noAutoSpacing() ====
-{{anchor:torch.File.noAutoSpacing}}
-
-In [[#torch.File.ascii|ASCII]] mode, do not put extra spaces between element
-written on disk. This is the contrary of the option
-[[#torch.File.autoSpacing|autoSpacing()]].
-
-==== synchronize() ====
-{{anchor:torch.File.synchronize}}
-
-If the child class bufferize the data while writing, ensure that the data
-is actually written.
-
-
-==== pedantic() [default] ====
-{{anchor:torch.File.pedantic}}
-
-If this mode is chosen (which is the default), a ''Lua'' error will be
-generated in case of error (which will cause the program to stop).
-
-It is possible to use [[#torch.File.quiet|quiet()]] to avoid ''Lua'' error generation
-and set a flag instead.
-
-==== [number] position() ====
-{{anchor:torch.File.position}}
-
-Returns the current position (in bytes) in the file.
-The first position is ''1'' (following Lua standard indexing).
-
-==== quiet() ====
-{{anchor:torch.File.quiet}}
-
-If this mode is chosen instead of [[#torch.File.pedantic|pedantic()]], no ''Lua''
-error will be generated in case of read/write error. Instead, a flag will
-be raised, readable through [[#torch.File.hasError|hasError()]]. This flag can
-be cleared with [[#torch.File.clearError|clearError()]]
-
-Checking if a file is quiet can be performed using [[#torch.File.isQuiet|isQuiet()]].
-
-==== seek(position) ====
-{{anchor:torch.File.seek}}
-
-Jump into the file at the given ''position'' (in byte). Might generate/raise
-an error in case of problem. The first position is ''1'' (following Lua standard indexing).
-
-==== seekEnd() ====
-{{anchor:torch.File.seekEnd}}
-
-Jump at the end of the file. Might generate/raise an error in case of
-problem.
-
-===== File state query =====
-
-These methods allow the user to query the state of the given ''File''.
-
-==== [boolean] hasError() ====
-{{anchor:torch.File.hasError}}
-
-Returns if an error occurred since the last [[#torch.File.clearError|clearError()]] call, or since
-the opening of the file if ''clearError()'' has never been called.
-
-==== [boolean] isQuiet() ====
-{{anchor:torch.File.isQuiet}}
-
-Returns a boolean which tells if the file is in [[#torch.File.quiet|quiet]] mode or not.
-
-==== [boolean] isReadable() ====
-{{anchor:torch.File.isReadable}}
-
-Tells if one can read the file or not.
-
-==== [boolean] isWritable() ====
-{{anchor:torch.File.isWritable}}
-
-Tells if one can write in the file or not.
-
-==== [boolean] isAutoSpacing() ====
-{{anchor:torch.File.isAutoSpacing}}
-
-Return ''true'' if [[#torch.File.autoSpacing|autoSpacing]] has been chosen.
-
-==== referenced(ref) ====
-{{anchor:torch.File.referenced}}
-
-Sets the referenced property of the File to ''ref''. ''ref'' has to be ''true'' or ''false''. By default it is true, which means that a File object keeps track of objects written using [[#torch.File.writeObject|writeObject]] method. When one needs to push the same tensor repeatedly into a file but everytime changing its contents, calling ''referenced(false)'' ensures desired behaviour.
-
-==== isReferenced() ====
-{{anchor:torch.File.isReferenced}}
-
-Return the state set by [[#torch.File.referenced|referenced]].
-
-
diff --git a/dok/index.dok b/dok/index.dok
deleted file mode 100644
index 7b7a8923..00000000
--- a/dok/index.dok
+++ /dev/null
@@ -1,28 +0,0 @@
-====== Torch Package Reference Manual ======
-{{anchor:torch.reference.dok}}
-
-**Torch** is the main package in [[http://torch.ch|Torch7]] where data
-structures for multi-dimensional tensors and mathematical operations
-over these are defined. Additionally, it provides many utilities for
-accessing files, serializing objects of arbitrary types and other
-useful utilities.
-
-===== Torch Packages =====
-{{anchor:torch.reference.dok}}
-
- * Tensor Library
- * [[Tensor|Tensor]] defines the //all powerful// tensor object that prvides multi-dimensional numerical arrays with type templating.
- * [[maths|Mathematical operations]] that are defined for the tensor object types.
- * [[Storage|Storage]] defines a simple storage interface that controls the underlying storage for any tensor object.
- * File I/O Interface Library
- * [[File|File]] is an abstract interface for common file operations.
- * [[DiskFile|Disk File]] defines operations on files stored on disk.
- * [[MemoryFile|Memory File]] defines operations on stored in RAM.
- * [[PipeFile|Pipe File]] defines operations for using piped commands.
- * [[serialization|High-Level File operations]] defines higher-level serialization functions.
- * Useful Utilities
- * [[Timer|Timer]] provides functionality for //measuring time//.
- * [[Tester|Tester]] is a generic tester framework.
- * [[CmdLine|CmdLine]] is a command line argument parsing utility.
- * [[Random|Random]] defines a random number generator package with various distributions.
- * Finally useful [[Utility|utility]] functions are provided for easy handling of torch tensor types and class inheritance.
diff --git a/dok/maths.dok b/dok/maths.dok
deleted file mode 100644
index 724a263c..00000000
--- a/dok/maths.dok
+++ /dev/null
@@ -1,1651 +0,0 @@
-
-====== Math Functions ======
-{{anchor:torch.maths.dok}}
-
-Torch provides Matlab-like functions for manipulating
-[[index#Tensor|Tensor]] objects. Functions fall into several types of
-categories:
- * [[#torch.construction.dok|constructors]] like [[#torch.zeros|zeros]], [[#torch.ones|ones]]
- * extractors like [[#torch.diag|diag]] and [[#torch.triu|triu]],
- * [[#torch.elementwise.dok|Element-wise]] mathematical operations like [[#torch.abs|abs]] and [[#torch.pow|pow]],
- * [[#torch.basicoperations.dok|BLAS]] operations,
- * [[#torch.columnwise.dok|column or row-wise operations]] like [[#torch.sum|sum]] and [[#torch.max|max]],
- * [[#torch.matrixwide.dok|matrix-wide operations]] like [[#torch.trace|trace]] and [[#torch.norm|norm]].
- * [[#torch.conv.dok|Convolution and cross-correlation]] operations like [[#torch.conv2|conv2]].
- * [[#torch.linalg.dok|Basic linear algebra operations]] like [[#torch.eig|eigen value/vector calculation]].
- * [[#torch.logical.dok|Logical Operations on Tensors]].
-
-By default, all operations allocate a new tensor to return the
-result. However, all functions also support passing the target
-tensor(s) as the first argument(s), in which case the target tensor(s)
-will be resized accordingly and filled with result. This property is
-especially useful when one wants have tight control over when memory
-is allocated.
-
-The torch package adopts the same concept, so that calling a function
-directly on the tensor itself using an object-oriented syntax is
-equivalent to passing the tensor as the optional resulting tensor. The
-following two calls are equivalent.
-
-
-torch.log(x,x)
-x:log()
-
-
-Similarly, ''torch.conv2'' function can be used in the following manner.
-
-
-x = torch.rand(100,100)
-k = torch.rand(10,10)
-res1 = torch.conv2(x,k)
-
-res2 = torch.Tensor()
-torch.conv2(res2,x,k)
-
-=res2:dist(res1)
-0
-
-
-
-The advantage of second case is, same ''res2'' tensor can be used successively in a loop without any new allocation.
-
-
--- no new memory allocations...
-for i=1,100 do
- torch.conv2(res2,x,k)
-end
-=res2:dist(res1)
-0
-
-
-===== Construction or extraction functions =====
-{{anchor:torch.construction.dok}}
-
-==== [res] torch.cat( [res,] x_1, x_2, [dimension] ) ====
-{{anchor:torch.cat}}
-{{anchor:torch.Tensor.cat}}
-''x=torch.cat(x_1,x_2,[dimension])'' returns a tensor ''x'' which is the concatenation of tensors x_1 and x_2 along dimension ''dimension''.
-
-If ''dimension'' is not specified it is the last dimension.
-
-The other dimensions of x_1 and x_2 have to be equal.
-
-Examples:
-
-> print(torch.cat(torch.ones(3),torch.zeros(2)))
-
- 1
- 1
- 1
- 0
- 0
-[torch.Tensor of dimension 5]
-
-
-> print(torch.cat(torch.ones(3,2),torch.zeros(2,2),1))
-
- 1 1
- 1 1
- 1 1
- 0 0
- 0 0
-[torch.DoubleTensor of dimension 5x2]
-
-
-> print(torch.cat(torch.ones(2,2),torch.zeros(2,2),1))
- 1 1
- 1 1
- 0 0
- 0 0
-[torch.DoubleTensor of dimension 4x2]
-
-> print(torch.cat(torch.ones(2,2),torch.zeros(2,2),2))
- 1 1 0 0
- 1 1 0 0
-[torch.DoubleTensor of dimension 2x4]
-
-
-> print(torch.cat(torch.cat(torch.ones(2,2),torch.zeros(2,2),1),torch.rand(3,2),1))
-
- 1.0000 1.0000
- 1.0000 1.0000
- 0.0000 0.0000
- 0.0000 0.0000
- 0.3227 0.0493
- 0.9161 0.1086
- 0.2206 0.7449
-[torch.DoubleTensor of dimension 7x2]
-
-
-
-
-==== [res] torch.diag([res,] x [,k]) ====
-{{anchor:torch.diag}}
-{{anchor:torch.Tensor.diag}}
-
-''y=torch.diag(x)'' when x is of dimension 1 returns a diagonal matrix with diagonal elements constructed from x.
-
-''y=torch.diag(x)'' when x is of dimension 2 returns a tensor of dimension 1
-with elements constructed from the diagonal of x.
-
-''y=torch.diag(x,k)'' returns the k-th diagonal of x,
-wher k=0 is the main diagonal, k>0 is above the main diagonal and k<0
-is below the main diagonal.
-
-==== [res] torch.eye([res,] n [,m]) ====
-{{anchor:torch.eye}}
-{{anchor:torch.Tensor.eye}}
-
-''y=torch.eye(n)'' returns the n-by-n identity matrix.
-
-''y=torch.eye(n,m)'' returns an m-by-m identity matrix with ones on the diagonal and zeros elsewhere.
-
-
-==== [res] torch.linspace([res,] x1, x2, [,n]) ====
-{{anchor:torch.linspace}}
-{{anchor:torch.Tensor.linspace}}
-
-''y=torch.linspace(x1,x2)'' returns a one-dimensional tensor of size 100 equally spaced points between x1 and x2.
-
-''y=torch.linspace(x1,x2,n)'' returns a one-dimensional tensor of n equally spaced points between x1 and x2.
-
-
-==== [res] torch.logspace([res,] x1, x2, [,n]) ====
-{{anchor:torch.logspace}}
-{{anchor:torch.Tensor.logspace}}
-
-''y=torch.logspace(x1,x2)'' returns a one-dimensional tensor of 50 logarithmically eqally spaced points between x1 and x2.
-
-''y=torch.logspace(x1,x2,n)'' returns a one-dimensional tensor of n logarithmically equally spaced points between x1 and x2.
-
-==== [res] torch.ones([res,] m [,n...]) ====
-{{anchor:torch.ones}}
-{{anchor:torch.Tensor.ones}}
-
-''y=torch.ones(n)'' returns a one-dimensional tensor of size n filled with ones.
-
-''y=torch.ones(m,n)'' returns a mxn tensor filled with ones.
-
-For more than 4 dimensions, you can use a storage as argument:
-''y=torch.ones(torch.LongStorage{m,n,k,l,o})''.
-
-==== [res] torch.rand([res,] m [,n...]) ====
-{{anchor:torch.rand}}
-{{anchor:torch.Tensor.rand}}
-
-''y=torch.rand(n)'' returns a one-dimensional tensor of size n filled with random numbers from a uniform distribution on the interval (0,1).
-
-''y=torch.rand(m,n)'' returns a mxn tensor of random numbers from a uniform distribution on the interval (0,1).
-
-For more than 4 dimensions, you can use a storage as argument:
-''y=torch.rand(torch.LongStorage{m,n,k,l,o})''
-
-==== [res] torch.randn([res,] m [,n...]) ====
-{{anchor:torch.randn}}
-{{anchor:torch.Tensor.randn}}
-
-''y=torch.randn(n)'' returns a one-dimensional tensor of size n filled with random numbers from a normal distribution with mean zero and variance one.
-
-''y=torch.randn(m,n)'' returns an mxn tensor of random numbers from a normal distribution with mean zero and variance one.
-
-For more than 4 dimensions, you can use a storage as argument:
-''y=torch.randn(torch.LongStorage{m,n,k,l,o})''
-
-==== [res] torch.range([res,] x, y [,step]) ====
-{{anchor:torch.range}}
-{{anchor:torch.Tensor.range}}
-
-''y=torch.range(x,y)'' returns a tensor of size (int)(y-x)+1 with values
-from x to y with step 1. You can modify the default step with:
-''y=torch.range(x,y,step)''
-
-
-> print(torch.range(2,5))
-
- 2
- 3
- 4
- 5
-[torch.Tensor of dimension 4]
-
-
-''y=torch.range(n,m,incr)'' returns a tensor with values from n to m in increments of incr.
-
-> print(torch.range(2,5,1.2))
- 2.0000
- 3.2000
- 4.4000
-[torch.DoubleTensor of dimension 3]
-
-
-==== [res] torch.randperm([res,] n) ====
-{{anchor:torch.randperm}}
-{{anchor:torch.Tensor.randperm}}
-
-''y=torch.randperm(n)'' returns a random permutation of integers from 1 to n.
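-
-For example (the permutation is random, so this is just one possible result):
-
-> = torch.randperm(4)
-
- 3
- 1
- 4
- 2
-[torch.Tensor of dimension 4]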
-
-==== [res] torch.reshape([res,] x, m [,n...]) ====
-{{anchor:torch.reshape}}
-{{anchor:torch.Tensor.reshape}}
-
-''y=torch.reshape(x,m,n)'' returns a new mxn tensor y whose elements
-are taken row-wise from x, which must have m*n elements. The elements are copied into the new tensor.
-
-For more than 4 dimensions, you can use a storage:
-''y=torch.reshape(x,torch.LongStorage{m,n,k,l,o})''
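-
-For example, reshaping a 6-element vector row-wise into a 2x3 matrix (formatting may vary):
-
-> x = torch.range(1,6)
-> = torch.reshape(x,2,3)
-
- 1  2  3
- 4  5  6
-[torch.Tensor of dimension 2x3]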
-
-==== [res] torch.tril([res,] x [,k]) ====
-{{anchor:torch.tril}}
-{{anchor:torch.Tensor.tril}}
-
-''y=torch.tril(x)'' returns the lower triangular part of x; the other elements of y are set to 0.
-
-''torch.tril(x,k)'' keeps the elements on and below the k-th diagonal of x and sets the others to 0. k=0 is the main diagonal, k>0 is above the main diagonal and k<0
-is below the main diagonal.
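-
-For example, on a matrix of ones (formatting may vary):
-
-> = torch.tril(torch.ones(3,3))
-
- 1  0  0
- 1  1  0
- 1  1  1
-[torch.Tensor of dimension 3x3]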
-
-==== [res] torch.triu([res,] x [,k]) ====
-{{anchor:torch.triu}}
-{{anchor:torch.Tensor.triu}}
-
-''y=torch.triu(x)'' returns the upper triangular part of x;
-the other elements of y are set to 0.
-
-''torch.triu(x,k)'' keeps the elements on and above the k-th diagonal of x and sets the others to 0. k=0 is the main diagonal, k>0 is above the main diagonal and k<0
-is below the main diagonal.
-
-==== [res] torch.zeros([res,] m [,n...]) ====
-{{anchor:torch.zeros}}
-{{anchor:torch.Tensor.zeros}}
-
-''y=torch.zeros(n)'' returns a one-dimensional tensor of size n filled with zeros.
-
-''y=torch.zeros(m,n)'' returns an mxn tensor filled with zeros.
-
-For more than 4 dimensions, you can use a storage:
-''y=torch.zeros(torch.LongStorage{m,n,k,l,o})''
-
-==== Element-wise Mathematical Operations ====
-{{anchor:torch.elementwise.dok}}
-
-==== [res] torch.abs([res,] x) ====
-{{anchor:torch.abs}}
-{{anchor:torch.Tensor.abs}}
-
-''y=torch.abs(x)'' returns a new tensor with the absolute values of the elements of x.
-
-''x:abs()'' replaces all elements in-place with the absolute values of the elements of x.
-
-==== [res] torch.acos([res,] x) ====
-{{anchor:torch.acos}}
-{{anchor:torch.Tensor.acos}}
-
-''y=torch.acos(x)'' returns a new tensor with the arccosine of the elements of x.
-
-''x:acos()'' replaces all elements in-place with the arccosine of the elements of x.
-
-==== [res] torch.asin([res,] x) ====
-{{anchor:torch.asin}}
-{{anchor:torch.Tensor.asin}}
-
-''y=torch.asin(x)'' returns a new tensor with the arcsine of the elements of x.
-
-''x:asin()'' replaces all elements in-place with the arcsine of the elements of x.
-
-==== [res] torch.atan([res,] x) ====
-{{anchor:torch.atan}}
-{{anchor:torch.Tensor.atan}}
-
-''y=torch.atan(x)'' returns a new tensor with the arctangent of the elements of x.
-
-''x:atan()'' replaces all elements in-place with the arctangent of the elements of x.
-
-==== [res] torch.ceil([res,] x) ====
-{{anchor:torch.ceil}}
-{{anchor:torch.Tensor.ceil}}
-
-''y=torch.ceil(x)'' returns a new tensor with the values of the elements of x rounded up to the nearest integers.
-
-''x:ceil()'' replaces all elements in-place with the values of the elements of x rounded up to the nearest integers.
-
-==== [res] torch.cos([res,] x) ====
-{{anchor:torch.cos}}
-{{anchor:torch.Tensor.cos}}
-
-''y=torch.cos(x)'' returns a new tensor with the cosine of the elements of x.
-
-''x:cos()'' replaces all elements in-place with the cosine of the elements of x.
-
-==== [res] torch.cosh([res,] x) ====
-{{anchor:torch.cosh}}
-{{anchor:torch.Tensor.cosh}}
-
-''y=torch.cosh(x)'' returns a new tensor with the hyperbolic cosine of the elements of x.
-
-''x:cosh()'' replaces all elements in-place with the hyperbolic cosine of the elements of x.
-
-==== [res] torch.exp([res,] x) ====
-{{anchor:torch.exp}}
-{{anchor:torch.Tensor.exp}}
-
-''y=torch.exp(x)'' returns, for each element in x, e (the base of natural logarithms) raised to the power of the element in x.
-
-''x:exp()'' replaces each element in-place with e raised to the power of that element.
-
-==== [res] torch.floor([res,] x) ====
-{{anchor:torch.floor}}
-{{anchor:torch.Tensor.floor}}
-
-''y=torch.floor(x)'' returns a new tensor with the values of the elements of x rounded down to the nearest integers.
-
-''x:floor()'' replaces all elements in-place with the values of the elements of x rounded down to the nearest integers.
-
-==== [res] torch.log([res,] x) ====
-{{anchor:torch.log}}
-{{anchor:torch.Tensor.log}}
-
-''y=torch.log(x)'' returns a new tensor with the natural logarithm of the elements of x.
-
-''x:log()'' replaces all elements in-place with the natural logarithm of the elements of x.
-
-==== [res] torch.log1p([res,] x) ====
-{{anchor:torch.log1p}}
-{{anchor:torch.Tensor.log1p}}
-
-''y=torch.log1p(x)'' returns a new tensor with the natural logarithm of the elements of x+1.
-
-''x:log1p()'' replaces all elements in-place with the natural logarithm of the elements of x+1.
-This function is more accurate than [[#torch.log|log()]] for small values of x.
-
-==== [res] torch.pow([res,] x, n) ====
-{{anchor:torch.pow}}
-{{anchor:torch.Tensor.pow}}
-
-''y=torch.pow(x,n)'' returns a new tensor with the elements of x to the power of n.
-
-''x:pow(n)'' replaces all elements in-place with the elements of x to the power of n.
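-
-For example, squaring each element of a small vector:
-
-> = torch.pow(torch.range(1,4),2)
-
-  1
-  4
-  9
- 16
-[torch.Tensor of dimension 4]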
-
-==== [res] torch.sin([res,] x) ====
-{{anchor:torch.sin}}
-{{anchor:torch.Tensor.sin}}
-
-''y=torch.sin(x)'' returns a new tensor with the sine of the elements of x.
-
-''x:sin()'' replaces all elements in-place with the sine of the elements of x.
-
-==== [res] torch.sinh([res,] x) ====
-{{anchor:torch.sinh}}
-{{anchor:torch.Tensor.sinh}}
-
-''y=torch.sinh(x)'' returns a new tensor with the hyperbolic sine of the elements of x.
-
-''x:sinh()'' replaces all elements in-place with the hyperbolic sine of the elements of x.
-
-==== [res] torch.sqrt([res,] x) ====
-{{anchor:torch.sqrt}}
-{{anchor:torch.Tensor.sqrt}}
-
-''y=torch.sqrt(x)'' returns a new tensor with the square root of the elements of x.
-
-''x:sqrt()'' replaces all elements in-place with the square root of the elements of x.
-
-==== [res] torch.tan([res,] x) ====
-{{anchor:torch.tan}}
-{{anchor:torch.Tensor.tan}}
-
-''y=torch.tan(x)'' returns a new tensor with the tangent of the elements of x.
-
-''x:tan()'' replaces all elements in-place with the tangent of the elements of x.
-
-==== [res] torch.tanh([res,] x) ====
-{{anchor:torch.tanh}}
-{{anchor:torch.Tensor.tanh}}
-
-''y=torch.tanh(x)'' returns a new tensor with the hyperbolic tangent of the elements of x.
-
-''x:tanh()'' replaces all elements in-place with the hyperbolic tangent of the elements of x.
-
-===== Basic operations =====
-{{anchor:torch.basicoperations.dok}}
-
-In this section, we explain basic mathematical operations for Tensors.
-
-==== [res] torch.add([res,] tensor, value) ====
-{{anchor:torch.Tensor.add}}
-{{anchor:torch.add}}
-
-Add the given value to all elements in the tensor.
-
-''y=torch.add(x,value)'' returns a new tensor.
-
-''x:add(value)'' add ''value'' to all elements in place.
-
-==== [res] torch.add([res,] tensor1, tensor2) ====
-{{anchor:torch.Tensor.add}}
-{{anchor:torch.add}}
-
-Add ''tensor1'' to ''tensor2'' and put the result into ''res''. The number
-of elements must match, but sizes do not matter.
-
-
-> x = torch.Tensor(2,2):fill(2)
-> y = torch.Tensor(4):fill(3)
-> x:add(y)
-> = x
-
- 5 5
- 5 5
-[torch.Tensor of dimension 2x2]
-
-
-''y=torch.add(a,b)'' returns a new tensor.
-
-''torch.add(y,a,b)'' puts ''a+b'' in ''y''.
-
-''a:add(b)'' accumulates all elements of ''b'' into ''a''.
-
-''y:add(a,b)'' puts ''a+b'' in ''y''.
-
-==== [res] torch.add([res,] tensor1, value, tensor2) ====
-{{anchor:torch.Tensor.add}}
-{{anchor:torch.add}}
-
-Multiply elements of ''tensor2'' by the scalar ''value'' and add it to
-''tensor1''. The number of elements must match, but sizes do not
-matter.
-
-
-> x = torch.Tensor(2,2):fill(2)
-> y = torch.Tensor(4):fill(3)
-> x:add(2, y)
-> = x
-
- 8 8
- 8 8
-[torch.Tensor of dimension 2x2]
-
-
-''x:add(value,y)'' multiply-accumulates values of ''y'' into ''x''.
-
-''z:add(x,value,y)'' puts the result of ''x + value*y'' in ''z''.
-
-''torch.add(x,value,y)'' returns a new tensor ''x + value*y''.
-
-''torch.add(z,x,value,y)'' puts the result of ''x + value*y'' in ''z''.
-
-==== [res] torch.mul([res,] tensor1, value) ====
-{{anchor:torch.Tensor.mul}}
-{{anchor:torch.mul}}
-
-Multiply all elements in the tensor by the given ''value''.
-
-''z=torch.mul(x,2)'' will return a new tensor with the result of ''x*2''.
-
-''torch.mul(z,x,2)'' will put the result of ''x*2'' in ''z''.
-
-''x:mul(2)'' will multiply all elements of ''x'' with ''2'' in-place.
-
-''z:mul(x,2)'' will put the result of ''x*2'' in ''z''.
-
-==== [res] torch.cmul([res,] tensor1, tensor2) ====
-{{anchor:torch.Tensor.cmul}}
-{{anchor:torch.cmul}}
-
-Element-wise multiplication of ''tensor1'' by ''tensor2''. The number
-of elements must match, but sizes do not matter.
-
-
-> x = torch.Tensor(2,2):fill(2)
-> y = torch.Tensor(4):fill(3)
-> x:cmul(y)
-> = x
-
- 6 6
- 6 6
-[torch.Tensor of dimension 2x2]
-
-
-''z=torch.cmul(x,y)'' returns a new tensor.
-
-''torch.cmul(z,x,y)'' puts the result in ''z''.
-
-''y:cmul(x)'' multiplies all elements of ''y'' with corresponding elements of ''x''.
-
-''z:cmul(x,y)'' puts the result in ''z''.
-
-==== [res] torch.addcmul([res,] x [,value], tensor1, tensor2) ====
-{{anchor:torch.Tensor.addcmul}}
-{{anchor:torch.addcmul}}
-
-Performs the element-wise multiplication of ''tensor1'' by ''tensor2'',
-multiplies the result by the scalar ''value'' (1 if not present) and adds it
-to ''x''. The number of elements must match, but sizes do not matter.
-
-
-> x = torch.Tensor(2,2):fill(2)
-> y = torch.Tensor(4):fill(3)
-> z = torch.Tensor(2,2):fill(5)
-> x:addcmul(2, y, z)
-> = x
-
- 32 32
- 32 32
-[torch.Tensor of dimension 2x2]
-
-
-''z:addcmul(value,x,y)'' accumulates the result in ''z''.
-
-''torch.addcmul(z,value,x,y)'' returns a new tensor with the result.
-
-''torch.addcmul(z,z,value,x,y)'' puts the result in ''z''.
-
-==== [res] torch.div([res,] tensor, value) ====
-{{anchor:torch.Tensor.div}}
-{{anchor:torch.div}}
-
-Divide all elements in the tensor by the given ''value''.
-
-''z=torch.div(x,2)'' will return a new tensor with the result of ''x/2''.
-
-''torch.div(z,x,2)'' will put the result of ''x/2'' in ''z''.
-
-''x:div(2)'' will divide all elements of ''x'' with ''2'' in-place.
-
-''z:div(x,2)'' will put the result of ''x/2'' in ''z''.
-
-==== [res] torch.cdiv([res,] tensor1, tensor2) ====
-{{anchor:torch.Tensor.cdiv}}
-{{anchor:torch.cdiv}}
-
-Performs the element-wise division of ''tensor1'' by ''tensor2''. The
-number of elements must match, but sizes do not matter.
-
-
-> x = torch.Tensor(2,2):fill(1)
-> y = torch.Tensor(4)
-> for i=1,4 do y[i] = i end
-> x:cdiv(y)
-> = x
-
- 1.0000 0.3333
- 0.5000 0.2500
-[torch.Tensor of dimension 2x2]
-
-
-''z=torch.cdiv(x,y)'' returns a new tensor.
-
-''torch.cdiv(z,x,y)'' puts the result in ''z''.
-
-''y:cdiv(x)'' divides all elements of ''y'' with corresponding elements of ''x''.
-
-''z:cdiv(x,y)'' puts the result in ''z''.
-
-==== [res] torch.addcdiv([res,] x [,value], tensor1, tensor2) ====
-{{anchor:torch.Tensor.addcdiv}}
-{{anchor:torch.addcdiv}}
-
-Performs the element-wise division of ''tensor1'' by ''tensor2'',
-multiplies the result by the scalar ''value'' and adds it to ''x''.
-The number of elements must match, but sizes do not matter.
-
-
-> x = torch.Tensor(2,2):fill(1)
-> y = torch.Tensor(4)
-> z = torch.Tensor(2,2):fill(5)
-> for i=1,4 do y[i] = i end
-> x:addcdiv(2, y, z)
-> = x
-
- 1.4000 2.2000
- 1.8000 2.6000
-[torch.Tensor of dimension 2x2]
-
-
-''z:addcdiv(value,x,y)'' accumulates the result in ''z''.
-
-''torch.addcdiv(z,value,x,y)'' returns a new tensor with the result.
-
-''torch.addcdiv(z,z,value,x,y)'' puts the result in ''z''.
-
-==== [number] torch.dot(tensor1,tensor2) ====
-{{anchor:torch.Tensor.dot}}
-{{anchor:torch.dot}}
-
-Performs the dot product between ''tensor1'' and ''tensor2''. The number of
-elements must match: both tensors are seen as 1D vectors.
-
-
-> x = torch.Tensor(2,2):fill(2)
-> y = torch.Tensor(4):fill(3)
-> = x:dot(y)
-24
-
-
-''torch.dot(x,y)'' returns the dot product of ''x'' and ''y''.
-
-''x:dot(y)'' returns the dot product of ''x'' and ''y''.
-
-==== [res] torch.addmv([res,] [v1,] vec1, [v2,] mat, vec2) ====
-{{anchor:torch.Tensor.addmv}}
-{{anchor:torch.addmv}}
-
-Performs a matrix-vector multiplication between ''mat'' (2D tensor)
-and ''vec2'' (1D tensor) and adds it to ''vec1''. In other words,
-
-
-res = v1 * vec1 + v2 * mat*vec2
-
-
-Sizes must respect the matrix-multiplication operation: if ''mat'' is
-an ''n x m'' matrix, ''vec2'' must be a vector of size ''m'' and ''vec1'' must
-be a vector of size ''n''.
-
-
-> x = torch.Tensor(3):fill(0)
-> M = torch.Tensor(3,2):fill(3)
-> y = torch.Tensor(2):fill(2)
-> x:addmv(M, y)
-> = x
-
- 12
- 12
- 12
-[torch.Tensor of dimension 3]
-
-
-''torch.addmv(x,y,z)'' returns a new tensor with the result.
-
-''torch.addmv(r,x,y,z)'' puts the result in ''r''.
-
-''x:addmv(y,z)'' accumulates ''y*z'' into ''x''.
-
-''r:addmv(x,y,z)'' puts the result of ''x+y*z'' into ''r''.
-
-Optional values ''v1'' and ''v2'' are scalars that multiply
-''vec1'' and ''mat*vec2'' respectively.
-
-==== [res] torch.addr([res,] [v1,] mat, [v2,] vec1, vec2) ====
-{{anchor:torch.Tensor.addr}}
-{{anchor:torch.addr}}
-
-Performs the outer-product between ''vec1'' (1D tensor) and ''vec2'' (1D tensor).
-In other words,
-
-
-res_ij = v1 * mat_ij + v2 * vec1_i * vec2_j
-
-
-If ''vec1'' is a vector of size ''n'' and ''vec2'' is a vector of size ''m'',
-then mat must be a matrix of size ''n x m''.
-
-
-> x = torch.Tensor(3)
-> y = torch.Tensor(2)
-> for i=1,3 do x[i] = i end
-> for i=1,2 do y[i] = i end
-> M = torch.Tensor(3, 2):zero()
-> M:addr(x, y)
-> = M
-
- 1 2
- 2 4
- 3 6
-[torch.Tensor of dimension 3x2]
-
-
-''torch.addr(M,x,y)'' returns the result in a new tensor.
-
-''torch.addr(r,M,x,y)'' puts the result in ''r''.
-
-''M:addr(x,y)'' puts the result in ''M''.
-
-''r:addr(M,x,y)'' puts the result in ''r''.
-
-Optional values ''v1'' and ''v2'' are scalars that multiply
-''M'' and the outer product of ''vec1'' and ''vec2'' respectively.
-
-
-==== [res] torch.addmm([res,] [v1,] M, [v2,] mat1, mat2) ====
-{{anchor:torch.Tensor.addmm}}
-{{anchor:torch.addmm}}
-
-Performs a matrix-matrix multiplication between ''mat1'' (2D tensor)
-and ''mat2'' (2D tensor). In other words,
-
-
-res = v1 * M + v2 * mat1*mat2
-
-
-If ''mat1'' is an ''n x m'' matrix, ''mat2'' an ''m x p'' matrix,
-''M'' must be an ''n x p'' matrix.
-
-''torch.addmm(M,mat1,mat2)'' returns the result in a new tensor.
-
-''torch.addmm(r,M,mat1,mat2)'' puts the result in ''r''.
-
-''M:addmm(mat1,mat2)'' puts the result in ''M''.
-
-''r:addmm(M,mat1,mat2)'' puts the result in ''r''.
-
-Optional values ''v1'' and ''v2'' are scalars that multiply
-''M'' and ''mat1 * mat2'' respectively.
-
-==== [res] torch.mv([res,] mat, vec) ====
-{{anchor:torch.Tensor.mv}}
-{{anchor:torch.mv}}
-
-Matrix vector product of ''mat'' and ''vec''. Sizes must respect
-the matrix-multiplication operation: if ''mat'' is an ''n x m'' matrix,
-''vec'' must be a vector of size ''m'' and ''res'' must be a vector of size ''n''.
-
-''torch.mv(x,y)'' puts the result in a new tensor.
-
-''torch.mv(M,x,y)'' puts the result in ''M''.
-
-''M:mv(x,y)'' puts the result in ''M''.
-
-==== [res] torch.mm([res,] mat1, mat2) ====
-{{anchor:torch.Tensor.mm}}
-{{anchor:torch.mm}}
-
-Matrix matrix product of ''mat1'' and ''mat2''. If ''mat1'' is an
-''n x m'' matrix, ''mat2'' an ''m x p'' matrix, ''res'' must be an
-''n x p'' matrix.
-
-
-''torch.mm(x,y)'' puts the result in a new tensor.
-
-''torch.mm(M,x,y)'' puts the result in ''M''.
-
-''M:mm(x,y)'' puts the result in ''M''.
-
-==== [res] torch.ger([res,] vec1, vec2) ====
-{{anchor:torch.Tensor.ger}}
-{{anchor:torch.ger}}
-
-Outer product of ''vec1'' and ''vec2''. If ''vec1'' is a vector of
-size ''n'' and ''vec2'' is a vector of size ''m'', then res must
-be a matrix of size ''n x m''.
-
-
-''torch.ger(x,y)'' puts the result in a new tensor.
-
-''torch.ger(M,x,y)'' puts the result in ''M''.
-
-''M:ger(x,y)'' puts the result in ''M''.
-
-
-===== Overloaded operators =====
-
-It is possible to use basic mathematical operators like ''+'', ''-'', ''/'' and ''*''
-with tensors. These operators are provided as a convenience. While they
-might be handy, they create and return a new tensor containing the
-results. They are thus not as fast as the operations available in the
-[[#torch.basicoperations.dok|previous section]].
-
-==== Addition and subtraction ====
-
-You can add a tensor to another one with the ''+'' operator. Subtraction is done with ''-''.
-The number of elements in the tensors must match, but the sizes do not matter. The size
-of the returned tensor will be the size of the first tensor.
-
-> x = torch.Tensor(2,2):fill(2)
-> y = torch.Tensor(4):fill(3)
-> = x+y
-
- 5 5
- 5 5
-[torch.Tensor of dimension 2x2]
-
-> = y-x
-
- 1
- 1
- 1
- 1
-[torch.Tensor of dimension 4]
-
-
-A scalar might also be added to or subtracted from a tensor. The scalar might be on the right or left of the operator.
-
-> x = torch.Tensor(2,2):fill(2)
-> = x+3
-
- 5 5
- 5 5
-[torch.Tensor of dimension 2x2]
-
-> = 3-x
-
- 1 1
- 1 1
-[torch.Tensor of dimension 2x2]
-
-
-==== Negation ====
-
-A tensor can be negated with the ''-'' operator placed in front:
-
-> x = torch.Tensor(2,2):fill(2)
-> = -x
-
--2 -2
--2 -2
-[torch.Tensor of dimension 2x2]
-
-
-==== Multiplication ====
-
-Multiplication between two tensors is supported with the ''*'' operator. The result of the multiplication
-depends on the sizes of the tensors.
-
- * 1D and 1D: Returns the dot product between the two tensors (scalar).
- * 2D and 1D: Returns the matrix-vector operation between the two tensors (1D tensor).
- * 2D and 2D: Returns the matrix-matrix operation between the two tensors (2D tensor).
- * 4D and 2D: Returns a tensor product (2D tensor).
-
-Sizes must be relevant for the corresponding operation.
-
-A tensor might also be multiplied by a scalar. The scalar might be on the right or left of the operator.
-
-Examples:
-
-> M = torch.Tensor(2,2):fill(2)
-> N = torch.Tensor(2,4):fill(3)
-> x = torch.Tensor(2):fill(4)
-> y = torch.Tensor(2):fill(5)
-> = x*y -- dot product
-40
-> = M*x -- matrix-vector
-
- 16
- 16
-[torch.Tensor of dimension 2]
-
-> = M*N -- matrix-matrix
-
- 12 12 12 12
- 12 12 12 12
-[torch.Tensor of dimension 2x4]
-
-
-
-==== Division ====
-
-Only the division of a tensor by a scalar is supported with the operator ''/''.
-Example:
-
-> x = torch.Tensor(2,2):fill(2)
-> = x/3
-
- 0.6667 0.6667
- 0.6667 0.6667
-[torch.Tensor of dimension 2x2]
-
-
-
-===== Column or row-wise operations (dimension-wise operations) =====
-{{anchor:torch.columnwise.dok}}
-
-==== [res] torch.cross([res,] a, b [,n]) ====
-{{anchor:torch.cross}}
-
-''y=torch.cross(a,b)'' returns the cross product of the tensors a and b.
-a and b must be 3-element vectors.
-
-For tensors with more dimensions, ''y=torch.cross(a,b)'' returns the cross product of a and b along the first dimension of length 3.
-
-''y=torch.cross(a,b,n)'' returns the cross
-product of vectors in dimension n of a and b.
-a and b must have the same size,
-and both a:size(n) and b:size(n) must be 3.
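-
-For example, the cross product of the x and y unit vectors:
-
-> a = torch.Tensor({1,0,0})
-> b = torch.Tensor({0,1,0})
-> = torch.cross(a,b)
-
- 0
- 0
- 1
-[torch.Tensor of dimension 3]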
-
-
-==== [res] torch.cumprod([res,] x [,dim]) ====
-{{anchor:torch.cumprod}}
-
-''y=torch.cumprod(x)'' returns the cumulative product of the elements
-of x, performing the operation over the last dimension.
-
-''y=torch.cumprod(x,n)'' returns the cumulative product of the
-elements of x, performing the operation over dimension n.
-
-==== [res] torch.cumsum([res,] x [,dim]) ====
-{{anchor:torch.cumsum}}
-
-''y=torch.cumsum(x)'' returns the cumulative sum of the elements
-of x, performing the operation over the first dimension.
-
-''y=torch.cumsum(x,n)'' returns the cumulative sum of the elements
-of x, performing the operation over dimension n.
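-
-For example, the running sum of 1, 2, 3, 4:
-
-> = torch.cumsum(torch.range(1,4))
-
-  1
-  3
-  6
- 10
-[torch.Tensor of dimension 4]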
-
-==== torch.max([resval, resind,] x [,dim]) ====
-{{anchor:torch.max}}
-
-''y=torch.max(x)'' returns the single largest element of x.
-
-''y,i=torch.max(x,1)'' returns the largest element in each column
-(across rows) of x, and a tensor i of their corresponding indices in
-x.
-
-''y,i=torch.max(x,2)'' performs the max operation for each row (across columns).
-
-''y,i=torch.max(x,n)'' performs the max operation over the dimension n.
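-
-For example, taking the column-wise max of a small matrix (formatting may vary):
-
-> x = torch.Tensor({{1,5},{4,2}})
-> y,i = torch.max(x,1)
-> = y
-
- 4  5
-[torch.Tensor of dimension 1x2]
-
-Here ''i'' contains the row indices of the maxima, 2 and 1.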
-
-
-==== [res] torch.mean([res,] x [,dim]) ====
-{{anchor:torch.mean}}
-
-''y=torch.mean(x)'' returns the mean of all elements of x.
-
-''y=torch.mean(x,1)'' returns a tensor y of the mean of the elements in
-each column of x.
-
-''y=torch.mean(x,2)'' performs the mean operation for each row.
-
-''y=torch.mean(x,n)'' performs the mean operation over the dimension n.
-
-==== torch.min([resval, resind,] x [,dim]) ====
-{{anchor:torch.min}}
-
-''y=torch.min(x)'' returns the single smallest element of x.
-
-''y,i=torch.min(x,1)'' returns the smallest element in each column
-(across rows) of x, and a tensor i of their corresponding indices in
-x.
-
-''y,i=torch.min(x,2)'' performs the min operation for each row (across columns).
-
-''y,i=torch.min(x,n)'' performs the min operation over the dimension n.
-
-
-==== [res] torch.prod([res,] x [,n]) ====
-{{anchor:torch.prod}}
-
-''y=torch.prod(x)'' returns a tensor y of the product of all elements in ''x''.
-
-''y=torch.prod(x,2)'' performs the prod operation for each row.
-
-''y=torch.prod(x,n)'' performs the prod operation over the dimension n.
-
-==== torch.sort([resval, resind,] x [,d] [,flag]) ====
-{{anchor:torch.sort}}
-
-''y,i=torch.sort(x)'' returns a tensor ''y'' where all entries
-are sorted along the last dimension, in **ascending** order. It also returns a tensor
-''i'' that provides the corresponding indices from ''x''.
-
-''y,i=torch.sort(x,d)'' performs the sort operation along
-a specific dimension ''d''.
-
-''y,i=torch.sort(x)'' is therefore equivalent to
-''y,i=torch.sort(x,x:dim())''
-
-''y,i=torch.sort(x,d,true)'' performs the sort operation along
-a specific dimension ''d'', in **descending** order.
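-
-For example, sorting a small vector:
-
-> y,i = torch.sort(torch.Tensor({3,1,2}))
-> = y
-
- 1
- 2
- 3
-[torch.Tensor of dimension 3]
-
-Here ''i'' contains the original positions of the sorted values: 2, 3, 1.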
-
-==== [res] torch.std([res,] x [,dim] [,flag]) ====
-{{anchor:torch.std}}
-
-''y=torch.std(x)'' returns the standard deviation of the elements of x.
-
-''y=torch.std(x,dim)'' performs the std operation over the dimension dim.
-
-''y=torch.std(x,dim,false)'' performs the std operation normalizing by n-1 (this is the default).
-
-''y=torch.std(x,dim,true)'' performs the std operation normalizing by n instead of n-1.
-
-==== [res] torch.sum([res,] x [,dim]) ====
-{{anchor:torch.sum}}
-
-''y=torch.sum(x)'' returns the sum of the elements of x.
-
-''y=torch.sum(x,2)'' performs the sum operation for each row.
-
-''y=torch.sum(x,n)'' performs the sum operation over the dimension n.
-
-==== [res] torch.var([res,] x [,dim] [,flag]) ====
-{{anchor:torch.var}}
-
-''y=torch.var(x)'' returns the variance of the elements of x.
-
-''y=torch.var(x,dim)'' performs the var operation over the dimension dim.
-
-''y=torch.var(x,dim,false)'' performs the var operation normalizing by n-1 (this is the default).
-
-''y=torch.var(x,dim,true)'' performs the var operation normalizing by n instead of n-1.
-
-===== Matrix-wide operations (tensor-wide operations) =====
-{{anchor:torch.matrixwide.dok}}
-
-==== torch.norm(x [,p] [,dim]) ====
-{{anchor:torch.norm}}
-
-''y=torch.norm(x)'' returns the 2-norm of the tensor x.
-
-''y=torch.norm(x,p)'' returns the p-norm of the tensor x.
-
-''y=torch.norm(x,p,dim)'' returns the p-norms of the tensor x computed over the dimension dim.
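-
-For example, the 2-norm of the vector (3,4):
-
-> = torch.norm(torch.Tensor({3,4}))
-5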
-
-==== torch.dist(x,y [,p]) ====
-{{anchor:torch.dist}}
-
-''y=torch.dist(x,y)'' returns the 2-norm of (x-y).
-
-''y=torch.dist(x,y,p)'' returns the p-norm of (x-y).
-
-==== torch.numel(x) ====
-{{anchor:torch.numel}}
-
-''y=torch.numel(x)'' returns the number of elements in the tensor x.
-
-==== torch.trace(x) ====
-{{anchor:torch.trace}}
-
-''y=torch.trace(x)'' returns the trace (sum of the diagonal elements)
-of a matrix x. This is equal to the sum of the eigenvalues of x.
-The returned value ''y'' is a number, not a tensor.
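-
-For example, the trace of the 3x3 identity matrix:
-
-> = torch.trace(torch.eye(3))
-3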
-
-===== Convolution Operations =====
-{{anchor:torch.conv.dok}}
-
-These functions implement convolution or cross-correlation of an input
-image (or set of input images) with a kernel (or set of kernels). The
-convolution functions in Torch can handle different types of
-input/kernel dimensions and produce corresponding outputs. The
-general form of the operations always remains the same.
-
-==== [res] torch.conv2([res,] x, k, ['F' or 'V']) ====
-{{anchor:torch.conv2}}
-{{anchor:torch.Tensor.conv2}}
-
-This function computes 2-dimensional convolutions between '' x '' and '' k ''. These operations are similar to BLAS operations when the number of dimensions of input and kernel are reduced by 2.
-
- * '' x '' and '' k '' are 2D : convolution of a single image with a single kernel (2D output). This operation is similar to multiplication of two scalars.
- * '' x '' and '' k '' are 3D : convolution of each input slice with corresponding kernel (3D output).
- * '' x (p x m x n) '' 3D, '' k (q x p x ki x kj)'' 4D : convolution of all input slices with the corresponding slice of kernel. Output is 3D '' (q x m x n) ''. This operation is similar to matrix vector product of matrix '' k '' and vector '' x ''.
-
-The last argument controls whether the convolution is a full ('F') or valid ('V') convolution. The default is 'valid' convolution.
-
-
-x=torch.rand(100,100)
-k=torch.rand(10,10)
-c = torch.conv2(x,k)
-=c:size()
-
- 91
- 91
-[torch.LongStorage of size 2]
-
-c = torch.conv2(x,k,'F')
-=c:size()
-
- 109
- 109
-[torch.LongStorage of size 2]
-
-
-
-==== [res] torch.xcorr2([res,] x, k, ['F' or 'V']) ====
-{{anchor:torch.xcorr2}}
-{{anchor:torch.Tensor.xcorr2}}
-
-This function operates with same options and input/output
-configurations as [[#torch.conv2|torch.conv2]], but performs
-cross-correlation of the input with the kernel '' k ''.
-
-==== [res] torch.conv3([res,] x, k, ['F' or 'V']) ====
-{{anchor:torch.conv3}}
-{{anchor:torch.Tensor.conv3}}
-
-This function computes 3-dimensional convolutions between '' x '' and '' k ''. These operations are similar to BLAS operations when the number of dimensions of input and kernel are reduced by 3.
-
- * '' x '' and '' k '' are 3D : convolution of a single image with a single kernel (3D output). This operation is similar to multiplication of two scalars.
- * '' x '' and '' k '' are 4D : convolution of each input slice with corresponding kernel (4D output).
- * '' x (p x m x n x o) '' 4D, '' k (q x p x ki x kj x kk)'' 5D : convolution of all input slices with the corresponding slice of kernel. Output is 4D '' (q x m x n x o) ''. This operation is similar to matrix vector product of matrix '' k '' and vector '' x ''.
-
-The last argument controls whether the convolution is a full ('F') or valid ('V') convolution. The default is 'valid' convolution.
-
-
-x=torch.rand(100,100,100)
-k=torch.rand(10,10,10)
-c = torch.conv3(x,k)
-=c:size()
-
- 91
- 91
- 91
-[torch.LongStorage of size 3]
-
-c = torch.conv3(x,k,'F')
-=c:size()
-
- 109
- 109
- 109
-[torch.LongStorage of size 3]
-
-
-
-==== [res] torch.xcorr3([res,] x, k, ['F' or 'V']) ====
-{{anchor:torch.xcorr3}}
-{{anchor:torch.Tensor.xcorr3}}
-
-This function operates with same options and input/output
-configurations as [[#torch.conv3|torch.conv3]], but performs
-cross-correlation of the input with the kernel '' k ''.
-
-===== Eigenvalues, SVD, Linear System Solution =====
-{{anchor:torch.linalg.dok}}
-
-Functions in this section are implemented with an interface to the LAPACK
-libraries. If the LAPACK libraries are not found during the compilation step,
-then these functions will not be available.
-
-==== torch.gesv([resb, resa,] b,a) ====
-{{anchor:torch.gesv}}
-
-Solves the system '' AX=B '', where ''A'' has to be square and non-singular.
-'' A '' is '' m x m '', '' X '' is '' m x k '', '' B '' is '' m x k ''.
-
-If ''resb'' and ''resa'' are given, then they will be used for
-temporary storage and returning the result.
-
- * ''resa'' will contain the L and U factors of the ''LU'' factorization of ''A''.
- * ''resb'' will contain the solution.
-
-
-a=torch.Tensor({{6.80, -2.11, 5.66, 5.97, 8.23},
- {-6.05, -3.30, 5.36, -4.44, 1.08},
- {-0.45, 2.58, -2.70, 0.27, 9.04},
- {8.32, 2.71, 4.35, -7.17, 2.14},
- {-9.67, -5.14, -7.26, 6.08, -6.87}}):t()
-
-b=torch.Tensor({{4.02, 6.19, -8.22, -7.57, -3.03},
- {-1.56, 4.00, -8.67, 1.75, 2.86},
- {9.81, -4.09, -4.57, -8.61, 8.99}}):t()
-
- =b
- 4.0200 -1.5600 9.8100
- 6.1900 4.0000 -4.0900
--8.2200 -8.6700 -4.5700
--7.5700 1.7500 -8.6100
--3.0300 2.8600 8.9900
-[torch.DoubleTensor of dimension 5x3]
-
-=a
- 6.8000 -6.0500 -0.4500 8.3200 -9.6700
--2.1100 -3.3000 2.5800 2.7100 -5.1400
- 5.6600 5.3600 -2.7000 4.3500 -7.2600
- 5.9700 -4.4400 0.2700 -7.1700 6.0800
- 8.2300 1.0800 9.0400 2.1400 -6.8700
-[torch.DoubleTensor of dimension 5x5]
-
-
-x=torch.gesv(b,a)
- =x
--0.8007 -0.3896 0.9555
--0.6952 -0.5544 0.2207
- 0.5939 0.8422 1.9006
- 1.3217 -0.1038 5.3577
- 0.5658 0.1057 4.0406
-[torch.DoubleTensor of dimension 5x3]
-
-=b:dist(a*x)
-1.1682163181673e-14
-
-
-
-==== torch.gels([resb, resa,] b,a) ====
-{{anchor:torch.gels}}
-
-Solution of least squares and least norm problems for a full rank '' A '' that is '' m x n''.
- * If '' n %%<=%% m '', then solve '' ||AX-B||_F ''.
- * If '' n > m '' , then solve '' min ||X||_F s.t. AX=B ''.
-
-On return, the first '' n '' rows of the '' X '' matrix contain the solution
-and the rest contains residual information. The square root of the sum of squares
-of the elements of each column of '' X '' starting at row '' n + 1 '' is
-the residual for the corresponding column.
-
-
-
-a=torch.Tensor({{ 1.44, -9.96, -7.55, 8.34, 7.08, -5.45},
- {-7.84, -0.28, 3.24, 8.09, 2.52, -5.70},
- {-4.39, -3.24, 6.27, 5.28, 0.74, -1.19},
- {4.53, 3.83, -6.64, 2.06, -2.47, 4.70}}):t()
-
-b=torch.Tensor({{8.58, 8.26, 8.48, -5.28, 5.72, 8.93},
- {9.35, -4.43, -0.70, -0.26, -7.36, -2.52}}):t()
-
-=a
- 1.4400 -7.8400 -4.3900 4.5300
--9.9600 -0.2800 -3.2400 3.8300
--7.5500 3.2400 6.2700 -6.6400
- 8.3400 8.0900 5.2800 2.0600
- 7.0800 2.5200 0.7400 -2.4700
--5.4500 -5.7000 -1.1900 4.7000
-[torch.DoubleTensor of dimension 6x4]
-
-=b
- 8.5800 9.3500
- 8.2600 -4.4300
- 8.4800 -0.7000
--5.2800 -0.2600
- 5.7200 -7.3600
- 8.9300 -2.5200
-[torch.DoubleTensor of dimension 6x2]
-
-x = torch.gels(b,a)
-=x
- -0.4506 0.2497
- -0.8492 -0.9020
- 0.7066 0.6323
- 0.1289 0.1351
- 13.1193 -7.4922
- -4.8214 -7.1361
-[torch.DoubleTensor of dimension 6x2]
-
-=b:dist(a*x:narrow(1,1,4))
-17.390200628863
-
-=math.sqrt(x:narrow(1,5,2):pow(2):sumall())
-17.390200628863
-
-
-
-==== torch.symeig([rese, resv,] a [, 'N' or 'V'] [, 'U' or 'L']) ====
-{{anchor:torch.symeig}}
-
-Eigenvalues and eigenvectors of a symmetric real matrix '' A '' of
-size '' m x m ''. This function calculates all eigenvalues (and
-vectors) of '' A '' such that '' A = V diag(e) V' ''. Since the input
-matrix '' A '' is supposed to be symmetric, only the upper triangular
-portion is used by default. If the 4th argument is 'L', then the lower
-triangular portion is used.
-
-Third argument defines computation of eigenvectors or eigenvalues
-only. If '' N '', only eignevalues are computed. If '' V '', both
-eigenvalues and eigenvectors are computed.
-
-
-
-a=torch.Tensor({{ 1.96, 0.00, 0.00, 0.00, 0.00},
- {-6.49, 3.80, 0.00, 0.00, 0.00},
- {-0.47, -6.39, 4.17, 0.00, 0.00},
- {-7.20, 1.50, -1.51, 5.70, 0.00},
- {-0.65, -6.34, 2.67, 1.80, -7.10}}):t()
-
-=a
- 1.9600 -6.4900 -0.4700 -7.2000 -0.6500
- 0.0000 3.8000 -6.3900 1.5000 -6.3400
- 0.0000 0.0000 4.1700 -1.5100 2.6700
- 0.0000 0.0000 0.0000 5.7000 1.8000
- 0.0000 0.0000 0.0000 0.0000 -7.1000
-[torch.DoubleTensor of dimension 5x5]
-
-e = torch.symeig(a)
-=e
--11.0656
- -6.2287
- 0.8640
- 8.8655
- 16.0948
-[torch.DoubleTensor of dimension 5]
-
-e,v = torch.symeig(a,'V')
-=e
--11.0656
- -6.2287
- 0.8640
- 8.8655
- 16.0948
-[torch.DoubleTensor of dimension 5]
-
-=v
--0.2981 -0.6075 0.4026 -0.3745 0.4896
--0.5078 -0.2880 -0.4066 -0.3572 -0.6053
--0.0816 -0.3843 -0.6600 0.5008 0.3991
--0.0036 -0.4467 0.4553 0.6204 -0.4564
--0.8041 0.4480 0.1725 0.3108 0.1622
-[torch.DoubleTensor of dimension 5x5]
-
-=v*torch.diag(e)*v:t()
- 1.9600 -6.4900 -0.4700 -7.2000 -0.6500
--6.4900 3.8000 -6.3900 1.5000 -6.3400
--0.4700 -6.3900 4.1700 -1.5100 2.6700
--7.2000 1.5000 -1.5100 5.7000 1.8000
--0.6500 -6.3400 2.6700 1.8000 -7.1000
-[torch.DoubleTensor of dimension 5x5]
-
-=a:dist(torch.triu(v*torch.diag(e)*v:t()))
-1.0219480822443e-14
-
-
-
-==== torch.eig([rese, resv,] a [, 'N' or 'V']) ====
-{{anchor:torch.eig}}
-
-Eigenvalues and eigenvectors of a general real matrix '' A '' of
-size '' m x m ''. This function calculates all right eigenvalues (and
-vectors) of '' A '' such that '' A = V' diag(e) V ''.
-
-The third argument controls whether eigenvectors are computed in
-addition to eigenvalues. If 'N', only eigenvalues are computed. If 'V',
-both eigenvalues and eigenvectors are computed.
-
-
-
-a=torch.Tensor({{ 1.96, 0.00, 0.00, 0.00, 0.00},
- {-6.49, 3.80, 0.00, 0.00, 0.00},
- {-0.47, -6.39, 4.17, 0.00, 0.00},
- {-7.20, 1.50, -1.51, 5.70, 0.00},
- {-0.65, -6.34, 2.67, 1.80, -7.10}}):t()
-
-=a
- 1.9600 -6.4900 -0.4700 -7.2000 -0.6500
- 0.0000 3.8000 -6.3900 1.5000 -6.3400
- 0.0000 0.0000 4.1700 -1.5100 2.6700
- 0.0000 0.0000 0.0000 5.7000 1.8000
- 0.0000 0.0000 0.0000 0.0000 -7.1000
-[torch.DoubleTensor of dimension 5x5]
-
-b = a+torch.triu(a,1):t()
-=b
-
- 1.9600 -6.4900 -0.4700 -7.2000 -0.6500
- -6.4900 3.8000 -6.3900 1.5000 -6.3400
- -0.4700 -6.3900 4.1700 -1.5100 2.6700
- -7.2000 1.5000 -1.5100 5.7000 1.8000
- -0.6500 -6.3400 2.6700 1.8000 -7.1000
-[torch.DoubleTensor of dimension 5x5]
-
-e = torch.eig(b)
-=e
- 16.0948 0.0000
--11.0656 0.0000
- -6.2287 0.0000
- 0.8640 0.0000
- 8.8655 0.0000
-[torch.DoubleTensor of dimension 5x2]
-
-e,v = torch.eig(b,'V')
-=e
- 16.0948 0.0000
--11.0656 0.0000
- -6.2287 0.0000
- 0.8640 0.0000
- 8.8655 0.0000
-[torch.DoubleTensor of dimension 5x2]
-
-=v
--0.4896 0.2981 -0.6075 -0.4026 -0.3745
- 0.6053 0.5078 -0.2880 0.4066 -0.3572
--0.3991 0.0816 -0.3843 0.6600 0.5008
- 0.4564 0.0036 -0.4467 -0.4553 0.6204
--0.1622 0.8041 0.4480 -0.1725 0.3108
-[torch.DoubleTensor of dimension 5x5]
-
-=v*torch.diag(e:select(2,1))*v:t()
- 1.9600 -6.4900 -0.4700 -7.2000 -0.6500
--6.4900 3.8000 -6.3900 1.5000 -6.3400
--0.4700 -6.3900 4.1700 -1.5100 2.6700
--7.2000 1.5000 -1.5100 5.7000 1.8000
--0.6500 -6.3400 2.6700 1.8000 -7.1000
-[torch.DoubleTensor of dimension 5x5]
-
-=b:dist(v*torch.diag(e:select(2,1))*v:t())
-3.5423944346685e-14
-
-
-
-==== torch.svd([resu, ress, resv,] a [, 'S' or 'A']) ====
-{{anchor:torch.svd}}
-
-Singular value decomposition of a real matrix '' A '' of size '' n x m ''
-such that '' A = U S V^T ''. The call to ''svd'' returns ''U,S,V''.
-
-The last argument, if it is a string, determines the number of singular
-values to be computed: 'S' stands for //some// and 'A' stands for //all//.
-
-
-
-a=torch.Tensor({{8.79, 6.11, -9.15, 9.57, -3.49, 9.84},
- {9.93, 6.91, -7.93, 1.64, 4.02, 0.15},
- {9.83, 5.04, 4.86, 8.83, 9.80, -8.99},
- {5.45, -0.27, 4.85, 0.74, 10.00, -6.02},
- {3.16, 7.98, 3.01, 5.80, 4.27, -5.31}}):t()
-=a
- 8.7900 9.9300 9.8300 5.4500 3.1600
- 6.1100 6.9100 5.0400 -0.2700 7.9800
- -9.1500 -7.9300 4.8600 4.8500 3.0100
- 9.5700 1.6400 8.8300 0.7400 5.8000
- -3.4900 4.0200 9.8000 10.0000 4.2700
- 9.8400 0.1500 -8.9900 -6.0200 -5.3100
-
-u,s,v = torch.svd(a)
-
-=u
--0.5911 0.2632 0.3554 0.3143 0.2299
--0.3976 0.2438 -0.2224 -0.7535 -0.3636
--0.0335 -0.6003 -0.4508 0.2334 -0.3055
--0.4297 0.2362 -0.6859 0.3319 0.1649
--0.4697 -0.3509 0.3874 0.1587 -0.5183
- 0.2934 0.5763 -0.0209 0.3791 -0.6526
-[torch.DoubleTensor of dimension 6x5]
-
-=s
- 27.4687
- 22.6432
- 8.5584
- 5.9857
- 2.0149
-[torch.DoubleTensor of dimension 5]
-
-=v
--0.2514 0.8148 -0.2606 0.3967 -0.2180
--0.3968 0.3587 0.7008 -0.4507 0.1402
--0.6922 -0.2489 -0.2208 0.2513 0.5891
--0.3662 -0.3686 0.3859 0.4342 -0.6265
--0.4076 -0.0980 -0.4933 -0.6227 -0.4396
-[torch.DoubleTensor of dimension 5x5]
-
-=u*torch.diag(s)*v:t()
- 8.7900 9.9300 9.8300 5.4500 3.1600
- 6.1100 6.9100 5.0400 -0.2700 7.9800
- -9.1500 -7.9300 4.8600 4.8500 3.0100
- 9.5700 1.6400 8.8300 0.7400 5.8000
- -3.4900 4.0200 9.8000 10.0000 4.2700
- 9.8400 0.1500 -8.9900 -6.0200 -5.3100
-[torch.DoubleTensor of dimension 6x5]
-
- =a:dist(u*torch.diag(s)*v:t())
-2.8923773593204e-14
-
-
-
-==== torch.inverse([res,] x) ====
-{{anchor:torch.inverse}}
-
-Computes the inverse of square matrix ''x''.
-
-''=torch.inverse(x)'' returns the result as a new matrix.
-
-''torch.inverse(y,x)'' puts the result in ''y''.
-
-
-x=torch.rand(10,10)
-y=torch.inverse(x)
-z=x*y
-print(z)
- 1.0000 -0.0000 0.0000 -0.0000 0.0000 0.0000 0.0000 -0.0000 0.0000 0.0000
- 0.0000 1.0000 -0.0000 -0.0000 0.0000 0.0000 -0.0000 -0.0000 -0.0000 0.0000
- 0.0000 -0.0000 1.0000 -0.0000 0.0000 0.0000 -0.0000 -0.0000 0.0000 0.0000
- 0.0000 -0.0000 -0.0000 1.0000 -0.0000 0.0000 0.0000 -0.0000 -0.0000 0.0000
- 0.0000 -0.0000 0.0000 -0.0000 1.0000 0.0000 0.0000 -0.0000 -0.0000 0.0000
- 0.0000 -0.0000 0.0000 -0.0000 0.0000 1.0000 0.0000 -0.0000 -0.0000 0.0000
- 0.0000 -0.0000 0.0000 -0.0000 0.0000 0.0000 1.0000 -0.0000 0.0000 0.0000
- 0.0000 -0.0000 -0.0000 -0.0000 0.0000 0.0000 0.0000 1.0000 0.0000 0.0000
- 0.0000 -0.0000 -0.0000 -0.0000 0.0000 0.0000 -0.0000 -0.0000 1.0000 0.0000
- 0.0000 -0.0000 0.0000 -0.0000 0.0000 0.0000 0.0000 -0.0000 0.0000 1.0000
-[torch.DoubleTensor of dimension 10x10]
-
-print('Max nonzero = ', torch.max(torch.abs(z-torch.eye(10))))
-Max nonzero = 2.3092638912203e-14
-
-
-
-===== Logical Operations on Tensors =====
-{{anchor:torch.logical.dok}}
-
-These functions implement logical comparison operators that take a
-tensor as input and another tensor or a number as the comparison
-target. They return a ''ByteTensor'' in which each element is 0 or 1
-indicating if the comparison for the corresponding element was
-''false'' or ''true'' respectively.
-
-==== torch.lt(a, b) ====
-{{anchor:torch.lt}}
-
-Implements %%<%% operator comparing each element in ''a'' with ''b''
-(if ''b'' is a number) or each element in ''a'' with corresponding element in ''b''.
-
-==== torch.le(a, b) ====
-{{anchor:torch.le}}
-
-Implements %%<=%% operator comparing each element in ''a'' with ''b''
-(if ''b'' is a number) or each element in ''a'' with corresponding element in ''b''.
-
-==== torch.gt(a, b) ====
-{{anchor:torch.gt}}
-
-Implements %%>%% operator comparing each element in ''a'' with ''b''
-(if ''b'' is a number) or each element in ''a'' with corresponding element in ''b''.
-
-==== torch.ge(a, b) ====
-{{anchor:torch.ge}}
-
-Implements %%>=%% operator comparing each element in ''a'' with ''b''
-(if ''b'' is a number) or each element in ''a'' with corresponding element in ''b''.
-
-==== torch.eq(a, b) ====
-{{anchor:torch.eq}}
-
-Implements %%==%% operator comparing each element in ''a'' with ''b''
-(if ''b'' is a number) or each element in ''a'' with corresponding element in ''b''.
-
-==== torch.ne(a, b) ====
-{{anchor:torch.ne}}
-
-Implements %%!=%% operator comparing each element in ''a'' with ''b''
-(if ''b'' is a number) or each element in ''a'' with corresponding element in ''b''.
-
-
-
-
-> a = torch.rand(10)
-> b = torch.rand(10)
-> =a
- 0.5694
- 0.5264
- 0.3041
- 0.4159
- 0.1677
- 0.7964
- 0.0257
- 0.2093
- 0.6564
- 0.0740
-[torch.DoubleTensor of dimension 10]
-
-> =b
- 0.2950
- 0.4867
- 0.9133
- 0.1291
- 0.1811
- 0.3921
- 0.7750
- 0.3259
- 0.2263
- 0.1737
-[torch.DoubleTensor of dimension 10]
-
-> =torch.lt(a,b)
- 0
- 0
- 1
- 0
- 1
- 0
- 1
- 1
- 0
- 1
-[torch.ByteTensor of dimension 10]
-
-> return torch.eq(a,b)
-0
-0
-0
-0
-0
-0
-0
-0
-0
-0
-[torch.ByteTensor of dimension 10]
-
-> return torch.ne(a,b)
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
-[torch.ByteTensor of dimension 10]
-
-> return torch.gt(a,b)
- 1
- 1
- 0
- 1
- 0
- 1
- 0
- 0
- 1
- 0
-[torch.ByteTensor of dimension 10]
-
-> a[torch.gt(a,b)] = 10
-> =a
- 10.0000
- 10.0000
- 0.3041
- 10.0000
- 0.1677
- 10.0000
- 0.0257
- 0.2093
- 10.0000
- 0.0740
-[torch.DoubleTensor of dimension 10]
-
-> a[torch.gt(a,1)] = -1
-> =a
--1.0000
--1.0000
- 0.3041
--1.0000
- 0.1677
--1.0000
- 0.0257
- 0.2093
--1.0000
- 0.0740
-[torch.DoubleTensor of dimension 10]
-
-
-
diff --git a/dok/memoryfile.dok b/dok/memoryfile.dok
deleted file mode 100644
index 2a928ea7..00000000
--- a/dok/memoryfile.dok
+++ /dev/null
@@ -1,36 +0,0 @@
-====== MemoryFile ======
-{{anchor:torch.MemoryFile.dok}}
-
-Parent classes: [[File|File]]
-
-A ''MemoryFile'' is a particular ''File'' which is able to perform basic
-read/write operations on a buffer in ''RAM''. It implements all methods
-described in [[File|File]].
-
-The data of this ''File'' is contained in a ''NULL''-terminated
-[[Storage|CharStorage]].
-
-==== torch.MemoryFile([mode]) ====
-{{anchor:torch.MemoryFile}}
-
-//Constructor// which returns a new ''MemoryFile'' object using ''mode''. Valid
-values for ''mode'' are ''"r"'' (read), ''"w"'' (write) or ''"rw"'' (read-write). The default is ''"rw"''.
-
-
-==== torch.MemoryFile(storage, mode) ====
-{{anchor:torch.MemoryFile}}
-
-//Constructor// which returns a new ''MemoryFile'' object, using the given
-[[Storage|storage]] (which must be a ''CharStorage'') and ''mode''. Valid
-values for ''mode'' are ''"r"'' (read), ''"w"'' (write) or ''"rw"'' (read-write). The last character
-in this storage //must// be ''NULL'' or an error will be generated. This allows
-reading existing memory. If used for writing, note that the ''storage'' might
-be resized by this class if needed.
-
-==== [CharStorage] storage() ====
-{{anchor:torch.MemoryFile.storage}}
-
-Returns the [[Storage|storage]] which contains all the data of the
-''File'' (note: this is //not// a copy, but a //reference// to this storage). The
-size of the storage is the size of the data in the ''File'' plus one, the
-last character being ''NULL''.
diff --git a/dok/pipefile.dok b/dok/pipefile.dok
deleted file mode 100644
index b8f65b8d..00000000
--- a/dok/pipefile.dok
+++ /dev/null
@@ -1,21 +0,0 @@
-====== PipeFile ======
-{{anchor:torch.PipeFile.dok}}
-
-Parent classes: [[DiskFile|DiskFile]]
-
-A ''PipeFile'' is a particular ''File'' which is able to perform basic read/write operations
-on a command pipe. It implements all methods described in [[DiskFile|DiskFile]] and [[File|File]].
-
-The file might be open in read or write mode, depending on the parameter
-''mode'' (which can take the value ''"r"'' or ''"w"'')
-given to the [[#torch.PipeFile|torch.PipeFile(fileName, mode)]]. Read-write mode is not allowed.
-
-==== torch.PipeFile(command, [mode], [quiet]) ====
-{{anchor:torch.PipeFile}}
-
-//Constructor// which executes ''command'' by opening a pipe in read or write
-''mode''. Valid values for ''mode'' are ''"r"'' (read) or ''"w"'' (write). The default is read
-mode.
-
-If (and only if) ''quiet'' is ''true'', no error will be raised in case of a
-problem opening the file; instead, ''nil'' will be returned.
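-
-A quick sketch of reading a command's output through a pipe (the exact
-output depends on your system; ''readString('*l')'' is the line-reading
-format described in [[File|File]]):
-
-f = torch.PipeFile('echo hello')   -- default mode is read
-print(f:readString('*l'))          -- read one line written by the command
-f:close()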
diff --git a/dok/random.dok b/dok/random.dok
deleted file mode 100644
index 6c06eee2..00000000
--- a/dok/random.dok
+++ /dev/null
@@ -1,105 +0,0 @@
-====== Random Numbers ======
-{{anchor:torch.random.dok}}
-
-Torch provides accurate mathematical random generation, based on the
-[[http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/emt.html|Mersenne Twister]]
-random number generator.
-
-===== Seed Handling =====
-{{anchor:torch.seed.dok}}
-
-If no seed is provided to the random generator (via
-[[#torch.seed|seed()]] or [[#torch.manualSeed|manualSeed()]]), a
-random seed is chosen as if by [[#torch.seed|seed()]] the first
-time a random number is generated.
-
-The initial seed can be obtained using [[#torch.initialSeed|initialSeed()]].
-
-Setting a particular seed allows the user to (re)generate a particular series of
-random numbers. Example:
-
-> torch.manualSeed(123)
-> = torch.uniform()
-0.69646918727085
-> return torch.uniform()
-0.71295532141812
-> return torch.uniform()
-0.28613933874294
-> torch.manualSeed(123)
-> return torch.uniform()
-0.69646918727085
-> return torch.uniform()
-0.71295532141812
-> return torch.uniform()
-0.28613933874294
-> torch.manualSeed(torch.initialSeed())
-> return torch.uniform()
-0.69646918727085
-> return torch.uniform()
-0.71295532141812
-> return torch.uniform()
-0.28613933874294
-
-
-==== [number] seed() ====
-{{anchor:torch.seed}}
-
-Sets the seed of the random number generator using the current time of the
-computer. Granularity is seconds. Returns the seed used.
-
-==== manualSeed(number) ====
-{{anchor:torch.manualSeed}}
-
-Set the seed of the random number generator to the given ''number''.
-
-==== initialSeed() ====
-{{anchor:torch.initialSeed}}
-
-Returns the initial seed used to initialize the random generator.
-
-==== [number] random() ====
-{{anchor:torch.random}}
-
-Returns a 32-bit integer random number.
-
-==== [number] uniform([a],[b]) ====
-{{anchor:torch.uniform}}
-
-Returns a random real number according to uniform distribution on [a,b[. By default ''a'' is 0 and ''b'' is 1.
-
-==== [number] normal([mean],[stdv]) ====
-{{anchor:torch.normal}}
-
-Returns a random real number according to a normal distribution with the given ''mean'' and standard deviation ''stdv''.
-''stdv'' must be positive.
-
-==== [number] exponential(lambda) ====
-{{anchor:torch.exponential}}
-
-Returns a random real number according to the exponential distribution
-''p(x) = lambda * exp(-lambda * x)''
-
-==== [number] cauchy(median, sigma) ====
-{{anchor:torch.cauchy}}
-
-Returns a random real number according to the Cauchy distribution
-''p(x) = sigma/(pi*(sigma^2 + (x-median)^2))''
-
-==== [number] logNormal(mean, stdv) ====
-{{anchor:torch.logNormal}}
-
-Returns a random real number according to the log-normal distribution, with
-the given ''mean'' and standard deviation ''stdv''.
-''stdv'' must be positive.
-
-==== [number] geometric(p) ====
-{{anchor:torch.geometric}}
-
-Returns a random integer number according to a geometric distribution
-''p(i) = (1-p) * p^(i-1)''. ''p'' must satisfy ''0 < p < 1''.
-
-==== [number] bernoulli([p]) ====
-{{anchor:torch.bernoulli}}
-
-Returns ''1'' with probability ''p'' and ''0'' with probability ''1-p''. ''p'' must satisfy ''0 <= p <= 1''.
-By default ''p'' is equal to ''0.5''.
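-
-As a quick sketch, the distribution functions above can be combined with
-manual seeding for reproducible draws (the printed values depend on the seed):
-
-torch.manualSeed(1234)        -- make the sequence reproducible
-print(torch.uniform(0,10))    -- uniform real number in [0,10[
-print(torch.normal(0,1))      -- normal with mean 0, stdv 1
-print(torch.bernoulli(0.8))   -- 1 with probability 0.8, else 0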
diff --git a/dok/storage.dok b/dok/storage.dok
deleted file mode 100644
index efaf35f5..00000000
--- a/dok/storage.dok
+++ /dev/null
@@ -1,224 +0,0 @@
-====== Storage ======
-{{anchor:torch.Storage.dok}}
-{{anchor:torch.ByteStorage.dok}}
-{{anchor:torch.CharStorage.dok}}
-{{anchor:torch.ShortStorage.dok}}
-{{anchor:torch.IntStorage.dok}}
-{{anchor:torch.LongStorage.dok}}
-{{anchor:torch.FloatStorage.dok}}
-{{anchor:torch.DoubleStorage.dok}}
-
-//Storages// are basically a way for ''Lua'' to access the memory of a ''C'' pointer
-or array. //Storages// can also [[#__torch.StorageMap|map the contents of a file to memory]].
-A ''Storage'' is an array of //basic// ''C'' types. For arrays of ''Torch'' objects,
-use ''Lua'' tables.
-
-Several ''Storage'' classes for all the basic ''C'' types exist and have the
-following self-explanatory names: ''ByteStorage'', ''CharStorage'', ''ShortStorage'',
-''IntStorage'', ''LongStorage'', ''FloatStorage'', ''DoubleStorage''.
-
-Note that ''ByteStorage'' and ''CharStorage'' both represent arrays of bytes: ''ByteStorage'' represents an array of
-//unsigned// chars, while ''CharStorage'' represents an array of //signed// chars.
-
-Conversions between two ''Storage'' types can be done using ''copy'':
-
-x = torch.IntStorage(10):fill(1)
-y = torch.DoubleStorage(10):copy(x)
-
-
-[[#torch.Storage|Classical storages]] are [[File#torch.File.serialization|serializable]].
-[[#__torch.StorageMap|Storages mapping a file]] are also [[#FileSerialization|serializable]],
-but //will be saved as a normal storage//.
-
-An alias ''torch.Storage()'' is made over your preferred Storage type,
-controlled by the
-[[utility#torch.setdefaulttensortype|torch.setdefaulttensortype]]
-function. By default, this "points" to ''torch.DoubleStorage''.
-
-===== Constructors and Access Methods =====
-
-==== torch.TYPEStorage([size]) ====
-{{anchor:torch.Storage}}
-
-Returns a new ''Storage'' of type ''TYPE''. Valid ''TYPE'' are ''Byte'', ''Char'', ''Short'',
-''Int'', ''Long'', ''Float'', and ''Double''. If ''size'' is given, the ''Storage'' is resized
-accordingly; otherwise an empty ''Storage'' is created.
-
-Example:
-
--- Creates a Storage of 10 doubles:
-x = torch.DoubleStorage(10)
-
-
-The data in the ''Storage'' is //uninitialized//.
-
-==== torch.TYPEStorage(table) ====
-{{anchor:torch.Storage}}
-
-The argument is assumed to be a Lua array of numbers. The constructor returns a new storage of the specified 'TYPE',
-with the size of the table, containing all the table elements converted to that type.
-
-Example:
-
-> = torch.IntStorage({1,2,3,4})
-
- 1
- 2
- 3
- 4
-[torch.IntStorage of size 4]
-
-
-==== torch.TYPEStorage(filename [, shared]) ====
-{{anchor:torch.Storage}}
-{{anchor:__torch.StorageMap}}
-
-Returns a new kind of ''Storage'' which maps the contents of the given
-''filename'' to memory. Valid ''TYPE'' are ''Byte'', ''Char'', ''Short'', ''Int'', ''Long'',
-''Float'', and ''Double''. If the optional boolean argument ''shared'' is ''true'',
-the mapped memory is shared amongst all processes on the computer.
-
-When ''shared'' is ''true'', the file must be accessible in read-write mode. Any
-changes to the storage will be written to the file. The changes might be written
-only after destruction of the storage.
-
-When ''shared'' is ''false'' (or not provided), the file must be at least
-readable. Changes to the storage will not affect the file. Note:
-changes made to the file after creation of the storage have an unspecified
-effect on the storage contents.
-
-The [[#torch.Storage.size|size]] of the returned ''Storage'' will be
-
-(size of file in bytes)/(size of TYPE).
-
-
-Example:
-
-$ echo "Hello World" > hello.txt
-$ lua
-Lua 5.1.3 Copyright (C) 1994-2008 Lua.org, PUC-Rio
-> require 'torch'
-> x = torch.CharStorage('hello.txt')
-> = x
- 72
- 101
- 108
- 108
- 111
- 32
- 87
- 111
- 114
- 108
- 100
- 10
-[torch.CharStorage of size 12]
-
-> = x:string()
-Hello World
-
-> = x:fill(42):string()
-************
->
-$ cat hello.txt
-Hello World
-$ lua
-Lua 5.1.3 Copyright (C) 1994-2008 Lua.org, PUC-Rio
-> require 'torch'
-> x = torch.CharStorage('hello.txt', true)
-> = x:string()
-Hello World
-
-> x:fill(42)
->
-$ cat hello.txt
-************
-
-
-==== [number] #self ====
-{{anchor:__torch.StorageSharp}}
-
-Returns the number of elements in the storage. Equivalent to [[#torch.Storage.size|size()]].
-
-==== [number] self[index] ====
-{{anchor:torch.Storage.__index__}}
-
-Returns or sets the element at position ''index'' in the storage. The valid range
-of ''index'' is 1 to [[#torch.Storage.size|size()]].
-
-Example:
-
-x = torch.DoubleStorage(10)
-print(x[5])
-
-
-==== [self] copy(storage) ====
-{{anchor:torch.Storage.copy}}
-
-Copies another ''storage''. The types of the two storages may differ: in that case
-a type conversion occurs (which might, of course, result in loss of precision or rounding).
-This method returns self, allowing things like:
-
-x = torch.IntStorage(10):fill(1)
-y = torch.DoubleStorage(10):copy(x) -- y won't be nil!
-
-
-==== [self] fill(value) ====
-{{anchor:torch.Storage.fill}}
-
-Fill the ''Storage'' with the given value. This method returns self, allowing things like:
-
-x = torch.IntStorage(10):fill(0) -- x won't be nil!
-
-
-==== [self] resize(size) ====
-{{anchor:torch.Storage.resize}}
-
-Resizes the storage to the provided ''size''. //The new contents are undetermined//.
-
-This function returns self, allowing things like:
-
-x = torch.DoubleStorage(10):fill(1)
-y = torch.DoubleStorage():resize(x:size()):copy(x) -- y won't be nil!
-
-
-==== [number] size() ====
-{{anchor:torch.Storage.size}}
-
-Returns the number of elements in the storage. Equivalent to [[#__torch.StorageSharp|#]].
-
-==== [self] string(str) ====
-{{anchor:torch.Storage.string}}
-
-This function is available only on ''ByteStorage'' and ''CharStorage''.
-
-This method resizes the storage to the length of the provided
-string ''str'' and copies the contents of ''str'' into the storage. The ''NULL'' terminating character is not copied,
-but ''str'' may contain ''NULL'' characters. The method returns the ''Storage''.
-
-> x = torch.CharStorage():string("blah blah")
-> print(x)
- 98
- 108
- 97
- 104
- 32
- 98
- 108
- 97
- 104
-[torch.CharStorage of size 9]
-
-
-==== [string] string() ====
-{{anchor:torch.Storage.string}}
-
-This function is available only on ''ByteStorage'' and ''CharStorage''.
-
-The contents of the storage viewed as a string are returned. The string might contain
-''NULL'' characters.
-
-> x = torch.CharStorage():string("blah blah")
-> print(x:string())
-blah blah
-
diff --git a/dok/tester.dok b/dok/tester.dok
deleted file mode 100644
index f6ce0224..00000000
--- a/dok/tester.dok
+++ /dev/null
@@ -1,150 +0,0 @@
-====== Tester ======
-{{anchor:torch.Tester.dok}}
-
-This class provides a generic unit testing framework. It is already
-used in the [[..:nn:index|nn]] package to verify the correctness of classes.
-
-The framework is generally used as follows.
-
-
-mytest = {}
-
-tester = torch.Tester()
-
-function mytest.TestA()
- local a = 10
- local b = 10
- tester:asserteq(a,b,'a == b')
- tester:assertne(a,b,'a ~= b')
-end
-
-function mytest.TestB()
- local a = 10
- local b = 9
- tester:assertlt(a,b,'a < b')
- tester:assertgt(a,b,'a > b')
-end
-
-tester:add(mytest)
-tester:run()
-
-
-
-Running this code will report 2 errors in 2 test functions. Generally it is
-better to put a single test case in each test function, unless several closely related
-test cases exist. The error report includes the message and line number of the error.
-
-
-
-Running 2 tests
-** ==> Done
-
-Completed 2 tests with 2 errors
-
---------------------------------------------------------------------------------
-TestB
-a < b
- LT(<) violation val=10, condition=9
- ...y/usr.t7/local.master/share/lua/5.1/torch/Tester.lua:23: in function 'assertlt'
- [string "function mytest.TestB()..."]:4: in function 'f'
-
---------------------------------------------------------------------------------
-TestA
-a ~= b
- NE(~=) violation val=10, condition=10
- ...y/usr.t7/local.master/share/lua/5.1/torch/Tester.lua:38: in function 'assertne'
- [string "function mytest.TestA()..."]:5: in function 'f'
-
---------------------------------------------------------------------------------
-
-
-
-
-==== torch.Tester() ====
-{{anchor:torch.Tester}}
-
-Returns a new instance of ''torch.Tester'' class.
-
-==== add(f, 'name') ====
-{{anchor:torch.Tester.add}}
-
-Adds a new test function with name ''name''. The test function is stored in ''f''.
-The function is expected to run without arguments and to return no values.
-
-==== add(ftable) ====
-{{anchor:torch.Tester.add}}
-
-Recursively adds all function entries of the table ''ftable'' as tests. This table
-can only have functions or nested tables of functions.
-
-==== assert(condition [, message]) ====
-{{anchor:torch.Tester.assert}}
-
-Saves an error if condition is not true with the optional message.
-
-==== assertlt(val, condition [, message]) ====
-{{anchor:torch.Tester.assertlt}}
-
-Saves an error if ''val < condition'' is not true with the optional message.
-
-==== assertgt(val, condition [, message]) ====
-{{anchor:torch.Tester.assertgt}}
-
-Saves an error if ''val > condition'' is not true with the optional message.
-
-==== assertle(val, condition [, message]) ====
-{{anchor:torch.Tester.assertle}}
-
-Saves an error if ''val <= condition'' is not true with the optional message.
-
-==== assertge(val, condition [, message]) ====
-{{anchor:torch.Tester.assertge}}
-
-Saves an error if ''val >= condition'' is not true with the optional message.
-
-==== asserteq(val, condition [, message]) ====
-{{anchor:torch.Tester.asserteq}}
-
-Saves an error if ''val == condition'' is not true with the optional message.
-
-==== assertne(val, condition [, message]) ====
-{{anchor:torch.Tester.assertne}}
-
-Saves an error if ''val ~= condition'' is not true with the optional message.
-
-==== assertTensorEq(ta, tb, condition [, message]) ====
-{{anchor:torch.Tester.assertTensorEq}}
-
-Saves an error if ''max(abs(ta-tb)) < condition'' is not true with the optional message.
-
-==== assertTensorNe(ta, tb, condition [, message]) ====
-{{anchor:torch.Tester.assertTensorNe}}
-
-Saves an error if ''max(abs(ta-tb)) >= condition'' is not true with the optional message.
-
-==== assertTableEq(ta, tb, condition [, message]) ====
-{{anchor:torch.Tester.assertTableEq}}
-
-Saves an error if ''max(abs(ta-tb)) < condition'' is not true with the optional message.
-
-==== assertTableNe(ta, tb, condition [, message]) ====
-{{anchor:torch.Tester.assertTableNe}}
-
-Saves an error if ''max(abs(ta-tb)) >= condition'' is not true with the optional message.
-
-==== assertError(f [, message]) ====
-{{anchor:torch.Tester.assertError}}
-
-Saves an error if calling the function ''f()'' does not raise an error, with the optional message.
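-
-For instance, with a hypothetical test function:
-
-function mytest.TestError()
-   -- passes because the wrapped call raises an error
-   tester:assertError(function() error('boom') end, 'expected an error')
-end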
-
-==== run() ====
-{{anchor:torch.Tester.run}}
-
-Runs all the test functions stored using the [[#torch.Tester.add|add()]] function.
-While running, it reports progress, and at the end it gives a summary of all errors.
-
-
-
-
-
-
diff --git a/dok/timer.dok b/dok/timer.dok
deleted file mode 100644
index 2c593d15..00000000
--- a/dok/timer.dok
+++ /dev/null
@@ -1,46 +0,0 @@
-====== Timer ======
-{{anchor:torch.Timer.dok}}
-
-This class is able to measure time (in seconds) elapsed in a particular period. Example:
-
- timer = torch.Timer() -- the Timer starts to count now
- x = 0
- for i=1,1000000 do
- x = x + math.sin(x)
- end
- print('Time elapsed for 1,000,000 sin: ' .. timer:time().real .. ' seconds')
-
-
-===== Timer Class Constructor and Methods =====
-{{anchor:torch.Timer}}
-
-==== torch.Timer() ====
-{{anchor:torch.Timer}}
-
-Returns a new ''Timer''. The timer starts to count the time now.
-
-==== [self] reset() ====
-{{anchor:torch.Timer.reset}}
-
-Resets the timer's accumulated time to ''0''. If the timer was running, it
-restarts counting now. If the timer was stopped, it stays stopped.
-
-==== [self] resume() ====
-{{anchor:torch.Timer.resume}}
-
-Resumes a stopped timer. The timer restarts counting, adding the newly counted
-time to the time already accumulated before the timer was stopped.
-
-==== [self] stop() ====
-{{anchor:torch.Timer.stop}}
-
-Stop the timer. The accumulated time counted until now is stored.
-
-==== [table] time() ====
-{{anchor:torch.Timer.time}}
-
-Returns a table reporting the accumulated time elapsed until now. Following the UNIX shell ''time'' command,
-the table has three fields:
- * ''real'': the wall-clock elapsed time.
- * ''user'': the elapsed CPU time. Note that the CPU time of a threaded program sums time spent in all threads.
- * ''sys'': the time spent in system usage.
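-
-A small sketch combining ''stop()'' and ''resume()'' to exclude a section of
-code from the measurement:
-
-timer = torch.Timer()
-timer:stop()                  -- freeze the accumulated time
--- ... code here is not counted ...
-timer:resume()                -- continue counting from the accumulated time
-print(timer:time().real)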
diff --git a/dok/utility.dok b/dok/utility.dok
deleted file mode 100644
index 63cbccae..00000000
--- a/dok/utility.dok
+++ /dev/null
@@ -1,253 +0,0 @@
-====== Torch utility functions ======
-{{anchor:torch.utility.dok}}
-
-These functions are used in all Torch packages for creating and handling classes.
-The most interesting function is probably [[#torch.class|torch.class()]], which allows
-the user to easily create new classes. [[#torch.typename|torch.typename()]] might
-also be interesting for checking the class of a given Torch object.
-
-The other functions are more for advanced users.
-
-==== [metatable] torch.class(name, [parentName]) ====
-{{anchor:torch.class}}
-
-Creates a new ''Torch'' class called ''name''. If ''parentName'' is provided, the class will inherit
-''parentName'' methods. A class is a table which has a particular metatable.
-
-If ''name'' is of the form ''package.className'' then the class ''className'' will be added to the specified ''package''.
-In that case, ''package'' has to be a valid (and already loaded) package. If ''name'' does not contain any ''"."'',
-then the class will be defined in the global environment.
-
-One [or two] (meta)tables are returned. These tables contain all the methods
-provided by the class [and its parent class, if one has been provided]. After
-a call to ''torch.class()'' you have to properly fill up the metatable.
-
-After the class definition is complete, constructing a new object of class //name// is achieved by calling ''//name//()''.
-This call will first invoke the method __init() if it exists, passing along all arguments of ''//name//()''.
-
-
- require "torch"
-
- -- for naming convenience
- do
- --- creates a class "Foo"
- local Foo = torch.class('Foo')
-
- --- the initializer
- function Foo:__init()
- self.contents = "this is some text"
- end
-
- --- a method
- function Foo:print()
- print(self.contents)
- end
-
- --- another one
- function Foo:bip()
- print('bip')
- end
-
- end
-
- --- now create an instance of Foo
- foo = Foo()
-
- --- try it out
- foo:print()
-
- --- create a class torch.Bar which
- --- inherits from Foo
- do
- local Bar, parent = torch.class('torch.Bar', 'Foo')
-
- --- the initializer
- function Bar:__init(stuff)
- --- call the parent initializer on ourself
- parent.__init(self)
-
- --- do some stuff
- self.stuff = stuff
- end
-
- --- a new method
- function Bar:boing()
- print('boing!')
- end
-
- --- override parent's method
- function Bar:print()
- print(self.contents)
- print(self.stuff)
- end
- end
-
- --- create a new instance and use it
- bar = torch.Bar("ha ha!")
- bar:print() -- overridden method
- bar:boing() -- child method
- bar:bip() -- parent's method
-
-
-
-For advanced users, it is worth mentioning that ''torch.class()'' actually
-calls [[#torch.newmetatable|torch.newmetatable()]] with a particular
-constructor. The constructor creates a Lua table, sets the right
-metatable on it, and then calls __init() if it exists in the
-metatable. It also sets a [[#torch.factory|factory]] field __factory so that it
-is possible to create an empty object of this class.
-
-==== [string] torch.typename(object) ====
-{{anchor:torch.typename}}
-
-Checks if ''object'' has a metatable. If it does, and if it corresponds to a
-''Torch'' class, returns a string containing the name of the
-class. Returns ''nil'' in all other cases.
-
-A Torch class is a class created with [[#torch.class|torch.class()]] or
-[[#torch.newmetatable|torch.newmetatable()]].
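-
-For example (assuming the default tensor type ''torch.DoubleTensor''):
-
-> = torch.typename(torch.Tensor())
-torch.DoubleTensor
-> = torch.typename(1)
-nil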
-
-==== [userdata] torch.typename2id(string) ====
-{{anchor:torch.typename2id}}
-
-Given a Torch class name specified by ''string'', returns a unique
-corresponding id (defined by a ''lightuserdata'' pointing to the internal
-structure of the class). This might be useful for a //fast// check of the
-class of an object (if used with [[#torch.id|torch.id()]]), avoiding string
-comparisons.
-
-Returns ''nil'' if ''string'' does not specify a Torch object.
-
-==== [userdata] torch.id(object) ====
-{{anchor:torch.id}}
-
-Returns a unique id corresponding to the //class// of the given Torch object.
-The id is defined by a ''lightuserdata'' pointing on the internal structure
-of the class.
-
-Returns ''nil'' if ''object'' is not a Torch object.
-
-This is different from the //object// id returned by [[#torch.pointer|torch.pointer()]].
-
-==== [table] torch.newmetatable(name, parentName, constructor) ====
-{{anchor:torch.newmetatable}}
-
-Register a new metatable as a Torch type with the given string ''name''. The new metatable is returned.
-
-If the string ''parentName'' is not ''nil'' and is a valid Torch type (previously created
-by ''torch.newmetatable()'') then set the corresponding metatable as a metatable to the returned new
-metatable.
-
-If the given ''constructor'' function is not ''nil'', then assign to the variable ''name'' the given constructor.
-The given ''name'' might be of the form ''package.className'', in which case the ''className'' will be local to the
-specified ''package''. In that case, ''package'' must be a valid and already loaded package.
-
-==== [function] torch.factory(name) ====
-{{anchor:torch.factory}}
-
-Returns the factory function of the Torch class ''name''. If the class name is invalid or if the class
-has no factory, then returns ''nil''.
-
-A Torch class is a class created with [[#torch.class|torch.class()]] or
-[[#torch.newmetatable|torch.newmetatable()]].
-
-A factory function is able to return a new (empty) object of its corresponding class. This is helpful for
-[[File#torch.File.serialization|object serialization]].
-
-==== [table] torch.getmetatable(string) ====
-{{anchor:torch.getmetatable}}
-
-Given a ''string'', returns a metatable corresponding to the Torch class described
-by ''string''. Returns ''nil'' if the class does not exist.
-
-A Torch class is a class created with [[#torch.class|torch.class()]] or
-[[#torch.newmetatable|torch.newmetatable()]].
-
-Example:
-
-> for k,v in pairs(torch.getmetatable("torch.CharStorage")) do print(k,v) end
-__index__ function: 0x1a4ba80
-__typename torch.CharStorage
-write function: 0x1a49cc0
-__tostring__ function: 0x1a586e0
-__newindex__ function: 0x1a4ba40
-string function: 0x1a4d860
-__version 1
-copy function: 0x1a49c80
-read function: 0x1a4d840
-__len__ function: 0x1a37440
-fill function: 0x1a375c0
-resize function: 0x1a37580
-__index table: 0x1a4a080
-size function: 0x1a4ba20
-
-
-==== [boolean] torch.isequal(object1, object2) ====
-{{anchor:torch.isequal}}
-
-If the two objects given as arguments are ''Lua'' tables (or Torch objects), then returns ''true'' if and only if the
-tables (or Torch objects) have the same address in memory. Returns ''false'' in any other cases.
-
-A Torch class is a class created with [[#TorchClass|torch.class()]] or
-[[#torch.newmetatable|torch.newmetatable()]].
-
-==== [string] torch.getdefaulttensortype() ====
-{{anchor:torch.getdefaulttensortype}}
-
-Returns a string representing the default tensor type currently in use
-by Torch7.
-
-==== [table] torch.getenv(function or userdata) ====
-{{anchor:torch.getenv}}
-
-Returns the Lua ''table'' environment of the given ''function'' or the given
-''userdata''. To know more about environments, please read the documentation
-of [[http://www.lua.org/manual/5.1/manual.html#lua_setfenv|lua_setfenv()]]
-and [[http://www.lua.org/manual/5.1/manual.html#lua_getfenv|lua_getfenv()]].
-
-==== [number] torch.version(object) ====
-{{anchor:torch.version}}
-
-Returns the field __version of a given object. This might
-be helpful to handle variations in a class over time.
-
-==== [number] torch.pointer(object) ====
-{{anchor:torch.pointer}}
-
-Returns a unique id (pointer) of the given ''object'', which can be a Torch
-object, a table, a thread or a function.
-
-This is different from the //class// id returned by [[#torch.id|torch.id()]].
-
-==== torch.setdefaulttensortype([typename]) ====
-{{anchor:torch.setdefaulttensortype}}
-
-Sets the default tensor type for all the tensors allocated from this
-point on. Valid types are:
- * ''ByteTensor''
- * ''CharTensor''
- * ''ShortTensor''
- * ''IntTensor''
- * ''FloatTensor''
- * ''DoubleTensor''
-
-==== torch.setenv(function or userdata, table) ====
-{{anchor:torch.setenv}}
-
-Assign ''table'' as the Lua environment of the given ''function'' or the given
-''userdata''. To know more about environments, please read the documentation
-of [[http://www.lua.org/manual/5.1/manual.html#lua_setfenv|lua_setfenv()]]
-and [[http://www.lua.org/manual/5.1/manual.html#lua_getfenv|lua_getfenv()]].
-
-==== [object] torch.setmetatable(table, classname) ====
-{{anchor:torch.setmetatable}}
-
-Set the metatable of the given ''table'' to the metatable of the Torch
-object named ''classname''. This function has to be used with a lot
-of care.
-
-==== [table] torch.getconstructortable(string) ====
-{{anchor:torch.getconstructortable}}
-
-BUGGY
-Return the constructor table of the Torch class specified by ''string'.
diff --git a/lib/luaT/README.md b/lib/luaT/README.md
new file mode 100644
index 00000000..6e9cf0dc
--- /dev/null
+++ b/lib/luaT/README.md
@@ -0,0 +1,239 @@
+
+# Lua Torch C API #
+
+luaT provides an API to interface Lua and C in Torch packages. It defines a
+concept of _classes_ to Lua for Torch, and provides a mechanism to easily
+handle these Lua classes from C.
+
+It additionally provides a few functions that `luaL` should have defined, and
+defines several functions similar to their `luaL` counterparts that print better
+type errors when using `luaT` classes.
+
+
+## Memory functions ##
+
+Classical memory allocation functions which generate a Lua error in case of
+problem.
+
+
+### void* luaT_alloc(lua_State *L, long size) ###
+
+Allocates `size` bytes and returns a pointer to the allocated
+memory. A Lua error is raised if the allocation fails.
+
+
+### void* luaT_realloc(lua_State *L, void *ptr, long size) ###
+
+Reallocates `ptr` to `size` bytes. `ptr` must have been previously
+allocated with [luaT_alloc](#luaT_alloc) or
+[luaT_realloc](#luaT_realloc), or with the C `malloc` or `realloc`
+functions. A Lua error is raised if the allocation fails.
+
+
+### void luaT_free(lua_State *L, void *ptr) ###
+
+Frees the memory allocated at address `ptr`. The memory must have been
+previously allocated with [luaT_alloc](#luaT_alloc) or
+[luaT_realloc](#luaT_realloc), or with the C `malloc` or `realloc`
+functions.
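
The contract of these functions can be sketched in plain C. The names `xalloc`/`xrealloc` below are hypothetical stand-ins: without a `lua_State` they can only report and abort where `luaT_alloc` would raise a Lua error.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical analogue of luaT_alloc: never returns NULL to the caller. */
static void *xalloc(size_t size)
{
    void *ptr = malloc(size);
    if (ptr == NULL) {
        /* luaT_alloc would raise a Lua error here. */
        fprintf(stderr, "out of memory (requested %zu bytes)\n", size);
        abort();
    }
    return ptr;
}

/* Hypothetical analogue of luaT_realloc: same failure behaviour. */
static void *xrealloc(void *ptr, size_t size)
{
    void *p = realloc(ptr, size);
    if (p == NULL && size != 0) {
        fprintf(stderr, "out of memory (requested %zu bytes)\n", size);
        abort();
    }
    return p;
}
```

The point of the luaT variants is that callers never have to test the return value: allocation failure is turned into an error, not a NULL.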
+
+
+## Class creation and basic handling ##
+
+A `luaT` class is basically either a Lua _table_ or _userdata_ with
+an appropriate _metatable_, created with
+[luaT_newmetatable](#luaT_newmetatable). Contrary to the luaL userdata
+functions, the luaT mechanism handles inheritance: if a class inherits from
+another class, its metatable will itself have a metatable
+corresponding to the _parent metatable_, so the metatables are cascaded
+according to the class inheritance. Multiple inheritance is not supported.
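
The cascading lookup can be modelled with a toy structure (not the real luaT internals): each metatable holds one method name and a parent pointer, and a lookup walks up the parent chain, just as Lua falls back to a metatable's own metatable.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of cascaded metatables: one method per metatable,
 * plus a link to the parent class's metatable. */
struct metatable {
    const char *method;            /* single method name, for illustration */
    const struct metatable *parent;
};

/* Walk the inheritance chain until the method is found or the chain ends. */
static const char *lookup(const struct metatable *mt, const char *name)
{
    for (; mt != NULL; mt = mt->parent)
        if (mt->method != NULL && strcmp(mt->method, name) == 0)
            return mt->method;
    return NULL;                   /* not found anywhere in the chain */
}
```

A child class therefore sees its own methods first and its ancestors' methods transparently, which is exactly what the cascaded metatables buy you.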
+
+
+### Operator overloading ###
+
+The metatable of a `luaT` object contains `Lua` operators like
+`__index`, `__newindex`, `__tostring`, `__add`
+(etc...). These operators will respectively look for `__index__`,
+`__newindex__`, `__tostring__`, `__add__` (etc...) in the
+metatable. If found, the corresponding function or value will be returned,
+else a Lua error will be raised.
+
+If one wants to provide `__index__` or `__newindex__` in the
+metaclass, these operators must follow a particular scheme:
+
+ * `__index__` must either return a value _and_ `true`, or return `false` only. In the first case, it means `__index__` was able to handle the given argument (e.g., the type was correct). In the second case it was not able to do anything, so `__index` in the root metatable can then check whether the metaclass contains the required value.
+
+ * `__newindex__` must either return `true` or `false`. As for `__index__`, `true` means it could handle the argument and `false` means it could not. In the latter case, the root metatable's `__newindex` will raise an error if the object is a userdata, or perform a rawset if the object is a Lua table.
+
+Other metaclass operators like `__tostring__`, `__add__`, etc. have no particular constraints.
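
The two-outcome contract can be modelled in self-contained C (the key names and values below are made up for illustration): a handler either produces a value and reports success, or reports failure, in which case the root `__index` falls back to the metaclass table.

```c
#include <assert.h>
#include <string.h>

/* A handler returns 1 (handled, value written) or 0 (not handled),
 * mirroring "return value and true" vs "return false only". */
typedef int (*index_handler)(const char *key, int *value_out);

/* Hypothetical __index__ that only understands the key "size". */
static int size_only_index(const char *key, int *value_out)
{
    if (strcmp(key, "size") == 0) {
        *value_out = 42;          /* made-up value */
        return 1;                 /* handled: value and true */
    }
    return 0;                     /* not handled: false only */
}

/* Root __index: try __index__ first, then fall back to the metaclass. */
static int root_index(index_handler h, const char *key, int *value_out)
{
    if (h(key, value_out))
        return 1;
    if (strcmp(key, "classmethod") == 0) { /* stand-in metaclass lookup */
        *value_out = 7;
        return 1;
    }
    return 0;                     /* nothing handled the key */
}
```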
+
+
+### const char* luaT_newmetatable(lua_State *L, const char *tname, const char *parenttname, lua_CFunction constructor, lua_CFunction destructor, lua_CFunction factory) ###
+
+This function creates a new metatable, which is the Lua way to define a new
+object class. As for `luaL_newmetatable`, the metatable is registered in
+the Lua registry table, with the key `tname`. In addition, `tname` is
+also registered in the Lua registry, with the metatable as key (the
+typename of a given object can be thus easily retrieved).
+
+The class name `tname` must be of the form `modulename.classname`.
+If not NULL, `parenttname` must be a valid typename corresponding to the
+parent class of the new class.
+
+If `constructor` is not NULL, a function `new` pointing to this given function
+will be added to the metatable. The constructor can also be called through
+`modulename.classname()`, an alias set up by `luaT_newmetatable`.
+
+If not NULL, `destructor` will be called when garbage collecting the object.
+
+If not NULL, `factory` must be a Lua C function creating an empty object
+instance of the class. Such functions are used in Torch for serialization.
+
+Note that classes can be partly defined in C and partly defined in Lua:
+once the metatable is created in C, it can be filled up with additional
+methods in Lua.
+
+The return value is the value returned by [luaT_typenameid](#luat_typenameid).
+
+
+### int luaT_pushmetatable(lua_State *L, const char *tname) ###
+
+Pushes the metatable with type name `tname` onto the stack, if `tname` is a
+valid Torch class name (previously registered with `luaT_newmetatable`).
+
+On success, returns 1. If `tname` is invalid, nothing is pushed and it
+returns 0.
+
+
+### const char* luaT_typenameid(lua_State *L, const char *tname) ###
+
+If `tname` is a valid Torch class name, returns a unique string (its
+contents will be the same as `tname`) pointing to the string registered
+in the Lua registry. This string is therefore valid as long as Lua is
+running. The returned string must not be freed.
+
+If `tname` is an invalid class name, returns NULL.
+
+
+### const char* luaT_typename(lua_State *L, int ud) ###
+
+Returns the typename of the object at index `ud` on the stack. If it is
+not a valid Torch object, returns NULL.
+
+
+### void luaT_pushudata(lua_State *L, void *udata, const char *tname) ###
+
+Given a C structure `udata`, pushes a userdata object on the stack with the
+metatable corresponding to `tname`. Naturally, `tname` must be a valid
+Torch name registered with [luaT_newmetatable](#luat_newmetatable).
+
+
+### void *luaT_toudata(lua_State *L, int ud, const char *tname) ###
+
+Returns a pointer to the original C structure previously pushed on the
+stack with [luaT_pushudata](#luat_pushudata), if the object at index
+`ud` is of the Torch class `tname`. Returns NULL otherwise.
+
+
+### int luaT_isudata(lua_State *L, int ud, const char *tname) ###
+
+Returns 1 if the object at index `ud` on the stack is of the Torch class `tname`.
+Returns 0 otherwise.
+
+
+### Checking fields of a table ###
+
+These functions check that the table at the given index `ud` on the Lua
+stack has a field named `field`, and that this field is of the specified
+type. They raise a Lua error on failure.
+
+
+### void *luaT_getfieldcheckudata(lua_State *L, int ud, const char *field, const char *tname) ###
+
+Checks that the field named `field` of the table at index `ud` is a
+Torch class name `tname`. Returns the pointer of the C structure
+previously pushed on the stack with [luaT_pushudata](#luat_pushudata) on
+success. The function raises a Lua error on failure.
+
+
+### void *luaT_getfieldchecklightudata(lua_State *L, int ud, const char *field) ###
+
+Checks that the field named `field` of the table at index `ud` is a
+lightuserdata. Returns the lightuserdata pointer on success. The function
+raises a Lua error on failure.
+
+
+### int luaT_getfieldcheckint(lua_State *L, int ud, const char *field) ###
+
+Checks that the field named `field` of the table at index `ud` is an
+int. Returns the int value on success. The function raises a Lua
+error on failure.
+
+
+### const char* luaT_getfieldcheckstring(lua_State *L, int ud, const char *field) ###
+
+Checks that the field named `field` of the table at index `ud` is a
+string. Returns a pointer to the string on success. The function raises a
+Lua error on failure.
+
+
+### int luaT_getfieldcheckboolean(lua_State *L, int ud, const char *field) ###
+
+Checks that the field named `field` of the table at index `ud` is a
+boolean. On success, returns 1 if the boolean is `true`, 0 if it is
+`false`. The function raises a Lua error on failure.
+
+
+### void luaT_getfieldchecktable(lua_State *L, int ud, const char *field) ###
+
+Checks that the field named `field` of the table at index `ud` is a
+table. On success, pushes the table onto the stack. The function raises a
+Lua error on failure.
+
+
+### int luaT_typerror(lua_State *L, int ud, const char *tname) ###
+
+Raises a `luaL_argerror` (and returns its value), claiming that the
+object at index `ud` on the stack is not of type `tname`. Note that
+this function does not check the type, it only raises an error.
+
+
+### int luaT_checkboolean(lua_State *L, int ud) ###
+
+Checks that the value at index `ud` is a boolean. On success, returns 1
+if the boolean is `true`, 0 if it is `false`. The function raises a Lua
+error on failure.
+
+
+### int luaT_optboolean(lua_State *L, int ud, int def) ###
+
+Checks that the value at index `ud` is a boolean. On success, returns 1
+if the boolean is `true`, 0 if it is `false`. If there is no value at
+index `ud`, returns `def`. In any other cases, raises an error.
+
+
+### void luaT_registeratname(lua_State *L, const struct luaL_Reg *methods, const char *name) ###
+
+This function assumes a table is on the stack. It creates a table field
+`name` in that table (if the field does not exist yet), and fills this
+field with the functions in `methods`.
+
+
+### const char *luaT_classrootname(const char *tname) ###
+
+Assuming `tname` is of the form `modulename.classname`, returns
+`classname`. The returned value must not be freed: it is a pointer
+inside the `tname` string.
+
+
+### const char *luaT_classmodulename(const char *tname) ###
+
+Assuming `tname` is of the form `modulename.classname`, returns
+`modulename`. The returned value shall not be freed. It is valid until the
+next call to `luaT_classrootname`.
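
A hypothetical reimplementation of the two helpers shows why the lifetimes differ: the root name is a pointer into `tname` itself (no allocation), while the module name must be copied out, here into a static buffer, so it is only valid until the next call.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of luaT_classrootname: points past the first '.' in tname,
 * or returns tname itself if there is no module prefix. */
static const char *classrootname(const char *tname)
{
    const char *dot = strchr(tname, '.');
    return dot ? dot + 1 : tname;
}

/* Sketch of luaT_classmodulename: copies the prefix before the '.'
 * into a static buffer, hence the limited lifetime of the result. */
static const char *classmodulename(const char *tname)
{
    static char buf[256];
    const char *dot = strchr(tname, '.');
    size_t n = dot ? (size_t)(dot - tname) : 0;
    if (n >= sizeof(buf))
        n = sizeof(buf) - 1;    /* truncate overly long module names */
    memcpy(buf, tname, n);
    buf[n] = '\0';
    return buf;
}
```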
+
+
+### void luaT_stackdump(lua_State *L) ###
+
+This function prints out the state of the Lua stack. It is useful for
+debugging purposes.
+
diff --git a/lib/luaT/dok/index.dok b/lib/luaT/dok/index.dok
deleted file mode 100644
index cc0c0537..00000000
--- a/lib/luaT/dok/index.dok
+++ /dev/null
@@ -1,238 +0,0 @@
-====== Lua Torch C API ======
-{{anchor:luat.dok}}
-
-luaT provides an API to interface Lua and C in Torch packages. It defines a
-concept of //classes// to Lua for Torch, and provides a mechanism to easily
-handle these Lua classes from C.
-
-It additionally provides few functions that ''luaL'' should have defined, and
-defines several functions similar to ''luaL'' ones for better type error printing when using
-''luaT'' classes.
-
-===== Memory functions =====
-{{anchor:luat.memory.dok}}
-
-Classical memory allocation functions which generate a Lua error in case of
-problem.
-
-==== void* luaT_alloc(lua_State *L, long size) ====
-{{anchor:luaT_alloc}}
-
-Allocates ''size'' bytes, and return a pointer on the allocated
-memory. A Lua error will be generated if running out of memory.
-
-==== void* luaT_realloc(lua_State *L, void *ptr, long size) ====
-{{anchor:luaT_realloc}}
-
-Realloc ''ptr'' to ''size'' bytes. ''ptr'' must have been previously
-allocated with [[#luaT_alloc|luaT_alloc]] or
-[[#luaT_realloc|luaT_realloc]], or the C ''malloc'' or ''realloc''
-functions. A Lua error will be generated if running out of memory.
-
-==== void luaT_free(lua_State *L, void *ptr) ====
-{{anchor:luaT_free}}
-
-Free memory allocated at address ''ptr''. The memory must have been
-previously allocated with [[#luaT_alloc|luaT_alloc]] or
-[[#luaT_realloc|luaT_realloc]], or the C ''malloc'' or ''realloc''
-functions.
-
-===== Class creation and basic handling =====
-{{anchor:luat.classcreate}}
-
-A ''luaT'' class is basically either a Lua //table// or //userdata// with
-an appropriate //metatable//. This appropriate metatable is created with
-[[#luaT_newmetatable|luaT_newmetatable]]. Contrary to luaL userdata
-functions, luaT mechanism handles inheritance. If the class inherit from
-another class, then the metatable will itself have a metatable
-corresponding to the //parent metatable//: the metatables are cascaded
-according to the class inheritance. Multiple inheritance is not supported.
-
-==== Operator overloading ====
-{{anchor:luat.operatoroverloading}}
-
-The metatable of a ''luaT'' object contains ''Lua'' operators like
-''%%__index%%'', ''%%__newindex%%'', ''%%__tostring%%'', ''%%__add%%''
-(etc...). These operators will respectively look for ''%%__index__%%'',
-''%%__newindex__%%'', ''%%__tostring__%%'', ''%%__add__%%'' (etc...) in the
-metatable. If found, the corresponding function or value will be returned,
-else a Lua error will be raised.
-
-If one wants to provide ''%%__index__%%'' or ''%%__newindex__%%'' in the
-metaclass, these operators must follow a particular scheme:
-
- * ''%%__index__%%'' must either return a value //and// ''true'' or return ''false'' only. In the first case, it means ''%%__index__%%'' was able to handle the given argument (for e.g., the type was correct). The second case means it was not able to do anything, so ''%%__index%%'' in the root metatable can then try to see if the metaclass contains the required value.
-
- * ''%%__newindex__%%'' must either return ''true'' or ''false''. As for ''%%__index__%%'', ''true'' means it could handle the argument and ''false'' not. If not, the root metatable ''%%__newindex%%'' will then raise an error if the object was a userdata, or apply a rawset if the object was a Lua table.
-
-Other metaclass operators like ''%%__tostring__%%'', ''%%__add__%%'', etc... do not have any particular constraint.
-
-==== const char* luaT_newmetatable(lua_State *L, const char *tname, const char *parenttname, lua_CFunction constructor, lua_CFunction destructor, lua_CFunction factory) ====
-{{anchor:luat_newmetatable}}
-
-This function creates a new metatable, which is the Lua way to define a new
-object class. As for ''luaL_newmetatable'', the metatable is registered in
-the Lua registry table, with the key ''tname''. In addition, ''tname'' is
-also registered in the Lua registry, with the metatable as key (the
-typename of a given object can be thus easily retrieved).
-
-The class name ''tname'' must be of the form ''modulename.classname''. The module name
-If not NULL, ''parenttname'' must be a valid typename corresponding to the
-parent class of the new class.
-
-If not NULL, ''constructor'', a function ''new'' will be added to the metatable, pointing to this given function. The constructor might also
-be called through ''modulename.classname()'', which is an alias setup by ''luaT_metatable''.
-
-If not NULL, ''destructor'' will be called when garbage collecting the object.
-
-If not NULL, ''factory'' must be a Lua C function creating an empty object
-instance of the class. This functions are used in Torch for serialization.
-
-Note that classes can be partly defined in C and partly defined in Lua:
-once the metatable is created in C, it can be filled up with additional
-methods in Lua.
-
-The return value is the value returned by [[#luat_typenameid|luaT_typenameid]].
-
-==== int luaT_pushmetatable(lua_State *L, const name *tname) ====
-{{anchor:luat_pushmetatable}}
-
-Push the metatable with type name ''tname'' on the stack, it ''tname'' is a
-valid Torch class name (previously registered with luaT_newmetatable).
-
-On success, returns 1. If ''tname'' is invalid, nothing is pushed and it
-returns 0.
-
-==== const char* luaT_typenameid(lua_State *L, const char *tname) ====
-{{anchor:luat_typenameid}}
-
-If ''tname'' is a valid Torch class name, then returns a unique string (the
-contents will be the same than ''tname'') pointing on the string registered
-in the Lua registry. This string is thus valid as long as Lua is
-running. The returned string shall not be freed.
-
-If ''tname'' is an invalid class name, returns NULL.
-
-==== const char* luaT_typename(lua_State *L, int ud) ====
-{{anchor:luat_typename}}
-
-Returns the typename of the object at index ''ud'' on the stack. If it is
-not a valid Torch object, returns NULL.
-
-==== void luaT_pushudata(lua_State *L, void *udata, const char *tname) ====
-{{anchor:luat_pushudata}}
-
-Given a C structure ''udata'', push a userdata object on the stack with
-metatable corresponding to ''tname''. Obviously, ''tname'' must be a valid
-Torch name registered with [[#luat_newmetatable|luaT_newmetatable]].
-
-==== void *luaT_toudata(lua_State *L, int ud, const char *tname) ====
-{{anchor:luat_toudata}}
-
-Returns a pointer to the original C structure previously pushed on the
-stack with [[#luat_pushudata|luaT_pushudata]], if the object at index
-''ud'' is a valid Torch class name. Returns NULL otherwise.
-
-==== int luaT_isudata(lua_State *L, int ud, const char *tname) ====
-{{anchor:luat_isudata}}
-
-Returns 1 if the object at index ''ud'' on the stack is a valid Torch class name ''tname''.
-Returns 0 otherwise.
-
-==== Checking fields of a table ====
-{{anchor:luat_getfield}}
-
-This functions check that the table at the given index ''ud'' on the Lua
-stack has a field named ''field'', and that it is of the specified type.
-These function raises a Lua error on failure.
-
-===== void *luaT_getfieldcheckudata(lua_State *L, int ud, const char *field, const char *tname) =====
-{{anchor:luat_getfieldcheckudata}}
-
-Checks that the field named ''field'' of the table at index ''ud'' is a
-Torch class name ''tname''. Returns the pointer of the C structure
-previously pushed on the stack with [[#luat_pushudata|luaT_pushudata]] on
-success. The function raises a Lua error on failure.
-
-===== void *luaT_getfieldchecklightudata(lua_State *L, int ud, const char *field) =====
-{{anchor:luat_getfieldchecklightudata}}
-
-Checks that the field named ''field'' of the table at index ''ud'' is a
-lightuserdata. Returns the lightuserdata pointer on success. The function
-raises a Lua error on failure.
-
-===== int luaT_getfieldcheckint(lua_State *L, int ud, const char *field) =====
-{{anchor:luat_getfieldcheckint}}
-
-Checks that the field named ''field'' of the table at index ''ud'' is an
-int. Returns the int value pointer on success. The function raises a Lua
-error on failure.
-
-===== const char* luaT_getfieldcheckstring(lua_State *L, int ud, const char *field) =====
-{{anchor:luat_getfieldcheckstring}}
-
-Checks that the field named ''field'' of the table at index ''ud'' is a
-string. Returns a pointer to the string on success. The function raises a
-Lua error on failure.
-
-===== int luaT_getfieldcheckboolean(lua_State *L, int ud, const char *field) =====
-{{anchor:luat_getfieldcheckboolean}}
-
-Checks that the field named ''field'' of the table at index ''ud'' is a
-boolean. On success, returns 1 if the boolean is ''true'', 0 if it is
-''false''. The function raises a Lua error on failure.
-
-===== void luaT_getfieldchecktable(lua_State *L, int ud, const char *field) =====
-{{anchor:luat_getfieldchecktable}}
-
-Checks that the field named ''field'' of the table at index ''ud'' is a
-table. On success, push the table on the stack. The function raises a Lua
-error on failure.
-
-==== int luaT_typerror(lua_State *L, int ud, const char *tname) ====
-{{anchor:luat_typerror}}
-
-Raises a ''luaL_argerror'' (and returns its value), claiming that the
-object at index ''ud'' on the stack is not of type ''tname''. Note that
-this function does not check the type, it only raises an error.
-
-==== int luaT_checkboolean(lua_State *L, int ud) ====
-{{anchor:luat_checkboolean}}
-
-Checks that the value at index ''ud'' is a boolean. On success, returns 1
-if the boolean is ''true'', 0 if it is ''false''. The function raises a Lua
-error on failure.
-
-==== int luaT_optboolean(lua_State *L, int ud, int def) ====
-{{anchor:luat_optboolean}}
-
-Checks that the value at index ''ud'' is a boolean. On success, returns 1
-if the boolean is ''true'', 0 if it is ''false''. If there is no value at
-index ''ud'', returns ''def''. In any other cases, raises an error.
-
-==== void luaT_registeratname(lua_State *L, const struct luaL_Reg *methods, const char *name) ====
-{{anchor:luat_registeratname}}
-
-This function assume a table is on the stack. It creates a table field
-''name'' in the table (if this field does not exist yet), and fill up
-''methods'' in this table field.
-
-==== const char *luaT_classrootname(const char *tname) ====
-{{anchor:luat_classrootname}}
-
-Assuming ''tname'' is of the form ''modulename.classname'', returns
-''classname''. The returned value shall not be freed. It is a pointer
-inside ''tname'' string.
-
-==== const char *luaT_classmodulename(const char *tname) ====
-{{anchor:luat_classmodulename}}
-
-Assuming ''tname'' is of the form ''modulename.classname'', returns
-''modulename''. The returned value shall not be freed. It is valid until the
-next call to ''luaT_classrootname''.
-
-==== void luaT_stackdump(lua_State *L) ====
-{{anchor:luat_stackdump}}
-
-This function print outs the state of the Lua stack. It is useful for debug
-purposes.