Home > @josh-brown/vector

## vector package

## Classes

| Class | Description |
| --- | --- |
| ArrayMatrix | Implements Matrix with a 2-dimensional array of values. |
| ArrayVector | Implements Vector with an array of values. |
| ComplexMatrix | A dense Matrix of ComplexNumbers, implemented as an ArrayMatrix |
| ComplexNumber | A number of the form _a + bi_ where _i_ is the imaginary unit. |
| ComplexNumberOperations | Implements the basic ScalarOperations on ComplexNumbers |
| ComplexVector | A dense Vector of ComplexNumbers implemented as an ArrayVector |
| FloatMatrix | A dense matrix of JavaScript number primitives, implemented as a column-major Float64Array |
| FloatVector | A dense Vector of numbers implemented as a Float64Array |
| LinearRegressor | A Regressor model which uses an ordinary least squares model with regularization to predict a continuous target. The optimal set of parameters is computed with gradient descent. |
| LogisticRegressionClassifier | A Classifier model which uses logistic regression to predict a discrete target. The optimal set of parameters is computed with gradient descent. |
| MatrixBuilder | Provides methods for constructing Matrices of a given type |
| NumberMatrix | A dense matrix of JavaScript number primitives, implemented as an ArrayMatrix |
| NumberOperations | Implements the basic ScalarOperations on numbers |
| NumberVector | A dense Vector of numbers implemented as an ArrayVector |
| RowOperations | A wrapper for static methods representing the elementary row operations |
| ScalarOperations | A class which encapsulates the basic arithmetic operations for an arbitrary scalar type. |
| SparseMatrix | Implements Matrix with a map of indices to nonzero values. |
| SparseNumberMatrix | A Matrix implemented as a sparse set of JS number primitives keyed by their indices. |
| SparseNumberVector | A Vector implemented as a sparse set of JS number primitives keyed by their indices. |
| SparseVector | Implements Vector as a map of indices to nonzero values. |
| SupportVectorMachineClassifier | A Classifier model which uses a support vector machine to predict a discrete target. The optimal set of parameters is computed with gradient descent. |
| VectorBuilder | Provides methods for constructing Vectors of a given type |
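The dense number types above are rarely constructed by hand; the `vec` and `mat` convenience functions listed under Functions below build dense vectors and matrices from plain arrays. A minimal sketch, assuming these helpers are exported from the package root (the data values are purely illustrative):

```ts
import { vec, mat, prettyPrint } from '@josh-brown/vector';

// A dense vector and matrix of JavaScript numbers, built from plain arrays.
const v = vec([1, 2, 3]);
const A = mat([
  [1, 0],
  [0, 2]
]);

// prettyPrint is documented below for numbers, Vectors, and Matrices.
console.log(prettyPrint(v));
console.log(prettyPrint(A));
```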

## Enumerations

| Enumeration | Description |
| --- | --- |
| SolutionType | Types of solution to a linear system. |

## Functions

| Function | Description |
| --- | --- |
| backwardDifferenceMatrix(binCount) | Builds a matrix that transforms a vector to a vector of backward differences |
| calculateCholeskyDecomposition(A) | Uses the serial version of the Cholesky algorithm to calculate the Cholesky decomposition of a matrix A. |
| calculateEigenvalues(A, numIterations) | Uses the QR algorithm to compute the eigenvalues of a matrix A |
| calculateGeneralLeastSquares(dataPoints, functionTemplate, numberOfTerms) | Calculates a regression model for an arbitrary function. |
| calculateLinearLeastSquares(dataPoints) | Calculates a linear regression model for the provided dataPoints. |
| calculateLUDecomposition(A) | Uses the Doolittle algorithm to calculate the LU Decomposition of a matrix A. |
| calculateQRDecomposition(A) | Uses the Gram-Schmidt process to calculate the QR decomposition of the matrix A. |
| calculateSingularValueDecomposition(A) | Uses the Power Method to calculate the Singular Value Decomposition of a matrix A |
| center(x) | Returns the vector x, shifted so that its mean is at 0 |
| center(A) | Returns the matrix A with each column shifted so that its mean is at 0 |
| centralDifferenceMatrix(binCount) | Builds a matrix that transforms a vector to a vector of central differences |
| chainProduct(matrices) | Returns the product of the given array of matrices. |
| columnSumSupremumNorm(A) | Calculates the 1-Norm of a matrix A |
| correlation(first, second) | Calculates the correlation coefficient r of two vectors |
| correlation(A) | Calculates the correlation matrix of a matrix A |
| covariance(first, second) | Calculates the covariance of two vectors |
| covariance(A) | Calculates the covariance matrix of a matrix A |
| crossProduct(first, second) | Calculates the cross-product (vector-product) of two vectors. This is defined only for vectors with three dimensions. |
| derivative(f, xMin, xMax, binCount) | Uses finite differences to build a vector containing approximate values of the derivative of f. |
| determinant(matrix) | Uses expansion of minors to calculate the determinant of a matrix. Throws an error if the input is not square. |
| diag(elements) | Creates a new matrix with the specified entries on the diagonal. See MatrixBuilder.diagonal() |
| dotProduct(first, second) | Computes the dot/inner/scalar product of two vectors. See Vector.innerProduct(). |
| eig(A, numIterations) | Uses the QR algorithm to compute the eigenvalues and eigenvectors of a matrix A |
| euclideanNorm(v) | Calculates the Euclidean Norm (or 2-Norm) of a vector v |
| exp(A, order) | Uses the Padé approximant to compute the exponential of matrix A |
| eye(size) | Creates a new identity matrix of size size. See MatrixBuilder.identity() |
| forwardDifferenceMatrix(binCount) | Builds a matrix that transforms a vector to a vector of forward differences |
| frobeniusNorm(A) | Calculates the Frobenius Norm of a matrix A |
| GaussianKernel(sigmaSquared) | Creates a Gaussian Kernel for use in a SupportVectorMachineClassifier. The Gaussian kernel converts a data Matrix into a similarity Matrix where the value of entry (i,j) expresses the similarity of rows i and j in the original data set. |
| getEigenvectorForEigenvalue(A, lambda) | Given a matrix A and an eigenvalue lambda of that matrix, returns the eigenvector of A corresponding to lambda |
| gradientDescent(parameters) | Learns an optimal set of parameters theta using gradient descent |
| hadamardProduct(first, second) | Computes the Hadamard (element-wise) product of two vectors. |
| hadamardProduct(first, second) | Computes the Hadamard (element-wise) product of two matrices. |
| inverse(matrix) | Uses Gauss-Jordan elimination with pivoting to calculate the inverse of a matrix. |
| isHermitian(matrix) | Tests if a matrix is Hermitian. |
| isIdentity(matrix) | Tests if a matrix is an identity matrix |
| isLowerTriangular(matrix) | Tests if a matrix is lower-triangular. |
| isOrthogonal(matrix) | Tests if a matrix is orthogonal |
| isOrthonormal(matrix) | Tests if a matrix is orthonormal |
| isSquare(matrix) | Tests if a matrix is square. |
| isSymmetric(matrix) | Tests if a matrix is symmetric. |
| isUpperTriangular(matrix) | Tests if a matrix is upper-triangular. |
| kroneckerProduct(first, second) | Computes the Kronecker product (generalized outer product) of two matrices. |
| LinearKernel(data) | A linear kernel for use in a SupportVectorMachineClassifier. The linear kernel converts a data Matrix into a matrix which has been prepended with a column of all ones, representing the constant term in a linear model, or the bias term in an SVM. |
| linspace(xMin, xMax, binCount) | Builds a vector of binCount evenly spaced numbers between xMin (inclusive) and xMax (exclusive). |
| mat(data) | Creates a new Matrix of numbers. See MatrixBuilder.fromArray() |
| mean(x) | Calculates the mean of the values in the vector x |
| mean(A) | Calculates the mean vector of the matrix A |
| normalize(v) | Returns a vector with the same direction as the input v, but with a Euclidean norm of 1 |
| ones(entries) | Creates a new vector of all 1s. See VectorBuilder.ones() |
| ones(shape) | Creates a new matrix of all 1s. See MatrixBuilder.ones() |
| pca(A, useCorrelation) | Conducts a principal component analysis of a matrix A, and returns A in a new basis corresponding to the principal components. |
| pNorm(v, p) | Calculates the P-Norm of a vector v |
| pow(A, n) | Computes _A^n_ recursively. |
| prettyPrint(num) | Returns an easy-to-read string representing a number |
| prettyPrint(vector) | Returns an easy-to-read string representing the contents of a Vector |
| prettyPrint(matrix) | Returns an easy-to-read string representing the contents of a Matrix |
| RadialBasisFunction(distanceMetric) | Creates a Kernel for use in a SupportVectorMachineClassifier. The RBF kernel converts a data Matrix into a similarity Matrix where the value of entry (i,j) expresses the similarity of rows i and j in the original data set. |
| rank(matrix) | Calculates the rank of a matrix |
| reduceDimensions(A, options) | Reduces the number of dimensions of a data matrix A while losing as little information as possible. |
| reducedRowEchelonForm(matrix) | Uses Gauss-Jordan elimination with pivoting to convert a matrix to Reduced Row-Echelon Form (RREF) |
| rowEchelonForm(matrix) | Uses Gauss-Jordan elimination with pivoting to convert a matrix to Row-Echelon Form (REF) |
| rowSumSupremumNorm(A) | Calculates the Infinity-Norm of a matrix A |
| solve(A, b) | Solves the matrix equation _Ax=b_ for the vector _x_ using the default implementation. See solveByGaussianElimination() |
| solveByGaussianElimination(A, b) | Uses Gauss-Jordan elimination with pivoting and backward substitution to solve the linear equation _Ax=b_ |
| solveOverdeterminedSystem(A, b) | Gives an approximate solution to an overdetermined linear system. |
| standardDeviation(x) | Calculates the standard deviation of a vector |
| standardDeviation(A) | Calculates the standard deviation of each column of the matrix A |
| standardize(x) | Returns the vector x shifted and scaled to have a mean of 0 and standard deviation of 1 |
| standardize(A) | Returns the matrix A with each column shifted and scaled to have a mean of 0 and standard deviation of 1 |
| sumNorm(v) | Calculates the Sum Norm (or 1-Norm) of a vector v |
| supremumNorm(v) | Calculates the Supremum Norm (or Infinity-Norm) of a vector v |
| tripleProduct(first, second, third) | Calculates the scalar triple-product of three vectors. This is defined only for vectors with three dimensions. |
| variance(x) | Calculates the variance of a vector |
| variance(A) | Calculates the variance of each column of the matrix A |
| vec(data) | Creates a new Vector of numbers. See VectorBuilder.fromArray() |
| zeros(entries) | Creates a new vector of all 0s. See VectorBuilder.zeros() |
| zeros(shape) | Creates a new matrix of all 0s. See MatrixBuilder.zeros() |
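The construction helpers, norms, and solvers above compose naturally. The sketch below is not taken from the package's own documentation; it simply chains several of the functions listed in this table, assuming they are all exported from the package root and that `inverse` and `eye` return ordinary Matrix instances. The data values are illustrative only.

```ts
import {
  mat, vec, eye, determinant, inverse, rank,
  euclideanNorm, dotProduct, solve, prettyPrint
} from '@josh-brown/vector';

const A = mat([
  [2, 1],
  [1, 3]
]);
const b = vec([3, 5]);

console.log(determinant(A));    // scalar: the determinant of A
console.log(rank(A));           // scalar: the rank of A
console.log(dotProduct(b, b));  // scalar: b . b
console.log(euclideanNorm(b));  // scalar: the 2-norm of b

// inverse(A) and eye(2) are assumed to return Matrix instances,
// so they can be pretty-printed like any other matrix.
console.log(prettyPrint(inverse(A)));
console.log(prettyPrint(eye(2)));

// solve(A, b) describes the solution of Ax = b; the shape of that result
// (a LinearSolution, see Type Aliases below) is not spelled out on this
// page, so it is only logged as-is here.
console.log(solve(A, b));
```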

## Interfaces

| Interface | Description |
| --- | --- |
| CholeskyDecomposition | The result of a Cholesky Decomposition |
| Classifier | A machine learning model with a discrete target |
| Cost | The output of a cost function |
| EigenPair | An eigenvector and its corresponding eigenvalue |
| LeastSquaresApproximation | The result of a least squares approximation. |
| LinearTransformation | An abstract linear transformation between vectors of type V and vectors of type U. |
| LUDecomposition | The result of an LU Decomposition |
| Matrix | A generalized Matrix - one of the core data types |
| OverdeterminedSolution | A type representing the lack of a solution to a linear system. |
| PrincipalComponentAnalysis | The result of a principal component analysis. |
| QRDecomposition | The result of a QR decomposition. |
| Regressor | A machine learning model with a continuous numeric target |
| RowOperationResult | The result of a row operation (result), and the matrix that we multiply by the original matrix to yield that result (operator) |
| SingularValueDecomposition | The result of a Singular Value Decomposition |
| UnderdeterminedSolution | A particular solution to a linear system with infinitely many solutions. |
| UniqueSolution | The unique solution to a linear system. |
| Vector | A generalized Vector - one of the core data types |
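The decomposition interfaces above are the return types of the corresponding calculate* functions listed earlier. The sketch below is a hedged illustration only: the property names Q, R, L, and U follow the usual convention for these factorizations but are not confirmed by this page, so treat them as assumptions.

```ts
import {
  mat, calculateQRDecomposition, calculateLUDecomposition, prettyPrint
} from '@josh-brown/vector';

const A = mat([
  [4, 3],
  [6, 3]
]);

// QRDecomposition: assumed to expose the orthogonal factor Q and the
// upper-triangular factor R (property names are an assumption).
const qr = calculateQRDecomposition(A);
console.log(prettyPrint(qr.Q));
console.log(prettyPrint(qr.R));

// LUDecomposition: assumed to expose lower- and upper-triangular factors
// L and U (a permutation factor may also be present; not shown here).
const lu = calculateLUDecomposition(A);
console.log(prettyPrint(lu.L));
console.log(prettyPrint(lu.U));
```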

## Type Aliases

| Type Alias | Description |
| --- | --- |
| ApproximationFunction | A function that takes a vector of inputs and produces an output. This must always be a pure function that is linear in its coefficients. |
| ApproximationFunctionTemplate | A higher-order function which is used to generate an ApproximationFunction. This must be linear in its coefficients, or the result of the linear regression will not be correct. |
| CostFunction | A function that evaluates the cost of a set of parameters theta |
| DimensionReductionOptions | Specifies how dimension reduction ought to be done. |
| GradientDescentParameters | The parameters for gradientDescent() |
| Kernel | A function which takes a Matrix of data (and optionally another Matrix of data on which the kernel was trained) and returns a new Matrix which will be used to train a machine learning model. Generally intended for use with a SupportVectorMachineClassifier. |
| LearningAlgorithm | A function which, given an initial value of theta and a CostFunction, will compute the optimal value of theta |
| LinearRegressorHyperparams | The set of hyperparameters for a LinearRegressor |
| LinearSolution | A general type representing any type of solution to a linear system. |
| LogisticRegressionHyperparams | The set of hyperparameters for a LogisticRegressionClassifier |
| MatrixData | The data stored in a Matrix represented as a 2-D array |
| MatrixEntryFunction | A function that generates a matrix entry based on an existing entry, its row index i, and its column index j |
| MatrixShape | A tuple representing the shape of a Matrix. The first entry is the number of rows, and the second entry is the number of columns. |
| Norm | A function that calculates a norm for a vector. |
| SimilarityMetric | A function which expresses the similarity of two Vectors as a number between 0 (very dissimilar) and 1 (identical). |
| Solver | A function that solves a linear system _Ax=b_ |
| SparseMatrixData | The data stored in a Matrix represented as a map |
| SparseVectorData | The data stored in a Vector represented as a map |
| SupportVectorMachineHyperparams | The set of hyperparameters for a SupportVectorMachineClassifier |
| VectorData | The data stored in a Vector represented as an array |
| VectorIndexFunction | A function that generates a vector entry based on its index |
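Several of these aliases are just function shapes, so user code can supply its own implementations. The sketch below defines a vector norm intended to satisfy the Norm alias; the explicit Vector&lt;number&gt; annotation assumes Vector is generic over its scalar type, and the only instance method relied on is Vector.innerProduct(), which is referenced in the dotProduct row above.

```ts
import { vec, pNorm } from '@josh-brown/vector';
import type { Vector } from '@josh-brown/vector';

// A user-supplied norm intended to match the Norm alias above:
// the square root of the vector's inner product with itself.
const innerProductNorm = (v: Vector<number>): number =>
  Math.sqrt(v.innerProduct(v));

const x = vec([3, 4]);
console.log(innerProductNorm(x)); // 5
console.log(pNorm(x, 2));         // the built-in p-norm with p = 2 gives the same value
```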