-/**
- * Computes an RQ decomposition of 3x3 matrices.
- *
- * The function computes an RQ decomposition using the given rotations. This
- * function is used in "decomposeProjectionMatrix" to decompose the left 3x3
- * submatrix of a projection matrix into a camera and a rotation matrix. It
- * optionally returns three rotation matrices, one for each axis, and the
- * three Euler angles in degrees (as the return value) that could be used in
- * OpenGL. Note that there is always more than one sequence of rotations about
- * the three principal axes that results in the same orientation of an object,
- * e.g. see [Slabaugh]. The returned three rotation matrices and the
- * corresponding three Euler angles are only one of the possible solutions.
- */

-/**
- * Converts a rotation matrix to a rotation vector or vice versa:
- *
- *     theta <- norm(r)
- *     r <- r / theta
- *     R = cos(theta) I + (1 - cos(theta)) r r^T + sin(theta) [r]_x,
- *
- * where [r]_x is the skew-symmetric matrix
- *
- *     |  0   -r_z   r_y |
- *     |  r_z   0   -r_x |
- *     | -r_y  r_x    0  |
- *
- * The inverse transformation can also be done easily, since
- *
- *     sin(theta) [r]_x = (R - R^T) / 2
- *
- * A rotation vector is a convenient and most compact representation of a
- * rotation matrix (since any rotation matrix has just 3 degrees of freedom).
- * The representation is used in the global 3D geometry optimization procedures
- * like "calibrateCamera", "stereoCalibrate", or "solvePnP".
- */
-/**
- * Finds the camera intrinsic and extrinsic parameters from several views of a
- * calibration pattern.
- *
- * The function estimates the intrinsic camera parameters and extrinsic
- * parameters for each of the views. The algorithm is based on [Zhang2000] and
- * [BouguetMCT]. The coordinates of 3D object points and their corresponding 2D
- * projections in each view must be specified. That may be achieved by using an
- * object with a known geometry and easily detectable feature points.
- * Such an object is called a calibration rig or calibration pattern, and OpenCV
- * has built-in support for a chessboard as a calibration rig (see
- * "findChessboardCorners"). Currently, initialization of intrinsic parameters
- * (when CV_CALIB_USE_INTRINSIC_GUESS is not set) is only
- * implemented for planar calibration patterns (where Z-coordinates of the
- * object points must be all zeros). 3D calibration rigs can also be used as
- * long as an initial cameraMatrix is provided.
- *
- * The algorithm performs the following steps:
- *
- *   - Compute the initial intrinsic parameters (the option is only available
- *     for planar calibration patterns) or read them from the input parameters.
- *     The distortion coefficients are all set to zeros initially unless some of
- *     CV_CALIB_FIX_K? are specified.
- *   - Estimate the initial camera pose as if the intrinsic parameters have
- *     been already known. This is done using "solvePnP".
- *   - Run the global Levenberg-Marquardt optimization algorithm to minimize
- *     the re-projection error, that is, the total sum of squared distances
- *     between the observed feature points imagePoints and the projected (using
- *     the current estimates for camera parameters and the poses) object points
- *     objectPoints. See "projectPoints" for details.
- *
- * The function returns the final re-projection error.
- *
- * Note: If you use a non-square (=non-NxN) grid and "findChessboardCorners" for
- * calibration, and calibrateCamera returns bad values (zero
- * distortion coefficients, an image center very far from (w/2-0.5,h/2-0.5),
- * and/or large differences between f_x and f_y (ratios of
- * 10:1 or more)), then you have probably used patternSize=cvSize(rows,cols)
- * instead of using patternSize=cvSize(cols,rows) in
- * "findChessboardCorners".
- *
- * @param objectPoints In the new interface it is a vector of vectors of
- * calibration pattern points in the calibration pattern coordinate space.
- * In the old interface all the vectors of object points from different views
- * are concatenated together.
- * @param imagePoints In the new interface it is a vector of vectors of the
- * projections of calibration pattern points (e.g. std::vector<std::vector<cv::Point2f>>).
- * imagePoints.size() and objectPoints.size() must be equal, and
- * imagePoints[i].size() must be equal to objectPoints[i].size()
- * for each i.
- * @param imageSize Size of the image used only to initialize the intrinsic
- * camera matrix.
- * @param cameraMatrix Output 3x3 floating-point camera matrix
- *
- *     A = |f_x  0   c_x|
- *         | 0  f_y  c_y|
- *         | 0   0    1 |
- *
- * If CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO
- * are specified, some or all of fx, fy, cx, cy must be initialized
- * before calling the function.
- * @param distCoeffs Output vector of distortion coefficients (k_1, k_2,
- * p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements.
- * @param rvecs Output vector of rotation vectors (see "Rodrigues") estimated
- * for each pattern view (e.g. std::vector<cv::Mat>).
- * @param tvecs Output vector of translation vectors estimated for each pattern
- * view.
- * @param flags Different flags that may be zero or a combination of the
- * following values:
- *   - CV_CALIB_USE_INTRINSIC_GUESS: cameraMatrix contains valid
- *     initial values of fx, fy, cx, cy that are optimized further.
- *     Otherwise, (cx, cy) is initially set to the image center
- *     (imageSize is used), and focal distances are computed in a
- *     least-squares fashion. Note that if intrinsic parameters are known, there
- *     is no need to use this function just to estimate extrinsic parameters.
- *     Use "solvePnP" instead.
- *   - CV_CALIB_FIX_PRINCIPAL_POINT: The principal point is not changed during
- *     the global optimization. It stays at the center or at a different
- *     location specified when CV_CALIB_USE_INTRINSIC_GUESS is set too.
- *   - CV_CALIB_FIX_ASPECT_RATIO: The function considers only fy
- *     as a free parameter. The ratio fx/fy stays the same as in the
- *     input cameraMatrix. When CV_CALIB_USE_INTRINSIC_GUESS
- *     is not set, the actual input values of fx and fy
- *     are ignored, only their ratio is computed and used further.
- *   - CV_CALIB_FIX_K1,...,CV_CALIB_FIX_K6: The corresponding radial distortion
- *     coefficient is not changed during the optimization. If
- *     CV_CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
- *     supplied distCoeffs matrix is used. Otherwise, it is set to 0.
- *
- * @see org.opencv.calib3d.Calib3d.calibrateCamera
- */
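 // Usage sketch (editor's illustration, not part of the generated bindings):
 // calibrating from several chessboard views. objectPoints/imagePoints are
 // assumed to be filled per view (e.g. via findChessboardCorners); assumes
 // java.util.List and java.util.ArrayList are imported.
 private static double calibrateCameraExample(List<Mat> objectPoints, List<Mat> imagePoints, Size imageSize) {
     Mat cameraMatrix = new Mat();
     Mat distCoeffs = new Mat();
     List<Mat> rvecs = new ArrayList<Mat>();
     List<Mat> tvecs = new ArrayList<Mat>();
     // The returned double is the final re-projection error described above.
     return calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs);
 }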
-/**
- * Computes useful camera characteristics from the camera matrix.
- *
- * The function computes various useful camera characteristics from the
- * previously estimated camera matrix.
- *
- * Note: Do keep in mind that the unity measure 'mm' stands for whatever unit of
- * measure one chooses for the chessboard pitch (it can thus be any value).
- *
- * @param cameraMatrix Input camera matrix that can be estimated by
- * "calibrateCamera" or "stereoCalibrate".
- * @param imageSize Input image size in pixels.
- * @param apertureWidth Physical width in mm of the sensor.
- * @param apertureHeight Physical height in mm of the sensor.
- * @param fovx Output field of view in degrees along the horizontal sensor axis.
- * @param fovy Output field of view in degrees along the vertical sensor axis.
- * @param focalLength Focal length of the lens in mm.
- * @param principalPoint Principal point in mm.
- * @param aspectRatio f_y/f_x
- *
- * @see org.opencv.calib3d.Calib3d.calibrationMatrixValues
- */
- public static void calibrationMatrixValues(Mat cameraMatrix, Size imageSize, double apertureWidth, double apertureHeight, double[] fovx, double[] fovy, double[] focalLength, Point principalPoint, double[] aspectRatio)
+ //javadoc: findEssentialMat(points1, points2, focal, pp, method, prob, threshold, mask)
+ public static Mat findEssentialMat(Mat points1, Mat points2, double focal, Point pp, int method, double prob, double threshold, Mat mask)
 {
- double[] fovx_out = new double[1];
- double[] fovy_out = new double[1];
- double[] focalLength_out = new double[1];
- double[] principalPoint_out = new double[2];
- double[] aspectRatio_out = new double[1];
- calibrationMatrixValues_0(cameraMatrix.nativeObj, imageSize.width, imageSize.height, apertureWidth, apertureHeight, fovx_out, fovy_out, focalLength_out, principalPoint_out, aspectRatio_out);
- if(fovx!=null) fovx[0] = (double)fovx_out[0];
- if(fovy!=null) fovy[0] = (double)fovy_out[0];
- if(focalLength!=null) focalLength[0] = (double)focalLength_out[0];
- if(principalPoint!=null){ principalPoint.x = principalPoint_out[0]; principalPoint.y = principalPoint_out[1]; }
- if(aspectRatio!=null) aspectRatio[0] = (double)aspectRatio_out[0];
- return;
+
+ Mat retVal = new Mat(findEssentialMat_3(points1.nativeObj, points2.nativeObj, focal, pp.x, pp.y, method, prob, threshold, mask.nativeObj));
+
+ return retVal;
 }

 //
- // C++: void composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat& rvec3, Mat& tvec3, Mat& dr3dr1 = Mat(), Mat& dr3dt1 = Mat(), Mat& dr3dr2 = Mat(), Mat& dr3dt2 = Mat(), Mat& dt3dr1 = Mat(), Mat& dt3dt1 = Mat(), Mat& dt3dr2 = Mat(), Mat& dt3dt2 = Mat())
 //

-/**
- * Combines two rotation-and-shift transformations.
- *
- * The functions compute:
- *
- *     rvec3 = rodrigues^(-1)(rodrigues(rvec2) * rodrigues(rvec1))
- *     tvec3 = rodrigues(rvec2) * tvec1 + tvec2,
- *
- * where rodrigues denotes a rotation vector to a rotation matrix
- * transformation, and rodrigues^(-1) denotes the inverse transformation.
- * See "Rodrigues" for details.
- *
- * Also, the functions can compute the derivatives of the output vectors with
- * regards to the input vectors (see "matMulDeriv").
- * The functions are used inside "stereoCalibrate" but can also be used in your
- * own code where Levenberg-Marquardt or another gradient-based solver is used
- * to optimize a function that contains a matrix multiplication.
- *
- * @param rvec1 First rotation vector.
- * @param tvec1 First translation vector.
- * @param rvec2 Second rotation vector.
- * @param tvec2 Second translation vector.
- * @param rvec3 Output rotation vector of the superposition.
- * @param tvec3 Output translation vector of the superposition.
- * @param dr3dr1 Optional output derivative of rvec3 with regard to rvec1.
- * @param dr3dt1 Optional output derivative of rvec3 with regard to tvec1.
- * @param dr3dr2 Optional output derivative of rvec3 with regard to rvec2.
- * @param dr3dt2 Optional output derivative of rvec3 with regard to tvec2.
- * @param dt3dr1 Optional output derivative of tvec3 with regard to rvec1.
- * @param dt3dt1 Optional output derivative of tvec3 with regard to tvec1.
- * @param dt3dr2 Optional output derivative of tvec3 with regard to rvec2.
- * @param dt3dt2 Optional output derivative of tvec3 with regard to tvec2.
- *
- * @see org.opencv.calib3d.Calib3d.composeRT
- */
- public static void composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1, Mat dr3dt1, Mat dr3dr2, Mat dr3dt2, Mat dt3dr1, Mat dt3dt1, Mat dt3dr2, Mat dt3dt2)
+ //javadoc: findEssentialMat(points1, points2, focal, pp, method, prob, threshold)
+ public static Mat findEssentialMat(Mat points1, Mat points2, double focal, Point pp, int method, double prob, double threshold)
{
-
- composeRT_0(rvec1.nativeObj, tvec1.nativeObj, rvec2.nativeObj, tvec2.nativeObj, rvec3.nativeObj, tvec3.nativeObj, dr3dr1.nativeObj, dr3dt1.nativeObj, dr3dr2.nativeObj, dr3dt2.nativeObj, dt3dr1.nativeObj, dt3dt1.nativeObj, dt3dr2.nativeObj, dt3dt2.nativeObj);
-
- return;
+
+ Mat retVal = new Mat(findEssentialMat_4(points1.nativeObj, points2.nativeObj, focal, pp.x, pp.y, method, prob, threshold));
+
+ return retVal;
}
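 // Usage sketch (editor's illustration, not part of the generated bindings):
 // estimating the essential matrix from matched points with RANSAC, assuming a
 // known focal length and principal point. Calib3d.RANSAC is the real flag; the
 // numeric values are arbitrary example data.
 private static Mat essentialMatExample(Mat points1, Mat points2) {
     double focal = 800.0;               // assumed focal length in pixels
     Point pp = new Point(320, 240);     // assumed principal point
     Mat inlierMask = new Mat();
     return findEssentialMat(points1, points2, focal, pp, Calib3d.RANSAC, 0.999, 1.0, inlierMask);
 }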
-/**
- * Combines two rotation-and-shift transformations (overload without the
- * optional derivative outputs).
- */
- public static void composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3)
+ //javadoc: findEssentialMat(points1, points2)
+ public static Mat findEssentialMat(Mat points1, Mat points2)
 {
-
- composeRT_1(rvec1.nativeObj, tvec1.nativeObj, rvec2.nativeObj, tvec2.nativeObj, rvec3.nativeObj, tvec3.nativeObj);
-
- return;
+
+ Mat retVal = new Mat(findEssentialMat_5(points1.nativeObj, points2.nativeObj));
+
+ return retVal;
 }

 //
- // C++: void computeCorrespondEpilines(Mat points, int whichImage, Mat F, Mat& lines)
+ // C++: Mat findFundamentalMat(vector_Point2f points1, vector_Point2f points2, int method = FM_RANSAC, double param1 = 3., double param2 = 0.99, Mat& mask = Mat())
 //

-/**
- * For points in an image of a stereo pair, computes the corresponding epilines
- * in the other image.
- *
- * For every point in one of the two images of a stereo pair, the function finds
- * the equation of the corresponding epipolar line in the other image.
- *
- * From the fundamental matrix definition (see "findFundamentalMat"), line
- * l^2_i in the second image for the point p^1_i in the first
- * image (when whichImage=1) is computed as:
- *
- *     l^2_i = F p^1_i
- *
- * And vice versa, when whichImage=2, l^1_i is computed
- * from p^2_i as:
- *
- *     l^1_i = F^T p^2_i
- *
- * Line coefficients are defined up to a scale. They are normalized so that
- * a_i^2 + b_i^2 = 1.
- *
- * @param points Input points. N x 1 or 1 x N matrix of type CV_32FC2 or
- * vector<Point2f>.
- * @param whichImage Index of the image (1 or 2) that contains the points.
- * @param F Fundamental matrix that can be estimated using "findFundamentalMat"
- * or "stereoRectify".
- * @param lines Output vector of the epipolar lines corresponding to the points
- * in the other image. Each line ax + by + c = 0 is encoded by 3 numbers
- * (a, b, c).
- *
- * @see org.opencv.calib3d.Calib3d.computeCorrespondEpilines
- */
- public static void computeCorrespondEpilines(Mat points, int whichImage, Mat F, Mat lines)
+ //javadoc: findFundamentalMat(points1, points2, method, param1, param2, mask)
+ public static Mat findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, int method, double param1, double param2, Mat mask)
{
-
- computeCorrespondEpilines_0(points.nativeObj, whichImage, F.nativeObj, lines.nativeObj);
-
- return;
+ Mat points1_mat = points1;
+ Mat points2_mat = points2;
+ Mat retVal = new Mat(findFundamentalMat_0(points1_mat.nativeObj, points2_mat.nativeObj, method, param1, param2, mask.nativeObj));
+
+ return retVal;
}
-
- //
- // C++: void convertPointsFromHomogeneous(Mat src, Mat& dst)
- //
-
-/**
- * Converts points from homogeneous to Euclidean space.
- *
- * The function converts points from homogeneous to Euclidean space using
- * perspective projection. That is, each point (x1, x2,..., x(n-1), xn) is
- * converted to (x1/xn, x2/xn,..., x(n-1)/xn). When xn=0, the output point
- * coordinates will be (0,0,0,...).
- *
- * @param src Input vector of N-dimensional points.
- * @param dst Output vector of N-1-dimensional points.
- *
- * @see org.opencv.calib3d.Calib3d.convertPointsFromHomogeneous
- */
- public static void convertPointsFromHomogeneous(Mat src, Mat dst)
+ //javadoc: findFundamentalMat(points1, points2, method, param1, param2)
+ public static Mat findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, int method, double param1, double param2)
{
+ Mat points1_mat = points1;
+ Mat points2_mat = points2;
+ Mat retVal = new Mat(findFundamentalMat_1(points1_mat.nativeObj, points2_mat.nativeObj, method, param1, param2));
+
+ return retVal;
+ }
- convertPointsFromHomogeneous_0(src.nativeObj, dst.nativeObj);
-
- return;
+ //javadoc: findFundamentalMat(points1, points2)
+ public static Mat findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2)
+ {
+ Mat points1_mat = points1;
+ Mat points2_mat = points2;
+ Mat retVal = new Mat(findFundamentalMat_2(points1_mat.nativeObj, points2_mat.nativeObj));
+
+ return retVal;
}
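 // Usage sketch (editor's illustration, not part of the generated bindings):
 // a Java version of the RANSAC example from the removed C++ javadoc. The point
 // data is assumed to be filled in by a feature matcher.
 private static Mat fundamentalMatExample(MatOfPoint2f points1, MatOfPoint2f points2) {
     // Calib3d.FM_RANSAC is the real flag; 3.0 is the maximum epipolar distance
     // in pixels and 0.99 the desired confidence, as in the original C++ sample.
     return findFundamentalMat(points1, points2, Calib3d.FM_RANSAC, 3.0, 0.99);
 }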
//
- // C++: void convertPointsToHomogeneous(Mat src, Mat& dst)
+ // C++: Mat findHomography(vector_Point2f srcPoints, vector_Point2f dstPoints, int method = 0, double ransacReprojThreshold = 3, Mat& mask = Mat(), int maxIters = 2000, double confidence = 0.995)
//
-/**
- * Converts points from Euclidean to homogeneous space.
- *
- * The function converts points from Euclidean to homogeneous space by appending
- * 1's to the tuple of point coordinates. That is, each point (x1, x2,..., xn)
- * is converted to (x1, x2,..., xn, 1).
- *
- * @param src Input vector of N-dimensional points.
- * @param dst Output vector of N+1-dimensional points.
- *
- * @see org.opencv.calib3d.Calib3d.convertPointsToHomogeneous
- */
- public static void convertPointsToHomogeneous(Mat src, Mat dst)
+ //javadoc: findHomography(srcPoints, dstPoints, method, ransacReprojThreshold, mask, maxIters, confidence)
+ public static Mat findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, int method, double ransacReprojThreshold, Mat mask, int maxIters, double confidence)
{
+ Mat srcPoints_mat = srcPoints;
+ Mat dstPoints_mat = dstPoints;
+ Mat retVal = new Mat(findHomography_0(srcPoints_mat.nativeObj, dstPoints_mat.nativeObj, method, ransacReprojThreshold, mask.nativeObj, maxIters, confidence));
+
+ return retVal;
+ }
- convertPointsToHomogeneous_0(src.nativeObj, dst.nativeObj);
+ //javadoc: findHomography(srcPoints, dstPoints, method, ransacReprojThreshold)
+ public static Mat findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, int method, double ransacReprojThreshold)
+ {
+ Mat srcPoints_mat = srcPoints;
+ Mat dstPoints_mat = dstPoints;
+ Mat retVal = new Mat(findHomography_1(srcPoints_mat.nativeObj, dstPoints_mat.nativeObj, method, ransacReprojThreshold));
+
+ return retVal;
+ }
- return;
+ //javadoc: findHomography(srcPoints, dstPoints)
+ public static Mat findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints)
+ {
+ Mat srcPoints_mat = srcPoints;
+ Mat dstPoints_mat = dstPoints;
+ Mat retVal = new Mat(findHomography_2(srcPoints_mat.nativeObj, dstPoints_mat.nativeObj));
+
+ return retVal;
}
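 // Usage sketch (editor's illustration, not part of the generated bindings):
 // estimating a RANSAC homography between matched point sets and reading back
 // the inlier mask. The 3.0 reprojection threshold is example data; maxIters =
 // 2000 and confidence = 0.995 mirror the C++ defaults quoted above.
 private static Mat homographyExample(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints) {
     Mat inlierMask = new Mat();
     return findHomography(srcPoints, dstPoints, Calib3d.RANSAC, 3.0, inlierMask, 2000, 0.995);
 }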
//
- // C++: void correctMatches(Mat F, Mat points1, Mat points2, Mat& newPoints1, Mat& newPoints2)
+ // C++: Mat getOptimalNewCameraMatrix(Mat cameraMatrix, Mat distCoeffs, Size imageSize, double alpha, Size newImgSize = Size(), Rect* validPixROI = 0, bool centerPrincipalPoint = false)
//
-/**
- * Refines coordinates of corresponding points.
- *
- * The function implements the Optimal Triangulation Method (see Multiple View
- * Geometry for details). For each given point correspondence points1[i] <->
- * points2[i], and a fundamental matrix F, it computes the corrected
- * correspondences newPoints1[i] <-> newPoints2[i] that minimize the geometric
- * error d(points1[i], newPoints1[i])^2 + d(points2[i], newPoints2[i])^2
- * (where d(a,b) is the geometric distance between points a and b) subject to
- * the epipolar constraint newPoints2^T * F * newPoints1 = 0.
- *
- * @param F 3x3 fundamental matrix.
- * @param points1 1xN array containing the first set of points.
- * @param points2 1xN array containing the second set of points.
- * @param newPoints1 The optimized points1.
- * @param newPoints2 The optimized points2.
- *
- * @see org.opencv.calib3d.Calib3d.correctMatches
- */
- public static void correctMatches(Mat F, Mat points1, Mat points2, Mat newPoints1, Mat newPoints2)
+ //javadoc: getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, alpha, newImgSize, validPixROI, centerPrincipalPoint)
+ public static Mat getOptimalNewCameraMatrix(Mat cameraMatrix, Mat distCoeffs, Size imageSize, double alpha, Size newImgSize, Rect validPixROI, boolean centerPrincipalPoint)
 {
+ double[] validPixROI_out = new double[4];
+ Mat retVal = new Mat(getOptimalNewCameraMatrix_0(cameraMatrix.nativeObj, distCoeffs.nativeObj, imageSize.width, imageSize.height, alpha, newImgSize.width, newImgSize.height, validPixROI_out, centerPrincipalPoint));
+ if(validPixROI!=null){ validPixROI.x = (int)validPixROI_out[0]; validPixROI.y = (int)validPixROI_out[1]; validPixROI.width = (int)validPixROI_out[2]; validPixROI.height = (int)validPixROI_out[3]; }
+ return retVal;
+ }
- correctMatches_0(F.nativeObj, points1.nativeObj, points2.nativeObj, newPoints1.nativeObj, newPoints2.nativeObj);
-
- return;

+ //javadoc: getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, alpha)
+ public static Mat getOptimalNewCameraMatrix(Mat cameraMatrix, Mat distCoeffs, Size imageSize, double alpha)
+ {
+
+ Mat retVal = new Mat(getOptimalNewCameraMatrix_1(cameraMatrix.nativeObj, distCoeffs.nativeObj, imageSize.width, imageSize.height, alpha));
+
+ return retVal;
 }

 //
- // C++: void decomposeProjectionMatrix(Mat projMatrix, Mat& cameraMatrix, Mat& rotMatrix, Mat& transVect, Mat& rotMatrixX = Mat(), Mat& rotMatrixY = Mat(), Mat& rotMatrixZ = Mat(), Mat& eulerAngles = Mat())
+ // C++: Mat initCameraMatrix2D(vector_vector_Point3f objectPoints, vector_vector_Point2f imagePoints, Size imageSize, double aspectRatio = 1.0)
 //

-/**
- * Decomposes a projection matrix into a rotation matrix and a camera matrix.
- *
- * The function computes a decomposition of a projection matrix into a
- * calibration and a rotation matrix and the position of a camera.
- *
- * It optionally returns three rotation matrices, one for each axis, and three
- * Euler angles that could be used in OpenGL. Note that there is always more
- * than one sequence of rotations about the three principal axes that results
- * in the same orientation of an object, e.g. see [Slabaugh]. The returned
- * three rotation matrices and corresponding three Euler angles are only one of
- * the possible solutions.
- *
- * The function is based on "RQDecomp3x3".
- *
- * @param projMatrix 3x4 input projection matrix P.
- * @param cameraMatrix Output 3x3 camera matrix K.
- * @param rotMatrix Output 3x3 external rotation matrix R.
- * @param transVect Output 4x1 translation vector T.
- * @param rotMatrixX a rotMatrixX
- * @param rotMatrixY a rotMatrixY
- * @param rotMatrixZ a rotMatrixZ
- * @param eulerAngles Optional three-element vector containing three Euler
- * angles of rotation in degrees.
- *
- * @see org.opencv.calib3d.Calib3d.decomposeProjectionMatrix
- */
- public static void decomposeProjectionMatrix(Mat projMatrix, Mat cameraMatrix, Mat rotMatrix, Mat transVect, Mat rotMatrixX, Mat rotMatrixY, Mat rotMatrixZ, Mat eulerAngles)
+ //javadoc: initCameraMatrix2D(objectPoints, imagePoints, imageSize, aspectRatio)
+ public static Mat initCameraMatrix2D(List<MatOfPoint3f> objectPoints, List<MatOfPoint2f> imagePoints, Size imageSize, double aspectRatio)
 {
+ List<Mat> objectPoints_tmplm = new ArrayList<Mat>((objectPoints != null) ? objectPoints.size() : 0);
+ Mat objectPoints_mat = Converters.vector_vector_Point3f_to_Mat(objectPoints, objectPoints_tmplm);
+ List<Mat> imagePoints_tmplm = new ArrayList<Mat>((imagePoints != null) ? imagePoints.size() : 0);
+ Mat imagePoints_mat = Converters.vector_vector_Point2f_to_Mat(imagePoints, imagePoints_tmplm);
+ Mat retVal = new Mat(initCameraMatrix2D_0(objectPoints_mat.nativeObj, imagePoints_mat.nativeObj, imageSize.width, imageSize.height, aspectRatio));
+
+ return retVal;
+ }
- decomposeProjectionMatrix_0(projMatrix.nativeObj, cameraMatrix.nativeObj, rotMatrix.nativeObj, transVect.nativeObj, rotMatrixX.nativeObj, rotMatrixY.nativeObj, rotMatrixZ.nativeObj, eulerAngles.nativeObj);
-
- return;

-/**
- * Decomposes a projection matrix into a rotation matrix and a camera matrix
- * (overload without the optional per-axis rotation outputs).
- */
- public static void decomposeProjectionMatrix(Mat projMatrix, Mat cameraMatrix, Mat rotMatrix, Mat transVect)
+ //javadoc: initCameraMatrix2D(objectPoints, imagePoints, imageSize)
+ public static Mat initCameraMatrix2D(List<MatOfPoint3f> objectPoints, List<MatOfPoint2f> imagePoints, Size imageSize)
 {
+ List<Mat> objectPoints_tmplm = new ArrayList<Mat>((objectPoints != null) ? objectPoints.size() : 0);
+ Mat objectPoints_mat = Converters.vector_vector_Point3f_to_Mat(objectPoints, objectPoints_tmplm);
+ List<Mat> imagePoints_tmplm = new ArrayList<Mat>((imagePoints != null) ? imagePoints.size() : 0);
+ Mat imagePoints_mat = Converters.vector_vector_Point2f_to_Mat(imagePoints, imagePoints_tmplm);
+ Mat retVal = new Mat(initCameraMatrix2D_1(objectPoints_mat.nativeObj, imagePoints_mat.nativeObj, imageSize.width, imageSize.height));
+
+ return retVal;
+ }
- decomposeProjectionMatrix_1(projMatrix.nativeObj, cameraMatrix.nativeObj, rotMatrix.nativeObj, transVect.nativeObj);
-
- return;
 }

 //
- // C++: void drawChessboardCorners(Mat image, Size patternSize, vector_Point2f corners, bool patternWasFound)
+ // C++: Rect getValidDisparityROI(Rect roi1, Rect roi2, int minDisparity, int numberOfDisparities, int SADWindowSize)
 //

-/**
- * Renders the detected chessboard corners.
- *
- * The function draws individual chessboard corners detected either as red
- * circles if the board was not found, or as colored corners connected with
- * lines if the board was found.
- *
- * @param image Destination image. It must be an 8-bit color image.
- * @param patternSize Number of inner corners per a chessboard row and column
- * (patternSize = cv::Size(points_per_row, points_per_column)).
- * @param corners Array of detected corners, the output of findChessboardCorners.
- * @param patternWasFound Parameter indicating whether the complete board was
- * found or not. The return value of "findChessboardCorners" should be passed
- * here.
- *
- * @see org.opencv.calib3d.Calib3d.drawChessboardCorners
- */
- public static void drawChessboardCorners(Mat image, Size patternSize, MatOfPoint2f corners, boolean patternWasFound)
+ //javadoc: getValidDisparityROI(roi1, roi2, minDisparity, numberOfDisparities, SADWindowSize)
+ public static Rect getValidDisparityROI(Rect roi1, Rect roi2, int minDisparity, int numberOfDisparities, int SADWindowSize)
{
- Mat corners_mat = corners;
- drawChessboardCorners_0(image.nativeObj, patternSize.width, patternSize.height, corners_mat.nativeObj, patternWasFound);
-
- return;
+
+ Rect retVal = new Rect(getValidDisparityROI_0(roi1.x, roi1.y, roi1.width, roi1.height, roi2.x, roi2.y, roi2.width, roi2.height, minDisparity, numberOfDisparities, SADWindowSize));
+
+ return retVal;
}
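 // Usage sketch (editor's illustration, not part of the generated bindings):
 // computing a rectified camera matrix with alpha = 1 (keep all source pixels)
 // and reading back the valid-pixel ROI.
 private static Mat optimalNewCameraMatrixExample(Mat cameraMatrix, Mat distCoeffs, Size imageSize) {
     Rect validPixROI = new Rect();
     return getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1.0, imageSize, validPixROI, false);
 }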
//
- // C++: int estimateAffine3D(Mat src, Mat dst, Mat& out, Mat& inliers, double ransacThreshold = 3, double confidence = 0.99)
+ // C++: Vec3d RQDecomp3x3(Mat src, Mat& mtxR, Mat& mtxQ, Mat& Qx = Mat(), Mat& Qy = Mat(), Mat& Qz = Mat())
 //

-/**
- * Computes an optimal affine transformation between two 3D point sets.
- *
- * The function estimates an optimal 3D affine transformation between two 3D
- * point sets using the RANSAC algorithm.
- *
- * @param src First input 3D point set.
- * @param dst Second input 3D point set.
- * @param out Output 3D affine transformation matrix 3 x 4.
- * @param inliers Output vector indicating which points are inliers.
- * @param ransacThreshold Maximum reprojection error in the RANSAC algorithm to
- * consider a point as an inlier.
- * @param confidence Confidence level, between 0 and 1, for the estimated
- * transformation. Anything between 0.95 and 0.99 is usually good enough. Values
- * too close to 1 can slow down the estimation significantly. Values lower than
- * 0.8-0.9 can result in an incorrectly estimated transformation.
- *
- * @see org.opencv.calib3d.Calib3d.estimateAffine3D
- */
- public static int estimateAffine3D(Mat src, Mat dst, Mat out, Mat inliers, double ransacThreshold, double confidence)
+ //javadoc: RQDecomp3x3(src, mtxR, mtxQ, Qx, Qy, Qz)
+ public static double[] RQDecomp3x3(Mat src, Mat mtxR, Mat mtxQ, Mat Qx, Mat Qy, Mat Qz)
 {
-
- int retVal = estimateAffine3D_0(src.nativeObj, dst.nativeObj, out.nativeObj, inliers.nativeObj, ransacThreshold, confidence);
-
+
+ double[] retVal = RQDecomp3x3_0(src.nativeObj, mtxR.nativeObj, mtxQ.nativeObj, Qx.nativeObj, Qy.nativeObj, Qz.nativeObj);
+
 return retVal;
 }

-/**
- * Computes an optimal affine transformation between two 3D point sets
- * (overload without ransacThreshold and confidence).
- */
- public static int estimateAffine3D(Mat src, Mat dst, Mat out, Mat inliers)
+ //javadoc: RQDecomp3x3(src, mtxR, mtxQ)
+ public static double[] RQDecomp3x3(Mat src, Mat mtxR, Mat mtxQ)
 {
-
- int retVal = estimateAffine3D_1(src.nativeObj, dst.nativeObj, out.nativeObj, inliers.nativeObj);
-
+
+ double[] retVal = RQDecomp3x3_1(src.nativeObj, mtxR.nativeObj, mtxQ.nativeObj);
+
 return retVal;
 }

 //
- // C++: void filterSpeckles(Mat& img, double newVal, int maxSpeckleSize, double maxDiff, Mat& buf = Mat())
+ // C++: bool findChessboardCorners(Mat image, Size patternSize, vector_Point2f& corners, int flags = CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE)
 //

-/**
- * Filters off small noise blobs (speckles) in the disparity map.
- *
- * @param img The input 16-bit signed disparity image.
- * @param newVal The disparity value used to paint off the speckles.
- * @param maxSpeckleSize The maximum speckle size to consider it a speckle.
- * Larger blobs are not affected by the algorithm.
- * @param maxDiff Maximum difference between neighbor disparity pixels to put
- * them into the same blob. Note that since StereoBM, StereoSGBM, and maybe
- * other algorithms return a fixed-point disparity map, where disparity values
- * are multiplied by 16, this scale factor should be taken into account when
- * specifying this parameter value.
- * @param buf The optional temporary buffer to avoid memory allocation within
- * the function.
- *
- * @see org.opencv.calib3d.Calib3d.filterSpeckles
- */
- public static void filterSpeckles(Mat img, double newVal, int maxSpeckleSize, double maxDiff, Mat buf)
- {
-
- filterSpeckles_0(img.nativeObj, newVal, maxSpeckleSize, maxDiff, buf.nativeObj);
-
- return;
- }
-
-/**
- * Filters off small noise blobs (speckles) in the disparity map (overload
- * without the temporary buffer).
- */
- public static void filterSpeckles(Mat img, double newVal, int maxSpeckleSize, double maxDiff)
- {
-
- filterSpeckles_1(img.nativeObj, newVal, maxSpeckleSize, maxDiff);
-
- return;
- }
-
-
- //
- // C++: bool findChessboardCorners(Mat image, Size patternSize, vector_Point2f& corners, int flags = CALIB_CB_ADAPTIVE_THRESH+CALIB_CB_NORMALIZE_IMAGE)
- //
-
-/**
- * Finds the positions of internal corners of the chessboard.
- *
- * The function attempts to determine whether the input image is a view of the
- * chessboard pattern and locate the internal chessboard corners. The function
- * returns a non-zero value if all of the corners are found and they are placed
- * in a certain order (row by row, left to right in every row). Otherwise, if
- * the function fails to find all the corners or reorder them, it returns 0. For
- * example, a regular chessboard has 8 x 8 squares and 7 x 7 internal corners,
- * that is, points where the black squares touch each other.
- * The detected coordinates are approximate, and to determine their positions
- * more accurately, the function calls "cornerSubPix".
- * You also may use the function "cornerSubPix" with different parameters if
- * returned coordinates are not accurate enough.
- * Sample usage of detecting and drawing chessboard corners:
// C++ code:
- * - *Size patternsize(8,6); //interior number of corners
- * - *Mat gray =....; //source image
- * - *vector
//CALIB_CB_FAST_CHECK saves a lot of time on images
- * - *//that do not contain any chessboard corners
- * - *bool patternfound = findChessboardCorners(gray, patternsize, corners,
- * - *CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE
- * - *+ CALIB_CB_FAST_CHECK);
- * - *if(patternfound)
- * - *cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1),
- * - *TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
- * - *drawChessboardCorners(img, patternsize, Mat(corners), patternfound);
- * - *Note: The function requires white space (like a square-thick border, the - * wider the better) around the board to make the detection more robust in - * various environments. Otherwise, if there is no border and the background is - * dark, the outer black squares cannot be segmented properly and so the square - * grouping and ordering algorithm fails. - *
- * - * @param image Source chessboard view. It must be an 8-bit grayscale or color - * image. - * @param patternSize Number of inner corners per a chessboard row and column - *(patternSize = cvSize(points_per_row,points_per_colum) =
- * cvSize(columns,rows))
.
- * @param corners Output array of detected corners.
- * @param flags Various operation flags that can be zero or a combination of the
- * following values:
- * Finds the positions of internal corners of the chessboard.
- * - *The function attempts to determine whether the input image is a view of the
- * chessboard pattern and locate the internal chessboard corners. The function
- * returns a non-zero value if all of the corners are found and they are placed
- * in a certain order (row by row, left to right in every row). Otherwise, if
- * the function fails to find all the corners or reorder them, it returns 0. For
- * example, a regular chessboard has 8 x 8 squares and 7 x 7 internal corners,
- * that is, points where the black squares touch each other.
- * The detected coordinates are approximate, and to determine their positions
- * more accurately, the function calls "cornerSubPix".
- * You also may use the function "cornerSubPix" with different parameters if
- * returned coordinates are not accurate enough.
- * Sample usage of detecting and drawing chessboard corners:
// C++ code:
- * - *Size patternsize(8,6); //interior number of corners
- * - *Mat gray =....; //source image
- * - *vector
//CALIB_CB_FAST_CHECK saves a lot of time on images
- * - *//that do not contain any chessboard corners
- * - *bool patternfound = findChessboardCorners(gray, patternsize, corners,
- * - *CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE
- * - *+ CALIB_CB_FAST_CHECK);
- * - *if(patternfound)
- * - *cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1),
- * - *TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
- * - *drawChessboardCorners(img, patternsize, Mat(corners), patternfound);
- * - *Note: The function requires white space (like a square-thick border, the - * wider the better) around the board to make the detection more robust in - * various environments. Otherwise, if there is no border and the background is - * dark, the outer black squares cannot be segmented properly and so the square - * grouping and ordering algorithm fails. - *
- * - * @param image Source chessboard view. It must be an 8-bit grayscale or color - * image. - * @param patternSize Number of inner corners per a chessboard row and column - *(patternSize = cvSize(points_per_row,points_per_colum) =
- * cvSize(columns,rows))
.
- * @param corners Output array of detected corners.
- *
- * @see org.opencv.calib3d.Calib3d.findChessboardCorners
- */
+ //javadoc: findChessboardCorners(image, patternSize, corners)
public static boolean findChessboardCorners(Mat image, Size patternSize, MatOfPoint2f corners)
{
Mat corners_mat = corners;
boolean retVal = findChessboardCorners_1(image.nativeObj, patternSize.width, patternSize.height, corners_mat.nativeObj);
-
+
return retVal;
}
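 // Usage sketch (editor's illustration, not part of the generated bindings):
 // a Java version of the C++ chessboard sample from the removed javadoc;
 // assumes org.opencv.imgproc.Imgproc and org.opencv.core.TermCriteria are
 // imported.
 private static void chessboardExample(Mat gray, Mat img) {
     Size patternsize = new Size(8, 6);          // interior number of corners
     MatOfPoint2f corners = new MatOfPoint2f();  // filled by the detected corners
     // CALIB_CB_FAST_CHECK saves a lot of time on images
     // that do not contain any chessboard corners
     boolean patternfound = findChessboardCorners(gray, patternsize, corners,
             Calib3d.CALIB_CB_ADAPTIVE_THRESH + Calib3d.CALIB_CB_NORMALIZE_IMAGE
             + Calib3d.CALIB_CB_FAST_CHECK);
     if (patternfound)
         Imgproc.cornerSubPix(gray, corners, new Size(11, 11), new Size(-1, -1),
                 new TermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER, 30, 0.1));
     drawChessboardCorners(img, patternsize, corners, patternfound);
 }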
//
- // C++: bool findCirclesGrid(Mat image, Size patternSize, Mat& centers, int flags = CALIB_CB_SYMMETRIC_GRID, Ptr_FeatureDetector blobDetector = new SimpleBlobDetector())
+ // C++: bool findCirclesGrid(Mat image, Size patternSize, Mat& centers, int flags, Ptr_FeatureDetector blobDetector, CirclesGridFinderParameters parameters)
//
// Unknown type 'Ptr_FeatureDetector' (I), skipping the function
//
- // C++: bool findCirclesGridDefault(Mat image, Size patternSize, Mat& centers, int flags = CALIB_CB_SYMMETRIC_GRID)
+ // C++: bool findCirclesGrid(Mat image, Size patternSize, Mat& centers, int flags = CALIB_CB_SYMMETRIC_GRID, Ptr_FeatureDetector blobDetector = SimpleBlobDetector::create())
//
- public static boolean findCirclesGridDefault(Mat image, Size patternSize, Mat centers, int flags)
+ //javadoc: findCirclesGrid(image, patternSize, centers, flags)
+ public static boolean findCirclesGrid(Mat image, Size patternSize, Mat centers, int flags)
{
-
- boolean retVal = findCirclesGridDefault_0(image.nativeObj, patternSize.width, patternSize.height, centers.nativeObj, flags);
-
+
+ boolean retVal = findCirclesGrid_0(image.nativeObj, patternSize.width, patternSize.height, centers.nativeObj, flags);
+
return retVal;
}
- public static boolean findCirclesGridDefault(Mat image, Size patternSize, Mat centers)
+ //javadoc: findCirclesGrid(image, patternSize, centers)
+ public static boolean findCirclesGrid(Mat image, Size patternSize, Mat centers)
{
-
- boolean retVal = findCirclesGridDefault_1(image.nativeObj, patternSize.width, patternSize.height, centers.nativeObj);
-
+
+ boolean retVal = findCirclesGrid_1(image.nativeObj, patternSize.width, patternSize.height, centers.nativeObj);
+
return retVal;
}
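 // Usage sketch (editor's illustration, not part of the generated bindings):
 // detecting OpenCV's standard 4x11 asymmetric circles-grid pattern;
 // Calib3d.CALIB_CB_ASYMMETRIC_GRID is the real flag.
 private static boolean circlesGridExample(Mat image, Mat centers) {
     return findCirclesGrid(image, new Size(4, 11), centers, Calib3d.CALIB_CB_ASYMMETRIC_GRID);
 }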
//
- // C++: Mat findFundamentalMat(vector_Point2f points1, vector_Point2f points2, int method = FM_RANSAC, double param1 = 3., double param2 = 0.99, Mat& mask = Mat())
+ // C++: bool solvePnP(vector_Point3f objectPoints, vector_Point2f imagePoints, Mat cameraMatrix, vector_double distCoeffs, Mat& rvec, Mat& tvec, bool useExtrinsicGuess = false, int flags = SOLVEPNP_ITERATIVE)
//
-/**
- * Calculates a fundamental matrix from the corresponding points in two images.
- *
- * The epipolar geometry is described by the following equation:
- *
- *     [p_2; 1]^T F [p_1; 1] = 0
- *
- * where F is a fundamental matrix, p_1 and p_2 are
- * corresponding points in the first and the second images, respectively.
- *
- * The function calculates the fundamental matrix using one of four methods
- * listed above and returns the found fundamental matrix. Normally just one
- * matrix is found. But in case of the 7-point algorithm, the function may
- * return up to 3 solutions (9 x 3 matrix that stores all 3 matrices
- * sequentially).
- *
- * The calculated fundamental matrix may be passed further to
- * "computeCorrespondEpilines" that finds the epipolar lines corresponding to
- * the specified points. It can also be passed to "stereoRectifyUncalibrated"
- * to compute the rectification transformation.
- *
- *     // C++ code:
- *     // Example. Estimation of fundamental matrix using the RANSAC algorithm
- *     int point_count = 100;
- *     vector<Point2f> points1(point_count);
- *     vector<Point2f> points2(point_count);
- *
- *     // initialize the points here...
- *     for(int i = 0; i < point_count; i++)
- *     {
- *         points1[i] = ...;
- *         points2[i] = ...;
- *     }
- *
- *     Mat fundamental_matrix =
- *         findFundamentalMat(points1, points2, FM_RANSAC, 3, 0.99);
- *
- * @param points1 Array of N points from the first image. The point
- * coordinates should be floating-point (single or double precision).
- * @param points2 Array of the second image points of the same size and format
- * as points1.
- *
- * @see org.opencv.calib3d.Calib3d.findFundamentalMat
- */
- public static Mat findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2)
+
+ //
+ // C++: bool solvePnPRansac(vector_Point3f objectPoints, vector_Point2f imagePoints, Mat cameraMatrix, vector_double distCoeffs, Mat& rvec, Mat& tvec, bool useExtrinsicGuess = false, int iterationsCount = 100, float reprojectionError = 8.0, double confidence = 0.99, Mat& inliers = Mat(), int flags = SOLVEPNP_ITERATIVE)
+ //
+
+ //javadoc: solvePnPRansac(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec, useExtrinsicGuess, iterationsCount, reprojectionError, confidence, inliers, flags)
+ public static boolean solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess, int iterationsCount, float reprojectionError, double confidence, Mat inliers, int flags)
{
- Mat points1_mat = points1;
- Mat points2_mat = points2;
- Mat retVal = new Mat(findFundamentalMat_2(points1_mat.nativeObj, points2_mat.nativeObj));
+ Mat objectPoints_mat = objectPoints;
+ Mat imagePoints_mat = imagePoints;
+ Mat distCoeffs_mat = distCoeffs;
+ boolean retVal = solvePnPRansac_0(objectPoints_mat.nativeObj, imagePoints_mat.nativeObj, cameraMatrix.nativeObj, distCoeffs_mat.nativeObj, rvec.nativeObj, tvec.nativeObj, useExtrinsicGuess, iterationsCount, reprojectionError, confidence, inliers.nativeObj, flags);
+
+ return retVal;
+ }
+ //javadoc: solvePnPRansac(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec)
+ public static boolean solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec)
+ {
+ Mat objectPoints_mat = objectPoints;
+ Mat imagePoints_mat = imagePoints;
+ Mat distCoeffs_mat = distCoeffs;
+ boolean retVal = solvePnPRansac_1(objectPoints_mat.nativeObj, imagePoints_mat.nativeObj, cameraMatrix.nativeObj, distCoeffs_mat.nativeObj, rvec.nativeObj, tvec.nativeObj);
+
return retVal;
}
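 // Usage sketch (editor's illustration, not part of the generated bindings):
 // robust pose estimation from 3D-2D correspondences. The iteration count,
 // reprojection error and confidence mirror the C++ defaults quoted above.
 private static boolean pnpRansacExample(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs) {
     Mat rvec = new Mat();
     Mat tvec = new Mat();
     Mat inliers = new Mat();
     return solvePnPRansac(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec,
             false, 100, 8.0f, 0.99, inliers, Calib3d.SOLVEPNP_ITERATIVE);
 }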
//
- // C++: Mat findHomography(vector_Point2f srcPoints, vector_Point2f dstPoints, int method = 0, double ransacReprojThreshold = 3, Mat& mask = Mat())
- //
-
-/**
- * Finds a perspective transformation between two planes.
- *
- * The functions find and return the perspective transformation H
- * between the source and the destination planes:
- *
- *     s_i [x'_i y'_i 1] ~ H [x_i y_i 1]
- *
- * so that the back-projection error
- *
- *     sum_i (x'_i - (h_11 x_i + h_12 y_i + h_13)/(h_31 x_i + h_32 y_i + h_33))^2
- *           + (y'_i - (h_21 x_i + h_22 y_i + h_23)/(h_31 x_i + h_32 y_i + h_33))^2
- *
- * is minimized. If the parameter method is set to the default
- * value 0, the function uses all the point pairs to compute an initial
- * homography estimate with a simple least-squares scheme.
- *
- * However, if not all of the point pairs (srcPoints_i, dstPoints_i) fit the
- * rigid perspective transformation (that is, there are some outliers), this
- * initial estimate will be poor.
- * In this case, you can use one of the two robust methods. Both methods,
- * RANSAC and LMeDS, try many different random subsets
- * of the corresponding point pairs (of four pairs each), estimate the
- * homography matrix using this subset and a simple least-squares algorithm, and
- * then compute the quality/goodness of the computed homography (which is the
- * number of inliers for RANSAC or the median re-projection error for LMeDS).
- * The best subset is then used to produce the initial estimate of the
- * homography matrix and the mask of inliers/outliers.
- *
- * Regardless of the method, robust or not, the computed homography matrix is
- * refined further (using inliers only in case of a robust method) with the
- * Levenberg-Marquardt method to reduce the re-projection error even more.
- *
- * The method RANSAC can handle practically any ratio of outliers
- * but it needs a threshold to distinguish inliers from outliers.
- * The method LMeDS does not need any threshold but it works
- * correctly only when there are more than 50% of inliers. Finally, if there are
- * no outliers and the noise is rather small, use the default method (method=0).
- *
- * The function is used to find initial intrinsic and extrinsic matrices.
- * Homography matrix is determined up to a scale. Thus, it is normalized so that
- * h_33=1. Note that whenever an H matrix cannot be estimated, an empty
- * one will be returned.
- *
- * @param srcPoints Coordinates of the points in the original plane, a matrix of
- * the type CV_32FC2 or vector<Point2f>.
- * @param dstPoints Coordinates of the points in the target plane, a matrix of
- * the type CV_32FC2 or a vector<Point2f>.
- * @param method Method used to compute a homography matrix. The following
- * methods are possible:
- * @param ransacReprojThreshold Maximum allowed reprojection error to treat a
- * point pair as an inlier (used in the RANSAC method only). That is, if
- *
- *     | dstPoints_i - convertPointsHomogeneous(H * srcPoints_i) | > ransacReprojThreshold
- *
- * then the point i is considered an outlier. If srcPoints
- * and dstPoints are measured in pixels, it usually makes sense to
- * set this parameter somewhere in the range of 1 to 10.
- * @param mask Optional output mask set by a robust method (CV_RANSAC
- * or CV_LMEDS). Note that the input mask values are ignored.
- *
- * @see org.opencv.calib3d.Calib3d.findHomography
- * @see org.opencv.imgproc.Imgproc#warpPerspective
- * @see org.opencv.core.Core#perspectiveTransform
- * @see org.opencv.video.Video#estimateRigidTransform
- * @see org.opencv.imgproc.Imgproc#getAffineTransform
- * @see org.opencv.imgproc.Imgproc#getPerspectiveTransform
- */
- public static Mat findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, int method, double ransacReprojThreshold, Mat mask)
+ // C++: bool stereoRectifyUncalibrated(Mat points1, Mat points2, Mat F, Size imgSize, Mat& H1, Mat& H2, double threshold = 5)
+ //
+
+ //javadoc: stereoRectifyUncalibrated(points1, points2, F, imgSize, H1, H2, threshold)
+ public static boolean stereoRectifyUncalibrated(Mat points1, Mat points2, Mat F, Size imgSize, Mat H1, Mat H2, double threshold)
{
- Mat srcPoints_mat = srcPoints;
- Mat dstPoints_mat = dstPoints;
- Mat retVal = new Mat(findHomography_0(srcPoints_mat.nativeObj, dstPoints_mat.nativeObj, method, ransacReprojThreshold, mask.nativeObj));
+
+ boolean retVal = stereoRectifyUncalibrated_0(points1.nativeObj, points2.nativeObj, F.nativeObj, imgSize.width, imgSize.height, H1.nativeObj, H2.nativeObj, threshold);
+
+ return retVal;
+ }
+ //javadoc: stereoRectifyUncalibrated(points1, points2, F, imgSize, H1, H2)
+ public static boolean stereoRectifyUncalibrated(Mat points1, Mat points2, Mat F, Size imgSize, Mat H1, Mat H2)
+ {
+
+ boolean retVal = stereoRectifyUncalibrated_1(points1.nativeObj, points2.nativeObj, F.nativeObj, imgSize.width, imgSize.height, H1.nativeObj, H2.nativeObj);
+
return retVal;
}
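 // Usage sketch (editor's illustration, not part of the generated bindings):
 // rectifying an uncalibrated stereo pair from point matches and a fundamental
 // matrix; H1/H2 receive the per-image rectification homographies. The 5.0
 // threshold mirrors the C++ default quoted above.
 private static boolean rectifyUncalibratedExample(Mat points1, Mat points2, Mat F, Size imgSize) {
     Mat H1 = new Mat();
     Mat H2 = new Mat();
     return stereoRectifyUncalibrated(points1, points2, F, imgSize, H1, H2, 5.0);
 }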
-/**
- * Finds a perspective transformation between two planes.
- *
- * The function finds and returns the perspective transformation H
- * between the source and the destination planes:
- *
- * s_i [x'_i y'_i 1]^T ~ H [x_i y_i 1]^T
- *
- * so that the back-projection error
- *
- * sum_i ((x'_i - (h_11 x_i + h_12 y_i + h_13)/(h_31 x_i + h_32 y_i + h_33))^2
- *      + (y'_i - (h_21 x_i + h_22 y_i + h_23)/(h_31 x_i + h_32 y_i + h_33))^2)
- *
- * is minimized. If the parameter method is set to the default
- * value 0, the function uses all the point pairs to compute an initial
- * homography estimate with a simple least-squares scheme.
- *
- * However, if not all of the point pairs (srcPoints_i, dstPoints_i)
- * fit the rigid perspective transformation (that is,
- * there are some outliers), this initial estimate will be poor.
- * In this case, you can use one of the two robust methods. Both methods,
- * RANSAC and LMeDS, try many different random subsets
- * of the corresponding point pairs (of four pairs each), estimate the
- * homography matrix using this subset and a simple least-squares algorithm, and
- * then compute the quality/goodness of the computed homography (which is the
- * number of inliers for RANSAC or the median re-projection error for LMeDS).
- * The best subset is then used to produce the initial estimate of the
- * homography matrix and the mask of inliers/outliers.
- *
- * Regardless of the method, robust or not, the computed homography matrix is
- * refined further (using inliers only in case of a robust method) with the
- * Levenberg-Marquardt method to reduce the re-projection error even more.
- *
- * The method RANSAC can handle practically any ratio of outliers
- * but it needs a threshold to distinguish inliers from outliers.
- * The method LMeDS does not need any threshold but it works
- * correctly only when there are more than 50% of inliers. Finally, if there are
- * no outliers and the noise is rather small, use the default method
- * (method=0).
- *
- * The function is used to find initial intrinsic and extrinsic matrices.
- * Homography matrix is determined up to a scale. Thus, it is normalized so that
- * h_33=1. Note that whenever an H matrix cannot be estimated, an empty
- * one will be returned.
- *
- * Note:
- *
- * @param srcPoints Coordinates of the points in the original plane, a matrix of
- * the type CV_32FC2 or vector<Point2f>.
- * @param dstPoints Coordinates of the points in the target plane, a matrix of
- * the type CV_32FC2 or a vector<Point2f>.
- * @param method Method used to compute a homography matrix. The following
- * methods are possible: 0 (a regular method using all the points), CV_RANSAC,
- * or CV_LMEDS.
- * @param ransacReprojThreshold Maximum allowed reprojection error to treat a
- * point pair as an inlier (used in the RANSAC method only). If srcPoints
- * and dstPoints are measured in pixels, it usually makes sense to
- * set this parameter somewhere in the range of 1 to 10.
- *
- * @see org.opencv.calib3d.Calib3d.findHomography
- * @see org.opencv.imgproc.Imgproc#warpPerspective
- * @see org.opencv.core.Core#perspectiveTransform
- * @see org.opencv.video.Video#estimateRigidTransform
- * @see org.opencv.imgproc.Imgproc#getAffineTransform
- * @see org.opencv.imgproc.Imgproc#getPerspectiveTransform
- */
- public static Mat findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, int method, double ransacReprojThreshold)
-/**
- * Finds a perspective transformation between two planes.
- *
- * The function finds and returns the perspective transformation H
- * between the source and the destination planes:
- *
- * s_i [x'_i y'_i 1]^T ~ H [x_i y_i 1]^T
- *
- * so that the back-projection error
- *
- * sum_i ((x'_i - (h_11 x_i + h_12 y_i + h_13)/(h_31 x_i + h_32 y_i + h_33))^2
- *      + (y'_i - (h_21 x_i + h_22 y_i + h_23)/(h_31 x_i + h_32 y_i + h_33))^2)
- *
- * is minimized. If the parameter method is set to the default
- * value 0, the function uses all the point pairs to compute an initial
- * homography estimate with a simple least-squares scheme.
- *
- * However, if not all of the point pairs (srcPoints_i, dstPoints_i)
- * fit the rigid perspective transformation (that is,
- * there are some outliers), this initial estimate will be poor.
- * In this case, you can use one of the two robust methods. Both methods,
- * RANSAC and LMeDS, try many different random subsets
- * of the corresponding point pairs (of four pairs each), estimate the
- * homography matrix using this subset and a simple least-squares algorithm, and
- * then compute the quality/goodness of the computed homography (which is the
- * number of inliers for RANSAC or the median re-projection error for LMeDS).
- * The best subset is then used to produce the initial estimate of the
- * homography matrix and the mask of inliers/outliers.
- *
- * Regardless of the method, robust or not, the computed homography matrix is
- * refined further (using inliers only in case of a robust method) with the
- * Levenberg-Marquardt method to reduce the re-projection error even more.
- *
- * The method RANSAC can handle practically any ratio of outliers
- * but it needs a threshold to distinguish inliers from outliers.
- * The method LMeDS does not need any threshold but it works
- * correctly only when there are more than 50% of inliers. Finally, if there are
- * no outliers and the noise is rather small, use the default method
- * (method=0).
- *
- * The function is used to find initial intrinsic and extrinsic matrices.
- * Homography matrix is determined up to a scale. Thus, it is normalized so that
- * h_33=1. Note that whenever an H matrix cannot be estimated, an empty
- * one will be returned.
- *
- * Note:
- *
- * @param srcPoints Coordinates of the points in the original plane, a matrix of
- * the type CV_32FC2 or vector<Point2f>.
- * @param dstPoints Coordinates of the points in the target plane, a matrix of
- * the type CV_32FC2 or a vector<Point2f>.
- *
- * @see org.opencv.calib3d.Calib3d.findHomography
- * @see org.opencv.imgproc.Imgproc#warpPerspective
- * @see org.opencv.core.Core#perspectiveTransform
- * @see org.opencv.video.Video#estimateRigidTransform
- * @see org.opencv.imgproc.Imgproc#getAffineTransform
- * @see org.opencv.imgproc.Imgproc#getPerspectiveTransform
- */
- public static Mat findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints)
+ //javadoc: calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, stdDeviationsIntrinsics, stdDeviationsExtrinsics, perViewErrors)
+ public static double calibrateCameraExtended(List<Mat> objectPoints, List<Mat> imagePoints, Size imageSize, Mat cameraMatrix, Mat distCoeffs, List<Mat> rvecs, List<Mat> tvecs, Mat stdDeviationsIntrinsics, Mat stdDeviationsExtrinsics, Mat perViewErrors)
-/**
- * Returns the new camera matrix based on the free scaling parameter.
- *
- * The function computes and returns the optimal new camera matrix based on the
- * free scaling parameter. By varying this parameter, you may retrieve only
- * sensible pixels (alpha=0), keep all the original image pixels if
- * there is valuable information in the corners (alpha=1), or get
- * something in between. When alpha>0, the undistortion result is
- * likely to have some black pixels corresponding to "virtual" pixels outside of
- * the captured distorted image. The original camera matrix, distortion
- * coefficients, the computed new camera matrix, and newImageSize
- * should be passed to "initUndistortRectifyMap" to produce the maps for
- * "remap".
- *
- * @param cameraMatrix Input camera matrix.
- * @param distCoeffs Input vector of distortion coefficients.
- * @param imageSize Original image size.
- * @param alpha Free scaling parameter between 0 (when all the pixels in the
- * undistorted image are valid) and 1 (when all the source image pixels are
- * retained in the undistorted image).
- * @param newImgSize Image size after rectification. By default, it is set to
- * imageSize.
- * @param validPixROI Optional output rectangle that outlines the all-good-pixels
- * region in the undistorted image. See the roi1, roi2 description in
- * "stereoRectify".
- * @param centerPrincipalPoint Optional flag that indicates whether in the new
- * camera matrix the principal point should be at the image center or not. By
- * default, the principal point is chosen to best fit a subset of the source
- * image (determined by alpha) to the corrected image.
- *
- * @see org.opencv.calib3d.Calib3d.getOptimalNewCameraMatrix
- */
- public static Mat getOptimalNewCameraMatrix(Mat cameraMatrix, Mat distCoeffs, Size imageSize, double alpha, Size newImgSize, Rect validPixROI, boolean centerPrincipalPoint)
+ //javadoc: sampsonDistance(pt1, pt2, F)
+ public static double sampsonDistance(Mat pt1, Mat pt2, Mat F)
{
- double[] validPixROI_out = new double[4];
- Mat retVal = new Mat(getOptimalNewCameraMatrix_0(cameraMatrix.nativeObj, distCoeffs.nativeObj, imageSize.width, imageSize.height, alpha, newImgSize.width, newImgSize.height, validPixROI_out, centerPrincipalPoint));
- if(validPixROI!=null){ validPixROI.x = (int)validPixROI_out[0]; validPixROI.y = (int)validPixROI_out[1]; validPixROI.width = (int)validPixROI_out[2]; validPixROI.height = (int)validPixROI_out[3]; }
+
+ double retVal = sampsonDistance_0(pt1.nativeObj, pt2.nativeObj, F.nativeObj);
+
return retVal;
}
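sampsonDistance is new in this file; the following tiny sketch (an editor's illustration with made-up coordinates) shows what it computes, assuming pt1 and pt2 are 3x1 homogeneous point Mats:

// First-order geometric error of a single correspondence under a fundamental
// matrix F: near zero when the pair satisfies the epipolar constraint.
import org.opencv.calib3d.Calib3d;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

class SampsonSketch {
    static double error(Mat F) {
        Mat pt1 = new Mat(3, 1, CvType.CV_64F);
        Mat pt2 = new Mat(3, 1, CvType.CV_64F);
        pt1.put(0, 0, 120.0, 80.0, 1.0); // (x, y, 1) in the first image
        pt2.put(0, 0, 125.0, 79.0, 1.0); // candidate match in the second image
        return Calib3d.sampsonDistance(pt1, pt2, F);
    }
}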
-/**
- * Returns the new camera matrix based on the free scaling parameter.
- *
- * The function computes and returns the optimal new camera matrix based on the
- * free scaling parameter. By varying this parameter, you may retrieve only
- * sensible pixels (alpha=0), keep all the original image pixels if
- * there is valuable information in the corners (alpha=1), or get
- * something in between. When alpha>0, the undistortion result is
- * likely to have some black pixels corresponding to "virtual" pixels outside of
- * the captured distorted image. The original camera matrix, distortion
- * coefficients, the computed new camera matrix, and newImageSize
- * should be passed to "initUndistortRectifyMap" to produce the maps for
- * "remap".
- *
- * @see org.opencv.calib3d.Calib3d.getOptimalNewCameraMatrix
- */
- public static Mat getOptimalNewCameraMatrix(Mat cameraMatrix, Mat distCoeffs, Size imageSize, double alpha)
-/**
- * Finds an initial camera matrix from 3D-2D point correspondences.
- *
- * The function estimates and returns an initial camera matrix for the camera
- * calibration process. Currently, the function only supports planar calibration
- * patterns, which are patterns where each object point has z-coordinate = 0.
- *
- * @param objectPoints Vector of vectors of the calibration pattern points in
- * the calibration pattern coordinate space. In the old interface, all the
- * per-view vectors are concatenated. See "calibrateCamera" for details.
- * @param imagePoints Vector of vectors of the projections of the calibration
- * pattern points. In the old interface, all the per-view vectors are
- * concatenated.
- * @param imageSize Image size in pixels used to initialize the principal point.
- * @param aspectRatio If it is zero or negative, both f_x and
- * f_y are estimated independently. Otherwise, f_x = f_y *
- * aspectRatio.
- *
- * @see org.opencv.calib3d.Calib3d.initCameraMatrix2D
- */
- public static Mat initCameraMatrix2D(List<MatOfPoint3f> objectPoints, List<MatOfPoint2f> imagePoints, Size imageSize, double aspectRatio)
-/**
- * Finds an initial camera matrix from 3D-2D point correspondences.
- *
- * The function estimates and returns an initial camera matrix for the camera
- * calibration process. Currently, the function only supports planar calibration
- * patterns, which are patterns where each object point has z-coordinate = 0.
- *
- * @param objectPoints Vector of vectors of the calibration pattern points in
- * the calibration pattern coordinate space. In the old interface, all the
- * per-view vectors are concatenated. See "calibrateCamera" for details.
- * @param imagePoints Vector of vectors of the projections of the calibration
- * pattern points. In the old interface, all the per-view vectors are
- * concatenated.
- * @param imageSize Image size in pixels used to initialize the principal point.
- *
- * @see org.opencv.calib3d.Calib3d.initCameraMatrix2D
- */
- public static Mat initCameraMatrix2D(List<MatOfPoint3f> objectPoints, List<MatOfPoint2f> imagePoints, Size imageSize)
-/**
- * Computes partial derivatives of the matrix product for each multiplied
- * matrix.
- *
- * The function computes partial derivatives of the elements of the matrix
- * product A*B with regard to the elements of each of the two input
- * matrices. The function is used to compute the Jacobian matrices in
- * "stereoCalibrate" but can also be used in any other similar optimization
- * function.
- *
- * @param A First multiplied matrix.
- * @param B Second multiplied matrix.
- * @param dABdA First output derivative matrix d(A*B)/dA of size
- * A.rows*B.cols x (A.rows*A.cols).
- * @param dABdB Second output derivative matrix d(A*B)/dB of size
- * A.rows*B.cols x (B.rows*B.cols).
- *
- * @see org.opencv.calib3d.Calib3d.matMulDeriv
- */
+ //javadoc: matMulDeriv(A, B, dABdA, dABdB)
public static void matMulDeriv(Mat A, Mat B, Mat dABdA, Mat dABdB)
{
-
+
matMulDeriv_0(A.nativeObj, B.nativeObj, dABdA.nativeObj, dABdB.nativeObj);
-
+
return;
}
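A quick sketch of the Jacobian shapes matMulDeriv produces (an editor's illustration, not part of the patch):

// For A (m x n) and B (n x p): dABdA is (m*p) x (m*n), dABdB is (m*p) x (n*p).
import org.opencv.calib3d.Calib3d;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

class MatMulDerivSketch {
    static void demo() {
        Mat A = Mat.eye(3, 3, CvType.CV_64F);
        Mat B = Mat.ones(3, 4, CvType.CV_64F);
        Mat dABdA = new Mat(), dABdB = new Mat();
        Calib3d.matMulDeriv(A, B, dABdA, dABdB);
        // Here dABdA is 12 x 9 and dABdB is 12 x 12.
    }
}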
@@ -1952,1059 +1084,400 @@ public static void matMulDeriv(Mat A, Mat B, Mat dABdA, Mat dABdB)
// C++: void projectPoints(vector_Point3f objectPoints, Mat rvec, Mat tvec, Mat cameraMatrix, vector_double distCoeffs, vector_Point2f& imagePoints, Mat& jacobian = Mat(), double aspectRatio = 0)
//
-/**
- * Projects 3D points to an image plane.
- *
- * The function computes projections of 3D points to the image plane given
- * intrinsic and extrinsic camera parameters. Optionally, the function computes
- * Jacobians - matrices of partial derivatives of image points coordinates (as
- * functions of all the input parameters) with respect to the particular
- * parameters, intrinsic and/or extrinsic. The Jacobians are used during the
- * global optimization in "calibrateCamera", "solvePnP", and "stereoCalibrate".
- * The function itself can also be used to compute a re-projection error given
- * the current intrinsic and extrinsic parameters.
- *
- * Note: By setting rvec=tvec=(0,0,0) or by setting
- * cameraMatrix to a 3x3 identity matrix, or by passing zero
- * distortion coefficients, you can get various useful partial cases of the
- * function. This means that you can compute the distorted coordinates for a
- * sparse set of points or apply a perspective transformation (and also compute
- * the derivatives) in the ideal zero-distortion setup.
- *
- * @param objectPoints Array of object points, 3xN/Nx3 1-channel or 1xN/Nx1
- * 3-channel (or vector<Point3f>), where N is the number of points
- * in the view.
- * @param rvec Rotation vector. See "Rodrigues" for details.
- * @param tvec Translation vector.
- * @param cameraMatrix Camera matrix A =
- * |f_x 0   c_x|
- * |0   f_y c_y|
- * |0   0   1  |
- * @param distCoeffs Input vector of distortion coefficients (k_1, k_2, p_1,
- * p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is
- * NULL/empty, the zero distortion coefficients are assumed.
- * @param imagePoints Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1
- * 2-channel, or vector<Point2f>.
- * @param jacobian Optional output 2Nx(10 + number of distortion coefficients)
- * jacobian matrix of derivatives of image points with respect to components of
- * the rotation vector, translation vector, focal lengths, coordinates of the
- * principal point and the distortion coefficients.
- * @param aspectRatio Optional "fixed aspect ratio" parameter. If the parameter
- * is not 0, the function assumes that the aspect ratio (f_x/f_y) is fixed and
- * correspondingly adjusts the jacobian matrix.
- *
- * @see org.opencv.calib3d.Calib3d.projectPoints
- */
- public static void projectPoints(MatOfPoint3f objectPoints, Mat rvec, Mat tvec, Mat cameraMatrix, MatOfDouble distCoeffs, MatOfPoint2f imagePoints, Mat jacobian, double aspectRatio)
-/**
- * Projects 3D points to an image plane.
- *
- * The function computes projections of 3D points to the image plane given
- * intrinsic and extrinsic camera parameters. Optionally, the function computes
- * Jacobians - matrices of partial derivatives of image points coordinates (as
- * functions of all the input parameters) with respect to the particular
- * parameters, intrinsic and/or extrinsic. The Jacobians are used during the
- * global optimization in "calibrateCamera", "solvePnP", and "stereoCalibrate".
- * The function itself can also be used to compute a re-projection error given
- * the current intrinsic and extrinsic parameters.
- *
- * Note: By setting rvec=tvec=(0,0,0) or by setting
- * cameraMatrix to a 3x3 identity matrix, or by passing zero
- * distortion coefficients, you can get various useful partial cases of the
- * function. This means that you can compute the distorted coordinates for a
- * sparse set of points or apply a perspective transformation (and also compute
- * the derivatives) in the ideal zero-distortion setup.
- *
- * @param objectPoints Array of object points, 3xN/Nx3 1-channel or 1xN/Nx1
- * 3-channel (or vector<Point3f>), where N is the number of points
- * in the view.
- * @param rvec Rotation vector. See "Rodrigues" for details.
- * @param tvec Translation vector.
- * @param cameraMatrix Camera matrix A =
- * |f_x 0   c_x|
- * |0   f_y c_y|
- * |0   0   1  |
- * @param distCoeffs Input vector of distortion coefficients (k_1, k_2, p_1,
- * p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is
- * NULL/empty, the zero distortion coefficients are assumed.
- * @param imagePoints Output array of image points, 2xN/Nx2 1-channel or 1xN/Nx1
- * 2-channel, or vector<Point2f>.
- *
- * @see org.opencv.calib3d.Calib3d.projectPoints
- */
+ //javadoc: projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, imagePoints)
public static void projectPoints(MatOfPoint3f objectPoints, Mat rvec, Mat tvec, Mat cameraMatrix, MatOfDouble distCoeffs, MatOfPoint2f imagePoints)
{
Mat objectPoints_mat = objectPoints;
Mat distCoeffs_mat = distCoeffs;
Mat imagePoints_mat = imagePoints;
projectPoints_1(objectPoints_mat.nativeObj, rvec.nativeObj, tvec.nativeObj, cameraMatrix.nativeObj, distCoeffs_mat.nativeObj, imagePoints_mat.nativeObj);
-
+
return;
}
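A minimal sanity-check sketch for the wrapper kept above (an editor's illustration; values are hypothetical):

// With an identity camera matrix, zero pose and zero distortion, projection is
// a pure perspective divide: (0, 0, 1) maps to pixel (0, 0).
import org.opencv.calib3d.Calib3d;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.MatOfPoint3f;
import org.opencv.core.Point3;

class ProjectPointsSketch {
    static void demo() {
        MatOfPoint3f objectPoints = new MatOfPoint3f(new Point3(0, 0, 1));
        Mat rvec = Mat.zeros(3, 1, CvType.CV_64F); // no rotation
        Mat tvec = Mat.zeros(3, 1, CvType.CV_64F); // no translation
        Mat K = Mat.eye(3, 3, CvType.CV_64F);      // identity camera matrix
        MatOfDouble distCoeffs = new MatOfDouble(0, 0, 0, 0);
        MatOfPoint2f imagePoints = new MatOfPoint2f();
        Calib3d.projectPoints(objectPoints, rvec, tvec, K, distCoeffs, imagePoints);
    }
}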
- //
- // C++: float rectify3Collinear(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat cameraMatrix3, Mat distCoeffs3, vector_Mat imgpt1, vector_Mat imgpt3, Size imageSize, Mat R12, Mat T12, Mat R13, Mat T13, Mat& R1, Mat& R2, Mat& R3, Mat& P1, Mat& P2, Mat& P3, Mat& Q, double alpha, Size newImgSize, Rect* roi1, Rect* roi2, int flags)
- //
-
- public static float rectify3Collinear(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat cameraMatrix3, Mat distCoeffs3, List<Mat> imgpt1, List<Mat> imgpt3, Size imageSize, Mat R12, Mat T12, Mat R13, Mat T13, Mat R1, Mat R2, Mat R3, Mat P1, Mat P2, Mat P3, Mat Q, double alpha, Size newImgSize, Rect roi1, Rect roi2, int flags)
-/**
- * Reprojects a disparity image to 3D space.
- *
- * The function transforms a single-channel disparity map to a 3-channel image
- * representing a 3D surface. That is, for each pixel (x,y) and the
- * corresponding disparity d=disparity(x,y), it computes:
- *
- * [X Y Z W]^T = Q * [x y disparity(x,y) 1]^T
- * _3dImage(x,y) = (X/W, Y/W, Z/W)
- *
- * The matrix Q can be an arbitrary 4 x 4 matrix (for
- * example, the one computed by "stereoRectify"). To reproject a sparse set of
- * points {(x,y,d),...} to 3D space, use "perspectiveTransform".
- *
- * @param disparity Input single-channel disparity image.
- * @param _3dImage Output 3-channel floating-point image of the same size as
- * disparity. Each element of _3dImage(x,y) contains
- * 3D coordinates of the point (x,y) computed from the disparity
- * map.
- * @param Q 4 x 4 perspective transformation matrix that can be
- * obtained with "stereoRectify".
- * @param handleMissingValues Indicates, whether the function should handle
- * missing values (i.e. points where the disparity was not computed). If
- * handleMissingValues=true, then pixels with the minimal disparity
- * that corresponds to the outliers (see "StereoBM.operator()") are
- * transformed to 3D points with a very large Z value (currently set to 10000).
- * @param ddepth The optional output array depth. If it is -1, the
- * output image will have CV_32F depth. ddepth can
- * also be set to CV_16S, CV_32S or CV_32F.
- *
- * @see org.opencv.calib3d.Calib3d.reprojectImageTo3D
- */
+ //javadoc: reprojectImageTo3D(disparity, _3dImage, Q, handleMissingValues, ddepth)
public static void reprojectImageTo3D(Mat disparity, Mat _3dImage, Mat Q, boolean handleMissingValues, int ddepth)
{
-
+
reprojectImageTo3D_0(disparity.nativeObj, _3dImage.nativeObj, Q.nativeObj, handleMissingValues, ddepth);
-
+
return;
}
-/**
- * Reprojects a disparity image to 3D space.
- *
- * The function transforms a single-channel disparity map to a 3-channel image
- * representing a 3D surface. That is, for each pixel (x,y) and the
- * corresponding disparity d=disparity(x,y), it computes:
- *
- * [X Y Z W]^T = Q * [x y disparity(x,y) 1]^T
- * _3dImage(x,y) = (X/W, Y/W, Z/W)
- *
- * The matrix Q can be an arbitrary 4 x 4 matrix (for
- * example, the one computed by "stereoRectify"). To reproject a sparse set of
- * points {(x,y,d),...} to 3D space, use "perspectiveTransform".
- *
- * @param disparity Input single-channel disparity image.
- * @param _3dImage Output 3-channel floating-point image of the same size as
- * disparity. Each element of _3dImage(x,y) contains
- * 3D coordinates of the point (x,y) computed from the disparity
- * map.
- * @param Q 4 x 4 perspective transformation matrix that can be
- * obtained with "stereoRectify".
- * @param handleMissingValues Indicates, whether the function should handle
- * missing values (i.e. points where the disparity was not computed). If
- * handleMissingValues=true, then pixels with the minimal disparity
- * that corresponds to the outliers (see "StereoBM.operator()") are
- * transformed to 3D points with a very large Z value (currently set to 10000).
- *
- * @see org.opencv.calib3d.Calib3d.reprojectImageTo3D
- */
+ //javadoc: reprojectImageTo3D(disparity, _3dImage, Q, handleMissingValues)
public static void reprojectImageTo3D(Mat disparity, Mat _3dImage, Mat Q, boolean handleMissingValues)
{
-
+
reprojectImageTo3D_1(disparity.nativeObj, _3dImage.nativeObj, Q.nativeObj, handleMissingValues);
-
+
return;
}
-/**
- * Reprojects a disparity image to 3D space.
- *
- * The function transforms a single-channel disparity map to a 3-channel image
- * representing a 3D surface. That is, for each pixel (x,y) and the
- * corresponding disparity d=disparity(x,y), it computes:
- *
- * [X Y Z W]^T = Q * [x y disparity(x,y) 1]^T
- * _3dImage(x,y) = (X/W, Y/W, Z/W)
- *
- * The matrix Q can be an arbitrary 4 x 4 matrix (for
- * example, the one computed by "stereoRectify"). To reproject a sparse set of
- * points {(x,y,d),...} to 3D space, use "perspectiveTransform".
- *
- * @param disparity Input single-channel disparity image.
- * @param _3dImage Output 3-channel floating-point image of the same size as
- * disparity. Each element of _3dImage(x,y) contains
- * 3D coordinates of the point (x,y) computed from the disparity
- * map.
- * @param Q 4 x 4 perspective transformation matrix that can be
- * obtained with "stereoRectify".
- *
- * @see org.opencv.calib3d.Calib3d.reprojectImageTo3D
- */
+ //javadoc: reprojectImageTo3D(disparity, _3dImage, Q)
public static void reprojectImageTo3D(Mat disparity, Mat _3dImage, Mat Q)
{
-
+
reprojectImageTo3D_2(disparity.nativeObj, _3dImage.nativeObj, Q.nativeObj);
-
+
return;
}
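A minimal sketch of the disparity-to-point-cloud step (an editor's illustration; `disparity` and `Q` are assumed to come from a rectified stereo pipeline):

// Turn a disparity map into per-pixel 3D coordinates.
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Mat;

class ReprojectSketch {
    static Mat toPointCloud(Mat disparity, Mat Q) {
        Mat xyz = new Mat(); // CV_32FC3 output: (X, Y, Z) per pixel
        Calib3d.reprojectImageTo3D(disparity, xyz, Q);
        return xyz;
    }
}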
//
- // C++: bool solvePnP(vector_Point3f objectPoints, vector_Point2f imagePoints, Mat cameraMatrix, vector_double distCoeffs, Mat& rvec, Mat& tvec, bool useExtrinsicGuess = false, int flags = ITERATIVE)
- //
-
-/**
- * Finds an object pose from 3D-2D point correspondences.
- *
- * The function estimates the object pose given a set of object points, their
- * corresponding image projections, as well as the camera matrix and the
- * distortion coefficients.
- *
- * Note:
- *
- * @param objectPoints Array of object points in the object coordinate space,
- * 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points.
- * vector<Point3f> can be also passed here.
- * @param imagePoints Array of corresponding image points, 2xN/Nx2 1-channel or
- * 1xN/Nx1 2-channel, where N is the number of points. vector<Point2f>
- * can be also passed here.
- * @param cameraMatrix Input camera matrix A =
- * |fx 0 cx|
- * |0 fy cy|
- * |0  0  1|
- * @param distCoeffs Input vector of distortion coefficients (k_1, k_2, p_1,
- * p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is
- * NULL/empty, the zero distortion coefficients are assumed.
- * @param rvec Output rotation vector (see "Rodrigues") that, together with
- * tvec, brings points from the model coordinate system to the
- * camera coordinate system.
- * @param tvec Output translation vector.
- * @param useExtrinsicGuess If true (1), the function uses the provided
- * rvec and tvec values as initial approximations of
- * the rotation and translation vectors, respectively, and further optimizes
- * them.
- * @param flags Method for solving a PnP problem. For the iterative method, the
- * function finds a pose that minimizes the re-projection error, that is, the
- * sum of squared distances between the observed projections
- * imagePoints and the projected (using "projectPoints")
- * objectPoints.
- *
- * @see org.opencv.calib3d.Calib3d.solvePnP
- */
- public static boolean solvePnP(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess, int flags)
-/**
- * Finds an object pose from 3D-2D point correspondences.
- *
- * The function estimates the object pose given a set of object points, their
- * corresponding image projections, as well as the camera matrix and the
- * distortion coefficients.
- *
- * Note:
- *
- * @param objectPoints Array of object points in the object coordinate space,
- * 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points.
- * vector<Point3f> can be also passed here.
- * @param imagePoints Array of corresponding image points, 2xN/Nx2 1-channel or
- * 1xN/Nx1 2-channel, where N is the number of points. vector<Point2f>
- * can be also passed here.
- * @param cameraMatrix Input camera matrix A =
- * |fx 0 cx|
- * |0 fy cy|
- * |0  0  1|
- * @param distCoeffs Input vector of distortion coefficients (k_1, k_2, p_1,
- * p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is
- * NULL/empty, the zero distortion coefficients are assumed.
- * @param rvec Output rotation vector (see "Rodrigues") that, together with
- * tvec, brings points from the model coordinate system to the
- * camera coordinate system.
- * @param tvec Output translation vector.
- *
- * @see org.opencv.calib3d.Calib3d.solvePnP
- */
- public static boolean solvePnP(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec)
+ //javadoc: stereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2, imageSize, R, T, R1, R2, P1, P2, Q)
+ public static void stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q)
{
- Mat objectPoints_mat = objectPoints;
- Mat imagePoints_mat = imagePoints;
- Mat distCoeffs_mat = distCoeffs;
- boolean retVal = solvePnP_1(objectPoints_mat.nativeObj, imagePoints_mat.nativeObj, cameraMatrix.nativeObj, distCoeffs_mat.nativeObj, rvec.nativeObj, tvec.nativeObj);
-
- return retVal;
+
+ stereoRectify_1(cameraMatrix1.nativeObj, distCoeffs1.nativeObj, cameraMatrix2.nativeObj, distCoeffs2.nativeObj, imageSize.width, imageSize.height, R.nativeObj, T.nativeObj, R1.nativeObj, R2.nativeObj, P1.nativeObj, P2.nativeObj, Q.nativeObj);
+
+ return;
}
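A short sketch of how this overload slots into a stereo pipeline (an editor's illustration; all input Mats are assumed to come from stereoCalibrate):

// Compute per-camera rectifying rotations and new projection matrices.
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Mat;
import org.opencv.core.Size;

class StereoRectifySketch {
    static void rectify(Mat K1, Mat d1, Mat K2, Mat d2, Size imageSize, Mat R, Mat T) {
        Mat R1 = new Mat(), R2 = new Mat(); // rectifying rotations
        Mat P1 = new Mat(), P2 = new Mat(); // new projection matrices
        Mat Q = new Mat();                  // disparity-to-depth mapping
        Calib3d.stereoRectify(K1, d1, K2, d2, imageSize, R, T, R1, R2, P1, P2, Q);
        // R1/R2 and P1/P2 feed initUndistortRectifyMap; Q feeds reprojectImageTo3D.
    }
}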
//
- // C++: void solvePnPRansac(vector_Point3f objectPoints, vector_Point2f imagePoints, Mat cameraMatrix, vector_double distCoeffs, Mat& rvec, Mat& tvec, bool useExtrinsicGuess = false, int iterationsCount = 100, float reprojectionError = 8.0, int minInliersCount = 100, Mat& inliers = Mat(), int flags = ITERATIVE)
- //
-
-/**
- * Finds an object pose from 3D-2D point correspondences using the RANSAC
- * scheme.
- *
- * The function estimates an object pose given a set of object points, their
- * corresponding image projections, as well as the camera matrix and the
- * distortion coefficients. This function finds such a pose that minimizes
- * reprojection error, that is, the sum of squared distances between the
- * observed projections imagePoints and the projected (using
- * "projectPoints") objectPoints. The use of RANSAC makes the
- * function resistant to outliers. The function is parallelized with the TBB
- * library.
- *
- * @param objectPoints Array of object points in the object coordinate space,
- * 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points.
- * vector<Point3f> can be also passed here.
- * @param imagePoints Array of corresponding image points, 2xN/Nx2 1-channel or
- * 1xN/Nx1 2-channel, where N is the number of points. vector<Point2f>
- * can be also passed here.
- * @param cameraMatrix Input camera matrix A =
- * |fx 0 cx|
- * |0 fy cy|
- * |0  0  1|
- * @param distCoeffs Input vector of distortion coefficients (k_1, k_2, p_1,
- * p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is
- * NULL/empty, the zero distortion coefficients are assumed.
- * @param rvec Output rotation vector (see "Rodrigues") that, together with
- * tvec, brings points from the model coordinate system to the
- * camera coordinate system.
- * @param tvec Output translation vector.
- * @param useExtrinsicGuess If true (1), the function uses the provided
- * rvec and tvec values as initial approximations of
- * the rotation and translation vectors, respectively, and further optimizes
- * them.
- * @param iterationsCount Number of iterations.
- * @param reprojectionError Inlier threshold value used by the RANSAC procedure.
- * The parameter value is the maximum allowed distance between the observed and
- * computed point projections to consider it an inlier.
- * @param minInliersCount Number of inliers. If the algorithm at some stage
- * finds more inliers than minInliersCount, it finishes.
- * @param inliers Output vector that contains indices of inliers in
- * objectPoints and imagePoints.
- * @param flags Method for solving a PnP problem (see "solvePnP").
- *
- * @see org.opencv.calib3d.Calib3d.solvePnPRansac
- */
- public static void solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess, int iterationsCount, float reprojectionError, int minInliersCount, Mat inliers, int flags)
- {
- Mat objectPoints_mat = objectPoints;
- Mat imagePoints_mat = imagePoints;
- Mat distCoeffs_mat = distCoeffs;
- solvePnPRansac_0(objectPoints_mat.nativeObj, imagePoints_mat.nativeObj, cameraMatrix.nativeObj, distCoeffs_mat.nativeObj, rvec.nativeObj, tvec.nativeObj, useExtrinsicGuess, iterationsCount, reprojectionError, minInliersCount, inliers.nativeObj, flags);
+ // C++: void triangulatePoints(Mat projMatr1, Mat projMatr2, Mat projPoints1, Mat projPoints2, Mat& points4D)
+ //
+ //javadoc: triangulatePoints(projMatr1, projMatr2, projPoints1, projPoints2, points4D)
+ public static void triangulatePoints(Mat projMatr1, Mat projMatr2, Mat projPoints1, Mat projPoints2, Mat points4D)
+ {
+
+ triangulatePoints_0(projMatr1.nativeObj, projMatr2.nativeObj, projPoints1.nativeObj, projPoints2.nativeObj, points4D.nativeObj);
+
return;
}
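A compact usage sketch (an editor's illustration; P1/P2 are 3x4 projection matrices, e.g. from stereoRectify, and all inputs must be of float type):

// Triangulate matched image points into homogeneous 3D coordinates.
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Mat;

class TriangulateSketch {
    static Mat triangulate(Mat P1, Mat P2, Mat pts1, Mat pts2) {
        Mat points4D = new Mat(); // 4xN homogeneous output
        Calib3d.triangulatePoints(P1, P2, pts1, pts2, points4D);
        // Divide each column by its 4th coordinate to get Euclidean points.
        return points4D;
    }
}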
-/**
- * Finds an object pose from 3D-2D point correspondences using the RANSAC
- * scheme.
- *
- * The function estimates an object pose given a set of object points, their
- * corresponding image projections, as well as the camera matrix and the
- * distortion coefficients. This function finds such a pose that minimizes
- * reprojection error, that is, the sum of squared distances between the
- * observed projections imagePoints and the projected (using
- * "projectPoints") objectPoints. The use of RANSAC makes the
- * function resistant to outliers. The function is parallelized with the TBB
- * library.
- *
- * @param objectPoints Array of object points in the object coordinate space,
- * 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points.
- * vector<Point3f> can be also passed here.
- * @param imagePoints Array of corresponding image points, 2xN/Nx2 1-channel or
- * 1xN/Nx1 2-channel, where N is the number of points. vector<Point2f>
- * can be also passed here.
- * @param cameraMatrix Input camera matrix A =
- * |fx 0 cx|
- * |0 fy cy|
- * |0  0  1|
- * @param distCoeffs Input vector of distortion coefficients (k_1, k_2, p_1,
- * p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is
- * NULL/empty, the zero distortion coefficients are assumed.
- * @param rvec Output rotation vector (see "Rodrigues") that, together with
- * tvec, brings points from the model coordinate system to the
- * camera coordinate system.
- * @param tvec Output translation vector.
- *
- * @see org.opencv.calib3d.Calib3d.solvePnPRansac
- */
- public static void solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec)
+
+ //
+ // C++: void validateDisparity(Mat& disparity, Mat cost, int minDisparity, int numberOfDisparities, int disp12MaxDisp = 1)
+ //
+
+ //javadoc: validateDisparity(disparity, cost, minDisparity, numberOfDisparities, disp12MaxDisp)
+ public static void validateDisparity(Mat disparity, Mat cost, int minDisparity, int numberOfDisparities, int disp12MaxDisp)
{
- Mat objectPoints_mat = objectPoints;
- Mat imagePoints_mat = imagePoints;
- Mat distCoeffs_mat = distCoeffs;
- solvePnPRansac_1(objectPoints_mat.nativeObj, imagePoints_mat.nativeObj, cameraMatrix.nativeObj, distCoeffs_mat.nativeObj, rvec.nativeObj, tvec.nativeObj);
+
+ validateDisparity_0(disparity.nativeObj, cost.nativeObj, minDisparity, numberOfDisparities, disp12MaxDisp);
+
+ return;
+ }
+ //javadoc: validateDisparity(disparity, cost, minDisparity, numberOfDisparities)
+ public static void validateDisparity(Mat disparity, Mat cost, int minDisparity, int numberOfDisparities)
+ {
+
+ validateDisparity_1(disparity.nativeObj, cost.nativeObj, minDisparity, numberOfDisparities);
+
return;
}
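A minimal sketch of the left-right consistency check wrapped above (an editor's illustration; `disparity` and `cost` are assumed to come from a block-matching run, and failing pixels are invalidated in place):

import org.opencv.calib3d.Calib3d;
import org.opencv.core.Mat;

class ValidateDisparitySketch {
    static void validate(Mat disparity, Mat cost) {
        int minDisparity = 0, numberOfDisparities = 64; // hypothetical search range
        Calib3d.validateDisparity(disparity, cost, minDisparity, numberOfDisparities);
    }
}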
//
- // C++: double stereoCalibrate(vector_Mat objectPoints, vector_Mat imagePoints1, vector_Mat imagePoints2, Mat& cameraMatrix1, Mat& distCoeffs1, Mat& cameraMatrix2, Mat& distCoeffs2, Size imageSize, Mat& R, Mat& T, Mat& E, Mat& F, TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 1e-6), int flags = CALIB_FIX_INTRINSIC)
- //
-
-/**
- * Calibrates the stereo camera.
- *
- * The function estimates the transformation between two cameras making a stereo
- * pair. If you have a stereo camera where the relative position and orientation
- * of two cameras is fixed, and if you computed poses of an object relative to
- * the first camera and to the second camera, (R1, T1) and (R2, T2),
- * respectively (this can be done with "solvePnP"), then those poses definitely
- * relate to each other. This means that, given (R_1,T_1), it
- * should be possible to compute (R_2,T_2). You only need to
- * know the position and orientation of the second camera relative to the first
- * camera. This is what the described function does. It computes
- * (R,T) so that:
- *
- * R_2 = R*R_1
- * T_2 = R*T_1 + T,
- *
- * Optionally, it computes the essential matrix E:
- *
- * E = |0    -T_2  T_1|
- *     |T_2   0   -T_0| * R
- *     |-T_1  T_0  0  |
- *
- * where T_i are components of the translation vector T :
- * T=[T_0, T_1, T_2]^T. And the function can also compute the
- * fundamental matrix F:
- *
- * F = cameraMatrix2^(-T) * E * cameraMatrix1^(-1)
- *
- * Besides the stereo-related information, the function can also perform a full
- * calibration of each of two cameras. However, due to the high dimensionality
- * of the parameter space and noise in the input data, the function can diverge
- * from the correct solution. If the intrinsic parameters can be estimated with
- * high accuracy for each of the cameras individually (for example, using
- * "calibrateCamera"), you are recommended to do so and then pass the
- * CV_CALIB_FIX_INTRINSIC flag to the function along with the
- * computed intrinsic parameters. Otherwise, if all the parameters are estimated
- * at once, it makes sense to restrict some parameters, for example, pass the
- * CV_CALIB_SAME_FOCAL_LENGTH and CV_CALIB_ZERO_TANGENT_DIST
- * flags, which is usually a reasonable assumption.
- *
- * Similarly to "calibrateCamera", the function minimizes the total
- * re-projection error for all the points in all the available views from both
- * cameras. The function returns the final value of the re-projection error.
- *
- * @param objectPoints Vector of vectors of the calibration pattern points.
- * @param imagePoints1 Vector of vectors of the projections of the calibration
- * pattern points, observed by the first camera.
- * @param imagePoints2 Vector of vectors of the projections of the calibration
- * pattern points, observed by the second camera.
- * @param cameraMatrix1 Input/output first camera matrix:
- * |f_x^j  0      c_x^j|
- * |0      f_y^j  c_y^j|
- * |0      0      1    |, j = 0, 1.
- * If any of CV_CALIB_USE_INTRINSIC_GUESS,
- * CV_CALIB_FIX_ASPECT_RATIO, CV_CALIB_FIX_INTRINSIC,
- * or CV_CALIB_FIX_FOCAL_LENGTH are specified, some or all of the
- * matrix components must be initialized. See the flags description for details.
- * @param distCoeffs1 Input/output vector of distortion coefficients (k_1,
- * k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. The
- * output vector length depends on the flags.
- * @param cameraMatrix2 Input/output second camera matrix. The parameter is
- * similar to cameraMatrix1.
- * @param distCoeffs2 Input/output lens distortion coefficients for the second
- * camera. The parameter is similar to distCoeffs1.
- * @param imageSize Size of the image used only to initialize the intrinsic
- * camera matrix.
- * @param R Output rotation matrix between the 1st and the 2nd camera coordinate
- * systems.
- * @param T Output translation vector between the coordinate systems of the
- * cameras.
- * @param E Output essential matrix.
- * @param F Output fundamental matrix.
- * @param criteria Termination criteria for the iterative optimization
- * algorithm.
- * @param flags Different flags that may be zero or a combination of the
- * following values. For example, CV_CALIB_FIX_INTRINSIC fixes
- * cameraMatrix? and distCoeffs? so that only the
- * R, T, E, and F matrices are estimated; if
- * CV_CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the
- * supplied distCoeffs matrix is used. Otherwise, it is set to 0.
- *
- * @see org.opencv.calib3d.Calib3d.stereoCalibrate
- */
- public static double stereoCalibrate(List<Mat> objectPoints, List<Mat> imagePoints1, List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, TermCriteria criteria, int flags)
-/**
- * Calibrates the stereo camera.
- *
- * The function estimates the transformation between two cameras making a stereo
- * pair. If you have a stereo camera where the relative position and orientation
- * of two cameras is fixed, and if you computed poses of an object relative to
- * the first camera and to the second camera, (R1, T1) and (R2, T2),
- * respectively (this can be done with "solvePnP"), then those poses definitely
- * relate to each other. This means that, given (R_1,T_1), it
- * should be possible to compute (R_2,T_2). You only need to
- * know the position and orientation of the second camera relative to the first
- * camera. This is what the described function does. It computes
- * (R,T) so that:
- *
- * R_2 = R*R_1
- * T_2 = R*T_1 + T,
- *
- * Optionally, it computes the essential matrix E:
- *
- * E = |0    -T_2  T_1|
- *     |T_2   0   -T_0| * R
- *     |-T_1  T_0  0  |
- *
- * where T_i are components of the translation vector T :
- * T=[T_0, T_1, T_2]^T. And the function can also compute the
- * fundamental matrix F:
- *
- * F = cameraMatrix2^(-T) * E * cameraMatrix1^(-1)
- *
- * Besides the stereo-related information, the function can also perform a full
- * calibration of each of two cameras. However, due to the high dimensionality
- * of the parameter space and noise in the input data, the function can diverge
- * from the correct solution. If the intrinsic parameters can be estimated with
- * high accuracy for each of the cameras individually (for example, using
- * "calibrateCamera"), you are recommended to do so and then pass the
- * CV_CALIB_FIX_INTRINSIC flag to the function along with the
- * computed intrinsic parameters. Otherwise, if all the parameters are estimated
- * at once, it makes sense to restrict some parameters, for example, pass the
- * CV_CALIB_SAME_FOCAL_LENGTH and CV_CALIB_ZERO_TANGENT_DIST
- * flags, which is usually a reasonable assumption.
- *
- * Similarly to "calibrateCamera", the function minimizes the total
- * re-projection error for all the points in all the available views from both
- * cameras. The function returns the final value of the re-projection error.
- *
- * @param objectPoints Vector of vectors of the calibration pattern points.
- * @param imagePoints1 Vector of vectors of the projections of the calibration
- * pattern points, observed by the first camera.
- * @param imagePoints2 Vector of vectors of the projections of the calibration
- * pattern points, observed by the second camera.
- * @param cameraMatrix1 Input/output first camera matrix.
- * @param distCoeffs1 Input/output vector of distortion coefficients (k_1,
- * k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. The
- * output vector length depends on the flags.
- * @param cameraMatrix2 Input/output second camera matrix. The parameter is
- * similar to cameraMatrix1.
- * @param distCoeffs2 Input/output lens distortion coefficients for the second
- * camera. The parameter is similar to distCoeffs1.
- * @param imageSize Size of the image used only to initialize the intrinsic
- * camera matrix.
- * @param R Output rotation matrix between the 1st and the 2nd camera coordinate
- * systems.
- * @param T Output translation vector between the coordinate systems of the
- * cameras.
- * @param E Output essential matrix.
- * @param F Output fundamental matrix.
- *
- * @see org.opencv.calib3d.Calib3d.stereoCalibrate
- */
- public static double stereoCalibrate(List<Mat> objectPoints, List<Mat> imagePoints1, List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F)
-/**
- * Computes rectification transforms for each head of a calibrated stereo
- * camera.
- *
- * The function computes the rotation matrices for each camera that (virtually)
- * make both camera image planes the same plane. Consequently, this makes all
- * the epipolar lines parallel and thus simplifies the dense stereo
- * correspondence problem. The function takes the matrices computed by
- * "stereoCalibrate" as input. As output, it provides two rotation matrices and
- * also two projection matrices in the new coordinates. The function
- * distinguishes the following two cases:
- *
- * 1. Horizontal stereo, where the new projection matrices look like
- *
- * P1 = |f 0 cx_1 0|
- *      |0 f cy   0|
- *      |0 0 1    0|
- *
- * P2 = |f 0 cx_2 T_x*f|
- *      |0 f cy   0    |
- *      |0 0 1    0    |,
- *
- * where T_x is a horizontal shift between the cameras and
- * cx_1=cx_2 if CV_CALIB_ZERO_DISPARITY is set.
- *
- * 2. Vertical stereo, where the new projection matrices look like
- *
- * P1 = |f 0 cx   0|
- *      |0 f cy_1 0|
- *      |0 0 1    0|
- *
- * P2 = |f 0 cx   0    |
- *      |0 f cy_2 T_y*f|
- *      |0 0 1    0    |,
- *
- * where T_y is a vertical shift between the cameras and
- * cy_1=cy_2 if CALIB_ZERO_DISPARITY is set.
- *
- * As you can see, the first three columns of P1 and
- * P2 will effectively be the new "rectified" camera matrices.
- * The matrices, together with R1 and R2, can then be
- * passed to "initUndistortRectifyMap" to initialize the rectification map for
- * each camera.
- *
- * See below the screenshot from the stereo_calib.cpp sample. Some
- * red horizontal lines pass through the corresponding image regions. This means
- * that the images are well rectified, which is what most stereo correspondence
- * algorithms rely on. The green rectangles are roi1 and
- * roi2. You see that their interiors are all valid pixels.
- *
- * @param flags Operation flags that may be zero or CV_CALIB_ZERO_DISPARITY.
- * If the flag is set, the function makes the principal points of each camera
- * have the same pixel coordinates in the rectified views. And if the flag is
- * not set, the function may still shift the images in the horizontal or
- * vertical direction (depending on the orientation of epipolar lines) to
- * maximize the useful image area.
- * @param alpha Free scaling parameter. If it is -1 or absent, the function
- * performs the default scaling. Otherwise, the parameter should be between 0
- * and 1. alpha=0 means that the rectified images are zoomed and
- * shifted so that only valid pixels are visible (no black areas after
- * rectification). alpha=1 means that the rectified image is
- * decimated and shifted so that all the pixels from the original images from
- * the cameras are retained in the rectified images (no source image pixels are
- * lost). Obviously, any intermediate value yields an intermediate result
- * between those two extreme cases.
- * @param newImageSize New image resolution after rectification. The same size
- * should be passed to "initUndistortRectifyMap" (see the stereo_calib.cpp
- * sample in OpenCV samples directory). When (0,0) is passed (default), it is
- * set to the original imageSize. Setting it to a larger value can
- * help you preserve details in the original image, especially when there is a
- * big radial distortion.
- * @param validPixROI1 Optional output rectangles inside the rectified images
- * where all the pixels are valid. If alpha=0, the ROIs cover the
- * whole images. Otherwise, they are likely to be smaller (see the picture
- * below).
- * @param validPixROI2 Optional output rectangles inside the rectified images
- * where all the pixels are valid. If alpha=0, the ROIs cover the
- * whole images. Otherwise, they are likely to be smaller (see the picture
- * below).
- *
- * @see org.opencv.calib3d.Calib3d.stereoRectify
- */
- public static void stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags, double alpha, Size newImageSize, Rect validPixROI1, Rect validPixROI2)
+ //javadoc: estimateNewCameraMatrixForUndistortRectify(K, D, image_size, R, P, balance, new_size, fov_scale)
+ public static void estimateNewCameraMatrixForUndistortRectify(Mat K, Mat D, Size image_size, Mat R, Mat P, double balance, Size new_size, double fov_scale)
{
- double[] validPixROI1_out = new double[4];
- double[] validPixROI2_out = new double[4];
- stereoRectify_0(cameraMatrix1.nativeObj, distCoeffs1.nativeObj, cameraMatrix2.nativeObj, distCoeffs2.nativeObj, imageSize.width, imageSize.height, R.nativeObj, T.nativeObj, R1.nativeObj, R2.nativeObj, P1.nativeObj, P2.nativeObj, Q.nativeObj, flags, alpha, newImageSize.width, newImageSize.height, validPixROI1_out, validPixROI2_out);
- if(validPixROI1!=null){ validPixROI1.x = (int)validPixROI1_out[0]; validPixROI1.y = (int)validPixROI1_out[1]; validPixROI1.width = (int)validPixROI1_out[2]; validPixROI1.height = (int)validPixROI1_out[3]; }
- if(validPixROI2!=null){ validPixROI2.x = (int)validPixROI2_out[0]; validPixROI2.y = (int)validPixROI2_out[1]; validPixROI2.width = (int)validPixROI2_out[2]; validPixROI2.height = (int)validPixROI2_out[3]; }
+
+ estimateNewCameraMatrixForUndistortRectify_0(K.nativeObj, D.nativeObj, image_size.width, image_size.height, R.nativeObj, P.nativeObj, balance, new_size.width, new_size.height, fov_scale);
+
return;
}
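A short sketch of the fisheye-style workflow wrapped in this file (an editor's illustration; it relies on the estimateNewCameraMatrixForUndistortRectify and initUndistortRectifyMap wrappers added by this patch, and the class name is hypothetical):

// Estimate a new camera matrix, then build the undistortion/rectification maps.
import org.opencv.calib3d.Calib3d;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Size;

class FisheyeUndistortSketch {
    static void buildMaps(Mat K, Mat D, Size imageSize) {
        Mat R = Mat.eye(3, 3, CvType.CV_64F); // no extra rotation
        Mat P = new Mat();
        // balance=0 crops to valid pixels; balance=1 keeps the full field of view.
        Calib3d.estimateNewCameraMatrixForUndistortRectify(K, D, imageSize, R, P, 0.0, imageSize, 1.0);
        Mat map1 = new Mat(), map2 = new Mat();
        Calib3d.initUndistortRectifyMap(K, D, R, P, imageSize, CvType.CV_16SC2, map1, map2);
        // The maps are then applied with org.opencv.imgproc.Imgproc.remap.
    }
}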
-/**
- * Computes rectification transforms for each head of a calibrated stereo
- * camera.
- *
- * The function computes the rotation matrices for each camera that (virtually)
- * make both camera image planes the same plane. Consequently, this makes all
- * the epipolar lines parallel and thus simplifies the dense stereo
- * correspondence problem. The function takes the matrices computed by
- * "stereoCalibrate" as input. As output, it provides two rotation matrices and
- * also two projection matrices in the new coordinates. The function
- * distinguishes the following two cases:
- *
- * 1. Horizontal stereo, where the new projection matrices look like
- *
- * P1 = |f 0 cx_1 0|
- *      |0 f cy   0|
- *      |0 0 1    0|
- *
- * P2 = |f 0 cx_2 T_x*f|
- *      |0 f cy   0    |
- *      |0 0 1    0    |,
- *
- * where T_x is a horizontal shift between the cameras and
- * cx_1=cx_2 if CV_CALIB_ZERO_DISPARITY is set.
- *
- * 2. Vertical stereo, where the new projection matrices look like
- *
- * P1 = |f 0 cx   0|
- *      |0 f cy_1 0|
- *      |0 0 1    0|
- *
- * P2 = |f 0 cx   0    |
- *      |0 f cy_2 T_y*f|
- *      |0 0 1    0    |,
- *
- * where T_y is a vertical shift between the cameras and
- * cy_1=cy_2 if CALIB_ZERO_DISPARITY is set.
- *
- * As you can see, the first three columns of P1 and
- * P2 will effectively be the new "rectified" camera matrices.
- * The matrices, together with R1 and R2, can then be
- * passed to "initUndistortRectifyMap" to initialize the rectification map for
- * each camera.
- *
- * See below the screenshot from the stereo_calib.cpp sample. Some
- * red horizontal lines pass through the corresponding image regions. This means
- * that the images are well rectified, which is what most stereo correspondence
- * algorithms rely on. The green rectangles are roi1 and
- * roi2. You see that their interiors are all valid pixels.
- *
- * @see org.opencv.calib3d.Calib3d.stereoRectify
- */
- public static void stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q)
-/**
- * Computes a rectification transform for an uncalibrated stereo camera.
- *
- * The function computes the rectification transformations without knowing
- * intrinsic parameters of the cameras and their relative position in the space,
- * which explains the suffix "uncalibrated". Another related difference from
- * "stereoRectify" is that the function outputs not the rectification
- * transformations in the object (3D) space, but the planar perspective
- * transformations encoded by the homography matrices H1 and
- * H2. The function implements the algorithm [Hartley99].
- *
- * Note:
- *
- * While the algorithm does not need to know the intrinsic parameters of the
- * cameras, it heavily depends on the epipolar geometry. Therefore, if the
- * camera lenses have a significant distortion, it would be better to correct it
- * before computing the fundamental matrix and calling this function. For
- * example, distortion coefficients can be estimated for each head of stereo
- * camera separately by using "calibrateCamera". Then, the images can be
- * corrected using "undistort", or just the point coordinates can be corrected
- * with "undistortPoints".
- *
- * @param points1 Array of feature points in the first image.
- * @param points2 The corresponding points in the second image. The same formats
- * as in "findFundamentalMat" are supported.
- * @param F Input fundamental matrix. It can be computed from the same set of
- * point pairs using "findFundamentalMat".
- * @param imgSize Size of the image.
- * @param H1 Output rectification homography matrix for the first image.
- * @param H2 Output rectification homography matrix for the second image.
- * @param threshold Optional threshold used to filter out the outliers. If the
- * parameter is greater than zero, all the point pairs that do not comply with
- * the epipolar geometry (that is, the points for which
- * |points2[i]^T*F*points1[i]| > threshold) are rejected prior to computing the
- * homographies. Otherwise, all the points are considered inliers.
- *
- * @see org.opencv.calib3d.Calib3d.stereoRectifyUncalibrated
- */
- public static boolean stereoRectifyUncalibrated(Mat points1, Mat points2, Mat F, Size imgSize, Mat H1, Mat H2, double threshold)
+ //javadoc: initUndistortRectifyMap(K, D, R, P, size, m1type, map1, map2)
+ public static void initUndistortRectifyMap(Mat K, Mat D, Mat R, Mat P, Size size, int m1type, Mat map1, Mat map2)
 {
+
+ initUndistortRectifyMap_0(K.nativeObj, D.nativeObj, R.nativeObj, P.nativeObj, size.width, size.height, m1type, map1.nativeObj, map2.nativeObj);
+
+ return;
+ }
- boolean retVal = stereoRectifyUncalibrated_0(points1.nativeObj, points2.nativeObj, F.nativeObj, imgSize.width, imgSize.height, H1.nativeObj, H2.nativeObj, threshold);
- return retVal;
- }
+
+ //
+ // C++: void projectPoints(vector_Point3f objectPoints, vector_Point2f& imagePoints, Mat rvec, Mat tvec, Mat K, Mat D, double alpha = 0, Mat& jacobian = Mat())
+ //
-/**
- * Computes a rectification transform for an uncalibrated stereo camera.
- *
- * The function computes the rectification transformations without knowing
- * intrinsic parameters of the cameras and their relative position in the space,
- * which explains the suffix "uncalibrated". Another related difference from
- * "stereoRectify" is that the function outputs not the rectification
- * transformations in the object (3D) space, but the planar perspective
- * transformations encoded by the homography matrices H1 and
- * H2. The function implements the algorithm [Hartley99].
- *
- * Note:
- *
- * While the algorithm does not need to know the intrinsic parameters of the
- * cameras, it heavily depends on the epipolar geometry. Therefore, if the
- * camera lenses have a significant distortion, it would be better to correct it
- * before computing the fundamental matrix and calling this function. For
- * example, distortion coefficients can be estimated for each head of stereo
- * camera separately by using "calibrateCamera". Then, the images can be
- * corrected using "undistort", or just the point coordinates can be corrected
- * with "undistortPoints".
- *
- * @param points1 Array of feature points in the first image.
- * @param points2 The corresponding points in the second image. The same formats
- * as in "findFundamentalMat" are supported.
- * @param F Input fundamental matrix. It can be computed from the same set of
- * point pairs using "findFundamentalMat".
- * @param imgSize Size of the image.
- * @param H1 Output rectification homography matrix for the first image.
- * @param H2 Output rectification homography matrix for the second image.
- *
- * @see org.opencv.calib3d.Calib3d.stereoRectifyUncalibrated
- */
- public static boolean stereoRectifyUncalibrated(Mat points1, Mat points2, Mat F, Size imgSize, Mat H1, Mat H2)
+ //javadoc: projectPoints(objectPoints, imagePoints, rvec, tvec, K, D, alpha, jacobian)
+ public static void projectPoints(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat rvec, Mat tvec, Mat K, Mat D, double alpha, Mat jacobian)
 {
+ Mat objectPoints_mat = objectPoints;
+ Mat imagePoints_mat = imagePoints;
+ projectPoints_2(objectPoints_mat.nativeObj, imagePoints_mat.nativeObj, rvec.nativeObj, tvec.nativeObj, K.nativeObj, D.nativeObj, alpha, jacobian.nativeObj);
+
+ return;
+ }
- boolean retVal = stereoRectifyUncalibrated_1(points1.nativeObj, points2.nativeObj, F.nativeObj, imgSize.width, imgSize.height, H1.nativeObj, H2.nativeObj);
-
- return retVal;
+ //javadoc: projectPoints(objectPoints, imagePoints, rvec, tvec, K, D)
+ public static void projectPoints(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat rvec, Mat tvec, Mat K, Mat D)
+ {
+ Mat objectPoints_mat = objectPoints;
+ Mat imagePoints_mat = imagePoints;
+ projectPoints_3(objectPoints_mat.nativeObj, imagePoints_mat.nativeObj, rvec.nativeObj, tvec.nativeObj, K.nativeObj, D.nativeObj);
+
+ return;
+ }
 //
- // C++: void triangulatePoints(Mat projMatr1, Mat projMatr2, Mat projPoints1, Mat projPoints2, Mat& points4D)
+ // C++: void stereoRectify(Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat tvec, Mat& R1, Mat& R2, Mat& P1, Mat& P2, Mat& Q, int flags, Size newImageSize = Size(), double balance = 0.0, double fov_scale = 1.0)
 //
-/**
- * Reconstructs points by triangulation.
- * - *The function reconstructs 3-dimensional points (in homogeneous coordinates) - * by using their observations with a stereo camera. Projections matrices can be - * obtained from "stereoRectify".
- * - *Note:
- * - *Keep in mind that all input data should be of float type in order for this - * function to work.
- * - * @param projMatr1 3x4 projection matrix of the first camera. - * @param projMatr2 3x4 projection matrix of the second camera. - * @param projPoints1 2xN array of feature points in the first image. In case of - * c++ version it can be also a vector of feature points or two-channel matrix - * of size 1xN or Nx1. - * @param projPoints2 2xN array of corresponding points in the second image. In - * case of c++ version it can be also a vector of feature points or two-channel - * matrix of size 1xN or Nx1. - * @param points4D 4xN array of reconstructed points in homogeneous coordinates. - * - * @see org.opencv.calib3d.Calib3d.triangulatePoints - * @see org.opencv.calib3d.Calib3d#reprojectImageTo3D - */ - public static void triangulatePoints(Mat projMatr1, Mat projMatr2, Mat projPoints1, Mat projPoints2, Mat points4D) + //javadoc: stereoRectify(K1, D1, K2, D2, imageSize, R, tvec, R1, R2, P1, P2, Q, flags, newImageSize, balance, fov_scale) + public static void stereoRectify(Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat tvec, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags, Size newImageSize, double balance, double fov_scale) { + + stereoRectify_2(K1.nativeObj, D1.nativeObj, K2.nativeObj, D2.nativeObj, imageSize.width, imageSize.height, R.nativeObj, tvec.nativeObj, R1.nativeObj, R2.nativeObj, P1.nativeObj, P2.nativeObj, Q.nativeObj, flags, newImageSize.width, newImageSize.height, balance, fov_scale); + + return; + } - triangulatePoints_0(projMatr1.nativeObj, projMatr2.nativeObj, projPoints1.nativeObj, projPoints2.nativeObj, points4D.nativeObj); - + //javadoc: stereoRectify(K1, D1, K2, D2, imageSize, R, tvec, R1, R2, P1, P2, Q, flags) + public static void stereoRectify(Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat tvec, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags) + { + + stereoRectify_3(K1.nativeObj, D1.nativeObj, K2.nativeObj, D2.nativeObj, imageSize.width, imageSize.height, R.nativeObj, tvec.nativeObj, R1.nativeObj, R2.nativeObj, P1.nativeObj, P2.nativeObj, Q.nativeObj, flags); + return; } // - // C++: void validateDisparity(Mat& disparity, Mat cost, int minDisparity, int numberOfDisparities, int disp12MaxDisp = 1) + // C++: void undistortImage(Mat distorted, Mat& undistorted, Mat K, Mat D, Mat Knew = cv::Mat(), Size new_size = Size()) // - public static void validateDisparity(Mat disparity, Mat cost, int minDisparity, int numberOfDisparities, int disp12MaxDisp) + //javadoc: undistortImage(distorted, undistorted, K, D, Knew, new_size) + public static void undistortImage(Mat distorted, Mat undistorted, Mat K, Mat D, Mat Knew, Size new_size) { - - validateDisparity_0(disparity.nativeObj, cost.nativeObj, minDisparity, numberOfDisparities, disp12MaxDisp); - + + undistortImage_0(distorted.nativeObj, undistorted.nativeObj, K.nativeObj, D.nativeObj, Knew.nativeObj, new_size.width, new_size.height); + return; } - public static void validateDisparity(Mat disparity, Mat cost, int minDisparity, int numberOfDisparities) + //javadoc: undistortImage(distorted, undistorted, K, D) + public static void undistortImage(Mat distorted, Mat undistorted, Mat K, Mat D) { + + undistortImage_1(distorted.nativeObj, undistorted.nativeObj, K.nativeObj, D.nativeObj); + + return; + } - validateDisparity_1(disparity.nativeObj, cost.nativeObj, minDisparity, numberOfDisparities); + // + // C++: void undistortPoints(Mat distorted, Mat& undistorted, Mat K, Mat D, Mat R = Mat(), Mat P = Mat()) + // + + //javadoc: undistortPoints(distorted, undistorted, K, D, R, P) + public static void 
undistortPoints(Mat distorted, Mat undistorted, Mat K, Mat D, Mat R, Mat P) + { + + undistortPoints_0(distorted.nativeObj, undistorted.nativeObj, K.nativeObj, D.nativeObj, R.nativeObj, P.nativeObj); + + return; + } + + //javadoc: undistortPoints(distorted, undistorted, K, D) + public static void undistortPoints(Mat distorted, Mat undistorted, Mat K, Mat D) + { + + undistortPoints_1(distorted.nativeObj, undistorted.nativeObj, K.nativeObj, D.nativeObj); + return; } + // C++: Mat estimateAffine2D(Mat from, Mat to, Mat& inliers = Mat(), int method = RANSAC, double ransacReprojThreshold = 3, size_t maxIters = 2000, double confidence = 0.99, size_t refineIters = 10) + private static native long estimateAffine2D_0(long from_nativeObj, long to_nativeObj, long inliers_nativeObj, int method, double ransacReprojThreshold, long maxIters, double confidence, long refineIters); + private static native long estimateAffine2D_1(long from_nativeObj, long to_nativeObj); + + // C++: Mat estimateAffinePartial2D(Mat from, Mat to, Mat& inliers = Mat(), int method = RANSAC, double ransacReprojThreshold = 3, size_t maxIters = 2000, double confidence = 0.99, size_t refineIters = 10) + private static native long estimateAffinePartial2D_0(long from_nativeObj, long to_nativeObj, long inliers_nativeObj, int method, double ransacReprojThreshold, long maxIters, double confidence, long refineIters); + private static native long estimateAffinePartial2D_1(long from_nativeObj, long to_nativeObj); + + // C++: Mat findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix, int method = RANSAC, double prob = 0.999, double threshold = 1.0, Mat& mask = Mat()) + private static native long findEssentialMat_0(long points1_nativeObj, long points2_nativeObj, long cameraMatrix_nativeObj, int method, double prob, double threshold, long mask_nativeObj); + private static native long findEssentialMat_1(long points1_nativeObj, long points2_nativeObj, long cameraMatrix_nativeObj, int method, double prob, double threshold); + private static native long findEssentialMat_2(long points1_nativeObj, long points2_nativeObj, long cameraMatrix_nativeObj); + + // C++: Mat findEssentialMat(Mat points1, Mat points2, double focal = 1.0, Point2d pp = Point2d(0, 0), int method = RANSAC, double prob = 0.999, double threshold = 1.0, Mat& mask = Mat()) + private static native long findEssentialMat_3(long points1_nativeObj, long points2_nativeObj, double focal, double pp_x, double pp_y, int method, double prob, double threshold, long mask_nativeObj); + private static native long findEssentialMat_4(long points1_nativeObj, long points2_nativeObj, double focal, double pp_x, double pp_y, int method, double prob, double threshold); + private static native long findEssentialMat_5(long points1_nativeObj, long points2_nativeObj); + + // C++: Mat findFundamentalMat(vector_Point2f points1, vector_Point2f points2, int method = FM_RANSAC, double param1 = 3., double param2 = 0.99, Mat& mask = Mat()) + private static native long findFundamentalMat_0(long points1_mat_nativeObj, long points2_mat_nativeObj, int method, double param1, double param2, long mask_nativeObj); + private static native long findFundamentalMat_1(long points1_mat_nativeObj, long points2_mat_nativeObj, int method, double param1, double param2); + private static native long findFundamentalMat_2(long points1_mat_nativeObj, long points2_mat_nativeObj); + + // C++: Mat findHomography(vector_Point2f srcPoints, vector_Point2f dstPoints, int method = 0, double ransacReprojThreshold = 3, Mat& mask = Mat(), int 
+
+ // C++: Mat estimateAffine2D(Mat from, Mat to, Mat& inliers = Mat(), int method = RANSAC, double ransacReprojThreshold = 3, size_t maxIters = 2000, double confidence = 0.99, size_t refineIters = 10)
+ private static native long estimateAffine2D_0(long from_nativeObj, long to_nativeObj, long inliers_nativeObj, int method, double ransacReprojThreshold, long maxIters, double confidence, long refineIters);
+ private static native long estimateAffine2D_1(long from_nativeObj, long to_nativeObj);
+
+ // C++: Mat estimateAffinePartial2D(Mat from, Mat to, Mat& inliers = Mat(), int method = RANSAC, double ransacReprojThreshold = 3, size_t maxIters = 2000, double confidence = 0.99, size_t refineIters = 10)
+ private static native long estimateAffinePartial2D_0(long from_nativeObj, long to_nativeObj, long inliers_nativeObj, int method, double ransacReprojThreshold, long maxIters, double confidence, long refineIters);
+ private static native long estimateAffinePartial2D_1(long from_nativeObj, long to_nativeObj);
+
+ // C++: Mat findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix, int method = RANSAC, double prob = 0.999, double threshold = 1.0, Mat& mask = Mat())
+ private static native long findEssentialMat_0(long points1_nativeObj, long points2_nativeObj, long cameraMatrix_nativeObj, int method, double prob, double threshold, long mask_nativeObj);
+ private static native long findEssentialMat_1(long points1_nativeObj, long points2_nativeObj, long cameraMatrix_nativeObj, int method, double prob, double threshold);
+ private static native long findEssentialMat_2(long points1_nativeObj, long points2_nativeObj, long cameraMatrix_nativeObj);
+
+ // C++: Mat findEssentialMat(Mat points1, Mat points2, double focal = 1.0, Point2d pp = Point2d(0, 0), int method = RANSAC, double prob = 0.999, double threshold = 1.0, Mat& mask = Mat())
+ private static native long findEssentialMat_3(long points1_nativeObj, long points2_nativeObj, double focal, double pp_x, double pp_y, int method, double prob, double threshold, long mask_nativeObj);
+ private static native long findEssentialMat_4(long points1_nativeObj, long points2_nativeObj, double focal, double pp_x, double pp_y, int method, double prob, double threshold);
+ private static native long findEssentialMat_5(long points1_nativeObj, long points2_nativeObj);
+
+ // C++: Mat findFundamentalMat(vector_Point2f points1, vector_Point2f points2, int method = FM_RANSAC, double param1 = 3., double param2 = 0.99, Mat& mask = Mat())
+ private static native long findFundamentalMat_0(long points1_mat_nativeObj, long points2_mat_nativeObj, int method, double param1, double param2, long mask_nativeObj);
+ private static native long findFundamentalMat_1(long points1_mat_nativeObj, long points2_mat_nativeObj, int method, double param1, double param2);
+ private static native long findFundamentalMat_2(long points1_mat_nativeObj, long points2_mat_nativeObj);
+
+ // C++: Mat findHomography(vector_Point2f srcPoints, vector_Point2f dstPoints, int method = 0, double ransacReprojThreshold = 3, Mat& mask = Mat(), int maxIters = 2000, double confidence = 0.995)
+ private static native long findHomography_0(long srcPoints_mat_nativeObj, long dstPoints_mat_nativeObj, int method, double ransacReprojThreshold, long mask_nativeObj, int maxIters, double confidence);
+ private static native long findHomography_1(long srcPoints_mat_nativeObj, long dstPoints_mat_nativeObj, int method, double ransacReprojThreshold);
+ private static native long findHomography_2(long srcPoints_mat_nativeObj, long dstPoints_mat_nativeObj);
+
+ // C++: Mat getOptimalNewCameraMatrix(Mat cameraMatrix, Mat distCoeffs, Size imageSize, double alpha, Size newImgSize = Size(), Rect* validPixROI = 0, bool centerPrincipalPoint = false)
+ private static native long getOptimalNewCameraMatrix_0(long cameraMatrix_nativeObj, long distCoeffs_nativeObj, double imageSize_width, double imageSize_height, double alpha, double newImgSize_width, double newImgSize_height, double[] validPixROI_out, boolean centerPrincipalPoint);
+ private static native long getOptimalNewCameraMatrix_1(long cameraMatrix_nativeObj, long distCoeffs_nativeObj, double imageSize_width, double imageSize_height, double alpha);
+
+ // C++: Mat initCameraMatrix2D(vector_vector_Point3f objectPoints, vector_vector_Point2f imagePoints, Size imageSize, double aspectRatio = 1.0)
+ private static native long initCameraMatrix2D_0(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, double imageSize_width, double imageSize_height, double aspectRatio);
+ private static native long initCameraMatrix2D_1(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, double imageSize_width, double imageSize_height);
+
+ // C++: Rect getValidDisparityROI(Rect roi1, Rect roi2, int minDisparity, int numberOfDisparities, int SADWindowSize)
+ private static native double[] getValidDisparityROI_0(int roi1_x, int roi1_y, int roi1_width, int roi1_height, int roi2_x, int roi2_y, int roi2_width, int roi2_height, int minDisparity, int numberOfDisparities, int SADWindowSize);
+
 // C++: Vec3d RQDecomp3x3(Mat src, Mat& mtxR, Mat& mtxQ, Mat& Qx = Mat(), Mat& Qy = Mat(), Mat& Qz = Mat())
 private static native double[] RQDecomp3x3_0(long src_nativeObj, long mtxR_nativeObj, long mtxQ_nativeObj, long Qx_nativeObj, long Qy_nativeObj, long Qz_nativeObj);
 private static native double[] RQDecomp3x3_1(long src_nativeObj, long mtxR_nativeObj, long mtxQ_nativeObj);

- // C++: void Rodrigues(Mat src, Mat& dst, Mat& jacobian = Mat())
- private static native void Rodrigues_0(long src_nativeObj, long dst_nativeObj, long jacobian_nativeObj);
- private static native void Rodrigues_1(long src_nativeObj, long dst_nativeObj);
+ // C++: bool findChessboardCorners(Mat image, Size patternSize, vector_Point2f& corners, int flags = CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE)
+ private static native boolean findChessboardCorners_0(long image_nativeObj, double patternSize_width, double patternSize_height, long corners_mat_nativeObj, int flags);
+ private static native boolean findChessboardCorners_1(long image_nativeObj, double patternSize_width, double patternSize_height, long corners_mat_nativeObj);
+
+ // C++: bool findCirclesGrid(Mat image, Size patternSize, Mat& centers, int flags = CALIB_CB_SYMMETRIC_GRID, Ptr_FeatureDetector blobDetector = SimpleBlobDetector::create())
+ private static native boolean findCirclesGrid_0(long image_nativeObj, double patternSize_width, double patternSize_height, long centers_nativeObj, int flags);
+ private static native boolean findCirclesGrid_1(long image_nativeObj, double patternSize_width, double patternSize_height, long centers_nativeObj);
+
+ // C++: bool solvePnP(vector_Point3f objectPoints, vector_Point2f imagePoints, Mat cameraMatrix, vector_double distCoeffs, Mat& rvec, Mat& tvec, bool useExtrinsicGuess = false, int flags = SOLVEPNP_ITERATIVE)
+ private static native boolean solvePnP_0(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, long cameraMatrix_nativeObj, long distCoeffs_mat_nativeObj, long rvec_nativeObj, long tvec_nativeObj, boolean useExtrinsicGuess, int flags);
+ private static native boolean solvePnP_1(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, long cameraMatrix_nativeObj, long distCoeffs_mat_nativeObj, long rvec_nativeObj, long tvec_nativeObj);

- // C++: double calibrateCamera(vector_Mat objectPoints, vector_Mat imagePoints, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs, vector_Mat& rvecs, vector_Mat& tvecs, int flags = 0, TermCriteria criteria = TermCriteria( TermCriteria::COUNT+TermCriteria::EPS, 30, DBL_EPSILON))
+ // C++: bool solvePnPRansac(vector_Point3f objectPoints, vector_Point2f imagePoints, Mat cameraMatrix, vector_double distCoeffs, Mat& rvec, Mat& tvec, bool useExtrinsicGuess = false, int iterationsCount = 100, float reprojectionError = 8.0, double confidence = 0.99, Mat& inliers = Mat(), int flags = SOLVEPNP_ITERATIVE)
+ private static native boolean solvePnPRansac_0(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, long cameraMatrix_nativeObj, long distCoeffs_mat_nativeObj, long rvec_nativeObj, long tvec_nativeObj, boolean useExtrinsicGuess, int iterationsCount, float reprojectionError, double confidence, long inliers_nativeObj, int flags);
+ private static native boolean solvePnPRansac_1(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, long cameraMatrix_nativeObj, long distCoeffs_mat_nativeObj, long rvec_nativeObj, long tvec_nativeObj);
+
+ // C++: bool stereoRectifyUncalibrated(Mat points1, Mat points2, Mat F, Size imgSize, Mat& H1, Mat& H2, double threshold = 5)
+ private static native boolean stereoRectifyUncalibrated_0(long points1_nativeObj, long points2_nativeObj, long F_nativeObj, double imgSize_width, double imgSize_height, long H1_nativeObj, long H2_nativeObj, double threshold);
+ private static native boolean stereoRectifyUncalibrated_1(long points1_nativeObj, long points2_nativeObj, long F_nativeObj, double imgSize_width, double imgSize_height, long H1_nativeObj, long H2_nativeObj);
+
+ // C++: double calibrateCamera(vector_Mat objectPoints, vector_Mat imagePoints, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs, vector_Mat& rvecs, vector_Mat& tvecs, Mat& stdDeviationsIntrinsics, Mat& stdDeviationsExtrinsics, Mat& perViewErrors, int flags = 0, TermCriteria criteria = TermCriteria( TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON))
+ private static native double calibrateCameraExtended_0(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, double imageSize_width, double imageSize_height, long cameraMatrix_nativeObj, long distCoeffs_nativeObj, long rvecs_mat_nativeObj, long tvecs_mat_nativeObj, long stdDeviationsIntrinsics_nativeObj, long stdDeviationsExtrinsics_nativeObj, long perViewErrors_nativeObj, int flags, int criteria_type, int criteria_maxCount, double criteria_epsilon);
+ private static native double calibrateCameraExtended_1(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, double imageSize_width, double imageSize_height, long cameraMatrix_nativeObj, long distCoeffs_nativeObj, long rvecs_mat_nativeObj, long tvecs_mat_nativeObj, long stdDeviationsIntrinsics_nativeObj, long stdDeviationsExtrinsics_nativeObj, long perViewErrors_nativeObj, int flags);
+ private static native double calibrateCameraExtended_2(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, double imageSize_width, double imageSize_height, long cameraMatrix_nativeObj, long distCoeffs_nativeObj, long rvecs_mat_nativeObj, long tvecs_mat_nativeObj, long stdDeviationsIntrinsics_nativeObj, long stdDeviationsExtrinsics_nativeObj, long perViewErrors_nativeObj);
+
+ // C++: double calibrateCamera(vector_Mat objectPoints, vector_Mat imagePoints, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs, vector_Mat& rvecs, vector_Mat& tvecs, int flags = 0, TermCriteria criteria = TermCriteria( TermCriteria::COUNT + TermCriteria::EPS, 30, DBL_EPSILON))
 private static native double calibrateCamera_0(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, double imageSize_width, double imageSize_height, long cameraMatrix_nativeObj, long distCoeffs_nativeObj, long rvecs_mat_nativeObj, long tvecs_mat_nativeObj, int flags, int criteria_type, int criteria_maxCount, double criteria_epsilon);
 private static native double calibrateCamera_1(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, double imageSize_width, double imageSize_height, long cameraMatrix_nativeObj, long distCoeffs_nativeObj, long rvecs_mat_nativeObj, long tvecs_mat_nativeObj, int flags);
 private static native double calibrateCamera_2(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, double imageSize_width, double imageSize_height, long cameraMatrix_nativeObj, long distCoeffs_nativeObj, long rvecs_mat_nativeObj, long tvecs_mat_nativeObj);

+ // C++: double sampsonDistance(Mat pt1, Mat pt2, Mat F)
+ private static native double sampsonDistance_0(long pt1_nativeObj, long pt2_nativeObj, long F_nativeObj);
+
+ // C++: double stereoCalibrate(vector_Mat objectPoints, vector_Mat imagePoints1, vector_Mat imagePoints2, Mat& cameraMatrix1, Mat& distCoeffs1, Mat& cameraMatrix2, Mat& distCoeffs2, Size imageSize, Mat& R, Mat& T, Mat& E, Mat& F, int flags = CALIB_FIX_INTRINSIC, TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 1e-6))
+ private static native double stereoCalibrate_0(long objectPoints_mat_nativeObj, long imagePoints1_mat_nativeObj, long imagePoints2_mat_nativeObj, long cameraMatrix1_nativeObj, long distCoeffs1_nativeObj, long cameraMatrix2_nativeObj, long distCoeffs2_nativeObj, double imageSize_width, double imageSize_height, long R_nativeObj, long T_nativeObj, long E_nativeObj, long F_nativeObj, int flags, int criteria_type, int criteria_maxCount, double criteria_epsilon);
+ private static native double stereoCalibrate_1(long objectPoints_mat_nativeObj, long imagePoints1_mat_nativeObj, long imagePoints2_mat_nativeObj, long cameraMatrix1_nativeObj, long distCoeffs1_nativeObj, long cameraMatrix2_nativeObj, long distCoeffs2_nativeObj, double imageSize_width, double imageSize_height, long R_nativeObj, long T_nativeObj, long E_nativeObj, long F_nativeObj, int flags);
+ private static native double stereoCalibrate_2(long objectPoints_mat_nativeObj, long imagePoints1_mat_nativeObj, long imagePoints2_mat_nativeObj, long cameraMatrix1_nativeObj, long distCoeffs1_nativeObj, long cameraMatrix2_nativeObj, long distCoeffs2_nativeObj, double imageSize_width, double imageSize_height, long R_nativeObj, long T_nativeObj, long E_nativeObj, long F_nativeObj);
+
+ // C++: double calibrate(vector_Mat objectPoints, vector_Mat imagePoints, Size image_size, Mat& K, Mat& D, vector_Mat& rvecs, vector_Mat& tvecs, int flags = 0, TermCriteria criteria = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 100, DBL_EPSILON))
+ private static native double calibrate_0(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, double image_size_width, double image_size_height, long K_nativeObj, long D_nativeObj, long rvecs_mat_nativeObj, long tvecs_mat_nativeObj, int flags, int criteria_type, int criteria_maxCount, double criteria_epsilon);
+ private static native double calibrate_1(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, double image_size_width, double image_size_height, long K_nativeObj, long D_nativeObj, long rvecs_mat_nativeObj, long tvecs_mat_nativeObj, int flags);
+ private static native double calibrate_2(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, double image_size_width, double image_size_height, long K_nativeObj, long D_nativeObj, long rvecs_mat_nativeObj, long tvecs_mat_nativeObj);
+
+ // C++: double stereoCalibrate(vector_Mat objectPoints, vector_Mat imagePoints1, vector_Mat imagePoints2, Mat& K1, Mat& D1, Mat& K2, Mat& D2, Size imageSize, Mat& R, Mat& T, int flags = fisheye::CALIB_FIX_INTRINSIC, TermCriteria criteria = TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 100, DBL_EPSILON))
+ private static native double stereoCalibrate_3(long objectPoints_mat_nativeObj, long imagePoints1_mat_nativeObj, long imagePoints2_mat_nativeObj, long K1_nativeObj, long D1_nativeObj, long K2_nativeObj, long D2_nativeObj, double imageSize_width, double imageSize_height, long R_nativeObj, long T_nativeObj, int flags, int criteria_type, int criteria_maxCount, double criteria_epsilon);
+ private static native double stereoCalibrate_4(long objectPoints_mat_nativeObj, long imagePoints1_mat_nativeObj, long imagePoints2_mat_nativeObj, long K1_nativeObj, long D1_nativeObj, long K2_nativeObj, long D2_nativeObj, double imageSize_width, double imageSize_height, long R_nativeObj, long T_nativeObj, int flags);
+ private static native double stereoCalibrate_5(long objectPoints_mat_nativeObj, long imagePoints1_mat_nativeObj, long imagePoints2_mat_nativeObj, long K1_nativeObj, long D1_nativeObj, long K2_nativeObj, long D2_nativeObj, double imageSize_width, double imageSize_height, long R_nativeObj, long T_nativeObj);
+
+ // C++: float rectify3Collinear(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat cameraMatrix3, Mat distCoeffs3, vector_Mat imgpt1, vector_Mat imgpt3, Size imageSize, Mat R12, Mat T12, Mat R13, Mat T13, Mat& R1, Mat& R2, Mat& R3, Mat& P1, Mat& P2, Mat& P3, Mat& Q, double alpha, Size newImgSize, Rect* roi1, Rect* roi2, int flags)
+ private static native float rectify3Collinear_0(long cameraMatrix1_nativeObj, long distCoeffs1_nativeObj, long cameraMatrix2_nativeObj, long distCoeffs2_nativeObj, long cameraMatrix3_nativeObj, long distCoeffs3_nativeObj, long imgpt1_mat_nativeObj, long imgpt3_mat_nativeObj, double imageSize_width, double imageSize_height, long R12_nativeObj, long T12_nativeObj, long R13_nativeObj, long T13_nativeObj, long R1_nativeObj, long R2_nativeObj, long R3_nativeObj, long P1_nativeObj, long P2_nativeObj, long P3_nativeObj, long Q_nativeObj, double alpha, double newImgSize_width, double newImgSize_height, double[] roi1_out, double[] roi2_out, int flags);
+
+ // C++: int decomposeHomographyMat(Mat H, Mat K, vector_Mat& rotations, vector_Mat& translations, vector_Mat& normals)
+ private static native int decomposeHomographyMat_0(long H_nativeObj, long K_nativeObj, long rotations_mat_nativeObj, long translations_mat_nativeObj, long normals_mat_nativeObj);
+
+ // C++: int estimateAffine3D(Mat src, Mat dst, Mat& out, Mat& inliers, double ransacThreshold = 3, double confidence = 0.99)
+ private static native int estimateAffine3D_0(long src_nativeObj, long dst_nativeObj, long out_nativeObj, long inliers_nativeObj, double ransacThreshold, double confidence);
+ private static native int estimateAffine3D_1(long src_nativeObj, long dst_nativeObj, long out_nativeObj, long inliers_nativeObj);
+
+ // C++: int recoverPose(Mat E, Mat points1, Mat points2, Mat& R, Mat& t, double focal = 1.0, Point2d pp = Point2d(0, 0), Mat& mask = Mat())
+ private static native int recoverPose_0(long E_nativeObj, long points1_nativeObj, long points2_nativeObj, long R_nativeObj, long t_nativeObj, double focal, double pp_x, double pp_y, long mask_nativeObj);
+ private static native int recoverPose_1(long E_nativeObj, long points1_nativeObj, long points2_nativeObj, long R_nativeObj, long t_nativeObj, double focal, double pp_x, double pp_y);
+ private static native int recoverPose_2(long E_nativeObj, long points1_nativeObj, long points2_nativeObj, long R_nativeObj, long t_nativeObj);
+
+ // C++: int recoverPose(Mat E, Mat points1, Mat points2, Mat cameraMatrix, Mat& R, Mat& t, Mat& mask = Mat())
+ private static native int recoverPose_3(long E_nativeObj, long points1_nativeObj, long points2_nativeObj, long cameraMatrix_nativeObj, long R_nativeObj, long t_nativeObj, long mask_nativeObj);
+ private static native int recoverPose_4(long E_nativeObj, long points1_nativeObj, long points2_nativeObj, long cameraMatrix_nativeObj, long R_nativeObj, long t_nativeObj);
+
+ // C++: int recoverPose(Mat E, Mat points1, Mat points2, Mat cameraMatrix, Mat& R, Mat& t, double distanceThresh, Mat& mask = Mat(), Mat& triangulatedPoints = Mat())
+ private static native int recoverPose_5(long E_nativeObj, long points1_nativeObj, long points2_nativeObj, long cameraMatrix_nativeObj, long R_nativeObj, long t_nativeObj, double distanceThresh, long mask_nativeObj, long triangulatedPoints_nativeObj);
+ private static native int recoverPose_6(long E_nativeObj, long points1_nativeObj, long points2_nativeObj, long cameraMatrix_nativeObj, long R_nativeObj, long t_nativeObj, double distanceThresh);
+
+ // C++: int solveP3P(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, vector_Mat& rvecs, vector_Mat& tvecs, int flags)
+ private static native int solveP3P_0(long objectPoints_nativeObj, long imagePoints_nativeObj, long cameraMatrix_nativeObj, long distCoeffs_nativeObj, long rvecs_mat_nativeObj, long tvecs_mat_nativeObj, int flags);
+
+ // C++: void Rodrigues(Mat src, Mat& dst, Mat& jacobian = Mat())
+ private static native void Rodrigues_0(long src_nativeObj, long dst_nativeObj, long jacobian_nativeObj);
+ private static native void Rodrigues_1(long src_nativeObj, long dst_nativeObj);
+
 // C++: void calibrationMatrixValues(Mat cameraMatrix, Size imageSize, double apertureWidth, double apertureHeight, double& fovx, double& fovy, double& focalLength, Point2d& principalPoint, double& aspectRatio)
 private static native void calibrationMatrixValues_0(long cameraMatrix_nativeObj, double imageSize_width, double imageSize_height, double apertureWidth, double apertureHeight, double[] fovx_out, double[] fovy_out, double[] focalLength_out, double[] principalPoint_out, double[] aspectRatio_out);
@@ -3024,6 +1497,9 @@ public static void validateDisparity(Mat disparity, Mat cost, int minDisparity,
 // C++: void correctMatches(Mat F, Mat points1, Mat points2, Mat& newPoints1, Mat& newPoints2)
 private static native void correctMatches_0(long F_nativeObj, long points1_nativeObj, long points2_nativeObj, long newPoints1_nativeObj, long newPoints2_nativeObj);

+ // C++: void decomposeEssentialMat(Mat E, Mat& R1, Mat& R2, Mat& t)
+ private static native void decomposeEssentialMat_0(long E_nativeObj, long R1_nativeObj, long R2_nativeObj, long t_nativeObj);
+
 // C++: void decomposeProjectionMatrix(Mat projMatrix, Mat& cameraMatrix, Mat& rotMatrix, Mat& transVect, Mat& rotMatrixX = Mat(), Mat& rotMatrixY = Mat(), Mat& rotMatrixZ = Mat(), Mat& eulerAngles = Mat())
 private static native void decomposeProjectionMatrix_0(long projMatrix_nativeObj, long cameraMatrix_nativeObj, long rotMatrix_nativeObj, long transVect_nativeObj, long rotMatrixX_nativeObj, long rotMatrixY_nativeObj, long rotMatrixZ_nativeObj, long eulerAngles_nativeObj);
 private static native void decomposeProjectionMatrix_1(long projMatrix_nativeObj, long cameraMatrix_nativeObj, long rotMatrix_nativeObj, long transVect_nativeObj);
@@ -3031,43 +1507,10 @@ public static void validateDisparity(Mat disparity, Mat cost, int minDisparity,
 // C++: void drawChessboardCorners(Mat& image, Size patternSize, vector_Point2f corners, bool patternWasFound)
 private static native void drawChessboardCorners_0(long image_nativeObj, double patternSize_width, double patternSize_height, long corners_mat_nativeObj, boolean patternWasFound);

- // C++: int estimateAffine3D(Mat src, Mat dst, Mat& out, Mat& inliers, double ransacThreshold = 3, double confidence = 0.99)
- private static native int estimateAffine3D_0(long src_nativeObj, long dst_nativeObj, long out_nativeObj, long inliers_nativeObj, double ransacThreshold, double confidence);
- private static native int estimateAffine3D_1(long src_nativeObj, long dst_nativeObj, long out_nativeObj, long inliers_nativeObj);
-
 // C++: void filterSpeckles(Mat& img, double newVal, int maxSpeckleSize, double maxDiff, Mat& buf = Mat())
 private static native void filterSpeckles_0(long img_nativeObj, double newVal, int maxSpeckleSize, double maxDiff, long buf_nativeObj);
 private static native void filterSpeckles_1(long img_nativeObj, double newVal, int maxSpeckleSize, double maxDiff);

- // C++: bool findChessboardCorners(Mat image, Size patternSize, vector_Point2f& corners, int flags = CALIB_CB_ADAPTIVE_THRESH+CALIB_CB_NORMALIZE_IMAGE)
- private static native boolean findChessboardCorners_0(long image_nativeObj, double patternSize_width, double patternSize_height, long corners_mat_nativeObj, int flags);
- private static native boolean findChessboardCorners_1(long image_nativeObj, double patternSize_width, double patternSize_height, long corners_mat_nativeObj);
-
- // C++: bool findCirclesGridDefault(Mat image, Size patternSize, Mat& centers, int flags = CALIB_CB_SYMMETRIC_GRID)
- private static native boolean findCirclesGridDefault_0(long image_nativeObj, double patternSize_width, double patternSize_height, long centers_nativeObj, int flags);
- private static native boolean findCirclesGridDefault_1(long image_nativeObj, double patternSize_width, double patternSize_height, long centers_nativeObj);
-
- // C++: Mat findFundamentalMat(vector_Point2f points1, vector_Point2f points2, int method = FM_RANSAC, double param1 = 3., double param2 = 0.99, Mat& mask = Mat())
- private static native long findFundamentalMat_0(long points1_mat_nativeObj, long points2_mat_nativeObj, int method, double param1, double param2, long mask_nativeObj);
- private static native long findFundamentalMat_1(long points1_mat_nativeObj, long points2_mat_nativeObj, int method, double param1, double param2);
- private static native long findFundamentalMat_2(long points1_mat_nativeObj, long points2_mat_nativeObj);
-
- // C++: Mat findHomography(vector_Point2f srcPoints, vector_Point2f dstPoints, int method = 0, double ransacReprojThreshold = 3, Mat& mask = Mat())
- private static native long findHomography_0(long srcPoints_mat_nativeObj, long dstPoints_mat_nativeObj, int method, double ransacReprojThreshold, long mask_nativeObj);
- private static native long findHomography_1(long srcPoints_mat_nativeObj, long dstPoints_mat_nativeObj, int method, double ransacReprojThreshold);
- private static native long findHomography_2(long srcPoints_mat_nativeObj, long dstPoints_mat_nativeObj);
-
- // C++: Mat getOptimalNewCameraMatrix(Mat cameraMatrix, Mat distCoeffs, Size imageSize, double alpha, Size newImgSize = Size(), Rect* validPixROI = 0, bool centerPrincipalPoint = false)
- private static native long getOptimalNewCameraMatrix_0(long cameraMatrix_nativeObj, long distCoeffs_nativeObj, double imageSize_width, double imageSize_height, double alpha, double newImgSize_width, double newImgSize_height, double[] validPixROI_out, boolean centerPrincipalPoint);
- private static native long getOptimalNewCameraMatrix_1(long cameraMatrix_nativeObj, long distCoeffs_nativeObj, double imageSize_width, double imageSize_height, double alpha);
-
- // C++: Rect getValidDisparityROI(Rect roi1, Rect roi2, int minDisparity, int numberOfDisparities, int SADWindowSize)
- private static native double[] getValidDisparityROI_0(int roi1_x, int roi1_y, int roi1_width, int roi1_height, int roi2_x, int roi2_y, int roi2_width, int roi2_height, int minDisparity, int numberOfDisparities, int SADWindowSize);
-
- // C++: Mat initCameraMatrix2D(vector_vector_Point3f objectPoints, vector_vector_Point2f imagePoints, Size imageSize, double aspectRatio = 1.)
- private static native long initCameraMatrix2D_0(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, double imageSize_width, double imageSize_height, double aspectRatio);
- private static native long initCameraMatrix2D_1(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, double imageSize_width, double imageSize_height);
-
 // C++: void matMulDeriv(Mat A, Mat B, Mat& dABdA, Mat& dABdB)
 private static native void matMulDeriv_0(long A_nativeObj, long B_nativeObj, long dABdA_nativeObj, long dABdB_nativeObj);
@@ -3075,34 +1518,15 @@ public static void validateDisparity(Mat disparity, Mat cost, int minDisparity,
 private static native void projectPoints_0(long objectPoints_mat_nativeObj, long rvec_nativeObj, long tvec_nativeObj, long cameraMatrix_nativeObj, long distCoeffs_mat_nativeObj, long imagePoints_mat_nativeObj, long jacobian_nativeObj, double aspectRatio);
 private static native void projectPoints_1(long objectPoints_mat_nativeObj, long rvec_nativeObj, long tvec_nativeObj, long cameraMatrix_nativeObj, long distCoeffs_mat_nativeObj, long imagePoints_mat_nativeObj);

- // C++: float rectify3Collinear(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat cameraMatrix3, Mat distCoeffs3, vector_Mat imgpt1, vector_Mat imgpt3, Size imageSize, Mat R12, Mat T12, Mat R13, Mat T13, Mat& R1, Mat& R2, Mat& R3, Mat& P1, Mat& P2, Mat& P3, Mat& Q, double alpha, Size newImgSize, Rect* roi1, Rect* roi2, int flags)
- private static native float rectify3Collinear_0(long cameraMatrix1_nativeObj, long distCoeffs1_nativeObj, long cameraMatrix2_nativeObj, long distCoeffs2_nativeObj, long cameraMatrix3_nativeObj, long distCoeffs3_nativeObj, long imgpt1_mat_nativeObj, long imgpt3_mat_nativeObj, double imageSize_width, double imageSize_height, long R12_nativeObj, long T12_nativeObj, long R13_nativeObj, long T13_nativeObj, long R1_nativeObj, long R2_nativeObj, long R3_nativeObj, long P1_nativeObj, long P2_nativeObj, long P3_nativeObj, long Q_nativeObj, double alpha, double newImgSize_width, double newImgSize_height, double[] roi1_out, double[] roi2_out, int flags);
-
 // C++: void reprojectImageTo3D(Mat disparity, Mat& _3dImage, Mat Q, bool handleMissingValues = false, int ddepth = -1)
 private static native void reprojectImageTo3D_0(long disparity_nativeObj, long _3dImage_nativeObj, long Q_nativeObj, boolean handleMissingValues, int ddepth);
 private static native void reprojectImageTo3D_1(long disparity_nativeObj, long _3dImage_nativeObj, long Q_nativeObj, boolean handleMissingValues);
 private static native void reprojectImageTo3D_2(long disparity_nativeObj, long _3dImage_nativeObj, long Q_nativeObj);

- // C++: bool solvePnP(vector_Point3f objectPoints, vector_Point2f imagePoints, Mat cameraMatrix, vector_double distCoeffs, Mat& rvec, Mat& tvec, bool useExtrinsicGuess = false, int flags = ITERATIVE)
- private static native boolean solvePnP_0(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, long cameraMatrix_nativeObj, long distCoeffs_mat_nativeObj, long rvec_nativeObj, long tvec_nativeObj, boolean useExtrinsicGuess, int flags);
- private static native boolean solvePnP_1(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, long cameraMatrix_nativeObj, long distCoeffs_mat_nativeObj, long rvec_nativeObj, long tvec_nativeObj);
-
- // C++: void solvePnPRansac(vector_Point3f objectPoints, vector_Point2f imagePoints, Mat cameraMatrix, vector_double distCoeffs, Mat& rvec, Mat& tvec, bool useExtrinsicGuess = false, int iterationsCount = 100, float reprojectionError = 8.0, int minInliersCount = 100, Mat& inliers = Mat(), int flags = ITERATIVE)
- private static native void solvePnPRansac_0(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, long cameraMatrix_nativeObj, long distCoeffs_mat_nativeObj, long rvec_nativeObj, long tvec_nativeObj, boolean useExtrinsicGuess, int iterationsCount, float reprojectionError, int minInliersCount, long inliers_nativeObj, int flags);
- private static native void solvePnPRansac_1(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, long cameraMatrix_nativeObj, long distCoeffs_mat_nativeObj, long rvec_nativeObj, long tvec_nativeObj);
-
- // C++: double stereoCalibrate(vector_Mat objectPoints, vector_Mat imagePoints1, vector_Mat imagePoints2, Mat& cameraMatrix1, Mat& distCoeffs1, Mat& cameraMatrix2, Mat& distCoeffs2, Size imageSize, Mat& R, Mat& T, Mat& E, Mat& F, TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 1e-6), int flags = CALIB_FIX_INTRINSIC)
- private static native double stereoCalibrate_0(long objectPoints_mat_nativeObj, long imagePoints1_mat_nativeObj, long imagePoints2_mat_nativeObj, long cameraMatrix1_nativeObj, long distCoeffs1_nativeObj, long cameraMatrix2_nativeObj, long distCoeffs2_nativeObj, double imageSize_width, double imageSize_height, long R_nativeObj, long T_nativeObj, long E_nativeObj, long F_nativeObj, int criteria_type, int criteria_maxCount, double criteria_epsilon, int flags);
- private static native double stereoCalibrate_1(long objectPoints_mat_nativeObj, long imagePoints1_mat_nativeObj, long imagePoints2_mat_nativeObj, long cameraMatrix1_nativeObj, long distCoeffs1_nativeObj, long cameraMatrix2_nativeObj, long distCoeffs2_nativeObj, double imageSize_width, double imageSize_height, long R_nativeObj, long T_nativeObj, long E_nativeObj, long F_nativeObj);
-
 // C++: void stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat& R1, Mat& R2, Mat& P1, Mat& P2, Mat& Q, int flags = CALIB_ZERO_DISPARITY, double alpha = -1, Size newImageSize = Size(), Rect* validPixROI1 = 0, Rect* validPixROI2 = 0)
 private static native void stereoRectify_0(long cameraMatrix1_nativeObj, long distCoeffs1_nativeObj, long cameraMatrix2_nativeObj, long distCoeffs2_nativeObj, double imageSize_width, double imageSize_height, long R_nativeObj, long T_nativeObj, long R1_nativeObj, long R2_nativeObj, long P1_nativeObj, long P2_nativeObj, long Q_nativeObj, int flags, double alpha, double newImageSize_width, double newImageSize_height, double[] validPixROI1_out, double[] validPixROI2_out);
 private static native void stereoRectify_1(long cameraMatrix1_nativeObj, long distCoeffs1_nativeObj, long cameraMatrix2_nativeObj, long distCoeffs2_nativeObj, double imageSize_width, double imageSize_height, long R_nativeObj, long T_nativeObj, long R1_nativeObj, long R2_nativeObj, long P1_nativeObj, long P2_nativeObj, long Q_nativeObj);

- // C++: bool stereoRectifyUncalibrated(Mat points1, Mat points2, Mat F, Size imgSize, Mat& H1, Mat& H2, double threshold = 5)
- private static native boolean stereoRectifyUncalibrated_0(long points1_nativeObj, long points2_nativeObj, long F_nativeObj, double imgSize_width, double imgSize_height, long H1_nativeObj, long H2_nativeObj, double threshold);
- private static native boolean stereoRectifyUncalibrated_1(long points1_nativeObj, long points2_nativeObj, long F_nativeObj, double imgSize_width, double imgSize_height, long H1_nativeObj, long H2_nativeObj);
-
 // C++: void triangulatePoints(Mat projMatr1, Mat projMatr2, Mat projPoints1, Mat projPoints2, Mat& points4D)
 private static native void triangulatePoints_0(long projMatr1_nativeObj, long projMatr2_nativeObj, long projPoints1_nativeObj, long projPoints2_nativeObj, long points4D_nativeObj);
@@ -3110,4 +1534,31 @@ public static void validateDisparity(Mat disparity, Mat cost, int minDisparity,
 private static native void validateDisparity_0(long disparity_nativeObj, long cost_nativeObj, int minDisparity, int numberOfDisparities, int disp12MaxDisp);
 private static native void validateDisparity_1(long disparity_nativeObj, long cost_nativeObj, int minDisparity, int numberOfDisparities);

+ // C++: void distortPoints(Mat undistorted, Mat& distorted, Mat K, Mat D, double alpha = 0)
+ private static native void distortPoints_0(long undistorted_nativeObj, long distorted_nativeObj, long K_nativeObj, long D_nativeObj, double alpha);
+ private static native void distortPoints_1(long undistorted_nativeObj, long distorted_nativeObj, long K_nativeObj, long D_nativeObj);
+
+ // C++: void estimateNewCameraMatrixForUndistortRectify(Mat K, Mat D, Size image_size, Mat R, Mat& P, double balance = 0.0, Size new_size = Size(), double fov_scale = 1.0)
+ private static native void estimateNewCameraMatrixForUndistortRectify_0(long K_nativeObj, long D_nativeObj, double image_size_width, double image_size_height, long R_nativeObj, long P_nativeObj, double balance, double new_size_width, double new_size_height, double fov_scale);
+ private static native void estimateNewCameraMatrixForUndistortRectify_1(long K_nativeObj, long D_nativeObj, double image_size_width, double image_size_height, long R_nativeObj, long P_nativeObj);
+
+ // C++: void initUndistortRectifyMap(Mat K, Mat D, Mat R, Mat P, Size size, int m1type, Mat& map1, Mat& map2)
+ private static native void initUndistortRectifyMap_0(long K_nativeObj, long D_nativeObj, long R_nativeObj, long P_nativeObj, double size_width, double size_height, int m1type, long map1_nativeObj, long map2_nativeObj);
+
+ // C++: void projectPoints(vector_Point3f objectPoints, vector_Point2f& imagePoints, Mat rvec, Mat tvec, Mat K, Mat D, double alpha = 0, Mat& jacobian = Mat())
+ private static native void projectPoints_2(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, long rvec_nativeObj, long tvec_nativeObj, long K_nativeObj, long D_nativeObj, double alpha, long jacobian_nativeObj);
+ private static native void projectPoints_3(long objectPoints_mat_nativeObj, long imagePoints_mat_nativeObj, long rvec_nativeObj, long tvec_nativeObj, long K_nativeObj, long D_nativeObj);
+
+ // C++: void stereoRectify(Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat tvec, Mat& R1, Mat& R2, Mat& P1, Mat& P2, Mat& Q, int flags, Size newImageSize = Size(), double balance = 0.0, double fov_scale = 1.0)
+ private static native void stereoRectify_2(long K1_nativeObj, long D1_nativeObj, long K2_nativeObj, long D2_nativeObj, double imageSize_width, double imageSize_height, long R_nativeObj, long tvec_nativeObj, long R1_nativeObj, long R2_nativeObj, long P1_nativeObj, long P2_nativeObj, long Q_nativeObj, int flags, double newImageSize_width, double newImageSize_height, double balance, double fov_scale);
+ private static native void stereoRectify_3(long K1_nativeObj, long D1_nativeObj, long K2_nativeObj, long D2_nativeObj, double imageSize_width, double imageSize_height, long R_nativeObj, long tvec_nativeObj, long R1_nativeObj, long R2_nativeObj, long P1_nativeObj, long P2_nativeObj, long Q_nativeObj, int flags);
+
+ // C++: void undistortImage(Mat distorted, Mat& undistorted, Mat K, Mat D, Mat Knew = cv::Mat(), Size new_size = Size())
+ private static native void undistortImage_0(long distorted_nativeObj, long undistorted_nativeObj, long K_nativeObj, long D_nativeObj, long Knew_nativeObj, double new_size_width, double new_size_height);
+ private static native void undistortImage_1(long distorted_nativeObj, long undistorted_nativeObj, long K_nativeObj, long D_nativeObj);
+
+ // C++: void undistortPoints(Mat distorted, Mat& undistorted, Mat K, Mat D, Mat R = Mat(), Mat P = Mat())
+ private static native void undistortPoints_0(long distorted_nativeObj, long undistorted_nativeObj, long K_nativeObj, long D_nativeObj, long R_nativeObj, long P_nativeObj);
+ private static native void undistortPoints_1(long distorted_nativeObj, long undistorted_nativeObj, long K_nativeObj, long D_nativeObj);
+
 }
diff --git a/imaging-utils/src/main/java/org/opencv/calib3d/StereoBM.java b/imaging-utils/src/main/java/org/opencv/calib3d/StereoBM.java
index 8c453c7..18aba05 100644
--- a/imaging-utils/src/main/java/org/opencv/calib3d/StereoBM.java
+++ b/imaging-utils/src/main/java/org/opencv/calib3d/StereoBM.java
@@ -4,238 +4,263 @@
 //
 package org.opencv.calib3d;

-import org.opencv.core.Mat;
+import org.opencv.core.Rect;

 // C++: class StereoBM
-/**
- * Class for computing stereo correspondence using the block matching algorithm.
- *
- * // Block matching stereo correspondence algorithm class StereoBM
- * // C++ code:
- *
- * enum { NORMALIZED_RESPONSE = CV_STEREO_BM_NORMALIZED_RESPONSE,
- *        BASIC_PRESET        = CV_STEREO_BM_BASIC,
- *        FISH_EYE_PRESET     = CV_STEREO_BM_FISH_EYE,
- *        NARROW_PRESET       = CV_STEREO_BM_NARROW };
- *
- * StereoBM();
- * // the preset is one of ..._PRESET above.
- * // ndisparities is the size of disparity range,
- * // in which the optimal disparity at each pixel is searched for.
- * // SADWindowSize is the size of averaging window used to match pixel blocks
- * // (larger values mean better robustness to noise, but yield blurry disparity maps)
- * StereoBM(int preset, int ndisparities=0, int SADWindowSize=21);
- * // separate initialization function
- * void init(int preset, int ndisparities=0, int SADWindowSize=21);
- * // computes the disparity for the two rectified 8-bit single-channel images.
- * // the disparity will be 16-bit signed (fixed-point) or 32-bit floating-point image of the same size as left.
- * void operator()(InputArray left, InputArray right, OutputArray disparity, int disptype=CV_16S);
- *
- * Ptr state;
- * };
- *
- * The class is a C++ wrapper for the associated functions. In particular,
- * StereoBM.operator() is the wrapper for "cvFindStereoCorrespondenceBM".
- *
- * Sample code:
- * (Ocl) An example for using the stereoBM matching algorithm can be found at
- * opencv_source_code/samples/ocl/stereo_match.cpp
- *
- * @see org.opencv.calib3d.StereoBM
- */
-public class StereoBM {
-
- protected final long nativeObj;
- protected StereoBM(long addr) { nativeObj = addr; }
+//javadoc: StereoBM
+public class StereoBM extends StereoMatcher {
+
+ protected StereoBM(long addr) { super(addr); }

 public static final int
 PREFILTER_NORMALIZED_RESPONSE = 0,
- PREFILTER_XSOBEL = 1,
- BASIC_PRESET = 0,
- FISH_EYE_PRESET = 1,
- NARROW_PRESET = 2;
+ PREFILTER_XSOBEL = 1;


 //
- // C++: StereoBM::StereoBM()
+ // C++: static Ptr_StereoBM create(int numDisparities = 0, int blockSize = 21)
 //

-/**
- * The constructors.
- *
- * The constructors initialize StereoBM state. You can then call
- * StereoBM.operator() to compute disparity for a specific stereo pair.
- *
- * Note: In the C API you need to deallocate CvStereoBM state when it is not
- * needed anymore, using cvReleaseStereoBMState(&stereobm).
- *
- * After constructing the class, you can override any parameters set by the
- * preset.
- *
- * @param ndisparities the disparity search range. For each pixel the algorithm
- * will find the best disparity from 0 (default minimum disparity) to
- * ndisparities. The search range can then be shifted by changing the minimum
- * disparity.
- * @param SADWindowSize the linear size of the blocks compared by the algorithm.
- * The size should be odd (as the block is centered at the current pixel).
- * A larger block size implies a smoother, though less accurate, disparity map.
- * A smaller block size gives a more detailed disparity map, but there is a
- * higher chance for the algorithm to find a wrong correspondence.
- *
- * @see org.opencv.calib3d.StereoBM.StereoBM
- */
- public StereoBM(int preset, int ndisparities, int SADWindowSize)
+ //
+
+ //javadoc: StereoBM::getPreFilterSize()
+ public int getPreFilterSize()
{
+
+ int retVal = getPreFilterSize_0(nativeObj);
+
+ return retVal;
+ }
+
+
+ //
+ // C++: int getPreFilterType()
+ //
+
+ //javadoc: StereoBM::getPreFilterType()
+ public int getPreFilterType()
+ {
+
+ int retVal = getPreFilterType_0(nativeObj);
+
+ return retVal;
+ }
+
+
+ //
+ // C++: int getSmallerBlockSize()
+ //
+
+ //javadoc: StereoBM::getSmallerBlockSize()
+ public int getSmallerBlockSize()
+ {
+
+ int retVal = getSmallerBlockSize_0(nativeObj);
+
+ return retVal;
+ }
+
+
+ //
+ // C++: int getTextureThreshold()
+ //
+
+ //javadoc: StereoBM::getTextureThreshold()
+ public int getTextureThreshold()
+ {
+
+ int retVal = getTextureThreshold_0(nativeObj);
+
+ return retVal;
+ }
+
+
+ //
+ // C++: int getUniquenessRatio()
+ //
+
+ //javadoc: StereoBM::getUniquenessRatio()
+ public int getUniquenessRatio()
+ {
+
+ int retVal = getUniquenessRatio_0(nativeObj);
+
+ return retVal;
+ }
- nativeObj = StereoBM_1(preset, ndisparities, SADWindowSize);
+ //
+ // C++: void setPreFilterCap(int preFilterCap)
+ //
+
+ //javadoc: StereoBM::setPreFilterCap(preFilterCap)
+ public void setPreFilterCap(int preFilterCap)
+ {
+
+ setPreFilterCap_0(nativeObj, preFilterCap);
+
return;
}
-/**
- * The constructors.
- * - *The constructors initialize StereoBM
state. You can then call
- * StereoBM.operator()
to compute disparity for a specific stereo
- * pair.
Note: In the C API you need to deallocate CvStereoBM
state when
- * it is not needed anymore using cvReleaseStereoBMState(&stereobm)
.
After constructing the class, you can override any parameters set by the - * preset.
- * - * @see org.opencv.calib3d.StereoBM.StereoBM - */ - public StereoBM(int preset) + + // + // C++: void setPreFilterSize(int preFilterSize) + // + + //javadoc: StereoBM::setPreFilterSize(preFilterSize) + public void setPreFilterSize(int preFilterSize) { + + setPreFilterSize_0(nativeObj, preFilterSize); + + return; + } - nativeObj = StereoBM_2(preset); + // + // C++: void setPreFilterType(int preFilterType) + // + + //javadoc: StereoBM::setPreFilterType(preFilterType) + public void setPreFilterType(int preFilterType) + { + + setPreFilterType_0(nativeObj, preFilterType); + return; } // - // C++: void StereoBM::operator ()(Mat left, Mat right, Mat& disparity, int disptype = CV_16S) - // - -/** - *Computes disparity using the BM algorithm for a rectified stereo pair.
- *
- * The method executes the BM algorithm on a rectified stereo pair. See the
- * stereo_match.cpp OpenCV sample on how to prepare images and call the
- * method. Note that the method is not constant, thus you should not use the
- * same StereoBM instance from within different threads simultaneously. The
- * function is parallelized with the TBB library.
- *
- * If disptype==CV_16S, the map is a 16-bit signed single-channel image,
- * containing disparity values scaled by 16. To get the true disparity values
- * from such a fixed-point representation, you will need to divide each disp
- * element by 16. If disptype==CV_32F, the disparity map will already contain
- * the real disparity values on output.
- *
- * @param disptype Type of the output disparity map, CV_16S (default) or
- * CV_32F.
- *
- * @see org.opencv.calib3d.StereoBM.operator()
- */
- public void compute(Mat left, Mat right, Mat disparity, int disptype)
+ // C++: void setROI1(Rect roi1)
+ //
+
+ //javadoc: StereoBM::setROI1(roi1)
+ public void setROI1(Rect roi1)
{
+
+ setROI1_0(nativeObj, roi1.x, roi1.y, roi1.width, roi1.height);
+
+ return;
+ }
+
+
+ //
+ // C++: void setROI2(Rect roi2)
+ //
+
+ //javadoc: StereoBM::setROI2(roi2)
+ public void setROI2(Rect roi2)
+ {
+
+ setROI2_0(nativeObj, roi2.x, roi2.y, roi2.width, roi2.height);
+
+ return;
+ }
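In practice the two ROIs set here are the valid-pixel rectangles produced during rectification (for example, the validPixROI outputs of stereoRectify or getValidDisparityROI in the Calib3d hunk above). A hedged sketch with placeholder values; Rect is the org.opencv.core.Rect imported by this patch:

    // Restrict matching to the parts of each rectified image that contain valid pixels.
    bm.setROI1(new Rect(16, 0, 624, 480)); // illustrative values only
    bm.setROI2(new Rect(0, 0, 624, 480));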
+
- compute_0(nativeObj, left.nativeObj, right.nativeObj, disparity.nativeObj, disptype);
+ //
+ // C++: void setSmallerBlockSize(int blockSize)
+ //
+ //javadoc: StereoBM::setSmallerBlockSize(blockSize)
+ public void setSmallerBlockSize(int blockSize)
+ {
+
+ setSmallerBlockSize_0(nativeObj, blockSize);
+
return;
}
-/**
- * Computes disparity using the BM algorithm for a rectified stereo pair.
- * - *The method executes the BM algorithm on a rectified stereo pair. See the
- * stereo_match.cpp
OpenCV sample on how to prepare images and call
- * the method. Note that the method is not constant, thus you should not use the
- * same StereoBM
instance from within different threads
- * simultaneously. The function is parallelized with the TBB library.
disptype==CV_16S
, the map is a 16-bit signed
- * single-channel image, containing disparity values scaled by 16. To get the
- * true disparity values from such fixed-point representation, you will need to
- * divide each disp
element by 16. If disptype==CV_32F
,
- * the disparity map will already contain the real disparity values on output.
- *
- * @see org.opencv.calib3d.StereoBM.operator()
- */
- public void compute(Mat left, Mat right, Mat disparity)
+
+ //
+ // C++: void setTextureThreshold(int textureThreshold)
+ //
+
+ //javadoc: StereoBM::setTextureThreshold(textureThreshold)
+ public void setTextureThreshold(int textureThreshold)
{
+
+ setTextureThreshold_0(nativeObj, textureThreshold);
+
+ return;
+ }
- compute_1(nativeObj, left.nativeObj, right.nativeObj, disparity.nativeObj);
+ //
+ // C++: void setUniquenessRatio(int uniquenessRatio)
+ //
+
+ //javadoc: StereoBM::setUniquenessRatio(uniquenessRatio)
+ public void setUniquenessRatio(int uniquenessRatio)
+ {
+
+ setUniquenessRatio_0(nativeObj, uniquenessRatio);
+
return;
}
@@ -247,16 +272,57 @@ protected void finalize() throws Throwable {
- // C++: StereoBM::StereoBM()
- private static native long StereoBM_0();
+ // C++: static Ptr_StereoBM create(int numDisparities = 0, int blockSize = 21)
+ private static native long create_0(int numDisparities, int blockSize);
+ private static native long create_1();
+
+ // C++: Rect getROI1()
+ private static native double[] getROI1_0(long nativeObj);
+
+ // C++: Rect getROI2()
+ private static native double[] getROI2_0(long nativeObj);
+
+ // C++: int getPreFilterCap()
+ private static native int getPreFilterCap_0(long nativeObj);
+
+ // C++: int getPreFilterSize()
+ private static native int getPreFilterSize_0(long nativeObj);
+
+ // C++: int getPreFilterType()
+ private static native int getPreFilterType_0(long nativeObj);
+
+ // C++: int getSmallerBlockSize()
+ private static native int getSmallerBlockSize_0(long nativeObj);
+
+ // C++: int getTextureThreshold()
+ private static native int getTextureThreshold_0(long nativeObj);
+
+ // C++: int getUniquenessRatio()
+ private static native int getUniquenessRatio_0(long nativeObj);
+
+ // C++: void setPreFilterCap(int preFilterCap)
+ private static native void setPreFilterCap_0(long nativeObj, int preFilterCap);
+
+ // C++: void setPreFilterSize(int preFilterSize)
+ private static native void setPreFilterSize_0(long nativeObj, int preFilterSize);
+
+ // C++: void setPreFilterType(int preFilterType)
+ private static native void setPreFilterType_0(long nativeObj, int preFilterType);
+
+ // C++: void setROI1(Rect roi1)
+ private static native void setROI1_0(long nativeObj, int roi1_x, int roi1_y, int roi1_width, int roi1_height);
+
+ // C++: void setROI2(Rect roi2)
+ private static native void setROI2_0(long nativeObj, int roi2_x, int roi2_y, int roi2_width, int roi2_height);
+
+ // C++: void setSmallerBlockSize(int blockSize)
+ private static native void setSmallerBlockSize_0(long nativeObj, int blockSize);
- // C++: StereoBM::StereoBM(int preset, int ndisparities = 0, int SADWindowSize = 21)
- private static native long StereoBM_1(int preset, int ndisparities, int SADWindowSize);
- private static native long StereoBM_2(int preset);
+ // C++: void setTextureThreshold(int textureThreshold)
+ private static native void setTextureThreshold_0(long nativeObj, int textureThreshold);
- // C++: void StereoBM::operator ()(Mat left, Mat right, Mat& disparity, int disptype = CV_16S)
- private static native void compute_0(long nativeObj, long left_nativeObj, long right_nativeObj, long disparity_nativeObj, int disptype);
- private static native void compute_1(long nativeObj, long left_nativeObj, long right_nativeObj, long disparity_nativeObj);
+ // C++: void setUniquenessRatio(int uniquenessRatio)
+ private static native void setUniquenessRatio_0(long nativeObj, int uniquenessRatio);
// native support for java finalize()
private static native void delete(long nativeObj);
diff --git a/imaging-utils/src/main/java/org/opencv/calib3d/StereoMatcher.java b/imaging-utils/src/main/java/org/opencv/calib3d/StereoMatcher.java
new file mode 100644
index 0000000..6b3c4f6
--- /dev/null
+++ b/imaging-utils/src/main/java/org/opencv/calib3d/StereoMatcher.java
@@ -0,0 +1,253 @@
+
+//
+// This file is auto-generated. Please don't modify it!
+//
+package org.opencv.calib3d;
+
+import org.opencv.core.Algorithm;
+import org.opencv.core.Mat;
+
+// C++: class StereoMatcher
+//javadoc: StereoMatcher
+public class StereoMatcher extends Algorithm {
+
+ protected StereoMatcher(long addr) { super(addr); }
+
+
+ public static final int
+ DISP_SHIFT = 4,
+ DISP_SCALE = (1 << DISP_SHIFT);
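These constants encode the fixed-point disparity format: compute() writes CV_16S disparities scaled by DISP_SCALE (16). A hedged sketch of the conversion back to real-valued disparities (the matcher and input Mats are assumed to exist; CvType is org.opencv.core.CvType):

    Mat disp16 = new Mat();
    matcher.compute(left, right, disp16);              // CV_16S, values scaled by 16
    Mat dispF = new Mat();
    disp16.convertTo(dispF, CvType.CV_32F, 1.0 / StereoMatcher.DISP_SCALE);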
+
+
+ //
+ // C++: int getBlockSize()
+ //
+
+ //javadoc: StereoMatcher::getBlockSize()
+ public int getBlockSize()
+ {
+
+ int retVal = getBlockSize_0(nativeObj);
+
+ return retVal;
+ }
+
+
+ //
+ // C++: int getDisp12MaxDiff()
+ //
+
+ //javadoc: StereoMatcher::getDisp12MaxDiff()
+ public int getDisp12MaxDiff()
+ {
+
+ int retVal = getDisp12MaxDiff_0(nativeObj);
+
+ return retVal;
+ }
+
+
+ //
+ // C++: int getMinDisparity()
+ //
+
+ //javadoc: StereoMatcher::getMinDisparity()
+ public int getMinDisparity()
+ {
+
+ int retVal = getMinDisparity_0(nativeObj);
+
+ return retVal;
+ }
+
+
+ //
+ // C++: int getNumDisparities()
+ //
+
+ //javadoc: StereoMatcher::getNumDisparities()
+ public int getNumDisparities()
+ {
+
+ int retVal = getNumDisparities_0(nativeObj);
+
+ return retVal;
+ }
+
+
+ //
+ // C++: int getSpeckleRange()
+ //
+
+ //javadoc: StereoMatcher::getSpeckleRange()
+ public int getSpeckleRange()
+ {
+
+ int retVal = getSpeckleRange_0(nativeObj);
+
+ return retVal;
+ }
+
+
+ //
+ // C++: int getSpeckleWindowSize()
+ //
+
+ //javadoc: StereoMatcher::getSpeckleWindowSize()
+ public int getSpeckleWindowSize()
+ {
+
+ int retVal = getSpeckleWindowSize_0(nativeObj);
+
+ return retVal;
+ }
+
+
+ //
+ // C++: void compute(Mat left, Mat right, Mat& disparity)
+ //
+
+ //javadoc: StereoMatcher::compute(left, right, disparity)
+ public void compute(Mat left, Mat right, Mat disparity)
+ {
+
+ compute_0(nativeObj, left.nativeObj, right.nativeObj, disparity.nativeObj);
+
+ return;
+ }
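Since both generated matchers extend this class, calling code can be written against the base type. A hedged sketch (the create factories are the ones added by this patch; preferSGBM, left, right and disparity are illustrative):

    StereoMatcher matcher = preferSGBM
            ? StereoSGBM.create()
            : StereoBM.create(64, 21);
    matcher.setSpeckleWindowSize(100); // 50-200 is the documented useful range
    matcher.setSpeckleRange(2);
    matcher.compute(left, right, disparity);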
+
+
+ //
+ // C++: void setBlockSize(int blockSize)
+ //
+
+ //javadoc: StereoMatcher::setBlockSize(blockSize)
+ public void setBlockSize(int blockSize)
+ {
+
+ setBlockSize_0(nativeObj, blockSize);
+
+ return;
+ }
+
+
+ //
+ // C++: void setDisp12MaxDiff(int disp12MaxDiff)
+ //
+
+ //javadoc: StereoMatcher::setDisp12MaxDiff(disp12MaxDiff)
+ public void setDisp12MaxDiff(int disp12MaxDiff)
+ {
+
+ setDisp12MaxDiff_0(nativeObj, disp12MaxDiff);
+
+ return;
+ }
+
+
+ //
+ // C++: void setMinDisparity(int minDisparity)
+ //
+
+ //javadoc: StereoMatcher::setMinDisparity(minDisparity)
+ public void setMinDisparity(int minDisparity)
+ {
+
+ setMinDisparity_0(nativeObj, minDisparity);
+
+ return;
+ }
+
+
+ //
+ // C++: void setNumDisparities(int numDisparities)
+ //
+
+ //javadoc: StereoMatcher::setNumDisparities(numDisparities)
+ public void setNumDisparities(int numDisparities)
+ {
+
+ setNumDisparities_0(nativeObj, numDisparities);
+
+ return;
+ }
+
+
+ //
+ // C++: void setSpeckleRange(int speckleRange)
+ //
+
+ //javadoc: StereoMatcher::setSpeckleRange(speckleRange)
+ public void setSpeckleRange(int speckleRange)
+ {
+
+ setSpeckleRange_0(nativeObj, speckleRange);
+
+ return;
+ }
+
+
+ //
+ // C++: void setSpeckleWindowSize(int speckleWindowSize)
+ //
+
+ //javadoc: StereoMatcher::setSpeckleWindowSize(speckleWindowSize)
+ public void setSpeckleWindowSize(int speckleWindowSize)
+ {
+
+ setSpeckleWindowSize_0(nativeObj, speckleWindowSize);
+
+ return;
+ }
+
+
+ @Override
+ protected void finalize() throws Throwable {
+ delete(nativeObj);
+ }
+
+
+
+ // C++: int getBlockSize()
+ private static native int getBlockSize_0(long nativeObj);
+
+ // C++: int getDisp12MaxDiff()
+ private static native int getDisp12MaxDiff_0(long nativeObj);
+
+ // C++: int getMinDisparity()
+ private static native int getMinDisparity_0(long nativeObj);
+
+ // C++: int getNumDisparities()
+ private static native int getNumDisparities_0(long nativeObj);
+
+ // C++: int getSpeckleRange()
+ private static native int getSpeckleRange_0(long nativeObj);
+
+ // C++: int getSpeckleWindowSize()
+ private static native int getSpeckleWindowSize_0(long nativeObj);
+
+ // C++: void compute(Mat left, Mat right, Mat& disparity)
+ private static native void compute_0(long nativeObj, long left_nativeObj, long right_nativeObj, long disparity_nativeObj);
+
+ // C++: void setBlockSize(int blockSize)
+ private static native void setBlockSize_0(long nativeObj, int blockSize);
+
+ // C++: void setDisp12MaxDiff(int disp12MaxDiff)
+ private static native void setDisp12MaxDiff_0(long nativeObj, int disp12MaxDiff);
+
+ // C++: void setMinDisparity(int minDisparity)
+ private static native void setMinDisparity_0(long nativeObj, int minDisparity);
+
+ // C++: void setNumDisparities(int numDisparities)
+ private static native void setNumDisparities_0(long nativeObj, int numDisparities);
+
+ // C++: void setSpeckleRange(int speckleRange)
+ private static native void setSpeckleRange_0(long nativeObj, int speckleRange);
+
+ // C++: void setSpeckleWindowSize(int speckleWindowSize)
+ private static native void setSpeckleWindowSize_0(long nativeObj, int speckleWindowSize);
+
+ // native support for java finalize()
+ private static native void delete(long nativeObj);
+
+}
diff --git a/imaging-utils/src/main/java/org/opencv/calib3d/StereoSGBM.java b/imaging-utils/src/main/java/org/opencv/calib3d/StereoSGBM.java
index 11bcb6d..d1d7b59 100644
--- a/imaging-utils/src/main/java/org/opencv/calib3d/StereoSGBM.java
+++ b/imaging-utils/src/main/java/org/opencv/calib3d/StereoSGBM.java
@@ -4,505 +4,181 @@
//
package org.opencv.calib3d;
-import org.opencv.core.Mat;
-// C++: class StereoSGBM
-/**
- * Class for computing stereo correspondence using the semi-global block
- * matching algorithm.
- *
- * class StereoSGBM
- * // C++ code:
- *
- * StereoSGBM();
- * StereoSGBM(int minDisparity, int numDisparities, int SADWindowSize,
- *            int P1=0, int P2=0, int disp12MaxDiff=0,
- *            int preFilterCap=0, int uniquenessRatio=0,
- *            int speckleWindowSize=0, int speckleRange=0,
- *            bool fullDP=false);
- * virtual ~StereoSGBM();
- * virtual void operator()(InputArray left, InputArray right, OutputArray disp);
- *
- * int minDisparity;
- * int numberOfDisparities;
- * int SADWindowSize;
- * int preFilterCap;
- * int uniquenessRatio;
- * int P1, P2;
- * int speckleWindowSize;
- * int speckleRange;
- * int disp12MaxDiff;
- * bool fullDP;
- * };
- *
- * The class implements the modified H. Hirschmuller algorithm [HH08] that
- * differs from the original one as follows:
- * Set fullDP=true to run the full variant of the algorithm, but beware that
- * it may consume a lot of memory.
- * Setting SADWindowSize=1 reduces the blocks to single pixels.
- * Pre-filtering (CV_STEREO_BM_XSOBEL type) and post-filtering (uniqueness
- * check, quadratic interpolation and speckle filtering) are included.
- * Note:
- */
-
-/**
- * Initializes StereoSGBM and sets parameters to custom values.
- *
- * The first constructor initializes StereoSGBM with all the default
- * parameters. So, you only have to set StereoSGBM.numberOfDisparities at
- * minimum. The second constructor enables you to set each parameter to a
- * custom value.
- *
- * @param SADWindowSize Matched block size. It must be an odd number >=1.
- * Normally, it should be somewhere in the 3..11 range.
- * @param P1 The first parameter controlling the disparity smoothness. See
- * below.
- * @param P2 The second parameter controlling the disparity smoothness. The
- * larger the values are, the smoother the disparity is. P1 is the penalty on
- * the disparity change by plus or minus 1 between neighbor pixels. P2 is the
- * penalty on the disparity change by more than 1 between neighbor pixels. The
- * algorithm requires P2 > P1. See the stereo_match.cpp sample where some
- * reasonably good P1 and P2 values are shown (like
- * 8*number_of_image_channels*SADWindowSize*SADWindowSize and
- * 32*number_of_image_channels*SADWindowSize*SADWindowSize, respectively).
- * @param disp12MaxDiff Maximum allowed difference (in integer pixel units) in
- * the left-right disparity check. Set it to a non-positive value to disable
- * the check.
- * @param preFilterCap Truncation value for the prefiltered image pixels. The
- * algorithm first computes the x-derivative at each pixel and clips its value
- * to the [-preFilterCap, preFilterCap] interval. The result values are passed
- * to the Birchfield-Tomasi pixel cost function.
- * @param uniquenessRatio Margin in percentage by which the best (minimum)
- * computed cost function value should "win" the second best value to consider
- * the found match correct. Normally, a value within the 5-15 range is good
- * enough.
- * @param speckleWindowSize Maximum size of smooth disparity regions to
- * consider their noise speckles and invalidate. Set it to 0 to disable
- * speckle filtering. Otherwise, set it somewhere in the 50-200 range.
- * @param speckleRange Maximum disparity variation within each connected
- * component. If you do speckle filtering, set the parameter to a positive
- * value; it will be implicitly multiplied by 16. Normally, 1 or 2 is good
- * enough.
- * @param fullDP Set it to true to run the full-scale two-pass dynamic
- * programming algorithm. It will consume O(W*H*numDisparities) bytes, which
- * is large for 640x480 stereo and huge for HD-size pictures. By default, it
- * is set to false.
- *
- * @see org.opencv.calib3d.StereoSGBM.StereoSGBM
- */
- public StereoSGBM(int minDisparity, int numDisparities, int SADWindowSize, int P1, int P2, int disp12MaxDiff, int preFilterCap, int uniquenessRatio, int speckleWindowSize, int speckleRange, boolean fullDP)
- {
-
- nativeObj = StereoSGBM_1(minDisparity, numDisparities, SADWindowSize, P1, P2, disp12MaxDiff, preFilterCap, uniquenessRatio, speckleWindowSize, speckleRange, fullDP);
-
- return;
- }
-
-/**
- * Initializes StereoSGBM and sets parameters to custom values.
- *
- * The first constructor initializes StereoSGBM with all the
- * default parameters. So, you only have to set StereoSGBM.numberOfDisparities
- * at minimum. The second constructor enables you to set each parameter to a
- * custom value.
- *
- * @see org.opencv.calib3d.StereoSGBM.StereoSGBM
- */
- public StereoSGBM(int minDisparity, int numDisparities, int SADWindowSize)
- {
-
- nativeObj = StereoSGBM_2(minDisparity, numDisparities, SADWindowSize);
-
- return;
- }
-
-
- //
- // C++: void StereoSGBM::operator ()(Mat left, Mat right, Mat& disp)
- //
+// C++: class StereoSGBM
+//javadoc: StereoSGBM
+public class StereoSGBM extends StereoMatcher {
- public void compute(Mat left, Mat right, Mat disp)
- {
+ protected StereoSGBM(long addr) { super(addr); }
- compute_0(nativeObj, left.nativeObj, right.nativeObj, disp.nativeObj);
- return;
- }
+ public static final int
+ MODE_SGBM = 0,
+ MODE_HH = 1,
+ MODE_SGBM_3WAY = 2,
+ MODE_HH4 = 3;
//
- // C++: int StereoSGBM::minDisparity
+ // C++: static Ptr_StereoSGBM create(int minDisparity = 0, int numDisparities = 16, int blockSize = 3, int P1 = 0, int P2 = 0, int disp12MaxDiff = 0, int preFilterCap = 0, int uniquenessRatio = 0, int speckleWindowSize = 0, int speckleRange = 0, int mode = StereoSGBM::MODE_SGBM)
//
- public int get_minDisparity()
+ //javadoc: StereoSGBM::create(minDisparity, numDisparities, blockSize, P1, P2, disp12MaxDiff, preFilterCap, uniquenessRatio, speckleWindowSize, speckleRange, mode)
+ public static StereoSGBM create(int minDisparity, int numDisparities, int blockSize, int P1, int P2, int disp12MaxDiff, int preFilterCap, int uniquenessRatio, int speckleWindowSize, int speckleRange, int mode)
{
-
- int retVal = get_minDisparity_0(nativeObj);
-
+
+ StereoSGBM retVal = new StereoSGBM(create_0(minDisparity, numDisparities, blockSize, P1, P2, disp12MaxDiff, preFilterCap, uniquenessRatio, speckleWindowSize, speckleRange, mode));
+
return retVal;
}
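
Reviewer note, not part of the diff: a minimal usage sketch of the new factory API. The image paths and tuning values are illustrative assumptions; compute() is inherited from StereoMatcher, and P1/P2 follow the 8*channels*blockSize^2 / 32*channels*blockSize^2 heuristic quoted in the old javadoc above.

    import org.opencv.calib3d.StereoSGBM;
    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.imgcodecs.Imgcodecs;

    public class SgbmSketch {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
            // Rectified grayscale stereo pair (illustrative file names).
            Mat left = Imgcodecs.imread("left.png", Imgcodecs.IMREAD_GRAYSCALE);
            Mat right = Imgcodecs.imread("right.png", Imgcodecs.IMREAD_GRAYSCALE);
            int blockSize = 5; // odd, typically in the 3..11 range
            StereoSGBM sgbm = StereoSGBM.create(0, 64, blockSize,
                    8 * blockSize * blockSize, 32 * blockSize * blockSize,
                    1, 31, 10, 100, 2, StereoSGBM.MODE_SGBM);
            Mat disparity = new Mat();
            sgbm.compute(left, right, disparity); // fixed-point disparities, scaled by 16
            Imgcodecs.imwrite("disparity.png", disparity);
        }
    }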
-
- //
- // C++: void StereoSGBM::minDisparity
- //
-
- public void set_minDisparity(int minDisparity)
- {
-
- set_minDisparity_0(nativeObj, minDisparity);
-
- return;
- }
-
-
- //
- // C++: int StereoSGBM::numberOfDisparities
- //
-
- public int get_numberOfDisparities()
+ //javadoc: StereoSGBM::create()
+ public static StereoSGBM create()
{
-
- int retVal = get_numberOfDisparities_0(nativeObj);
-
+
+ StereoSGBM retVal = new StereoSGBM(create_1());
+
return retVal;
}
//
- // C++: void StereoSGBM::numberOfDisparities
+ // C++: int getMode()
//
- public void set_numberOfDisparities(int numberOfDisparities)
+ //javadoc: StereoSGBM::getMode()
+ public int getMode()
{
-
- set_numberOfDisparities_0(nativeObj, numberOfDisparities);
-
- return;
- }
-
-
- //
- // C++: int StereoSGBM::SADWindowSize
- //
-
- public int get_SADWindowSize()
- {
-
- int retVal = get_SADWindowSize_0(nativeObj);
-
+
+ int retVal = getMode_0(nativeObj);
+
return retVal;
}
//
- // C++: void StereoSGBM::SADWindowSize
+ // C++: int getP1()
//
- public void set_SADWindowSize(int SADWindowSize)
+ //javadoc: StereoSGBM::getP1()
+ public int getP1()
{
-
- set_SADWindowSize_0(nativeObj, SADWindowSize);
-
- return;
- }
-
-
- //
- // C++: int StereoSGBM::preFilterCap
- //
-
- public int get_preFilterCap()
- {
-
- int retVal = get_preFilterCap_0(nativeObj);
-
+
+ int retVal = getP1_0(nativeObj);
+
return retVal;
}
//
- // C++: void StereoSGBM::preFilterCap
- //
-
- public void set_preFilterCap(int preFilterCap)
- {
-
- set_preFilterCap_0(nativeObj, preFilterCap);
-
- return;
- }
-
-
- //
- // C++: int StereoSGBM::uniquenessRatio
+ // C++: int getP2()
//
- public int get_uniquenessRatio()
+ //javadoc: StereoSGBM::getP2()
+ public int getP2()
{
-
- int retVal = get_uniquenessRatio_0(nativeObj);
-
+
+ int retVal = getP2_0(nativeObj);
+
return retVal;
}
//
- // C++: void StereoSGBM::uniquenessRatio
- //
-
- public void set_uniquenessRatio(int uniquenessRatio)
- {
-
- set_uniquenessRatio_0(nativeObj, uniquenessRatio);
-
- return;
- }
-
-
- //
- // C++: int StereoSGBM::P1
+ // C++: int getPreFilterCap()
//
- public int get_P1()
+ //javadoc: StereoSGBM::getPreFilterCap()
+ public int getPreFilterCap()
{
-
- int retVal = get_P1_0(nativeObj);
-
+
+ int retVal = getPreFilterCap_0(nativeObj);
+
return retVal;
}
//
- // C++: void StereoSGBM::P1
+ // C++: int getUniquenessRatio()
//
- public void set_P1(int P1)
+ //javadoc: StereoSGBM::getUniquenessRatio()
+ public int getUniquenessRatio()
{
-
- set_P1_0(nativeObj, P1);
-
- return;
- }
-
-
- //
- // C++: int StereoSGBM::P2
- //
-
- public int get_P2()
- {
-
- int retVal = get_P2_0(nativeObj);
-
+
+ int retVal = getUniquenessRatio_0(nativeObj);
+
return retVal;
}
//
- // C++: void StereoSGBM::P2
+ // C++: void setMode(int mode)
//
- public void set_P2(int P2)
+ //javadoc: StereoSGBM::setMode(mode)
+ public void setMode(int mode)
{
-
- set_P2_0(nativeObj, P2);
-
+
+ setMode_0(nativeObj, mode);
+
return;
}
//
- // C++: int StereoSGBM::speckleWindowSize
- //
-
- public int get_speckleWindowSize()
- {
-
- int retVal = get_speckleWindowSize_0(nativeObj);
-
- return retVal;
- }
-
-
- //
- // C++: void StereoSGBM::speckleWindowSize
+ // C++: void setP1(int P1)
//
- public void set_speckleWindowSize(int speckleWindowSize)
+ //javadoc: StereoSGBM::setP1(P1)
+ public void setP1(int P1)
{
-
- set_speckleWindowSize_0(nativeObj, speckleWindowSize);
-
+
+ setP1_0(nativeObj, P1);
+
return;
}
//
- // C++: int StereoSGBM::speckleRange
- //
-
- public int get_speckleRange()
- {
-
- int retVal = get_speckleRange_0(nativeObj);
-
- return retVal;
- }
-
-
- //
- // C++: void StereoSGBM::speckleRange
+ // C++: void setP2(int P2)
//
- public void set_speckleRange(int speckleRange)
+ //javadoc: StereoSGBM::setP2(P2)
+ public void setP2(int P2)
{
-
- set_speckleRange_0(nativeObj, speckleRange);
-
+
+ setP2_0(nativeObj, P2);
+
return;
}
//
- // C++: int StereoSGBM::disp12MaxDiff
+ // C++: void setPreFilterCap(int preFilterCap)
//
- public int get_disp12MaxDiff()
+ //javadoc: StereoSGBM::setPreFilterCap(preFilterCap)
+ public void setPreFilterCap(int preFilterCap)
{
-
- int retVal = get_disp12MaxDiff_0(nativeObj);
-
- return retVal;
- }
-
-
- //
- // C++: void StereoSGBM::disp12MaxDiff
- //
-
- public void set_disp12MaxDiff(int disp12MaxDiff)
- {
-
- set_disp12MaxDiff_0(nativeObj, disp12MaxDiff);
-
+
+ setPreFilterCap_0(nativeObj, preFilterCap);
+
return;
}
//
- // C++: bool StereoSGBM::fullDP
+ // C++: void setUniquenessRatio(int uniquenessRatio)
//
- public boolean get_fullDP()
+ //javadoc: StereoSGBM::setUniquenessRatio(uniquenessRatio)
+ public void setUniquenessRatio(int uniquenessRatio)
{
-
- boolean retVal = get_fullDP_0(nativeObj);
-
- return retVal;
- }
-
-
- //
- // C++: void StereoSGBM::fullDP
- //
-
- public void set_fullDP(boolean fullDP)
- {
-
- set_fullDP_0(nativeObj, fullDP);
-
+
+ setUniquenessRatio_0(nativeObj, uniquenessRatio);
+
return;
}
@@ -514,81 +190,39 @@ protected void finalize() throws Throwable {
- // C++: StereoSGBM::StereoSGBM()
- private static native long StereoSGBM_0();
-
- // C++: StereoSGBM::StereoSGBM(int minDisparity, int numDisparities, int SADWindowSize, int P1 = 0, int P2 = 0, int disp12MaxDiff = 0, int preFilterCap = 0, int uniquenessRatio = 0, int speckleWindowSize = 0, int speckleRange = 0, bool fullDP = false)
- private static native long StereoSGBM_1(int minDisparity, int numDisparities, int SADWindowSize, int P1, int P2, int disp12MaxDiff, int preFilterCap, int uniquenessRatio, int speckleWindowSize, int speckleRange, boolean fullDP);
- private static native long StereoSGBM_2(int minDisparity, int numDisparities, int SADWindowSize);
-
- // C++: void StereoSGBM::operator ()(Mat left, Mat right, Mat& disp)
- private static native void compute_0(long nativeObj, long left_nativeObj, long right_nativeObj, long disp_nativeObj);
-
- // C++: int StereoSGBM::minDisparity
- private static native int get_minDisparity_0(long nativeObj);
-
- // C++: void StereoSGBM::minDisparity
- private static native void set_minDisparity_0(long nativeObj, int minDisparity);
-
- // C++: int StereoSGBM::numberOfDisparities
- private static native int get_numberOfDisparities_0(long nativeObj);
-
- // C++: void StereoSGBM::numberOfDisparities
- private static native void set_numberOfDisparities_0(long nativeObj, int numberOfDisparities);
-
- // C++: int StereoSGBM::SADWindowSize
- private static native int get_SADWindowSize_0(long nativeObj);
-
- // C++: void StereoSGBM::SADWindowSize
- private static native void set_SADWindowSize_0(long nativeObj, int SADWindowSize);
-
- // C++: int StereoSGBM::preFilterCap
- private static native int get_preFilterCap_0(long nativeObj);
-
- // C++: void StereoSGBM::preFilterCap
- private static native void set_preFilterCap_0(long nativeObj, int preFilterCap);
-
- // C++: int StereoSGBM::uniquenessRatio
- private static native int get_uniquenessRatio_0(long nativeObj);
-
- // C++: void StereoSGBM::uniquenessRatio
- private static native void set_uniquenessRatio_0(long nativeObj, int uniquenessRatio);
-
- // C++: int StereoSGBM::P1
- private static native int get_P1_0(long nativeObj);
-
- // C++: void StereoSGBM::P1
- private static native void set_P1_0(long nativeObj, int P1);
+ // C++: static Ptr_StereoSGBM create(int minDisparity = 0, int numDisparities = 16, int blockSize = 3, int P1 = 0, int P2 = 0, int disp12MaxDiff = 0, int preFilterCap = 0, int uniquenessRatio = 0, int speckleWindowSize = 0, int speckleRange = 0, int mode = StereoSGBM::MODE_SGBM)
+ private static native long create_0(int minDisparity, int numDisparities, int blockSize, int P1, int P2, int disp12MaxDiff, int preFilterCap, int uniquenessRatio, int speckleWindowSize, int speckleRange, int mode);
+ private static native long create_1();
- // C++: int StereoSGBM::P2
- private static native int get_P2_0(long nativeObj);
+ // C++: int getMode()
+ private static native int getMode_0(long nativeObj);
- // C++: void StereoSGBM::P2
- private static native void set_P2_0(long nativeObj, int P2);
+ // C++: int getP1()
+ private static native int getP1_0(long nativeObj);
- // C++: int StereoSGBM::speckleWindowSize
- private static native int get_speckleWindowSize_0(long nativeObj);
+ // C++: int getP2()
+ private static native int getP2_0(long nativeObj);
- // C++: void StereoSGBM::speckleWindowSize
- private static native void set_speckleWindowSize_0(long nativeObj, int speckleWindowSize);
+ // C++: int getPreFilterCap()
+ private static native int getPreFilterCap_0(long nativeObj);
- // C++: int StereoSGBM::speckleRange
- private static native int get_speckleRange_0(long nativeObj);
+ // C++: int getUniquenessRatio()
+ private static native int getUniquenessRatio_0(long nativeObj);
- // C++: void StereoSGBM::speckleRange
- private static native void set_speckleRange_0(long nativeObj, int speckleRange);
+ // C++: void setMode(int mode)
+ private static native void setMode_0(long nativeObj, int mode);
- // C++: int StereoSGBM::disp12MaxDiff
- private static native int get_disp12MaxDiff_0(long nativeObj);
+ // C++: void setP1(int P1)
+ private static native void setP1_0(long nativeObj, int P1);
- // C++: void StereoSGBM::disp12MaxDiff
- private static native void set_disp12MaxDiff_0(long nativeObj, int disp12MaxDiff);
+ // C++: void setP2(int P2)
+ private static native void setP2_0(long nativeObj, int P2);
- // C++: bool StereoSGBM::fullDP
- private static native boolean get_fullDP_0(long nativeObj);
+ // C++: void setPreFilterCap(int preFilterCap)
+ private static native void setPreFilterCap_0(long nativeObj, int preFilterCap);
- // C++: void StereoSGBM::fullDP
- private static native void set_fullDP_0(long nativeObj, boolean fullDP);
+ // C++: void setUniquenessRatio(int uniquenessRatio)
+ private static native void setUniquenessRatio_0(long nativeObj, int uniquenessRatio);
// native support for java finalize()
private static native void delete(long nativeObj);
diff --git a/imaging-utils/src/main/java/org/opencv/contrib/Contrib.java b/imaging-utils/src/main/java/org/opencv/contrib/Contrib.java
deleted file mode 100644
index eed1429..0000000
--- a/imaging-utils/src/main/java/org/opencv/contrib/Contrib.java
+++ /dev/null
@@ -1,146 +0,0 @@
-
-//
-// This file is auto-generated. Please don't modify it!
-//
-package org.opencv.contrib;
-
-import java.util.List;
-import org.opencv.core.Mat;
-import org.opencv.core.MatOfFloat;
-import org.opencv.core.MatOfPoint;
-import org.opencv.utils.Converters;
-
-public class Contrib {
-
- public static final int
- RETINA_COLOR_RANDOM = 0,
- RETINA_COLOR_DIAGONAL = 1,
- RETINA_COLOR_BAYER = 2,
- ROTATION = 1,
- TRANSLATION = 2,
- RIGID_BODY_MOTION = 4,
- COLORMAP_AUTUMN = 0,
- COLORMAP_BONE = 1,
- COLORMAP_JET = 2,
- COLORMAP_WINTER = 3,
- COLORMAP_RAINBOW = 4,
- COLORMAP_OCEAN = 5,
- COLORMAP_SUMMER = 6,
- COLORMAP_SPRING = 7,
- COLORMAP_COOL = 8,
- COLORMAP_HSV = 9,
- COLORMAP_PINK = 10,
- COLORMAP_HOT = 11;
-
-
- //
- // C++: void applyColorMap(Mat src, Mat& dst, int colormap)
- //
-
-/**
- * Applies a GNU Octave/MATLAB equivalent colormap on a given image.
- *
- * Currently the following GNU Octave/MATLAB equivalent colormaps are
- * implemented:
- *
- * enum
- * // C++ code:
- *
- * COLORMAP_AUTUMN = 0,
- * COLORMAP_BONE = 1,
- * COLORMAP_JET = 2,
- * COLORMAP_WINTER = 3,
- * COLORMAP_RAINBOW = 4,
- * COLORMAP_OCEAN = 5,
- * COLORMAP_SUMMER = 6,
- * COLORMAP_SPRING = 7,
- * COLORMAP_COOL = 8,
- * COLORMAP_HSV = 9,
- * COLORMAP_PINK = 10,
- * COLORMAP_HOT = 11
- *
- * @param src The source image, grayscale or colored does not matter.
- * @param dst The result is the colormapped source image. Note: "Mat.create" is
- * called on dst.
- * @param colormap The colormap to apply, see the list of available colormaps
- * above.
- *
- * @see org.opencv.contrib.Contrib.applyColorMap
- */
- public static void applyColorMap(Mat src, Mat dst, int colormap)
- {
-
-     applyColorMap_0(src.nativeObj, dst.nativeObj, colormap);
-
-     return;
- }
-
-
- //
- // C++: int chamerMatching(Mat img, Mat templ, vector_vector_Point& results, vector_float& cost, double templScale = 1, int maxMatches = 20, double minMatchDistance = 1.0, int padX = 3, int padY = 3, int scales = 5, double minScale = 0.6, double maxScale = 1.6, double orientationWeight = 0.5, double truncate = 20)
- //
-
- public static int chamerMatching(Mat img, Mat templ, List<MatOfPoint> results, MatOfFloat cost, double templScale, int maxMatches, double minMatchDistance, int padX, int padY, int scales, double minScale, double maxScale, double orientationWeight, double truncate)
- {
-     Mat results_mat = new Mat();
-     int retVal = chamerMatching_0(img.nativeObj, templ.nativeObj, results_mat.nativeObj, cost.nativeObj, templScale, maxMatches, minMatchDistance, padX, padY, scales, minScale, maxScale, orientationWeight, truncate);
-     Converters.Mat_to_vector_vector_Point(results_mat, results);
-     return retVal;
- }
-
-/**
- * All face recognition models in OpenCV are derived from the abstract base
- * class "FaceRecognizer", which provides a unified access to all face
- * recognition algorithms in OpenCV.
- *
- * class FaceRecognizer : public Algorithm
- * // C++ code:
- *
- * public:
- *     //! virtual destructor
- *     virtual ~FaceRecognizer() {}
- *
- *     // Trains a FaceRecognizer.
- *     virtual void train(InputArray src, InputArray labels) = 0;
- *
- *     // Updates a FaceRecognizer.
- *     virtual void update(InputArrayOfArrays src, InputArray labels);
- *
- *     // Gets a prediction from a FaceRecognizer.
- *     virtual int predict(InputArray src) const = 0;
- *
- *     // Predicts the label and confidence for a given sample.
- *     virtual void predict(InputArray src, int &label, double &confidence) const = 0;
- *
- *     // Serializes this object to a given filename.
- *     virtual void save(const string& filename) const;
- *
- *     // Deserializes this object from a given filename.
- *     virtual void load(const string& filename);
- *
- *     // Serializes this object to a given cv::FileStorage.
- *     virtual void save(FileStorage& fs) const = 0;
- *
- *     // Deserializes this object from a given cv::FileStorage.
- *     virtual void load(const FileStorage& fs) = 0;
- *
- *     // Sets additional information as pairs label - info.
- *     void setLabelsInfo(const std::map<int, string>& labelsInfo);
- *
- *     // Gets string information by label
- *     string getLabelInfo(const int &label);
- *
- *     // Gets labels by string
- *     vector<int> getLabelsByString(const string& str);
- * };
- *
- * @see org.opencv.contrib.FaceRecognizer
- */
-public class FaceRecognizer extends Algorithm {
-
-    protected FaceRecognizer(long addr) { super(addr); }
-
-    //
-    // C++: void FaceRecognizer::load(string filename)
-    //
-
-/**
- * Loads a "FaceRecognizer" and its model state.
- *
- * Loads a persisted model and state from a given XML or YAML file. Every
- * "FaceRecognizer" has to overwrite FaceRecognizer.load(FileStorage& fs)
- * to enable loading the model state. FaceRecognizer.load(FileStorage& fs)
- * in turn gets called by FaceRecognizer.load(const string& filename),
- * to ease loading a model.
- */
-
-/**
- * Predicts a label and associated confidence (e.g. distance) for a given input
- * image.
- *
- * The suffix const means that prediction does not affect the
- * internal model state, so the method can be safely called from within
- * different threads.
- *
- * The following example shows how to get a prediction from a trained model:
- *
- * using namespace cv;
- * // C++ code:
- *
- * // Do your initialization here (create the cv::FaceRecognizer model)...
- * //...
- * // Read in a sample image:
- * Mat img = imread("person1/3.jpg", CV_LOAD_IMAGE_GRAYSCALE);
- * // And get a prediction from the cv::FaceRecognizer:
- * int predicted = model->predict(img);
- *
- * Or to get a prediction and the associated confidence (e.g. distance):
- *
- * using namespace cv;
- * // C++ code:
- *
- * // Do your initialization here (create the cv::FaceRecognizer model)...
- * //...
- * Mat img = imread("person1/3.jpg", CV_LOAD_IMAGE_GRAYSCALE);
- * // Some variables for the predicted label and associated confidence (e.g. distance):
- * int predicted_label = -1;
- * double predicted_confidence = 0.0;
- * // Get the prediction and associated confidence from the model
- * model->predict(img, predicted_label, predicted_confidence);
- *
- * @param src Sample image to get a prediction from.
- * @param label The predicted label for the given image.
- * @param confidence Associated confidence (e.g. distance) for the predicted
- * label.
- *
- * @see org.opencv.contrib.FaceRecognizer.predict
- */
- public void predict(Mat src, int[] label, double[] confidence)
- {
-     double[] label_out = new double[1];
-     double[] confidence_out = new double[1];
-     predict_0(nativeObj, src.nativeObj, label_out, confidence_out);
-     if(label!=null) label[0] = (int)label_out[0];
-     if(confidence!=null) confidence[0] = (double)confidence_out[0];
-     return;
- }
-
- //
- // C++: void FaceRecognizer::save(string filename)
- //
-
-/**
- * Saves a "FaceRecognizer" and its model state.
- *
- * Saves this model to a given filename, either as XML or YAML.
- *
- * Saves this model to a given "FileStorage".
- *
- * Every "FaceRecognizer" overwrites FaceRecognizer.save(FileStorage& fs)
- * to save the internal model state. FaceRecognizer.save(const string& filename)
- * saves the state of a model to the given filename.
- *
- * The suffix const means that prediction does not affect the
- * internal model state, so the method can be safely called from within
- * different threads.
- */
-
-/**
- * Trains a FaceRecognizer with given data and associated labels.
- *
- * The following source code snippet shows you how to learn a Fisherfaces model
- * on a given set of images. The images are read with "imread" and pushed into a
- * std::vector<Mat>. The labels of each image are stored within a
- * std::vector<int> (you could also use a "Mat" of type
- * "CV_32SC1"). Think of the label as the subject (the person) this image
- * belongs to, so same subjects (persons) should have the same label. For the
- * available "FaceRecognizer" you don't have to pay any attention to the order
- * of the labels, just make sure same persons have the same label:
- *
- * // holds images and labels
- * // C++ code:
- *
- * vector<Mat> images;
- * vector<int> labels;
- * // images for first person
- * images.push_back(imread("person0/0.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(0);
- * images.push_back(imread("person0/1.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(0);
- * images.push_back(imread("person0/2.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(0);
- * // images for second person
- * images.push_back(imread("person1/0.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(1);
- * images.push_back(imread("person1/1.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(1);
- * images.push_back(imread("person1/2.jpg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(1);
- *
- * Now that you have read some images, we can create a new "FaceRecognizer". In
- * this example I'll create a Fisherfaces model and decide to keep all of the
- * possible Fisherfaces:
- *
- * // Create a new Fisherfaces model and retain all available Fisherfaces,
- * // C++ code:
- * // this is the most common usage of this specific FaceRecognizer:
- * //
- * Ptr<FaceRecognizer> model = createFisherFaceRecognizer();
- *
- * And finally train it on the given dataset (the face images and labels):
- *
- * // This is the common interface to train all of the available
- * // cv::FaceRecognizer implementations:
- * // C++ code:
- * //
- * model->train(images, labels);
- *
- * @param src The training images, that means the faces you want to learn. The
- * data has to be given as a vector<Mat>.
- * @param labels The labels corresponding to the images have to be given either
- * as a vector<int> or a Mat of type CV_32SC1.
- *
- * @see org.opencv.contrib.FaceRecognizer.train
- */
- public void train(List<Mat> src, Mat labels)
- {
-     Mat src_mat = Converters.vector_Mat_to_Mat(src);
-     train_0(nativeObj, src_mat.nativeObj, labels.nativeObj);
-     return;
- }
-
-/**
- * Updates a FaceRecognizer with given data and associated labels.
- *
- * This method updates a (probably trained) "FaceRecognizer", but only if the
- * algorithm supports it. The Local Binary Patterns Histograms (LBPH) recognizer
- * (see "createLBPHFaceRecognizer") can be updated. For the Eigenfaces and
- * Fisherfaces method, this is algorithmically not possible and you have to
- * re-estimate the model with "FaceRecognizer.train". In any case, a call to
- * train empties the existing model and learns a new model, while update does
- * not delete any model data.
- *
- * // Create a new LBPH model (it can be updated) and use the default parameters,
- * // C++ code:
- * // this is the most common usage of this specific FaceRecognizer:
- * //
- * Ptr<FaceRecognizer> model = createLBPHFaceRecognizer();
- * // This is the common interface to train all of the available
- * // cv::FaceRecognizer implementations:
- * //
- * model->train(images, labels);
- * // Some containers to hold new image:
- * vector<Mat> newImages;
- * vector<int> newLabels;
- * // You should add some images to the containers:
- * //
- * //...
- * //
- * // Now updating the model is as easy as calling:
- * model->update(newImages,newLabels);
- * // This will preserve the old model data and extend the existing model
- * // with the new features extracted from newImages!
- *
- * Calling update on an Eigenfaces model (see "createEigenFaceRecognizer"),
- * which doesn't support updating, will throw an error similar to:
- *
- * // C++ code:
- * OpenCV Error: The function/feature is not implemented (This FaceRecognizer
- * (FaceRecognizer.Eigenfaces) does not support updating, you have to use
- * FaceRecognizer.train to update it.) in update, file /home/philipp/git/opencv/modules/contrib/src/facerec.cpp,
- * line 305
- * terminate called after throwing an instance of 'cv::Exception'
- *
- * Please note: The "FaceRecognizer" does not store your training images,
- * because this would be very memory intense and it's not the responsibility of
- * the "FaceRecognizer" to do so. The caller is responsible for maintaining the
- * dataset he wants to work with.
- *
- * @param src The training images, that means the faces you want to learn. The
- * data has to be given as a vector<Mat>.
- * @param labels The labels corresponding to the images have to be given either
- * as a vector<int> or a Mat of type CV_32SC1.
- *
- * @see org.opencv.contrib.FaceRecognizer.update
- */
- public void update(List<Mat> src, Mat labels)
- {
-     Mat src_mat = Converters.vector_Mat_to_Mat(src);
-     update_0(nativeObj, src_mat.nativeObj, labels.nativeObj);
-     return;
- }
-
-/**
- * Class for computing stereo correspondence using the variational matching
- * algorithm.
- *
- * class StereoVar
- * // C++ code:
- *
- * StereoVar();
- *
- * StereoVar(int levels, double pyrScale,
- *           int nIt, int minDisp, int maxDisp,
- *           int poly_n, double poly_sigma, float fi,
- *           float lambda, int penalization, int cycle,
- *           int flags);
- *
- * virtual ~StereoVar();
- *
- * virtual void operator()(InputArray left, InputArray right, OutputArray disp);
- *
- * int levels;
- * double pyrScale;
- * int nIt;
- * int minDisp;
- * int maxDisp;
- * int poly_n;
- * double poly_sigma;
- * float fi;
- * float lambda;
- * int penalization;
- * int cycle;
- * int flags;
- * };
- *
- * The class implements the modified S. G. Kosov algorithm [KTS09] that differs
- * from the original one as follows:
- *
- * The constructor
- *
- * The first constructor initializes StereoVar with all the default
- * parameters. So, you only have to set StereoVar.maxDisp and / or
- * StereoVar.minDisp at minimum. The second constructor enables
- * you to set each parameter to a custom value.
- */
-
-/**
- * class CV_EXPORTS_W Algorithm
- * // C++ code:
- *
- * public:
- *     Algorithm();
- *     virtual ~Algorithm();
- *     string name() const;
- *
- *     template<typename _Tp> typename ParamType<_Tp>::member_type get(const string& name) const;
- *     template<typename _Tp> typename ParamType<_Tp>::member_type get(const char* name) const;
- *
- *     CV_WRAP int getInt(const string& name) const;
- *     CV_WRAP double getDouble(const string& name) const;
- *     CV_WRAP bool getBool(const string& name) const;
- *     CV_WRAP string getString(const string& name) const;
- *     CV_WRAP Mat getMat(const string& name) const;
- *     CV_WRAP vector<Mat> getMatVector(const string& name) const;
- *     CV_WRAP Ptr<Algorithm> getAlgorithm(const string& name) const;
- *
- *     void set(const string& name, int value);
- *     void set(const string& name, double value);
- *     void set(const string& name, bool value);
- *     void set(const string& name, const string& value);
- *     void set(const string& name, const Mat& value);
- *     void set(const string& name, const vector<Mat>& value);
- *     void set(const string& name, const Ptr<Algorithm>& value);
- *     template<typename _Tp> void set(const string& name, const Ptr<_Tp>& value);
- *
- *     CV_WRAP void setInt(const string& name, int value);
- *     CV_WRAP void setDouble(const string& name, double value);
- *     CV_WRAP void setBool(const string& name, bool value);
- *     CV_WRAP void setString(const string& name, const string& value);
- *     CV_WRAP void setMat(const string& name, const Mat& value);
- *     CV_WRAP void setMatVector(const string& name, const vector<Mat>& value);
- *     CV_WRAP void setAlgorithm(const string& name, const Ptr<Algorithm>& value);
- *     template<typename _Tp> void setAlgorithm(const string& name, const Ptr<_Tp>& value);
- *
- *     void set(const char* name, int value);
- *     void set(const char* name, double value);
- *     void set(const char* name, bool value);
- *     void set(const char* name, const string& value);
- *     void set(const char* name, const Mat& value);
- *     void set(const char* name, const vector<Mat>& value);
- *     void set(const char* name, const Ptr<Algorithm>& value);
- *     template<typename _Tp> void set(const char* name, const Ptr<_Tp>& value);
- *
- *     void setInt(const char* name, int value);
- *     void setDouble(const char* name, double value);
- *     void setBool(const char* name, bool value);
- *     void setString(const char* name, const string& value);
- *     void setMat(const char* name, const Mat& value);
- *     void setMatVector(const char* name, const vector<Mat>& value);
- *     void setAlgorithm(const char* name, const Ptr<Algorithm>& value);
- *     template<typename _Tp> void setAlgorithm(const char* name, const Ptr<_Tp>& value);
- *
- *     CV_WRAP string paramHelp(const string& name) const;
- *     int paramType(const char* name) const;
- *     CV_WRAP int paramType(const string& name) const;
- *     CV_WRAP void getParams(CV_OUT vector<string>& names) const;
- *
- *     virtual void write(FileStorage& fs) const;
- *     virtual void read(const FileNode& fn);
- *
- *     typedef Algorithm* (*Constructor)(void);
- *     typedef int (Algorithm::*Getter)() const;
- *     typedef void (Algorithm::*Setter)(int);
- *
- *     CV_WRAP static void getList(CV_OUT vector<string>& algorithms);
- *     CV_WRAP static Ptr<Algorithm> _create(const string& name);
- *     template<typename _Tp> static Ptr<_Tp> create(const string& name);
- *
- *     virtual AlgorithmInfo* info() const /* TODO: make it = 0; */ { return 0; }
- * };
- *
- * This is a base class for all more or less complex algorithms in OpenCV,
- * especially for classes of algorithms, for which there can be multiple
- * implementations. The examples are stereo correspondence (for which there are
- * algorithms like block matching, semi-global block matching, graph-cut etc.),
- * background subtraction (which can be done using mixture-of-gaussians models,
- * codebook-based algorithm etc.), optical flow (block matching, Lucas-Kanade,
- * Horn-Schunck etc.).
- *
- * The class provides the following features for all derived classes:
- *
- * A so-called virtual constructor: instances of registered algorithms can be
- * created by name (see Algorithm.create). If you plan to add your own
- * algorithms, it is good practice to add a unique prefix to your algorithms to
- * distinguish them from other algorithms.
- *
- * Setting/retrieving algorithm parameters by name, in the spirit of
- * cvSetCaptureProperty(), cvGetCaptureProperty(),
- * VideoCapture.set() and VideoCapture.get().
- * Algorithm provides a similar method where instead of integer id's
- * you specify the parameter names as text strings. See Algorithm.set
- * and Algorithm.get for details.
- *
- * Here is an example of SIFT use in your application via the Algorithm
- * interface:
- *
- * // C++ code:
- *
- * #include "opencv2/opencv.hpp"
- * #include "opencv2/nonfree/nonfree.hpp"...
- *
- * initModule_nonfree(); // to load SURF/SIFT etc.
- *
- * Ptr<Feature2D> sift = Algorithm::create<Feature2D>("Feature2D.SIFT");
- *
- * FileStorage fs("sift_params.xml", FileStorage::READ);
- * if( fs.isOpened() ) // if we have file with parameters, read them
- * {
- *     sift->read(fs["sift_params"]);
- *     fs.release();
- * }
- * else // else modify the parameters and store them; user can later edit the file to use different parameters
- * {
- *     sift->set("contrastThreshold", 0.01f); // lower the contrast threshold, compared to the default value
- *     {
- *         WriteStructContext ws(fs, "sift_params", CV_NODE_MAP);
- *         sift->write(fs);
- *     }
- * }
- *
- * Mat image = imread("myimage.png", 0), descriptors;
- * vector<KeyPoint> keypoints;
- * (*sift)(image, noArray(), keypoints, descriptors);
- *
- * @see org.opencv.core.Algorithm
- */
+//javadoc: Algorithm
 public class Algorithm {
 
     protected final long nativeObj;
     protected Algorithm(long addr) { nativeObj = addr; }
 
+    public long getNativeObjAddr() { return nativeObj; }
 
     //
-    // C++: static Ptr_Algorithm Algorithm::_create(string name)
+    // C++: String getDefaultName()
     //
 
-    // Return type 'Ptr_Algorithm' is not supported, skipping the function
-
-    //
-    // C++: Ptr_Algorithm Algorithm::getAlgorithm(string name)
-    //
-
-    // Return type 'Ptr_Algorithm' is not supported, skipping the function
-
-    //
-    // C++: bool Algorithm::getBool(string name)
-    //
-
-    public boolean getBool(String name)
-    {
-
-        boolean retVal = getBool_0(nativeObj, name);
-
-        return retVal;
-    }
-
-    //
-    // C++: double Algorithm::getDouble(string name)
-    //
-
-    public double getDouble(String name)
+    //javadoc: Algorithm::getDefaultName()
+    public String getDefaultName()
     {
-
-        double retVal = getDouble_0(nativeObj, name);
-
+
+        String retVal = getDefaultName_0(nativeObj);
+
         return retVal;
     }
 
     //
-    // C++: int Algorithm::getInt(string name)
+    // C++: void clear()
     //
 
-    public int getInt(String name)
+    //javadoc: Algorithm::clear()
+    public void clear()
     {
-
-        int retVal = getInt_0(nativeObj, name);
-
-        return retVal;
-    }
-
-    //
-    // C++: static void Algorithm::getList(vector_string& algorithms)
-    //
-
-    // Unknown type 'vector_string' (O), skipping the function
-
-    //
-    // C++: Mat Algorithm::getMat(string name)
-    //
-
-    public Mat getMat(String name)
-    {
-
-        Mat retVal = new Mat(getMat_0(nativeObj, name));
-
-        return retVal;
-    }
-
-    //
-    // C++: vector_Mat Algorithm::getMatVector(string name)
-    //
-
-    public List<Mat> getMatVector(String name)
diff --git a/imaging-utils/src/main/java/org/opencv/core/Core.java b/imaging-utils/src/main/java/org/opencv/core/Core.java
-/**
- * Performs a look-up table transform of an array.
- *
- * The function LUT fills the output array with values from the
- * look-up table. Indices of the entries are taken from the input array. That
- * is, the function processes each element of src as follows:
- *
- * dst(I) <- lut(src(I) + d)
- *
- * where
- *
- * d = 0 if src has depth CV_8U; 128 if src has depth CV_8S
- *
- * @param src input array of 8-bit elements.
- * @param lut look-up table of 256 elements; in case of multi-channel input
- * array, the table should either have a single channel (in this case the same
- * table is used for all channels) or the same number of channels as in the
- * input array.
- * @param dst output array of the same size and number of channels as
- * src, and the same depth as lut.
- * @param interpolation an interpolation flag.
- *
- * @see org.opencv.core.Core.LUT
- * @see org.opencv.core.Mat#convertTo
- * @see org.opencv.core.Core#convertScaleAbs
- */
- public static void LUT(Mat src, Mat lut, Mat dst, int interpolation)
- {
-
- LUT_0(src.nativeObj, lut.nativeObj, dst.nativeObj, interpolation);
-
- return;
- }
-
-/**
- * Performs a look-up table transform of an array. See the LUT overload above
- * for the full description of the operation and parameters.
- *
- * @see org.opencv.core.Core.LUT
- */
- public static void LUT(Mat src, Mat lut, Mat dst)
- {
-
- LUT_1(src.nativeObj, lut.nativeObj, dst.nativeObj);
-
- return;
- }
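
A small sketch of the look-up-table transform described above, assuming the three-argument Core.LUT(Mat, Mat, Mat) remains available after this change; the inversion table is an illustrative choice.

    import org.opencv.core.Core;
    import org.opencv.core.CvType;
    import org.opencv.core.Mat;

    public class LutSketch {
        // Applies dst(I) = lut(src(I)) with an inversion table, so dst = 255 - src.
        static Mat invert8u(Mat src) {
            Mat lut = new Mat(1, 256, CvType.CV_8U);
            byte[] table = new byte[256];
            for (int i = 0; i < 256; i++) {
                table[i] = (byte) (255 - i);
            }
            lut.put(0, 0, table);
            Mat dst = new Mat();
            Core.LUT(src, lut, dst);
            return dst;
        }
    }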
+ FONT_ITALIC = 16,
+ ROTATE_90_CLOCKWISE = 0,
+ ROTATE_180 = 1,
+ ROTATE_90_COUNTERCLOCKWISE = 2,
+ TYPE_GENERAL = 0,
+ TYPE_MARKER = 0+1,
+ TYPE_WRAPPER = 0+2,
+ TYPE_FUN = 0+3,
+ IMPL_PLAIN = 0,
+ IMPL_IPP = 0+1,
+ IMPL_OPENCL = 0+2,
+ FLAGS_NONE = 0,
+ FLAGS_MAPPING = 0x01,
+ FLAGS_EXPAND_SAME_NAMES = 0x02;
//
- // C++: double Mahalanobis(Mat v1, Mat v2, Mat icovar)
+ // C++: Scalar mean(Mat src, Mat mask = Mat())
//
-/**
- * Calculates the Mahalanobis distance between two vectors.
- *
- * The function Mahalanobis calculates and returns the weighted
- * distance between two vectors:
- *
- * d(vec1, vec2) = sqrt(sum_{i,j}(icovar(i,j)*(vec1(I)-vec2(I))*(vec1(j)-vec2(j))))
- *
- * The covariance matrix may be calculated using the "calcCovarMatrix" function
- * and then inverted using the "invert" function (preferably using the
- * DECOMP_SVD method, as the most accurate).
- */
-
-/**
- * Calculates the per-element absolute difference between two arrays or between
- * an array and a scalar.
- *
- * The function absdiff calculates:
- *
- * Absolute difference between two arrays when they have the same size and
- * type:
- *
- * dst(I) = saturate(|src1(I) - src2(I)|)
- *
- * Absolute difference between an array and a scalar when the second array is
- * constructed from Scalar or has as many elements as the number of channels
- * in src1:
- *
- * dst(I) = saturate(|src1(I) - src2|)
- *
- * Absolute difference between a scalar and an array when the first array is
- * constructed from Scalar or has as many elements as the number of channels
- * in src2:
- *
- * dst(I) = saturate(|src1 - src2(I)|)
- *
- * where I is a multi-dimensional index of array elements. In case
- * of multi-channel arrays, each channel is processed independently.
- *
- * Note: Saturation is not applied when the arrays have the depth
- * CV_32S. You may even get a negative value in the case of overflow.
- */
- public static void absdiff(Mat src1, Mat src2, Mat dst)
- {
-
-     absdiff_0(src1.nativeObj, src2.nativeObj, dst.nativeObj);
-
-     return;
- }
-
-/**
- * Calculates the per-element absolute difference between an array and a
- * scalar. See the absdiff overload above for the full description.
- */
- public static void absdiff(Mat src1, Scalar src2, Mat dst)
- {
-
-     absdiff_1(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj);
-
-     return;
- }
- * - *The function add
calculates:
dst(I) = saturate(src1(I) + src2(I)) if mask(I) != 0
- * - *src2
is constructed
- * from Scalar
or has the same number of elements as
- * src1.channels()
:
- * dst(I) = saturate(src1(I) + src2) if mask(I) != 0
- * - *src1
is constructed
- * from Scalar
or has the same number of elements as
- * src2.channels()
:
- * dst(I) = saturate(src1 + src2(I)) if mask(I) != 0
- * - *where I
is a multi-dimensional index of array elements. In case
- * of multi-channel arrays, each channel is processed independently.
- * The first function in the list above can be replaced with matrix expressions:
- *
// C++ code:
- * - *dst = src1 + src2;
- * - *dst += src1; // equivalent to add(dst, src1, dst);
- * - *The input arrays and the output array can all have the same or different
- * depths. For example, you can add a 16-bit unsigned array to a 8-bit signed
- * array and store the sum as a 32-bit floating-point array. Depth of the output
- * array is determined by the dtype
parameter. In the second and
- * third cases above, as well as in the first case, when src1.depth() ==
- * src2.depth()
, dtype
can be set to the default
- * -1
. In this case, the output array will have the same depth as
- * the input array, be it src1
, src2
or both.
- *
Note: Saturation is not applied when the output array has the depth
- * CV_32S
. You may even get result of an incorrect sign in the case
- * of overflow.
dtype
or
- * src1
/src2
.
- * @param mask optional operation mask - 8-bit single channel array, that
- * specifies elements of the output array to be changed.
- * @param dtype optional depth of the output array (see the discussion below).
- *
- * @see org.opencv.core.Core.add
- * @see org.opencv.core.Core#addWeighted
- * @see org.opencv.core.Mat#convertTo
- * @see org.opencv.core.Core#scaleAdd
- * @see org.opencv.core.Core#subtract
- */
- public static void add(Mat src1, Mat src2, Mat dst, Mat mask, int dtype)
+ //javadoc: PSNR(src1, src2)
+ public static double PSNR(Mat src1, Mat src2)
{
-
- add_0(src1.nativeObj, src2.nativeObj, dst.nativeObj, mask.nativeObj, dtype);
-
- return;
+
+ double retVal = PSNR_0(src1.nativeObj, src2.nativeObj);
+
+ return retVal;
}
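
Sketch of the newly exposed PSNR helper (not part of the diff): both inputs must have the same size and type, and the native library is assumed loaded; file names are illustrative.

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.imgcodecs.Imgcodecs;

    public class PsnrSketch {
        public static void main(String[] args) {
            Mat original = Imgcodecs.imread("original.png");
            Mat compressed = Imgcodecs.imread("compressed.png");
            // Peak signal-to-noise ratio in dB; higher means more similar.
            double psnr = Core.PSNR(original, compressed);
            System.out.println("PSNR = " + psnr + " dB");
        }
    }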
-/**
- * Calculates the per-element sum of two arrays or an array and a scalar.
- * See the first add() overload above for the full description.
- *
- * @param src1 first input array or a scalar.
- * @param src2 second input array or a scalar.
- * @param dst output array that has the same size and number of channels as the
- * input array(s).
- * @param mask optional operation mask - 8-bit single channel array, that
- * specifies elements of the output array to be changed.
- *
- * @see org.opencv.core.Core.add
- */
- public static void add(Mat src1, Mat src2, Mat dst, Mat mask)
- {
-
- add_1(src1.nativeObj, src2.nativeObj, dst.nativeObj, mask.nativeObj);
- return;
- }
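
A short sketch of the saturation and dtype behavior documented above, assuming Core.add remains available in this form; the 2x2 test values are illustrative and the native library is assumed loaded.

    import org.opencv.core.Core;
    import org.opencv.core.CvType;
    import org.opencv.core.Mat;
    import org.opencv.core.Scalar;

    public class AddSketch {
        public static void main(String[] args) {
            Mat a = new Mat(2, 2, CvType.CV_8UC1, new Scalar(200));
            Mat b = new Mat(2, 2, CvType.CV_8UC1, new Scalar(100));
            // 8-bit output saturates: 200 + 100 -> 255.
            Mat sat = new Mat();
            Core.add(a, b, sat);
            // Widening the output with dtype avoids saturation: 200 + 100 -> 300.
            Mat wide = new Mat();
            Core.add(a, b, wide, new Mat(), CvType.CV_32S);
            System.out.println(sat.dump());
            System.out.println(wide.dump());
        }
    }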
+ //
+ // C++: double determinant(Mat mtx)
+ //
-/**
- * Calculates the per-element sum of two arrays or an array and a scalar.
- * See the first add() overload above for the full description.
- *
- * @param src1 first input array or a scalar.
- * @param src2 second input array or a scalar.
- * @param dst output array that has the same size and number of channels as the
- * input array(s).
- *
- * @see org.opencv.core.Core.add
- */
- public static void add(Mat src1, Mat src2, Mat dst)
+ //javadoc: determinant(mtx)
+ public static double determinant(Mat mtx)
{
-
- add_2(src1.nativeObj, src2.nativeObj, dst.nativeObj);
-
- return;
+
+ double retVal = determinant_0(mtx.nativeObj);
+
+ return retVal;
}
//
- // C++: void add(Mat src1, Scalar src2, Mat& dst, Mat mask = Mat(), int dtype = -1)
+ // C++: double getTickFrequency()
//
-/**
- * Calculates the per-element sum of an array and a scalar.
- * See the first add() overload above for the full description.
- *
- * @param src1 first input array.
- * @param src2 scalar to add.
- * @param dst output array that has the same size and number of channels as the
- * input array; the depth is defined by dtype.
- * @param mask optional operation mask - 8-bit single channel array, that
- * specifies elements of the output array to be changed.
- * @param dtype optional depth of the output array.
- *
- * @see org.opencv.core.Core.add
- */
- public static void add(Mat src1, Scalar src2, Mat dst, Mat mask, int dtype)
+ //javadoc: getTickFrequency()
+ public static double getTickFrequency()
{
-
- add_3(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj, mask.nativeObj, dtype);
-
- return;
+
+ double retVal = getTickFrequency_0();
+
+ return retVal;
}
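
A minimal timing sketch built on the tick API exposed here, pairing getTickFrequency() with Core.getTickCount() (elapsed seconds = tick delta / ticks per second):

    import org.opencv.core.Core;

    public class TimingSketch {
        // Runs the given work and returns the elapsed wall-clock time in seconds.
        static double timeSeconds(Runnable work) {
            long start = Core.getTickCount();
            work.run();
            return (Core.getTickCount() - start) / Core.getTickFrequency();
        }
    }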
-/**
- * Calculates the per-element sum of an array and a scalar.
- * See the first add() overload above for the full description.
- *
- * @param src1 first input array.
- * @param src2 scalar to add.
- * @param dst output array that has the same size and number of channels as the
- * input array.
- * @param mask optional operation mask - 8-bit single channel array, that
- * specifies elements of the output array to be changed.
- *
- * @see org.opencv.core.Core.add
- */
- public static void add(Mat src1, Scalar src2, Mat dst, Mat mask)
- {
- add_4(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj, mask.nativeObj);
+ //
+ // C++: double invert(Mat src, Mat& dst, int flags = DECOMP_LU)
+ //
- return;
+ //javadoc: invert(src, dst, flags)
+ public static double invert(Mat src, Mat dst, int flags)
+ {
+
+ double retVal = invert_0(src.nativeObj, dst.nativeObj, flags);
+
+ return retVal;
}
-/**
- * Calculates the per-element sum of an array and a scalar.
- * See the first add() overload above for the full description.
- *
- * @param src1 first input array.
- * @param src2 scalar to add.
- * @param dst output array that has the same size and number of channels as the
- * input array.
- *
- * @see org.opencv.core.Core.add
- */
- public static void add(Mat src1, Scalar src2, Mat dst)
+ //javadoc: invert(src, dst)
+ public static double invert(Mat src, Mat dst)
{
-
- add_5(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj);
-
- return;
+
+ double retVal = invert_1(src.nativeObj, dst.nativeObj);
+
+ return retVal;
}
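
Sketch of the new invert binding (not part of the diff), using DECOMP_SVD as the Mahalanobis documentation above recommends for covariance matrices:

    import org.opencv.core.Core;
    import org.opencv.core.Mat;

    public class InvertSketch {
        // Inverts (or pseudo-inverts) a possibly ill-conditioned covariance matrix.
        static Mat invCovar(Mat covar) {
            Mat icovar = new Mat();
            // With DECOMP_SVD the return value is the inverse condition
            // number; 0 means the matrix is singular and the result is a
            // pseudo-inverse.
            double cond = Core.invert(covar, icovar, Core.DECOMP_SVD);
            if (cond == 0) {
                System.err.println("singular matrix: result is a pseudo-inverse");
            }
            return icovar;
        }
    }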
//
- // C++: void addWeighted(Mat src1, double alpha, Mat src2, double beta, double gamma, Mat& dst, int dtype = -1)
+ // C++: double kmeans(Mat data, int K, Mat& bestLabels, TermCriteria criteria, int attempts, int flags, Mat& centers = Mat())
//
-/**
- * Calculates the weighted sum of two arrays.
- *
- * The function addWeighted calculates the weighted sum of two
- * arrays as follows:
- *
- * dst(I) = saturate(src1(I)*alpha + src2(I)*beta + gamma)
- *
- * where I is a multi-dimensional index of array elements. In case
- * of multi-channel arrays, each channel is processed independently.
- *
- * The function can be replaced with a matrix expression:
- *
- * // C++ code:
- *
- * dst = src1*alpha + src2*beta + gamma;
- *
- * Note: Saturation is not applied when the output array has the depth
- * CV_32S. You may even get result of an incorrect sign in the case
- * of overflow.
- *
- * @param src1 first input array.
- * @param alpha weight of the first array elements.
- * @param src2 second input array of the same size and channel number as
- * src1.
- * @param beta weight of the second array elements.
- * @param gamma scalar added to each sum.
- * @param dst output array that has the same size and number of channels as the
- * input arrays.
- * @param dtype optional depth of the output array; when both input arrays have
- * the same depth, dtype can be set to -1, which will
- * be equivalent to src1.depth().
- *
- * @see org.opencv.core.Core.addWeighted
- * @see org.opencv.core.Core#add
- * @see org.opencv.core.Core#scaleAdd
- * @see org.opencv.core.Core#subtract
- * @see org.opencv.core.Mat#convertTo
- */
- public static void addWeighted(Mat src1, double alpha, Mat src2, double beta, double gamma, Mat dst, int dtype)
+ //javadoc: kmeans(data, K, bestLabels, criteria, attempts, flags, centers)
+ public static double kmeans(Mat data, int K, Mat bestLabels, TermCriteria criteria, int attempts, int flags, Mat centers)
{
-
- addWeighted_0(src1.nativeObj, alpha, src2.nativeObj, beta, gamma, dst.nativeObj, dtype);
-
- return;
+
+ double retVal = kmeans_0(data.nativeObj, K, bestLabels.nativeObj, criteria.type, criteria.maxCount, criteria.epsilon, attempts, flags, centers.nativeObj);
+
+ return retVal;
}
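
Usage sketch of the new kmeans binding (not part of the diff): six 1-D samples split into two clusters; samples must be CV_32F rows, and KMEANS_PP_CENTERS / the termination criteria are illustrative choices.

    import org.opencv.core.Core;
    import org.opencv.core.CvType;
    import org.opencv.core.Mat;
    import org.opencv.core.TermCriteria;

    public class KmeansSketch {
        public static void main(String[] args) {
            Mat data = new Mat(6, 1, CvType.CV_32F);
            data.put(0, 0, new float[]{1f, 2f, 3f, 101f, 102f, 103f});
            Mat labels = new Mat();
            Mat centers = new Mat();
            TermCriteria criteria =
                    new TermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER, 100, 1e-4);
            // Returns the compactness: sum of squared distances to the centers.
            double compactness = Core.kmeans(data, 2, labels, criteria, 5,
                    Core.KMEANS_PP_CENTERS, centers);
            System.out.println("compactness = " + compactness);
            System.out.println("centers = " + centers.dump());
        }
    }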
-/**
- * Calculates the weighted sum of two arrays. See the addWeighted overload
- * above for the full description.
- *
- * @param src1 first input array.
- * @param alpha weight of the first array elements.
- * @param src2 second input array of the same size and channel number as
- * src1.
- * @param beta weight of the second array elements.
- * @param gamma scalar added to each sum.
- * @param dst output array that has the same size and number of channels as the
- * input arrays.
- *
- * @see org.opencv.core.Core.addWeighted
- */
- public static void addWeighted(Mat src1, double alpha, Mat src2, double beta, double gamma, Mat dst)
+ //javadoc: kmeans(data, K, bestLabels, criteria, attempts, flags)
+ public static double kmeans(Mat data, int K, Mat bestLabels, TermCriteria criteria, int attempts, int flags)
{
-
- addWeighted_1(src1.nativeObj, alpha, src2.nativeObj, beta, gamma, dst.nativeObj);
-
- return;
+
+ double retVal = kmeans_1(data.nativeObj, K, bestLabels.nativeObj, criteria.type, criteria.maxCount, criteria.epsilon, attempts, flags);
+
+ return retVal;
}
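
A tiny sketch of the weighted-sum formula documented above (assuming Core.addWeighted remains available): a cross-fade of two same-sized, same-type images.

    import org.opencv.core.Core;
    import org.opencv.core.Mat;

    public class BlendSketch {
        // dst = src1*alpha + src2*(1 - alpha) + 0
        static Mat blend(Mat src1, Mat src2, double alpha) {
            Mat dst = new Mat();
            Core.addWeighted(src1, alpha, src2, 1.0 - alpha, 0.0, dst);
            return dst;
        }
    }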
//
- // C++: void arrowedLine(Mat& img, Point pt1, Point pt2, Scalar color, int thickness = 1, int line_type = 8, int shift = 0, double tipLength = 0.1)
+ // C++: double norm(Mat src1, Mat src2, int normType = NORM_L2, Mat mask = Mat())
//
-/**
- * Draws an arrow segment pointing from the first point to the second one.
- *
- * The function arrowedLine draws an arrow between pt1
- * and pt2 points in the image. See also "line".
- */
-
-/**
- * Draws an arrow segment pointing from the first point to the second one.
- *
- * The function arrowedLine draws an arrow between pt1
- * and pt2 points in the image. See also "line".
- */
-
-/**
- * Calculates the per-element bit-wise conjunction of two arrays or an array and
- * a scalar.
- *
- * The function calculates the per-element bit-wise logical conjunction for:
- *
- * Two arrays when src1 and src2 have the same size:
- *
- * dst(I) = src1(I) & src2(I) if mask(I) != 0
- *
- * An array and a scalar when src2 is constructed from
- * Scalar or has the same number of elements as src1.channels():
- *
- * dst(I) = src1(I) & src2 if mask(I) != 0
- *
- * A scalar and an array when src1 is constructed from
- * Scalar or has the same number of elements as src2.channels():
- *
- * dst(I) = src1 & src2(I) if mask(I) != 0
- *
- * In case of floating-point arrays, their machine-specific bit representations
- * (usually IEEE754-compliant) are used for the operation. In case of
- * multi-channel arrays, each channel is processed independently. In the second
- * and third cases above, the scalar is first converted to the array type.
- *
- * @param src1 first input array or a scalar.
- * @param src2 second input array or a scalar.
- * @param dst output array that has the same size and type as the input arrays.
- * @param mask optional operation mask, 8-bit single channel array, that
- * specifies elements of the output array to be changed.
- *
- * @see org.opencv.core.Core.bitwise_and
- */
- public static void bitwise_and(Mat src1, Mat src2, Mat dst, Mat mask)
+ //javadoc: solvePoly(coeffs, roots, maxIters)
+ public static double solvePoly(Mat coeffs, Mat roots, int maxIters)
 {
-
- bitwise_and_0(src1.nativeObj, src2.nativeObj, dst.nativeObj, mask.nativeObj);
-
- return;
+
+ double retVal = solvePoly_0(coeffs.nativeObj, roots.nativeObj, maxIters);
+
+ return retVal;
 }
-/**
- * Calculates the per-element bit-wise conjunction of two arrays or an array and
- * a scalar. See the bitwise_and overload above for the full description.
- *
- * @param src1 first input array or a scalar.
- * @param src2 second input array or a scalar.
- * @param dst output array that has the same size and type as the input arrays.
- *
- * @see org.opencv.core.Core.bitwise_and
- */
- public static void bitwise_and(Mat src1, Mat src2, Mat dst)
+ //javadoc: solvePoly(coeffs, roots)
+ public static double solvePoly(Mat coeffs, Mat roots)
 {
-
- bitwise_and_1(src1.nativeObj, src2.nativeObj, dst.nativeObj);
-
- return;
+
+ double retVal = solvePoly_1(coeffs.nativeObj, roots.nativeObj);
+
+ return retVal;
 }
 //
- // C++: void bitwise_not(Mat src, Mat& dst, Mat mask = Mat())
+ // C++: float cubeRoot(float val)
 //
-/**
- * Inverts every bit of an array.
- *
- * The function calculates per-element bit-wise inversion of the input array:
- *
- * dst(I) = !src(I)
- *
- * In case of a floating-point input array, its machine-specific bit
- * representation (usually IEEE754-compliant) is used for the operation. In case
- * of multi-channel arrays, each channel is processed independently.
- *
- * @param src input array.
- * @param dst output array that has the same size and type as the input array.
- * @param mask optional operation mask, 8-bit single channel array, that
- * specifies elements of the output array to be changed.
- *
- * @see org.opencv.core.Core.bitwise_not
- */
- public static void bitwise_not(Mat src, Mat dst, Mat mask)
+ //javadoc: cubeRoot(val)
+ public static float cubeRoot(float val)
 {
-
- bitwise_not_0(src.nativeObj, dst.nativeObj, mask.nativeObj);
-
- return;
+
+ float retVal = cubeRoot_0(val);
+
+ return retVal;
 }
-/**
- * Inverts every bit of an array. See the bitwise_not overload above for the
- * full description.
- *
- * @param src input array.
- * @param dst output array that has the same size and type as the input array.
- *
- * @see org.opencv.core.Core.bitwise_not
- */
- public static void bitwise_not(Mat src, Mat dst)
- {
- bitwise_not_1(src.nativeObj, dst.nativeObj);
+ //
+ // C++: float fastAtan2(float y, float x)
+ //
- return;
+ //javadoc: fastAtan2(y, x)
+ public static float fastAtan2(float y, float x)
+ {
+
+ float retVal = fastAtan2_0(y, x);
+
+ return retVal;
 }
 //
- // C++: void bitwise_or(Mat src1, Mat src2, Mat& dst, Mat mask = Mat())
+ // C++: int borderInterpolate(int p, int len, int borderType)
 //
- * - *The function calculates the per-element bit-wise logical disjunction for:
- *src1
and src2
have the same
- * size:
- * dst(I) = src1(I) V src2(I) if mask(I) != 0
- * - *src2
is constructed from
- * Scalar
or has the same number of elements as src1.channels()
:
- * dst(I) = src1(I) V src2 if mask(I) != 0
- * - *src1
is constructed from
- * Scalar
or has the same number of elements as src2.channels()
:
- * dst(I) = src1 V src2(I) if mask(I) != 0
- * - *In case of floating-point arrays, their machine-specific bit representations - * (usually IEEE754-compliant) are used for the operation. In case of - * multi-channel arrays, each channel is processed independently. In the second - * and third cases above, the scalar is first converted to the array type.
- * - * @param src1 first input array or a scalar. - * @param src2 second input array or a scalar. - * @param dst output array that has the same size and type as the input arrays. - * @param mask optional operation mask, 8-bit single channel array, that - * specifies elements of the output array to be changed. - * - * @see org.opencv.core.Core.bitwise_or - */ - public static void bitwise_or(Mat src1, Mat src2, Mat dst, Mat mask) - { - - bitwise_or_0(src1.nativeObj, src2.nativeObj, dst.nativeObj, mask.nativeObj); - - return; - } - -/** - *Calculates the per-element bit-wise disjunction of two arrays or an array and - * a scalar.
- * - *The function calculates the per-element bit-wise logical disjunction for:
- *src1
and src2
have the same
- * size:
- * dst(I) = src1(I) V src2(I) if mask(I) != 0
- * - *src2
is constructed from
- * Scalar
or has the same number of elements as src1.channels()
:
- * dst(I) = src1(I) V src2 if mask(I) != 0
- * - *src1
is constructed from
- * Scalar
or has the same number of elements as src2.channels()
:
- * dst(I) = src1 V src2(I) if mask(I) != 0
- * - *In case of floating-point arrays, their machine-specific bit representations - * (usually IEEE754-compliant) are used for the operation. In case of - * multi-channel arrays, each channel is processed independently. In the second - * and third cases above, the scalar is first converted to the array type.
- * - * @param src1 first input array or a scalar. - * @param src2 second input array or a scalar. - * @param dst output array that has the same size and type as the input arrays. - * - * @see org.opencv.core.Core.bitwise_or - */ - public static void bitwise_or(Mat src1, Mat src2, Mat dst) + //javadoc: borderInterpolate(p, len, borderType) + public static int borderInterpolate(int p, int len, int borderType) { - - bitwise_or_1(src1.nativeObj, src2.nativeObj, dst.nativeObj); - - return; + + int retVal = borderInterpolate_0(p, len, borderType); + + return retVal; } // - // C++: void bitwise_xor(Mat src1, Mat src2, Mat& dst, Mat mask = Mat()) + // C++: int countNonZero(Mat src) // -/** - *Calculates the per-element bit-wise "exclusive or" operation on two arrays or - * an array and a scalar.
- * - *The function calculates the per-element bit-wise logical "exclusive-or" - * operation for:
- *src1
and src2
have the same
- * size:
- * dst(I) = src1(I)(+) src2(I) if mask(I) != 0
- * - *src2
is constructed from
- * Scalar
or has the same number of elements as src1.channels()
:
- * dst(I) = src1(I)(+) src2 if mask(I) != 0
- * - *src1
is constructed from
- * Scalar
or has the same number of elements as src2.channels()
:
- * dst(I) = src1(+) src2(I) if mask(I) != 0
- * - *In case of floating-point arrays, their machine-specific bit representations - * (usually IEEE754-compliant) are used for the operation. In case of - * multi-channel arrays, each channel is processed independently. In the 2nd and - * 3rd cases above, the scalar is first converted to the array type.
- * - * @param src1 first input array or a scalar. - * @param src2 second input array or a scalar. - * @param dst output array that has the same size and type as the input arrays. - * @param mask optional operation mask, 8-bit single channel array, that - * specifies elements of the output array to be changed. - * - * @see org.opencv.core.Core.bitwise_xor - */ - public static void bitwise_xor(Mat src1, Mat src2, Mat dst, Mat mask) - { - - bitwise_xor_0(src1.nativeObj, src2.nativeObj, dst.nativeObj, mask.nativeObj); - - return; - } - -/** - *Calculates the per-element bit-wise "exclusive or" operation on two arrays or - * an array and a scalar.
- * - *The function calculates the per-element bit-wise logical "exclusive-or" - * operation for:
- *src1
and src2
have the same
- * size:
- * dst(I) = src1(I)(+) src2(I) if mask(I) != 0
- * - *src2
is constructed from
- * Scalar
or has the same number of elements as src1.channels()
:
- * dst(I) = src1(I)(+) src2 if mask(I) != 0
- * - *src1
is constructed from
- * Scalar
or has the same number of elements as src2.channels()
:
- * dst(I) = src1(+) src2(I) if mask(I) != 0
- * - *In case of floating-point arrays, their machine-specific bit representations - * (usually IEEE754-compliant) are used for the operation. In case of - * multi-channel arrays, each channel is processed independently. In the 2nd and - * 3rd cases above, the scalar is first converted to the array type.
- * - * @param src1 first input array or a scalar. - * @param src2 second input array or a scalar. - * @param dst output array that has the same size and type as the input arrays. - * - * @see org.opencv.core.Core.bitwise_xor - */ - public static void bitwise_xor(Mat src1, Mat src2, Mat dst) + //javadoc: countNonZero(src) + public static int countNonZero(Mat src) { - - bitwise_xor_1(src1.nativeObj, src2.nativeObj, dst.nativeObj); - - return; + + int retVal = countNonZero_0(src.nativeObj); + + return retVal; } // - // C++: void calcCovarMatrix(Mat samples, Mat& covar, Mat& mean, int flags, int ctype = CV_64F) + // C++: int getNumThreads() // -/** - *Calculates the covariance matrix of a set of vectors.
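
For reference, a minimal usage sketch of the bit-wise operations whose javadoc is removed above (editorial addition, not part of the generated file; the Mats and sample values are illustrative):

    Mat a = new Mat(1, 4, CvType.CV_8U);
    a.put(0, 0, 12, 12, 12, 12);            // every element is 0b1100
    Mat b = new Mat(1, 4, CvType.CV_8U);
    b.put(0, 0, 10, 10, 10, 10);            // every element is 0b1010
    Mat and = new Mat(), or = new Mat(), xor = new Mat(), not = new Mat();
    Core.bitwise_and(a, b, and);            // every element is 8   (0b1000)
    Core.bitwise_or(a, b, or);              // every element is 14  (0b1110)
    Core.bitwise_xor(a, b, xor);            // every element is 6   (0b0110)
    Core.bitwise_not(a, not);               // every element is 243 (0b11110011)
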
-/**
- * Calculates the covariance matrix of a set of vectors.
- *
- * The function calcCovarMatrix calculates the covariance matrix and,
- * optionally, the mean vector of the set of input vectors.
- *
- * @param samples samples stored as rows or columns of a single matrix.
- * @param covar output covariance matrix of the type ctype and square size.
- * @param mean input or output (depending on the flags) array as the average
- * value of the input vectors.
- * @param flags operation flags as a combination of the following values:
- * CV_COVAR_SCRAMBLED: the covariance matrix is calculated as
- *     scale * [vects[0]-mean, vects[1]-mean, ...]^T * [vects[0]-mean, vects[1]-mean, ...].
- * The covariance matrix will be nsamples x nsamples. Such an unusual
- * covariance matrix is used for fast PCA of a set of very large vectors (see,
- * for example, the EigenFaces technique for face recognition). Eigenvalues of
- * this "scrambled" matrix match the eigenvalues of the true covariance
- * matrix. The "true" eigenvectors can be easily calculated from the
- * eigenvectors of the "scrambled" covariance matrix.
- * CV_COVAR_NORMAL: the covariance matrix is calculated as
- *     scale * [vects[0]-mean, vects[1]-mean, ...] * [vects[0]-mean, vects[1]-mean, ...]^T.
- * covar will be a square matrix of the same size as the total number of
- * elements in each input vector. One and only one of CV_COVAR_SCRAMBLED and
- * CV_COVAR_NORMAL must be specified.
- * CV_COVAR_USE_AVG: the function does not calculate mean from the input
- * vectors but, instead, uses the passed mean vector. This is useful if mean
- * has been pre-calculated or known in advance, or if the covariance matrix is
- * calculated by parts. In this case, mean is not a mean vector of the input
- * sub-set of vectors but rather the mean vector of the whole set.
- * CV_COVAR_SCALE: the covariance matrix is scaled. In the "normal" mode,
- * scale is 1./nsamples. In the "scrambled" mode, scale is the reciprocal of
- * the total number of elements in each input vector. By default (if the flag
- * is not specified), the covariance matrix is not scaled (scale=1).
- * CV_COVAR_ROWS: all the input vectors are stored as rows of the samples
- * matrix. mean should be a single-row vector in this case.
- * CV_COVAR_COLS: all the input vectors are stored as columns of the samples
- * matrix. mean should be a single-column vector in this case.
- */
-
-/**
- * Calculates the covariance matrix of a set of vectors.
- *
- * The function calcCovarMatrix calculates the covariance matrix and,
- * optionally, the mean vector of the set of input vectors; see the variant
- * above for the meaning of the CV_COVAR_* flags.
- *
- * @param covar output covariance matrix of the type ctype and square size.
- * @param mean input or output (depending on the flags) array as the average
- * value of the input vectors.
- * @param flags operation flags as a combination of the values listed above.
- */
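
A minimal sketch of the covariance computation described above (editorial addition, not part of the generated file; sample values are illustrative):

    Mat samples = new Mat(4, 3, CvType.CV_32F);  // one observation per row
    samples.put(0, 0,
            1, 2, 3,
            2, 4, 6,
            3, 6, 9,
            4, 8, 12);
    Mat covar = new Mat();
    Mat mean = new Mat();
    Core.calcCovarMatrix(samples, covar, mean,
            Core.COVAR_NORMAL | Core.COVAR_ROWS | Core.COVAR_SCALE,
            CvType.CV_32F);
    // covar is 3x3 and mean is a 1x3 row vector
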
-/**
- * Calculates the magnitude and angle of 2D vectors.
- *
- * The function cartToPolar calculates either the magnitude, angle, or both
- * for every 2D vector (x(I),y(I)):
- *     magnitude(I) = sqrt(x(I)^2 + y(I)^2),
- *     angle(I) = atan2(y(I), x(I)) [*180/pi]
- *
- * The angles are calculated with accuracy about 0.3 degrees. For the point
- * (0,0), the angle is set to 0.
- *
- * @param x array of x-coordinates; this must be a single-precision or
- * double-precision floating-point array.
- * @param y array of y-coordinates that must have the same size and same type
- * as x.
- * @param magnitude output array of magnitudes of the same size and type as x.
- * @param angle output array of angles that has the same size and type as x;
- * the angles are measured in radians (from 0 to 2*Pi) or in degrees (0 to
- * 360 degrees).
- * @param angleInDegrees a flag indicating whether the angles are measured in
- * radians (the default) or in degrees.
- *
- * @see org.opencv.core.Core.cartToPolar
- * @see org.opencv.imgproc.Imgproc#Scharr
- * @see org.opencv.imgproc.Imgproc#Sobel
- */
- public static void cartToPolar(Mat x, Mat y, Mat magnitude, Mat angle, boolean angleInDegrees)
+ //javadoc: getOptimalDFTSize(vecsize)
+ public static int getOptimalDFTSize(int vecsize)
{
-
- cartToPolar_0(x.nativeObj, y.nativeObj, magnitude.nativeObj, angle.nativeObj, angleInDegrees);
-
- return;
+
+ int retVal = getOptimalDFTSize_0(vecsize);
+
+ return retVal;
}
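
For reference, a minimal usage sketch of the method added above (editorial addition, not part of the generated file; the sizes are illustrative):

    int rows = 317, cols = 451;                    // arbitrary input size
    int optRows = Core.getOptimalDFTSize(rows);    // 320 = 2^6 * 5
    int optCols = Core.getOptimalDFTSize(cols);    // 480 = 2^5 * 3 * 5
    // pad the input with zeros to optRows x optCols before calling Core.dft
    // for noticeably faster transforms on awkward sizes
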
-/**
- * Calculates the magnitude and angle of 2D vectors.
- *
- * The function cartToPolar calculates either the magnitude, angle, or both
- * for every 2D vector (x(I),y(I)); see the variant above. With this overload
- * the angles are measured in radians.
- *
- * @param x array of x-coordinates; this must be a single-precision or
- * double-precision floating-point array.
- * @param y array of y-coordinates that must have the same size and same type
- * as x.
- * @param magnitude output array of magnitudes of the same size and type as x.
- * @param angle output array of angles that has the same size and type as x.
- *
- * @see org.opencv.core.Core.cartToPolar
- */
- public static void cartToPolar(Mat x, Mat y, Mat magnitude, Mat angle)
- {
- cartToPolar_1(x.nativeObj, y.nativeObj, magnitude.nativeObj, angle.nativeObj);
+ //
+ // C++: int getThreadNum()
+ //
- return;
+ //javadoc: getThreadNum()
+ public static int getThreadNum()
+ {
+
+ int retVal = getThreadNum_0();
+
+ return retVal;
}
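
A short sketch of the cartToPolar semantics documented in the deleted javadoc above (editorial addition; sample values are illustrative):

    Mat x = new Mat(1, 4, CvType.CV_32F);
    Mat y = new Mat(1, 4, CvType.CV_32F);
    x.put(0, 0, 1, 0, -1, 1);
    y.put(0, 0, 0, 1, 0, 1);
    Mat mag = new Mat(), ang = new Mat();
    Core.cartToPolar(x, y, mag, ang, true);   // angleInDegrees = true
    // mag = {1, 1, 1, ~1.414}, ang = {0, 90, 180, 45}
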
//
- // C++: bool checkRange(Mat a, bool quiet = true, _hidden_ * pos = 0, double minVal = -DBL_MAX, double maxVal = DBL_MAX)
+ // C++: int solveCubic(Mat coeffs, Mat& roots)
//
-/**
- * Checks every element of an input array for invalid values.
- *
- * The function checkRange checks that every array element is neither NaN nor
- * infinite. When minVal < -DBL_MAX and maxVal < DBL_MAX, the function also
- * checks that each value is between minVal and maxVal. In case of
- * multi-channel arrays, each channel is processed independently. If some
- * values are out of range, the position of the first outlier is stored in
- * pos (when pos != NULL). Then, the function either returns false (when
- * quiet=true) or throws an exception.
- */
-
-/**
- * Checks every element of an input array for invalid values.
- *
- * The function checkRange checks that every array element is neither NaN nor
- * infinite, and optionally that each value is between minVal and maxVal, as
- * described for the overload above.
- */
-
-/**
- * Draws a circle.
- *
- * The function circle draws a simple or filled circle with a given center
- * and radius.
- */
-
-/**
- * Draws a circle.
- *
- * The function circle draws a simple or filled circle with a given center
- * and radius.
- */
-
-/**
- * Draws a circle.
- *
- * The function circle draws a simple or filled circle with a given center
- * and radius.
- */
-
-/**
- * Clips the line against the image rectangle.
- *
- * The function clipLine calculates the part of the line segment that is
- * entirely within the specified rectangle. It returns false if the line
- * segment is completely outside the rectangle; otherwise, it returns true.
- */
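
A brief sketch of the clipLine semantics described above, assuming the 2.4-era org.opencv.core.Core.clipLine binding that this change removes from Core (later releases expose the drawing helpers from Imgproc):

    Rect imgRect = new Rect(0, 0, 640, 480);
    Point p1 = new Point(-100, 240);
    Point p2 = new Point(900, 240);
    boolean visible = Core.clipLine(imgRect, p1, p2);
    // visible == true, and p1/p2 are moved in place onto the rectangle border
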
-/**
- * Performs the per-element comparison of two arrays or an array and a scalar
- * value.
- *
- * The function compares:
- * elements of two arrays when src1 and src2 have the same size:
- *     dst(I) = src1(I) cmpop src2(I)
- * elements of src1 with a scalar src2 when src2 is constructed from Scalar
- * or has a single element:
- *     dst(I) = src1(I) cmpop src2
- * a scalar src1 with elements of src2 when src1 is constructed from Scalar
- * or has a single element:
- *     dst(I) = src1 cmpop src2(I)
- *
- * When the comparison result is true, the corresponding element of the
- * output array is set to 255. The comparison operations can be replaced with
- * the equivalent matrix expressions:
- * // C++ code:
- *     Mat dst1 = src1 >= src2;
- *     Mat dst2 = src1 < 8;
- *     ...
- *
- * @param src1 first input array or a scalar; when it is an array, it must
- * have a single channel.
- * @param src2 second input array or a scalar; when it is an array, it must
- * have a single channel.
- * @param dst output array that has the same size and type as the input arrays.
- * @param cmpop a flag that specifies correspondence between the arrays:
- * CMP_EQ: src1 is equal to src2.
- * CMP_GT: src1 is greater than src2.
- * CMP_GE: src1 is greater than or equal to src2.
- * CMP_LT: src1 is less than src2.
- * CMP_LE: src1 is less than or equal to src2.
- * CMP_NE: src1 is unequal to src2.
- */
-
-/**
- * Performs the per-element comparison of two arrays or an array and a scalar
- * value.
- *
- * The function compares elements of two arrays, or an array and a scalar,
- * exactly as described for the overload above; elements for which the
- * comparison is true are set to 255 in the output array.
- *
- * @param src1 first input array or a scalar.
- * @param src2 second input array or a scalar.
- * @param dst output array that has the same size and type as the input arrays.
- * @param cmpop a flag that specifies correspondence between the arrays
- * (CMP_EQ, CMP_GT, CMP_GE, CMP_LT, CMP_LE, or CMP_NE).
- */
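
For reference, a minimal sketch of the comparison described above (editorial addition; sample values are illustrative):

    Mat src = new Mat(1, 5, CvType.CV_8U);
    src.put(0, 0, 3, 8, 12, 8, 1);
    Mat dst = new Mat();
    Core.compare(src, new Scalar(8), dst, Core.CMP_GE);
    // dst = {0, 255, 255, 255, 0}
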
-/**
- * Copies the lower or the upper half of a square matrix to its other half.
- *
- * The function completeSymm copies the lower or the upper half of a square
- * matrix to its other half. The matrix diagonal remains unchanged: the upper
- * half is copied to the lower half when lowerToUpper=false, and the lower
- * half is copied to the upper half when lowerToUpper=true.
- */
-
-/**
- * Copies the lower or the upper half of a square matrix to its other half.
- *
- * The function completeSymm copies the lower or the upper half of a square
- * matrix to its other half. The matrix diagonal remains unchanged: the upper
- * half is copied to the lower half when lowerToUpper=false, and the lower
- * half is copied to the upper half when lowerToUpper=true.
- */
-/**
- * Scales, calculates absolute values, and converts the result to 8-bit.
- *
- * On each element of the input array, the function convertScaleAbs performs
- * three operations sequentially: scaling, taking an absolute value, and
- * conversion to an unsigned 8-bit type:
- *     dst(I) = saturate_cast<uchar>(|src(I)*alpha + beta|)
- * In case of multi-channel arrays, the function processes each channel
- * independently. When the output is not 8-bit, the operation can be emulated
- * by calling the Mat.convertTo method (or by using matrix expressions) and
- * then by calculating an absolute value of the result. For example:
- * // C++ code:
- *     Mat_<float> A(30, 30);
- *     randu(A, Scalar(-100), Scalar(100));
- *     Mat_<float> B = A*5 + 3;
- *     B = abs(B);
- *     // Mat_<float> B = abs(A*5+3) will also do the job,
- *     // but it will allocate a temporary matrix
- *
- * @param src input array.
- * @param dst output array.
- * @param alpha optional scale factor.
- * @param beta optional delta added to the scaled values.
- *
- * @see org.opencv.core.Core.convertScaleAbs
- * @see org.opencv.core.Mat#convertTo
- */
-    public static void convertScaleAbs(Mat src, Mat dst, double alpha, double beta)
+    //javadoc: SVDecomp(src, w, u, vt, flags)
+    public static void SVDecomp(Mat src, Mat w, Mat u, Mat vt, int flags)
    {
-
-        convertScaleAbs_0(src.nativeObj, dst.nativeObj, alpha, beta);
-
+
+        SVDecomp_0(src.nativeObj, w.nativeObj, u.nativeObj, vt.nativeObj, flags);
+
        return;
    }

-/**
- * Scales, calculates absolute values, and converts the result to 8-bit.
- *
- * On each element of the input array, the function convertScaleAbs performs
- * scaling, taking an absolute value, and conversion to an unsigned 8-bit
- * type, as described for the overload above.
- *
- * @param src input array.
- * @param dst output array.
- *
- * @see org.opencv.core.Core.convertScaleAbs
- * @see org.opencv.core.Mat#convertTo
- */
-    public static void convertScaleAbs(Mat src, Mat dst)
+    //javadoc: SVDecomp(src, w, u, vt)
+    public static void SVDecomp(Mat src, Mat w, Mat u, Mat vt)
    {
-
-        convertScaleAbs_1(src.nativeObj, dst.nativeObj);
-
+
+        SVDecomp_1(src.nativeObj, w.nativeObj, u.nativeObj, vt.nativeObj);
+
        return;
    }

    //
-    // C++: int countNonZero(Mat src)
+    // C++: void absdiff(Mat src1, Mat src2, Mat& dst)
    //

-/**
- * Counts non-zero array elements.
- *
- * The function returns the number of non-zero elements in src:
- *     count = sum(over I: src(I) != 0) 1
- *
- * @param src single-channel array.
- *
- * @see org.opencv.core.Core.countNonZero
- * @see org.opencv.core.Core#minMaxLoc
- * @see org.opencv.core.Core#calcCovarMatrix
- * @see org.opencv.core.Core#meanStdDev
- * @see org.opencv.core.Core#norm
- * @see org.opencv.core.Core#mean
- */
-    public static int countNonZero(Mat src)
+    //javadoc: absdiff(src1, src2, dst)
+    public static void absdiff(Mat src1, Mat src2, Mat dst)
    {
-
-        int retVal = countNonZero_0(src.nativeObj);
-
-        return retVal;
+
+        absdiff_0(src1.nativeObj, src2.nativeObj, dst.nativeObj);
+
+        return;
    }

    //
-    // C++: float cubeRoot(float val)
+    // C++: void absdiff(Mat src1, Scalar src2, Mat& dst)
    //
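
A minimal sketch of the SVDecomp binding added above (editorial addition; sample values are illustrative):

    Mat A = new Mat(2, 2, CvType.CV_32F);
    A.put(0, 0, 3, 0, 0, 2);
    Mat w = new Mat(), u = new Mat(), vt = new Mat();
    Core.SVDecomp(A, w, u, vt);
    // w holds the singular values {3, 2}; A == u * diag(w) * vt
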
-/**
- * Computes the cube root of an argument.
- *
- * The function cubeRoot computes the cube root of val. Negative arguments
- * are handled correctly. NaN and Inf are not handled. The accuracy
- * approaches the maximum possible accuracy for single-precision data.
- */
-/**
- * Performs a forward or inverse discrete Cosine transform of a 1D or 2D array.
- *
- * The function dct performs a forward or inverse discrete Cosine transform
- * (DCT) of a 1D or 2D floating-point array:
- * Forward Cosine transform of a 1D vector of N elements:
- *     Y = C^N * X
- * where
- *     C^N_(jk) = sqrt(alpha_j/N) * cos((pi*(2k+1)*j)/(2N))
- * and alpha_0=1, alpha_j=2 for j > 0.
- * Inverse Cosine transform of a 1D vector of N elements:
- *     X = (C^N)^(-1) * Y = (C^N)^T * Y
- * (since C^N is an orthogonal matrix, C^N * (C^N)^T = I)
- * Forward 2D Cosine transform of an M x N matrix:
- *     Y = C^N * X * (C^N)^T
- * Inverse 2D Cosine transform of an M x N matrix:
- *     X = (C^N)^T * Y * C^N
- *
- * The function chooses the mode of operation by looking at the flags and
- * size of the input array:
- * If (flags & DCT_INVERSE) == 0, the function does a forward 1D or 2D
- * transform. Otherwise, it is an inverse 1D or 2D transform.
- * If (flags & DCT_ROWS) != 0, the function performs a 1D transform of each
- * row.
- *
- * Note: Currently dct supports even-size arrays (2, 4, 6...). For data
- * analysis and approximation, you can pad the array when necessary. Also,
- * the function performance depends very much, and not monotonically, on the
- * array size (see "getOptimalDFTSize"). In the current implementation, the
- * DCT of a vector of size N is calculated via the DFT of a vector of size
- * N/2. Thus, the optimal DCT size N1 >= N can be calculated as:
- * // C++ code:
- *     size_t getOptimalDCTSize(size_t N) { return 2*getOptimalDFTSize((N+1)/2); }
- *     N1 = getOptimalDCTSize(N);
- *
- * @param src input floating-point array.
- * @param dst output array of the same size and type as src.
- * @param flags transformation flags as a combination of the values above.
- */
-
-/**
- * Performs a forward or inverse discrete Cosine transform of a 1D or 2D array.
- *
- * The function dct performs a forward or inverse discrete Cosine transform
- * (DCT) of a 1D or 2D floating-point array, exactly as described for the
- * overload above, with flags=0.
- *
- * @param src input floating-point array.
- * @param dst output array of the same size and type as src.
- *
- * @see org.opencv.core.Core.dct
- * @see org.opencv.core.Core#dft
- * @see org.opencv.core.Core#idct
- * @see org.opencv.core.Core#getOptimalDFTSize
- */
- public static void dct(Mat src, Mat dst)
+ //javadoc: add(src1, src2, dst, mask)
+ public static void add(Mat src1, Mat src2, Mat dst, Mat mask)
{
+
+ add_1(src1.nativeObj, src2.nativeObj, dst.nativeObj, mask.nativeObj);
+
+ return;
+ }
- dct_1(src.nativeObj, dst.nativeObj);
-
+ //javadoc: add(src1, src2, dst)
+ public static void add(Mat src1, Mat src2, Mat dst)
+ {
+
+ add_2(src1.nativeObj, src2.nativeObj, dst.nativeObj);
+
return;
}
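
For reference, a minimal sketch of the masked add variant added above (editorial addition; sample values are illustrative):

    Mat a = Mat.ones(2, 2, CvType.CV_8U);
    Mat b = Mat.ones(2, 2, CvType.CV_8U);
    Mat sum = Mat.zeros(2, 2, CvType.CV_8U);
    Mat mask = Mat.zeros(2, 2, CvType.CV_8U);
    mask.put(0, 0, 255);              // select only the top-left element
    Core.add(a, b, sum, mask);
    // sum = [2, 0; 0, 0] - elements where mask == 0 keep their old value
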
//
- // C++: double determinant(Mat mtx)
+ // C++: void add(Mat src1, Scalar src2, Mat& dst, Mat mask = Mat(), int dtype = -1)
//
-/**
- * Returns the determinant of a square floating-point matrix.
- *
- * The function determinant calculates and returns the determinant of the
- * specified matrix. For small matrices (mtx.cols=mtx.rows<=3), the direct
- * method is used. For larger matrices, the function uses LU factorization
- * with partial pivoting.
- *
- * For symmetric positive-definite matrices, it is also possible to use
- * "eigen" decomposition to calculate the determinant.
- *
- * @param mtx input matrix that must have CV_32FC1 or CV_64FC1 type and
- * square size.
- *
- * @see org.opencv.core.Core.determinant
- * @see org.opencv.core.Core#invert
- * @see org.opencv.core.Core#solve
- * @see org.opencv.core.Core#eigen
- * @see org.opencv.core.Core#trace
- */
- public static double determinant(Mat mtx)
+ //javadoc: add(src1, src2, dst, mask, dtype)
+ public static void add(Mat src1, Scalar src2, Mat dst, Mat mask, int dtype)
{
+
+ add_3(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj, mask.nativeObj, dtype);
+
+ return;
+ }
- double retVal = determinant_0(mtx.nativeObj);
+ //javadoc: add(src1, src2, dst, mask)
+ public static void add(Mat src1, Scalar src2, Mat dst, Mat mask)
+ {
+
+ add_4(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj, mask.nativeObj);
+
+ return;
+ }
- return retVal;
+ //javadoc: add(src1, src2, dst)
+ public static void add(Mat src1, Scalar src2, Mat dst)
+ {
+
+ add_5(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj);
+
+ return;
}
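
A short sketch of the Scalar overload added above (editorial addition; sample values are illustrative):

    Mat img = Mat.ones(2, 2, CvType.CV_8U);     // every pixel = 1
    Mat brighter = new Mat();
    Core.add(img, new Scalar(100), brighter);   // saturating per-element add
    // every element of brighter is now 101
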
//
- // C++: void dft(Mat src, Mat& dst, int flags = 0, int nonzeroRows = 0)
+ // C++: void addWeighted(Mat src1, double alpha, Mat src2, double beta, double gamma, Mat& dst, int dtype = -1)
//
-/**
- * Performs a forward or inverse Discrete Fourier transform of a 1D or 2D - * floating-point array.
- * - *The function performs one of the following:
- *N
- * elements:
- * Y = F^N * X,
- * - *where F^N_(jk)=exp(-2pi i j k/N) and i=sqrt(-1)
- *N
- * elements:
- * X'= (F^N)^(-1) * Y = (F^N)^* * y - * X = (1/N) * X,
- * - *where F^*=(Re(F^N)-Im(F^N))^T
- *M x N
matrix:
- * Y = F^M * X * F^N
- * - *M x N
matrix:
- * X'= (F^M)^* * Y * (F^N)^* - * X = 1/(M * N) * X'
- * - *In case of real (single-channel) data, the output spectrum of the forward - * Fourier transform or input spectrum of the inverse Fourier transform can be - * represented in a packed format called *CCS* (complex-conjugate-symmetrical). - * It was borrowed from IPL (Intel* Image Processing Library). Here is how 2D - * *CCS* spectrum looks:
- * - *Re Y_(0,0) Re Y_(0,1) Im Y_(0,1) Re Y_(0,2) Im Y_(0,2) *s Re Y_(0,N/2-1) - * Im Y_(0,N/2-1) Re Y_(0,N/2) - * Re Y_(1,0) Re Y_(1,1) Im Y_(1,1) Re Y_(1,2) Im Y_(1,2) *s Re Y_(1,N/2-1) Im - * Y_(1,N/2-1) Re Y_(1,N/2) - * Im Y_(1,0) Re Y_(2,1) Im Y_(2,1) Re Y_(2,2) Im Y_(2,2) *s Re Y_(2,N/2-1) Im - * Y_(2,N/2-1) Im Y_(1,N/2)........................... - * Re Y_(M/2-1,0) Re Y_(M-3,1) Im Y_(M-3,1)......... Re Y_(M-3,N/2-1) Im - * Y_(M-3,N/2-1) Re Y_(M/2-1,N/2) - * Im Y_(M/2-1,0) Re Y_(M-2,1) Im Y_(M-2,1)......... Re Y_(M-2,N/2-1) Im - * Y_(M-2,N/2-1) Im Y_(M/2-1,N/2) - * Re Y_(M/2,0) Re Y_(M-1,1) Im Y_(M-1,1)......... Re Y_(M-1,N/2-1) Im - * Y_(M-1,N/2-1) Re Y_(M/2,N/2)
- * - *In case of 1D transform of a real vector, the output looks like the first row - * of the matrix above.
- * - *So, the function chooses an operation mode depending on the flags and size of - * the input array:
- *DFT_ROWS
is set or the input array has a single row or
- * single column, the function performs a 1D forward or inverse transform of
- * each row of a matrix when DFT_ROWS
is set. Otherwise, it
- * performs a 2D transform.
- * DFT_INVERSE
is not set,
- * the function performs a forward 1D or 2D transform:
- * DFT_COMPLEX_OUTPUT
is set, the output is a complex
- * matrix of the same size as input.
- * DFT_COMPLEX_OUTPUT
is not set, the output is a real
- * matrix of the same size as input. In case of 2D transform, it uses the packed
- * format as shown above. In case of a single 1D transform, it looks like the
- * first row of the matrix above. In case of multiple 1D transforms (when using
- * the DFT_ROWS
flag), each row of the output matrix looks like the
- * first row of the matrix above.
- * DFT_INVERSE
or
- * DFT_REAL_OUTPUT
are not set, the output is a complex array of
- * the same size as input. The function performs a forward or inverse 1D or 2D
- * transform of the whole input array or each row of the input array
- * independently, depending on the flags DFT_INVERSE
and
- * DFT_ROWS
.
- * DFT_INVERSE
is set and the input array is real, or
- * it is complex but DFT_REAL_OUTPUT
is set, the output is a real
- * array of the same size as input. The function performs a 1D or 2D inverse
- * transformation of the whole input array or each individual row, depending on
- * the flags DFT_INVERSE
and DFT_ROWS
.
- * If DFT_SCALE
is set, the scaling is done after the
- * transformation.
Unlike "dct", the function supports arrays of arbitrary size. But only those
- * arrays are processed efficiently, whose sizes can be factorized in a product
- * of small prime numbers (2, 3, and 5 in the current implementation). Such an
- * efficient DFT size can be calculated using the "getOptimalDFTSize" method.
- * The sample below illustrates how to calculate a DFT-based convolution of two
- * 2D real arrays:
// C++ code:
- * - *void convolveDFT(InputArray A, InputArray B, OutputArray C)
- * - * - *// reallocate the output array if needed
- * - *C.create(abs(A.rows - B.rows)+1, abs(A.cols - B.cols)+1, A.type());
- * - *Size dftSize;
- * - *// calculate the size of DFT transform
- * - *dftSize.width = getOptimalDFTSize(A.cols + B.cols - 1);
- * - *dftSize.height = getOptimalDFTSize(A.rows + B.rows - 1);
- * - *// allocate temporary buffers and initialize them with 0's
- * - *Mat tempA(dftSize, A.type(), Scalar.all(0));
- * - *Mat tempB(dftSize, B.type(), Scalar.all(0));
- * - *// copy A and B to the top-left corners of tempA and tempB, respectively
- * - *Mat roiA(tempA, Rect(0,0,A.cols,A.rows));
- * - *A.copyTo(roiA);
- * - *Mat roiB(tempB, Rect(0,0,B.cols,B.rows));
- * - *B.copyTo(roiB);
- * - *// now transform the padded A & B in-place;
- * - *// use "nonzeroRows" hint for faster processing
- * - *dft(tempA, tempA, 0, A.rows);
- * - *dft(tempB, tempB, 0, B.rows);
- * - *// multiply the spectrums;
- * - *// the function handles packed spectrum representations well
- * - *mulSpectrums(tempA, tempB, tempA);
- * - *// transform the product back from the frequency domain.
- * - *// Even though all the result rows will be non-zero,
- * - *// you need only the first C.rows of them, and thus you
- * - *// pass nonzeroRows == C.rows
- * - *dft(tempA, tempA, DFT_INVERSE + DFT_SCALE, C.rows);
- * - *// now copy the result back to C.
- * - *tempA(Rect(0, 0, C.cols, C.rows)).copyTo(C);
- * - *// all the temporary buffers will be deallocated automatically
- * - * - *To optimize this sample, consider the following approaches:
- *nonzeroRows != 0
is passed to the forward transform
- * calls and since A
and B
are copied to the top-left
- * corners of tempA
and tempB
, respectively, it is not
- * necessary to clear the whole tempA
and tempB
. It is
- * only necessary to clear the tempA.cols - A.cols
- * (tempB.cols - B.cols
) rightmost columns of the matrices.
- * B
is significantly smaller than
- * A
or vice versa. Instead, you can calculate convolution by
- * parts. To do this, you need to split the output array C
into
- * multiple tiles. For each tile, estimate which parts of A
and
- * B
are required to calculate convolution in this tile. If the
- * tiles in C
are too small, the speed will decrease a lot because
- * of repeated work. In the ultimate case, when each tile in C
is a
- * single pixel, the algorithm becomes equivalent to the naive convolution
- * algorithm. If the tiles are too big, the temporary arrays tempA
- * and tempB
become too big and there is also a slowdown because of
- * bad cache locality. So, there is an optimal tile size somewhere in the
- * middle.
- * C
can be calculated in parallel
- * and, thus, the convolution is done by parts, the loop can be threaded.
- * All of the above improvements have been implemented in "matchTemplate" and
- * "filter2D". Therefore, by using them, you can get the performance even better
- * than with the above theoretically optimal implementation. Though, those two
- * functions actually calculate cross-correlation, not convolution, so you need
- * to "flip" the second convolution operand B
vertically and
- * horizontally using "flip".
Note:
- *flags
.
- * @param flags transformation flags, representing a combination of the
- * following values:
- * DFT_INVERSE
.
- * DFT_COMPLEX_OUTPUT
- * flag), the output is a real array; while the function itself does not check
- * whether the input is symmetrical or not, you can pass the flag and then the
- * function will assume the symmetry and produce the real output array (note
- * that when the input is packed into a real array and inverse transformation is
- * executed, the function treats the input as a packed complex-conjugate
- * symmetrical array, and the output will also be a real array).
- * nonzeroRows
rows of the input array
- * (DFT_INVERSE
is not set) or only the first nonzeroRows
- * of the output array (DFT_INVERSE
is set) contain non-zeros,
- * thus, the function can handle the rest of the rows more efficiently and save
- * some time; this technique is very useful for calculating array
- * cross-correlation or convolution using DFT.
- *
- * @see org.opencv.core.Core.dft
- * @see org.opencv.imgproc.Imgproc#matchTemplate
- * @see org.opencv.core.Core#mulSpectrums
- * @see org.opencv.core.Core#cartToPolar
- * @see org.opencv.core.Core#flip
- * @see org.opencv.core.Core#magnitude
- * @see org.opencv.core.Core#phase
- * @see org.opencv.core.Core#dct
- * @see org.opencv.imgproc.Imgproc#filter2D
- * @see org.opencv.core.Core#getOptimalDFTSize
- */
- public static void dft(Mat src, Mat dst, int flags, int nonzeroRows)
+ //javadoc: addWeighted(src1, alpha, src2, beta, gamma, dst, dtype)
+ public static void addWeighted(Mat src1, double alpha, Mat src2, double beta, double gamma, Mat dst, int dtype)
{
-
- dft_0(src.nativeObj, dst.nativeObj, flags, nonzeroRows);
-
+
+ addWeighted_0(src1.nativeObj, alpha, src2.nativeObj, beta, gamma, dst.nativeObj, dtype);
+
return;
}
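
A minimal sketch of the addWeighted blending added above (editorial addition; sample values are illustrative):

    Mat imgA = Mat.ones(2, 2, CvType.CV_8U);
    Mat imgB = new Mat(2, 2, CvType.CV_8U, new Scalar(101));
    Mat blend = new Mat();
    Core.addWeighted(imgA, 0.5, imgB, 0.5, 0.0, blend);
    // dst(I) = saturate(src1(I)*alpha + src2(I)*beta + gamma) -> every element is 51
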
-/**
- * Performs a forward or inverse Discrete Fourier transform of a 1D or 2D
- * floating-point array.
- *
- * The function performs a forward or inverse 1D or 2D transform of the
- * input array, exactly as described for the overload above, with flags=0
- * and nonzeroRows=0.
- *
- * @param src input array that could be real or complex.
- * @param dst output array whose size and type depend on the flags.
- *
- * @see org.opencv.core.Core.dft
- * @see org.opencv.imgproc.Imgproc#matchTemplate
- * @see org.opencv.core.Core#mulSpectrums
- * @see org.opencv.core.Core#cartToPolar
- * @see org.opencv.core.Core#flip
- * @see org.opencv.core.Core#magnitude
- * @see org.opencv.core.Core#phase
- * @see org.opencv.core.Core#dct
- * @see org.opencv.imgproc.Imgproc#filter2D
- * @see org.opencv.core.Core#getOptimalDFTSize
- */
- public static void dft(Mat src, Mat dst)
+ //javadoc: addWeighted(src1, alpha, src2, beta, gamma, dst)
+ public static void addWeighted(Mat src1, double alpha, Mat src2, double beta, double gamma, Mat dst)
{
-
- dft_1(src.nativeObj, dst.nativeObj);
-
+
+ addWeighted_1(src1.nativeObj, alpha, src2.nativeObj, beta, gamma, dst.nativeObj);
+
return;
}
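
For reference, a minimal round-trip sketch of the dft behavior documented in the deleted javadoc above (editorial addition; sample values are illustrative):

    Mat signal = new Mat(1, 8, CvType.CV_32F, new Scalar(0));
    signal.put(0, 0, 1);                        // unit impulse
    Mat spectrum = new Mat();
    Core.dft(signal, spectrum, Core.DFT_COMPLEX_OUTPUT, 0);
    // the spectrum of an impulse is flat: every complex bin is 1 + 0i
    Mat restored = new Mat();
    Core.dft(spectrum, restored,
            Core.DFT_INVERSE | Core.DFT_SCALE | Core.DFT_REAL_OUTPUT, 0);
    // restored equals the original signal up to floating-point error
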
//
- // C++: void divide(Mat src1, Mat src2, Mat& dst, double scale = 1, int dtype = -1)
+ // C++: void batchDistance(Mat src1, Mat src2, Mat& dist, int dtype, Mat& nidx, int normType = NORM_L2, int K = 0, Mat mask = Mat(), int update = 0, bool crosscheck = false)
//
-/**
- * Performs per-element division of two arrays or a scalar by an array.
- * - *The functions divide
divide one array by another:
dst(I) = saturate(src1(I)*scale/src2(I))
- * - *or a scalar by an array when there is no src1
:
dst(I) = saturate(scale/src2(I))
- * - *When src2(I)
is zero, dst(I)
will also be zero.
- * Different channels of multi-channel arrays are processed independently.
Note: Saturation is not applied when the output array has the depth
- * CV_32S
. You may even get result of an incorrect sign in the case
- * of overflow.
src1
.
- * @param dst output array of the same size and type as src2
.
- * @param scale scalar factor.
- * @param dtype optional depth of the output array; if -1
,
- * dst
will have depth src2.depth()
, but in case of an
- * array-by-array division, you can only pass -1
when
- * src1.depth()==src2.depth()
.
- *
- * @see org.opencv.core.Core.divide
- * @see org.opencv.core.Core#multiply
- * @see org.opencv.core.Core#add
- * @see org.opencv.core.Core#subtract
- */
- public static void divide(Mat src1, Mat src2, Mat dst, double scale, int dtype)
+ //javadoc: batchDistance(src1, src2, dist, dtype, nidx, normType, K, mask, update, crosscheck)
+ public static void batchDistance(Mat src1, Mat src2, Mat dist, int dtype, Mat nidx, int normType, int K, Mat mask, int update, boolean crosscheck)
{
-
- divide_0(src1.nativeObj, src2.nativeObj, dst.nativeObj, scale, dtype);
-
+
+ batchDistance_0(src1.nativeObj, src2.nativeObj, dist.nativeObj, dtype, nidx.nativeObj, normType, K, mask.nativeObj, update, crosscheck);
+
return;
}
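
A minimal sketch of the batchDistance binding added above (editorial addition; sample values are illustrative):

    Mat query = new Mat(2, 2, CvType.CV_32F);   // two 2-D points, one per row
    query.put(0, 0, 0, 0, 1, 1);
    Mat train = new Mat(3, 2, CvType.CV_32F);   // three 2-D points
    train.put(0, 0, 0, 1, 2, 2, 1, 1);
    Mat dist = new Mat(), nidx = new Mat();
    Core.batchDistance(query, train, dist, CvType.CV_32F, nidx);
    // with the default K = 0 this fills the full 2x3 L2 distance matrix;
    // dist.get(1, 2)[0] == 0 because query row 1 equals train row 2
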
-/**
- * Performs per-element division of two arrays or a scalar by an array.
- *
- * The function divide divides one array by another:
- *     dst(I) = saturate(src1(I)*scale/src2(I))
- * When src2(I) is zero, dst(I) will also be zero; see the overload above
- * for the full description.
- *
- * @param src1 first input array.
- * @param src2 second input array of the same size and type as src1.
- * @param dst output array of the same size and type as src2.
- * @param scale scalar factor.
- *
- * @see org.opencv.core.Core.divide
- * @see org.opencv.core.Core#multiply
- * @see org.opencv.core.Core#add
- * @see org.opencv.core.Core#subtract
- */
- public static void divide(Mat src1, Mat src2, Mat dst, double scale)
+ //javadoc: batchDistance(src1, src2, dist, dtype, nidx, normType, K)
+ public static void batchDistance(Mat src1, Mat src2, Mat dist, int dtype, Mat nidx, int normType, int K)
{
-
- divide_1(src1.nativeObj, src2.nativeObj, dst.nativeObj, scale);
-
+
+ batchDistance_1(src1.nativeObj, src2.nativeObj, dist.nativeObj, dtype, nidx.nativeObj, normType, K);
+
return;
}
-/**
- * Performs per-element division of two arrays or a scalar by an array.
- *
- * The function divide divides one array by another:
- *     dst(I) = saturate(src1(I)/src2(I))
- * When src2(I) is zero, dst(I) will also be zero; see the first overload
- * for the full description.
- *
- * @param src1 first input array.
- * @param src2 second input array of the same size and type as src1.
- * @param dst output array of the same size and type as src2.
- *
- * @see org.opencv.core.Core.divide
- * @see org.opencv.core.Core#multiply
- * @see org.opencv.core.Core#add
- * @see org.opencv.core.Core#subtract
- */
- public static void divide(Mat src1, Mat src2, Mat dst)
+ //javadoc: batchDistance(src1, src2, dist, dtype, nidx)
+ public static void batchDistance(Mat src1, Mat src2, Mat dist, int dtype, Mat nidx)
{
-
- divide_2(src1.nativeObj, src2.nativeObj, dst.nativeObj);
-
+
+ batchDistance_2(src1.nativeObj, src2.nativeObj, dist.nativeObj, dtype, nidx.nativeObj);
+
return;
}
//
- // C++: void divide(double scale, Mat src2, Mat& dst, int dtype = -1)
+ // C++: void bitwise_and(Mat src1, Mat src2, Mat& dst, Mat mask = Mat())
//
-/**
- * Performs per-element division of two arrays or a scalar by an array.
- * - *The functions divide
divide one array by another:
dst(I) = saturate(src1(I)*scale/src2(I))
- * - *or a scalar by an array when there is no src1
:
dst(I) = saturate(scale/src2(I))
- * - *When src2(I)
is zero, dst(I)
will also be zero.
- * Different channels of multi-channel arrays are processed independently.
Note: Saturation is not applied when the output array has the depth
- * CV_32S
. You may even get result of an incorrect sign in the case
- * of overflow.
src1
.
- * @param dst output array of the same size and type as src2
.
- * @param dtype optional depth of the output array; if -1
,
- * dst
will have depth src2.depth()
, but in case of an
- * array-by-array division, you can only pass -1
when
- * src1.depth()==src2.depth()
.
- *
- * @see org.opencv.core.Core.divide
- * @see org.opencv.core.Core#multiply
- * @see org.opencv.core.Core#add
- * @see org.opencv.core.Core#subtract
- */
- public static void divide(double scale, Mat src2, Mat dst, int dtype)
+ //javadoc: bitwise_and(src1, src2, dst, mask)
+ public static void bitwise_and(Mat src1, Mat src2, Mat dst, Mat mask)
{
+
+ bitwise_and_0(src1.nativeObj, src2.nativeObj, dst.nativeObj, mask.nativeObj);
+
+ return;
+ }
- divide_3(scale, src2.nativeObj, dst.nativeObj, dtype);
-
- return;
- }
-
-/**
- * Performs per-element division of a scalar by an array.
- *
- * The function divide divides a scalar by an array:
- *     dst(I) = saturate(scale/src2(I))
- * When src2(I) is zero, dst(I) will also be zero.
- *
- * @param scale scalar factor.
- * @param src2 input array.
- * @param dst output array of the same size and type as src2.
- *
- * @see org.opencv.core.Core.divide
- * @see org.opencv.core.Core#multiply
- * @see org.opencv.core.Core#add
- * @see org.opencv.core.Core#subtract
- */
- public static void divide(double scale, Mat src2, Mat dst)
+ //javadoc: bitwise_and(src1, src2, dst)
+ public static void bitwise_and(Mat src1, Mat src2, Mat dst)
{
-
- divide_4(scale, src2.nativeObj, dst.nativeObj);
-
+
+ bitwise_and_1(src1.nativeObj, src2.nativeObj, dst.nativeObj);
+
return;
}
//
- // C++: void divide(Mat src1, Scalar src2, Mat& dst, double scale = 1, int dtype = -1)
+ // C++: void bitwise_not(Mat src, Mat& dst, Mat mask = Mat())
//
-/**
- * Performs per-element division of two arrays or a scalar by an array.
- * - *The functions divide
divide one array by another:
dst(I) = saturate(src1(I)*scale/src2(I))
- * - *or a scalar by an array when there is no src1
:
dst(I) = saturate(scale/src2(I))
- * - *When src2(I)
is zero, dst(I)
will also be zero.
- * Different channels of multi-channel arrays are processed independently.
Note: Saturation is not applied when the output array has the depth
- * CV_32S
. You may even get result of an incorrect sign in the case
- * of overflow.
src1
.
- * @param dst output array of the same size and type as src2
.
- * @param scale scalar factor.
- * @param dtype optional depth of the output array; if -1
,
- * dst
will have depth src2.depth()
, but in case of an
- * array-by-array division, you can only pass -1
when
- * src1.depth()==src2.depth()
.
- *
- * @see org.opencv.core.Core.divide
- * @see org.opencv.core.Core#multiply
- * @see org.opencv.core.Core#add
- * @see org.opencv.core.Core#subtract
- */
- public static void divide(Mat src1, Scalar src2, Mat dst, double scale, int dtype)
- {
-
- divide_5(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj, scale, dtype);
-
- return;
- }
-
-/**
- * Performs per-element division of an array by a scalar.
- *
- * The function divide divides each element of src1 by the scalar src2:
- *     dst(I) = saturate(src1(I)*scale/src2)
- *
- * @param src1 first input array.
- * @param src2 scalar divisor.
- * @param dst output array of the same size and type as src1.
- * @param scale scalar factor.
- *
- * @see org.opencv.core.Core.divide
- * @see org.opencv.core.Core#multiply
- * @see org.opencv.core.Core#add
- * @see org.opencv.core.Core#subtract
- */
- public static void divide(Mat src1, Scalar src2, Mat dst, double scale)
+ //javadoc: bitwise_not(src, dst, mask)
+ public static void bitwise_not(Mat src, Mat dst, Mat mask)
{
+
+ bitwise_not_0(src.nativeObj, dst.nativeObj, mask.nativeObj);
+
+ return;
+ }
- divide_6(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj, scale);
-
- return;
- }
-
-/**
- * Performs per-element division of an array by a scalar.
- *
- * The function divide divides each element of src1 by the scalar src2:
- *     dst(I) = saturate(src1(I)/src2)
- *
- * @param src1 first input array.
- * @param src2 scalar divisor.
- * @param dst output array of the same size and type as src1.
- *
- * @see org.opencv.core.Core.divide
- * @see org.opencv.core.Core#multiply
- * @see org.opencv.core.Core#add
- * @see org.opencv.core.Core#subtract
- */
- public static void divide(Mat src1, Scalar src2, Mat dst)
+ //javadoc: bitwise_not(src, dst)
+ public static void bitwise_not(Mat src, Mat dst)
{
-
- divide_7(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj);
-
+
+ bitwise_not_1(src.nativeObj, dst.nativeObj);
+
return;
}
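
A short sketch of the divide semantics documented in the deleted javadoc above (editorial addition; sample values are illustrative):

    Mat num = new Mat(1, 3, CvType.CV_32F);
    num.put(0, 0, 10, 6, 1);
    Mat den = new Mat(1, 3, CvType.CV_32F);
    den.put(0, 0, 2, 0, 4);
    Mat q = new Mat();
    Core.divide(num, den, q);
    // q = {5, 0, 0.25} - division by zero produces 0 rather than an error
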
//
- // C++: void drawMarker(Mat& img, Point position, Scalar color, int markerType = MARKER_CROSS, int markerSize = 20, int thickness = 1, int line_type = 8)
+ // C++: void bitwise_or(Mat src1, Mat src2, Mat& dst, Mat mask = Mat())
//
- public static void drawMarker(Mat img, Point position, Scalar color, int markerType, int markerSize, int thickness, int line_type)
+ //javadoc: bitwise_or(src1, src2, dst, mask)
+ public static void bitwise_or(Mat src1, Mat src2, Mat dst, Mat mask)
{
-
- drawMarker_0(img.nativeObj, position.x, position.y, color.val[0], color.val[1], color.val[2], color.val[3], markerType, markerSize, thickness, line_type);
-
+
+ bitwise_or_0(src1.nativeObj, src2.nativeObj, dst.nativeObj, mask.nativeObj);
+
return;
}
- public static void drawMarker(Mat img, Point position, Scalar color)
+ //javadoc: bitwise_or(src1, src2, dst)
+ public static void bitwise_or(Mat src1, Mat src2, Mat dst)
{
-
- drawMarker_1(img.nativeObj, position.x, position.y, color.val[0], color.val[1], color.val[2], color.val[3]);
-
+
+ bitwise_or_1(src1.nativeObj, src2.nativeObj, dst.nativeObj);
+
return;
}
//
- // C++: bool eigen(Mat src, bool computeEigenvectors, Mat& eigenvalues, Mat& eigenvectors)
+ // C++: void bitwise_xor(Mat src1, Mat src2, Mat& dst, Mat mask = Mat())
//
-/**
- * Calculates eigenvalues and eigenvectors of a symmetric matrix.
- *
- * The function eigen calculates just eigenvalues, or eigenvalues and
- * eigenvectors of the symmetric matrix src:
- *
- * // C++ code:
- * src*eigenvectors.row(i).t() = eigenvalues.at<srcType>(i)*eigenvectors.row(i).t()
- *
- * Note: in the new and the old interfaces different ordering of eigenvalues
- * and eigenvectors parameters is used.
- *
- * @param src input matrix that must have CV_32FC1 or CV_64FC1 type, square
- * size and be symmetrical (src^T == src).
- * @param computeEigenvectors flag specifying whether the eigenvectors
- * should also be computed.
- * @param eigenvalues output vector of eigenvalues of the same type as src;
- * the eigenvalues are stored in the descending order.
- * @param eigenvectors output matrix of eigenvectors; it has the same size
- * and type as src; the eigenvectors are stored as subsequent matrix rows,
- * in the same order as the corresponding eigenvalues.
- *
- * @see org.opencv.core.Core.eigen
- * @see org.opencv.core.Core#completeSymm
- */
- public static boolean eigen(Mat src, boolean computeEigenvectors, Mat eigenvalues, Mat eigenvectors)
+ //javadoc: bitwise_xor(src1, src2, dst, mask)
+ public static void bitwise_xor(Mat src1, Mat src2, Mat dst, Mat mask)
{
+
+ bitwise_xor_0(src1.nativeObj, src2.nativeObj, dst.nativeObj, mask.nativeObj);
+
+ return;
+ }
- boolean retVal = eigen_0(src.nativeObj, computeEigenvectors, eigenvalues.nativeObj, eigenvectors.nativeObj);
-
- return retVal;
+ //javadoc: bitwise_xor(src1, src2, dst)
+ public static void bitwise_xor(Mat src1, Mat src2, Mat dst)
+ {
+
+ bitwise_xor_1(src1.nativeObj, src2.nativeObj, dst.nativeObj);
+
+ return;
}
//
- // C++: void ellipse(Mat& img, Point center, Size axes, double angle, double startAngle, double endAngle, Scalar color, int thickness = 1, int lineType = 8, int shift = 0)
- //
-
-/**
- * Draws a simple or thick elliptic arc or fills an ellipse sector.
- *
- * The functions ellipse with less parameters draw an ellipse outline, a
- * filled ellipse, an elliptic arc, or a filled ellipse sector. A
- * piecewise-linear curve is used to approximate the elliptic arc boundary.
- * If you need more control of the ellipse rendering, you can retrieve the
- * curve using "ellipse2Poly" and then render it with "polylines" or fill it
- * with "fillPoly". If you use the first variant of the function and want to
- * draw the whole ellipse, not an arc, pass startAngle=0 and endAngle=360.
- * The figure below explains the meaning of the parameters.
- *
- * Figure 1. Parameters of Elliptic Arc
- *
- * @param box Alternative ellipse representation via RotatedRect or
- * CvBox2D. This means that the function draws an ellipse inscribed
- * in the rotated rectangle.
- * @param color Ellipse color.
- * @param thickness Thickness of the ellipse arc outline, if positive.
- * Otherwise, this indicates that a filled ellipse sector is to be drawn.
- * @param lineType Type of the ellipse boundary. See the "line" description.
- *
- * @see org.opencv.core.Core.ellipse
- */
- public static void ellipse(Mat img, RotatedRect box, Scalar color, int thickness, int lineType)
- {
-
- ellipse_3(img.nativeObj, box.center.x, box.center.y, box.size.width, box.size.height, box.angle, color.val[0], color.val[1], color.val[2], color.val[3], thickness, lineType);
-
- return;
- }
-
-/**
- * Draws a simple or thick elliptic arc or fills an ellipse sector.
- *
- * The functions ellipse with less parameters draw an ellipse outline, a
- * filled ellipse, an elliptic arc, or a filled ellipse sector. A
- * piecewise-linear curve is used to approximate the elliptic arc boundary.
- * If you need more control of the ellipse rendering, you can retrieve the
- * curve using "ellipse2Poly" and then render it with "polylines" or fill it
- * with "fillPoly". If you use the first variant of the function and want to
- * draw the whole ellipse, not an arc, pass startAngle=0 and endAngle=360.
- * The figure below explains the meaning of the parameters.
- *
- * Figure 1. Parameters of Elliptic Arc
- *
- * @param box Alternative ellipse representation via RotatedRect or
- * CvBox2D. This means that the function draws an ellipse inscribed
- * in the rotated rectangle.
- * @param color Ellipse color.
- * @param thickness Thickness of the ellipse arc outline, if positive.
- * Otherwise, this indicates that a filled ellipse sector is to be drawn.
- *
- * @see org.opencv.core.Core.ellipse
- */
- public static void ellipse(Mat img, RotatedRect box, Scalar color, int thickness)
- {
+ // C++: void calcCovarMatrix(Mat samples, Mat& covar, Mat& mean, int flags, int ctype = CV_64F)
+ //
- ellipse_4(img.nativeObj, box.center.x, box.center.y, box.size.width, box.size.height, box.angle, color.val[0], color.val[1], color.val[2], color.val[3], thickness);
-
- return;
- }
-
-/**
- * Draws a simple or thick elliptic arc or fills an ellipse sector.
- *
- * The functions ellipse with less parameters draw an ellipse outline, a
- * filled ellipse, an elliptic arc, or a filled ellipse sector. A
- * piecewise-linear curve is used to approximate the elliptic arc boundary.
- * If you need more control of the ellipse rendering, you can retrieve the
- * curve using "ellipse2Poly" and then render it with "polylines" or fill it
- * with "fillPoly". If you use the first variant of the function and want to
- * draw the whole ellipse, not an arc, pass startAngle=0 and endAngle=360.
- * The figure below explains the meaning of the parameters.
- *
- * Figure 1. Parameters of Elliptic Arc
- *
- * @param box Alternative ellipse representation via RotatedRect or
- * CvBox2D. This means that the function draws an ellipse inscribed
- * in the rotated rectangle.
- * @param color Ellipse color.
- *
- * @see org.opencv.core.Core.ellipse
- */
- public static void ellipse(Mat img, RotatedRect box, Scalar color)
+ //javadoc: calcCovarMatrix(samples, covar, mean, flags, ctype)
+ public static void calcCovarMatrix(Mat samples, Mat covar, Mat mean, int flags, int ctype)
{
+
+ calcCovarMatrix_0(samples.nativeObj, covar.nativeObj, mean.nativeObj, flags, ctype);
+
+ return;
+ }
- ellipse_5(img.nativeObj, box.center.x, box.center.y, box.size.width, box.size.height, box.angle, color.val[0], color.val[1], color.val[2], color.val[3]);
-
+ //javadoc: calcCovarMatrix(samples, covar, mean, flags)
+ public static void calcCovarMatrix(Mat samples, Mat covar, Mat mean, int flags)
+ {
+
+ calcCovarMatrix_1(samples.nativeObj, covar.nativeObj, mean.nativeObj, flags);
+
return;
}
- //
- // C++: void ellipse2Poly(Point center, Size axes, int angle, int arcStart, int arcEnd, int delta, vector_Point& pts)
- //
-
-/**
- * Approximates an elliptic arc with a polyline.
- *
- * The function ellipse2Poly computes the vertices of a polyline that
- * approximates the specified elliptic arc. It is used by "ellipse".
- */
-
-/**
- * Calculates the exponent of every array element.
- *
- * The function exp calculates the exponent of every element of the input
- * array:
- *
- *   dst[I] = e^(src(I))
- *
- * The maximum relative error is about 7e-6 for single-precision input and
- * less than 1e-10 for double-precision input. Currently, the function
- * converts denormalized values to zeros on output. Special values (NaN,
- * Inf) are not handled.
- *
- * @param src input array.
- * @param dst output array of the same size and type as src.
- *
- * @see org.opencv.core.Core.exp
- * @see org.opencv.core.Core#log
- * @see org.opencv.core.Core#cartToPolar
- * @see org.opencv.core.Core#pow
- * @see org.opencv.core.Core#sqrt
- * @see org.opencv.core.Core#magnitude
- * @see org.opencv.core.Core#polarToCart
- * @see org.opencv.core.Core#phase
- */
- public static void exp(Mat src, Mat dst)
+ //javadoc: compare(src1, src2, dst, cmpop)
+ public static void compare(Mat src1, Mat src2, Mat dst, int cmpop)
{
+
+ compare_0(src1.nativeObj, src2.nativeObj, dst.nativeObj, cmpop);
+
+ return;
+ }
- exp_0(src.nativeObj, dst.nativeObj);
+ //
+ // C++: void compare(Mat src1, Scalar src2, Mat& dst, int cmpop)
+ //
+
+ //javadoc: compare(src1, src2, dst, cmpop)
+ public static void compare(Mat src1, Scalar src2, Mat dst, int cmpop)
+ {
+
+ compare_1(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj, cmpop);
+
return;
}
//
- // C++: void extractChannel(Mat src, Mat& dst, int coi)
+ // C++: void completeSymm(Mat& mtx, bool lowerToUpper = false)
//
- public static void extractChannel(Mat src, Mat dst, int coi)
+ //javadoc: completeSymm(mtx, lowerToUpper)
+ public static void completeSymm(Mat mtx, boolean lowerToUpper)
{
+
+ completeSymm_0(mtx.nativeObj, lowerToUpper);
+
+ return;
+ }
- extractChannel_0(src.nativeObj, dst.nativeObj, coi);
-
+ //javadoc: completeSymm(mtx)
+ public static void completeSymm(Mat mtx)
+ {
+
+ completeSymm_1(mtx.nativeObj);
+
return;
}
//
- // C++: float fastAtan2(float y, float x)
+ // C++: void convertFp16(Mat src, Mat& dst)
//
-/**
- * Calculates the angle of a 2D vector in degrees.
- *
- * The function fastAtan2 calculates the full-range angle of an input 2D
- * vector. The angle is measured in degrees and varies from 0 to 360
- * degrees. The accuracy is about 0.3 degrees.
- */
-
-/**
- * Fills a convex polygon.
- *
- * The function fillConvexPoly draws a filled convex polygon. This function
- * is much faster than the function fillPoly. It can fill not only convex
- * polygons but any monotonic polygon without self-intersections, that is, a
- * polygon whose contour intersects every horizontal line (scan line) twice
- * at the most (though, its top-most and/or the bottom edge could be
- * horizontal).
- */
-
-/**
- * Fills the area bounded by one or more polygons.
- *
- * The function fillPoly fills an area bounded by several polygonal
- * contours. The function can fill complex areas, for example, areas with
- * holes, contours with self-intersections (some of their parts), and so
- * forth.
- */
-
-/**
- * Flips a 2D array around vertical, horizontal, or both axes.
- *
- * The function flip flips the array in one of three different ways (row and
- * column indices are 0-based):
- *
- *   dst(i,j) = src(src.rows-i-1, j)             if flipCode = 0
- *   dst(i,j) = src(i, src.cols-j-1)             if flipCode > 0
- *   dst(i,j) = src(src.rows-i-1, src.cols-j-1)  if flipCode < 0
- *
- * The example scenarios of using the function are the following:
- *   * Vertical flipping of the image (flipCode == 0) to switch between
- *     top-left and bottom-left image origin. This is a typical operation in
- *     video processing on Microsoft Windows* OS.
- *   * Horizontal flipping of the image with the subsequent horizontal shift
- *     and absolute difference calculation to check for a vertical-axis
- *     symmetry (flipCode > 0).
- *   * Simultaneous horizontal and vertical flipping of the image with the
- *     subsequent shift and absolute difference calculation to check for a
- *     central symmetry (flipCode < 0).
- *   * Reversing the order of point arrays (flipCode > 0 or flipCode == 0).
- *
- * @param src input array.
- * @param dst output array of the same size and type as src.
- * @param flipCode a flag to specify how to flip the array; 0 means flipping
- * around the x-axis and positive value (for example, 1) means flipping around
- * y-axis. Negative value (for example, -1) means flipping around both axes (see
- * the discussion below for the formulas).
- *
- * @see org.opencv.core.Core.flip
- * @see org.opencv.core.Core#repeat
- * @see org.opencv.core.Core#transpose
- * @see org.opencv.core.Core#completeSymm
- */
- public static void flip(Mat src, Mat dst, int flipCode)
+ //javadoc: dct(src, dst)
+ public static void dct(Mat src, Mat dst)
{
-
- flip_0(src.nativeObj, dst.nativeObj, flipCode);
-
+
+ dct_1(src.nativeObj, dst.nativeObj);
+
return;
}
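A minimal sketch (not part of the diff) of flip with the three flipCode conventions described above; the class name and matrix values are invented:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class FlipDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat src = new Mat(2, 2, CvType.CV_8UC1);
        src.put(0, 0, 1, 2, 3, 4);

        Mat dst = new Mat();
        Core.flip(src, dst, 0);  // around the x-axis: rows reversed -> [3,4; 1,2]
        Core.flip(src, dst, 1);  // around the y-axis: cols reversed -> [2,1; 4,3]
        Core.flip(src, dst, -1); // both axes -> [4,3; 2,1]
        System.out.println(dst.dump());
    }
}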
//
- // C++: void gemm(Mat src1, Mat src2, double alpha, Mat src3, double beta, Mat& dst, int flags = 0)
+ // C++: void dft(Mat src, Mat& dst, int flags = 0, int nonzeroRows = 0)
//
-/**
- * Performs generalized matrix multiplication.
- *
- * The function performs generalized matrix multiplication similar to the
- * gemm functions in BLAS level 3. For example,
- * gemm(src1, src2, alpha, src3, beta, dst, GEMM_1_T + GEMM_3_T)
- * corresponds to
- *
- *   dst = alpha * src1^T * src2 + beta * src3^T
- *
- * The function can be replaced with a matrix expression. For example, the
- * above call can be replaced with:
- *
- * // C++ code:
- * dst = alpha*src1.t()*src2 + beta*src3.t();
- *
- * @param src1 first multiplied input matrix that should have CV_32FC1,
- * CV_64FC1, CV_32FC2, or CV_64FC2 type.
- * @param src2 second multiplied input matrix of the same type as src1.
- * @param alpha weight of the matrix product.
- * @param src3 third optional delta matrix added to the matrix product; it
- * should have the same type as src1 and src2.
- * @param beta weight of src3.
- * @param dst output matrix; it has the proper size and the same type as
- * input matrices.
- * @param flags operation flags:
- *   * GEMM_1_T transposes src1.
- *   * GEMM_2_T transposes src2.
- *   * GEMM_3_T transposes src3.
- */
-
-/**
- * Performs generalized matrix multiplication.
- *
- * The function performs generalized matrix multiplication similar to the
- * gemm functions in BLAS level 3. For example,
- * gemm(src1, src2, alpha, src3, beta, dst, GEMM_1_T + GEMM_3_T)
- * corresponds to
- *
- *   dst = alpha * src1^T * src2 + beta * src3^T
- *
- * The function can be replaced with a matrix expression. For example, the
- * above call can be replaced with:
- *
- * // C++ code:
- * dst = alpha*src1.t()*src2 + beta*src3.t();
- *
- * @param src1 first multiplied input matrix that should have CV_32FC1,
- * CV_64FC1, CV_32FC2, or CV_64FC2 type.
- * @param src2 second multiplied input matrix of the same type as src1.
- * @param alpha weight of the matrix product.
- * @param src3 third optional delta matrix added to the matrix product; it
- * should have the same type as src1 and src2.
- * @param beta weight of src3.
- * @param dst output matrix; it has the proper size and the same type as
- * input matrices.
- *
- * @see org.opencv.core.Core.gemm
- * @see org.opencv.core.Core#mulTransposed
- * @see org.opencv.core.Core#transform
- */
- public static void gemm(Mat src1, Mat src2, double alpha, Mat src3, double beta, Mat dst)
+ //javadoc: dft(src, dst)
+ public static void dft(Mat src, Mat dst)
{
-
- gemm_1(src1.nativeObj, src2.nativeObj, alpha, src3.nativeObj, beta, dst.nativeObj);
-
+
+ dft_1(src.nativeObj, dst.nativeObj);
+
return;
}
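A minimal sketch (not part of the diff) of the gemm wrapper with an explicit transpose flag; the class name and values are invented:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class GemmDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat a = Mat.eye(2, 2, CvType.CV_32FC1);
        Mat b = new Mat(2, 2, CvType.CV_32FC1);
        b.put(0, 0, 1, 2, 3, 4);
        Mat c = Mat.ones(2, 2, CvType.CV_32FC1);

        // dst = 2*a^T*b + 3*c, i.e. alpha=2, beta=3 with GEMM_1_T.
        Mat dst = new Mat();
        Core.gemm(a, b, 2.0, c, 3.0, dst, Core.GEMM_1_T);
        System.out.println(dst.dump()); // [5, 7; 9, 11]
    }
}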
//
- // C++: string getBuildInformation()
+ // C++: void divide(Mat src1, Mat src2, Mat& dst, double scale = 1, int dtype = -1)
//
-/**
- * Returns full configuration time cmake output.
- *
- * Returned value is raw cmake output including version control system
- * revision, compiler version, compiler flags, enabled modules and third
- * party libraries, etc. Output format depends on target architecture.
- *
- * @see org.opencv.core.Core.getBuildInformation
- */
- public static String getBuildInformation()
+ //javadoc: divide(src1, src2, dst, scale, dtype)
+ public static void divide(Mat src1, Mat src2, Mat dst, double scale, int dtype)
{
+
+ divide_0(src1.nativeObj, src2.nativeObj, dst.nativeObj, scale, dtype);
+
+ return;
+ }
- String retVal = getBuildInformation_0();
+ //javadoc: divide(src1, src2, dst, scale)
+ public static void divide(Mat src1, Mat src2, Mat dst, double scale)
+ {
+
+ divide_1(src1.nativeObj, src2.nativeObj, dst.nativeObj, scale);
+
+ return;
+ }
- return retVal;
+ //javadoc: divide(src1, src2, dst)
+ public static void divide(Mat src1, Mat src2, Mat dst)
+ {
+
+ divide_2(src1.nativeObj, src2.nativeObj, dst.nativeObj);
+
+ return;
}
//
- // C++: int64 getCPUTickCount()
+ // C++: void divide(Mat src1, Scalar src2, Mat& dst, double scale = 1, int dtype = -1)
//
-/**
- * Returns the number of CPU ticks.
- *
- * The function returns the current number of CPU ticks on some
- * architectures (such as x86, x64, PowerPC). On other platforms the
- * function is equivalent to getTickCount. It can also be used for very
- * accurate time measurements, as well as for RNG initialization. Note that
- * in case of multi-CPU systems a thread, from which getCPUTickCount is
- * called, can be suspended and resumed at another CPU with its own counter.
- * So, theoretically (and practically) the subsequent calls to the function
- * do not necessarily return the monotonously increasing values. Also, since
- * a modern CPU varies the CPU frequency depending on the load, the number
- * of CPU clocks spent in some code cannot be directly converted to time
- * units. Therefore, getTickCount is generally a preferable solution for
- * measuring execution time.
- */
-
-/**
- * Returns the number of threads used by OpenCV for parallel regions.
- * Always returns 1 if OpenCV is built without threading support.
- *
- * The exact meaning of return value depends on the threading framework used
- * by OpenCV library: if there is any tbb::thread_scheduler_init in user
- * code conflicting with OpenCV, then the function returns the default
- * number of threads used by the TBB library; under other frameworks it
- * returns the number set via setNumThreads with threads > 0, otherwise the
- * number of logical CPUs available for the process.
- */
-
-/**
- * Returns the number of logical CPUs available for the process.
- *
- * @see org.opencv.core.Core.getNumberOfCPUs
- */
- public static int getNumberOfCPUs()
+ //javadoc: exp(src, dst)
+ public static void exp(Mat src, Mat dst)
{
-
- int retVal = getNumberOfCPUs_0();
-
- return retVal;
+
+ exp_0(src.nativeObj, dst.nativeObj);
+
+ return;
}
//
- // C++: int getOptimalDFTSize(int vecsize)
+ // C++: void extractChannel(Mat src, Mat& dst, int coi)
//
-/**
- * Returns the optimal DFT size for a given vector size.
- *
- * DFT performance is not a monotonic function of a vector size. Therefore,
- * when you calculate convolution of two arrays or perform the spectral
- * analysis of an array, it usually makes sense to pad the input data with
- * zeros to get a bit larger array that can be transformed much faster than
- * the original one. Arrays whose size is a power-of-two (2, 4, 8, 16,
- * 32,...) are the fastest to process. Though, the arrays whose size is a
- * product of 2's, 3's, and 5's (for example, 300 = 5*5*3*2*2) are also
- * processed quite efficiently.
- *
- * The function getOptimalDFTSize returns the minimum number N that is
- * greater than or equal to vecsize so that the DFT of a vector of size N
- * can be processed efficiently. In the current implementation
- * N = 2^p * 3^q * 5^r for some integer p, q, r.
- *
- * The function returns a negative number if vecsize is too large (very
- * close to INT_MAX).
- *
- * While the function cannot be used directly to estimate the optimal vector
- * size for DCT transform (since the current DCT implementation supports
- * only even-size vectors), it can be easily processed as
- * getOptimalDFTSize((vecsize+1)/2)*2.
- */
-
-/**
- * Returns the index of the currently executed thread within the current
- * parallel region. Always returns 0 if called outside of parallel region.
- *
- * The exact meaning of the return value depends on the threading framework
- * used by OpenCV library.
- */
-
-/**
- * Returns the number of ticks.
- *
- * The function returns the number of ticks after the certain event (for
- * example, when the machine was turned on). It can be used to initialize
- * "RNG" or to measure a function execution time by reading the tick count
- * before and after the function call. See also the tick frequency.
- *
- * @see org.opencv.core.Core.getTickCount
- */
- public static long getTickCount()
+ //javadoc: flip(src, dst, flipCode)
+ public static void flip(Mat src, Mat dst, int flipCode)
{
-
- long retVal = getTickCount_0();
-
- return retVal;
+
+ flip_0(src.nativeObj, dst.nativeObj, flipCode);
+
+ return;
}
//
- // C++: double getTickFrequency()
+ // C++: void gemm(Mat src1, Mat src2, double alpha, Mat src3, double beta, Mat& dst, int flags = 0)
//
-/**
- * Returns the number of ticks per second.
- *
- * The function returns the number of ticks per second. That is, the
- * following code computes the execution time in seconds:
- *
- * // C++ code:
- * double t = (double)getTickCount();
- * // do something...
- * t = ((double)getTickCount() - t)/getTickFrequency();
- *
- * @see org.opencv.core.Core.getTickFrequency
- */
- public static double getTickFrequency()
+ //javadoc: gemm(src1, src2, alpha, src3, beta, dst, flags)
+ public static void gemm(Mat src1, Mat src2, double alpha, Mat src3, double beta, Mat dst, int flags)
{
+
+ gemm_0(src1.nativeObj, src2.nativeObj, alpha, src3.nativeObj, beta, dst.nativeObj, flags);
+
+ return;
+ }
- double retVal = getTickFrequency_0();
-
- return retVal;
+ //javadoc: gemm(src1, src2, alpha, src3, beta, dst)
+ public static void gemm(Mat src1, Mat src2, double alpha, Mat src3, double beta, Mat dst)
+ {
+
+ gemm_1(src1.nativeObj, src2.nativeObj, alpha, src3.nativeObj, beta, dst.nativeObj);
+
+ return;
}

@@ -4059,11 +1406,12 @@ public static double getTickFrequency()
// C++: void hconcat(vector_Mat src, Mat& dst)
//
+ //javadoc: hconcat(src, dst)
public static void hconcat(List<Mat> src, Mat dst)
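As an aside (not part of the diff), the tick-count pattern from the deleted javadoc translates directly to the Java bindings; the class name is invented:

import org.opencv.core.Core;

public class TimingDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        long t = Core.getTickCount();
        // ... do something ...
        double seconds = (Core.getTickCount() - t) / Core.getTickFrequency();
        System.out.println("elapsed: " + seconds + " s");
    }
}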
-/**
- * Calculates the inverse Discrete Cosine Transform of a 1D or 2D array.
- *
- * idct(src, dst, flags) is equivalent to dct(src, dst, flags | DCT_INVERSE).
- *
- * @param src input floating-point single-channel array.
- * @param dst output array of the same size and type as src.
- * @param flags operation flags.
- *
- * @see org.opencv.core.Core.idct
- * @see org.opencv.core.Core#dft
- * @see org.opencv.core.Core#dct
- * @see org.opencv.core.Core#getOptimalDFTSize
- * @see org.opencv.core.Core#idft
- */
+ //javadoc: idct(src, dst, flags)
public static void idct(Mat src, Mat dst, int flags)
{
-
+
idct_0(src.nativeObj, dst.nativeObj, flags);
-
+
return;
}
-/**
- * Calculates the inverse Discrete Cosine Transform of a 1D or 2D array.
- *
- * idct(src, dst, flags) is equivalent to dct(src, dst, flags | DCT_INVERSE).
- *
- * @param src input floating-point single-channel array.
- * @param dst output array of the same size and type as src.
- *
- * @see org.opencv.core.Core.idct
- * @see org.opencv.core.Core#dft
- * @see org.opencv.core.Core#dct
- * @see org.opencv.core.Core#getOptimalDFTSize
- * @see org.opencv.core.Core#idft
- */
+ //javadoc: idct(src, dst)
public static void idct(Mat src, Mat dst)
{
-
+
idct_1(src.nativeObj, dst.nativeObj);
-
+
return;
}
@@ -4124,69 +1443,21 @@ public static void idct(Mat src, Mat dst)
// C++: void idft(Mat src, Mat& dst, int flags = 0, int nonzeroRows = 0)
//
-/**
- * Calculates the inverse Discrete Fourier Transform of a 1D or 2D array.
- *
- * idft(src, dst, flags) is equivalent to dft(src, dst, flags | DFT_INVERSE).
- *
- * See "dft" for details.
- *
- * Note: None of dft and idft scales the result by default. So, you should
- * pass DFT_SCALE to one of dft or idft explicitly to make these transforms
- * mutually inverse.
- *
- * @param src input floating-point real or complex array.
- * @param dst output array whose size and type depend on the flags.
- * @param flags operation flags (see "dft").
- * @param nonzeroRows number of dst rows to process; the rest of the rows
- * have undefined content (see the convolution sample in "dft" description).
- *
- * @see org.opencv.core.Core.idft
- * @see org.opencv.core.Core#dft
- * @see org.opencv.core.Core#dct
- * @see org.opencv.core.Core#getOptimalDFTSize
- * @see org.opencv.core.Core#idct
- * @see org.opencv.core.Core#mulSpectrums
- */
+ //javadoc: idft(src, dst, flags, nonzeroRows)
public static void idft(Mat src, Mat dst, int flags, int nonzeroRows)
{
-
+
idft_0(src.nativeObj, dst.nativeObj, flags, nonzeroRows);
-
+
return;
}
-/**
- * Calculates the inverse Discrete Fourier Transform of a 1D or 2D array.
- *
- * idft(src, dst, flags) is equivalent to dft(src, dst, flags | DFT_INVERSE).
- *
- * See "dft" for details.
- *
- * Note: None of dft and idft scales the result by default. So, you should
- * pass DFT_SCALE to one of dft or idft explicitly to make these transforms
- * mutually inverse.
- *
- * @param src input floating-point real or complex array.
- * @param dst output array whose size and type depend on the flags.
- *
- * @see org.opencv.core.Core.idft
- * @see org.opencv.core.Core#dft
- * @see org.opencv.core.Core#dct
- * @see org.opencv.core.Core#getOptimalDFTSize
- * @see org.opencv.core.Core#idct
- * @see org.opencv.core.Core#mulSpectrums
- */
+ //javadoc: idft(src, dst)
public static void idft(Mat src, Mat dst)
{
-
+
idft_1(src.nativeObj, dst.nativeObj);
-
+
return;
}
@@ -4195,48 +1466,12 @@ public static void idft(Mat src, Mat dst)
// C++: void inRange(Mat src, Scalar lowerb, Scalar upperb, Mat& dst)
//
-/**
- * Checks if array elements lie between the elements of two other arrays.
- *
- * The function checks the range as follows:
- *
- * For every element of a single-channel input array:
- *   dst(I) = lowerb(I)_0 <= src(I)_0 <= upperb(I)_0
- *
- * For two-channel arrays:
- *   dst(I) = lowerb(I)_0 <= src(I)_0 <= upperb(I)_0 &&
- *            lowerb(I)_1 <= src(I)_1 <= upperb(I)_1
- *
- * and so forth. That is, dst(I) is set to 255 (all 1-bits) if src(I) is
- * within the specified 1D, 2D, 3D,... box and 0 otherwise.
- *
- * When the lower and/or upper boundary parameters are scalars, the indexes
- * (I) at lowerb and upperb in the above formulas should be omitted.
- *
- * @param src first input array.
- * @param lowerb inclusive lower boundary array or a scalar.
- * @param upperb inclusive upper boundary array or a scalar.
- * @param dst output array of the same size as src and CV_8U type.
- *
- * @see org.opencv.core.Core.inRange
- */
+ //javadoc: inRange(src, lowerb, upperb, dst)
public static void inRange(Mat src, Scalar lowerb, Scalar upperb, Mat dst)
{
-
+
inRange_0(src.nativeObj, lowerb.val[0], lowerb.val[1], lowerb.val[2], lowerb.val[3], upperb.val[0], upperb.val[1], upperb.val[2], upperb.val[3], dst.nativeObj);
-
+
return;
}
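A minimal sketch (not part of the diff) of inRange producing a CV_8U mask; the class name and channel values are invented:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;

public class InRangeDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // A tiny 3-channel "image"; the mask marks pixels inside the per-channel box.
        Mat bgr = new Mat(1, 3, CvType.CV_8UC3);
        bgr.put(0, 0, 10, 10, 10, 100, 100, 100, 250, 250, 250);

        Mat mask = new Mat();
        Core.inRange(bgr, new Scalar(50, 50, 50), new Scalar(200, 200, 200), mask);
        System.out.println(mask.dump()); // [0, 255, 0] -- CV_8U mask
    }
}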
@@ -4245,334 +1480,12 @@ public static void inRange(Mat src, Scalar lowerb, Scalar upperb, Mat dst)
// C++: void insertChannel(Mat src, Mat& dst, int coi)
//
+ //javadoc: insertChannel(src, dst, coi)
public static void insertChannel(Mat src, Mat dst, int coi)
{
-
+
insertChannel_0(src.nativeObj, dst.nativeObj, coi);
-
- return;
- }
-
-
- //
- // C++: double invert(Mat src, Mat& dst, int flags = DECOMP_LU)
- //
-
-/**
- * Finds the inverse or pseudo-inverse of a matrix.
- *
- * The function invert inverts the matrix src and stores the result in dst.
- * When the matrix src is singular or non-square, the function calculates
- * the pseudo-inverse matrix (the dst matrix) so that norm(src*dst - I) is
- * minimal, where I is an identity matrix.
- *
- * In case of the DECOMP_LU method, the function returns non-zero value if
- * the inverse has been successfully calculated and 0 if src is singular.
- *
- * In case of the DECOMP_SVD method, the function returns the inverse
- * condition number of src (the ratio of the smallest singular value to the
- * largest singular value) and 0 if src is singular. The SVD method
- * calculates a pseudo-inverse matrix if src is singular.
- *
- * Similarly to DECOMP_LU, the method DECOMP_CHOLESKY works only with
- * non-singular square matrices that should also be symmetrical and
- * positively defined. In this case, the function stores the inverted matrix
- * in dst and returns non-zero. Otherwise, it returns 0.
- *
- * @param src input floating-point M x N matrix.
- * @param dst output matrix of N x M size and the same type as src.
- * @param flags inversion method: DECOMP_LU, DECOMP_SVD, or DECOMP_CHOLESKY
- * (see the description above).
- */
-
-/**
- * Finds the inverse or pseudo-inverse of a matrix.
- *
- * The function invert inverts the matrix src and stores the result in dst.
- * When the matrix src is singular or non-square, the function calculates
- * the pseudo-inverse matrix (the dst matrix) so that norm(src*dst - I) is
- * minimal, where I is an identity matrix.
- *
- * In case of the DECOMP_LU method, the function returns non-zero value if
- * the inverse has been successfully calculated and 0 if src is singular.
- *
- * In case of the DECOMP_SVD method, the function returns the inverse
- * condition number of src (the ratio of the smallest singular value to the
- * largest singular value) and 0 if src is singular. The SVD method
- * calculates a pseudo-inverse matrix if src is singular.
- *
- * Similarly to DECOMP_LU, the method DECOMP_CHOLESKY works only with
- * non-singular square matrices that should also be symmetrical and
- * positively defined. In this case, the function stores the inverted matrix
- * in dst and returns non-zero. Otherwise, it returns 0.
- *
- * @param src input floating-point M x N matrix.
- * @param dst output matrix of N x M size and the same type as src.
- *
- * @see org.opencv.core.Core.invert
- * @see org.opencv.core.Core#solve
- */
- public static double invert(Mat src, Mat dst)
- {
-
- double retVal = invert_1(src.nativeObj, dst.nativeObj);
-
- return retVal;
- }
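A minimal sketch (not part of the diff) of invert with the SVD method, whose return value is described above; the class name and matrix are invented:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class InvertDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat m = new Mat(2, 2, CvType.CV_32FC1);
        m.put(0, 0, 4, 7, 2, 6); // det = 10, well conditioned

        Mat inv = new Mat();
        // With DECOMP_SVD the return value is the inverse condition number
        // (0 means the matrix is singular and a pseudo-inverse was computed).
        double invCond = Core.invert(m, inv, Core.DECOMP_SVD);
        System.out.println("inverse condition number = " + invCond);
        System.out.println(inv.dump()); // [0.6, -0.7; -0.2, 0.4]
    }
}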
-
-
- //
- // C++: double kmeans(Mat data, int K, Mat& bestLabels, TermCriteria criteria, int attempts, int flags, Mat& centers = Mat())
- //
-
-/**
- * Finds centers of clusters and groups input samples around the clusters.
- *
- * The function kmeans implements a k-means algorithm that finds the centers
- * of cluster_count clusters and groups the input samples around the
- * clusters. As an output, labels_i contains a 0-based cluster index for the
- * sample stored in the i-th row of the samples matrix.
- *
- * The function returns the compactness measure that is computed as
- *
- *   sum_i ||samples_i - centers_(labels_i)||^2
- *
- * after every attempt. The best (minimum) value is chosen and the
- * corresponding labels and the compactness value are returned by the
- * function. Basically, you can use only the core of the function, set the
- * number of attempts to 1, initialize labels each time using a custom
- * algorithm, pass them with the (flags = KMEANS_USE_INITIAL_LABELS) flag,
- * and then choose the best (most-compact) clustering.
- *
- * Note: examples of accepted input data layouts:
- *   Mat points(count, 2, CV_32F);
- *   Mat points(count, 1, CV_32FC2);
- *   Mat points(1, count, CV_32FC2);
- *   std::vector<Point2f> points(sampleCount);
- *
- * @param criteria the algorithm termination criteria, that is, the maximum
- * number of iterations and/or the desired accuracy; the accuracy is
- * specified as criteria.epsilon. As soon as each of the cluster centers
- * moves by less than criteria.epsilon on some iteration, the algorithm
- * stops.
- * @param attempts Flag to specify the number of times the algorithm is
- * executed using different initial labellings. The algorithm returns the
- * labels that yield the best compactness (see the last function parameter).
- * @param flags Flag that can take the following values:
- *   * KMEANS_PP_CENTERS Use kmeans++ center initialization by Arthur and
- *     Vassilvitskii [Arthur2007].
- *   * KMEANS_USE_INITIAL_LABELS Use the user-supplied labels on the first
- *     attempt instead of computing them from the initial centers; use one
- *     of the KMEANS_*_CENTERS flag to specify the exact method.
- */
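A minimal sketch (not part of the diff) of the kmeans wrapper; the class name, sample points, and criteria values are invented:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.TermCriteria;

public class KMeansDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Six 2-D points forming two obvious clusters, one sample per row.
        Mat points = new Mat(6, 2, CvType.CV_32F);
        points.put(0, 0, 1, 1, 1, 2, 2, 1, 9, 9, 9, 8, 8, 9);

        Mat labels = new Mat();
        Mat centers = new Mat();
        TermCriteria criteria =
                new TermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER, 10, 1.0);
        double compactness = Core.kmeans(points, 2, labels, criteria, 3,
                Core.KMEANS_PP_CENTERS, centers);

        System.out.println("compactness = " + compactness);
        System.out.println("centers = " + centers.dump());
    }
}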
-
-/**
- * Draws a line segment connecting two points.
- *
- * The function line draws the line segment between pt1 and pt2 points in
- * the image. The line is clipped by the image boundaries. For
- * non-antialiased lines with integer coordinates, the 8-connected or
- * 4-connected Bresenham algorithm is used. Thick lines are drawn with
- * rounding endings. Antialiased lines are drawn using Gaussian filtering.
- * To specify the line color, you may use the macro CV_RGB(r, g, b).
- */
-
-/**
- * Calculates the natural logarithm of every array element.
- *
- * The function log calculates the natural logarithm of the absolute value
- * of every element of the input array:
- *
- *   dst(I) = log|src(I)| if src(I) != 0 ; C otherwise
- *
- * where C is a large negative number (about -700 in the current
- * implementation). The maximum relative error is about 7e-6 for
- * single-precision input and less than 1e-10 for double-precision input.
- * Special values (NaN, Inf) are not handled.
- *
- * @param src input array.
- * @param dst output array of the same size and type as src.
- *
- * @see org.opencv.core.Core.log
- * @see org.opencv.core.Core#cartToPolar
- * @see org.opencv.core.Core#pow
- * @see org.opencv.core.Core#sqrt
- * @see org.opencv.core.Core#magnitude
- * @see org.opencv.core.Core#polarToCart
- * @see org.opencv.core.Core#exp
- * @see org.opencv.core.Core#phase
- */
+ //javadoc: log(src, dst)
public static void log(Mat src, Mat dst)
{
-
+
log_0(src.nativeObj, dst.nativeObj);
-
+
return;
}
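A minimal sketch (not part of the diff) pairing exp and log as described above; the class name and values are invented:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class ExpLogDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat src = new Mat(1, 3, CvType.CV_32FC1);
        src.put(0, 0, 0, 1, 2);

        Mat e = new Mat();
        Core.exp(src, e);   // [1, e, e^2]
        Mat back = new Mat();
        Core.log(e, back);  // log(exp(x)) == x within ~7e-6 relative error
        System.out.println(back.dump());
    }
}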
@@ -4620,31 +1508,12 @@ public static void log(Mat src, Mat dst)
// C++: void magnitude(Mat x, Mat y, Mat& magnitude)
//
-/**
- * Calculates the magnitude of 2D vectors.
- *
- * The function magnitude calculates the magnitude of 2D vectors formed
- * from the corresponding elements of x and y arrays:
- *
- *   dst(I) = sqrt(x(I)^2 + y(I)^2)
- *
- * @param x floating-point array of x-coordinates of the vectors.
- * @param y floating-point array of y-coordinates of the vectors; it must
- * have the same size as x.
- * @param magnitude output array of the same size and type as x.
- *
- * @see org.opencv.core.Core.magnitude
- * @see org.opencv.core.Core#cartToPolar
- * @see org.opencv.core.Core#phase
- * @see org.opencv.core.Core#sqrt
- * @see org.opencv.core.Core#polarToCart
- */
+ //javadoc: magnitude(x, y, magnitude)
public static void magnitude(Mat x, Mat y, Mat magnitude)
{
-
+
magnitude_0(x.nativeObj, y.nativeObj, magnitude.nativeObj);
-
+
return;
}
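A minimal sketch (not part of the diff) of magnitude on two Pythagorean pairs; the class name and values are invented:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class MagnitudeDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat x = new Mat(1, 2, CvType.CV_32FC1);
        Mat y = new Mat(1, 2, CvType.CV_32FC1);
        x.put(0, 0, 3, 5);
        y.put(0, 0, 4, 12);

        Mat mag = new Mat();
        Core.magnitude(x, y, mag); // sqrt(x^2 + y^2) -> [5, 13]
        System.out.println(mag.dump());
    }
}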
@@ -4653,40 +1522,12 @@ public static void magnitude(Mat x, Mat y, Mat magnitude)
// C++: void max(Mat src1, Mat src2, Mat& dst)
//
-/**
- * Calculates per-element maximum of two arrays or an array and a scalar.
- *
- * The function max calculates the per-element maximum of two arrays:
- *
- *   dst(I) = max(src1(I), src2(I))
- *
- * or array and a scalar:
- *
- *   dst(I) = max(src1(I), value)
- *
- * In the second variant, when the input array is multi-channel, each
- * channel is compared with value independently.
- *
- * The first 3 variants of the function listed above are actually a part of
- * "MatrixExpressions". They return an expression object that can be further
- * either transformed/assigned to a matrix, or passed to a function, and so
- * on.
- *
- * @param src1 first input array.
- * @param src2 second input array of the same size and type as src1.
- * @param dst output array of the same size and type as src1.
- *
- * @see org.opencv.core.Core.max
- * @see org.opencv.core.Core#compare
- * @see org.opencv.core.Core#inRange
- * @see org.opencv.core.Core#minMaxLoc
- * @see org.opencv.core.Core#min
- */
+ //javadoc: max(src1, src2, dst)
public static void max(Mat src1, Mat src2, Mat dst)
{
-
+
max_0(src1.nativeObj, src2.nativeObj, dst.nativeObj);
-
+
return;
}
@@ -4695,103 +1536,13 @@ public static void max(Mat src1, Mat src2, Mat dst)
// C++: void max(Mat src1, Scalar src2, Mat& dst)
//
-/**
- * Calculates per-element maximum of two arrays or an array and a scalar.
- *
- * The function max calculates the per-element maximum of two arrays:
- *
- *   dst(I) = max(src1(I), src2(I))
- *
- * or array and a scalar:
- *
- *   dst(I) = max(src1(I), value)
- *
- * In the second variant, when the input array is multi-channel, each
- * channel is compared with value independently.
- *
- * The first 3 variants of the function listed above are actually a part of
- * "MatrixExpressions". They return an expression object that can be further
- * either transformed/assigned to a matrix, or passed to a function, and so
- * on.
- *
- * @param src1 first input array.
- * @param src2 second input array of the same size and type as src1.
- * @param dst output array of the same size and type as src1.
- *
- * @see org.opencv.core.Core.max
- * @see org.opencv.core.Core#compare
- * @see org.opencv.core.Core#inRange
- * @see org.opencv.core.Core#minMaxLoc
- * @see org.opencv.core.Core#min
- */
+ //javadoc: max(src1, src2, dst)
public static void max(Mat src1, Scalar src2, Mat dst)
- {
-
- max_1(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj);
-
- return;
- }
-
-
- //
- // C++: Scalar mean(Mat src, Mat mask = Mat())
- //
-
-/**
- * Calculates an average (mean) of array elements.
- *
- * The function mean calculates the mean value M of array elements,
- * independently for each channel, and returns it:
- *
- *   N = sum_{I: mask(I) != 0} 1
- *   M_c = (sum_{I: mask(I) != 0} mtx(I)_c) / N
- *
- * When all the mask elements are 0's, the function returns Scalar.all(0).
- */
-
-/**
- * Calculates a mean and standard deviation of array elements.
- *
- * The function meanStdDev calculates the mean and the standard deviation M
- * of array elements independently for each channel and returns it via the
- * output parameters:
- *
- *   N = sum_{I: mask(I) != 0} 1
- *   mean_c = (sum_{I: mask(I) != 0} src(I)_c) / N
- *   stddev_c = sqrt((sum_{I: mask(I) != 0} (src(I)_c - mean_c)^2) / N)
- *
- * When all the mask elements are 0's, the function returns
- * mean=stddev=Scalar.all(0).
- *
- * Note: The calculated standard deviation is only the diagonal of the
- * complete normalized covariance matrix. If the full matrix is needed, you
- * can reshape the multi-channel array M x N to the single-channel array
- * M*N x mtx.channels() (only possible when the matrix is continuous) and
- * then pass the matrix to "calcCovarMatrix".
- */
-
-/**
- * Creates one multichannel array out of several single-channel ones.
- *
- * The function merge merges several arrays to make a single multi-channel
- * array. That is, each element of the output array will be a concatenation
- * of the elements of the input arrays, where elements of i-th input array
- * are treated as mv[i].channels()-element vectors.
- *
- * The function "split" does the reverse operation. If you need to shuffle
- * channels in some other advanced way, use "mixChannels".
- *
- * @param mv input array or vector of matrices to be merged; all the
- * matrices in mv must have the same size and the same depth.
- * @param dst output array of the same size and the same depth as mv[0];
- * the number of channels will be the total number of channels in the
- * matrix array.
- *
- * @see org.opencv.core.Core.merge
- * @see org.opencv.core.Mat#reshape
- * @see org.opencv.core.Core#mixChannels
- * @see org.opencv.core.Core#split
- */
+ //javadoc: merge(mv, dst)
public static void merge(List<Mat> mv, Mat dst)

-/**
- * Calculates per-element minimum of two arrays or an array and a scalar.
- *
- * The function min calculates the per-element minimum of two arrays:
- *
- *   dst(I) = min(src1(I), src2(I))
- *
- * or array and a scalar:
- *
- *   dst(I) = min(src1(I), value)
- *
- * In the second variant, when the input array is multi-channel, each
- * channel is compared with value independently.
- *
- * The first three variants of the function listed above are actually a part
- * of "MatrixExpressions". They return the expression object that can be
- * further either transformed/assigned to a matrix, or passed to a function,
- * and so on.
- *
- * @param src1 first input array.
- * @param src2 second input array of the same size and type as src1.
- * @param dst output array of the same size and type as src1.
- *
- * @see org.opencv.core.Core.min
- * @see org.opencv.core.Core#max
- * @see org.opencv.core.Core#compare
- * @see org.opencv.core.Core#inRange
- * @see org.opencv.core.Core#minMaxLoc
- */
+ //javadoc: min(src1, src2, dst)
public static void min(Mat src1, Mat src2, Mat dst)
{
-
+
min_0(src1.nativeObj, src2.nativeObj, dst.nativeObj);
-
+
return;
}
@@ -4962,40 +1603,12 @@ public static void min(Mat src1, Mat src2, Mat dst)
// C++: void min(Mat src1, Scalar src2, Mat& dst)
//
-/**
- * Calculates per-element minimum of two arrays or an array and a scalar.
- *
- * The function min calculates the per-element minimum of two arrays:
- *
- *   dst(I) = min(src1(I), src2(I))
- *
- * or array and a scalar:
- *
- *   dst(I) = min(src1(I), value)
- *
- * In the second variant, when the input array is multi-channel, each
- * channel is compared with value independently.
- *
- * The first three variants of the function listed above are actually a part
- * of "MatrixExpressions". They return the expression object that can be
- * further either transformed/assigned to a matrix, or passed to a function,
- * and so on.
- *
- * @param src1 first input array.
- * @param src2 second input array of the same size and type as src1.
- * @param dst output array of the same size and type as src1.
- *
- * @see org.opencv.core.Core.min
- * @see org.opencv.core.Core#max
- * @see org.opencv.core.Core#compare
- * @see org.opencv.core.Core#inRange
- * @see org.opencv.core.Core#minMaxLoc
- */
+ //javadoc: min(src1, src2, dst)
public static void min(Mat src1, Scalar src2, Mat dst)
{
-
+
min_1(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj);
-
+
return;
}
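A minimal sketch (not part of the diff) combining the Scalar variants of max and min to clamp an array; the class name, range, and values are invented:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;

public class ClampDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat src = new Mat(1, 4, CvType.CV_8UC1);
        src.put(0, 0, 5, 60, 180, 250);

        // Clamp every element to [50, 200].
        Mat tmp = new Mat();
        Mat clamped = new Mat();
        Core.max(src, new Scalar(50), tmp);      // raise the floor
        Core.min(tmp, new Scalar(200), clamped); // lower the ceiling
        System.out.println(clamped.dump()); // [50, 60, 180, 200]
    }
}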
@@ -5004,73 +1617,14 @@ public static void min(Mat src1, Scalar src2, Mat dst)
// C++: void mixChannels(vector_Mat src, vector_Mat dst, vector_int fromTo)
//
-/**
- * Copies specified channels from input arrays to the specified channels of
- * output arrays.
- *
- * The function mixChannels provides an advanced mechanism for shuffling
- * image channels.
- *
- * "split" and "merge" and some forms of "cvtColor" are partial cases of
- * mixChannels.
- *
- * In the example below, the code splits a 4-channel RGBA image into a
- * 3-channel BGR (with R and B channels swapped) and a separate
- * alpha-channel image:
- *
- * // C++ code:
- * Mat rgba(100, 100, CV_8UC4, Scalar(1,2,3,4));
- * Mat bgr(rgba.rows, rgba.cols, CV_8UC3);
- * Mat alpha(rgba.rows, rgba.cols, CV_8UC1);
- * // forming an array of matrices is a quite efficient operation,
- * // because the matrix data is not copied, only the headers
- * Mat out[] = { bgr, alpha };
- * // rgba[0] -> bgr[2], rgba[1] -> bgr[1],
- * // rgba[2] -> bgr[0], rgba[3] -> alpha[0]
- * int from_to[] = { 0,2, 1,1, 2,0, 3,3 };
- * mixChannels(&rgba, 1, out, 2, from_to, 4);
- *
- * Note: Unlike many other new-style C++ functions in OpenCV (see the
- * introduction section and "Mat.create"), mixChannels requires the output
- * arrays to be pre-allocated before calling the function.
- *
- * @param src input array or vector of matrices; all of the matrices must
- * have the same size and the same depth.
- * @param dst output array or vector of matrices; their size and depth must
- * be the same as in src[0].
- * @param fromTo array of index pairs specifying which channels are copied
- * and where; fromTo[k*2] is a 0-based index of the input channel in src,
- * fromTo[k*2+1] is an index of the output channel in dst; the continuous
- * channel numbering is used: the first input image channels are indexed
- * from 0 to src[0].channels()-1, the second input image channels are
- * indexed from src[0].channels() to src[0].channels() +
- * src[1].channels()-1, and so on, the same scheme is used for the output
- * image channels; as a special case, when fromTo[k*2] is negative, the
- * corresponding output channel is filled with zero.
- *
- * @see org.opencv.core.Core.mixChannels
- * @see org.opencv.core.Core#merge
- * @see org.opencv.core.Core#split
- * @see org.opencv.imgproc.Imgproc#cvtColor
- */
+ //javadoc: mixChannels(src, dst, fromTo)
public static void mixChannels(List<Mat> src, List<Mat> dst, MatOfInt fromTo)

-/**
- * Performs the per-element multiplication of two Fourier spectrums.
- *
- * The function mulSpectrums performs the per-element multiplication of the
- * two CCS-packed or complex matrices that are results of a real or complex
- * Fourier transform.
- *
- * The function, together with "dft" and "idft", may be used to calculate
- * convolution (pass conjB=false) or correlation (pass conjB=true) of two
- * arrays rapidly. When the arrays are complex, they are simply multiplied
- * (per element) with an optional conjugation of the second-array elements.
- * When the arrays are real, they are assumed to be CCS-packed (see "dft"
- * for details).
- *
- * @param flags operation flags; currently, the only supported flag is
- * DFT_ROWS, which indicates that each row of src1 and src2 is an
- * independent 1D Fourier spectrum. If you do not want to use this flag,
- * then simply add a "0" as value.
- * @param conjB optional flag that conjugates the second input array before
- * the multiplication (true) or not (false).
- *
- * @see org.opencv.core.Core.mulSpectrums
- */
+ //javadoc: mulSpectrums(a, b, c, flags, conjB)
public static void mulSpectrums(Mat a, Mat b, Mat c, int flags, boolean conjB)
{
-
+
mulSpectrums_0(a.nativeObj, b.nativeObj, c.nativeObj, flags, conjB);
-
+
return;
}
-/**
- * Performs the per-element multiplication of two Fourier spectrums.
- *
- * The function mulSpectrums performs the per-element multiplication of the
- * two CCS-packed or complex matrices that are results of a real or complex
- * Fourier transform.
- *
- * The function, together with "dft" and "idft", may be used to calculate
- * convolution (pass conjB=false) or correlation (pass conjB=true) of two
- * arrays rapidly. When the arrays are complex, they are simply multiplied
- * (per element) with an optional conjugation of the second-array elements.
- * When the arrays are real, they are assumed to be CCS-packed (see "dft"
- * for details).
- *
- * @param flags operation flags; currently, the only supported flag is
- * DFT_ROWS, which indicates that each row of src1 and src2 is an
- * independent 1D Fourier spectrum. If you do not want to use this flag,
- * then simply add a "0" as value.
- *
- *
- * @see org.opencv.core.Core.mulSpectrums
- */
+ //javadoc: mulSpectrums(a, b, c, flags)
public static void mulSpectrums(Mat a, Mat b, Mat c, int flags)
{
-
+
mulSpectrums_1(a.nativeObj, b.nativeObj, c.nativeObj, flags);
-
+
return;
}
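For reference (not part of the diff), the C++ mixChannels example quoted above translates to the regenerated Java bindings roughly as follows; the class name is invented:

import java.util.Arrays;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfInt;
import org.opencv.core.Scalar;

public class MixChannelsDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat rgba = new Mat(100, 100, CvType.CV_8UC4, new Scalar(1, 2, 3, 4));
        // Unlike many other functions, the outputs must be pre-allocated.
        Mat bgr = new Mat(rgba.rows(), rgba.cols(), CvType.CV_8UC3);
        Mat alpha = new Mat(rgba.rows(), rgba.cols(), CvType.CV_8UC1);

        List<Mat> src = Arrays.asList(rgba);
        List<Mat> dst = Arrays.asList(bgr, alpha);
        // rgba[0] -> bgr[2], rgba[1] -> bgr[1], rgba[2] -> bgr[0], rgba[3] -> alpha[0]
        MatOfInt fromTo = new MatOfInt(0, 2, 1, 1, 2, 0, 3, 3);
        Core.mixChannels(src, dst, fromTo);
    }
}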
@@ -5150,133 +1656,30 @@ public static void mulSpectrums(Mat a, Mat b, Mat c, int flags)
// C++: void mulTransposed(Mat src, Mat& dst, bool aTa, Mat delta = Mat(), double scale = 1, int dtype = -1)
//
-/**
- * Calculates the product of a matrix and its transposition.
- *
- * The function mulTransposed calculates the product of src and its
- * transposition:
- *
- *   dst = scale * (src - delta)^T * (src - delta)
- *
- * if aTa=true, and
- *
- *   dst = scale * (src - delta) * (src - delta)^T
- *
- * otherwise. The function is used to calculate the covariance matrix. With
- * zero delta, it can be used as a faster substitute for general matrix
- * product A*B when B=A'.
- *
- * @param delta Optional delta matrix subtracted from src before the
- * multiplication. When the matrix is empty (delta=noArray()), it is assumed
- * to be zero, that is, nothing is subtracted. If it has the same size as
- * src, it is simply subtracted. Otherwise, it is "repeated" (see "repeat")
- * to cover the full src and then subtracted. Type of the delta matrix, when
- * it is not empty, must be the same as the type of created output matrix.
- * See the dtype parameter description below.
- * @param scale Optional scale factor for the matrix product.
- * @param dtype Optional type of the output matrix. When it is negative, the
- * output matrix will have the same type as src. Otherwise, it will be
- * type=CV_MAT_DEPTH(dtype) that should be either CV_32F or CV_64F.
- *
- * @see org.opencv.core.Core.mulTransposed
- * @see org.opencv.core.Core#calcCovarMatrix
- * @see org.opencv.core.Core#repeat
- * @see org.opencv.core.Core#reduce
- * @see org.opencv.core.Core#gemm
- */
+ //javadoc: mulTransposed(src, dst, aTa, delta, scale, dtype)
public static void mulTransposed(Mat src, Mat dst, boolean aTa, Mat delta, double scale, int dtype)
{
-
+
mulTransposed_0(src.nativeObj, dst.nativeObj, aTa, delta.nativeObj, scale, dtype);
-
+
return;
}
-/**
- * Calculates the product of a matrix and its transposition.
- *
- * The function mulTransposed calculates the product of src and its
- * transposition:
- *
- *   dst = scale * (src - delta)^T * (src - delta)
- *
- * if aTa=true, and
- *
- *   dst = scale * (src - delta) * (src - delta)^T
- *
- * otherwise. The function is used to calculate the covariance matrix. With
- * zero delta, it can be used as a faster substitute for general matrix
- * product A*B when B=A'.
- *
- * @param delta Optional delta matrix subtracted from src before the
- * multiplication. When the matrix is empty (delta=noArray()), it is assumed
- * to be zero, that is, nothing is subtracted. If it has the same size as
- * src, it is simply subtracted. Otherwise, it is "repeated" (see "repeat")
- * to cover the full src and then subtracted. Type of the delta matrix, when
- * it is not empty, must be the same as the type of created output matrix.
- * See the dtype parameter description below.
- * @param scale Optional scale factor for the matrix product.
- *
- * @see org.opencv.core.Core.mulTransposed
- * @see org.opencv.core.Core#calcCovarMatrix
- * @see org.opencv.core.Core#repeat
- * @see org.opencv.core.Core#reduce
- * @see org.opencv.core.Core#gemm
- */
+ //javadoc: mulTransposed(src, dst, aTa, delta, scale)
public static void mulTransposed(Mat src, Mat dst, boolean aTa, Mat delta, double scale)
{
-
+
mulTransposed_1(src.nativeObj, dst.nativeObj, aTa, delta.nativeObj, scale);
-
+
return;
}
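A minimal sketch (not part of the diff) of mulTransposed computing src^T * src; the class name and matrix are invented:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class MulTransposedDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat src = new Mat(3, 2, CvType.CV_32FC1);
        src.put(0, 0, 1, 2, 3, 4, 5, 6);

        // aTa=true computes src^T * src (a 2x2 result here); with a mean row
        // as delta and scale=1.0/rows this becomes a covariance estimate.
        Mat dst = new Mat();
        Core.mulTransposed(src, dst, true, new Mat(), 1.0, -1);
        System.out.println(dst.dump()); // [35, 44; 44, 56]
    }
}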
-/**
- * Calculates the product of a matrix and its transposition.
- *
- * The function mulTransposed calculates the product of src and its
- * transposition:
- *
- *   dst = scale * (src - delta)^T * (src - delta)
- *
- * if aTa=true, and
- *
- *   dst = scale * (src - delta) * (src - delta)^T
- *
- * otherwise. The function is used to calculate the covariance matrix. With
- * zero delta, it can be used as a faster substitute for general matrix
- * product A*B when B=A'.
- */
-
-/**
- * Calculates the per-element scaled product of two arrays.
- *
- * The function multiply calculates the per-element product of two arrays:
- *
- *   dst(I) = saturate(scale * src1(I) * src2(I))
- *
- * There is also a "MatrixExpressions"-friendly variant of the first
- * function. See "Mat.mul".
- *
- * For a not-per-element matrix product, see "gemm".
- *
- * Note: Saturation is not applied when the output array has the depth
- * CV_32S. You may even get a result of an incorrect sign in the case of
- * overflow.
- *
- * @param src1 first input array.
- * @param src2 second input array of the same size and type as src1.
- * @param dst output array of the same size and type as src1.
- * @param scale optional scale factor.
- * @param dtype optional depth of the output array.
- *
- * @see org.opencv.core.Core.multiply
- * @see org.opencv.core.Core#divide
- * @see org.opencv.core.Mat#convertTo
- * @see org.opencv.core.Core#addWeighted
- * @see org.opencv.core.Core#add
- * @see org.opencv.imgproc.Imgproc#accumulateSquare
- * @see org.opencv.imgproc.Imgproc#accumulate
- * @see org.opencv.core.Core#scaleAdd
- * @see org.opencv.core.Core#subtract
- * @see org.opencv.imgproc.Imgproc#accumulateProduct
- */
+ //javadoc: multiply(src1, src2, dst, scale, dtype)
public static void multiply(Mat src1, Mat src2, Mat dst, double scale, int dtype)
{
-
+
multiply_0(src1.nativeObj, src2.nativeObj, dst.nativeObj, scale, dtype);
-
+
return;
}
-/**
- * Calculates the per-element scaled product of two arrays.
- *
- * The function multiply calculates the per-element product of two arrays:
- *
- *   dst(I) = saturate(scale * src1(I) * src2(I))
- *
- * There is also a "MatrixExpressions"-friendly variant of the first
- * function. See "Mat.mul".
- *
- * For a not-per-element matrix product, see "gemm".
- *
- * Note: Saturation is not applied when the output array has the depth
- * CV_32S. You may even get a result of an incorrect sign in the case of
- * overflow.
- *
- * @param src1 first input array.
- * @param src2 second input array of the same size and type as src1.
- * @param dst output array of the same size and type as src1.
- * @param scale optional scale factor.
- *
- * @see org.opencv.core.Core.multiply
- * @see org.opencv.core.Core#divide
- * @see org.opencv.core.Mat#convertTo
- * @see org.opencv.core.Core#addWeighted
- * @see org.opencv.core.Core#add
- * @see org.opencv.imgproc.Imgproc#accumulateSquare
- * @see org.opencv.imgproc.Imgproc#accumulate
- * @see org.opencv.core.Core#scaleAdd
- * @see org.opencv.core.Core#subtract
- * @see org.opencv.imgproc.Imgproc#accumulateProduct
- */
+ //javadoc: multiply(src1, src2, dst, scale)
public static void multiply(Mat src1, Mat src2, Mat dst, double scale)
{
-
+
multiply_1(src1.nativeObj, src2.nativeObj, dst.nativeObj, scale);
-
+
return;
}
-/**
- * Calculates the per-element scaled product of two arrays.
- *
- * The function multiply calculates the per-element product of two arrays:
- *
- *   dst(I) = saturate(scale * src1(I) * src2(I))
- *
- * There is also a "MatrixExpressions"-friendly variant of the first
- * function. See "Mat.mul".
- *
- * For a not-per-element matrix product, see "gemm".
- *
- * Note: Saturation is not applied when the output array has the depth
- * CV_32S. You may even get a result of an incorrect sign in the case of
- * overflow.
- *
- * @param src1 first input array.
- * @param src2 second input array of the same size and type as src1.
- * @param dst output array of the same size and type as src1.
- *
- * @see org.opencv.core.Core.multiply
- * @see org.opencv.core.Core#divide
- * @see org.opencv.core.Mat#convertTo
- * @see org.opencv.core.Core#addWeighted
- * @see org.opencv.core.Core#add
- * @see org.opencv.imgproc.Imgproc#accumulateSquare
- * @see org.opencv.imgproc.Imgproc#accumulate
- * @see org.opencv.core.Core#scaleAdd
- * @see org.opencv.core.Core#subtract
- * @see org.opencv.imgproc.Imgproc#accumulateProduct
- */
+ //javadoc: multiply(src1, src2, dst)
public static void multiply(Mat src1, Mat src2, Mat dst)
{
-
+
multiply_2(src1.nativeObj, src2.nativeObj, dst.nativeObj);
-
+
return;
}
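
    // Illustrative sketch of the overloads above (hypothetical helper name;
    // the calls are the wrappers defined in this file):
    private static void multiplyExample()
    {
        Mat a = new Mat(2, 2, CvType.CV_32F, new Scalar(3));
        Mat b = new Mat(2, 2, CvType.CV_32F, new Scalar(4));
        Mat prod = new Mat();
        multiply(a, b, prod, 0.5); // every element: 0.5 * 3 * 4 = 6
    }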
@@ -5416,612 +1720,71 @@ public static void multiply(Mat src1, Mat src2, Mat dst)
// C++: void multiply(Mat src1, Scalar src2, Mat& dst, double scale = 1, int dtype = -1)
//
-/**
- * Calculates the per-element scaled product of an array and a scalar:
- *
- *     dst(I) = saturate(scale * src1(I) * src2)
- *
- * See the first multiply overload above for the full description.
- *
- * @param src1 first input array.
- * @param src2 second input scalar.
- * @param dst output array of the same size and type as src1.
- * @param scale optional scale factor.
- * @param dtype optional depth of the output array.
- *
- * @see org.opencv.core.Core.multiply
- * @see org.opencv.core.Core#divide
- * @see org.opencv.core.Mat#convertTo
- * @see org.opencv.core.Core#addWeighted
- * @see org.opencv.core.Core#add
- * @see org.opencv.imgproc.Imgproc#accumulateSquare
- * @see org.opencv.imgproc.Imgproc#accumulate
- * @see org.opencv.core.Core#scaleAdd
- * @see org.opencv.core.Core#subtract
- * @see org.opencv.imgproc.Imgproc#accumulateProduct
- */
+ //javadoc: multiply(src1, src2, dst, scale, dtype)
public static void multiply(Mat src1, Scalar src2, Mat dst, double scale, int dtype)
{
-
+
multiply_3(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj, scale, dtype);
-
+
return;
}
-/**
- * Calculates the per-element scaled product of an array and a scalar:
- *
- *     dst(I) = saturate(scale * src1(I) * src2)
- *
- * See the first multiply overload above for the full description.
- *
- * @param src1 first input array.
- * @param src2 second input scalar.
- * @param dst output array of the same size and type as src1.
- * @param scale optional scale factor.
- *
- * @see org.opencv.core.Core.multiply
- * @see org.opencv.core.Core#divide
- * @see org.opencv.core.Mat#convertTo
- * @see org.opencv.core.Core#addWeighted
- * @see org.opencv.core.Core#add
- * @see org.opencv.imgproc.Imgproc#accumulateSquare
- * @see org.opencv.imgproc.Imgproc#accumulate
- * @see org.opencv.core.Core#scaleAdd
- * @see org.opencv.core.Core#subtract
- * @see org.opencv.imgproc.Imgproc#accumulateProduct
- */
+ //javadoc: multiply(src1, src2, dst, scale)
public static void multiply(Mat src1, Scalar src2, Mat dst, double scale)
{
-
+
multiply_4(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj, scale);
-
+
return;
}
-/**
- * Calculates the per-element product of an array and a scalar:
- *
- *     dst(I) = saturate(src1(I) * src2)
- *
- * See the first multiply overload above for the full description.
- *
- * @param src1 first input array.
- * @param src2 second input scalar.
- * @param dst output array of the same size and type as src1.
- *
- * @see org.opencv.core.Core.multiply
- * @see org.opencv.core.Core#divide
- * @see org.opencv.core.Mat#convertTo
- * @see org.opencv.core.Core#addWeighted
- * @see org.opencv.core.Core#add
- * @see org.opencv.imgproc.Imgproc#accumulateSquare
- * @see org.opencv.imgproc.Imgproc#accumulate
- * @see org.opencv.core.Core#scaleAdd
- * @see org.opencv.core.Core#subtract
- * @see org.opencv.imgproc.Imgproc#accumulateProduct
- */
+ //javadoc: multiply(src1, src2, dst)
public static void multiply(Mat src1, Scalar src2, Mat dst)
{
-
+
multiply_5(src1.nativeObj, src2.val[0], src2.val[1], src2.val[2], src2.val[3], dst.nativeObj);
-
+
return;
}
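
    // Illustrative sketch of the Scalar overloads above (hypothetical helper
    // name):
    private static void multiplyByScalarExample()
    {
        Mat a = new Mat(2, 2, CvType.CV_32F, new Scalar(3));
        Mat scaled = new Mat();
        multiply(a, new Scalar(2), scaled); // every element becomes 6
    }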
- //
- // C++: double norm(Mat src1, int normType = NORM_L2, Mat mask = Mat())
- //
-
-/**
- * Calculates an absolute array norm, an absolute difference norm, or a
- * relative difference norm.
- *
- * The functions norm calculate an absolute norm of src1 (when there is no
- * src2):
- *
- *     norm = ||src1||_Linf = max_I |src1(I)|         if normType = NORM_INF
- *     norm = ||src1||_L1   = sum_I |src1(I)|         if normType = NORM_L1
- *     norm = ||src1||_L2   = sqrt(sum_I src1(I)^2)   if normType = NORM_L2
- *
- * or an absolute or relative difference norm if src2 is there:
- *
- *     norm = ||src1 - src2||_Linf, _L1 or _L2        (absolute, as above)
- *     norm = ||src1 - src2||_Lp / ||src2||_Lp        if normType =
- *                                                    NORM_RELATIVE_{INF,L1,L2}
- *
- * The functions norm return the calculated norm. When the mask parameter is
- * specified and it is not empty, the norm is calculated only over the region
- * specified by the mask. Multi-channel input arrays are treated as
- * single-channel ones, that is, the results for all channels are combined.
- *
- * @param src1 first input array.
- * @param normType type of the norm (see the details above).
- * @param mask optional operation mask; it must have the same size as src1
- * and CV_8UC1 type.
- *
- * @see org.opencv.core.Core.norm
- */
- public static double norm(Mat src1, int normType, Mat mask)
- {
-
-     double retVal = norm_0(src1.nativeObj, normType, mask.nativeObj);
-
-     return retVal;
- }
-
-/**
- * Calculates an absolute array norm. See the first norm overload above for
- * the definitions of the supported norm types.
- *
- * @param src1 first input array.
- * @param normType type of the norm (see the details above).
- *
- * @see org.opencv.core.Core.norm
- */
- public static double norm(Mat src1, int normType)
- {
-
-     double retVal = norm_1(src1.nativeObj, normType);
-
-     return retVal;
- }
-
-/**
- * Calculates the absolute L2 norm of an array. See the first norm overload
- * above for details.
- *
- * @param src1 first input array.
- *
- * @see org.opencv.core.Core.norm
- */
- public static double norm(Mat src1)
- {
-
-     double retVal = norm_2(src1.nativeObj);
-
-     return retVal;
- }
-
-
- //
- // C++: double norm(Mat src1, Mat src2, int normType = NORM_L2, Mat mask = Mat())
- //
-
-/**
- * Calculates an absolute difference norm or a relative difference norm of
- * two arrays. See the first norm overload above for the definitions of the
- * supported norm types.
- *
- * @param src1 first input array.
- * @param src2 second input array of the same size and the same type as src1.
- * @param normType type of the norm (see the details above).
- * @param mask optional operation mask; it must have the same size as src1
- * and CV_8UC1 type.
- *
- * @see org.opencv.core.Core.norm
- */
- public static double norm(Mat src1, Mat src2, int normType, Mat mask)
- {
-
-     double retVal = norm_3(src1.nativeObj, src2.nativeObj, normType, mask.nativeObj);
-
-     return retVal;
- }
-
-/**
- * Calculates an absolute difference norm or a relative difference norm of
- * two arrays. See the first norm overload above for details.
- *
- * @param src1 first input array.
- * @param src2 second input array of the same size and the same type as src1.
- * @param normType type of the norm (see the details above).
- *
- * @see org.opencv.core.Core.norm
- */
- public static double norm(Mat src1, Mat src2, int normType)
- {
-
-     double retVal = norm_4(src1.nativeObj, src2.nativeObj, normType);
-
-     return retVal;
- }
-
-/**
- * Calculates the absolute L2 difference norm of two arrays. See the first
- * norm overload above for details.
- *
- * @param src1 first input array.
- * @param src2 second input array of the same size and the same type as src1.
- *
- * @see org.opencv.core.Core.norm
- */
- public static double norm(Mat src1, Mat src2)
- {
-
-     double retVal = norm_5(src1.nativeObj, src2.nativeObj);
-
-     return retVal;
- }
-
-
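
    // Illustrative sketch of the norm semantics documented above (assumes the
    // regenerated wrappers keep the norm(Mat, int) signature; hypothetical
    // helper name):
    private static void normExample()
    {
        Mat v = new Mat(1, 2, CvType.CV_32F);
        v.put(0, 0, 3, 4);               // v = [3, 4]
        double l2 = norm(v, NORM_L2);    // sqrt(3^2 + 4^2) = 5
        double l1 = norm(v, NORM_L1);    // |3| + |4| = 7
        double li = norm(v, NORM_INF);   // max(|3|, |4|) = 4
    }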
//
// C++: void normalize(Mat src, Mat& dst, double alpha = 1, double beta = 0, int norm_type = NORM_L2, int dtype = -1, Mat mask = Mat())
//
-/**
- * Normalizes the norm or value range of an array.
- *
- * The functions normalize scale and shift the input array elements so that
- *
- *     ||dst||_Lp = alpha
- *
- * (where p = Inf, 1 or 2) when normType = NORM_INF, NORM_L1, or NORM_L2,
- * respectively; or so that
- *
- *     min_I dst(I) = alpha, max_I dst(I) = beta
- *
- * when normType = NORM_MINMAX (for dense arrays only).
- * The optional mask specifies a sub-array to be normalized. This means that
- * the norm or min-max are calculated over the sub-array, and then this
- * sub-array is modified to be normalized. If you want to use only the mask
- * to calculate the norm or min-max but modify the whole array, you can use
- * "norm" and "Mat.convertTo".
- *
- * In case of sparse matrices, only the non-zero values are analyzed and
- * transformed. Because of this, the range transformation for sparse matrices
- * is not allowed since it can shift the zero level.
- *
- * @param src input array.
- * @param dst output array of the same size as src.
- * @param alpha norm value to normalize to, or the lower range boundary in
- * case of the range normalization.
- * @param beta upper range boundary in case of the range normalization; it is
- * not used for the norm normalization.
- * @param norm_type the normalization type (see the details above).
- * @param dtype when negative, the output array has the same type as src;
- * otherwise, it has the same number of channels as src and the depth
- * =CV_MAT_DEPTH(dtype).
- * @param mask optional operation mask.
- *
- * @see org.opencv.core.Core.normalize
- * @see org.opencv.core.Mat#convertTo
- * @see org.opencv.core.Core#norm
- */
+ //javadoc: normalize(src, dst, alpha, beta, norm_type, dtype, mask)
public static void normalize(Mat src, Mat dst, double alpha, double beta, int norm_type, int dtype, Mat mask)
{
-
+
normalize_0(src.nativeObj, dst.nativeObj, alpha, beta, norm_type, dtype, mask.nativeObj);
-
+
return;
}
-/**
- * Normalizes the norm or value range of an array. See the first normalize
- * overload above for the full description.
- *
- * @param src input array.
- * @param dst output array of the same size as src.
- * @param alpha norm value to normalize to, or the lower range boundary in
- * case of the range normalization.
- * @param beta upper range boundary in case of the range normalization; it is
- * not used for the norm normalization.
- * @param norm_type the normalization type (see the details above).
- * @param dtype when negative, the output array has the same type as src;
- * otherwise, it has the same number of channels as src and the depth
- * =CV_MAT_DEPTH(dtype).
- *
- * @see org.opencv.core.Core.normalize
- * @see org.opencv.core.Mat#convertTo
- * @see org.opencv.core.Core#norm
- */
+ //javadoc: normalize(src, dst, alpha, beta, norm_type, dtype)
public static void normalize(Mat src, Mat dst, double alpha, double beta, int norm_type, int dtype)
{
-
+
normalize_1(src.nativeObj, dst.nativeObj, alpha, beta, norm_type, dtype);
-
+
return;
}
-/**
- * Normalizes the norm or value range of an array. See the first normalize
- * overload above for the full description.
- *
- * @param src input array.
- * @param dst output array of the same size as src.
- * @param alpha norm value to normalize to, or the lower range boundary in
- * case of the range normalization.
- * @param beta upper range boundary in case of the range normalization; it is
- * not used for the norm normalization.
- * @param norm_type the normalization type (see the details above).
- *
- * @see org.opencv.core.Core.normalize
- * @see org.opencv.core.Mat#convertTo
- * @see org.opencv.core.Core#norm
- */
+ //javadoc: normalize(src, dst, alpha, beta, norm_type)
public static void normalize(Mat src, Mat dst, double alpha, double beta, int norm_type)
{
-
+
normalize_2(src.nativeObj, dst.nativeObj, alpha, beta, norm_type);
-
+
return;
}
-/**
- * Normalizes the L2 norm of an array to 1. See the first normalize overload
- * above for the full description.
- *
- * @param src input array.
- * @param dst output array of the same size as src.
- *
- * @see org.opencv.core.Core.normalize
- * @see org.opencv.core.Mat#convertTo
- * @see org.opencv.core.Core#norm
- */
+ //javadoc: normalize(src, dst)
public static void normalize(Mat src, Mat dst)
{
-
+
normalize_3(src.nativeObj, dst.nativeObj);
-
+
return;
}
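
    // Illustrative sketch: min-max normalization to [0, 255] as described in
    // the removed javadoc above (hypothetical helper name):
    private static void normalizeExample()
    {
        Mat src = new Mat(1, 3, CvType.CV_32F);
        src.put(0, 0, 2, 8, 10);
        Mat dst = new Mat();
        normalize(src, dst, 0, 255, NORM_MINMAX); // dst = [0, 191.25, 255]
    }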
@@ -6030,19 +1793,21 @@ public static void normalize(Mat src, Mat dst)
// C++: void patchNaNs(Mat& a, double val = 0)
//
+ //javadoc: patchNaNs(a, val)
public static void patchNaNs(Mat a, double val)
{
-
+
patchNaNs_0(a.nativeObj, val);
-
+
return;
}
+ //javadoc: patchNaNs(a)
public static void patchNaNs(Mat a)
{
-
+
patchNaNs_1(a.nativeObj);
-
+
return;
}
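
    // Illustrative sketch: replace NaN elements with a chosen value
    // (hypothetical helper name):
    private static void patchNaNsExample()
    {
        Mat m = new Mat(1, 3, CvType.CV_32F);
        m.put(0, 0, 1, Double.NaN, 3);
        patchNaNs(m, 0); // m becomes [1, 0, 3]
    }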
@@ -6051,48 +1816,12 @@ public static void patchNaNs(Mat a)
// C++: void perspectiveTransform(Mat src, Mat& dst, Mat m)
//
-/**
- * Performs the perspective matrix transformation of vectors.
- *
- * The function perspectiveTransform transforms every element of src by
- * treating it as a 2D or 3D vector, in the following way:
- *
- *     (x, y, z) -> (x'/w, y'/w, z'/w)
- *
- * where
- *
- *     (x', y', z', w') = mat * [x y z 1]^T
- *
- * and
- *
- *     w = w' if w' != 0; infinity otherwise
- *
- * Here a 3D vector transformation is shown. In case of a 2D vector
- * transformation, the z component is omitted.
- *
- * Note: The function transforms a sparse set of 2D or 3D vectors. If you
- * want to transform an image using a perspective transformation, use
- * "warpPerspective". If you have the inverse problem, that is, you want to
- * compute the most probable perspective transformation out of several pairs
- * of corresponding points, you can use "getPerspectiveTransform" or
- * "findHomography".
- *
- * @param src input two-channel or three-channel floating-point array; each
- * element is a 2D/3D vector to be transformed.
- * @param dst output array of the same size and type as src.
- * @param m 3x3 or 4x4 floating-point transformation matrix.
- *
- * @see org.opencv.core.Core.perspectiveTransform
- * @see org.opencv.calib3d.Calib3d#findHomography
- * @see org.opencv.imgproc.Imgproc#warpPerspective
- * @see org.opencv.core.Core#transform
- * @see org.opencv.imgproc.Imgproc#getPerspectiveTransform
- */
+ //javadoc: perspectiveTransform(src, dst, m)
public static void perspectiveTransform(Mat src, Mat dst, Mat m)
{
-
+
perspectiveTransform_0(src.nativeObj, dst.nativeObj, m.nativeObj);
-
+
return;
}
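
    // Illustrative sketch: apply a 3x3 homography (here a pure translation by
    // (10, 20)) to a sparse set of 2D points (hypothetical helper name):
    private static void perspectiveTransformExample()
    {
        Mat pts = new MatOfPoint2f(new Point(1, 2), new Point(3, 4));
        Mat h = Mat.eye(3, 3, CvType.CV_64F);
        h.put(0, 2, 10);
        h.put(1, 2, 20);
        Mat out = new Mat();
        perspectiveTransform(pts, out, h); // (1,2)->(11,22), (3,4)->(13,24)
    }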
@@ -6101,61 +1830,21 @@ public static void perspectiveTransform(Mat src, Mat dst, Mat m)
// C++: void phase(Mat x, Mat y, Mat& angle, bool angleInDegrees = false)
//
-/**
- * Calculates the rotation angle of 2D vectors.
- *
- * The function phase calculates the rotation angle of each 2D vector that is
- * formed from the corresponding elements of x and y:
- *
- *     angle(I) = atan2(y(I), x(I))
- *
- * The angle estimation accuracy is about 0.3 degrees. When x(I)=y(I)=0, the
- * corresponding angle(I) is set to 0.
- *
- * @param x input floating-point array of x-coordinates of 2D vectors.
- * @param y input array of y-coordinates of 2D vectors; it must have the same
- * size and the same type as x.
- * @param angle output array of vector angles; it has the same size and same
- * type as x.
- * @param angleInDegrees when true, the function calculates the angle in
- * degrees, otherwise they are measured in radians.
- *
- * @see org.opencv.core.Core.phase
- */
+ //javadoc: phase(x, y, angle, angleInDegrees)
public static void phase(Mat x, Mat y, Mat angle, boolean angleInDegrees)
{
-
+
phase_0(x.nativeObj, y.nativeObj, angle.nativeObj, angleInDegrees);
-
+
return;
}
-/**
- * Calculates the rotation angle of 2D vectors, measured in radians. See the
- * overload above for details.
- *
- * @param x input floating-point array of x-coordinates of 2D vectors.
- * @param y input array of y-coordinates of 2D vectors; it must have the same
- * size and the same type as x.
- * @param angle output array of vector angles; it has the same size and same
- * type as x.
- *
- * @see org.opencv.core.Core.phase
- */
+ //javadoc: phase(x, y, angle)
public static void phase(Mat x, Mat y, Mat angle)
{
-
+
phase_1(x.nativeObj, y.nativeObj, angle.nativeObj);
-
+
return;
}
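
    // Illustrative sketch: the angle of the vector (1, 1), in degrees
    // (hypothetical helper name):
    private static void phaseExample()
    {
        Mat x = new Mat(1, 1, CvType.CV_32F, new Scalar(1));
        Mat y = new Mat(1, 1, CvType.CV_32F, new Scalar(1));
        Mat angle = new Mat();
        phase(x, y, angle, true); // angle(0,0) is approximately 45
    }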
@@ -6164,163 +1853,21 @@ public static void phase(Mat x, Mat y, Mat angle)
// C++: void polarToCart(Mat magnitude, Mat angle, Mat& x, Mat& y, bool angleInDegrees = false)
//
-/**
- * Calculates x and y coordinates of 2D vectors from their magnitude and
- * angle.
- *
- * The function polarToCart calculates the Cartesian coordinates of each 2D
- * vector represented by the corresponding elements of magnitude and angle:
- *
- *     x(I) = magnitude(I) * cos(angle(I))
- *     y(I) = magnitude(I) * sin(angle(I))
- *
- * The relative accuracy of the estimated coordinates is about 1e-6.
- *
- * @param magnitude input floating-point array of magnitudes of 2D vectors;
- * it can be an empty matrix (=Mat()), in this case, the function assumes
- * that all the magnitudes are =1; if it is not empty, it must have the same
- * size and type as angle.
- * @param angle input floating-point array of angles of 2D vectors.
- * @param x output array of x-coordinates of 2D vectors; it has the same size
- * and type as angle.
- * @param y output array of y-coordinates of 2D vectors; it has the same size
- * and type as angle.
- * @param angleInDegrees when true, the input angles are measured in degrees,
- * otherwise, they are measured in radians.
- *
- * @see org.opencv.core.Core.polarToCart
- * @see org.opencv.core.Core#log
- * @see org.opencv.core.Core#cartToPolar
- * @see org.opencv.core.Core#pow
- * @see org.opencv.core.Core#sqrt
- * @see org.opencv.core.Core#magnitude
- * @see org.opencv.core.Core#exp
- * @see org.opencv.core.Core#phase
- */
+ //javadoc: polarToCart(magnitude, angle, x, y, angleInDegrees)
public static void polarToCart(Mat magnitude, Mat angle, Mat x, Mat y, boolean angleInDegrees)
{
-
+
polarToCart_0(magnitude.nativeObj, angle.nativeObj, x.nativeObj, y.nativeObj, angleInDegrees);
-
+
return;
}
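
    // Illustrative sketch: magnitude 2 at 90 degrees maps to (x, y) ~ (0, 2)
    // (hypothetical helper name):
    private static void polarToCartExample()
    {
        Mat mag = new Mat(1, 1, CvType.CV_32F, new Scalar(2));
        Mat ang = new Mat(1, 1, CvType.CV_32F, new Scalar(90));
        Mat x = new Mat();
        Mat y = new Mat();
        polarToCart(mag, ang, x, y, true); // x ~ 0, y ~ 2
    }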
-/**
- * Calculates x and y coordinates of 2D vectors from their magnitude and
- * angle, with angles measured in radians. See the overload above for
- * details.
- *
- * @param magnitude input floating-point array of magnitudes of 2D vectors;
- * it can be an empty matrix (=Mat()), in this case all magnitudes are
- * assumed to be =1; if not empty, it must have the same size and type as
- * angle.
- * @param angle input floating-point array of angles of 2D vectors.
- * @param x output array of x-coordinates; same size and type as angle.
- * @param y output array of y-coordinates; same size and type as angle.
- *
- * @see org.opencv.core.Core.polarToCart
- */
+ //javadoc: polarToCart(magnitude, angle, x, y)
public static void polarToCart(Mat magnitude, Mat angle, Mat x, Mat y)
{
-
+
polarToCart_1(magnitude.nativeObj, angle.nativeObj, x.nativeObj, y.nativeObj);
-
- return;
- }
-
-
- //
- // C++: void polylines(Mat& img, vector_vector_Point pts, bool isClosed, Scalar color, int thickness = 1, int lineType = 8, int shift = 0)
- //
-
-/**
- * Draws several polygonal curves.
- *
- * The function polylines draws one or more polygonal curves.
- */
-/**
- * Draws several polygonal curves.
- *
- * The function polylines draws one or more polygonal curves.
- */
-/**
- * Draws several polygonal curves.
- *
- * The function polylines draws one or more polygonal curves.
- */
-/**
- * Raises every array element to a power.
- *
- * The function pow raises every element of the input array to power:
- *
- *     dst(I) = src(I)^power       if power is an integer
- *     dst(I) = |src(I)|^power     otherwise
- *
- * So, for a non-integer power exponent, the absolute values of the input
- * array elements are used. However, it is possible to get true values for
- * negative values using some extra operations. In the example below,
- * computing the 5th root of array src shows:
- *
- * // C++ code:
- * Mat mask = src < 0;
- * pow(src, 1./5, dst);
- * subtract(Scalar::all(0), dst, dst, mask);
- *
- * For some values of power, such as integer values, 0.5 and -0.5,
- * specialized faster algorithms are used.
- *
- * Note: Special values (NaN, Inf) are not handled.
- *
- * @param src input array.
- * @param power exponent of power.
- * @param dst output array of the same size and type as src.
- *
- * @see org.opencv.core.Core.pow
- * @see org.opencv.core.Core#cartToPolar
- * @see org.opencv.core.Core#polarToCart
- * @see org.opencv.core.Core#exp
- * @see org.opencv.core.Core#sqrt
- * @see org.opencv.core.Core#log
- */
+ //javadoc: pow(src, power, dst)
public static void pow(Mat src, double power, Mat dst)
{
-
+
pow_0(src.nativeObj, power, dst.nativeObj);
-
+
return;
}
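
    // Illustrative sketch of the masked-negation trick from the removed
    // javadoc: a true 5th root for negative inputs (hypothetical helper name;
    // compare and Mat.copyTo(Mat, Mat) are the standard wrappers):
    private static void fifthRootExample(Mat src, Mat dst)
    {
        Mat mask = new Mat();
        compare(src, new Scalar(0), mask, CMP_LT); // mask = (src < 0)
        pow(src, 1. / 5, dst);                     // |src(I)|^(1/5)
        Mat neg = new Mat();
        multiply(dst, new Scalar(-1), neg);
        neg.copyTo(dst, mask);                     // negate where src < 0
    }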
//
- // C++: void putText(Mat img, string text, Point org, int fontFace, double fontScale, Scalar color, int thickness = 1, int lineType = 8, bool bottomLeftOrigin = false)
- //
-
-/**
- * Draws a text string.
- *
- * The function putText renders the specified text string in the image.
- * Symbols that cannot be rendered using the specified font are replaced by
- * question marks. See "getTextSize" for a text rendering code example.
- *
- * @param img Image.
- * @param text Text string to be drawn.
- * @param org Bottom-left corner of the text string in the image.
- * @param fontFace Font type. One of FONT_HERSHEY_SIMPLEX,
- * FONT_HERSHEY_PLAIN, FONT_HERSHEY_DUPLEX, FONT_HERSHEY_COMPLEX,
- * FONT_HERSHEY_TRIPLEX, FONT_HERSHEY_COMPLEX_SMALL,
- * FONT_HERSHEY_SCRIPT_SIMPLEX, or FONT_HERSHEY_SCRIPT_COMPLEX, where each of
- * the font IDs can be combined with FONT_ITALIC to get the slanted letters.
- * @param fontScale Font scale factor that is multiplied by the font-specific
- * base size.
- * @param color Text color.
- * @param thickness Thickness of the lines used to draw the text.
- * @param lineType Line type. See "line" for details.
- * @param bottomLeftOrigin When true, the image data origin is at the
- * bottom-left corner. Otherwise, it is at the top-left corner.
- *
- * @see org.opencv.core.Core.putText
- */
- public static void putText(Mat img, String text, Point org, int fontFace, double fontScale, Scalar color, int thickness, int lineType, boolean bottomLeftOrigin)
- {
-
-     putText_0(img.nativeObj, text, org.x, org.y, fontFace, fontScale, color.val[0], color.val[1], color.val[2], color.val[3], thickness, lineType, bottomLeftOrigin);
-
-     return;
- }
-
-/**
- * Draws a text string. See the first overload above for the full parameter
- * description.
- *
- * @param img Image.
- * @param text Text string to be drawn.
- * @param org Bottom-left corner of the text string in the image.
- * @param fontFace Font type (see above).
- * @param fontScale Font scale factor that is multiplied by the font-specific
- * base size.
- * @param color Text color.
- * @param thickness Thickness of the lines used to draw the text.
- *
- * @see org.opencv.core.Core.putText
- */
- public static void putText(Mat img, String text, Point org, int fontFace, double fontScale, Scalar color, int thickness)
- {
-
-     putText_1(img.nativeObj, text, org.x, org.y, fontFace, fontScale, color.val[0], color.val[1], color.val[2], color.val[3], thickness);
-
-     return;
- }
-
-/**
- * Draws a text string. See the first overload above for the full parameter
- * description.
- *
- * @param img Image.
- * @param text Text string to be drawn.
- * @param org Bottom-left corner of the text string in the image.
- * @param fontFace Font type (see above).
- * @param fontScale Font scale factor that is multiplied by the font-specific
- * base size.
- * @param color Text color.
- *
- * @see org.opencv.core.Core.putText
- */
- public static void putText(Mat img, String text, Point org, int fontFace, double fontScale, Scalar color)
- {
-
-     putText_2(img.nativeObj, text, org.x, org.y, fontFace, fontScale, color.val[0], color.val[1], color.val[2], color.val[3]);
-
-     return;
- }
-
-
- //
- // C++: void randShuffle_(Mat& dst, double iterFactor = 1.)
+ // C++: void randShuffle(Mat& dst, double iterFactor = 1., RNG* rng = 0)
//
+ //javadoc: randShuffle(dst, iterFactor)
public static void randShuffle(Mat dst, double iterFactor)
{
-
+
randShuffle_0(dst.nativeObj, iterFactor);
-
+
return;
}
+ //javadoc: randShuffle(dst)
public static void randShuffle(Mat dst)
{
-
+
randShuffle_1(dst.nativeObj);
-
+
return;
}
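
    // Illustrative sketch: fill a row with uniform random values, then
    // shuffle it in place (hypothetical helper name):
    private static void randShuffleExample()
    {
        Mat row = new Mat(1, 10, CvType.CV_32F);
        randu(row, 0, 100); // uniform values in [0, 100)
        randShuffle(row);   // random permutation of the elements
    }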
@@ -6503,29 +1913,12 @@ public static void randShuffle(Mat dst)
// C++: void randn(Mat& dst, double mean, double stddev)
//
-/**
- * Fills the array with normally distributed random numbers.
- *
- * The function randn fills the matrix dst with normally distributed random
- * numbers with the specified mean vector and the standard deviation matrix.
- * The generated random numbers are clipped to fit the value range of the
- * output array data type.
- */
-/**
- * Generates a single uniformly-distributed random number or an array of
- * random numbers.
- *
- * The template functions randu generate and return the next
- * uniformly-distributed random value of the specified type; randu<int> is
- * equivalent to (int)theRNG(), and so on. See the "RNG" description.
- *
- * The second non-template variant of the function fills the matrix dst with
- * uniformly-distributed random numbers from the specified range:
- *
- *     low_c <= dst(I)_c < high_c
- *
- * @param dst output array of random numbers; the array must be
- * pre-allocated.
- * @param low inclusive lower boundary of the generated random numbers.
- * @param high exclusive upper boundary of the generated random numbers.
- *
- * @see org.opencv.core.Core.randu
- * @see org.opencv.core.Core#randn
- */
+    //javadoc: randu(dst, low, high)
    public static void randu(Mat dst, double low, double high)
    {
-
+
        randu_0(dst.nativeObj, low, high);
-
-        return;
-    }
-
-
-    //
-    // C++: void rectangle(Mat& img, Point pt1, Point pt2, Scalar color, int thickness = 1, int lineType = 8, int shift = 0)
-    //
-
-/**
- * Draws a simple, thick, or filled up-right rectangle.
- *
- * The function rectangle draws a rectangle outline or a filled rectangle
- * whose two opposite corners are pt1 and pt2, or r.tl() and
- * r.br()-Point(1,1).
- *
- * @param img Image.
- * @param pt1 Vertex of the rectangle.
- * @param pt2 Vertex of the rectangle opposite to pt1.
- * @param color Rectangle color or brightness (grayscale image).
- * @param thickness Thickness of lines that make up the rectangle. Negative
- * values, like CV_FILLED, mean that the function has to draw a filled
- * rectangle.
- * @param lineType Type of the line. See the "line" description.
- * @param shift Number of fractional bits in the point coordinates.
- *
- * @see org.opencv.core.Core.rectangle
- */
- public static void rectangle(Mat img, Point pt1, Point pt2, Scalar color, int thickness, int lineType, int shift)
- {
-
-     rectangle_0(img.nativeObj, pt1.x, pt1.y, pt2.x, pt2.y, color.val[0], color.val[1], color.val[2], color.val[3], thickness, lineType, shift);
-
-     return;
- }
-
-/**
- * Draws a simple, thick, or filled up-right rectangle. See the overload
- * above for the full parameter description.
- *
- * @param img Image.
- * @param pt1 Vertex of the rectangle.
- * @param pt2 Vertex of the rectangle opposite to pt1.
- * @param color Rectangle color or brightness (grayscale image).
- * @param thickness Thickness of lines that make up the rectangle. Negative
- * values, like CV_FILLED, mean a filled rectangle.
- *
- * @see org.opencv.core.Core.rectangle
- */
- public static void rectangle(Mat img, Point pt1, Point pt2, Scalar color, int thickness)
- {
-
-     rectangle_1(img.nativeObj, pt1.x, pt1.y, pt2.x, pt2.y, color.val[0], color.val[1], color.val[2], color.val[3], thickness);
-
-     return;
- }
-
-/**
- * Draws a simple, thick, or filled up-right rectangle. See the first
- * overload above for the full parameter description.
- *
- * @param img Image.
- * @param pt1 Vertex of the rectangle.
- * @param pt2 Vertex of the rectangle opposite to pt1.
- * @param color Rectangle color or brightness (grayscale image).
- *
- * @see org.opencv.core.Core.rectangle
- */
- public static void rectangle(Mat img, Point pt1, Point pt2, Scalar color)
- {
-
-     rectangle_2(img.nativeObj, pt1.x, pt1.y, pt2.x, pt2.y, color.val[0], color.val[1], color.val[2], color.val[3]);
-
+
        return;
    }
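
    // Illustrative sketch contrasting the two random fills documented above
    // (assumes randn(Mat, double, double) as in the C++ signature above;
    // hypothetical helper name):
    private static void randomFillExample()
    {
        Mat u = new Mat(1, 5, CvType.CV_32F);
        Mat g = new Mat(1, 5, CvType.CV_32F);
        randu(u, 0, 1); // uniform in [0, 1)
        randn(g, 0, 1); // Gaussian, mean 0, standard deviation 1
    }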
@@ -6648,86 +1941,21 @@ public static void rectangle(Mat img, Point pt1, Point pt2, Scalar color)
// C++: void reduce(Mat src, Mat& dst, int dim, int rtype, int dtype = -1)
//
-/**
- * Reduces a matrix to a vector.
- *
- * The function reduce reduces the matrix to a vector by treating the matrix
- * rows/columns as a set of 1D vectors and performing the specified operation
- * on the vectors until a single row/column is obtained. For example, the
- * function can be used to compute horizontal and vertical projections of a
- * raster image. In case of CV_REDUCE_SUM and CV_REDUCE_AVG, the output may
- * have a larger element bit-depth to preserve accuracy. Multi-channel arrays
- * are also supported in these two reduction modes.
- *
- * @param src input 2D matrix.
- * @param dst output vector; its size and type are defined by the dim and
- * dtype parameters.
- * @param dim dimension index along which the matrix is reduced. 0 means that
- * the matrix is reduced to a single row. 1 means that the matrix is reduced
- * to a single column.
- * @param rtype reduction operation that could be one of CV_REDUCE_SUM,
- * CV_REDUCE_AVG, CV_REDUCE_MAX or CV_REDUCE_MIN.
- * @param dtype when negative, the output vector will have the same type as
- * the input matrix; otherwise, its type will be
- * CV_MAKE_TYPE(CV_MAT_DEPTH(dtype), src.channels()).
- *
- * @see org.opencv.core.Core.reduce
- * @see org.opencv.core.Core#repeat
- */
+ //javadoc: reduce(src, dst, dim, rtype, dtype)
public static void reduce(Mat src, Mat dst, int dim, int rtype, int dtype)
{
-
+
reduce_0(src.nativeObj, dst.nativeObj, dim, rtype, dtype);
-
+
return;
}
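
    // Illustrative sketch: column sums of a 2x3 matrix, i.e. reduction along
    // dim = 0 (0 is the value of CV_REDUCE_SUM; hypothetical helper name):
    private static void reduceExample()
    {
        Mat m = new Mat(2, 3, CvType.CV_32F, new Scalar(1));
        Mat colSum = new Mat();
        reduce(m, colSum, 0, 0 /* CV_REDUCE_SUM */, CvType.CV_32F);
        // colSum is the 1x3 row [2, 2, 2]
    }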
-/**
- * Reduces a matrix to a vector. See the overload above for the full
- * description.
- *
- * @param src input 2D matrix.
- * @param dst output vector; its size and type are defined by the dim
- * parameter and the input type.
- * @param dim dimension index along which the matrix is reduced. 0 means that
- * the matrix is reduced to a single row. 1 means that the matrix is reduced
- * to a single column.
- * @param rtype reduction operation that could be one of CV_REDUCE_SUM,
- * CV_REDUCE_AVG, CV_REDUCE_MAX or CV_REDUCE_MIN.
- *
- * @see org.opencv.core.Core.reduce
- */
-/**
- * Fills the output array with repeated copies of the input array.
- *
- * The functions "repeat" duplicate the input array one or more times along
- * each of the two axes:
- *
- *     dst(i,j) = src(i mod src.rows, j mod src.cols)
- *
- * The second variant of the function is more convenient to use with
- * "MatrixExpressions".
- *
- * @param src input array to replicate.
- * @param ny flag to specify how many times the src is repeated along the
- * vertical axis.
- * @param nx flag to specify how many times the src is repeated along the
- * horizontal axis.
- * @param dst output array of the same type as src.
- *
- * @see org.opencv.core.Core.repeat
- * @see org.opencv.core.Core#reduce
- */
+ //javadoc: repeat(src, ny, nx, dst)
public static void repeat(Mat src, int ny, int nx, Mat dst)
{
-
+
repeat_0(src.nativeObj, ny, nx, dst.nativeObj);
+
+ return;
+ }
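
    // Illustrative sketch: tile a 1x2 pattern twice along y and three times
    // along x (hypothetical helper name):
    private static void repeatExample()
    {
        Mat pattern = new Mat(1, 2, CvType.CV_8U, new Scalar(7));
        Mat tiled = new Mat();
        repeat(pattern, 2, 3, tiled); // tiled is 2x6, every element 7
    }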
+
+
+ //
+ // C++: void rotate(Mat src, Mat& dst, int rotateCode)
+ //
+ //javadoc: rotate(src, dst, rotateCode)
+ public static void rotate(Mat src, Mat dst, int rotateCode)
+ {
+
+ rotate_0(src.nativeObj, dst.nativeObj, rotateCode);
+
return;
}
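
    // Illustrative sketch for the newly added wrapper (assumes the
    // accompanying ROTATE_90_CLOCKWISE constant; hypothetical helper name):
    private static void rotateExample()
    {
        Mat src = new Mat(2, 3, CvType.CV_8U, new Scalar(0));
        Mat dst = new Mat();
        rotate(src, dst, ROTATE_90_CLOCKWISE); // dst is 3x2
    }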
@@ -6770,40 +1992,12 @@ public static void repeat(Mat src, int ny, int nx, Mat dst)
// C++: void scaleAdd(Mat src1, double alpha, Mat src2, Mat& dst)
//
-/**
- * Calculates the sum of a scaled array and another array.
- *
- * The function scaleAdd is one of the classical primitive linear algebra
- * operations, known as DAXPY or SAXPY in BLAS
- * (http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms). It
- * calculates the sum of a scaled array and another array:
- *
- *     dst(I) = scale * src1(I) + src2(I)
- *
- * The function can also be emulated with a matrix expression, for example:
- *
- * // C++ code:
- * Mat A(3, 3, CV_64F);...
- * A.row(0) = A.row(1)*2 + A.row(2);
- *
- * @param src1 first input array.
- * @param alpha scale factor for the first array.
- * @param src2 second input array of the same size and type as src1.
- * @param dst output array of the same size and type as src1.
- *
- * @see org.opencv.core.Core.scaleAdd
- * @see org.opencv.core.Mat#dot
- * @see org.opencv.core.Mat#convertTo
- * @see org.opencv.core.Core#addWeighted
- * @see org.opencv.core.Core#add
- * @see org.opencv.core.Core#subtract
- */
+ //javadoc: scaleAdd(src1, alpha, src2, dst)
public static void scaleAdd(Mat src1, double alpha, Mat src2, Mat dst)
{
-
+
scaleAdd_0(src1.nativeObj, alpha, src2.nativeObj, dst.nativeObj);
-
+
return;
}
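
    // Illustrative sketch of the DAXPY-style operation documented above
    // (hypothetical helper name):
    private static void scaleAddExample()
    {
        Mat a = new Mat(1, 3, CvType.CV_32F, new Scalar(1));
        Mat b = new Mat(1, 3, CvType.CV_32F, new Scalar(10));
        Mat dst = new Mat();
        scaleAdd(a, 2.0, b, dst); // dst(I) = 2*1 + 10 = 12
    }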
@@ -6812,11 +2006,12 @@ public static void scaleAdd(Mat src1, double alpha, Mat src2, Mat dst)
// C++: void setErrorVerbosity(bool verbose)
//
+ //javadoc: setErrorVerbosity(verbose)
public static void setErrorVerbosity(boolean verbose)
{
-
+
setErrorVerbosity_0(verbose);
-
+
return;
}
@@ -6825,295 +2020,50 @@ public static void setErrorVerbosity(boolean verbose)
// C++: void setIdentity(Mat& mtx, Scalar s = Scalar(1))
//
-/**
- * Initializes a scaled identity matrix.
- *
- * The function "setIdentity" initializes a scaled identity matrix:
- *
- *     mtx(i,j) = value if i=j; 0 otherwise
- *
- * The function can also be emulated using the matrix initializers and the
- * matrix expressions:
- *
- * // C++ code:
- * Mat A = Mat::eye(4, 3, CV_32F)*5;
- * // A will be set to [[5, 0, 0], [0, 5, 0], [0, 0, 5], [0, 0, 0]]
- *
- * @param mtx matrix to initialize (not necessarily square).
- * @param s value to assign to the diagonal elements.
- *
- * @see org.opencv.core.Core.setIdentity
- * @see org.opencv.core.Mat#setTo
- * @see org.opencv.core.Mat#ones
- * @see org.opencv.core.Mat#zeros
- */
+    //javadoc: setIdentity(mtx, s)
    public static void setIdentity(Mat mtx, Scalar s)
    {
-
+
        setIdentity_0(mtx.nativeObj, s.val[0], s.val[1], s.val[2], s.val[3]);
-
+
        return;
    }

-/**
- * Initializes an identity matrix with the default diagonal value Scalar(1).
- * See the overload above for details.
- *
- * @param mtx matrix to initialize (not necessarily square).
- *
- * @see org.opencv.core.Core.setIdentity
- * @see org.opencv.core.Mat#setTo
- * @see org.opencv.core.Mat#ones
- * @see org.opencv.core.Mat#zeros
- */
+    //javadoc: setIdentity(mtx)
    public static void setIdentity(Mat mtx)
    {
-
+
        setIdentity_1(mtx.nativeObj);
-
-        return;
-    }
-
-
-    //
-    // C++: void setNumThreads(int nthreads)
-    //
-
-/**
- * OpenCV will try to set the number of threads for the next parallel region.
- * If threads == 0, OpenCV will disable threading optimizations and run all
- * its functions sequentially. Passing threads < 0 will reset the threads
- * number to the system default. This function must be called outside of a
- * parallel region.
- *
- * OpenCV will try to run its functions with the specified number of threads,
- * but some behaviour differs from the framework: for example, if
- * threads == 1, OpenCV will disable threading optimizations and run its
- * functions sequentially.
- */
-/**
- * Solves one or more linear systems or least-squares problems.
- *
- * The function solve solves a linear system or least-squares problem (the
- * latter is possible with SVD or QR methods, or by specifying the flag
- * DECOMP_NORMAL):
- *
- *     dst = arg min_X ||src1 * X - src2||
- *
- * If the DECOMP_LU or DECOMP_CHOLESKY method is used, the function returns 1
- * if src1 (or src1^T*src1) is non-singular. Otherwise, it returns 0. In the
- * latter case, dst is not valid. Other methods find a pseudo-solution in
- * case of a singular left-hand side part.
- *
- * Note: If you want to find a unit-norm solution of an under-defined
- * singular system src1*dst=0, the function solve will not do the work.
- * Use "SVD.solveZ" instead.
- *
- * With DECOMP_CHOLESKY, src1 must be symmetrical and positive definite; with
- * DECOMP_EIG, src1 must be symmetrical; with the SVD and QR methods, src1
- * can be singular.
- */
-/**
- * Solves one or more linear systems or least-squares problems. See the
- * overload above for details.
- */
-/**
- * Finds the real roots of a cubic equation.
- *
- * The function solveCubic finds the real roots of a cubic equation. If
- * coeffs is a 4-element vector:
- *
- *     coeffs[0]*x^3 + coeffs[1]*x^2 + coeffs[2]*x + coeffs[3] = 0
- *
- * If coeffs is a 3-element vector:
- *
- *     x^3 + coeffs[0]*x^2 + coeffs[1]*x + coeffs[2] = 0
- *
- * The roots are stored in the roots array.
- */
-/**
- * Finds the real or complex roots of a polynomial equation.
- *
- * The function solvePoly finds real and complex roots of a polynomial
- * equation:
- *
- *     coeffs[n]*x^n + coeffs[n-1]*x^(n-1) + ... + coeffs[1]*x + coeffs[0] = 0
- *
- * @param coeffs array of polynomial coefficients.
- * @param roots output (complex) array of roots.
- * @param maxIters maximum number of iterations the algorithm does.
- *
- * @see org.opencv.core.Core.solvePoly
- */
- public static double solvePoly(Mat coeffs, Mat roots, int maxIters)
+    //javadoc: setNumThreads(nthreads)
+    public static void setNumThreads(int nthreads)
    {
-
-        double retVal = solvePoly_0(coeffs.nativeObj, roots.nativeObj, maxIters);
-
-        return retVal;
+
+        setNumThreads_0(nthreads);
+
+        return;
    }

-/**
- * Finds the real or complex roots of a polynomial equation. See the overload
- * above for details.
- *
- * @param coeffs array of polynomial coefficients.
- * @param roots output (complex) array of roots.
- *
- * @see org.opencv.core.Core.solvePoly
- */
- public static double solvePoly(Mat coeffs, Mat roots)
- {

-        double retVal = solvePoly_1(coeffs.nativeObj, roots.nativeObj);
+    //
+    // C++: void setRNGSeed(int seed)
+    //

-        return retVal;
+    //javadoc: setRNGSeed(seed)
+    public static void setRNGSeed(int seed)
+    {
+
+        setRNGSeed_0(seed);
+
+        return;
    }


@@ -7121,36 +2071,12 @@ public static double solvePoly(Mat coeffs, Mat roots)
    // C++: void sort(Mat src, Mat& dst, int flags)
    //

-/**
- * Sorts each row or each column of a matrix.
- *
- * The function sort sorts each matrix row or each matrix column in ascending
- * or descending order, so you should pass two operation flags to get the
- * desired behaviour. If you want to sort matrix rows or columns
- * lexicographically, you can use the STL std::sort generic function with the
- * proper comparison predicate.
- *
- * @param src input single-channel array.
- * @param dst output array of the same size and type as src.
- * @param flags operation flags, a combination of CV_SORT_EVERY_ROW or
- * CV_SORT_EVERY_COLUMN with CV_SORT_ASCENDING or CV_SORT_DESCENDING.
- */
-/**
- * Sorts each row or each column of a matrix, storing the indices of the
- * sorted elements.
- *
- * The function sortIdx sorts each matrix row or each matrix column in
- * ascending or descending order, so you should pass two operation flags to
- * get the desired behaviour. Instead of reordering the elements themselves,
- * it stores the indices of the sorted elements in the output array. For
- * example:
- *
- * // C++ code:
- * Mat A = Mat::eye(3,3,CV_32F), B;
- * sortIdx(A, B, CV_SORT_EVERY_ROW + CV_SORT_ASCENDING);
- * // B will probably contain
- * // (because of equal elements in A some permutations are possible):
- * // [[1, 2, 0], [0, 2, 1], [0, 1, 2]]
- *
- * @param src input single-channel array.
- * @param dst output integer array of the same size as src.
- * @param flags operation flags that could be a combination of the values
- * listed above.
- */
-/**
- * Divides a multi-channel array into several single-channel arrays.
- *
- * The functions split split a multi-channel array into separate
- * single-channel arrays:
- *
- *     mv[c](I) = src(I)_c
- *
- * If you need to extract a single channel or do some other sophisticated
- * channel permutation, use "mixChannels".
- *
- * @param m input multi-channel array.
- * @param mv output list of arrays; the arrays themselves are reallocated,
- * if needed.
- *
- * @see org.opencv.core.Core.split
- * @see org.opencv.core.Core#merge
- * @see org.opencv.imgproc.Imgproc#cvtColor
- * @see org.opencv.core.Core#mixChannels
- */
+    //javadoc: split(m, mv)
    public static void split(Mat m, List<Mat> mv)
    {
        Mat mv_mat = new Mat();
        split_0(m.nativeObj, mv_mat.nativeObj);
        Converters.Mat_to_vector_Mat(mv_mat, mv);
        mv_mat.release();
        return;
    }

-/**
- * Calculates a square root of array elements.
- *
- * The functions sqrt calculate a square root of each input array element. In
- * case of multi-channel arrays, each channel is processed independently. The
- * accuracy is approximately the same as of the built-in std::sqrt.
- *
- * @param src input floating-point array.
- * @param dst output array of the same size and type as src.
- *
- * @see org.opencv.core.Core.sqrt
- * @see org.opencv.core.Core#pow
- * @see org.opencv.core.Core#magnitude
- */
public static void sqrt(Mat src, Mat dst)
{
-
+
sqrt_0(src.nativeObj, dst.nativeObj);
-
+
return;
}
@@ -7273,245 +2128,30 @@ public static void sqrt(Mat src, Mat dst)
// C++: void subtract(Mat src1, Mat src2, Mat& dst, Mat mask = Mat(), int dtype = -1)
//
-/**
- * Calculates the per-element difference between two arrays or array and a - * scalar.
- * - *The function subtract
calculates:
dst(I) = saturate(src1(I) - src2(I)) if mask(I) != 0
- * - *src2
is
- * constructed from Scalar
or has the same number of elements as
- * src1.channels()
:
- * dst(I) = saturate(src1(I) - src2) if mask(I) != 0
- * - *src1
is
- * constructed from Scalar
or has the same number of elements as
- * src2.channels()
:
- * dst(I) = saturate(src1 - src2(I)) if mask(I) != 0
- * - *SubRS
:
- * dst(I) = saturate(src2 - src1(I)) if mask(I) != 0
- * - *where I
is a multi-dimensional index of array elements. In case
- * of multi-channel arrays, each channel is processed independently.
- * The first function in the list above can be replaced with matrix expressions:
- *
// C++ code:
- * - *dst = src1 - src2;
- * - *dst -= src1; // equivalent to subtract(dst, src1, dst);
- * - *The input arrays and the output array can all have the same or different
- * depths. For example, you can subtract to 8-bit unsigned arrays and store the
- * difference in a 16-bit signed array. Depth of the output array is determined
- * by dtype
parameter. In the second and third cases above, as well
- * as in the first case, when src1.depth() == src2.depth()
,
- * dtype
can be set to the default -1
. In this case
- * the output array will have the same depth as the input array, be it
- * src1
, src2
or both.
- *
Note: Saturation is not applied when the output array has the depth
- * CV_32S
. You may even get result of an incorrect sign in the case
- * of overflow.
Calculates the per-element difference between two arrays or array and a - * scalar.
- * - *The function subtract
calculates:
dst(I) = saturate(src1(I) - src2(I)) if mask(I) != 0
- * - *src2
is
- * constructed from Scalar
or has the same number of elements as
- * src1.channels()
:
- * dst(I) = saturate(src1(I) - src2) if mask(I) != 0
- * - *src1
is
- * constructed from Scalar
or has the same number of elements as
- * src2.channels()
:
- * dst(I) = saturate(src1 - src2(I)) if mask(I) != 0
- * - *SubRS
:
- * dst(I) = saturate(src2 - src1(I)) if mask(I) != 0
- * - *where I
is a multi-dimensional index of array elements. In case
- * of multi-channel arrays, each channel is processed independently.
- * The first function in the list above can be replaced with matrix expressions:
- *
// C++ code:
- * - *dst = src1 - src2;
- * - *dst -= src1; // equivalent to subtract(dst, src1, dst);
- * - *The input arrays and the output array can all have the same or different
- * depths. For example, you can subtract to 8-bit unsigned arrays and store the
- * difference in a 16-bit signed array. Depth of the output array is determined
- * by dtype
parameter. In the second and third cases above, as well
- * as in the first case, when src1.depth() == src2.depth()
,
- * dtype
can be set to the default -1
. In this case
- * the output array will have the same depth as the input array, be it
- * src1
, src2
or both.
- *
Note: Saturation is not applied when the output array has the depth
- * CV_32S
. You may even get result of an incorrect sign in the case
- * of overflow.
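A minimal Java sketch of the saturation and dtype behaviour described above (Core.subtract, CvType, Mat and Scalar are the standard OpenCV Java classes; the sizes and values are illustrative, and the native library must already be loaded, e.g. via System.loadLibrary(Core.NATIVE_LIBRARY_NAME)):

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;

public class SubtractSketch {
    public static void main(String[] args) {
        // two 8-bit unsigned single-channel arrays
        Mat src1 = new Mat(2, 2, CvType.CV_8UC1, new Scalar(10));
        Mat src2 = new Mat(2, 2, CvType.CV_8UC1, new Scalar(200));

        // output of the same depth as the inputs: 10 - 200 saturates to 0 in CV_8U
        Mat dst8 = new Mat();
        Core.subtract(src1, src2, dst8);

        // dtype widened to CV_16S: the true difference -190 is preserved
        Mat dst16 = new Mat();
        Core.subtract(src1, src2, dst16, new Mat(), CvType.CV_16S);
    }
}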
- * Calculates the sum of array elements.
- *
- * The function sum calculates and returns the sum of array
- * elements, independently for each channel.
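A short sketch of that per-channel behaviour (the C++ sum is exposed in Java as Core.sumElems; same imports and setup as the subtract sketch above):

Mat rgb = new Mat(4, 4, CvType.CV_8UC3, new Scalar(1, 2, 3)); // 16 pixels
Scalar s = Core.sumElems(rgb); // each channel is summed independently:
                               // s.val = {16.0, 32.0, 48.0, 0.0}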
- * Returns the trace of a matrix.
- *
- * The function trace returns the sum of the diagonal elements of
- * the matrix mtx:
- *
- * tr(mtx) = sum_i mtx(i,i)
- *
- * @param mtx input matrix.
- *
- * @see org.opencv.core.Core.trace
- */
- public static Scalar trace(Mat mtx)
- {
-
-     Scalar retVal = new Scalar(trace_0(mtx.nativeObj));
-
-     return retVal;
- }
-

 //
 // C++: void transform(Mat src, Mat& dst, Mat m)
 //

-/**
- * Performs the matrix transformation of every array element.
- *
- * The function transform performs the matrix transformation of
- * every element of the array src and stores the results in
- * dst:
- *
- * dst(I) = m * src(I)
- *
- * (when m.cols = src.channels()), or
- *
- * dst(I) = m * [src(I); 1]
- *
- * (when m.cols = src.channels()+1).
- *
- * Every element of the N-channel array src is
- * interpreted as an N-element vector that is transformed using the
- * M x N or M x (N+1) matrix m into an
- * M-element vector - the corresponding element of the output array
- * dst.
- *
- * The function may be used for geometrical transformation of
- * N-dimensional points, arbitrary linear color space transformation
- * (such as various kinds of RGB to YUV transforms), shuffling the image
- * channels, and so forth.
- *
- * @param src input array that must have as many channels (1 to 4) as
- * m.cols or m.cols-1.
- * @param dst output array of the same size and depth as src; it
- * has as many channels as m.rows.
- * @param m transformation 2x2 or 2x3 floating-point
- * matrix.
- *
- * @see org.opencv.core.Core.transform
- * @see org.opencv.imgproc.Imgproc#warpAffine
- * @see org.opencv.core.Core#perspectiveTransform
- * @see org.opencv.imgproc.Imgproc#warpPerspective
- * @see org.opencv.imgproc.Imgproc#getAffineTransform
- * @see org.opencv.video.Video#estimateRigidTransform
- */
+ //javadoc: transform(src, dst, m)
public static void transform(Mat src, Mat dst, Mat m)
{
-
+
transform_0(src.nativeObj, dst.nativeObj, m.nativeObj);
-
+
return;
}
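A hedged usage sketch of the transform binding kept above: here m is a 2x2 channel-shuffle matrix, so m.cols == src.channels() and dst(I) = m * src(I) (values are illustrative; same imports and setup as the earlier sketches):

Mat src = new Mat(1, 3, CvType.CV_32FC2, new Scalar(1, 2)); // 2-channel elements (1,2)
Mat m = new Mat(2, 2, CvType.CV_32F);
m.put(0, 0, 0, 1, 1, 0); // [[0 1], [1 0]] swaps the two channels
Mat dst = new Mat();
Core.transform(src, dst, m); // every (1,2) element becomes (2,1)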
@@ -7874,26 +2206,12 @@ public static void transform(Mat src, Mat dst, Mat m)
// C++: void transpose(Mat src, Mat& dst)
//
-/**
- * Transposes a matrix.
- *
- * The function "transpose" transposes the matrix src:
- *
- * dst(i,j) = src(j,i)
- *
- * Note: No complex conjugation is done in case of a complex matrix. It
- * should be done separately if needed.
- *
- * @param src input array.
- * @param dst output array of the same type as src.
- *
- * @see org.opencv.core.Core.transpose
- */
+ //javadoc: transpose(src, dst)
public static void transpose(Mat src, Mat dst)
{
-
+
transpose_0(src.nativeObj, dst.nativeObj);
-
+
return;
}
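And the matching sketch for transpose, dst(i,j) = src(j,i) (sizes illustrative):

Mat a = new Mat(2, 3, CvType.CV_8UC1); // 2x3 input
Mat at = new Mat();
Core.transpose(a, at); // at is 3x2; at(i,j) == a(j,i)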
@@ -7902,192 +2220,179 @@ public static void transpose(Mat src, Mat dst)
// C++: void vconcat(vector_Mat src, Mat& dst)
//
+ //javadoc: vconcat(src, dst)
 public static void vconcat(List<Mat> src, Mat dst)
 {
     Mat src_mat = Converters.vector_Mat_to_Mat(src);
     vconcat_0(src_mat.nativeObj, dst.nativeObj);
     return;
 }


-/**
- * Finds the global minimum and maximum in an array.
- *
- * The functions minMaxLoc find the minimum and maximum element
- * values and their positions. The extremums are searched across the whole
- * array or, if mask is not an empty array, in the specified array
- * region.
- *
- * The functions do not work with multi-channel arrays. If you need to find
- * minimum or maximum elements across all the channels, use "Mat.reshape"
- * first to reinterpret the array as single-channel. Or you may extract the
- * particular channel using either "extractImageCOI", or "mixChannels", or
- * "split".
- *
- * @param src input single-channel array.
- * @param mask optional mask used to select a sub-array.
- *
- * @see org.opencv.core.Core.minMaxLoc
- * @see org.opencv.core.Core#compare
- * @see org.opencv.core.Core#min
- * @see org.opencv.core.Core#mixChannels
- * @see org.opencv.core.Mat#reshape
- * @see org.opencv.core.Core#split
- * @see org.opencv.core.Core#max
- * @see org.opencv.core.Core#inRange
- */
- public static MinMaxLocResult minMaxLoc(Mat src, Mat mask) {
-     MinMaxLocResult res = new MinMaxLocResult();
-     long maskNativeObj=0;
-     if (mask != null) {
-         maskNativeObj=mask.nativeObj;
-     }
-     double resarr[] = n_minMaxLocManual(src.nativeObj, maskNativeObj);
-     res.minVal=resarr[0];
-     res.maxVal=resarr[1];
-     res.minLoc.x=resarr[2];
-     res.minLoc.y=resarr[3];
-     res.maxLoc.x=resarr[4];
-     res.maxLoc.y=resarr[5];
-     return res;
- }
-
-/**
- * Finds the global minimum and maximum in an array.
- *
- * The functions minMaxLoc find the minimum and maximum element
- * values and their positions. The extremums are searched across the whole
- * array or, if mask is not an empty array, in the specified array
- * region.
- *
- * The functions do not work with multi-channel arrays. If you need to find
- * minimum or maximum elements across all the channels, use "Mat.reshape"
- * first to reinterpret the array as single-channel. Or you may extract the
- * particular channel using either "extractImageCOI", or "mixChannels", or
- * "split".
- *
- * @param src input single-channel array.
- *
- * @see org.opencv.core.Core.minMaxLoc
- * @see org.opencv.core.Core#compare
- * @see org.opencv.core.Core#min
- * @see org.opencv.core.Core#mixChannels
- * @see org.opencv.core.Mat#reshape
- * @see org.opencv.core.Core#split
- * @see org.opencv.core.Core#max
- * @see org.opencv.core.Core#inRange
- */
- public static MinMaxLocResult minMaxLoc(Mat src) {
-     return minMaxLoc(src, null);
- }
-
-
- // C++: Size getTextSize(const string& text, int fontFace, double fontScale, int thickness, int* baseLine);
-/**
- * Calculates the width and height of a text string.
- *
- * The function getTextSize calculates and returns the size of a
- * box that contains the specified text. That is, the following code renders
- * some text, the tight box surrounding it, and the baseline:
- *
- * // C++ code:
- *
- * string text = "Funny text inside the box";
- * int fontFace = FONT_HERSHEY_SCRIPT_SIMPLEX;
- * double fontScale = 2;
- * int thickness = 3;
- * Mat img(600, 800, CV_8UC3, Scalar.all(0));
- * int baseline=0;
- * Size textSize = getTextSize(text, fontFace,
- *                             fontScale, thickness, &baseline);
- * baseline += thickness;
- * // center the text
- * Point textOrg((img.cols - textSize.width)/2,
- *               (img.rows + textSize.height)/2);
- * // draw the box
- * rectangle(img, textOrg + Point(0, baseline),
- *           textOrg + Point(textSize.width, -textSize.height),
- *           Scalar(0,0,255));
- * //... and the baseline first
- * line(img, textOrg + Point(0, thickness),
- *      textOrg + Point(textSize.width, thickness),
- *      Scalar(0, 0, 255));
- * // then put the text itself
- * putText(img, text, textOrg, fontFace, fontScale,
- *         Scalar.all(255), thickness, 8);
- *
- * @param text Input text string.
- * @param fontFace Font to use. See the "putText" for details.
- * @param fontScale Font scale. See the "putText" for details.
- * @param thickness Thickness of lines used to render the text. See "putText"
- * for details.
- * @param baseLine Output parameter - y-coordinate of the baseline relative to
- * the bottom-most text point.
- *
- * @see org.opencv.core.Core.getTextSize
- */
- public static Size getTextSize(String text, int fontFace, double fontScale, int thickness, int[] baseLine) {
-     if(baseLine != null && baseLine.length != 1)
-         throw new java.lang.IllegalArgumentException("'baseLine' must be 'int[1]' or 'null'.");
-     Size retVal = new Size(n_getTextSize(text, fontFace, fontScale, thickness, baseLine));
-     return retVal;
- }
+
+    public MinMaxLocResult() {
+        minVal=0; maxVal=0;
+        minLoc=new Point();
+        maxLoc=new Point();
+    }
+}


+// C++: minMaxLoc(Mat src, double* minVal, double* maxVal=0, Point* minLoc=0, Point* maxLoc=0, InputArray mask=noArray())


+//javadoc: minMaxLoc(src, mask)
+public static MinMaxLocResult minMaxLoc(Mat src, Mat mask) {
+    MinMaxLocResult res = new MinMaxLocResult();
+    long maskNativeObj=0;
+    if (mask != null) {
+        maskNativeObj=mask.nativeObj;
+    }
+    double resarr[] = n_minMaxLocManual(src.nativeObj, maskNativeObj);
+    res.minVal=resarr[0];
+    res.maxVal=resarr[1];
+    res.minLoc.x=resarr[2];
+    res.minLoc.y=resarr[3];
+    res.maxLoc.x=resarr[4];
+    res.maxLoc.y=resarr[5];
+    return res;
+}


+//javadoc: minMaxLoc(src)
+public static MinMaxLocResult minMaxLoc(Mat src) {
+    return minMaxLoc(src, null);
+}


+ // C++: Scalar mean(Mat src, Mat mask = Mat())
+ private static native double[] mean_0(long src_nativeObj, long mask_nativeObj);
+ private static native double[] mean_1(long src_nativeObj);
+
+ // C++: Scalar sum(Mat src)
+ private static native double[] sumElems_0(long src_nativeObj);
+
+ // C++: Scalar trace(Mat mtx)
+ private static native double[] trace_0(long mtx_nativeObj);
+
+ // C++: String getBuildInformation()
+ private static native String getBuildInformation_0();
+
+ // C++: bool checkRange(Mat a, bool quiet = true, _hidden_ * pos = 0, double minVal = -DBL_MAX, double maxVal = DBL_MAX)
+ private static native boolean checkRange_0(long a_nativeObj, boolean quiet, double minVal, double maxVal);
+ private static native boolean checkRange_1(long a_nativeObj);
+
+ // C++: bool eigen(Mat src, Mat& eigenvalues, Mat& eigenvectors = Mat())
+ private static native boolean eigen_0(long src_nativeObj, long eigenvalues_nativeObj, long eigenvectors_nativeObj);
+ private static native boolean eigen_1(long src_nativeObj, long eigenvalues_nativeObj);
+
+ // C++: bool solve(Mat src1, Mat src2, Mat& dst, int flags = DECOMP_LU)
+ private static native boolean solve_0(long src1_nativeObj, long src2_nativeObj, long dst_nativeObj, int flags);
+ private static native boolean solve_1(long src1_nativeObj, long src2_nativeObj, long dst_nativeObj);

- // C++: void LUT(Mat src, Mat lut, Mat& dst, int interpolation = 0)
- private static native void LUT_0(long src_nativeObj, long lut_nativeObj, long dst_nativeObj, int interpolation);
- private static native void LUT_1(long src_nativeObj, long lut_nativeObj, long dst_nativeObj);
+ // C++: bool useIPP()
+ private static native boolean useIPP_0();

 // C++: double Mahalanobis(Mat v1, Mat v2, Mat icovar)
 private static native double Mahalanobis_0(long v1_nativeObj, long v2_nativeObj, long icovar_nativeObj);

+ // C++: double PSNR(Mat src1, Mat src2)
+ private static native double PSNR_0(long src1_nativeObj, long src2_nativeObj);
+
+ // C++: double determinant(Mat mtx)
+ private static native double determinant_0(long mtx_nativeObj);
+
+ // C++: double getTickFrequency()
+ private static native double getTickFrequency_0();
+
+ // C++: double invert(Mat src, Mat& dst, int flags = DECOMP_LU)
+ private static native double invert_0(long src_nativeObj, long dst_nativeObj, int flags);
+ private static native double invert_1(long src_nativeObj, long dst_nativeObj);
+
+ // C++: double kmeans(Mat data, int K, Mat& bestLabels, TermCriteria criteria, int attempts, int flags, Mat& centers = Mat())
+ private static native double kmeans_0(long data_nativeObj, int K, long bestLabels_nativeObj, int criteria_type, int criteria_maxCount, double criteria_epsilon, int attempts, int flags, long centers_nativeObj);
+ private static native double kmeans_1(long data_nativeObj, int K, long bestLabels_nativeObj, int criteria_type, int criteria_maxCount, double criteria_epsilon, int attempts, int flags);
+
+ // C++: double norm(Mat src1, Mat src2, int normType = NORM_L2, Mat mask = Mat())
+ private static native double norm_0(long src1_nativeObj, long src2_nativeObj, int normType, long mask_nativeObj);
+ private static native double norm_1(long src1_nativeObj, long src2_nativeObj, int normType);
+ private static native double norm_2(long src1_nativeObj, long src2_nativeObj);
+
+ // C++: double norm(Mat src1, int normType = NORM_L2, Mat mask = Mat())
+ private static native double norm_3(long src1_nativeObj, int normType, long mask_nativeObj);
+ private static native double norm_4(long src1_nativeObj, int normType);
+ private static native double norm_5(long src1_nativeObj);
+
+ // C++: double solvePoly(Mat coeffs, Mat& roots, int maxIters = 300)
+ private static native double solvePoly_0(long coeffs_nativeObj, long roots_nativeObj, int maxIters);
+ private static native double solvePoly_1(long coeffs_nativeObj, long roots_nativeObj);
+
+ // C++: float cubeRoot(float val)
+ private static native float cubeRoot_0(float val);
+
+ // C++: float fastAtan2(float y, float x)
+ private static native float fastAtan2_0(float y, float x);
+
+ // C++: int borderInterpolate(int p, int len, int borderType)
+ private static native int borderInterpolate_0(int p, int len, int borderType);
+
+ // C++: int countNonZero(Mat src)
+ private static native int countNonZero_0(long src_nativeObj);
+
+ // C++: int getNumThreads()
+ private static native int getNumThreads_0();
+
+ // C++: int getNumberOfCPUs()
+ private static native int getNumberOfCPUs_0();
+
+ // C++: int getOptimalDFTSize(int vecsize)
+ private static native int getOptimalDFTSize_0(int vecsize);
+
+ // C++: int getThreadNum()
+ private static native int getThreadNum_0();
+
+ // C++: int solveCubic(Mat coeffs, Mat& roots)
+ private static native int solveCubic_0(long coeffs_nativeObj, long roots_nativeObj);
+
+ // C++: int64 getCPUTickCount()
+ private static native long getCPUTickCount_0();
+
+ // C++: int64 getTickCount()
+ private static native long getTickCount_0();
+
+ // C++: void LUT(Mat src, Mat lut, Mat& dst)
+ private static native void LUT_0(long src_nativeObj, long lut_nativeObj, long dst_nativeObj);
+
 // C++: void PCABackProject(Mat data, Mat mean, Mat eigenvectors, Mat& result)
 private static native void PCABackProject_0(long data_nativeObj, long mean_nativeObj, long eigenvectors_nativeObj, long result_nativeObj);

- // C++: void PCACompute(Mat data, Mat& mean, Mat& eigenvectors, int maxComponents = 0)
- private static native void PCACompute_0(long data_nativeObj, long mean_nativeObj, long eigenvectors_nativeObj, int maxComponents);
- private static native void PCACompute_1(long data_nativeObj, long mean_nativeObj, long eigenvectors_nativeObj);
+ // C++: void PCACompute(Mat data, Mat& mean, Mat& eigenvectors, double retainedVariance)
+ private static native void PCACompute_0(long data_nativeObj, long mean_nativeObj, long eigenvectors_nativeObj, double retainedVariance);

- // C++: void PCAComputeVar(Mat data, Mat& mean, Mat& eigenvectors, double retainedVariance)
- private static native void PCAComputeVar_0(long data_nativeObj, long mean_nativeObj, long eigenvectors_nativeObj, double retainedVariance);
+ // C++: void PCACompute(Mat data, Mat& mean, Mat& eigenvectors, int maxComponents = 0)
+ private static native void PCACompute_1(long data_nativeObj, long mean_nativeObj, long eigenvectors_nativeObj, int maxComponents);
+ private static native void PCACompute_2(long data_nativeObj, long mean_nativeObj, long eigenvectors_nativeObj);

 // C++: void PCAProject(Mat data, Mat mean, Mat eigenvectors, Mat& result)
 private static native void PCAProject_0(long data_nativeObj, long mean_nativeObj, long eigenvectors_nativeObj, long result_nativeObj);

@@ -8119,10 +2424,6 @@ public static Size getTextSize(String text, int fontFace, double fontScale, int
 private static native void addWeighted_0(long src1_nativeObj, double alpha, long src2_nativeObj, double beta, double gamma, long dst_nativeObj, int dtype);
 private static native void addWeighted_1(long src1_nativeObj, double alpha, long src2_nativeObj, double beta, double gamma, long dst_nativeObj);

- // C++: void arrowedLine(Mat& img, Point pt1, Point pt2, Scalar color, int thickness = 1, int line_type = 8, int shift = 0, double tipLength = 0.1)
- private static native void arrowedLine_0(long img_nativeObj, double pt1_x, double pt1_y, double pt2_x, double pt2_y, double color_val0, double color_val1, double color_val2, double color_val3, int thickness, int line_type, int shift, double tipLength);
- private static native void arrowedLine_1(long img_nativeObj, double pt1_x, double pt1_y, double pt2_x, double pt2_y, double color_val0, double color_val1, double color_val2, double color_val3);
-
 // C++: void batchDistance(Mat src1, Mat src2, Mat& dist, int dtype, Mat& nidx, int normType = NORM_L2, int K = 0, Mat mask = Mat(), int update = 0, bool crosscheck = false)
 private static native void batchDistance_0(long src1_nativeObj, long src2_nativeObj, long dist_nativeObj, int dtype, long nidx_nativeObj, int normType, int K, long mask_nativeObj, int update, boolean crosscheck);
 private static native void batchDistance_1(long src1_nativeObj, long src2_nativeObj, long dist_nativeObj, int dtype, long nidx_nativeObj, int normType, int K);

@@ -8152,18 +2453,6 @@ public static Size getTextSize(String text, int fontFace, double fontScale, int
 private static native void cartToPolar_0(long x_nativeObj, long y_nativeObj, long magnitude_nativeObj, long angle_nativeObj, boolean angleInDegrees);
 private static native void cartToPolar_1(long x_nativeObj, long y_nativeObj, long magnitude_nativeObj, long angle_nativeObj);

- // C++: bool checkRange(Mat a, bool quiet = true, _hidden_ * pos = 0, double minVal = -DBL_MAX, double maxVal = DBL_MAX)
- private static native boolean checkRange_0(long a_nativeObj, boolean quiet, double minVal, double maxVal);
- private static native boolean checkRange_1(long a_nativeObj);
-
- // C++: void circle(Mat& img, Point center, int radius, Scalar color, int thickness = 1, int lineType = 8, int shift = 0)
- private static native void circle_0(long img_nativeObj, double center_x, double center_y, int radius, double color_val0, double color_val1, double color_val2, double color_val3, int thickness, int lineType, int shift);
- private static native void circle_1(long img_nativeObj, double center_x, double center_y, int radius, double color_val0, double color_val1, double color_val2, double color_val3, int thickness);
- private static native void circle_2(long img_nativeObj, double center_x, double center_y, int radius, double color_val0, double color_val1, double color_val2, double color_val3);
-
- // C++: bool clipLine(Rect imgRect, Point& pt1, Point& pt2)
- private static native boolean clipLine_0(int imgRect_x, int imgRect_y, int imgRect_width, int imgRect_height, double pt1_x, double pt1_y, double[] pt1_out, double pt2_x, double pt2_y, double[] pt2_out);
-
 // C++: void compare(Mat src1, Mat src2, Mat& dst, int cmpop)
 private static native void compare_0(long src1_nativeObj, long src2_nativeObj, long dst_nativeObj, int cmpop);

@@ -8174,23 +2463,21 @@ public static Size getTextSize(String text, int fontFace, double fontScale, int
 private static native void completeSymm_0(long mtx_nativeObj, boolean lowerToUpper);
 private static native void completeSymm_1(long mtx_nativeObj);

+ // C++: void convertFp16(Mat src, Mat& dst)
+ private static native void convertFp16_0(long src_nativeObj, long dst_nativeObj);
+
 // C++: void convertScaleAbs(Mat src, Mat& dst, double alpha = 1, double beta = 0)
 private static native void convertScaleAbs_0(long src_nativeObj, long dst_nativeObj, double alpha, double beta);
 private static native void convertScaleAbs_1(long src_nativeObj, long dst_nativeObj);

- // C++: int countNonZero(Mat src)
- private static native int countNonZero_0(long src_nativeObj);
-
- // C++: float cubeRoot(float val)
- private static native float cubeRoot_0(float val);
+ // C++: void copyMakeBorder(Mat src, Mat& dst, int top, int bottom, int left, int right, int borderType, Scalar value = Scalar())
+ private static native void copyMakeBorder_0(long src_nativeObj, long dst_nativeObj, int top, int bottom, int left, int right, int borderType, double value_val0, double value_val1, double value_val2, double value_val3);
+ private static native void copyMakeBorder_1(long src_nativeObj, long dst_nativeObj, int top, int bottom, int left, int right, int borderType);

 // C++: void dct(Mat src, Mat& dst, int flags = 0)
 private static native void dct_0(long src_nativeObj, long dst_nativeObj, int flags);
 private static native void dct_1(long src_nativeObj, long dst_nativeObj);

- // C++: double determinant(Mat mtx)
- private static native double determinant_0(long mtx_nativeObj);
-
 // C++: void dft(Mat src, Mat& dst, int flags = 0, int nonzeroRows = 0)
 private static native void dft_0(long src_nativeObj, long dst_nativeObj, int flags, int nonzeroRows);
 private static native void dft_1(long src_nativeObj, long dst_nativeObj);

@@ -8200,34 +2487,14 @@ public static Size getTextSize(String text, int fontFace, double fontScale, int
 private static native void divide_1(long src1_nativeObj, long src2_nativeObj, long dst_nativeObj, double scale);
 private static native void divide_2(long src1_nativeObj, long src2_nativeObj, long dst_nativeObj);

- // C++: void divide(double scale, Mat src2, Mat& dst, int dtype = -1)
- private static native void divide_3(double scale, long src2_nativeObj, long dst_nativeObj, int dtype);
- private static native void divide_4(double scale, long src2_nativeObj, long dst_nativeObj);
-
 // C++: void divide(Mat src1, Scalar src2, Mat& dst, double scale = 1, int dtype = -1)
- private static native void divide_5(long src1_nativeObj, double src2_val0, double src2_val1, double src2_val2, double src2_val3, long dst_nativeObj, double scale, int dtype);
- private static native void divide_6(long src1_nativeObj, double src2_val0, double src2_val1, double src2_val2, double src2_val3, long dst_nativeObj, double scale);
- private static native void divide_7(long src1_nativeObj, double src2_val0, double src2_val1, double src2_val2, double src2_val3, long dst_nativeObj);
+ private static native void divide_3(long src1_nativeObj, double src2_val0, double src2_val1, double src2_val2, double src2_val3, long dst_nativeObj, double scale, int dtype);
+ private static native void divide_4(long src1_nativeObj, double src2_val0, double src2_val1, double src2_val2, double src2_val3, long dst_nativeObj, double scale);
+ private static native void divide_5(long src1_nativeObj, double src2_val0, double src2_val1, double src2_val2, double src2_val3, long dst_nativeObj);

- // C++: void drawMarker(Mat& img, Point position, Scalar color, int markerType = MARKER_CROSS, int markerSize = 20, int thickness = 1, int line_type = 8)
- private static native void drawMarker_0(long img_nativeObj, double position_x, double position_y, double color_val0, double color_val1, double color_val2, double color_val3, int markerType, int markerSize, int thickness, int line_type);
- private static native void drawMarker_1(long img_nativeObj, double position_x, double position_y, double color_val0, double color_val1, double color_val2, double color_val3);
-
- // C++: bool eigen(Mat src, bool computeEigenvectors, Mat& eigenvalues, Mat& eigenvectors)
- private static native boolean eigen_0(long src_nativeObj, boolean computeEigenvectors, long eigenvalues_nativeObj, long eigenvectors_nativeObj);
-
- // C++: void ellipse(Mat& img, Point center, Size axes, double angle, double startAngle, double endAngle, Scalar color, int thickness = 1, int lineType = 8, int shift = 0)
- private static native void ellipse_0(long img_nativeObj, double center_x, double center_y, double axes_width, double axes_height, double angle, double startAngle, double endAngle, double color_val0, double color_val1, double color_val2, double color_val3, int thickness, int lineType, int shift);
- private static native void ellipse_1(long img_nativeObj, double center_x, double center_y, double axes_width, double axes_height, double angle, double startAngle, double endAngle, double color_val0, double color_val1, double color_val2, double color_val3, int thickness);
- private static native void ellipse_2(long img_nativeObj, double center_x, double center_y, double axes_width, double axes_height, double angle, double startAngle, double endAngle, double color_val0, double color_val1, double color_val2, double color_val3);
-
- // C++: void ellipse(Mat& img, RotatedRect box, Scalar color, int thickness = 1, int lineType = 8)
- private static native void ellipse_3(long img_nativeObj, double box_center_x, double box_center_y, double box_size_width, double box_size_height, double box_angle, double color_val0, double color_val1, double color_val2, double color_val3, int thickness, int lineType);
- private static native void ellipse_4(long img_nativeObj, double box_center_x, double box_center_y, double box_size_width, double box_size_height, double box_angle, double color_val0, double color_val1, double color_val2, double color_val3, int thickness);
- private static native void ellipse_5(long img_nativeObj, double box_center_x, double box_center_y, double box_size_width, double box_size_height, double box_angle, double color_val0, double color_val1, double color_val2, double color_val3);
-
- // C++: void ellipse2Poly(Point center, Size axes, int angle, int arcStart, int arcEnd, int delta, vector_Point& pts)
- private static native void ellipse2Poly_0(double center_x, double center_y, double axes_width, double axes_height, int angle, int arcStart, int arcEnd, int delta, long pts_mat_nativeObj);
+ // C++: void divide(double scale, Mat src2, Mat& dst, int dtype = -1)
+ private static native void divide_6(double scale, long src2_nativeObj, long dst_nativeObj, int dtype);
+ private static native void divide_7(double scale, long src2_nativeObj, long dst_nativeObj);

 // C++: void exp(Mat src, Mat& dst)
 private static native void exp_0(long src_nativeObj, long dst_nativeObj);

@@ -8235,17 +2502,6 @@ public static Size getTextSize(String text, int fontFace, double fontScale, int
 // C++: void extractChannel(Mat src, Mat& dst, int coi)
 private static native void extractChannel_0(long src_nativeObj, long dst_nativeObj, int coi);

- // C++: float fastAtan2(float y, float x)
- private static native float fastAtan2_0(float y, float x);
-
- // C++: void fillConvexPoly(Mat& img, vector_Point points, Scalar color, int lineType = 8, int shift = 0)
- private static native void fillConvexPoly_0(long img_nativeObj, long points_mat_nativeObj, double color_val0, double color_val1, double color_val2, double color_val3, int lineType, int shift);
- private static native void fillConvexPoly_1(long img_nativeObj, long points_mat_nativeObj, double color_val0, double color_val1, double color_val2, double color_val3);
-
- // C++: void fillPoly(Mat& img, vector_vector_Point pts, Scalar color, int lineType = 8, int shift = 0, Point offset = Point())
- private static native void fillPoly_0(long img_nativeObj, long pts_mat_nativeObj, double color_val0, double color_val1, double color_val2, double color_val3, int lineType, int shift, double offset_x, double offset_y);
- private static native void fillPoly_1(long img_nativeObj, long pts_mat_nativeObj, double color_val0, double color_val1, double color_val2, double color_val3);
-
 // C++: void findNonZero(Mat src, Mat& idx)
 private static native void findNonZero_0(long src_nativeObj, long idx_nativeObj);

@@ -8256,30 +2512,6 @@ public static Size getTextSize(String text, int fontFace, double fontScale, int
 private static native void gemm_0(long src1_nativeObj, long src2_nativeObj, double alpha, long src3_nativeObj, double beta, long dst_nativeObj, int flags);
 private static native void gemm_1(long src1_nativeObj, long src2_nativeObj, double alpha, long src3_nativeObj, double beta, long dst_nativeObj);

- // C++: string getBuildInformation()
- private static native String getBuildInformation_0();
-
- // C++: int64 getCPUTickCount()
- private static native long getCPUTickCount_0();
-
- // C++: int getNumThreads()
- private static native int getNumThreads_0();
-
- // C++: int getNumberOfCPUs()
- private static native int getNumberOfCPUs_0();
-
- // C++: int getOptimalDFTSize(int vecsize)
- private static native int getOptimalDFTSize_0(int vecsize);
-
- // C++: int getThreadNum()
- private static native int getThreadNum_0();
-
- // C++: int64 getTickCount()
- private static native long getTickCount_0();
-
- // C++: double getTickFrequency()
- private static native double getTickFrequency_0();
-
 // C++: void hconcat(vector_Mat src, Mat& dst)
 private static native void hconcat_0(long src_mat_nativeObj, long dst_nativeObj);

@@ -8297,19 +2529,6 @@ public static Size getTextSize(String text, int fontFace, double fontScale, int
 // C++: void insertChannel(Mat src, Mat& dst, int coi)
 private static native void insertChannel_0(long src_nativeObj, long dst_nativeObj, int coi);

- // C++: double invert(Mat src, Mat& dst, int flags = DECOMP_LU)
- private static native double invert_0(long src_nativeObj, long dst_nativeObj, int flags);
- private static native double invert_1(long src_nativeObj, long dst_nativeObj);
-
- // C++: double kmeans(Mat data, int K, Mat& bestLabels, TermCriteria criteria, int attempts, int flags, Mat& centers = Mat())
- private static native double kmeans_0(long data_nativeObj, int K, long bestLabels_nativeObj, int criteria_type, int criteria_maxCount, double criteria_epsilon, int attempts, int flags, long centers_nativeObj);
- private static native double kmeans_1(long data_nativeObj, int K, long bestLabels_nativeObj, int criteria_type, int criteria_maxCount, double criteria_epsilon, int attempts, int flags);
-
- // C++: void line(Mat& img, Point pt1, Point pt2, Scalar color, int thickness = 1, int lineType = 8, int shift = 0)
- private static native void line_0(long img_nativeObj, double pt1_x, double pt1_y, double pt2_x, double pt2_y, double color_val0, double color_val1, double color_val2, double color_val3, int thickness, int lineType, int shift);
- private static native void line_1(long img_nativeObj, double pt1_x, double pt1_y, double pt2_x, double pt2_y, double color_val0, double color_val1, double color_val2, double color_val3, int thickness);
- private static native void line_2(long img_nativeObj, double pt1_x, double pt1_y, double pt2_x, double pt2_y, double color_val0, double color_val1, double color_val2, double color_val3);
-
 // C++: void log(Mat src, Mat& dst)
 private static native void log_0(long src_nativeObj, long dst_nativeObj);

@@ -8322,10 +2541,6 @@ public static Size getTextSize(String text, int fontFace, double fontScale, int
 // C++: void max(Mat src1, Scalar src2, Mat& dst)
 private static native void max_1(long src1_nativeObj, double src2_val0, double src2_val1, double src2_val2, double src2_val3, long dst_nativeObj);

- // C++: Scalar mean(Mat src, Mat mask = Mat())
- private static native double[] mean_0(long src_nativeObj, long mask_nativeObj);
- private static native double[] mean_1(long src_nativeObj);
-
 // C++: void meanStdDev(Mat src, vector_double& mean, vector_double& stddev, Mat mask = Mat())
 private static native void meanStdDev_0(long src_nativeObj, long mean_mat_nativeObj, long stddev_mat_nativeObj, long mask_nativeObj);
 private static native void meanStdDev_1(long src_nativeObj, long mean_mat_nativeObj, long stddev_mat_nativeObj);

@@ -8361,16 +2576,6 @@ public static Size getTextSize(String text, int fontFace, double fontScale, int
 private static native void multiply_4(long src1_nativeObj, double src2_val0, double src2_val1, double src2_val2, double src2_val3, long dst_nativeObj, double scale);
 private static native void multiply_5(long src1_nativeObj, double src2_val0, double src2_val1, double src2_val2, double src2_val3, long dst_nativeObj);

- // C++: double norm(Mat src1, int normType = NORM_L2, Mat mask = Mat())
- private static native double norm_0(long src1_nativeObj, int normType, long mask_nativeObj);
- private static native double norm_1(long src1_nativeObj, int normType);
- private static native double norm_2(long src1_nativeObj);
-
- // C++: double norm(Mat src1, Mat src2, int normType = NORM_L2, Mat mask = Mat())
- private static native double norm_3(long src1_nativeObj, long src2_nativeObj, int normType, long mask_nativeObj);
- private static native double norm_4(long src1_nativeObj, long src2_nativeObj, int normType);
- private static native double norm_5(long src1_nativeObj, long src2_nativeObj);
-
 // C++: void normalize(Mat src, Mat& dst, double alpha = 1, double beta = 0, int norm_type = NORM_L2, int dtype = -1, Mat mask = Mat())
 private static native void normalize_0(long src_nativeObj, long dst_nativeObj, double alpha, double beta, int norm_type, int dtype, long mask_nativeObj);
 private static native void normalize_1(long src_nativeObj, long dst_nativeObj, double alpha, double beta, int norm_type, int dtype);

@@ -8392,20 +2597,10 @@ public static Size getTextSize(String text, int fontFace, double fontScale, int
 private static native void polarToCart_0(long magnitude_nativeObj, long angle_nativeObj, long x_nativeObj, long y_nativeObj, boolean angleInDegrees);
 private static native void polarToCart_1(long magnitude_nativeObj, long angle_nativeObj, long x_nativeObj, long y_nativeObj);

- // C++: void polylines(Mat& img, vector_vector_Point pts, bool isClosed, Scalar color, int thickness = 1, int lineType = 8, int shift = 0)
- private static native void polylines_0(long img_nativeObj, long pts_mat_nativeObj, boolean isClosed, double color_val0, double color_val1, double color_val2, double color_val3, int thickness, int lineType, int shift);
- private static native void polylines_1(long img_nativeObj, long pts_mat_nativeObj, boolean isClosed, double color_val0, double color_val1, double color_val2, double color_val3, int thickness);
- private static native void polylines_2(long img_nativeObj, long pts_mat_nativeObj, boolean isClosed, double color_val0, double color_val1, double color_val2, double color_val3);
-
 // C++: void pow(Mat src, double power, Mat& dst)
 private static native void pow_0(long src_nativeObj, double power, long dst_nativeObj);

- // C++: void putText(Mat img, string text, Point org, int fontFace, double fontScale, Scalar color, int thickness = 1, int lineType = 8, bool bottomLeftOrigin = false)
- private static native void putText_0(long img_nativeObj, String text, double org_x, double org_y, int fontFace, double fontScale, double color_val0, double color_val1, double color_val2, double color_val3, int thickness, int lineType, boolean bottomLeftOrigin);
- private static native void putText_1(long img_nativeObj, String text, double org_x, double org_y, int fontFace, double fontScale, double color_val0, double color_val1, double color_val2, double color_val3, int thickness);
- private static native void putText_2(long img_nativeObj, String text, double org_x, double org_y, int fontFace, double fontScale, double color_val0, double color_val1, double color_val2, double color_val3);
-
- // C++: void randShuffle_(Mat& dst, double iterFactor = 1.)
+ // C++: void randShuffle(Mat& dst, double iterFactor = 1., RNG* rng = 0)
 private static native void randShuffle_0(long dst_nativeObj, double iterFactor);
 private static native void randShuffle_1(long dst_nativeObj);

@@ -8415,11 +2610,6 @@ public static Size getTextSize(String text, int fontFace, double fontScale, int
 // C++: void randu(Mat& dst, double low, double high)
 private static native void randu_0(long dst_nativeObj, double low, double high);

- // C++: void rectangle(Mat& img, Point pt1, Point pt2, Scalar color, int thickness = 1, int lineType = 8, int shift = 0)
- private static native void rectangle_0(long img_nativeObj, double pt1_x, double pt1_y, double pt2_x, double pt2_y, double color_val0, double color_val1, double color_val2, double color_val3, int thickness, int lineType, int shift);
- private static native void rectangle_1(long img_nativeObj, double pt1_x, double pt1_y, double pt2_x, double pt2_y, double color_val0, double color_val1, double color_val2, double color_val3, int thickness);
- private static native void rectangle_2(long img_nativeObj, double pt1_x, double pt1_y, double pt2_x, double pt2_y, double color_val0, double color_val1, double color_val2, double color_val3);
-
 // C++: void reduce(Mat src, Mat& dst, int dim, int rtype, int dtype = -1)
 private static native void reduce_0(long src_nativeObj, long dst_nativeObj, int dim, int rtype, int dtype);
 private static native void reduce_1(long src_nativeObj, long dst_nativeObj, int dim, int rtype);

@@ -8427,6 +2617,9 @@ public static Size getTextSize(String text, int fontFace, double fontScale, int
 // C++: void repeat(Mat src, int ny, int nx, Mat& dst)
 private static native void repeat_0(long src_nativeObj, int ny, int nx, long dst_nativeObj);

+ // C++: void rotate(Mat src, Mat& dst, int rotateCode)
+ private static native void rotate_0(long src_nativeObj, long dst_nativeObj, int rotateCode);
+
 // C++: void scaleAdd(Mat src1, double alpha, Mat src2, Mat& dst)
 private static native void scaleAdd_0(long src1_nativeObj, double alpha, long src2_nativeObj, long dst_nativeObj);

@@ -8443,17 +2636,6 @@ public static Size getTextSize(String text, int fontFace, double fontScale, int
 // C++: void setRNGSeed(int seed)
 private static native void setRNGSeed_0(int seed);

- // C++: bool solve(Mat src1, Mat src2, Mat& dst, int flags = DECOMP_LU)
- private static native boolean solve_0(long src1_nativeObj, long src2_nativeObj, long dst_nativeObj, int flags);
- private static native boolean solve_1(long src1_nativeObj, long src2_nativeObj, long dst_nativeObj);
-
- // C++: int solveCubic(Mat coeffs, Mat& roots)
- private static native int solveCubic_0(long coeffs_nativeObj, long roots_nativeObj);
-
- // C++: double solvePoly(Mat coeffs, Mat& roots, int maxIters = 300)
- private static native double solvePoly_0(long coeffs_nativeObj, long roots_nativeObj, int maxIters);
- private static native double solvePoly_1(long coeffs_nativeObj, long roots_nativeObj);
-
 // C++: void sort(Mat src, Mat& dst, int flags)
 private static native void sort_0(long src_nativeObj, long dst_nativeObj, int flags);

@@ -8476,12 +2658,6 @@ public static Size getTextSize(String text, int fontFace, double fontScale, int
 private static native void subtract_4(long src1_nativeObj, double src2_val0, double src2_val1, double src2_val2, double src2_val3, long dst_nativeObj, long mask_nativeObj);
 private static native void subtract_5(long src1_nativeObj, double src2_val0, double src2_val1, double src2_val2, double src2_val3, long dst_nativeObj);

- // C++: Scalar sum(Mat src)
- private static native double[] sumElems_0(long src_nativeObj);
-
- // C++: Scalar trace(Mat mtx)
- private static native double[] trace_0(long mtx_nativeObj);
-
 // C++: void transform(Mat src, Mat& dst, Mat m)
 private static native void transform_0(long src_nativeObj, long dst_nativeObj, long m_nativeObj);

@@ -8490,7 +2666,9 @@ public static Size getTextSize(String text, int fontFace, double fontScale, int
 // C++: void vconcat(vector_Mat src, Mat& dst)
 private static native void vconcat_0(long src_mat_nativeObj, long dst_nativeObj);

- private static native double[] n_minMaxLocManual(long src_nativeObj, long mask_nativeObj);
- private static native double[] n_getTextSize(String text, int fontFace, double fontScale, int thickness, int[] baseLine);
+
+ // C++: void setUseIPP(bool flag)
+ private static native void setUseIPP_0(boolean flag);

+private static native double[] n_minMaxLocManual(long src_nativeObj, long mask_nativeObj);
 }
diff --git a/imaging-utils/src/main/java/org/opencv/features2d/DMatch.java b/imaging-utils/src/main/java/org/opencv/core/DMatch.java
similarity index 83%
rename from imaging-utils/src/main/java/org/opencv/features2d/DMatch.java
rename to imaging-utils/src/main/java/org/opencv/core/DMatch.java
index d520a2e..db44d9a 100644
--- a/imaging-utils/src/main/java/org/opencv/features2d/DMatch.java
+++ b/imaging-utils/src/main/java/org/opencv/core/DMatch.java
@@ -1,4 +1,4 @@
-package org.opencv.features2d;
+package org.opencv.core;

 //C++: class DMatch

@@ -21,12 +21,15 @@ public class DMatch {
      */
     public int imgIdx;

+    // javadoc: DMatch::distance
     public float distance;

+    // javadoc: DMatch::DMatch()
     public DMatch() {
         this(-1, -1, Float.MAX_VALUE);
     }

+    // javadoc: DMatch::DMatch(_queryIdx, _trainIdx, _distance)
     public DMatch(int _queryIdx, int _trainIdx, float _distance) {
         queryIdx = _queryIdx;
         trainIdx = _trainIdx;
@@ -34,6 +37,7 @@ public DMatch(int _queryIdx, int _trainIdx, float _distance) {
         distance = _distance;
     }

+    // javadoc: DMatch::DMatch(_queryIdx, _trainIdx, _imgIdx, _distance)
     public DMatch(int _queryIdx, int _trainIdx, int _imgIdx, float _distance) {
         queryIdx = _queryIdx;
         trainIdx = _trainIdx;
@@ -41,9 +45,6 @@ public DMatch(int _queryIdx, int _trainIdx, int _imgIdx, float _distance) {
         distance = _distance;
     }

-    /**
-     * Less is better.
-     */
     public boolean lessThan(DMatch it) {
         return distance < it.distance;
     }
diff --git a/imaging-utils/src/main/java/org/opencv/core/KeyPoint.java b/imaging-utils/src/main/java/org/opencv/core/KeyPoint.java
new file mode 100644
index 0000000..de5b215
--- /dev/null
+++ b/imaging-utils/src/main/java/org/opencv/core/KeyPoint.java
@@ -0,0 +1,83 @@
+package org.opencv.core;
+
+import org.opencv.core.Point;
+
+//javadoc: KeyPoint
+public class KeyPoint {
+
+    /**
+     * Coordinates of the keypoint.
+     */
+    public Point pt;
+    /**
+     * Diameter of the useful keypoint adjacent area.
+     */
+    public float size;
+    /**
+     * Computed orientation of the keypoint (-1 if not applicable).
+     */
+    public float angle;
+    /**
+     * The response, by which the strongest keypoints have been selected. Can
+     * be used for further sorting or subsampling.
+     */
+    public float response;
+    /**
+     * Octave (pyramid layer), from which the keypoint has been extracted.
+     */
+    public int octave;
+    /**
+     * Object ID, that can be used to cluster keypoints by an object they
+     * belong to.
+     */
+    public int class_id;
+
+    // javadoc:KeyPoint::KeyPoint(x,y,_size,_angle,_response,_octave,_class_id)
+    public KeyPoint(float x, float y, float _size, float _angle, float _response, int _octave, int _class_id)
+    {
+        pt = new Point(x, y);
+        size = _size;
+        angle = _angle;
+        response = _response;
+        octave = _octave;
+        class_id = _class_id;
+    }
+
+    // javadoc: KeyPoint::KeyPoint()
+    public KeyPoint()
+    {
+        this(0, 0, 0, -1, 0, 0, -1);
+    }
+
+    // javadoc: KeyPoint::KeyPoint(x, y, _size, _angle, _response, _octave)
+    public KeyPoint(float x, float y, float _size, float _angle, float _response, int _octave)
+    {
+        this(x, y, _size, _angle, _response, _octave, -1);
+    }
+
+    // javadoc: KeyPoint::KeyPoint(x, y, _size, _angle, _response)
+    public KeyPoint(float x, float y, float _size, float _angle, float _response)
+    {
+        this(x, y, _size, _angle, _response, 0, -1);
+    }
+
+    // javadoc: KeyPoint::KeyPoint(x, y, _size, _angle)
+    public KeyPoint(float x, float y, float _size, float _angle)
+    {
+        this(x, y, _size, _angle, 0, 0, -1);
+    }
+
+    // javadoc: KeyPoint::KeyPoint(x, y, _size)
+    public KeyPoint(float x, float y, float _size)
+    {
+        this(x, y, _size, -1, 0, 0, -1);
+    }
+
+    @Override
+    public String toString() {
+        return "KeyPoint [pt=" + pt + ", size=" + size + ", angle=" + angle
+                + ", response=" + response + ", octave=" + octave
+                + ", class_id=" + class_id + "]";
+    }
+
+}
diff --git a/imaging-utils/src/main/java/org/opencv/core/Mat.java b/imaging-utils/src/main/java/org/opencv/core/Mat.java
index d4aa1c6..6db2554 100644
--- a/imaging-utils/src/main/java/org/opencv/core/Mat.java
+++ b/imaging-utils/src/main/java/org/opencv/core/Mat.java
@@ -1,419 +1,7 @@
 package org.opencv.core;

 // C++: class Mat
-/**
- * OpenCV C++ n-dimensional dense array class
- *
- * class CV_EXPORTS Mat
- *
- * // C++ code:
- *
- * {
- * public:
- *     //... a lot of methods......
- *
- *     /*! includes several bit-fields:
- *         - the magic signature
- *         - continuity flag
- *         - depth
- *         - number of channels
- *     int flags;
- *     //! the array dimensionality, >= 2
- *     int dims;
- *     //! the number of rows and columns or (-1, -1) when the array has more
- *     //! than 2 dimensions
- *     int rows, cols;
- *     //! pointer to the data
- *     uchar* data;
- *     //! pointer to the reference counter;
- *     // when array points to user-allocated data, the pointer is NULL
- *     int* refcount;
- *     // other members...
- * };
- *
- * The class Mat represents an n-dimensional dense numerical
- * single-channel or multi-channel array. It can be used to store real or
- * complex-valued vectors and matrices, grayscale or color images, voxel
- * volumes, vector fields, point clouds, tensors, histograms (though, very
- * high-dimensional histograms may be better stored in a SparseMat).
- *
- * The data layout of the array M is defined by the array
- * M.step[], so that the address of element
- * (i_0,...,i_(M.dims-1)), where 0 <= i_k < M.size[k], is computed as:
- *
- * addr(M_(i_0,...,i_(M.dims-1))) = M.data + M.step[0]*i_0 + M.step[1]*i_1
- *                                  + ... + M.step[M.dims-1]*i_(M.dims-1)
- *
- * In case of a 2-dimensional array, the above formula is reduced to:
- *
- * addr(M_(i,j)) = M.data + M.step[0]*i + M.step[1]*j
- *
- * Note that M.step[i] >= M.step[i+1] (in fact, M.step[i] >=
- * M.step[i+1]*M.size[i+1]). This means that 2-dimensional matrices are
- * stored row-by-row, 3-dimensional matrices are stored plane-by-plane, and
- * so on. M.step[M.dims-1] is minimal and always equal to the element
- * size M.elemSize().
- *
- * So, the data layout in Mat is fully compatible with
- * CvMat, IplImage, and CvMatND types
- * from OpenCV 1.x. It is also compatible with the majority of dense array
- * types from the standard toolkits and SDKs, such as Numpy (ndarray), Win32
- * (independent device bitmaps), and others, that is, with any array that uses
- * *steps* (or *strides*) to compute the position of a pixel. Due to this
- * compatibility, it is possible to make a Mat header for
- * user-allocated data and process it in-place using OpenCV functions.
object. The most
- * popular options are listed below:
create(nrows, ncols, type)
method or the similar
- * Mat(nrows, ncols, type[, fillValue])
constructor. A new array of
- * the specified size and type is allocated. type
has the same
- * meaning as in the cvCreateMat
method.
- * For example, CV_8UC1
means a 8-bit single-channel array,
- * CV_32FC2
means a 2-channel (complex) floating-point array, and
- * so on.
// C++ code:
- * - *// make a 7x7 complex matrix filled with 1+3j.
- * - *Mat M(7,7,CV_32FC2,Scalar(1,3));
- * - *// and now turn M to a 100x60 15-channel 8-bit matrix.
- * - *// The old content will be deallocated
- * - *M.create(100,60,CV_8UC(15));
- * - * - * - *As noted in the introduction to this chapter, create()
allocates
- * only a new array when the shape or type of the current array are different
- * from the specified ones.
// C++ code:
- * - *// create a 100x100x100 8-bit array
- * - *int sz[] = {100, 100, 100};
- * - *Mat bigCube(3, sz, CV_8U, Scalar.all(0));
- * - * - * - *It passes the number of dimensions =1 to the Mat
constructor but
- * the created array will be 2-dimensional with the number of columns set to 1.
- * So, Mat.dims
is always >= 2 (can also be 0 when the array is
- * empty).
Mat.clone()
- * method can be used to get a full (deep) copy of the array when you need it.
- * // C++ code:
- * - *// add the 5-th row, multiplied by 3 to the 3rd row
- * - *M.row(3) = M.row(3) + M.row(5)*3;
- * - *// now copy the 7-th column to the 1-st column
- * - *// M.col(1) = M.col(7); // this will not work
- * - *Mat M1 = M.col(1);
- * - *M.col(7).copyTo(M1);
- * - *// create a new 320x240 image
- * - *Mat img(Size(320,240),CV_8UC3);
- * - *// select a ROI
- * - *Mat roi(img, Rect(10,10,100,100));
- * - *// fill the ROI with (0,255,0) (which is green in RGB space);
- * - *// the original 320x240 image will be modified
- * - *roi = Scalar(0,255,0);
- * - * - * - *Due to the additional datastart
and dataend
- * members, it is possible to compute a relative sub-array position in the main
- * *container* array using locateROI()
:
// C++ code:
- * - *Mat A = Mat.eye(10, 10, CV_32S);
- * - *// extracts A columns, 1 (inclusive) to 3 (exclusive).
- * - *Mat B = A(Range.all(), Range(1, 3));
- * - *// extracts B rows, 5 (inclusive) to 9 (exclusive).
- * - *// that is, C ~ A(Range(5, 9), Range(1, 3))
- * - *Mat C = B(Range(5, 9), Range.all());
- * - *Size size; Point ofs;
- * - *C.locateROI(size, ofs);
- * - *// size will be (width=10,height=10) and the ofs will be (x=1, y=5)
- * - * - * - *As in case of whole matrices, if you need a deep copy, use the
- * clone()
method of the extracted sub-matrices.
gstreamer
, and so
- * on). For example:
- * // C++ code:
- * - *void process_video_frame(const unsigned char* pixels,
- * - *int width, int height, int step)
- * - * - *Mat img(height, width, CV_8UC3, pixels, step);
- * - *GaussianBlur(img, img, Size(7,7), 1.5, 1.5);
- * - * - * - *// C++ code:
- * - *double m[3][3] = {{a, b, c}, {d, e, f}, {g, h, i}};
- * - *Mat M = Mat(3, 3, CV_64F, m).inv();
- * - * - * - *Partial yet very common cases of this *user-allocated data* case are
- * conversions from CvMat
and IplImage
to
- * Mat
. For this purpose, there are special constructors taking
- * pointers to CvMat
or IplImage
and the optional flag
- * indicating whether to copy the data or not.
Backward conversion from Mat
to CvMat
or
- * IplImage
is provided via cast operators Mat.operator
- * CvMat() const
and Mat.operator IplImage()
. The operators
- * do NOT copy the data.
// C++ code:
- * - *IplImage* img = cvLoadImage("greatwave.jpg", 1);
- * - *Mat mtx(img); // convert IplImage* -> Mat
- * - *CvMat oldmat = mtx; // convert Mat -> CvMat
- * - *CV_Assert(oldmat.cols == img->width && oldmat.rows == img->height &&
- * - *oldmat.data.ptr == (uchar*)img->imageData && oldmat.step == img->widthStep);
- * - * - *zeros(), ones(),
- * eye()
, for example:
- * // C++ code:
- * - *// create a double-precision identity martix and add it to M.
- * - *M += Mat.eye(M.rows, M.cols, CV_64F);
- * - * - *// C++ code:
- * - *// create a 3x3 double-precision identity matrix
- * - *Mat M = (Mat_
With this approach, you first call a constructor of the "Mat_" class with the
- * proper parameters, and then you just put <<
operator followed by
- * comma-separated values that can be constants, variables, expressions, and so
- * on. Also, note the extra parentheses required to avoid compilation errors.
Once the array is created, it is automatically managed via a
- * reference-counting mechanism. If the array header is built on top of
- * user-allocated data, you should handle the data by yourself.
- * The array data is deallocated when no one points to it. If you want to
- * release the data pointed by a array header before the array destructor is
- * called, use Mat.release()
.
The next important thing to learn about the array class is element access.
- * This manual already described how to compute an address of each array
- * element. Normally, you are not required to use the formula directly in the
- * code. If you know the array element type (which can be retrieved using the
- * method Mat.type()
), you can access the elementM_(ij)
- * of a 2-dimensional array as:
// C++ code:
- * - *M.at
assuming that M is a double-precision floating-point array. There are several
- * variants of the method at
for a different number of dimensions.
- *
If you need to process a whole row of a 2D array, the most efficient way is
- * to get the pointer to the row first, and then just use the plain C operator
- * []
:
// C++ code:
- * - *// compute sum of positive matrix elements
- * - *// (assuming that M isa double-precision matrix)
- * - *double sum=0;
- * - *for(int i = 0; i < M.rows; i++)
- * - * - *const double* Mi = M.ptr
for(int j = 0; j < M.cols; j++)
- * - *sum += std.max(Mi[j], 0.);
- * - * - *Some operations, like the one above, do not actually depend on the array - * shape. They just process elements of an array one by one (or elements from - * multiple arrays that have the same coordinates, for example, array addition). - * Such operations are called *element-wise*. It makes sense to check whether - * all the input/output arrays are continuous, namely, have no gaps at the end - * of each row. If yes, process them as a long single row:
- * - *// compute the sum of positive matrix elements, optimized variant
- * - *double sum=0;
- * - *int cols = M.cols, rows = M.rows;
- * - *if(M.isContinuous())
- * - * - *cols *= rows;
- * - *rows = 1;
- * - * - *for(int i = 0; i < rows; i++)
- * - * - *const double* Mi = M.ptr
for(int j = 0; j < cols; j++)
- * - *sum += std.max(Mi[j], 0.);
- * - * - *In case of the continuous matrix, the outer loop body is executed just once. - * So, the overhead is smaller, which is especially noticeable in case of small - * matrices. - *
- * - *Finally, there are STL-style iterators that are smart enough to skip gaps
- * between successive rows:
// C++ code:
- * - *// compute sum of positive matrix elements, iterator-based variant
- * - *double sum=0;
- * - *MatConstIterator_
for(; it != it_end; ++it)
- * - *sum += std.max(*it, 0.);
- * - *The matrix iterators are random-access iterators, so they can be passed to
- * any STL algorithm, including std.sort()
.
- *
Note:
- *Various Mat constructors
- * - *These are various constructors that form a matrix. As noted in the - * "AutomaticAllocation", often the default constructor is enough, and the - * proper matrix will be allocated by an OpenCV function. The constructed matrix - * can further be assigned to another matrix or matrix expression or can be - * allocated with "Mat.create". In the former case, the old content is - * de-referenced.
- * - * @see org.opencv.core.Mat.Mat - */ + // javadoc: Mat::Mat() public Mat() { @@ -453,24 +30,7 @@ public Mat() // C++: Mat::Mat(int rows, int cols, int type) // -/** - *Various Mat constructors
- *
- * These are various constructors that form a matrix. As noted in the
- * "AutomaticAllocation", often the default constructor is enough, and the
- * proper matrix will be allocated by an OpenCV function. The constructed matrix
- * can further be assigned to another matrix or matrix expression or can be
- * allocated with "Mat.create". In the former case, the old content is
- * de-referenced.
- *
- * @param rows Number of rows in a 2D array.
- * @param cols Number of columns in a 2D array.
- * @param type Array type. Use CV_8UC1,..., CV_64FC4 to create 1-4
- * channel matrices, or CV_8UC(n),..., CV_64FC(n) to create
- * multi-channel (up to CV_CN_MAX channels) matrices.
- *
- * @see org.opencv.core.Mat.Mat
- */
+ // javadoc: Mat::Mat(rows, cols, type)
public Mat(int rows, int cols, int type)
{
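CV_8UC1 through CV_64FC4 cover 1-4 channels; for more channels the CvType.CV_8UC(n)-style factory methods can be used, for example:

    Mat rgb  = new Mat(480, 640, CvType.CV_8UC3);    // 3-channel 8-bit
    Mat wide = new Mat(480, 640, CvType.CV_8UC(7));  // 7 channels, up to CV_CN_MAX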
@@ -483,25 +43,7 @@ public Mat(int rows, int cols, int type)
// C++: Mat::Mat(Size size, int type)
//
-/**
- * Various Mat constructors
- *
- * These are various constructors that form a matrix. As noted in the
- * "AutomaticAllocation", often the default constructor is enough, and the
- * proper matrix will be allocated by an OpenCV function. The constructed matrix
- * can further be assigned to another matrix or matrix expression or can be
- * allocated with "Mat.create". In the former case, the old content is
- * de-referenced.
- *
- * @param size 2D array size: Size(cols, rows). In the
- * Size() constructor, the number of rows and the number of columns
- * go in the reverse order.
- * @param type Array type. Use CV_8UC1,..., CV_64FC4 to create 1-4
- * channel matrices, or CV_8UC(n),..., CV_64FC(n) to create
- * multi-channel (up to CV_CN_MAX channels) matrices.
- *
- * @see org.opencv.core.Mat.Mat
- */
+ // javadoc: Mat::Mat(size, type)
public Mat(Size size, int type)
{
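This constructor and the previous one build the same matrix; only the size specification differs, and Size takes the dimensions in reverse order:

    Mat a = new Mat(3, 4, CvType.CV_8UC1);            // 3 rows, 4 cols
    Mat b = new Mat(new Size(4, 3), CvType.CV_8UC1);  // Size(cols, rows): also 3x4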
@@ -514,27 +56,7 @@ public Mat(Size size, int type)
// C++: Mat::Mat(int rows, int cols, int type, Scalar s)
//
-/**
- * Various Mat constructors
- *
- * These are various constructors that form a matrix. As noted in the
- * "AutomaticAllocation", often the default constructor is enough, and the
- * proper matrix will be allocated by an OpenCV function. The constructed matrix
- * can further be assigned to another matrix or matrix expression or can be
- * allocated with "Mat.create". In the former case, the old content is
- * de-referenced.
- *
- * @param rows Number of rows in a 2D array.
- * @param cols Number of columns in a 2D array.
- * @param type Array type. Use CV_8UC1,..., CV_64FC4 to create 1-4
- * channel matrices, or CV_8UC(n),..., CV_64FC(n) to create
- * multi-channel (up to CV_CN_MAX channels) matrices.
- * @param s An optional value to initialize each matrix element with. To set all
- * the matrix elements to the particular value after the construction, use the
- * assignment operator Mat.operator=(const Scalar& value).
- *
- * @see org.opencv.core.Mat.Mat
- */
+ // javadoc: Mat::Mat(rows, cols, type, s)
public Mat(int rows, int cols, int type, Scalar s)
{
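A short illustration of the Scalar-initialized constructor; setTo(...) (defined later in this class) re-fills an existing matrix the same way:

    Mat red = new Mat(2, 2, CvType.CV_8UC3, new Scalar(0, 0, 255)); // all pixels red (BGR)
    red.setTo(new Scalar(255, 0, 0));                               // now all pixels blue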
@@ -547,28 +69,7 @@ public Mat(int rows, int cols, int type, Scalar s)
// C++: Mat::Mat(Size size, int type, Scalar s)
//
-/**
- * Various Mat constructors
- *
- * These are various constructors that form a matrix. As noted in the
- * "AutomaticAllocation", often the default constructor is enough, and the
- * proper matrix will be allocated by an OpenCV function. The constructed matrix
- * can further be assigned to another matrix or matrix expression or can be
- * allocated with "Mat.create". In the former case, the old content is
- * de-referenced.
- *
- * @param size 2D array size: Size(cols, rows). In the
- * Size() constructor, the number of rows and the number of columns
- * go in the reverse order.
- * @param type Array type. Use CV_8UC1,..., CV_64FC4 to create 1-4
- * channel matrices, or CV_8UC(n),..., CV_64FC(n) to create
- * multi-channel (up to CV_CN_MAX channels) matrices.
- * @param s An optional value to initialize each matrix element with. To set all
- * the matrix elements to the particular value after the construction, use the
- * assignment operator Mat.operator=(const Scalar& value).
- *
- * @see org.opencv.core.Mat.Mat
- */
+ // javadoc: Mat::Mat(size, type, s)
public Mat(Size size, int type, Scalar s)
{
@@ -581,31 +82,7 @@ public Mat(Size size, int type, Scalar s)
// C++: Mat::Mat(Mat m, Range rowRange, Range colRange = Range::all())
//
-/**
- * Various Mat constructors
- *
- * These are various constructors that form a matrix. As noted in the
- * "AutomaticAllocation", often the default constructor is enough, and the
- * proper matrix will be allocated by an OpenCV function. The constructed matrix
- * can further be assigned to another matrix or matrix expression or can be
- * allocated with "Mat.create". In the former case, the old content is
- * de-referenced.
- *
- * @param m Array that (as a whole or partly) is assigned to the constructed
- * matrix. No data is copied by these constructors. Instead, the header pointing
- * to m data or its sub-array is constructed and associated with
- * it. The reference counter, if any, is incremented. So, when you modify the
- * matrix formed using such a constructor, you also modify the corresponding
- * elements of m. If you want to have an independent copy of the
- * sub-array, use Mat.clone().
- * @param rowRange Range of the m rows to take. As usual, the range
- * start is inclusive and the range end is exclusive. Use Range.all()
- * to take all the rows.
- * @param colRange Range of the m columns to take. Use
- * Range.all() to take all the columns.
- *
- * @see org.opencv.core.Mat.Mat
- */
+ // javadoc: Mat::Mat(m, rowRange, colRange)
public Mat(Mat m, Range rowRange, Range colRange)
{
@@ -614,29 +91,7 @@ public Mat(Mat m, Range rowRange, Range colRange)
return;
}
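Because these constructors only create a new header, writes through the sub-matrix are visible in the parent; clone() is the way to break the sharing:

    Mat m = Mat.zeros(4, 4, CvType.CV_32F);
    Mat top = new Mat(m, new Range(0, 2));  // rows 0..1, data shared with m
    top.setTo(new Scalar(1));               // modifies m as well
    Mat copy = top.clone();                 // independent deep copy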
-/**
- * Various Mat constructors
- *
- * These are various constructors that form a matrix. As noted in the
- * "AutomaticAllocation", often the default constructor is enough, and the
- * proper matrix will be allocated by an OpenCV function. The constructed matrix
- * can further be assigned to another matrix or matrix expression or can be
- * allocated with "Mat.create". In the former case, the old content is
- * de-referenced.
- *
- * @param m Array that (as a whole or partly) is assigned to the constructed
- * matrix. No data is copied by these constructors. Instead, the header pointing
- * to m data or its sub-array is constructed and associated with
- * it. The reference counter, if any, is incremented. So, when you modify the
- * matrix formed using such a constructor, you also modify the corresponding
- * elements of m. If you want to have an independent copy of the
- * sub-array, use Mat.clone().
- * @param rowRange Range of the m rows to take. As usual, the range
- * start is inclusive and the range end is exclusive. Use Range.all()
- * to take all the rows.
- *
- * @see org.opencv.core.Mat.Mat
- */
+ // javadoc: Mat::Mat(m, rowRange)
public Mat(Mat m, Range rowRange)
{
@@ -649,27 +104,7 @@ public Mat(Mat m, Range rowRange)
// C++: Mat::Mat(Mat m, Rect roi)
//
-/**
- * Various Mat constructors
- *
- * These are various constructors that form a matrix. As noted in the
- * "AutomaticAllocation", often the default constructor is enough, and the
- * proper matrix will be allocated by an OpenCV function. The constructed matrix
- * can further be assigned to another matrix or matrix expression or can be
- * allocated with "Mat.create". In the former case, the old content is
- * de-referenced.
- *
- * @param m Array that (as a whole or partly) is assigned to the constructed
- * matrix. No data is copied by these constructors. Instead, the header pointing
- * to m data or its sub-array is constructed and associated with
- * it. The reference counter, if any, is incremented. So, when you modify the
- * matrix formed using such a constructor, you also modify the corresponding
- * elements of m. If you want to have an independent copy of the
- * sub-array, use Mat.clone().
- * @param roi Region of interest.
- *
- * @see org.opencv.core.Mat.Mat
- */
+ // javadoc: Mat::Mat(m, roi)
public Mat(Mat m, Rect roi)
{
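The Rect overload selects the same kind of shared-data view, by rectangle instead of row/column ranges (img here is an assumed, pre-existing matrix):

    Mat roi = new Mat(img, new Rect(10, 10, 100, 50)); // x, y, width, height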
@@ -682,43 +117,7 @@ public Mat(Mat m, Rect roi)
// C++: Mat Mat::adjustROI(int dtop, int dbottom, int dleft, int dright)
//
-/**
- * Adjusts a submatrix size and position within the parent matrix.
- *
- * The method is complementary to "Mat.locateROI". The typical use of these
- * functions is to determine the submatrix position within the parent matrix and
- * then shift the position somehow. Typically, it can be required for filtering
- * operations when pixels outside of the ROI should be taken into account. When
- * all the method parameters are positive, the ROI needs to grow in all
- * directions by the specified amount, for example:
- *
- * // C++ code:
- *
- * A.adjustROI(2, 2, 2, 2);
- *
- * In this example, the matrix size is increased by 4 elements in each
- * direction. The matrix is shifted by 2 elements to the left and 2 elements up,
- * which brings in all the necessary pixels for the filtering with the 5x5
- * kernel.
- *
- * adjustROI forces the adjusted ROI to be inside of the parent
- * matrix; that is, the boundaries of the adjusted ROI are constrained by the
- * boundaries of the parent matrix. For example, if the submatrix A is located
- * in the first row of a parent matrix and you called A.adjustROI(2, 2, 2,
- * 2), then A will not be increased in the upward direction.
- *
- * The function is used internally by the OpenCV filtering functions, like
- * "filter2D", morphological operations, and so on.
- *
- * @param dtop Shift of the top submatrix boundary upwards.
- * @param dbottom Shift of the bottom submatrix boundary downwards.
- * @param dleft Shift of the left submatrix boundary to the left.
- * @param dright Shift of the right submatrix boundary to the right.
- *
- * @see org.opencv.core.Mat.adjustROI
- * @see org.opencv.imgproc.Imgproc#copyMakeBorder
- */
+ // javadoc: Mat::adjustROI(dtop, dbottom, dleft, dright)
public Mat adjustROI(int dtop, int dbottom, int dleft, int dright)
{
@@ -731,17 +130,7 @@ public Mat adjustROI(int dtop, int dbottom, int dleft, int dright)
// C++: void Mat::assignTo(Mat m, int type = -1)
//
-/**
- * Provides a functional form of convertTo.
- *
- * This is an internally used method called by the "MatrixExpressions" engine.
- *
- * @param m Destination array.
- * @param type Desired destination array depth (or -1 if it should be the same
- * as the source type).
- *
- * @see org.opencv.core.Mat.assignTo
- */
+ // javadoc: Mat::assignTo(m, type)
public void assignTo(Mat m, int type)
{
@@ -750,15 +139,7 @@ public void assignTo(Mat m, int type)
return;
}
-/**
- * Provides a functional form of convertTo.
- *
- * This is an internally used method called by the "MatrixExpressions" engine.
- *
- * @param m Destination array.
- *
- * @see org.opencv.core.Mat.assignTo
- */
+ // javadoc: Mat::assignTo(m)
public void assignTo(Mat m)
{
@@ -771,13 +152,7 @@ public void assignTo(Mat m)
return;
}
// C++: int Mat::channels()
//
-/**
- * Returns the number of matrix channels.
- *
- * The method returns the number of matrix channels.
- *
- * @see org.opencv.core.Mat.channels
- */
+ // javadoc: Mat::channels()
public int channels()
{
@@ -791,6 +166,7 @@ public int channels()
// requireContinuous = true)
//
+ // javadoc: Mat::checkVector(elemChannels, depth, requireContinuous)
public int checkVector(int elemChannels, int depth, boolean requireContinuous)
{
@@ -799,6 +175,7 @@ public int checkVector(int elemChannels, int depth, boolean requireContinuous)
return retVal;
}
+ // javadoc: Mat::checkVector(elemChannels, depth)
public int checkVector(int elemChannels, int depth)
{
@@ -807,6 +184,7 @@ public int checkVector(int elemChannels, int depth)
return retVal;
}
+ // javadoc: Mat::checkVector(elemChannels)
public int checkVector(int elemChannels)
{
@@ -819,15 +197,7 @@ public int checkVector(int elemChannels)
// C++: Mat Mat::clone()
//
-/**
- * Creates a full copy of the array and the underlying data.
- *
- * The method creates a full copy of the array. The original step[]
- * is not taken into account. So, the array copy is a continuous array occupying
- * total()*elemSize()
bytes.
Creates a matrix header for the specified matrix column.
- * - *The method makes a new header for the specified matrix column and returns it. - * This is an O(1) operation, regardless of the matrix size. The underlying data - * of the new matrix is shared with the original matrix. See also the "Mat.row" - * description.
- * - * @param x A 0-based column index. - * - * @see org.opencv.core.Mat.col - */ + // javadoc: Mat::col(x) public Mat col(int x) { @@ -864,17 +223,7 @@ public Mat col(int x) // C++: Mat Mat::colRange(int startcol, int endcol) // -/** - *Creates a matrix header for the specified column span.
- * - *The method makes a new header for the specified column span of the matrix. - * Similarly to "Mat.row" and "Mat.col", this is an O(1) operation.
- * - * @param startcol An inclusive 0-based start index of the column span. - * @param endcol An exclusive 0-based ending index of the column span. - * - * @see org.opencv.core.Mat.colRange - */ + // javadoc: Mat::colRange(startcol, endcol) public Mat colRange(int startcol, int endcol) { @@ -887,16 +236,7 @@ public Mat colRange(int startcol, int endcol) // C++: Mat Mat::colRange(Range r) // -/** - *Creates a matrix header for the specified column span.
- * - *The method makes a new header for the specified column span of the matrix. - * Similarly to "Mat.row" and "Mat.col", this is an O(1) operation.
- * - * @param r "Range" structure containing both the start and the end indices. - * - * @see org.opencv.core.Mat.colRange - */ + // javadoc: Mat::colRange(r) public Mat colRange(Range r) { @@ -909,6 +249,7 @@ public Mat colRange(Range r) // C++: int Mat::dims() // + // javadoc: Mat::dims() public int dims() { @@ -921,6 +262,7 @@ public int dims() // C++: int Mat::cols() // + // javadoc: Mat::cols() public int cols() { @@ -934,25 +276,7 @@ public int cols() // = 0) // -/** - *Converts an array to another data type with optional scaling.
- * - *The method converts source pixel values to the target data type.
- * saturate_cast<>
is applied at the end to avoid possible
- * overflows:
m(x,y) = saturate _ cast<rType>(alpha(*this)(x,y) + beta)
- * - * @param m output matrix; if it does not have a proper size or type before the - * operation, it is reallocated. - * @param rtype desired output matrix type or, rather, the depth since the - * number of channels are the same as the input has; ifrtype
is
- * negative, the output matrix will have the same type as the input.
- * @param alpha optional scale factor.
- * @param beta optional delta added to the scaled values.
- *
- * @see org.opencv.core.Mat.convertTo
- */
+ // javadoc: Mat::convertTo(m, rtype, alpha, beta)
public void convertTo(Mat m, int rtype, double alpha, double beta)
{
@@ -961,24 +285,7 @@ public void convertTo(Mat m, int rtype, double alpha, double beta)
return;
}
-/**
- * Converts an array to another data type with optional scaling.
- * - *The method converts source pixel values to the target data type.
- * saturate_cast<>
is applied at the end to avoid possible
- * overflows:
m(x,y) = saturate _ cast<rType>(alpha(*this)(x,y) + beta)
- * - * @param m output matrix; if it does not have a proper size or type before the - * operation, it is reallocated. - * @param rtype desired output matrix type or, rather, the depth since the - * number of channels are the same as the input has; ifrtype
is
- * negative, the output matrix will have the same type as the input.
- * @param alpha optional scale factor.
- *
- * @see org.opencv.core.Mat.convertTo
- */
+ // javadoc: Mat::convertTo(m, rtype, alpha)
public void convertTo(Mat m, int rtype, double alpha)
{
@@ -987,23 +294,7 @@ public void convertTo(Mat m, int rtype, double alpha)
return;
}
-/**
- * Converts an array to another data type with optional scaling.
- * - *The method converts source pixel values to the target data type.
- * saturate_cast<>
is applied at the end to avoid possible
- * overflows:
m(x,y) = saturate _ cast<rType>(alpha(*this)(x,y) + beta)
- * - * @param m output matrix; if it does not have a proper size or type before the - * operation, it is reallocated. - * @param rtype desired output matrix type or, rather, the depth since the - * number of channels are the same as the input has; ifrtype
is
- * negative, the output matrix will have the same type as the input.
- *
- * @see org.opencv.core.Mat.convertTo
- */
+ // javadoc: Mat::convertTo(m, rtype)
public void convertTo(Mat m, int rtype)
{
@@ -1016,30 +307,7 @@ public void convertTo(Mat m, int rtype)
// C++: void Mat::copyTo(Mat& m)
//
-/**
- * Copies the matrix to another one.
- * - *The method copies the matrix data to another matrix. Before copying the data,
- * the method invokes
// C++ code:
- * - *m.create(this->size(), this->type());
- * - *so that the destination matrix is reallocated if needed. While
- * m.copyTo(m);
works flawlessly, the function does not handle the
- * case of a partial overlap between the source and the destination matrices.
- *
When the operation mask is specified, if the Mat.create
call
- * shown above reallocates the matrix, the newly allocated matrix is initialized
- * with all zeros before copying the data.
Copies the matrix to another one.
- * - *The method copies the matrix data to another matrix. Before copying the data,
- * the method invokes
// C++ code:
- * - *m.create(this->size(), this->type());
- * - *so that the destination matrix is reallocated if needed. While
- * m.copyTo(m);
works flawlessly, the function does not handle the
- * case of a partial overlap between the source and the destination matrices.
- *
When the operation mask is specified, if the Mat.create
call
- * shown above reallocates the matrix, the newly allocated matrix is initialized
- * with all zeros before copying the data.
Allocates new array data if needed.
- * - *This is one of the key Mat
methods. Most new-style OpenCV
- * functions and methods that produce arrays call this method for each output
- * array. The method uses the following algorithm:
total()*elemSize()
bytes.
- * Such a scheme makes the memory management robust and efficient at the same
- * time and helps avoid extra typing for you. This means that usually there is
- * no need to explicitly allocate output arrays. That is, instead of writing:
- *
// C++ code:
- * - *Mat color;...
- * - *Mat gray(color.rows, color.cols, color.depth());
- * - *cvtColor(color, gray, CV_BGR2GRAY);
- * - *you can simply write:
- * - *Mat color;...
- * - *Mat gray;
- * - *cvtColor(color, gray, CV_BGR2GRAY);
- * - *because cvtColor
, as well as the most of OpenCV functions, calls
- * Mat.create()
for the output array internally.
- *
Allocates new array data if needed.
- * - *This is one of the key Mat
methods. Most new-style OpenCV
- * functions and methods that produce arrays call this method for each output
- * array. The method uses the following algorithm:
total()*elemSize()
bytes.
- * Such a scheme makes the memory management robust and efficient at the same
- * time and helps avoid extra typing for you. This means that usually there is
- * no need to explicitly allocate output arrays. That is, instead of writing:
- *
// C++ code:
- * - *Mat color;...
- * - *Mat gray(color.rows, color.cols, color.depth());
- * - *cvtColor(color, gray, CV_BGR2GRAY);
- * - *you can simply write:
- * - *Mat color;...
- * - *Mat gray;
- * - *cvtColor(color, gray, CV_BGR2GRAY);
- * - *because cvtColor
, as well as the most of OpenCV functions, calls
- * Mat.create()
for the output array internally.
- *
Size(cols,
- * rows)
- * @param type New matrix type.
- *
- * @see org.opencv.core.Mat.create
- */
+ // javadoc: Mat::create(size, type)
public void create(Size size, int type)
{
@@ -1207,17 +359,7 @@ public void create(Size size, int type)
// C++: Mat Mat::cross(Mat m)
//
-/**
- * Computes a cross-product of two 3-element vectors.
- * - *The method computes a cross-product of two 3-element vectors. The vectors - * must be 3-element floating-point vectors of the same shape and size. The - * result is another 3-element vector of the same shape and type as operands.
- * - * @param m Another cross-product operand. - * - * @see org.opencv.core.Mat.cross - */ + // javadoc: Mat::cross(m) public Mat cross(Mat m) { @@ -1230,6 +372,7 @@ public Mat cross(Mat m) // C++: long Mat::dataAddr() // + // javadoc: Mat::dataAddr() public long dataAddr() { @@ -1242,27 +385,7 @@ public long dataAddr() // C++: int Mat::depth() // -/** - *Returns the depth of a matrix element.
- * - *The method returns the identifier of the matrix element depth (the type of
- * each individual channel). For example, for a 16-bit signed element array, the
- * method returns CV_16S
. A complete list of matrix types contains
- * the following values:
CV_8U
- 8-bit unsigned integers (0..255
)
- * CV_8S
- 8-bit signed integers (-128..127
)
- * CV_16U
- 16-bit unsigned integers (0..65535
)
- * CV_16S
- 16-bit signed integers (-32768..32767
)
- * CV_32S
- 32-bit signed integers (-2147483648..2147483647
)
- * CV_32F
- 32-bit floating-point numbers (-FLT_MAX..FLT_MAX,
- * INF, NAN
)
- * CV_64F
- 64-bit floating-point numbers (-DBL_MAX..DBL_MAX,
- * INF, NAN
)
- * Extracts a diagonal from a matrix, or creates a diagonal matrix.
- * - *The method makes a new header for the specified matrix diagonal. The new - * matrix is represented as a single-column matrix. Similarly to "Mat.row" and - * "Mat.col", this is an O(1) operation.
- * - * @param d Single-column matrix that forms a diagonal matrix or index of the - * diagonal, with the following values: - *d=1
- * means the diagonal is set immediately below the main one.
- * d=1
- * means the diagonal is set immediately above the main one.
- * Extracts a diagonal from a matrix, or creates a diagonal matrix.
- * - *The method makes a new header for the specified matrix diagonal. The new - * matrix is represented as a single-column matrix. Similarly to "Mat.row" and - * "Mat.col", this is an O(1) operation.
- * - * @see org.opencv.core.Mat.diag - */ + // javadoc: Mat::diag() public Mat diag() { @@ -1323,25 +420,7 @@ public Mat diag() // C++: static Mat Mat::diag(Mat d) // -/** - *Extracts a diagonal from a matrix, or creates a diagonal matrix.
- * - *The method makes a new header for the specified matrix diagonal. The new - * matrix is represented as a single-column matrix. Similarly to "Mat.row" and - * "Mat.col", this is an O(1) operation.
- * - * @param d Single-column matrix that forms a diagonal matrix or index of the - * diagonal, with the following values: - *d=1
- * means the diagonal is set immediately below the main one.
- * d=1
- * means the diagonal is set immediately above the main one.
- * Computes a dot-product of two vectors.
- * - *The method computes a dot-product of two matrices. If the matrices are not - * single-column or single-row vectors, the top-to-bottom left-to-right scan - * ordering is used to treat them as 1D vectors. The vectors must have the same - * size and type. If the matrices have more than one channel, the dot products - * from all the channels are summed together.
- * - * @param m another dot-product operand. - * - * @see org.opencv.core.Mat.dot - */ + // javadoc: Mat::dot(m) public double dot(Mat m) { @@ -1379,15 +446,7 @@ public double dot(Mat m) // C++: size_t Mat::elemSize() // -/** - *Returns the matrix element size in bytes.
- * - *The method returns the matrix element size in bytes. For example, if the
- * matrix type is CV_16SC3
, the method returns 3*sizeof(short)
- * or 6.
Returns the size of each matrix element channel in bytes.
- * - *The method returns the matrix element channel size in bytes, that is, it
- * ignores the number of channels. For example, if the matrix type is
- * CV_16SC3
, the method returns sizeof(short)
or 2.
Returns true
if the array has no elements.
The method returns true
if Mat.total()
is 0 or if
- * Mat.data
is NULL. Because of pop_back()
and
- * resize()
methods M.total() == 0
does not imply that
- * M.data == NULL
.
Returns an identity matrix of the specified size and type.
- * - *The method returns a Matlab-style identity matrix initializer, similarly to
- * "Mat.zeros". Similarly to"Mat.ones", you can use a scale operation to
- * create a scaled identity matrix efficiently:
// C++ code:
- * - *// make a 4x4 diagonal matrix with 0.1's on the diagonal.
- * - *Mat A = Mat.eye(4, 4, CV_32F)*0.1;
- * - * @param rows Number of rows. - * @param cols Number of columns. - * @param type Created matrix type. - * - * @see org.opencv.core.Mat.eye - */ + // javadoc: Mat::eye(rows, cols, type) public static Mat eye(int rows, int cols, int type) { @@ -1474,25 +498,7 @@ public static Mat eye(int rows, int cols, int type) // C++: static Mat Mat::eye(Size size, int type) // -/** - *Returns an identity matrix of the specified size and type.
- * - *The method returns a Matlab-style identity matrix initializer, similarly to
- * "Mat.zeros". Similarly to"Mat.ones", you can use a scale operation to
- * create a scaled identity matrix efficiently:
// C++ code:
- * - *// make a 4x4 diagonal matrix with 0.1's on the diagonal.
- * - *Mat A = Mat.eye(4, 4, CV_32F)*0.1;
- * - * @param size Alternative matrix size specification asSize(cols,
- * rows)
.
- * @param type Created matrix type.
- *
- * @see org.opencv.core.Mat.eye
- */
+ // javadoc: Mat::eye(size, type)
public static Mat eye(Size size, int type)
{
@@ -1505,26 +511,7 @@ public static Mat eye(Size size, int type)
// C++: Mat Mat::inv(int method = DECOMP_LU)
//
-/**
- * Inverses a matrix.
- * - *The method performs a matrix inversion by means of matrix expressions. This - * means that a temporary matrix inversion object is returned by the method and - * can be used further as a part of more complex matrix expressions or can be - * assigned to a matrix.
- * - * @param method Matrix inversion method. Possible values are the following: - *Inverses a matrix.
- * - *The method performs a matrix inversion by means of matrix expressions. This - * means that a temporary matrix inversion object is returned by the method and - * can be used further as a part of more complex matrix expressions or can be - * assigned to a matrix.
- * - * @see org.opencv.core.Mat.inv - */ + // javadoc: Mat::inv() public Mat inv() { @@ -1555,118 +533,7 @@ public Mat inv() // C++: bool Mat::isContinuous() // -/** - *Reports whether the matrix is continuous or not.
- * - *The method returns true
if the matrix elements are stored
- * continuously without gaps at the end of each row. Otherwise, it returns
- * false
. Obviously, 1x1
or 1xN
matrices
- * are always continuous. Matrices created with "Mat.create" are always
- * continuous. But if you extract a part of the matrix using "Mat.col",
- * "Mat.diag", and so on, or constructed a matrix header for externally
- * allocated data, such matrices may no longer have this property.
- * The continuity flag is stored as a bit in the Mat.flags
field
- * and is computed automatically when you construct a matrix header. Thus, the
- * continuity check is a very fast operation, though theoretically it could be
- * done as follows:
// C++ code:
- * - *// alternative implementation of Mat.isContinuous()
- * - *bool myCheckMatContinuity(const Mat& m)
- * - * - *//return (m.flags & Mat.CONTINUOUS_FLAG) != 0;
- * - *return m.rows == 1 || m.step == m.cols*m.elemSize();
- * - * - *The method is used in quite a few of OpenCV functions. The point is that - * element-wise operations (such as arithmetic and logical operations, math - * functions, alpha blending, color space transformations, and others) do not - * depend on the image geometry. Thus, if all the input and output arrays are - * continuous, the functions can process them as very long single-row vectors. - * The example below illustrates how an alpha-blending function can be - * implemented.
- * - *template
void alphaBlendRGBA(const Mat& src1, const Mat& src2, Mat& dst)
- * - * - *const float alpha_scale = (float)std.numeric_limits
inv_scale = 1.f/alpha_scale;
- * - *CV_Assert(src1.type() == src2.type() &&
- * - *src1.type() == CV_MAKETYPE(DataType
src1.size() == src2.size());
- * - *Size size = src1.size();
- * - *dst.create(size, src1.type());
- * - *// here is the idiom: check the arrays for continuity and,
- * - *// if this is the case,
- * - *// treat the arrays as 1D vectors
- * - *if(src1.isContinuous() && src2.isContinuous() && dst.isContinuous())
- * - * - *size.width *= size.height;
- * - *size.height = 1;
- * - * - *size.width *= 4;
- * - *for(int i = 0; i < size.height; i++)
- * - * - *// when the arrays are continuous,
- * - *// the outer loop is executed only once
- * - *const T* ptr1 = src1.ptr
const T* ptr2 = src2.ptr
T* dptr = dst.ptr
for(int j = 0; j < size.width; j += 4)
- * - * - *float alpha = ptr1[j+3]*inv_scale, beta = ptr2[j+3]*inv_scale;
- * - *dptr[j] = saturate_cast
dptr[j+1] = saturate_cast
dptr[j+2] = saturate_cast
dptr[j+3] = saturate_cast
This approach, while being very simple, can boost the performance of a simple - * element-operation by 10-20 percents, especially if the image is rather small - * and the operation is quite simple. - *
- * - *Another OpenCV idiom in this function, a call of "Mat.create" for the - * destination array, that allocates the destination array unless it already has - * the proper size and type. And while the newly allocated arrays are always - * continuous, you still need to check the destination array because - * "Mat.create" does not always allocate a new matrix.
- * - * @see org.opencv.core.Mat.isContinuous - */ + // javadoc: Mat::isContinuous() public boolean isContinuous() { @@ -1679,6 +546,7 @@ public boolean isContinuous() // C++: bool Mat::isSubmatrix() // + // javadoc: Mat::isSubmatrix() public boolean isSubmatrix() { @@ -1691,24 +559,7 @@ public boolean isSubmatrix() // C++: void Mat::locateROI(Size wholeSize, Point ofs) // -/** - *Locates the matrix header within a parent matrix.
- * - *After you extracted a submatrix from a matrix using "Mat.row", "Mat.col",
- * "Mat.rowRange", "Mat.colRange", and others, the resultant submatrix points
- * just to the part of the original big matrix. However, each submatrix contains
- * information (represented by datastart
and dataend
- * fields) that helps reconstruct the original matrix size and the position of
- * the extracted submatrix within the original matrix. The method
- * locateROI
does exactly that.
*this
as a part.
- * @param ofs Output parameter that contains an offset of *this
- * inside the whole matrix.
- *
- * @see org.opencv.core.Mat.locateROI
- */
+ // javadoc: Mat::locateROI(wholeSize, ofs)
public void locateROI(Size wholeSize, Point ofs)
{
double[] wholeSize_out = new double[2];
@@ -1723,24 +574,7 @@ public void locateROI(Size wholeSize, Point ofs)
// C++: Mat Mat::mul(Mat m, double scale = 1)
//
-/**
- * Performs an element-wise multiplication or division of the two matrices.
- * - *The method returns a temporary object encoding per-element array
- * multiplication, with optional scale. Note that this is not a matrix
- * multiplication that corresponds to a simpler "*" operator.
- * Example:
// C++ code:
- * - *Mat C = A.mul(5/B); // equivalent to divide(A, B, C, 5)
- * - * @param m Another array of the same type and the same size as - **this
, or a matrix expression.
- * @param scale Optional scale factor.
- *
- * @see org.opencv.core.Mat.mul
- */
+ // javadoc: Mat::mul(m, scale)
public Mat mul(Mat m, double scale)
{
@@ -1749,23 +583,7 @@ public Mat mul(Mat m, double scale)
return retVal;
}
-/**
- * Performs an element-wise multiplication or division of the two matrices.
- * - *The method returns a temporary object encoding per-element array
- * multiplication, with optional scale. Note that this is not a matrix
- * multiplication that corresponds to a simpler "*" operator.
- * Example:
// C++ code:
- * - *Mat C = A.mul(5/B); // equivalent to divide(A, B, C, 5)
- * - * @param m Another array of the same type and the same size as - **this
, or a matrix expression.
- *
- * @see org.opencv.core.Mat.mul
- */
+ // javadoc: Mat::mul(m)
public Mat mul(Mat m)
{
@@ -1778,28 +596,7 @@ public Mat mul(Mat m)
// C++: static Mat Mat::ones(int rows, int cols, int type)
//
-/**
- * Returns an array of all 1's of the specified size and type.
- * - *The method returns a Matlab-style 1's array initializer, similarly
- * to"Mat.zeros". Note that using this method you can initialize an array with
- * an arbitrary value, using the following Matlab idiom:
// C++ code:
- * - *Mat A = Mat.ones(100, 100, CV_8U)*3; // make 100x100 matrix filled with 3.
- * - *The above operation does not form a 100x100 matrix of 1's and then multiply - * it by 3. Instead, it just remembers the scale factor (3 in this case) and use - * it when actually invoking the matrix initializer. - *
- * - * @param rows Number of rows. - * @param cols Number of columns. - * @param type Created matrix type. - * - * @see org.opencv.core.Mat.ones - */ + // javadoc: Mat::ones(rows, cols, type) public static Mat ones(int rows, int cols, int type) { @@ -1812,28 +609,7 @@ public static Mat ones(int rows, int cols, int type) // C++: static Mat Mat::ones(Size size, int type) // -/** - *Returns an array of all 1's of the specified size and type.
- * - *The method returns a Matlab-style 1's array initializer, similarly
- * to"Mat.zeros". Note that using this method you can initialize an array with
- * an arbitrary value, using the following Matlab idiom:
// C++ code:
- * - *Mat A = Mat.ones(100, 100, CV_8U)*3; // make 100x100 matrix filled with 3.
- * - *The above operation does not form a 100x100 matrix of 1's and then multiply - * it by 3. Instead, it just remembers the scale factor (3 in this case) and use - * it when actually invoking the matrix initializer. - *
- * - * @param size Alternative to the matrix size specificationSize(cols,
- * rows)
.
- * @param type Created matrix type.
- *
- * @see org.opencv.core.Mat.ones
- */
+ // javadoc: Mat::ones(size, type)
public static Mat ones(Size size, int type)
{
@@ -1846,18 +622,7 @@ public static Mat ones(Size size, int type)
// C++: void Mat::push_back(Mat m)
//
-/**
- * Adds elements to the bottom of the matrix.
- * - *The methods add one or more elements to the bottom of the matrix. They
- * emulate the corresponding method of the STL vector class. When
- * elem
is Mat
, its type and the number of columns
- * must be the same as in the container matrix.
Decrements the reference counter and deallocates the matrix if needed.
- * - *The method decrements the reference counter associated with the matrix data. - * When the reference counter reaches 0, the matrix data is deallocated and the - * data and the reference counter pointers are set to NULL's. If the matrix - * header points to an external data set (see "Mat.Mat"), the reference counter - * is NULL, and the method has no effect in this case.
- * - *This method can be called manually to force the matrix data deallocation. But - * since this method is automatically called in the destructor, or by any other - * method that changes the data pointer, it is usually not needed. The reference - * counter decrement and check for 0 is an atomic operation on the platforms - * that support it. Thus, it is safe to operate on the same matrices - * asynchronously in different threads.
- * - * @see org.opencv.core.Mat.release - */ + // javadoc: Mat::release() public void release() { @@ -1900,47 +648,7 @@ public void release() // C++: Mat Mat::reshape(int cn, int rows = 0) // -/** - *Changes the shape and/or the number of channels of a 2D matrix without - * copying the data.
- * - *The method makes a new matrix header for *this
elements. The new
- * matrix may have a different size and/or different number of channels. Any
- * combination is possible if:
rows*cols*channels()
must
- * stay the same after the transformation.
- * For example, if there is a set of 3D points stored as an STL vector, and you
- * want to represent the points as a 3xN
matrix, do the following:
- *
// C++ code:
- * - *std.vector
Mat pointMat = Mat(vec). // convert vector to Mat, O(1) operation
- * - *reshape(1). // make Nx3 1-channel matrix out of Nx1 3-channel.
- * - *// Also, an O(1) operation
- * - *t(); // finally, transpose the Nx3 matrix.
- * - *// This involves copying all the elements
- * - * @param cn New number of channels. If the parameter is 0, the number of - * channels remains the same. - * @param rows New number of rows. If the parameter is 0, the number of rows - * remains the same. - * - * @see org.opencv.core.Mat.reshape - */ + // javadoc: Mat::reshape(cn, rows) public Mat reshape(int cn, int rows) { @@ -1949,45 +657,7 @@ public Mat reshape(int cn, int rows) return retVal; } -/** - *Changes the shape and/or the number of channels of a 2D matrix without - * copying the data.
- * - *The method makes a new matrix header for *this
elements. The new
- * matrix may have a different size and/or different number of channels. Any
- * combination is possible if:
rows*cols*channels()
must
- * stay the same after the transformation.
- * For example, if there is a set of 3D points stored as an STL vector, and you
- * want to represent the points as a 3xN
matrix, do the following:
- *
// C++ code:
- * - *std.vector
Mat pointMat = Mat(vec). // convert vector to Mat, O(1) operation
- * - *reshape(1). // make Nx3 1-channel matrix out of Nx1 3-channel.
- * - *// Also, an O(1) operation
- * - *t(); // finally, transpose the Nx3 matrix.
- * - *// This involves copying all the elements
- * - * @param cn New number of channels. If the parameter is 0, the number of - * channels remains the same. - * - * @see org.opencv.core.Mat.reshape - */ + // javadoc: Mat::reshape(cn) public Mat reshape(int cn) { @@ -2000,55 +670,7 @@ public Mat reshape(int cn) // C++: Mat Mat::row(int y) // -/** - *Creates a matrix header for the specified matrix row.
- * - *The method makes a new header for the specified matrix row and returns it.
- * This is an O(1) operation, regardless of the matrix size. The underlying data
- * of the new matrix is shared with the original matrix. Here is the example of
- * one of the classical basic matrix processing operations, axpy
,
- * used by LU and many other algorithms:
// C++ code:
- * - *inline void matrix_axpy(Mat& A, int i, int j, double alpha)
- * - * - *A.row(i) += A.row(j)*alpha;
- * - * - *Note:
- * - *In the current implementation, the following code does not work as expected:
- *
// C++ code:
- * - *Mat A;...
- * - *A.row(i) = A.row(j); // will not work
- * - *This happens because A.row(i)
forms a temporary header that is
- * further assigned to another header. Remember that each of these operations is
- * O(1), that is, no data is copied. Thus, the above assignment is not true if
- * you may have expected the j-th row to be copied to the i-th row. To achieve
- * that, you should either turn this simple assignment into an expression or use
- * the "Mat.copyTo" method:
Mat A;...
- * - *// works, but looks a bit obscure.
- * - *A.row(i) = A.row(j) + 0;
- * - *// this is a bit longer, but the recommended method.
- * - *A.row(j).copyTo(A.row(i));
- * - * @param y A 0-based row index. - * - * @see org.opencv.core.Mat.row - */ + // javadoc: Mat::row(y) public Mat row(int y) { @@ -2061,17 +683,7 @@ public Mat row(int y) // C++: Mat Mat::rowRange(int startrow, int endrow) // -/** - *Creates a matrix header for the specified row span.
- * - *The method makes a new header for the specified row span of the matrix. - * Similarly to "Mat.row" and "Mat.col", this is an O(1) operation.
- * - * @param startrow An inclusive 0-based start index of the row span. - * @param endrow An exclusive 0-based ending index of the row span. - * - * @see org.opencv.core.Mat.rowRange - */ + // javadoc: Mat::rowRange(startrow, endrow) public Mat rowRange(int startrow, int endrow) { @@ -2084,16 +696,7 @@ public Mat rowRange(int startrow, int endrow) // C++: Mat Mat::rowRange(Range r) // -/** - *Creates a matrix header for the specified row span.
- * - *The method makes a new header for the specified row span of the matrix. - * Similarly to "Mat.row" and "Mat.col", this is an O(1) operation.
- * - * @param r "Range" structure containing both the start and the end indices. - * - * @see org.opencv.core.Mat.rowRange - */ + // javadoc: Mat::rowRange(r) public Mat rowRange(Range r) { @@ -2106,6 +709,7 @@ public Mat rowRange(Range r) // C++: int Mat::rows() // + // javadoc: Mat::rows() public int rows() { @@ -2118,6 +722,7 @@ public int rows() // C++: Mat Mat::operator =(Scalar s) // + // javadoc: Mat::operator =(s) public Mat setTo(Scalar s) { @@ -2130,16 +735,7 @@ public Mat setTo(Scalar s) // C++: Mat Mat::setTo(Scalar value, Mat mask = Mat()) // -/** - *Sets all or some of the array elements to the specified value.
- * - * @param value Assigned scalar converted to the actual array type. - * @param mask Operation mask of the same size as*this
. This is an
- * advanced variant of the Mat.operator=(const Scalar& s)
- * operator.
- *
- * @see org.opencv.core.Mat.setTo
- */
+ // javadoc: Mat::setTo(value, mask)
public Mat setTo(Scalar value, Mat mask)
{
@@ -2152,16 +748,7 @@ public Mat setTo(Scalar value, Mat mask)
// C++: Mat Mat::setTo(Mat value, Mat mask = Mat())
//
-/**
- * Sets all or some of the array elements to the specified value.
- * - * @param value Assigned scalar converted to the actual array type. - * @param mask Operation mask of the same size as*this
. This is an
- * advanced variant of the Mat.operator=(const Scalar& s)
- * operator.
- *
- * @see org.opencv.core.Mat.setTo
- */
+ // javadoc: Mat::setTo(value, mask)
public Mat setTo(Mat value, Mat mask)
{
@@ -2170,13 +757,7 @@ public Mat setTo(Mat value, Mat mask)
return retVal;
}
-/**
- * Sets all or some of the array elements to the specified value.
- * - * @param value Assigned scalar converted to the actual array type. - * - * @see org.opencv.core.Mat.setTo - */ + // javadoc: Mat::setTo(value) public Mat setTo(Mat value) { @@ -2189,14 +770,7 @@ public Mat setTo(Mat value) // C++: Size Mat::size() // -/** - *Returns a matrix size.
- * - *The method returns a matrix size: Size(cols, rows)
. When the
- * matrix is more than 2-dimensional, the returned size is (-1, -1).
Returns a normalized step.
- * - *The method returns a matrix step divided by "Mat.elemSize1()". It can be - * useful to quickly access an arbitrary matrix element.
- * - * @param i a i - * - * @see org.opencv.core.Mat.step1 - */ + // javadoc: Mat::step1(i) public long step1(int i) { @@ -2227,14 +792,7 @@ public long step1(int i) return retVal; } -/** - *Returns a normalized step.
- * - *The method returns a matrix step divided by "Mat.elemSize1()". It can be - * useful to quickly access an arbitrary matrix element.
- * - * @see org.opencv.core.Mat.step1 - */ + // javadoc: Mat::step1() public long step1() { @@ -2248,23 +806,7 @@ public long step1() // colEnd) // -/** - *Extracts a rectangular submatrix.
- * - *The operators make a new header for the specified sub-array of
- * *this
. They are the most generalized forms of "Mat.row",
- * "Mat.col", "Mat.rowRange", and "Mat.colRange". For example,
- * A(Range(0, 10), Range.all())
is equivalent to A.rowRange(0,
- * 10)
. Similarly to all of the above, the operators are O(1) operations,
- * that is, no matrix data is copied.
Extracts a rectangular submatrix.
- * - *The operators make a new header for the specified sub-array of
- * *this
. They are the most generalized forms of "Mat.row",
- * "Mat.col", "Mat.rowRange", and "Mat.colRange". For example,
- * A(Range(0, 10), Range.all())
is equivalent to A.rowRange(0,
- * 10)
. Similarly to all of the above, the operators are O(1) operations,
- * that is, no matrix data is copied.
Range.all()
.
- * @param colRange Start and end column of the extracted submatrix. The upper
- * boundary is not included. To select all the columns, use Range.all()
.
- *
- * @see org.opencv.core.Mat.operator()
- */
+ // javadoc: Mat::operator()(rowRange, colRange)
public Mat submat(Range rowRange, Range colRange)
{
@@ -2306,20 +832,7 @@ public Mat submat(Range rowRange, Range colRange)
// C++: Mat Mat::operator()(Rect roi)
//
-/**
- * Extracts a rectangular submatrix.
- * - *The operators make a new header for the specified sub-array of
- * *this
. They are the most generalized forms of "Mat.row",
- * "Mat.col", "Mat.rowRange", and "Mat.colRange". For example,
- * A(Range(0, 10), Range.all())
is equivalent to A.rowRange(0,
- * 10)
. Similarly to all of the above, the operators are O(1) operations,
- * that is, no matrix data is copied.
Transposes a matrix.
- * - *The method performs matrix transposition by means of matrix expressions. It
- * does not perform the actual transposition but returns a temporary matrix
- * transposition object that can be further used as a part of more complex
- * matrix expressions or can be assigned to a matrix:
// C++ code:
- * - *Mat A1 = A + Mat.eye(A.size(), A.type())*lambda;
- * - *Mat C = A1.t()*A1; // compute (A + lambda*I)^t * (A + lamda*I)
- * - * @see org.opencv.core.Mat.t - */ + // javadoc: Mat::t() public Mat t() { @@ -2360,14 +858,7 @@ public Mat t() // C++: size_t Mat::total() // -/** - *Returns the total number of array elements.
- * - *The method returns the number of array elements (a number of pixels if the - * array represents an image).
- * - * @see org.opencv.core.Mat.total - */ + // javadoc: Mat::total() public long total() { @@ -2380,15 +871,7 @@ public long total() // C++: int Mat::type() // -/** - *Returns the type of a matrix element.
- * - *The method returns a matrix element type. This is an identifier compatible
- * with the CvMat
type system, like CV_16SC3
or 16-bit
- * signed 3-channel array, and so on.
Returns a zero array of the specified size and type.
- * - *The method returns a Matlab-style zero array initializer. It can be used to
- * quickly form a constant array as a function parameter, part of a matrix
- * expression, or as a matrix initializer.
- *
// C++ code:
- * - *Mat A;
- * - *A = Mat.zeros(3, 3, CV_32F);
- * - *In the example above, a new matrix is allocated only if A
is not
- * a 3x3 floating-point matrix. Otherwise, the existing matrix A
is
- * filled with zeros.
- *
Returns a zero array of the specified size and type.
- * - *The method returns a Matlab-style zero array initializer. It can be used to
- * quickly form a constant array as a function parameter, part of a matrix
- * expression, or as a matrix initializer.
- *
// C++ code:
- * - *Mat A;
- * - *A = Mat.zeros(3, 3, CV_32F);
- * - *In the example above, a new matrix is allocated only if A
is not
- * a 3x3 floating-point matrix. Otherwise, the existing matrix A
is
- * filled with zeros.
- *
Size(cols,
- * rows)
.
- * @param type Created matrix type.
- *
- * @see org.opencv.core.Mat.zeros
- */
+ // javadoc: Mat::zeros(size, type)
public static Mat zeros(Size size, int type)
{
@@ -2477,6 +912,7 @@ protected void finalize() throws Throwable {
super.finalize();
}
+ // javadoc:Mat::toString()
@Override
public String toString() {
return "Mat [ " +
@@ -2487,10 +923,12 @@ public String toString() {
" ]";
}
+ // javadoc:Mat::dump()
public String dump() {
return nDump(nativeObj);
}
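dump() renders the element values as text and complements the terse header summary from toString(); for example:

    Mat m = Mat.eye(2, 2, CvType.CV_8UC1);
    System.out.println(m);        // size, type, native address
    System.out.println(m.dump()); // element values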
+ // javadoc:Mat::put(row,col,data)
public int put(int row, int col, double... data) {
int t = type();
if (data == null || data.length % CvType.channels(t) != 0)
@@ -2502,6 +940,7 @@ public int put(int row, int col, double... data) {
return nPutD(nativeObj, row, col, data.length, data);
}
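put(...) writes data.length values starting at (row, col) and continues row-major into the following rows, so one call can fill a whole small matrix:

    Mat m = new Mat(2, 2, CvType.CV_64FC1);
    m.put(0, 0, 1.0, 2.0, 3.0, 4.0); // row 0: 1 2, row 1: 3 4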
+ // javadoc:Mat::put(row,col,data)
public int put(int row, int col, float[] data) {
int t = type();
if (data == null || data.length % CvType.channels(t) != 0)
@@ -2516,6 +955,7 @@ public int put(int row, int col, float[] data) {
throw new java.lang.UnsupportedOperationException("Mat data type is not compatible: " + t);
}
+ // javadoc:Mat::put(row,col,data)
public int put(int row, int col, int[] data) {
int t = type();
if (data == null || data.length % CvType.channels(t) != 0)
@@ -2530,6 +970,7 @@ public int put(int row, int col, int[] data) {
throw new java.lang.UnsupportedOperationException("Mat data type is not compatible: " + t);
}
+ // javadoc:Mat::put(row,col,data)
public int put(int row, int col, short[] data) {
int t = type();
if (data == null || data.length % CvType.channels(t) != 0)
@@ -2544,6 +985,7 @@ public int put(int row, int col, short[] data) {
throw new java.lang.UnsupportedOperationException("Mat data type is not compatible: " + t);
}
+ // javadoc:Mat::put(row,col,data)
public int put(int row, int col, byte[] data) {
int t = type();
if (data == null || data.length % CvType.channels(t) != 0)
@@ -2558,6 +1000,7 @@ public int put(int row, int col, byte[] data) {
throw new java.lang.UnsupportedOperationException("Mat data type is not compatible: " + t);
}
+ // javadoc:Mat::get(row,col,data)
public int get(int row, int col, byte[] data) {
int t = type();
if (data == null || data.length % CvType.channels(t) != 0)
@@ -2572,6 +1015,7 @@ public int get(int row, int col, byte[] data) {
throw new java.lang.UnsupportedOperationException("Mat data type is not compatible: " + t);
}
+ // javadoc:Mat::get(row,col,data)
public int get(int row, int col, short[] data) {
int t = type();
if (data == null || data.length % CvType.channels(t) != 0)
@@ -2586,6 +1030,7 @@ public int get(int row, int col, short[] data) {
throw new java.lang.UnsupportedOperationException("Mat data type is not compatible: " + t);
}
+ // javadoc:Mat::get(row,col,data)
public int get(int row, int col, int[] data) {
int t = type();
if (data == null || data.length % CvType.channels(t) != 0)
@@ -2600,6 +1045,7 @@ public int get(int row, int col, int[] data) {
throw new java.lang.UnsupportedOperationException("Mat data type is not compatible: " + t);
}
+ // javadoc:Mat::get(row,col,data)
public int get(int row, int col, float[] data) {
int t = type();
if (data == null || data.length % CvType.channels(t) != 0)
@@ -2614,6 +1060,7 @@ public int get(int row, int col, float[] data) {
throw new java.lang.UnsupportedOperationException("Mat data type is not compatible: " + t);
}
+ // javadoc:Mat::get(row,col,data)
public int get(int row, int col, double[] data) {
int t = type();
if (data == null || data.length % CvType.channels(t) != 0)
@@ -2628,18 +1075,22 @@ public int get(int row, int col, double[] data) {
throw new java.lang.UnsupportedOperationException("Mat data type is not compatible: " + t);
}
+ // javadoc:Mat::get(row,col)
public double[] get(int row, int col) {
return nGet(nativeObj, row, col);
}
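The buffer-less overload above returns a freshly allocated double[] holding one element's channels, which is convenient but costs an allocation per call:

    double[] px = m.get(0, 0); // e.g. {B, G, R} for a CV_8UC3 matrix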
+ // javadoc:Mat::height()
public int height() {
return rows();
}
+ // javadoc:Mat::width()
public int width() {
return cols();
}
+ // javadoc:Mat::getNativeObjAddr()
public long getNativeObjAddr() {
return nativeObj;
}
diff --git a/imaging-utils/src/main/java/org/opencv/core/MatOfDMatch.java b/imaging-utils/src/main/java/org/opencv/core/MatOfDMatch.java
index b703c5c..2c99e14 100644
--- a/imaging-utils/src/main/java/org/opencv/core/MatOfDMatch.java
+++ b/imaging-utils/src/main/java/org/opencv/core/MatOfDMatch.java
@@ -3,7 +3,7 @@
import java.util.Arrays;
import java.util.List;
-import org.opencv.features2d.DMatch;
+import org.opencv.core.DMatch;
public class MatOfDMatch extends Mat {
// 32FC4
diff --git a/imaging-utils/src/main/java/org/opencv/core/MatOfKeyPoint.java b/imaging-utils/src/main/java/org/opencv/core/MatOfKeyPoint.java
index d0a1879..24b9a81 100644
--- a/imaging-utils/src/main/java/org/opencv/core/MatOfKeyPoint.java
+++ b/imaging-utils/src/main/java/org/opencv/core/MatOfKeyPoint.java
@@ -3,7 +3,7 @@
import java.util.Arrays;
import java.util.List;
-import org.opencv.features2d.KeyPoint;
+import org.opencv.core.KeyPoint;
public class MatOfKeyPoint extends Mat {
// 32FC7
diff --git a/imaging-utils/src/main/java/org/opencv/core/MatOfRect2d.java b/imaging-utils/src/main/java/org/opencv/core/MatOfRect2d.java
new file mode 100644
index 0000000..71c4b1a
--- /dev/null
+++ b/imaging-utils/src/main/java/org/opencv/core/MatOfRect2d.java
@@ -0,0 +1,81 @@
+package org.opencv.core;
+
+import java.util.Arrays;
+import java.util.List;
+
+
+public class MatOfRect2d extends Mat {
+ // 64FC4
+ private static final int _depth = CvType.CV_64F;
+ private static final int _channels = 4;
+
+ public MatOfRect2d() {
+ super();
+ }
+
+ protected MatOfRect2d(long addr) {
+ super(addr);
+ if( !empty() && checkVector(_channels, _depth) < 0 )
+ throw new IllegalArgumentException("Incompatible Mat");
+ //FIXME: do we need release() here?
+ }
+
+ public static MatOfRect2d fromNativeAddr(long addr) {
+ return new MatOfRect2d(addr);
+ }
+
+ public MatOfRect2d(Mat m) {
+ super(m, Range.all());
+ if( !empty() && checkVector(_channels, _depth) < 0 )
+ throw new IllegalArgumentException("Incompatible Mat");
+ //FIXME: do we need release() here?
+ }
+
+ public MatOfRect2d(Rect2d...a) {
+ super();
+ fromArray(a);
+ }
+
+ public void alloc(int elemNumber) {
+ if(elemNumber>0)
+ super.create(elemNumber, 1, CvType.makeType(_depth, _channels));
+ }
+
+ public void fromArray(Rect2d...a) {
+ if(a==null || a.length==0)
+ return;
+ int num = a.length;
+ alloc(num);
+ double buff[] = new double[num * _channels];
+        for(int i=0; i<num; i++) {
+            Rect2d r = a[i];
+            buff[_channels*i+0] = (double) r.x;
+            buff[_channels*i+1] = (double) r.y;
+            buff[_channels*i+2] = (double) r.width;
+            buff[_channels*i+3] = (double) r.height;
+        }
+        put(0, 0, buff); //TODO: check ret val!
+    }
+
+    public Rect2d[] toArray() {
+        int num = (int) total();
+        Rect2d[] a = new Rect2d[num];
+        if(num == 0)
+            return a;
+        double buff[] = new double[num * _channels];
+        get(0, 0, buff); //TODO: check ret val!
+        for(int i=0; i<num; i++)
+            a[i] = new Rect2d(buff[i*_channels], buff[i*_channels+1], buff[i*_channels+2], buff[i*_channels+3]);
+        return a;
+    }
+
+    public void fromList(List<Rect2d> lr) {
+        Rect2d ap[] = lr.toArray(new Rect2d[0]);
+        fromArray(ap);
+    }
+
+    public List<Rect2d> toList() {
+        Rect2d[] ar = toArray();
+        return Arrays.asList(ar);
+    }
+}
diff --git a/imaging-utils/src/main/java/org/opencv/core/Point.java b/imaging-utils/src/main/java/org/opencv/core/Point.java
--- a/imaging-utils/src/main/java/org/opencv/core/Point.java
+++ b/imaging-utils/src/main/java/org/opencv/core/Point.java
package org.opencv.core;
-/**
- * template<typename _Tp> class CV_EXPORTS Point_
// C++ code:
- * {
- * public:
- *
- * typedef _Tp value_type;
- *
- * // various constructors
- * Point_();
- * Point_(_Tp _x, _Tp _y);
- * Point_(const Point_& pt);
- * Point_(const CvPoint& pt);
- * Point_(const CvPoint2D32f& pt);
- * Point_(const Size_<_Tp>& sz);
- * Point_(const Vec<_Tp, 2>& v);
- *
- * Point_& operator = (const Point_& pt);
- *
- * //! conversion to another data type
- * template<typename _Tp2> operator Point_<_Tp2>() const;
- *
- * //! conversion to the old-style C structures
- * operator CvPoint() const;
- * operator CvPoint2D32f() const;
- * operator Vec<_Tp, 2>() const;
- *
- * //! dot product
- * _Tp dot(const Point_& pt) const;
- *
- * //! dot product computed in double-precision arithmetics
- * double ddot(const Point_& pt) const;
- *
- * //! cross-product
- * double cross(const Point_& pt) const;
- *
- * //! checks whether the point is inside the specified rectangle
- * bool inside(const Rect_<_Tp>& r) const;
- *
- * _Tp x, y; //< the point coordinates
- *
- * };
- *
- * Template class for 2D points specified by its coordinates
- * x and y.
- * An instance of the class is interchangeable with C structures,
- * CvPoint and CvPoint2D32f. There is also a cast
- * operator to convert point coordinates to the specified type. The conversion
- * from floating-point coordinates to integer coordinates is done by rounding.
- * Commonly, the conversion uses this operation for each of the coordinates.
- * Besides the class members listed in the declaration above, the following
- * operations on points are implemented:
// C++ code:
- * pt1 = pt2 + pt3;
- * pt1 = pt2 - pt3;
- * pt1 = pt2 * a;
- * pt1 = a * pt2;
- * pt1 += pt2;
- * pt1 -= pt2;
- * pt1 *= a;
- * double value = norm(pt); // L2 norm
- * pt1 == pt2;
- * pt1 != pt2;
- *
- * For your convenience, the following type aliases are defined:
- *
- * typedef Point_<int> Point2i;
- * typedef Point2i Point;
- * typedef Point_<float> Point2f;
- * typedef Point_<double> Point2d;
- *
- * Example:
- *
- * Point2f a(0.3f, 0.f), b(0.f, 0.4f);
- * Point pt = (a + b)*10.f;
- * cout << pt.x << ", " << pt.y << endl;
- *
- * @see org.opencv.core.Point_
- */
+//javadoc:Point_
public class Point {
public double x, y;
diff --git a/imaging-utils/src/main/java/org/opencv/core/Point3.java b/imaging-utils/src/main/java/org/opencv/core/Point3.java
index 839dae0..14b91c6 100644
--- a/imaging-utils/src/main/java/org/opencv/core/Point3.java
+++ b/imaging-utils/src/main/java/org/opencv/core/Point3.java
@@ -1,78 +1,6 @@
package org.opencv.core;
-/**
- * template<typename _Tp> class CV_EXPORTS Point3_
// C++ code:
- * public:
- * typedef _Tp value_type;
- *
- * // various constructors
- * Point3_();
- * Point3_(_Tp _x, _Tp _y, _Tp _z);
- * Point3_(const Point3_& pt);
- * explicit Point3_(const Point_<_Tp>& pt);
- * Point3_(const CvPoint3D32f& pt);
- * Point3_(const Vec<_Tp, 3>& v);
- * Point3_& operator = (const Point3_& pt);
- *
- * //! conversion to another data type
- * template<typename _Tp2> operator Point3_<_Tp2>() const;
- * //! conversion to the old-style CvPoint...
- * operator CvPoint3D32f() const;
- * //! conversion to cv.Vec<>
- * operator Vec<_Tp, 3>() const;
- *
- * //! dot product
- * _Tp dot(const Point3_& pt) const;
- * //! dot product computed in double-precision arithmetics
- * double ddot(const Point3_& pt) const;
- * //! cross product of the 2 3D points
- * Point3_ cross(const Point3_& pt) const;
- *
- * _Tp x, y, z; //< the point coordinates
- * };
- *
- * Template class for 3D points specified by its coordinates x, y and z.
- * An instance of the class is interchangeable with the C structure
- * CvPoint3D32f. Similarly to Point_, the coordinates
- * of 3D points can be converted to another type. The vector arithmetic and
- * comparison operations are also supported.
- * The following Point3_<> aliases are available:
- * // C++ code:
- * typedef Point3_<int> Point3i;
- * typedef Point3_<float> Point3f;
- * typedef Point3_<double> Point3d;
- *
- * @see org.opencv.core.Point3
- */
+//javadoc:Point3_
 public class Point3 {
diff --git a/imaging-utils/src/main/java/org/opencv/core/Range.java b/imaging-utils/src/main/java/org/opencv/core/Range.java
--- a/imaging-utils/src/main/java/org/opencv/core/Range.java
+++ b/imaging-utils/src/main/java/org/opencv/core/Range.java
 package org.opencv.core;

-/**
- * Template class specifying a continuous subsequence (slice) of a sequence.
- *
- * class CV_EXPORTS Range
- * // C++ code:
- * public:
- * Range();
- * Range(int _start, int _end);
- * Range(const CvSlice& slice);
- * int size() const;
- * bool empty() const;
- * static Range all();
- * operator CvSlice() const;
- *
- * int start, end;
- * };
- * The class is used to specify a row or a column span in a matrix ("Mat")
- * and for many other purposes. Range(a,b) is basically the
- * same as a:b in Matlab or a..b in Python. As in
- * Python, start is an inclusive left boundary of the range and
- * end is an exclusive right boundary of the range. Such a
- * half-opened interval is usually denoted as [start,end).
- * The static method Range.all() returns a special variable that
- * means "the whole sequence" or "the whole range", just like ":"
- * in Matlab or "..." in Python. All the methods and functions in
- * OpenCV that take Range support this special Range.all()
- * value. But, of course, in case of your own custom processing, you will
- * probably have to check and handle it explicitly:
- * // C++ code:
- * void my_function(..., const Range& r, ...)
- * {
- *     if(r == Range.all()) {
- *         // process all the data
- *     }
- *     else {
- *         // process [r.start, r.end)
- *     }
- * }
- *
- * @see org.opencv.core.Range
- */
+//javadoc:Range
 public class Range {

     public int start, end;
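A short Java-side sketch of the same half-open [start,end) convention, using Mat.submat to take a row span (the native library must be loaded first; the class name RangeExample is illustrative):

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Range;

public class RangeExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat m = Mat.eye(6, 6, CvType.CV_8UC1);
        // rows 1..3 (4 is excluded, since the interval is half-open), all columns
        Mat band = m.submat(new Range(1, 4), Range.all());
        System.out.println(band.rows() + "x" + band.cols()); // prints 3x6
    }
}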
diff --git a/imaging-utils/src/main/java/org/opencv/core/Rect.java b/imaging-utils/src/main/java/org/opencv/core/Rect.java
index 056167d..c68e818 100644
--- a/imaging-utils/src/main/java/org/opencv/core/Rect.java
+++ b/imaging-utils/src/main/java/org/opencv/core/Rect.java
@@ -1,129 +1,6 @@
 package org.opencv.core;

-/**
- * template<typename _Tp> class Rect_
- * // C++ code:
- * public:
- * typedef _Tp value_type;
- *
- * //! various constructors
- * Rect_();
- * Rect_(_Tp _x, _Tp _y, _Tp _width, _Tp _height);
- * Rect_(const Rect_& r);
- * Rect_(const CvRect& r);
- * Rect_(const Point_<_Tp>& org, const Size_<_Tp>& sz);
- * Rect_(const Point_<_Tp>& pt1, const Point_<_Tp>& pt2);
- * Rect_& operator = (const Rect_& r);
- *
- * //! the top-left corner
- * Point_<_Tp> tl() const;
- * //! the bottom-right corner
- * Point_<_Tp> br() const;
- * //! size (width, height) of the rectangle
- * Size_<_Tp> size() const;
- * //! area (width*height) of the rectangle
- * _Tp area() const;
- *
- * //! conversion to another data type
- * template<typename _Tp2> operator Rect_<_Tp2>() const;
- * //! conversion to the old-style CvRect
- * operator CvRect() const;
- *
- * //! checks whether the rectangle contains the point
- * bool contains(const Point_<_Tp>& pt) const;
- *
- * _Tp x, y, width, height; //< the top-left corner, as well as width and height
- *                          // of the rectangle
- * };
- * Template class for 2D rectangles, described by the following parameters:
- * the coordinates of the top-left corner (this is the default interpretation
- * of Rect_.x and Rect_.y in OpenCV; though, in your
- * algorithms you may count x and y from the
- * bottom-left corner), and the rectangle width and height.
- * OpenCV typically assumes that the top and left boundary of the rectangle are
- * inclusive, while the right and bottom boundaries are not. For example, the
- * method Rect_.contains returns true if
- * x <= pt.x < x+width, y <= pt.y < y+height
- *
- * Virtually every loop over an image ROI in OpenCV (where ROI is specified by
- * Rect_) is implemented as:
- * // C++ code:
- * for(int y = roi.y; y < roi.y + roi.height; y++)
- *     for(int x = roi.x; x < roi.x + roi.width; x++)
- *     {
- *         //...
- *     }
- *
- * In addition to the class members, the following operations on rectangles are
- * implemented:
- * rect += point, rect -= point, rect += size, rect -= size
- * (augmenting operations)
- * rect = rect1 & rect2 (rectangle intersection)
- * rect = rect1 | rect2 (minimum area rectangle containing
- * rect1 and rect2)
- * rect &= rect1, rect |= rect1 (and the corresponding
- * augmenting operations)
- * rect == rect1, rect != rect1 (rectangle comparison)
- *
- * This is an example how the partial ordering on rectangles can be established
- * (rect1 ⊆ rect2):
- * // C++ code:
- * template<typename _Tp> inline bool
- * operator <= (const Rect_<_Tp>& r1, const Rect_<_Tp>& r2)
- * {
- *     return (r1 & r2) == r1;
- * }
- *
- * For your convenience, the Rect_<> alias is available:
- * typedef Rect_<int> Rect;
- *
- * @see org.opencv.core.Rect_
- */
+//javadoc:Rect_
 public class Rect {
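The inclusive/exclusive boundary rule above carries over to the Java Rect unchanged; a minimal sketch (Rect.contains is pure Java in the binding, so no native load is needed; RectExample is an illustrative name):

import org.opencv.core.Point;
import org.opencv.core.Rect;

public class RectExample {
    public static void main(String[] args) {
        Rect r = new Rect(10, 10, 20, 20);
        // top/left boundary is inclusive, right/bottom is exclusive
        System.out.println(r.contains(new Point(10, 10))); // true
        System.out.println(r.contains(new Point(30, 30))); // false
    }
}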
diff --git a/imaging-utils/src/main/java/org/opencv/core/Scalar.java b/imaging-utils/src/main/java/org/opencv/core/Scalar.java
--- a/imaging-utils/src/main/java/org/opencv/core/Scalar.java
+++ b/imaging-utils/src/main/java/org/opencv/core/Scalar.java
 package org.opencv.core;

-/**
- * Template class for a 4-element vector derived from Vec.
- *
- * template<typename _Tp> class Scalar_ : public Vec<_Tp, 4>
- * // C++ code:
- * public:
- * //! various constructors
- * Scalar_();
- * Scalar_(_Tp v0, _Tp v1, _Tp v2=0, _Tp v3=0);
- * Scalar_(const CvScalar& s);
- * Scalar_(_Tp v0);
- *
- * //! returns a scalar with all elements set to v0
- * static Scalar_<_Tp> all(_Tp v0);
- * //! conversion to the old-style CvScalar
- * operator CvScalar() const;
- *
- * //! conversion to another data type
- * template<typename T2> operator Scalar_<T2>() const;
- *
- * //! per-element product
- * Scalar_<_Tp> mul(const Scalar_<_Tp>& t, double scale=1) const;
- * // returns (v0, -v1, -v2, -v3)
- * Scalar_<_Tp> conj() const;
- * // returns true iff v1 == v2 == v3 == 0
- * bool isReal() const;
- * };
- * typedef Scalar_<double> Scalar;
- *
- * Being derived from Vec<_Tp, 4>, Scalar_ and
- * Scalar can be used just as typical 4-element vectors. In
- * addition, they can be converted to/from CvScalar. The type
- * Scalar is widely used in OpenCV to pass pixel values.
- *
- * @see org.opencv.core.Scalar_
- */
+//javadoc:Scalar_
 public class Scalar {
diff --git a/imaging-utils/src/main/java/org/opencv/core/Size.java b/imaging-utils/src/main/java/org/opencv/core/Size.java
--- a/imaging-utils/src/main/java/org/opencv/core/Size.java
+++ b/imaging-utils/src/main/java/org/opencv/core/Size.java
 package org.opencv.core;

-/**
- * template<typename _Tp> class Size_
- * // C++ code:
- * public:
- * typedef _Tp value_type;
- *
- * //! various constructors
- * Size_();
- * Size_(_Tp _width, _Tp _height);
- * Size_(const Size_& sz);
- * Size_(const CvSize& sz);
- * Size_(const CvSize2D32f& sz);
- * Size_(const Point_<_Tp>& pt);
- * Size_& operator = (const Size_& sz);
- *
- * //! the area (width*height)
- * _Tp area() const;
- *
- * //! conversion to another data type
- * template<typename _Tp2> operator Size_<_Tp2>() const;
- * //! conversion to the old-style OpenCV types
- * operator CvSize() const;
- * operator CvSize2D32f() const;
- *
- * _Tp width, height; // the width and the height
- * };
- * Template class for specifying the size of an image or rectangle. The class
- * includes two members called width and height. The
- * structure can be converted to and from the old OpenCV structures
- * CvSize and CvSize2D32f. The same set of arithmetic
- * and comparison operations as for Point_ is available.
- * OpenCV defines the following Size_<> aliases:
- * // C++ code:
- * typedef Size_<int> Size2i;
- * typedef Size2i Size;
- * typedef Size_<float> Size2f;
- *
- * @see org.opencv.core.Size_
- */
+//javadoc:Size_
 public class Size {
diff --git a/imaging-utils/src/main/java/org/opencv/core/TermCriteria.java b/imaging-utils/src/main/java/org/opencv/core/TermCriteria.java
--- a/imaging-utils/src/main/java/org/opencv/core/TermCriteria.java
+++ b/imaging-utils/src/main/java/org/opencv/core/TermCriteria.java
 package org.opencv.core;

-/**
- * class CV_EXPORTS TermCriteria
- * // C++ code:
- * public:
- * enum
- * {
- *     COUNT=1, //!< the maximum number of iterations or elements to compute
- *     MAX_ITER=COUNT, //!< ditto
- *     EPS=2 //!< the desired accuracy or change in parameters at which the
- *           //!< iterative algorithm stops
- * };
- *
- * //! default constructor
- * TermCriteria();
- * //! full constructor
- * TermCriteria(int type, int maxCount, double epsilon);
- * //! conversion from CvTermCriteria
- * TermCriteria(const CvTermCriteria& criteria);
- * //! conversion to CvTermCriteria
- * operator CvTermCriteria() const;
- *
- * int type; //!< the type of termination criteria: COUNT, EPS or COUNT + EPS
- * int maxCount; // the maximum number of iterations/elements
- * double epsilon; // the desired accuracy
- * };
- * The class defining termination criteria for iterative algorithms. You can
- * initialize it by the default constructor and then override any parameters,
- * or the structure may be fully initialized using the advanced variant of the
- * constructor.
- *
- * @see org.opencv.core.TermCriteria
- */
+//javadoc:TermCriteria
 public class TermCriteria {

     /**
diff --git a/imaging-utils/src/main/java/org/opencv/core/TickMeter.java b/imaging-utils/src/main/java/org/opencv/core/TickMeter.java
new file mode 100644
index 0000000..1ab5a56
--- /dev/null
+++ b/imaging-utils/src/main/java/org/opencv/core/TickMeter.java
@@ -0,0 +1,181 @@
+//
+// This file is auto-generated. Please don't modify it!
+//
+package org.opencv.core;
+
+// C++: class TickMeter
+//javadoc: TickMeter
+public class TickMeter {
+
+    protected final long nativeObj;
+    protected TickMeter(long addr) { nativeObj = addr; }
+
+    public long getNativeObjAddr() { return nativeObj; }
+
+    //javadoc: TickMeter::TickMeter()
+    public TickMeter() {
+        nativeObj = TickMeter_0();
+    }
+
+    //javadoc: TickMeter::getTimeMicro()
+    public double getTimeMicro() {
+        return getTimeMicro_0(nativeObj);
+    }
+
+    //javadoc: TickMeter::getTimeMilli()
+    public double getTimeMilli() {
+        return getTimeMilli_0(nativeObj);
+    }
+
+    //javadoc: TickMeter::getTimeSec()
+    public double getTimeSec() {
+        return getTimeSec_0(nativeObj);
+    }
+
+    //javadoc: TickMeter::getCounter()
+    public long getCounter() {
+        return getCounter_0(nativeObj);
+    }
+
+    //javadoc: TickMeter::getTimeTicks()
+    public long getTimeTicks() {
+        return getTimeTicks_0(nativeObj);
+    }
+
+    //javadoc: TickMeter::reset()
+    public void reset() {
+        reset_0(nativeObj);
+    }
+
+    //javadoc: TickMeter::start()
+    public void start() {
+        start_0(nativeObj);
+    }
+
+    //javadoc: TickMeter::stop()
+    public void stop() {
+        stop_0(nativeObj);
+    }
+
+    @Override
+    protected void finalize() throws Throwable {
+        delete(nativeObj);
+    }
+
+    // C++: TickMeter()
+    private static native long TickMeter_0();
+    // C++: double getTimeMicro()
+    private static native double getTimeMicro_0(long nativeObj);
+    // C++: double getTimeMilli()
+    private static native double getTimeMilli_0(long nativeObj);
+    // C++: double getTimeSec()
+    private static native double getTimeSec_0(long nativeObj);
+    // C++: int64 getCounter()
+    private static native long getCounter_0(long nativeObj);
+    // C++: int64 getTimeTicks()
+    private static native long getTimeTicks_0(long nativeObj);
+    // C++: void reset()
+    private static native void reset_0(long nativeObj);
+    // C++: void start()
+    private static native void start_0(long nativeObj);
+    // C++: void stop()
+    private static native void stop_0(long nativeObj);
+    // native support for java finalize()
+    private static native void delete(long nativeObj);
+}
diff --git a/imaging-utils/src/main/java/org/opencv/dnn/DictValue.java b/imaging-utils/src/main/java/org/opencv/dnn/DictValue.java
new file mode 100644
index 0000000..87c76a2
--- /dev/null
+++ b/imaging-utils/src/main/java/org/opencv/dnn/DictValue.java
@@ -0,0 +1,211 @@
+//
+// This file is auto-generated. Please don't modify it!
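The TickMeter class added above is a simple stopwatch over OpenCV's tick counter; a minimal usage sketch (the native library must be loaded, and TickMeterExample is an illustrative name):

import org.opencv.core.Core;
import org.opencv.core.TickMeter;

public class TickMeterExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // TickMeter wraps a native object
        TickMeter tm = new TickMeter();
        tm.start();
        // ... timed work goes here ...
        tm.stop();
        System.out.println("elapsed ms: " + tm.getTimeMilli());
    }
}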
+//
+package org.opencv.dnn;
+
+import java.lang.String;
+
+// C++: class DictValue
+//javadoc: DictValue
+public class DictValue {
+
+    protected final long nativeObj;
+    protected DictValue(long addr) { nativeObj = addr; }
+
+    public long getNativeObjAddr() { return nativeObj; }
+
+    //javadoc: DictValue::DictValue(s)
+    public DictValue(String s) {
+        nativeObj = DictValue_0(s);
+    }
+
+    //javadoc: DictValue::DictValue(p)
+    public DictValue(double p) {
+        nativeObj = DictValue_1(p);
+    }
+
+    //javadoc: DictValue::DictValue(i)
+    public DictValue(int i) {
+        nativeObj = DictValue_2(i);
+    }
+
+    //javadoc: DictValue::getStringValue(idx)
+    public String getStringValue(int idx) {
+        return getStringValue_0(nativeObj, idx);
+    }
+
+    //javadoc: DictValue::getStringValue()
+    public String getStringValue() {
+        return getStringValue_1(nativeObj);
+    }
+
+    //javadoc: DictValue::isInt()
+    public boolean isInt() {
+        return isInt_0(nativeObj);
+    }
+
+    //javadoc: DictValue::isReal()
+    public boolean isReal() {
+        return isReal_0(nativeObj);
+    }
+
+    //javadoc: DictValue::isString()
+    public boolean isString() {
+        return isString_0(nativeObj);
+    }
+
+    //javadoc: DictValue::getRealValue(idx)
+    public double getRealValue(int idx) {
+        return getRealValue_0(nativeObj, idx);
+    }
+
+    //javadoc: DictValue::getRealValue()
+    public double getRealValue() {
+        return getRealValue_1(nativeObj);
+    }
+
+    //javadoc: DictValue::getIntValue(idx)
+    public int getIntValue(int idx) {
+        return getIntValue_0(nativeObj, idx);
+    }
+
+    //javadoc: DictValue::getIntValue()
+    public int getIntValue() {
+        return getIntValue_1(nativeObj);
+    }
+
+    @Override
+    protected void finalize() throws Throwable {
+        delete(nativeObj);
+    }
+
+    // C++: DictValue(String s)
+    private static native long DictValue_0(String s);
+    // C++: DictValue(double p)
+    private static native long DictValue_1(double p);
+    // C++: DictValue(int i)
+    private static native long DictValue_2(int i);
+    // C++: String getStringValue(int idx = -1)
+    private static native String getStringValue_0(long nativeObj, int idx);
+    private static native String getStringValue_1(long nativeObj);
+    // C++: bool isInt()
+    private static native boolean isInt_0(long nativeObj);
+    // C++: bool isReal()
+    private static native boolean isReal_0(long nativeObj);
+    // C++: bool isString()
+    private static native boolean isString_0(long nativeObj);
+    // C++: double getRealValue(int idx = -1)
+    private static native double getRealValue_0(long nativeObj, int idx);
+    private static native double getRealValue_1(long nativeObj);
+    // C++: int getIntValue(int idx = -1)
+    private static native int getIntValue_0(long nativeObj, int idx);
+    private static native int getIntValue_1(long nativeObj);
+    // native support for java finalize()
+    private static native void delete(long nativeObj);
+}
diff --git a/imaging-utils/src/main/java/org/opencv/dnn/Dnn.java b/imaging-utils/src/main/java/org/opencv/dnn/Dnn.java
new file mode 100644
index 0000000..1e92e75
--- /dev/null
+++ b/imaging-utils/src/main/java/org/opencv/dnn/Dnn.java
@@ -0,0 +1,249 @@
+//
+// This file is auto-generated. Please don't modify it!
+//
+package org.opencv.dnn;
+
+import java.lang.String;
+import java.util.ArrayList;
+import java.util.List;
+import org.opencv.core.Mat;
+import org.opencv.core.Scalar;
+import org.opencv.core.Size;
+import org.opencv.utils.Converters;
+
+public class Dnn {
+
+    public static final int
+            DNN_BACKEND_DEFAULT = 0,
+            DNN_BACKEND_HALIDE = 1,
+            DNN_TARGET_CPU = 0,
+            DNN_TARGET_OPENCL = 1;
+
+    // C++: Mat blobFromImage(Mat image, double scalefactor = 1.0, Size size = Size(), Scalar mean = Scalar(), bool swapRB = true)
+    //javadoc: blobFromImage(image, scalefactor, size, mean, swapRB)
+    public static Mat blobFromImage(Mat image, double scalefactor, Size size, Scalar mean, boolean swapRB) {
+        return new Mat(blobFromImage_0(image.nativeObj, scalefactor, size.width, size.height, mean.val[0], mean.val[1], mean.val[2], mean.val[3], swapRB));
+    }
+
+    //javadoc: blobFromImage(image)
+    public static Mat blobFromImage(Mat image) {
+        return new Mat(blobFromImage_1(image.nativeObj));
+    }
+
+    // C++: Mat blobFromImages(vector_Mat images, double scalefactor = 1.0, Size size = Size(), Scalar mean = Scalar(), bool swapRB = true)
+    //javadoc: blobFromImages(images, scalefactor, size, mean, swapRB)
+    public static Mat blobFromImages(List<Mat> images, double scalefactor, Size size, Scalar mean, boolean swapRB)
diff --git a/imaging-utils/src/main/java/org/opencv/features2d/DescriptorExtractor.java b/imaging-utils/src/main/java/org/opencv/features2d/DescriptorExtractor.java
--- a/imaging-utils/src/main/java/org/opencv/features2d/DescriptorExtractor.java
+++ b/imaging-utils/src/main/java/org/opencv/features2d/DescriptorExtractor.java
 package org.opencv.features2d;

-/**
- * Abstract base class for computing descriptors for image keypoints.
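The new Dnn.blobFromImage call above packs an image into an NCHW blob, scaling, resizing, mean-subtracting and channel-swapping in one step. A minimal sketch of how it would be used (the file name "input.jpg" and the mean values are placeholders, not from this patch):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.dnn.Dnn;
import org.opencv.imgcodecs.Imgcodecs;

public class BlobExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat img = Imgcodecs.imread("input.jpg"); // placeholder input
        // scale to [0,1], resize to 224x224, subtract a per-channel mean,
        // and swap BGR->RGB in a single call
        Mat blob = Dnn.blobFromImage(img, 1.0 / 255, new Size(224, 224),
                new Scalar(104, 117, 123, 0), true);
        System.out.println("blob dims: " + blob.dims());
    }
}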
- * class CV_EXPORTS DescriptorExtractor
- * // C++ code:
- * {
- * public:
- *     virtual ~DescriptorExtractor();
- *
- *     void compute(const Mat& image, vector<KeyPoint>& keypoints,
- *                  Mat& descriptors) const;
- *     void compute(const vector<Mat>& images, vector<vector<KeyPoint> >& keypoints,
- *                  vector<Mat>& descriptors) const;
- *
- *     virtual void read(const FileNode&);
- *     virtual void write(FileStorage&) const;
- *
- *     virtual int descriptorSize() const = 0;
- *     virtual int descriptorType() const = 0;
- *
- *     static Ptr<DescriptorExtractor> create(const string& descriptorExtractorType);
- *
- * protected:
- *     ...
- * };
- * In this interface, a keypoint descriptor can be represented as a
- * dense, fixed-dimension vector of a basic type. Most descriptors follow this
- * pattern as it simplifies computing distances between descriptors. Therefore,
- * a collection of descriptors is represented as "Mat", where each row is a
- * keypoint descriptor.
- *
- * @see org.opencv.features2d.DescriptorExtractor : public Algorithm
- */
+//javadoc: javaDescriptorExtractor
 public class DescriptorExtractor {

     protected final long nativeObj;
     protected DescriptorExtractor(long addr) { nativeObj = addr; }

+    public long getNativeObjAddr() { return nativeObj; }

     private static final int OPPONENTEXTRACTOR = 1000;

@@ -72,176 +31,130 @@ public class DescriptorExtractor {
             BRIEF = 4,
             BRISK = 5,
             FREAK = 6,
+            AKAZE = 7,
             OPPONENT_SIFT = OPPONENTEXTRACTOR + SIFT,
             OPPONENT_SURF = OPPONENTEXTRACTOR + SURF,
             OPPONENT_ORB = OPPONENTEXTRACTOR + ORB,
             OPPONENT_BRIEF = OPPONENTEXTRACTOR + BRIEF,
             OPPONENT_BRISK = OPPONENTEXTRACTOR + BRISK,
-            OPPONENT_FREAK = OPPONENTEXTRACTOR + FREAK;
+            OPPONENT_FREAK = OPPONENTEXTRACTOR + FREAK,
+            OPPONENT_AKAZE = OPPONENTEXTRACTOR + AKAZE;

     //
-    // C++: void javaDescriptorExtractor::compute(Mat image, vector_KeyPoint& keypoints, Mat descriptors)
+    // C++: static Ptr_javaDescriptorExtractor create(int extractorType)
     //

-/**
- * Computes the descriptors for a set of keypoints detected in an image (first
- * variant) or image set (second variant).
- *
- * @param image Image.
- * @param keypoints Input collection of keypoints. Keypoints for which a
- * descriptor cannot be computed are removed and the remaining ones may be
- * reordered. Sometimes new keypoints can be added, for example:
- * SIFT duplicates a keypoint with several dominant orientations
- * (for each orientation).
- * @param descriptors Computed descriptors. In the second variant of the method
- * descriptors[i] are descriptors computed for a keypoints[i].
- * Row j in descriptors (or descriptors[i])
- * is the descriptor for the j-th keypoint.
- *
- * @see org.opencv.features2d.DescriptorExtractor.compute
- */
- public void compute(Mat image, MatOfKeyPoint keypoints, Mat descriptors)
- {
- Mat keypoints_mat = keypoints;
- compute_0(nativeObj, image.nativeObj, keypoints_mat.nativeObj, descriptors.nativeObj);
-
- return;
- }
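For orientation, a minimal sketch of the detect-then-compute pipeline this method belongs to (the file name "scene.jpg" is a placeholder; FeatureDetector is assumed to be available alongside DescriptorExtractor in this binding):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorExtractor;
import org.opencv.features2d.FeatureDetector;
import org.opencv.imgcodecs.Imgcodecs;

public class ComputeExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat img = Imgcodecs.imread("scene.jpg", Imgcodecs.IMREAD_GRAYSCALE);

        FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
        DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);

        MatOfKeyPoint kp = new MatOfKeyPoint();
        Mat desc = new Mat();
        detector.detect(img, kp);
        extractor.compute(img, kp, desc); // one descriptor row per surviving keypoint
        System.out.println(desc.rows() + " descriptors of length " + desc.cols());
    }
}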
-
-
- //
- // C++: void javaDescriptorExtractor::compute(vector_Mat images, vector_vector_KeyPoint& keypoints, vector_Mat& descriptors)
- //
-
-/**
- * Computes the descriptors for a set of keypoints detected in an image (first
- * variant) or image set (second variant).
- *
- * @param images Image set.
- * @param keypoints Input collection of keypoints. Keypoints for which a
- * descriptor cannot be computed are removed and the remaining ones may be
- * reordered. Sometimes new keypoints can be added, for example:
- * SIFT duplicates a keypoint with several dominant orientations
- * (for each orientation).
- * @param descriptors Computed descriptors. In the second variant of the method
- * descriptors[i] are descriptors computed for a keypoints[i].
- * Row j in descriptors (or descriptors[i])
- * is the descriptor for the j-th keypoint.
- *
- * @see org.opencv.features2d.DescriptorExtractor.compute
- */
-    public void compute(List<Mat> images, List<MatOfKeyPoint> keypoints, List<Mat> descriptors)

-/**
- * Creates a descriptor extractor by name.
- *
- * The current implementation supports the following types of a descriptor
- * extractor:
- *   "SIFT" -- "SIFT"
- *   "SURF" -- "SURF"
- *   "BRIEF" -- "BriefDescriptorExtractor"
- *   "BRISK" -- "BRISK"
- *   "ORB" -- "ORB"
- *   "FREAK" -- "FREAK"
- *
- * A combined format is also supported: descriptor extractor adapter name
- * ("Opponent" -- "OpponentColorDescriptorExtractor") + descriptor
- * extractor name (see above), for example: "OpponentSIFT".
- *
- * @see org.opencv.features2d.DescriptorExtractor.create
- */
diff --git a/imaging-utils/src/main/java/org/opencv/features2d/DescriptorMatcher.java b/imaging-utils/src/main/java/org/opencv/features2d/DescriptorMatcher.java
--- a/imaging-utils/src/main/java/org/opencv/features2d/DescriptorMatcher.java
+++ b/imaging-utils/src/main/java/org/opencv/features2d/DescriptorMatcher.java
 package org.opencv.features2d;

-/**
- * Abstract base class for matching keypoint descriptors. It has two groups of
- * match methods: for matching descriptors of an image with another image or
- * with an image set.
- *
- * class DescriptorMatcher
- * // C++ code:
- * {
- * public:
- *     virtual ~DescriptorMatcher();
- *
- *     virtual void add(const vector<Mat>& descriptors);
- *     const vector<Mat>& getTrainDescriptors() const;
- *     virtual void clear();
- *     bool empty() const;
- *     virtual bool isMaskSupported() const = 0;
- *     virtual void train();
- *
- *     /*
- *      * Group of methods to match descriptors from an image pair.
- *      */
- *     void match(const Mat& queryDescriptors, const Mat& trainDescriptors,
- *                vector<DMatch>& matches, const Mat& mask=Mat()) const;
- *     void knnMatch(const Mat& queryDescriptors, const Mat& trainDescriptors,
- *                   vector<vector<DMatch> >& matches, int k,
- *                   const Mat& mask=Mat(), bool compactResult=false) const;
- *     void radiusMatch(const Mat& queryDescriptors, const Mat& trainDescriptors,
- *                      vector<vector<DMatch> >& matches, float maxDistance,
- *                      const Mat& mask=Mat(), bool compactResult=false) const;
- *
- *     /*
- *      * Group of methods to match descriptors from one image to an image set.
- *      */
- *     void match(const Mat& queryDescriptors, vector<DMatch>& matches,
- *                const vector<Mat>& masks=vector<Mat>());
- *     void knnMatch(const Mat& queryDescriptors, vector<vector<DMatch> >& matches,
- *                   int k, const vector<Mat>& masks=vector<Mat>(),
- *                   bool compactResult=false);
- *     void radiusMatch(const Mat& queryDescriptors, vector<vector<DMatch> >& matches,
- *                      float maxDistance, const vector<Mat>& masks=vector<Mat>(),
- *                      bool compactResult=false);
- *
- *     virtual void read(const FileNode&);
- *     virtual void write(FileStorage&) const;
- *
- *     virtual Ptr<DescriptorMatcher> clone(bool emptyTrainData=false) const = 0;
- *     static Ptr<DescriptorMatcher> create(const string& descriptorMatcherType);
- *
- * protected:
- *     vector<Mat> trainDescCollection;
- * };
- *
- * @see org.opencv.features2d.DescriptorMatcher : public Algorithm
- */
-public class DescriptorMatcher {
-
-    protected final long nativeObj;
-    protected DescriptorMatcher(long addr) { nativeObj = addr; }
+// C++: class DescriptorMatcher
+//javadoc: DescriptorMatcher
+public class DescriptorMatcher extends Algorithm {
+
+    protected DescriptorMatcher(long addr) { super(addr); }

     public static final int
@@ -115,115 +29,89 @@ public class DescriptorMatcher {

     //
-    // C++: void javaDescriptorMatcher::add(vector_Mat descriptors)
+    // C++: Ptr_DescriptorMatcher clone(bool emptyTrainData = false)
     //

-/**
- * Adds descriptors to train a descriptor collection. If the collection
- * trainDescCollection is not empty, the new descriptors are
- * added to existing train descriptors.
- *
- * @param descriptors Descriptors to add. Each descriptors[i] is a
- * set of descriptors from the same train image.
- *
- * @see org.opencv.features2d.DescriptorMatcher.add
- */
-    public void add(List<Mat> descriptors)

-/**
- * Clears the train descriptor collection.
- *
- * @see org.opencv.features2d.DescriptorMatcher.clear
- */
-    public void clear()
+    //javadoc: DescriptorMatcher::clone()
+    public DescriptorMatcher clone()
     {
-        clear_0(nativeObj);
-
-        return;
+        DescriptorMatcher retVal = new DescriptorMatcher(clone_1(nativeObj));
+
+        return retVal;
     }

     //
-    // C++: javaDescriptorMatcher* javaDescriptorMatcher::jclone(bool emptyTrainData = false)
+    // C++: static Ptr_DescriptorMatcher create(String descriptorMatcherType)
     //

-    public DescriptorMatcher clone(boolean emptyTrainData)
+    //javadoc: DescriptorMatcher::create(descriptorMatcherType)
+    public static DescriptorMatcher create(String descriptorMatcherType)
     {
-        DescriptorMatcher retVal = new DescriptorMatcher(clone_0(nativeObj, emptyTrainData));
+        DescriptorMatcher retVal = new DescriptorMatcher(create_0(descriptorMatcherType));

         return retVal;
     }

-    public DescriptorMatcher clone()
-    {
-        DescriptorMatcher retVal = new DescriptorMatcher(clone_1(nativeObj));
-        return retVal;
-    }
+    //
+    // C++: static Ptr_DescriptorMatcher create(int matcherType)
+    //

+    //javadoc: DescriptorMatcher::create(matcherType)
+    public static DescriptorMatcher create(int matcherType)
+    {
+        DescriptorMatcher retVal = new DescriptorMatcher(create_1(matcherType));
+
+        return retVal;
+    }

     //
-    // C++: static javaDescriptorMatcher* javaDescriptorMatcher::create(int matcherType)
+    // C++: bool empty()
     //

-/**
- * Creates a descriptor matcher of a given type with the default parameters
- * (using default constructor).
- *
- * @param matcherType a matcherType
- *
- * @see org.opencv.features2d.DescriptorMatcher.create
- */
-    public static DescriptorMatcher create(int matcherType)
+    //javadoc: DescriptorMatcher::empty()
+    public boolean empty()
     {
-        DescriptorMatcher retVal = new DescriptorMatcher(create_0(matcherType));
+        boolean retVal = empty_0(nativeObj);

         return retVal;
     }

     //
-    // C++: bool javaDescriptorMatcher::empty()
+    // C++: bool isMaskSupported()
     //

-/**
- * Returns true if there are no train descriptors in the collection.
- *
- * @see org.opencv.features2d.DescriptorMatcher.empty
- */
-    public boolean empty()
+    //javadoc: DescriptorMatcher::isMaskSupported()
+    public boolean isMaskSupported()
     {
-        boolean retVal = empty_0(nativeObj);
+        boolean retVal = isMaskSupported_0(nativeObj);

         return retVal;
     }

     //
-    // C++: vector_Mat javaDescriptorMatcher::getTrainDescriptors()
+    // C++: vector_Mat getTrainDescriptors()
     //

-/**
- * Returns a constant link to the train descriptor collection trainDescCollection.
- */
-/**
- * Returns true if the descriptor matcher supports masking permissible matches.
- *
- * @see org.opencv.features2d.DescriptorMatcher.isMaskSupported
- */
-    public boolean isMaskSupported()
+    //javadoc: DescriptorMatcher::add(descriptors)
+    public void add(List<Mat> descriptors)

-/**
- * Finds the k best matches for each descriptor from a query set.
- * - *These extended variants of "DescriptorMatcher.match" methods find several - * best matches for each query descriptor. The matches are returned in the - * distance increasing order. See "DescriptorMatcher.match" for the details - * about query and train descriptors.
- * - * @param queryDescriptors Query set of descriptors. - * @param trainDescriptors Train set of descriptors. This set is not added to - * the train descriptors collection stored in the class object. - * @param matches Matches. Eachmatches[i]
is k or less matches for
- * the same query descriptor.
- * @param k Count of best matches found per each query descriptor or less if a
- * query descriptor has less than k possible matches in total.
- * @param mask Mask specifying permissible matches between an input query and
- * train matrices of descriptors.
- * @param compactResult Parameter used when the mask (or masks) is not empty. If
- * compactResult
is false, the matches
vector has the
- * same size as queryDescriptors
rows. If compactResult
- * is true, the matches
vector does not contain matches for fully
- * masked-out query descriptors.
- *
- * @see org.opencv.features2d.DescriptorMatcher.knnMatch
- */
+ // C++: void knnMatch(Mat queryDescriptors, Mat trainDescriptors, vector_vector_DMatch& matches, int k, Mat mask = Mat(), bool compactResult = false)
+ //
+
+ //javadoc: DescriptorMatcher::knnMatch(queryDescriptors, trainDescriptors, matches, k, mask, compactResult)
     public void knnMatch(Mat queryDescriptors, Mat trainDescriptors, List<MatOfDMatch> matches, int k, Mat mask, boolean compactResult)
     {
         Mat matches_mat = new Mat();
         knnMatch_0(nativeObj, queryDescriptors.nativeObj, trainDescriptors.nativeObj, matches_mat.nativeObj, k, mask.nativeObj, compactResult);
         Converters.Mat_to_vector_vector_DMatch(matches_mat, matches);
         return;
     }

-/**
- * Finds the k best matches for each descriptor from a query set.
- *
- * These extended variants of "DescriptorMatcher.match" methods find several
- * best matches for each query descriptor. The matches are returned in the
- * distance increasing order. See "DescriptorMatcher.match" for the details
- * about query and train descriptors.
- *
- * @param queryDescriptors Query set of descriptors.
- * @param trainDescriptors Train set of descriptors. This set is not added to
- * the train descriptors collection stored in the class object.
- * @param matches Matches. Each matches[i] is k or less matches for
- * the same query descriptor.
- * @param k Count of best matches found per each query descriptor or less if a
- * query descriptor has less than k possible matches in total.
- *
- * @see org.opencv.features2d.DescriptorMatcher.knnMatch
- */
+ //javadoc: DescriptorMatcher::knnMatch(queryDescriptors, trainDescriptors, matches, k)
     public void knnMatch(Mat queryDescriptors, Mat trainDescriptors, List<MatOfDMatch> matches, int k)
     {
         Mat matches_mat = new Mat();
         knnMatch_1(nativeObj, queryDescriptors.nativeObj, trainDescriptors.nativeObj, matches_mat.nativeObj, k);
         Converters.Mat_to_vector_vector_DMatch(matches_mat, matches);
         return;
     }

-/**
- * Finds the k best matches for each descriptor from a query set.
- *
- * These extended variants of "DescriptorMatcher.match" methods find several
- * best matches for each query descriptor. The matches are returned in the
- * distance increasing order. See "DescriptorMatcher.match" for the details
- * about query and train descriptors.
- *
- * @param queryDescriptors Query set of descriptors.
- * @param matches Matches. Each matches[i] is k or less matches for
- * the same query descriptor.
- * @param k Count of best matches found per each query descriptor or less if a
- * query descriptor has less than k possible matches in total.
- * @param masks Set of masks. Each masks[i] specifies permissible
- * matches between the input query descriptors and stored train descriptors from
- * the i-th image trainDescCollection[i].
- * @param compactResult Parameter used when the mask (or masks) is not empty. If
- * compactResult is false, the matches vector has the
- * same size as queryDescriptors rows. If compactResult
- * is true, the matches vector does not contain matches for fully
- * masked-out query descriptors.
- *
- * @see org.opencv.features2d.DescriptorMatcher.knnMatch
- */
+ // C++: void knnMatch(Mat queryDescriptors, vector_vector_DMatch& matches, int k, vector_Mat masks = vector_Mat(), bool compactResult = false)
+ //
+
+ //javadoc: DescriptorMatcher::knnMatch(queryDescriptors, matches, k, masks, compactResult)
     public void knnMatch(Mat queryDescriptors, List<MatOfDMatch> matches, int k, List<Mat> masks, boolean compactResult)
     {
         Mat matches_mat = new Mat();
         Mat masks_mat = Converters.vector_Mat_to_Mat(masks);
         knnMatch_2(nativeObj, queryDescriptors.nativeObj, matches_mat.nativeObj, k, masks_mat.nativeObj, compactResult);
         Converters.Mat_to_vector_vector_DMatch(matches_mat, matches);
         return;
     }

-/**
- * Finds the k best matches for each descriptor from a query set.
- *
- * These extended variants of "DescriptorMatcher.match" methods find several
- * best matches for each query descriptor. The matches are returned in the
- * distance increasing order. See "DescriptorMatcher.match" for the details
- * about query and train descriptors.
- *
- * @param queryDescriptors Query set of descriptors.
- * @param matches Matches. Each matches[i] is k or less matches for
- * the same query descriptor.
- * @param k Count of best matches found per each query descriptor or less if a
- * query descriptor has less than k possible matches in total.
- *
- * @see org.opencv.features2d.DescriptorMatcher.knnMatch
- */
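Since the deleted javadoc above describes the classic use of knnMatch, a minimal sketch of the usual Lowe-style ratio test built on it (the random descriptors are stand-ins, the 0.75 threshold is a conventional choice, and the org.opencv.core.DMatch import path assumes the 3.x layout):

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.DMatch;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.features2d.DescriptorMatcher;

public class KnnExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // toy 32-byte binary descriptors standing in for real ORB output
        Mat query = new Mat(4, 32, CvType.CV_8U);
        Mat train = new Mat(50, 32, CvType.CV_8U);
        Core.randu(query, 0, 256);
        Core.randu(train, 0, 256);

        DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        List<MatOfDMatch> knn = new ArrayList<MatOfDMatch>();
        matcher.knnMatch(query, train, knn, 2); // two best matches per query row

        // keep a match only if it is clearly better than the runner-up
        for (MatOfDMatch pair : knn) {
            DMatch[] m = pair.toArray();
            if (m.length == 2 && m[0].distance < 0.75f * m[1].distance) {
                System.out.println("good match: " + m[0].queryIdx + " -> " + m[0].trainIdx);
            }
        }
    }
}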
+ //javadoc: DescriptorMatcher::knnMatch(queryDescriptors, matches, k)
     public void knnMatch(Mat queryDescriptors, List<MatOfDMatch> matches, int k)
     {
         Mat matches_mat = new Mat();
         knnMatch_3(nativeObj, queryDescriptors.nativeObj, matches_mat.nativeObj, k);
         Converters.Mat_to_vector_vector_DMatch(matches_mat, matches);
         return;
     }

-/**
- * Finds the best match for each descriptor from a query set.
- *
- * In the first variant of this method, the train descriptors are passed as an
- * input argument. In the second variant of the method, train descriptors
- * collection that was set by DescriptorMatcher.add is used.
- * Optional mask (or masks) can be passed to specify which query and training
- * descriptors can be matched. Namely, queryDescriptors[i] can be
- * matched with trainDescriptors[j] only if mask.at<uchar>(i,j)
- * is non-zero.
- *
- * @param matches Matches. If a query descriptor is masked out in
- * mask, no match is added for this descriptor. So,
- * matches size may be smaller than the query descriptors count.
- * @param mask Mask specifying permissible matches between an input query and
- * train matrices of descriptors.
- *
- * @see org.opencv.features2d.DescriptorMatcher.match
- */
+ // C++: void match(Mat queryDescriptors, Mat trainDescriptors, vector_DMatch& matches, Mat mask = Mat())
+ //
+
+ //javadoc: DescriptorMatcher::match(queryDescriptors, trainDescriptors, matches, mask)
public void match(Mat queryDescriptors, Mat trainDescriptors, MatOfDMatch matches, Mat mask)
{
Mat matches_mat = matches;
match_0(nativeObj, queryDescriptors.nativeObj, trainDescriptors.nativeObj, matches_mat.nativeObj, mask.nativeObj);
-
+
return;
}
-/**
- * Finds the best match for each descriptor from a query set.
- *
- * In the first variant of this method, the train descriptors are passed as an
- * input argument. In the second variant of the method, train descriptors
- * collection that was set by DescriptorMatcher.add is used.
- * Optional mask (or masks) can be passed to specify which query and training
- * descriptors can be matched. Namely, queryDescriptors[i] can be
- * matched with trainDescriptors[j] only if mask.at<uchar>(i,j)
- * is non-zero.
- *
- * @param matches Matches. If a query descriptor is masked out in
- * mask, no match is added for this descriptor. So,
- * matches size may be smaller than the query descriptors count.
- *
- * @see org.opencv.features2d.DescriptorMatcher.match
- */
+ //javadoc: DescriptorMatcher::match(queryDescriptors, trainDescriptors, matches)
public void match(Mat queryDescriptors, Mat trainDescriptors, MatOfDMatch matches)
{
Mat matches_mat = matches;
match_1(nativeObj, queryDescriptors.nativeObj, trainDescriptors.nativeObj, matches_mat.nativeObj);
-
+
return;
}
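A minimal end-to-end sketch of the match method retained above (the random descriptors are stand-ins for real extractor output; MatchExample is an illustrative name):

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.features2d.DescriptorMatcher;

public class MatchExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // toy descriptors: 4 query rows and 5 train rows of 32-byte binary descriptors
        Mat query = new Mat(4, 32, CvType.CV_8U);
        Mat train = new Mat(5, 32, CvType.CV_8U);
        Core.randu(query, 0, 256);
        Core.randu(train, 0, 256);

        DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(query, train, matches); // best train match per query row
        System.out.println(matches.toArray().length + " matches");
    }
}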
//
- // C++: void javaDescriptorMatcher::match(Mat queryDescriptors, vector_DMatch& matches, vector_Mat masks = vectorFinds the best match for each descriptor from a query set.
- * - *In the first variant of this method, the train descriptors are passed as an
- * input argument. In the second variant of the method, train descriptors
- * collection that was set by DescriptorMatcher.add
is used.
- * Optional mask (or masks) can be passed to specify which query and training
- * descriptors can be matched. Namely, queryDescriptors[i]
can be
- * matched with trainDescriptors[j]
only if mask.at
- * is non-zero.
mask
, no match is added for this descriptor. So,
- * matches
size may be smaller than the query descriptors count.
- * @param masks Set of masks. Each masks[i]
specifies permissible
- * matches between the input query descriptors and stored train descriptors from
- * the i-th image trainDescCollection[i]
.
- *
- * @see org.opencv.features2d.DescriptorMatcher.match
- */
+ // C++: void match(Mat queryDescriptors, vector_DMatch& matches, vector_Mat masks = vector_Mat())
+ //
+
+ //javadoc: DescriptorMatcher::match(queryDescriptors, matches, masks)
     public void match(Mat queryDescriptors, MatOfDMatch matches, List<Mat> masks)
     {
         Mat matches_mat = matches;
         Mat masks_mat = Converters.vector_Mat_to_Mat(masks);
         match_2(nativeObj, queryDescriptors.nativeObj, matches_mat.nativeObj, masks_mat.nativeObj);
         return;
     }

-/**
- * Finds the best match for each descriptor from a query set.
- *
- * In the first variant of this method, the train descriptors are passed as an
- * input argument. In the second variant of the method, train descriptors
- * collection that was set by DescriptorMatcher.add is used.
- * Optional mask (or masks) can be passed to specify which query and training
- * descriptors can be matched. Namely, queryDescriptors[i] can be
- * matched with trainDescriptors[j] only if mask.at<uchar>(i,j)
- * is non-zero.
- *
- * @param matches Matches. If a query descriptor is masked out in
- * mask, no match is added for this descriptor. So,
- * matches size may be smaller than the query descriptors count.
- *
- * @see org.opencv.features2d.DescriptorMatcher.match
- */
public void match(Mat queryDescriptors, MatOfDMatch matches)
{
Mat matches_mat = matches;
match_3(nativeObj, queryDescriptors.nativeObj, matches_mat.nativeObj);
-
+
return;
}
//
- // C++: void javaDescriptorMatcher::radiusMatch(Mat queryDescriptors, Mat trainDescriptors, vector_vector_DMatch& matches, float maxDistance, Mat mask = Mat(), bool compactResult = false)
- //
-
-/**
- * For each query descriptor, finds the training descriptors not farther than
- * the specified distance.
- *
- * For each query descriptor, the methods find such training descriptors that
- * the distance between the query descriptor and the training descriptor is
- * equal or smaller than maxDistance. Found matches are returned in
- * the distance increasing order.
- *
- * @param compactResult Parameter used when the mask (or masks) is not empty. If
- * compactResult is false, the matches vector has the
- * same size as queryDescriptors rows. If compactResult
- * is true, the matches vector does not contain matches for fully
- * masked-out query descriptors.
- *
- * @see org.opencv.features2d.DescriptorMatcher.radiusMatch
- */
+ // C++: void radiusMatch(Mat queryDescriptors, Mat trainDescriptors, vector_vector_DMatch& matches, float maxDistance, Mat mask = Mat(), bool compactResult = false)
+ //
+
+ //javadoc: DescriptorMatcher::radiusMatch(queryDescriptors, trainDescriptors, matches, maxDistance, mask, compactResult)
     public void radiusMatch(Mat queryDescriptors, Mat trainDescriptors, List<MatOfDMatch> matches, float maxDistance, Mat mask, boolean compactResult)
     {
         Mat matches_mat = new Mat();
         radiusMatch_0(nativeObj, queryDescriptors.nativeObj, trainDescriptors.nativeObj, matches_mat.nativeObj, maxDistance, mask.nativeObj, compactResult);
         Converters.Mat_to_vector_vector_DMatch(matches_mat, matches);
         return;
     }

-/**
- * For each query descriptor, the methods find such training descriptors that
- * the distance between the query descriptor and the training descriptor is
- * equal or smaller than maxDistance. Found matches are returned in
- * the distance increasing order.
- */
-/**
- * For each query descriptor, finds the training descriptors not farther than
- * the specified distance.
- *
- * For each query descriptor, the methods find such training descriptors that
- * the distance between the query descriptor and the training descriptor is
- * equal or smaller than maxDistance. Found matches are returned in
- * the distance increasing order.
- *
- * @param masks Set of masks. Each masks[i] specifies permissible
- * matches between the input query descriptors and stored train descriptors from
- * the i-th image trainDescCollection[i].
- * @param compactResult Parameter used when the mask (or masks) is not empty. If
- * compactResult is false, the matches vector has the
- * same size as queryDescriptors rows. If compactResult
- * is true, the matches vector does not contain matches for fully
- * masked-out query descriptors.
- *
- * @see org.opencv.features2d.DescriptorMatcher.radiusMatch
- */
+ // C++: void radiusMatch(Mat queryDescriptors, vector_vector_DMatch& matches, float maxDistance, vector_Mat masks = vector_Mat(), bool compactResult = false)
+ //
+
+ //javadoc: DescriptorMatcher::radiusMatch(queryDescriptors, matches, maxDistance, masks, compactResult)
     public void radiusMatch(Mat queryDescriptors, List<MatOfDMatch> matches, float maxDistance, List<Mat> masks, boolean compactResult)
     {
         Mat matches_mat = new Mat();
         Mat masks_mat = Converters.vector_Mat_to_Mat(masks);
         radiusMatch_2(nativeObj, queryDescriptors.nativeObj, matches_mat.nativeObj, maxDistance, masks_mat.nativeObj, compactResult);
         Converters.Mat_to_vector_vector_DMatch(matches_mat, matches);
         return;
     }

-/**
- * For each query descriptor, finds the training descriptors not farther than
- * the specified distance.
- *
- * For each query descriptor, the methods find such training descriptors that
- * the distance between the query descriptor and the training descriptor is
- * equal or smaller than maxDistance. Found matches are returned in
- * the distance increasing order.
- */
-/**
- * Trains a descriptor matcher.
- *
- * Trains a descriptor matcher (for example, the flann index). In all methods to
- * match, the method train() is run every time before matching.
- * Some descriptor matchers (for example, BruteForceMatcher) have
- * an empty implementation of this method. Other matchers really train their
- * inner structures (for example, FlannBasedMatcher trains
- * flann.Index).
- */