diff --git a/docs/reference/body-segmentation.md b/docs/reference/body-segmentation.md
index 5370be3..e85ae90 100644
--- a/docs/reference/body-segmentation.md
+++ b/docs/reference/body-segmentation.md
@@ -146,9 +146,85 @@ You have successfully built the BodySegmentation Mask Body Part example! Press t
?> If you have any questions or spot something unclear in this step-by-step code guide, we'd love to hear from you! Join us on [Discord](https://discord.com/invite/3CVauZMSt7) and let us know how we can make it better.
+## Properties
+
+### bodySegmentation.modelName
+
+- **Description**
+ - The name of the model being used, typically "BodyPix" or "SelfieSegmentation".
+- **Type**
+ - String
+
+---
+
+### bodySegmentation.video
+
+- **Description**
+ - The video element on which segmentation is performed.
+- **Type**
+ - HTMLVideoElement
+
+---
+
+### bodySegmentation.model
+
+- **Description**
+ - The TensorFlow.js model used for body segmentation.
+- **Type**
+ - tf.LayersModel
+
+---
+
+### bodySegmentation.config
+
+- **Description**
+ - Configuration options provided by the user for the model.
+- **Type**
+ - Object
+
+---
+
+### bodySegmentation.runtimeConfig
+
+- **Description**
+ - Configuration options related to the runtime behavior of the model.
+- **Type**
+ - Object
+
+---
+
+### bodySegmentation.detectMedia
+
+- **Description**
+ - The media element (image, video, or canvas) on which body segmentation is performed.
+- **Type**
+ - HTMLElement
+
+---
+
+### bodySegmentation.detectCallback
+
+- **Description**
+ - The callback function to handle body segmentation results.
+- **Type**
+ - Function
+
+---
+
+### bodySegmentation.ready
+
+- **Description**
+ - A promise that resolves when the model has loaded.
+- **Type**
+ - Promise
+
+---
+
## Methods
-#### ml5.bodySegmentation()
+### ml5.bodySegmentation()
This method is used to initialize the bodySegmentation object.
@@ -158,7 +234,11 @@ const bodySegmentation = ml5.bodySegmentation(?modelName, ?options, ?callback);
**Parameters:**
-- **modelName**: Optional: A string specifying which model to use, either `SelfieSegmentation` or `BodyPix`.
+- **modelName**: Optional. A string specifying which model to use. The available models are:
+  - _SelfieSegmentation_ (default): A model that can be used to segment people from the background.
+  - _BodyPix_: A model that can be used to segment people and body parts.
+
+
- **options**: Optional. An object to change the default configuration of the model. See the example options object:
@@ -170,6 +250,12 @@ const bodySegmentation = ml5.bodySegmentation(?modelName, ?options, ?callback);
}
```
+ Important Option:
+ - **maskType**: The type of mask to output. The options are:
+ - _background_: A mask of the background. The result is an image with transparent pixels on the background and black pixels on the person.
+ - _body_: A mask of the person. The result is an image with black pixels on the background and transparent pixels on the person.
+    - _parts_: **BodyPix** only. A mask of the body parts. The result is an image with white pixels on the background and variously colored pixels for each body part.
+
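+For example, a minimal sketch (assuming a p5.js `preload` setup) that requests a body-parts mask from the BodyPix model:
+
+```javascript
+let bodySegmentation;
+
+function preload() {
+  // "parts" masks are only available with the BodyPix model
+  bodySegmentation = ml5.bodySegmentation("BodyPix", { maskType: "parts" });
+}
+```
+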
[More info on options for SelfieSegmentation model with tfjs runtime](https://github.com/tensorflow/tfjs-models/tree/master/body-segmentation/src/selfie_segmentation_tfjs#create-a-detector).
[More info on options for SelfieSegmentation model with mediaPipe runtime](https://github.com/tensorflow/tfjs-models/tree/master/body-segmentation/src/selfie_segmentation_mediapipe#create-a-detector).
@@ -179,9 +265,12 @@ const bodySegmentation = ml5.bodySegmentation(?modelName, ?options, ?callback);
- **callback(bodySegmentation, error)**: Optional. A function to run once the model has been loaded. Alternatively, call `ml5.bodySegmentation()` within the p5 `preload` function.
**Returns:**
-The bodySegmentation object.
-#### bodySegmentation.detectStart()
+- **Object**: The bodySegmentation object. This object contains the methods to start and stop the segmentation process.
+
+---
+
+### bodySegmentation.detectStart()
This method repeatedly outputs segmentation masks on an image media through a callback function.
@@ -208,7 +297,14 @@ The `output` will contain an object with the following properties. Based on the
The `data` array contains the underlying segmentation result of the image, stored as one number per pixel of the input image. (With the BodyPix model, the right hand is e.g. the number 11, which is the same as `bodySegmentation.LEFT_HAND`.)
-#### bodySegmentation.detectStop()
+  The contents of _results.mask_ under the different _maskType_ options:
+  - _background_: A mask of the background. _results.mask_ is an image with transparent pixels on the background and black pixels on the person.
+  - _body_: A mask of the person. _results.mask_ is an image with black pixels on the background and transparent pixels on the person.
+  - _parts_: **BodyPix** only. _results.mask_ is an image with white pixels on the background and variously colored pixels for each body part.
+
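+A minimal sketch of a callback that layers the returned mask over the video (assuming a p5.js capture in `video`, and that `results.mask` renders as a p5 image):
+
+```javascript
+let segmentation;
+
+function gotResults(results) {
+  // store the most recent segmentation result
+  segmentation = results;
+}
+
+function draw() {
+  image(video, 0, 0);
+  if (segmentation) {
+    // layer the mask over the current video frame
+    image(segmentation.mask, 0, 0);
+  }
+}
+```
+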
+---
+
+### bodySegmentation.detectStop()
This method can be called after a call to `bodySegmentation.detectStart` to stop the repeating segmentation.
@@ -216,7 +312,9 @@ This method can be called after a call to `bodySegmentation.detectStart` to stop
bodySegmentation.detectStop();
```
-#### bodySegmentation.detect()
+---
+
+### bodySegmentation.detect()
This method asynchronously outputs a single segmentation mask on an image media when called.
@@ -232,3 +330,4 @@ bodySegmentation.detect(media, ?callback);
**Returns:**
A promise that resolves to the segmentation output.
+
diff --git a/docs/reference/bodypose.md b/docs/reference/bodypose.md
index 82be755..74d4f4a 100644
--- a/docs/reference/bodypose.md
+++ b/docs/reference/bodypose.md
@@ -159,7 +159,7 @@ Within each pose, we only want to draw the skeleton connections that the model h
We iterate through the connections array, with each item being a link of `pointA` and `pointB`. For instance, `connections[1]` is `[0, 2]`, where 0 is the index of `pointA` and 2 is the index of `pointB`. Thus, `let pointAIndex = connections[j][0];` means we get the starting point (pointA) of the link `j`, and `let pointBIndex = connections[j][1];` means we get the ending point (pointB) of the link `j`.
-Use the indices to retrieve the `pointA` and `pointB` objects from the `pose.keypoints`. As with all keypoints, `pointA` is an object with properties `x`, `y`, and `score`.
+Use the indices to retrieve the `pointA` and `pointB` objects from the `pose.keypoints`. As with all keypoints, `pointA` is an object with properties `x`, `y`, and `confidence`.
```javascript
for (let j = 0; j < connections.length; j++) {
@@ -222,6 +222,97 @@ Voila! You have successfully built the BodyPose model to detect and draw body po
?> If you have any questions or spot something unclear in this step-by-step code guide, we'd love to hear from you! Join us on [Discord](https://discord.com/invite/3CVauZMSt7) and let us know how we can make it better.
+## Properties
+
+### bodyPose.modelName
+
+- **Description**
+ - The name of the model being used, either "MoveNet" or "BlazePose".
+- **Type**
+ - String
+
+---
+
+### bodyPose.model
+
+- **Description**
+ - The TensorFlow.js model used for pose detection.
+- **Type**
+ - tf.LayersModel
+
+---
+
+### bodyPose.config
+
+- **Description**
+ - Configuration options provided by the user for the model.
+- **Type**
+ - Object
+
+---
+
+### bodyPose.runtimeConfig
+
+- **Description**
+ - Configuration options related to the runtime behavior of the model.
+- **Type**
+ - Object
+
+---
+
+### bodyPose.detectMedia
+
+- **Description**
+ - The media element (image, video, or canvas) on which pose detection is performed.
+- **Type**
+ - HTMLElement
+
+---
+
+### bodyPose.detectCallback
+
+- **Description**
+ - The callback function to handle pose detection results.
+- **Type**
+ - Function
+
+---
+
+### bodyPose.detecting
+
+- **Description**
+ - A flag indicating whether the detection loop is currently running.
+- **Type**
+ - Boolean
+
+---
+
+### bodyPose.signalStop
+
+- **Description**
+ - A flag used to signal the detection loop to stop.
+- **Type**
+ - Boolean
+
+---
+
+### bodyPose.prevCall
+
+- **Description**
+ - Tracks the previous call to `detectStart` or `detectStop` to handle warnings.
+- **Type**
+ - String
+
+---
+
+### bodyPose.ready
+
+- **Description**
+ - A promise that resolves when the model has loaded.
+- **Type**
+ - Promise
+
+
## Methods
### ml5.bodypose()
@@ -240,12 +331,66 @@ let bodypose = ml5.bodypose(?model, ?options, ?callback);
- **options**: Optional. An object to change the default configuration of the model. The available options differ depending on which of the two underlying models are used.
-See See the [MoveNet documentation](https://github.com/tensorflow/tfjs-models/tree/master/pose-detection/src/movenet#create-a-detector) and the [BlazePose documentation](https://github.com/tensorflow/tfjs-models/tree/master/pose-detection/src/blazepose_tfjs#create-a-detector) for more information on available options.
+ The default and available options are:
+
+ ```javascript
+ {
+    modelType: "MULTIPOSE_LIGHTNING", // "MULTIPOSE_LIGHTNING", "SINGLEPOSE_LIGHTNING", or "SINGLEPOSE_THUNDER"
+ enableSmoothing: true,
+ minPoseScore: 0.25,
+ multiPoseMaxDimension: 256,
+ enableTracking: true,
+ trackerType: "boundingBox", // "keypoint" or "boundingBox"
+ trackerConfig: {},
+ modelUrl: undefined,
+ }
+ ```
+
+ Options for both models:
+ - _modelType_ - Optional
+ - String: The type of model to use. Default: "MULTIPOSE_LIGHTNING".
+ - _enableSmoothing_ - Optional
+ - Boolean: Whether to smooth the pose landmarks across different input images to reduce jitter. Default: true.
+
+ Options for the MoveNet model only:
+ - _minPoseScore_ - Optional
+ - Number: The minimum confidence score for a pose to be detected. Default: 0.25.
+ - _multiPoseMaxDimension_ - Optional
+    - Number: The target maximum dimension to use as the input to the multi-pose model. Must be a multiple of 32. Default: 256.
+ - _enableTracking_ - Optional
+ - Boolean: Track each person across the frame with a unique ID. Default: true.
+ - _trackerType_ - Optional
+ - String: Specify what type of tracker to use. Default: "boundingBox".
+ - _trackerConfig_ - Optional
+ - Object: Specify tracker configurations. Use tf.js settings by default.
+
+ Options for the BlazePose model only:
+  - _runtime_ - Optional
+    - String: Either "tfjs" or "mediapipe". Default: "tfjs".
+  - _enableSegmentation_ - Optional
+    - Boolean: Whether to generate the segmentation mask.
+  - _smoothSegmentation_ - Optional
+    - Boolean: Whether to filter segmentation masks across different input images to reduce jitter.
+
+  For using custom or offline models:
+ - _modelUrl_ - Optional
+ - String: The file path or URL to the MoveNet model.
+ - _solutionPath_ - Optional
+ - String: The file path or URL to the mediaPipe BlazePose model.
+ - _detectorModelUrl_ - Optional
+ - String: The file path or URL to the tfjs BlazePose detector model.
+ - _landmarkModelUrl_ - Optional
+ - String: The file path or URL to the tfjs BlazePose landmark model.
+
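+  For example (hypothetical option values), a single-pose MoveNet detector without tracking:
+
+  ```javascript
+  let bodypose = ml5.bodypose("MoveNet", {
+    modelType: "SINGLEPOSE_LIGHTNING",
+    enableTracking: false,
+  });
+  ```
+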
+  See the [MoveNet documentation](https://github.com/tensorflow/tfjs-models/tree/master/pose-detection/src/movenet#create-a-detector) and the [BlazePose documentation](https://github.com/tensorflow/tfjs-models/tree/master/pose-detection/src/blazepose_tfjs#create-a-detector) for more information on available options.
- **callback(bodypose, error)**: Optional. A "callback" function that runs when the model has been successfully loaded. Most ml5.js examples call `ml5.bodyPose()` in the p5.js `preload()` function and no callback is needed.
**Returns:**
-The bodyPose object.
+
+- **Object**: The bodyPose object. This object contains the methods to start and stop the pose detection process.
+
+---
### bodypose.detectStart()
@@ -257,8 +402,7 @@ bodypose.detectStart(media, callback);
**Parameters:**
-- **media**: An HMTL or p5.js image, video, or canvas element to run the estimation on.
-
+- **media**: An HTML or p5.js image, video, or canvas element to run the estimation on.
- **callback(results, error)**: A callback function to handle the results of the pose estimation. See below for an example of the model's results:
```javascript
@@ -266,7 +410,7 @@ bodypose.detectStart(media, callback);
{
box: { width, height, xMax, xMin, yMax, yMin },
id: 1,
- keypoints: [{ x, y, score, name }, ...],
+ keypoints: [{ x, y, confidence, name }, ...],
left_ankle: { x, y, confidence },
left_ear: { x, y, confidence },
left_elbow: { x, y, confidence },
@@ -302,8 +446,8 @@ bodypose.detectStart(media, callback);
{
box: { width, height, xMax, xMin, yMax, yMin },
id: 1,
- keypoints: [{ x, y, z, score, name }, ...],
- keypoints3D: [{ x, y, z, score, name }, ...],
+ keypoints: [{ x, y, z, confidence, name }, ...],
+ keypoints3D: [{ x, y, z, confidence, name }, ...],
left_ankle: { x, y, z, confidence },
left_ear: { x, y, z, confidence },
left_elbow: { x, y, z, confidence },
@@ -316,6 +460,8 @@ bodypose.detectStart(media, callback);
?> The `keypoints3D` array contains the 3D coordinates of the keypoints, with the `z` property representing the depth of each keypoint. The 2D `keypoints` still include z-coordinates to provide additional depth information. This helps in understanding the relative positioning of body parts, enhancing the accuracy of applications that primarily work with 2D data.
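+
+A minimal sketch of handling these results (assuming the sketch stores them in a global `poses` array and draws a webcam `video`):
+
+```javascript
+let poses = [];
+
+function gotPoses(results) {
+  poses = results;
+}
+
+function draw() {
+  image(video, 0, 0);
+  for (let pose of poses) {
+    for (let keypoint of pose.keypoints) {
+      // only draw keypoints the model is reasonably confident about
+      if (keypoint.confidence > 0.1) {
+        circle(keypoint.x, keypoint.y, 10);
+      }
+    }
+  }
+}
+```
+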
+---
+
### bodypose.detectStop()
This method can be called to stop the continuous pose estimation process.
@@ -324,6 +470,28 @@ This method can be called to stop the continuous pose estimation process.
bodypose.detectStop();
```
+For example, you can toggle the pose estimation with a click event in p5.js by using this function as follows:
+
+```javascript
+// Toggle detection when mouse is pressed
+function mousePressed() {
+ toggleDetection();
+}
+
+// Call this function to start and stop detection
+function toggleDetection() {
+ if (isDetecting) {
+ bodypose.detectStop();
+ isDetecting = false;
+ } else {
+    bodypose.detectStart(video, gotPoses);
+ isDetecting = true;
+ }
+}
+```
+
+---
+
### bodypose.detect()
This method runs the pose estimation on an image once, not continuously!
@@ -336,4 +504,40 @@ bodypose.detect(media, ?callback);
- **media**: An HTML or p5.js image, video, or canvas element to run the estimation on.
-- **callback(output, error)**: Optional. A callback function to handle the results of the pose estimation. See the results above for an example of the model's output.
+- **callback(results, error)**: Optional. A callback function to handle the results of the pose estimation. See the results above for an example of the model's output.
+
+---
+
+### bodypose.getSkeleton()
+
+This method returns an array of arrays, where each sub-array contains the indices of the connected keypoints.
+
+```javascript
+const connections = bodypose.getSkeleton();
+```
+
+**Returns:**
+
+- **Array**: An array of arrays representing the connections between keypoints. For example, the BlazePose model returns:
+
+```js
+[
+ [0,1],
+ [0,4],
+ [1,2],
+ ...
+ [28,32],
+ [29,31],
+ [30,32]
+]
+```
+
+This array represents the connections between keypoints. Please refer to these images to understand the connections:
+
+_MoveNet keypoint connections diagram_
+
+_BlazePose keypoint connections diagram_
+
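+For example, a sketch (assuming `poses` holds results from a `gotPoses` callback) can pair the connection indices with `pose.keypoints` to draw the skeleton:
+
+```javascript
+let connections = bodypose.getSkeleton();
+
+for (let pose of poses) {
+  for (let connection of connections) {
+    let pointA = pose.keypoints[connection[0]];
+    let pointB = pose.keypoints[connection[1]];
+    // draw a line only when both endpoints are confident
+    if (pointA.confidence > 0.1 && pointB.confidence > 0.1) {
+      line(pointA.x, pointA.y, pointB.x, pointB.y);
+    }
+  }
+}
+```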
diff --git a/docs/reference/facemesh.md b/docs/reference/facemesh.md
index cb9d874..0fe70ae 100644
--- a/docs/reference/facemesh.md
+++ b/docs/reference/facemesh.md
@@ -173,14 +173,95 @@ And, that's it! You have successfully built the FaceMesh Keypoints example from
?> If you have any questions or spot something unclear in this step-by-step code guide, we'd love to hear from you! Join us on [Discord](https://discord.com/invite/3CVauZMSt7) and let us know how we can make it better.
+## Properties
+
+### faceMesh.model
+
+- **Description**
+ - The TensorFlow.js model used for face landmarks detection.
+- **Type**
+ - tf.LayersModel
+
+---
+
+### faceMesh.config
+
+- **Description**
+ - Configuration options provided by the user for the model.
+- **Type**
+ - Object
+
+---
+
+### faceMesh.runtimeConfig
+
+- **Description**
+ - Configuration options related to the runtime behavior of the model.
+- **Type**
+ - Object
+
+---
+
+### faceMesh.detectMedia
+
+- **Description**
+ - The media element (image, video, or canvas) on which face detection is performed.
+- **Type**
+ - HTMLElement
+
+---
+
+### faceMesh.detectCallback
+
+- **Description**
+ - The callback function to handle face detection results.
+- **Type**
+ - Function
+
+---
+
+### faceMesh.detecting
+
+- **Description**
+ - A flag indicating whether the detection loop is currently running.
+- **Type**
+ - Boolean
+
+---
+
+### faceMesh.signalStop
+
+- **Description**
+ - A flag used to signal the detection loop to stop.
+- **Type**
+ - Boolean
+
+---
+
+### faceMesh.prevCall
+
+- **Description**
+ - Tracks the previous call to `detectStart` or `detectStop` to handle warnings.
+- **Type**
+ - String
+
+---
+
+### faceMesh.ready
+
+- **Description**
+ - A promise that resolves when the model has loaded.
+- **Type**
+ - Promise
+
## Methods
-#### ml5.faceMesh()
+### ml5.faceMesh()
-This method is used to initialize the facemesh object.
+This method is used to initialize the faceMesh object.
```javascript
-const facemesh = ml5.faceMesh(?options, ?callback);
+const faceMesh = ml5.faceMesh(?options, ?callback);
```
**Parameters:**
@@ -195,26 +276,44 @@ const facemesh = ml5.faceMesh(?options, ?callback);
}
```
+ Options for face detection:
+
+  - _maxFaces_
+ - Number: The maximum number of faces to detect. Defaults to 2.
+ - _refineLandmarks_
+ - Boolean: Refine the landmarks. Defaults to false.
+ - _flipHorizontal_
+ - Boolean: Flip the result horizontally. Defaults to false.
+ - _runtime_
+ - String: The runtime to use. "tfjs" (default) or "mediapipe".
+
+ For using custom or offline models:
+
+ - _solutionPath_
+ - String: The file path or URL to the model.
+
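+For example (hypothetical option values), tracking a single face with refined landmarks:
+
+```javascript
+let faceMesh = ml5.faceMesh({ maxFaces: 1, refineLandmarks: true });
+```
+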
More info on options [here](https://github.com/tensorflow/tfjs-models/tree/master/face-landmarks-detection/src/mediapipe#create-a-detector).
-- **callback(facemesh, error)**: Optional. A function to run once the model has been loaded. Alternatively, call `ml5.faceMesh()` within the p5 `preload` function.
+- **callback(faceMesh, error)**: Optional. A function to run once the model has been loaded. Alternatively, call `ml5.faceMesh()` within the p5 `preload` function.
**Returns:**
-The facemesh object.
-#### facemesh.detectStart()
+- **Object**: The faceMesh object. This object contains the methods to start and stop the detection process.
+
+---
+
+### faceMesh.detectStart()
This method repeatedly outputs face estimations on an image media through a callback function.
```javascript
-facemesh.detectStart(media, callback);
+faceMesh.detectStart(media, callback);
```
**Parameters:**
-- **media**: An HMTL or p5.js image, video, or canvas element to run the estimation on.
-
-- **callback(output, error)**: A callback function to handle the output of the estimation. See below for an example output passed into the callback function:
+- **media**: An HTML or p5.js image, video, or canvas element to run the estimation on.
+- **callback(results, error)**: A callback function to handle the output of the estimation. See below for an example output passed into the callback function:
```javascript
[
@@ -231,27 +330,28 @@ facemesh.detectStart(media, callback);
[Here](https://github.com/tensorflow/tfjs-models/blob/master/face-landmarks-detection/mesh_map.jpg) is a diagram for the position of each keypoint (download and zoom in to see the index).
-#### facemesh.detectStop()
+---
-This method can be called after a call to `facemesh.detectStart` to stop the repeating face estimation.
+### faceMesh.detectStop()
+
+This method can be called after a call to `faceMesh.detectStart` to stop the repeating face estimation.
```javascript
-facemesh.detectStop();
+faceMesh.detectStop();
```
-#### facemesh.detect()
+---
+
+### faceMesh.detect()
This method asynchronously outputs a single face estimation on an image media when called.
```javascript
-facemesh.detect(media, ?callback);
+faceMesh.detect(media, ?callback);
```
**Parameters:**
-- **media**: An HMTL or p5.js image, video, or canvas element to run the estimation on.
+- **media**: An HTML or p5.js image, video, or canvas element to run the estimation on.
+- **callback(results, error)**: Optional. A callback function to handle the output of the estimation, see output example above.
-- **callback(output, error)**: Optional. A callback function to handle the output of the estimation, see output example above.
-
-**Returns:**
-A promise that resolves to the estimation output.
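+
+For example, a minimal sketch (assuming `img` was loaded in the p5.js `preload` function) that detects once and dots the first face's keypoints:
+
+```javascript
+faceMesh.detect(img, (results) => {
+  // results is an array of detected faces; guard against an empty result
+  if (results.length > 0) {
+    for (let keypoint of results[0].keypoints) {
+      circle(keypoint.x, keypoint.y, 3);
+    }
+  }
+});
+```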
diff --git a/docs/reference/handpose.md b/docs/reference/handpose.md
index af5af66..312e94c 100644
--- a/docs/reference/handpose.md
+++ b/docs/reference/handpose.md
@@ -164,9 +164,91 @@ Voila! You have successfully built the HandPose Keypoints example. Press the If you have any questions or spot something unclear in this step-by-step code guide, we'd love to hear from you! Join us on [Discord](https://discord.com/invite/3CVauZMSt7) and let us know how we can make it better.
+## Properties
+
+### handPose.model
+
+- **Description**
+ - The TensorFlow.js model used for hand pose detection.
+- **Type**
+ - tf.LayersModel
+
+---
+
+### handPose.config
+
+- **Description**
+ - Configuration options provided by the user for the model.
+- **Type**
+ - Object
+
+---
+
+### handPose.runtimeConfig
+
+- **Description**
+ - Configuration options related to the runtime behavior of the model.
+- **Type**
+ - Object
+
+---
+
+### handPose.detectMedia
+
+- **Description**
+ - The media element (image, video, or canvas) on which hand pose detection is performed.
+- **Type**
+ - HTMLElement
+
+---
+
+### handPose.detectCallback
+
+- **Description**
+ - The callback function to handle hand pose detection results.
+- **Type**
+ - Function
+
+---
+
+### handPose.detecting
+
+- **Description**
+ - A flag indicating whether the detection loop is currently running.
+- **Type**
+ - Boolean
+
+---
+
+### handPose.signalStop
+
+- **Description**
+ - A flag used to signal the detection loop to stop.
+- **Type**
+ - Boolean
+
+---
+
+### handPose.prevCall
+
+- **Description**
+ - Tracks the previous call to `detectStart` or `detectStop` to handle warnings.
+- **Type**
+ - String
+
+---
+
+### handPose.ready
+
+- **Description**
+ - A promise that resolves when the model has loaded.
+- **Type**
+ - Promise
+
+
## Methods
-#### ml5.handpose()
+### ml5.handpose()
This method is used to initialize the handpose object.
@@ -181,23 +263,47 @@ const handpose = ml5.handpose(?options, ?callback);
```javascript
{
maxHands: 2,
- runtime: "mediapipe",
+ flipHorizontal: false,
+ runtime: "tfjs",
modelType: "full",
- solutionPath: "https://cdn.jsdelivr.net/npm/@mediapipe/hands",
detectorModelUrl: undefined, //default to use the tf.hub model
landmarkModelUrl: undefined, //default to use the tf.hub model
}
```
+ Options for hand detection:
+
+ - _maxHands_ - Optional
+ - Number: The maximum number of hands to detect. Default: 2.
+ - _modelType_ - Optional
+ - String: The type of model to use: "lite" or "full". Default: "full".
+ - _flipHorizontal_ - Optional
+ - Boolean: Flip the result data horizontally. Default: false.
+ - _runtime_ - Optional
+ - String: The runtime of the model: "mediapipe" or "tfjs". Default: "tfjs".
+
+ For using custom or offline models:
+
+ - _solutionPath_ - Optional
+ - String: The file path or URL to the model. Only used when using "mediapipe" runtime.
+ - _detectorModelUrl_ - Optional
+ - String: The file path or URL to the hand detector model. Only used when using "tfjs" runtime.
+ - _landmarkModelUrl_ - Optional
+ - String: The file path or URL to the hand landmark model. Only used when using "tfjs" runtime.
+
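+For example (hypothetical option values), detecting a single hand with the lighter model:
+
+```javascript
+let handpose = ml5.handpose({ maxHands: 1, modelType: "lite" });
+```
+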
More info on options [here](https://github.com/tensorflow/tfjs-models/tree/master/hand-pose-detection/src/mediapipe#create-a-detector) for "mediapipe" runtime.
+
More info on options [here](https://github.com/tensorflow/tfjs-models/tree/master/hand-pose-detection/src/tfjs#create-a-detector) for "tfjs" runtime.
- **callback(handpose, error)**: Optional. A function to run once the model has been loaded. Alternatively, call `ml5.handpose()` within the p5 `preload` function.
-**Returns:**
-The handpose object.
+**Returns:**
+
+- **Object**: The handpose object. This object contains the methods to start and stop the hand pose detection process.
-#### handpose.detectStart()
+---
+
+### handpose.detectStart()
This method repeatedly outputs hand estimations on an image media through a callback function.
@@ -207,17 +313,16 @@ handpose.detectStart(media, callback);
**Parameters:**
-- **media**: An HMTL or p5.js image, video, or canvas element to run the estimation on.
-
-- **callback(output, error)**: A callback function to handle the output of the estimation. See below for an example output passed into the callback function:
+- **media**: An HTML or p5.js image, video, or canvas element to run the estimation on.
+- **callback(results, error)**: A callback function to handle the output of the estimation. See below for an example output passed into the callback function:
```javascript
[
{
confidence,
handedness,
- keypoints: [{ x, y, score, name }, ...],
- keypoints3D: [{ x, y, z, score, name }, ...],
+ keypoints: [{ x, y, confidence, name }, ...],
+ keypoints3D: [{ x, y, z, confidence, name }, ...],
index_finger_dip: { x, y, x3D, y3D, z3D },
index_finger_mcp: { x, y, x3D, y3D, z3D },
...
@@ -232,15 +337,38 @@ handpose.detectStart(media, callback);
-#### handpose.detectStop()
+---
+
+### handpose.detectStop()
-This method can be called after a call to `handpose.detectStart` to stop the repeating hand estimation.
+This method can be called to stop the continuous hand pose estimation process.
```javascript
handpose.detectStop();
```
-#### handpose.detect()
+For example, you can toggle the hand pose estimation with a click event in p5.js by using this function as follows:
+
+```javascript
+// Toggle detection when mouse is pressed
+function mousePressed() {
+ toggleDetection();
+}
+
+// Call this function to start and stop detection
+function toggleDetection() {
+ if (isDetecting) {
+ handpose.detectStop();
+ isDetecting = false;
+ } else {
+ handpose.detectStart(video, gotHands);
+ isDetecting = true;
+ }
+}
+```
+
+---
+
+### handpose.detect()
This method asynchronously outputs a single hand estimation on an image media when called.
@@ -250,9 +378,7 @@ handpose.detect(media, ?callback);
**Parameters:**
-- **media**: An HMTL or p5.js image, video, or canvas element to run the estimation on.
+- **media**: An HTML or p5.js image, video, or canvas element to run the estimation on.
-- **callback(output, error)**: Optional. A callback function to handle the output of the estimation, see output example above.
+- **callback(results, error)**: Optional. A callback function to handle the output of the estimation, see output example above.
-**Returns:**
-A promise that resolves to the estimation output.
diff --git a/docs/reference/image-classifier-tm.md b/docs/reference/image-classifier-tm.md
index 981ad04..104beff 100644
--- a/docs/reference/image-classifier-tm.md
+++ b/docs/reference/image-classifier-tm.md
@@ -224,7 +224,7 @@ imageClassifier.classify(media, ?kNumber, ?callback);
- **media**: An HTML or p5.js image, video, or canvas element to run the classification on.
- **kNumber**: The number of labels returned by the image classification.
-- **callback(output, error)**: OPTIONAL. A callback function to handle the output of the classification.
+- **callback(output, error)**: Optional. A callback function to handle the output of the classification.
**Returns:**
A promise that resolves to the estimation output.
diff --git a/docs/reference/image-classifier.md b/docs/reference/image-classifier.md
index f54b9e5..83734c8 100644
--- a/docs/reference/image-classifier.md
+++ b/docs/reference/image-classifier.md
@@ -154,37 +154,155 @@ Voila! You have successfully built the ImageClassifier Single Image example. Pre
?> If you have any questions or spot something unclear in this step-by-step code guide, we'd love to hear from you! Join us on [Discord](https://discord.com/invite/3CVauZMSt7) and let us know how we can make it better.
+## Properties
+
+### imageClassifier.modelName
+
+- **Description**
+ - The name of the model being used, typically one of "mobilenet", "darknet", "darknet-tiny", or "doodlenet".
+- **Type**
+ - String
+
+---
+
+### imageClassifier.modelUrl
+
+- **Description**
+ - The URL of the model if a custom model is being used.
+- **Type**
+ - String
+
+---
+
+### imageClassifier.model
+
+- **Description**
+ - The TensorFlow.js model used for image classification.
+- **Type**
+ - tf.LayersModel
+
+---
+
+### imageClassifier.modelToUse
+
+- **Description**
+ - The specific model module to be used for image classification, such as MobileNet, Darknet, or Doodlenet.
+- **Type**
+ - Object
+
+---
+
+### imageClassifier.mapStringToIndex
+
+- **Description**
+ - An array mapping string labels to indices for custom models.
+- **Type**
+ - Array
+
+---
+
+### imageClassifier.version
+
+- **Description**
+ - The version of the model being used, applicable to MobileNet.
+- **Type**
+ - Number
+
+---
+
+### imageClassifier.alpha
+
+- **Description**
+ - The alpha value (width multiplier) of the model being used, applicable to MobileNet.
+- **Type**
+ - Number
+
+---
+
+### imageClassifier.topk
+
+- **Description**
+ - The number of top predictions to return, applicable to MobileNet.
+- **Type**
+ - Number
+
+---
+
+### imageClassifier.isClassifying
+
+- **Description**
+ - A flag indicating whether the classification loop is currently running.
+- **Type**
+ - Boolean
+
+---
+
+### imageClassifier.signalStop
+
+- **Description**
+ - A flag used to signal the classification loop to stop.
+- **Type**
+ - Boolean
+
+---
+
+### imageClassifier.prevCall
+
+- **Description**
+ - Tracks the previous call to `classifyStart` or `classifyStop` to handle warnings.
+- **Type**
+ - String
+
+---
+
+### imageClassifier.ready
+
+- **Description**
+ - A promise that resolves when the model has loaded.
+- **Type**
+ - Promise
+
+---
+
+
## Methods
-#### ml5.imageClassifier()
+### ml5.imageClassifier()
This method is used to initialize the imageClassifier object.
```javascript
-const classifier = ml5.imageClassifier(?modelName, ?options, ?callback);
+const classifier = ml5.imageClassifier(modelNameOrUrl, ?options, ?callback);
```
**Parameters:**
-- **modelName**: Optional. Name of the underlying model to use. Possible values are `mobilenet`, `darknet` (28 MB in size), `darknet-tiny` (4 MB), `doodlenet`, or a URL to a compatible model file.
+- **modelNameOrUrl**: Optional.
+ - String: Name of the underlying model to use. Possible values are `mobilenet`, `darknet` (28 MB in size), `darknet-tiny` (4 MB), `doodlenet`, or a URL to a compatible model file.
-- **options**: Optional. An object to change the default configuration of the model.
+- **options**: Optional.
+ - Object: An object to change the default configuration of the model.
-The default options for the default `mobilenet` model are
+ The default options for the default `mobilenet` model are
-```
-{
- alpha: 1.0,
- topk: 3
-}
-```
+  ```javascript
+ {
+ alpha: 1.0,
+ topk: 3
+ }
+ ```
+ - _version_: The MobileNet version to use. Default is 2.
+ - _alpha_: The width multiplier for the MobileNet. Default is 1.0.
+ - _topk_: The number of labels to return. Default is 3.
- **callback(classifier, error)**: Optional. A function to run once the model has been loaded. Alternatively, call `ml5.imageClassifier()` within the p5 `preload` function.
**Returns:**
The imageClassifier object.
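+
+For example (hypothetical option values), a MobileNet classifier that returns the top five labels:
+
+```javascript
+const classifier = ml5.imageClassifier("mobilenet", { topk: 5 });
+```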
-#### imageClassifier.classifyStart()
+---
+
+### imageClassifier.classifyStart()
This method repeatedly outputs classification labels on an image media through a callback function.
@@ -198,7 +316,7 @@ imageClassifier.classifyStart(media, ?kNumber, callback);
- **kNumber**: The number of labels returned by the image classification.
-- **callback(output, error)**: A callback function to handle the output of the classification. See below for an example output passed into the callback function:
+- **callback(results, error)**: A callback function to handle the output of the classification. See below for an example output passed into the callback function:
```javascript
[
@@ -214,7 +332,9 @@ imageClassifier.classifyStart(media, ?kNumber, callback);
];
```
-#### imageClassifier.classifyStop()
+---
+
+### imageClassifier.classifyStop()
This method can be called after a call to `imageClassifier.classifyStart` to stop the repeating classifications.
@@ -222,7 +342,9 @@ This method can be called after a call to `imageClassifier.classifyStart` to sto
imageClassifier.classifyStop();
```
-#### imageClassifier.classify()
+---
+
+### imageClassifier.classify()
This method asynchronously outputs a single image classification on an image media when called.
@@ -236,7 +358,8 @@ imageClassifier.classify(media, ?kNumber, ?callback);
- **kNumber**: The number of labels returned by the image classification.
-- **callback(output, error)**: Optional. A callback function to handle the output of the classification.
+- **callback(results, error)**: Optional. A callback function to handle the output of the classification.
**Returns:**
A promise that resolves to the estimation output.
+
diff --git a/docs/reference/neural-network.md b/docs/reference/neural-network.md
index 9b5c8a1..938ea15 100644
--- a/docs/reference/neural-network.md
+++ b/docs/reference/neural-network.md
@@ -307,317 +307,121 @@ Now you can run your sketch and interact with the sliders to change the RGB valu
| `.load()` | allows you to load a trained model |
### ml5.neuralNetwork()
-There are a number of ways to initialize the `ml5.neuralNetwork`. Below we cover the possibilities:
-1. Minimal Configuration Method
-2. Defining inputs and output labels as numbers or as arrays of labels
-3. Loading External Data
-4. Loading a pre-trained Model
-5. A convolutional neural network for image classification tasks
-6. Defining custom layers
+This method initializes the `neuralNetwork` object.
-#### Minimal Configuration Method
-
-**Minimal Configuration Method**: If you plan to create data in real-time, you can just set the type of task you want to accomplish `('regression' | 'classification')` and then create the neuralNetwork. You will have to add data later on, but ml5 will figure the inputs and outputs based on the data your add.
-
-```js
-const options = {
- task: "regression", // or 'classification'
-};
-const nn = ml5.neuralNetwork(options);
-```
-
-#### Defining inputs and output labels as numbers or as arrays of labels
-
-**Defining inputs and output labels as numbers or as arrays of labels**: If you plan to create data in real-time, you can just set the type of task you want to accomplish `('regression' | 'classification')` and then create the neuralNetwork. To be more specific about your inputs and outputs, you can also define the _names of the labels for your inputs and outputs_ as arrays OR _the number of inputs and outputs_. You will have to add data later on. Note that if you add data as JSON, your JSON Keys should match those defined in the `options`. If you add data as arrays, make sure the order you add your data match those given in the `options`.
-
-- **As arrays of labels**
- ```js
- const options = {
- task: 'classification' // or 'regression'
- inputs:['r', 'g','b'],
- outputs: ['color']
- }
- const nn = ml5.neuralNetwork(options)
- ```
-- **As numbers**
- ```js
- const options = {
- task: 'classification' // or 'regression'
- inputs: 3, // r, g, b
- outputs: 2 // red-ish, blue-ish
+```javascript
+const nn = ml5.neuralNetwork(options, ?callback);
+```
+
+**Parameters:**
+
+- **options**: Required. An object to configure the neural network. The available options are:
+ ```javascript
+ {
+ inputs: [], // can also be a number
+ outputs: [], // can also be a number
+ dataUrl: null,
+ modelUrl: null,
+ layers: [], // custom layers
+ task: null, // 'classification', 'regression', 'imageClassification'
+ debug: false, // determines whether or not to show the training visualization
+ learningRate: 0.2,
+ hiddenUnits: 16,
}
- const nn = ml5.neuralNetwork(options)
- ```
-
-#### Loading External Data
-
-**Loading External Data**: You can initialize `ml5.neuralNetwork` specifying an external url to some data structured as a CSV or a JSON file. If you pass in data as part of the options, you will need to provide a **callback function** that will be called when your data has finished loading. Furthermore, you will **need to specify which properties** in the data that ml5.neuralNetwork will use for inputs and outputs.
-
-```js
-const options = {
- dataUrl: 'data/colorData.csv'
- task: 'classification' // or 'regression'
- inputs: ['r', 'g','b'], // r, g, b
- outputs: ['color'] // red-ish, blue-ish
-}
-
-const nn = ml5.neuralNetwork(options, dataLoaded)
-
-function dataLoaded(){
- // continue on your neural network journey
- nn.normalizeData();
- // ...
-}
-```
-
-#### Loading a pre-trained Model
-
-**Loading a pre-trained Model**: If you've trained a model using the `ml5.neuralNetwork` and saved it out using the `ml5.neuralNetwork.save()` then you can load in the **model**, the **weights**, and the **metadata**.
-
-```js
-const options = {
- task: "classification", // or 'regression'
-};
-const nn = ml5.neuralNetwork(options);
-
-const modelDetails = {
- model: "model/model.json",
- metadata: "model/model_meta.json",
- weights: "model/model.weights.bin",
-};
-nn.load(modelDetails, modelLoaded);
-
-function modelLoaded() {
- // continue on your neural network journey
- // use nn.classify() for classifications or nn.predict() for regressions
-}
-```
-
-#### A convolutional neural network for image classification tasks
-
-**A convolutional neural network for image classification tasks**: You can use convolutional neural networks in the `ml5.neuralNetwork` by setting the `task:"imageClassification"`.
-
-```js
-const IMAGE_WIDTH = 64;
-const IMAGE_HEIGHT = 64;
-const IMAGE_CHANNELS = 4;
-const options = {
- task: "imageClassification",
- inputs: [IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS],
- outputs: ["label"],
-};
-const nn = ml5.neuralNetwork(options);
-```
-
-#### Defining Custom Layers
-
-**Defaults**: By default the `ml5.neuralNetwork` has simple default architectures for the `classification`, `regression` and `imageClassificaiton` tasks.
-
-- default `classification` layers:
- ```js
- layers: [
- {
- type: "dense",
- units: this.options.hiddenUnits,
- activation: "relu",
- },
- {
- type: "dense",
- activation: "softmax",
- },
- ];
- ```
-- default `regression` layers:
- ```js
- layers: [
- {
- type: "dense",
- units: this.options.hiddenUnits,
- activation: "relu",
- },
- {
- type: "dense",
- activation: "sigmoid",
- },
- ];
```
-- default `imageClassification` layers:
- ```js
- layers = [
- {
- type: "conv2d",
- filters: 8,
- kernelSize: 5,
- strides: 1,
- activation: "relu",
- kernelInitializer: "varianceScaling",
- },
- {
- type: "maxPooling2d",
- poolSize: [2, 2],
- strides: [2, 2],
- },
- {
- type: "conv2d",
- filters: 16,
- kernelSize: 5,
- strides: 1,
- activation: "relu",
- kernelInitializer: "varianceScaling",
- },
- {
- type: "maxPooling2d",
- poolSize: [2, 2],
- strides: [2, 2],
- },
- {
- type: "flatten",
- },
- {
- type: "dense",
- kernelInitializer: "varianceScaling",
- activation: "softmax",
- },
- ];
- ```
-
-**Defining Custom Layers**: You can define custom neural network architecture by defining your layers in the `options` that are passed to the `ml5.neuralNetwork` on initialization.
+ - _inputs_ - Optional
+ - Array | Number: Input labels as an array or number of inputs. Default: [].
+ - _outputs_ - Optional
+ - Array | Number: Output labels as an array or number of outputs. Default: [].
+ - _dataUrl_ - Optional
+ - String: The URL to a CSV or JSON file containing the data.
+ - _modelUrl_ - Optional
+ - String: The URL to a pre-trained model.
+ - _layers_ - Optional
+ - Array: Custom layers for the neural network.
+ - _task_ - Required
+ - String: The type of task: 'classification', 'regression', 'imageClassification'.
+ - _debug_ - Optional
+ - Boolean: Show the training visualization. Default: false.
+ - _learningRate_ - Optional
+ - Number: The learning rate for training. Default: 0.2.
+ - _hiddenUnits_ - Optional
+ - Number: Number of hidden units in the default layer. Default: 16.
+
+- **callback(nn)**: Optional. A function to run once the model has been initialized.
+
+**Returns:**
+
+- **Object**: The neuralNetwork object. This object contains methods to add data, normalize data, train the model, and make predictions.
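+
+For example (the labels are hypothetical), a color classifier with three numeric inputs and one label output:
+
+```javascript
+const nn = ml5.neuralNetwork({
+  task: "classification",
+  inputs: ["r", "g", "b"],
+  outputs: ["color"],
+  debug: true,
+});
+```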
-- A neural network with 3 layers
- ```js
- const options = {
- debug: true,
- task: "classification",
- layers: [
- {
- type: "dense",
- units: 16,
- activation: "relu",
- },
- {
- type: "dense",
- units: 16,
- activation: "sigmoid",
- },
- {
- type: "dense",
- activation: "sigmoid",
- },
- ],
- };
- const nn = ml5.neuralNetwork(options);
- ```
-
-#### Arguments for `ml5.neuralNetwork(options)`
-
-The options that can be specified are:
-
-```js
-const DEFAULTS = {
- inputs: [], // can also be a number
- outputs: [], // can also be a number
- dataUrl: null,
- modelUrl: null,
- layers: [], // custom layers
- task: null, // 'classification', 'regression', 'imageClassificaiton'
- debug: false, // determines whether or not to show the training visualization
- learningRate: 0.2,
- hiddenUnits: 16,
-};
-```
-
-
+---
-### .addData()
+### nn.addData()
-> If you are not uploading data using the `dataUrl` property of the options given to the constructor, then you can add data to a "blank" neural network class using the `.addData()` function.
+This method adds data to the neural network.
-```js
-neuralNetwork.addData(xs, ys);
+```javascript
+nn.addData(xs, ys);
```
-📥 **Inputs**
+**Parameters:**
-- **xs**: Required. Array | Object.
- - If an array is given, then the inputs must be ordered as specified in the constructor. If no labels are given in the constructor, then the order that your data are added here will set the order of how you will pass data to `.predict()` or `.classify()`.
- - If an object is given, then feed in key/value pairs.
- - if `task:imageClassification`: you can supply a HTMLImageElement or HTMLCanvasElement or a flat 1-D array of the pixel values such that the dimensions match with the defined image size in the `options.inputs: [IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS]`
-- **ys**: Required. Array | Object.
- - If an array is given, then the inputs must be ordered as specified in the constructor.
+- **xs**: Required. Array | Object. Input data.
+ - If an array is given, the inputs must be ordered as specified in the constructor. If no labels are given in the constructor, then the order that your data are added here will set the order of how you will pass data to `.predict()` or `.classify()`.
-  - If an object is given, then feed in key/value pairs.
+ - If an object is given, provide key/value pairs.
+ - If task is `imageClassification`, provide an HTMLImageElement, HTMLCanvasElement, or a flat 1-D array of pixel values.
+- **ys**: Required. Array | Object. Output data.
+ - If an array is given, the outputs must be ordered as specified in the constructor.
+ - If an object is given, provide key/value pairs.
-📤 **Outputs**
+**Returns:**
-- n/a: adds data to `neuralNetwork.data.data.raw`
+- n/a: Adds data to `neuralNetworkData.data.raw`.
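+
+For example (hypothetical data), adding one sample to the color classifier sketched above:
+
+```javascript
+// key/value form: the keys match the input/output labels from the constructor
+nn.addData({ r: 255, g: 0, b: 0 }, { color: "red-ish" });
+```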
---
----
+### nn.normalizeData()
-### .normalizeData()
+This method normalizes the data on a scale from 0 to 1.
-> normalizes the data on a scale from 0 to 1. The data being normalized are part of the `NeuralNetworkData` class which can be accessed in: `neuralNetwork.data.data.raw`
-
-```js
-neuralNetwork.normalizeData();
+```javascript
+nn.normalizeData();
```
-📥 **Inputs**
+**Parameters:**
- n/a
-📤 **Outputs**
+**Returns:**
-- n/a: normalizes the data in `neuralNetwork.data.data.raw` and adds `inputs` and `output` tensors to `neuralNetwork.data.data.tensor` as well as the `inputMin`, `inputMax`, `outputMin`, and `outputMax` as tensors. The `inputMin`, `inputMax`, `outputMin`, and `outputMax` are also added to `neuralNetwork.data.data` as Numbers.
+- n/a: Normalizes the data in `neuralNetworkData.data.raw` and adds `inputs` and `output` tensors to `neuralNetworkData.data.tensor`, as well as the `inputMin`, `inputMax`, `outputMin`, and `outputMax` as tensors. The `inputMin`, `inputMax`, `outputMin`, and `outputMax` are also added to `neuralNetworkData.data` as Numbers.
---
----
+### nn.train()
-### .train()
+This method trains the model with the data loaded during the instantiation or added using `.addData()`.
-> trains the model with the data loaded during the instantiation of the `NeuralNetwork` or the data added using `neuralNetwork.addData()`
-
-```js
-neuralNetwork.train(?optionsOrCallback, ?optionsOrWhileTraining, ?callback);
+```javascript
+nn.train(?optionsOrCallback, ?optionsOrWhileTraining, ?callback);
```
-📥 **Inputs**
+**Parameters:**
- **optionsOrCallback**: Optional.
- - If an object of options is given, then `optionsOrCallback` will be an object where you can specify the `batchSize` and `epochs`:
- ```js
+ - If an object of options is given, specify `batchSize` and `epochs`:
+ ```javascript
{
batchSize: 24,
epochs: 32,
- };
- ```
- - If a callback function is given here then this will be a callback that will be called when the training is finished.
- ```js
- function doneTraining() {
- console.log("done!");
- }
- ```
- - If a callback function is given here and a second callback function is given, `optionsOrCallback` will be a callback function that is called after each `epoch` of training, and the `optionsOrWhileTraining` callback function will be a callback function that is called when the training has completed:
- ```js
- function whileTraining(epoch, loss) {
- console.log(`epoch: ${epoch}, loss:${loss}`);
- }
- function doneTraining() {
- console.log("done!");
}
- neuralNetwork.train(whileTraining, doneTraining);
```
+ - If a callback function is given, it will be called when the training is finished.
- **optionsOrWhileTraining**: Optional.
- - If an object of options is given as the first parameter, then `optionsOrWhileTraining` will be a callback function that is fired after the training as finished.
- - If a callback function is given as the first parameter to handle the `whileTraining`, then `optionsOrWhileTraining` will be a callback function that is fired after the training as finished.
+  - If an object of options is given as the first parameter, `optionsOrWhileTraining` is a callback function called when the training is finished.
+  - If a callback function is given as the first parameter to handle `whileTraining`, `optionsOrWhileTraining` is a callback function called when the training is finished.
- **callback**: Optional. Function.
-
-  If an object of options is given as the first parameter and a callback function is given as a second parameter, then this `callback` parameter will be a callback function that is fired after the training as finished.
+  If an object of options is given as the first parameter and a callback function as the second, this `callback` will be called when the training has finished.
```js
@@ -634,200 +438,182 @@ neuralNetwork.train(?optionsOrCallback, ?optionsOrWhileTraining, ?callback);
neuralNetwork.train(trainingOptions, whileTraining, doneTraining);
```
-📤 **Outputs**
-
-- n/a: Here, `neuralNetwork.model` is created and the model is trained.
+**Returns:**
----
+- n/a: Creates and trains the model, stored in `nn.model`.
---
-### .predict()
+### nn.predict()
-> Given an input, will return an array of predictions.
+This method returns an array of predictions for the given input.
-```js
-neuralNetwork.predict(inputs, callback);
+```javascript
+nn.predict(inputs, callback);
```
-📥 **Inputs**
-
-- **inputs**: Required. Array | Object.
- - If an array is given, then the input values should match the order that the data are specified in the `inputs` of the constructor options.
- - If an object is given, then the input values should be given as a key/value pair. The keys must match the keys given in the inputs of the constructor options and/or the keys added when the data were added in `.addData()`.
-- **callback**: Required. Function. A function to handle the results of `.predict()`.
+**Parameters:**
-📤 **Outputs**
+- **inputs**: Required. Array | Object. Input values.
+ - If an array is given, match the order specified in the constructor options.
+ - If an object is given, provide key/value pairs matching the keys specified in the constructor options.
+- **callback(results)**: Required. Function. A function to handle the results of `.predict()`.
-- **Array**: Returns an array of objects. Each object contains `{value, label}`.
+**Returns:**
----
+- **Array**: An array of objects, each containing `{value, label}`.
---
-#### .predictMultiple()
+### nn.predictMultiple()
-> Given an input, will return an array of arrays of predictions.
+This method returns an array of arrays of predictions for the given input.
-```js
-neuralNetwork.predictMultiple(inputs, callback);
+```javascript
+nn.predictMultiple(inputs, callback);
```
-📥 **Inputs**
+**Parameters:**
- **inputs**: Required. Array of arrays | Array of objects.
- If an array of arrays is given, then the input values of each child array should match the order that the data are specified in the `inputs` of the constructor options.
- If an array of objects is given, then the input values of each child object should be given as a key/value pair. The keys must match the keys given in the inputs of the constructor options and/or the keys added when the data were added in `.addData()`.
-- **callback**: Required. Function. A function to handle the results of `.predictMultiple()`.
-
-📤 **Outputs**
+- **callback(results)**: Required. Function. A function to handle the results of `.predictMultiple()`.
-- **Array**: Returns an array of arrays. Each child array contains objects. Each object contains `{value, label}`.
+**Returns:**
----
+- **Array**: An array of arrays, each containing objects with `{value, label}`.
---
-### .classify()
+### nn.classify()
-> Given an input, will return an array of classifications.
+This method returns an array of classifications for the given input.
-```js
-neuralNetwork.classify(inputs, callback);
+```javascript
+nn.classify(inputs, callback);
```
-📥 **Inputs**
-
-- **inputs**: Required. Array | Object.
- - If an array is given, then the input values should match the order that the data are specified in the `inputs` of the constructor options.
- - If an object is given, then the input values should be given as a key/value pair. The keys must match the keys given in the inputs of the constructor options and/or the keys added when the data were added in `.addData()`.
-- **callback**: Required. Function. A function to handle the results of `.classify()`.
+**Parameters:**
-📤 **Outputs**
+- **inputs**: Required. Array | Object. Input values.
+ - If an array is given, match the order specified in the constructor options.
+ - If an object is given, provide key/value pairs matching the keys specified in the constructor options.
+- **callback(results)**: Required. Function. A function to handle the results of `.classify()`.
-- **Array**: Returns an array of objects. Each object contains `{label, confidence}`.
+**Returns:**
----
+- **Array**: An array of objects, each containing `{label, confidence}`.
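+
+For example (hypothetical input values), classifying a color and logging the top result:
+
+```javascript
+nn.classify([255, 0, 0], (results) => {
+  // the first result is the most confident label
+  console.log(results[0].label, results[0].confidence);
+});
+```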
---
-### .classifyMultiple()
+### nn.classifyMultiple()
-> Given an input, will return an array of arrays of classifications.
+This method returns an array of arrays of classifications for the given input.
-```js
-neuralNetwork.classifyMultiple(inputs, callback);
+```javascript
+nn.classifyMultiple(inputs, callback);
```
-📥 **Inputs**
-
-- **inputs**: Required. Array of arrays | Array of objects.
- - If an array of arrays is given, then the input values of each child array should match the order that the data are specified in the `inputs` of the constructor options.
- - If an array of objects is given, then the input values of each child object should be given as a key/value pair. The keys must match the keys given in the inputs of the constructor options and/or the keys added when the data were added in `.addData()`.
-- **callback**: Required. Function. A function to handle the results of `.classifyMultiple()`.
+**Parameters:**
-📤 **Outputs**
+- **inputs**: Required. Array of arrays | Array of objects. Input values.
+ - If an array of arrays is given, match the order specified in the constructor options.
+ - If an array of objects is given, provide key/value pairs matching the keys specified in the constructor options.
+- **callback(results)**: Required. Function. A function to handle the results of `.classifyMultiple()`.
-- **Array**: Returns an array of arrays. Each child array contains objects. Each object contains `{label, confidence}`.
+**Returns:**
----
+- **Array**: An array of arrays, each containing objects with `{label, confidence}`.
---
-### .saveData()
+### nn.saveData()
-> Saves the data that has been added
+This method saves the added data to a JSON file.
-```js
-neuralNetwork.saveData(?outputName, ?callback);
+```javascript
+nn.saveData(?outputName, ?callback);
```
-📥 **Inputs**
+**Parameters:**
-- **outputName**: Optional. String. An output name you'd like your data to be called. If no input is given, then the name will be `data_YYYY-MM-DD_mm-hh`.
-- **callback**: Optional. function. A callback that is called after the data has been saved.
+- **outputName**: Optional. String. The name of the saved file. Default is `data_YYYY-MM-DD_mm-hh`.
+- **callback**: Optional. Function. A callback function to be called after the data has been saved.
-📤 **Outputs**
+**Returns:**
-- n/a: downloads the data to a `.json` file in your `downloads` folder.
-
----
+- n/a: Downloads the data to a `.json` file.
---
-### .loadData()
+### nn.loadData()
-> loads the data to `neuralNetwork.data.data.raw`
+This method loads data to `neuralNetworkData.data.raw`.
-```js
-neuralnetwork.loadData(filesOrPath, ?callback);
+```javascript
+nn.loadData(filesOrPath, ?callback);
```
-📥 **Inputs**
+**Parameters:**
- **filesOrPath**: REQUIRED. String | InputFiles. A string path to a `.json` data object or InputFiles from html input `type="file"`. Must be structured for example as: `{"data": [ { xs:{input0:1, input1:2}, ys:{output0:"a"}, ...]}`
- **callback**: Optional. function. A callback that is called after the data has been loaded.
-📤 **Outputs**
-
-- n/a: set `neuralNetwork.data.data.raw` to the array specified in the `"data"` property of the incoming `.json` file.
+**Returns:**
----
+- n/a: Sets `neuralNetworkData.data.raw` to the array specified in the incoming JSON file.
---
-### .save()
+### nn.save()
-> Saves the trained model
+This method saves the trained model.
-```js
-neuralNetwork.save(?outputName, ?callback);
+```javascript
+nn.save(?outputName, ?callback);
```
-📥 **Inputs**
-
-- **outputName**: Optional. String. An output name you'd like your model to be called. If no input is given, then the name will be `model`.
-- **callback**: Optional. function. A callback that is called after the model has been saved.
+**Parameters:**
-📤 **Outputs**
+- **outputName**: Optional. String. The name of the saved file. Default is `model`.
+- **callback**: Optional. Function. A callback function to be called after the model has been saved.
-- n/a: downloads the model to a `.json` file and a `model.weights.bin` binary file in your `downloads` folder.
+**Returns:**
----
+- n/a: Downloads the model to a `.json` file and a `model.weights.bin` binary file.
---
-### .load()
+### nn.load()
-> Loads a pre-trained model
+This method loads a pre-trained model.
-```js
-neuralNetwork.load(filesOrPath, ?callback);
+```javascript
+nn.load(filesOrPath, ?callback);
```
-📥 **Inputs**
+**Parameters:**
-- **filesOrPath**: REQUIRED. String | InputFiles.
+- **filesOrPath**: Required. String | InputFiles. The URL to the `model.json` file, or InputFiles from an HTML input element.
- If a string path to the `model.json` data object is given, then the `model.json`, `model_meta.json` file and its accompanying `model.weights.bin` file will be loaded. Note that the names must match.
- If InputFiles from html input `type="file"`. Then make sure to select ALL THREE of the `model.json`, `model_meta.json` and the `model.weights.bin` file together to upload otherwise the load will throw an error.
- - Method 1: using a json object. In this case, the paths to the specific files are set directly.
- ```js
+ - Method 1: Using a JSON object with paths to specific files:
+ ```javascript
const modelInfo = {
model: "path/to/model.json",
metadata: "path/to/model_meta.json",
weights: "path/to/model.weights.bin",
};
- neuralNetwork.load(modelInfo, modelLoadedCallback);
+ nn.load(modelInfo, modelLoadedCallback);
```
- - Method 2: specifying only the path to th model.json. In this case, the `model_meta.json` and the `model.weights.bin` are assumed to be in the same directory, named exactly like `model_meta.json` and `model.weights.bin`.
- ```js
- neuralNetwork.load("path/to/model.json", modelLoadedCallback);
+ - Method 2: Specifying only the path to the `model.json`. Assumes the `model_meta.json` and `model.weights.bin` are in the same directory:
+ ```javascript
+ nn.load("path/to/model.json", modelLoadedCallback);
```
- - Method 3: using the ``
-- **callback**: Optional. function. A callback that is called after the model has been loaded.
-
-📤 **Outputs**
+  - Method 3: Using InputFiles from an HTML input with `type="file"`, as described above.
+- **callback**: Optional. Function. A callback function to be called after the model has been loaded.
-- n/a: loads the model to `neuralNetwork.model`
+**Returns:**
----
+- n/a: Loads the model to `nn.model`.
diff --git a/docs/reference/sentiment.md b/docs/reference/sentiment.md
index 808792e..bc8177b 100644
--- a/docs/reference/sentiment.md
+++ b/docs/reference/sentiment.md
@@ -201,50 +201,90 @@ That's it! You have successfully built a Sentiment Analysis model that predicts
## Properties
+### sentiment.ready
+
+- **Description**
+  - Boolean value that specifies if the model has loaded.
+- **Type**
+  - Boolean
+
---
-#### .ready
+### sentiment.model
-> Boolean value that specifies if the model has loaded.
+- **Description**
+ - The TensorFlow.js model used for sentiment analysis.
+- **Type**
+ - tf.LayersModel
---
+### sentiment.indexFrom
+
+- **Description**
+ - The starting index for words in the model's vocabulary.
+- **Type**
+ - Number
+
---
-#### .model
+### sentiment.maxLen
-> The model being used.
+- **Description**
+ - The maximum length of sequences that the model can process.
+- **Type**
+ - Number
---
+### sentiment.wordIndex
+
+- **Description**
+ - An object mapping words to their corresponding indices in the model's vocabulary.
+- **Type**
+ - Object
+
+---
+
+### sentiment.vocabularySize
+
+- **Description**
+ - The size of the vocabulary that the model was trained on.
+- **Type**
+ - Number
+
+
## Methods
-#### Initialize
+### ml5.sentiment()
+
+This method is used to load the sentiment model and store it in a variable. The ? means the argument is optional!
```js
-const sentiment = ml5.sentiment(model, ?callback);
+let sentiment = ml5.sentiment(model, ?callback);
```
#### Parameters
-- **model**: REQUIRED. Defaults to 'moviereviews'. You can also use a path to a `manifest.json` file via a relative or absolute path.
-- **callback**: OPTIONAL. A callback function that is called once the model has loaded. If no callback is provided, it will return a promise that will be resolved once the model has loaded.
+- **model**: REQUIRED. Defaults to 'movieReviews'. You can also use a path to a `manifest.json` file via a relative or absolute path.
+- **callback(sentiment, error)**: Optional. A callback function that is called once the model has loaded. If no callback is provided, it will return a promise that will be resolved once the model has loaded.
---
-#### .predict()
+### sentiment.predict()
-> Given a number, will make magicSparkles
+This method is used to predict the sentiment of a given text.
```js
sentiment.predict(text);
```
-📥 **Inputs**
+**Parameters:**
-- **text**: Required. String. A string of text to predict
+- **text**: Required.
+ - String: A string of text to predict.
-📤 **Outputs**
+**Returns:**
- **Object**: Scores the sentiment of given text with a value between 0 ("negative") and 1 ("positive"). See below for an example output:
```javascript
@@ -253,4 +293,3 @@ sentiment.predict(text);
}
```
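+
+For example, a minimal usage sketch (assuming the model has finished loading):
+
+```javascript
+const prediction = sentiment.predict("The movie was wonderful!");
+console.log(prediction.score); // closer to 1 means more positive
+```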
----
diff --git a/docs/reference/sound-classifier.md b/docs/reference/sound-classifier.md
index d6521f1..f388955 100644
--- a/docs/reference/sound-classifier.md
+++ b/docs/reference/sound-classifier.md
@@ -202,7 +202,7 @@ Voila! You have successfully built the Sound Classification example. Press the <
## Methods
-#### ml5.soundClassifier()
+### ml5.soundClassifier()
This method is used to initialize the soundClassifier object.
@@ -250,8 +250,9 @@ let soundclassifier = ml5.soundClassifier(?model, ?options, ?callback)
- **callback**: Optional. A function to run once the model has been loaded. Alternatively, call `ml5.soundClassifier()` within the p5 `preload` function.
+---
-#### soundClassifier.classifyStart()
+### soundClassifier.classifyStart()
This method repeatedly outputs classification labels on an audio media through a callback function.
```js
@@ -279,7 +280,9 @@ soundClassifier.classifyStart(numOrCallback, callback);
- **Promise:** If no callback is provided, the method returns a promise that resolves when the classification process starts and provides the classification results.
- **Callback Results:** If a callback is provided, the results are passed directly to the callback function.
-#### soundClassifier.classifyStop()
+---
+
+### soundClassifier.classifyStop()
This method can be called after a call to `soundClassifier.classifyStart` to stop the repeating classifications.
```js
diff --git a/docs/styleguide/reference-guidelines.md b/docs/styleguide/reference-guidelines.md
index 51a9a1d..1fe517a 100644
--- a/docs/styleguide/reference-guidelines.md
+++ b/docs/styleguide/reference-guidelines.md
@@ -41,8 +41,8 @@ const magic = ml5.magicFeature(requiredInput, ?optionalInput1, ?optionalInput2);
#### Parameters
- **requiredInput**: REQUIRED. Notice there is no question mark in front of the input.
-- **optionalInput1**: OPTIONAL. Notice the `?` indicates an optional parameter.
-- **optionalInput2**: OPTIONAL. A description of some kind of object with some properties. Notice the `?` indicates an optional parameter.
+- **optionalInput1**: Optional. Notice the `?` indicates an optional parameter.
+- **optionalInput2**: Optional. A description of some kind of object with some properties. Notice the `?` indicates an optional parameter.
```js
{