Update documentation
actions-user authored and kwagyeman committed Jul 31, 2024
1 parent 6681534 commit 9da3ac2
Showing 181 changed files with 2,170 additions and 703 deletions.
2 changes: 1 addition & 1 deletion docs/_sources/genrst/builtin_types.rst.txt
@@ -2,7 +2,7 @@
Builtin types
=============
Generated Tue 23 Jul 2024 04:50:56 UTC
Generated Wed 31 Jul 2024 20:51:35 UTC

Exception
---------
2 changes: 1 addition & 1 deletion docs/_sources/genrst/core_language.rst.txt
@@ -2,7 +2,7 @@
Core language
=============
Generated Tue 23 Jul 2024 04:50:56 UTC
Generated Wed 31 Jul 2024 20:51:35 UTC

.. _cpydiff_core_fstring_concat:

16 changes: 8 additions & 8 deletions docs/_sources/genrst/modules.rst.txt
@@ -2,7 +2,7 @@
Modules
=======
Generated Tue 23 Jul 2024 04:50:56 UTC
Generated Wed 31 Jul 2024 20:51:35 UTC

.. Preamble section inserted into generated output
@@ -310,13 +310,13 @@ Sample code::
x = random.getrandbits(64)
print("{}".format(x))

+-------------------------+---------------------------------------------------------------------+
| CPy output: | uPy output: |
+-------------------------+---------------------------------------------------------------------+
| :: | :: |
| | |
| 3720919278385122334 | /bin/sh: 1: ../ports/unix/build-standard/micropython: not found |
+-------------------------+---------------------------------------------------------------------+
+--------------------------+---------------------------------------------------------------------+
| CPy output: | uPy output: |
+--------------------------+---------------------------------------------------------------------+
| :: | :: |
| | |
| 11280061628789195614 | /bin/sh: 1: ../ports/unix/build-standard/micropython: not found |
+--------------------------+---------------------------------------------------------------------+

.. _cpydiff_modules_random_randint:

2 changes: 1 addition & 1 deletion docs/_sources/genrst/syntax.rst.txt
@@ -2,7 +2,7 @@
Syntax
======
Generated Tue 23 Jul 2024 04:50:56 UTC
Generated Wed 31 Jul 2024 20:51:35 UTC

.. _cpydiff_syntax_arg_unpacking:

4 changes: 2 additions & 2 deletions docs/_sources/library/network.WINC.rst.txt
@@ -96,13 +96,13 @@ Constructors

.. method:: connected_sta()

This method returns a list containing the connected client's IP adress.
This method returns a list containing the connected client's IP address.

.. method:: wait_for_sta(timeout)

This method blocks and waits for a client to connect. If timeout is 0
this will block forever. This method returns a list containing the
connected client's IP adress.
connected client's IP address.
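
A minimal usage sketch in access-point mode (the constructor arguments,
``start_ap()`` call and security constant are assumptions based on the rest of
the WINC documentation, not part of this change)::

    import network

    # Assumption: AP-mode constructor and start_ap() as documented elsewhere.
    wlan = network.WINC(mode=network.WINC.MODE_AP)
    wlan.start_ap("OPENMV_AP", key="1234567890", security=network.WINC.WEP)

    # Block for up to 30 seconds waiting for a client, then read its IP address.
    sta_list = wlan.wait_for_sta(30000)
    print("Client IP:", sta_list[0] if sta_list else "none")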

.. method:: ifconfig([ip_addr, subnet_addr, gateway_addr, dns_addr])

2 changes: 1 addition & 1 deletion docs/_sources/library/omv.audio.rst.txt
@@ -25,7 +25,7 @@ Functions
``highpass`` is the high pass filter cut-off given the target sample frequency. This parameter
is applicable for the Arduino Portenta H7 only.

``samples`` is the number of samples to accumulate per callback. This is typically caluclated
``samples`` is the number of samples to accumulate per callback. This is typically calculated
based on the decimation factor and number of channels. If set to -1, the number of samples
will be calculated automatically based on the decimation factor and number of channels.
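
A hedged example of wiring these parameters together (the exact ``audio.init()``
keyword names and the ``start_streaming()`` callback API are assumptions drawn
from the surrounding module documentation)::

    import audio

    def audio_callback(buf):
        # ``buf`` holds the PCM samples accumulated for this callback.
        print("received", len(buf), "bytes of audio")

    # Assumption: samples=-1 lets the driver size the callback buffer from the
    # decimation factor and channel count, as described above.
    audio.init(channels=1, frequency=16000, gain_db=24, samples=-1)
    audio.start_streaming(audio_callback)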

1,265 changes: 1,016 additions & 249 deletions docs/_sources/library/omv.image.rst.txt

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/_sources/library/omv.imu.rst.txt
@@ -43,7 +43,7 @@ Functions
Returns the rotation angle in degrees (float) of the camera module.

* 0 -> Camera is standing up.
* 90 -> Camera is roated left.
* 90 -> Camera is rotated left.
* 180 -> Camera is upside down.
* 270 -> Camera is rotated right.
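
A small sketch that snaps the reported angle to the nearest of these four
orientations (the function name ``imu.roll()`` is an assumption; it is not shown
in this diff)::

    import imu

    angle = imu.roll()                            # e.g. 93.5 degrees
    orientation = (int(round(angle / 90.0)) % 4) * 90
    print("camera orientation:", orientation)     # 0, 90, 180, or 270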

2 changes: 2 additions & 0 deletions docs/_sources/library/omv.ml.preprocessing.rst.txt
@@ -51,3 +51,5 @@ Constructors
allows you to directly specify the range of the input data, the mean, and the standard deviation. The
Grayscale or RGB88 image is then converted into a floating point tensor for the model to process
based on these values.

uint16 and int16 input tensors are not supported.
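
A hedged construction example (the class name ``Normalization`` and the keyword
names ``scale``, ``mean`` and ``stdev`` are assumed from the description above,
and the statistics shown are placeholders for whatever your model expects)::

    import ml

    # Assumption: ImageNet-style statistics; replace with your model's values.
    norm = ml.preprocessing.Normalization(scale=(0.0, 1.0),
                                          mean=(0.485, 0.456, 0.406),
                                          stdev=(0.229, 0.224, 0.225))
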
12 changes: 6 additions & 6 deletions docs/_sources/library/omv.ml.rst.txt
@@ -65,7 +65,7 @@ Constructors
using this option, but, you need to be careful if you deallocate.

Once a model is loaded you can execute it multiple times with different inputs using `predict()`.
The model will rember its internal state between calls to `predict()`.
The model will remember its internal state between calls to `predict()`.

When deleted the model will automatically free up any memory it used from the heap or frame buffer stack.

@@ -78,15 +78,15 @@ Constructors
to the number of input tensors the model supports. The method returns a list of numpy ``ndarray`` objects
corresponding to the number of output tensors the model has.

The model input tensors can be up to 4D tensors of uint8, int8, int16, or float32 values. The passed
The model input tensors can be up to 4D tensors of uint8, int8, uint16, int16, or float32 values. The passed
numpy ``ndarray`` for an input tensor is then converted to floating point and scaled/offset based on
the input tensor's scale and zero point values before being passed to the model. For example, an ``ndarray``
of uint8 values will be converted to float32s between 0.0-255.0, divided by the input tensor's scale, and
then have the input tensor's zero point added to it. The same process is done for int8 and int16 values
then have the input tensor's zero point added to it. The same process is done for int8, uint16, and int16 values
whereas float32 values are passed directly to the model ignoring the scale and zero point values.

The model's output tensors can be up to 4D tensors of uint8, int8, or float32 values. For uint8
and int8 tensors the returned numpy ndarray is created by subtracting the output tensor's zero
The model's output tensors can be up to 4D tensors of uint8, int8, uint16, int16, or float32 values. For uint8,
int8, uint16, and int16 tensors the returned numpy ndarray is created by subtracting the output tensor's zero
point value before multiplying by the output tensor's scale value. For float32 tensors, values are
passed directly to the output without any scaling or offset being applied.
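
The conversions described above can be written out explicitly. This is only an
illustration of the arithmetic the module applies internally, not additional
API::

    # Input tensors (uint8/int8/uint16/int16), per the description above:
    #   value_passed_to_model = float(value) / input_scale + input_zero_point
    def to_model(value, scale, zero_point):
        return float(value) / scale + zero_point

    # Output tensors, per the description above:
    #   value_returned = (raw_output - output_zero_point) * output_scale
    def from_model(raw, scale, zero_point):
        return (raw - zero_point) * scale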

@@ -127,7 +127,7 @@ Constructors
:type: list[str]

A list of strings containing the data type of each input tensor.
'b', 'B', 'h', and 'f' respectively for uint8, int8, int16, and float32.
'B', 'b', 'H', 'h', and 'f' respectively for uint8, int8, uint16, int16, and float32.
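
For example, a model with a single uint8 input tensor would report (sketch,
assuming a loaded ``model`` object)::

    print(model.input_dtype)     # ['B']  ->  one uint8 input tensor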

.. attribute:: input_scale
:type: list[float]
12 changes: 6 additions & 6 deletions docs/_sources/library/omv.rpc.rst.txt
@@ -34,10 +34,10 @@ size. Once the remote method finishes executing it will return a ``memory_view_o
can also be up to 2^32-1 bytes in size. Because the argument and response are both generic byte
containers you can pass anything through the ``rpc`` library and receive any type of response. A simple
way to pass arguments is to use ``struct.pack()`` to create the argument and ``struct.unpack()`` to
receieve the argument on the other side. For the response, the other side may send a string
receive the argument on the other side. For the response, the other side may send a string
object or json string as the result which the master can then interpret.

As for errors, if you try to execute a non-existant function or method name the
As for errors, if you try to execute a non-existent function or method name the
``rpc_master.call()`` method will return an empty ``bytes()`` object. If the ``rpc`` library failed to communicate with the
slave the ``rpc`` library will return None.
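
A hedged master-side sketch of this flow (the ``rpc_uart_master`` interface name
and constructor argument are assumptions; any of the module's master interfaces
behaves the same way)::

    import rpc
    import struct

    interface = rpc.rpc_uart_master(baudrate=115200)

    # Pack the argument, call the remote method, then interpret the response.
    args = struct.pack("<HH", 640, 480)
    result = interface.call("set_resolution", args)

    if result is None:
        print("failed to communicate with the slave")
    elif len(result) == 0:
        print("no such remote function or method")
    else:
        print("response:", bytes(result))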

@@ -159,12 +159,12 @@ Constructors

Executes a remote call on the slave device. ``name`` is a string name of the remote function or method
to execute. ``data`` is the ``bytes`` like object that will be sent as the argument of the remote function
or method to exeucte. ``send_timeout`` defines how many milliseconds to wait while trying to connect to
or method to execute. ``send_timeout`` defines how many milliseconds to wait while trying to connect to
the slave and get it to execute the remote function or method. Once the master starts sending the
argument to the slave deivce ``send_timeout`` does not apply. The library will allow the argument to
argument to the slave device ``send_timeout`` does not apply. The library will allow the argument to
take up to 5 seconds to be sent. ``recv_timeout`` defines how many milliseconds to wait after the slave
started executing the remote method to receive the repsonse. Note that once the master starts
receiving the repsonse ``recv_timeout`` does not apply. The library will allow the response to take up
started executing the remote method to receive the response. Note that once the master starts
receiving the response ``recv_timeout`` does not apply. The library will allow the response to take up
to 5 seconds to be received.
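
For example (sketch, assuming ``interface`` is one of the master objects
described earlier)::

    # Allow 1 s to reach the slave and up to 10 s for it to produce the result.
    result = interface.call("face_detection", send_timeout=1000, recv_timeout=10000)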

Note that a new packet that includes a copy of ``data`` will be created internally inside the ``rpc``
18 changes: 9 additions & 9 deletions docs/_sources/library/omv.sensor.rst.txt
@@ -137,7 +137,7 @@ Functions

Creating secondary images normally requires creating them on the heap which
has a limited amount of RAM... but, also gets fragmented making it hard to
grab a large contigous memory array to store an image in. With this method
grab a large contiguous memory array to store an image in. With this method
you are able to allocate a very large memory array for an image instantly
by taking space away from our frame buffer stack memory which we use for
computer vision algorithms. That said, this also means you'll run out of
@@ -156,9 +156,9 @@ Functions
fixed by firmware. The stack then grows down until it hits the heap.
Next, frame buffers are stored in a secondary memory region. Memory is
laid out with the main frame buffer on the bottom and the frame buffer
stack on the top. When `sensor.snapshot()` is called it fills the frame bufer
stack on the top. When `sensor.snapshot()` is called it fills the frame buffer
from the bottom. The frame buffer stack is then able to use whatever is
left over. This memory allocation method is extremely efficent for computer
left over. This memory allocation method is extremely efficient for computer
vision on microcontrollers.
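
For example, a secondary frame buffer can be carved out of the frame buffer
stack and handed back when no longer needed (a sketch using
`sensor.alloc_extra_fb()` and `sensor.dealloc_extra_fb()`)::

    import sensor

    # Allocate an extra RGB565 frame buffer the same size as the main one.
    extra_fb = sensor.alloc_extra_fb(sensor.width(), sensor.height(), sensor.RGB565)

    extra_fb.replace(sensor.snapshot())   # keep a copy of the current frame

    # Return the memory to the frame buffer stack when finished with it.
    sensor.dealloc_extra_fb()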

.. function:: set_pixformat(pixformat:int) -> None
@@ -264,7 +264,7 @@

Set the camera image gainceiling. 2, 4, 8, 16, 32, 64, or 128.

.. function:: set_contrast(constrast:int) -> None
.. function:: set_contrast(contrast:int) -> None

Set the camera image contrast. -3 to +3.

@@ -318,7 +318,7 @@

Camera auto exposure algorithms are pretty conservative about how much
they adjust the exposure value by and will generally avoid changing the
exposure value by much. Instead, they change the gain value alot of deal
exposure value by much. Instead, they change the gain value a lot to deal
with changing lighting.
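
A short sketch of overriding this behaviour with a fixed exposure (standard
`sensor.set_auto_exposure()` usage; the value is just an example)::

    import sensor

    # Disable the conservative auto-exposure algorithm and fix the exposure,
    # letting gain handle the remaining brightness adjustment.
    sensor.set_auto_exposure(False, exposure_us=10000)
    print("exposure is now", sensor.get_exposure_us(), "us")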

.. function:: get_exposure_us() -> int
@@ -410,7 +410,7 @@ Functions

`sensor.snapshot()` will automatically handle switching active frame buffers in the background.
From your code's perspective there is only ever 1 active frame buffer even though there might
be more than 1 frame buffer on the system and another frame buffer reciving data in the background.
be more than 1 frame buffer on the system and another frame buffer receiving data in the background.

If count is:

@@ -425,7 +425,7 @@ Functions
In double buffer mode your OpenMV Cam will allocate two frame buffers for receiving images.
When you call `sensor.snapshot()` one framebuffer will be used to receive the image and
the camera driver will continue to run. When the next frame is received it will be stored
in the other frame bufer. In the advent you call `sensor.snapshot()` again
in the other frame buffer. In the advent you call `sensor.snapshot()` again
before the first line of the next frame after is received your code will execute at the frame rate
of the camera. Otherwise, the image will be dropped.
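
For example (a sketch; the function name `sensor.set_framebuffers()` is assumed
here because its signature lies outside the visible hunk)::

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    sensor.set_framebuffers(2)      # double buffer mode
    sensor.skip_frames(time=2000)

    while True:
        img = sensor.snapshot()     # runs at the camera frame rate as long as
                                    # snapshot() is called before the next frame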

@@ -520,7 +520,7 @@ Functions

* `sensor.IOCTL_SET_READOUT_WINDOW` - Pass this enum followed by a rect tuple (x, y, w, h) or a size tuple (w, h).
* This IOCTL allows you to control the readout window of the camera sensor which dramatically improves the frame rate at the cost of field-of-view.
* If you pass a rect tuple (x, y, w, h) the readout window will be positoned on that rect tuple. The rect tuple's x/y position will be adjusted so the size w/h fits. Additionally, the size w/h will be adjusted to not be smaller than the ``framesize``.
* If you pass a rect tuple (x, y, w, h) the readout window will be positioned on that rect tuple. The rect tuple's x/y position will be adjusted so the size w/h fits. Additionally, the size w/h will be adjusted to not be smaller than the ``framesize``.
* If you pass a size tuple (w, h) the readout window will be centered given the w/h. Additionally, the size w/h will be adjusted to not be smaller than the ``framesize``.
* This IOCTL is extremely helpful for increasing the frame rate on higher resolution cameras like the OV2640/OV5640.
* `sensor.IOCTL_GET_READOUT_WINDOW` - Pass this enum for `sensor.ioctl` to return the current readout window rect tuple (x, y, w, h). By default this is (0, 0, maximum_camera_sensor_pixel_width, maximum_camera_sensor_pixel_height).
@@ -551,7 +551,7 @@ Functions
* `sensor.IOCTL_LEPTON_SET_MEASUREMENT_MODE` - Pass this followed by True or False to turn off automatic gain control on the FLIR Lepton and force it to output an image where each pixel value represents an exact temperature value in celsius. A second True enables high temperature mode enabling measurements up to 500C on the Lepton 3.5, False is the default low temperature mode.
* `sensor.IOCTL_LEPTON_GET_MEASUREMENT_MODE` - Pass this to get a tuple for (measurement-mode-enabled, high-temp-enabled).
* `sensor.IOCTL_LEPTON_SET_MEASUREMENT_RANGE` - Pass this when measurement mode is enabled to set the temperature range in celsius for the mapping operation. The temperature image returned by the FLIR Lepton will then be clamped between these min and max values and then scaled to values between 0 to 255. To map a pixel value back to a temperature (on a grayscale image) do: ((pixel * (max_temp_in_celsius - min_temp_in_celsius)) / 255.0) + min_temp_in_celsius.
* The first arugment should be the min temperature in celsius.
* The first argument should be the min temperature in celsius.
* The second argument should be the max temperature in celsius. If the arguments are reversed the library will automatically swap them for you.
* `sensor.IOCTL_LEPTON_GET_MEASUREMENT_RANGE` - Pass this to return the sorted (min, max) 2 value temperature range tuple. The default is -10C to 40C if not set yet.
* `sensor.IOCTL_HIMAX_MD_ENABLE` - Pass this enum followed by ``True``/``False`` to enable/disable motion detection on the HM01B0. You should also enable the I/O pin (PC15 on the Arduino Portenta) attached the HM01B0 motion detection line to receive an interrupt.
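
Two of the IOCTLs above as a usage sketch (the Lepton pixel-to-temperature
mapping simply restates the formula given in the list)::

    import sensor

    # Crop the readout to a centered 1280x720 window to boost the frame rate.
    sensor.ioctl(sensor.IOCTL_SET_READOUT_WINDOW, (1280, 720))
    print(sensor.ioctl(sensor.IOCTL_GET_READOUT_WINDOW))

    # FLIR Lepton measurement mode: map a grayscale pixel back to a temperature.
    min_temp, max_temp = -10.0, 40.0
    sensor.ioctl(sensor.IOCTL_LEPTON_SET_MEASUREMENT_MODE, True)
    sensor.ioctl(sensor.IOCTL_LEPTON_SET_MEASUREMENT_RANGE, min_temp, max_temp)

    def pixel_to_celsius(pixel):
        return ((pixel * (max_temp - min_temp)) / 255.0) + min_temp
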
4 changes: 2 additions & 2 deletions docs/_sources/library/pyb.I2C.rst.txt
@@ -66,11 +66,11 @@ Constructors
the bus, if any). If extra arguments are given, the bus is initialised.
See ``init`` for parameters of initialisation.

The physical pins of the I2C busses on the OpenMV Cam are:
The physical pins of the I2C buses on the OpenMV Cam are:

- ``I2C(2)`` is on the Y position: ``(SCL, SDA) = (P4, P5) = (PB10, PB11)``

The physical pins of the I2C busses on the OpenMV Cam M7 are:
The physical pins of the I2C buses on the OpenMV Cam M7 are:

- ``I2C(2)`` is on the Y position: ``(SCL, SDA) = (P4, P5) = (PB10, PB11)``
- ``I2C(4)`` is on the Y position: ``(SCL, SDA) = (P7, P8) = (PD12, PD13)``
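
For example, on the P4/P5 pins (standard `pyb.I2C` usage; the slave address is
just a placeholder)::

    from pyb import I2C

    i2c = I2C(2, I2C.MASTER, baudrate=100000)
    print(i2c.scan())                   # addresses that acknowledged
    i2c.send(b"\x00", addr=0x42)        # write one byte to a hypothetical device
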
2 changes: 1 addition & 1 deletion docs/_sources/library/pyb.UART.rst.txt
@@ -57,7 +57,7 @@ Constructors

- ``UART(3)``: ``(TX, RX) = (P4, P5) = (PB10, PB11)``

The physical pins of the UART busses are for the OpenMV Cam M7 and H7:
The physical pins of the UART buses are for the OpenMV Cam M7 and H7:

- ``UART(1)``: ``(TX, RX) = (P1, P0) = (PB14, PB15)``
- ``UART(3)``: ``(TX, RX) = (P4, P5) = (PB10, PB11)``
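
For example, on the P4/P5 pins (standard `pyb.UART` usage)::

    from pyb import UART

    uart = UART(3, 115200, timeout_char=1000)
    uart.write("hello\n")
    data = uart.read()                  # returns None if nothing arrives in time
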
2 changes: 1 addition & 1 deletion docs/_sources/openmvcam/quickref.rst.txt
@@ -40,7 +40,7 @@ General OpenMV Cams Board Control
---------------------------------

All OpenMV Cams can use the `machine` module to control the camera hardware. Please refer to the
pinout image for which SPI/I2C/UART/CAN/PWM/TIMER channels are avialable on what I/O pins.
pinout image for which SPI/I2C/UART/CAN/PWM/TIMER channels are available on what I/O pins.

Delay and timing
^^^^^^^^^^^^^^^^
4 changes: 2 additions & 2 deletions docs/_sources/openmvcam/tutorial/openmvide_overview.rst.txt
@@ -60,7 +60,7 @@ text for the replace. Additionally, it also can preserve case while replacing.
Finally, the find and replace features works not only on the current file, but,
it can work on all files in a folder or all open files in OpenMV IDE.

In addition to the nice text editing enviornment OpenMV IDE also provides
In addition to the nice text editing environment OpenMV IDE also provides
auto-completion support hover tool-tips on keywords. So, after typing ``.`` for
example in Python OpenMV IDE will detect that you're about to write a function
or method name and it will show you an auto-completion text box. Once you've
@@ -353,7 +353,7 @@ your computer first before playing them since disk I/O over USB on your OpenMV
Cam is slow.

Finally, FFMPEG is used to provide conversion and video player support and may
be used for any non-OpenMV Cam activites you like. FFMPEG can convert/play
be used for any non-OpenMV Cam activities you like. FFMPEG can convert/play
a large number of file formats.

*
2 changes: 1 addition & 1 deletion docs/_sources/openmvcam/tutorial/overview.rst.txt
@@ -27,7 +27,7 @@ happily forever.
But, if I want to use a 2D sensor like a camera everything changes.
Since a camera sensor generates tons of data a faster more resourceful computer is
required to process the sensor data. Now, with this faster more powerful computer
you can definately do many more things. But, what if my goal were still to just
you can definitely do many more things. But, what if my goal were still to just
turn on an LED? Spin a motor? Or open a valve? Could I still get the Arduino
like experience with a camera sensor?

4 changes: 2 additions & 2 deletions docs/_sources/openmvcam/tutorial/system_architecture.rst.txt
@@ -16,7 +16,7 @@ revved the OpenMV Cam with a faster and faster main processor the SDR DRAM speed
also has not kept up with the internal RAM speed. On the STM32H7 for example the
internal RAM bandwidth is 3.2GB/s versus a maximum SDR RAM bandwidth of 666MB/s
even if we built the system with an 8-layer board using a 32-bit DRAM bus
requring 50+ I/O pins for the DRAM.
requiring 50+ I/O pins for the DRAM.

So, since we're built on the STM32 architecture and limited to using expensive
and slow SDR DRAM for now we haven't added it as our internal SRAM is way faster.
@@ -28,7 +28,7 @@ Memory Architecture

Given the above memory architecture limitations we built all of our code to run
inside of the STM32 microcontroller memory. However, the STM32 doesn't have one
large contigous memory map. It features different segments of RAM for different
large contiguous memory map. It features different segments of RAM for different
situations.

First, there's a segment of RAM which contains global variables, the heap, and
2 changes: 1 addition & 1 deletion docs/_sources/reference/filesystem.rst.txt
@@ -35,7 +35,7 @@ mounted.
On STM32 / Pyboard, the internal flash is mounted at ``/flash``, and optionally
the SDCard at ``/sd``. On ESP8266/ESP32, the primary filesystem is mounted at
``/``. On the OpenMV Cam the internal flash is mounted at ``/`` unless an SDCard
is installed which will be moutned at ``/`` instead.
is installed which will be mounted at ``/`` instead.
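
A quick way to check what is mounted (standard `os` module calls)::

    import os

    print(os.listdir("/"))      # filesystem root (internal flash or SD card)
    print(os.statvfs("/"))      # block size, total/free blocks, etc.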

Block devices
-------------
2 changes: 1 addition & 1 deletion docs/differences/python_310.html
@@ -424,7 +424,7 @@

<div role="contentinfo">
<p>&#169; Copyright - The MicroPython Documentation is Copyright © 2014-2024, Damien P. George, Paul Sokolovsky, and contributors.
<span class="lastupdated">Last updated on 23 Jul 2024.
<span class="lastupdated">Last updated on 31 Jul 2024.
</span></p>
</div>

2 changes: 1 addition & 1 deletion docs/differences/python_35.html
@@ -375,7 +375,7 @@

<div role="contentinfo">
<p>&#169; Copyright - The MicroPython Documentation is Copyright © 2014-2024, Damien P. George, Paul Sokolovsky, and contributors.
<span class="lastupdated">Last updated on 23 Jul 2024.
<span class="lastupdated">Last updated on 31 Jul 2024.
</span></p>
</div>
