Updated the unsupported_dtypes documentation by adding unsupported_dt…
mosesdaudu001 authored and druvdub committed Oct 14, 2023
1 parent 383d588 commit bfd27eb
Showing 1 changed file with 67 additions and 0 deletions.
docs/overview/deep_dive/data_types.rst
@@ -474,6 +474,73 @@ However, torch does not support ``uint32``, and so we cannot fully adhere to the
Rather than breaking this rule and returning arrays of type ``uint8`` only with a torch backend, we instead opt to remove official support entirely for this combination of data type, function, and backend framework.
This will avoid all of the potential confusion that could arise if we were to have inconsistent and unexpected outputs when using officially supported data types in Ivy.

Another important point to note is the handling of cases where an entire dtype series is either supported or unsupported. For example, if ``float16``, ``float32``, and ``float64`` are all unsupported (or all supported) by a framework, which could be a backend or frontend framework, then we identify this by simply replacing the individual float dtypes with the string ``float``. The same logic is applied to other dtype series such as ``complex``, where the individual complex dtypes are replaced with the string ``complex``.
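
To make this concrete, below is a minimal, hypothetical sketch of how such a dtype-series string might be expanded into its concrete member dtypes. This is for illustration only and is not Ivy's actual implementation; the ``DTYPE_SERIES`` mapping and ``expand_dtypes`` helper are assumed names, and the real series may contain additional members (e.g. ``bfloat16``).

.. code-block:: python

    # Hypothetical mapping from a dtype-series string to its members.
    DTYPE_SERIES = {
        "float": ("float16", "float32", "float64"),
        "complex": ("complex64", "complex128"),
        "int": ("int8", "int16", "int32", "int64"),
    }

    def expand_dtypes(dtypes):
        # Replace any series string with its members; pass concrete
        # dtypes through unchanged.
        expanded = []
        for dtype in dtypes:
            expanded.extend(DTYPE_SERIES.get(dtype, (dtype,)))
        return tuple(expanded)

    assert expand_dtypes(("complex", "uint8")) == (
        "complex64", "complex128", "uint8",
    )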

An example is :func:`ivy.fmin` with a tensorflow backend:

.. code-block:: python

    @with_supported_dtypes({"2.13.0 and below": ("float",)}, backend_version)
    def fmin(
        x1: Union[tf.Tensor, tf.Variable],
        x2: Union[tf.Tensor, tf.Variable],
        /,
        *,
        out: Optional[Union[tf.Tensor, tf.Variable]] = None,
    ) -> Union[tf.Tensor, tf.Variable]:
        x1, x2 = promote_types_of_inputs(x1, x2)
        x1 = tf.where(tf.math.is_nan(x1), x2, x1)
        x2 = tf.where(tf.math.is_nan(x2), x1, x2)
        ret = tf.experimental.numpy.minimum(x1, x2)
        return ret

As seen in the above code, we simply use the string ``float`` instead of listing all the supported float dtypes.

Another example is :func:`ivy.floor_divide` with a tensorflow backend:

.. code-block:: python

    @with_unsupported_dtypes({"2.13.0 and below": ("complex",)}, backend_version)
    def floor_divide(
        x1: Union[float, tf.Tensor, tf.Variable],
        x2: Union[float, tf.Tensor, tf.Variable],
        /,
        *,
        out: Optional[Union[tf.Tensor, tf.Variable]] = None,
    ) -> Union[tf.Tensor, tf.Variable]:
        x1, x2 = ivy.promote_types_of_inputs(x1, x2)
        return tf.experimental.numpy.floor_divide(x1, x2)

As seen in the above code, we simply use the string ``complex`` instead of listing all the unsupported complex dtypes.
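
As a quick check on the effect of these decorators, the resolved dtypes can be inspected with Ivy's :func:`ivy.function_supported_dtypes` and :func:`ivy.function_unsupported_dtypes` helpers. A minimal sketch follows; the exact tuples returned will depend on the backend version in use.

.. code-block:: python

    import ivy

    ivy.set_backend("tensorflow")

    # The "float" series in the fmin decorator above expands to the
    # concrete supported float dtypes.
    print(ivy.function_supported_dtypes(ivy.fmin))

    # Likewise, the "complex" series expands to the concrete complex
    # dtypes that floor_divide does not support.
    print(ivy.function_unsupported_dtypes(ivy.floor_divide))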

Supported and Unsupported Data Types Attributes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In addition to the supported and unsupported data type decorators, we also have the :attr:`unsupported_dtypes` and :attr:`supported_dtypes` attributes. These attributes operate in a manner similar to the :attr:`@with_unsupported_dtypes` and :attr:`@with_supported_dtypes` decorators.

Special Case
""""""""""""

However, the major difference between the attributes and the decorators is that the attributes are set and assigned in the Ivy function itself (:mod:`ivy/functional/ivy/<ivy_functional_API>`),
while the decorators are used within the frontend (:mod:`ivy/functional/frontends/<some_frontend>`) and backend (:mod:`ivy/functional/backends/<some_backend>`) to identify the supported or unsupported data types, depending on the use case.
The attributes are set for functions that don't have a specific implementation for each backend. Since all Ivy functions are framework agnostic, the backend names are provided as part of the value assigned to the attribute, which allows the function to identify the supported or unsupported dtypes for each backend.

An example of an Ivy function which does not have a specific backend implementation for each backend is the :attr:`einops_reduce` function. `This function <https://github.com/unifyai/ivy/blob/8516d3f12a8dfc4ec5f819789937d196c7e28566/ivy/functional/ivy/general.py#L1964>`_ makes use of the third-party library :attr:`einops`, which has its own backend-agnostic implementations.

The :attr:`unsupported_dtypes` and :attr:`supported_dtypes` attributes are each assigned a dictionary that maps backend frameworks to the corresponding unsupported (or supported) dtypes. Based on this, the specific unsupported dtypes are set for the given function every time the function is called.
For example, we use the :attr:`unsupported_dtypes` attribute for the :attr:`einops_reduce` function within the Ivy functional API as shown below:

.. code-block:: python

    einops_reduce.unsupported_dtypes = {
        "torch": ("float16",),
        "tensorflow": ("complex",),
        "paddle": ("complex", "uint8", "int8", "int16", "float16"),
    }
With the above approach, we ensure that any time the backend is set to torch, the :attr:`einops_reduce` function does not support ``float16``. Likewise, complex dtypes are not supported with a tensorflow backend, and ``complex``, ``uint8``, ``int8``, ``int16``, and ``float16`` are not supported with a paddle backend.
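
To illustrate how such an attribute could be consulted at call time, below is a minimal, hypothetical sketch. This is not Ivy's actual dispatch logic; the ``check_unsupported`` helper is an assumed name, and the ``startswith`` comparison simply mimics the dtype-series shorthand described above.

.. code-block:: python

    import ivy

    def check_unsupported(fn, x):
        # Look up the dtypes declared unsupported for the currently
        # set backend, defaulting to an empty tuple.
        backend = ivy.current_backend_str()
        unsupported = getattr(fn, "unsupported_dtypes", {}).get(backend, ())
        dtype = str(ivy.dtype(x))
        # A series string such as "complex" matches every member of
        # the series, e.g. "complex64" and "complex128".
        if any(dtype.startswith(d) for d in unsupported):
            raise ValueError(
                f"{dtype} is unsupported by {fn.__name__} "
                f"with a {backend} backend"
            )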

Backend Data Type Bugs
----------------------
