diff --git a/docs/extend/modular-resources/examples/triton.md b/docs/extend/modular-resources/examples/triton.md
index 9ea3108be1..31dd42e7fd 100644
--- a/docs/extend/modular-resources/examples/triton.md
+++ b/docs/extend/modular-resources/examples/triton.md
@@ -128,7 +128,7 @@ The following attributes are available for the MLModel service `viam:mlmodelserv
| `model_version` | int | Optional | The version of the model to load. If not specified, the module uses the newest version of the model named by `model_name`.
Default: `-1` (newest) |
| `preferred_input_memory_type` | string | Optional | One of `cpu`, `cpu-pinned`, or `gpu`. This controls the type of memory the module allocates for input tensors. If not specified, this defaults to `cpu` if no CUDA-capable devices are detected at runtime, or to `gpu` if CUDA-capable devices are found.|
| `preferred_input_memory_type_id` | int | Optional | The CUDA device ID on which to allocate `gpu` or `cpu-pinned` input tensors. You probably don't need to change this unless you have multiple GPUs.
Default: `0` (first device) |
-| `tensor_name_remappings` | string | Optional | Provides two dictionaries under the `input` and `output` keys that rename the models' tensors. Other Viam services, like the [vision service](/services/vision/) may expect to work with tensors with particular names. Use this map to rename the tensors from the loaded model as needed to meet those requirements.
Default: `{}` |
+| `tensor_name_remappings` | obj | Optional | Provides two dictionaries under the `input` and `output` keys that rename the model's tensors. Other Viam services, like the [vision service](/services/vision/), may expect to work with tensors with particular names. Use this map to rename the tensors from the loaded model as needed to meet those requirements.
Default: `{}` |
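+For example, a `tensor_name_remappings` value might look like the following (the tensor names shown here are hypothetical; use the names your model actually exposes and the names the consuming service expects):
+
+```json
+"tensor_name_remappings": {
+  "input": {
+    "images": "input"
+  },
+  "output": {
+    "output0": "location",
+    "output1": "category"
+  }
+}
+```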
A minimal example configuration, within your robot's `"services"` array, would look like this: