diff --git a/RELEASE.md b/RELEASE.md
index e51c4d81b..3861f6647 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -4,26 +4,26 @@
 - Configurable document count limit for `add_documents()` calls (https://github.com/marqo-ai/marqo/pull/592). This mitigates Marqo getting overloaded due to add_documents requests with a very high number of documents. If you are adding documents in batches larger than the default (64), you will now receive an error. You can ensure your add_documents request complies to this limit by setting the Python client’s `client_batch_size` or changing this
-limit via the `MARQO_MAX_ADD_DOCS_COUNT` variable. Read more on configuring the doc count limit [here](https://marqo.pages.dev/1.4.0/Guides/Advanced-Usage/configuration/#configuring-usage-limits).
+limit via the `MARQO_MAX_ADD_DOCS_COUNT` variable. Read more on configuring the doc count limit [here](https://docs.marqo.ai/1.4.0/Guides/Advanced-Usage/configuration/#configuring-usage-limits).
 - Default `refresh` value for `add_documents()` and `delete_documents()` set to `false` (https://github.com/marqo-ai/marqo/pull/601). This prevents unnecessary refreshes, which can negatively impact search and add_documents performance, especially for applications that are constantly adding or deleting documents. If you search or get documents immediately after adding or deleting documents, you may still get some extra or missing documents. To see results of these operations more immediately, simply set the `refresh` parameter to `true`. Read more on this parameter
-[here](https://marqo.pages.dev/1.4.0/API-Reference/Documents/add_or_replace_documents/#query-parameters).
+[here](https://docs.marqo.ai/1.4.0/API-Reference/Documents/add_or_replace_documents/#query-parameters).
 ## New Features
 - Custom vector field type added (https://github.com/marqo-ai/marqo/pull/610). You can now add externally generated vectors to Marqo documents! See
-usage [here](https://marqo.pages.dev/1.4.0/Guides/Advanced-Usage/document_fields/#custom-vector-object).
+usage [here](https://docs.marqo.ai/1.4.0/Guides/Advanced-Usage/document_fields/#custom-vector-object).
 - `no_model` option added for index creation (https://github.com/marqo-ai/marqo/pull/617). This allows for indexes that do no vectorisation, providing
-easy use of custom vectors with no risk of accidentally mixing them up with Marqo-generated vectors. See usage [here](https://marqo.pages.dev/1.4.0/API-Reference/Indexes/create_index/#no-model).
+easy use of custom vectors with no risk of accidentally mixing them up with Marqo-generated vectors. See usage [here](https://docs.marqo.ai/1.4.0/API-Reference/Indexes/create_index/#no-model).
 - The search endpoint's `q` parameter is now optional if `context` vectors are provided. (https://github.com/marqo-ai/marqo/pull/617). This is
-particularly useful when using context vectors to search across your documents that have custom vector fields. See usage [here](https://marqo.pages.dev/1.4.0/API-Reference/Search/search/#context).
+particularly useful when using context vectors to search across your documents that have custom vector fields. See usage [here](https://docs.marqo.ai/1.4.0/API-Reference/Search/search/#context).
 - Configurable retries added to backend requests (https://github.com/marqo-ai/marqo/pull/623). This makes `add_documents()` and `search()` requests more resilient to transient network errors. Use with caution, as retries in Marqo will change the consistency guarantees for these endpoints. For more control over retry error handling, you can leave retry attempts at the default value (0) and implement your own backend communication error handling.
-See retry configuration instructions and how it impacts these endpoints' behaviour [here](https://marqo.pages.dev/1.4.0/Guides/Advanced-Usage/configuration/#configuring-marqo-os-request-retries).
+See retry configuration instructions and how it impacts these endpoints' behaviour [here](https://docs.marqo.ai/1.4.0/Guides/Advanced-Usage/configuration/#configuring-marqo-os-request-retries).
 - More informative `delete_documents()` response (https://github.com/marqo-ai/marqo/pull/619). The response object now includes a list of document
-ids, status codes, and results (success or reason for failure). See delete documents usage [here](https://marqo.pages.dev/1.4.0/API-Reference/Documents/delete_documents/).
+ids, status codes, and results (success or reason for failure). See delete documents usage [here](https://docs.marqo.ai/1.4.0/API-Reference/Documents/delete_documents/).
 - Friendlier startup experience (https://github.com/marqo-ai/marqo/pull/600). Startup output has been condensed, with unhelpful log messages removed. More detailed logs can be accessed by setting `MARQO_LOG_LEVEL` to `debug`.
@@ -44,7 +44,7 @@ More detailed logs can be accessed by setting `MARQO_LOG_LEVEL` to `debug`.
 ## New features
-- New E5 models added to model registry (https://github.com/marqo-ai/marqo/pull/568). E5 V2 and Multilingual E5 models are now available for use. The new E5 V2 models outperform their E5 counterparts in the BEIR benchmark, as seen [here](https://github.com/microsoft/unilm/tree/master/e5#english-pre-trained-models). See all available models [here](https://marqo.pages.dev/1.2.0/Models-Reference/dense_retrieval/).
+- New E5 models added to model registry (https://github.com/marqo-ai/marqo/pull/568). E5 V2 and Multilingual E5 models are now available for use. The new E5 V2 models outperform their E5 counterparts in the BEIR benchmark, as seen [here](https://github.com/microsoft/unilm/tree/master/e5#english-pre-trained-models). See all available models [here](https://docs.marqo.ai/1.2.0/Models-Reference/dense_retrieval/).
 - Dockerfile optimisation (https://github.com/marqo-ai/marqo/pull/569). A pre-built Marqo base image results in reduced image layers and increased build speed, meaning neater docker pulls and an overall better development experience.
@@ -67,9 +67,9 @@ More detailed logs can be accessed by setting `MARQO_LOG_LEVEL` to `debug`.
 ## New features
-- Storage status in health check endpoint (https://github.com/marqo-ai/marqo/pull/555 & https://github.com/marqo-ai/marqo/pull/559). The `GET /indexes/{index-name}/health` endpoint's `backend` object will now return the boolean `storage_is_available`, to indicate if there is remaining storage space. If space is not available, health status will now return `yellow`. See [here](https://marqo.pages.dev/1.2.0/API-Reference/health/) for detailed usage.
+- Storage status in health check endpoint (https://github.com/marqo-ai/marqo/pull/555 & https://github.com/marqo-ai/marqo/pull/559). The `GET /indexes/{index-name}/health` endpoint's `backend` object will now return the boolean `storage_is_available`, to indicate if there is remaining storage space. If space is not available, health status will now return `yellow`. See [here](https://docs.marqo.ai/1.2.0/API-Reference/health/) for detailed usage.
-- Score Modifiers search optimization (https://github.com/marqo-ai/marqo/pull/566). This optimization reduces latency for searches with the `score_modifiers` parameter when field names or weights are changed. See [here](https://marqo.pages.dev/1.2.0/API-Reference/search/#score-modifiers) for detailed usage.
+- Score Modifiers search optimization (https://github.com/marqo-ai/marqo/pull/566). This optimization reduces latency for searches with the `score_modifiers` parameter when field names or weights are changed. See [here](https://docs.marqo.ai/1.2.0/API-Reference/search/#score-modifiers) for detailed usage.
 ## Bug fixes and minor changes
@@ -194,7 +194,7 @@ This can help enhance throughput and performance for certain workloads. Please s
 ## New features
 - Custom model pre-loading (https://github.com/marqo-ai/marqo/pull/475). Public CLIP and OpenCLIP models specified by URL can now be loaded on Marqo startup via the `MARQO_MODELS_TO_PRELOAD` environment variable. These must be formatted as JSON objects with `model` and `model_properties`.
-  See [here (configuring pre-loaded models)](https://marqo.pages.dev/0.0.20/Advanced-Usage/configuration/#configuring-preloaded-models) for usage.
+  See [here (configuring pre-loaded models)](https://docs.marqo.ai/0.0.20/Advanced-Usage/configuration/#configuring-preloaded-models) for usage.
 ## Bug fixes and minor changes
 - Fixed arm64 build issue caused by package version conflicts (https://github.com/marqo-ai/marqo/pull/478)
@@ -386,7 +386,7 @@ Thank you to our 2.2k stargazers and 80+ forkers!
 # Release 0.0.10
 ## New features
-- Generic model support (https://github.com/marqo-ai/marqo/pull/179). Create an index with your favourite SBERT-type models from HuggingFace! Read about usage [here](https://marqo.pages.dev/0.0.10/Models-Reference/dense_retrieval/#generic-models)
+- Generic model support (https://github.com/marqo-ai/marqo/pull/179). Create an index with your favourite SBERT-type models from HuggingFace! Read about usage [here](https://docs.marqo.ai/0.0.10/Models-Reference/dense_retrieval/#generic-models)
 - Visual search update 2. (https://github.com/marqo-ai/marqo/pull/214). Search-time image reranking and open-vocabulary localization, based on users' queries, is now available with the Owl-ViT model. **Locate the part of the image corresponding to your query!** Read about usage [here](https://docs.marqo.ai/0.0.10/Models-Reference/reranking/)
 - Visual search update 1. (https://github.com/marqo-ai/marqo/pull/214). Better image patching. In addition to faster-rcnn, you can now use yolox or attention based (DINO) region proposal as a patching method at indexing time. This allows localization as the sub patches of the image can be searched. Read about usage [here](https://docs.marqo.ai/0.0.10/Preprocessing/Images/).
@@ -500,13 +500,13 @@ Added Open CLIP models and added features to the get document endpoint.
 ## New features
 - Added Open CLIP models ([#116](https://github.com/marqo-ai/marqo/pull/116)).
-Read about usage [here](https://marqo.pages.dev/Models-Reference/dense_retrieval/#open-clip)
+Read about usage [here](https://docs.marqo.ai/Models-Reference/dense_retrieval/#open-clip)
 - Added the ability to get multiple documents by ID ([#122](https://github.com/marqo-ai/marqo/pull/122)).
-Read about usage [here](https://marqo.pages.dev/API-Reference/documents/#get-multiple-documents)
+Read about usage [here](https://docs.marqo.ai/API-Reference/documents/#get-multiple-documents)
 - Added the ability to get document tensor facets through the get document endpoint ([#122](https://github.com/marqo-ai/marqo/pull/122)).
-Read about usage [here](https://marqo.pages.dev/API-Reference/documents/#example_2)
+Read about usage [here](https://docs.marqo.ai/API-Reference/documents/#example_2)
 # Release 0.0.4
diff --git a/examples/ClothingCLI/README.md b/examples/ClothingCLI/README.md
index 37e6d1f78..9db2620af 100644
--- a/examples/ClothingCLI/README.md
+++ b/examples/ClothingCLI/README.md
@@ -35,7 +35,7 @@ Python 3.8
 python3 simple_marqo_demo.py
 ```
-For more information on Marqo's functions and features, please visit the [Marqo Documentation Page](https://marqo.pages.dev/).
+For more information on Marqo's functions and features, please visit the [Marqo Documentation Page](https://docs.marqo.ai/).
 ## Usage
 Feel free to checkout the code in order to have a better understanding on how Marqo functions are used :).
diff --git a/examples/ClothingCLI/simple_marqo_demo.py b/examples/ClothingCLI/simple_marqo_demo.py
index 01b0b516f..b6ee2167a 100644
--- a/examples/ClothingCLI/simple_marqo_demo.py
+++ b/examples/ClothingCLI/simple_marqo_demo.py
@@ -56,7 +56,7 @@ def search_index_text(index_name:str, query_text: str, search_method: str):
 )
 # Marqo also has other features such as searhcing based on a specific attribute field and query fitlering
- # refer to the documentation on how these features work (https://marqo.pages.dev/)
+ # refer to the documentation on how these features work (https://docs.marqo.ai/)
 return results
 def search_index_image(index_name:str, image_name: str):
diff --git a/examples/ClothingStreamlit/README.md b/examples/ClothingStreamlit/README.md
index 983dff7c6..b8c79457a 100644
--- a/examples/ClothingStreamlit/README.md
+++ b/examples/ClothingStreamlit/README.md
@@ -39,7 +39,7 @@ Python 3.8
 ```
 For more information on:
-- Marqo's functions and features, please visit the [Marqo Documentation Page](https://marqo.pages.dev/).
+- Marqo's functions and features, please visit the [Marqo Documentation Page](https://docs.marqo.ai/).
 - Streamlit's functions and features, please visit the [Streamlit Documentation Page](https://docs.streamlit.io/).
diff --git a/examples/ImageSearchLocalization/index_all_data.py b/examples/ImageSearchLocalization/index_all_data.py
index dc1b83c1f..41dc9780c 100644
--- a/examples/ImageSearchLocalization/index_all_data.py
+++ b/examples/ImageSearchLocalization/index_all_data.py
@@ -31,7 +31,7 @@ documents = [{"image_location":s3_uri, '_id':os.path.basename(s3_uri)} for s3_uri in locators]
 # if you have the images locally, see the instructions
-# here https://marqo.pages.dev/Advanced-Usage/images/ for the best ways to index
+# here https://docs.marqo.ai/Advanced-Usage/images/ for the best ways to index
 #####################################################
diff --git a/examples/MultiModalSearch/article.md b/examples/MultiModalSearch/article.md
index 39ec989c0..b9f26d900 100644
--- a/examples/MultiModalSearch/article.md
+++ b/examples/MultiModalSearch/article.md
@@ -278,7 +278,7 @@ The dataset consists of ~220,000 e-commerce products with images, text and some
 ### 3.2 Installing Marqo
-The first thing to do is start [Marqo](https://github.com/marqo-ai/marqo). To start, we can run the following [docker command](https://marqo.pages.dev/0.0.21/) from a terminal (for M-series Mac users see [here](https://marqo.pages.dev/0.0.21/m1_mac_users/)).
+The first thing to do is start [Marqo](https://github.com/marqo-ai/marqo). To start, we can run the following [docker command](https://docs.marqo.ai/0.0.21/) from a terminal (for M-series Mac users see [here](https://docs.marqo.ai/0.0.21/m1_mac_users/)).
 ```bash
 docker pull marqoai/marqo:latest
@@ -308,7 +308,7 @@ documents = data[['s3_http', '_id', 'price', 'blip_large_caption', 'aesthetic_sc
 ### 3.4 Create the Index
-Now we have the data prepared, we can [set up the index](https://marqo.pages.dev/0.0.21/API-Reference/indexes/). We will use a [ViT-L-14 from open clip](https://github.com/mlfoundations/open_clip) as the model. This model is very good to start with. It is recommended to [use a GPU](https://marqo.pages.dev/0.0.21/using_marqo_with_a_gpu/) (at least 4GB VRAM) otherwise a [smaller model](https://marqo.pages.dev/0.0.21/Models-Reference/dense_retrieval/#open-clip) can be used (although results may be worse).
+Now we have the data prepared, we can [set up the index](https://docs.marqo.ai/0.0.21/API-Reference/indexes/). We will use a [ViT-L-14 from open clip](https://github.com/mlfoundations/open_clip) as the model. This model is very good to start with. It is recommended to [use a GPU](https://docs.marqo.ai/0.0.21/using_marqo_with_a_gpu/) (at least 4GB VRAM) otherwise a [smaller model](https://docs.marqo.ai/0.0.21/Models-Reference/dense_retrieval/#open-clip) can be used (although results may be worse).
 ```python
 from marqo import Client
@@ -329,7 +329,7 @@ response = client.create_index(index_name, settings_dict=settings)
 ### 3.5 Add Images to the Index
-Now we can [add images](https://marqo.pages.dev/0.0.21/API-Reference/documents/) to the index which can then be searched over. We can also select the device we want to use and also which fields in the data to embed. To use a GPU, change the device to `cuda` (see [here](https://marqo.pages.dev/0.0.21/using_marqo_with_a_gpu/) for how to use Marqo with a GPU).
+Now we can [add images](https://docs.marqo.ai/0.0.21/API-Reference/documents/) to the index which can then be searched over. We can also select the device we want to use and also which fields in the data to embed. To use a GPU, change the device to `cuda` (see [here](https://docs.marqo.ai/0.0.21/using_marqo_with_a_gpu/) for how to use Marqo with a GPU).
 ```python
 device = 'cpu' # use 'cuda' if a GPU is available
@@ -339,7 +339,7 @@ res = client.index(index_name).add_documents(documents, client_batch_size=64, te
 ### 3.6 Searching
-Now the images are indexed, we can start [searching](https://marqo.pages.dev/0.0.21/API-Reference/search/).
+Now the images are indexed, we can start [searching](https://docs.marqo.ai/0.0.21/API-Reference/search/).
 ```python
 query = "green shirt"
@@ -403,7 +403,7 @@ res = client.index(index_name).search(query, searchable_attributes=['s3_http'],
 ### 3.12 Searching with Ranking
-We can now extend the search to also include document specific values to boost the [ranking of documents](https://marqo.pages.dev/0.0.21/API-Reference/search/#score-modifiers) in addition to the vector similarity. In this example, each document has a field called `aesthetic_score` which can also be used to bias the score of each document.
+We can now extend the search to also include document specific values to boost the [ranking of documents](https://docs.marqo.ai/0.0.21/API-Reference/search/#score-modifiers) in addition to the vector similarity. In this example, each document has a field called `aesthetic_score` which can also be used to bias the score of each document.
 ```python
 query = {"yellow handbag":1.0}
@@ -429,7 +429,7 @@ print(sum(r['aesthetic_score'] for r in res['hits']))
 ### 3.13 Searching with Popular or Liked Products
-Results at a per-query level can be personalized using sets of items. These items could be previously liked or popular items. To perform this we do it in two stages. The first is to calculate the "context vector" which is a condensed representation of the items. This is pre-computed and then stored to remove any additional overhead at query time. The context is generated by [creating documents](https://marqo.pages.dev/0.0.21/Advanced-Usage/document_fields/#multimodal-combination-object) of the item sets and retrieving the corresponding vectors.
+Results at a per-query level can be personalized using sets of items. These items could be previously liked or popular items. To perform this we do it in two stages. The first is to calculate the "context vector" which is a condensed representation of the items. This is pre-computed and then stored to remove any additional overhead at query time. The context is generated by [creating documents](https://docs.marqo.ai/0.0.21/Advanced-Usage/document_fields/#multimodal-combination-object) of the item sets and retrieving the corresponding vectors.
 The first step is to create a new index to calculate the context vectors.
 ```python
 # we create another index to create a context vector
@@ -445,7 +445,7 @@ settings = {
 res = client.create_index(index_name_context, settings_dict=settings)
 ```
-Then we [construct the objects](https://marqo.pages.dev/0.0.21/Advanced-Usage/document_fields/#multimodal-combination-object) from the sets of items we want to use for the context.
+Then we [construct the objects](https://docs.marqo.ai/0.0.21/Advanced-Usage/document_fields/#multimodal-combination-object) from the sets of items we want to use for the context.
 ```python
 # create the document that will be created from multiple images
@@ -472,7 +472,7 @@ document2 = {"_id":"2",
 ```
-We can now [define mappings](https://marqo.pages.dev/0.0.21/API-Reference/mappings/) objects to determine how we want to combine the different fields. We can then index the documents.
+We can now [define mappings](https://docs.marqo.ai/0.0.21/API-Reference/mappings/) objects to determine how we want to combine the different fields. We can then index the documents.
 ```python
 # define how we want to combined
@@ -500,7 +500,7 @@ res = client.index(index_name_context).add_documents([document1], tensor_fields=
 res = client.index(index_name_context).add_documents([document2], tensor_fields=["multimodal"], device=device, mappings=mappings2, auto_refresh=True)
 ```
-To get the vectors to use as context vectors at search time - we need to [retrieve the calculated vectors](https://marqo.pages.dev/0.0.21/API-Reference/documents/). We can then [create a context object](https://marqo.pages.dev/0.0.21/API-Reference/search/#context) that is used at search time.
+To get the vectors to use as context vectors at search time - we need to [retrieve the calculated vectors](https://docs.marqo.ai/0.0.21/API-Reference/documents/). We can then [create a context object](https://docs.marqo.ai/0.0.21/API-Reference/search/#context) that is used at search time.
 ```python
diff --git a/examples/MultiModalSearch/index_and_search.py b/examples/MultiModalSearch/index_and_search.py
index 74f733bdc..5034eb45d 100644
--- a/examples/MultiModalSearch/index_and_search.py
+++ b/examples/MultiModalSearch/index_and_search.py
@@ -11,7 +11,7 @@
 #######################################################################
 # run the following from the terminal
- # see https://marqo.pages.dev/0.0.21/
+ # see https://docs.marqo.ai/0.0.21/
 """
 docker pull marqoai/marqo:latest
@@ -38,10 +38,10 @@
 ############ Create the index ############
 #######################################################################
- # https://marqo.pages.dev/0.0.21/
+ # https://docs.marqo.ai/0.0.21/
 client = Client()
- # https://marqo.pages.dev/0.0.21/API-Reference/indexes/
+ # https://docs.marqo.ai/0.0.21/API-Reference/indexes/
 index_name = 'multimodal'
 settings = {
 "index_defaults": {
@@ -59,7 +59,7 @@
 ############ Index the data (image only) ############
 #######################################################################
- # https://marqo.pages.dev/0.0.21/API-Reference/documents/
+ # https://docs.marqo.ai/0.0.21/API-Reference/documents/
 device = 'cpu' # change to 'cuda' if GPU is available
 res = client.index(index_name).add_documents(documents, client_batch_size=64, tensor_fields=["s3_http"], device=device)
@@ -69,7 +69,7 @@
 ############ Search ############
 #######################################################################
- # https://marqo.pages.dev/0.0.21/API-Reference/search/
+ # https://docs.marqo.ai/0.0.21/API-Reference/search/
 query = "green shirt"
 res = client.index(index_name).search(query, searchable_attributes=['s3_http'], device=device, limit=10)
@@ -85,7 +85,7 @@
 ############ Searching with semantic filters ############
 #######################################################################
- # https://marqo.pages.dev/0.0.21/API-Reference/search/#query-q
+ # https://docs.marqo.ai/0.0.21/API-Reference/search/#query-q
 query = {"green shirt":1.0, "short sleeves":1.0}
 res = client.index(index_name).search(query, searchable_attributes=['s3_http'], device=device, limit=10)
@@ -127,7 +127,7 @@
 query = {"yellow handbag":1.0}
- # https://marqo.pages.dev/0.0.21/API-Reference/search/#score-modifiers
+ # https://docs.marqo.ai/0.0.21/API-Reference/search/#score-modifiers
 # we define the extra document specific data to use for ranking
 # multiple fields can be used to multiply or add to the vector similairty score
 score_modifiers = {
@@ -165,7 +165,7 @@
 res = client.create_index(index_name_context, settings_dict=settings)
- # https://marqo.pages.dev/0.0.21/Advanced-Usage/document_fields/#multimodal-combination-object
+ # https://docs.marqo.ai/0.0.21/Advanced-Usage/document_fields/#multimodal-combination-object
 # create the document that will be created from multiple images
 document1 = {"_id":"1",
 "multimodal":
@@ -188,7 +188,7 @@
 }
 }
- # https://marqo.pages.dev/0.0.21/API-Reference/mappings/
+ # https://docs.marqo.ai/0.0.21/API-Reference/mappings/
 # define how we want to comnbined
 mappings1 = {"multimodal": {"type": "multimodal_combination",
 "weights": {"top_1": 0.40,
diff --git a/src/marqo/tensor_search/models/add_docs_objects.py b/src/marqo/tensor_search/models/add_docs_objects.py
index 44054c331..daa852ebe 100644
--- a/src/marqo/tensor_search/models/add_docs_objects.py
+++ b/src/marqo/tensor_search/models/add_docs_objects.py
@@ -31,7 +31,7 @@ def validate_add_docs_count(docs: Union[Sequence[Union[dict, Any]], np.ndarray])
 raise BadRequestError(message=f"Number of docs in add documents request ({doc_count}) exceeds limit of {max_add_docs_count}. "
 f"This limit can be increased by setting the environment variable `{EnvVars.MARQO_MAX_ADD_DOCS_COUNT}`. "
 f"If using the Python client, break up your `add_documents` request into smaller batches using its `client_batch_size` parameter. "
- f"See https://marqo.pages.dev/1.3.0/API-Reference/documents/#body for more details.")
+ f"See https://docs.marqo.ai/1.3.0/API-Reference/documents/#body for more details.")
 return docs
diff --git a/src/marqo/tensor_search/on_start_script.py b/src/marqo/tensor_search/on_start_script.py
index 2930c95ab..cb83334cd 100644
--- a/src/marqo/tensor_search/on_start_script.py
+++ b/src/marqo/tensor_search/on_start_script.py
@@ -137,7 +137,7 @@ def __init__(self):
 f"Could not parse environment variable `{EnvVars.MARQO_MODELS_TO_PRELOAD}`. "
 f"Please ensure that this a JSON-encoded array of strings or dicts. For example:\n"
 f"""export {EnvVars.MARQO_MODELS_TO_PRELOAD}='["ViT-L/14", "onnx/all_datasets_v4_MiniLM-L6"]'"""
- f"""To add a custom model, it must be a dict with keys `model` and `model_properties` as defined in `https://marqo.pages.dev/0.0.20/Models-Reference/bring_your_own_model/`"""
+ f"""To add a custom model, it must be a dict with keys `model` and `model_properties` as defined in `https://docs.marqo.ai/0.0.20/Models-Reference/bring_your_own_model/`"""
 ) from e
 else:
 self.models = warmed_models
@@ -200,7 +200,7 @@ def _preload_model(model, content, device):
 except KeyError as e:
 raise errors.EnvVarError(
 f"Your custom model {model} is missing either `model` or `model_properties`."
- f"""To add a custom model, it must be a dict with keys `model` and `model_properties` as defined in `https://marqo.pages.dev/0.0.20/Advanced-Usage/configuration/#configuring-preloaded-models`"""
+ f"""To add a custom model, it must be a dict with keys `model` and `model_properties` as defined in `https://docs.marqo.ai/0.0.20/Advanced-Usage/configuration/#configuring-preloaded-models`"""
 ) from e
diff --git a/src/marqo/tensor_search/tensor_search.py b/src/marqo/tensor_search/tensor_search.py
index 3bb1a649a..77626ae54 100644
--- a/src/marqo/tensor_search/tensor_search.py
+++ b/src/marqo/tensor_search/tensor_search.py
@@ -646,7 +646,7 @@ def add_documents(config: Config, add_docs_params: AddDocsParams):
 s2_inference.ModelDownloadError) as model_error:
 raise errors.BadRequestError(
 message=f'Problem vectorising query. Reason: {str(model_error)}',
- link="https://marqo.pages.dev/latest/Models-Reference/dense_retrieval/"
+ link="https://docs.marqo.ai/latest/Models-Reference/dense_retrieval/"
 )
 except s2_inference_errors.S2InferenceError:
 document_is_valid = False
@@ -1646,7 +1646,7 @@ def vectorise_jobs(jobs: List[VectorisedJobs]) -> Dict[JHash, Dict[str, List[flo
 s2_inference.ModelDownloadError) as model_error:
 raise errors.BadRequestError(
 message=f'Problem vectorising query. Reason: {str(model_error)}',
- link="https://marqo.pages.dev/latest/Models-Reference/dense_retrieval/"
+ link="https://docs.marqo.ai/latest/Models-Reference/dense_retrieval/"
 )
 except s2_inference_errors.S2InferenceError as e:
@@ -2344,7 +2344,7 @@ def vectorise_multimodal_combination_field(
 s2_inference_errors.ModelLoadError) as model_error:
 raise errors.BadRequestError(
 message=f'Problem vectorising query. Reason: {str(model_error)}',
- link="https://marqo.pages.dev/1.4.0/Models-Reference/dense_retrieval/"
+ link="https://docs.marqo.ai/1.4.0/Models-Reference/dense_retrieval/"
 )
 except s2_inference_errors.S2InferenceError:
 combo_document_is_valid = False
diff --git a/src/marqo/tensor_search/validation.py b/src/marqo/tensor_search/validation.py
index 8091d9e42..dad8980c7 100644
--- a/src/marqo/tensor_search/validation.py
+++ b/src/marqo/tensor_search/validation.py
@@ -493,7 +493,7 @@ def validate_multimodal_combination(field_content, is_non_tensor_field, field_ma
 f"The multimodal_combination field `{field_content}` is an empty dictionary. "
 f"This is not a valid format of field content."
 f"If you aim to use multimodal_combination, it must contain at least 1 field. "
- f"please check `https://docs.marqo.ai/1.4.0/Advanced-Usage/document_fields/#multimodal-combination-object` for more info.")
+ f"please check `https://docs.marqo.ai/1.4.0/Guides/Advanced-Usage/document_fields/#multimodal-combination-object` for more info.")
 for key, value in field_content.items():
 if not ((type(key) in constants.ALLOWED_MULTIMODAL_FIELD_TYPES) and (
@@ -508,7 +508,7 @@ def validate_multimodal_combination(field_content, is_non_tensor_field, field_ma
 f"Multimodal-combination field content `{key}:{value}` \n "
 f"is not in the multimodal_field mappings weights `{field_mapping['weights']}`. Each sub_field requires a weight."
 f"Please add `{key}` to the mappings."
- f"Please check `https://docs.marqo.ai/1.4.0/Advanced-Usage/document_fields/#multimodal-combination-object` for more info.")
+ f"Please check `https://docs.marqo.ai/1.4.0/Guides/Advanced-Usage/document_fields/#multimodal-combination-object` for more info.")
 if is_non_tensor_field:
 raise InvalidArgError(
@@ -550,7 +550,7 @@ def validate_custom_vector(field_content: dict, is_non_tensor_field: bool, index
 except jsonschema.ValidationError as e:
 raise InvalidArgError(
 f"Invalid custom_vector field format. Reason: \n{str(e)}"
- f"\n For info on how to use custom_vector, please see: `https://docs.marqo.ai/1.4.0/Advanced-Usage/document_fields/#custom-vectors`"
+ f"\n For info on how to use custom_vector, please see: `https://docs.marqo.ai/1.4.0/Guides/Advanced-Usage/document_fields/#custom-vectors`"
 )
 # Fill in default content as empty string if not provided.
@@ -608,14 +608,14 @@ def validate_multimodal_combination_mappings_object(mappings_object: Dict):
 raise InvalidArgError(
 f"The multimodal_combination mapping `{mappings_object}` has an invalid child_field `{child_field}` of type `{type(child_field).__name__}`."
 f"In multimodal_combination fields, it must be a string."
- f"Please check `https://docs.marqo.ai/1.4.0/Advanced-Usage/document_fields/#multimodal-combination-object` for more info."
+ f"Please check `https://docs.marqo.ai/1.4.0/Guides/Advanced-Usage/document_fields/#multimodal-combination-object` for more info."
 )
 if not isinstance(weight, (float, int)):
 raise InvalidArgError(
 f"The multimodal_combination mapping `{mappings_object}` has an invalid weight `{weight}` of type `{type(weight).__name__}`."
 f"In multimodal_combination fields, weight must be an int or float."
- f"Please check `https://docs.marqo.ai/1.4.0/Advanced-Usage/document_fields/#multimodal-combination-object` for more info."
+ f"Please check `https://docs.marqo.ai/1.4.0/Guides/Advanced-Usage/document_fields/#multimodal-combination-object` for more info."
 )
 return mappings_object
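The patch above is a mechanical domain migration from `marqo.pages.dev` to `docs.marqo.ai`. After applying it, a quick way to confirm no stale links remain in a checkout is a scan like the following sketch (the function names and walk logic are illustrative, not part of the patch):

```python
import os

# Old documentation domain this patch replaces.
OLD_DOMAIN = "marqo.pages.dev"

def find_stale_links(text: str):
    """Return (line_number, line) pairs still pointing at the old docs domain."""
    return [(i, line) for i, line in enumerate(text.splitlines(), start=1)
            if OLD_DOMAIN in line]

def scan_tree(root: str):
    """Walk a checkout and report files that still reference the old domain."""
    hits = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    stale = find_stale_links(f.read())
            except (UnicodeDecodeError, OSError):
                continue  # skip binaries and unreadable files
            if stale:
                hits[path] = stale
    return hits
```

Running `scan_tree(".")` on the repository root should return an empty dict once every reference has been migrated.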