Improved deeplake.py init documentation #10735

Merged 1 commit on Feb 14, 2024
@@ -69,37 +69,24 @@ def __init__(
) -> None:
"""
Args:
-    dataset_path (str): Path to the deeplake dataset, where data will be
-        stored. Defaults to "llama_index".
-    overwrite (bool, optional): Whether to overwrite existing dataset with same
-        name. Defaults to False.
-    token (str, optional): the deeplake token that allows you to access the
-        dataset with proper access. Defaults to None.
-    read_only (bool, optional): Whether to open the dataset with read only mode.
-    ingestion_batch_size (int): used for controlling batched data
-        ingestion to deeplake dataset. Defaults to 1024.
+    dataset_path (str): The full path for storing the Deep Lake Vector Store. It can be:
+        - a Deep Lake cloud path of the form ``hub://org_id/dataset_name``. Requires registration with Deep Lake.
+        - an s3 path of the form ``s3://bucketname/path/to/dataset``. Credentials are required in either the environment or passed to the creds argument.
+        - a local file system path of the form ``./path/to/dataset``, ``~/path/to/dataset`` or ``path/to/dataset``.
+        - a memory path of the form ``mem://path/to/dataset`` which doesn't save the dataset but keeps it in memory instead. Should be used only for testing as it does not persist.
+        Defaults to "llama_index".
+    overwrite (bool, optional): If set to True, this overwrites the Vector Store if it already exists. Defaults to False.
+    token (str, optional): Activeloop token, used for fetching user credentials. This is optional; tokens are normally autogenerated. Defaults to None.
+    read_only (bool, optional): Opens the dataset in read-only mode if True. Defaults to False.
+    ingestion_batch_size (int): During data ingestion, data is divided
+        into batches. Batch size is the size of each batch. Defaults to 1024.
ingestion_num_workers (int): number of workers to use during data ingestion.
Defaults to 4.
-    overwrite (bool): Whether to overwrite existing dataset with the
-        new dataset with the same name.
-    exec_option (str): Default method for search execution. It could be either
-        ``"python"``, ``"compute_engine"`` or ``"tensor_db"``. Defaults to ``"python"``.
-        - ``python`` - Pure-python implementation that runs on the client and
-          can be used for data stored anywhere. WARNING: using this option
-          with big datasets is discouraged because it can lead to memory
-          issues.
-        - ``compute_engine`` - Performant C++ implementation of the Deep Lake
-          Compute Engine that runs on the client and can be used for any data
-          stored in or connected to Deep Lake. It cannot be used with
-          in-memory or local datasets.
-        - ``tensor_db`` - Performant and fully-hosted Managed Tensor Database
-          that is responsible for storage and query execution. Only available
-          for data stored in the Deep Lake Managed Database. Store datasets in
-          this database by specifying runtime = {"tensor_db": True} during
-          dataset creation.
-    verbose (bool): Specify if verbose output is enabled. Default is True.
-    **kwargs (Any): Additional keyword arguments.
+    exec_option (str): Default method for search execution. It can be ``"auto"``, ``"python"``, ``"compute_engine"`` or ``"tensor_db"``. Defaults to ``"auto"``; if None is passed, it is set to ``"auto"``.
+        - ``auto`` - Selects the best execution method based on the storage location of the Vector Store. It is the default option.
+        - ``python`` - Pure-python implementation that runs on the client and can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged because it can lead to memory issues.
+        - ``compute_engine`` - Performant C++ implementation of the Deep Lake Compute Engine that runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets.
+        - ``tensor_db`` - Performant and fully-hosted Managed Tensor Database that is responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. Store datasets in this database by specifying runtime = {"tensor_db": True} during dataset creation.

Raises:
ImportError: Unable to import `deeplake`.
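
For reference, a minimal usage sketch of the documented dataset_path forms. The import path below matches recent llama-index layouts but varies across versions, so treat it as an assumption rather than the canonical API.

from llama_index.vector_stores.deeplake import DeepLakeVectorStore

# Local filesystem store; overwrite=True replaces an existing store at this path.
local_store = DeepLakeVectorStore(
    dataset_path="./path/to/dataset",
    overwrite=True,
    ingestion_batch_size=1024,
    ingestion_num_workers=4,
)

# Deep Lake cloud store; requires registration with Deep Lake. The token is
# normally autogenerated from your credentials but can be passed explicitly.
cloud_store = DeepLakeVectorStore(
    dataset_path="hub://org_id/dataset_name",
    token=None,
    read_only=True,
)

# In-memory store; nothing is persisted, so use this only for testing.
mem_store = DeepLakeVectorStore(dataset_path="mem://test_dataset")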
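
Similarly, a sketch of choosing exec_option. The runtime argument shown for tensor_db is assumed to be forwarded through **kwargs to the underlying Deep Lake dataset creation, as the docstring describes; that forwarding is an assumption, not something this diff confirms.

from llama_index.vector_stores.deeplake import DeepLakeVectorStore

# "auto" (the default) picks the best execution method for the storage location.
auto_store = DeepLakeVectorStore(dataset_path="./local_dataset")

# Managed Tensor Database: only available for data stored in the Deep Lake
# Managed Database, which the docstring says is requested with
# runtime={"tensor_db": True} at dataset creation (assumed to pass through **kwargs).
managed_store = DeepLakeVectorStore(
    dataset_path="hub://org_id/managed_dataset",
    exec_option="tensor_db",
    runtime={"tensor_db": True},
)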