AttributeError: module 'chromadb' has no attribute 'PersistentClient' #208

Open
HowlWolf1209 opened this issue Dec 22, 2023 · 1 comment

@HowlWolf1209
When I ran Extras with the chromadb module enabled (python server.py --enable-modules=chromadb), an error occurred:
(extras) D:\AI_writer\SillyTavern-Launcher\SillyTavern-extras>python server.py --enable-modules=chromadb
Using torch device: cpu
Initializing ChromaDB
Traceback (most recent call last):
File "D:\AI_writer\SillyTavern-Launcher\SillyTavern-extras\server.py", line 282, in
chromadb_client = chromadb.PersistentClient(path=args.chroma_folder, settings=Settings(anonymized_telemetry=False))
^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'chromadb' has no attribute 'PersistentClient'
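
For reference, chromadb.PersistentClient only exists in chromadb 0.4.0 and later; on a 0.3.x install the module genuinely has no such attribute, which matches this error. A minimal sketch for checking which API the installed version exposes (the .chroma_db path is illustrative, not necessarily what server.py uses):

import chromadb

print(chromadb.__version__)

if hasattr(chromadb, "PersistentClient"):
    # chromadb >= 0.4.0: persistent storage via PersistentClient
    client = chromadb.PersistentClient(path=".chroma_db")
else:
    # chromadb 0.3.x: persistence was configured through Settings instead
    from chromadb.config import Settings
    client = chromadb.Client(Settings(
        chroma_db_impl="duckdb+parquet",
        persist_directory=".chroma_db",
    ))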

I've tried running pip install --upgrade chromadb, and it doesn't work.
I've also tried running python server.py --enable-modules=chromadb again, and it doesn't work either.
Then I installed a later version of chromadb (pip install chromadb==0.4.X) and ran python server.py --enable-modules=chromadb, and a new error occurred:

Initializing ChromaDB
ChromaDB is running in-memory with persistence. Persistence is stored in .chroma_db. Can be cleared by deleting the folder or purging db.
Traceback (most recent call last):
File "D:\AI_writer\SillyTavern-Launcher\SillyTavern-extras\server.py", line 294, in
chromadb_embed_fn = embedding_functions.SentenceTransformerEmbeddingFunction(embedding_model, device=device_string)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HowlWolf\miniconda3\envs\extras\Lib\site-packages\chromadb\utils\embedding_functions.py", line 41, in init
self.models[model_name] = SentenceTransformer(model_name, device=device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HowlWolf\miniconda3\envs\extras\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 95, in init
modules = self._load_sbert_model(model_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HowlWolf\miniconda3\envs\extras\Lib\site-packages\sentence_transformers\SentenceTransformer.py", line 840, in _load_sbert_model
module = module_class.load(os.path.join(model_path, module_config['path']))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HowlWolf\miniconda3\envs\extras\Lib\site-packages\sentence_transformers\models\Transformer.py", line 137, in load
return Transformer(model_name_or_path=input_path, **config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HowlWolf\miniconda3\envs\extras\Lib\site-packages\sentence_transformers\models\Transformer.py", line 29, in init
self._load_model(model_name_or_path, config, cache_dir)
File "C:\Users\HowlWolf\miniconda3\envs\extras\Lib\site-packages\sentence_transformers\models\Transformer.py", line 49, in _load_model
self.auto_model = AutoModel.from_pretrained(model_name_or_path, config=config, cache_dir=cache_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HowlWolf\miniconda3\envs\extras\Lib\site-packages\transformers\models\auto\auto_factory.py", line 566, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HowlWolf\miniconda3\envs\extras\Lib\site-packages\transformers\modeling_utils.py", line 3706, in from_pretrained
) = cls._load_pretrained_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\HowlWolf\miniconda3\envs\extras\Lib\site-packages\transformers\modeling_utils.py", line 4166, in _load_pretrained_model
raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
RuntimeError: Error(s) in loading state_dict for MPNetModel:
size mismatch for embeddings.word_embeddings.weight: copying a param with shape torch.Size([28996, 768]) from checkpoint, the shape in current model is torch.Size([30527, 768]).
size mismatch for embeddings.position_embeddings.weight: copying a param with shape torch.Size([512, 768]) from checkpoint, the shape in current model is torch.Size([514, 768]).
You may consider adding ignore_mismatched_sizes=True in the model from_pretrained method.
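
The mismatched shapes look like a BERT-style checkpoint (vocabulary 28996, 512 positions) being loaded into an MPNet configuration (vocabulary 30527, 514 positions), which usually points to a stale or mixed-up model cache rather than a code bug. A minimal sketch for re-downloading the model into a fresh folder, bypassing the possibly corrupted cache; the model name and cache folder are illustrative assumptions:

from sentence_transformers import SentenceTransformer

# Download into a clean folder so any corrupted cached files are ignored.
model = SentenceTransformer(
    "sentence-transformers/all-mpnet-base-v2",  # assumed embedding model
    cache_folder="./fresh_st_cache",            # illustrative fresh cache location
)
print(model.encode(["smoke test"]).shape)  # this model produces 768-dim vectors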

Has anyone who has run into this problem found a solution?

@Technologicat
Contributor

The old Smart Context extension has been superseded by the built-in Vector Storage extension.

Vector Storage does not need ChromaDB.

To upgrade:

  • Make sure both your SillyTavern and your ST-extras are up to date.
  • Configure your ST-extras server to load the embeddings module (an example launch command follows this list).
    • Optionally, you can choose a custom text embedding model just as before, using the --embedding-model command-line argument of the ST-extras server.
    • The embeddings module uses the device (CPU or GPU) that you have configured your ST-extras to use. As usual, see the --cpu (CPU, default), --cuda (GPU) and --cuda-device command-line arguments.
    • The chromadb module in ST-extras, and the chromadb Python library, are no longer needed.
  • Restart your ST-extras server.
  • Restart SillyTavern, just to be safe.
  • In the ST GUI: Extensions ⊳ Vector Storage ⊳ Vectorization Source: choose Extras
  • Done! You should now have Vector Storage working with the fast Extras provider.
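
For concreteness, a typical launch command for the setup above might look like the following; the --enable-modules value mirrors the syntax used earlier in this thread, and the embedding model name is only an example:

python server.py --enable-modules=embeddings --embedding-model sentence-transformers/all-mpnet-base-v2 --cpu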
