[Bug]: No model on ollama was used to answer #2595

Closed · wwjCMP opened this issue May 14, 2024 · 21 comments
Labels: bug (Something isn't working), Stale

Comments


wwjCMP commented May 14, 2024

What happened?

I have set up the Ollama model, but why doesn't it use the locally deployed model on Ollama to answer? The Ollama link is valid, because the Ollama embedding model works. However, the Ollama dialogue model is not invoked correctly; instead, the remote ms-marco-TinyBERT-L-2-v2 is called.

Relevant log output

2024-05-15 04:57:05 Traceback (most recent call last):
2024-05-15 04:57:05   File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 265, in __call__
2024-05-15 04:57:05     await wrap(partial(self.listen_for_disconnect, receive))
2024-05-15 04:57:05   File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 261, in wrap
2024-05-15 04:57:05     await func()
2024-05-15 04:57:05   File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 238, in listen_for_disconnect
2024-05-15 04:57:05     message = await receive()
2024-05-15 04:57:05               ^^^^^^^^^^^^^^^
2024-05-15 04:57:05   File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 568, in receive
2024-05-15 04:57:05     await self.message_event.wait()
2024-05-15 04:57:05   File "/usr/local/lib/python3.11/asyncio/locks.py", line 213, in wait
2024-05-15 04:57:05     await fut
2024-05-15 04:57:05 asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f830804f910
2024-05-15 04:57:05 
2024-05-15 04:57:05 During handling of the above exception, another exception occurred:
2024-05-15 04:57:05 
2024-05-15 04:57:05   + Exception Group Traceback (most recent call last):
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
2024-05-15 04:57:05   |     result = await app(  # type: ignore[func-returns-value]
2024-05-15 04:57:05   |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
2024-05-15 04:57:05   |     return await self.app(scope, receive, send)
2024-05-15 04:57:05   |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
2024-05-15 04:57:05   |     await super().__call__(scope, receive, send)
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
2024-05-15 04:57:05   |     await self.middleware_stack(scope, receive, send)
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
2024-05-15 04:57:05   |     raise exc
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
2024-05-15 04:57:05   |     await self.app(scope, receive, _send)
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 93, in __call__
2024-05-15 04:57:05   |     await self.simple_response(scope, receive, send, request_headers=headers)
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 148, in simple_response
2024-05-15 04:57:05   |     await self.app(scope, receive, send)
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
2024-05-15 04:57:05   |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
2024-05-15 04:57:05   |     raise exc
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2024-05-15 04:57:05   |     await app(scope, receive, sender)
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
2024-05-15 04:57:05   |     await self.middleware_stack(scope, receive, send)
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
2024-05-15 04:57:05   |     await route.handle(scope, receive, send)
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
2024-05-15 04:57:05   |     await self.app(scope, receive, send)
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
2024-05-15 04:57:05   |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
2024-05-15 04:57:05   |     raise exc
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2024-05-15 04:57:05   |     await app(scope, receive, sender)
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 75, in app
2024-05-15 04:57:05   |     await response(scope, receive, send)
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 258, in __call__
2024-05-15 04:57:05   |     async with anyio.create_task_group() as task_group:
2024-05-15 04:57:05   |   File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 678, in __aexit__
2024-05-15 04:57:05   |     raise BaseExceptionGroup(
2024-05-15 04:57:05   | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
2024-05-15 04:57:05   +-+---------------- 1 ----------------
2024-05-15 04:57:05     | Traceback (most recent call last):
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 467, in _make_request
2024-05-15 04:57:05     |     self._validate_conn(conn)
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 1099, in _validate_conn
2024-05-15 04:57:05     |     conn.connect()
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/urllib3/connection.py", line 653, in connect
2024-05-15 04:57:05     |     sock_and_verified = _ssl_wrap_socket_and_match_hostname(
2024-05-15 04:57:05     |                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/urllib3/connection.py", line 806, in _ssl_wrap_socket_and_match_hostname
2024-05-15 04:57:05     |     ssl_sock = ssl_wrap_socket(
2024-05-15 04:57:05     |                ^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/urllib3/util/ssl_.py", line 465, in ssl_wrap_socket
2024-05-15 04:57:05     |     ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
2024-05-15 04:57:05     |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/urllib3/util/ssl_.py", line 509, in _ssl_wrap_socket_impl
2024-05-15 04:57:05     |     return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
2024-05-15 04:57:05     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/ssl.py", line 517, in wrap_socket
2024-05-15 04:57:05     |     return self.sslsocket_class._create(
2024-05-15 04:57:05     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/ssl.py", line 1108, in _create
2024-05-15 04:57:05     |     self.do_handshake()
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/ssl.py", line 1379, in do_handshake
2024-05-15 04:57:05     |     self._sslobj.do_handshake()
2024-05-15 04:57:05     | ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:1006)
2024-05-15 04:57:05     | 
2024-05-15 04:57:05     | During handling of the above exception, another exception occurred:
2024-05-15 04:57:05     | 
2024-05-15 04:57:05     | Traceback (most recent call last):
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 793, in urlopen
2024-05-15 04:57:05     |     response = self._make_request(
2024-05-15 04:57:05     |                ^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 491, in _make_request
2024-05-15 04:57:05     |     raise new_e
2024-05-15 04:57:05     | urllib3.exceptions.SSLError: EOF occurred in violation of protocol (_ssl.c:1006)
2024-05-15 04:57:05     | 
2024-05-15 04:57:05     | The above exception was the direct cause of the following exception:
2024-05-15 04:57:05     | 
2024-05-15 04:57:05     | Traceback (most recent call last):
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/requests/adapters.py", line 486, in send
2024-05-15 04:57:05     |     resp = conn.urlopen(
2024-05-15 04:57:05     |            ^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 847, in urlopen
2024-05-15 04:57:05     |     retries = retries.increment(
2024-05-15 04:57:05     |               ^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/urllib3/util/retry.py", line 515, in increment
2024-05-15 04:57:05     |     raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
2024-05-15 04:57:05     |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     | urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /prithivida/flashrank/resolve/main/ms-marco-TinyBERT-L-2-v2.zip (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1006)')))
2024-05-15 04:57:05     | 
2024-05-15 04:57:05     | During handling of the above exception, another exception occurred:
2024-05-15 04:57:05     | 
2024-05-15 04:57:05     | Traceback (most recent call last):
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 261, in wrap
2024-05-15 04:57:05     |     await func()
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 250, in stream_response
2024-05-15 04:57:05     |     async for chunk in self.body_iterator:
2024-05-15 04:57:05     |   File "/code/modules/brain/knowledge_brain_qa.py", line 326, in generate_stream
2024-05-15 04:57:05     |     conversational_qa_chain = self.knowledge_qa.get_chain()
2024-05-15 04:57:05     |                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/code/modules/brain/rags/quivr_rag.py", line 282, in get_chain
2024-05-15 04:57:05     |     compressor = FlashrankRerank(model="ms-marco-TinyBERT-L-2-v2", top_n=10)
2024-05-15 04:57:05     |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/pydantic/v1/main.py", line 339, in __init__
2024-05-15 04:57:05     |     values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
2024-05-15 04:57:05     |                                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/pydantic/v1/main.py", line 1048, in validate_model
2024-05-15 04:57:05     |     input_data = validator(cls_, input_data)
2024-05-15 04:57:05     |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/langchain/retrievers/document_compressors/flashrank_rerank.py", line 52, in validate_environment
2024-05-15 04:57:05     |     values["client"] = Ranker(model_name=values["model"])
2024-05-15 04:57:05     |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/flashrank/Ranker.py", line 47, in __init__
2024-05-15 04:57:05     |     self._prepare_model_dir(model_name)
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/flashrank/Ranker.py", line 73, in _prepare_model_dir
2024-05-15 04:57:05     |     self._download_model_files(model_name)
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/flashrank/Ranker.py", line 84, in _download_model_files
2024-05-15 04:57:05     |     with requests.get(formatted_model_url, stream=True) as r:
2024-05-15 04:57:05     |          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/requests/api.py", line 73, in get
2024-05-15 04:57:05     |     return request("get", url, params=params, **kwargs)
2024-05-15 04:57:05     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/requests/api.py", line 59, in request
2024-05-15 04:57:05     |     return session.request(method=method, url=url, **kwargs)
2024-05-15 04:57:05     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
2024-05-15 04:57:05     |     resp = self.send(prep, **send_kwargs)
2024-05-15 04:57:05     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
2024-05-15 04:57:05     |     r = adapter.send(request, **kwargs)
2024-05-15 04:57:05     |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 04:57:05     |   File "/usr/local/lib/python3.11/site-packages/requests/adapters.py", line 517, in send
2024-05-15 04:57:05     |     raise SSLError(e, request=request)
2024-05-15 04:57:05     | requests.exceptions.SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /prithivida/flashrank/resolve/main/ms-marco-TinyBERT-L-2-v2.zip (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1006)')))

wwjCMP added the "bug" (Something isn't working) label on May 14, 2024

dosubot bot commented May 14, 2024

To address the issue with the Ollama dialogue model not being invoked and the SSL error you're encountering, let's tackle these separately:

  1. Ollama Dialogue Model Invocation:
    Ensure that your brain_settings correctly specify the ollama_api_base_url and that the model name you're trying to use starts with "ollama". The system checks these conditions to decide whether to use the Ollama model. If the model name doesn't start with "ollama" or if the ollama_api_base_url is not set, it will default to other models (a rough sketch of this check appears at the end of this comment).

  2. SSL Error When Downloading Models:
    The "EOF occurred in violation of protocol" SSL error can be due to various reasons, including outdated Python packages, compatibility issues, or network configurations. Here are steps to troubleshoot:

    • Update your Python packages, especially certifi and cryptography, to ensure SSL connections are properly handled.
    • Verify your Python version is compatible with the libraries you're using.
    • Check your OpenSSL library version to ensure it supports the SSL/TLS protocol versions required by the server.
    • Inspect any network security devices or software that might interfere with SSL/TLS connections.
    • If you're behind a proxy or VPN, ensure it's configured to support SSL/TLS connections.
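
If it helps, here is a minimal connectivity check (a sketch only, not a fix) that verifies whether the backend container can complete a TLS handshake with huggingface.co, which is what the SSLEOFError in your log fails to do while flashrank downloads its reranker weights:

    import requests

    # Sketch: confirm the container can reach huggingface.co over TLS.
    # An SSLError here usually points at a proxy, VPN, or TLS-intercepting device.
    try:
        r = requests.head("https://huggingface.co", timeout=10)
        print("TLS handshake OK, HTTP status:", r.status_code)
    except requests.exceptions.SSLError as exc:
        print("TLS failure:", exc)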

These steps should help resolve the issues you're facing. If the problem persists, providing more details about your setup, including Python version and operating system, could offer more clues for further assistance.
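
For the first point, a rough sketch of the routing condition described above (illustrative only, not Quivr's actual code):

    # Ollama is only used when ollama_api_base_url is set and the
    # model name is prefixed with "ollama"; otherwise another provider is used.
    def should_use_ollama(model: str, ollama_api_base_url: str | None) -> bool:
        return bool(ollama_api_base_url) and model.startswith("ollama")

    print(should_use_ollama("ollama/llama2", "http://host.docker.internal:11434"))   # True
    print(should_use_ollama("gpt-3.5-turbo", "http://host.docker.internal:11434"))  # False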



wwjCMP commented May 14, 2024

2024-05-15 05:12:32 Traceback (most recent call last):
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
2024-05-15 05:12:32 result = await app( # type: ignore[func-returns-value]
2024-05-15 05:12:32 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in call
2024-05-15 05:12:32 return await self.app(scope, receive, send)
2024-05-15 05:12:32 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in call
2024-05-15 05:12:32 await super().call(scope, receive, send)
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in call
2024-05-15 05:12:32 await self.middleware_stack(scope, receive, send)
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in call
2024-05-15 05:12:32 raise exc
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in call
2024-05-15 05:12:32 await self.app(scope, receive, _send)
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 93, in call
2024-05-15 05:12:32 await self.simple_response(scope, receive, send, request_headers=headers)
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 148, in simple_response
2024-05-15 05:12:32 await self.app(scope, receive, send)
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in call
2024-05-15 05:12:32 await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
2024-05-15 05:12:32 raise exc
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2024-05-15 05:12:32 await app(scope, receive, sender)
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 756, in call
2024-05-15 05:12:32 await self.middleware_stack(scope, receive, send)
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
2024-05-15 05:12:32 await route.handle(scope, receive, send)
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
2024-05-15 05:12:32 await self.app(scope, receive, send)
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
2024-05-15 05:12:32 await wrap_app_handling_exceptions(app, request)(scope, receive, send)
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
2024-05-15 05:12:32 raise exc
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2024-05-15 05:12:32 await app(scope, receive, sender)
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 72, in app
2024-05-15 05:12:32 response = await func(request)
2024-05-15 05:12:32 ^^^^^^^^^^^^^^^^^^^
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
2024-05-15 05:12:32 raw_response = await run_endpoint_function(
2024-05-15 05:12:32 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
2024-05-15 05:12:32 return await dependant.call(**values)
2024-05-15 05:12:32 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 05:12:32 File "/code/modules/chat/controller/chat_routes.py", line 253, in create_stream_question_handler
2024-05-15 05:12:32 gpt_answer_generator = get_answer_generator(
2024-05-15 05:12:32 ^^^^^^^^^^^^^^^^^^^^^
2024-05-15 05:12:32 File "/code/modules/chat/controller/chat_routes.py", line 77, in get_answer_generator
2024-05-15 05:12:32 brain, metadata_brain = brain_service.find_brain_from_question(
2024-05-15 05:12:32 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 05:12:32 File "/code/modules/brain/service/brain_service.py", line 108, in find_brain_from_question
2024-05-15 05:12:32 list_brains = vector_store.find_brain_closest_query(user.id, question)
2024-05-15 05:12:32 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 05:12:32 File "/code/vectorstore/supabase.py", line 44, in find_brain_closest_query
2024-05-15 05:12:32 vectors = self._embedding.embed_documents([query])
2024-05-15 05:12:32 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 211, in embed_documents
2024-05-15 05:12:32 embeddings = self._embed(instruction_pairs)
2024-05-15 05:12:32 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 199, in _embed
2024-05-15 05:12:32 return [self.process_emb_response(prompt) for prompt in iter]
2024-05-15 05:12:32 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 199, in
2024-05-15 05:12:32 return [self.process_emb_response(prompt) for prompt in iter]
2024-05-15 05:12:32 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 05:12:32 File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 173, in _process_emb_response
2024-05-15 05:12:32 raise ValueError(
2024-05-15 05:12:32 ValueError: Error raised by inference API HTTP code: 404, {"error":"model 'llama2' not found, try pulling it first"}


wwjCMP commented May 14, 2024

How can I set brain_settings?


dosubot bot commented May 14, 2024

To correctly set up and use your locally deployed Ollama model with the brain_settings, follow these steps:

  1. Identify the Configuration File: Find the configuration file in your project where brain_settings are defined. This could be named config.py, settings.py, or contained within a JSON file like appsettings.json.

  2. Configure brain_settings: Look for or add a section named brain_settings in this file. You need to specify the ollama_api_base_url to point to your locally deployed Ollama model. For example:

    brain_settings = {
        "ollama_api_base_url": "http://localhost:5000"
    }

    Adjust "http://localhost:5000" to match the URL of your local Ollama deployment.

  3. Model Name Prefix: Ensure the model name you intend to use is prefixed with "ollama". This is crucial for the system to recognize and route requests to your Ollama model correctly. For instance:

    model_name = "ollama-your-model-name"
  4. Apply Changes: After updating the configuration, restart your application to make sure the new settings take effect.

This setup should direct your application to use the locally deployed Ollama model. If you encounter any issues or need further assistance, ensure that the model name and API base URL are correctly specified as per your local deployment details.



wwjCMP commented May 14, 2024

> (quoting Dosu's previous reply about configuring brain_settings)

"ou are incorrect in what you said. Which specific file should I specify the default response model in?


dosubot bot commented May 14, 2024

To specify the default response model for your locally deployed Ollama model, you should set the OLLAMA_API_BASE_URL in the .env file of your project. This configuration directs the application to use your Ollama model by providing its base URL. Here's how you can set it:

OLLAMA_API_BASE_URL=http://host.docker.internal:11434

This setting ensures that your application communicates correctly with the Ollama model. For more detailed configuration, refer to the .env.example and the backend/models/settings.py files in the Quivr repository.
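
As a minimal sketch (an assumption about how such a setting could be read, not the exact contents of backend/models/settings.py), the backend only needs this variable to be present in its environment:

    import os

    # Sketch: read the base URL from the environment, as a settings module could.
    ollama_api_base_url = os.getenv("OLLAMA_API_BASE_URL")  # e.g. "http://host.docker.internal:11434"
    if not ollama_api_base_url:
        print("OLLAMA_API_BASE_URL is not set; requests will not be routed to Ollama.")
    else:
        print("Using Ollama at:", ollama_api_base_url)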



wwjCMP commented May 15, 2024

I have correctly set OLLAMA_API_BASE_URL. How do I specify the model for the conversation?


dosubot bot commented May 15, 2024

To specify the model for the conversation after setting OLLAMA_API_BASE_URL, you need to configure the desired Ollama model in the user_settings table of your database. Here's a concise guide:

  1. Add the Ollama Model to user_settings:

    • Access your database management interface, typically Supabase for Quivr.
    • Locate the user_settings table.
    • Insert or update a record with the models column including the Ollama model names in a JSON array format, like so:
      [
        "ollama/llama2",
        "ollama/mistral"
      ]

    This configuration tells Quivr to use these specified Ollama models for conversation handling.

  2. Apply Settings:

    • If setting defaults for all users, you might need to reset the user_settings table to ensure new defaults take effect. This step is optional and only necessary for default settings adjustments.

Remember, these steps are specific to configuring Quivr to use Ollama models for conversation handling. Ensure your Ollama service is running and accessible at the URL specified in OLLAMA_API_BASE_URL.
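
To confirm the service is reachable and to see which model tags it has actually pulled, you can query Ollama's model-listing endpoint (/api/tags) from inside the backend container, for example:

    import requests

    base_url = "http://host.docker.internal:11434"  # same value as OLLAMA_API_BASE_URL
    resp = requests.get(f"{base_url}/api/tags", timeout=10)
    resp.raise_for_status()
    # Prints the tags Ollama has pulled locally, e.g. ["llama2:latest", "mistral:latest"]
    print([m["name"] for m in resp.json().get("models", [])])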



wwjCMP commented May 15, 2024

I have assigned some Ollama models to users, but users can only select models in the brain; there is no model selection in the dialogue interface. Also, no Ollama models are invoked during the conversation.


dosubot bot commented May 15, 2024

To address the issue of users only being able to select models in the brain and not in the dialogue interface, and to ensure Ollama models are invoked during conversations, follow these steps:

  1. Update User Settings in the Database: Ensure the user_settings table includes entries for the Ollama models you wish to use, such as "ollama/llama2" and "ollama/mistral".

  2. Frontend Model Selection Configuration: Modify the ModelSelection.tsx component to include the Ollama models in the accessibleModels array. This enables model selection from the dialogue interface.

  3. Backend Configuration for Model Invocation: Update backend configurations, such as backend/models/settings.py, to ensure the selected model is used for generating responses.

  4. Implement Model Selection Logic: Implement logic to dynamically select the Ollama model based on the user's selection from the dialogue interface (a rough sketch follows at the end of this comment).

  5. Configure .env File for Ollama Integration: Verify that OLLAMA_API_BASE_URL is correctly set in your .env file to match the local URL for the Ollama API.

  6. Test the Integration: Thoroughly test the model selection and invocation process to ensure functionality.

These steps are designed to enable model selection in the dialogue interface and ensure the correct Ollama models are invoked during conversations. For detailed guidance, refer to the Ollama Integration Guide and the example .env file provided in the Quivr repository documentation.
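
For step 4, a rough sketch of what that dynamic selection could look like (assumptions: the ChatOllama class from langchain_community and a selected model name of the form "ollama/<name>"; this is not the repository's actual implementation):

    from langchain_community.chat_models import ChatOllama

    def build_chat_model(selected_model: str, ollama_api_base_url: str) -> ChatOllama:
        # "ollama/mistral" -> ask the local Ollama server for its "mistral" model
        ollama_model = selected_model.split("/", 1)[1]
        return ChatOllama(base_url=ollama_api_base_url, model=ollama_model)

    llm = build_chat_model("ollama/mistral", "http://host.docker.internal:11434")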



wwjCMP commented May 15, 2024

2024-05-15 10:55:50 INFO: 172.20.0.1:53992 - "GET /brains/0602d8e3-f73a-4f8d-b41f-4df00fd8d471/ HTTP/1.1" 200 OK
2024-05-15 10:55:55 INFO: 172.20.0.1:53998 - "OPTIONS /chat HTTP/1.1" 200 OK
2024-05-15 10:55:56 INFO: 172.20.0.1:53998 - "POST /chat HTTP/1.1" 200 OK
2024-05-15 10:55:56 INFO: 172.20.0.1:53998 - "OPTIONS /chat/0d1a5c90-207d-4369-b31c-8f877ae17606/question/stream?brain_id=0602d8e3-f73a-4f8d-b41f-4df00fd8d471 HTTP/1.1" 200 OK
2024-05-15 10:55:56 INFO: 172.20.0.1:54010 - "GET /chat HTTP/1.1" 200 OK
2024-05-15 10:55:56 INFO: 172.20.0.1:54050 - "OPTIONS /chat/0d1a5c90-207d-4369-b31c-8f877ae17606/history HTTP/1.1" 200 OK
2024-05-15 10:55:56 INFO: 172.20.0.1:54050 - "GET /chat/0d1a5c90-207d-4369-b31c-8f877ae17606/history HTTP/1.1" 200 OK
2024-05-15 10:55:57 INFO: 172.20.0.1:53998 - "POST /chat/0d1a5c90-207d-4369-b31c-8f877ae17606/question/stream?brain_id=0602d8e3-f73a-4f8d-b41f-4df00fd8d471 HTTP/1.1" 200 OK
2024-05-15 10:55:57 INFO: 172.20.0.1:54022 - "GET /user HTTP/1.1" 200 OK
2024-05-15 10:55:57 INFO: 172.20.0.1:54038 - "GET /user/identity HTTP/1.1" 200 OK
2024-05-15 10:55:59 Downloading ms-marco-TinyBERT-L-2-v2...
2024-05-15 10:56:02 INFO: 172.20.0.1:54010 - "GET /onboarding HTTP/1.1" 200 OK
ms-marco-TinyBERT-L-2-v2.zip: 100%|██████████| 3.26M/3.26M [00:00<00:00, 6.27MiB/s]
2024-05-15 10:56:05 ERROR: Exception in ASGI application
2024-05-15 10:56:05 Traceback (most recent call last):
2024-05-15 10:56:05 File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 265, in call
2024-05-15 10:56:05 await wrap(partial(self.listen_for_disconnect, receive))
2024-05-15 10:56:05 File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 261, in wrap
2024-05-15 10:56:05 await func()
2024-05-15 10:56:05 File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 238, in listen_for_disconnect
2024-05-15 10:56:05 message = await receive()
2024-05-15 10:56:05 ^^^^^^^^^^^^^^^
2024-05-15 10:56:05 File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 568, in receive
2024-05-15 10:56:05 await self.message_event.wait()
2024-05-15 10:56:05 File "/usr/local/lib/python3.11/asyncio/locks.py", line 213, in wait
2024-05-15 10:56:05 await fut
2024-05-15 10:56:05 asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f400ea6df50
2024-05-15 10:56:05
2024-05-15 10:56:05 During handling of the above exception, another exception occurred:
2024-05-15 10:56:05
2024-05-15 10:56:05 + Exception Group Traceback (most recent call last):
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
2024-05-15 10:56:05 | result = await app( # type: ignore[func-returns-value]
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in call
2024-05-15 10:56:05 | return await self.app(scope, receive, send)
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in call
2024-05-15 10:56:05 | await super().call(scope, receive, send)
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in call
2024-05-15 10:56:05 | await self.middleware_stack(scope, receive, send)
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in call
2024-05-15 10:56:05 | raise exc
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in call
2024-05-15 10:56:05 | await self.app(scope, receive, _send)
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 93, in call
2024-05-15 10:56:05 | await self.simple_response(scope, receive, send, request_headers=headers)
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 148, in simple_response
2024-05-15 10:56:05 | await self.app(scope, receive, send)
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in call
2024-05-15 10:56:05 | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
2024-05-15 10:56:05 | raise exc
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2024-05-15 10:56:05 | await app(scope, receive, sender)
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 756, in call
2024-05-15 10:56:05 | await self.middleware_stack(scope, receive, send)
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
2024-05-15 10:56:05 | await route.handle(scope, receive, send)
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
2024-05-15 10:56:05 | await self.app(scope, receive, send)
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
2024-05-15 10:56:05 | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
2024-05-15 10:56:05 | raise exc
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
2024-05-15 10:56:05 | await app(scope, receive, sender)
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 75, in app
2024-05-15 10:56:05 | await response(scope, receive, send)
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 258, in call
2024-05-15 10:56:05 | async with anyio.create_task_group() as task_group:
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 678, in aexit
2024-05-15 10:56:05 | raise BaseExceptionGroup(
2024-05-15 10:56:05 | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
2024-05-15 10:56:05 +-+---------------- 1 ----------------
2024-05-15 10:56:05 | Traceback (most recent call last):
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 261, in wrap
2024-05-15 10:56:05 | await func()
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 250, in stream_response
2024-05-15 10:56:05 | async for chunk in self.body_iterator:
2024-05-15 10:56:05 | File "/code/modules/brain/knowledge_brain_qa.py", line 336, in generate_stream
2024-05-15 10:56:05 | async for chunk in conversational_qa_chain.astream(
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2900, in astream
2024-05-15 10:56:05 | async for chunk in self.atransform(input_aiter(), config, **kwargs):
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2883, in atransform
2024-05-15 10:56:05 | async for chunk in self._atransform_stream_with_config(
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1980, in _atransform_stream_with_config
2024-05-15 10:56:05 | chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2853, in _atransform
2024-05-15 10:56:05 | async for output in final_pipeline:
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3315, in atransform
2024-05-15 10:56:05 | async for chunk in self._atransform_stream_with_config(
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1980, in _atransform_stream_with_config
2024-05-15 10:56:05 | chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3302, in _atransform
2024-05-15 10:56:05 | chunk = AddableDict({step_name: task.result()})
2024-05-15 10:56:05 | ^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3285, in get_next_chunk
2024-05-15 10:56:05 | return await py_anext(generator)
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4180, in atransform
2024-05-15 10:56:05 | async for output in self._atransform_stream_with_config(
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1980, in _atransform_stream_with_config
2024-05-15 10:56:05 | chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4089, in _atransform
2024-05-15 10:56:05 | async for ichunk in input:
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
2024-05-15 10:56:05 | item = await iterator.anext()
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
2024-05-15 10:56:05 | item = await iterator.anext()
2024-05-15 10:56:05 INFO: 172.20.0.1:45252 - "OPTIONS /user/credits HTTP/1.1" 200 OK
2024-05-15 10:56:05 INFO: 172.20.0.1:45252 - "GET /user/credits HTTP/1.1" 200 OK
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
2024-05-15 10:56:05 | item = await iterator.anext()
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3315, in atransform
2024-05-15 10:56:05 | async for chunk in self._atransform_stream_with_config(
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1980, in _atransform_stream_with_config
2024-05-15 10:56:05 | chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3302, in _atransform
2024-05-15 10:56:05 | chunk = AddableDict({step_name: task.result()})
2024-05-15 10:56:05 | ^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3285, in get_next_chunk
2024-05-15 10:56:05 | return await py_anext(generator)
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2883, in atransform
2024-05-15 10:56:05 | async for chunk in self._atransform_stream_with_config(
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1980, in _atransform_stream_with_config
2024-05-15 10:56:05 | chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2853, in _atransform
2024-05-15 10:56:05 | async for output in final_pipeline:
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1334, in atransform
2024-05-15 10:56:05 | async for output in self.astream(final, config, **kwargs):
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 819, in astream
2024-05-15 10:56:05 | yield await self.ainvoke(input, config, **kwargs)
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 228, in ainvoke
2024-05-15 10:56:05 | return await self.aget_relevant_documents(
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 157, in awarning_emitting_wrapper
2024-05-15 10:56:05 | return await wrapped(*args, **kwargs)
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 387, in aget_relevant_documents
2024-05-15 10:56:05 | raise e
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 380, in aget_relevant_documents
2024-05-15 10:56:05 | result = await self._aget_relevant_documents(
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain/retrievers/contextual_compression.py", line 70, in _aget_relevant_documents
2024-05-15 10:56:05 | docs = await self.base_retriever.ainvoke(
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 228, in ainvoke
2024-05-15 10:56:05 | return await self.aget_relevant_documents(
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 157, in awarning_emitting_wrapper
2024-05-15 10:56:05 | return await wrapped(*args, **kwargs)
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 387, in aget_relevant_documents
2024-05-15 10:56:05 | raise e
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 380, in aget_relevant_documents
2024-05-15 10:56:05 | result = await self._aget_relevant_documents(
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 716, in _aget_relevant_documents
2024-05-15 10:56:05 | docs = await self.vectorstore.asimilarity_search(
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 403, in asimilarity_search
2024-05-15 10:56:05 | return await run_in_executor(None, self.similarity_search, query, k=k, **kwargs)
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 514, in run_in_executor
2024-05-15 10:56:05 | return await asyncio.get_running_loop().run_in_executor(
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run
2024-05-15 10:56:05 | result = self.fn(*self.args, **self.kwargs)
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/code/vectorstore/supabase.py", line 76, in similarity_search
2024-05-15 10:56:05 | vectors = self._embedding.embed_documents([query])
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 211, in embed_documents
2024-05-15 10:56:05 | embeddings = self._embed(instruction_pairs)
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 199, in _embed
2024-05-15 10:56:05 | return [self.process_emb_response(prompt) for prompt in iter]
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 199, in
2024-05-15 10:56:05 | return [self.process_emb_response(prompt) for prompt in iter]
2024-05-15 10:56:05 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-05-15 10:56:05 | File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 173, in _process_emb_response
2024-05-15 10:56:05 | raise ValueError(
2024-05-15 10:56:05 | ValueError: Error raised by inference API HTTP code: 404, {"error":"model 'llama2' not found, try pulling it first"}
2024-05-15 10:56:05 +------------------------------------
2024-05-15 10:56:11 INFO: 127.0.0.1:44924 - "GET /healthz HTTP/1.1" 200 OK

wwjCMP changed the title from "[Bug]: Do not use the locally deployed model on ollama to answer." to "[Bug]: No model on ollama was used to answer" on May 15, 2024

wwjCMP commented May 15, 2024

I can modify the default embedding model; what I want to know is how to modify the default dialogue model, and how users can then switch their dialogue models.


dosubot bot commented May 15, 2024

To modify the default dialogue model and enable users to switch between dialogue models, follow these steps:

  1. Ensure Ollama is Installed and Configured: Verify Ollama is correctly installed and configured to run in your environment. Start the Ollama service with the model you intend to use, for example, ollama run llama2.

  2. Set OLLAMA_API_BASE_URL in .env: In your .env file, specify the OLLAMA API base URL to point to your local Ollama service, like so:

    OLLAMA_API_BASE_URL=http://host.docker.internal:11434
    

    This ensures your system knows where to find the Ollama service.

  3. Database Configuration for Model Selection: Update the user_settings table in your database to include the Ollama models you want available for dialogue, such as:

    UPDATE user_settings SET models = '["ollama/llama2", "ollama/mistral"]' WHERE user_id = YOUR_USER_ID;

    Replace YOUR_USER_ID with the actual user ID.

  4. Verify and Test: After making these updates, verify that the Ollama models are correctly listed as options and test switching between them during sessions.

This approach addresses both the issue of the 'llama2' model not being found and enables users to switch between different dialogue models during their sessions. If you encounter any specific error messages or issues during this process, those logs could provide further insights into the problem.
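
As a quick way to reproduce the failing call outside of Quivr (using the same OllamaEmbeddings class that appears in your traceback), the snippet below should fail with the same 404 until the model tag has been pulled, e.g. with ollama pull llama2:

    from langchain_community.embeddings import OllamaEmbeddings

    # Reproduction sketch: same embeddings path as the traceback
    # (langchain_community/embeddings/ollama.py).
    emb = OllamaEmbeddings(
        base_url="http://host.docker.internal:11434",
        model="llama2",  # must match a tag that `ollama list` actually shows
    )
    print(len(emb.embed_documents(["hello world"])[0]))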



wwjCMP commented May 15, 2024

In fact, I have already assigned a custom model to the user, but llama2 is still the only model invoked during question answering.


dosubot bot commented May 15, 2024

To address the issue where your custom model is not being invoked and "llama2" is used instead, consider the following steps:

  1. Model Identifier Check: Verify that the model identifier in the user_settings table exactly matches the expected format by Ollama, including case sensitivity. It should be "ollama/your_custom_model".

  2. Model Availability: Ensure your custom model is available and properly configured in your local Ollama service. If there's a configuration issue or the model isn't available, the system might revert to using "llama2".

  3. Model Selection Process: The system selects the model based on session requirements and user preferences. This selection is managed by the find_model_and_generate_metadata function, which considers various factors to choose the most suitable model. Issues in this function could lead to defaulting to "llama2". For a deeper dive, check the function in the backend/modules/chat/controller/chat/utils.py file (an illustrative sketch of this fallback behaviour follows at the end of this comment).

  4. Error Logs: Review the error logs for any indications of why the custom model isn't being invoked. These logs can provide insights into problems occurring during the model selection process.

If after these checks the issue remains unresolved, it might indicate a more complex problem within the Quivr codebase or configuration that requires further investigation.
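
For point 3, an illustrative sketch of why such a selection helper can end up on "llama2" (a simplified stand-in, not the actual find_model_and_generate_metadata implementation; the default value is assumed here purely for illustration):

    DEFAULT_MODEL = "llama2"  # assumed default, for illustration only

    def pick_model(requested: str, allowed_models: list[str]) -> str:
        # If the requested identifier does not exactly match an allowed entry
        # (including case and the "ollama/" prefix), fall back to the default.
        return requested if requested in allowed_models else DEFAULT_MODEL

    print(pick_model("ollama/My_Custom_Model", ["ollama/my_custom_model"]))  # -> "llama2"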



srigurubyo commented May 16, 2024

I am having the same issue, and I have been stuck here for more than a few days.

  1. I am running Ollama using ollama run gemma:2b -> able to verify that in my browser using http://:11434
  2. Edited the user_settings table on supabase to include the model's name for the user.
  3. My Quivr .env file has the content below:
#### QUIVR Configuration
# This file is used to configure the Quivr stack. It is used by the `docker-compose.yml` file to configure the stack.

# OPENAI. Update this to use your API key. To skip OpenAI integration use a fake key, for example: tk-aabbccddAABBCCDDEeFfGgHhIiJKLmnopjklMNOPqQqQqQqQ
OPENAI_API_KEY=tk-aabbccddAABBCCDDEeFfGgHhIiJKLmnopjklMNOPqQqQqQqQ

# LOCAL
# OLLAMA_API_BASE_URL=http://0.0.0.0:11434  # Uncomment to activate ollama. This is the local url for the ollama api
OLLAMA_API_BASE_URL=http://host.docker.internal:11434

########
# FRONTEND
########

NEXT_PUBLIC_ENV=local
NEXT_PUBLIC_BACKEND_URL=http://<public_ip>:5050
NEXT_PUBLIC_SUPABASE_URL=http://<public_ip>:54321
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6ImFub24iLCJleHAiOjE5ODM4MTI5OTZ9.CRXP1A7WOeoJeXxjNni43kdQwgnWNReilDMblYTn_I0
NEXT_PUBLIC_CMS_URL=https://cms.quivr.app
NEXT_PUBLIC_FRONTEND_URL=http://<public_ip>:3000
NEXT_PUBLIC_AUTH_MODES=password



########
# BACKEND
########

SUPABASE_URL=http://host.docker.internal:54321
SUPABASE_SERVICE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImV4cCI6MTk4MzgxMjk5Nn0.EGIM96RAZx35lJzdJsyH-qQwv8Hdp7fsn3W0YpN81IU
PG_DATABASE_URL=postgresql://postgres:[email protected]:54322/postgres
ANTHROPIC_API_KEY=null
JWT_SECRET_KEY=super-secret-jwt-token-with-at-least-32-characters-long
AUTHENTICATE=true
TELEMETRY_ENABLED=true
CELERY_BROKER_URL=redis://redis:6379/0
CELEBRY_BROKER_QUEUE_NAME=quivr-preview.fifo
QUIVR_DOMAIN=http://<public_ip>:3000/
#COHERE_API_KEY=CHANGE_ME

#RESEND
RESEND_API_KEY=<change-me>
[email protected]
[email protected]
RESEND_CONTACT_SALES_TO=<change-me>

CRAWL_DEPTH=1

PREMIUM_MAX_BRAIN_NUMBER=30
PREMIUM_MAX_BRAIN_SIZE=10000000
PREMIUM_DAILY_CHAT_CREDIT=100

# BRAVE SEARCH API KEY
BRAVE_SEARCH_API_KEY=CHANGE_ME

And below is the error log

backend-core  | INFO:     Application startup complete.
backend-core  | INFO:     106.205.75.65:54801 - "GET /user HTTP/1.1" 200 OK
backend-core  | INFO:     106.205.75.65:54801 - "GET /user/identity HTTP/1.1" 200 OK
backend-core  | INFO:     106.205.75.65:54802 - "GET /onboarding HTTP/1.1" 200 OK
worker        | [2024-05-16 11:46:21,882: INFO/MainProcess] celery@975b8e6ffc82 ready.
backend-core  | INFO:     106.205.75.65:54802 - "GET /brains/40ba47d7-51b2-4b2a-9247-89e29619efb0/ HTTP/1.1" 200 OK
worker        | [2024-05-16 11:46:23,082: INFO/MainProcess] Events of group {task} enabled by remote.
backend-core  | INFO:     106.205.75.65:54802 - "POST /chat HTTP/1.1" 200 OK
backend-core  | INFO:     106.205.75.65:54802 - "OPTIONS /chat/25edae8b-a6d0-4192-b795-0fe54653d7e8/question/stream?brain_id=40ba47d7-51b2-4b2a-9247-89e29619efb0 HTTP/1.1" 200 OK
backend-core  | INFO:     106.205.75.65:54801 - "GET /chat HTTP/1.1" 200 OK
backend-core  | INFO:     106.205.75.65:54802 - "POST /chat/25edae8b-a6d0-4192-b795-0fe54653d7e8/question/stream?brain_id=40ba47d7-51b2-4b2a-9247-89e29619efb0 HTTP/1.1" 200 OK
backend-core  | INFO:     106.205.75.65:54806 - "OPTIONS /chat/25edae8b-a6d0-4192-b795-0fe54653d7e8/history HTTP/1.1" 200 OK
backend-core  | INFO:     106.205.75.65:54808 - "GET /user/identity HTTP/1.1" 200 OK
backend-core  | INFO:     106.205.75.65:54807 - "GET /user HTTP/1.1" 200 OK
backend-core  | INFO:     106.205.75.65:54806 - "GET /chat/25edae8b-a6d0-4192-b795-0fe54653d7e8/history HTTP/1.1" 200 OK
backend-core  | Downloading ms-marco-TinyBERT-L-2-v2...
ms-marco-TinyBERT-L-2-v2.zip: 100%|██████████| 3.26M/3.26M [00:00<00:00, 152MiB/s]
backend-core  | INFO:     106.205.75.65:54801 - "GET /onboarding HTTP/1.1" 200 OK
backend-core  | ERROR:    Exception in ASGI application
backend-core  | Traceback (most recent call last):
backend-core  |   File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 265, in __call__
backend-core  |     await wrap(partial(self.listen_for_disconnect, receive))
backend-core  |   File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 261, in wrap
backend-core  |     await func()
backend-core  |   File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 238, in listen_for_disconnect
backend-core  |     message = await receive()
backend-core  |               ^^^^^^^^^^^^^^^
backend-core  |   File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 568, in receive
backend-core  |     await self.message_event.wait()
backend-core  |   File "/usr/local/lib/python3.11/asyncio/locks.py", line 213, in wait
backend-core  |     await fut
backend-core  | asyncio.exceptions.CancelledError: Cancelled by cancel scope 7fe8d364e350
backend-core  |
backend-core  | During handling of the above exception, another exception occurred:
backend-core  |
backend-core  |   + Exception Group Traceback (most recent call last):
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
backend-core  |   |     result = await app(  # type: ignore[func-returns-value]
backend-core  |   |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
backend-core  |   |     return await self.app(scope, receive, send)
backend-core  |   |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
backend-core  |   |     await super().__call__(scope, receive, send)
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
backend-core  |   |     await self.middleware_stack(scope, receive, send)
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
backend-core  |   |     raise exc
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
backend-core  |   |     await self.app(scope, receive, _send)
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 93, in __call__
backend-core  |   |     await self.simple_response(scope, receive, send, request_headers=headers)
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 148, in simple_response
backend-core  |   |     await self.app(scope, receive, send)
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
backend-core  |   |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
backend-core  |   |     raise exc
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
backend-core  |   |     await app(scope, receive, sender)
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
backend-core  |   |     await self.middleware_stack(scope, receive, send)
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
backend-core  |   |     await route.handle(scope, receive, send)
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
backend-core  |   |     await self.app(scope, receive, send)
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
backend-core  |   |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
backend-core  |   |     raise exc
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
backend-core  |   |     await app(scope, receive, sender)
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 75, in app
backend-core  |   |     await response(scope, receive, send)
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 258, in __call__
backend-core  |   |     async with anyio.create_task_group() as task_group:
backend-core  |   |   File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 678, in __aexit__
backend-core  |   |     raise BaseExceptionGroup(
backend-core  |   | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
backend-core  |   +-+---------------- 1 ----------------
backend-core  |     | Traceback (most recent call last):
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai.py", line 575, in async_streaming
backend-core  |     |     response = await openai_aclient.chat.completions.create(
backend-core  |     |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1181, in create
backend-core  |     |     return await self._post(
backend-core  |     |            ^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1790, in post
backend-core  |     |     return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
backend-core  |     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1493, in request
backend-core  |     |     return await self._request(
backend-core  |     |            ^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1584, in _request
backend-core  |     |     raise self._make_status_error_from_response(err.response) from None
backend-core  |     | openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: tk-aabbc***************************************qQqQ. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
backend-core  |     |
backend-core  |     | During handling of the above exception, another exception occurred:
backend-core  |     |
backend-core  |     | Traceback (most recent call last):
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 336, in acompletion
backend-core  |     |     response = await init_response
backend-core  |     |                ^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/litellm/llms/openai.py", line 598, in async_streaming
backend-core  |     |     raise OpenAIError(status_code=e.status_code, message=str(e))
backend-core  |     | litellm.llms.openai.OpenAIError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: tk-aabbc***************************************qQqQ. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
backend-core  |     |
backend-core  |     | During handling of the above exception, another exception occurred:
backend-core  |     |
backend-core  |     | Traceback (most recent call last):
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 261, in wrap
backend-core  |     |     await func()
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 250, in stream_response
backend-core  |     |     async for chunk in self.body_iterator:
backend-core  |     |   File "/code/modules/brain/knowledge_brain_qa.py", line 336, in generate_stream
backend-core  |     |     async for chunk in conversational_qa_chain.astream(
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2900, in astream
backend-core  |     |     async for chunk in self.atransform(input_aiter(), config, **kwargs):
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2883, in atransform
backend-core  |     |     async for chunk in self._atransform_stream_with_config(
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1980, in _atransform_stream_with_config
backend-core  |     |     chunk: Output = await asyncio.create_task(  # type: ignore[call-arg]
backend-core  |     |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2853, in _atransform
backend-core  |     |     async for output in final_pipeline:
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3315, in atransform
backend-core  |     |     async for chunk in self._atransform_stream_with_config(
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1944, in _atransform_stream_with_config
backend-core  |     |     final_input: Optional[Input] = await py_anext(input_for_tracing, None)
backend-core  |     |                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl
backend-core  |     |     return await __anext__(iterator)
backend-core  |     |            ^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
backend-core  |     |     item = await iterator.__anext__()
backend-core  |     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3315, in atransform
backend-core  |     |     async for chunk in self._atransform_stream_with_config(
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1944, in _atransform_stream_with_config
backend-core  |     |     final_input: Optional[Input] = await py_anext(input_for_tracing, None)
backend-core  |     |                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl
backend-core  |     |     return await __anext__(iterator)
backend-core  |     |            ^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
backend-core  |     |     item = await iterator.__anext__()
backend-core  |     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3315, in atransform
backend-core  |     |     async for chunk in self._atransform_stream_with_config(
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1980, in _atransform_stream_with_config
backend-core  |     |     chunk: Output = await asyncio.create_task(  # type: ignore[call-arg]
backend-core  |     |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3302, in _atransform
backend-core  |     |     chunk = AddableDict({step_name: task.result()})
backend-core  |     |                                     ^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3285, in get_next_chunk
backend-core  |     |     return await py_anext(generator)
backend-core  |     |            ^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2883, in atransform
backend-core  |     |     async for chunk in self._atransform_stream_with_config(
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1980, in _atransform_stream_with_config
backend-core  |     |     chunk: Output = await asyncio.create_task(  # type: ignore[call-arg]
backend-core  |     |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2853, in _atransform
backend-core  |     |     async for output in final_pipeline:
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/transform.py", line 60, in atransform
backend-core  |     |     async for chunk in self._atransform_stream_with_config(
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1944, in _atransform_stream_with_config
backend-core  |     |     final_input: Optional[Input] = await py_anext(input_for_tracing, None)
backend-core  |     |                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 62, in anext_impl
backend-core  |     |     return await __anext__(iterator)
backend-core  |     |            ^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer
backend-core  |     |     item = await iterator.__anext__()
backend-core  |     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1334, in atransform
backend-core  |     |     async for output in self.astream(final, config, **kwargs):
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 319, in astream
backend-core  |     |     raise e
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 297, in astream
backend-core  |     |     async for chunk in self._astream(
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_community/chat_models/litellm.py", line 374, in _astream
backend-core  |     |     async for chunk in await acompletion_with_retry(
backend-core  |     |                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_community/chat_models/litellm.py", line 112, in acompletion_with_retry
backend-core  |     |     return await _completion_with_retry(**kwargs)
backend-core  |     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/tenacity/_asyncio.py", line 142, in async_wrapped
backend-core  |     |     return await fn(*args, **kwargs)
backend-core  |     |            ^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/tenacity/_asyncio.py", line 58, in __call__
backend-core  |     |     do = await self.iter(retry_state=retry_state)
backend-core  |     |          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/tenacity/_asyncio.py", line 110, in iter
backend-core  |     |     result = await action(retry_state)
backend-core  |     |              ^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/tenacity/_asyncio.py", line 78, in inner
backend-core  |     |     return fn(*args, **kwargs)
backend-core  |     |            ^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/tenacity/__init__.py", line 390, in <lambda>
backend-core  |     |     self._add_action_func(lambda rs: rs.outcome.result())
backend-core  |     |                                      ^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 449, in result
backend-core  |     |     return self.__get_result()
backend-core  |     |            ^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
backend-core  |     |     raise self._exception
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/tenacity/_asyncio.py", line 61, in __call__
backend-core  |     |     result = await fn(*args, **kwargs)
backend-core  |     |              ^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/langchain_community/chat_models/litellm.py", line 110, in _completion_with_retry
backend-core  |     |     return await llm.client.acreate(**kwargs)
backend-core  |     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 3762, in wrapper_async
backend-core  |     |     raise e
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 3590, in wrapper_async
backend-core  |     |     result = await original_function(*args, **kwargs)
backend-core  |     |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/litellm/main.py", line 357, in acompletion
backend-core  |     |     raise exception_type(
backend-core  |     |           ^^^^^^^^^^^^^^^
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 9497, in exception_type
backend-core  |     |     raise e
backend-core  |     |   File "/usr/local/lib/python3.11/site-packages/litellm/utils.py", line 8297, in exception_type
backend-core  |     |     raise AuthenticationError(
backend-core  |     | litellm.exceptions.AuthenticationError: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: tk-aabbc***************************************qQqQ. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
backend-core  |     | Model: gpt-3.5-turbo-0125
backend-core  |     | API Base: https://api.openai.com
backend-core  |     | Messages: [{'role': 'user', 'content': 'Given the following conversation and a follow up question, rephrase th
backend-core  |     +------------------------------------
backend-core  |
backend-core  | Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
backend-core  | LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
backend-core  |
backend-core  | INFO:     106.205.75.65:54801 - "GET /user/credits HTTP/1.1" 200 OK
backend-core  | INFO:     127.0.0.1:45188 - "GET /healthz HTTP/1.1" 200 OK
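
The key lines in that traceback are "Model: gpt-3.5-turbo-0125", "API Base: https://api.openai.com" and the 401 for the placeholder key: the chat request is still being routed through LiteLLM to OpenAI rather than to Ollama, so the chat model actually selected for the brain is not an ollama/-prefixed one. As a quick sanity check that LiteLLM can reach the local Ollama server at all, a minimal sketch outside of Quivr (assuming llama3 has been pulled and port 11434 is reachable from where the script runs):

# Minimal sketch (not Quivr code): check that LiteLLM can reach the local Ollama
# server directly. Assumes `ollama pull llama3` has been run and that port 11434
# is reachable from here (use http://localhost:11434 when running outside Docker).
import litellm

response = litellm.completion(
    model="ollama/llama3",  # the "ollama/" prefix routes the call to Ollama, not api.openai.com
    api_base="http://host.docker.internal:11434",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)

If this works but Quivr still calls api.openai.com, the model name stored in the brain/user settings is what needs fixing, not the API key.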

@andyzhangwp

I am having the same issue too.

@cj-lyons

cj-lyons commented Jun 1, 2024

I updated ollama_api_base_url: str = "http://host.docker.internal:11434" in backend/model/settings.py, and then updated the models in the user_settings table to "ollama/llama3".
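
As a rough illustration of that user_settings change, a hedged sketch run from the host against the local Supabase Postgres (connection string following PG_DATABASE_URL below, with localhost substituted when running outside Docker; the "models" column name and JSON-array format are assumptions, so check the actual schema first):

# Hedged sketch (not Quivr code): point every user_settings row at the Ollama model.
# The column name "models" and the JSON-array format are assumptions; verify the schema first.
import json
import psycopg2

conn = psycopg2.connect("postgresql://postgres:postgres@localhost:54322/postgres")
with conn, conn.cursor() as cur:
    cur.execute("UPDATE user_settings SET models = %s", (json.dumps(["ollama/llama3"]),))
conn.close()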
My .env:

#### QUIVR Configuration
# This file is used to configure the Quivr stack. It is used by the `docker-compose.yml` file to configure the stack.

# OPENAI. Update this to use your API key. To skip OpenAI integration use a fake key, for example: tk-aabbccddAABBCCDDEeFfGgHhIiJKLmnopjklMNOPqQqQqQqQ
OPENAI_API_KEY=tk-aabbccddAABBCCDDEeFfGgHhIiJKLmnopjklMNOPqQqQqQqQ

# LOCAL
OLLAMA_API_BASE_URL=http://host.docker.internal:11434 # Uncomment to activate ollama. This is the local url for the ollama api

########
# FRONTEND
########

NEXT_PUBLIC_ENV=local
NEXT_PUBLIC_BACKEND_URL=http://localhost:5050
NEXT_PUBLIC_SUPABASE_URL=http://localhost:54321
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6ImFub24iLCJleHAiOjE5ODM4MTI5OTZ9.CRXP1A7WOeoJeXxjNni43kdQwgnWNReilDMblYTn_I0
NEXT_PUBLIC_CMS_URL=https://cms.quivr.app
NEXT_PUBLIC_FRONTEND_URL=http://localhost:*
NEXT_PUBLIC_AUTH_MODES=password



########
# BACKEND
########

SUPABASE_URL=http://host.docker.internal:54321
SUPABASE_SERVICE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImV4cCI6MTk4MzgxMjk5Nn0.EGIM96RAZx35lJzdJsyH-qQwv8Hdp7fsn3W0YpN81IU
PG_DATABASE_URL=postgresql://postgres:[email protected]:54322/postgres
ANTHROPIC_API_KEY=null
JWT_SECRET_KEY=super-secret-jwt-token-with-at-least-32-characters-long
AUTHENTICATE=true
TELEMETRY_ENABLED=true
CELERY_BROKER_URL=redis://redis:6379/0
CELEBRY_BROKER_QUEUE_NAME=quivr-preview.fifo
QUIVR_DOMAIN=http://localhost:3088/
BACKEND_URL=http://localhost:5050
#COHERE_API_KEY=CHANGE_ME

#RESEND
RESEND_API_KEY=<change-me>
[email protected]
[email protected]
RESEND_CONTACT_SALES_TO=<change-me>

CRAWL_DEPTH=1

PREMIUM_MAX_BRAIN_NUMBER=30
PREMIUM_MAX_BRAIN_SIZE=10000000
PREMIUM_DAILY_CHAT_CREDIT=100

# BRAVE SEARCH API KEY
BRAVE_SEARCH_API_KEY=CHANGE_ME


# GOOGLE DRIVE
GOOGLE_CLIENT_ID=your-client-id
GOOGLE_CLIENT_SECRET=your-client-secret
GOOGLE_PROJECT_ID=your-project-id
GOOGLE_AUTH_URI=https://accounts.google.com/o/oauth2/auth
GOOGLE_TOKEN_URI=https://oauth2.googleapis.com/token
GOOGLE_AUTH_PROVIDER_CERT_URL=https://www.googleapis.com/oauth2/v1/certs
GOOGLE_REDIRECT_URI=http://localhost

# SHAREPOINT
SHAREPOINT_CLIENT_ID=your-client-id

Here I created a new web1 container in Docker and mapped it to QUIVR_DOMAIN=http://localhost:3088/.
Docker log:
2024-06-02 11:56:24 [TypeError: fetch failed] {
2024-06-02 11:56:24 cause: [Error: connect ECONNREFUSED ::1:54321] {
2024-06-02 11:56:24 errno: -111,
2024-06-02 11:56:24 code: 'ECONNREFUSED',
2024-06-02 11:56:24 syscall: 'connect',
2024-06-02 11:56:24 address: '::1',
2024-06-02 11:56:24 port: 54321
2024-06-02 11:56:24 }
2024-06-02 11:56:24 }
2024-06-02 11:56:24 [AuthRetryableFetchError: fetch failed] {
2024-06-02 11:56:24 __isAuthError: true,
2024-06-02 11:56:24 name: 'AuthRetryableFetchError',
2024-06-02 11:56:24 status: 0
2024-06-02 11:56:24 }
2024-06-02 11:56:33 ⚠ The "images.domains" configuration is deprecated. Please use "images.remotePatterns" configuration instead.
But now a new problem arises: I keep getting "TypeError: network error". What is the reason for this, and how can I fix it?
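
A note on that log: "connect ECONNREFUSED ::1:54321" means the process is resolving localhost to its own container's loopback interface, where nothing listens on 54321, so the Supabase URL used from inside a container generally has to point at host.docker.internal (or the compose service name) rather than localhost. A small reachability check, as a hedged sketch (any HTTP status, even 401/404, proves the port is reachable; a connection error reproduces the failure):

# Hedged sketch: probe port 54321 from the environment that reports ECONNREFUSED.
# Any HTTP response (even 401/404) means the port is reachable; a ConnectionError
# reproduces the problem and points at the wrong hostname.
import requests

for host in ("localhost", "host.docker.internal"):
    url = f"http://{host}:54321/"
    try:
        resp = requests.get(url, timeout=5)
        print(f"{url} -> reachable (HTTP {resp.status_code})")
    except requests.exceptions.ConnectionError as exc:
        print(f"{url} -> not reachable: {exc}")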

@ArmanBM

ArmanBM commented Jun 12, 2024

Similar problem here.
I want to use llama3 instead of llama2. I already set user_settings, but Quivr still thinks it should use llama2 (the langchain-community default Ollama model is llama2).

error:
2024-06-12 03:58:00,034: INFO/ForkPoolWorker-6] Task process_sync_active[2d0efc0f-2953-45b8-862f-8079c36d500c] succeeded in 0.02721104200463742s: None backend-core | INFO: 127.0.0.1:36110 - "GET /healthz HTTP/1.1" 200 OK backend-core | ERROR: Exception in ASGI application backend-core | Traceback (most recent call last): backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 265, in __call__ backend-core | await wrap(partial(self.listen_for_disconnect, receive)) backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 261, in wrap backend-core | await func() backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 238, in listen_for_disconnect backend-core | message = await receive() backend-core | ^^^^^^^^^^^^^^^ backend-core | File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 553, in receive backend-core | await self.message_event.wait() backend-core | File "/usr/local/lib/python3.11/asyncio/locks.py", line 213, in wait backend-core | await fut backend-core | asyncio.exceptions.CancelledError: Cancelled by cancel scope 7fe8d5205150 backend-core | backend-core | During handling of the above exception, another exception occurred: backend-core | backend-core | + Exception Group Traceback (most recent call last): backend-core | | File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi backend-core | | result = await app( # type: ignore[func-returns-value] backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__ backend-core | | return await self.app(scope, receive, send) backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__ backend-core | | await super().__call__(scope, receive, send) backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__ backend-core | | await self.middleware_stack(scope, receive, send) backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__ backend-core | | raise exc backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__ backend-core | | await self.app(scope, receive, _send) backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 93, in __call__ backend-core | | await self.simple_response(scope, receive, send, request_headers=headers) backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 148, in simple_response backend-core | | await self.app(scope, receive, send) backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__ backend-core | | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send) backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app backend-core | | raise exc backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app backend-core | | await app(scope, receive, sender) backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", 
line 756, in __call__ backend-core | | await self.middleware_stack(scope, receive, send) backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 776, in app backend-core | | await route.handle(scope, receive, send) backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle backend-core | | await self.app(scope, receive, send) backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 77, in app backend-core | | await wrap_app_handling_exceptions(app, request)(scope, receive, send) backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app backend-core | | raise exc backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app backend-core | | await app(scope, receive, sender) backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 75, in app backend-core | | await response(scope, receive, send) backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 258, in __call__ backend-core | | async with anyio.create_task_group() as task_group: backend-core | | File "/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 680, in __aexit__ backend-core | | raise BaseExceptionGroup( backend-core | | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception) backend-core | +-+---------------- 1 ---------------- backend-core | | Traceback (most recent call last): backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 261, in wrap backend-core | | await func() backend-core | | File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 250, in stream_response backend-core | | async for chunk in self.body_iterator: backend-core | | File "/code/modules/brain/knowledge_brain_qa.py", line 347, in generate_stream backend-core | | async for chunk in conversational_qa_chain.astream( backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2811, in astream backend-core | | async for chunk in self.atransform(input_aiter(), config, **kwargs): backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2794, in atransform backend-core | | async for chunk in self._atransform_stream_with_config( backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1880, in _atransform_stream_with_config backend-core | | chunk: Output = await asyncio.create_task( # type: ignore[call-arg] backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2764, in _atransform backend-core | | async for output in final_pipeline: backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3226, in atransform backend-core | | async for chunk in self._atransform_stream_with_config( backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1880, in _atransform_stream_with_config backend-core | | chunk: Output = await asyncio.create_task( # type: ignore[call-arg] backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 
3213, in _atransform backend-core | | chunk = AddableDict({step_name: task.result()}) backend-core | | ^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3196, in get_next_chunk backend-core | | return await py_anext(generator) backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4091, in atransform backend-core | | async for output in self._atransform_stream_with_config( backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1880, in _atransform_stream_with_config backend-core | | chunk: Output = await asyncio.create_task( # type: ignore[call-arg] backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4000, in _atransform backend-core | | async for ichunk in input: backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer backend-core | | item = await iterator.__anext__() backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer backend-core | | item = await iterator.__anext__() backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 97, in tee_peer backend-core | | item = await iterator.__anext__() backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3226, in atransform backend-core | | async for chunk in self._atransform_stream_with_config( backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1880, in _atransform_stream_with_config backend-core | | chunk: Output = await asyncio.create_task( # type: ignore[call-arg] backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3213, in _atransform backend-core | | chunk = AddableDict({step_name: task.result()}) backend-core | | ^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3196, in get_next_chunk backend-core | | return await py_anext(generator) backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2794, in atransform backend-core | | async for chunk in self._atransform_stream_with_config( backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1880, in _atransform_stream_with_config backend-core | | chunk: Output = await asyncio.create_task( # type: ignore[call-arg] backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2764, in _atransform backend-core | | async for output in final_pipeline: backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1206, in atransform backend-core | | async for output in self.astream(final, config, **kwargs): backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 826, in 
astream backend-core | | yield await self.ainvoke(input, config, **kwargs) backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 280, in ainvoke backend-core | | raise e backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 273, in ainvoke backend-core | | result = await self._aget_relevant_documents( backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain/retrievers/contextual_compression.py", line 70, in _aget_relevant_documents backend-core | | docs = await self.base_retriever.ainvoke( backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 280, in ainvoke backend-core | | raise e backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py", line 273, in ainvoke backend-core | | result = await self._aget_relevant_documents( backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 715, in _aget_relevant_documents backend-core | | docs = await self.vectorstore.asimilarity_search( backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 403, in asimilarity_search backend-core | | return await run_in_executor(None, self.similarity_search, query, k=k, **kwargs) backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 533, in run_in_executor backend-core | | return await asyncio.get_running_loop().run_in_executor( backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run backend-core | | result = self.fn(*self.args, **self.kwargs) backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/code/vectorstore/supabase.py", line 76, in similarity_search backend-core | | vectors = self._embedding.embed_documents([query]) backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 211, in embed_documents backend-core | | embeddings = self._embed(instruction_pairs) backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 199, in _embed backend-core | | return [self._process_emb_response(prompt) for prompt in iter_] backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 199, in <listcomp> backend-core | | return [self._process_emb_response(prompt) for prompt in iter_] backend-core | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ backend-core | | File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 173, in _process_emb_response backend-core | | raise ValueError( backend-core | | ValueError: Error raised by inference API HTTP code: 404, {"error":"model 'llama2' not found, try pulling it first"}

Just look at the last two errors in the log above.

How do I change llama2 to llama3?
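
For what it is worth, the 404 at the bottom of that log comes from the embedding side: the langchain-community Ollama embeddings wrapper defaults to model="llama2", so unless a different name is passed through (or llama2 is pulled), the embed call fails. A hedged sketch of the parameter in question, outside of Quivr (llama3 here is only an example name and must match what `ollama list` shows):

# Hedged sketch (not Quivr code): the langchain-community Ollama embeddings class
# defaults to "llama2"; passing model= explicitly avoids the 404 in the log above.
from langchain_community.embeddings import OllamaEmbeddings

embeddings = OllamaEmbeddings(
    base_url="http://host.docker.internal:11434",
    model="llama3",  # must be a model that `ollama list` actually shows
)
print(len(embeddings.embed_query("hello")))  # prints the embedding dimension if the model exists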

@mkhludnev

How do I change llama2 to llama3?

#2690

Contributor

Thanks for your contributions, we'll be closing this issue as it has gone stale. Feel free to reopen if you'd like to continue the discussion.

@github-actions github-actions bot added the Stale label Sep 17, 2024
@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Sep 22, 2024