
ERROR: Exception in ASGI application when running with Ollama. #676

Open
banifou opened this issue Jul 16, 2024 · 9 comments

@banifou

banifou commented Jul 16, 2024

It's not working with Ollama and llama3. Before, it worked fine with GPT-4o.

From the logs:

Finalized research step.
💸 Total Research Costs: $0.01411754
🤔 Generating subtopics...

🤖 Calling llama3...

📋Subtopics: subtopics=[Subtopic(task='Learn more about BLABLA'), Subtopic(task='Analyze your competitors'), Subtopic(task='Content optimization')]
INFO: connection closed
Error in generating report introduction:
🔎 Starting the research task for 'Learn more about BLABLA'...
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 244, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)  # type: ignore[func-returns-value]
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
    return await self.app(scope, receive, send)
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 151, in __call__
    await self.app(scope, receive, send)
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/starlette/routing.py", line 373, in handle
    await self.app(scope, receive, send)
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/starlette/routing.py", line 96, in app
    await wrap_app_handling_exceptions(app, session)(scope, receive, send)
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/starlette/routing.py", line 94, in app
    await func(session)
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/fastapi/routing.py", line 348, in app
    await dependant.call(**values)
  File "/python/gpt-researcher/backend/server.py", line 68, in websocket_endpoint
    report = await manager.start_streaming(
  File "/python/gpt-researcher/backend/websocket_manager.py", line 56, in start_streaming
    report = await run_agent(task, report_type, report_source, tone, websocket)
  File "/python/gpt-researcher/backend/websocket_manager.py", line 88, in run_agent
    report = await researcher.run()
  File "/python/gpt-researcher/backend/report_type/detailed_report/detailed_report.py", line 65, in run
    _, report_body = await self._generate_subtopic_reports(subtopics)
  File "/python/gpt-researcher/backend/report_type/detailed_report/detailed_report.py", line 105, in _generate_subtopic_reports
    result = await fetch_report(subtopic)
  File "/python/gpt-researcher/backend/report_type/detailed_report/detailed_report.py", line 93, in fetch_report
    subtopic_report = await self._get_subtopic_report(subtopic)
  File "/python/gpt-researcher/backend/report_type/detailed_report/detailed_report.py", line 131, in _get_subtopic_report
    await subtopic_assistant.conduct_research()
  File "/python/gpt-researcher/gpt_researcher/master/agent.py", line 88, in conduct_research
    await stream_output(
  File "/python/gpt-researcher/gpt_researcher/master/actions.py", line 382, in stream_output
    await websocket.send_json({"type": type, "output": output})
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/starlette/websockets.py", line 198, in send_json
    await self.send({"type": "websocket.send", "text": text})
  File "/python/gpt-researcher/venv/lib/python3.10/site-packages/starlette/websockets.py", line 112, in send
    raise RuntimeError('Cannot call "send" once a close message has been sent.')
RuntimeError: Cannot call "send" once a close message has been sent.
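
For reference, the failure is Starlette refusing to send on a WebSocket whose close handshake has already completed: the connection drops mid-research ("INFO: connection closed" above), and stream_output then tries to push the next update. As a minimal illustrative sketch (not the project's actual fix; the safe_send_json helper name is hypothetical), Starlette exposes the connection state, so a send can be guarded like this:

from starlette.websockets import WebSocket, WebSocketState

async def safe_send_json(websocket: WebSocket, payload: dict) -> bool:
    # Hypothetical guard: Starlette raises the RuntimeError above when a
    # send happens after a close message, so check the state first.
    if websocket.application_state == WebSocketState.CONNECTED:
        await websocket.send_json(payload)
        return True
    return False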

@arsaboo
Contributor

arsaboo commented Jul 16, 2024

Working fine for me with Ollama.

Did it work with Ollama for you earlier?

@banifou
Author

banifou commented Jul 17, 2024

No, I tried twice. Ollama works fine from the command line.

@arsaboo
Contributor

arsaboo commented Jul 17, 2024

I meant, did GPTR ever work with Ollama for you?

@banifou
Author

banifou commented Jul 18, 2024

What is GPTR?! Ah, gpt-researcher.
Like I said, I tried twice with Ollama and failed twice. So it never worked with Ollama.

@assafelovic
Owner

Hey @banifou, can you pull the latest and try again? Also make sure to upgrade the gptr pip package: pip install gpt-researcher -U

@banifou
Author

banifou commented Aug 19, 2024

Yes, it's working now! It takes much longer for llama3 to finish the work than with OpenAI, though!

@ElishaKay
Collaborator

ElishaKay commented Aug 20, 2024

Sup @banifou @arsaboo

Is it possible to get a sample env file / setup guidance for running with Ollama?
I'd love to create some documentation around this for the community.

I deployed an Open WebUI server with Elestio, but have yet to get it working.

Here's my .env:

LLM_PROVIDER=ollama
OLLAMA_BASE_URL=https://ollama-ug3qr-u21899.vm.elestio.app:11434/api

OPENAI_API_KEY=OLLAMA
EMBEDDING_PROVIDER=ollama
FAST_LLM_MODEL=llama3
SMART_LLM_MODEL=llama3
OLLAMA_EMBEDDING_MODEL=snowflake-arctic-embed:l

Any recommendations on a working setup would be greatly appreciated

@arsaboo
Contributor

arsaboo commented Aug 20, 2024

@ElishaKay
Here's my env:

RETRIEVER=searx
SEARX_URL=http://host.docker.internal:4002
RETRIEVER_ENDPOINT=http://host.docker.internal:4002
# Ollama Config
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://host.docker.internal:11434
OPENAI_API_KEY=OLLAMA
EMBEDDING_PROVIDER=ollama
FAST_LLM_MODEL=qwen2:72b-instruct
SMART_LLM_MODEL=llama3
OLLAMA_EMBEDDING_MODEL=snowflake-arctic-embed:l
DOC_PATH=./my-docs
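
A quick way to sanity-check that OLLAMA_BASE_URL is reachable from wherever GPTR runs is to hit Ollama's /api/tags endpoint, which lists the pulled models. A rough sketch, assuming the host.docker.internal URL from the .env above (adjust to your setup):

import requests

# Should match OLLAMA_BASE_URL in your .env.
OLLAMA_BASE_URL = "http://host.docker.internal:11434"

# Ollama's /api/tags returns the locally available models.
resp = requests.get(f"{OLLAMA_BASE_URL}/api/tags", timeout=10)
resp.raise_for_status()
print([model["name"] for model in resp.json().get("models", [])])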

Here's my compose file (the Next.js frontend is not working for me):

version: '3'
services:
  gpt-researcher:
    pull_policy: build
    image: gptresearcher/gpt-researcher
    build: ./
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      TAVILY_API_KEY: ${TAVILY_API_KEY}
      LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
    restart: always
    ports:
      - 8001:8000
    extra_hosts:
      - "host.docker.internal:host-gateway"
  gptr-nextjs:
    pull_policy: build
    image: gptresearcher/gptr-nextjs
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    build:
      dockerfile: Dockerfile.dev
      context: multi_agents/frontend
    volumes:
      - /app/node_modules
      - ./multi_agents/frontend:/app
    restart: always
    ports:
      - 3000:3000
    extra_hosts:
      - "host.docker.internal:host-gateway"

@ElishaKay
Collaborator

Cool, thanks @arsaboo

What error does docker compose throw for the nextjs app?

Try opening localhost:3000 in an incognito tab - sometimes nextjs has caching issues
