This repository has been archived by the owner on Feb 11, 2025. It is now read-only.

OpenAI RateLimit ERROR || Ample Quota left #74

Open
Ashutosh-Srivastav opened this issue Nov 18, 2023 · 1 comment

Comments


Ashutosh-Srivastav commented Nov 18, 2023

Hi,

I have deployed the code as suggested, but when trying to query the knowledge graph I got this error on the terminal/server:

api | [{'role': 'user', 'content': 'Hi'}, {'role': 'user', 'content': 'Which company has generated maximum revenue?'}, {'role': 'user', 'content': 'Question to be converted to Cypher: Which company has generated maximum revenue?'}]
api | Retrying LLM call You exceeded your current quota, please check your plan and billing details.
api | Retrying LLM call You exceeded your current quota, please check your plan and billing details.
api | Retrying LLM call You exceeded your current quota, please check your plan and billing details.

Q. This shouldn't be the case, since I have enough quota left in my free-tier account for GPT-3.5; kindly review.

Also, the RateLimitError is uncaught for GPT-3.5, while the equivalent error for GPT-4 is caught and returned as a result:

GPT 4:

api | results {'output': [{'message': 'Error: The model gpt-4 does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.'}], 'generated_cypher': None}

GPT 3.5:

api | INFO: 172.18.0.1:52382 - "POST /questionProposalsForCurrentDb HTTP/1.1" 500 Internal Server Error
api | ERROR: Exception in ASGI application
api | Traceback (most recent call last):
api | File "/api/src/llm/openai.py", line 33, in generate
api | completions = openai.ChatCompletion.create(
api | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api | File "/usr/local/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
api | return super().create(*args, **kwargs)
api | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api | File "/usr/local/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
api | response, _, api_key = requestor.request(
api | ^^^^^^^^^^^^^^^^^^
api | File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 230, in request
api | resp, got_stream = self._interpret_response(result, stream)
api | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api | File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 624, in _interpret_response
api | self._interpret_response_line(
api | File "/usr/local/lib/python3.11/site-packages/openai/api_requestor.py", line 687, in _interpret_response_line
api | raise self.handle_error_response(
api | openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details.

This can be handled in openai.py:

except openai.error.RateLimitError as e:
    return f"Rate limit exceeded. Error: {e}"
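A minimal sketch of the idea, runnable without the openai package: `RateLimitError` and `call_llm` below are stand-ins (the real handler would catch `openai.error.RateLimitError` around `openai.ChatCompletion.create`), and the returned payload mirrors the `{'output': [...], 'generated_cypher': None}` shape seen in the GPT-4 error response above.

```python
class RateLimitError(Exception):
    """Stand-in for openai.error.RateLimitError (openai<1.0)."""


def call_llm(messages):
    # Placeholder for openai.ChatCompletion.create(...);
    # always raises here to simulate an exhausted quota.
    raise RateLimitError(
        "You exceeded your current quota, "
        "please check your plan and billing details."
    )


def generate(messages):
    try:
        return call_llm(messages)
    except RateLimitError as e:
        # Return a structured error payload the API layer can forward
        # to the client instead of letting the ASGI app crash with a 500.
        return {
            "output": [{"message": f"Rate limit exceeded. Error: {e}"}],
            "generated_cypher": None,
        }


result = generate([{"role": "user", "content": "Hi"}])
```

With this shape, the frontend can render the rate-limit message the same way it already renders the GPT-4 "model does not exist" error.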
@AshutoshABB

Hi,

I am sorry, I just noticed my token grants have expired; I had no idea free tokens were only available for 3 months.
Still, the exception-handling suggestion is valid, since the rate-limit error must be propagated to the client page.


Thanks and Regards,
Ashutosh

2 participants