
Error code: 422 - UnprocessableEntityError - As of g4f v0.3.6.6 and above, I've been getting this error! #2420

Open
Linden10 opened this issue Nov 25, 2024 · 12 comments
Labels
bug Something isn't working

Comments

@Linden10

Just updated from v0.3.6.4 to the latest version of g4f, that being 0.3.7.1 (updates happening in the past few hours!) and I've been getting this error whenever I try to translate my files in DazedMTL:

openai.UnprocessableEntityError: Error code: 422

Here's the full output of the error from DazedMTL:

Traceback (most recent call last):
  File "C:\Users\ASUS\Downloads\Eroge\DazedMTLTool-main\modules\csv.py", line 337, in translateCSV
    response = translateGPT(stringList, "", True)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\retry\api.py", line 73, in retry_decorator
    return __retry_internal(partial(f, *args, **kwargs), exceptions, tries, delay, ma           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\retry\api.py", line 33, in __retry_internal
    return f()
           ^^^
  File "C:\Users\ASUS\Downloads\Eroge\DazedMTLTool-main\modules\csv.py", line 739, in translateGPT
    response = translateText(characters, system, user, history, 0.2, format)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ASUS\Downloads\Eroge\DazedMTLTool-main\modules\csv.py", line 585, in translateText
    response = openai.chat.completions.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\openai\_utils\_utils.py", line 303, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\openai\resources\chat\completions.py", line 598, in create
    return self._post(
           ^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\openai\_base_client.py", line 1086, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\openai\_base_client.py", line 846, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\openai\_base_client.py", line 898, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.UnprocessableEntityError: Error code: 422 - {'detail': [{'loc': ['body', 'provider'], 'message': 'Field required', 'type': 'missing'}, {'loc': ['body', 'max_tokens'], 'message': 'Field required', 'type': 'missing'}, {'loc': ['body', 'stop'], 'message': 'Field required', 'type': 'missing'}, {'loc': ['body', 'api_key'], 'message': 'Field required', 'type': 'missing'}, {'loc': ['body', 'web_search'], 'message': 'Field required', 'type': 'missing'}, {'loc': ['body', 'proxy'], 'message': 'Field required', 'type': 'missing'}, {'loc': ['body', 'conversation_id'], 'message': 'Field required', 'type': 'missing'}]}

On G4F, it reports this:

INFO:     127.0.0.1:59946 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:59948 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:59951 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:59953 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:59956 - "POST /v1/chat/completions HTTP/1.1" 422 Unprocessable Entity

I have no idea what these missing "fields" are, but they're breaking my requests. I had to downgrade to 0.3.6.5 just to get things working again. I hope there's a way to fix this!
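For anyone hitting this before a fix lands, here is a hedged client-side sketch. The field names come straight from the 422 response above; the default values are guesses, and `build_g4f_payload` is a hypothetical helper, not part of DazedMTL or g4f. With the official `openai` client, these extras could presumably be passed through `extra_body`.

```python
def build_g4f_payload(messages, model="gpt-4o"):
    """Chat-completions body that explicitly includes every field the
    affected g4f versions reported as 'Field required' in the 422 error.
    The chosen defaults (None/False/512) are assumptions."""
    return {
        "model": model,
        "messages": messages,
        "provider": None,
        "temperature": 0.2,
        "max_tokens": 512,
        "stop": None,
        "api_key": None,
        "web_search": False,
        "proxy": None,
        "conversation_id": None,
    }

payload = build_g4f_payload([{"role": "user", "content": "Hello"}])
# Verify none of the fields from the 422 'detail' list are missing:
missing = {"provider", "max_tokens", "stop", "api_key",
           "web_search", "proxy", "conversation_id"} - payload.keys()
print(missing)  # set()
```

With the `openai` package, the non-standard keys (`provider`, `web_search`, `proxy`, `conversation_id`) would go in `extra_body={...}` on `client.chat.completions.create(...)`, since the client rejects unknown top-level keyword arguments.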

@Linden10 Linden10 added the bug Something isn't working label Nov 25, 2024
@ahmedashraf443

I'm also getting the same error.
(screenshot of the same error attached)

@Auximen

Auximen commented Nov 25, 2024

Same problem

{"detail":[{"loc":["body","temperature"],"message":"Field required","type":"missing"},{"loc":["body","max_tokens"],"message":"Field required","type":"missing"},{"loc":["body","stop"],"message":"Field required","type":"missing"},{"loc":["body","api_key"],"message":"Field required","type":"missing"},{"loc":["body","web_search"],"message":"Field required","type":"missing"},{"loc":["body","proxy"],"message":"Field required","type":"missing"},{"loc":["body","conversation_id"],"message":"Field required","type":"missing"}]}...

@mind-animator-design

Same problem

DDG: ConnectionTimeoutError: Connection timeout to host https://duckduckgo.com/duckchat/v1/status
ChatGptEs: ClientResponseError: 413, message='Request Entity Too Large', url='https://chatgpt.es/wp-admin/admin-ajax.php'
Liaobots: ResponseStatusError: Response 500: <title>500: Internal Server Error</title> (followed by the Next.js error-page markup for the "500 - Internal Server Error" page)
OpenaiChat: NoValidHarFileError: No .har file found

@hlohaus
Collaborator

hlohaus commented Nov 25, 2024

@Linden10
It is an issue with pydantic's `Field` definitions.

I created a pull request that fixes the 422 error:

#2421

@Linden10
Author

Hmm...the 422 error has been fixed! But now I'm getting 500 Internal Server Error responses, even though ChatGPT, ChatGptEs, and Blackbox are available...

This is after upgrading to 0.3.7.4 @hlohaus
I'm aware that the other providers are rate-limited, but the ones that usually work aren't working either, for some reason.
DazedMTL also doesn't progress with the translation; it keeps retrying (breaking the tqdm progress bar, I guess because there's never a 200 response) until it reports the same errors as G4F and closes.

Here's what G4F says:

INFO:     127.0.0.1:59681 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR:g4f.api:RetryProvider failed:
Airforce: RateLimitError: Response 429: Rate limit reached
DarkAI: ClientResponseError: 429, message='Too Many Requests', url='https://darkai.foundation/chat'
Traceback (most recent call last):
  File "C:\Python311\Lib\site-packages\g4f\api\__init__.py", line 317, in chat_completions
    return await response
           ^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\g4f\client\__init__.py", line 162, in async_iter_append_model_and_provider
    async for chunk in response:
  File "C:\Python311\Lib\site-packages\g4f\client\__init__.py", line 117, in async_iter_response
    async for chunk in response:
  File "C:\Python311\Lib\site-packages\g4f\providers\retry_provider.py", line 143, in create_async_generator
    raise_exceptions(exceptions)
  File "C:\Python311\Lib\site-packages\g4f\providers\retry_provider.py", line 324, in raise_exceptions
    raise RetryProviderError("RetryProvider failed:\n" + "\n".join([
g4f.errors.RetryProviderError: RetryProvider failed:
Airforce: RateLimitError: Response 429: Rate limit reached
DarkAI: ClientResponseError: 429, message='Too Many Requests', url='https://darkai.foundation/chat'
INFO:     127.0.0.1:59681 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR:g4f.api:RetryProvider failed:
Airforce: RateLimitError: Response 429: Rate limit reached
DarkAI: ClientResponseError: 429, message='Too Many Requests', url='https://darkai.foundation/chat'
Traceback (most recent call last):
  File "C:\Python311\Lib\site-packages\g4f\api\__init__.py", line 317, in chat_completions
    return await response
           ^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\g4f\client\__init__.py", line 162, in async_iter_append_model_and_provider
    async for chunk in response:
  File "C:\Python311\Lib\site-packages\g4f\client\__init__.py", line 117, in async_iter_response
    async for chunk in response:
  File "C:\Python311\Lib\site-packages\g4f\providers\retry_provider.py", line 143, in create_async_generator
    raise_exceptions(exceptions)
  File "C:\Python311\Lib\site-packages\g4f\providers\retry_provider.py", line 324, in raise_exceptions
    raise RetryProviderError("RetryProvider failed:\n" + "\n".join([
g4f.errors.RetryProviderError: RetryProvider failed:
Airforce: RateLimitError: Response 429: Rate limit reached
DarkAI: ClientResponseError: 429, message='Too Many Requests', url='https://darkai.foundation/chat'
INFO:     127.0.0.1:59681 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR:g4f.api:RetryProvider failed:
Airforce: RateLimitError: Response 429: Rate limit reached
Liaobots: RateLimitError: Response 402: Rate limit reached
OpenaiChat: RateLimitError: Response 429: Rate limit reached
DarkAI: ClientResponseError: 429, message='Too Many Requests', url='https://darkai.foundation/chat'
Traceback (most recent call last):
  File "C:\Python311\Lib\site-packages\g4f\api\__init__.py", line 317, in chat_completions
    return await response
           ^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\g4f\client\__init__.py", line 162, in async_iter_append_model_and_provider
    async for chunk in response:
  File "C:\Python311\Lib\site-packages\g4f\client\__init__.py", line 117, in async_iter_response
    async for chunk in response:
  File "C:\Python311\Lib\site-packages\g4f\providers\retry_provider.py", line 143, in create_async_generator
    raise_exceptions(exceptions)
  File "C:\Python311\Lib\site-packages\g4f\providers\retry_provider.py", line 324, in raise_exceptions
    raise RetryProviderError("RetryProvider failed:\n" + "\n".join([
g4f.errors.RetryProviderError: RetryProvider failed:
Airforce: RateLimitError: Response 429: Rate limit reached
Liaobots: RateLimitError: Response 402: Rate limit reached
OpenaiChat: RateLimitError: Response 429: Rate limit reached
DarkAI: ClientResponseError: 429, message='Too Many Requests', url='https://darkai.foundation/chat'

And here's what DazedMTL says as well, same as above:

01commonB.csv: |          | 27000/? [08:10<00:00, 66.32it/s]
Traceback (most recent call last):
  File "C:\Users\ASUS\Downloads\Eroge\DazedMTLTool-main\modules\csv.py", line 337, in translateCSV
    response = translateGPT(stringList, "", True)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\decorator.py", line 232, in fun
    return caller(func, *(extras + args), **kw)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\retry\api.py", line 73, in retry_decorator
    return __retry_internal(partial(f, *args, **kwargs), exceptions, tries, delay, max_delay, backoff, jitter,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\retry\api.py", line 33, in __retry_internal
    return f()
           ^^^
  File "C:\Users\ASUS\Downloads\Eroge\DazedMTLTool-main\modules\csv.py", line 739, in translateGPT
    response = translateText(characters, system, user, history, 0.2, format)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ASUS\Downloads\Eroge\DazedMTLTool-main\modules\csv.py", line 585, in translateText
    response = openai.chat.completions.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\openai\_utils\_utils.py", line 303, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\openai\resources\chat\completions.py", line 598, in create
    return self._post(
           ^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\openai\_base_client.py", line 1086, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\openai\_base_client.py", line 846, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\openai\_base_client.py", line 884, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\openai\_base_client.py", line 956, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\openai\_base_client.py", line 884, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\openai\_base_client.py", line 956, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\openai\_base_client.py", line 898, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 500 - {'error': {'message': "RetryProviderError: RetryProvider failed:\nAirforce: RateLimitError: Response 429: Rate limit reached\nOpenaiChat: RateLimitError: Response 429: Rate limit reached\nLiaobots: RateLimitError: Response 402: Rate limit reached\nDarkAI: ClientResponseError: 429, message='Too Many Requests', url='https://darkai.foundation/chat'"}, 'model': 'gpt-4o'}

I re-downgraded back to 0.3.6.5 and G4F started working again with DazedMTL like before.

If you want, I can provide my copy of DazedMTL with a sample .csv file for translation, in case you'd like to check whether something isn't accounted for in the project or figure out why the error is happening. Just ask and I'll upload it with instructions!

@hlohaus
Collaborator

hlohaus commented Nov 25, 2024

Hello,

I'm not sure if you're using the Docker image or Python package, but it's possible that you're on Cloudflare's blacklist. Many websites use this to protect their services.

Additionally, you're only testing providers from the model list. I recommend using Microsoft Copilot, as it's both fast and effective.

Let me know if you have any other questions.

Best regards

@Linden10
Author

Linden10 commented Nov 25, 2024

Hello,

I'm not sure if you're using the Docker image or Python package, but it's possible that you're on Cloudflare's blacklist. Many websites use this to protect their services.

Additionally, you're only testing providers from the model list. I recommend using Microsoft Copilot, as it's both fast and effective.

Let me know if you have any other questions.

Best regards

I'm using the Python pip package, and when I downgraded to 0.3.6.5, ChatGPT/Blackbox/ChatGptEs started working again.

About the rate-limited providers: that's because I was using DazedMTL for a while before upgrading to the newest version, so it's not a Cloudflare blacklist issue. Despite that, the providers that aren't rate-limited would normally respond, but for some reason they aren't!

I'm also able to visit the sites and pass the Cloudflare captcha without any issue. It's just that the latest versions of g4f somehow broke compatibility with DazedMTL, so if you can, please look into this!

Thank you @hlohaus

Also, if Copilot can provide quality translations like gpt-4o, then I'll consider trying it out, at least until this issue is fixed. (It also has to be OpenAI-API compatible.)

@Linden10
Author

Here's a copy of my DazedMTL Python project for testing, in case you're curious what's causing the errors. It includes a slightly modified csv.py module so that the JSON output is properly extracted from the API, by the way.
DazedMTLTool-main.zip

I've included a sample untranslated .csv file (formatted in Translator++ format) for bug-testing with the api. If you're curious what VN (Visual Novel) it's from, it's this one.

The .env file has already been modified to work with g4f on the local machine but in-case you need to adjust anything, you can edit it.

The minimum required Python version is 3.12. You can either install the requirements manually from the requirements.txt file, or run "start.bat" / "start.sh", which installs the requirements and then starts the project.

If you've already installed the requirements, you can skip "start.bat" / "start.sh" and start DazedMTL directly with "python start.py".

Once it starts up, if no error appears you'll be shown two options, selectable with 1 or 2.
Start the G4F API before proceeding.

Once G4F is started, switch to DazedMTL and press 1 for "translations", then press 4 for "csv (Translator++)", then press 1 again for "Translator++" to start translating the .csv file in the "files" folder.

You'll then encounter potentially the same issues as I have with the latest versions of G4F.

If you want to modify the Python project manually, you can do so via the csv.py module, for example to add debug printing. I hope this helps, and sorry for burdening you with such a chore...thank you. @hlohaus

@hlohaus
Collaborator

hlohaus commented Nov 26, 2024

The Microsoft Copilot utilizes the GPT-4o model.
Consider implementing a brief sleep interval between requests. Additionally, ensure that curl_cffi is installed; if not, proceed with the installation.

pip install -U curl_cffi
@Linden10
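The maintainer's suggestion of a brief sleep interval between requests could be sketched as a small wrapper. This is a generic pacing helper, not part of g4f or DazedMTL; the delay values are assumptions, and `translateText` in the usage comment is the caller's own function from csv.py.

```python
import random
import time

def throttled(func, min_delay=2.0, jitter=1.0):
    """Wrap a request function so consecutive calls are spaced out by at
    least min_delay seconds (plus random jitter), which reduces the chance
    of 429 rate-limit responses from the upstream providers."""
    last_call = [float("-inf")]  # mutable cell; the first call never waits
    def wrapper(*args, **kwargs):
        wait = min_delay + random.uniform(0, jitter) - (time.monotonic() - last_call[0])
        if wait > 0:
            time.sleep(wait)
        try:
            return func(*args, **kwargs)
        finally:
            last_call[0] = time.monotonic()
    return wrapper

# Hypothetical usage inside DazedMTL's csv.py:
# translateText = throttled(translateText, min_delay=2.0)
```

This only spaces out the client's own requests; it cannot fix provider-side 500 errors, only reduce self-inflicted 429s.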

@Linden10
Author

Linden10 commented Nov 26, 2024

The Microsoft Copilot utilizes the GPT-4o model. Consider implementing a brief sleep interval between requests. Additionally, ensure that curl_cffi is installed; if not, proceed with the installation.

pip install -U curl_cffi @Linden10

Oh really? That's interesting to know about Copilot! I'll test it out! Also, I'm already using the latest version of curl_cffi, so that's not why DazedMTL isn't working with G4F. I re-upgraded to the newest version, now v0.3.7.5, and hadn't used any API in the last couple of hours, including ChatGPT. So when I tried DazedMTL with G4F using my ChatGPT account, nodriver's browser opened as usual and created a new chat in ChatGPT, then the browser closed like usual. At that point the translations should go through and return 200 responses...but even so, I kept getting 500 internal errors and nothing worked.

I then re-downgraded to 0.3.6.5, and ChatGPT, Blackbox, and the rest all worked again with no errors!
I'll look into adding some custom code to delay the requests to prevent rate-limiting, but even so, that won't fix the 500 Internal Server Error...

Have you tested my DazedMTL with the newest version of G4F yet? If so, did it work?
You should be getting 200 responses in the G4F API log; if you're getting the same 500 errors I am, that would confirm it's not just an issue on my end...

Could this be a Python version compatibility issue? I'm on the latest Python 3.11 (I modified DazedMTL to run on 3.11 in my case, and plan to upgrade to 3.12 or 3.13 soon), so I'm curious whether something in G4F changed that requires a newer version...otherwise, never mind.

As for Copilot...is it wired up to work with the g4f API? I don't see it in models.py...should I manually add it to the gpt-4o list in models.py, or wait until it's ready?
Thanks for helping! @hlohaus

@hlohaus
Collaborator

hlohaus commented Nov 26, 2024

It appears that you need to pass the provider separately in the request body or you add ?provider=Copilot to the chat completions URL. I am not sure why the providers are not working for you. I have not made any changes that would explain this. @Linden10
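The two options mentioned here (a provider field in the request body, or `?provider=Copilot` on the chat-completions URL) could look like the sketch below. The base URL and port are assumptions about a default local g4f server, and `completions_url` is a hypothetical helper, not a g4f function.

```python
from urllib.parse import urlencode

def completions_url(base="http://127.0.0.1:1337/v1/chat/completions",
                    provider=None):
    """Build the chat-completions URL, optionally pinning a provider via
    the ?provider= query parameter (option 2 from the comment above)."""
    return f"{base}?{urlencode({'provider': provider})}" if provider else base

print(completions_url(provider="Copilot"))
# http://127.0.0.1:1337/v1/chat/completions?provider=Copilot
```

For option 1 (the request body), the `openai` Python client rejects unknown keyword arguments, so the extra field would presumably need to go through `extra_body={"provider": "Copilot"}` on `client.chat.completions.create(...)`.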

@Linden10
Author

It appears that you need to pass the provider separately in the request body or you add ?provider=Copilot to the chat completions URL. I am not sure why the providers are not working for you. I have not made any changes that would explain this. @Linden10

Pass the provider separately in the request body, or add ?provider=Copilot?
Hmm...I'm guessing there's documentation explaining how to specify a particular provider when making a request, or how to change the API URL to use Copilot...

Interesting to know! I'll, uh...wait until Copilot is officially added to the models.py API section, since that'll be easier for me to use with the g4f API. Still, thanks for the help!

As for the changes that would explain the errors, it has to be one of the changes since v0.3.6.6 (ignoring the 422 error that was fixed) that's causing this problem.

I'd have to download each version, apply the 422 fix patch, and see if I can find out which particular version starts throwing the 500 Internal Server Error. Yup!
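That version-by-version hunt could be cut down with a binary search over the release list. A hedged sketch follows: the release list is assumed, `first_bad_version` is a hypothetical helper, and in practice `is_bad` would pip-install the given version and re-run the failing request.

```python
def first_bad_version(versions, is_bad):
    """Binary-search an ordered release list for the first version where
    is_bad(v) is True, assuming badness is monotonic (once a release is
    broken, all later ones are too). Returns None if none are bad."""
    lo, hi = 0, len(versions) - 1
    if not is_bad(versions[hi]):
        return None
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(versions[mid]):
            hi = mid       # the first bad version is at mid or earlier
        else:
            lo = mid + 1   # mid is still good; look later
    return versions[lo]

# Assumed release list; a stand-in is_bad marks everything after 0.3.6.5:
releases = ["0.3.6.5", "0.3.6.6", "0.3.7.1", "0.3.7.4", "0.3.7.5"]
print(first_bad_version(releases, lambda v: v != "0.3.6.5"))  # 0.3.6.6
```

The real `is_bad` callback might shell out to `pip install "g4f==<version>"` and then issue one test request against the local API, so only about log2(N) installs are needed instead of N.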
