Replies: 33 comments 1 reply
-
https://status.openai.com
OK, I think I found the problem: OpenAI is having issues with their services.
-
Seems to be back up. The error went away for us.
-
Seems like an OpenAI outage; if not, please reopen the issue.
-
Please reopen the issue; it's not an OpenAI outage problem.
When I send around 2k tokens and expect back another 3-4k tokens, the API works. Once I increase the input beyond 4k tokens and expect back 6k tokens, I get this error.
-
@xbaha, I believe you are encountering a different issue, but I will keep this issue open. I think the SDK should provide more information about errors in such cases. If possible, it would be helpful to reproduce the issue in my client. If your prompt is not private, could you share it? Alternatively, you can use Laser Cat Eyes to debug APIs (sample usage is available in the playground). You can observe the incoming response and share it with us.
-
Hi, thank you for reopening this issue. I use gpt-3.5-turbo-16k, temp=0.0f.
I just tested it and got this error.
-
@roldengarm OpenAI is supposed to return an error message and code when encountering issues. These fields are available through the SDK response. However, in cases where a proper response cannot be returned due to an outage or internal error, the SDK is unable to provide more details. I am planning to improve this behavior in the future, but for now, my suggestion is to return an unexpected error message :/
-
@kayhantolga have you tried it with the NuGet package? I never tried Laser Cat Eyes, and I'm not sure why you tried it there, because the prompt, as I mentioned, works in OpenAI but not in this C# package. If you can, try it in C# and see whether you get an exception or not.
-
@xbaha Laser Cat Eyes is a tool that simply displays incoming and outgoing data. I tried using the source code (in the playground project), but now I'm going to try using the NuGet package again. They should behave in the same way, but let's see.
-
For analysis, I enhanced the method in HttpClientExtensions with a try-catch to capture the JsonException and include the HTTP status code.
In most cases it was a 'Bad Gateway' HTTP error status and reason, which was the cause of the JsonException.
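A minimal sketch of that kind of guard, using a hypothetical extension method (the actual betalgo code differs; the name `ReadAsJsonOrThrowAsync` is invented for illustration):

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public static class SafeJsonExtensions
{
    // Hypothetical helper: deserialize the body, but surface the HTTP status
    // code and a snippet of the raw payload when the body is not valid JSON
    // (e.g. an HTML "Bad Gateway" page returned by a proxy).
    public static async Task<T?> ReadAsJsonOrThrowAsync<T>(this HttpResponseMessage response)
    {
        var raw = await response.Content.ReadAsStringAsync();
        try
        {
            return JsonSerializer.Deserialize<T>(raw);
        }
        catch (JsonException ex)
        {
            var snippet = raw.Substring(0, Math.Min(raw.Length, 200));
            throw new HttpRequestException(
                $"Non-JSON response (HTTP {(int)response.StatusCode} {response.ReasonPhrase}): {snippet}",
                ex);
        }
    }
}
```

With something like this in place, a 502 HTML page produces an exception that names the status code instead of a bare `'<' is an invalid start of a value`.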
-
System.Text.Json.JsonException: '<' is an invalid start of a value. Path: $ | LineNumber: 0 | BytePositionInLine: 0.
-
@kayhantolga I got this error too.
-
@kayhantolga why don't you use Newtonsoft.Json?
-
@belaszalontai the response works in Laser Cat Eyes, which means it was returned as valid JSON to this NuGet package; what happens later when parsing it is the problem.
-
System.Text.Json.JsonException: The JSON value could not be converted to System.String. Path: $.error.code | LineNumber: 0 | BytePositionInLine: 20.
Hello everyone, I'm having the same issue in 1/3 of my requests for the last two days. Any update about it, or ideas how to solve it?
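For what it's worth, that message suggests `error.code` sometimes comes back as a JSON number rather than a string. A hedged sketch (not the SDK's actual code; `ApiError` and the converter name are invented for illustration) of a converter that tolerates both:

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Sketch: accept both "code": "model_not_found" and "code": 502.
public class StringOrNumberConverter : JsonConverter<string>
{
    public override string? Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options) =>
        reader.TokenType switch
        {
            // Assumes integer codes; a decimal number would throw here.
            JsonTokenType.Number => reader.GetInt64().ToString(),
            JsonTokenType.String => reader.GetString(),
            _ => throw new JsonException($"Unexpected token {reader.TokenType} for error code.")
        };

    public override void Write(Utf8JsonWriter writer, string? value, JsonSerializerOptions options) =>
        writer.WriteStringValue(value);
}

public class ApiError
{
    [JsonConverter(typeof(StringOrNumberConverter))]
    [JsonPropertyName("code")]
    public string? Code { get; set; }
}
```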
-
Just to note I've seen this issue too. It seems sporadic and not easily reproducible, given the non-deterministic way that even the same prompt can generate different results. But it does happen, and I suspect it is to do with the size of the response.
-
As I said, this error message (System.Text.Json.JsonException: '<' is an invalid start of a value. Path: $ | LineNumber: 0 | BytePositionInLine: 0.) means that the BODY of the response is HTML instead of JSON. I suspect that sometimes the response from the OpenAI API is, for some reason, HTML rather than JSON. If that doesn't help, then we need stable, reproducible code or data, or just a request, to be able to find the root cause.
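If the raw body gets logged, a trivial check (a hypothetical helper, just inspecting the first non-whitespace character) can distinguish the two cases before parsing:

```csharp
public static class BodySniffer
{
    // '<' as the first non-whitespace character indicates an HTML error page
    // (e.g. a gateway's 502 response) rather than a JSON body.
    public static bool LooksLikeHtml(string body)
    {
        return body.TrimStart().StartsWith("<");
    }
}
```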
-
@belaszalontai I think you are on to something, as I'm asking in the prompt for the output to be formatted as HTML, but I'm not explicitly setting the output format (so I would assume it returns as "text" formatted as HTML). In most cases this seems to work fine, as it returns text formatted as HTML: I can run the same prompt 10 times and 9 times it will work, but just occasionally it fails. I'll look at explicitly setting the output format to text, to see if that makes any difference. So basically I'm passing it a load of text and then saying (as a system prompt): "You will output your response formatted as HTML (but not an entire document) without any other text." I'm using
-
@DanDiplo I looked it up, and the OpenAI API will always return JSON in the response BODY; the content property within the choices array would contain the HTML page. So my response_format idea makes no sense. I also don't want to assume that the content property is wrongly escaped in OpenAI's response. '<' is an invalid start of a value. LineNumber: 0 | BytePositionInLine: 0. The line and character position are both 0, so I think the parser found native HTML in the response BODY instead of a proper chat completion object. We need to log the raw response from OpenAI somehow to figure out what is received and why the SDK is not able to parse it.
-
@belaszalontai I think the SDK has a serious bug; I suspect it has to do with the response size. Going back to my example, I was never able to get a successful reply with this SDK. I tried another SDK, "openAIDotNet", and got the exact same error every time I called it. Then I decided to write the GenerateResponse function from scratch using HttpClientHandler/HttpRequestMessage, and it worked every time; I have never seen the error since. I have already used it over 5,000 times.
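xbaha's actual function isn't shown in the thread, but a minimal sketch of building such a direct call with HttpRequestMessage, bypassing any SDK, might look like this (endpoint and payload shape follow OpenAI's public chat completions API; the key handling and method name are assumptions):

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

public static class RawChatClient
{
    // Hypothetical builder for a direct chat completions request.
    // Sending it (and reading the response) is left to the caller.
    public static HttpRequestMessage BuildRequest(string apiKey, string prompt)
    {
        var payload = JsonSerializer.Serialize(new
        {
            model = "gpt-3.5-turbo-16k",
            messages = new[] { new { role = "user", content = prompt } }
        });
        var request = new HttpRequestMessage(HttpMethod.Post,
            "https://api.openai.com/v1/chat/completions")
        {
            Content = new StringContent(payload, Encoding.UTF8, "application/json")
        };
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);
        return request;
    }
}
```

The request would then be sent with `HttpClient.SendAsync` and the body read with `ReadAsStringAsync`, which lets you inspect the raw text before any JSON parsing happens.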
-
@xbaha Do I understand correctly that you have a root cause and a solution for this issue? Can you please share your GenerateResponse function?
-
Sure, but it is customized for my own use case without the SDK or anything:
-
@xbaha I have analyzed your code and compared it to betalgo's CreateCompletionAsStream method in the OpenAIChatCompletions.cs class. Both implementations consume SSE incorrectly, because they handle only data events and do not consider the other possible event fields or the keep-alive messages starting with a colon. See: https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#event_stream_format In your code you just remove the first 6 characters from each and every line. The CreateCompletionAsStream method also does not handle SSE events other than "data:". I think this is the root cause, but I have no proof. We need a raw log of the messages (with event names) in the stream coming from OpenAI to be sure. @kayhantolga what is your opinion?
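The SSE format referenced above distinguishes comment lines (starting with a colon), field lines such as `event:`, `id:`, and `retry:`, and `data:` lines. A sketch of a line classifier that only yields data payloads and skips everything else (hypothetical, not the SDK's code):

```csharp
public static class SseParser
{
    // Returns the payload if the line is a "data:" line; returns null for
    // blank lines, SSE comments / keep-alives (": ..."), other field lines
    // (event:, id:, retry:), and OpenAI's "[DONE]" sentinel.
    public static string? DataPayload(string line)
    {
        if (string.IsNullOrEmpty(line) || line.StartsWith(":"))
            return null;                          // blank line or comment / keep-alive
        if (!line.StartsWith("data:"))
            return null;                          // event:, id:, retry:, etc.
        var payload = line.Substring("data:".Length).TrimStart();
        return payload == "[DONE]" ? null : payload;
    }
}
```

Blindly stripping the first six characters of every line, by contrast, mangles any keep-alive or non-data field line into garbage that later fails JSON parsing.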
-
Do you mean
Not sure if this is universal or I was just lucky. I invite anyone who got the JSON error to try this code and give feedback. @DanDiplo @PabloOteroDeMiguel @i542873057 @asiryan
-
@xbaha
-
Hi. I only did one thing: I doubled the tokens, and the error has practically disappeared; it occurs maybe once in 100 requests.
-
I have a couple of comments which hopefully explain things.
Here are the actions I will take:
-
This issue should be addressed in version 7.4.3. After testing the new version, please provide additional feedback regarding the issue and the solution. |
-
Describe the bug
I use the API as normal, but it throws a System.Text.Json.JsonException:
The JSON value could not be converted to System.String. Path: $.error.code | LineNumber: 0 | BytePositionInLine: 20.
Your code piece
Result
System.Text.Json.JsonException
The JSON value could not be converted to System.String. Path: $.error.code | LineNumber: 0 | BytePositionInLine: 20.
InvalidOperationException: Cannot get the value of a token type 'Number' as a string.
at System.Text.Json.ThrowHelper.ReThrowWithPath(ReadStack& state, Utf8JsonReader& reader, Exception ex)
at System.Text.Json.Serialization.JsonConverter`1.ReadCore(Utf8JsonReader& reader, JsonSerializerOptions options, ReadStack& state)
at System.Text.Json.JsonSerializer.ContinueDeserialize[TValue](ReadBufferState& bufferState, JsonReaderState& jsonReaderState, ReadStack& readStack, JsonTypeInfo jsonTypeInfo)
at System.Text.Json.JsonSerializer.ReadFromStreamAsync[TValue](Stream utf8Json, JsonTypeInfo jsonTypeInfo, CancellationToken cancellationToken)
at System.Net.Http.Json.HttpContentJsonExtensions.ReadFromJsonAsyncCore[T](HttpContent content, Encoding sourceEncoding, JsonSerializerOptions options, CancellationToken cancellationToken)
at OpenAI.Extensions.HttpClientExtensions.PostAndReadAsAsync[TResponse](HttpClient client, String uri, Object requestModel, CancellationToken cancellationToken)
at OpenAI.Managers.OpenAIService.CreateCompletion(ChatCompletionCreateRequest chatCompletionCreateRequest, String modelId, CancellationToken cancellationToken)
at 【MyCode】
Expected behavior
API returns ChatGPT's reply