I am trying to change the operating gpt-4 model to gpt-4-1106-preview. It is much more convenient with its 128k context window, and it is significantly cheaper than the model currently used. However, I have noticed that the context here is limited to 8,192 tokens. I went through the gpt-engineer/gpt-engineer directory and updated virtually every instance of the string "gpt-4" (excluding comments) to "gpt-4-1106-preview", but that did not appear to change anything; I am still limited to 8,192 tokens. Is there something else I need to do?
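For reference, the two limits being compared are 8,192 tokens for gpt-4 versus 128,000 tokens for gpt-4-1106-preview. A minimal sketch of that comparison (the mapping and helper name here are mine, not part of gpt-engineer; the values are OpenAI's published limits for these model ids):

```python
# Context-window sizes for the two model ids discussed above,
# per OpenAI's published limits for each model id.
CONTEXT_WINDOWS = {
    "gpt-4": 8_192,                  # the limit the question keeps hitting
    "gpt-4-1106-preview": 128_000,   # the 128k window the preview model offers
}

def context_window(model: str) -> int:
    """Return the maximum context length, in tokens, for a known model id."""
    return CONTEXT_WINDOWS[model]
```

This is why simply renaming the string is not enough if the tool also hard-codes a token limit somewhere: the model id and the assumed context size have to change together.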
Replies: 1 comment
I am still not sure how to do this in the older 0.1.0 version, but the recent update solved it for me automatically by defaulting the gpt-4 model to the one I mentioned.