
gptstudio 0.4.0

@JamesHWade released this 21 May 11:35

This release introduces several exciting enhancements. The chat app now features a sidebar for conversation history, new chats, and settings, along with helpful tooltips. Additionally, local models are now supported using ollama, and the Perplexity Service offers models such as llama-3-sonar and mixtral-8x7b. The Cohere Service, with models such as command and command-light, is also available. Internally, there are improvements, bug fixes, and quality-of-life enhancements.

UI updates

  • The chat app now has a sidebar where users can see their conversation history, start new chats, and change settings. Because of this, the chat interface has more room to display messages.
  • All chats are saved and automatically updated after every assistant message. They are created with a placeholder title built from the first user message in the conversation. Titles are editable, and users can delete any conversation (or all of them at once).
  • We have shortened the welcome message and added lots of tooltips to help with navigation.

Local models

We are happy to announce that we now support local models with ollama. By default we look for the ollama host at http://localhost:11434, but this can be customized by setting the OLLAMA_HOST environment variable. Be aware that you are in charge of maintaining your own ollama installation and models.
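A minimal configuration sketch, assuming you launch the chat addin from the RStudio Addins menu after setting the variable (the host URL below is hypothetical):

```r
# Point gptstudio at a custom ollama host for the current session.
# If OLLAMA_HOST is unset, the default http://localhost:11434 is used.
Sys.setenv(OLLAMA_HOST = "http://192.168.1.50:11434")  # hypothetical host URL

# To persist the setting across sessions, add this line to ~/.Renviron instead:
# OLLAMA_HOST=http://192.168.1.50:11434
```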

Perplexity Service

Perplexity AI now offers a wide range of models as part of its service. The current version includes the following models: llama-3-sonar-small-32k-chat, llama-3-sonar-small-32k-online, llama-3-sonar-large-32k-chat, llama-3-sonar-large-32k-online, llama-3-8b-instruct, llama-3-70b-instruct, and mixtral-8x7b-instruct. See the Perplexity API documentation for more information on these models.
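A minimal setup sketch, assuming the key is read from a PERPLEXITY_API_KEY environment variable (the setup vignettes cover the exact steps):

```r
# Store the Perplexity API key for the current session (assumed variable name).
Sys.setenv(PERPLEXITY_API_KEY = "<your-perplexity-api-key>")

# For a persistent setup, add the key to ~/.Renviron instead:
# PERPLEXITY_API_KEY=<your-perplexity-api-key>
```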

Cohere Service

Cohere is now available as another service. The current version includes the following models: command, command-light, command-nightly, and command-light-nightly. See Cohere's docs for more on these models and capabilities.
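A similar sketch for Cohere, assuming a COHERE_API_KEY environment variable (again, see the setup vignettes for details):

```r
# Store the Cohere API key for the current session (assumed variable name).
Sys.setenv(COHERE_API_KEY = "<your-cohere-api-key>")

# The new gptstudio_sitrep() helper (described below) can then confirm that
# the service is correctly configured.
gptstudio::gptstudio_sitrep()
```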

Internal

  • Reverted to using an R6 class for OpenAI streaming (which now inherits from SSEparser::SSEparser). This doesn't affect how users interact with the addins, but it avoids a wider range of server errors.
  • We now make heavy use of {lintr} to keep the code style consistent.
  • Fixed a bug in the retrieval of OpenAI models.
  • Fixed a bug in Azure OpenAI request formation.
  • Fixed a bug in "in source" calls for addins.
  • Fixed a bug that showed a "Connection refused" message in the viewer on Unix platforms. Fixes #179.
  • The chat addin no longer closes itself when an OpenAI API key is not detected.
  • Converted the PaLM service to Google for Google AI Studio models.
  • Updated Anthropic models with the claude-3 update.
  • More Azure OpenAI bug fixes related to the request body structure and to using a token together with an API key.
  • Scrollbars now have a width and height of 5px, leaving more room when using the viewer panel.
  • {gptstudio} now requires {bslib} v0.6.0 or greater, to take advantage of the sidebar styling.
  • Updated Anthropic API calls to use the new messages endpoint.
  • Fixed a bug so that Anthropic chats now include conversation history.
  • OpenAI stream no longer hangs with error "Argument has length 0". #199
  • In source calls no longer attempt to evaluate R code. #203

Quality of Life Improvements and Documentation

  • Chat in source now respects the model selection that you set using the Chat addin.
  • A new function, gptstudio_sitrep(), has been added to help with debugging and setup; see the usage sketch after this list.
  • API checking is now done for each available service, including local models.
  • New vignettes were added to help set up each service.
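A quick usage sketch of the new helper:

```r
library(gptstudio)

# Prints a situation report: session details and, for each supported service
# (including local ollama models), whether the required configuration is in place.
gptstudio_sitrep()
```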