chore: deprecate 'over-reaching' Chat features #91

Merged 6 commits on Aug 15, 2025
9 changes: 9 additions & 0 deletions pkg-py/CHANGELOG.md
@@ -15,6 +15,15 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

* The chat input no longer submits incomplete text when the user has activated IME completions (e.g. while typing in Japanese or Chinese). (#85)

### Deprecations

* Numerous `Chat()` features have been deprecated in preparation for future removal, to simplify the API. (#91)
    * `Chat(messages=...)` was deprecated. Use `chat.ui(messages=...)` instead.
    * `Chat(tokenizer=...)` was deprecated. It is only relevant to `.messages(token_limits=...)`, which is also now deprecated.
    * All parameters to `.messages()` were deprecated. This reflects an overall change in philosophy for maintaining the conversation history sent to the LLM: `Chat` should no longer be responsible for maintaining it; another stateful object (perhaps the one provided by chatlas, LangChain, etc.) should be used instead. That said, `.messages()` is still useful for accessing UI message state.
    * The `.transform_user_input` and `.transform_assistant_response` decorators were deprecated. Transformation of inputs/responses should instead be done manually and independently of `Chat`.
    * As a result of the previous deprecations, `.user_input(transform=...)` was also deprecated.
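With `token_limits` deprecated, trimming the conversation history becomes the caller's job. A minimal sketch of the old behavior (keep only the most recent messages that fit within `max_tokens - reserve`), assuming a naive whitespace tokenizer; `count_tokens`, `trim_history`, and the sample `history` below are illustrative stand-ins, not part of shinychat:

```python
def count_tokens(text: str) -> int:
    # Hypothetical stand-in for a real tokenizer (e.g. tiktoken).
    return len(text.split())


def trim_history(messages, max_tokens: int, reserve: int):
    """Keep only the most recent messages that fit within
    `max_tokens - reserve` tokens, mirroring the deprecated
    `token_limits=(max_tokens, reserve)` behavior."""
    budget = max_tokens - reserve
    kept = []
    # Walk backwards from the newest message, stopping once the next
    # message would exceed the remaining budget.
    for msg in reversed(messages):
        cost = count_tokens(msg["content"])
        if cost > budget:
            break
        budget -= cost
        kept.append(msg)
    return list(reversed(kept))


history = [
    {"role": "user", "content": "one two three four five"},
    {"role": "assistant", "content": "six seven"},
    {"role": "user", "content": "eight nine ten"},
]
trimmed = trim_history(history, max_tokens=8, reserve=2)
```

Here the 6-token budget admits the two most recent messages (2 + 3 tokens) but not the oldest one, so `trimmed` contains only the last two entries.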

## [0.1.0] - 2025-08-07

This first release of the `shinychat` package simply copies the `Chat` and `MarkdownStream` components exactly as they are in version 1.4.0 of `shiny`. Future versions of `shiny` will import these components from `shinychat`. By maintaining these components via a separate library, we can ship features more quickly and independently of `shiny`.
168 changes: 73 additions & 95 deletions pkg-py/src/shinychat/_chat.py
@@ -182,21 +182,7 @@ async def handle_user_input(user_input: str):
A unique identifier for the chat session. In Shiny Core, make sure this id
matches a corresponding :func:`~shiny.ui.chat_ui` call in the UI.
messages
A sequence of messages to display in the chat. A given message can be one of the
following:

* A string, which is interpreted as markdown and rendered to HTML on the client.
* To prevent interpreting as markdown, mark the string as
:class:`~shiny.ui.HTML`.
* A UI element (specifically, a :class:`~shiny.ui.TagChild`).
* This includes :class:`~shiny.ui.TagList`, which take UI elements
(including strings) as children. In this case, strings are still
interpreted as markdown as long as they're not inside HTML.
* A dictionary with `content` and `role` keys. The `content` key can contain
content as described above, and the `role` key can be "assistant" or "user".

**NOTE:** content may include specially formatted **input suggestion** links
(see `.append_message()` for more information).
Deprecated. Use `chat.ui(messages=...)` instead.
on_error
How to handle errors that occur in response to user input. When `"unhandled"`,
the app will stop running when an error occurs. Otherwise, a notification
@@ -208,11 +194,8 @@
* `"sanitize"`: Sanitize the error message before displaying it to the user.
* `"unhandled"`: Do not display any error message to the user.
tokenizer
The tokenizer to use for calculating token counts, which is required to impose
`token_limits` in `.messages()`. If not provided, an attempt is made to load a
default generic tokenizer from the tokenizers library. A specific tokenizer
may also be provided by following the `TokenEncoding` (tiktoken or tokenizers)
protocol (e.g., `tiktoken.encoding_for_model("gpt-4o")`).
Deprecated. Token counting and message trimming features will be removed in a
future version.
"""

def __init__(
@@ -226,6 +209,17 @@ def __init__(
if not isinstance(id, str):
raise TypeError("`id` must be a string.")

if messages:
warn_deprecated(
"`Chat(messages=...)` is deprecated. Use `.ui(messages=...)` instead."
)

if tokenizer is not None:
warn_deprecated(
"`Chat(tokenizer=...)` is deprecated. "
"This is only relevant for `.messages(token_limits=...)` which is also now deprecated."
)

self.id = resolve_id(id)
self.user_input_id = ResolvedId(f"{self.id}_user_input")
self._transform_user: TransformUserInputAsync | None = None
@@ -486,48 +480,22 @@ def messages(
"""
Reactively read chat messages

Obtain chat messages within a reactive context. The default behavior is
intended for passing messages along to a model for response generation where
you typically want to:

1. Cap the number of tokens sent in a single request (i.e., `token_limits`).
2. Apply user input transformations (i.e., `transform_user`), if any.
3. Not apply assistant response transformations (i.e., `transform_assistant`)
since these are predominantly for display purposes (i.e., the model shouldn't
concern itself with how the responses are displayed).
Obtain chat messages within a reactive context.

Parameters
----------
format
The message format to return. The default value of `MISSING` means
chat messages are returned as :class:`ChatMessage` objects (a dictionary
with `content` and `role` keys). Other supported formats include:

* `"anthropic"`: Anthropic message format.
* `"google"`: Google message (aka content) format.
* `"langchain"`: LangChain message format.
* `"openai"`: OpenAI message format.
* `"ollama"`: Ollama message format.
Deprecated. Provider-specific message formatting will be removed in a future
version.
token_limits
Limit the conversation history based on token limits. If specified, only
the most recent messages that fit within the token limits are returned. This
is useful for avoiding "exceeded token limit" errors when sending messages
to the relevant model, while still providing the most recent context available.
A specified value must be a tuple of two integers. The first integer is the
maximum number of tokens that can be sent to the model in a single request.
The second integer is the amount of tokens to reserve for the model's response.
Note that token counts are based on the `tokenizer` provided to the `Chat`
constructor.
Deprecated. Token counting and message trimming features will be removed in
a future version.
transform_user
Whether to return user input messages with transformation applied. This only
matters if a `transform_user_input` was provided to the chat constructor.
The default value of `"all"` means all user input messages are transformed.
The value of `"last"` means only the last user input message is transformed.
The value of `"none"` means no user input messages are transformed.
Deprecated. Message transformation features will be removed in a future
version.
transform_assistant
Whether to return assistant messages with transformation applied. This only
matters if a `transform_assistant_response` was provided to the chat
constructor.
Deprecated. Message transformation features will be removed in a future
version.

Note
----
@@ -541,6 +509,34 @@ def messages(
A tuple of chat messages.
"""

if not isinstance(format, MISSING_TYPE):
warn_deprecated(
"`.messages(format=...)` is deprecated. "
"Provider-specific message formatting will be removed in a future version. "
"See here for more details: https://github.com/posit-dev/shinychat/pull/91"
)

if token_limits is not None:
warn_deprecated(
"`.messages(token_limits=...)` is deprecated. "
"Token counting and message trimming features will be removed in a future version. "
"See here for more details: https://github.com/posit-dev/shinychat/pull/91"
)

if transform_user != "all":
warn_deprecated(
"`.messages(transform_user=...)` is deprecated. "
"Message transformation features will be removed in a future version. "
"See here for more details: https://github.com/posit-dev/shinychat/pull/91"
)

if transform_assistant:
warn_deprecated(
"`.messages(transform_assistant=...)` is deprecated. "
"Message transformation features will be removed in a future version. "
"See here for more details: https://github.com/posit-dev/shinychat/pull/91"
)

messages = self._messages()

# Anthropic requires a user message first and no system messages
@@ -1020,25 +1016,15 @@ def transform_user_input(
self, fn: TransformUserInput | TransformUserInputAsync | None = None
) -> None | Callable[[TransformUserInput | TransformUserInputAsync], None]:
"""
Transform user input.

Use this method as a decorator on a function (`fn`) that transforms user input
before storing it in the chat messages returned by `.messages()`. This is
useful for implementing RAG workflows, like taking a URL and scraping it for
text before sending it to the model.

Parameters
----------
fn
A function to transform user input before storing it in the chat
`.messages()`. If `fn` returns `None`, the user input is effectively
ignored, and `.on_user_submit()` callbacks are suspended until more input is
submitted. This behavior is often useful to catch and handle errors that
occur during transformation. In this case, the transform function should
append an error message to the chat (via `.append_message()`) to inform the
user of the error.
Deprecated. User input transformation features will be removed in a future version.
"""

warn_deprecated(
"The `.transform_user_input` decorator is deprecated. "
"User input transformation features will be removed in a future version. "
"See here for more details: https://github.com/posit-dev/shinychat/pull/91"
)

def _set_transform(fn: TransformUserInput | TransformUserInputAsync):
self._transform_user = _utils.wrap_async(fn)

@@ -1062,31 +1048,15 @@ def transform_assistant_response(
fn: TransformAssistantResponseFunction | None = None,
) -> None | Callable[[TransformAssistantResponseFunction], None]:
"""
Transform assistant responses.

Use this method as a decorator on a function (`fn`) that transforms assistant
responses before displaying them in the chat. This is useful for post-processing
model responses before displaying them to the user.

Parameters
----------
fn
A function that takes a string and returns either a string,
:class:`shiny.ui.HTML`, or `None`. If `fn` returns a string, it gets
interpreted and parsed as markdown on the client (and the resulting HTML
is then sanitized). If `fn` returns :class:`shiny.ui.HTML`, it will be
displayed as-is. If `fn` returns `None`, the response is effectively ignored.

Note
----
When doing an `.append_message_stream()`, `fn` gets called on every chunk of the
response (thus, it should be performant), and can optionally access more
information (i.e., arguments) about the stream. The 1st argument (required)
contains the accumulated content, the 2nd argument (optional) contains the
current chunk, and the 3rd argument (optional) is a boolean indicating whether
this chunk is the last one in the stream.
Deprecated. Assistant response transformation features will be removed in a future version.
"""

warn_deprecated(
"The `.transform_assistant_response` decorator is deprecated. "
"Assistant response transformation features will be removed in a future version. "
"See here for more details: https://github.com/posit-dev/shinychat/pull/91"
)

def _set_transform(
fn: TransformAssistantResponseFunction,
):
@@ -1303,6 +1273,14 @@ def user_input(self, transform: bool = False) -> str | None:
2. Maintaining message state separately from `.messages()`.

"""

if transform:
warn_deprecated(
"`.user_input(transform=...)` is deprecated. "
"User input transformation features will be removed in a future version. "
"See here for more details: https://github.com/posit-dev/shinychat/pull/91"
)

msg = self._latest_user_input()
if msg is None:
return None