[v3-dev] Initial migration to jupyterlab-chat
#1043
Conversation
* Fix handler methods' parameters
* Add slash commands (autocompletion) to the chat input
* Allow for stream messages
* Update jupyter collaborative chat dependency
* Add a menu option to open the AI settings
* Remove the input option from the settings widget
* Show that the bot is writing (answering)
* Update jupyter chat dependency
@brichet Thanks for contributing this, and for being patient with us as we were very busy for the last few months.
I've tried to test this branch by doing a dev install with `jlpm && jlpm build && jlpm dev-install`. However, I only see replies in the original Jupyter AI chat, and not in chats created through `jupyterlab-chat`.

Question: Do I need to do anything else to get Jupyter AI working in new chats?

Screenshot:
The server logs also include an error:
[E 2024-12-02 15:59:53.928 ServerApp] Failed to write message
Traceback (most recent call last):
File "/Users/dlq/micromamba/envs/jai3/lib/python3.11/site-packages/jupyter_collaboration/handlers.py", line 270, in send
self.write_message(message, binary=True)
File "/Users/dlq/micromamba/envs/jai3/lib/python3.11/site-packages/tornado/websocket.py", line 332, in write_message
raise WebSocketClosedError()
tornado.websocket.WebSocketClosedError
[I 2024-12-02 15:59:53.955 ServerApp] Request for Y document 'test.chat' with room ID: 41030a07-f29d-4ded-b13b-39dfda2b43bb
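The traceback above shows the server attempting to write to a websocket that the client has already closed. A common mitigation is to catch the closed-connection error and stop using that connection rather than logging a full traceback on every broadcast. The sketch below is illustrative only: `RoomConnection` is a hypothetical wrapper, and the local `WebSocketClosedError` class stands in for `tornado.websocket.WebSocketClosedError` so the example is self-contained.

```python
import logging

logger = logging.getLogger(__name__)


class WebSocketClosedError(Exception):
    """Stand-in for tornado.websocket.WebSocketClosedError."""


class RoomConnection:
    """Hypothetical wrapper around one client's websocket in a chat room."""

    def __init__(self, write_message):
        # `write_message` is the underlying send callable, e.g. tornado's
        # WebSocketHandler.write_message bound method.
        self._write_message = write_message
        self.closed = False

    def send(self, message: bytes) -> bool:
        """Try to deliver `message`; mark the connection closed on failure."""
        if self.closed:
            return False
        try:
            self._write_message(message, binary=True)
            return True
        except WebSocketClosedError:
            # The client disconnected mid-broadcast; remember that and drop
            # the message instead of raising on every subsequent write.
            logger.info("client disconnected; dropping message")
            self.closed = True
            return False
```

Whether this should happen in `jupyter_collaboration` or in the caller is a design question for that project; the point is simply that a closed peer is an expected event, not an error worth a traceback.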
Completed an initial review of your PR. The changes are surprisingly small relative to the scope of this PR. Thank you for keeping the changes minimal for the sake of reviewers! 🥇
I haven't reviewed everything, but have left some questions & comments on the areas of the code I found most important.
High-level proposal: I noticed that many of the backend changes to the Python files centered around forwarding the `YChat` instance as an argument to many methods in `BaseChatHandler`. Since we know for certain that we want to support multiple chats, I think it makes sense to make chat handlers non-singletons and allow multiple instances of chat handlers (one per chat). This will allow us to revert the changes to the arguments and simplify this PR.

- By making chat handlers non-singletons, each chat handler can be passed a `ychat: YChat` attribute on init, which means that chat handler methods won't require a `chat: YChat` argument. This is what will allow many of the Python changes to be reverted.
- If this breaks some chat handlers, that's OK for now. We will improve & fix them later, right before the first pre-release.
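The per-chat handler idea above can be sketched roughly as follows. All names here are simplified placeholders, not the real jupyter-ai or jupyter-collaboration APIs: `YChat` stands in for the collaborative chat document, and `BaseChatHandler` is reduced to the one behavior under discussion (storing the chat on init instead of threading it through every method).

```python
from __future__ import annotations

from typing import Dict


class YChat:
    """Placeholder for the collaborative chat document (one per room)."""

    def __init__(self, room_id: str):
        self.room_id = room_id
        self.messages: list[str] = []


class BaseChatHandler:
    """Simplified chat handler: one instance per chat, not a singleton.

    The YChat is stored on init, so handler methods no longer need a
    `chat: YChat` argument threaded through every call.
    """

    def __init__(self, ychat: YChat):
        self.ychat = ychat

    def reply(self, text: str) -> None:
        self.ychat.messages.append(text)


class ChatHandlerRegistry:
    """Creates a fresh chat handler per room, lazily on first use."""

    def __init__(self):
        self._handlers: Dict[str, BaseChatHandler] = {}

    def get(self, room_id: str) -> BaseChatHandler:
        if room_id not in self._handlers:
            self._handlers[room_id] = BaseChatHandler(YChat(room_id))
        return self._handlers[room_id]
```

With a registry like this, replies in one room never touch another room's state, and the handler method signatures stay as they were before this PR.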
Next steps:
- Nicolas: Help verify this PR works, respond to my feedback, and tell me if you're OK with me implementing the high-level proposal above.
- David: Continue reviewing as time permits, and work on the high-level proposal above if you accept.
- Nicolas & David: Continue to work together to get this merged by the end of this week.
Sorry for this @dlqqq, my bad. EDIT: It should be fixed by now.
Co-authored-by: david qiu <[email protected]>
Thanks for the review @dlqqq
Sure 👍
@brichet New PR that implements the high-level proposal I mentioned in my last review: QuantStack#9
* create new set of chat handlers per room
* make YChat an instance attribute on BaseChatHandler
* revert changes to chat handlers
* pre-commit
* use room_id local var

Co-authored-by: Nicolas Brichet <[email protected]>
@brichet Awesome, this PR looks good to me! Of course we will want to iterate on this, but this is a great start. Thank you for your hard work thus far.
Merging into `v3-dev`. 🎉
Hooray, I guess it's a good time to start working on other items from v3 roadmap? Or do you expect any other larger refactor to land soon?
@krassowski Hey Mike, thanks for reaching out and being so eager to help! Yes, you can absolutely feel free to collaborate on v3 with us. We welcome your contributions, and appreciate all the work you've done for Jupyter AI so far.

If you are proposing a large change, I do ask that you ping me in a new or existing issue and provide a broad high-level outline of your proposal. If we discover in our initial discussion that there may be a better way to implement a change, then having this discussion first would save time for both of us. 🤗

The current plan is to cut a pre-release (…)

Hope this helps! Have a great weekend Mike, get some rest. 👋
I think it is worth noting that the …

I'll work on the "cleaning old chat" PR.
Related to #785 and #862
This PR makes use of the collaborative chat in place of the chat integrated in jupyter-ai.

In this first step, both chats co-exist (to be able to compare the features), which is why we can see two left-panel icons: one for the collaborative chat sidebar and one for the `jupyter-ai` chat.

Currently, the `Jupyternaut` persona is connected to every chat, and answers every message in the chat.

Work in progress:

- … `Jupyternaut` from a chat