
Add "AI Tools" Plugin #8365

Draft
wants to merge 29 commits into
base: master
Choose a base branch
from
Draft

Add "AI Tools" Plugin #8365

wants to merge 29 commits into from

Conversation

@Jermolene (Member) commented Jul 11, 2024

This plugin adds integrated LLM conversations to the TiddlyWiki platform.

It currently supports two different server backends:

  • Locally running Llamafile server - Llamafile is an open source project that lets you distribute and run LLMs as a single file. The files are large (typically 4+ gigabytes), but they offer reasonable performance on modern hardware, and total privacy
  • OpenAI Service - OpenAI is a commercial service that offers paid APIs for accessing some of the most sophisticated LLMs available. OpenAI requires tokens to be purchased for API usage (this is entirely separate from ChatGPT subscriptions); see the request sketch after this list
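For orientation only, both backends are driven by chat-completion style HTTP requests. The sketch below is not taken from the plugin code; the endpoint URLs, model name, and key handling are illustrative assumptions.

```js
// Hedged sketch, not the plugin's actual code: the shape of a chat-completion
// request. A locally running Llamafile server typically exposes an
// OpenAI-compatible endpoint such as http://localhost:8080/v1/chat/completions
// and needs no API key; the OpenAI service uses
// https://api.openai.com/v1/chat/completions and requires a bearer token.
async function getCompletion(endpointUrl, apiKey, userText) {
	const headers = {"Content-Type": "application/json"};
	if(apiKey) {
		headers["Authorization"] = "Bearer " + apiKey; // needed for OpenAI only
	}
	const response = await fetch(endpointUrl, {
		method: "POST",
		headers: headers,
		body: JSON.stringify({
			model: "gpt-4o", // illustrative; Llamafile serves its embedded model
			messages: [{role: "user", content: userText}]
		})
	});
	if(!response.ok) {
		// A missing or invalid API key shows up here as HTTP 401
		throw new Error("HTTP error " + response.status);
	}
	const data = await response.json();
	return data.choices[0].message.content;
}
```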

See the preview build at: https://tiddlywiki5-git-feat-ai-tools-jermolenes-projects.vercel.app

Note that this PR includes a small modification to the core that is required in order for authorisation with OpenAI to work correctly. This means that you cannot use the "ai-tools" plugin with other versions of the TiddlyWiki core; it only works correctly with the version of the core included in the preview.


vercel bot commented Jul 11, 2024

The latest updates on your projects:

tiddlywiki5: ✅ Ready (preview updated Jul 21, 2024 8:12pm UTC)

Jermolene changed the title from Feat ai tools to Feat AI tools on Jul 11, 2024
Jermolene changed the title from Feat AI tools to Feat AI Tools on Jul 11, 2024
Jermolene changed the title from Feat AI Tools to Add "AI Tools" Plugin on Jul 11, 2024
@AndrewHamm05

I was testing the plugin and ran into a major issue. Looking through the server tiddlers, I can't find where the API key is referenced. My concerns were confirmed when I received an HTTP 401 error after sending an OpenAI request, and importing the plugin into an empty wiki showed that even when an API key is given, it does not provide authorization.

@pmario (Member) commented Jul 12, 2024

[screenshot]

@AndrewHamm05

I did provide an API key in the settings of the plugin. Does it work for you?

@pmario (Member) commented Jul 12, 2024

Sorry - I do not have an OpenAI Key

@Jermolene (Member, Author)

importing the plugin into an empty wiki showed that even when an API key is given, it does not provide authorization.

Hi @joocyNut this PR includes a small core modification that is required by the "ai-tools" plugin in order to function. That means that for the moment you can only use the plugin with the special prerelease version of the core included in the preview

I will update the docs to clarify this point.

@pmario (Member) commented Jul 14, 2024

I did have a look and was a bit confused.

  • The system dialogue says: edit - copy - delete
  • If I click edit I get: view - copy - delete

I did not know how to close the dialogue. I personally would prefer done - copy - delete


I am not happy with the round "send" button. None of our default buttons at the moment are round.
I would prefer a border radius of 3px or 4px, like the buttons in the system dialogue


@pmario (Member) commented Jul 14, 2024

The "Choose an image" dropdown is almost invisible with "dark" palettes


Especially "Spartan Night" seems to have a problem

@AndrewHamm05

Is it possible to get an empty version of the PR?

@Jermolene (Member, Author)

I did have a look and was a bit confused.

  • The system dialogue says: edit - copy - delete
  • If I click edit I get: view - copy - delete

I did not know how to close the dialogue. I personally would prefer done - copy - delete

Hi @pmario I understand your confusion. This isn't the final user interface. For the moment I prefer "view" because this is not a true editor in that it does not support drafts, and thus doesn't allow users to cancel edits.

I am not happy with the round "send" button. None of our default buttons at the moment are round. I would prefer a border radius of 3px or 4px, like the buttons in the system dialogue

While consistency is usually our goal, here the goal is to make the button look distinctive. When using OpenAI, clicking that button is a billable event, and so it seems reasonable to signal that with an unusual visual presentation.

@Jermolene (Member, Author)

The "Choose an image" dropdown is almost invisible with "dark" palettes

Hi @pmario I've pushed a fix in a1782b1. Eventually we'll need to use palette colours.

@Jermolene (Member, Author)

Is it possible to get an empty version of the PR?

Hi @joocyNut at the moment the best approach is to save the empty custom core from https://tiddlywiki5-git-feat-ai-tools-jermolenes-projects.vercel.app/empty.html and then drag and drop the ai-tools plugin from https://tiddlywiki5-git-feat-ai-tools-jermolenes-projects.vercel.app/index.html

To avoid triggering a docs template
@@ -27,7 +27,8 @@ ConversationsArchiveImporter.prototype.import = function(widget,conversationsTit
 	title: conversationTitle,
 	tags: $tw.utils.stringifyList(["$:/tags/AI/Conversation"]),
 	created: conversationCreated,
-	modified: conversationModified
+	modified: conversationModified,
pmario (Member):

@Jermolene ... it seems the modified and created fields are wrong

Jermolene (Member, Author):

Hi @pmario what do you mean? That they are the wrong way round? Or something else?

pmario (Member):

Oops, sorry. GitHub showed 5 lines of code and I thought it was a header, so I expected modified: 20240719xxx instead of a variable. Sorry, forget about it.


…nside action-createtiddler in action strings

The root cause was that the action-createtiddler widget was calling refreshChildren() with no argument.

A secondary factor was that the importvariables widget was not defensive in handling a missing changedTiddlers parameter.
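To illustrate the kind of defensive handling described above, here is a hedged sketch (not the actual change in this PR; the widget name and internals are assumptions) of a refresh handler that tolerates a missing changedTiddlers argument:

```js
// Hedged sketch, not the actual PR diff: a widget refresh handler that
// defaults a missing changedTiddlers argument, so that an upstream call to
// refreshChildren() with no parameters does not cause a runtime error.
ImportVariablesWidget.prototype.refresh = function(changedTiddlers) {
	changedTiddlers = changedTiddlers || {}; // defensive default
	var changedAttributes = this.computeAttributes();
	if($tw.utils.count(changedAttributes) > 0) {
		this.refreshSelf();
		return true;
	}
	return this.refreshChildren(changedTiddlers);
};
```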
So that we can do image analysis
Jermolene marked this pull request as draft on July 25, 2024 16:33
@linonetwo (Contributor) commented Jul 27, 2024

Is there some wikitext API for this plugin? I'm writing a workflow plugin like Defy or Coze, which is a visual version of LangChain: https://github.com/tiddly-gittly/workflow (WIP)

I care about how to redirect input from your plugin to a workflow, and how to stream workflow output to your UI. Some intermediate node in the workflow may search the wiki and append the results to the user prompt.

@Jermolene (Member, Author)

Is there some wikitext API for this plugin?

The API is not yet documented, but there is a generic function for retrieving the next message in an LLM conversation:

https://github.com/TiddlyWiki/TiddlyWiki5/pull/8365/files#diff-d9cb6ac4fd8984f05e3db03e83f998cbcf1e588010a2c73c4e40a2b5cc9df90a

Much of the logic is specific to the LLM, and so lives in the LLM server tiddlers:

https://github.com/TiddlyWiki/TiddlyWiki5/pull/8365/files#diff-87f1e1e8cbb23ac187209b09af4deeb5453bff27f6fe5c18cb2638471cf4988a

I'm writing a workflow plugin like Defy or Coze, which is a visual version of LangChain: https://github.com/tiddly-gittly/workflow (WIP)

I care about how to redirect input from your plugin to a workflow, and how to stream workflow output to your UI. Some intermediate node in the workflow may search the wiki and append the results to the user prompt.

The plugin doesn't yet support the streaming variant of LLM APIs, but that is something I hope to work on soon. I think the approach would just be that a message would have a new "message-status" field that could hold values for "streaming" and "complete", and in the background streaming messages would periodically be updated with new text appended.

But I think you're asking about piping input and output between different components. I think that would be accomplished by passing tiddler titles around, rather in the way that the get-llm-completion procedure takes in its parameters the title of the tiddler that should receive the output. So the LLM would stream its response to a tiddler, and the workflow component would pick up that output to act on it.
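To make that concrete, here is a hedged sketch, not code from this PR, of how a streamed response could be written into a result tiddler that other components watch; the result title, field names, and endpoint handling are all assumptions:

```js
// Hedged sketch only: stream an LLM response into a tiddler so that another
// component (such as a workflow plugin) can watch that tiddler for output.
async function streamCompletionToTiddler(endpointUrl, requestBody, resultTitle) {
	const response = await fetch(endpointUrl, {
		method: "POST",
		headers: {"Content-Type": "application/json"},
		body: JSON.stringify(requestBody)
	});
	const reader = response.body.getReader();
	const decoder = new TextDecoder();
	let text = "";
	while(true) {
		const {done, value} = await reader.read();
		if(done) {
			break;
		}
		// A real implementation would parse SSE "data:" chunks rather than
		// appending the raw bytes as shown here
		text += decoder.decode(value, {stream: true});
		$tw.wiki.addTiddler(new $tw.Tiddler({
			title: resultTitle,
			text: text,
			"message-status": "streaming" // field name as proposed above
		}));
	}
	// Mark the message as complete once the stream ends
	$tw.wiki.addTiddler(new $tw.Tiddler($tw.wiki.getTiddler(resultTitle), {
		"message-status": "complete"
	}));
}
```

A workflow component could then simply watch the result tiddler and treat the message-status field switching to "complete" as its signal to act on the output.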

@linonetwo (Contributor) commented Jul 28, 2024

Thank you, I was asking about piping. So I just need to addTiddler to <<resultTitlePrefix>> at the end of every workflow, and then any workflow can work with the UI in this plugin.

I have other questions:

  1. The tag $:/tags/AI/CompletionServer may suggest that it supports all kinds of AI, for example the ComfyUI image AI API. Will that be supported? A developer would just need to put the generated image into the <<resultTitlePrefix>> tiddler.
  2. Can you move the procedure action-get-response to another tiddler, and add a dropdown to pick which procedure to use? Then I could provide several custom procedures for different workflows ("agents", as people advertise them), and the user could choose which to run.
  3. Since we have the TiddlyWiki organization, why not put new plugins into separate repos, as in https://github.com/tiddly-gittly/ , so the release cycle of this plugin doesn't need to be the same as the TW core? I think there will be frequent changes to the UI in the future.
  4. UI still to be added: cancel request, see the agent's background thinking steps, cute avatars for waifu agents... I might PR to add these, or use some kind of slot to add them from another plugin.

This plugin may be the foundation of many 3rd-party AI plugins. I'm happy not to need to spend time writing the UI and web API requests for a chatbot, and can focus on the workflow part.

@Jermolene (Member, Author)

Thank you, I was asking about piping. So I just need to addTiddler to <<resultTitlePrefix>> at the end of every workflow, and then any workflow can work with the UI in this plugin.

Yes, I think so. I've added some docs for the conversation format here: 58f96e7

  1. The tag $:/tags/AI/CompletionServer may suggest that it supports all kinds of AI, for example the ComfyUI image AI API. Will that be supported? A developer would just need to put the generated image into the <<resultTitlePrefix>> tiddler.

Yes. It currently supports image input to Llamafile and OpenAI, and the schema allows for image responses, but that is not yet implemented.
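For reference, image input to an OpenAI-style chat completions endpoint travels as part of the message content. A hedged illustration of the request body (not taken from the plugin's code) looks like this:

```js
// Hedged illustration of image input in an OpenAI-style chat completions
// request body; the plugin's actual request construction may differ.
const requestBody = {
	model: "gpt-4o", // any vision-capable model
	messages: [{
		role: "user",
		content: [
			{type: "text", text: "Describe this image"},
			{
				type: "image_url",
				// images can be passed as a URL or a base64 data URI
				image_url: {url: "data:image/png;base64,iVBORw0KGgo..."}
			}
		]
	}]
};
```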

  2. Can you move the procedure action-get-response to another tiddler, and add a dropdown to pick which procedure to use? Then I could provide several custom procedures for different workflows ("agents", as people advertise them), and the user could choose which to run.

I've refactored all the procedures and functions to be global here: ea595df

  3. Since we have the TiddlyWiki organization, why not put new plugins into separate repos, as in https://github.com/tiddly-gittly/ , so the release cycle of this plugin doesn't need to be the same as the TW core? I think there will be frequent changes to the UI in the future.

The purpose of introducing the stability badges is so that we can move to allowing the experimental core plugins to be updated on their own schedule, without waiting for a release (as long as there are no associated changes to the core).

  4. UI still to be added: cancel request, see the agent's background thinking steps, cute avatars for waifu agents... I might PR to add these, or use some kind of slot to add them from another plugin.

I expect a combination of both approaches will be needed; PRs are welcome in any case.

Another key improvement to add is streaming responses from the server, although I think that would require some core modifications to be able to access SSE data in wikitext.

This plugin may be the foundation of many 3rd-party AI plugins. I'm happy not to need to spend time writing the UI and web API requests for a chatbot, and can focus on the workflow part.

Great, that definitely fits with the goal of pooling the resources of those interested in exploring LLMs in TiddlyWiki.

@yedhukrishnagirish

I pasted the API key, and it didn't run for me. Does it only work with gpt4.0?

@linonetwo (Contributor) commented Aug 11, 2024

Some primitives inspired by https://wordware.ai and https://www.colelawrence.com/storyai :

  1. <$read-context name="xxx"> <<xxx>> </$read-context> (see the widget sketch after this list)
    1. On invokeAction, collect all the wikified text above it, or up to a section mark
    2. Save it to a variable xxx, similar to the wikify widget
    3. The variable can then be used by AI generation procedures that are called nested inside it
    4. So users can write the system prompt in normal wikitext in the same tiddler
  2. <$suspend> or <$await> that blocks the next invokeAction until the sub-widget tree or a procedure inside it runs the <$continue /> or <$resolve /> widget, or until a filter has output
    1. When "async await" or even "promise resolve" came out, JS was reborn, because there was no more callback hell. Currently, widgets must wrap each other to perform async callbacks, like a callback hell
    2. With this provided, users could write the control flow of an LLM agent more easily
  3. <$references> to collect missing or empty variables referenced in a piece of wikitext in the current context
    1. So before executing an agent, it can show a modal asking the user to input these variables
    2. Or feed the variable list into the prompt, let the LLM ask the user for natural-language input, extract the variables, store them in temp tiddlers, and <$set> them to variables
  4. A <% for [<xxx>compare:string:eq[xxx]] %> loop
    1. Variable initialisation and the "i++" step are handled by the user, before and inside this statement
    2. So an agent can do self-correction until it outputs something good
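None of these widgets exist yet. Purely as a hedged sketch of what implementing the first proposal might involve (the module title, widget name, and behaviour below are hypothetical and not part of this PR), a new TiddlyWiki widget module would follow the usual pattern:

```js
/*\
title: $:/plugins/example/workflow/read-context.js
type: application/javascript
module-type: widget

Hypothetical sketch of a <$read-context> widget, for discussion only: it
would expose some collected context text to its children as a variable.
\*/
"use strict";

var Widget = require("$:/core/modules/widgets/widget.js").widget;

var ReadContextWidget = function(parseTreeNode, options) {
	this.initialise(parseTreeNode, options);
};

ReadContextWidget.prototype = new Widget();

ReadContextWidget.prototype.render = function(parent, nextSibling) {
	this.parentDomNode = parent;
	this.computeAttributes();
	this.execute();
	this.renderChildren(parent, nextSibling);
};

ReadContextWidget.prototype.execute = function() {
	var name = this.getAttribute("name", "context");
	// Placeholder: a real implementation would gather and wikify the
	// surrounding text here before exposing it to the children
	this.setVariable(name, "collected context text");
	this.makeChildWidgets();
};

exports["read-context"] = ReadContextWidget;
```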

I can provide a better UX with a WYSIWYG editor, but without underlying widget support the UX can't easily be achieved, and it won't be wikitext-native (it may rely on JSON or YAML to describe the sequential async chain, instead of directly using wikitext).

This might be a good chance to make a TW + AI solution popular. (Wordware has raised total funding of $125K, which is not much, but I think a product like this is worth more, because it feels like the Jupyter Notebook of AI.)
