Post Process Extractors #41
This is a great idea and easy to add! Using it would look something like:

```json
{
  "role": "user",
  "content": "```{LANGUAGE}\n{CODE}```"
}
```

where the prompt, when expanded, would look like:

```json
{
  "role": "user",
  "content": "```python\ndef greet(name):\n    print(f\"Hello, {<CURSOR>}\")\n```"
}
```

One thing we need to think through is the LLM response. If we send it markdown, will it respond with markdown? We currently don't post-process LLM responses, but we may need to if we begin sending it markdown. In other words, we may want to provide some kind of post-process regex response matcher that extracts text, so users can specify that the code to insert from the LLM should be the text inside the markdown code block.

This extractor would actually be really useful for chain-of-thought prompting: https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/chain-of-thought#example-writing-donor-emails-structured-guided-cot Notice how the actual answer from Claude is set apart between tags.

If you want to take a crack at adding it I would be happy to help make some suggestions; otherwise we can add it to our roadmap.
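To make the extractor idea concrete, here is a rough sketch of what such a post-process step could look like. Nothing here comes from the project itself: the regex, the helper name, and the fallback behavior are all illustrative assumptions.

```python
import re

# Hypothetical post-process extractor: pull the code out of the first
# fenced markdown block in an LLM response. The pattern accepts an
# optional language tag (e.g. "python") after the opening fence.
CODE_BLOCK_RE = re.compile(r"```[\w+-]*\n(.*?)```", re.DOTALL)

def extract_code(response: str) -> str:
    """Return the contents of the first fenced code block, or the raw
    response unchanged if no block is found."""
    match = CODE_BLOCK_RE.search(response)
    return match.group(1) if match else response
```

With something like this, `extract_code("Here you go:\n```python\nname\n```")` would return just `name`, while a response with no fences would pass through untouched.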
Got it, thanks for testing that!
Pre-filling the assistant response is really cool. It is something I would like to support for Anthropic's API; I need to look into the support the other backends have for it.
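For what it's worth, Anthropic's Messages API supports this by letting the `messages` list end with a partial `assistant` turn that the model then continues. A rough sketch of such a request body (the model name and contents are placeholders, not from this thread):

```python
# Sketch of a Messages API payload with a pre-filled assistant turn;
# the model continues generating from "```python\n" rather than
# starting its reply fresh. All values are illustrative placeholders.
payload = {
    "model": "claude-3-5-sonnet-latest",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Complete the code at <CURSOR>."},
        # The final assistant message is the pre-fill.
        {"role": "assistant", "content": "```python\n"},
    ],
}
```

Pairing a pre-fill like this with the extractor above would make the response format much more predictable.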
It might be worth introducing the idea of presets. We have been discussing presets in the Discord. There are a few more features on the roadmap first (this one now included), and then I want to dial in on presets. Also, I just shot you an email; I would love to connect and talk more on these ideas. I also have a few other ideas I'm looking for feedback from users on.
[Edit because I pressed some random shortcut that submitted this before it was done]
Hi there. I'm new to this tool and certainly not an LSP expert. I've been going through the prompt template examples, and it struck me as odd that nowhere is the LLM actually told what language it's dealing with. Sure, it can probably infer it from the context, particularly if there are few-shot examples. But why make the LLM figure it out when we already know it?
I've been reading the LSP spec to see if that information is at any point provided to the LSP server by the client. It turns out Document Synchronization messages contain an object with a `languageId` key, with values such as `python`. The logs of my editor (Helix) confirm that this key is sent by the client on message requests of this type.

I would use this key either by including it in the system prompt or by formatting code as Markdown blocks and adding it after the first set of triple backticks. An example of the second:
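A minimal sketch of that second option, assuming the server has a `language_id` value from the client's `textDocument/didOpen` notification (the function name is made up for illustration):

```python
FENCE = "`" * 3  # triple backtick, built here so the fences render cleanly

def build_prompt(language_id: str, code: str) -> str:
    # Tag the fenced block with the LSP languageId, e.g. "python",
    # so the LLM is told the language instead of having to infer it.
    return f"{FENCE}{language_id}\n{code}\n{FENCE}"
```

Calling `build_prompt("python", code)` would yield the same fenced-block shape as the `{LANGUAGE}`/`{CODE}` template discussed above, with the language filled in from the editor rather than guessed.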
What do you think? Is this something that could easily be added?