Anthropic support #222

Conversation
README.md (outdated)

```diff
@@ -7,7 +7,10 @@
 # Initial Project Setup

 1. Install Poetry
+1. Install Python version 3.10.13 and make sure it is being used in the next step and later during development
```

Other 3.10.x versions are also fine; the exact version is added for consistency.
```python
input_variables={'description': self.description},
tracked_user=self.user,
tags=['manual_contribution_assessment'],
**make_prompt_kwargs(settings.GITHUB_MANUAL_CONTRIBUTION_ASSESSMENT_PROMPT_NAME),
```

This helps development and debugging: a developer can use a prompt with a particular tag by overriding a value in settings.dev.py.
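For illustration, a minimal sketch of what a helper like `make_prompt_kwargs` could look like, assuming it builds the LLM-call keyword arguments from a single settings value and honors an optional per-prompt label override (the `PROMPT_LABEL_OVERRIDES` setting, its shape, and the plain `settings` stub are assumptions, not the project's actual code):

```python
class settings:
    # Stand-in for django.conf.settings in this sketch.
    GITHUB_MANUAL_CONTRIBUTION_ASSESSMENT_PROMPT_NAME = 'manual-contribution-assessment'
    # A developer could override a prompt's tag in settings.dev.py to point
    # the bot at a work-in-progress prompt version (hypothetical setting name).
    PROMPT_LABEL_OVERRIDES = {'manual-contribution-assessment': 'dev'}


def make_prompt_kwargs(prompt_name):
    # Collect the prompt-selection kwargs in one place, so callers only need
    # to pass the settings-defined prompt name.
    kwargs = {'prompt_name': prompt_name}
    label = getattr(settings, 'PROMPT_LABEL_OVERRIDES', {}).get(prompt_name)
    if label:
        kwargs['prompt_label'] = label
    return kwargs
```

Centralizing this in one helper is what makes the settings.dev.py override trick possible: every call site picks up the tag change automatically.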
```python
def is_ia(author):
    return author.id == settings.IA_DISCORD_USER_ID
```

This fixes the issue where anyone could impersonate IA just by having `_ai` in their username. It also means developers are no longer forced to give their bots a name containing the `_ai` string.
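A before/after sketch of the check. The old substring-based version is reconstructed from the comment above (it is an assumption, not the actual removed code), and the user-ID constant is a hypothetical value that would come from Django settings in the project:

```python
from types import SimpleNamespace

IA_DISCORD_USER_ID = 1234567890  # hypothetical; the real value lives in settings


def is_ia_old(author):
    # Old, fragile check (assumed): any user with '_ai' in their username passes,
    # so anyone can impersonate IA.
    return '_ai' in author.name


def is_ia(author):
    # New check: only the exact Discord user ID configured in settings matches.
    return author.id == IA_DISCORD_USER_ID


impostor = SimpleNamespace(id=42, name='evil_ai')
real_ia = SimpleNamespace(id=1234567890, name='ia')
```

With the ID-based check, `impostor` no longer passes, while the configured bot account still does.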
```python
content = message.content

if (role := map_author_structured(message.author)) == prev_role:
    # We need to merge messages to prevent the following error from Anthropic
```

Anthropic has its limitations. We may need a similar merge in the other places where IA interacts if we decide to use Anthropic there.
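A minimal sketch of the merging idea: Anthropic's Messages API expects roles to alternate and rejects consecutive messages with the same role, so runs of same-role messages are collapsed into one. Function name and the `role`/`content` dict shape are assumptions for illustration, not the PR's actual helper:

```python
def merge_consecutive_roles(messages):
    # Collapse runs of messages that share a role into a single message,
    # joining their contents with newlines, so the resulting list strictly
    # alternates roles as Anthropic's API expects.
    merged = []
    for message in messages:
        if merged and merged[-1]['role'] == message['role']:
            merged[-1]['content'] += '\n' + message['content']
        else:
            # Copy so the caller's message dicts are not mutated.
            merged.append({'role': message['role'], 'content': message['content']})
    return merged
```

Joining with newlines is one simple policy; any separator works as long as the model can still tell the original messages apart.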
```diff
@@ -32,28 +75,25 @@ async def on_message_implementation(message):
         await message.reply('Please, register at https://thenewboston.com')
         return

-    # TODO(dmu) MEDIUM: Request message history just once and convert it to necessary format before LLM call
-    plain_text_message_history = await get_plain_text_message_history(message.channel)
+    messages = (await get_historical_messages(message.channel))[::-1]
```

Now we query historical messages just once.
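The idea behind querying once is that every format the LLM call needs can be derived from a single history fetch. A sketch under assumed shapes (newest-first input, dicts with `author`/`role`/`content` keys; the function name is hypothetical):

```python
def split_histories(messages):
    # messages: newest-first, as a Discord history fetch typically returns them.
    chronological = list(reversed(messages))
    # Plain-text transcript, e.g. for a text-completion style prompt.
    plain_text = '\n'.join(f"{m['author']}: {m['content']}" for m in chronological)
    # Role-structured list, e.g. for a chat-style (Anthropic/OpenAI) API call.
    structured = [{'role': m['role'], 'content': m['content']} for m in chronological]
    return plain_text, structured
```

One network round-trip, two derived views; this is what the removed `get_plain_text_message_history` call previously duplicated.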
```diff
@@ -20,6 +20,49 @@

 bot = commands.Bot('/', intents=intents)

+# TODO(dmu) HIGH: Cover bot logic with unittests: it is already complex enough
```

The time saved by not having unittests is overrated.
```python
from thenewboston.general.commands import CustomCommand
from thenewboston.general.utils.json import ResponseEncoder


class Command(CustomCommand):
```

This was very handy for developing and debugging the Anthropic integration.
Branch updated from b84a538 to 7fa3dec.
Suspect Issues: this pull request was deployed and Sentry observed the following issues:
Implementation of #221.

Both Anthropic and OpenAI are supported at the LLMClient level. Other prompts can be switched to Anthropic just by changing prompt parameters in PromptLayer (not tested, but it should work).
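A rough sketch of how a provider-agnostic client might route between the two SDKs. The class, the routing function, and the prefix rule are assumptions for illustration, not the project's actual LLMClient API; the prefixes follow the providers' public model naming (`claude-*` for Anthropic, `gpt-*` for OpenAI):

```python
def pick_provider(model_name):
    # Route a PromptLayer-configured model to the matching SDK by name prefix;
    # reject anything unrecognized explicitly rather than guessing.
    if model_name.startswith('claude-'):
        return 'anthropic'
    if model_name.startswith('gpt-'):
        return 'openai'
    raise ValueError(f'Unsupported model: {model_name}')


class LLMClient:
    # Hypothetical provider-agnostic wrapper; callers never touch the SDKs
    # directly, which is what lets a prompt switch providers via PromptLayer
    # parameters alone.
    def __init__(self, model_name):
        self.model_name = model_name
        self.provider = pick_provider(model_name)
```

Because provider selection is driven by the model name stored with the prompt, switching a prompt to Anthropic requires no code change at the call sites.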