
0.5.1 #132

Merged: 55 commits, Dec 12, 2023

Commits (55)
8ce2c43 Tracing (#60) [VVoruganti, Aug 22, 2023]
d0570d6 0.3.0 Long Term Memory (#62) [VVoruganti, Aug 28, 2023]
b8843c0 fix: update link to more recent blog post [vintrocode, Sep 1, 2023]
2628753 Sanitize thoughts before sending them in the thought's channel (#65) [jacobvm04, Sep 1, 2023]
88c81d1 LayeredLRU Cache (#69) [VVoruganti, Sep 2, 2023]
17caa80 Stateless bug (#70) [VVoruganti, Sep 5, 2023]
7fc94e9 Merge branch 'main' into staging [VVoruganti, Sep 8, 2023]
174a9fa Fix merge errors [VVoruganti, Sep 8, 2023]
05b2de2 Fix merge errors 2 [VVoruganti, Sep 8, 2023]
b7e2bb7 fix: top_p 0.5 to address gibberish [vintrocode, Sep 8, 2023]
312ff22 Merge branch 'main' into staging [VVoruganti, Sep 8, 2023]
30e08f6 Custom Web UI (#76) [VVoruganti, Sep 10, 2023]
c248dae Fix Github Action Workflow [VVoruganti, Sep 10, 2023]
638c78a Fix Github Action Workflow [VVoruganti, Sep 10, 2023]
c912212 add user prediction function [vintrocode, Sep 11, 2023]
0596350 Honcho Changes (#77) [VVoruganti, Sep 11, 2023]
cc1b745 Social Graph Changes [VVoruganti, Sep 11, 2023]
e226639 Authentication Form Styling [VVoruganti, Sep 12, 2023]
f1b02b9 Working Auth [VVoruganti, Sep 12, 2023]
9a31a8c Stylistic changes [VVoruganti, Sep 12, 2023]
676be29 Address all linting errors [VVoruganti, Sep 13, 2023]
e5a8ff0 Naive Route Protection [VVoruganti, Sep 13, 2023]
63a693a Fly minimum machines [VVoruganti, Sep 13, 2023]
9c96ecc Open Graph Image Changes [VVoruganti, Sep 13, 2023]
dbce17e Merge pull request #78 from plastic-labs/ui-tweaks [vintrocode, Sep 13, 2023]
dda6f0f Remove anonymous honcho usage, fix opengraph (#80) [VVoruganti, Sep 13, 2023]
2378e3d UI tweaks (#81) [VVoruganti, Sep 13, 2023]
0841901 UI tweaks (#82) [VVoruganti, Sep 13, 2023]
39a9559 Open Graph Fix [VVoruganti, Sep 13, 2023]
025320a Merge branch 'staging' into bloom-a [VVoruganti, Sep 13, 2023]
933dda5 Remove Streamlit [VVoruganti, Sep 13, 2023]
5f72fad Merge pull request #83 from plastic-labs/bloom-a [vintrocode, Sep 14, 2023]
53ef59d UI tweaks (#84) [VVoruganti, Sep 14, 2023]
7a1f3de Merge branch 'main' into staging [VVoruganti, Sep 14, 2023]
46930fc Web fixes (#89) [hyusap, Sep 18, 2023]
f633583 Optimization (#96) [VVoruganti, Sep 19, 2023]
0ed850d Optimization (#98) [VVoruganti, Sep 19, 2023]
99bf58a Merge branch 'main' into staging [VVoruganti, Sep 19, 2023]
3b95cfb add latex support and incentive it (#104) [hyusap, Oct 5, 2023]
6dc3528 prevent unallowed messages (#111) [hyusap, Oct 5, 2023]
53430b1 implement autoscroll (#112) [hyusap, Nov 8, 2023]
9a4576d ✨ add multiline support (#118) [hyusap, Nov 16, 2023]
5ebe31c ♻️ refactor all of the api stuff (#119) [hyusap, Nov 29, 2023]
0035fed ✨ implement dark mode (#120) [hyusap, Nov 29, 2023]
a89527a Documentation (#121) [VVoruganti, Dec 8, 2023]
986473f Force redirect for unauthenticated and add posthog events (#122) [VVoruganti, Dec 8, 2023]
9c4445c Main merge conflict [VVoruganti, Dec 8, 2023]
940dd2a Update version [VVoruganti, Dec 8, 2023]
2d885c0 Static banner [VVoruganti, Dec 8, 2023]
15e649e ✨ caching and skeletons (#127) [hyusap, Dec 12, 2023]
a850d0e Revert "✨ caching and skeletons (#127)" [VVoruganti, Dec 12, 2023]
3ae592e Http error handling (#129) [jacobvm04, Dec 12, 2023]
40c0a5a ✨ caching and skeletons 2 (#130) [hyusap, Dec 12, 2023]
417e1ae Changelogs (#131) [VVoruganti, Dec 12, 2023]
19eecaf Merge branch 'main' into staging [VVoruganti, Dec 12, 2023]
CHANGELOG.md: 21 additions, 0 deletions

@@ -4,6 +4,27 @@
 All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](http://keepachangelog.com/)
 and this project adheres to [Semantic Versioning](http://semver.org/).
 
+## [0.5.1] - 2023-12-12
+
+### Added
+
+- Dark mode
+- Web Caching
+- LaTeX Support
+- Multiline support for chat textbox
+
+### Changed
+
+- Required Sign In
+- No A/B for Honcho
+- Error handling for content filter
+
+### Security
+
+- Update Langchain version to ^0.0.348
+- Update OpenAI Package to ^1.3.8
+
+
 ## [0.4.1] – 2023-09-14
 
 ### Added
README.md: 1 addition, 1 deletion

@@ -1,5 +1,5 @@
 # tutor-gpt
-![Static Badge](https://img.shields.io/badge/Version-0.4.0-blue)
+![Static Badge](https://img.shields.io/badge/Version-0.5.1-blue)
 [![Discord](https://img.shields.io/discord/1076192451997474938?logo=discord&logoColor=%23ffffff&label=Bloom&labelColor=%235865F2)](https://discord.gg/bloombotai)
 ![GitHub License](https://img.shields.io/github/license/plastic-labs/tutor-gpt)
 ![GitHub Repo stars](https://img.shields.io/github/stars/plastic-labs/tutor-gpt)
agent/chain.py: 31 additions, 12 deletions

@@ -5,6 +5,9 @@
 )
 from langchain.prompts import load_prompt, ChatPromptTemplate
 from langchain.schema import AIMessage, HumanMessage, BaseMessage
+
+from openai import BadRequestError
+
 from dotenv import load_dotenv
 
 from collections.abc import AsyncIterator
@@ -54,12 +57,11 @@ def think(cls, cache: Conversation, input: str):
         ])
         chain = thought_prompt | cls.llm
 
-        cache.add_message("thought", HumanMessage(content=input))
+        def save_new_messages(ai_response):
+            cache.add_message("response", HumanMessage(content=input))
+            cache.add_message("response", AIMessage(content=ai_response))
 
-        return Streamable(
-            chain.astream({}, {"tags": ["thought"], "metadata": {"conversation_id": cache.conversation_id, "user_id": cache.user_id}}),
-            lambda thought: cache.add_message("thought", AIMessage(content=thought))
-        )
+        return Streamable(chain.astream({}, {"tags": ["thought"], "metadata": {"conversation_id": cache.conversation_id, "user_id": cache.user_id}}), save_new_messages)
 
     @classmethod
     @sentry_sdk.trace
@@ -72,13 +74,12 @@ def respond(cls, cache: Conversation, thought: str, input: str):
         ])
         chain = response_prompt | cls.llm
 
-        cache.add_message("response", HumanMessage(content=input))
+        def save_new_messages(ai_response):
+            cache.add_message("response", HumanMessage(content=input))
+            cache.add_message("response", AIMessage(content=ai_response))
 
-        return Streamable(
-            chain.astream({ "thought": thought }, {"tags": ["response"], "metadata": {"conversation_id": cache.conversation_id, "user_id": cache.user_id}}),
-            lambda response: cache.add_message("response", AIMessage(content=response))
-        )
+        return Streamable(chain.astream({ "thought": thought }, {"tags": ["response"], "metadata": {"conversation_id": cache.conversation_id, "user_id": cache.user_id}}), save_new_messages)
 
     @classmethod
     @sentry_sdk.trace
     async def think_user_prediction(cls, cache: Conversation):
@@ -114,27 +115,45 @@ async def chat(cls, cache: Conversation, inp: str ) -> tuple[str, str]:
         return thought, response
 
 
 
 class Streamable:
     "A async iterator wrapper for langchain streams that saves on completion via callback"
 
     def __init__(self, iterator: AsyncIterator[BaseMessage], callback):
         self.iterator = iterator
         self.callback = callback
         self.content = ""
+        self.stream_error = False
 
     def __aiter__(self):
         return self
 
     async def __anext__(self):
         try:
+            if self.stream_error:
+                raise StopAsyncIteration
+
             data = await self.iterator.__anext__()
             self.content += data.content
             return data.content
         except StopAsyncIteration as e:
             self.callback(self.content)
             raise StopAsyncIteration
+        except BadRequestError as e:
+            if e.code == "content_filter":
+                self.stream_error = True
+                self.message = "Sorry, your message was flagged as inappropriate. Please try again."
+
+                return self.message
+            else:
+                raise Exception(e)
         except Exception as e:
-            raise e
+            sentry_sdk.capture_exception(e)
+
+            self.stream_error = True
+            self.message = "Sorry, an error occurred while streaming the response. Please try again."
+
+            return self.message
 
     async def __call__(self):
         async for _ in self:
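The substantive pattern in this file's diff: message persistence moves out of the request path and into the stream wrapper. Nothing is written to the conversation cache until the LLM stream terminates, and mid-stream failures surface to the user as a polite fallback message instead of an exception. Below is a simplified, self-contained sketch of that pattern, using plain strings instead of langchain messages and a hypothetical `flaky_stream`; it is not the project's exact class.

```python
import asyncio
from collections.abc import AsyncIterator


class Streamable:
    """Wraps an async stream; fires the save callback only on clean completion
    and degrades to a fallback message if the stream errors mid-flight."""

    def __init__(self, iterator: AsyncIterator[str], callback):
        self.iterator = iterator
        self.callback = callback  # persists the full turn in one place
        self.content = ""
        self.stream_error = False

    def __aiter__(self):
        return self

    async def __anext__(self):
        try:
            if self.stream_error:
                raise StopAsyncIteration  # fallback already emitted; stop
            chunk = await self.iterator.__anext__()
            self.content += chunk
            return chunk
        except StopAsyncIteration:
            if not self.stream_error:
                self.callback(self.content)  # save only complete responses
            raise
        except Exception:
            self.stream_error = True
            return "Sorry, an error occurred while streaming the response."


async def flaky_stream() -> AsyncIterator[str]:
    yield "partial "
    raise RuntimeError("upstream failure")


async def main() -> None:
    saved: list[str] = []
    stream = Streamable(flaky_stream(), saved.append)
    async for chunk in stream:
        print(chunk)
    print("saved:", saved)  # [] -- the failed turn was never persisted


asyncio.run(main())
```

One difference worth noting: the sketch skips the callback after an error, while the diffed implementation still fires it on the subsequent stop, so partial content is persisted there; the diffed version also reports errors to Sentry and special-cases OpenAI's content_filter code with a dedicated message.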
agent/mediator.py: 5 additions, 1 deletion

@@ -5,13 +5,17 @@
 from dotenv import load_dotenv
 # Supabase for Postgres Management
 from supabase.client import create_client, Client
+from supabase.lib.client_options import ClientOptions
 from typing import List, Tuple, Dict
 load_dotenv()
 
 class SupabaseMediator:
     @sentry_sdk.trace
     def __init__(self):
-        self.supabase: Client = create_client(os.environ['SUPABASE_URL'], os.environ['SUPABASE_KEY'])
+        # Change the network db timeout to 60 seconds since the default is only 5 seconds
+        timeout_client_options = ClientOptions(postgrest_client_timeout=60)
+        self.supabase: Client = create_client(os.environ['SUPABASE_URL'], os.environ['SUPABASE_KEY'], timeout_client_options)
+
         self.memory_table = os.environ["MEMORY_TABLE"]
         self.conversation_table = os.environ["CONVERSATION_TABLE"]
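For reference on the timeout change above: supabase-py's `create_client` accepts a `ClientOptions` object as its third argument, and `postgrest_client_timeout` bounds each PostgREST request. A minimal sketch of the same configuration in isolation; the `conversations` table in the example query is hypothetical.

```python
import os

from supabase.client import create_client, Client
from supabase.lib.client_options import ClientOptions

# Raise the PostgREST network timeout from the 5-second default to 60 seconds,
# mirroring the change to SupabaseMediator.__init__ above.
timeout_client_options = ClientOptions(postgrest_client_timeout=60)

supabase: Client = create_client(
    os.environ["SUPABASE_URL"],
    os.environ["SUPABASE_KEY"],
    timeout_client_options,
)

# Hypothetical usage: a long-running read now gets 60s instead of failing at 5s.
rows = supabase.table("conversations").select("*").limit(10).execute()
print(rows.data)
```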