Cosmetic fixes, according to PyCharm, of some return types #2

Status: Open. Wants to merge 1 commit into base `main`.
2 changes: 1 addition & 1 deletion README.md
Original file line number Diff line number Diff line change
@@ -2,7 +2,7 @@

Inspired by [tinygrad](https://github.com/tinygrad/tinygrad) and [simpleaichat](https://github.com/minimaxir/simpleaichat/tree/main/simpleaichat), `tiny-ai-client` is the easiest way to use and switch LLMs with vision and tool usage support. It works because it is `tiny`, `simple` and most importantly `fun` to develop.

-I want to change LLMs with ease, while knowing what is happening under the hood. Langchain is cool, but became bloated, complicated there is just too much chaos going on. I want to keep it simple, easy to understand and easy to use. If you want to use a LLM and have an API key, you should not need to read a 1000 lines of code and write `response.choices[0].message.content` to get the response.
+I want to change LLMs with ease, while knowing what is happening under the hood. Langchain is cool, but became bloated, complicated there is just too much chaos going on. I want to keep it simple, easy to understand and easy to use. If you want to use a LLM and have an API key, you should not need to read 1000 lines of code and write `response.choices[0].message.content` to get the response.

Simple and tiny, that's the goal.

2 changes: 2 additions & 0 deletions requirements.txt
@@ -1,3 +1,5 @@
pydantic==2.7.3
openai==1.31.0
anthropic==0.28.0
+pillow~=10.3.0
+google-generativeai
4 changes: 2 additions & 2 deletions tiny_ai_client/anthropic_.py
@@ -101,7 +101,7 @@ def call_llm_provider(
temperature: int | None,
max_new_tokens: int | None,
timeout: int,
-) -> str:
+) -> Message:
kwargs = {}
input_messages, system = model_input
if temperature is not None:
@@ -139,7 +139,7 @@ async def async_call_llm_provider(
temperature: int | None,
max_new_tokens: int | None,
timeout: int,
-) -> str:
+) -> Message:
kwargs = {}
input_messages, system = model_input
if temperature is not None:
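The `-> str` to `-> Message` fixes above matter beyond cosmetics: a static checker such as PyCharm or mypy verifies callers against the declared annotation, so a wrong return type either hides real mistakes or flags correct code. A minimal sketch of the idea (the `Message` dataclass here is a hypothetical stand-in for the library's pydantic model, and `call_llm_provider` is stubbed instead of calling a real API):

```python
from dataclasses import dataclass

# Hypothetical stand-in for tiny_ai_client's Message model, used only
# to illustrate why the annotation fix matters.
@dataclass
class Message:
    role: str
    text: str

# Before the PR the provider calls were annotated `-> str` even though
# they return a Message; a checker then flags every caller that accesses
# Message attributes on the result. With `-> Message` the access below
# type-checks cleanly.
def call_llm_provider(model_input: str) -> Message:
    # Stub response in place of a real provider call.
    return Message(role="assistant", text=f"echo: {model_input}")

reply = call_llm_provider("hi")
print(reply.text)  # under a `-> str` annotation, this attribute access is flagged
```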
8 changes: 4 additions & 4 deletions tiny_ai_client/gemini_.py
@@ -31,7 +31,6 @@ def build_model_input(self, messages: List["Message"]) -> Any:
history = []
local_messages = deepcopy(messages)
system = None
-message = None

for message in local_messages:
if message.role == "system":
@@ -45,6 +44,7 @@ def build_model_input(self, messages: List["Message"]) -> Any:
if message.text is not None:
parts.append(message.text)
if message.images is not None:
+# noinspection PyTypeChecker
parts.extend(message.images)
history.append(
{
@@ -53,15 +53,15 @@ def build_model_input(self, messages: List["Message"]) -> Any:
}
)

-return (system, history)
+return system, history

def call_llm_provider(
self,
model_input: Any,
temperature: int | None,
max_new_tokens: int | None,
timeout: int,
-) -> str:
+) -> Message:
system, history = model_input

generation_config_kwargs = {}
@@ -92,7 +92,7 @@ async def async_call_llm_provider(
temperature: int | None,
max_new_tokens: int | None,
timeout: int,
-) -> str:
+) -> Message:
system, history = model_input

generation_config_kwargs = {}
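The `return (system, history)` change in this file is purely stylistic: the comma, not the parentheses, is what builds the tuple, so both spellings return the same object. A quick sketch with placeholder values standing in for the real `system` and `history`:

```python
def build_pair():
    # Placeholder values; in gemini_.py these are the system prompt and
    # the accumulated chat history.
    system, history = None, []
    return system, history  # same tuple object as `return (system, history)`

assert build_pair() == (None, [])
```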
2 changes: 1 addition & 1 deletion tiny_ai_client/models.py
@@ -87,7 +87,7 @@ def get_llm_client_wrapper(

@property
def model_name(self) -> str:
-return self._model_name
+return self.model_name
Owner commented:

Thanks for catching this; it's actually my mistake, the attribute should be `_model_name`. Can we fix that before merging?


@model_name.setter
def model_name(self, value: str) -> None:
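As the owner's comment points out, `return self.model_name` inside the `model_name` getter re-enters the property itself, recursing until `RecursionError`; the getter must read the private backing attribute. A minimal sketch of the corrected pattern (the class name `AI` is assumed here for illustration):

```python
class AI:
    def __init__(self, model_name: str) -> None:
        self._model_name = model_name  # private backing attribute

    @property
    def model_name(self) -> str:
        # `return self.model_name` here would invoke this property again,
        # recursing until RecursionError; read the backing attribute instead.
        return self._model_name

    @model_name.setter
    def model_name(self, value: str) -> None:
        self._model_name = value
```

The setter is unaffected by the bug because assigning to `self._model_name` bypasses the property entirely.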
4 changes: 2 additions & 2 deletions tiny_ai_client/openai_.py
@@ -68,7 +68,7 @@ def call_llm_provider(
temperature: int | None,
max_new_tokens: int | None,
timeout: int,
-) -> str:
+) -> Message:
kwargs = {}
if temperature is not None:
kwargs["temperature"] = temperature
@@ -103,7 +103,7 @@ async def async_call_llm_provider(
temperature: int | None,
max_new_tokens: int | None,
timeout: int,
-) -> str:
+) -> Message:
kwargs = {}
if temperature is not None:
kwargs["temperature"] = temperature