Easily integrate large language models (LLMs) into your applications through a user-friendly interface. EasyLM simplifies making requests to various LLM providers, so you can focus on building intelligent features.
To install EasyLM, run:
```bash
pip install easylm
```
- User-Friendly API: Intuitive decorators for easy integration with your existing codebase.
- Conversation Management: Keep track of message history for dynamic interactions.
- Provider Flexibility: Easily switch between multiple LLM providers.
- Customizable Requests: Adjust parameters such as temperature and response formats.
- Support for Long Prompts: Load prompts from text files for complex queries with variables.
Define a query function using the `@ask` decorator. Here’s a simple example:

```python
from easylm import ask

@ask()
def simple_query(question: str) -> str:
    return f"What is your response to: {question}?"
```
Invoke the function to get a response:
```python
response = simple_query("What is the capital of France?")
print(response)  # Prints the answer
```
Track conversation history with the `.history()` method:

```python
for message in response.history():
    print(message)  # Prints each message in the conversation history
```
Use the `.reply()` method to ask follow-up questions:

```python
follow_up = response.reply("Can you tell me more about that city?")
print(follow_up)  # Prints the answer to the follow-up question
```
Specify different models by passing the model name when decorating your function:
@ask(model="meta-llama/llama-3.2-3b-instruct")
def advanced_query(question: str) -> str:
return f"Explain this: {question}"
For longer prompts, load them from `.txt` files. This is useful for managing complex queries with variables and formatting. You can use `{{my_variable}}` placeholders in your prompt file and pass their values in a dictionary returned together with the file path:

```python
@ask()
def file_query() -> str:
    return "path/to/your_prompt.txt", {"my_variable": "value"}
```
This method simplifies the management of complex prompts.
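For illustration, `path/to/your_prompt.txt` could contain something like the following (the wording here is just an example); the `{{my_variable}}` placeholder is filled in with the supplied value before the prompt is sent:

```text
You are a helpful assistant.
Summarize the following topic in two sentences: {{my_variable}}
```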
Adjust your requests with additional parameters to optimize responses:
@ask(params={"temperature": 0.7, "max_tokens": 150})
def tailored_query(question: str) -> str:
return f"Analyze this: {question}"
Integrate retrieval with generation for sophisticated applications:
@ask(model="openrouter/retrieval_model")
def rag_query(data_source: str, user_question: str) -> str:
retrieved_data = utils.fetch_data(data_source) # Custom data fetching logic
return f"Using this information: {retrieved_data}. Now answer: {user_question}"
This integration allows for contextualized responses based on external data.
Specify the `json_response` parameter to customize how the response is returned:

```python
@ask(json_response=True)
def json_format_query(question: str) -> str:
    return f"Provide data for: {question}"
```
This returns the response as a JSON object for easier parsing.
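How you consume the result depends on what the decorated call returns; as a rough, hypothetical sketch (the response may already be a parsed object, or it may be JSON text), you could handle both cases:

```python
import json

result = json_format_query("the three largest cities in France")

# Hypothetical handling: parse only if the result is JSON text rather than a dict.
data = result if isinstance(result, dict) else json.loads(str(result))
print(data)
```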
| Parameter | Description |
|---|---|
| `model` | The LLM model to use for generating responses. |
| `json_response` | If `True`, return the response in JSON format. |
| `save_logs` | If `True`, log the interaction for review and debugging. |
| `params` | Additional parameters to customize the request (e.g., `temperature`, `max_tokens`). |
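As a combined sketch (the function name, prompt, and parameter values below are arbitrary examples), these options can be passed together to a single decorator:

```python
from easylm import ask

@ask(
    model="meta-llama/llama-3.2-3b-instruct",
    save_logs=True,  # Log the request and response for later review
    params={"temperature": 0.2, "max_tokens": 200},
)
def audited_query(question: str) -> str:
    return f"Give a concise, factual answer to: {question}"
```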
EasyLM supports multiple language model providers. By default, it uses OpenRouter, but you can switch to others as needed. Here’s a brief overview of supported providers:
- OpenAI: Leading provider of language models with a wide range of capabilities.
- OpenRouter: Versatile model provider with broad capabilities.
You can also add your own provider if it exposes a compatible API. Simply call `add_provider()` to register it with EasyLM:

```python
from easylm.config import add_provider

add_provider("qwen", "https://qwen.ai/api/v1/chat/completions", "QWEN_API_KEY")
```
If you use this library in your research, you can cite it as follows:
```bibtex
@misc{easylm,
  author       = {Momeni, Mohammad},
  title        = {EasyLM},
  year         = {2024},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/moehmeni/easylm}},
}
```