Dartmouth LangChain

LangChain components for Dartmouth-hosted models.

Getting started

  1. Install the package:

     pip install langchain_dartmouth

  2. Obtain a Dartmouth API key from developer.dartmouth.edu
  3. Store the API key as an environment variable called DARTMOUTH_API_KEY:

     export DARTMOUTH_API_KEY=<your_key_here>

  4. Obtain a Dartmouth Chat API key
  5. Store the API key as an environment variable called DARTMOUTH_CHAT_API_KEY:

     export DARTMOUTH_CHAT_API_KEY=<your_key_here>
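
The library reads these variables from the environment at runtime. A quick way to confirm they are set before going further (a minimal stdlib-only check, not part of langchain_dartmouth):

```python
import os

REQUIRED = ("DARTMOUTH_API_KEY", "DARTMOUTH_CHAT_API_KEY")

# Collect any keys that are missing from the environment.
missing = [var for var in REQUIRED if not os.environ.get(var)]
if missing:
    print("Missing environment variables:", ", ".join(missing))
else:
    print("All Dartmouth API keys are set.")
```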

What is this?

This library provides an integration of Dartmouth-provided generative AI resources with the LangChain framework.

There are three main components currently implemented:

  • Large Language Models
  • Embedding models
  • Reranking models

All of these components are based on corresponding LangChain base classes and can be used seamlessly wherever the corresponding LangChain objects can be used.

Using the library

Large Language Models

There are three kinds of Large Language Models (LLMs) provided by Dartmouth:

  • On-premises:
    • Base models without instruction tuning (require no special prompt format)
    • Instruction-tuned models (also known as Chat models) requiring specific prompt formats
  • Cloud:
    • Third-party, pay-as-you-go chat models (e.g., OpenAI's GPT 4o, Google Gemini)

Using a Dartmouth-hosted base language model:

```python
from langchain_dartmouth.llms import DartmouthLLM

llm = DartmouthLLM(model_name="codellama-13b-hf")

response = llm.invoke("Write a Python script to swap two variables.")
print(response)
```

Using a Dartmouth-hosted chat model:

```python
from langchain_dartmouth.llms import ChatDartmouth

llm = ChatDartmouth(model_name="llama-3-8b-instruct")

response = llm.invoke("Hi there!")
print(response.content)
```

Note

The required prompt format is enforced automatically when you are using ChatDartmouth.

Using a Dartmouth-provided third-party chat model:

```python
from langchain_dartmouth.llms import ChatDartmouthCloud

llm = ChatDartmouthCloud(model_name="openai.gpt-4o-mini-2024-07-18")

response = llm.invoke("Hi there!")
print(response.content)
```


Embeddings

Using a Dartmouth-hosted embeddings model:

```python
from langchain_dartmouth.embeddings import DartmouthEmbeddings

embeddings = DartmouthEmbeddings()

response = embeddings.embed_query("Hello? Is there anybody in there?")
print(response)
```
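
The embedding of a query is a plain list of floats. Embedding vectors are typically compared with cosine similarity; the sketch below uses hand-written toy vectors in place of real embed_query output, so it runs without an API key:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for embeddings.embed_query(...) output.
query_vec = [0.1, 0.9, 0.2]
doc_vec = [0.2, 0.8, 0.1]

print(cosine_similarity(query_vec, doc_vec))
```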

Reranking

Using a Dartmouth-hosted reranking model:

```python
from langchain_dartmouth.retrievers.document_compressors import DartmouthReranker
from langchain_core.documents import Document

docs = [
    Document(page_content="Deep Learning is not..."),
    Document(page_content="Deep learning is..."),
]

query = "What is Deep Learning?"
reranker = DartmouthReranker(model_name="bge-reranker-v2-m3")
ranked_docs = reranker.compress_documents(query=query, documents=docs)

print(ranked_docs)
```
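
Conceptually, a reranker scores each document against the query and returns the documents ordered from most to least relevant. The toy function below uses simple word overlap instead of a neural model, purely to illustrate that input/output shape (it is not how DartmouthReranker works internally):

```python
def toy_rerank(query, documents):
    # Score each document by how many query words it contains,
    # then return the documents sorted by descending score.
    query_words = set(query.lower().split())

    def score(text):
        return len(query_words & set(text.lower().split()))

    return sorted(documents, key=score, reverse=True)

docs = [
    "Bananas are yellow.",
    "Deep learning is a subfield of machine learning.",
]
print(toy_rerank("What is deep learning?", docs))
```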

Available models

For a list of available models, call the list() method of the respective class.

License

Created by Simon Stone for Dartmouth College under Creative Commons CC BY-NC 4.0 License.
For questions, comments, or improvements, email Research Computing.

Except where otherwise noted, the example programs are made available under the OSI-approved MIT license.
