
Hopfield logo


Typescript-first LLM framework with static type inference, testability, and composability.

import hop from "hopfield";
import openai from "hopfield/openai";
import OpenAI from "openai";
import z from "zod";

// create an OpenAI hopfield client
const hopfield = hop.client(openai).provider(new OpenAI());

// use description templates with Typescript string literal types
const categoryDescription = hopfield
  .template()
  .enum("The category of the message.");

// define functions for LLMs to call, with Zod validations
const classifyMessage = hopfield.function({
  name: "classifyMessage",
  description: "Triage an incoming support message.",
  parameters: z.object({
    summary: z.string().describe("The summary of the message."),
    category: z
      .enum([
        "ACCOUNT_ISSUES",
        "BILLING_AND_PAYMENTS",
        "TECHNICAL_SUPPORT",
        "OTHERS",
      ])
      .describe(categoryDescription),
  }),
});

// create a client with function calling
const chat = hopfield.chat().functions([classifyMessage]);

const incomingUserMessage = "How do I reset my password?";

// use utility types to infer inputs for a simple devex
const messages: hop.inferMessageInput<typeof chat>[] = [
  {
    content: incomingUserMessage,
    role: "user",
  },
];

// use the built-in LLM API calls (or just use the input/output Zod validations)
const parsed = await chat.get({
  messages,
});

// get type-strong responses with `__type` helpers
if (parsed.choices[0].__type === "function_call") {
  // automatically validate the arguments returned from the LLM
  // we use the Zod schema you passed, for maximum flexibility in validation
  const category = parsed.choices[0].message.function_call.arguments.category;
  await handleMessageWithCategory(category, incomingUserMessage);
}
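
The `handleMessageWithCategory` call above is left undefined in the snippet. A minimal sketch of what such a handler could look like (the name comes from the example; the routing logic and types here are hypothetical placeholders, not part of Hopfield):

// hypothetical handler for the example above - the routing is placeholder logic
// `category` is already narrowed to the Zod enum's literal values by the time it reaches us
type MessageCategory =
  | "ACCOUNT_ISSUES"
  | "BILLING_AND_PAYMENTS"
  | "TECHNICAL_SUPPORT"
  | "OTHERS";

async function handleMessageWithCategory(
  category: MessageCategory,
  message: string,
) {
  // route the message to whatever queue makes sense for your product (placeholder)
  console.log(`[${category}]`, message);
}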

TL;DR

Hopfield might be a good fit for your project if:

  • πŸ—οΈ You build with Typescript/Javascript, and have your database schemas in these languages (e.g. Prisma and/or Next.js).
  • πŸͺ¨ You don't need a heavyweight LLM orchestration framework that ships with a ton of dependencies you'll never use.
  • πŸ€™ You're using OpenAI function calling and/or custom tools, and want Typescript-native features for them (e.g. validations w/ Zod).
  • πŸ’¬ You're building complex LLM interactions which use memory & RAG, evaluation, and orchestration (Coming Soonβ„’).
  • πŸ“ You want best-practice, extensible templates, which use string literal types under the hood for transparency.

Oh, and liking Typescript is a nice-to-have.
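
On the template point above: the `.template().enum(...)` call from the example produces a plain string, so the full description sent to the model is visible at the type level in your editor. A rough sketch of what that looks like - the exact appended wording comes from Hopfield's templates, so treat the hover text in the comment as illustrative:

// the const from the example above - hovering it reveals a string literal type,
// roughly: "The category of the message. This must always be a possible value from the `enum` array."
// (illustrative - see hopfield.ai for the exact template wording)
const categoryDescription = hopfield
  .template()
  .enum("The category of the message.");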

Guiding principles

  • πŸŒ€ We are Typescript-first, and only support TS (or JS) - with services like Replicate or OpenAI, why do you need Python?
  • 🀏 We provide a simple, ejectable interface with common LLM use-cases. This is aligned 1-1 with LLM provider abstractions, like OpenAI's.
  • πŸͺ’ We explicitly don't provide a ton of custom tools (please don't ask for too many πŸ˜…) outside of the building blocks and simple examples provided. Other frameworks provide these, but when you use them, you soon realize the tool you want is very use-case specific.
  • πŸ§ͺ We (will) provide evaluation frameworks which let you simulate user scenarios and backend interactions with the LLM, including multi-turn conversations and function calling.
  • 🐢 We support Node.js, Vercel Edge Functions, Cloudflare Workers, and more (oh and even web, if you like giving away API keys).

Install

npm i hopfield
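
A minimal chat call (without function calling) can then be assembled from the pieces shown above - a sketch that assumes your OpenAI API key is available to the OpenAI SDK (e.g. via OPENAI_API_KEY):

import hop from "hopfield";
import openai from "hopfield/openai";
import OpenAI from "openai";

// create a Hopfield client backed by the official OpenAI SDK
const hopfield = hop.client(openai).provider(new OpenAI());

// a plain chat instance, without function calling
const chat = hopfield.chat();

// infer the strongly-typed message input for this chat instance
const messages: hop.inferMessageInput<typeof chat>[] = [
  { role: "user", content: "What is the capital of France?" },
];

const response = await chat.get({ messages });

// responses are validated and carry a `__type` discriminator, as in the example above
console.log(response.choices[0]);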

Documentation

For full documentation, visit hopfield.ai.

Community

If you have questions or need help, reach out to the community in the Hopfield GitHub Discussions.

Inspiration

Shoutout to these projects which inspired us:

Contributing

If you're interested in contributing to Hopfield, please read our contributing docs before submitting a pull request.
