Getting Started

Add Maven Dependency

Add the JitPack repository and the hosp-ai dependency to your pom.xml:

  <repositories>
   <repository>
    <id>jitpack.io</id>
    <url>https://jitpack.io</url>
   </repository>
  </repositories>

  <dependencies>
   <dependency>
    <groupId>com.github.r7b7</groupId>
    <artifactId>hosp-ai</artifactId>
    <version>v1.0.0-alpha.1</version>
   </dependency>
  </dependencies>

Zero-Shot Prompt

When no example is provided in the prompt, it is called a zero-shot prompt. In this approach, the model is given only an instruction or question without any demonstrations of the desired output format. The model relies solely on its pre-trained knowledge to generate a response.

Invoke the chat completion API from a provider of your choice; OpenAI is used here.

  public static void main(String[] args)
  {
    // Create a service for the chosen provider, then wrap it in a PromptEngine.
    ILLMService service = LLMServiceFactory.createService(Provider.OPENAI, "<OPENAI-API-KEY>", "gpt-4");

    PromptEngine promptEngine = new PromptEngine(service);
    CompletionResponse response = promptEngine.sendQuery("what's the stock symbol of Palantir technology?");

    System.out.println("Response from promptEngine: " + response);
  }

Response

    Response from promptEngine: CompletionResponse[messages=[Message[role=assistant, content=The stock symbol for Palantir Technologies 
    is PLTR.]], metaData={total_tokens=30, id=chatcmpl-Ab9i1eXMZp6wffPGQVK56h83wfH8Y, prompt_tokens=18, model=gpt-4-0613, provider=OpenAi, 
    completion_tokens=12}, error=null]
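
The error field is null on success, so a caller may want to check it before reading the messages. A minimal sketch, assuming CompletionResponse and Message are records with messages(), error(), and content() accessors, as their printed form above suggests:

    CompletionResponse response = promptEngine.sendQuery("what's the stock symbol of Palantir technology?");
    if (response.error() != null) {
        // Something went wrong; the messages list may be empty or missing.
        System.err.println("Completion failed: " + response.error());
    } else {
        System.out.println(response.messages().get(0).content());
    }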

Few-Shot Prompt

A few-shot prompt provides the model with a few examples (typically 2 to 5) of a task to help it understand how to generate the desired output.

Invoke the chat completion API by passing a few examples, along with optional additional params and an optional system message.

    public static void main(String[] args)
    {
     ILLMService service = LLMServiceFactory.createService(Provider.OPENAI, "<OPENAI-API-KEY>", "gpt-4");

     // The user/assistant pair is the example; the final user message is the real query.
     PromptBuilder builder = new PromptBuilder()
            .addMessage(new Message(Role.system, "Give output in consistent format"))
            .addMessage(new Message(Role.user, "what's the stock symbol of ARCHER Aviation?"))
            .addMessage(new Message(Role.assistant, "{\"company\":\"Archer\", \"symbol\":\"ACHR\"}"))
            .addMessage(new Message(Role.user, "what's the stock symbol of Palantir technology?"))
            .addParam("temperature", 0.7)
            .addParam("max_tokens", 150);
     PromptEngine engine = builder.build(service);
     CompletionResponse response = engine.sendQuery();
     System.out.println("Response from engine: " + response);
    }

Response

    Response from engine: CompletionResponse[messages=[Message[role=assistant, content={"company":"Palantir Technology", 
    "symbol":"PLTR"}]], metaData={total_tokens=71, id=chatcmpl-Ab9i0RltCR3C8PHzZyOmRphQdMcGz, prompt_tokens=57, model=gpt-4-0613, 
    provider=OpenAi, completion_tokens=14}, error=null]
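
Because the example messages steer the model toward JSON, the assistant's reply can be parsed directly. A sketch, assuming Jackson (jackson-databind) is on the classpath and the same record-style accessors noted earlier:

    import com.fasterxml.jackson.core.JsonProcessingException;
    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    ObjectMapper mapper = new ObjectMapper();
    String content = response.messages().get(0).content();
    try {
        JsonNode json = mapper.readTree(content);
        System.out.println(json.get("symbol").asText()); // PLTR
    } catch (JsonProcessingException e) {
        // The model is only nudged toward JSON, so parsing can still fail.
        System.err.println("Model did not return valid JSON: " + e.getMessage());
    }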

Asynchronous Response from LLM

  1. Follow any of the above examples.
  2. Replace the synchronous call with an asynchronous one (an error-handling sketch follows the snippets):
       promptEngine.sendQueryAsync("<input_query>")
               .thenAccept(asyncResponse -> System.out.println("Async Response: " + asyncResponse))
               .join(); 
    
       // Using a PromptBuilder as input
       PromptEngine engine = builder.build(service);
       engine.sendQueryAsync()
             .thenAccept(asyncResponse -> System.out.println("Async Response: " + asyncResponse))
             .join(); 
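
Since sendQueryAsync evidently returns a CompletableFuture, standard completion-stage error handling applies. A sketch (exceptionally is plain java.util.concurrent API, not anything hosp-ai-specific):

    promptEngine.sendQueryAsync("<input_query>")
            .thenAccept(asyncResponse -> System.out.println("Async Response: " + asyncResponse))
            .exceptionally(ex -> {
                // Runs instead of thenAccept if the request fails.
                System.err.println("Async call failed: " + ex.getMessage());
                return null;
            })
            .join();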
    

Update Model or Provider

Update the values of the model, the apiKey (where applicable; Ollama, for example, doesn't need an API key), and the Provider.

For example, to move from Ollama to OpenAI, change this line

 ILLMService service = LLMServiceFactory.createService(Provider.OLLAMA,"gemma2");

to

 ILLMService service = LLMServiceFactory.createService(Provider.OPENAI, "<OPENAI-API-KEY>", "gpt-4");
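
If you switch providers often, one option is to pick at runtime. Below is a hypothetical helper assembled only from the two factory calls shown above; the environment variable names LLM_PROVIDER and OPENAI_API_KEY are illustrative, not part of the library:

    // Illustrative sketch: the environment variable names are hypothetical.
    static ILLMService createFromEnv() {
        String provider = System.getenv().getOrDefault("LLM_PROVIDER", "OLLAMA");
        if ("OPENAI".equalsIgnoreCase(provider)) {
            return LLMServiceFactory.createService(Provider.OPENAI, System.getenv("OPENAI_API_KEY"), "gpt-4");
        }
        // Ollama runs locally and needs no API key.
        return LLMServiceFactory.createService(Provider.OLLAMA, "gemma2");
    }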

Tool Support

To add tools to a chat completion request, supply the tool name, description, and parameters to the ToolFunction class.

ToolFunction function = new ToolFunction("get_current_weather",
            "Get the current weather in a given location in fahrenheit",
            parameters);

Next, pass the tool to the PromptBuilder, along with the query and any other parameters.

PromptEngine engine = new PromptBuilder()
            .addMessage(new Message(Role.user, "What's the weather in Chicago today?"))
            .addTool(function)
            .addToolChoice("auto")
            .build(service);
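
Then invoke the engine as in the earlier examples:

    CompletionResponse response = engine.sendQuery();
    System.out.println("Response from engine: " + response);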

For example, suppose the following parameters were passed in the tools field (their JSON equivalent is shown after the snippet):

 private static Map<String, Object> getParameters() {
    // A JSON Schema for the tool's arguments, expressed as a Map.
    Map<String, Object> parameters = new HashMap<>();
    parameters.put("type", "object");

    Map<String, Object> location = new HashMap<>();
    location.put("type", "string");
    location.put("description", "The city and state");

    Map<String, Object> unit = new HashMap<>();
    unit.put("type", "string");
    unit.put("enum", List.of("celsius", "fahrenheit"));

    parameters.put("properties", Map.of("location", location, "unit", unit));
    parameters.put("required", List.of("location", "unit"));
    return parameters;
 }
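
For reference, this map is simply an in-code representation of a JSON Schema object; serialized, it corresponds to:

    {
      "type": "object",
      "properties": {
        "location": { "type": "string", "description": "The city and state" },
        "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] }
      },
      "required": ["location", "unit"]
    }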

With these parameters, we would get a response from the LLM similar to the following:

  1. Groq

    CompletionResponse[messages=[Message[role=assistant, content=null, toolCalls=[{id=call_8ed1, type=function, function= {name=get_current_weather, arguments={"location":"Chicago","unit":"fahrenheit"}}}]]], metaData={completion_tokens=51, provider=Groq, model=llama3-70b-8192, prompt_tokens=947, id=chatcmpl-f0895dda-d0de-4b34-a109-d4cbd07ef397, total_tokens=998}, error=null]

  2. Ollama

    CompletionResponse[messages=[Message[role=assistant, content=, toolCalls=[{function={name=get_current_weather, arguments={location=Chicago, IL, unit=fahrenheit}}}]]], metaData={provider=Ollama, model=mistral, eval_duration=2309000000, total_duration=6780714000}, error=null]
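
The model only names the tool to call; executing it is left to the application. A minimal dispatch sketch, again assuming record-style accessors and treating each tool call as a Map (this matches the printed output above but may differ from the library's actual types); getCurrentWeather is a hypothetical helper you would implement:

    // Sketch only: the types below are inferred from the printed output, not the documented API.
    Message reply = response.messages().get(0);
    if (reply.toolCalls() != null && !reply.toolCalls().isEmpty()) {
        Map<String, Object> call = (Map<String, Object>) reply.toolCalls().get(0);
        Map<String, Object> function = (Map<String, Object>) call.get("function");
        if ("get_current_weather".equals(function.get("name"))) {
            String weather = getCurrentWeather(function.get("arguments")); // hypothetical helper
            System.out.println("Tool result: " + weather);
        }
    }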