llamamodel: fix semantic typo in nomic client dynamic mode (nomic-ai#2216)

Signed-off-by: Jared Van Bortel <[email protected]>
cebtenzzre authored Apr 12, 2024
1 parent 46818e4 commit 3f8257c
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion gpt4all-backend/llamamodel.cpp
@@ -302,8 +302,8 @@ bool LLamaModel::loadModel(const std::string &modelPath, int n_ctx, int ngl)

     if (llama_verbose()) {
         std::cerr << "llama.cpp: using Metal" << std::endl;
-        d_ptr->backend_name = "metal";
     }
+    d_ptr->backend_name = "metal";
 
     // always fully offload on Metal
     // TODO(cebtenzzre): use this parameter to allow using more than 53% of system RAM to load a model
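The "semantic typo" is a misplaced brace scope: d_ptr->backend_name was assigned inside the llama_verbose() branch, so on a non-verbose run the backend name was never recorded even though Metal was in use (and, per the commit title, this field is presumably what the Nomic client's dynamic mode reports). Moving the assignment after the block makes it unconditional. A minimal standalone sketch of the before/after control flow; the Impl struct and the llama_verbose stub below are hypothetical stand-ins for the GPT4All internals, not the real definitions:

#include <iostream>
#include <string>

// Hypothetical stand-ins for the real GPT4All internals.
static bool llama_verbose() { return false; } // imagine: driven by an env var
struct Impl { std::string backend_name = "cpu"; };

int main() {
    Impl impl;

    if (llama_verbose()) {
        std::cerr << "llama.cpp: using Metal" << std::endl;
        // Before the fix, the assignment sat here, so it only ran
        // when verbose logging happened to be enabled.
    }
    // After the fix, the assignment is unconditional: the backend
    // is recorded regardless of logging state.
    impl.backend_name = "metal";

    std::cout << "backend: " << impl.backend_name << std::endl; // prints "metal"
}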
2 changes: 1 addition & 1 deletion gpt4all-bindings/python/setup.py
@@ -68,7 +68,7 @@ def get_long_description():

 setup(
     name=package_name,
-    version="2.5.0",
+    version="2.5.1",
     description="Python bindings for GPT4All",
     long_description=get_long_description(),
     long_description_content_type="text/markdown",
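The patch-level version bump ships the fix to Python users. Assuming the bindings keep their usual PyPI package name (gpt4all), picking up the fixed release is a one-liner:

    pip install --upgrade gpt4all==2.5.1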

