Improve progress bar #10821

Merged: 1 commit into ggerganov:master on Dec 19, 2024
Conversation

@ericcurtin (Contributor) commented Dec 13, 2024:

Set the default width to the terminal's width. Also fixed a small bug around the default n_gpu_layers value.
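For context, a common way to query the terminal width on POSIX systems looks like the sketch below. This is an assumption about the usual approach, not the PR's actual code, which is not shown in this thread:

    // Sketch only: ask the kernel for the window size via TIOCGWINSZ,
    // falling back to 80 columns when stdout is not a terminal.
    #include <sys/ioctl.h>
    #include <unistd.h>

    static int get_terminal_width() {
        winsize ws{};
        if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0 && ws.ws_col > 0) {
            return ws.ws_col;
        }
        return 80;  // conservative fallback when the width is unknown
    }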

@ericcurtin force-pushed the progress-bar branch 8 times, most recently from 31b446e to b8956b8, on December 14, 2024 at 13:26
(Review comment on README.md: outdated, resolved)
@ericcurtin force-pushed the progress-bar branch 17 times, most recently from 1090741 to 23b1d44, on December 15, 2024 at 12:07
@ericcurtin (Contributor, Author) commented:
@ggerganov @slaren this is ready for review

@ericcurtin force-pushed the progress-bar branch 2 times, most recently from 6d8e249 to 2d05bf3, on December 16, 2024 at 12:19
@ericcurtin force-pushed the progress-bar branch 7 times, most recently from dd78c36 to e55e673, on December 16, 2024 at 19:16
@ggerganov (Owner) commented:
It crashes when I try to run with a local file:

$ git log -1
commit e55e67314c42360176643364c1277897eff96d78 (HEAD -> pr/10821)
Author: Eric Curtin <[email protected]>
Date:   Fri Dec 13 22:46:13 2024 +0000

    Improve progress bar
    
    Set default width to whatever the terminal is. Also fixed a small bug around
    default n_gpu_layers value.
    
    Signed-off-by: Eric Curtin <[email protected]>
$ ./build/bin/llama-run ./models/llama-3.2-3b-instruct/ggml-model-f16.gguf 
curl_easy_perform() failed: Failure when receiving data from the peer
libc++abi: terminating due to uncaught exception of type nlohmann::json_abi_v3_11_3::detail::parse_error: [json.exception.parse_error.101] parse error at line 1, column 1: attempting to parse an empty input; check that your input string or stream contains the expected JSON
Abort trap: 6

@ericcurtin (Contributor, Author) commented:
> It crashes when I try to run with a local file: [...]

Looking now

@ericcurtin (Contributor, Author) commented:
Thanks for reporting @ggerganov. The local-file branch tested std::filesystem::exists(bn) instead of exists(model_) and did not return early, so an existing local path fell through to the download code, where curl failed and the empty response broke the JSON parse. Fixed with:

diff --git a/examples/run/run.cpp b/examples/run/run.cpp
index 7306c538..0ce138f7 100644
--- a/examples/run/run.cpp
+++ b/examples/run/run.cpp
@@ -567,8 +567,10 @@ class LlamaData {
         const std::vector<std::string> headers = { "--header",
                                                    "Accept: application/vnd.docker.distribution.manifest.v2+json" };
         int                            ret     = 0;
-        if (string_starts_with(model_, "file://") || std::filesystem::exists(bn)) {
+        if (string_starts_with(model_, "file://") || std::filesystem::exists(model_)) {
             remove_proto(model_);
+
+            return ret;
         } else if (string_starts_with(model_, "hf://") || string_starts_with(model_, "huggingface://")) {
             remove_proto(model_);
             ret = huggingface_dl(model_, headers, bn);
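To see the fixed logic in isolation, here is a self-contained sketch of the check (starts_with and the example path are hypothetical stand-ins, not llama.cpp code): a file:// URI or an existing local path short-circuits before any download branch runs.

    #include <filesystem>
    #include <iostream>
    #include <string>

    // Hypothetical stand-in for the string_starts_with helper in run.cpp.
    static bool starts_with(const std::string & s, const std::string & prefix) {
        return s.rfind(prefix, 0) == 0;
    }

    int main(int argc, char ** argv) {
        const std::string model = argc > 1 ? argv[1] : "file:///tmp/model.gguf";
        // Mirrors the corrected condition: check the model path itself, not its
        // basename, and stop here instead of falling through to the download code.
        if (starts_with(model, "file://") || std::filesystem::exists(model)) {
            std::cout << "local model, no download needed\n";
            return 0;
        }
        std::cout << "remote model, would attempt download\n";
        return 0;
    }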

@ericcurtin (Contributor, Author) commented:
Fix pushed

(Review comment on examples/run/run.cpp: outdated, resolved)
@ericcurtin force-pushed the progress-bar branch 5 times, most recently from 3bb5568 to 7191fb0, on December 17, 2024 at 16:10
(Review comments on examples/run/README.md and examples/run/run.cpp: outdated, resolved)
Commit: Improve progress bar

Set default width to whatever the terminal is. Also fixed a small bug around
default n_gpu_layers value.

Signed-off-by: Eric Curtin <[email protected]>
@slaren merged commit 7909e85 into ggerganov:master on Dec 19, 2024. 48 checks passed.
@ericcurtin deleted the progress-bar branch on December 19, 2024 at 11:43.
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Dec 20, 2024