
Misc. bug: ggml files conflict between llama.cpp and whisper.cpp #11303

Open
fpemud opened this issue Jan 19, 2025 · 1 comment

Comments


fpemud commented Jan 19, 2025

Name and Version

llama-cpp git commit: 92bc493
whisper-cpp git commit: 7a423f1c008c1d7efdee91e1ce2f8ae22f42f43b

Operating systems

No response

Which llama.cpp modules do you know to be affected?

No response

Command line

Problem description & steps to reproduce

I'm trying to install both llama.cpp and whisper.cpp, and I found that they install different versions of the ggml files to the same locations.
Maybe libggml should be split out as a standalone project?

The conflicting files are:
/usr/include/ggml*.h
/usr/include/gguf.h
/usr/lib64/libggml*.so
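The conflict can be confirmed mechanically by intersecting the two install manifests. A minimal sketch (the file lists below are abbreviated stand-ins for the real install manifests; in practice, capture them from each project's `cmake --install` output or from your package manager):

```shell
# Stand-in install manifests, one path per line, sorted for comm(1).
printf '%s\n' /usr/include/ggml.h /usr/include/gguf.h /usr/include/llama.h \
  | sort > llama_files.txt
printf '%s\n' /usr/include/ggml.h /usr/include/gguf.h /usr/include/whisper.h \
  | sort > whisper_files.txt

# Lines common to both manifests are the conflicting files.
comm -12 llama_files.txt whisper_files.txt
```

With these stand-in lists, the intersection is `/usr/include/ggml.h` and `/usr/include/gguf.h`, matching the conflict reported above.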

llama.cpp:
└── usr
    ├── bin
    │   ├── convert_hf_to_gguf.py
    │   ├── llama-batched
    │   ├── llama-batched-bench
    │   ├── llama-bench
    │   ├── llama-cli
    │   ├── llama-convert-llama2c-to-ggml
    │   ├── llama-cvector-generator
    │   ├── llama-embedding
    │   ├── llama-eval-callback
    │   ├── llama-export-lora
    │   ├── llama-gbnf-validator
    │   ├── llama-gen-docs
    │   ├── llama-gguf
    │   ├── llama-gguf-hash
    │   ├── llama-gguf-split
    │   ├── llama-gritlm
    │   ├── llama-imatrix
    │   ├── llama-infill
    │   ├── llama-llava-cli
    │   ├── llama-lookahead
    │   ├── llama-lookup
    │   ├── llama-lookup-create
    │   ├── llama-lookup-merge
    │   ├── llama-lookup-stats
    │   ├── llama-minicpmv-cli
    │   ├── llama-parallel
    │   ├── llama-passkey
    │   ├── llama-perplexity
    │   ├── llama-quantize
    │   ├── llama-quantize-stats
    │   ├── llama-qwen2vl-cli
    │   ├── llama-retrieval
    │   ├── llama-run
    │   ├── llama-save-load-state
    │   ├── llama-simple
    │   ├── llama-simple-chat
    │   ├── llama-speculative
    │   ├── llama-speculative-simple
    │   ├── llama-tokenize
    │   ├── llama-tts
    │   └── vulkan-shaders-gen
    ├── include
    │   ├── ggml-alloc.h
    │   ├── ggml-backend.h
    │   ├── ggml-blas.h
    │   ├── ggml-cann.h
    │   ├── ggml-cpu.h
    │   ├── ggml-cuda.h
    │   ├── ggml-kompute.h
    │   ├── ggml-metal.h
    │   ├── ggml-opt.h
    │   ├── ggml-rpc.h
    │   ├── ggml-sycl.h
    │   ├── ggml-vulkan.h
    │   ├── ggml.h
    │   ├── gguf.h
    │   ├── llama-cpp.h
    │   └── llama.h
    ├── lib
    │   └── pkgconfig
    │       └── llama.pc
    ├── lib64
    │   ├── cmake
    │   │   └── llama
    │   │       ├── llama-config.cmake
    │   │       └── llama-version.cmake
    │   ├── libggml-base.so
    │   ├── libggml-cpu.so
    │   ├── libggml-hip.so
    │   ├── libggml-opencl.so
    │   ├── libggml-vulkan.so
    │   ├── libggml.so
    │   ├── libllama.so
    │   └── libllava_shared.so
    └── share
        └── doc
            └── llama-cpp-9999
                ├── AUTHORS
                └── README.md
whisper.cpp:
└── usr
    ├── bin
    │   ├── whisper-bench
    │   ├── whisper-cli
    │   └── whisper-server
    ├── include
    │   ├── ggml-alloc.h
    │   ├── ggml-backend.h
    │   ├── ggml-blas.h
    │   ├── ggml-cann.h
    │   ├── ggml-cpu.h
    │   ├── ggml-cuda.h
    │   ├── ggml-kompute.h
    │   ├── ggml-metal.h
    │   ├── ggml-opt.h
    │   ├── ggml-rpc.h
    │   ├── ggml-sycl.h
    │   ├── ggml-vulkan.h
    │   ├── ggml.h
    │   ├── gguf.h
    │   └── whisper.h
    ├── lib
    │   └── pkgconfig
    │       └── whisper.pc
    ├── lib64
    │   ├── cmake
    │   │   └── whisper
    │   │       ├── whisper-config.cmake
    │   │       └── whisper-version.cmake
    │   ├── libggml-base.so
    │   ├── libggml-cpu.so
    │   ├── libggml.so
    │   ├── libwhisper.so -> libwhisper.so.1
    │   ├── libwhisper.so.1 -> libwhisper.so.1.7.4
    │   └── libwhisper.so.1.7.4
    └── share
        └── doc
            └── whisper-cpp-9999
                ├── AUTHORS
                ├── README.md
                └── README_sycl.md

First Bad Commit

No response

Relevant log output

@ggerganov
Owner

For now, you can build the two projects with two different install prefixes: `-DCMAKE_INSTALL_PREFIX=/usr/local/llama.cpp` and `-DCMAKE_INSTALL_PREFIX=/usr/local/whisper.cpp`, so that the installed files do not clash with each other.

The proper long-term solution would be to start versioning ggml and allow the llama.cpp and whisper.cpp projects to use an external build. No ETA for now. See ggerganov/ggml#1066 (reply in thread) for more info.
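The separate-prefix workaround can be sketched as follows (run each block from the respective project's source checkout; the exact library subdirectory, `lib` vs `lib64`, depends on your distribution):

```shell
# In the llama.cpp checkout: configure, build, and install under its own prefix.
cmake -B build -DCMAKE_INSTALL_PREFIX=/usr/local/llama.cpp
cmake --build build
cmake --install build

# In the whisper.cpp checkout: same, with a different prefix.
cmake -B build -DCMAKE_INSTALL_PREFIX=/usr/local/whisper.cpp
cmake --build build
cmake --install build
```

Consumers then need to be pointed at the copy they want, for example via `PKG_CONFIG_PATH=/usr/local/llama.cpp/lib64/pkgconfig` when building against llama.pc, and `LD_LIBRARY_PATH=/usr/local/llama.cpp/lib64` (or an rpath) at runtime, since neither prefix is on the default search paths.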
