Setting verbose=False issues with threads #1874

Open · jeberger opened this issue Dec 20, 2024 · 1 comment

@jeberger
Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Observed Behavior

Setting verbose = False when creating a Llama instance causes all program output to disappear while the model is running, including logging output from other threads. In addition, if I create several Llama instances and use them in parallel from multiple threads, I sometimes receive some (but not all) of the output from one of the instances; I assume another instance finished running first and restored output for everyone. Sometimes output is not restored at all after every instance has finished, presumably because the last instance to finish restored output to an already-blocked state.
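For context, fd-level output suppression of the sort that native bindings commonly use looks roughly like this (a generic sketch, not llama-cpp-python's actual code):

import os
import sys

class SuppressOutput:
    """Generic sketch of fd-level output suppression.

    dup2() remaps fds 1 and 2 for the entire process, so every
    thread's output vanishes, and two overlapping instances restore
    each other's saved descriptors in whatever order they finish.
    """

    def __enter__(self):
        sys.stdout.flush()
        sys.stderr.flush()
        self.saved_out = os.dup(1)   # remember the real stdout/stderr
        self.saved_err = os.dup(2)
        devnull = os.open(os.devnull, os.O_WRONLY)
        os.dup2(devnull, 1)          # process-wide, not per-thread
        os.dup2(devnull, 2)
        os.close(devnull)
        return self

    def __exit__(self, *exc):
        os.dup2(self.saved_out, 1)   # races with other instances
        os.dup2(self.saved_err, 2)
        os.close(self.saved_out)
        os.close(self.saved_err)

Because dup2() operates on the process-wide descriptor table, this pattern cannot silence just one thread, which would explain the behaviour described above.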

Expected Behavior

Setting verbose = False when creating a Llama instance should cause output from that instance to disappear. Other program output should still be visible (this includes logging output and output from other Llama instances).

@jeberger (Author)

As a workaround for logging on Linux, you can duplicate the stderr file descriptor and log to the duplicate instead of the true stderr. For example:

import logging
import os
import sys

# LOG_LEVEL and LOG_FORMAT stand in for your own logging settings.
logging.basicConfig(
    level=LOG_LEVEL, format=LOG_FORMAT,
    handlers=[
        # Log through a duplicate of fd 2, taken before llama.cpp
        # gets a chance to redirect the original stderr descriptor.
        logging.StreamHandler(os.fdopen(
            os.dup(sys.stderr.fileno()), "w")),
    ])

This allows logging to proceed even while output is blocked by llama.cpp.
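The same trick extends to ordinary print() calls: duplicate stdout once at startup, before any Llama instance is created, and write through the copy. A minimal sketch (safe_out and safe_print are names invented for this example):

import os
import sys

# Duplicate the real stdout once, at startup, before any Llama
# instance can redirect fd 1. The duplicate keeps pointing at the
# terminal even after fd 1 is remapped to /dev/null.
safe_out = os.fdopen(os.dup(sys.stdout.fileno()), "w")

def safe_print(*args, **kwargs):
    # Route through the duplicated descriptor, bypassing suppression.
    kwargs.setdefault("flush", True)
    print(*args, file=safe_out, **kwargs)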
