
[QUESTION] predict multiple times with a model #222

Open
dh-Kang opened this issue Jun 30, 2024 · 0 comments
Labels
question Further information is requested

Comments


dh-Kang commented Jun 30, 2024

❓ Questions and Help

Before asking:

  1. Search for similar issues.
  2. Search the docs.

What is your question?

When I try to predict on several datasets in a row, there is a problem.
The first dataset's prediction completes fine.
While it runs on the second dataset, the process gets stuck and only GPU 0 stays occupied for a very long time.
So I have to kill the process and start again from the second dataset.
Below is the final part of the log from the attempt on the second dataset:

GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
Killed

What should I do to fix this problem?

Code

from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt22-cometkiwi-da")
model = load_from_checkpoint(model_path)
model.predict(data1, batch_size=256, gpus=8)["scores"]
...
model.predict(data2, batch_size=256, gpus=8)["scores"]
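
In case it helps: one workaround sketch (not verified against this issue, and assuming data1 and data2 are ordinary lists of input dicts) is to score both datasets in a single predict call and split the scores afterwards, so the multi-GPU workers only have to be spawned once:

from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt22-cometkiwi-da")
model = load_from_checkpoint(model_path)

# Concatenate both datasets so predict runs only once.
combined = data1 + data2
scores = model.predict(combined, batch_size=256, gpus=8)["scores"]

# Split the flat list of scores back into per-dataset slices
# (scores come back in the same order as the inputs).
scores1 = scores[:len(data1)]
scores2 = scores[len(data1):]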

What have you tried?

What's your environment?

python 3.11.9
pyenv
Linux 22.04
torch==2.3.1
unbabel-comet==2.2.2

@dh-Kang dh-Kang added the question Further information is requested label Jun 30, 2024