Replies: 4 comments
-
You just need to change the `export_mode` setting in the FastFlow model's `config.yaml`.
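For reference, a rough sketch of what that change can look like. The key usually lives under an `optimization` section in newer releases, and the accepted values (`openvino`, `onnx`, and in later versions `torch`) differ between releases, so the key path below is an assumption; if your installed version predates the option, adding the key alone will not enable the export.

```python
# Sketch only: set export_mode in the FastFlow config programmatically with
# OmegaConf (the library anomalib uses for its configs). The "optimization"
# section and the value "openvino" are assumptions that vary across releases.
from omegaconf import OmegaConf

config_path = "anomalib/models/fastflow/config.yaml"
config = OmegaConf.load(config_path)

# force_add creates the section/key if the config does not already contain it.
OmegaConf.update(config, "optimization.export_mode", "openvino", force_add=True)

OmegaConf.save(config, config_path)
```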
-
I also want to know how to improve the inference speed.
-
How can I speed it up?
-
You could use OpenVINO inference to speed it up on CPU. As @Sj-Yuan mentioned, you need to change the `export_mode` setting in `config.yaml`.
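For illustration, a minimal sketch of OpenVINO inference with anomalib's `OpenVINOInferencer`, assuming the model was exported with `export_mode: openvino`. The constructor arguments shown (`path`, `metadata`) follow newer releases; 0.3.x expected the model config and a metadata path instead, and the file locations below are hypothetical.

```python
# Minimal sketch, not tied to a specific anomalib release: argument names and the
# export directory layout are assumptions and may need adapting to your version.
from anomalib.deploy import OpenVINOInferencer

inferencer = OpenVINOInferencer(
    path="results/fastflow/openvino/model.xml",         # hypothetical exported IR
    metadata="results/fastflow/openvino/metadata.json",  # hypothetical metadata file
)
prediction = inferencer.predict(image="path/to/test_image.png")
```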
-
What is the motivation for this task?
Hello, I use the FastFlow model (anomalib 0.3.3) for work (I can't update to the latest version).
Until now, to get fast inference speed, after training I converted the Lightning model (`.ckpt`) to a torch model (`.pt`) as shown below:
logger.info("Training the model.")
trainer.fit(model=model, datamodule=datamodule)
logger.info("Loading the best model weights.")
load_model_callback = LoadModelCallback(weights_path=trainer.checkpoint_callback.best_model_path)
trainer.callbacks.insert(0, load_model_callback)
best_model =model.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)
torch.save(best_model.state_dict(), 'best_model.pt')
Then, when I tried to run inference, an error occurred (in `anomalib/tools/inference/torch_inference.py`):
```python
weights = 'best_model.pt'
inferencer = TorchInferencer(path=weights, device='cuda')
```
KeyError("
model
is not found in the checkpoint. Please check the checkpoint file.")I saw a solution about config file with export mode but there is no paramter in my config file(anomalib 0.3.3)
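From the error, it looks like the inferencer expects a dictionary with a `model` entry rather than a bare `state_dict`. A minimal sketch of what that layout is assumed to be, continuing from the training script above; note that the official `export_mode: torch` export also bundles metadata (thresholds, normalization statistics), so this minimal file alone may still not be enough.

```python
import torch

# Continuing from the training script above: `model` and `trainer` are the
# objects passed to trainer.fit(...).
best_ckpt = trainer.checkpoint_callback.best_model_path
best_model = model.load_from_checkpoint(best_ckpt)

# Assumption: the inferencer looks up a "model" entry holding the inner torch
# module, not a plain state_dict. The official torch export also stores
# metadata, which this minimal file omits.
torch.save({"model": best_model.model}, "best_model.pt")
```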
Describe the solution you'd like
How can I convert the Lightning `.ckpt` file to a torch `.pt` file and run inference from the `.pt` file?
If there is a better way to improve inference speed, please tell me.
Thank you in advance for your help!
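As a general note on speed (not anomalib-specific), the inner torch module can also be called directly in eval mode under `torch.inference_mode()`, skipping the inferencer wrapper. Pre- and post-processing (resizing, normalization, anomaly-map thresholding) are anomalib-specific and omitted here, so this is only an illustration.

```python
import torch

# Generic PyTorch inference sketch; `best_model` is the Lightning module loaded
# from the checkpoint above, and the input shape/size is an assumption.
net = best_model.model.eval().to("cuda")

image_batch = torch.randn(4, 3, 256, 256, device="cuda")  # stand-in for preprocessed images
with torch.inference_mode():
    outputs = net(image_batch)
```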
Additional context
No response