Add support for autodetecting tensor_storage.type from the size of the stream #289
I'm honestly not sure if this is even valid as per the official format, but I have some existing .ckpt files which were written using the automatic1111 DreamBooth extension, and they contain some f16 tensors without `GLOBAL 'torch HalfStorage'` opcodes in the pkl stream. These load fine in the Python implementation, but fail here since `PickleTensorReader` skips those tensors due to a mismatched size. It seems the other loader guesses the f16 format from the size of the tensor data file, whereas this code always assumes f32 if the pickle header doesn't contain a specifier.

With this change my checkpoints load and work fine with stable-diffusion.cpp, so I figured I would make a PR in case there are other .ckpt files out there that need this same fixup to load, and maybe it helps somebody else.
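
For illustration, here is a minimal sketch of the size-based autodetection idea described above. The names (`tensor_dtype`, `guess_dtype_from_size`) are hypothetical and not the actual PickleTensorReader API; it just shows the assumption that the byte size of the data entry, divided by the element count from the pickle header, tells you whether the storage is f16 or f32 when no storage-type GLOBAL is present.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical enum for illustration only.
enum class tensor_dtype { F32, F16, UNKNOWN };

// n_elements: product of the tensor's shape dims read from the pickle header.
// data_bytes: size of the corresponding tensor data entry in the .ckpt archive.
static tensor_dtype guess_dtype_from_size(size_t n_elements, size_t data_bytes) {
    if (n_elements == 0) {
        return tensor_dtype::UNKNOWN;
    }
    if (data_bytes == n_elements * sizeof(float)) {      // 4 bytes per element -> f32
        return tensor_dtype::F32;
    }
    if (data_bytes == n_elements * sizeof(uint16_t)) {   // 2 bytes per element -> f16
        return tensor_dtype::F16;
    }
    // Neither layout matches; fall back to whatever the pickle header implies.
    return tensor_dtype::UNKNOWN;
}
```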