When I look at this, all of the computations in ImageHash require flattening the hash. When I was profiling (for hash_size=32), each flattening adds about 0.5 µs of overhead that could be avoided.
It might be small, but I have code where I need to subtract one hash from 10,000 stored hashes for every frame I'm processing, and that adds more than 10 ms per frame. This forces me to copy-paste the arithmetic from ImageHash into my own code (instead of just calling `some_hash - other_hash`).
I don't see any reason not to store the flattened version in the first place. In other words:
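A minimal sketch of the workaround described above, with assumed sizes (10,000 stored hashes, hash_size=32, i.e. 1024 bits each) and random placeholder data: pre-flatten the stored hashes into one 2D boolean array once, then compute all Hamming distances per frame with a single vectorized operation instead of 10,000 per-hash `__sub__` calls that each re-flatten.

```python
import numpy as np

# Assumed setup: 10,000 stored hashes, each hash_size=32 -> 1024 bits,
# pre-flattened once into a single (10000, 1024) boolean matrix.
rng = np.random.default_rng(0)
stored = rng.integers(0, 2, size=(10_000, 1024), dtype=bool)
query = rng.integers(0, 2, size=1024, dtype=bool)

# One vectorized XOR + popcount across all stored hashes: no per-hash
# flattening and no Python-level loop per comparison.
distances = np.count_nonzero(stored != query, axis=1)
print(distances.shape)  # (10000,)
```

This replicates the `some_hash - other_hash` arithmetic in bulk; broadcasting `query` against `stored` keeps the per-frame cost to a single NumPy pass.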
Hmm, I think I did that because one could make hashes of (6,4) or (4,6) in principle, and the functions allow one to compare these for convenience. Also, some databases may store hashes via _binary_array_to_hex, and hex_to_hash loses the shape information (it flattens things). Maybe we could make the .hash field flat in __init__ as you say and add a .shape property?
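The suggestion above could look roughly like this. This is only a sketch, not the library's actual code: the class name and internals are hypothetical, showing a flat `.hash` set once in `__init__` plus a `.shape` property that preserves the (6,4)-vs-(4,6) distinction.

```python
import numpy as np

class ImageHashSketch:
    """Hypothetical sketch of the proposed change, not ImageHash's real code:
    store the hash flat up front and expose the original shape separately."""

    def __init__(self, binary_array):
        arr = np.asarray(binary_array, dtype=bool)
        self._shape = arr.shape      # preserved so (6, 4) vs (4, 6) is recoverable
        self.hash = arr.flatten()    # flattened once, here, instead of per operation

    @property
    def shape(self):
        return self._shape

    def __sub__(self, other):
        # Hamming distance with no re-flattening on every call.
        if self.hash.size != other.hash.size:
            raise TypeError("ImageHashes must be of the same size")
        return np.count_nonzero(self.hash != other.hash)

a = ImageHashSketch(np.ones((4, 6), dtype=bool))
b = ImageHashSketch(np.zeros((4, 6), dtype=bool))
print(a - b)    # 24
print(a.shape)  # (4, 6)
```

Storing the shape as a property keeps hex round-tripping simple (the flat bits serialize directly) while the comparison path stays allocation-free.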