For large network payloads (we're hearing of cases with 75k+ rows in a single update), the receiving thread currently breaks each of these rows out into a separate byte[] allocation. The goal is to replace those copies with slices into the existing buffers.
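A minimal sketch of the before/after, assuming a C# client (going by the byte[] syntax in this issue); `RowParser`, `payload`, and `rowBounds` are hypothetical names, not the actual SDK API:

```csharp
using System;
using System.Collections.Generic;

static class RowParser
{
    // Before: each row is copied into its own byte[] allocation,
    // so a 75k-row update means 75k+ small allocations.
    public static List<byte[]> ParseWithCopies(
        byte[] payload, List<(int Offset, int Length)> rowBounds)
    {
        var rows = new List<byte[]>(rowBounds.Count);
        foreach (var (offset, length) in rowBounds)
        {
            var copy = new byte[length];
            Buffer.BlockCopy(payload, offset, copy, 0, length);
            rows.Add(copy);
        }
        return rows;
    }

    // After: each row is a zero-copy slice (view) into the payload buffer.
    // The payload must stay alive for as long as the slices are in use.
    public static List<ReadOnlyMemory<byte>> ParseWithSlices(
        byte[] payload, List<(int Offset, int Length)> rowBounds)
    {
        var rows = new List<ReadOnlyMemory<byte>>(rowBounds.Count);
        foreach (var (offset, length) in rowBounds)
            rows.Add(payload.AsMemory(offset, length)); // no copy, just a view
        return rows;
    }
}
```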
The tricky part is storage inside caches, where the row's byte[] is used as a key; one solution is to have table types implement hash and equality operators so that rows can serve as their own keys.
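A sketch of that idea, again assuming C#; `RowKey` is a hypothetical wrapper, and the FNV-1a hash is just one stable content hash that would work here:

```csharp
using System;
using System.Collections.Generic;

readonly struct RowKey : IEquatable<RowKey>
{
    private readonly ReadOnlyMemory<byte> _bytes;

    public RowKey(ReadOnlyMemory<byte> bytes) => _bytes = bytes;

    // Content equality: two keys match if their slices hold the same
    // bytes, regardless of which network buffer backs them.
    public bool Equals(RowKey other) =>
        _bytes.Span.SequenceEqual(other._bytes.Span);

    public override bool Equals(object obj) =>
        obj is RowKey other && Equals(other);

    // FNV-1a over the slice contents, so hashing never needs a copy.
    public override int GetHashCode()
    {
        unchecked
        {
            var hash = (int)2166136261;
            foreach (var b in _bytes.Span)
                hash = (hash ^ b) * 16777619;
            return hash;
        }
    }
}
```

With something like this, a cache can be keyed as `Dictionary<RowKey, TRow>`: slices taken from different buffers resolve to the same entry whenever the underlying row bytes match, which is exactly what the byte[] key provided, minus the per-row allocation.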