Remove an error check regarding large cache objects
In PR #4231, an assert() call was converted to a normal HDF5 error
check. It turns out that the original assert() was added by a
developer as a way of being alerted that large cache objects
existed, not as a guard against incorrect behavior, making it
unnecessary in either debug or release builds.

The error check has been removed.
derobins committed Mar 26, 2024
1 parent c4d2891 commit 79351e7
Showing 2 changed files with 1 addition and 12 deletions.
5 changes: 1 addition & 4 deletions release_docs/RELEASE.txt
@@ -700,10 +700,7 @@ Bug Fixes since HDF5-1.14.0 release
       builds. In HDF5 1.14.4, this can happen if you create a very large
       number of links in an old-style group that uses local heaps.
 
-      The library will now emit a normal error when it tries to load a
-      metadata object that is too large.
-
-      Partially addresses GitHub #3762
+      Fixes GitHub #3762
 
     - Fixed an issue with the Subfiling VFD and multiple opens of a
       file
8 changes: 0 additions & 8 deletions src/H5Centry.c
@@ -1288,14 +1288,6 @@ H5C__load_entry(H5F_t *f,
 
     H5C__RESET_CACHE_ENTRY_STATS(entry);
 
-    /* This is a temporary fix for a problem identified in GitHub #3762, where
-     * it looks like a local heap entry can grow to a size that is larger
-     * than the metadata cache will allow. This doesn't fix the underlying
-     * problem, but it at least prevents the library from crashing.
-     */
-    if (entry->size >= H5C_MAX_ENTRY_SIZE)
-        HGOTO_ERROR(H5E_CACHE, H5E_BADVALUE, NULL, "cache entry size is too large");
-
     ret_value = thing;
 
 done:
