From 79351e75c51af6230679ff38868f049be1553dec Mon Sep 17 00:00:00 2001
From: Dana Robinson
Date: Tue, 26 Mar 2024 15:21:34 -0700
Subject: [PATCH] Remove an error check regarding large cache objects

In PR#4231 an assert() call was converted to a normal HDF5 error check.
It turns out that the original assert() was added by a developer as a
way of being alerted that large cache objects existed instead of as a
guard against incorrect behavior, making it unnecessary in either debug
or release builds. The error check has been removed.
---
 release_docs/RELEASE.txt | 5 +----
 src/H5Centry.c           | 8 --------
 2 files changed, 1 insertion(+), 12 deletions(-)

diff --git a/release_docs/RELEASE.txt b/release_docs/RELEASE.txt
index d20574dc83b..9a383acbfb4 100644
--- a/release_docs/RELEASE.txt
+++ b/release_docs/RELEASE.txt
@@ -700,10 +700,7 @@ Bug Fixes since HDF5-1.14.0 release
       builds. In HDF5 1.14.4, this can happen if you create a very large
       number of links in an old-style group that uses local heaps.
 
-      The library will now emit a normal error when it tries to load a
-      metadata object that is too large.
-
-      Partially addresses GitHub #3762
+      Fixes GitHub #3762
 
     - Fixed an issue with the Subfiling VFD and multiple opens of a
       file
diff --git a/src/H5Centry.c b/src/H5Centry.c
index a799c4bb97d..6883e897186 100644
--- a/src/H5Centry.c
+++ b/src/H5Centry.c
@@ -1288,14 +1288,6 @@ H5C__load_entry(H5F_t *f,
 
     H5C__RESET_CACHE_ENTRY_STATS(entry);
 
-    /* This is a temporary fix for a problem identified in GitHub #3762, where
-     * it looks like a local heap entry can grow to a size that is larger
-     * than the metadata cache will allow. This doesn't fix the underlying
-     * problem, but it at least prevents the library from crashing.
-     */
-    if (entry->size >= H5C_MAX_ENTRY_SIZE)
-        HGOTO_ERROR(H5E_CACHE, H5E_BADVALUE, NULL, "cache entry size is too large");
-
     ret_value = thing;
 
 done: