Revert "Free all slabs on region reset"
This reverts commit 67d7ab4.

The goal of the reverted commit was to fix flaky failures of tarantool
tests that check the amount of memory used by a fiber:

 | fiber.info()[fiber.self().id()].memory.used

It also attempted to handle the situation when a fiber holds some
amount of memory that is not used in any way. The upper limit of such
memory is controlled by a threshold in tarantool's fiber_gc() function
(128 KiB at the moment):

 | void
 | fiber_gc(void)
 | {
 |         if (region_used(&fiber()->gc) < 128 * 1024) {
 |                 region_reset(&fiber()->gc);
 |                 return;
 |         }
 |
 |         region_free(&fiber()->gc);
 | }
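
The reset-or-free decision above can be sketched with a toy model. This is a minimal illustration only; the toy_* names and the single `cached` counter are invented here and are not part of the small library or tarantool API:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy model of the fiber_gc() decision quoted above. The real region
 * keeps actual slabs; here a single `cached` counter stands in for
 * the memory that a reset leaves allocated.
 */
#define GC_THRESHOLD (128 * 1024)

struct toy_region {
	size_t used;   /* bytes currently allocated from the region */
	size_t cached; /* bytes still held after the last gc pass */
};

/* Models region_reset(): mark memory unused but keep it allocated. */
static void
toy_region_reset(struct toy_region *r)
{
	r->cached = r->used;
	r->used = 0;
}

/* Models region_free(): actually release the memory. */
static void
toy_region_free(struct toy_region *r)
{
	r->cached = 0;
	r->used = 0;
}

static void
toy_fiber_gc(struct toy_region *r)
{
	if (r->used < GC_THRESHOLD) {
		toy_region_reset(r);
		return;
	}
	toy_region_free(r);
}
```

In this model a fiber that allocated less than 128 KiB still holds that memory after gc, which is the kind of residual usage the flaky tests observe.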

The reverted commit, however, leads to significant performance
degradation on certain workloads (see #4736). So the reversion fixes
the performance degradation and reopens the problem with the tests,
which is tracked in #4750.

Related to #12
Related to tarantool/tarantool#4750
Fixes tarantool/tarantool#4736
Totktonada committed Jan 29, 2020
1 parent 50cb787 commit 4e734e6
Showing 1 changed file: small/region.h (4 additions, 14 deletions)
@@ -156,16 +156,6 @@ region_reserve(struct region *region, size_t size)
 						       slab.next_in_list);
 		if (size <= rslab_unused(slab))
 			return (char *) rslab_data(slab) + slab->used;
-		/* Try to get a slab from the region cache. */
-		slab = rlist_last_entry(&region->slabs.slabs,
-					struct rslab,
-					slab.next_in_list);
-		if (slab->used == 0 && size <= rslab_unused(slab)) {
-			/* Move this slab to the head. */
-			slab_list_del(&region->slabs, &slab->slab, next_in_list);
-			slab_list_add(&region->slabs, &slab->slab, next_in_list);
-			return (char *) rslab_data(slab);
-		}
 	}
 	return region_reserve_slow(region, size);
 }
@@ -222,14 +212,14 @@ region_aligned_alloc(struct region *region, size_t size, size_t alignment)

 /**
  * Mark region as empty, but keep the blocks.
- * Do not change the first slab and use previous slabs as a cache to
- * use for future allocations.
  */
 static inline void
 region_reset(struct region *region)
 {
-	struct rslab *slab;
-	rlist_foreach_entry(slab, &region->slabs.slabs, slab.next_in_list) {
+	if (! rlist_empty(&region->slabs.slabs)) {
+		struct rslab *slab = rlist_first_entry(&region->slabs.slabs,
+						       struct rslab,
+						       slab.next_in_list);
 		region->slabs.stats.used -= slab->used;
 		slab->used = 0;
 	}
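
The behavior restored by this revert — region_reset() touching only the first (most recently filled) slab while older slabs keep their counters — can be sketched as follows. The toy_* names are illustrative; the real code walks an rlist of struct rslab:

```c
#include <assert.h>
#include <stddef.h>

/* A slab with a byte counter; the head of the list is the slab that
 * the region currently fills. Invented names, for illustration only. */
struct toy_slab {
	size_t used;
	struct toy_slab *next;
};

struct toy_slab_region {
	struct toy_slab *head; /* most recently filled slab */
	size_t stats_used;     /* sum of `used` over all slabs */
};

/* Models the restored region_reset(): reset only the first slab;
 * older slabs keep their `used` counters and their memory. */
static void
toy_slab_region_reset(struct toy_slab_region *r)
{
	if (r->head != NULL) {
		r->stats_used -= r->head->used;
		r->head->used = 0;
	}
}
```

Note that in this model, bytes in older slabs stay accounted after a reset, which is consistent with the non-zero memory.used readings the commit message describes.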
