Use new crossbeam skiplist version to skip heap allocation in point read #98
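The PR title and commit b9c1bcd ("perf: upgrade crossbeam-skiplist to skip heap allocation in get()") refer to memtable point reads going through crossbeam-skiplist's SkipMap. As a rough, self-contained illustration (not the crate's actual internal key types; plain Vec<u8> keys and the map shown here are assumptions of this sketch), a borrowed-key lookup avoids allocating an owned probe key:

use crossbeam_skiplist::SkipMap;

fn main() {
    let map: SkipMap<Vec<u8>, u64> = SkipMap::new();
    map.insert(b"hello".to_vec(), 1);

    // The probe key is a borrowed &[u8]: no owned Vec<u8> has to be
    // heap-allocated just to perform the point read.
    let entry = map.get(b"hello".as_slice());
    assert_eq!(entry.map(|e| *e.value()), Some(1));
}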

Draft: wants to merge 127 commits into base: main
Changes from 1 commit
Commits (127)
5d5a8ce
KeyRange::empty
marvin-j97 Sep 21, 2024
521e84c
import
marvin-j97 Sep 21, 2024
a782200
import
marvin-j97 Sep 21, 2024
eeb51a1
#51
marvin-j97 Sep 21, 2024
64d6c94
Merge branch '2.1.0' into 2.2.0
marvin-j97 Sep 27, 2024
888b11f
also use full index for L1
marvin-j97 Sep 28, 2024
22b33fd
Merge branch 'main' into 2.2.0
marvin-j97 Oct 9, 2024
369ab94
Merge branch 'main' into perf/l0-full-index
marvin-j97 Oct 22, 2024
d7eae1e
Merge branch 'main' into perf/l0-full-index
marvin-j97 Oct 26, 2024
df33c04
Merge branch 'main' into perf/l0-full-index
marvin-j97 Oct 26, 2024
d18b4cb
fix bench
marvin-j97 Nov 2, 2024
6b34e13
refactor: bench
marvin-j97 Nov 2, 2024
58635c7
add test case
marvin-j97 Nov 2, 2024
3f1482f
closes #62
marvin-j97 Nov 2, 2024
36baa11
Merge branch 'main' into perf/lazy-range-eval
marvin-j97 Nov 2, 2024
417d243
Merge branch 'main' into perf/lazy-range-eval
marvin-j97 Nov 2, 2024
a53fce2
Merge branch 'main' into perf/l0-full-index
marvin-j97 Nov 2, 2024
124b9ff
change size tiered base size to 64M
marvin-j97 Nov 6, 2024
224558c
perf: make Memtable::get_highest_seqno O(1)
marvin-j97 Nov 10, 2024
b1d1419
add test cases
marvin-j97 Nov 10, 2024
dc4b4c6
Merge branch 'main' into perf/lazy-range-eval
marvin-j97 Nov 10, 2024
0c3df6b
Merge branch 'main' into perf/lazy-range-eval
marvin-j97 Nov 13, 2024
ef20ff4
Merge branch 'main' into perf/l0-full-index
marvin-j97 Nov 13, 2024
9043abd
Merge branch 'main' into perf/l0-full-index
marvin-j97 Nov 17, 2024
cbbc9e9
Merge branch 'main' into perf/lazy-range-eval
marvin-j97 Nov 20, 2024
dd19d96
remove unused memtable clear method
marvin-j97 Nov 20, 2024
5cf470d
Revert "remove unused memtable clear method"
marvin-j97 Nov 20, 2024
f5e2ccb
fix: Memtable::clear
marvin-j97 Nov 20, 2024
540089e
Merge pull request #76 from fjall-rs/perf/lazy-range-eval
marvin-j97 Nov 21, 2024
cec2878
Merge remote-tracking branch 'origin/main'
marvin-j97 Nov 21, 2024
7e911ba
Merge branch 'main' into perf/l0-full-index
marvin-j97 Nov 22, 2024
0e02551
Merge branch 'main' into perf/l0-full-index
marvin-j97 Nov 22, 2024
50badf7
fix: range upper bound
marvin-j97 Nov 22, 2024
3339646
fix: workaround for 1.74 rust
marvin-j97 Nov 22, 2024
f808294
fix(full block index): can't debug assert len
marvin-j97 Nov 22, 2024
d2cedba
reimplement segment verify
marvin-j97 Nov 23, 2024
5de0290
recover L0/L1 with full block index
marvin-j97 Nov 23, 2024
4be47cc
stage missing file
marvin-j97 Nov 23, 2024
9671f3d
refactor
marvin-j97 Nov 23, 2024
fa766d0
closes #51
marvin-j97 Nov 23, 2024
9674bd0
wip
marvin-j97 Nov 26, 2024
f04b7a8
rename
marvin-j97 Nov 26, 2024
489ffd4
refactor
marvin-j97 Nov 26, 2024
06f105e
Merge pull request #80 from fjall-rs/perf/l0-full-index
marvin-j97 Nov 26, 2024
8d96736
fix: leveled compaction
marvin-j97 Nov 26, 2024
d543dad
filter out 0 compactions
marvin-j97 Nov 26, 2024
9527036
wip
marvin-j97 Nov 26, 2024
f376442
Merge pull request #82 from fjall-rs/compaction/leveled/level_base_size
marvin-j97 Nov 26, 2024
95bdeb0
preliminary parallel compactions
marvin-j97 Nov 26, 2024
2c8da7d
Merge remote-tracking branch 'origin/main' into leveled-parallel
marvin-j97 Nov 27, 2024
4dff1d2
refactor
marvin-j97 Nov 27, 2024
85a8151
add comments
marvin-j97 Nov 27, 2024
9081447
refactor: rename
marvin-j97 Nov 27, 2024
afd3287
Merge branch 'main' into 2.5.0
marvin-j97 Nov 28, 2024
00604fb
Merge branch '2.5.0' into leveled-parallel
marvin-j97 Nov 28, 2024
f49d476
clippy
marvin-j97 Nov 28, 2024
e661a02
clippy
marvin-j97 Nov 28, 2024
bcdbff1
change L0 bloom filter FPR on flush as well
marvin-j97 Nov 29, 2024
ba438e6
Merge branch 'main' into 2.5.0
marvin-j97 Nov 29, 2024
58c87d3
Merge branch 'main' into leveled-parallel
marvin-j97 Nov 29, 2024
b2492b4
Merge branch '2.5.0' into leveled-parallel
marvin-j97 Nov 29, 2024
27b773b
take compacted bytes into account
marvin-j97 Nov 30, 2024
c47e194
Merge branch 'main' into 2.5.0
marvin-j97 Nov 30, 2024
05beaa8
Merge branch '2.5.0' into leveled-parallel
marvin-j97 Nov 30, 2024
6dd9abf
Update memtable.rs
marvin-j97 Nov 30, 2024
d562c36
Merge branch '2.5.0' into leveled-parallel
marvin-j97 Nov 30, 2024
ced984a
Merge branch 'main' into 2.5.0
marvin-j97 Nov 30, 2024
87f642e
Merge branch '2.5.0' into leveled-parallel
marvin-j97 Nov 30, 2024
5734108
Merge branch 'main' into 2.5.0
marvin-j97 Dec 1, 2024
77259f8
Merge branch '2.5.0' into leveled-parallel
marvin-j97 Dec 1, 2024
e054c8c
Merge branch 'main' into 2.5.0
marvin-j97 Dec 2, 2024
d9c6c8e
Merge branch '2.5.0' into leveled-parallel
marvin-j97 Dec 2, 2024
168d4e0
better zero copy support from types that implement Into<Slice>
carlsverre Dec 4, 2024
12413c3
Merge branch 'main' into 2.5.0
marvin-j97 Dec 4, 2024
d726767
Merge branch '2.5.0' into leveled-parallel
marvin-j97 Dec 4, 2024
03c68c7
Merge branch 'main' into 2.5.0
marvin-j97 Dec 4, 2024
faa3251
Merge branch '2.5.0' into leveled-parallel
marvin-j97 Dec 4, 2024
2d675ec
wip
marvin-j97 Dec 4, 2024
315d558
remove some ? in compaction worker
marvin-j97 Dec 4, 2024
c7ebaec
replace another ? in compaction worker
marvin-j97 Dec 4, 2024
37bf57d
wip
marvin-j97 Dec 4, 2024
d3e2f77
Improve HiddenSet ergonomics and fix a bug in the leveled compaction strategy where it didn't consider hidden segments when computing the minimal compaction job
carlsverre Dec 4, 2024
d98c0eb
pass hidden set directly to pick_minimal_computation, but still keep …
carlsverre Dec 4, 2024
186fd9f
Update hidden_set.rs
marvin-j97 Dec 4, 2024
5c5fe26
Merge branch 'leveled-parallel' into leveled-parallel
marvin-j97 Dec 4, 2024
ec3e1b9
update value-log
marvin-j97 Dec 4, 2024
891011c
Update mod.rs
marvin-j97 Dec 4, 2024
1f6cbd3
Merge pull request #88 from carlsverre/leveled-parallel
marvin-j97 Dec 4, 2024
eeab262
cleanup
marvin-j97 Dec 5, 2024
27e7b4d
refactor
marvin-j97 Dec 5, 2024
46f2d83
fmt
marvin-j97 Dec 5, 2024
c219ff2
wip
marvin-j97 Dec 5, 2024
0cd43e6
refactor
marvin-j97 Dec 5, 2024
69b4038
clippy
marvin-j97 Dec 5, 2024
d4246ae
wip
marvin-j97 Dec 5, 2024
95ccb2b
refactor
marvin-j97 Dec 5, 2024
1ee2272
remove unneeded struct
marvin-j97 Dec 5, 2024
dde29cf
wip
marvin-j97 Dec 5, 2024
5806a14
refactor
marvin-j97 Dec 5, 2024
85a3ff7
refactor
marvin-j97 Dec 5, 2024
38f3648
Merge pull request #83 from fjall-rs/leveled-parallel
marvin-j97 Dec 5, 2024
3dcaf4f
Merge branch 'main' into 2.5.0
marvin-j97 Dec 7, 2024
fb98138
compaction: remove overshoot checking again
marvin-j97 Dec 7, 2024
4ada35b
add merge bench
marvin-j97 Dec 7, 2024
4c2754c
adjust bench
marvin-j97 Dec 7, 2024
26c6072
add mvcc stream bench
marvin-j97 Dec 7, 2024
f4b745c
wip
marvin-j97 Dec 7, 2024
5c02628
catch another ? in compaction worker, #87
marvin-j97 Dec 7, 2024
f46b6fe
fix(MultiWriter): make sure KV versions cannot span segments
marvin-j97 Dec 9, 2024
ffed629
doc
marvin-j97 Dec 9, 2024
5dca9e5
refactor
marvin-j97 Dec 9, 2024
615ce00
fix
marvin-j97 Dec 9, 2024
96ead47
remove dbg log
marvin-j97 Dec 10, 2024
a9c883e
doc: update internal docs
marvin-j97 Dec 11, 2024
8332b1a
refactor: segment writer
marvin-j97 Dec 12, 2024
88575ef
move module
marvin-j97 Dec 14, 2024
b9c1bcd
perf: upgrade crossbeam-skiplist to skip heap allocation in get()
marvin-j97 Dec 14, 2024
2d29e55
perf: simplify segment point read fast path
marvin-j97 Dec 14, 2024
164adb5
revert version change
marvin-j97 Dec 14, 2024
bd52297
Merge remote-tracking branch 'origin/main' into 2.5.0
marvin-j97 Dec 14, 2024
122c77c
Merge branch 'main' into 2.5.0
marvin-j97 Dec 14, 2024
296e64f
Merge pull request #85 from carlsverre/main
marvin-j97 Dec 14, 2024
26847b3
2.5.0-pre.0
marvin-j97 Dec 14, 2024
ab28434
perf: remove heap allocation in snapshot point read path
marvin-j97 Dec 14, 2024
9113cb4
perf: specialize Segment reader for snapshot point reads
marvin-j97 Dec 17, 2024
8175914
revert crossbeam-skiplist for now
marvin-j97 Dec 20, 2024
e1a1ce5
perf: skip heap alloc in Memtable::get
marvin-j97 Dec 20, 2024
Improve HiddenSet ergonomics and fix a bug in the leveled compaction strategy where it didn't consider hidden segments when computing the minimal compaction job.
carlsverre committed Dec 4, 2024
commit d3e2f775c5e55903a396ce3a5b29d9a4abe1329c
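For orientation before the diff: a minimal, self-contained sketch of the HiddenSet API this commit introduces and of the validity check that fixes the bug. Here, std::collections::HashSet and a u64 SegmentId stand in for the crate's xxh3-backed HashSet alias and its real SegmentId type (both stand-ins are assumptions of this sketch):

use std::collections::HashSet;

// Stand-in for the crate's SegmentId alias.
type SegmentId = u64;

/// Tracks segments currently being consumed by a running compaction,
/// so concurrent compaction threads cannot claim the same segments.
#[derive(Default)]
struct HiddenSet {
    set: HashSet<SegmentId>,
}

impl HiddenSet {
    /// Marks segments as hidden (claimed by a compaction).
    fn hide<T: IntoIterator<Item = SegmentId>>(&mut self, keys: T) {
        self.set.extend(keys);
    }

    /// Unhides segments once their compaction finishes.
    fn show<T: IntoIterator<Item = SegmentId>>(&mut self, keys: T) {
        for key in keys {
            self.set.remove(&key);
        }
    }

    fn contains(&self, key: SegmentId) -> bool {
        self.set.contains(&key)
    }
}

fn main() {
    let mut hidden = HiddenSet::default();

    // A running compaction claims segments 1..=3.
    hidden.hide([1, 2, 3]);

    // The bug fix: a candidate job must be rejected if ANY of its
    // segments is already hidden, otherwise two compactions could
    // operate on the same segment.
    let candidate: Vec<SegmentId> = vec![3, 4];
    assert!(candidate.iter().any(|id| hidden.contains(*id)));

    // Once the first compaction completes, its segments reappear.
    hidden.show([1, 2, 3]);
    assert!(!candidate.iter().any(|id| hidden.contains(*id)));
}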
43 changes: 28 additions & 15 deletions src/compaction/leveled.rs
@@ -6,7 +6,7 @@ use super::{Choice, CompactionStrategy, Input as CompactionInput};
 use crate::{
     config::Config,
     key_range::KeyRange,
-    level_manifest::{level::Level, HiddenSet, LevelManifest},
+    level_manifest::{level::Level, LevelManifest},
     segment::Segment,
     HashSet, SegmentId,
 };
@@ -20,21 +20,41 @@ fn aggregate_key_range(segments: &[Segment]) -> KeyRange {
 fn pick_minimal_compaction(
     curr_level: &Level,
     next_level: &Level,
-    hidden_set: &HiddenSet,
+    levels: &LevelManifest,
 ) -> Option<(HashSet<SegmentId>, bool)> {
     // assert!(curr_level.is_disjoint, "Lx is not disjoint");
     // assert!(next_level.is_disjoint, "Lx+1 is not disjoint");
 
     let mut choices = vec![];
 
+    let mut add_choice =
+        |write_amp: f32, segment_ids: HashSet<SegmentId>, can_trivial_move: bool| {
+            let mut valid_choice = true;
+
+            // IMPORTANT: Compaction is blocked because of other
+            // on-going compaction
+            valid_choice &= !segment_ids.iter().any(|x| levels.segment_hidden(*x));
+
+            // NOTE: Keep compactions with 25 or less segments
+            // to make compactions not too large
+            //
+            // TODO: ideally, if a level has a lot of compaction debt
+            // compactions could be parallelized as long as they don't overlap in key range
+            valid_choice &= segment_ids.len() <= 25;
+
+            if valid_choice {
+                choices.push((write_amp, segment_ids, can_trivial_move));
+            }
+        };
+
     for size in 1..=next_level.len() {
         let windows = next_level.windows(size);
 
         for window in windows {
             if window
                 .iter()
                 .map(|x| x.metadata.id)
-                .any(|x| hidden_set.contains(&x))
+                .any(|x| levels.segment_hidden(x))
             {
                 // IMPORTANT: Compaction is blocked because of other
                 // on-going compaction
@@ -72,7 +92,7 @@ fn pick_minimal_compaction(
             if curr_level_pull_in
                 .iter()
                 .map(|x| x.metadata.id)
-                .any(|x| hidden_set.contains(&x))
+                .any(|x| levels.segment_hidden(x))
             {
                 // IMPORTANT: Compaction is blocked because of other
                 // on-going compaction
@@ -93,7 +113,7 @@ fn pick_minimal_compaction(
 
                 let write_amp = (next_level_size as f32) / (curr_level_size as f32);
 
-                choices.push((write_amp, segment_ids, false));
+                add_choice(write_amp, segment_ids, false);
             }
         }
     }
@@ -108,18 +128,11 @@ fn pick_minimal_compaction(
             let key_range = aggregate_key_range(window);
 
             if next_level.overlapping_segments(&key_range).next().is_none() {
-                choices.push((0.0, segment_ids, true));
+                add_choice(0.0, segment_ids, true);
             }
         }
     }
 
-    // NOTE: Keep compactions with 25 or less segments
-    // to make compactions not too large
-    //
-    // TODO: ideally, if a level has a lot of compaction debt
-    // compactions could be parallelized as long as they don't overlap in key range
-    choices.retain(|(_, segments, _)| segments.len() <= 25);
-
     let minimum_effort_choice = choices
         .into_iter()
         .min_by(|a, b| a.0.partial_cmp(&b.0).unwrap_or(std::cmp::Ordering::Equal));
@@ -216,7 +229,7 @@ impl CompactionStrategy for Strategy {
             .iter()
             // NOTE: Take bytes that are already being compacted into account,
             // otherwise we may be overcompensating
-            .filter(|x| !levels.hidden_set.contains(&x.metadata.id))
+            .filter(|x| !levels.segment_hidden(x.metadata.id))
             .map(|x| x.metadata.file_size)
             .sum();
 
@@ -230,7 +243,7 @@ impl CompactionStrategy for Strategy {
         };
 
         let Some((segment_ids, can_trivial_move)) =
-            pick_minimal_compaction(level, next_level, &levels.hidden_set)
+            pick_minimal_compaction(level, next_level, levels)
         else {
             break;
         };
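Worth noting from the hunk above: pick_minimal_compaction ranks candidate jobs by write amplification (next_level_size divided by curr_level_size) and takes the minimum, and because f32 is only PartialOrd, the comparison falls back to Ordering::Equal for unordered values such as NaN. A standalone sketch of that selection step, with made-up numbers:

use std::cmp::Ordering;

fn main() {
    // (write_amp, segment count) pairs for hypothetical candidate jobs.
    let choices = vec![(2.5_f32, 4_usize), (0.0, 1), (1.25, 2)];

    // Same selection pattern as in pick_minimal_compaction: f32 is only
    // PartialOrd, so NaN comparisons fall back to Ordering::Equal.
    let best = choices
        .into_iter()
        .min_by(|a, b| a.0.partial_cmp(&b.0).unwrap_or(Ordering::Equal));

    // A trivial move (write amplification 0.0) wins.
    assert_eq!(best, Some((0.0, 1)));
}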
2 changes: 1 addition & 1 deletion src/compaction/tiered.rs
@@ -74,7 +74,7 @@ impl CompactionStrategy for Strategy {
             .iter()
             // NOTE: Take bytes that are already being compacted into account,
             // otherwise we may be overcompensating
-            .filter(|x| !levels.hidden_set.contains(&x.metadata.id))
+            .filter(|x| !levels.segment_hidden(x.metadata.id))
             .map(|x| x.metadata.file_size)
             .sum();
 
64 changes: 33 additions & 31 deletions src/compaction/worker.rs
@@ -73,7 +73,7 @@ impl Options {
 /// This will block until the compactor is fully finished.
 pub fn do_compaction(opts: &Options) -> crate::Result<()> {
     log::trace!("compactor: acquiring levels manifest lock");
-    let mut original_levels = opts.levels.write().expect("lock is poisoned");
+    let original_levels = opts.levels.write().expect("lock is poisoned");
 
     log::trace!("compactor: consulting compaction strategy");
     let choice = opts.strategy.choose(&original_levels, &opts.config);
@@ -82,35 +82,15 @@ pub fn do_compaction(opts: &Options) -> crate::Result<()> {
 
     match choice {
         Choice::Merge(payload) => merge_segments(original_levels, opts, &payload),
-        Choice::Move(payload) => {
-            let segment_map = original_levels.get_all_segments();
-
-            original_levels.atomic_swap(|recipe| {
-                for segment_id in payload.segment_ids {
-                    if let Some(segment) = segment_map.get(&segment_id).cloned() {
-                        for level in recipe.iter_mut() {
-                            level.remove(segment_id);
-                        }
-
-                        recipe
-                            .get_mut(payload.dest_level as usize)
-                            .expect("destination level should exist")
-                            .insert(segment);
-                    }
-                }
-            })
-        }
-        Choice::Drop(payload) => {
-            drop_segments(
-                original_levels,
-                opts,
-                &payload
-                    .into_iter()
-                    .map(|x| (opts.tree_id, x).into())
-                    .collect::<Vec<_>>(),
-            )?;
-            Ok(())
-        }
+        Choice::Move(payload) => move_segments(original_levels, payload),
+        Choice::Drop(payload) => drop_segments(
+            original_levels,
+            opts,
+            &payload
+                .into_iter()
+                .map(|x| (opts.tree_id, x).into())
+                .collect::<Vec<_>>(),
+        ),
         Choice::DoNothing => {
             log::trace!("Compactor chose to do nothing");
             Ok(())
@@ -186,6 +166,28 @@ fn create_compaction_stream<'a>(
     }
 }
 
+fn move_segments(
+    mut levels: RwLockWriteGuard<'_, LevelManifest>,
+    payload: CompactionPayload,
+) -> crate::Result<()> {
+    let segment_map = levels.get_all_segments();
+
+    levels.atomic_swap(|recipe| {
+        for segment_id in payload.segment_ids {
+            if let Some(segment) = segment_map.get(&segment_id).cloned() {
+                for level in recipe.iter_mut() {
+                    level.remove(segment_id);
+                }
+
+                recipe
+                    .get_mut(payload.dest_level as usize)
+                    .expect("destination level should exist")
+                    .insert(segment);
+            }
+        }
+    })
+}
+
 #[allow(clippy::too_many_lines)]
 fn merge_segments(
     mut levels: RwLockWriteGuard<'_, LevelManifest>,
@@ -202,7 +204,7 @@ fn merge_segments(
     if payload
         .segment_ids
         .iter()
-        .any(|id| levels.hidden_set.contains(id))
+        .any(|id| levels.segment_hidden(*id))
     {
         log::warn!("Compaction task contained hidden segments, declining to run it");
         return Ok(());
36 changes: 36 additions & 0 deletions src/level_manifest/hidden_set.rs
@@ -0,0 +1,36 @@
+use crate::segment::meta::SegmentId;
+
+use crate::HashSet;
+
+#[derive(Clone)]
+pub(super) struct HiddenSet {
+    pub(crate) set: HashSet<SegmentId>,
+}
+
+impl Default for HiddenSet {
+    fn default() -> Self {
+        Self {
+            set: HashSet::with_capacity_and_hasher(10, xxhash_rust::xxh3::Xxh3Builder::new()),
+        }
+    }
+}
+
+impl HiddenSet {
+    pub(crate) fn hide<T: IntoIterator<Item = SegmentId>>(&mut self, keys: T) {
+        self.set.extend(keys);
+    }
+
+    pub(crate) fn show<T: IntoIterator<Item = SegmentId>>(&mut self, keys: T) {
+        for key in keys {
+            self.set.remove(&key);
+        }
+    }
+
+    pub(crate) fn contains(&self, key: SegmentId) -> bool {
+        self.set.contains(&key)
+    }
+
+    pub(crate) fn is_empty(&self) -> bool {
+        self.set.is_empty()
+    }
+}
60 changes: 26 additions & 34 deletions src/level_manifest/mod.rs
@@ -2,6 +2,7 @@
 // This source code is licensed under both the Apache 2.0 and MIT License
 // (found in the LICENSE-* files in the repository)
 
+mod hidden_set;
 pub mod iter;
 pub(crate) mod level;
 
@@ -21,8 +22,6 @@ use std::{
     sync::Arc,
 };
 
-pub type HiddenSet = HashSet<SegmentId>;
-
 type Levels = Vec<Arc<Level>>;
 
 /// Represents the levels of a log-structured merge tree.
@@ -38,7 +37,7 @@ pub struct LevelManifest {
     ///
     /// While consuming segments (because of compaction) they will not appear in the list of segments
     /// as to not cause conflicts between multiple compaction threads (compacting the same segments)
-    pub hidden_set: HiddenSet,
+    hidden_set: hidden_set::HiddenSet,
 
     is_disjoint: bool,
 }
@@ -62,7 +61,7 @@ impl std::fmt::Display for LevelManifest {
             #[allow(clippy::indexing_slicing)]
             for segment in level.segments.iter().take(2) {
                 let id = segment.metadata.id;
-                let is_hidden = self.hidden_set.contains(&id);
+                let is_hidden = self.segment_hidden(id);
 
                 write!(
                     f,
@@ -76,7 +75,7 @@ impl std::fmt::Display for LevelManifest {
             #[allow(clippy::indexing_slicing)]
             for segment in level.segments.iter().rev().take(2).rev() {
                 let id = segment.metadata.id;
-                let is_hidden = self.hidden_set.contains(&id);
+                let is_hidden = self.segment_hidden(id);
 
                 write!(
                     f,
@@ -88,7 +87,7 @@ impl std::fmt::Display for LevelManifest {
         } else {
             for segment in &level.segments {
                 let id = segment.metadata.id;
-                let is_hidden = self.hidden_set.contains(&id);
+                let is_hidden = self.segment_hidden(id);
 
                 write!(
                     f,
@@ -126,10 +125,7 @@ impl LevelManifest {
         let mut manifest = Self {
             path: path.as_ref().to_path_buf(),
             levels,
-            hidden_set: HashSet::with_capacity_and_hasher(
-                10,
-                xxhash_rust::xxh3::Xxh3Builder::new(),
-            ),
+            hidden_set: Default::default(),
             is_disjoint: true,
         };
         Self::write_to_disk(path, &manifest.deep_clone())?;
@@ -235,10 +231,7 @@ impl LevelManifest {
 
         let mut manifest = Self {
             levels,
-            hidden_set: HashSet::with_capacity_and_hasher(
-                10,
-                xxhash_rust::xxh3::Xxh3Builder::new(),
-            ),
+            hidden_set: Default::default(),
             path: path.as_ref().to_path_buf(),
            is_disjoint: false,
        };
@@ -379,14 +372,10 @@ impl LevelManifest {
            HashSet::with_capacity_and_hasher(self.len(), xxhash_rust::xxh3::Xxh3Builder::new());
 
         for (idx, level) in self.levels.iter().enumerate() {
-            for segment_id in level.ids() {
-                if self.hidden_set.contains(&segment_id) {
-                    // NOTE: Level count is u8
-                    #[allow(clippy::cast_possible_truncation)]
-                    let idx = idx as u8;
-
-                    output.insert(idx);
-                }
+            if level.ids().any(|id| self.segment_hidden(id)) {
+                // NOTE: Level count is u8
+                #[allow(clippy::cast_possible_truncation)]
+                output.insert(idx as u8);
             }
         }
 
@@ -400,7 +389,7 @@ impl LevelManifest {
 
         for raw_level in &self.levels {
             let mut level = raw_level.iter().cloned().collect::<Vec<_>>();
-            level.retain(|x| !self.hidden_set.contains(&x.metadata.id));
+            level.retain(|x| !self.segment_hidden(x.metadata.id));
 
             output.push(Level {
                 segments: level,
@@ -425,16 +414,16 @@ impl LevelManifest {
         output
     }
 
-    pub(crate) fn show_segments(&mut self, keys: impl Iterator<Item = SegmentId>) {
-        for key in keys {
-            self.hidden_set.remove(&key);
-        }
+    pub(crate) fn segment_hidden(&self, key: SegmentId) -> bool {
+        self.hidden_set.contains(key)
     }
 
-    pub(crate) fn hide_segments(&mut self, keys: impl Iterator<Item = SegmentId>) {
-        for key in keys {
-            self.hidden_set.insert(key);
-        }
+    pub(crate) fn hide_segments<T: IntoIterator<Item = SegmentId>>(&mut self, keys: T) {
+        self.hidden_set.hide(keys);
+    }
+
+    pub(crate) fn show_segments<T: IntoIterator<Item = SegmentId>>(&mut self, keys: T) {
+        self.hidden_set.show(keys);
     }
 }
 
@@ -464,8 +453,11 @@ impl Encode for Vec<Level> {
 #[cfg(test)]
 #[allow(clippy::expect_used)]
 mod tests {
-    use crate::{coding::Encode, level_manifest::LevelManifest, AbstractTree};
-    use std::collections::HashSet;
+    use crate::{
+        coding::Encode,
+        level_manifest::{hidden_set::HiddenSet, LevelManifest},
+        AbstractTree,
+    };
     use test_log::test;
 
     #[test]
@@ -513,7 +505,7 @@ mod tests {
     #[test]
     fn level_manifest_raw_empty() -> crate::Result<()> {
         let manifest = LevelManifest {
-            hidden_set: HashSet::default(),
+            hidden_set: HiddenSet::default(),
             levels: Vec::default(),
             path: "a".into(),
             is_disjoint: false,