Commit

Updates
* Update docs for spring 2024.
* Apply some more clippy lints.

Co-authored-by: Sunho park <[email protected]>
Lee-Janggun and Sunho park committed Feb 28, 2024
1 parent 7c5e85e commit 10fcc09
Showing 10 changed files with 141 additions and 78 deletions.
4 changes: 2 additions & 2 deletions homework/doc/arc.md
@@ -12,8 +12,8 @@ The skeleton code is a heavily modified version of `Arc` from the standard library
We don't recommend reading the original source code before finishing this homework
because that version is more complex.

## ***2023 fall semester notice: Use `SeqCst`***
Due to lack of time, we cannot cover the weak memory semantics.
## ***2024 spring semester notice: Use `SeqCst`***
We won't cover weak memory semantics this semester.
So you may ignore the instructions about `Ordering` below and
use `Ordering::SeqCst` for the `ordering: Ordering` parameters of `std::sync::atomic` functions.
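
For illustration only (this snippet is not part of the skeleton), the notice boils down to passing `SeqCst` wherever an atomic operation asks for an `Ordering`; the counter below is a made-up example.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static COUNTER: AtomicUsize = AtomicUsize::new(0);

fn increment() -> usize {
    // Per the notice above, don't agonize over a weaker ordering:
    // simply pass `Ordering::SeqCst` wherever an `Ordering` is expected.
    COUNTER.fetch_add(1, Ordering::SeqCst)
}
```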

2 changes: 1 addition & 1 deletion homework/doc/hash_table.md
@@ -10,7 +10,7 @@ This homework is in 2 parts:
optimize the implementation by relaxing the ordering on the atomic accesses.
We recommend working on this part after finishing the [Arc homework](./arc.md).

## ***2023 fall semester notice: Part 2 is cancelled***
## ***2024 spring semester notice: Part 2 is cancelled***
We won't cover weak memory semantics this semester.

## Part 1: Split-ordered list in sequentially consistent memory model
2 changes: 1 addition & 1 deletion homework/doc/hazard_pointer.md
@@ -15,7 +15,7 @@ This homework is in 2 parts:
optimize the implementation by relaxing the ordering.
We recommend working on this part after finishing the [Arc homework](./arc.md).

## ***2023 fall semester notice: Part 2 is cancelled***
## ***2024 spring semester notice: Part 2 is cancelled***
We won't cover weak memory semantics this semester.
To ensure that the grader works properly, you must use `Ordering::SeqCst` for all operations.
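
As a small illustration (the function below is made up, not part of the skeleton's API), operations that take two orderings, such as `compare_exchange`, should receive `SeqCst` for both the success and the failure case.

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

/// Illustrative only: try to swing `slot` from `expected` to `new`,
/// passing `SeqCst` for both the success and the failure ordering.
fn try_swap<T>(slot: &AtomicPtr<T>, expected: *mut T, new: *mut T) -> bool {
    slot.compare_exchange(expected, new, Ordering::SeqCst, Ordering::SeqCst)
        .is_ok()
}
```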

84 changes: 47 additions & 37 deletions homework/doc/list_set.md
@@ -1,55 +1,30 @@
# Concurrent set based on Lock-coupling linked list
**Implement concurrent set data structures with sorted singly linked list using (optimistic) fine-grained lock-coupling.**
**Implement a concurrent set data structure with a sorted singly linked list using fine-grained lock-coupling.**

Suppose you want a set data structure that supports concurrent operations.
The simplest possible approach would be taking a non-concurrent set implementation and protecting it with a global lock.
However, this is not a great idea if the set is accessed frequently because a thread's operation blocks all the other threads' operations.
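
For contrast, here is a minimal sketch of that global-lock baseline (the type name and the use of `BTreeSet` are illustrative, not part of the homework): every operation, including a lookup, serializes behind a single `Mutex`.

```rust
use std::collections::BTreeSet;
use std::sync::Mutex;

/// A coarse-grained "concurrent" set: one global lock around a sequential set.
struct GlobalLockSet<T: Ord> {
    inner: Mutex<BTreeSet<T>>,
}

impl<T: Ord> GlobalLockSet<T> {
    fn insert(&self, value: T) -> bool {
        // Every thread contends on this single lock, even for lookups.
        self.inner.lock().unwrap().insert(value)
    }

    fn contains(&self, value: &T) -> bool {
        self.inner.lock().unwrap().contains(value)
    }
}
```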

In this homework, you will write two implementations of the set data structure based on singly linked list protected by fine-grained locks.
In this homework, you will write an implementation of the set data structure based on a singly linked list protected by fine-grained locks.
* The nodes in the list are sorted by their value, so that one can efficiently check if a value is in the set.
* Each node has its own lock that protects its `next` field.
When traversing the list, the locks are acquired and released in a hand-over-hand manner.
This allows multiple operations to run concurrently (see the sketch below).
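
The sketch below illustrates the hand-over-hand idea for the lookup path only. It is not the skeleton's actual representation: the `Arc`-based links, the type names, and the recursive traversal are simplifications chosen so that the sketch compiles in safe Rust, whereas the homework traverses iteratively and needs `unsafe`.

```rust
use std::cmp::Ordering;
use std::sync::{Arc, Mutex, MutexGuard};

/// Each node's lock protects its *outgoing* link, as described above.
struct Node {
    value: i32,
    next: Mutex<Option<Arc<Node>>>,
}

/// A sorted set; `head` plays the role of the sentinel link.
struct FineGrainedSet {
    head: Mutex<Option<Arc<Node>>>,
}

impl FineGrainedSet {
    fn contains(&self, value: i32) -> bool {
        // Start by locking the head link, then hand the guard down the list.
        search(self.head.lock().unwrap(), value)
    }
}

/// Hand-over-hand traversal: the lock on the current link is held until the
/// lock on the next link has been acquired, so no concurrent insertion or
/// removal can slip in between the two links under inspection.
fn search(guard: MutexGuard<'_, Option<Arc<Node>>>, value: i32) -> bool {
    // Clone the next node (if any) out of the locked link so that it stays
    // alive independently of `guard`.
    let next = match guard.as_ref() {
        Some(node) => Arc::clone(node),
        None => return false, // reached the end: `value` is not in the set
    };
    match next.value.cmp(&value) {
        Ordering::Equal => true,
        Ordering::Greater => false, // the list is sorted, so we can stop early
        Ordering::Less => {
            // Acquire the next link's lock *before* releasing the current one.
            let next_guard = next.next.lock().unwrap();
            drop(guard);
            search(next_guard, value)
        }
    }
}
```

Insertion and removal follow the same locking pattern, but keep the previous link locked while splicing the new or detached node.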

You will implement two variants.
* In `list_set/fine_grained.rs`, the lock is the usual `Mutex`.
* In `list_set/optimistic_fine_grained.rs`, the lock is a `SeqLock`.
This allows read operations to run optimistically without actually locking.
Therefore, read operations are more efficient in read-most scenario, and
they do not block other operations.
However, more care must be taken to ensure correctness.
* You need to validate read operations and handle the failure.
* Do not use `ReadGuard::restart()`.
Using this correctly requires some extra synchronization
(to be covered in lock-free list lecture),
which makes `SeqLock` somewhat pointless.
The tests assume that `ReadGuard::restart()` is not used.
* Since each node can be read and modified to concurrently,
you should use atomic operations to avoid data races.
Specifically, you will use `crossbeam_epoch`'s `Atomic<T>` type
(instead of `std::sync::AtomicPtr<T>`, due to the next issue).
For `Ordering`, use `SeqCst` everywhere.
(In the later part of this course, you will learn that `Relaxed` is sufficient.
But don't use `Relaxed` in this homework, because that would break `cargo_tsan`.)
* Since a node can be removed while another thread is reading,
reclamation of the node should be deferred.
You can handle this semi-automatically with `crossbeam_epoch`.

Fill in the `todo!()`s in `list_set/{fine_grained,optimistic_fine_grained}.rs` (about 40 + 80 lines of code).
Fill in the `todo!()`s in `list_set/fine_grained.rs` (about 40 lines of code).
As in the [Linked List homework](./linked_list.md), you will need to use some unsafe operations.

## Testing
Tests are defined in `tests/list_set/{fine_grained,optimistic_fine_grained}.rs`.
Tests are defined in `tests/list_set/fine_grained.rs`.
Some of them use the common set test functions defined in `src/test/adt/set.rs`.

## Grading (100 points)
## Grading (45 points)
Run
```
./scripts/grade-list_set.sh
```

For each module `fine_grained` and `optimistic_fine_grained`,
the grader runs the tests
The grader runs the tests
with `cargo`, `cargo_asan`, and `cargo_tsan` in the following order.
1. `stress_sequential` (5 points)
1. `stress_concurrent` (10 points)
@@ -58,12 +33,6 @@ with `cargo`, `cargo_asan`, and `cargo_tsan` in the following order.

For the above tests, if a test fails in a module, then the later tests in the same module will not be run.

For `optimistic_fine_grained`, the grader additionally runs the following tests
(10 points if all of them passes, otherwise 0).
* `read_no_block`
* `iter_invalidate_end`
* `iter_invalidate_deleted`

## Submission
```sh
cd cs431/homework
@@ -72,3 +41,44 @@ ls ./target/hw-list_set.zip
```

Submit `hw-list_set.zip` to gg.

## Advanced (optional)
**Note**: This is an *optional* homework, meaning that it will not be graded and will not appear in the exam.

Consider a variant of the homework that uses `SeqLock` instead of `Mutex`.
This allows read operations to run optimistically without actually locking.
Therefore, read operations are more efficient in read-mostly scenarios, and
they do not block other operations.
However, more care must be taken to ensure correctness.
* You need to validate read operations and handle the failure.
* Do not use `ReadGuard::restart()`.
Using this correctly requires some extra synchronization
(to be covered in the lock-free list lecture),
which makes `SeqLock` somewhat pointless.
The tests assume that `ReadGuard::restart()` is not used.
* Since each node can be read and modified concurrently,
you should use atomic operations to avoid data races.
Specifically, you will use `crossbeam_epoch`'s `Atomic<T>` type
(instead of `std::sync::AtomicPtr<T>`, due to the next issue).
For `Ordering`, use `SeqCst` everywhere.
(In the later part of this course, you will learn that `Relaxed` is sufficient.
But don't use `Relaxed` in this homework, because that would break `cargo_tsan`.)
* Since a node can be removed while another thread is still reading it,
reclamation of the node should be deferred.
You can handle this semi-automatically with `crossbeam_epoch` (see the sketch after this list).
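
The following is a minimal, self-contained sketch of the `crossbeam_epoch` pattern these bullets describe: `Atomic<T>` links, `SeqCst` for every ordering, and deferred destruction of detached nodes. The `Node` layout, the function name, and the stack-like shape are assumptions for illustration; only the API usage carries over to the homework.

```rust
use crossbeam_epoch::{self as epoch, Atomic};
use std::sync::atomic::Ordering::SeqCst;

struct Node {
    value: i32,
    next: Atomic<Node>,
}

/// Detach the first node reachable from `head`, deferring its reclamation
/// until all threads that might still be reading it have left their epochs.
fn pop_front(head: &Atomic<Node>) -> Option<i32> {
    let guard = epoch::pin(); // participate in the current epoch
    loop {
        let first = head.load(SeqCst, &guard);
        // SAFETY: `first` was loaded under `guard`, so if it is non-null it
        // has not been destroyed yet.
        let first_ref = unsafe { first.as_ref() }?; // empty list => None
        let next = first_ref.next.load(SeqCst, &guard);
        // Unlink the node; retry if another thread changed `head` meanwhile.
        if head
            .compare_exchange(first, next, SeqCst, SeqCst, &guard)
            .is_ok()
        {
            let value = first_ref.value;
            // The node is now unreachable for new readers, but a concurrent
            // reader pinned earlier may still hold a reference, so destruction
            // is deferred instead of immediate.
            unsafe { guard.defer_destroy(first) };
            return Some(value);
        }
    }
}
```

The same ingredients (pinning a guard, `SeqCst` loads and compare-exchanges on `Atomic` links, and `defer_destroy` on unlinked nodes) are what the optimistic list needs; only the traversal and validation logic differ.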

**Instruction**: Fill in the `todo!()`s in `list_set/optimistic_fine_grained.rs` (about 80 lines of code).

**Testing**: Tests are defined in `tests/list_set/optimistic_fine_grained.rs`.

**Self grading**:
Run
```
./scripts/grade-optimistic_list_set.sh
```

Unlike the main homework, the grader additionally runs the following tests
(10 points if all of them pass, otherwise 0).
* `read_no_block`
* `iter_invalidate_end`
* `iter_invalidate_deleted`
31 changes: 2 additions & 29 deletions homework/scripts/grade-list_set.sh
@@ -34,8 +34,6 @@ RUNNER_TIMEOUTS=(
)
# the index of the last failed test
fine_grained_fail=${#COMMON_TESTS[@]}
optimistic_fine_grained_fail=${#COMMON_TESTS[@]}
others_failed=false

for r in "${!RUNNERS[@]}"; do
    RUNNER=${RUNNERS[r]}
@@ -53,35 +51,10 @@ for r in "${!RUNNERS[@]}"; do
                fi
            done
        fi
        if [ $t -lt $optimistic_fine_grained_fail ]; then
            echo "Testing optimistic_fine_grained $TEST_NAME with $RUNNER, timeout $TIMEOUT..."
            TESTS=("--test list_set -- --exact optimistic_fine_grained::$TEST_NAME")
            for ((i = 0; i < REPS; i++)); do
                if [ $(run_tests) -ne 0 ]; then
                    optimistic_fine_grained_fail=$t
                    break
                fi
            done
        fi
    done

if [ "$others_failed" == false ]; then
echo "Running additional tests for optimistic_fine_grained with $RUNNER, timeout $TIMEOUT..."
TESTS=(
"--test list_set -- --exact optimistic_fine_grained::read_no_block"
"--test list_set -- --exact optimistic_fine_grained::iter_invalidate_end"
"--test list_set -- --exact optimistic_fine_grained::iter_invalidate_deleted"
)
if [ $(run_tests) -ne 0 ]; then
others_failed=true
fi
fi
done

SCORES=( 0 5 15 30 45 )
SCORE=$(( SCORES[fine_grained_fail] + SCORES[optimistic_fine_grained_fail] ))
if [ "$others_failed" == false ]; then
    SCORE=$(( SCORE + 10 ))
fi
SCORE=$(( SCORES[fine_grained_fail] ))

echo "Score: $SCORE / 100"
echo "Score: $SCORE / 45"
76 changes: 76 additions & 0 deletions homework/scripts/grade-optimistic_list_set.sh
@@ -0,0 +1,76 @@
#!/usr/bin/env bash
# set -e
set -uo pipefail
IFS=$'\n\t'

# Imports library.
BASEDIR=$(dirname "$0")
source $BASEDIR/grade-utils.sh

run_linters || exit 1

export RUST_TEST_THREADS=1


REPS=3
COMMON_TESTS=(
"stress_sequential"
"stress_concurrent"
"log_concurrent"
"iter_consistent"
)
RUNNERS=(
"cargo --release"
"cargo_asan"
"cargo_asan --release"
"cargo_tsan --release"
)
# timeout for each RUNNER
RUNNER_TIMEOUTS=(
    30s
    180s
    180s
    180s
)
# the index of the last failed test
optimistic_fine_grained_fail=${#COMMON_TESTS[@]}
others_failed=false

for r in "${!RUNNERS[@]}"; do
    RUNNER=${RUNNERS[r]}
    TIMEOUT=${RUNNER_TIMEOUTS[r]}
    for t in "${!COMMON_TESTS[@]}"; do
        TEST_NAME=${COMMON_TESTS[t]}
        # run only if no test has failed yet
        if [ $t -lt $optimistic_fine_grained_fail ]; then
            echo "Testing optimistic_fine_grained $TEST_NAME with $RUNNER, timeout $TIMEOUT..."
            TESTS=("--test list_set -- --exact optimistic_fine_grained::$TEST_NAME")
            for ((i = 0; i < REPS; i++)); do
                if [ $(run_tests) -ne 0 ]; then
                    optimistic_fine_grained_fail=$t
                    break
                fi
            done
        fi
    done

    if [ "$others_failed" == false ]; then
        echo "Running additional tests for optimistic_fine_grained with $RUNNER, timeout $TIMEOUT..."
        TESTS=(
            "--test list_set -- --exact optimistic_fine_grained::read_no_block"
            "--test list_set -- --exact optimistic_fine_grained::iter_invalidate_end"
            "--test list_set -- --exact optimistic_fine_grained::iter_invalidate_deleted"
        )
        if [ $(run_tests) -ne 0 ]; then
            others_failed=true
        fi
    fi
done

SCORES=( 0 5 15 30 45 )
SCORE=$(( SCORES[optimistic_fine_grained_fail] ))
if [ "$others_failed" == false ]; then
    SCORE=$(( SCORE + 10 ))
fi

echo "Score: $SCORE / 55"
1 change: 0 additions & 1 deletion homework/src/hash_table/growable_array.rs
@@ -2,7 +2,6 @@
use core::fmt::Debug;
use core::mem::{self, ManuallyDrop};
use core::ops::{Deref, DerefMut};
use core::sync::atomic::Ordering::*;
use crossbeam_epoch::{Atomic, Guard, Owned, Shared};

6 changes: 6 additions & 0 deletions homework/src/hazard_pointer/hazard.rs
@@ -146,6 +146,12 @@ impl HazardBag {
    }
}

impl Default for HazardBag {
    fn default() -> Self {
        Self::new()
    }
}

impl Drop for HazardBag {
    /// Frees all slots.
    fn drop(&mut self) {
1 change: 0 additions & 1 deletion homework/src/linked_list.rs
@@ -1,6 +1,5 @@
use std::cmp::Ordering;
use std::fmt;
use std::iter::FromIterator;
use std::marker::PhantomData;
use std::mem;
use std::ptr;
12 changes: 6 additions & 6 deletions homework/tests/growable_array.rs
@@ -79,33 +79,33 @@ mod stack {
    use crossbeam_epoch::{Atomic, Guard, Owned, Shared};

    #[derive(Debug)]
    pub(crate) struct Stack<T> {
    pub(super) struct Stack<T> {
        head: Atomic<Node<T>>,
    }

    impl<T> Stack<T> {
        pub(crate) fn new() -> Self {
        pub(super) fn new() -> Self {
            Self {
                head: Atomic::null(),
            }
        }
    }

    #[derive(Debug)]
    pub(crate) struct Node<T> {
    pub(super) struct Node<T> {
        data: T,
        next: UnsafeCell<*const Node<T>>,
    }

    impl<T> Node<T> {
        pub(crate) fn new(data: T) -> Self {
        pub(super) fn new(data: T) -> Self {
            Self {
                data,
                next: UnsafeCell::new(ptr::null()),
            }
        }

        pub(crate) fn into_inner(self) -> T {
        pub(super) fn into_inner(self) -> T {
            self.data
        }
    }
@@ -130,7 +130,7 @@ mod stack {
        ///
        /// - A single `n` should only be pushed into the stack once.
        /// - After the push, `n` should not be used again.
        pub(crate) unsafe fn push_node<'g>(&self, n: Shared<'g, Node<T>>, guard: &'g Guard) {
        pub(super) unsafe fn push_node<'g>(&self, n: Shared<'g, Node<T>>, guard: &'g Guard) {
            let mut head = self.head.load(Relaxed, guard);
            loop {
                unsafe { *n.deref().next.get() = head.as_raw() };
