res-lock #8566

Status: Open. Wants to merge 26 commits into base branch bpf-next_base.

Commits (26):
6627a4c  adding ci files (Mar 4, 2025)
29458a7  locking: Move MCS struct definition to public header (kkdwivedi, Aug 27, 2024)
e691830  locking: Move common qspinlock helpers to a private header (kkdwivedi, Aug 15, 2024)
dd4d722  locking: Allow obtaining result of arch_mcs_spin_lock_contended (kkdwivedi, Aug 15, 2024)
6ef4049  locking: Copy out qspinlock.c to rqspinlock.c (kkdwivedi, Aug 15, 2024)
c96eb3f  rqspinlock: Add rqspinlock.h header (kkdwivedi, Aug 27, 2024)
9dd4f9c  rqspinlock: Drop PV and virtualization support (kkdwivedi, Oct 10, 2024)
0128e3e  rqspinlock: Add support for timeouts (kkdwivedi, Aug 15, 2024)
f75c6d5  rqspinlock: Hardcode cond_acquire loops for arm64 (kkdwivedi, Feb 3, 2025)
e872a97  rqspinlock: Protect pending bit owners from stalls (kkdwivedi, Aug 15, 2024)
2a0700c  rqspinlock: Protect waiters in queue from stalls (kkdwivedi, Aug 15, 2024)
04077d7  rqspinlock: Protect waiters in trylock fallback from stalls (kkdwivedi, Aug 15, 2024)
bd5a43b  rqspinlock: Add deadlock detection and recovery (kkdwivedi, Nov 19, 2024)
5c42639  rqspinlock: Add a test-and-set fallback (kkdwivedi, Feb 4, 2025)
547ca2b  rqspinlock: Add basic support for CONFIG_PARAVIRT (kkdwivedi, Oct 16, 2024)
4a31367  rqspinlock: Add helper to print a splat on timeout or deadlock (kkdwivedi, Oct 16, 2024)
c77724b  rqspinlock: Add macros for rqspinlock usage (kkdwivedi, Nov 19, 2024)
1055e3f  rqspinlock: Add entry to Makefile, MAINTAINERS (kkdwivedi, Aug 27, 2024)
8a16bc5  rqspinlock: Add locktorture support (kkdwivedi, Nov 20, 2024)
8e31c93  bpf: Convert hashtab.c to rqspinlock (kkdwivedi, Nov 19, 2024)
8b12c15  bpf: Convert percpu_freelist.c to rqspinlock (kkdwivedi, Nov 20, 2024)
b077e8b  bpf: Convert lpm_trie.c to rqspinlock (kkdwivedi, Nov 20, 2024)
042f2c3  bpf: Introduce rqspinlock kfuncs (kkdwivedi, Aug 15, 2024)
dbb59fc  bpf: Implement verifier support for rqspinlock (kkdwivedi, Dec 13, 2024)
50ee21a  bpf: Maintain FIFO property for rqspinlock unlock (kkdwivedi, Jan 27, 2025)
8b27f13  selftests/bpf: Add tests for rqspinlock (kkdwivedi, Jul 30, 2024)
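The series above builds rqspinlock, a resilient queued spinlock whose acquisition can fail with an error on timeout or detected deadlock instead of stalling the kernel forever. As a rough conceptual analogy only (this is not kernel code, and the real lock uses queued waiters and AB-BA deadlock checks), a timeout-bounded try-lock loop can be sketched in Python like this:

```python
import threading
import time

def res_lock_acquire(lock: threading.Lock, timeout_s: float) -> bool:
    """Toy analogy of a resilient lock: repeatedly try to acquire,
    but give up after timeout_s seconds instead of spinning forever."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if lock.acquire(blocking=False):
            return True
        time.sleep(0.001)  # brief backoff before retrying
    return False  # timed out; the caller must handle the failure path

lock = threading.Lock()
print(res_lock_acquire(lock, 0.2))   # free lock: acquires immediately -> True
print(res_lock_acquire(lock, 0.05))  # already held: times out -> False
lock.release()
```

The important property mirrored here is that failure is surfaced to the caller as a return value, which is what lets the BPF verifier and the converted map code (hashtab.c, lpm_trie.c, percpu_freelist.c) recover rather than hang.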
49 changes: 49 additions & 0 deletions .github/actions/veristat_baseline_compare/action.yml
@@ -0,0 +1,49 @@
name: 'run-veristat'
description: 'Run veristat benchmark'
inputs:
  veristat_output:
    description: 'Veristat output filepath'
    required: true
  baseline_name:
    description: 'Veristat baseline cache name'
    required: true
runs:
  using: "composite"
  steps:
    - uses: actions/upload-artifact@v4
      with:
        name: ${{ inputs.baseline_name }}
        if-no-files-found: error
        path: ${{ github.workspace }}/${{ inputs.veristat_output }}

    # For pull request:
    # - get baseline log from cache
    # - compare it to current run
    - if: ${{ github.event_name == 'pull_request' }}
      uses: actions/cache/restore@v4
      with:
        key: ${{ inputs.baseline_name }}
        restore-keys: |
          ${{ inputs.baseline_name }}-
        path: '${{ github.workspace }}/${{ inputs.baseline_name }}'

    - if: ${{ github.event_name == 'pull_request' }}
      name: Show veristat comparison
      shell: bash
      run: ./.github/scripts/compare-veristat-results.sh
      env:
        BASELINE_PATH: ${{ github.workspace }}/${{ inputs.baseline_name }}
        VERISTAT_OUTPUT: ${{ inputs.veristat_output }}

    # For push: just put baseline log to cache
    - if: ${{ github.event_name == 'push' }}
      shell: bash
      run: |
        mv "${{ github.workspace }}/${{ inputs.veristat_output }}" \
           "${{ github.workspace }}/${{ inputs.baseline_name }}"
    - if: ${{ github.event_name == 'push' }}
      uses: actions/cache/save@v4
      with:
        key: ${{ inputs.baseline_name }}-${{ github.run_id }}
        path: '${{ github.workspace }}/${{ inputs.baseline_name }}'
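The save and restore steps above rely on the prefix matching that actions/cache performs for restore-keys: a push saves the baseline under a unique `<baseline_name>-<run_id>` key, and a later pull-request run restores the most recent entry whose key starts with `<baseline_name>-`. A small shell illustration of that matching (the baseline name and run id below are made up):

```shell
#!/bin/bash
# Hypothetical keys mirroring the cache scheme in the action above.
baseline_name="veristat-x86_64-gcc"
run_id=12345

save_key="${baseline_name}-${run_id}"   # key used by actions/cache/save on push
restore_prefix="${baseline_name}-"      # restore-keys prefix used on pull requests

# A PR run restores any cache entry whose key begins with the prefix.
case "$save_key" in
  "$restore_prefix"*) echo "prefix match: PR run would restore this baseline" ;;
  *) echo "no match" ;;
esac
```

This is why the restore step uses the bare name plus trailing dash rather than an exact key: each push creates a fresh, immutable cache entry, and PRs pick up whichever one is newest.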
18 changes: 18 additions & 0 deletions .github/scripts/compare-veristat-results.sh
@@ -0,0 +1,18 @@
#!/bin/bash

if [[ ! -f "${BASELINE_PATH}" ]]; then
  echo "# No ${BASELINE_PATH} available" >> "${GITHUB_STEP_SUMMARY}"

  echo "No ${BASELINE_PATH} available"
  echo "Printing veristat results"
  cat "${VERISTAT_OUTPUT}"

  exit
fi

selftests/bpf/veristat \
  --output-format csv \
  --emit file,prog,verdict,states \
  --compare "${BASELINE_PATH}" "${VERISTAT_OUTPUT}" > compare.csv

python3 ./.github/scripts/veristat_compare.py compare.csv
202 changes: 202 additions & 0 deletions .github/scripts/matrix.py
@@ -0,0 +1,202 @@
#!/usr/bin/env python3

import os
import dataclasses
import json

from enum import Enum
from typing import Any, Dict, List, Final, Set, Union

MANAGED_OWNER: Final[str] = "kernel-patches"
MANAGED_REPOS: Final[Set[str]] = {
    f"{MANAGED_OWNER}/bpf",
    f"{MANAGED_OWNER}/vmtest",
}

DEFAULT_RUNNER: Final[str] = "ubuntu-24.04"
DEFAULT_LLVM_VERSION: Final[int] = 17
DEFAULT_SELF_HOSTED_RUNNER_TAGS: Final[List[str]] = ["self-hosted", "docker-noble-main"]


class Arch(str, Enum):
    """
    CPU architecture supported by CI.
    """

    AARCH64 = "aarch64"
    S390X = "s390x"
    X86_64 = "x86_64"


class Compiler(str, Enum):
    GCC = "gcc"
    LLVM = "llvm"


@dataclasses.dataclass
class Toolchain:
    compiler: Compiler
    # This is relevant ONLY for LLVM and should not be required for GCC
    version: int

    @property
    def short_name(self) -> str:
        return str(self.compiler.value)

    @property
    def full_name(self) -> str:
        if self.compiler == Compiler.GCC:
            return self.short_name

        return f"{self.short_name}-{self.version}"

    def to_dict(self) -> Dict[str, Union[str, int]]:
        return {
            "name": self.short_name,
            "fullname": self.full_name,
            "version": self.version,
        }


@dataclasses.dataclass
class BuildConfig:
    arch: Arch
    toolchain: Toolchain
    kernel: str = "LATEST"
    run_veristat: bool = False
    parallel_tests: bool = False
    build_release: bool = False

    @property
    def runs_on(self) -> List[str]:
        if is_managed_repo():
            return DEFAULT_SELF_HOSTED_RUNNER_TAGS + [self.arch.value]
        return [DEFAULT_RUNNER]

    @property
    def build_runs_on(self) -> List[str]:
        if is_managed_repo():
            # Build s390x on x86_64
            return DEFAULT_SELF_HOSTED_RUNNER_TAGS + [
                Arch.X86_64.value if self.arch.value == "s390x" else self.arch.value,
            ]
        return [DEFAULT_RUNNER]

    @property
    def tests(self) -> Dict[str, Any]:
        tests_list = [
            "test_progs",
            "test_progs_parallel",
            "test_progs_no_alu32",
            "test_progs_no_alu32_parallel",
            "test_verifier",
        ]

        if self.arch.value != "s390x":
            tests_list.append("test_maps")

        if self.toolchain.version >= 18:
            tests_list.append("test_progs_cpuv4")

        # if self.arch in [Arch.X86_64, Arch.AARCH64]:
        #     tests_list.append("sched_ext")

        # Don't run GCC BPF runner, because too many tests are failing
        # See: https://lore.kernel.org/bpf/[email protected]/
        # if self.arch == Arch.X86_64:
        #     tests_list.append("test_progs-bpf_gcc")

        if not self.parallel_tests:
            tests_list = [test for test in tests_list if not test.endswith("parallel")]

        return {"include": [generate_test_config(test) for test in tests_list]}

    def to_dict(self) -> Dict[str, Any]:
        return {
            "arch": self.arch.value,
            "toolchain": self.toolchain.to_dict(),
            "kernel": self.kernel,
            "run_veristat": self.run_veristat,
            "parallel_tests": self.parallel_tests,
            "build_release": self.build_release,
            "runs_on": self.runs_on,
            "tests": self.tests,
            "build_runs_on": self.build_runs_on,
        }


def is_managed_repo() -> bool:
    return (
        os.environ["GITHUB_REPOSITORY_OWNER"] == MANAGED_OWNER
        and os.environ["GITHUB_REPOSITORY"] in MANAGED_REPOS
    )


def set_output(name, value):
    """Write an output variable to the GitHub output file."""
    with open(os.getenv("GITHUB_OUTPUT"), "a", encoding="utf-8") as file:
        file.write(f"{name}={value}\n")


def generate_test_config(test: str) -> Dict[str, Union[str, int]]:
    """Create the configuration for the provided test."""
    is_parallel = test.endswith("_parallel")
    config = {
        "test": test,
        "continue_on_error": is_parallel,
        # While in experimental mode, parallel jobs may get stuck
        # anywhere, including in user space where the kernel won't detect
        # a problem and panic. We add a second layer of (smaller) timeouts
        # here such that if we get stuck in a parallel run, we hit this
        # timeout and fail without affecting the overall job success (as
        # would be the case if we hit the job-wide timeout). For
        # non-experimental jobs, 360 is the default which will be
        # superseded by the overall workflow timeout (but we need to
        # specify something).
        "timeout_minutes": 30 if is_parallel else 360,
    }
    return config


if __name__ == "__main__":
    matrix = [
        BuildConfig(
            arch=Arch.X86_64,
            toolchain=Toolchain(compiler=Compiler.GCC, version=DEFAULT_LLVM_VERSION),
            run_veristat=True,
            parallel_tests=True,
        ),
        BuildConfig(
            arch=Arch.X86_64,
            toolchain=Toolchain(compiler=Compiler.LLVM, version=DEFAULT_LLVM_VERSION),
            build_release=True,
        ),
        BuildConfig(
            arch=Arch.X86_64,
            toolchain=Toolchain(compiler=Compiler.LLVM, version=18),
            build_release=True,
        ),
        BuildConfig(
            arch=Arch.AARCH64,
            toolchain=Toolchain(compiler=Compiler.GCC, version=DEFAULT_LLVM_VERSION),
        ),
        # BuildConfig(
        #     arch=Arch.AARCH64,
        #     toolchain=Toolchain(
        #         compiler=Compiler.LLVM,
        #         version=DEFAULT_LLVM_VERSION
        #     ),
        # ),
        BuildConfig(
            arch=Arch.S390X,
            toolchain=Toolchain(compiler=Compiler.GCC, version=DEFAULT_LLVM_VERSION),
        ),
    ]

    # Outside of those repositories we only run on x86_64
    if not is_managed_repo():
        matrix = [config for config in matrix if config.arch == Arch.X86_64]

    json_matrix = json.dumps({"include": [config.to_dict() for config in matrix]})
    print(json_matrix)
    set_output("build_matrix", json_matrix)
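Two decisions in matrix.py carry most of the CI policy: parallel tests get a tighter per-test timeout than serial ones, and unmanaged repositories are reduced to x86_64-only. A self-contained sketch mirroring (not importing) those two behaviors:

```python
from typing import List

def timeout_minutes(test: str) -> int:
    # Parallel tests get a 30-minute cap so a stuck experimental run fails
    # fast; others fall back to 360, which the workflow-wide timeout
    # supersedes in practice.
    return 30 if test.endswith("_parallel") else 360

def filter_archs(archs: List[str], managed: bool) -> List[str]:
    # Outside the managed kernel-patches repos, CI only runs on x86_64.
    return archs if managed else [a for a in archs if a == "x86_64"]

print(timeout_minutes("test_progs_parallel"))  # 30
print(timeout_minutes("test_verifier"))        # 360
print(filter_archs(["x86_64", "aarch64", "s390x"], managed=False))  # ['x86_64']
```

The function and parameter names here are invented for illustration; the real script expresses the same logic through `generate_test_config` and the `is_managed_repo()` filter at the bottom of `__main__`.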
75 changes: 75 additions & 0 deletions .github/scripts/tests/test_veristat_compare.py
@@ -0,0 +1,75 @@
#!/usr/bin/env python3

import unittest
from typing import Iterable, List

from ..veristat_compare import parse_table, VeristatFields


def gen_csv_table(records: Iterable[str]) -> List[str]:
    return [
        ",".join(VeristatFields.headers()),
        *records,
    ]


class TestVeristatCompare(unittest.TestCase):
    def test_parse_table_ignore_new_prog(self):
        table = gen_csv_table(
            [
                "prog_file.bpf.o,prog_name,N/A,success,N/A,N/A,1,N/A",
            ]
        )
        veristat_info = parse_table(table)
        self.assertEqual(veristat_info.table, [])
        self.assertFalse(veristat_info.changes)
        self.assertFalse(veristat_info.new_failures)

    def test_parse_table_ignore_removed_prog(self):
        table = gen_csv_table(
            [
                "prog_file.bpf.o,prog_name,success,N/A,N/A,1,N/A,N/A",
            ]
        )
        veristat_info = parse_table(table)
        self.assertEqual(veristat_info.table, [])
        self.assertFalse(veristat_info.changes)
        self.assertFalse(veristat_info.new_failures)

    def test_parse_table_new_failure(self):
        table = gen_csv_table(
            [
                "prog_file.bpf.o,prog_name,success,failure,MISMATCH,1,1,+0 (+0.00%)",
            ]
        )
        veristat_info = parse_table(table)
        self.assertEqual(
            veristat_info.table,
            [["prog_file.bpf.o", "prog_name", "success -> failure (!!)", "+0.00 %"]],
        )
        self.assertTrue(veristat_info.changes)
        self.assertTrue(veristat_info.new_failures)

    def test_parse_table_new_changes(self):
        table = gen_csv_table(
            [
                "prog_file.bpf.o,prog_name,failure,success,MISMATCH,0,0,+0 (+0.00%)",
                "prog_file.bpf.o,prog_name_increase,failure,failure,MATCH,1,2,+1 (+100.00%)",
                "prog_file.bpf.o,prog_name_decrease,success,success,MATCH,1,1,-1 (-100.00%)",
            ]
        )
        veristat_info = parse_table(table)
        self.assertEqual(
            veristat_info.table,
            [
                ["prog_file.bpf.o", "prog_name", "failure -> success", "+0.00 %"],
                ["prog_file.bpf.o", "prog_name_increase", "failure", "+100.00 %"],
                ["prog_file.bpf.o", "prog_name_decrease", "success", "-100.00 %"],
            ],
        )
        self.assertTrue(veristat_info.changes)
        self.assertFalse(veristat_info.new_failures)


if __name__ == "__main__":
    unittest.main()
21 changes: 21 additions & 0 deletions .github/scripts/tmpfsify-workspace.sh
@@ -0,0 +1,21 @@
#!/bin/bash

set -x -euo pipefail

TMPFS_SIZE=20 # GB
MEM_TOTAL=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)

# sanity check: total mem is at least double TMPFS_SIZE
if [ "${MEM_TOTAL}" -lt $((TMPFS_SIZE * 1024 * 2)) ]; then
  echo "tmpfsify-workspace.sh: will not allocate tmpfs, total memory is too low (${MEM_TOTAL}MB)"
  exit 0
fi

dir="$(basename "$GITHUB_WORKSPACE")"
cd "$(dirname "$GITHUB_WORKSPACE")"
mv "${dir}" "${dir}.backup"
mkdir "${dir}"
sudo mount -t tmpfs -o size="${TMPFS_SIZE}G" tmpfs "${dir}"
rsync -a "${dir}.backup/" "${dir}"
cd -
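The sanity check above compares MemTotal (converted by awk from kB to MB) against twice the tmpfs size (converted from GB to MB), so a 20 GB tmpfs requires at least 40960 MB of RAM. A quick standalone check of that arithmetic, with a made-up MEM_TOTAL:

```shell
#!/bin/bash
TMPFS_SIZE=20                         # GB, as in the script above
NEEDED_MB=$((TMPFS_SIZE * 1024 * 2))  # double the tmpfs size, in MB

MEM_TOTAL=65536                       # pretend 64 GB of RAM, in MB (made up)
if [ "${MEM_TOTAL}" -lt "${NEEDED_MB}" ]; then
  echo "skip tmpfs"
else
  echo "allocate tmpfs"
fi
```

Requiring double the tmpfs size leaves headroom for the build itself, since tmpfs pages and process memory compete for the same RAM.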
