From b1b9aa7e7fa9189c9742145e7cbb0f3f9ef7e2e6 Mon Sep 17 00:00:00 2001
From: Lakshman
Date: Sat, 27 Jul 2024 14:21:43 -0600
Subject: [PATCH] Added basic documentation for all new benchmarks.

Signed-off-by: L Lakshmanan
---
 benchmarks/compression/README.md              | 63 ++++++++++++++++++
 benchmarks/image-rotate/README.md             | 66 +++++++++++++++++++
 benchmarks/rnn-serving/README.md              | 63 ++++++++++++++++++
 .../video-analytics-standalone/README.md      | 66 +++++++++++++++++++
 benchmarks/video-processing/README.md         | 66 +++++++++++++++++++
 5 files changed, 324 insertions(+)
 create mode 100644 benchmarks/compression/README.md
 create mode 100644 benchmarks/image-rotate/README.md
 create mode 100644 benchmarks/rnn-serving/README.md
 create mode 100644 benchmarks/video-analytics-standalone/README.md
 create mode 100644 benchmarks/video-processing/README.md

diff --git a/benchmarks/compression/README.md b/benchmarks/compression/README.md
new file mode 100644
index 00000000..7291f604
--- /dev/null
+++ b/benchmarks/compression/README.md
@@ -0,0 +1,63 @@
+# Compression Benchmark
+
+The compression benchmark measures the performance of a serverless platform on a file-compression task. The benchmark uses the zlib library to compress and decompress input files. A specific input file can be chosen with the `--def_file` flag; if none is given, the benchmark falls back to a default file.
+
+The functionality is implemented in Python. The function is invoked using gRPC.
+
+## Running this benchmark locally (using docker)
+
+A detailed, general description of how to run benchmarks locally can be found [here](../../docs/running_locally.md). The following steps show how to do so for the compression-python function.
+1. Build or pull the function images using `make all-image` or `make pull`.
+### Invoke once
+2. Start the function with docker-compose
+   ```bash
+   docker-compose -f ./yamls/docker-compose/dc-compression-python.yaml up
+   ```
+3. In a new terminal, invoke the interface function with grpcurl.
+   ```bash
+   ./tools/bin/grpcurl -plaintext localhost:50000 helloworld.Greeter.SayHello
+   ```
+### Invoke multiple times
+2. Run the invoker
+   ```bash
+   # build the invoker binary
+   cd ../../tools/invoker
+   make invoker
+
+   # Specify the hostname through "endpoints.json"
+   echo '[ { "hostname": "localhost" } ]' > endpoints.json
+
+   # Start the invoker with the chosen RPS and duration
+   ./invoker -port 50000 -dbg -time 10 -rps 1
+   ```
+
+## Running this benchmark (using knative)
+
+A detailed, general description of how to run benchmarks on knative clusters can be found [here](../../docs/running_benchmarks.md). The following steps show how to do so for the compression-python function.
+1. Build or pull the function images using `make all-image` or `make pull`.
+2. Start the function with knative
+   ```bash
+   kubectl apply -f ./yamls/knative/kn-compression-python.yaml
+   ```
+3. **Note the URL provided in the output. We will call the part without the `http://` prefix `$URL`. Replace every instance of `$URL` in the commands below with it.**
+### Invoke once
+4. In a new terminal, invoke the interface function with test-client.
+   ```bash
+   ./test-client --addr $URL:80 --name "Example text for Compression"
+   ```
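+
+If you prefer to script a single invocation instead of using `test-client`, a small Python gRPC client along the lines of the sketch below should work. This is only a sketch: it assumes the `helloworld` proto behind the `helloworld.Greeter.SayHello` RPC used throughout this README has been compiled into `helloworld_pb2`/`helloworld_pb2_grpc` modules, and you must substitute the real address (`$URL:80` here, or `localhost:50000` when running with docker-compose).
+```python
+# Hypothetical client sketch; module names and the address are assumptions.
+import grpc
+import helloworld_pb2
+import helloworld_pb2_grpc
+
+def invoke(addr: str) -> str:
+    # Open an insecure channel to the function and call SayHello once.
+    with grpc.insecure_channel(addr) as channel:
+        stub = helloworld_pb2_grpc.GreeterStub(channel)
+        reply = stub.SayHello(helloworld_pb2.HelloRequest(name="Example text for Compression"))
+        return reply.message
+
+print(invoke("$URL:80"))  # substitute the address noted in step 3
+```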
+### Invoke multiple times
+4. Run the invoker
+   ```bash
+   # build the invoker binary
+   cd ../../tools/invoker
+   make invoker
+
+   # Specify the hostname through "endpoints.json"
+   echo '[ { "hostname": "$URL" } ]' > endpoints.json
+
+   # Start the invoker with the chosen RPS and duration
+   ./invoker -port 80 -dbg -time 10 -rps 1
+   ```
+## Tracing
+
+This benchmark does not currently support distributed tracing.
\ No newline at end of file
diff --git a/benchmarks/image-rotate/README.md b/benchmarks/image-rotate/README.md
new file mode 100644
index 00000000..9908e86a
--- /dev/null
+++ b/benchmarks/image-rotate/README.md
@@ -0,0 +1,66 @@
+# Image Rotate Benchmark
+
+The image rotate benchmark rotates an input image by 90 degrees. An input image can be specified; if none is given, a default image is used. This benchmark also depends on a MongoDB database in which the available images are stored.
+
+The `init-database.go` script runs when the function starts and populates the database with the images from the `images` folder.
+
+The functionality is implemented in two runtimes, namely Go and Python. The function is invoked using gRPC.
+
+## Running this benchmark locally (using docker)
+
+A detailed, general description of how to run benchmarks locally can be found [here](../../docs/running_locally.md). The following steps show how to do so for the image-rotate-python function.
+1. Build or pull the function images using `make all-image` or `make pull`.
+### Invoke once
+2. Start the function with docker-compose
+   ```bash
+   docker-compose -f ./yamls/docker-compose/dc-image-rotate-python.yaml up
+   ```
+3. In a new terminal, invoke the interface function with grpcurl.
+   ```bash
+   ./tools/bin/grpcurl -plaintext localhost:50000 helloworld.Greeter.SayHello
+   ```
+### Invoke multiple times
+2. Run the invoker
+   ```bash
+   # build the invoker binary
+   cd ../../tools/invoker
+   make invoker
+
+   # Specify the hostname through "endpoints.json"
+   echo '[ { "hostname": "localhost" } ]' > endpoints.json
+
+   # Start the invoker with the chosen RPS and duration
+   ./invoker -port 50000 -dbg -time 10 -rps 1
+   ```
+
+## Running this benchmark (using knative)
+
+A detailed, general description of how to run benchmarks on knative clusters can be found [here](../../docs/running_benchmarks.md). The following steps show how to do so for the image-rotate-python function.
+1. Build or pull the function images using `make all-image` or `make pull`.
+2. Initialise the database and start the function with knative
+   ```bash
+   kubectl apply -f ./yamls/knative/image-rotate-database.yaml
+   kubectl apply -f ./yamls/knative/kn-image-rotate-python.yaml
+   ```
+3. **Note the URL provided in the output. We will call the part without the `http://` prefix `$URL`. Replace every instance of `$URL` in the commands below with it.**
+### Invoke once
+4. In a new terminal, invoke the interface function with test-client.
+   ```bash
+   ./test-client --addr $URL:80 --name "Example text for Image-rotate"
+   ```
+### Invoke multiple times
+4. Run the invoker
+   ```bash
+   # build the invoker binary
+   cd ../../tools/invoker
+   make invoker
+
+   # Specify the hostname through "endpoints.json"
+   echo '[ { "hostname": "$URL" } ]' > endpoints.json
+
+   # Start the invoker with the chosen RPS and duration
+   ./invoker -port 80 -dbg -time 10 -rps 1
+   ```
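+
+For reference, the core operation of this benchmark amounts to a 90-degree rotation of the input image. A standalone sketch of that step is shown below; it uses Pillow purely for illustration (an assumption; the deployed function may use a different imaging library), and the file paths are placeholders.
+```python
+# Illustrative only; not the function's actual code.
+from PIL import Image
+
+def rotate_90(src_path: str, dst_path: str) -> None:
+    with Image.open(src_path) as img:
+        # expand=True keeps the whole image when width != height
+        img.rotate(90, expand=True).save(dst_path)
+
+rotate_90("images/example.jpg", "rotated.jpg")  # placeholder paths
+```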
+## Tracing
+
+This benchmark does not currently support distributed tracing.
\ No newline at end of file
diff --git a/benchmarks/rnn-serving/README.md b/benchmarks/rnn-serving/README.md
new file mode 100644
index 00000000..cf039545
--- /dev/null
+++ b/benchmarks/rnn-serving/README.md
@@ -0,0 +1,63 @@
+# RNN Serving Benchmark
+
+The RNN serving benchmark generates a string with an RNN model for a given language. A language can be specified as an input; if none is given, a default language is chosen, either at random or uniquely via the input generator.
+
+The functionality is implemented in Python. The function is invoked using gRPC.
+
+## Running this benchmark locally (using docker)
+
+A detailed, general description of how to run benchmarks locally can be found [here](../../docs/running_locally.md). The following steps show how to do so for the rnn-serving-python function.
+1. Build or pull the function images using `make all-image` or `make pull`.
+### Invoke once
+2. Start the function with docker-compose
+   ```bash
+   docker-compose -f ./yamls/docker-compose/dc-rnn-serving-python.yaml up
+   ```
+3. In a new terminal, invoke the interface function with grpcurl.
+   ```bash
+   ./tools/bin/grpcurl -plaintext localhost:50000 helloworld.Greeter.SayHello
+   ```
+### Invoke multiple times
+2. Run the invoker
+   ```bash
+   # build the invoker binary
+   cd ../../tools/invoker
+   make invoker
+
+   # Specify the hostname through "endpoints.json"
+   echo '[ { "hostname": "localhost" } ]' > endpoints.json
+
+   # Start the invoker with the chosen RPS and duration
+   ./invoker -port 50000 -dbg -time 10 -rps 1
+   ```
+
+## Running this benchmark (using knative)
+
+A detailed, general description of how to run benchmarks on knative clusters can be found [here](../../docs/running_benchmarks.md). The following steps show how to do so for the rnn-serving-python function.
+1. Build or pull the function images using `make all-image` or `make pull`.
+2. Start the function with knative
+   ```bash
+   kubectl apply -f ./yamls/knative/kn-rnn-serving-python.yaml
+   ```
+3. **Note the URL provided in the output. We will call the part without the `http://` prefix `$URL`. Replace every instance of `$URL` in the commands below with it.**
+### Invoke once
+4. In a new terminal, invoke the interface function with test-client.
+   ```bash
+   ./test-client --addr $URL:80 --name "Example text for rnn-serving"
+   ```
+### Invoke multiple times
+4. Run the invoker
+   ```bash
+   # build the invoker binary
+   cd ../../tools/invoker
+   make invoker
+
+   # Specify the hostname through "endpoints.json"
+   echo '[ { "hostname": "$URL" } ]' > endpoints.json
+
+   # Start the invoker with the chosen RPS and duration
+   ./invoker -port 80 -dbg -time 10 -rps 1
+   ```
+## Tracing
+
+This benchmark does not currently support distributed tracing.
\ No newline at end of file
diff --git a/benchmarks/video-analytics-standalone/README.md b/benchmarks/video-analytics-standalone/README.md
new file mode 100644
index 00000000..877c9a1f
--- /dev/null
+++ b/benchmarks/video-analytics-standalone/README.md
@@ -0,0 +1,66 @@
+# Video Analytics Standalone Benchmark
+
+The video analytics standalone benchmark preprocesses an input video and runs an object detection model (squeezenet) on it. An input video can be specified; if none is given, a default video is used. This benchmark also depends on a MongoDB database in which the available videos are stored.
+
+The `init-database.go` script runs when the function starts and populates the database with the videos from the `videos` folder.
+
+The functionality is implemented in Python. The function is invoked using gRPC.
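+
+As a rough illustration of what the function does per request, the sketch below samples frames from a video and runs squeezenet on each sampled frame. It is not the benchmark's actual code: OpenCV and torchvision are assumed to be available, and the sampling rate and preprocessing are arbitrary choices.
+```python
+# Illustrative only; the real pipeline may preprocess and batch differently.
+import cv2
+import torch
+from torchvision import models, transforms
+
+preprocess = transforms.Compose([
+    transforms.ToTensor(),
+    transforms.Resize((224, 224)),
+    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+])
+
+def classify_frames(video_path: str, every_nth: int = 30) -> list:
+    # Requires torchvision >= 0.13 for the weights API; older versions use pretrained=True.
+    model = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.DEFAULT).eval()
+    capture = cv2.VideoCapture(video_path)
+    labels, index = [], 0
+    while True:
+        ok, frame = capture.read()
+        if not ok:
+            break
+        if index % every_nth == 0:
+            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
+            with torch.no_grad():
+                labels.append(int(model(preprocess(rgb).unsqueeze(0)).argmax()))
+        index += 1
+    capture.release()
+    return labels
+```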
+
+## Running this benchmark locally (using docker)
+
+A detailed, general description of how to run benchmarks locally can be found [here](../../docs/running_locally.md). The following steps show how to do so for the video-analytics-standalone-python function.
+1. Build or pull the function images using `make all-image` or `make pull`.
+### Invoke once
+2. Start the function with docker-compose
+   ```bash
+   docker-compose -f ./yamls/docker-compose/dc-video-analytics-standalone-python.yaml up
+   ```
+3. In a new terminal, invoke the interface function with grpcurl.
+   ```bash
+   ./tools/bin/grpcurl -plaintext localhost:50000 helloworld.Greeter.SayHello
+   ```
+### Invoke multiple times
+2. Run the invoker
+   ```bash
+   # build the invoker binary
+   cd ../../tools/invoker
+   make invoker
+
+   # Specify the hostname through "endpoints.json"
+   echo '[ { "hostname": "localhost" } ]' > endpoints.json
+
+   # Start the invoker with the chosen RPS and duration
+   ./invoker -port 50000 -dbg -time 10 -rps 1
+   ```
+
+## Running this benchmark (using knative)
+
+A detailed, general description of how to run benchmarks on knative clusters can be found [here](../../docs/running_benchmarks.md). The following steps show how to do so for the video-analytics-standalone-python function.
+1. Build or pull the function images using `make all-image` or `make pull`.
+2. Initialise the database and start the function with knative
+   ```bash
+   kubectl apply -f ./yamls/knative/video-analytics-standalone-database.yaml
+   kubectl apply -f ./yamls/knative/kn-video-analytics-standalone-python.yaml
+   ```
+3. **Note the URL provided in the output. We will call the part without the `http://` prefix `$URL`. Replace every instance of `$URL` in the commands below with it.**
+### Invoke once
+4. In a new terminal, invoke the interface function with test-client.
+   ```bash
+   ./test-client --addr $URL:80 --name "Example text for video analytics standalone"
+   ```
+### Invoke multiple times
+4. Run the invoker
+   ```bash
+   # build the invoker binary
+   cd ../../tools/invoker
+   make invoker
+
+   # Specify the hostname through "endpoints.json"
+   echo '[ { "hostname": "$URL" } ]' > endpoints.json
+
+   # Start the invoker with the chosen RPS and duration
+   ./invoker -port 80 -dbg -time 10 -rps 1
+   ```
+## Tracing
+
+This benchmark does not currently support distributed tracing.
\ No newline at end of file
diff --git a/benchmarks/video-processing/README.md b/benchmarks/video-processing/README.md
new file mode 100644
index 00000000..e8edcea4
--- /dev/null
+++ b/benchmarks/video-processing/README.md
@@ -0,0 +1,66 @@
+# Video Processing Benchmark
+
+The video processing benchmark converts an input video to grayscale. An input video can be specified; if none is given, a default video is used. This benchmark also depends on a MongoDB database in which the available videos are stored.
+
+The `init-database.go` script runs when the function starts and populates the database with the videos from the `videos` folder.
+
+The functionality is implemented in Python. The function is invoked using gRPC.
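+
+Since the function's task is grayscale conversion, the sketch below shows that step in isolation. It is illustrative only: OpenCV and the mp4v codec are assumptions, and the deployed function's actual implementation may differ.
+```python
+# Illustrative only; not the benchmark's actual code.
+import cv2
+
+def to_grayscale(src_path: str, dst_path: str) -> None:
+    capture = cv2.VideoCapture(src_path)
+    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
+    width = int(capture.get(cv2.CAP_PROP_FRAME_WIDTH))
+    height = int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT))
+    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
+    # isColor=False makes the writer accept single-channel frames.
+    writer = cv2.VideoWriter(dst_path, fourcc, fps, (width, height), isColor=False)
+    while True:
+        ok, frame = capture.read()
+        if not ok:
+            break
+        writer.write(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
+    capture.release()
+    writer.release()
+```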
+
+## Running this benchmark locally (using docker)
+
+A detailed, general description of how to run benchmarks locally can be found [here](../../docs/running_locally.md). The following steps show how to do so for the video-processing-python function.
+1. Build or pull the function images using `make all-image` or `make pull`.
+### Invoke once
+2. Start the function with docker-compose
+   ```bash
+   docker-compose -f ./yamls/docker-compose/dc-video-processing-python.yaml up
+   ```
+3. In a new terminal, invoke the interface function with grpcurl.
+   ```bash
+   ./tools/bin/grpcurl -plaintext localhost:50000 helloworld.Greeter.SayHello
+   ```
+### Invoke multiple times
+2. Run the invoker
+   ```bash
+   # build the invoker binary
+   cd ../../tools/invoker
+   make invoker
+
+   # Specify the hostname through "endpoints.json"
+   echo '[ { "hostname": "localhost" } ]' > endpoints.json
+
+   # Start the invoker with the chosen RPS and duration
+   ./invoker -port 50000 -dbg -time 10 -rps 1
+   ```
+
+## Running this benchmark (using knative)
+
+A detailed, general description of how to run benchmarks on knative clusters can be found [here](../../docs/running_benchmarks.md). The following steps show how to do so for the video-processing-python function.
+1. Build or pull the function images using `make all-image` or `make pull`.
+2. Initialise the database and start the function with knative
+   ```bash
+   kubectl apply -f ./yamls/knative/video-processing-database.yaml
+   kubectl apply -f ./yamls/knative/kn-video-processing-python.yaml
+   ```
+3. **Note the URL provided in the output. We will call the part without the `http://` prefix `$URL`. Replace every instance of `$URL` in the commands below with it.**
+### Invoke once
+4. In a new terminal, invoke the interface function with test-client.
+   ```bash
+   ./test-client --addr $URL:80 --name "Example text for Video-processing"
+   ```
+### Invoke multiple times
+4. Run the invoker
+   ```bash
+   # build the invoker binary
+   cd ../../tools/invoker
+   make invoker
+
+   # Specify the hostname through "endpoints.json"
+   echo '[ { "hostname": "$URL" } ]' > endpoints.json
+
+   # Start the invoker with the chosen RPS and duration
+   ./invoker -port 80 -dbg -time 10 -rps 1
+   ```
+## Tracing
+
+This benchmark does not currently support distributed tracing.
\ No newline at end of file
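The database-backed benchmarks above (image-rotate, video-analytics-standalone, and video-processing) are all seeded by `init-database.go`. As a rough, hypothetical illustration of what such seeding involves (the real script is written in Go, and its database name and storage layout are not shown here), a Python equivalent using `pymongo` and GridFS might look like the following sketch.

```python
# Hypothetical sketch; database name, collection layout, and storage scheme are assumptions.
import os
import gridfs
from pymongo import MongoClient

def seed_files(mongo_uri: str, folder: str) -> None:
    client = MongoClient(mongo_uri)
    fs = gridfs.GridFS(client["benchmark_db"])  # assumed database name
    for name in sorted(os.listdir(folder)):
        with open(os.path.join(folder, name), "rb") as handle:
            fs.put(handle, filename=name)  # store each file under its filename

seed_files("mongodb://localhost:27017", "videos")
```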