Project 2: Dineth Meegoda #32

Open · wants to merge 3 commits into `main`
115 changes: 107 additions & 8 deletions README.md
@@ -1,14 +1,113 @@
CUDA Stream Compaction
======================
# CUDA Stream Compaction

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**

* (TODO) YOUR NAME HERE
* (TODO) [LinkedIn](), [personal website](), [twitter](), etc.
* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
- Dineth Meegoda
- [LinkedIn](https://www.linkedin.com/in/dinethmeegoda/), [Personal Website](https://www.dinethmeegoda.com).
- Tested on: Windows 10 Pro, Ryzen 9 5900X 12 Core @ 3.7GHz 32GB, RTX 3070 8GB

### (TODO: Your README)
## Summary

Include analysis, etc. (Remember, this is public, so don't put
anything here that you don't want to share with the world.)
This project implements a stream compaction algorithm with a prefix-sum scan, using several algorithms on both the CPU and GPU. Stream compaction removes unneeded elements from a stream, which improves the efficiency of a variety of downstream algorithms, including path tracing.
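As a reference for what compaction computes, here is a minimal sequential sketch (plain C++, not this project's GPU code) that keeps only the nonzero elements while preserving their order:

```cpp
#include <vector>

// Sequential reference for stream compaction: keep nonzero elements in order.
std::vector<int> compactNonZero(const std::vector<int>& in) {
    std::vector<int> out;
    for (int v : in) {
        if (v != 0) {
            out.push_back(v);   // survivors keep their relative order
        }
    }
    return out;
}
```

For example, `compactNonZero({1, 0, 2, 0, 3})` yields `{1, 2, 3}`.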

This was done with the following approaches:

- **CPU-Based**: A basic CPU scan and compaction (implemented both with and without scan) that serves as an accuracy and performance baseline for the other implementations.

- **Naive GPU Scan**: A parallel scan run on the GPU that uses many threads to build the resulting prefix-sum array. However, this scan is 'naive' because it is not work-efficient: it completes in O(log₂ n) passes, but each pass touches all n elements, for O(n log n) additions in total, and many threads in a warp sit idle as it runs.

- **Work-Efficient GPU**: A parallel algorithm that can run in place in two stages, an upsweep and a downsweep. It keeps every launched thread doing useful work, so it performs only O(n) additions in total (still in O(log₂ n) passes).

- **Thrust**: A thin wrapper around the Thrust library's `thrust::exclusive_scan`, included as a performance benchmark for the other implementations.
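To make the work difference concrete, the two GPU scans can be simulated sequentially. These are illustrative C++ sketches of the algorithms' structure (assuming a power-of-two input length), not the project's CUDA kernels:

```cpp
#include <vector>

// Sequential simulation of the naive (Hillis-Steele) scan: log2(n) passes,
// where each pass adds in the element `offset` positions back. Every pass
// touches all n elements, so the total work is O(n log n).
std::vector<int> naiveExclusiveScan(std::vector<int> data) {
    const int n = static_cast<int>(data.size());
    std::vector<int> next(n);
    for (int offset = 1; offset < n; offset <<= 1) {
        for (int i = 0; i < n; i++) {            // one GPU pass over all "threads"
            next[i] = (i >= offset) ? data[i - offset] + data[i] : data[i];
        }
        data.swap(next);                         // ping-pong buffers between passes
    }
    data.insert(data.begin(), 0);                // shift inclusive -> exclusive
    data.pop_back();
    return data;
}

// Sequential simulation of the work-efficient (Blelloch) scan: an upsweep
// (parallel reduction) followed by a downsweep, O(n) total additions, in place.
// Assumes n is a power of two.
std::vector<int> efficientExclusiveScan(std::vector<int> data) {
    const int n = static_cast<int>(data.size());
    for (int stride = 1; stride < n; stride <<= 1) {       // upsweep
        for (int k = 0; k < n; k += 2 * stride) {
            data[k + 2 * stride - 1] += data[k + stride - 1];
        }
    }
    data[n - 1] = 0;                                        // clear the root
    for (int stride = n / 2; stride >= 1; stride >>= 1) {   // downsweep
        for (int k = 0; k < n; k += 2 * stride) {
            int left = data[k + stride - 1];
            data[k + stride - 1] = data[k + 2 * stride - 1];
            data[k + 2 * stride - 1] += left;
        }
    }
    return data;
}
```

In the naive version each of the log₂ n passes walks all n elements, while the upsweep/downsweep pair touches each tree level only once, which is where the O(n) work bound comes from.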

## Performance Analysis

All performance was analyzed on Release mode builds without debugging. The timing was analyzed with std::chrono for CPU high-precision timing and CUDA events to measure the GPU/CUDA performance. Initial/Final memory operations were not included in the GPU/CUDA performance times since the purpose of the analysis is to measure the efficiency of the algorithms, not the associated memory operations.

The tested arrays were randomly generated and differed on each run. As a result, three performance trials were taken to estimate each data point.
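For the CPU side, the timing pattern boils down to the usual `std::chrono` bracket around the measured work (the project wraps this in its `PerformanceTimer` class, and the GPU side uses CUDA events instead). A hedged sketch:

```cpp
#include <chrono>
#include <vector>

// Sketch of the std::chrono timing pattern around a CPU exclusive scan.
// Allocation stays outside the timed region, matching the analysis setup.
double timeCpuScanMs(const std::vector<int>& idata, std::vector<int>& odata) {
    auto start = std::chrono::high_resolution_clock::now();
    odata[0] = 0;
    for (size_t i = 1; i < idata.size(); i++) {
        odata[i] = odata[i - 1] + idata[i - 1];
    }
    auto end = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}
```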

### Block Size Analysis:

![](img/block_size.png)

According to these tests, the optimal block size for both the naive and work-efficient algorithms appears to be **128**. This holds regardless of whether the tested array's size was a power of two. As block size increases, performance slightly decreases, with 128 as the notable exception.

The work-efficient implementation usually underperformed the naive algorithm here (with one exception), but the arrays in this test were considerably smaller than those used later in the performance analysis.

### Approach Comparison:

![](img/array-size.png)

Generally, runtime for every implementation grows as the array size increases, which is expected. The CPU scan's runtime grows faster than that of the other implementations.

Although the work-efficient scan is more performant than the naive scan, the two grow similarly with increasing input size. Compared against the small 2^8 arrays of the block size comparison, the work-efficient scan now performs much better than the naive scan. This suggests that with more elements to process, the work-efficient scan makes better use of GPU resources and therefore outpaces the naive approach.

Meanwhile, the Thrust implementation is much faster than any other implementation, and its runtime also grows much more slowly with input size.

### Performance Bottlenecks:

![](img/nsight_system.png)

- **Computation:** The CPU implementation is compute-limited mainly because it is single-threaded and cannot take advantage of parallelization. The naive implementation also suffers a computation bottleneck because it cannot keep all of the threads in a warp doing useful work. The work-efficient implementation does not appear to suffer as severe a computational bottleneck.

- **Memory I/O:** Memory I/O is clearly a bottleneck in the two non-Thrust GPU implementations. Neither uses shared memory, so every access to the array during an operation is a global-memory read. This greatly increases latency and can stall warps. Because each value is read in the same operation that consumes it, there is no opportunity to hide that latency behind compute. These memory operations become especially problematic at larger input sizes, where reads take far more cycles than any of the compute operations. The naive algorithm suffers most: it cannot modify the array in place without race conditions, so it must constantly ping-pong between two buffers.

- **Synchronization:** In the GPU implementations, thread synchronization is another source of bottlenecks. Threads must be synchronized between the passes of the upsweep and downsweep phases of the work-efficient scan, which can reduce throughput. In addition, some implementations introduce thread divergence: when an `if` statement splits a warp, threads that took the other branch must stall until the taken branch completes.
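One common mitigation for that divergence (a general technique, not necessarily what this code does) is to launch only the threads that have work at each tree level and compute each thread's target index directly, instead of branching on `index % (2 * stride) == 0`. A sequential C++ stand-in for one upsweep pass:

```cpp
#include <vector>

// One upsweep pass with compacted thread indices: every "thread" tid maps to
// a node that actually needs an addition, so no thread takes a dead branch.
void upsweepPassCompactIndices(std::vector<int>& data, int stride) {
    int activeThreads = static_cast<int>(data.size()) / (2 * stride);
    for (int tid = 0; tid < activeThreads; tid++) {
        int k = tid * 2 * stride;        // direct mapping, no modulo branch
        data[k + 2 * stride - 1] += data[k + stride - 1];
    }
}
```

With this mapping, a level that needs n / (2 · stride) additions launches exactly that many threads, so warps stay fully occupied instead of mostly idle.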

## Sample Output

```
****************
** SCAN TESTS **
****************
[ 30 2 7 4 22 10 21 1 12 46 17 27 28 ... 7 0 ]
==== cpu scan, power-of-two ====
elapsed time: 8.2656ms (std::chrono Measured)
[ 0 30 32 39 43 65 75 96 97 109 155 172 199 ... 410830655 410830662 ]
==== cpu scan, non-power-of-two ====
elapsed time: 8.4731ms (std::chrono Measured)
[ 0 30 32 39 43 65 75 96 97 109 155 172 199 ... 410830574 410830578 ]
passed
==== naive scan, power-of-two ====
elapsed time: 9.20499ms (CUDA Measured)
passed
==== naive scan, non-power-of-two ====
elapsed time: 8.86784ms (CUDA Measured)
passed
==== work-efficient scan, power-of-two ====
elapsed time: 6.68979ms (CUDA Measured)
passed
==== work-efficient scan, non-power-of-two ====
elapsed time: 7.08096ms (CUDA Measured)
passed
==== thrust scan, power-of-two ====
elapsed time: 0.491744ms (CUDA Measured)
passed
==== thrust scan, non-power-of-two ====
elapsed time: 0.4608ms (CUDA Measured)
passed

*****************************
** STREAM COMPACTION TESTS **
*****************************
[ 1 1 2 0 3 1 1 0 3 2 3 1 1 ... 3 0 ]
==== cpu compact without scan, power-of-two ====
elapsed time: 24.0825ms (std::chrono Measured)
[ 1 1 2 3 1 1 3 2 3 1 1 1 1 ... 2 3 ]
passed
==== cpu compact without scan, non-power-of-two ====
elapsed time: 23.9509ms (std::chrono Measured)
[ 1 1 2 3 1 1 3 2 3 1 1 1 1 ... 3 3 ]
passed
==== cpu compact with scan ====
elapsed time: 65.608ms (std::chrono Measured)
[ 1 1 2 3 1 1 3 2 3 1 1 1 1 ... 2 3 ]
passed
==== work-efficient compact, power-of-two ====
elapsed time: 8.32102ms (CUDA Measured)
passed
==== work-efficient compact, non-power-of-two ====
elapsed time: 7.97389ms (CUDA Measured)
[ 1 1 2 3 1 1 3 2 3 1 1 1 1 ... 3 3 ]
passed
```
Binary file added compute.txt.ncu-rep
Binary file not shown.
Binary file added img/array-size.png
Binary file added img/block_size.png
Binary file added img/nsight_system.png
8 changes: 4 additions & 4 deletions src/main.cpp
Expand Up @@ -13,7 +13,7 @@
#include <stream_compaction/thrust.h>
#include "testing_helpers.hpp"

const int SIZE = 1 << 8; // feel free to change the size of array
const int SIZE = 1 << 24; // feel free to change the size of array
const int NPOT = SIZE - 3; // Non-Power-Of-Two
int *a = new int[SIZE];
int *b = new int[SIZE];
Expand All @@ -36,13 +36,13 @@ int main(int argc, char* argv[]) {
// At first all cases passed because b && c are all zeroes.
zeroArray(SIZE, b);
printDesc("cpu scan, power-of-two");
StreamCompaction::CPU::scan(SIZE, b, a);
StreamCompaction::CPU::scan(SIZE, b, a, true);
printElapsedTime(StreamCompaction::CPU::timer().getCpuElapsedTimeForPreviousOperation(), "(std::chrono Measured)");
printArray(SIZE, b, true);

zeroArray(SIZE, c);
printDesc("cpu scan, non-power-of-two");
StreamCompaction::CPU::scan(NPOT, c, a);
StreamCompaction::CPU::scan(NPOT, c, a, true);
printElapsedTime(StreamCompaction::CPU::timer().getCpuElapsedTimeForPreviousOperation(), "(std::chrono Measured)");
printArray(NPOT, b, true);
printCmpResult(NPOT, b, c);
Expand Down Expand Up @@ -144,7 +144,7 @@ int main(int argc, char* argv[]) {
printDesc("work-efficient compact, non-power-of-two");
count = StreamCompaction::Efficient::compact(NPOT, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(count, c, true);
printArray(count, c, true);
printCmpLenResult(count, expectedNPOT, b, c);

system("pause"); // stop Win32 console from closing on exit
Expand Down
10 changes: 8 additions & 2 deletions stream_compaction/common.cu
Expand Up @@ -23,7 +23,9 @@ namespace StreamCompaction {
* which map to 0 will be removed, and elements which map to 1 will be kept.
*/
__global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
// TODO
int index = threadIdx.x + (blockIdx.x * blockDim.x);
if (index >= n) { return; }
bools[index] = idata[index] != 0 ? 1 : 0;
}

/**
Expand All @@ -32,7 +34,11 @@ namespace StreamCompaction {
*/
__global__ void kernScatter(int n, int *odata,
const int *idata, const int *bools, const int *indices) {
// TODO
int index = threadIdx.x + (blockIdx.x * blockDim.x);
if (index >= n) { return; }
if (bools[index]) {
    odata[indices[index]] = idata[index];
}
}

}
Expand Down
44 changes: 39 additions & 5 deletions stream_compaction/cpu.cu
Expand Up @@ -17,10 +17,14 @@ namespace StreamCompaction {
* For performance analysis, this is supposed to be a simple for loop.
* (Optional) For better understanding before starting moving to GPU, you can simulate your GPU scan in this function first.
*/
void scan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
void scan(int n, int *odata, const int *idata, bool isTimed) {
if (isTimed) timer().startCpuTimer();
// TODO
timer().endCpuTimer();
odata[0] = 0;
for (int i = 1; i < n; i++) {
    odata[i] = odata[i - 1] + idata[i - 1];
}
if (isTimed) timer().endCpuTimer();
}

/**
Expand All @@ -31,8 +35,15 @@ namespace StreamCompaction {
int compactWithoutScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
int count = 0;
for (int i = 0; i < n; i++) {
    if (idata[i] != 0) {
        odata[count] = idata[i];
        count++;
    }
}
timer().endCpuTimer();
return -1;
return count;
}

/**
Expand All @@ -43,8 +54,31 @@ namespace StreamCompaction {
int compactWithScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO

int* temp = new int[n]();
int* scan = new int[n]();

// Map to 0/1 flags
for (int i = 0; i < n; i++) {
    temp[i] = (idata[i] != 0) ? 1 : 0;
}

// Scan
CPU::scan(n, scan, temp, false);

// Scatter
for (int i = 0; i < n; i++) {
    if (temp[i] != 0) {
        odata[scan[i]] = idata[i];
    }
}

// exclusive scan omits the final element, so add its flag back in
int count = scan[n - 1] + temp[n - 1];
delete[] temp;
delete[] scan;

timer().endCpuTimer();
return -1;
return count;
}
}
}
2 changes: 1 addition & 1 deletion stream_compaction/cpu.h
Expand Up @@ -6,7 +6,7 @@ namespace StreamCompaction {
namespace CPU {
StreamCompaction::Common::PerformanceTimer& timer();

void scan(int n, int *odata, const int *idata);
void scan(int n, int *odata, const int *idata, bool isTimed);

int compactWithoutScan(int n, int *odata, const int *idata);

Expand Down