diff --git a/README.md b/README.md
index 0e38ddb..c215ba6 100644
--- a/README.md
+++ b/README.md
@@ -3,12 +3,123 @@ CUDA Stream Compaction
**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**
-* (TODO) YOUR NAME HERE
- * (TODO) [LinkedIn](), [personal website](), [twitter](), etc.
-* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
+* Saksham Nagpal
+ * [LinkedIn](https://www.linkedin.com/in/nagpalsaksham/)
+* Tested on: Windows 11 Home, AMD Ryzen 7 6800H Radeon @ 3.2GHz 16GB, NVIDIA GeForce RTX 3050 Ti Laptop GPU 4096MB
-### (TODO: Your README)
+Introduction
+====
+This project implements the **Scan (All-Prefix-Sums)** and **Stream Compaction** algorithms. We first implement both algorithms on the CPU, then implement them in CUDA to run on the GPU. A performance analysis comparing the different approaches follows.
-Include analysis, etc. (Remember, this is public, so don't put
-anything here that you don't want to share with the world.)
+### Scan
+The Scan algorithm takes an array and computes, for each index, the prefix sum obtained by applying a binary operator (here, addition) to all preceding elements. We implement this algorithm in 3 different ways:
+1. CPU Scan: A sequential CPU-side version that accumulates a running sum and writes it out at each index.
+2. Naive Scan: A direct translation of the CPU algorithm to the GPU, which accumulates the result over multiple passes by adding pairs of elements at increasing offsets.
+3. Work-Efficient Scan: An optimization of the naive scan that treats the input array as a balanced binary tree and performs an _up-sweep_ and a _down-sweep_ to accumulate the result.
+
+### Stream Compaction
+Stream Compaction operates on an array based on a given condition and filters out the elements that do not meet it, thus 'compacting' the data stream. In this project, we compact arrays of integers by filtering out all zero elements. We implement this algorithm in 3 different ways:
+1. A simple CPU version,
+2. A CPU version that imitates the parallelized version using _scan_ and _scatter_ passes, and
+3. a GPU version that maps the input to booleans, runs _scan_ on the mapped boolean array, and then runs _scatter_ to get the compacted output.
+
+
+Performance Analysis
+====
+
+* ## Figuring out the appropriate block size
+First, we measure the performance of our different implementations against varying block sizes to determine the most suitable block size before continuing our comparison.
+
+| ![](img/time_vs_blocksize_2_21.png) |
+|:--:|
+| *Time (ms) VS Block Size using an array of 2<sup>21</sup> elements* |
+
+We see a significant performance increase as the block size grows to 64; beyond that, the gains are negligible. For further performance comparisons, we use a block size of 128.
+
+* ## Comparing Scan implementations
+
+| ![](img/scan_time_vs_array_size_blocksize128.png) |
+|:--:|
+| ***Scan:** Time (ms) VS Array Size using Block Size of 128* |
+
+Next, we compare how our different implementations perform across varying array sizes. Thrust's implementation is clearly the fastest, while our naive implementation is the slowest. Interestingly, the sequential CPU implementation outperforms both the Naive and the Work-Efficient GPU-based methods for the most part, though the Work-Efficient implementation catches up to it for larger arrays.
+
+* ## Comparing Stream Compaction implementations
+
+| ![](img/compact_time_vs_array_size_blocksize128.png) |
+|:--:|
+| ***Stream Compaction:** Time (ms) VS Array Size using Block Size of 128* |
+
+Lastly, we compare the performance of our 3 different implementations of stream compaction. The GPU-based Work-Efficient compaction significantly outperforms the other two methods.
+
+#### Can you find the performance bottlenecks? Is it memory I/O? Computation? Is it different for each implementation?
+As we can see, the GPU-based implementations (Naive and Work-Efficient) of the Scan algorithm are not that 'efficient' - even the CPU-based approach outperforms them. Some observations that can be made in this regard are as follows:
+1. One reason for this bottleneck could be **warp partitioning**. Our implementations divide the array into sub-arrays and add their corresponding elements, and the current indexing can leave half of the threads in each warp stalled, causing warp divergence. The indexing could potentially be tweaked so that after each division step entire warps end up idle; those warps can then be retired, freeing the scheduler to run other warps.
+2. As an optimization to the Work-Efficient method, we launch only as many threads as there are elements in the divided sub-array at each iteration. This is likely why this approach catches up to the CPU implementation's performance in the second graph above, since the parallel implementation then offsets the sequential computation, especially for larger arrays.
+
+Output
+====
+The following tests were run with an array size of **2<sup>21</sup>**, a non-power-of-two array size of **2<sup>21</sup> - 3**, and a block size of **128**.
+```
+****************
+** SCAN TESTS **
+****************
+ [ 38 48 2 7 4 39 42 0 33 1 4 46 46 ... 37 0 ]
+==== cpu scan, power-of-two ====
+ elapsed time: 1.2294ms (std::chrono Measured)
+ [ 0 38 86 88 95 99 138 180 180 213 214 218 264 ... 51329733 51329770 ]
+==== cpu scan, non-power-of-two ====
+ elapsed time: 1.2644ms (std::chrono Measured)
+ [ 0 38 86 88 95 99 138 180 180 213 214 218 264 ... 51329672 51329705 ]
+ passed
+==== naive scan, power-of-two ====
+ elapsed time: 2.62829ms (CUDA Measured)
+ [ 0 38 86 88 95 99 138 180 180 213 214 218 264 ... 51329733 51329770 ]
+ passed
+==== naive scan, non-power-of-two ====
+ elapsed time: 3.84182ms (CUDA Measured)
+ [ 0 38 86 88 95 99 138 180 180 213 214 218 264 ... 51329672 51329705 ]
+ passed
+==== work-efficient scan, power-of-two ====
+ elapsed time: 1.43606ms (CUDA Measured)
+ [ 0 38 86 88 95 99 138 180 180 213 214 218 264 ... 51329733 51329770 ]
+ passed
+==== work-efficient scan, non-power-of-two ====
+ elapsed time: 1.27795ms (CUDA Measured)
+ [ 0 38 86 88 95 99 138 180 180 213 214 218 264 ... 51329672 51329705 ]
+ passed
+==== thrust scan, power-of-two ====
+ elapsed time: 0.882496ms (CUDA Measured)
+ [ 0 38 86 88 95 99 138 180 180 213 214 218 264 ... 51329733 51329770 ]
+ passed
+==== thrust scan, non-power-of-two ====
+ elapsed time: 1.51619ms (CUDA Measured)
+ [ 0 38 86 88 95 99 138 180 180 213 214 218 264 ... 51329672 51329705 ]
+ passed
+
+*****************************
+** STREAM COMPACTION TESTS **
+*****************************
+ [ 2 2 2 3 2 3 0 0 3 1 0 2 2 ... 1 0 ]
+==== cpu compact without scan, power-of-two ====
+ elapsed time: 4.7782ms (std::chrono Measured)
+ [ 2 2 2 3 2 3 3 1 2 2 3 2 1 ... 3 1 ]
+ passed
+==== cpu compact without scan, non-power-of-two ====
+ elapsed time: 4.5961ms (std::chrono Measured)
+ [ 2 2 2 3 2 3 3 1 2 2 3 2 1 ... 1 3 ]
+ passed
+==== cpu compact with scan ====
+ elapsed time: 9.5317ms (std::chrono Measured)
+ [ 2 2 2 3 2 3 3 1 2 2 3 2 1 ... 3 1 ]
+ passed
+==== work-efficient compact, power-of-two ====
+ elapsed time: 2.03965ms (CUDA Measured)
+ [ 2 2 2 3 2 3 3 1 2 2 3 2 1 ... 3 1 ]
+ passed
+==== work-efficient compact, non-power-of-two ====
+ elapsed time: 1.76128ms (CUDA Measured)
+ [ 2 2 2 3 2 3 3 1 2 2 3 2 1 ... 1 3 ]
+ passed
+```
diff --git a/img/compact_time_vs_array_size_blocksize128.png b/img/compact_time_vs_array_size_blocksize128.png
new file mode 100644
index 0000000..8b1526c
Binary files /dev/null and b/img/compact_time_vs_array_size_blocksize128.png differ
diff --git a/img/scan_time_vs_array_size_blocksize128.png b/img/scan_time_vs_array_size_blocksize128.png
new file mode 100644
index 0000000..e238463
Binary files /dev/null and b/img/scan_time_vs_array_size_blocksize128.png differ
diff --git a/img/time_vs_blocksize_2_21.png b/img/time_vs_blocksize_2_21.png
new file mode 100644
index 0000000..e893304
Binary files /dev/null and b/img/time_vs_blocksize_2_21.png differ
diff --git a/src/main.cpp b/src/main.cpp
index 896ac2b..4001836 100644
--- a/src/main.cpp
+++ b/src/main.cpp
@@ -13,16 +13,51 @@
#include
#include "testing_helpers.hpp"
-const int SIZE = 1 << 8; // feel free to change the size of array
+const int SIZE = 1 << 21; // feel free to change the size of array
const int NPOT = SIZE - 3; // Non-Power-Of-Two
int *a = new int[SIZE];
int *b = new int[SIZE];
int *c = new int[SIZE];
+int main1(int argc, char* argv[]) {
+ int bs = 16;
+ while (bs <= 1024)
+ {
+ float cpuScan = 0.0;
+ float naiveScan = 0.0;
+ float workEffScan = 0.0;
+ float thrustScan = 0.0;
+ for (int i = 0; i < 100; i++) {
+ genArray(SIZE - 5, a, 50);
+ a[SIZE - 1] = 0;
+
+ zeroArray(SIZE, c);
+ //printDesc("naive scan, non-power-of-two");
+ StreamCompaction::Naive::scan(NPOT, c, a, bs);
+ naiveScan += StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation();
+
+ zeroArray(SIZE, c);
+ //printDesc("work-efficient scan, non-power-of-two");
+ StreamCompaction::Efficient::scan(NPOT, c, a, bs);
+ workEffScan += StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation();
+
+ }
+ std::cout << " BLOCK SIZE: " << bs << std::endl;
+ std::cout << " Naive Scan: " << naiveScan/100.f << std::endl;
+ std::cout << " Work Efficient Scan: " << workEffScan / 100.f << std::endl;
+ bs *= 2;
+ }
+ system("pause"); // stop Win32 console from closing on exit
+ delete[] a;
+ delete[] b;
+ delete[] c;
+ return 0;
+}
+
int main(int argc, char* argv[]) {
// Scan tests
-
printf("\n");
+ std::cout << " BLOCK SIZE: " << BLOCK_SIZE << std::endl;
printf("****************\n");
printf("** SCAN TESTS **\n");
printf("****************\n");
@@ -51,7 +86,7 @@ int main(int argc, char* argv[]) {
printDesc("naive scan, power-of-two");
StreamCompaction::Naive::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
- //printArray(SIZE, c, true);
+ printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);
/* For bug-finding only: Array of 1s to help find bugs in stream compaction or scan
@@ -64,35 +99,35 @@ int main(int argc, char* argv[]) {
printDesc("naive scan, non-power-of-two");
StreamCompaction::Naive::scan(NPOT, c, a);
printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
- //printArray(SIZE, c, true);
+ printArray(NPOT, c, true);
printCmpResult(NPOT, b, c);
zeroArray(SIZE, c);
printDesc("work-efficient scan, power-of-two");
StreamCompaction::Efficient::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
- //printArray(SIZE, c, true);
+ printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);
zeroArray(SIZE, c);
printDesc("work-efficient scan, non-power-of-two");
StreamCompaction::Efficient::scan(NPOT, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
- //printArray(NPOT, c, true);
+ printArray(NPOT, c, true);
printCmpResult(NPOT, b, c);
zeroArray(SIZE, c);
printDesc("thrust scan, power-of-two");
StreamCompaction::Thrust::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Thrust::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
- //printArray(SIZE, c, true);
+ printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);
zeroArray(SIZE, c);
printDesc("thrust scan, non-power-of-two");
StreamCompaction::Thrust::scan(NPOT, c, a);
printElapsedTime(StreamCompaction::Thrust::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
- //printArray(NPOT, c, true);
+ printArray(NPOT, c, true);
printCmpResult(NPOT, b, c);
printf("\n");
@@ -137,14 +172,14 @@ int main(int argc, char* argv[]) {
printDesc("work-efficient compact, power-of-two");
count = StreamCompaction::Efficient::compact(SIZE, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
- //printArray(count, c, true);
+ printArray(count, c, true);
printCmpLenResult(count, expectedCount, b, c);
zeroArray(SIZE, c);
printDesc("work-efficient compact, non-power-of-two");
count = StreamCompaction::Efficient::compact(NPOT, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
- //printArray(count, c, true);
+ printArray(count, c, true);
printCmpLenResult(count, expectedNPOT, b, c);
system("pause"); // stop Win32 console from closing on exit
diff --git a/stream_compaction/common.cu b/stream_compaction/common.cu
index 2ed6d63..0acc872 100644
--- a/stream_compaction/common.cu
+++ b/stream_compaction/common.cu
@@ -23,7 +23,11 @@ namespace StreamCompaction {
* which map to 0 will be removed, and elements which map to 1 will be kept.
*/
__global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
- // TODO
+ int index = threadIdx.x + (blockIdx.x * blockDim.x);
+ if (index >= n) {
+ return;
+ }
+ bools[index] = idata[index] == 0 ? 0 : 1;
}
/**
@@ -32,7 +36,13 @@ namespace StreamCompaction {
*/
__global__ void kernScatter(int n, int *odata,
const int *idata, const int *bools, const int *indices) {
- // TODO
+ int index = threadIdx.x + (blockIdx.x * blockDim.x);
+ if (index >= n) {
+ return;
+ }
+ if (bools[index] == 1) {
+ odata[indices[index]] = idata[index];
+ }
}
}
diff --git a/stream_compaction/common.h b/stream_compaction/common.h
index d2c1fed..22338bd 100644
--- a/stream_compaction/common.h
+++ b/stream_compaction/common.h
@@ -12,6 +12,8 @@
#define FILENAME (strrchr(__FILE__, '/') ? strrchr(__FILE__, '/') + 1 : __FILE__)
#define checkCUDAError(msg) checkCUDAErrorFn(msg, FILENAME, __LINE__)
+#define BLOCK_SIZE 128
+
/**
* Check for CUDA errors; print and exit if there was a problem.
diff --git a/stream_compaction/cpu.cu b/stream_compaction/cpu.cu
index 719fa11..734e617 100644
--- a/stream_compaction/cpu.cu
+++ b/stream_compaction/cpu.cu
@@ -18,8 +18,12 @@ namespace StreamCompaction {
* (Optional) For better understanding before starting moving to GPU, you can simulate your GPU scan in this function first.
*/
void scan(int n, int *odata, const int *idata) {
- timer().startCpuTimer();
- // TODO
+ timer().startCpuTimer();
+ odata[0] = 0;
+ for (int i = 1; i < n; i++)
+ {
+ odata[i] = odata[i - 1] + idata[i - 1];
+ }
timer().endCpuTimer();
}
@@ -30,9 +34,15 @@ namespace StreamCompaction {
*/
int compactWithoutScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
- // TODO
+ int index = 0;
+ for (int i = 0; i < n; i++) {
+ if (idata[i] != 0) {
+ odata[index] = idata[i];
+ index++;
+ }
+ }
timer().endCpuTimer();
- return -1;
+ return index;
}
/**
@@ -41,10 +51,37 @@ namespace StreamCompaction {
* @returns the number of elements remaining after compaction.
*/
int compactWithScan(int n, int *odata, const int *idata) {
- timer().startCpuTimer();
- // TODO
+ int* temp = new int[n];
+ int* scan = new int[n];
+ timer().startCpuTimer();
+ for (int i = 0; i < n; i++) {
+ if (idata[i] == 0) {
+ temp[i] = 0;
+ }
+ else {
+ temp[i] = 1;
+ }
+ }
+
+ //scan
+ scan[0] = 0;
+ for (int i = 1; i < n; i++)
+ {
+ scan[i] = scan[i - 1] + temp[i - 1];
+ }
+
+ //scatter
+ for (int i = 0; i < n; i++) {
+ if (temp[i] == 1) {
+ odata[scan[i]] = idata[i];
+ }
+ }
+
timer().endCpuTimer();
- return -1;
+ int noOfRemaining = scan[n - 1];
+ delete[] temp;
+ delete[] scan;
+ return noOfRemaining;
}
}
}
diff --git a/stream_compaction/efficient.cu b/stream_compaction/efficient.cu
index 2db346e..c17e60b 100644
--- a/stream_compaction/efficient.cu
+++ b/stream_compaction/efficient.cu
@@ -3,6 +3,26 @@
#include "common.h"
#include "efficient.h"
+__global__ void kernUpSweep(int n, int d, int* data) {
+ int index = threadIdx.x + (blockIdx.x * blockDim.x);
+ if (index >= n) {
+ return;
+ }
+ int k = (index + 1) * (1 << (d + 1)) - 1;
+ data[k] += data[k - (1 << d)];
+}
+
+__global__ void kernDownSweep(int n, int d, int* data) {
+ int index = threadIdx.x + (blockIdx.x * blockDim.x);
+ if (index >= n) {
+ return;
+ }
+ int k = (index + 1) * (1 << (d + 1)) - 1;
+ int val = data[k - (1 << d)];
+ data[k - (1 << d)] = data[k];
+ data[k] += val;
+}
+
namespace StreamCompaction {
namespace Efficient {
using StreamCompaction::Common::PerformanceTimer;
@@ -15,10 +35,37 @@ namespace StreamCompaction {
/**
* Performs prefix-sum (aka scan) on idata, storing the result into odata.
*/
- void scan(int n, int *odata, const int *idata) {
+ void scan(int n, int *odata, const int *idata, int BLOCKSIZE) {
+ int* dev_data;
+ int paddedSize = 1 << ilog2ceil(n);
+ int noOfIters = ilog2ceil(paddedSize) - 1;
+
+ cudaMalloc((void**)&dev_data, paddedSize * sizeof(int));
+ checkCUDAError("cudaMalloc dev_data failed!");
+
+ cudaMemcpy(dev_data, idata, sizeof(int) * n, cudaMemcpyHostToDevice);
+ cudaMemset(&dev_data[n], 0, sizeof(int) * (paddedSize - n));
timer().startGpuTimer();
- // TODO
+
+ //up sweep
+ for (int d = 0; d <= noOfIters; d++) {
+ int noOfElementsToBeUpdated = paddedSize / (1 << (d + 1));
+ dim3 fullBlocksPerGrid((noOfElementsToBeUpdated + BLOCKSIZE - 1) / BLOCKSIZE);
+ kernUpSweep<<<fullBlocksPerGrid, BLOCKSIZE>>>(noOfElementsToBeUpdated, d, dev_data);
+ }
+
+ //set last element to zero before starting down sweep
+ cudaMemset(&dev_data[paddedSize - 1], 0, sizeof(int));
+
+ //down sweep
+ for (int d = noOfIters; d >= 0; d--) {
+ int noOfElementsToBeUpdated = paddedSize / (1 << (d + 1));
+ dim3 fullBlocksPerGrid((noOfElementsToBeUpdated + BLOCKSIZE - 1) / BLOCKSIZE);
+ kernDownSweep<<<fullBlocksPerGrid, BLOCKSIZE>>>(noOfElementsToBeUpdated, d, dev_data);
+ }
timer().endGpuTimer();
+ cudaMemcpy(odata, dev_data, sizeof(int) * n, cudaMemcpyDeviceToHost);
+ cudaFree(dev_data);
}
/**
@@ -30,11 +77,70 @@ namespace StreamCompaction {
* @param idata The array of elements to compact.
* @returns The number of elements remaining after compaction.
*/
- int compact(int n, int *odata, const int *idata) {
+ int compact(int n, int *odata, const int *idata, int BLOCKSIZE) {
+ int* dev_idata;
+ int* dev_odata;
+ int* dev_bools;
+ int* dev_indices;
+
+ int paddedSize = 1 << ilog2ceil(n);
+ int noOfIters = ilog2ceil(paddedSize) - 1;
+
+ cudaMalloc((void**)&dev_idata, paddedSize * sizeof(int));
+ checkCUDAError("cudaMalloc dev_idata failed!");
+ cudaMemcpy(dev_idata, idata, sizeof(int) * n, cudaMemcpyHostToDevice);
+ cudaMemset(&dev_idata[n], 0, sizeof(int) * (paddedSize - n));
+
+ cudaMalloc((void**)&dev_odata, paddedSize * sizeof(int));
+ checkCUDAError("cudaMalloc dev_odata failed!");
+
+ cudaMalloc((void**)&dev_bools, paddedSize * sizeof(int));
+ checkCUDAError("cudaMalloc dev_bools failed!");
+
+ cudaMalloc((void**)&dev_indices, paddedSize * sizeof(int));
+ checkCUDAError("cudaMalloc dev_indices failed!");
+
timer().startGpuTimer();
- // TODO
+
+ //Map to booleans
+ dim3 fullBlocksPerGrid((paddedSize + BLOCKSIZE - 1) / BLOCKSIZE);
+ StreamCompaction::Common::kernMapToBoolean<<<fullBlocksPerGrid, BLOCKSIZE>>>(paddedSize, dev_bools, dev_idata);
+
+ cudaMemcpy(dev_indices, dev_bools, sizeof(int) * paddedSize, cudaMemcpyDeviceToDevice);
+
+ //scan
+
+ //up sweep
+ for (int d = 0; d <= noOfIters; d++) {
+ int noOfElementsToBeUpdated = paddedSize / (1 << (d + 1));
+ dim3 fullBlocksPerGrid((noOfElementsToBeUpdated + BLOCKSIZE - 1) / BLOCKSIZE);
+ kernUpSweep<<<fullBlocksPerGrid, BLOCKSIZE>>>(noOfElementsToBeUpdated, d, dev_indices);
+ }
+
+ //set last element to zero before starting down sweep
+ cudaMemset(&dev_indices[paddedSize - 1], 0, sizeof(int));
+
+ //down sweep
+ for (int d = noOfIters; d >= 0; d--) {
+ int noOfElementsToBeUpdated = paddedSize / (1 << (d + 1));
+ dim3 fullBlocksPerGrid((noOfElementsToBeUpdated + BLOCKSIZE - 1) / BLOCKSIZE);
+ kernDownSweep<<<fullBlocksPerGrid, BLOCKSIZE>>>(noOfElementsToBeUpdated, d, dev_indices);
+ }
+
+ int returnVal;
+ cudaMemcpy(&returnVal, dev_indices + paddedSize - 1, sizeof(int), cudaMemcpyDeviceToHost);
+
+ //scatter
+ StreamCompaction::Common::kernScatter<<<fullBlocksPerGrid, BLOCKSIZE>>>(paddedSize, dev_odata, dev_idata, dev_bools, dev_indices);
timer().endGpuTimer();
- return -1;
+ cudaMemcpy(odata, dev_odata, sizeof(int) * returnVal, cudaMemcpyDeviceToHost);
+
+ //cleanup
+ cudaFree(dev_idata);
+ cudaFree(dev_odata);
+ cudaFree(dev_bools);
+ cudaFree(dev_indices);
+ return returnVal;
}
}
}
diff --git a/stream_compaction/efficient.h b/stream_compaction/efficient.h
index 803cb4f..128941d 100644
--- a/stream_compaction/efficient.h
+++ b/stream_compaction/efficient.h
@@ -6,8 +6,8 @@ namespace StreamCompaction {
namespace Efficient {
StreamCompaction::Common::PerformanceTimer& timer();
- void scan(int n, int *odata, const int *idata);
+ void scan(int n, int *odata, const int *idata, int BLOCKSIZE = BLOCK_SIZE);
- int compact(int n, int *odata, const int *idata);
+ int compact(int n, int *odata, const int *idata, int BLOCKSIZE = BLOCK_SIZE);
}
}
diff --git a/stream_compaction/naive.cu b/stream_compaction/naive.cu
index 4308876..b979879 100644
--- a/stream_compaction/naive.cu
+++ b/stream_compaction/naive.cu
@@ -3,6 +3,30 @@
#include "common.h"
#include "naive.h"
+__global__ void kernNaiveScan(int offset, int n, int* odata, const int* idata) {
+ int k = threadIdx.x + (blockIdx.x * blockDim.x);
+ if (k >= n) {
+ return;
+ }
+ odata[k] = idata[k];
+ if (k >= offset) {
+ odata[k] += idata[k - offset];
+ }
+}
+
+__global__ void kernRightShift(int n, int* odata, int* idata) {
+ int index = threadIdx.x + (blockIdx.x * blockDim.x);
+ if (index >= n) {
+ return;
+ }
+ if (index == 0) {
+ odata[index] = 0;
+ }
+ else {
+ odata[index] = idata[index - 1];
+ }
+}
+
namespace StreamCompaction {
namespace Naive {
using StreamCompaction::Common::PerformanceTimer;
@@ -16,10 +40,30 @@ namespace StreamCompaction {
/**
* Performs prefix-sum (aka scan) on idata, storing the result into odata.
*/
- void scan(int n, int *odata, const int *idata) {
+ void scan(int n, int* odata, const int* idata, int BLOCKSIZE) {
+ int* dev_in;
+ int* dev_out;
+ int noOfIters = ilog2ceil(n);
+ dim3 fullBlocksPerGrid((n + BLOCKSIZE - 1) / BLOCKSIZE);
+
+ cudaMalloc((void**)&dev_in, n * sizeof(int));
+ checkCUDAError("cudaMalloc dev_in failed!");
+
+ cudaMalloc((void**)&dev_out, n * sizeof(int));
+ checkCUDAError("cudaMalloc dev_out failed!");
+
+ cudaMemcpy(dev_in, idata, sizeof(int) * n, cudaMemcpyHostToDevice);
timer().startGpuTimer();
- // TODO
+ for (int d = 1; d <= noOfIters; d++) {
+ int offset = 1 << (d - 1);
+ kernNaiveScan<<<fullBlocksPerGrid, BLOCKSIZE>>>(offset, n, dev_out, dev_in);
+ std::swap(dev_in, dev_out);
+ }
+ kernRightShift<<<fullBlocksPerGrid, BLOCKSIZE>>>(n, dev_out, dev_in);
timer().endGpuTimer();
+ cudaMemcpy(odata, dev_out, sizeof(int) * n, cudaMemcpyDeviceToHost);
+ cudaFree(dev_in);
+ cudaFree(dev_out);
}
}
}
diff --git a/stream_compaction/naive.h b/stream_compaction/naive.h
index 37dcb06..c7592eb 100644
--- a/stream_compaction/naive.h
+++ b/stream_compaction/naive.h
@@ -6,6 +6,6 @@ namespace StreamCompaction {
namespace Naive {
StreamCompaction::Common::PerformanceTimer& timer();
- void scan(int n, int *odata, const int *idata);
+ void scan(int n, int *odata, const int *idata, int BLOCKSIZE = BLOCK_SIZE);
}
}
diff --git a/stream_compaction/thrust.cu b/stream_compaction/thrust.cu
index 1def45e..3eb98fc 100644
--- a/stream_compaction/thrust.cu
+++ b/stream_compaction/thrust.cu
@@ -18,11 +18,14 @@ namespace StreamCompaction {
* Performs prefix-sum (aka scan) on idata, storing the result into odata.
*/
void scan(int n, int *odata, const int *idata) {
- timer().startGpuTimer();
- // TODO use `thrust::exclusive_scan`
- // example: for device_vectors dv_in and dv_out:
- // thrust::exclusive_scan(dv_in.begin(), dv_in.end(), dv_out.begin());
+ thrust::host_vector<int> host_data(idata, idata + n);
+ thrust::device_vector<int> device_data = host_data;
+ timer().startGpuTimer();
+ //thrust::exclusive_scan(idata, idata + n, odata);
+ thrust::exclusive_scan(device_data.begin(), device_data.end(), device_data.begin());
+ //host_data = device_data;
timer().endGpuTimer();
+ cudaMemcpy(odata, device_data.data().get(), sizeof(int) * n, cudaMemcpyDeviceToHost);
}
}
}