This project implements a parallel sorting algorithm using MPI on a virtual hypercube architecture. We leverage the power of distributed computing to sort large datasets efficiently.
- Merge Sort: Efficient sequential sorting algorithm
- Bitonic Sort: Parallel implementation using hypercube topology
- MPI: Utilized for inter-process communication
- Scalability: Tested on various dataset sizes and processor counts
- Linear speedup with increasing processor count
- High Computing-over-Communication Ratio ($CCR > 98\%$)
- Efficient handling of large datasets (up to $2^{30}$ elements)
- Sequential Merge Sort implemented in C
- Parallel Bitonic Sort implemented using MPI
- Tested on CAPRI High-Performance Computing (HPC) system
- Speedup
- Computing-over-Communication Ratio
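These two metrics are commonly defined as follows (a sketch of the standard definitions; the accompanying paper's exact formulation may differ). Here $T_{\text{seq}}$ is the sequential runtime, $T_{\text{par}}(p)$ the parallel runtime on $p$ processes, and $T_{\text{comp}}$, $T_{\text{comm}}$ the per-run computation and communication times:

```latex
S(p) = \frac{T_{\text{seq}}}{T_{\text{par}}(p)},
\qquad
CCR = \frac{T_{\text{comp}}}{T_{\text{comp}} + T_{\text{comm}}}
```

With the fractional form of $CCR$, a value above $98\%$ means communication accounts for under $2\%$ of total runtime.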
- Clone the repository
- Ensure MPI is installed on your system
- Compile the code:
make
- Run the sequential algorithm:
./test/test_seq
- Run the parallel algorithm:
mpirun -np <num_processors> ./test/test_par
NOTICE: This repository is focused on benchmark analysis. You won't see the actual sorted data here, but rather meaningful metrics and performance measurements. This approach allows us to concentrate on analyzing the efficiency and scalability of our parallel sorting implementation across various scenarios.
For a detailed explanation of the algorithm and performance analysis, please refer to the accompanying paper: Parallel Computing: MPI Parallel Sorting by Francesco Biscaccia Carrara.
This project is licensed under the MIT License with a Non-Commercial Clause - see the LICENSE file for details.
Made with 💻 and ☕ by Francesco Biscaccia Carrara