An Erasure Codes engine (based on Reed-Solomon Codes) in pure Go.
It's a kind of Systematic Code, which means the input data is embedded in the encoded output.
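As a tiny illustration of the systematic property (a single XOR parity here, not this library's API; the real engine computes several Reed-Solomon parities, but the layout is the same), the encoded output starts with the data shards verbatim:

```go
package main

import (
	"bytes"
	"fmt"
)

// encodeXOR is a minimal systematic-code sketch with one XOR parity shard.
// Data shards pass through unchanged; only the parity shard is computed.
func encodeXOR(data [][]byte) [][]byte {
	parity := make([]byte, len(data[0]))
	for _, shard := range data {
		for i, b := range shard {
			parity[i] ^= b
		}
	}
	return append(data, parity)
}

func main() {
	data := [][]byte{{1, 2}, {3, 4}, {5, 6}}
	out := encodeXOR(data)
	fmt.Println(bytes.Equal(out[0], []byte{1, 2})) // the input is embedded: true
	fmt.Println(out[3])                            // the extra parity shard: [7 0]
}
```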
High Performance: More than 15GB/s per physical core.
High Reliability:
- At least two companies are using this library in their storage systems (holding dozens of PB of data).
- Full tests of the Galois Field calculations and invertible matrices (you can also find the mathematical proofs in this repo).
Based on Klauspost ReedSolomon & Intel ISA-L with some additional changes/optimizations.
It's the backend of XRS (Erasure Codes which can save about 30% of I/O in the reconstruction process).
Coding is done over GF(2^8).
Primitive Polynomial: x^8 + x^4 + x^3 + x^2 + 1 (0x1d).
A Cauchy Matrix is used as the generator matrix.
- Any square submatrix of the encoding matrix is invertible (see the proof here).
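A sketch of how the parity rows of such a generator can be built. The helper names and the index choice (x_i = data+i, y_j = j, two disjoint sets so every denominator is nonzero) are illustrative, not the library's internals:

```go
package main

import "fmt"

const poly = 0x11d // x^8 + x^4 + x^3 + x^2 + 1

// mul multiplies in GF(2^8) by shift-and-reduce (no tables needed here).
func mul(a, b byte) byte {
	var p byte
	aa, bb := int(a), int(b)
	for bb > 0 {
		if bb&1 != 0 {
			p ^= byte(aa)
		}
		aa <<= 1
		if aa&0x100 != 0 {
			aa ^= poly
		}
		bb >>= 1
	}
	return p
}

// inv finds the multiplicative inverse by brute force (fine for a sketch).
func inv(a byte) byte {
	for b := 1; b < 256; b++ {
		if mul(a, byte(b)) == 1 {
			return byte(b)
		}
	}
	return 0
}

// cauchy builds the parity part of the generator matrix:
// element (i, j) = 1 / (x_i + y_j) with x_i = data+i and y_j = j.
// Because the x and y sets never overlap, x_i + y_j is never zero.
func cauchy(data, parity int) [][]byte {
	m := make([][]byte, parity)
	for i := range m {
		m[i] = make([]byte, data)
		for j := 0; j < data; j++ {
			m[i][j] = inv(byte(data+i) ^ byte(j))
		}
	}
	return m
}

func main() {
	m := cauchy(10, 4)
	fmt.Println(len(m), len(m[0])) // 4 10
}
```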
Galois Field Tool: Generates the primitive polynomial and its log, exponent, multiplication and inverse tables, etc.
Inverse Matrices Tool: Calculates the number of inverse matrices for a specific data & parity count.
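A minimal sketch of the tables such a tool produces for this polynomial (2 is a primitive element of GF(2^8) under 0x11d; the table and function names here are illustrative):

```go
package main

import "fmt"

// Tables for GF(2^8) with x^8 + x^4 + x^3 + x^2 + 1 (0x11d; 0x1d is the
// low byte). expTbl is doubled so gfMul needs no modulo-255 step.
const poly = 0x11d

var (
	expTbl [510]byte
	logTbl [256]byte
)

func init() {
	x := 1
	for i := 0; i < 255; i++ {
		expTbl[i] = byte(x)
		expTbl[i+255] = byte(x)
		logTbl[x] = byte(i)
		x <<= 1 // multiply by the primitive element 2
		if x&0x100 != 0 {
			x ^= poly // reduce by the primitive polynomial
		}
	}
}

// gfMul multiplies via a log-table lookup: a*b = exp(log a + log b).
func gfMul(a, b byte) byte {
	if a == 0 || b == 0 {
		return 0
	}
	return expTbl[int(logTbl[a])+int(logTbl[b])]
}

// gfInv returns the multiplicative inverse: a^-1 = alpha^(255 - log a).
func gfInv(a byte) byte {
	return expTbl[255-int(logTbl[a])]
}

func main() {
	fmt.Printf("2*2=%d, 0x80*2=%#x, 123*inv(123)=%d\n",
		gfMul(2, 2), gfMul(0x80, 2), gfMul(123, gfInv(123)))
	// prints "2*2=4, 0x80*2=0x1d, 123*inv(123)=1"
}
```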
XP has written an excellent article (here, in Chinese) about how Erasure Codes work and the math behind them. It's a good starting point.
SIMD: Screaming Fast Galois Field Arithmetic Using Intel SIMD Instructions
Reduce memory I/O: Write cache-friendly code. When multiplying two matrices, we have to read the data several times, keep temporary results, then write them back to memory. If we can keep more data in the CPU's cache instead of reading/writing memory again and again, performance improves a lot.
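A sketch of that idea. Plain XOR stands in for the real GF(2^8) multiply-accumulate so the memory access pattern stays visible, and the chunk size is illustrative:

```go
package main

import (
	"bytes"
	"fmt"
)

// naive: each (parity, data) pair streams the full vectors from memory,
// so the same bytes are fetched from RAM over and over.
func naive(data [][]byte, parityRows int) [][]byte {
	n := len(data[0])
	out := make([][]byte, parityRows)
	for p := range out {
		out[p] = make([]byte, n)
		for _, d := range data {
			for i := 0; i < n; i++ { // full-length pass per pair
				out[p][i] ^= d[i]
			}
		}
	}
	return out
}

// blocked: split the vectors into cache-sized chunks and finish all the
// parity work for one chunk while it is still hot in the CPU cache.
func blocked(data [][]byte, parityRows, chunk int) [][]byte {
	n := len(data[0])
	out := make([][]byte, parityRows)
	for p := range out {
		out[p] = make([]byte, n)
	}
	for start := 0; start < n; start += chunk {
		end := start + chunk
		if end > n {
			end = n
		}
		for p := range out {
			for _, d := range data {
				for i := start; i < end; i++ {
					out[p][i] ^= d[i]
				}
			}
		}
	}
	return out
}

func main() {
	data := make([][]byte, 10)
	for r := range data {
		data[r] = make([]byte, 1<<12)
		for i := range data[r] {
			data[r][i] = byte(r + i)
		}
	}
	a := naive(data, 4)
	b := blocked(data, 4, 1024)
	fmt.Println(bytes.Equal(a[0], b[0]) && bytes.Equal(a[3], b[3])) // same result: true
}
```

Both versions do the same arithmetic; only the loop order (and therefore the memory traffic) differs.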
Cache inverse matrices: It saves thousands of nanoseconds. Not much, but still meaningful for small data.
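A sketch of such a cache (names are illustrative, and invert is a placeholder for real Gaussian elimination over GF(2^8)). The key is a bitmap of the surviving rows, so a repeated loss pattern skips the inversion entirely:

```go
package main

import (
	"fmt"
	"sync"
)

type matrix [][]byte

var invCache sync.Map // bitmap of surviving rows -> matrix

// invert is a placeholder for real matrix inversion over GF(2^8).
func invert(surviving uint64) matrix {
	return matrix{{byte(surviving)}}
}

// decodeMatrix returns the cached inverse for this loss pattern,
// computing and storing it on the first miss.
func decodeMatrix(surviving uint64) matrix {
	if m, ok := invCache.Load(surviving); ok {
		return m.(matrix) // hit: skips the re-inversion
	}
	m := invert(surviving)
	invCache.Store(surviving, m)
	return m
}

func main() {
	a := decodeMatrix(0b1011)
	b := decodeMatrix(0b1011)
	fmt.Println(&a[0][0] == &b[0][0]) // second call served from cache: true
}
```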
...
Here (in Chinese) is an article about how to write a fast Erasure Codes engine. (I wrote it years ago; it needs an update, but the main ideas still hold.)
Performance depends mainly on:
CPU instruction extensions.
Number of data/parity row vectors.
Platform:
AWS c5d.xlarge (Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz)
All tests run on a single core.
Encode:
I/O = (data + parity) * vector_size / cost
Base means no SIMD.
Data | Parity | Vector size | AVX512 I/O (MB/S) | AVX2 I/O (MB/S) | Base I/O (MB/S) |
---|---|---|---|---|---|
10 | 2 | 4KB | 29683.69 | 21371.43 | 910.45 |
10 | 2 | 1MB | 17664.67 | 15505.58 | 917.26 |
10 | 2 | 8MB | 10363.05 | 9323.60 | 914.62 |
10 | 4 | 4KB | 17708.62 | 12705.35 | 531.82 |
10 | 4 | 1MB | 11970.42 | 9804.57 | 536.31 |
10 | 4 | 8MB | 7957.9 | 6941.69 | 534.82 |
12 | 4 | 4KB | 16902.12 | 12065.14 | 511.95 |
12 | 4 | 1MB | 11478.86 | 9392.33 | 514.24 |
12 | 4 | 8MB | 7949.81 | 6760.49 | 513.06 |
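The metric above, spelled out as code (the cost value in the example call is hypothetical, not taken from the table):

```go
package main

import "fmt"

// ioMBps computes the benchmark metric: total bytes touched by one
// Encode (data + parity vectors) divided by how long the call took.
func ioMBps(dataRows, parityRows, vectorSize int, costNs float64) float64 {
	total := float64((dataRows + parityRows) * vectorSize)
	return total / costNs * 1e9 / 1e6 // ns -> s, B -> MB
}

func main() {
	// e.g. 10+2 config, 4KB vectors, hypothetical 2000 ns per Encode
	fmt.Printf("%.2f MB/s\n", ioMBps(10, 2, 4*1024, 2000)) // prints "24576.00 MB/s"
}
```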
Reconstruct:
I/O = (data + reconstruct_data_num) * vector_size / cost
Data | Parity | Vector size | Reconstruct Data Num | AVX512 I/O (MB/S) |
---|---|---|---|---|
10 | 4 | 4KB | 1 | 29830.36 |
10 | 4 | 4KB | 2 | 21649.61 |
10 | 4 | 4KB | 3 | 17088.41 |
10 | 4 | 4KB | 4 | 14567.26 |
Update:
I/O = (2 + parity_num + parity_num) * vector_size / cost
Data | Parity | Vector size | AVX512 I/O (MB/S) |
---|---|---|---|
10 | 4 | 4KB | 36444.13 |
Replace:
I/O = (parity_num + parity_num + replace_data_num) * vector_size / cost
Data | Parity | Vector size | Replace Data Num | AVX512 I/O (MB/S) |
---|---|---|---|---|
10 | 4 | 4KB | 1 | 78464.33 |
10 | 4 | 4KB | 2 | 50068.71 |
10 | 4 | 4KB | 3 | 38808.11 |
10 | 4 | 4KB | 4 | 32457.60 |
10 | 4 | 4KB | 5 | 28679.46 |
10 | 4 | 4KB | 6 | 26151.85 |
PS:
Keep in mind that these benchmarks are quite different from encoding/decoding in practice, because in the benchmark loops the CPU cache may help a lot.
Klauspost ReedSolomon: The most commonly used Erasure Codes library in Go. Impressive performance, a friendly API, and multi-platform support (with fast Galois Field Arithmetic). It inspired me a lot.
Intel ISA-L: The ideas of the Cauchy matrix and saving memory I/O come from it.