Recommended scenarios

Karel Kubicek edited this page Apr 12, 2018 · 4 revisions

Here we explain exemplary use cases of eacirc-streams, together with example configuration files for them.

TODO: rewrite this from paper to a more wiki-like format.

Several different data generation strategies can be used to analyze the confusion and diffusion properties of a target function's output, namely: counter, low-Hamming-weight counter (hw_counter), strict avalanche criterion (sac), and tuples of a random plaintext with its corresponding ciphertext (rnd_plt_ctx_stream).

The counter strategy generates blocks of a given size, each containing the current block index. Intuitively, the high bits stay zero while the low bits iterate.
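A minimal Python sketch of the counter strategy (an illustration only, not the actual eacirc-streams implementation; the function name and little-endian byte order are assumptions):

```python
def counter_blocks(block_size, count):
    """Yield `count` blocks of `block_size` bytes, each holding the block
    index little-endian, so the low bytes iterate while the high bytes
    remain zero."""
    for i in range(count):
        yield i.to_bytes(block_size, "little")
```

For example, `counter_blocks(4, 3)` yields the 4-byte blocks for indices 0, 1, and 2.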

The hw_counter strategy generates the function's input blocks with a fixed, low Hamming weight. The weight is derived from the block size so that the generator does not cycle, i.e., deplete all possible blocks of that size. If the tested function has an input block size of 128 bits, the Hamming weight is set to 4, because 16 B x C(128, 4) ≈ 170 MB of data. For a 64-bit input block, the minimal required Hamming weight is 6, as 8 B x C(64, 6) ≈ 600 MB. The idea behind the hw_counter strategy is to cover the whole input block with small changes only, keeping the total Hamming weight low and thus feeding the minimal possible entropy into the function. Both counter and hw_counter serve as low-entropy input generators.

The sac strategy aims to test the Strict Avalanche Criterion. It generates pairs of blocks where the first block in the pair is random and the second is identical except for a single bit flip at a randomly selected position. Both blocks are then used as inputs to the tested function.
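A sketch of the sac pair generation, assuming a seedable pseudo-random source (the function name and interface are illustrative, not the project's API):

```python
import random

def sac_pairs(block_size, count, rng=None):
    """Yield `count` pairs (base, flipped): `base` is a random block of
    `block_size` bytes and `flipped` differs from it in exactly one
    randomly chosen bit."""
    rng = rng or random.Random()
    for _ in range(count):
        base = rng.getrandbits(8 * block_size).to_bytes(block_size, "little")
        pos = rng.randrange(8 * block_size)   # random bit position to flip
        flipped = bytearray(base)
        flipped[pos // 8] ^= 1 << (pos % 8)
        yield base, bytes(flipped)
```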

The rnd_plt_ctx_stream strategy stands for random-plaintext-ciphertext. It generates a random input block p_i, which is fed to the tested function f. The data block used for statistical analysis is then p_i || f(p_i), the concatenation of plaintext and ciphertext. This testing method adds extra entropy to the tested function's input, making detection more difficult; e.g., the number of the function's internal rounds with still-detectable bias is expected to be lower than with low-entropy inputs such as counter and hw_counter. On the other hand, it allows directly analyzing the function's input-output correlation (similar to linear cryptanalysis).
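A sketch of the rnd_plt_ctx_stream construction, with the tested function passed in as a plain bytes-to-bytes callable (an assumption for illustration; the involution `f` used below is a stand-in, not a real cipher):

```python
import random

def rnd_plt_ctx(f, block_size, count, rng=None):
    """Yield `count` analysis blocks p || f(p): a random plaintext block
    concatenated with the tested function's output on that block."""
    rng = rng or random.Random()
    for _ in range(count):
        p = rng.getrandbits(8 * block_size).to_bytes(block_size, "little")
        yield p + f(p)

# Stand-in "cipher" for demonstration: bytewise complement.
def f(p):
    return bytes(255 - b for b in p)
```

Each yielded block is twice the function's block size, since it carries both the plaintext and the ciphertext halves.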