Contact Details
(use GitLab)
Version
v5.6.6-stable-564-g3129e29a1
Description
Apparently OpenSSL writes the bit count into the int pointed to by the second argument of SSL_CIPHER_get_bits(sc, &i), in addition to returning it. I haven't found this documented for OpenSSL, but it is in use in the wild and appears to, once upon a time, have been a means of revealing weakened algorithms that used only, say, 40 out of 128 bits.
Reproduction steps
Use wolfSSL's OpenSSL compatibility layer,
establish a TLS client connection,
have it connect and negotiate a cipher (assuming SSL_CIPHER *sc exists and is properly initialized), then call:
```c
int bits1;
int bits2 = -23;  /* sentinel: should be overwritten by the call */
bits1 = SSL_CIPHER_get_bits(sc, &bits2);
```
and observe that bits2 is left unchanged (still -23), whereas OpenSSL 1.1.1, 3.0, or 3.1 would fill in 128 or 256 for my typical AES-encrypting ciphers.
Relevant log output
No response