This implementation is vulnerable to a timing attack #1
Hey, you are right that in the case of your ED-521 implementation this is constant-time, but the reason is not your Montgomery implementation. As you can see in [1], some curves are constant-time and some, such as the NIST curves, are not. That is because of the use of techniques like clamping.
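For reference, here is a minimal sketch of X25519-style clamping as specified in RFC 7748, shown only to illustrate the technique (it is not code from this crate): the low three bits are cleared and bit 254 is forced to 1, so every clamped scalar has the same bit length and the ladder always runs the same number of iterations.

```
fn clamp_x25519(mut k: [u8; 32]) -> [u8; 32] {
    k[0] &= 248;  // clear the three lowest bits (cofactor clearing)
    k[31] &= 127; // clear bit 255
    k[31] |= 64;  // set bit 254, fixing the scalar's bit length
    k
}
```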
Your proposed solution does not satisfy me, because one second is a lot of time for a single ECC multiplication; my implementation needs just 22 ms.
For a proper solution, see section 4.6.2, "Constant-time ladders", in [2]: "One fix is to always arrange for n to have a fixed top bit, for example by adding an appropriate multiple of the order of P." You can also find this solution in the fix for libgcrypt [3], where they had the same timing issue as in your implementation; a sketch of the idea follows the references below.
[1]: https://safecurves.cr.yp.to/ladder.html
[2]: https://eprint.iacr.org/2017/293.pdf
[3]: gpg/libgcrypt@b9577f7
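A minimal sketch of that fixed-top-bit idea from [2]/[3], assuming a rug-style `Integer` with `significant_bits()` (as in the ladder code quoted below) and a hypothetical `order` parameter holding the order of the base point; this shows only the shape of the technique, not the libgcrypt patch itself:

```
// Make every scalar fed to the ladder have the same bit length by adding a
// multiple of the group order n. Since n*P is the identity, (s + k*n)*P == s*P,
// but the ladder now always runs the same number of iterations.
fn fix_scalar_length(s: &Integer, order: &Integer) -> Integer {
    let target = order.significant_bits() + 1; // fixed length: one bit above the order
    let mut t = Integer::from(s + order);      // s + n
    if t.significant_bits() < target {
        t += order;                            // s + 2n now has exactly `target` bits
    }
    // A fully hardened variant would compute both candidates and pick one
    // without a data-dependent branch.
    t
}
```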
… On 20.06.2023, at 03:34, Dustin Ray ***@***.***> wrote:
Hi, thanks for opening this. The algorithm as implemented should be resistant to branch analysis. Additionally, your assessment and experimentation with regard to timing analysis are correct. The amount of time the algorithm takes to return depends on the number of significant bits returned from:
s.bits()
My first thought was to implement a significant_bit function that returns the same representation but with leading zeros included. However, for every 0 bit, r0 (the return value) is doubled:
r0 = add_points(&r0, &r0);
so, with leading zeros included, extra doubling operations occur. For timing analysis, the best I can come up with would be to return only after exactly one second (or some other fixed amount of time), so that the calling function receives a result after a predictable amount of time. The algorithm could be modified as follows:
use std::thread;
use std::time::{Duration, Instant};

fn mul(self, s: Integer) -> CurvePoint {
    let start_time = Instant::now();
    let mut r0 = CurvePoint::id_point(self.curve); // identity element of the curve group
    let mut r1 = self;
    // Montgomery ladder over the bits of s, most significant bit first.
    for i in (0..=s.significant_bits()).rev() {
        if s.get_bit(i) {
            r0 = r0 + r1.clone();
            r1 = r1.clone() + r1.clone();
        } else {
            r1 = r0.clone() + r1;
            r0 = r0.clone() + r0.clone();
        }
    }
    let elapsed_time = start_time.elapsed();
    if elapsed_time < Duration::from_secs(1) {
        // Always take 1 second to return, mitigating timing analysis from the caller
        thread::sleep(Duration::from_secs(1) - elapsed_time);
    }
    r0 // r0 = P * s
}
which, from the caller's perspective, leads to the following (running your POC):
0 needed 1000126 micro seconds
1 needed 1000073 micro seconds
2 needed 1000109 micro seconds
3 needed 1000142 micro seconds
4 needed 1000139 micro seconds
5 needed 1000127 micro seconds
6 needed 1000060 micro seconds
7 needed 1000097 micro seconds
8 needed 1000141 micro seconds
9 needed 1000098 micro seconds
10 needed 1000114 micro seconds
Let me know what your thoughts are. Thanks again, I've learned a lot from this.
This implementation is referenced in this Wikipedia article; the code appears after the sentence "The Montgomery ladder approach computes the point multiplication in a fixed amount of time". But, as I understand it, this implementation has a timing issue, leading to a possible timing attack.
The vulnerable line is here.
According to the docs of BigInt, the bits function returns the fewest bits necessary to express the BigInt, without any leading zeros. Because of the missing leading zeros, the loop runs fewer iterations when the number is smaller. If you use random numbers between 1 and 2^256 as the secret scalar, for example in ECDSA, it is likely that these random numbers have multiple leading zeros. If you now generate many signatures and measure the time, you can detect when multiple leading zeros are present. With this knowledge, you can then perform a lattice ECDSA attack.
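As a small illustration (assuming a rug-style `Integer` with `significant_bits()`, as used in the ladder code quoted above; the crate's actual scalar type may differ), the reported bit count shrinks with the scalar, and with it the number of loop iterations:

```
use rug::Integer;

fn main() {
    let a = Integer::from(1) << 255; // a 256-bit scalar
    let b = Integer::from(1) << 200; // a 201-bit scalar
    assert_eq!(a.significant_bits(), 256); // 256 ladder iterations
    assert_eq!(b.significant_bits(), 201); // only 201 ladder iterations
}
```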
Here is a simple POC to prove the timing issue:
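(The actual POC is attached in the original issue and is not reproduced in this view; purely as a hypothetical sketch of the measurement idea, with `CurvePoint::generator()` standing in for however the crate exposes its base point:)

```
use std::time::Instant;

// Hypothetical sketch, not the original POC: time scalar multiplications for
// scalars of decreasing bit length and print the elapsed time.
fn main() {
    let g = CurvePoint::generator(); // hypothetical accessor for the base point
    for leading_zeros in 0u32..11 {
        let s = Integer::from(1) << (255 - leading_zeros); // scalar with `leading_zeros` fewer bits
        let start = Instant::now();
        let _p = g.clone() * s; // invokes the `mul` shown above
        println!("{} needed {} micro seconds", leading_zeros, start.elapsed().as_micros());
    }
}
```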
And here are the first lines of the output: