Recommended # of iterations when using PBKDF2-SHA256?
I'm curious if anyone has advice or points of reference for determining how many iterations are 'good enough' when using PBKDF2 (specifically with SHA-256). Certainly, 'good enough' is subjective, hard to define, and varies by application and risk profile; and what's 'good enough' today is likely not 'good enough' tomorrow...
But the question remains, what does the industry currently think 'good enough' is? What reference points are available for comparison?
Some references I've located:
- Sept 2000 - 1000+ rounds recommended (source: RFC 2898)
- Feb 2005 - AES in Kerberos 5 'defaults' to 4096 rounds of SHA-1. (source: RFC 3962)
- Sept 2010 - ElcomSoft claims iOS 3.x uses 2,000 iterations, iOS 4.x uses 10,000 iterations, shows BlackBerry uses 1 (exact hash algorithm is not stated) (source: ElcomSoft)
- May 2011 - LastPass uses 100,000 iterations of SHA-256 (source: LastPass)
- Jun 2015 - StableBit uses 200,000 iterations of SHA-512 (source: StableBit CloudDrive Nuts & Bolts)
- Aug 2015 - CloudBerry uses 1,000 iterations of SHA-1 (source: CloudBerry Lab Security Consideration (pdf))
I'd appreciate any additional references or feedback about how you determined how many iterations was 'good enough' for your application.
As additional background, I'm considering PBKDF2-SHA256 as the method used to hash user passwords for storage for a security conscious web site. My planned PBKDF2 salt is: a per-user random salt (stored in the clear with each user record) XOR'ed with a global salt. The objective is to increase the cost of brute forcing passwords and to avoid revealing pairs of users with identical passwords.
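The scheme described above (a per-user random salt XOR'ed with a global salt, fed into PBKDF2-SHA256) could be sketched as follows. This is a minimal illustration, not a vetted implementation; the `GLOBAL_SALT` value, iteration count, and function names are all hypothetical placeholders:

```python
import hashlib
import hmac
import os

# Hypothetical global salt; in a real deployment this would be stored
# outside the user database (e.g. in application config).
GLOBAL_SALT = bytes.fromhex("00112233445566778899aabbccddeeff")

def hash_password(password: str, iterations: int = 100_000):
    """Return (per_user_salt, derived_key) for storage with the user record."""
    per_user_salt = os.urandom(16)  # random, stored in the clear
    # XOR the per-user salt with the global salt, as described above.
    effective_salt = bytes(a ^ b for a, b in zip(per_user_salt, GLOBAL_SALT))
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), effective_salt, iterations)
    return per_user_salt, key

def verify_password(password: str, per_user_salt: bytes, stored_key: bytes,
                    iterations: int = 100_000) -> bool:
    effective_salt = bytes(a ^ b for a, b in zip(per_user_salt, GLOBAL_SALT))
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), effective_salt, iterations)
    return hmac.compare_digest(candidate, stored_key)  # constant-time comparison
```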
- RFC 2898: PKCS #5: Password-Based Cryptography Specification v2.0
- RFC 3962: Advanced Encryption Standard (AES) Encryption for Kerberos 5
- PBKDF2: Password Based Key Derivation Function v2
Although I'm marking this as answered, I'd still appreciate any references that document how many iterations other applications use... Thanks.
A global salt doesn't add any extra protection against rainbow tables. If you're using the global salt to prevent offline cracking attempts, you may want to consider using an HMAC instead. Also, consider using bcrypt or scrypt instead of PBKDF2-SHA256, since they are designed with the explicit purpose of slowly hashing passwords.
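To illustrate the HMAC suggestion: instead of XOR'ing a global salt into the PBKDF2 salt, the global secret can be used as an HMAC key over the password before key derivation, so it contributes as a secret key rather than as a salt. This is only a sketch under that assumption; the `PEPPER` value and function name are hypothetical:

```python
import hashlib
import hmac

# Hypothetical server-side secret ("pepper"), kept outside the user
# database, e.g. in an HSM or a config file the database cannot read.
PEPPER = b"example-pepper-not-for-production"

def peppered_hash(password: str, per_user_salt: bytes,
                  iterations: int = 100_000) -> bytes:
    # HMAC the password with the secret first, then run PBKDF2 with
    # the per-user salt only.
    keyed = hmac.new(PEPPER, password.encode(), hashlib.sha256).digest()
    return hashlib.pbkdf2_hmac("sha256", keyed, per_user_salt, iterations)
```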
Obligatory note for everyone coming here: nowadays you should seriously consider using better key-stretching algorithms than PBKDF2, as it is highly parallelisable. Consider Argon2 as the latest thing (Password Hashing Competition winner from 2015), or scrypt.
The per-user salt already protects against revealing identical passwords, as well as protecting against rainbow table attacks. Using a global salt is pointless. Using ONLY a global salt does not protect against revealing identical passwords, but would provide some level of protection against a rainbow table attack. The table could be rebuilt using the global salt, but it's a space and time tradeoff that probably isn't worth the effort. I agree with using bcrypt or scrypt instead of PBKDF2, because they're slower.
You should use the maximum number of rounds which is tolerable, performance-wise, in your application. The number of rounds is a slowdown factor, which you use on the basis that under normal usage conditions, such a slowdown has negligible impact for you (the user will not see it, the extra CPU cost does not imply buying a bigger server, and so on). This heavily depends on the operational context: what machines are involved, how many user authentications per second... so there is no one-size-fits-all response.
The wide picture goes thus:
- The time to verify a single password is v on your system. You can adjust this time by selecting the number of rounds in PBKDF2.
- A potential attacker can gather f times more CPU power than you (e.g. you have a single server, and the attacker has 100 big PCs, each being twice faster than your server: this leads to f=200).
- The average user has a password of entropy n bits (this means that trying to guess a user password, with a dictionary of "plausible passwords", will take on average 2^(n-1) tries).
- The attacker will find your system worth attacking if the average password can be cracked in time less than p (that's the attacker's "patience").
Your goal is to make the average cost to break a single password exceed the attacker's patience, so that he does not even try, and goes on to concentrate on another, easier target. With the notations detailed above, this means that you want:
v·2^(n-1) > f·p
p is beyond your control; it can be estimated with regards to the value of the data and systems protected by the user passwords. Let's say that p is one month (if it takes more than one month, the attacker will not bother trying). You can make f smaller by buying a bigger server; on the other hand, the attacker will try to make f bigger by buying bigger machines. An aggravating point is that password cracking is an embarrassingly parallel task, so the attacker will get a large boost by using a GPU which supports general programming; so a typical f will still range in the order of a few hundreds.
n relates to the quality of the passwords, which you can somehow influence through a strict password-selection policy, but realistically you will have a hard time getting a value of n beyond, say, 32 bits. If you try to enforce stronger passwords, users will begin to actively fight you, with workarounds such as reusing passwords from elsewhere, writing passwords on sticky notes, and so on.
So the remaining parameter is v. With f = 200 (an attacker with a dozen good GPUs), a patience of one month, and n = 32, you need v to be at least 241 milliseconds (note: I initially wrote "8 milliseconds" here, which is wrong -- that is the figure for a patience of one day instead of one month). So you should set the number of rounds in PBKDF2 such that computing it over a single password takes at least that much time on your server. You will still be able to verify four passwords per second with a single core, so the CPU impact is probably negligible(*). Actually, it is safer to use more rounds than that, because, let's face it, getting 32 bits' worth of entropy out of the average user password is a bit optimistic; on the other hand, not many attackers will devote a dozen PCs for one full month to the task of cracking a single password, so maybe an "attacker's patience" of one day is more realistic, leading to a password verification cost of 8 milliseconds.
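Plugging the numbers above into v > f·p / 2^(n-1) reproduces both figures (a quick arithmetic check, using the same assumed values of f, p, and n):

```python
f = 200                           # attacker has 200x your CPU power
n = 32                            # password entropy in bits
p_month = 30 * 24 * 60 * 60 * 1000  # patience: one month, in milliseconds
p_day = 24 * 60 * 60 * 1000         # patience: one day, in milliseconds

# Minimum verification time v, in milliseconds, for each patience level.
v_month = f * p_month / 2 ** (n - 1)
v_day = f * p_day / 2 ** (n - 1)

print(round(v_month))  # 241 ms for one month of patience
print(round(v_day))    # 8 ms for one day of patience
```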
So you need to run a few benchmarks. Also, the above holds only as long as your PBKDF2/SHA-256 implementation is fast. For instance, if you use a fully C#- or Java-based implementation, you will get the typical 2-to-3× slowdown (compared to C or assembly) for CPU-intensive tasks; in the notation above, this is equivalent to multiplying f by 2 or 3. As a comparison baseline, a 2.4 GHz Core2 CPU can perform about 2.3 million elementary SHA-256 computations per second (with a single core), so this would imply, on that CPU, about 20,000 rounds to achieve the "8 milliseconds" goal.
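A benchmark along those lines might look like this: time one PBKDF2-SHA256 derivation and double the iteration count until it crosses your target cost. This is only a sketch (the target, salt, and starting count are arbitrary assumptions); real calibration should run on the production hardware and average several samples:

```python
import hashlib
import time

def pbkdf2_time_ms(iterations: int) -> float:
    """Time a single PBKDF2-SHA256 derivation, in milliseconds."""
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"benchmark-password",
                        b"sixteen-byte-sal", iterations)
    return (time.perf_counter() - start) * 1000

target_ms = 241   # assumed target cost per verification
iterations = 1000
# Double the count until a single derivation exceeds the target.
while pbkdf2_time_ms(iterations) < target_ms:
    iterations *= 2

print(f"{iterations} iterations take >= {target_ms} ms on this machine")
```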
(*) Take care that making password verification more expensive also makes your server more vulnerable to Denial-of-Service attacks. You should apply some basic countermeasures, such as temporarily blacklisting client IP addresses that send too many requests per second. You need to do that anyway, to thwart online dictionary attacks.
+1 for the great answer, if I could I'd give another +1 for the embarrassingly parallel link :-)
Just an idea: could one do ~20k rounds on the client PC and then 1k on the server? If an attacker started by guessing "normal passwords", he would still have to do 21k rounds; and if he started with the derived keys, he'd only need 1k rounds, but the entropy should be much higher. Am I missing something? Seems like a good solution to me.
I don't understand how you got to 8ms in your example. If f=200, p = 30 * 24 * 60 * 60 * 1000 (one month converted to milliseconds), and n=32, then v comes out to 241ms, not 8ms. Not sure what I am doing wrong. Thanks for the answer.
It does come out to 241ms. In fact, if you plug 8ms into the formula, you get approximately 23 hours as the "attacker patience", which is a whole lot less than a month (for a single 32-bit-entropy password)... and low enough that the recommended value of 8ms should likely be raised.
Here's a quick script for benchmarking PBKDF2 (note: time is reported in seconds): https://gist.github.com/sarciszewski/a3b1cf19caab3f408bf8
I agree with your assessment, with one clarification: the number of iterations should not be based upon how long it takes *your* hardware to perform the operation. It needs to be based upon the amount of time an *attacker's* hardware can perform the operation. If you have weak hardware, that's not a good excuse for choosing `iterations=1`. Your long explanation goes on to describe this in a reasonable way, but your up-front conclusion is "as much as you think is okay in your environment". It really should be "whatever it takes to thwart an attacker in his environment."
We're upgrading the encryption in our existing app. However, a high iteration count will cause a massive performance hit, as our system sometimes has to encrypt/decrypt thousands of entries. So it seems we will have to use a very low iteration count -- probably no more than 1,000. The question is: should we even bother with a value that low? Why not just go with 1?