Recommended # of rounds for bcrypt

  • What is nowadays (July 2012) the recommended number of bcrypt rounds for hashing a password for an average website (storing only name, email address and home address, but no credit-card or medical information)?

    In other words, what is the current capability of the bcrypt password-cracking community? Several bcrypt libraries use 12 rounds (2^12 iterations) as the default setting. Is that the recommended work factor? Would 6 rounds not be strong enough (which happens to be the limit for client-side bcrypt hashing in JavaScript; see also Challenging challenge: client-side password hashing and server-side password verification)?

    I have read an answer that gives an in-depth discussion of how to balance the various factors (albeit for PBKDF2-SHA256). However, I am looking for an actual number. A rule of thumb.

    I do 120,000, but it depends on your application. It just depends on your app and the CPU power you can spend on it. E.g. if you have one user login per second and are using only 2 cores, you would not do more than 10,000, I think. Basically you need to check how long it takes with the "time" command and see for yourself. Something close to a second should be OK.
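The "time it and see for yourself" advice above can be sketched as a small benchmark. This is a rough illustration, not the commenter's actual method; it uses PBKDF2 from Python's standard library as a stand-in, since the `bcrypt` package is third-party, but the calibration idea (raise the work until one hash costs what you can tolerate) is the same:

```python
import hashlib
import os
import time

def time_hash(iterations: int, samples: int = 3) -> float:
    """Average wall-clock seconds for one password hash at a given iteration count."""
    password = b"correct horse battery staple"
    salt = os.urandom(16)
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
        timings.append(time.perf_counter() - start)
    return sum(timings) / samples

# Double the iteration count until one hash takes roughly 100 ms on this machine.
iterations = 1000
while time_hash(iterations) < 0.1:
    iterations *= 2
print(f"~100 ms per hash reached at {iterations} PBKDF2 iterations on this host")
```

The resulting number is machine-specific by design: the point is to spend as much per hash as your login rate allows.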

    @Andrew: The speed of my own system should not be leading for the number of iterations. It is the current speed of the brute-forcers that should dictate how many iterations are considered safe. Hence my question: how many iterations are nowadays considered safe?

    @JasonSmith, The speed of your system *is* relevant, because it determines how many iterations you can reasonably do without bogging down your system. You want as many iterations as possible, because the reality is that no number of iterations is enough to be completely safe: we're just reducing the risk somewhat, not eliminating it. There is no realistic number of rounds that is large enough to be considered safe. If you ask a question here, please be prepared to listen to the answers you get.

    @D.W. wrote "please be prepared to listen to the answers you get"; sorry if I gave the impression of being pedantic or stubborn. Perhaps, as a non-native English speaker, my comments conveyed the wrong message. I do appreciate all answers, and try hard to understand the rationale behind them.

    @JasonSmith, ok, my fault for misunderstanding, sorry!

    For anyone interested, I just wrote a small Java CLI tool to test bcrypt performance on servers (which is obviously important for balancing security, server load and response times):

  • D.W. (correct answer, 9 years ago)

    I think the answer to all of your questions is already contained in Thomas Pornin's answer. You linked to it, so you presumably know about it, but I suggest that you read it again.

    The basic principles are: don't choose a number of rounds; instead, choose the amount of time password verification will take on your server, then calculate the number of rounds based upon that. You want verification to take as long as you can stand.

    For some examples of concrete numbers, see Thomas Pornin's answer. He suggests a reasonable goal would be for password verification/hashing to take 241 milliseconds per password. (Note: Thomas initially wrote "8 milliseconds", which is wrong -- this is the figure for a patience of one day instead of one month.) That still lets your server verify 4 passwords per second (more if you can do it in parallel). Thomas estimates that, if this is your goal, about 20,000 rounds is in the right ballpark.

    However, the optimal number of rounds will change with your processor. Ideally, you would benchmark how long it takes on your processor and choose the number accordingly. This doesn't take long, so for best results just whip up a script and work out how many rounds are needed to ensure that password hashing takes about 240 milliseconds on your server (or longer, if you can bear it).
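The conversion from a measured timing to a bcrypt cost factor is simple because bcrypt's work doubles with each +1 of the cost parameter. A minimal sketch of that arithmetic (the function name and the example timings are illustrative, not figures from the answer):

```python
import math

def cost_for_target(measured_ms: float, measured_cost: int, target_ms: float) -> int:
    """Estimate the bcrypt cost factor whose runtime is closest to (but not over)
    the target, given one measurement. Each +1 of cost doubles the work."""
    extra = math.log2(target_ms / measured_ms)
    return max(4, math.floor(measured_cost + extra))  # bcrypt's minimum cost is 4

# Example: if cost 10 takes 60 ms on this server, a ~240 ms budget allows cost 12.
print(cost_for_target(60.0, 10, 240.0))  # → 12
```

One measurement at any cost is enough to extrapolate, though re-measuring at the chosen cost is a sensible sanity check.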

    Time doesn't matter, money is why the world spins.

    I have a hard time understanding the reasoning that *my* platform determines the number of hashing rounds. A client-side JavaScript bcrypt implementation can do about 2^6 rounds on a legacy mobile device; my most recent hardware can do 2^13 rounds. However, you commented elsewhere that "12 rounds is almost certainly not enough". How can the speed of *my* implementation be relevant? Perhaps this just means I need to buy faster hardware and can't run a secure website on an old Pentium 4 which does 2^4 rounds?

    If 12 rounds is the best you can do, go with that. The summary version is that you want to use as many rounds as you can tolerate (for any reasonable number of rounds, even more will be even better). Therefore, the speed of your implementation is relevant, because it places an upper bound on the number of rounds you can use (if you set the number of rounds ridiculously large, it'll take too long). I recommend setting the number of rounds as close to that upper bound as possible. That's why the speed of your implementation is relevant.

    P.S. I do not recommend implementing bcrypt in Javascript, as performance will likely be very poor. I assume you are computing bcrypt on the server. (I do not think there is sufficient value to computing bcrypt on the client, so I do not recommend computing bcrypt on the client.) I suggest using a native, optimized implementation of bcrypt; that will run much faster.

    There should be a number somewhere. How many rounds our system can tolerate is not relevant; what is relevant is how difficult it is for the NSA to crack your password if it has been through only 10 rounds of bcrypt (the default in many systems). And I'm saying the NSA because I don't think there are other groups that can provide the same amount of computational power.

    @David天宇Wong, the number of rounds our system can bear *is* relevant. There is no practical number of rounds that is enough to make it impossible for the NSA to crack any passwords (even weak ones). So you can never get 100% security -- it's always a tradeoff between how much harder you are making it for yourself versus how much harder you are making it for the adversary. But given the realities of how users choose passwords, we can never make it as hard as we would really like for the adversary. This is risk mitigation, not risk elimination.

    @D.W., I don't get it. Each round will increase the complexity of the brute force; at some point it will be infeasible for any machine, or group of machines, on earth to crack it, even the NSA's.

    @David天宇Wong, a non-trivial number of people choose a weak password (say, 10 bits of entropy). We'd need, say, 2^70 iterations to make that secure against the NSA. If you did 2^70 iterations of bcrypt, no one could use it in practice, because it would be far too slow for the good guys. That's why I say there is no value for the number of iterations that both provides strong security against strong adversaries (like the NSA), and yet is also small enough to be practical for the legitimate users of the system. Anyway, we've gotten a bit far afield from the question that was asked.

    I was not thinking of that use case; of course you will never be able to protect weak passwords, but you should think about the number of iterations needed to protect normal passwords. What do you mean by 10 bits of entropy for a password? I guess the entropy of the password is 2^10, but what does that translate to?

    @David天宇Wong, a password with 10 bits of entropy *is* normal. But if you don't like that, consider a password with 20 bits of entropy. (More than half of all users have a password with fewer than 20 bits of entropy.) Then you'll need 2^60 iterations of bcrypt to provide strong security for those passwords, but that's far too many for the good guys to do. Try working through this on your own. There are lots of resources about what entropy is -- this is not the place to explain/ask about it. Entropy is a fundamental concept that you have to understand, to understand password security.
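The arithmetic behind this exchange can be made concrete. D.W.'s figures (2^70 iterations for a 10-bit password, 2^60 for a 20-bit one) imply a total attacker-work budget of about 2^80 operations; that threshold is an inference from the comments, used here for illustration, not a hard standard:

```python
# Total attacker work ≈ 2^(password entropy bits) * hash iterations per guess.
# The 2^80 "out of reach for anyone" budget is an illustrative assumption
# implied by the figures in the comments above.
SAFE_WORK_BITS = 80

def iterations_needed(entropy_bits: int) -> int:
    """Hash iterations per guess required to push total attack cost to 2^80."""
    return 2 ** (SAFE_WORK_BITS - entropy_bits)

for bits in (10, 20, 40):
    print(f"{bits:2d}-bit password -> 2^{SAFE_WORK_BITS - bits} iterations per guess")
```

The point of the calculation is the impracticality it exposes: even a 40-bit password would need 2^40 iterations per guess, vastly more than the tenths-of-a-second budget a login server can afford.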

Licensed under CC BY-SA with attribution

Content dated before 7/24/2021 11:53 AM