Is a rand from /dev/urandom secure for a login key?
Let's say I want to create a cookie for a user. Would simply generating a 1024-bit string from /dev/urandom, and checking whether it already exists (looping until I get a unique one), suffice?
Should I be generating the key based on something else? Is this prone to an exploit somehow?
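For concreteness, a minimal sketch of what the question describes, using Python's `os.urandom` (which reads from the kernel CSPRNG, i.e. `/dev/urandom`/`getrandom()` on Linux); the function name is mine:

```python
import os

def generate_login_token(n_bytes: int = 128) -> str:
    """Return a 1024-bit (128-byte) random token as a hex string.

    os.urandom() draws from the kernel CSPRNG, so no file
    handling of /dev/urandom is needed.
    """
    return os.urandom(n_bytes).hex()

token = generate_login_token()
```

The uniqueness check from the question is omitted here; as the answers below argue, at this length it is unnecessary.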
Checking for uniqueness is slow. A better choice is to ensure uniqueness. Append a timestamp to the string, at as fine a resolution as you can get. This will ensure that no two strings are ever the same, even if somehow the randomness is the same.
@DampeS8N You're assuming that repeatedly retrieving a timestamp yields a monotonically increasing value. This is far from true: the timestamp can remain constant during a fast sequence of operations, and can go backwards because the clock is reset for some reason. (Recommended reading: Cryptography Engineering ch. 16.) A counter is a reliable (and fast) way of ensuring uniqueness, if you can store it in non-volatile memory. A crypto-quality (P)RNG does ensure (crypto-quality) uniqueness, no additional technique is needed.
@Gilles: Agreed. A counter is always a better choice. But it should be noted that we are talking about the VERY rare case when both the randomness and the timestamp are the same. And with /dev/urandom we are talking about a once-in-a-universe event.
@DampeS8N If `/dev/urandom` gives you repeats, you have a security problem that merely appending a counter won't fix. As our conversation is wandering away from the question, I suggest that we take any continuation to chat.
All this seems academic. The probability of randomly generating two identical 1024 bit messages is so absurdly low that it doesn't even bear consideration.
@NickJohnson run the following command on any Linux machine: `cat /dev/urandom | rngtest -c 1000` several times. If it's a VM (as many server environments are now) you'll fail FIPS compliance about every other run.
@NickJohnson, That depends on the **consequence** of a non-unique clash. If the consequence is End-Of-Universe, then yes it makes sense to check for uniqueness.
@Pacerier If we're the only intelligent beings in the universe, then each failure to check incurs an average of 6e9 / 2^512 = 4.5e-145 deaths. There's not even an SI suffix for a number that small. We should focus on higher risk activities, like being struck by lightning from a clear sky while skydiving on the day you win the lottery.
@NickJohnson, You are confusing the probability to win with the expected value. The expectation will be 4.5e-145, which is the average over many runs. But each run could well end up somewhere else, in this case: 0 * consequence or 1 * consequence. Some random people getting struck by lightning from a clear sky is less than insignificant compared to End-Of-Universe.
@Pacerier It's statistically valid to extrapolate the probability to an expected value, particularly for statistical purposes like this. If you were to hand me a revolver with 4.5e145 chambers and one bullet and offer me $1 per pull, I'd take you up on it any day.
@NickJohnson, Not if you would live forever. You would take up the offer only because there's a fixed amount of time that you could live anyway, and thus that life has a **limited** value. End-Of-Universe here is however assumed equal to an **unlimited** / infinite value.
@Gilles, NickJohnson, Re "academic": Yeah, but not all cookies are 1024 bits; some have only 36^16 possible values. And it essentially depends fully on how many cookies he's generating. If he's generating 700*6b*9t cookies per microsecond, and still getting exponentially *faster* with each passing microsecond, sooner or later there would be a hit.
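To put rough numbers on the collision debate above: the standard birthday bound says the chance of any repeat among k random b-bit values is about k^2 / 2^(b+1). A quick sketch (function name mine):

```python
def collision_exponent(num_tokens_log2: float, bits: int) -> float:
    """log2 of the birthday-bound collision probability:
    p ~= k^2 / 2^(b+1), so log2(p) ~= 2*log2(k) - (b + 1).
    """
    return 2 * num_tokens_log2 - (bits + 1)

# A billion tokens (k = 2^30) of 1024 bits each:
exponent = collision_exponent(30, 1024)  # p ~= 2^-965
```

Even generating tokens at absurd rates, the exponent stays so negative that a collision is, as the comments say, a once-in-a-universe event.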
The short answer is yes. The long answer is also yes.
`/dev/urandom` yields data which is indistinguishable from true randomness, given existing technology. Getting "better" randomness than what `/dev/urandom` provides is meaningless, unless you are using one of the few "information theoretic" cryptographic algorithms, which is not your case (you would know it).

The man page for `urandom` is somewhat misleading, arguably downright wrong, when it suggests that `/dev/urandom` may "run out of entropy" and `/dev/random` should be preferred; the only instant where `/dev/urandom` might imply a security issue due to low entropy is during the first moments of a fresh, automated OS install. If the machine has booted up to the point where it has begun having some network activity, then it has gathered enough physical randomness to provide randomness of high enough quality for all practical usages (I am talking about Linux here; on FreeBSD, that momentary instant of slight weakness does not occur at all). On the other hand, `/dev/random` has a tendency to block at inopportune times, leading to very real and irksome usability issues. Or, to say it in fewer words: use `/dev/urandom` and be happy; use `/dev/random` and be sorry.

(Edit: this Web page explains the differences between `/dev/random` and `/dev/urandom` quite clearly.)
For the purpose of producing a "cookie": such a cookie should be such that no two users share the same cookie, and that it is computationally infeasible for anybody to "guess" the value of an existing cookie. A sequence of random bytes does that well, provided that it uses randomness of adequate quality (`/dev/urandom` is fine) and that it is long enough. As a rule of thumb, if you have fewer than 2^n users (n = 33 if the whole Earth population could use your system), then a sequence of n+128 bits is wide enough; you do not even have to check for a collision with existing values: you will not see it in your lifetime. 161 bits fits in 21 bytes.
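The rule of thumb above can be sketched with Python's `secrets` module (which also reads the OS CSPRNG); the variable names are illustrative:

```python
import secrets

# Rule of thumb: with at most 2^n users, n + 128 random bits
# suffice without any collision check. For n = 33:
n = 33
token_bytes = (n + 128 + 7) // 8   # 161 bits -> 21 bytes
cookie = secrets.token_urlsafe(token_bytes)  # URL-safe base64 text
```

`secrets.token_urlsafe(21)` yields a 28-character string, comfortably short enough for a cookie value.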
There are some tricks which are doable if you want shorter cookies and still wish to avoid looking up for collisions in your database. But this should hardly be necessary for a cookie (I assume a Web-based context). Also, remember to keep your cookies confidential (i.e. use HTTPS, and set the cookie "secure" and "HttpOnly" flags).
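A sketch of setting those two flags with Python's standard `http.cookies` module (cookie name and value are illustrative):

```python
import secrets
from http.cookies import SimpleCookie

# Build a Set-Cookie header with the Secure and HttpOnly flags,
# so the cookie travels only over HTTPS and is hidden from scripts.
cookie = SimpleCookie()
cookie["session"] = secrets.token_urlsafe(21)
cookie["session"]["secure"] = True
cookie["session"]["httponly"] = True

header = cookie.output()  # e.g. "Set-Cookie: session=...; HttpOnly; Secure"
```

In a real application the framework (Flask, Django, etc.) usually exposes these flags directly; this just shows what ends up on the wire.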
On the topic of urandom "running out", you're only sort of right. On a system with a poor entropy source (such as a VM) and a high rate of entropy use (lots of SSH connections, VPN tunnels, etc.), urandom will return less random data instead of blocking. "Less random" is a loose term, but it means that you're more likely to see repetition. Is that a problem? Depends on your application :) In this case, urandom is probably fine.
@Bill, that is *not* correct. The chances of seeing repetition from `/dev/urandom`, due to high use of entropy, are essentially nil. (To be precise, by "repetition" I mean an amount of repetition that is statistically significantly higher than expected by chance.) There is essentially no risk that `/dev/urandom` ever "runs out" within your lifetime.
Good answer, nice coverage of the whole question including the cookie aspect. Instead of checking the random value against other random values, use a chi-square test to check the output of the generator. Fourmilab has a program ent which test the entropy of a generator and includes the chi-square test.
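A rough stdlib-only version of the chi-square idea mentioned above (not a replacement for Fourmilab's `ent`, just the byte-frequency statistic; the function name is mine):

```python
import os
from collections import Counter

def chi_square_stat(data: bytes) -> float:
    """Chi-square statistic of byte frequencies against a uniform
    distribution over the 256 byte values (255 degrees of freedom).
    """
    expected = len(data) / 256
    counts = Counter(data)
    return sum((counts.get(b, 0) - expected) ** 2 / expected
               for b in range(256))

stat = chi_square_stat(os.urandom(100_000))
# For healthy CSPRNG output the statistic hovers around 255
# (the degrees of freedom); wildly larger values are suspicious.
```

A perfectly flat input (every byte value equally often) scores exactly 0, which is itself suspicious: real random data should fluctuate around 255, not below it.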
What it means when /dev/urandom "runs out of entropy" is that it produces more predictable output: an attacker with enough processing power could theoretically perform statistical analysis of your random numbers to determine the internal state of the PRNG, and therefore predict your future random output. This is easier said than done, though.
You might want to watch the talk "Fast Internet-wide Scanning and its Security Applications" given by J. Alex Halderman at 30C3. They did a large scan for SSH keys and basically found many duplicate keys. It turns out that many devices were embedded systems (like routers) which lack good sources of entropy (mouse, keyboard, etc.) and will usually generate the SSH key right after the boot. They determined that for /dev/urandom there is an "entropy gap" which can take 60 seconds (!) after the system startup, during which the output is actually predictable.
@dog Before the CSPRNG has been seeded by enough entropy, its output will always be predictable, no matter how little data you read. Once it has been seeded with enough entropy, it will never be (practically) predictable no matter how much data you read. "Running out of entropy" isn't a thing. (CSPRNGs do have theoretical limits about how much data can be generated without a reseed, but you'll never hit them unless the CSPRNG in question sucks.)
@Thomas, Why does Schneier https://www.schneier.com/blog/archives/2013/10/insecurities_in.html contradict your answer?
There is no contradiction (also, Schneier is merely quoting an article written by other people). The research paper worries about _recovering from an internal state compromise_, an already rotten situation. If your system was utterly compromised, you should have nuked it from orbit, rather than keeping on generating keys with it; what the article says is that _if_ you do the wrong thing (keep on using the compromised machine "as is" and pray for the best) then the PRNG used in /dev/random (and urandom) won't save your skin -- but, realistically, nothing would have.
@ThomasPornin, re "begun having some network activity"; What about airgapped devices?
@MattNordhoff, re "its output will always be predictable": You're missing the point. The point is that if you use `urandom` **you get insecure output**, as demonstrated by J. Alex Halderman above. If you instead use `random`, it will block until it is safe, so the output is secure. Context and use case determine whether something is (in)secure. In short: for some cases, use `random` and be happy; use `urandom` and be sorry.
Does this answer hold true for macOS? I am interested in the context of cryptocurrency wallet software that relies on OS entropy for the generation of private keys.