Why is Math.random() not designed to be cryptographically secure?

  • The JavaScript Math.random() function is designed to return a single IEEE floating point value n such that 0 ≤ n < 1. It is (or at least should be) widely known that the output is not cryptographically secure. Most modern implementations use the XorShift128+ algorithm, which can be easily broken. As it is not at all uncommon for people to mistakenly use it when they need better randomness, why do browsers not replace it with a CSPRNG? I know that Opera does that*, at least.

    The only reasoning I could think of would be that XorShift128+ is faster than a CSPRNG, but on modern (and even not so modern) computers, it would be trivial to output hundreds of megabytes per second using ChaCha8 or AES-CTR. These are often fast enough that a well-optimized implementation may be bottlenecked only by the system's memory speed. Even an unoptimized implementation of ChaCha20 is extremely fast on all architectures, and ChaCha8 is more than twice as fast.

    I understand that it could not be re-defined as a CSPRNG, as the standard explicitly gives no guarantee of suitability for cryptographic use, but there seems to be no downside to browser vendors doing it voluntarily. It would reduce the impact of bugs in a large number of web applications without violating the standard (which only requires that the output be round-to-nearest-even IEEE 754 numbers), without decreasing performance, and without breaking compatibility with web applications.

    EDIT: A few people have pointed out that this could potentially cause people to abuse this function even if the standard says you cannot rely on it for cryptographic security. In my mind, there are two opposing factors that determine whether or not using a CSPRNG would be a net security benefit:

    1. False sense of security - The number of people who would otherwise use a function designed for this purpose, such as window.crypto, but decide instead to use Math.random() because it happens to be cryptographically secure on their intended target platform.

    2. Opportunistic security - The number of people who don't know any better and use Math.random() anyway for sensitive applications who would be protected from their own mistake. Obviously, it would be better to educate them instead, but this is not always possible.

    It seems safe to assume that the number of people who would be protected from their own mistakes would greatly exceed the number of people who are lulled into a false sense of security.

    * As CodesInChaos points out, this is no longer true now that Opera is based off of Chromium.

    Several major browsers have had bug reports suggesting to replace this function with a cryptographically-secure alternative, but none of the suggested secure changes landed.

    The arguments for the change essentially match mine. The arguments against it vary from reduced performance on microbenchmarks (with little impact in the real world) to misunderstandings and myths, such as the incorrect idea that a CSPRNG gets weaker over time as more randomness is generated. In the end, Chromium created an entirely new crypto object, and Firefox replaced their RNG with the XorShift128+ algorithm. The Math.random() function remains fully predictable.


    Saying that a feature has certain qualities implies obligation. Obligation incurs cost: first to implement the obligation, second to keep it up to date, and third when you find out that you have not lived up to it. This is especially so when you have obligated yourself to deliver **secure** functionality.

    @MichaelK But there are so many other examples where that is not true. You do not have to say that a feature has certain qualities anywhere. Keep the standard as it is and opportunistically improve security. A good example is any modern C compiler. Would you claim that it is foolish for the compiler to support `FORTIFY_SOURCE`? Why not just educate people so they don't make vulnerable programs? Why have GCC protect them? I don't know of anyone who is sloppy in their code because they think GCC will protect them, but I know of many people who have been saved by GCC's security measures.

    In other words, you are saying that _fail-safe design_ is a bad thing.

    @forest Opportunistic security? Words I never thought anyone would say.

    Performance, maybe? Obtaining a cryptographically secure random value is far more CPU-heavy than obtaining a pseudo-random value. While designing a game (in C++) I specifically had to choose a random algorithm that offered decent performance.

    Blink is a rendering engine. V8 is the JavaScript implementation used by Chrome and Opera.

    @Rolfツ This issue has been discussed extensively in answers and comments, including the performance of XorShift128+ compared to, say, ChaCha8 when used for returning individual floating point numbers.

    Comparing the performance of XorShift128+ to ChaCha8 is only part of the performance question. A CSPRNG must collect sufficient entropy before it is ready to emit any secure random bits. This can take significant wall-clock time.

    @JamesKPolk You only need 128 bits, once (ChaCha takes 256 bits but it's perfectly acceptable to repeat the key twice, as long as you change the constant). By the time any browser loads, plenty of entropy will be available to the system. On microcontrollers that run a JavaScript interpreter for whatever reason and have no source of good entropy, they could just use XorShift128+ (since I'm not suggesting a change to the standards).

    @forest: fair enough, it should indeed not be a problem on any modern platform, desktop or mobile.

  • I was one of the implementers of JScript and on the ECMA committee in the mid to late 1990s, so I can provide some historical perspective here.

    The JavaScript Math.random() function is designed to return a floating point value between 0 and 1. It is widely known (or at least should be) that the output is not cryptographically secure.

    First off: the design of many RNG APIs is horrible. The fact that the .NET Random class can trivially be misused in multiple ways to produce long sequences of the same number is awful. An API where the natural way to use it is also the wrong way is a "pit of failure" API; we want our APIs to be pits of success, where the natural way and the right way are the same.

    I think it is fair to say that if we knew then what we know now, the JS random API would be different. Even simple things like changing the name to "pseudorandom" would help, because as you note, in some cases the implementation details matter. At an architectural level, there are good reasons why you want random() to be a factory that returns an object representing a random or pseudo-random sequence, rather than simply returning numbers. And so on. Lessons learned.

    Second, let's remember what the fundamental design purpose of JS was in the 1990s. Make the monkey dance when you move the mouse. We thought of inline expression scripts as normal, we thought of two-to-ten line script blocks as common, and the notion that someone might write a hundred lines of script on a page was really very unusual. I remember the first time I saw a ten thousand line JS program and my first question to the people who were asking me for help because it was so slow compared to their C++ version was some version of "are you insane?! 10KLOC JS?!"

    The notion that anyone would need crypto randomness in JS was similarly insane. You need your monkey movements to be crypto strength unpredictable? Unlikely.

    Also, remember that it was the mid 1990s. If you were not there for it, I can tell you it was a very different world than today as far as crypto was concerned... See export of cryptography.

    I would not have even considered putting crypto strength randomness into anything that shipped with the browser without getting a huge amount of legal advice from the MSLegal team. I didn't want to touch crypto with a ten foot pole in a world where shipping code was considered exporting munitions to enemies of the state. This sounds crazy from today's perspective, but that was the world that was.

    why do browsers not replace it with a CSPRNG?

    Browser authors do not have to provide a reason to NOT do a change. Changes cost money, and they take away effort from better changes; every change has a huge opportunity cost.

    Rather, you have to provide an argument not just why making the change is a good idea, but why it is the best possible use of their time. This is a small-bang-for-the-buck change.

    I understand that it could not be re-defined as a CSPRNG as the standard explicitly gives no guarantee for suitability for cryptographic use, but there seems to be no downside to doing it anyway

    The downside is that developers are still in a situation where they cannot reliably know whether their randomness is crypto strength or not, and can even more easily fall into the trap of relying on a property that is not guaranteed by the standard. The proposed change doesn't actually fix the problem, which is a design problem.

    (Bit off topic) Would you mind providing a reference for "the .NET Random class can trivially be misused in multiple ways to produce long sequences of the same number"? I've not heard of this before. Or are you referring to the classic "create a thousand Random instances in a tight loop"?

    @VisualMelon: Create a thousand instances in a tight loop is the classic. But there are also failure modes when you use one instance of Random on two threads at the same time. Random is not threadsafe and there is a scenario where a race can cause it to return zero forever!

    @VisualMelon: There are also more subtle scenarios. Suppose you have two instances of Random with different seeds. Seems fine, right? But suppose you then combine those two instances of Random in some way. Maybe you are using them to each produce a sequence of die rolls and add them together pairwise. **Are the two sequences correlated with each other in some non-random way**? It seems plausible. After all, they're running the same algorithm "in parallel", just with a different seed.

    Thanks for elaborating: I suppose I'm wary enough to not try using _any_ source from multiple threads unless it is clearly documented for that purpose so I'd never notice. I'm not sure that the last point is a specific concern for any 'implementation' rather than the algorithm behind it: I don't think we can hold the API designers accountable for that!

    I think this is the best answer here so far, especially as it's from such an authoritative source. I'll mark this answer as accepted for now unless a better answer comes along.

    @EricLippert I don't understand your 2 streams with different keys added together example. If it is a CSPRNG, then I would think that should be ok. If it is not a CSPRNG (just a plain PRNG) then you should never use it for security, even if you just have a single instance.

    @Buge: It is quite easy to imagine a (not very good) RNG which generates integers, and where the bottom bit of the output is random, but independent of the seed. If you add the output of two such instances (independently seeded), the result would always be even. (Eric's point had nothing to do with security - I don't know what made you think it had.)
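    Martin's scenario is easy to demonstrate with a deliberately bad generator. In this sketch the bottom bit comes from a step counter instead of the seeded state (a simplification of "random but seed-independent"), so two independently seeded instances always agree on that bit and their pairwise sums are always even:

```javascript
// Toy (deliberately bad) generator: the upper bits come from a seeded
// LCG, but the bottom bit is just the parity of a step counter and is
// therefore identical across all instances, regardless of seed.
function badRng(seed) {
  let state = seed >>> 0;
  let step = 0;
  return () => {
    state = (state * 1103515245 + 12345) >>> 0; // seeded upper bits
    step++;
    return (((state & 0xfffffffe) | (step & 1)) >>> 0);
  };
}

const a = badRng(12345);
const b = badRng(98765);
// Both streams' bottom bits march in lockstep, so every pairwise
// sum is even: the two "independent" sequences are correlated.
for (let i = 0; i < 5; i++) {
  console.log((a() + b()) % 2); // always 0
}
```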

    Yes! Why are *all* random APIs on *all* platforms so messed up? I guess teams go "let the intern do this".

    @EricLippert I think it's worth adding a concrete example to this answer of C#'s abusable `Random`. Something like this (pseudo)code: `while (something) { int rand = new Random().nextInt(); doSomething(rand); }` -- That fails because C#'s `Random` uses the current time with coarse resolution as the seed, so if `doSomething` is fast, you'll get the first number in the sequence with whatever that time's seed is, over and over again, rather than different ones. (I know you know this; I'm explaining for the people who didn't work on C# with Microsoft)

    @Buge Actually, sometimes mixing multiple bad PRNGs together can actually give you a decent cipher, at least if done correctly. For example E0 (the cipher used in all but the newest Bluetooth protocols) involves four LFSRs. Each individual LFSR is trivial to break, but E0 itself is a good bit stronger (but still not great).

    @MartinBonner I thought Eric's comment was talking about CSPRNGs, because this is security.stackexchange.com and both the question and Eric's answer were talking about CSPRNGs.

    @forest I should have limited my statement by saying "If it is not a CSPRNG (just a plain PRNG) then you should never use it for security unless you are a cryptography expert." Just as a crypto expert can create a new cipher using insecure primitives such as addition, xor, and multiplication, a crypto expert can also create a new CSPRNG using PRNGs. And just as a layperson should never create a new cipher using insecure primitives, a layperson should never use a PRNG for security.

    An excellent, informative answer. The “opportunity cost” is probably the only real explanation needed, but everything else is a real treat

    I was going to provide a very similar answer that played on the same themes mentioned here. The short, unsatisfying version of it is "the specification doesn't require it, so implementations don't need to make it cryptographically random".

    @Buge The reasoning Martin uses applies to CSPRNGs. Not particularly good ones mind you, but then most CSPRNGs used by mainstream languages are anything but amazing.

    @Voo If "the bottom bit of the output is random, but independent of the seed" then it isn't a CSPRNG. A CSPRNG requires that if you have no knowledge of the seed, then the output of the CSPRNG is indistinguishable from true randomness. But this bad PRNG is fairly easily distinguishable from true randomness even with no knowledge of the seed. Simply check if the bottom bit follows the pattern.

    @Buge I was thinking of a shared source of randomness (say a hardware module). But you're right that that wouldn't work if you had two sequences.

Licensed under CC BY-SA with attribution

Content dated before 7/24/2021 11:53 AM