How do large companies protect their source code?
I recently read our ursine overlord's canonical answer to the question How do certification authorities store their private root keys?
I then just had to ask myself:
How do large companies (e.g. Microsoft, Apple, ...) protect their valuable source code?
In particular I was asking myself: how do they protect their source code against theft, against malicious external modification, and against malicious insider modification?
The first sub-question was already (somewhat) answered in CodeExpress' answer on How to prevent private data being disclosed outside of Organization.
The reasoning for the questions is simple:
- If the source code were stolen, a) would the company be (at least partially) hindered from selling it, and b) would the product be at risk from attackers searching the source for vulnerabilities? Just imagine what would happen if the Windows or iOS source code were stolen.
- If the code were modified by malicious external attackers, secret backdoors could be added, which can be catastrophic. This is what happened at Juniper recently, where the coordinates of the second DUAL_EC_DRBG point were replaced in their source.
- If the code were modified by an internal attacker (e.g. an Apple iOS engineer?), that person could make a lot of money by selling said backdoors, and could put the product at severe risk if the modified version ships.
Please don't come up with "law" and "contracts". While these are effective measures against theft and modification, they certainly don't work as well as technical defenses, and they won't stop aggressive attackers (e.g. other governments' agencies).
To prevent theft: no removable-media slots in the workstations, and employees are not allowed to bring media, camera phones, etc. to work. Access to modules not relevant to one's own work requires separate authorization (separation of duties). To prevent backdoors, a source code analysis program needs to be built into the secure SDLC. Against malicious external attackers, the product should undergo penetration testing before release.
To be fair, in the U.S., most of Europe, Japan, etc., "law and contracts" (well, contracts are enforced or ignored under "law", but that's a petty semantic quibble...) are often effective tools, both for punishing those who compromise source code and for restraining what advantage parties who gain access to that code can take from it. (Not *always* effective, sure.) The big problems come much more from actors in the areas of the world of cyber-lawlessness, or at least areas where the authorities don't give two whits about protecting the claimed legal rights of Western companies.
IMHO, that's two questions: Protect source code from being stolen, and from being tampered with. They have different threat models, and should be asked separately.
FWIW, the core of iOS source code is freely available to anyone: http://opensource.apple.com. Also, Windows source code is available to anyone willing to sign an agreement (not sure if one needs to pay anything): https://www.microsoft.com/en-us/sharedsource/. So really big companies like Apple and Microsoft protect their IP using THE LAW (not the answer you wanted, but it's the truth).
Also note that unlike Microsoft, whose agreement prevents you from selling your own version of Windows, Apple cannot prevent others from building and selling OSes based on the OS X kernel (because it is open source). One such OS is PureDarwin: http://www.puredarwin.org/
If closed source is the only thing that protects you from vulnerabilities, then I have bad news for you.
In a large company, it's unlikely that every developer would need to recompile every product that the company has built. So if developers are only given access to the source code they need to complete their projects, no one developer can leak all of the company's source code. Except perhaps the keeper of the keys, but you can avoid that by using different repositories, with admin access for each project held by different managers.
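As a toy sketch of that idea (all names invented, not any real company's tooling), you can model per-project repository ACLs and verify that no single account, developer or admin, can reach every repository:

```python
# Toy model of per-project repository access, illustrating least privilege:
# each repo has its own developers and its own admin, so no one account can
# leak the entire codebase. All names here are invented for illustration.

REPO_ACCESS = {
    "billing":   {"devs": {"alice", "bob"},  "admin": "mgr_jones"},
    "frontend":  {"devs": {"carol", "dave"}, "admin": "mgr_smith"},
    "installer": {"devs": {"erin", "alice"}, "admin": "mgr_patel"},
}

def reachable_repos(user: str) -> set:
    """Repos a given account can read (dev membership or admin rights)."""
    return {
        repo for repo, acl in REPO_ACCESS.items()
        if user in acl["devs"] or user == acl["admin"]
    }

def full_access_accounts() -> set:
    """Accounts that could leak the whole codebase on their own."""
    everyone = {u for acl in REPO_ACCESS.values()
                for u in acl["devs"] | {acl["admin"]}}
    return {u for u in everyone if reachable_repos(u) == set(REPO_ACCESS)}
```

With the table above, `full_access_accounts()` is empty: even the project admins only hold the keys to their own repository.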
One amusing thing we do is sign up for services that regularly index and search Github and the like for our own company name and internal URLs. You'd be surprised how often proprietary Java packages with hardcoded internal URLs, usernames and passwords get uploaded to external code repositories.
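A minimal sketch of the kind of scan such a service performs, assuming an invented internal domain (`corp.example.com`) and a deliberately simplistic ruleset; real services index public repositories at scale rather than walking a local tree:

```python
# Scan a code tree for internal hostnames and hardcoded credentials.
# The patterns and the "corp.example.com" domain are illustrative
# assumptions, not a real detection ruleset.
import re
from pathlib import Path

PATTERNS = {
    "internal URL": re.compile(r"https?://[\w.-]*\.corp\.example\.com\S*"),
    "hardcoded password": re.compile(
        r"(?i)\b(password|passwd|pwd)\s*[:=]\s*[\"'][^\"']+[\"']"),
}

def scan_tree(root: str) -> list:
    """Return (file, line number, finding label) for every match under root."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue  # unreadable file; skip it
        for lineno, line in enumerate(lines, 1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, label))
    return findings
```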
As a related aside, large companies do have software stolen sometimes. Here is the result: Ex-IBM employee from China arrested in U.S. for code theft
@KrishnaPandey, let's just say that if the Fortune 50 I work for had some of those kinds of rules in place (re: no carrying media to/from company premises -- which implies a hard line against telecommuting), they wouldn't have acquired the startup I came in through. Such measures have real-world costs, and those costs can outweigh the benefits.
What makes you think that keeping software source secret increases security, or that having it be open decreases it? E.g., here's the source to the OS X kernel: https://opensource.apple.com/source/xnu/xnu-3248.20.55/. Does that automatically make OS X less secure? The truth is quite the opposite: more eyes on the code means more people reporting bugs. Another good example is Atlassian (disclaimer, I work for them), which gives the source of its products to any customer who asks (and lets them modify it as long as they don't redistribute their modifications).
@CharlesDuffy Those scenarios are already covered where InfoSec policies are in place: risks arising from acquisitions, takeovers, and third-party vendors, even your notebook getting lost at an airport. Security is only as strong as its weakest link.
"Just imagine what would happen if the Windows" AFAIR Windows source code has leaked, somewhere around NT or 2000.
@KrishnaPandey You could still grab all the source code, zip it, and then e-mail it. You don't need USB slots for that.
@JonasDralle That will be a lame way to steal, leaving trail everywhere. :)
First off, I want to say that just because a company is big doesn't mean their security will be any better.
That said, having done security work in a large number of Fortune 500 companies, including many name brands most people are familiar with, I'll say that currently 60-70% of them don't do as much as you'd think they should. Some even give hundreds of third-party companies around the world full access to pull from their codebase (though not necessarily write to it).
A few use multiple private Github repositories for separate projects with two-factor authentication enabled, tight control over who they grant access to, and a process to quickly revoke access when anyone leaves.
A few others are very serious about protecting things, so they do everything in house and use what to many other companies would look like excessive levels of security control and employee monitoring. These companies use solutions like Data Loss Prevention (DLP) tools to watch for code exfiltration, internal VPN access to heavily hardened environments just for development with a ton of traditional security controls and monitoring, and, in some cases, full-packet capture of all traffic in the environment where the code is stored. But as of 2015 this situation is still very rare.
Something that may be of interest, and which has always seemed unusual to me, is that the financial industry, especially banks, has far worse security than one would expect, while the pharmaceutical industry is much better than other industries, including many defense contractors. There are some industries that are absolutely horrible about security. I mention this because there are other dynamics at play: it's not just big companies versus small ones; a large part of it has to do with organizational culture.
To answer your question, I'm going to point out that it's the business as a whole making these decisions and not the security teams. If the security teams were in charge of everything, or even knew about all the projects going on, things probably wouldn't look anything like they do today.
That said, you should keep in mind that most large businesses are publicly traded and for a number of reasons tend to be much more concerned with short-term profits, meeting quarterly numbers, and competing for marketshare against their other large competitors than about security risks, even if the risks could effectively destroy their business. So keep that in mind when reading the following answers.
If source code were stolen:
a. Most wouldn't care, and it would have almost no impact on their brand or sales. Keep in mind that the code itself is in many cases not what stores the value of a company's offering. If someone else got a copy of the Windows 10 source, they couldn't suddenly create a company selling a Windows 10 clone OS and be able to support it. The code itself is only part of the solution sold.
b. Would the product be at greater risk because of this? Yes, absolutely.
External Modification: Yes, but this is harder to do, and easier to catch. That said, since most companies are not seriously monitoring this it's a very real possibility that this has happened to many large companies, especially if back-door access to their software is of significant value to other nation-states. This probably happens a lot more often than people realize.
Internal Attacker: Depending on how smart the attacker was, this may never even be noticed, or could be made to look like an inconspicuous programming mistake. Outside of background checks and behavior monitoring, there is not much that can prevent this, but hopefully some source-code analysis tools would catch it and force the team to correct it. This is a particularly tough attack to defend against and is the reason a few companies don't outsource work to other countries and do comprehensive background checks on their developers. Static source code analysis tools are getting better, but there will always be a gap between what they can detect and what can be done.
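As a toy illustration of what static source code analysis looks for (nothing like a production tool, and the rules here are invented), you can walk a module's AST and flag constructs a reviewer should double-check:

```python
# Minimal static-analysis sketch: parse Python source (without running it)
# and flag two invented "suspicious" patterns: eval/exec calls, and string
# constants assigned to names containing "password".
import ast

SUSPICIOUS_CALLS = {"eval", "exec"}

def flag_suspicious(source: str) -> list:
    """Return (line number, reason) pairs for risky constructs."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # Direct calls to eval()/exec() by name.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in SUSPICIOUS_CALLS):
            findings.append((node.lineno, "call to " + node.func.id + "()"))
        # String literal assigned to a password-like variable name.
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if (isinstance(target, ast.Name)
                        and "password" in target.id.lower()
                        and isinstance(node.value, ast.Constant)
                        and isinstance(node.value.value, str)):
                    findings.append((node.lineno, "hardcoded password"))
    return findings
```

Real tools track data flow across functions and files, which is exactly where the detection gap the answer mentions opens up: a backdoor split across several innocuous-looking changes defeats pattern-level checks like these.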
In a nutshell, the holes will always come out before the fixes, so dealing with most security issues becomes something of a race against time. Security tools help give you time-tradeoffs but you'll never have "perfect" security and getting close to that can get very expensive in terms of time (slowing developers down or requiring a lot more man-hours somewhere else).
Again, just because a company is big doesn't mean it has good security. I've seen some small companies with much better security than their larger competitors, and I think this will increasingly be the case, since smaller companies that want to take their security more seriously don't have to make massive organizational changes, whereas larger companies will be forced to stick with the way they've done things in the past due to the transition cost.
More importantly, I think it's easier for a new company (of any size, but especially smaller ones) to have security heavily integrated into its core culture, rather than having to change a current/legacy culture as older companies must. There may even be opportunities now to take market share away from a less secure product by creating a very secure version of it. Likewise, I think your question is important for a totally different reason: security is still in its infancy, so we need better solutions in areas like code management, where there is a lot of room for improvement.
To add to the point that most people wouldn't care if source code was stolen: generally speaking the real value of the company is the data in their databases, not the source code. Chances are most of that source code is boring wheel-reinventions that you can find anywhere.
As a side note, one technology giant requires you to carry your devices (PC, phone) with you at all times.