Can a computer virus be stored somewhere other than on the hard drive?

  • Are there viruses that have managed to hide themselves somewhere other than on the hard drive? Like CPU cache or on the motherboard?

    Is it even possible? Say I get a virus, so I get rid of the HDD and install a new one. Could the virus still be on my PC?

    Floppy disks -- I remember having to deal with viruses on floppies on my Amiga, and that didn't even have a hard drive. Same applies to any other removable media like USB sticks that can auto-run code when inserted into the computer. Even read-only media like CD-ROMs might have been shipped with viruses on them.
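
    For reference, the classic Windows-era mechanism behind that auto-run behaviour was a plain autorun.inf file in the root of the medium; a minimal sketch is below. The file and program names are placeholders, and Windows 7 and later largely ignore this for non-optical media.

    ```ini
    ; Minimal sketch of a classic autorun.inf (root of a CD or USB stick).
    ; "open" names the program Windows would launch on insertion.
    ; Windows 7 and later ignore this for non-optical media.
    [autorun]
    open=setup.exe
    icon=setup.exe,0
    label=Driver Disc
    ```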

    A bit more abstract, but I once had a virus inside a virtual Windows machine that had access to my real hard disk. Not directly what you're asking about, hence a comment.

    Years ago I had a virus on my mainboard (at least I'm assuming so, since I couldn't explain it any other way). It was around 2008 and my computer behaved strangely: random files were being written all over my folders, and one or two reboots later something prevented booting from that HDD. I also wasn't able to reinstall Windows on that HDD. So I bought a new HDD, unplugged the old one, and installed Windows. I installed drivers (hadn't even connected to the internet yet) and rebooted... The same files were written onto the factory-new HDD. One more reboot and I couldn't use that one anymore either. I bought a new PC.

    Someone should write a virus for a mercury delay line.

    Though it doesn't answer this question, it's also worth knowing that a virus doesn't have to be stored anywhere (except for the running instance in memory) if it spreads quickly enough. In that case, if everyone on the internet shut down their computers at the same time, the virus would be gone; but they don't.

    If you want to be scared or impressed, depending on your position, check out hardware manufacturers' reference documentation. For example, Dell offers a document called a "Statement of Volatility" for all servers. It contains multiple pages inventorying the (writeable, flash) firmware storage in an enterprise server. Sample: http://downloads.dell.com/manuals/common/poweredge-r720_white%20papers1_en-us.pdf

    @Zaibis: What you are describing can also be caused by a simple failure of the onboard drive controller. Replacing the mobo should have been sufficient, assuming the failure wasn't being induced by something else (like a marginal power supply, or another component able to inject transients onto the power bus or data bus).

    There are viruses that hide in the firmware of your hardware. Think about it: the driver inside your GPU (or any other part of your computer) is infected, and then you're basically boned. Even when you wipe your full PC, the malware stays there, and probably the only way to recover is to flash that piece of hardware or replace it, given that you ever have the chance of locating it. This is the really nasty stuff.

    @Zaibis: Had that behaviour once, too. As Eric describes, it was probably a mobo controller failure. To me, it was even worse than a virus, because it could hide itself for way longer than your typical virus, and it was completely "cross-platform" :P

    @JonasDralle What should a "driver inside your GPU" be?! How would that code be executed and re-infect the PC?

  • Polynomial (correct answer, 5 years ago)

    Plenty of places:

    Modern hardware has a wide range of persistent data stores, usually used for firmware. It's far too expensive to ship a complex device like a GPU or network card with its firmware on a mask ROM where it can't be updated, and then have a fault cause mass recalls. As such, you need two things: a writeable location for that firmware, and a way to put the new firmware in place. This means the operating system software must be able to write to where the firmware is stored in the hardware (usually EEPROMs).
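
    To make "a way to put the new firmware in place" concrete, here's a rough sketch of the command sequence used to program a typical SPI NOR flash part, the kind a lot of firmware lives on. This is a sketch under assumptions: the spi_transfer() helper is hypothetical, standing in for whatever bus access the platform's driver provides, the opcodes are the common JEDEC ones, and a real update would erase the target sector first.

    ```c
    /* Sketch: programming one 256-byte page of a generic SPI NOR flash.
     * spi_transfer() is hypothetical -- a stand-in for the platform-specific
     * way of clocking bytes over the SPI bus in one chip-select assertion.
     * Opcodes are the common JEDEC ones (e.g. W25Q/M25P-style parts). */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define CMD_WRITE_ENABLE 0x06
    #define CMD_PAGE_PROGRAM 0x02
    #define CMD_READ_STATUS  0x05
    #define STATUS_BUSY      0x01   /* write-in-progress bit */

    /* Hypothetical bus helper: send tx_len bytes, then read rx_len bytes. */
    extern void spi_transfer(const uint8_t *tx, size_t tx_len,
                             uint8_t *rx, size_t rx_len);

    static void flash_wait_idle(void)
    {
        uint8_t cmd = CMD_READ_STATUS, status;
        do {
            spi_transfer(&cmd, 1, &status, 1);  /* poll the status register */
        } while (status & STATUS_BUSY);
    }

    void flash_program_page(uint32_t addr, const uint8_t *data, size_t len)
    {
        uint8_t wren = CMD_WRITE_ENABLE;
        uint8_t buf[4 + 256];

        spi_transfer(&wren, 1, NULL, 0);   /* unlock writes */

        buf[0] = CMD_PAGE_PROGRAM;         /* opcode + 24-bit address */
        buf[1] = (addr >> 16) & 0xFF;
        buf[2] = (addr >> 8) & 0xFF;
        buf[3] = addr & 0xFF;
        memcpy(&buf[4], data, len);        /* len must be <= 256 */

        spi_transfer(buf, 4 + len, NULL, 0);
        flash_wait_idle();                 /* block until the write completes */
    }
    ```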

    A good example of this is the state of modern BIOS/UEFI update utilities. You can take a UEFI image and an executable running on your OS (e.g. Windows), click a button, and your UEFI updates. Simple! If you reverse engineer how these work (which I have done a few times) it's mostly a case of a kernel-mode driver being loaded which takes page data from the given UEFI image and talks directly to the UEFI chip using the out instruction, sending the correct commands to unlock the flash and start the update process.
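
    To illustrate that last step, here's a minimal sketch of what poking hardware with the out instruction looks like from privileged code on x86 Linux. The port and value are placeholders (0x80 is the traditionally harmless POST-code debug port); a real flash utility would be driving the chipset's flash-controller registers instead, usually from a kernel-mode driver.

    ```c
    /* Sketch: issuing a literal x86 `out` instruction from user space on
     * Linux. Needs root, and ioperm() only covers ports below 0x400.
     * Port and value are made up for illustration; 0x80 is the POST-code
     * debug port, which is safe to write to. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/io.h>   /* x86 Linux: ioperm(), outb(), inb() */

    int main(void)
    {
        const unsigned short PORT = 0x80;

        if (ioperm(PORT, 1, 1) != 0) {   /* ask the kernel for I/O access */
            perror("ioperm (are you root?)");
            return EXIT_FAILURE;
        }

        outb(0xAB, PORT);   /* compiles down to a literal `out` instruction */
        printf("wrote 0xAB to port 0x%X\n", PORT);

        ioperm(PORT, 1, 0); /* drop the access again */
        return EXIT_SUCCESS;
    }
    ```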

    There are some protections, of course. Most BIOS / UEFI images won't load unless they're signed by the vendor. Of course, an advanced enough attacker might just steal the signing key from the vendor, but that's going into conspiracy theories and godlike threat actors, which just aren't realistic to fight in almost any scenario. Management engines like IME are meant to have certain protections which prevent their memory sections from being accessed even by ring 0 code, but research has shown that there are many mistakes out there, and lots of weaknesses.
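
    To make that signing check concrete, here's a sketch of the verification step an update utility might perform before flashing, written against OpenSSL's EVP API. The key handling and file layout are simplified assumptions; vendors typically embed the public key in the tool or in the existing firmware image rather than reading it from a PEM file.

    ```c
    /* Sketch: the kind of signature check an update utility could run
     * before flashing. Verifies an RSA/SHA-256 signature over the firmware
     * blob with OpenSSL's EVP API. Key/file handling is a placeholder.
     * Build with: -lcrypto */
    #include <stdio.h>
    #include <openssl/evp.h>
    #include <openssl/pem.h>

    int firmware_signature_ok(const unsigned char *image, size_t image_len,
                              const unsigned char *sig, size_t sig_len,
                              FILE *pubkey_pem)
    {
        EVP_PKEY *key = PEM_read_PUBKEY(pubkey_pem, NULL, NULL, NULL);
        if (!key)
            return 0;

        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        int ok = ctx
              && EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, key) == 1
              && EVP_DigestVerifyUpdate(ctx, image, image_len) == 1
              && EVP_DigestVerifyFinal(ctx, sig, sig_len) == 1;

        EVP_MD_CTX_free(ctx);
        EVP_PKEY_free(key);
        return ok;   /* flash only if this returns 1 */
    }
    ```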

    So, everything is screwed, right? Well, yes and no. It's possible to put rootkits in hardware, but it's also incredibly difficult. Each individual computer has such variance in hardware and firmware versions that it's impossible to build a generic rootkit for most things. You can't just take a generic Asus BIOS and flash it to any board; you'll kill it. You'd need to create a rootkit for each separate board type, sometimes down to the correct revision range. It's also an area of security that involves a huge amount of cross-domain knowledge, reaching all the way down to the hardware and the low-level operational aspects of modern computing platforms, alongside strong security and cryptographic knowledge, so not many people are capable of it.

    Are you likely to be targeted? No.

    Are you likely to get infected with a BIOS/UEFI/SMM/GPU/NIC-resident rootkit? No.

    The complexities and variances involved are just too great for the average user to ever realistically have to worry about it. Even from an economic perspective, these things take an inordinate amount of skill and effort and money to build, so burning them on consumer malware is idiotic. These kinds of threats are so targeted that they only ever really belong in the nation-state threat model.

    Don't forget the most simple case: a flash drive. If a USB stick caused the infection, it will happily infect the new HDD again.

    @Bergi I took the question to exclude traditional mass storage media entirely, but yes, that is true. You can also include smartphones in that category.

    It really depends on what "you" represents. A low-level user like you and me? Of course not. A high-profile journalist reporting on controversial subjects? The answer is a bit more sophisticated. Being careful never hurt anybody...

    It's more than just the content of the flash drive, though. Wasn't there a PoC for infecting USB firmware?

    @Nate Yes, it's known as BadUSB, but for virtually all the reasons described above for other hardware types, BadUSB is equally unlikely to be a threat to the average user.

    Note that malware like the one that infected the Iranian nuclear enrichment facilities was also later discovered in German power plants and on the laptops of ordinary people. That's the thing with malware designed to infect secure installations: by definition it has to try as hard as it possibly can to spread. Such malware, since it was designed to target specific hardware (in this case equipment manufactured by Siemens), is unlikely to do damage to your PC. But just because you weren't deliberately targeted does not mean that your PC won't be infected.

    While I tend to agree that these threat vectors are unlikely, aren't we basically reviving a variation of the old "security through obscurity" argument?

    @ViktorToth Yes, but that's somewhat irrelevant when we're talking about risk modeling. It's not as much obscurity as it is limiting the applicability of malware to a very small ecosystem, making it not worth the effort to an attacker unless it's intended to be targeted. Attack economics is an important part of the threat model.

    @slebetman The difference here is that Stuxnet's *payload* was designed to interface with a specific PLC device attached to the system. In this case we're talking about generic persistence vectors, not payloads, which is an important distinction. While you might get infected with a piece of malware with such functionality (which would still be unlikely), the chances of it actually being able to "hide" in the intended way are infinitesimal. Similarly, the German power plant that was infected simply had Stuxnet on a computer that was in the plant, and it did not trigger its intended payload.

    @Polynomial: Correct. That's what I said. Just because you're not targeted does not mean you will not be infected. Remember, payload and infection are two different things. I don't know about you, but I'm not comfortable with viruses on my PC regardless of whether the payload is deployed. It's hard enough to live with bugs in regular programs; potential bugs in viruses that may end up doing random stuff to my files, I can live without.

    @slebetman True, but I'd rather have malware on my system that I can detect and appropriately triage than malware on my system that ruins my hardware.

    It is also possible for malware to hide itself only in memory (RAM). This is becoming popular especially on servers, which are rarely restarted.

    @rugk This is almost impossible to do without ever touching the disk, though. While attempts have been made, issues like paging, and having to drop the libraries that get injected into processes into temporary directories, ultimately mean that "memory-resident" malware is often not entirely memory-resident.

    If you are really paranoid: your hard drive can have malware in its firmware. I read a while ago about running Linux on the drive's controller chip itself. The hard drive had an ARM processor, 32 MB of memory, and enough resources to run a shell there. You can read about it at http://spritesmods.com/?art=hddhack

    Just a minor correction: malware cannot store itself in SMM; it can merely hide itself there at runtime, because SMM is not persistent. As soon as the computer shuts down, everything executing in a system management context (ring -2) is lost. Also, you are confusing SMM with the ME. SMM is completely different from the Management Engine, which is ring -3. The ME has its own firmware (stored in the BIOS), which actually is persistent, whereas SMM is just a higher-privileged mode available for execution on the CPU.

    Actually, there's a third problem with this answer I have to point out. You mentioned the Jellyfish PoC (the GPU-resident malware). It is also not persistent across reboots; like SMM malware, it is just designed to hide from traditional IDS. While in theory the GPU's firmware could be overwritten to create a truly persistent hiding place for malware, that's not what you linked to. And the Quest to the Core paper, which talks about microcode and such, is also not relevant to the question: microcode must be re-applied at every boot and is lost when the system shuts down.

    Overall, every one of those places you mentioned, other than the BIOS/UEFI, is a temporary hiding spot that malware can use to evade IDS and maintain higher privileges. None of them are used to persist across reboots or reinstalls. Also, I think you mixed up the IME with SMIs (the Phrack paper you mentioned didn't talk about the IME at all, only SMIs, which are used to launch into SMM context). I think this whole answer misunderstands the question and lists places malware can hide from an IDS, not places malware can be stored in. And don't most systems use MLC NAND instead of EEPROM nowadays?

    @forest If you infect these areas you can implement malware with specific capabilities. For example, infecting a NIC's firmware gives you DMA access to the entire system memory, and a covert exfiltration channel that can't be seen by the OS. GPU malware can again DMA, so it has full control over the system, although it can't directly communicate out (well, actually, it can, if you hook up an internet-connected TV via HDMI, due to HEC). GPUs also have firmware. Point being that these places store code, and modifying that code allows for a persistent stealthy rootkit.

    That's true, but in order to get into those places you need the privileges required to get DMA abilities in the first place, so it's not privesc either. There's really no point in moving to the GPU once you already have ring 0, other than to hide from the rest of the system and covertly mess with things over DMA. It won't let you survive a reboot. Unless I misunderstood the OP's question, he's not asking about a stealthy rootkit, but about one which is stored in places other than the hard drive, like the BIOS, etc. (not just one which hides from an IDS).

    Aren't there dedicated chips responsible for reading the software to be installed and checking whether it is signed?

    @TrevörAnneDenise Not really; there are features in some hardware platforms to try to enforce code integrity and authenticity (e.g. ARM TrustZone and UEFI SecureBoot) but in practice these have been shown to be deficient due to mistakes in implementation or poor vendor support.

    @TrevörAnneDenise The problem is that these systems have (largely) been jury-rigged into existing architectures to solve problems which arose decades after the system was originally designed. As such, the features tend to have convoluted requirements and limitations, weakening their capabilities and increasing the implementation costs. Also, code signing is a difficult matter on general purpose machines - who should be allowed to sign code? If you limit it, it stifles your freedom of OS choice. If you open it up to anyone, what's to stop malware being signed? It's a difficult UX choice.

    That's understandable!

Licensed under CC-BY-SA with attribution

