Create a memory leak, without any fork bombs

  • Your task is to create a memory leak: a program that uses loads of memory until the computer runs low and has to swap to save itself. The only way the memory can be released is by killing the program in the task manager, by a command-line kill such as `taskkill /im yourprogram /f` (on Windows), or even by restarting the computer. Merely closing the app should not stop it from hogging memory.



    Rules:




    1. Fork bombs of any sort are banned. That means the infamous Bash line `:(){ :|:&};:` is banned!


    2. The application must be single-threaded only. This implies the fork bomb rule.


    3. The program must not run other programs. This means that you can't just do something like `run(memoryfiller.exe)`. The only exception to this is programs that are bundled with your OS or language and that are not primarily designed to consume memory (i.e. they have another purpose). This means that things like `cat` and `ln -s` are permitted.


    4. You can take up as much memory as you like. The more the better.


    5. Code must be explained fully.




    Good luck. This is a popularity contest, so the code with the most votes 10 days after the asking date wins!


    "Closing it should still make it hog memory" - if a program is a shell executable (as most Windows versions of scripting-language interpreters are), closing its window will kill the program.

    Isn't this just `while(1)malloc(999);`?
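    For reference, that one-liner really is the baseline leak, though the OS reclaims everything the instant the process dies, so it fails the "closing it should still make it hog memory" clause. A bounded sketch of the same idea, capped at 100 blocks so it actually terminates (the real version drops the cap and loops forever):

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Naive leak: allocate repeatedly and never free.  Capped at 100
           iterations here so the sketch terminates; remove the cap for the
           genuine `while(1)malloc(999);` behaviour. */
        size_t total = 0;
        for (int i = 0; i < 100; i++) {
            void *p = malloc(999);
            if (p == NULL)
                break;          /* allocator finally gave up */
            (void)p;            /* pointer deliberately discarded: leaked */
            total += 999;
        }
        printf("leaked %zu bytes\n", total);
        return 0;
    }
    ```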

    I'm not sure if "Closing it should still make it hog memory" is compatible with "The application must be single threaded only." If no thread has a chunk of memory, the OS can take it back, right?

    Does writing an infinite string to a file count?

    Just run firefox 26 with a few tabs open running flash for a half hour. It'll bring your computer to its knees.

    @mniip. That's the whole point of the challenge. To make a difficult challenge. And doorknob. I wanted something different! ;)

    @GeorgeH my previous comment was a troll (I apologize for that). See my answer, which hangs the computer in 2 seconds! Once started, your computer will hang, so don't try!

    "Closing it should still make it hog memory." Without spawning some secret processes this is not possible on unices unless you exploit some kernel memory leak or mess directly with the OS. You'd need a Windows platform (preferably Win98 or older) to do that with conventional code.

    Is overriding garbage collection so that nothing happens a valid answer?

    @David. That's fine; garbage collection can be overridden for this.

    Why specifically without fork bombs? Banning one technique is like saying "no Python" or "can't use BeOS". It just seems silly.

    You can always install Windows... that'll give you several interesting memory leaks, some of which have been there a really long time. You can also run `malloc`s, and store into the memory location given to you by malloc, until it barfs up and gives you a 0L as an error code ("Nil Pointer"), then store into location 0, and see how long the machine lives. Judging by how many "zero-stores" I have seen in many places, Microsoft and Apple run neck and neck with Nil pointer references. *grin* Dave Small

    By "must be single threaded", do you mean "must not explicitly start any threads?" As a comment on the current most popular answer notes, some language runtimes have multiple threads behind the scenes, which effectively excludes those languages from the challenge. Furthermore, @acbabis raises a valid point. I don't know of any languages where "closing the program" wouldn't at **least** stop the main thread from running. Without at least allowing threads opened in the background by standard libraries (such as GUI threads) or the runtime itself, this seems like an impossible challenge.

    @jpmc26. No. I mean you cannot use memory by starting threads; for example, you can't use `for x in range(100000)` to spawn loads of threads to take up memory.

    How do you define "merely closing the app"? Closing all windows belonging to the process, or must the process itself be terminated? What if the program runs entirely in the background? What's the difference between "closing the app" and "killing the program" then?

    @hpsMouse. Closing is pressing the close button (the X in the top right corner). Killing it is going into the task manager and clicking End Task, or going into Processes and ending the process.

    I wonder if I should remove the 10 days part!!

    I'm closing this question as off-topic because it's asking for, in some sense, malicious software, which we do not allow.

  • nobody

    Correct answer

    7 years ago

    Windows



    The Win32 API allows you to allocate memory in other processes, and then to read/write that memory remotely. This program has only one thread, which it uses to enumerate each running process on the system, and then repeatedly allocate 1MB buffers in each process until the allocation fails. When it finishes with one process, it moves on to the next. The allocations are not released when the calling program finishes - only when/if each target process finishes. This hangs a 2GB Windows 7 VM in about 10 seconds. It does require running as admin.



    To compile: `cl /MD leak.cpp /link psapi.lib`



    #include <windows.h>
    #include <psapi.h>
    #include <stdlib.h>  /* malloc */
    #include <string.h>  /* memset */

    typedef void (*ProcFunc)(DWORD pid);
    #define ALLOC_SIZE 0x100000  /* 1 MB per allocation */
    LPVOID buf;

    /* Call f once for each process ID on the system. */
    void ForEachProcess(ProcFunc f)
    {
        DWORD aProcesses[1024], cbNeeded;

        if (!EnumProcesses(aProcesses, sizeof(aProcesses), &cbNeeded))
            return;

        for (unsigned int i = 0; i < cbNeeded / sizeof(DWORD); i++)
            if (aProcesses[i] != 0)
                f(aProcesses[i]);
    }

    /* Allocate 1 MB buffers inside the target process until it is full. */
    void RemoteLeak(DWORD pid)
    {
        HANDLE hProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
        if (hProcess == NULL)
            return;

        for (;;)
        {
            LPVOID ptr = VirtualAllocEx(hProcess, NULL, ALLOC_SIZE,
                                        MEM_COMMIT, PAGE_READWRITE);
            if (ptr == NULL)
                return;

            /* Touch the pages so they are actually backed by memory. */
            WriteProcessMemory(hProcess, ptr, buf, ALLOC_SIZE, NULL);
        }
    }

    int main(void)
    {
        buf = malloc(ALLOC_SIZE);
        if (buf == NULL)
            return 0;

        memset(buf, 0xFF, ALLOC_SIZE);

        ForEachProcess(RemoteLeak);

        return 0;
    }

    Windows is evil.

    I need to shut down tonight. I'll give this a go ;)

    " (running as a normal user, not using admin privileges" - not sure about this, you need SeDebugPrivilege which by default is not present in regular user's token

    @rkosegi Thanks, fixed.

    +1 **This deserves many upvotes** as it so far is the **only** answer that meets the original _Closing it should still make it hog memory_ requirement. Very creative solution :-)

    @rkosegi: Sure about SeDebugPrivilege? AFAIR (my Win32 times are a while ago) you do not need it for own processes, and there surely are a few around.

    Hi @Daniel, according to http://msdn.microsoft.com/en-us/library/windows/desktop/ms684320%28v=vs.85%29.aspx and also my little experience, I needed to adjust the token (even as an admin on WinXP) to make it work. My program simply opened specific processes, enumerated their threads, and killed them. Maybe I'm wrong here, that's why I said "I'm not sure", but MSDN says so. BTW Andrew is accessing more than just his own process in his answer.

    @rkosegi: "opening specific processes" is the key here. What processes? If they have been created with the same token as `leak.exe`, their DACL should grant you access. Basically, everything you could (as a non-admin) kill from Task Manager should also be leakable by this nice prog. (SeDebugPrivilege is the one that, if enabled, bypasses any process DACL, so you can also access foreign processes.)

    Is the `WriteProcessMemory()` really necessary? In theory, `MEM_COMMIT` of `VirtualAllocEx()` should be enough. Also, you probably do not need `PROCESS_ALL_ACCESS`, but only `PROCESS_VM_OPERATION`.

    @Daniel you're right that it's the only solution that still hogs memory when it gets killed. But the requirements state that it's fine if the memory is released when the program is killed. Therefore my solution also fulfils that requirement, by disabling any way to normally terminate the program. Killing it is the only way to stop it from running. But Andrew Medico definitely has a special way to keep the memory leaked after termination. That's really a nice one ;)

    The RAM Disk and shared memory based approaches also still hog memory after being killed. This solution does deserve plentiful upvotes for the sheer perverseness of allocating memory in other processes, though.
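    The shared-memory approach mentioned in that comment can be sketched on POSIX systems (not Windows): a named shared-memory object persists after its creator exits, until someone calls `shm_unlink` or the machine reboots. The object name `/leakdemo` and the 1 MB size are made up for this example; the demo unlinks at the end to clean up after itself, which the real leak would skip:

    ```c
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t size = 1 << 20;  /* 1 MB for the demo; scale up for a real leak */
        int fd = shm_open("/leakdemo", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }

        if (ftruncate(fd, (off_t)size) != 0) { perror("ftruncate"); return 1; }

        /* Touch every page so the kernel actually backs it with memory. */
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        memset(p, 0xFF, size);

        printf("leaked %zu bytes into /dev/shm\n", size);

        /* The real leak would exit HERE, leaving the object (and its pages)
           alive in /dev/shm after the process dies.  This demo cleans up. */
        shm_unlink("/leakdemo");
        return 0;
    }
    ```

    Compile with `cc demo.c` (older glibc needs `-lrt`); on Linux the leftover object would show up as `/dev/shm/leakdemo` until removed.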

    Congratulations! You have won the contest :)

License under CC-BY-SA with attribution

