Leak Memory in as few bytes as possible

  • Your task is to write code that will leak at least one byte of memory in as few bytes as possible. The memory must be leaked, not just allocated.



    Leaked memory is memory that the program allocates but loses the ability to access before it can deallocate the memory properly. For most high-level languages, this memory has to be allocated on the heap.



    An example in C++ would be the following program:



    int main(){new int;}


    This makes a new int on the heap without a pointer to it. This memory is instantly leaked because we have no way of accessing it.



    Here is what a leak summary from Valgrind might look like:



    LEAK SUMMARY:
    definitely lost: 4 bytes in 1 blocks
    indirectly lost: 0 bytes in 0 blocks
    possibly lost: 0 bytes in 0 blocks
    still reachable: 0 bytes in 0 blocks
    suppressed: 0 bytes in 0 blocks


    Many languages have a memory debugger (such as Valgrind); if you can, you should include output from such a debugger to confirm that you have leaked memory.



    The goal is to minimize the number of bytes in your source.


    Perhaps you could have different ranges of amount leaked, and depending on how much you leak you lose x% of your byte count.

    @ChristopherPeart For one, I am not a fan of bonuses on challenges, and for two, as you have already shown, it is very easy to leak unbounded memory.

    Related. Not a duplicate, though, because most answers to that question form an infinite reachable structure in memory rather than actually leaking memory.

    What is the idea? That the memory cannot be freed? I guess this would require native execution for garbage-collected languages, or exploiting bugs.

    What happened to "this is code golf"?

    I read only the title of your question and immediately went to a C++ online compiler to put `new int` inside the `main` function! After reading the question's body, and because the answer is already there, I will not post that as an answer!

    I see how languages designed for golfing fail miserably on this one ...

    DOS batch file, 3 bytes. `win` :-)

    I think this is how I justify 16 gigs of RAM for a casual desktop

    I think it would have been better if the question asked for a program that didn't leak, from which you then remove characters to make a program that does. That way it would be the program leaking, and not all those interpreters and compilers that leak on every program they run.

    @JerryJeremiah Ideally, if I were to ask this again I would require two programs, one of which leaks *more* memory than the other, and your score would be the length of the one that leaks more. That way if every program leaks memory you can still participate; you just have to find a way to leak additional memory on top of the default. (Or plug the existing leak).

  • Perl (5.22.2), 0 bytes







    Try it online!



    I knew there'd be some language out there that leaked memory on an empty program. I was expecting it to be an esolang, but it turns out that Perl leaks memory on any program. (I'm assuming that this is intentional, because freeing memory if you know you're going to exit anyway just wastes time; as such, the common recommendation nowadays is to just leak any remaining memory once you're in your program's exit routines.)
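
    As an aside, here is a minimal C sketch (mine, not Perl's actual cleanup code; the names are made up for illustration) of the "don't bother freeing on exit" pattern described above:

    #include <stdlib.h>
    #include <string.h>

    static char *config;  /* hypothetical allocation that lives for the whole run */

    int main(void) {
        config = malloc(4096);
        if (!config) return 1;
        strcpy(config, "key=value");
        /* ... use config until the process exits ... */

        /* Intentionally no free(config): the OS reclaims the whole address
         * space at exit anyway, so freeing here would only cost time. A
         * checker like Valgrind still flags the block ("still reachable"
         * here, or "definitely lost" once the last pointer is gone). */
        return 0;
    }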



    Verification



    $ echo -n | valgrind perl
    …snip…
    ==18517==
    ==18517== LEAK SUMMARY:
    ==18517== definitely lost: 8,134 bytes in 15 blocks
    ==18517== indirectly lost: 154,523 bytes in 713 blocks
    ==18517== possibly lost: 0 bytes in 0 blocks
    ==18517== still reachable: 0 bytes in 0 blocks
    ==18517== suppressed: 0 bytes in 0 blocks
    ==18517==
    ==18517== For counts of detected and suppressed errors, rerun with: -v
    ==18517== ERROR SUMMARY: 15 errors from 15 contexts (suppressed: 0 from 0)

    Is there any way to get the valgrind output on TIO? The empty program itself isn't very enlightening :P

    @CAD97: Not as far as I know (TIO doesn't have `valgrind` installed), but the TIO link at least shows what the program's "main" functionality is (i.e. nothing). The links are also generally useful for things like formatting the post, giving a machine-readable version of the program, etc. (although arguably none of that is useful here). The best example of a pointless TIO link is probably this one (which caught a lot of attention at the time).

    I tried installing `valgrind` earlier, but it requires some permissions the sandbox context doesn't have.

    I liked the Unlambda answer, but this one is (IMHO) too much of a stretch, as it is obviously the interpreter itself which leaks the memory; i.e. I get `definitely lost: 7,742 bytes in 14 blocks` when I run `perl --version` on my machine, even though it never gets to running any program at all.

    @zeppelin: Agreed, but according to our rules, it's the implementation that defines the language, thus if the implementation leaks memory, all programs in the language leak memory. I'm not necessarily sure I agree with that rule, but at this point it's too entrenched to really be able to change.

    This also works in Node JS.

    This feels like a new standard loophole in the making...

    Finally a Perl script that I can understand.

    If Perl one-liners are good, no-liners are better!

  • C, 48 31 22 bytes



    Warning: Don't run this too many times.



    Thanks to Dennis for lots of help/ideas!



    f(k){shmget(k,1,512);}


    This goes one step further. shmget allocates shared memory that isn't deallocated when the program ends. It uses a key to identify the memory, so we use an uninitialized int. This is technically undefined behaviour, but practically it means that we use the value that is just above the top of the stack when this is called. This will get written over the next time that anything is added to the stack, so we will lose the key.
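
    For context, here is a rough C sketch (mine, not part of the answer) of why throwing away the key is what makes this a leak: anyone who still knows the key can look the segment up again and deallocate it. The key value 123 is hypothetical; it happens to match the 0x0000007b segment shown in the verification further down.

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void) {
        key_t key = 123;               /* hypothetical key, e.g. 0x0000007b from the verification */
        int id = shmget(key, 1, 512);  /* 512 == IPC_CREAT: finds (or creates) the segment */
        if (id < 0) { perror("shmget"); return 1; }
        shmctl(id, IPC_RMID, 0);       /* with the id in hand, the segment can be removed */
        return 0;
    }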






    The only case where this doesn't work is if you can figure out what was on the stack before. For an extra 19 bytes you can avoid this problem:



    f(){srand(time(0));shmget(rand(),1,512);}





    Or, for 26 bytes:



    main(k){shmget(&k,1,512);}


    But with this one, the memory is only leaked after the program exits. While it is running, the program has access to the memory, which is against the rules; but after the program terminates, we lose access to the key and the memory is still allocated.
    This requires address space layout randomisation (ASLR), otherwise &k would always be the same. Nowadays ASLR is typically on by default.






    Verification:



    You can use ipcs -m to see what shared memory exists on your system. I removed pre-existing entries for clarity:



    $ cat leakMem.c 
    f(k){shmget(k,1,512);}
    int main(){f();}
    $ gcc leakMem.c -o leakMem
    leakMem.c:1:1: warning: return type defaults to ‘int’ [-Wimplicit-int]
    f(k){shmget(k,1,512);}
    ^
    leakMem.c: In function ‘f’:
    leakMem.c:1:1: warning: type of ‘k’ defaults to ‘int’ [-Wimplicit-int]
    leakMem.c:1:6: warning: implicit declaration of function ‘shmget’ [-Wimplicit-function-declaration]
    f(k){shmget(k,1,512);}
    ppcg:ipcs -m

    ------ Shared Memory Segments --------
    key shmid owner perms bytes nattch status


    $ ./leakMem

    $ ipcs -m

    ------ Shared Memory Segments --------
    key shmid owner perms bytes nattch status

    0x0000007b 3375157 Riley 0 1 0

    Why is Linux not freeing the memory when the process exits and nothing else is using it? I'm sure this scenario is not possible on Windows: if there are no handles to shared memory it will get freed, and all the handles *are* released when the process shuts down.

    @AndrewSavinykh Theoretically, the shmid could have been stored in a file and a program could attach to it in the future. This is how unix shared memory works...

    @AndrewSavinykh Shared memory basically becomes a resource that the OS can give to other processes. It is similar to a file that lives in RAM and any process that knows the name (key) has access to it until it is deleted. Imagine a process that calculates a number and stores it in memory and exits before the process that reads the data connects to the shared memory. In this case, if the OS frees the memory then the second process can't get it.

    Thank you for posting this. I just protected TIO against shared memory leaks.

    @Dennis That's why I didn't post a TIO link. I didn't know if it was protected or not.

    I like how you use the word *problem* to describe the scenario where the program leaks less memory than intended.

    @AndrewSavinykh there are Linux APIs that were considered OK at the time, but became obsolete with the creation of better alternatives and saner development practices. Unlike in Windows, there is no central entity to forcibly declare stuff *deprecated*; everyone is free to keep using it, even if better alternatives exist. File locks are supplanted by file leases, but some people don't even know those exist; shared memory got replaced by `tmpfs`, but many still use it by inertia. On the bright side, there are no Vista-like breakage-fests :)

    Would `f(k){shmget(k,1,512);}` work?

    @Dennis That depends on how strict we are about losing access. `k` will often be set to the first argument of the function called before f(). That means that there is a good chance we can figure out what `k` originally was.

    But `int k` has the same problem, assuming it is one. https://tio.run/nexus/c-gcc#@[email protected]A3pGoZGxkBZTqABQDX//wMA Btw, `shmget(k,1);` seems to work just fine.

    @Dennis If the function modifies its first argument then `k` will also change to that. I think that is good enough to lose the key. Thanks!

    @Dennis For `shmget(k,1);`, that doesn't always work for me. Without the third argument of 512 (which is IPC_CREAT) it doesn't always create the new segment.

    Right, I should have tested it more than once. `main(k){shmget(&k,1,512);}` should work instead of the `rand()` approach.

    @Dennis After the `shmget` call we still know what `&k` is so we still have the key.

    But that's a full program. How would we recover `&k`?

    @Dennis The question says: "_memory that the program allocates but loses the ability to access before it can deallocate ..._". The program still has the key so it could deallocate. I'll add it, but keep the `rand()` one because this is borderline.

    @AndrewSavinykh you can *almost* get the same thing with `GlobalAddAtom` in Windows. Also `CreateFileMapping`. But `GlobalAddAtom` isn't unrecoverable, and neither is `CreateFileMapping` so I guess they don't really count.

    Why does the main method not count towards the bytes?

    @ThomasWeller In the verification? Because my submission is just the function. `int main(){f();}` is there to call `f()`. In other words, an example of usage.

    `clock()` would be much shorter than `srand();rand()`

  • Unlambda (c-refcnt/unlambda), 1 byte



    i


    Try it online!



    This is really a challenge about finding a pre-existing interpreter which leaks memory on very simple programs. In this case, I used Unlambda. There's more than one official Unlambda interpreter, but c-refcnt is one of the easiest to build, and it has the useful property here that it leaks memory when a program runs successfully. So all I needed to give here was the simplest possible legal Unlambda program, a no-op. (Note that the empty program doesn't work here; the memory is still reachable at the time the interpreter crashes.)



    Verification



    $ wget ftp://ftp.madore.org/pub/madore/unlambda/unlambda-2.0.0.tar.gz
    …snip…
    2017-02-18 18:11:08 (975 KB/s) - ‘unlambda-2.0.0.tar.gz’ saved [492894]
    $ tar xf unlambda-2.0.0.tar.gz
    $ cd unlambda-2.0.0/c-refcnt/
    $ gcc unlambda.c
    $ echo -n i | valgrind ./a.out /dev/stdin
    …snip…
    ==3417== LEAK SUMMARY:
    ==3417== definitely lost: 40 bytes in 1 blocks
    ==3417== indirectly lost: 0 bytes in 0 blocks
    ==3417== possibly lost: 0 bytes in 0 blocks
    ==3417== still reachable: 0 bytes in 0 blocks
    ==3417== suppressed: 0 bytes in 0 blocks
    ==3417==
    ==3417== For counts of detected and suppressed errors, rerun with: -v
    ==3417== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)

  • TI-Basic, 12 bytes


    While 1
    Goto A
    End
    Lbl A
    Pause

    "... a memory leak is where you use a Goto/Lbl within a loop or If conditional (anything that has an End command) to jump out of that control structure before the End command is reached... " (more)


    Wow, I think I remember this. I kept jumping out of my loops in my old basic programs and noticed how my TI-84+ got slower and slower...

    Yep, most of us know the feeling ;) @RayKoopa

    +1 for Ti Basic. I spent most of my 9th grade year programming those things.

    Do you need `Pause ` at the end? You could save 2 bytes.

    @kamoroso94 I think so, because "If a program is ended the leak is cleared and will cause no further issues", so it is to stop the program from ending.

    A simpler "leak" under these rules would be to create a program named `prgmA` consisting of the single command `prgmA` (2 bytes). This creates the same kind of "leak" (technically blowing an internal stack). Neither one is leaking "memory" in the OP's sense, and neither one is "unrecoverable" in OP's sense either (since the `While` "leak" can be reclaimed by hitting an `End`, and the infinite recursion can be stopped by... making it not-infinite) — but I don't think the OP's definitions make a whole lot of sense in non-C languages anyway. :)

    Nice idea @Quuxplusone, but do notice that in this example there is no recursion and no overflow like there would be with your example.

    @Timtech: It's just a question of semantics (as admittedly so was the OP's question). I'd say that in both cases the control flow is looping forever and the internal data structures of the Basic interpreter (in one case the "active programs" stack, in the other case the "active End-able constructs" stack) are growing without bound, so to me there's no difference. In *neither* case is any "heap memory" allocated; TI-Basic doesn't have that concept. (But see my next comment.)

    I'd say the closest thing to a "heap memory allocation" in TI-Basic is `1→L₁(1+len(L₁`, and the closest thing to a "leak" is `1→ʟA`. If the program does this and then exits, it's up to the calculator user to go delete that newly created list.

  • Python <3.6.5, 23 bytes


    property([]).__init__()

    property.__init__ leaks references to the property's old fget, fset, fdel, and __doc__ if you call it on an already-initialized property instance. This is a bug, eventually reported as part of CPython issue 31787 and fixed in Python 3.6.5 and Python 3.7.0. (Also, yes, property([]) is a thing you can do.)
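
    To illustrate the underlying mechanism, here is a hedged C-API sketch of the general bug pattern (mine, not CPython's actual source; the propertylike type is made up): re-initialising an object and overwriting a PyObject* member without releasing the old reference is what leaks it.

    #include <Python.h>

    typedef struct {
        PyObject_HEAD
        PyObject *fget;  /* callable or NULL */
    } propertylike;

    /* May be called again on an already-initialised instance. */
    static int propertylike_init(propertylike *self, PyObject *fget) {
        PyObject *old = self->fget;
        Py_XINCREF(fget);
        self->fget = fget;
        Py_XDECREF(old);  /* skip this and the old fget's refcount never drops: a leak */
        return 0;
    }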


    Has a bug report been sent?

  • Javascript, 14 bytes


    Golfed


    setInterval(0)

    Registers an empty interval handler with a default delay, discarding the resulting timer id (making it impossible to cancel without guessing the id).




    I've used a non-default interval to create several million timers to illustrate the leak, as using a default interval eats CPU like mad.


    Haha I love that you've typed 'Golfed', makes me curious about the ungolfed version

    it might look like this `if(window && window.setInterval && typeof window.setInterval === 'function') { window.setInterval(0); }`

    @Martijn Believe it or not, I actually did golf it a bit :) The proper syntax for `setInterval` is `setInterval(func|string, delay[, param1, param2, ...]);`, and both `func` and `delay` are required parameters, so `setInterval(0)` abuses them both.

    Actually, this is not impossible to cancel: interval (and timeout) ID's are numbered sequentially, so it's fairly easy to cancel the thing just by calling `clearInterval` with an incrementing ID until your interval is gone. For example: ````for(let i=0;i<1e5;i++){try{clearInterval(i);}catch(ex){}}````

    Not wholly unreachable: the interval id can be guessed in multiple ways. Just plain brute force: `for (var i = 0; i < 100; i++) { clearInterval(i) }`, or more 'elegantly': `var i = setInterval(); clearInterval(i); clearInterval(i-1)`, since it is just an incremented integer.

    @user2428118 Brute-forcing the timer ID might indeed work in some implementations. But in general the order and range of ID values are not guaranteed, nor whether they will be reused (most implementations specify the ID to be an opaque numeric token). So I don't think there is a "legitimate" way to cancel timers w/o knowing the ID. Anyway, finding a "leak" which cannot be "hacked" or "brute-forced" is (IMHO) not the point of this challenge; e.g. most _malloc_-based answers can be "fixed" with malloc hooks pretty easily, but that does not disqualify them.

    @user2428118 As zeppelin says, this is no more "legitimate" than saying the C/C++ leaks aren't "real" because you could brute force calls to `free()`

    Wow, not many challenges where JavaScript is an actual contender...

    @zeppelin user2428118 only disputed your claim that it's impossible to cancel, without saying anything about whether it should count as a leak. Yes, it's possible to cancel the interval. Yes, it's a leak and a valid answer nonetheless. Just change the "making it impossible to cancel" to something like "making it impossible to cancel without guessing the id" to correct that one small inaccuracy.

    Would a pair of backticks (template string) instead of `(0)` or even `setInterval()` work, too?

  • C#, 34 bytes


    class L{~L(){for(;;)new L();}}

    This solution does not require the heap.
    It just needs a really hard-working GC (garbage collector).


    Essentially it turns the GC into its own enemy.


    Explanation


    Whenever the destructor is called, it creates new instances of this evil class until the finalizer timeout runs out and the runtime tells the GC to just ditch that object without waiting for the destructor to finish. By then, thousands of new instances have been created.


    The "evilness" of this is, the harder the GC is working, the more this will blow up in your face.


    Disclaimer: Your GC may be smarter than mine. Other circumstances in the program may cause the GC to ignore the first object or its destructor. In these cases this will not blow up. But in many variations it will. Adding a few bytes here and there might ensure a leak in every possible circumstance. Well, except for the power switch, maybe.


    Test


    Here is a test suite:


    using System;
    using System.Threading;
    using System.Diagnostics;

    class LeakTest {
        public static void Main() {
            SpawnLeakage();
            Console.WriteLine("{0}-: Objects may be freed now", DateTime.Now);
            // Any managed object created in SpawnLeakage
            // is no longer accessible.
            // The GC should take care of them.

            // Now let's see
            MonitorGC();
        }
        public static void SpawnLeakage() {
            Console.WriteLine("{0}-: Creating 'leakage' object", DateTime.Now);
            L l = new L();
        }
        public static void MonitorGC() {
            while (true) {
                int top = Console.CursorTop;
                int left = Console.CursorLeft;
                Console.WriteLine(
                    "{0}-: Total managed memory: {1} bytes",
                    DateTime.Now,
                    GC.GetTotalMemory(false)
                );
                Console.SetCursorPosition(left, top);
            }
        }
    }

    Output after 10 minutes:


    2/19/2017 2:12:18 PM-: Creating 'leakage' object
    2/19/2017 2:12:18 PM-: Objects may be freed now
    2/19/2017 2:22:36 PM-: Total managed memory: 2684476624 bytes

    That's 2 684 476 624 bytes.
    The Total WorkingSet of the process was about 4.8 GB


    This answer has been inspired by Eric Lippert's wonderful article: When everything you know is wrong.


    This is fascinating. Does the garbage collector "forget" that some things exist and lose track of them because of this? I don't know much about C#. Also, now I am wondering: what is the difference between a bomb and a leak? I imagine a similar fiasco could be created by calling a constructor from inside of a constructor, or having an infinitely recursing function that never stops, although technically the system never loses track of those references, it just runs out of space...

    A constructor within a constructor would cause a stack overflow. But the destructor of an instance gets called in a flat hierarchy. The GC actually never loses track of the objects. Just whenever it tries to destroy them, it unwittingly creates new objects. User code, on the other hand, has no access to said objects. Also, the mentioned inconsistencies may arise since the GC may decide to destroy an object without calling its destructor.

    Wouldn't the challenge be complete by just using `class L{~L(){new L();}}`? AFAIK the `for(;;)` only makes it leak memory faster, right?

    Sadly, no. Since for each destroyed object only one new instance is going to be created, which then again is inaccessible and marked for destruction. Repeat. Only ever one object will be pending for destruction. No increasing population.

    @MrPaulch well, the challenge only asks you to leak one byte, and according to the very interesting article you linked at the bottom, when the program ends finalizers will eventually timeout and the last created object will remain allocated.

    Not really. Eventually one finalizer will be ignored. The corresponding object will be eaten regardless.

    Thanks, that is a very interesting situation with garbage collection in C#!

    How does this work if `L` is never instantiated in the first place?

    It doesn't. It also doesn't work if you don't start the process. Sarcasm aside: the challenge asked for some code that can be executed that leaks. In this case instantiating L is the equivalent of running a process or calling a function.

  • C (gcc), 15 bytes



    f(){malloc(1);}


    Verification



    $ cat leak.c
    f(){malloc(1);}
    main(){f();}
    $ gcc -g -o leak leak.c
    leak.c: In function ‘f’:
    leak.c:1:5: warning: incompatible implicit declaration of built-in function ‘malloc’ [enabled by default]
    f(){malloc(1);}
    ^
    $ valgrind --leak-check=full ./leak
    ==32091== Memcheck, a memory error detector
    ==32091== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
    ==32091== Using Valgrind-3.10.0 and LibVEX; rerun with -h for copyright info
    ==32091== Command: ./leak
    ==32091==
    ==32091==
    ==32091== HEAP SUMMARY:
    ==32091== in use at exit: 1 bytes in 1 blocks
    ==32091== total heap usage: 1 allocs, 0 frees, 1 bytes allocated
    ==32091==
    ==32091== 1 bytes in 1 blocks are definitely lost in loss record 1 of 1
    ==32091== at 0x4C29110: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
    ==32091== by 0x40056A: f (leak.c:1)
    ==32091== by 0x40057A: main (leak.c:2)
    ==32091==
    ==32091== LEAK SUMMARY:
    ==32091== definitely lost: 1 bytes in 1 blocks
    ==32091== indirectly lost: 0 bytes in 0 blocks
    ==32091== possibly lost: 0 bytes in 0 blocks
    ==32091== still reachable: 0 bytes in 0 blocks
    ==32091== suppressed: 0 bytes in 0 blocks
    ==32091==
    ==32091== For counts of detected and suppressed errors, rerun with: -v
    ==32091== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)

  • Java, 10 bytes


    Finally, a competitive answer in Java!


    Golfed


    ". "::trim

    This is a method reference (against a string constant), which can be used like this:


    Supplier<String> r = ". "::trim;

    A literal string ". " will be automatically added to the global interned strings pool, as maintained by the java.lang.String class,
    and as we immediately trim it, the reference to it cannot be reused further in the code (unless you declare exactly the same string again).



    ...


    A pool of strings, initially empty, is maintained privately by the class String.


    All literal strings and string-valued constant expressions are interned. String literals are defined in section 3.10.5 of the The Java™ Language Specification.


    ...



    https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#intern--


    You can turn this into a "production grade" memory leak, by adding the string to itself and then invoking the intern() method explicitly, in a loop.


    I considered this for C#... but I don't think it counts, because as you say you _can_ access that memory by including another string literal. I'd also be interested to know what `("." + " ").intern()` would do (if they were user input or w/e, so we discount compiler optimisations).

    @VisualMelon - the reference is lost, so you have to recreate the string content exactly, to be able to access it again, which is pretty much impossible to do programmatically.

    Surely it's just a case of adding two strings together, as in my prior comment, or does `intern` not work like that? It's perfectly possible, I would say. This answer is also neither a program nor a self-contained piece of code that leaks memory, as I don't believe it will compile by itself (it _has_ to have some type-asserting context).

    Yep, if you happen to know the string content in advance, which is an "external" piece of information. I.e. consider that this is a library function and you do not see the source.

    > "it has to have some type-asserting context" I don't think there is a good consensus on Java lambdas currently, and there are well-accepted Java-lambda answers around which do not provide any explicit type information. But anyway, in this specific case it is not just a lambda, but an _object method reference_, against the string literal (whose only possible type is the concrete final class `java.lang.String`), pointing to its 0-arg `trim()` method, making it pretty unambiguous.

    Indeed, the only consensus is slim at best, I'm just firmly on the "the code should compile" side. I'm still not sure I buy this solution given the wording on the question (these strings _can't_ be freed, and they _can_ be found in normal operating code even though it's unlikely), but I invite others to make their own judgement

    Forgive me for saying, I think this is ridiculous. You could just as well say that every cache is a memory leak. The java GC won't clear this memory since it is *intentionally* reserved for performance reasons - to allow `==` comparisons (which, anyway, is not entirely true, since `intern`ed Strings are weakly-referenced).

    > "You could just as well say that every cache is a memory leak" Exactly, if a) what you put into it does not expire by itself, b) there is no way to explicitly remove anything from it, or enumerate its content, and c) you have lost the _key_. Now you have a piece of memory allocated for something you can no longer access or remove, hence it is a leak. And I don't think interned _string literals_ are subject to garbage collection, even in implementations which do GC on the interned strings pool per se.

    That string isn't even *inaccessible*, let alone leaked. We can retrieve it at any time by interning an equal string. If this were to count, any unused global variable (private static in Java) would be a leak. That's not how memory leaks are defined.

    @user2357112 "...That string isn't even inaccessible..." That only looks obvious because you see the code. Now consider you got this method reference X() as an argument to your code, you know that it allocates (and interns) a string literal inside, but you do not know which one exactly, it might be ". " or "123" or any other string of a (generally) unknown length. Would you please demonstrate how you can still access it, or deallocate the entry in the "intern" pool it occupies?

    One thing I don't really know how it should be handled: you can't take this exact code and run it through javac. So should Java answers (like C# or other languages with that buildup) always come with all that class and "public static void main" part?

    @Serverfrog, I don't think there is a good consensus about it, really. There are well-accepted lambda-based answers (like this one), which do not include all this repeated boilerplate code. Moreover, the method reference, as used in this answer, points to a concrete method of a final class (i.e. it implies more type information than a generic lambda). But I guess that would make a good topic to discuss on Meta.

    @zeppelin You used a specific `String`, not a random one. So we have access to the code and we can show you that that code isn't unreachable. It isn't a leak. Your best bet if you want to use the string cache is to generate a random String with an unspecified seed and `.intern()` it. so we can't go back to access the String again. Using any specific String will make it accessible again.

    @zeppelin: A memory leak is not when it's *unlikely* or *difficult* to reach an object. We can access this string simply by guessing correctly. That's not a leak. (If guessing is somehow a magical superability that doesn't count, we can open and read the .class file, even within the program, and there's probably some way to use reflection to determine the string's contents too.)

    @Serverfrog functions are permitted, including lambdas. It is a good idea to include code to invoke the function or explain _which_ functional interface can wrap the lambda, which this answer did. For golfing purposes, however, only the lambda is included in the byte count - not any other "glue" code to include it in a larger program.

    @user2357112 On a machine with a finite memory, you can access a value stored in any piece of memory `simply by guessing it correctly`, but that does not mean that such a thing as memory leaks do not exist. `there's probably some way to use reflection to determine the string's contents too` could you demonstrate this ? (hint, String.intern() is implemented in the _native_ code).

    Won't Java garbage collect the original string, thus not making this a memory leak?

    @tenmiles - it is a string literal, so it should not be garbage collected.

    Personally I like this answer. However I think the best argument against it is that the GC does clean up interned Strings in newer versions (7+) of java. Interned Strings used to be allocated in PermGen but were moved to the heap as of Java 7. Furthermore I think we need better clarification about what a "memory leak" is. The argument I'm seeing is that we can still get a reference to the leaked String. I don't think this matters because we would still have a failure to release that memory even though it's reachable. Failure to release vs lost reference.

    @Poke - that is true for the dynamically allocated strings in Java 7+ (at least for the HotSpot family of VMs), but a string literal, as used in this answer, won't be garbage collected (nor will its entry in the intern pool).

    `Supplier f = ". "::trim; f.get(); var x = ". ";` Now `". "` is reclaimed. So this is not a leak.

  • 8086 ASM, 3 bytes



    This example assumes that a C runtime is linked in.



    jmp _malloc


    This assembles to e9 XX XX, where XX XX is the relative address of _malloc.



    This invokes malloc to allocate an unpredictable amount of memory and then immediately returns, terminating the process. On some operating systems like DOS, the memory might not be reclaimable at all until the system is rebooted!


    The normal implementation of malloc will result in memory being freed on process exit.

    @Joshua Yeah, but that's implementation defined behaviour.

License under CC-BY-SA with attribution

