Paint Starry Night, objectively, in 1kB of code

  • Note: a 1000 point bounty is still available for this challenge, which I will create and award if anyone takes the top score without using built-in compression.


    Below is a 386x320 png representation of van Gogh's Starry Night.


    [Image: 386x320 png of van Gogh's Starry Night (ORIGINAL.png)]


    Your goal is to reproduce this image as closely as possible, in no more than 1024 bytes of code. For the purposes of this challenge, the closeness of images is measured by the squared differences in RGB pixel values, as explained below.


    This is a code-challenge: scores are calculated using the validation script below, and the lowest score wins.


    Your code must obey the following restrictions:



    • It must be a complete program

    • It must output an image in a format that can be read by the validation script below, running on my machine. The script uses Python's PIL library, which can load a wide variety of file formats, including png, jpg and bmp.

    • It must be completely self-contained, taking no input and loading no files (other than importing libraries, which is allowed)

    • If your language or library includes a function that outputs Starry Night, you are not allowed to use that function.

    • It should run deterministically, producing the same output every time.

    • The dimensions of the output image must be 386x320

    • For the avoidance of doubt: valid answers must use programming languages as per the usual PPCG rules. It must be a program that outputs an image, not just an image file.


    It is likely that some submissions will themselves be generated by code. If this is the case, please include in your answer the code that was used to produce your submission, and explain how it works. The above restrictions only apply to the 1kB image-generating program that you submit; they don't apply to any code used to generate it.


    Scoring


    To calculate your score, take your output image and the original above and convert the RGB pixel values to floating point numbers ranging from 0 to 1. The score of a pixel is (orig_r-img_r)^2 +(orig_g-img_g)^2 + (orig_b-img_b)^2, i.e. the squared distance in RGB space between the two images. The score of an image is the sum of the scores of its pixels.


    Below is a Python script that performs this calculation - in the case of any inconsistency or ambiguity, the definitive score is the one calculated by that script running on my machine.


    Note that the score is calculated from the output image, so using a lossy format will affect the score.


    The lower the score the better. The original Starry Night image would have a score of 0. In the astronomically unlikely event of a tie, the answer with the most votes will determine the winner.


    Bonus objectives


    Because the answers were dominated by solutions using built-in compression, I awarded a series of bounties to answers that use other techniques. The next one will be a bounty of 1000 points, to be awarded if and when an answer that does not use built-in compression takes the top place overall.


    The previously awarded bonus bounties were as follows:



    • A 100 point bounty was awarded to nneonneo's answer, for being the best-scoring answer (i.e. lowest score) that did not use built-in compression at the time. It scored 4852.87 points at the time it was awarded. Honourable mentions go to 2012rcampion, who made a valiant attempt to beat nneonneo using an approach based on Voronoi tessellation, scoring 5076 points, and to Sleafar, whose answer was in the lead until near the end, with 5052 points, using a similar method to nneonneo.



    • A 200 point bounty was awarded to Strawdog's entry. This was awarded for being an optimization-based strategy that took the lead among non-built-in-compression answers and held it for a week. It scored 4749.88 points using an impressively clever method.




    Scoring/validation script


    The following Python script should be placed in the same folder as the image above (which should be named ORIGINAL.png) and run using a command of the form python validate.py myImage.png.


    from PIL import Image
    import sys

    orig = Image.open("ORIGINAL.png")
    img = Image.open(sys.argv[1])

    if img.size != orig.size:
        print("NOT VALID: image dimensions do not match the original")
        exit()

    w, h = img.size

    orig = orig.convert("RGB")
    img = img.convert("RGB")

    orig_pix = orig.load()
    img_pix = img.load()

    score = 0

    for x in range(w):
        for y in range(h):
            orig_r, orig_g, orig_b = orig_pix[x,y]
            img_r, img_g, img_b = img_pix[x,y]
            score += (img_r-orig_r)**2
            score += (img_g-orig_g)**2
            score += (img_b-orig_b)**2

    print(score/255.**2)
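    For quick experiments, the same metric can be computed without the explicit pixel loop. Below is an unofficial vectorized sketch using NumPy (the script above remains the definitive scorer); it takes already-decoded RGB arrays, e.g. np.asarray(Image.open(path).convert("RGB")):

    ```python
    import numpy as np

    def score(orig, img):
        """Sum of squared RGB differences, with channel values scaled to [0, 1].

        Both arguments are uint8 arrays of shape (height, width, 3).
        """
        if img.shape != orig.shape:
            raise ValueError("image dimensions do not match the original")
        diff = img.astype(np.float64) - orig.astype(np.float64)
        return (diff ** 2).sum() / 255.0 ** 2
    ```

    This gives the same number as the loop above, just computed over whole arrays at once.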

    Technical note: Objective measures of image similarity are a tricky thing. In this case I've opted for one that's easy for anyone to implement, in full knowledge that much better measures exist.


    Leaderboard




    var QUESTION_ID=69930,OVERRIDE_USER=21034;function answersUrl(e){return"https://api.stackexchange.com/2.2/questions/"+QUESTION_ID+"/answers?page="+e+"&pagesize=100&order=desc&sort=creation&site=codegolf&filter="+ANSWER_FILTER}function commentUrl(e,s){return"https://api.stackexchange.com/2.2/answers/"+s.join(";")+"/comments?page="+e+"&pagesize=100&order=desc&sort=creation&site=codegolf&filter="+COMMENT_FILTER}function getAnswers(){jQuery.ajax({url:answersUrl(answer_page++),method:"get",dataType:"jsonp",crossDomain:!0,success:function(e){answers.push.apply(answers,e.items),answers_hash=[],answer_ids=[],e.items.forEach(function(e){e.comments=[];var s=+e.share_link.match(/\d+/);answer_ids.push(s),answers_hash[s]=e}),e.has_more||(more_answers=!1),comment_page=1,getComments()}})}function getComments(){jQuery.ajax({url:commentUrl(comment_page++,answer_ids),method:"get",dataType:"jsonp",crossDomain:!0,success:function(e){e.items.forEach(function(e){e.owner.user_id===OVERRIDE_USER&&answers_hash[e.post_id].comments.push(e)}),e.has_more?getComments():more_answers?getAnswers():process()}})}function getAuthorName(e){return e.owner.display_name}function process(){var e=[];answers.forEach(function(s){var r=s.body;s.comments.forEach(function(e){OVERRIDE_REG.test(e.body)&&(r="<h1>"+e.body.replace(OVERRIDE_REG,"")+"</h1>")});var a=r.match(SCORE_REG);a&&e.push({user:getAuthorName(s),size:+a[2],language:a[1],link:s.share_link})}),e.sort(function(e,s){var r=e.size,a=s.size;return r-a});var s={},r=1,a=null,n=1;e.forEach(function(e){e.size!=a&&(n=r),a=e.size,++r;var t=jQuery("#answer-template").html();t=t.replace("{{PLACE}}",n+".").replace("{{NAME}}",e.user).replace("{{LANGUAGE}}",e.language).replace("{{SIZE}}",e.size).replace("{{LINK}}",e.link),t=jQuery(t),jQuery("#answers").append(t);var o=e.language;/<a/.test(o)&&(o=jQuery(o).text()),s[o]=s[o]||{lang:e.language,user:e.user,size:e.size,link:e.link}});var t=[];for(var o in 
s)s.hasOwnProperty(o)&&t.push(s[o]);t.sort(function(e,s){return e.lang>s.lang?1:e.lang<s.lang?-1:0});for(var c=0;c<t.length;++c){var i=jQuery("#language-template").html(),o=t[c];i=i.replace("{{LANGUAGE}}",o.lang).replace("{{NAME}}",o.user).replace("{{SIZE}}",o.size).replace("{{LINK}}",o.link),i=jQuery(i),jQuery("#languages").append(i)}}var ANSWER_FILTER="!t)IWYnsLAZle2tQ3KqrVveCRJfxcRLe",COMMENT_FILTER="!)Q2B_A2kjfAiU78X(md6BoYk",answers=[],answers_hash,answer_ids,answer_page=1,more_answers=!0,comment_page;getAnswers();var SCORE_REG=/<h\d>\s*([^\n,]*[^\s,]),.*?(\d+(?:\.\d+))(?=[^\n\d<>]*(?:<(?:s>[^\n<>]*<\/s>|[^\n<>]+>)[^\n\d<>]*)*<\/h\d>)/,OVERRIDE_REG=/^Override\s*header:\s*/i;

    body{text-align:left!important}#answer-list,#language-list{padding:10px;width:400px;float:left}table thead{font-weight:700}table td{padding:5px}

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <link rel="stylesheet" type="text/css" href="//cdn.sstatic.net/codegolf/all.css?v=83c949450c8b"> <div id="answer-list"> <h2>Leaderboard</h2> <table class="answer-list"> <thead> <tr><td></td><td>Author</td><td>Language</td><td>Score</td></tr></thead> <tbody id="answers"> </tbody> </table> </div><div id="language-list"> <h2>Winners by Language</h2> <table class="language-list"> <thead> <tr><td>Language</td><td>User</td><td>Score</td></tr></thead> <tbody id="languages"> </tbody> </table> </div><table style="display: none"> <tbody id="answer-template"> <tr><td>{{PLACE}}</td><td>{{NAME}}</td><td>{{LANGUAGE}}</td><td>{{SIZE}}</td><td><a href="{{LINK}}">Link</a></td></tr></tbody> </table> <table style="display: none"> <tbody id="language-template"> <tr><td>{{LANGUAGE}}</td><td>{{NAME}}</td><td>{{SIZE}}</td><td><a href="{{LINK}}">Link</a></td></tr></tbody> </table>




    On Windows I had trouble installing the python requirements. A safer option is to use pillow (`pip uninstall PIL`, then `pip install pillow`) and change the first line to `from PIL import Image`.

    @mınxomaτ I've changed that line of code. (I have never consciously installed pillow, but both ways of importing Image work on my machine and appear to give the same answer.)

    Can the program read a file if the combined file size + 1 byte is less than 1kB? Alternatively, can the program read its own source code?

    @orlp no, no loading of files is permitted regardless of their size. But reading your own source is fine.

    This is essentially the same similarity metric as PSNR, correct?

    @tepples other than going in the opposite direction and not being logarithmic, yes :)
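    To make that relationship concrete (an editorial sketch, not part of the official scoring): the score is a sum of squared errors over 386x320 pixels and 3 channels, already scaled to the 0-1 range, so it converts to PSNR as follows.

    ```python
    import math

    def psnr_from_score(score, w=386, h=320):
        """Convert this challenge's score (sum of squared 0-1 channel errors)
        into PSNR in decibels: PSNR = 10 * log10(MAX**2 / MSE) with MAX = 1."""
        mse = score / (w * h * 3)
        return 10 * math.log10(1.0 / mse)
    ```

    For example, a score of 4852.87 works out to roughly 18.8 dB.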

    I don't think that you'll "feel like" awarding the bounty until my answer becomes outvoted, will you? :p

    @LegionMammal978 by "highest-scoring" I meant best score according to the rules of this challenge, not votes. (Amended.) I voted for your answer!

    Would an SVG or HTML file be considered a 'program' ?

    @dronus no, not unless (a) those formats are programming languages according to the PPCG rules, and (b) you can run them to produce an image in a raster format that can be read by PIL.

    Given that the top four answers to this question seem to demonstrate an inverse relationship between their calculated score and their actual resemblance to the original painting, I think your scoring system may be broken.

    @Ajedi32 oh, I expected that to happen! I tried to post a version where the scoring system was based on human judgement (via voting) but it was deemed to be an "art contest" and closed. Doing it by least-squares is a fun challenge, but if we wanted the results to actually look like the image, there isn't really any other way than to rely on human judgement.

    I'm surprised that not a single answer has tried working with greyscale output yet. Averaging the channels in each pixel gives a score of something like 2800, and having only to compress a third of the data would introduce less error on top of that.

    @MartinBüttner you could probably do even better by weighting a greyscale image by the average bluish colour of the image. I hadn't thought of this.

    @Nathaniel hasn't been said enough, but this question is so awesome! ;-)

    @Nathaniel Possibly, but I'm not sure it's worth it because then you probably also need more code to output a 3-channel image.

    @Nathaniel following your comment here, if you post a future challenge that rules out built-in compression, it would be interesting to see a score calculation that measures contrast in addition to colour similarity.

    @trichoplax yeah, I've been thinking about how best to do that. It's actually kind of hard to come up with a good measure that's both easy for people to understand and not too expensive to compute.

    @Nathaniel I commented here because I couldn't find you in [chat] but there are lots of helpful people there. If you can, pop in for a longer discussion :)

    A metric that measures similarity in complexity might be more interesting. e.g. x264's psychovisual optimizations and adaptive quantization try to preserve energy in the source, because "detail" looks better than blurring, even if it's the wrong detail. SSIM is a widely-used visual quality metric. It's much more complex to compute than sum-of-squared-errors (PSNR). x264's tuning for max-SSIM (rather than the default max human visual quality) is `--aq-mode 2 --no-psy`, so SSIM still doesn't measure what psy optimizes for.

    x265's docs are good, and summarize what psy is doing there (similar to x264): psy = preserve the energy of the source image in the encoded image at the expense of compression efficiency. psy-rdoq = favoring higher energy in the reconstructed image (regardless of source energy). With these options at 0, the encoder mostly just blurs parts of frames that are hard to encode. (It's a video codec, so when they say "difficult motion", they mean big residuals that have lots of energy after motion compensation.)

    Is the term "objectively" defined anywhere here? It seems like an important part of the question.

    @trichoplax I understand how the scoring is "objective", but I read this in the title as describing the program's implementation, not the scoring methodology. Also cf. the comment to ndenarodev's answer: "for actually drawing objects, thus filling 'objectively' term in challenge", with 6 upvotes. It seems there's some agreement that his program is "objective", but I don't see an interpretation of the word that applies.

    @bmm6o trichoplax has it right, it refers to the scoring method. I had recently posted a similar challenge (unfortunately closed) that used human judgement for the scoring, and I was afraid this would be seen as a duplicate if I didn't make the difference clear in the title. I guess I could remove the word "objectively" the next time I have a reason to edit the question.

    @bmm6o Ah I see - the comment on that answer does make it sound like objectively means "object based". I think the upvoters on that comment may simply be showing that they like the approach, rather than agreeing with the interpretation of the word. We'll never know though... :)

  • Pyth (no built-in compression), score 4695.07 4656.03 4444.82



    Pyth’s only image-related functionality is a builtin to write a matrix of RGB triples as an image file. So the crazy idea here is to train a small deep neural network on the (x, y) ↦ (r, g, b) function representing the image, and run it on the coordinates of each pixel.



    The plan




    1. Write a custom backpropagation loop in C++.

    2. Curse at how slow it is.

    3. Learn Tensorflow.

    4. Build a new desktop with a sick GPU using Black Friday deals.

    5. Scour the literature for ways to compress neural networks, and do that.

    6. Scour the literature for ways to avoid overfitting neural networks, and do the opposite of that.



    The current network is built from 45 sigmoid neurons, with each neuron connected to the x, y inputs and to every previous neuron, and the last three neurons interpreted as r, g, b. It’s trained using the Adam algorithm without batching. The parameters weighting the 1125 connections are quantized to a range of 93 possible values (except the constant terms, which have 93² possible values) using a variant of stochastic quantization, the primary variation being that we set the gradient for quantized parameters to zero.
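    A quick sanity check on those numbers (the slot layout is inferred from the idx function in the training code further down): with 3 input slots (a constant, x and y) followed by 45 sigmoid neurons, where the neuron in slot n is weighted over every earlier slot, the connection count comes out to exactly 1125.

    ```python
    # 3 input slots (constant, x, y) + 45 sigmoid neurons = 48 slots.
    # The neuron in slot n (n = 3 ... 47) has one weight per earlier slot,
    # i.e. n incoming connections, including its constant (bias) term.
    connections = sum(n for n in range(3, 48))
    print(connections)  # 1125, matching the parameter count above
    ```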



    The result



    [Image: the program's output]



    The code



    1023 bytes, encoded with xxd (decode with xxd -r). I used the 2016-01-22 version of Pyth that was current when this challenge was released. You can run the code directly in Pyth, but Pyth in PyPy3 (pypy3 pyth starry.pyth) runs it nine times faster, in about 3 minutes. The output image is written to o.png.



    00000000: 4b6a 4322 05d4 7bb1 06f8 6149 da66 28e3  KjC"..{...aI.f(.
    00000010: 8d17 92de a833 9b70 f937 9fc6 a74e 544d .....3.p.7...NTM
    00000020: 1388 e4e5 1d7e 9432 fe38 1313 3c34 0c54 .....~.2.8..<4.T
    00000030: 89fe 553b 83a3 84bb 08c8 09fe 72be 3597 ..U;........r.5.
    00000040: b799 34f8 8809 4868 feb8 acde 2e69 34e6 ..4...Hh.....i4.
    00000050: 1c1a c49a 27f0 f06a 3b27 0564 178a 1718 ....'..j;'.d....
    00000060: 1440 e658 e06a c46d aa81 ac3f c4b7 8262 [email protected]?...b
    00000070: 398a 39e3 c9b7 6f71 e2ab 37e0 7566 9997 9.9...oq..7.uf..
    00000080: 54eb eb95 0076 0adf 103c f34c 0b4e e528 T....v...<.L.N.(
    00000090: a2df 6b4a 7a02 011a 10a9 2cf0 2edc 9f6f ..kJz.....,....o
    000000a0: 33f3 5c96 9e83 fadb a2fa 80fc 5179 3906 3.\.........Qy9.
    000000b0: 9596 4960 8997 7225 edb1 9db5 435e fdd8 ..I`..r%....C^..
    000000c0: 08a6 112f 32de c1a5 3db8 160f b729 649a .../2...=....)d.
    000000d0: 51fa 08e8 dcfa 11e0 b763 61e6 02b3 5dbb Q........ca...].
    000000e0: 6e64 be69 3939 b5b2 d196 5b85 7991 bda5 nd.i99....[.y...
    000000f0: 087a f3c0 6b76 b1d0 bb29 f7a4 29a3 e21a .z..kv...)..)...
    00000100: 3b1b 97ae 1d1b 1e0f f3c7 9759 2458 c2db ;..........Y$X..
    00000110: 386f 5fbb a166 9f27 2910 a1b5 cfcc d8db 8o_..f.').......
    00000120: afaf bdb4 573d efb1 399b e160 6acf e14b ....W=..9..`j..K
    00000130: 4c6b 957a 245a 6f87 63c7 737d 6218 6ab2 Lk.z$Zo.c.s}b.j.
    00000140: e388 a0b3 2007 1ddf b55c 7266 4333 f3a2 .... ....\rfC3..
    00000150: d58f d80b a3a6 c6c1 d474 58f3 274b 6d32 .........tX.'Km2
    00000160: 9d72 b674 7cc4 fdf6 6b86 fb45 1219 cc5c .r.t|...k..E...\
    00000170: 7244 396d 1411 d734 a796 ff54 cf1f 119d rD9m...4...T....
    00000180: 91af 5eab 9aad 4300 1dae d42e 13f8 62a1 ..^...C.......b.
    00000190: a894 ab0b 9cb1 5ee2 bb63 1fff 3721 2328 ......^..c..7!#(
    000001a0: 7609 34f5 fcfe f486 46e9 dfa8 9885 4dac v.4.....F.....M.
    000001b0: f464 3666 e8b9 cd82 1159 8434 95e8 5901 .d6f.....Y.4..Y.
    000001c0: f0f5 426c ef53 6c7e ad28 60f6 8dd8 edaa ..Bl.Sl~.(`.....
    000001d0: 8784 a966 81b6 dc3a e0ea d5bf 7f15 683e ...f...:......h>
    000001e0: 93f2 23ae 0845 c218 6bdc f47c 08e8 41c2 ..#..E..k..|..A.
    000001f0: 950e f309 d1de 0b64 5868 924e 933e 7ab8 .......dXh.N.>z.
    00000200: dab7 8efb b53a 5413 c64b 48e6 fc4d 26fe .....:T..KH..M&.
    00000210: 594a 7d6b 2dd0 914e 6947 afa7 614d b605 YJ}k-..NiG..aM..
    00000220: 8737 554e 31bc b21c 3673 76bf fb98 94f8 .7UN1...6sv.....
    00000230: 1a7d 0030 3035 2ce6 c302 f6c2 5434 5f74 .}.005,.....T4_t
    00000240: c692 349a a33e b327 425c 22e8 8735 37e1 ..4..>.'B\"..57.
    00000250: 942a 2170 ef10 ff42 b629 e572 cd0f ca4f .*!p...B.).r...O
    00000260: 5d52 247d 3e62 6d9a d71a 8b01 4826 d54b ]R$}>bm.....H&.K
    00000270: f26f fe8e d33d efb5 30a8 54fb d50a 8f44 .o...=..0.T....D
    00000280: a3ac 170a b9a0 e436 50d5 0589 6fda 674a .......6P...o.gJ
    00000290: 26fb 5cf6 27ef 714e fe74 64fa d487 afea &.\.'.qN.td.....
    000002a0: 09f7 e1f1 21b6 38eb 54cd c736 2afa d031 ....!.8.T..6*..1
    000002b0: 853c 8890 8cc0 7fab 5f15 91d5 de6e 460f .<......_....nF.
    000002c0: 4b95 6a4d 02e4 7824 1bbe ae36 5e6c 0acd K.jM..x$...6^l..
    000002d0: 0603 b86c f9fd a299 480f 4123 627e 951f ...l....H.A#b~..
    000002e0: a678 3510 912c 26a6 2efc f943 af96 53cd .x5..,&....C..S.
    000002f0: 3f6c 435c cbae 832f 316c e90e 01e7 8fd6 ?lC\.../1l......
    00000300: 3e6d d7b4 fffb cd4a 69c7 5f23 2fe7 bf52 >m.....Ji._#/..R
    00000310: 3632 3990 17ed 045a b543 8b79 8231 bc9b 629....Z.C.y.1..
    00000320: 4452 0f10 b342 3e41 6e70 187c 9cb2 7eb5 DR...B>Anp.|..~.
    00000330: cdff 5c22 9e34 618f b372 8acf 4172 a220 ..\".4a..r..Ar.
    00000340: 0136 3eff 2702 dc5d b946 076d e5fd 6045 .6>.'..].F.m..`E
    00000350: 8465 661a 1c6e b6c8 595f 6091 daf2 103b .ef..n..Y_`....;
    00000360: 23ab 343a 2e47 95cf 4218 7bf5 8a46 0a69 #.4:.G..B.{..F.i
    00000370: dabb 4b8d 7f9b b0c1 23b1 c917 839c 358c ..K.....#.....5.
    00000380: b33c de51 e41c e84d 12bf 8379 f4c5 65fa .<.Q...M...y..e.
    00000390: 0b65 7fe7 e1a0 fb0e 30f4 a7d2 b323 3400 .e......0....#4.
    000003a0: 15e8 8a48 5d42 9a70 3979 7bba abf5 4b80 ...H]B.p9y{...K.
    000003b0: b239 4ceb d301 89f8 9f4d 5ce6 8caa 2a74 .9L......M\...*t
    000003c0: ca1b 9d3f f934 0622 3933 2e77 6d6d 2b4a ...?.4."93.wmm+J
    000003d0: 4b73 4d3e 332e 574a 615a 6332 3536 685e KsM>3.WJaZc256h^
    000003e0: 3463 732a 4c2d 3436 2e29 4a5a 3138 3739 4cs*L-46.)JZ1879
    000003f0: 5b32 3739 6b33 6429 3338 3620 3332 30 [279k3d)386 320


    How it works



    KjC"…"93
      C"…"     convert the long binary string to an integer in base 256
     j      93 list its base-93 digits
    K          assign to K
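    In Python terms, that decoding step looks like this (a sketch; the resulting digits drive the network weights):

    ```python
    def base93_digits(data: bytes):
        """Equivalent of Pyth's jC"..."93: treat the bytes as one big
        base-256 integer, then list its base-93 digits, most significant
        digit first."""
        n = int.from_bytes(data, "big")   # C"...": bytes -> integer
        digits = []
        while n:
            n, d = divmod(n, 93)
            digits.append(d)
        return digits[::-1] or [0]        # j ... 93: base-93 digit list
    ```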

    .wmm+JKsM>3.WJaZc256h^4cs*L-46.)JZ1879[279k3d)386 320
      m 320          map for d in [0, …, 319]:
        m 386          map for k in [0, …, 385]:
          JK             copy K to J
          [279k3d)       initialize value to [3*93, k, 3, d]
          .WJ            while J is nonempty, replace value with
            *L Z           map over value, multiplying by
              .)J            pop back of J
              -46            subtract from 46
            s              sum
            c 1879         divide by 1879
            ^4             exponentiate with base 4
            h              add 1
            c256           256 divided by that
            aZ             append to value
          >3             last three elements of the final value
          sM             floor to integers
    .w               write that matrix of RGB triples as image o.png
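    Transcribed into Python, the per-pixel loop above is roughly as follows (a sketch, not the exact program; K is the base-93 digit list, and each loop iteration consumes one weight digit per current element of value):

    ```python
    def pixel(K, k, d):
        """Evaluate one pixel (column k, row d) of the network, following
        the Pyth walkthrough above."""
        J = list(K)                          # JK: copy K to J
        value = [279, k, 3, d]               # [279k3d): 3*93, x, bias, y
        while J:                             # .WJ: while J is nonempty
            s = sum((46 - J.pop()) * v for v in value)  # *L-46.)JZ, then s
            value.append(256 / (1 + 4 ** (s / 1879)))   # c256h^4c...1879, aZ
        return [int(v) for v in value[-3:]]  # >3, sM: floor last three -> r, g, b
    ```

    Each appended element is a scaled sigmoid of a weighted sum of everything computed so far, which is what makes every neuron "connected to every previous neuron".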


    Training



    During my final training run, I used a much slower quantization schedule and did some interactive fiddling with that and the learning rate, but the code I used was roughly as follows.



    from __future__ import division, print_function
    import sys
    import numpy as np
    import tensorflow as tf

    NEURONS, SCALE_BASE, SCALE_DIV, BASE, MID = 48, 8, 3364, 111, 55

    def idx(n):
        return n * (n - 1) // 2 - 3

    WEIGHTS = idx(NEURONS)
    SCALE = SCALE_DIV / np.log(SCALE_BASE)
    W_MIN, W_MAX = -MID, BASE - 1 - MID

    sess = tf.Session()

    with open('ORIGINAL.png', 'rb') as f:
        img = sess.run(tf.image.decode_image(f.read(), channels=3))
    y_grid, x_grid = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    x = tf.constant(x_grid.reshape([-1]).astype(np.float32))
    y = tf.constant(y_grid.reshape([-1]).astype(np.float32))
    color_ = tf.constant(img.reshape([-1, 3]).astype(np.float32))

    w_real = tf.Variable(
        np.random.uniform(-16, 16, [WEIGHTS]).astype(np.float32),
        constraint=lambda w: tf.clip_by_value(w, W_MIN, W_MAX))

    quantization = tf.placeholder(tf.float32, shape=[])
    w_int = tf.round(w_real)
    qrate = 1 / (tf.abs(w_real - w_int) + 1e-6)
    qscale = 0
    for _ in range(16):
        v = tf.exp(-qscale * qrate)
        qscale -= ((1 - quantization) * WEIGHTS - tf.reduce_sum(v)) / \
            tf.tensordot(qrate, v, 1)
    unquantized = tf.distributions.Bernoulli(
        probs=tf.exp(-qscale * qrate), dtype=tf.bool).sample()
    num_unquantized = tf.reduce_sum(tf.cast(unquantized, tf.int64))
    w = tf.where(unquantized, w_real, w_int)

    a = tf.stack([tf.ones_like(x) * 256, x, y], 1)
    for n in range(3, NEURONS):
        a = tf.concat([a, 256 * tf.sigmoid(
            tf.einsum('in,n->i', a, w[idx(n):idx(n + 1)]) / SCALE)[:, None]], 1)
    color = a[:, -3:]
    err = tf.reduce_sum(tf.square((color - 0.5 - color_) / 255))

    train_step = tf.train.AdamOptimizer(0.01).minimize(err, var_list=[w_real])

    sess.run(tf.global_variables_initializer())

    count = 0
    quantization_val = 0
    best_err = float("inf")

    while True:
        num_unquantized_val, err_val, w_val, _ = sess.run(
            [num_unquantized, err, w, train_step],
            {quantization: quantization_val})
        if num_unquantized_val == 0 and err_val < best_err:
            print(end='\r\x1b[K', file=sys.stderr)
            sys.stderr.flush()
            print(
                'weights', list(w_val.astype(np.int64)),
                'count', count, 'err', err_val)
            best_err = err_val
        count += 1
        print(
            '\r\x1b[Kcount', count, 'err', err_val,
            'unquantized', num_unquantized_val, end='', file=sys.stderr)
        sys.stderr.flush()
        quantization_val = (1 - 1e-4) * quantization_val + 1e-4


    Visualization



    This picture shows the activations of all 45 neurons as a function of the x, y coordinates. Click to enlarge.



    [Image: activations of all 45 neurons]


    Have you considered trying to add a convolutional layer or two to the network? I think it'd be better at getting the noisier aesthetic. Alternatively, you could try additional hand-crafted features like [x^2,y^2] or [x%5,y%5] to get those lined patterns.

    @StevenH. Maybe? I haven’t developed much intuition for convolutional networks, but my assumption is that, although something along those lines could probably improve the result visually, to improve the pixel-difference error metric used in this challenge you’d need to encode a lot more information to line up the noisy patterns fairly precisely with the original.

    Hm... Yeah, I'd still recommend considering adding cyclic features like `[x%5,y%5]`, since they're a) easy to represent in Pyth code and b) hard to attain using a vanilla neural network.

    Oh my god this is amazing! I was hoping someone would use a deep neural network approach, and you really went the extra mile to do it. This is also, without a doubt, the coolest looking of all the answers. I'm giving you the green check mark, at least temporarily, to draw attention to your answer.

    What are the updates? Are you just continuing to train the same network, or are you tweaking the code? (I'm just curious.)

    @Nathaniel For now I have been training the same unquantized network (which currently scores about 4350), but working on tweaking the quantization strategy. The bulk of the improvement in this update came from allocating two digits instead of one to the constant term of each neuron.

    How long did the training take for this (disregarding quantization)?

    I still love this answer, but I'm removing the green check on the off chance it encourages further attempts at the challenge

License under CC-BY-SA with attribution


Content dated before 7/24/2021 11:53 AM