### Draw an Image as a Voronoi Map

• Credits to Calvin's Hobbies for nudging my challenge idea in the right direction.

Consider a set of points in the plane, which we will call sites, and associate a colour with each site. Now you can paint the entire plane by colouring each point with the colour of the closest site. This is called a Voronoi map (or Voronoi diagram). In principle, Voronoi maps can be defined for any distance metric, but we'll simply use the usual Euclidean distance `r = √(x² + y²)`. (Note: You do not necessarily have to know how to compute and render one of these to compete in this challenge.)
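As a quick illustrative sketch (not required for the challenge), the nearest-site colouring can be computed with a k-d tree; the sites, colours, and canvas size here are made up:

```python
import numpy as np
from scipy.spatial import cKDTree

# Three hypothetical sites (x, y) with one RGB colour each (values in [0, 1]).
sites = np.array([[10, 10], [50, 30], [25, 55]], dtype=float)
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)

# Colour every pixel of a 60x60 canvas with the colour of its nearest site
# (Euclidean distance) -- exactly the Voronoi colouring described above.
h = w = 60
ys, xs = np.mgrid[0:h, 0:w]
pixels = np.c_[xs.ravel(), ys.ravel()]
nearest = cKDTree(sites).query(pixels)[1]  # index of the closest site per pixel
image = colors[nearest].reshape(h, w, 3)
```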

Here is an example with 100 sites:

If you look at any cell, then all points within that cell are closer to the corresponding site than to any other site.

Your task is to approximate a given image with such a Voronoi map. You're given the image in any convenient raster graphics format, as well as an integer N. You should then produce up to N sites, and a colour for each site, such that the Voronoi map based on these sites resembles the input image as closely as possible.

You can use the Stack Snippet at the bottom of this challenge to render a Voronoi map from your output, or you can render it yourself if you prefer.

You may use built-in or third-party functions to compute a Voronoi map from a set of sites (if you need to).

This is a popularity contest, so the answer with the most net votes wins. Voters are encouraged to judge answers by

• how well the original images and their colours are approximated.

• how well the algorithm works on different kinds of images.

• how well the algorithm works for small N.

• whether the algorithm adaptively clusters points in regions of the image that require more detail.

## Test Images

Here are a few images to test your algorithm on (some of our usual suspects, some new ones). Click the pictures for larger versions.

The beach in the first row was drawn by Olivia Bell, and included with her permission.

If you want an extra challenge, try Yoshi with a white background and get his belly line right.

You can find all of these test images in this imgur gallery where you can download all of them as a zip file. The album also contains a random Voronoi diagram as another test. For reference, here is the data that generated it.

Please include example diagrams for a variety of different images and N, e.g. 100, 300, 1000, 3000 (as well as pastebins to some of the corresponding cell specifications). You may use or omit black edges between the cells as you see fit (this may look better on some images than on others). Do not include the sites though (except in a separate example maybe if you want to explain how your site placement works, of course).

If you want to show a large number of results, you can create a gallery over on imgur.com, to keep the size of the answers reasonable. Alternatively, put thumbnails in your post and make them links to larger images, like I did in my reference answer. You can get the small thumbnails by appending `s` to the file name in the imgur.com link (e.g. `I3XrT.png` -> `I3XrTs.png`). Also, feel free to use other test images, if you find something nice.

## Renderer

Paste your output into the following Stack Snippet to render your results. The exact list format is irrelevant, as long as each cell is specified by 5 floating point numbers in the order `x y r g b`, where `x` and `y` are the coordinates of the cell's site, and `r g b` are the red, green and blue colour channels in the range `0 ≤ r, g, b ≤ 1`.
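For instance, a hypothetical list of cells could be written out in this format like so (the specific values are made up):

```python
# Write a few hypothetical cells in the renderer's "x y r g b" format,
# with colour channels already normalised to [0, 1].
cells = [
    (120.5, 80.0, 1.0, 0.0, 0.0),    # a red cell
    (300.0, 150.25, 0.2, 0.6, 0.9),  # a blue-ish cell
]
lines = [" ".join("{:.3f}".format(v) for v in cell) for cell in cells]
output = "\n".join(lines)
print(output)
```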

The snippet provides options to specify the line width of the cell edges, and whether or not the cell sites should be shown (the latter mostly for debugging purposes). Note that the output is only re-rendered when the cell specifications change, so if you modify any of the other options, trigger a re-render by adding a space to the cell list or similar.

```js
function draw() {
  document.getElementById("output").innerHTML = svg
}

function drawMap() {
  var cells = document.getElementById("cells").value;
  var showSites = document.getElementById("showSites").checked;
  var showCells = document.getElementById("showCells").checked;
  var lineWidth = parseFloat(document.getElementById("linewidth").value);
  var width = parseInt(document.getElementById("width").value);
  var height = parseInt(document.getElementById("height").value);

  var e = prefix.replace('{{WIDTH}}', width)
                .replace('{{HEIGHT}}', height);

  cells = cells.match(/(?:\d*\.\d+|\d+\.\d*|\d+)(?:e-?\d+)?/ig);

  var sitesDom = '';
  var sites = []
  for (i = 0; i < cells.length; i+=5)
  {
    x = parseFloat(cells[i]);
    y = parseFloat(cells[i+1]);
    r = Math.floor(parseFloat(cells[i+2])*255);
    g = Math.floor(parseFloat(cells[i+3])*255);
    b = Math.floor(parseFloat(cells[i+4])*255);

    sitesDom += '<circle cx="' + x + '" cy="' + y + '" r="1" fill="black"/>';
    sites.push({ x: x, y: y, r: r, g: g, b: b });
  }

  if (showCells)
  {
    var bbox = { xl: 0, xr: width, yt: 0, yb: height };
    var voronoi = new Voronoi();
    var diagram = voronoi.compute(sites, bbox);

    for (i = 0; i < diagram.cells.length; ++i)
    {
      var cell = diagram.cells[i];
      var site = cell.site;
      var coords = '';
      for (var j = 0; j < cell.halfedges.length; ++j)
      {
        var vertex = cell.halfedges[j].getStartpoint();
        coords += ' ' + vertex.x + ',' + vertex.y;
      }
      var color = 'rgb('+site.r+','+site.g+','+site.b+')';
      e += '<polygon fill="'+color+'" points="' + coords + '" stroke="' + (lineWidth ? 'black' : color) + '" stroke-width="'+Math.max(0.6,lineWidth)+'"/>';
    }
  }

  if (showSites)
    e += sitesDom;

  e += suffix;
  e += '<br> Using <b>'+sites.length+'</b> cells.';

  svg = e;
  draw();
}

var prefix = '<svg width="{{WIDTH}}" height="{{HEIGHT}}">',
    suffix = "</svg>",
    scale = 0.95,
    offset = 225,
    radius = scale*offset,
    svg = "";
```

```css
svg {
  position: relative;
}
```

```html
<script src="http://www.raymondhill.net/voronoi/rhill-voronoi-core.js"></script>
Line width: <input id="linewidth" type="text" size="1" value="0" /><br>
Show sites: <input id="showSites" type="checkbox" /><br>
Show cells: <input id="showCells" type="checkbox" checked="checked" /><br><br>
Dimensions: <input id="width" type="text" size="1" value="400" /> x <input id="height" type="text" size="1" value="300" /><br>
Paste cell specifications here<br>
<textarea id="cells" cols="60" rows="10" oninput='drawMap()'></textarea><br><br>
<div id="output"></div>
```

Massive credits to Raymond Hill for writing this really nice JS Voronoi library.

### Related Challenges

How do we verify we have done it the best way?

@frogeyedpeas By looking at the votes you get. ;) This is a popularity contest. There is not necessarily *the* best way to do it. The idea is that you try to do it as well as you can, and the votes will reflect whether people agree that you've done a good job. There is a certain amount of subjectivity in these, admittedly. Have a look at the related challenges I linked, or at this one. You'll see that there is usually a wide variety of approaches, but the voting system helps the better solutions bubble up to the top and decides on a winner.

Olivia approves of the approximations of her beach submitted thus far.

@AlexA. Devon approves of *some* of the approximations of his face submitted thus far. He's not a big fan of any of the n=100 versions ;)

@Geobits: He'll understand when he's older.

Here's a page about a centroidal voronoi-based stippling technique. A good source of inspiration (the related master thesis has a nice discussion of possible improvements to the algorithm).
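For the curious, a single (unweighted) Lloyd relaxation step, the core of centroidal Voronoi techniques like the stippling method linked above, can be sketched as follows; the grid size and sites are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def lloyd_step(sites, h, w):
    """One unweighted Lloyd relaxation step on a pixel grid: assign each
    pixel to its nearest site, then move each site to the centroid of its
    cell. Iterating this converges towards a centroidal Voronoi
    tessellation."""
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.c_[xs.ravel(), ys.ravel()].astype(float)
    nearest = cKDTree(sites).query(pixels)[1]
    new_sites = sites.copy()
    for i in range(len(sites)):
        members = pixels[nearest == i]
        if len(members):
            new_sites[i] = members.mean(axis=0)
    return new_sites
```

The weighted variant used for stippling replaces the plain centroid with a centroid weighted by image darkness.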

Did I make a comment about the use of the term "distance measure" and how it should be "metric" or am I going mental? I can't find the comment!

@AlecTeal You did, and I fixed it, and I replied... and I saw you were online since my reply, so I assumed you'd read it and deleted both comments as they were obsolete. ;)

I had linked to the definition because, while I have a maths background, it might be the first time someone has seen a formal notion of "distance" and its "metrics" - that's why I included the link. You only need 3 properties to generate this map. I am also quite interested in seeing what the pictures look like with different metrics. http://www.maths.kisogo.com/index.php?title=Metric_space is the link BTW

Is this just a puzzle or do you also think this could be an approach for a new type of compressed images? To me it looks like storing the coordinates and colors of the sites would need much less information than storing the image pixel by pixel.

@ThomasWeller I just thought it's a fun challenge that would yield varied and interesting results. But the same thing was discussed when I posted it on Hacker News, so maybe you can weigh in there.


# Python + scipy + scikit-image, weighted Poisson disc sampling

My solution is rather complex. I do some preprocessing on the image to remove noise and get a map of how 'interesting' each point is (using a combination of local entropy and edge detection):

Then I choose sampling points using Poisson disc sampling with a twist: the radius of each disc is determined by the weight we computed earlier.
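The conversion from the weight map to per-pixel minimum disc radii can be sketched as follows (the constants mirror the full program further down; the flat weight map is just a stand-in for the real entropy/edge weights):

```python
import numpy as np

# Turn a per-pixel "interest" weight map into the per-pixel minimum disc
# radius used by the weighted Poisson disc sampler.
# Higher weight -> smaller radius -> denser sampling in that region.
h, w, n = 300, 400, 1000
weight = np.ones((h, w))   # stand-in for the combined entropy/edge weight map
weight /= np.mean(weight)  # normalise so the average weight is 1

avg_dist = np.sqrt(w * h / (n * np.pi * 0.5) ** 1.05)
min_dist, max_dist = avg_dist / 4, min(h, w) / 4
dists = np.clip(avg_dist / weight, min_dist, max_dist)
```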

Once I have the sampling points, I divide the image up into Voronoi segments and assign to each segment the L*a*b* average of the colour values inside it.
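Averaging in L*a*b* instead of RGB can be sketched like this (a two-pixel "segment" with made-up colours; Lab averages are perceptually more sensible than RGB averages):

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

# A hypothetical segment containing one red and one blue pixel.
segment_rgb = np.array([[[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]])

# Average in Lab space, then convert the mean back to RGB for display.
mean_lab = rgb2lab(segment_rgb).reshape(-1, 3).mean(axis=0)
mean_rgb = lab2rgb(mean_lab[None, None, :])[0, 0]
```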

I use a lot of heuristics, and I also need a bit of math to make sure the number of sample points is close to `N`. I hit `N` exactly by overshooting slightly and then dropping some points with a heuristic.
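A simplified sketch of the overshoot-then-drop idea: repeatedly merge the closest pair of points into their average until exactly `n` remain (names are illustrative; the full program below does the same on points that also carry colour data):

```python
import numpy as np
from scipy.spatial import cKDTree

def downsample(points, n):
    """Reduce len(points) to n by repeatedly merging the closest pair of
    points into their average (a simplified version of the heuristic)."""
    points = [np.asarray(p, dtype=float) for p in points]
    while len(points) > n:
        tree = cKDTree(points)
        dists, neighbours = tree.query(points, 2)  # [self, nearest other]
        worst = int(np.argmin(dists[:, 1]))        # member of tightest pair
        other = int(neighbours[worst][1])
        points[other] = (points[other] + points[worst]) / 2
        points.pop(worst)
    return points
```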

In terms of runtime, this filter isn't cheap, but no image below took more than 5 seconds to make.

```python
import math
import random
import collections
import os
import sys
import functools
import operator as op
import numpy as np
import warnings

from scipy.spatial import cKDTree as KDTree
from skimage.filters.rank import entropy
from skimage.morphology import disk, dilation
from skimage.util import img_as_ubyte
from skimage.io import imread, imsave
from skimage.color import rgb2gray, rgb2lab, lab2rgb
from skimage.filters import sobel, gaussian_filter
from skimage.restoration import denoise_bilateral
from skimage.transform import downscale_local_mean


# Returns a random real number in half-open range [0, x).
def rand(x):
    r = x
    while r == x:
        r = random.uniform(0, x)
    return r


def poisson_disc(img, n, k=30):
    h, w = img.shape[:2]

    nimg = denoise_bilateral(img, sigma_range=0.15, sigma_spatial=15)
    img_gray = rgb2gray(nimg)
    img_lab = rgb2lab(nimg)

    entropy_weight = 2**(entropy(img_as_ubyte(img_gray), disk(15)))
    entropy_weight /= np.amax(entropy_weight)
    entropy_weight = gaussian_filter(dilation(entropy_weight, disk(15)), 5)

    color = [sobel(img_lab[:, :, channel])**2 for channel in range(1, 3)]
    edge_weight = functools.reduce(op.add, color) ** (1/2) / 75
    edge_weight = dilation(edge_weight, disk(5))

    weight = (0.3*entropy_weight + 0.7*edge_weight)
    weight /= np.mean(weight)

    max_dist = min(h, w) / 4
    avg_dist = math.sqrt(w * h / (n * math.pi * 0.5) ** (1.05))
    min_dist = avg_dist / 4

    dists = np.clip(avg_dist / weight, min_dist, max_dist)

    def gen_rand_point_around(point):
        radius = random.uniform(dists[point], max_dist)
        angle = rand(2 * math.pi)
        offset = np.array([radius * math.sin(angle), radius * math.cos(angle)])
        return tuple(point + offset)

    def has_neighbours(point):
        point_dist = dists[point]
        distances, idxs = tree.query(point,
                                     len(sample_points) + 1,
                                     distance_upper_bound=max_dist)

        if len(distances) == 0:
            return True

        for dist, idx in zip(distances, idxs):
            if np.isinf(dist):
                break

            if dist < point_dist and dist < dists[tuple(tree.data[idx])]:
                return True

        return False

    # Generate first point randomly.
    first_point = (rand(h), rand(w))
    to_process = [first_point]
    sample_points = [first_point]
    tree = KDTree(sample_points)

    while to_process:
        # Pop a random point.
        point = to_process.pop(random.randrange(len(to_process)))

        for _ in range(k):
            new_point = gen_rand_point_around(point)

            if (0 <= new_point[0] < h and 0 <= new_point[1] < w
                    and not has_neighbours(new_point)):
                to_process.append(new_point)
                sample_points.append(new_point)
                tree = KDTree(sample_points)
                if len(sample_points) % 1000 == 0:
                    print("Generated {} points.".format(len(sample_points)))

    print("Generated {} points.".format(len(sample_points)))

    return sample_points


def sample_colors(img, sample_points, n):
    h, w = img.shape[:2]

    print("Sampling colors...")
    tree = KDTree(np.array(sample_points))
    color_samples = collections.defaultdict(list)
    img_lab = rgb2lab(img)
    xx, yy = np.meshgrid(np.arange(h), np.arange(w))
    pixel_coords = np.c_[xx.ravel(), yy.ravel()]
    nearest = tree.query(pixel_coords)[1]

    i = 0
    for pixel_coord in pixel_coords:
        color_samples[tuple(tree.data[nearest[i]])].append(
            img_lab[tuple(pixel_coord)])
        i += 1

    print("Computing color means...")
    samples = []
    for point, colors in color_samples.items():
        avg_color = np.sum(colors, axis=0) / len(colors)
        samples.append(np.append(point, avg_color))

    if len(samples) > n:
        print("Downsampling {} to {} points...".format(len(samples), n))

    while len(samples) > n:
        tree = KDTree(np.array(samples))
        dists, neighbours = tree.query(np.array(samples), 2)
        dists = dists[:, 1]
        worst_idx = min(range(len(samples)), key=lambda i: dists[i])
        samples[neighbours[worst_idx][1]] += samples[neighbours[worst_idx][0]]
        samples[neighbours[worst_idx][1]] /= 2
        samples.pop(neighbours[worst_idx][0])

    color_samples = []
    for sample in samples:
        color = lab2rgb([[sample[2:]]])[0][0]
        color_samples.append(tuple(sample[:2][::-1]) + tuple(color))

    return color_samples


def render(img, color_samples):
    print("Rendering...")
    h, w = [2*x for x in img.shape[:2]]
    xx, yy = np.meshgrid(np.arange(h), np.arange(w))
    pixel_coords = np.c_[xx.ravel(), yy.ravel()]

    colors = np.empty([h, w, 3])
    coords = []
    for color_sample in color_samples:
        coord = tuple(x*2 for x in color_sample[:2][::-1])
        colors[coord] = color_sample[2:]
        coords.append(coord)

    tree = KDTree(coords)
    idxs = tree.query(pixel_coords)[1]
    data = colors[tuple(tree.data[idxs].astype(int).T)].reshape((w, h, 3))
    data = np.transpose(data, (1, 0, 2))

    return downscale_local_mean(data, (2, 2, 1))


if __name__ == "__main__":
    warnings.simplefilter("ignore")

    img = imread(sys.argv[1])[:, :, :3]

    print("Calibrating...")
    mult = 1.02 * 500 / len(poisson_disc(img, 500))

    for n in (100, 300, 1000, 3000):
        print("Sampling {} for size {}.".format(sys.argv[1], n))
        sample_points = poisson_disc(img, mult * n)
        samples = sample_colors(img, sample_points, n)
        base = os.path.basename(sys.argv[1])
        with open("{}-{}.txt".format(os.path.splitext(base)[0], n), "w") as f:
            for sample in samples:
                f.write(" ".join("{:.3f}".format(x) for x in sample) + "\n")

        imsave("autorenders/{}-{}.png".format(os.path.splitext(base)[0], n),
               render(img, samples))

        print("Done!")
```

# Images

`N` is 100, 300, 1000 and 3000, respectively:

I like this; it looks a bit like smoked glass.

I messed around with this a bit, and you get better results, particularly for the low triangle images, if you replace `denoise_bilateral` with `denoise_tv_bregman`. It generates more even patches in its denoising, which helps.

@LKlevin What weight did you use?

I used 1.0 as the weight.
