This looks interesting. I was wondering if you could explain a little more about what information this transform exposes about a picture? I mean, looking at just the output image, what kind of insight can you gain into the original input image?
That is weird. It decides to flip a column by summing up the vector distances between the pixels in that column and the pixels in the previous column. I'm wondering if, since there is so much gray in the painting, this causes a large accumulation of error from gray vs. slightly different gray that outweighs orange vs. gray.
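If I'm reading the flip rule right, it's something like this (a rough NumPy sketch of my understanding, not the project's actual code; the function names are made up):

    import numpy as np

    def column_cost(col_a, col_b):
        # sum of per-pixel Euclidean RGB distances between two columns of shape (H, 3)
        return np.linalg.norm(col_a.astype(float) - col_b.astype(float), axis=1).sum()

    def maybe_flip(prev_col, col):
        # keep whichever orientation (as-is or reversed) is closer to the previous column
        return col[::-1] if column_cost(prev_col, col[::-1]) < column_cost(prev_col, col) else col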
I do know that euclidean distance in RGB space is not that great a metric for perceptual similarity. Perhaps everything should be done in YUV instead? I will try tonight. Thanks.
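For a quick test, something like the standard BT.601 conversion might do (just a sketch; I haven't checked how the project actually handles colorspaces):

    import numpy as np

    # BT.601 RGB -> YUV, assuming RGB components scaled to [0, 1]
    RGB_TO_YUV = np.array([
        [ 0.299,    0.587,    0.114  ],
        [-0.14713, -0.28886,  0.436  ],
        [ 0.615,   -0.51499, -0.10001],
    ])

    def yuv_distance(rgb_a, rgb_b):
        # Euclidean distance taken in YUV space instead of raw RGB
        a = RGB_TO_YUV @ np.asarray(rgb_a, dtype=float)
        b = RGB_TO_YUV @ np.asarray(rgb_b, dtype=float)
        return np.linalg.norm(a - b)

At least YUV separates luma from chroma, so the gray-vs-gray differences would concentrate in one axis.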
I do, only because it's from the institution that actually owns the real painting. Museums try pretty hard to get colors right on the digital images that they put online. And since the project is about color, it seems like an important detail. Thanks for being open to my nitpicking. Very cool project.
I made a somewhat similar project by organizing pixels in the three-dimensional space of the HSL (hue, saturation, lightness) color model: http://www.pixelbox.com/colorclouds/
Note that this reduces identical pixels into one point in 3D space. The key idea behind the experiment was to learn more about the palettes used by painters.
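In Python terms, the dedup step is roughly something like this (an illustrative sketch only, not the production code; note that colorsys returns HLS order):

    import colorsys
    from PIL import Image

    def hsl_points(path):
        img = Image.open(path).convert("RGB")
        unique_rgb = set(img.getdata())   # identical pixels collapse to a single entry
        points = []
        for r, g, b in unique_rgb:
            h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
            points.append((h, s, l))      # one (hue, saturation, lightness) point per distinct color
        return points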
Yes: it seems to happen for colors that are similar to two or more other clusters of pixels. I imagine these are just really hard to place anywhere without seeming out of place? It probably also has to do with the column-by-column "greedy" approach, which isn't really global.
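A toy version of the greedy idea shows why that can happen (illustrative only, not the project's actual algorithm):

    import numpy as np

    def greedy_order(pixels):
        # repeatedly take the unplaced pixel nearest (Euclidean RGB) to the last one placed;
        # a color that sits between two clusters keeps losing these local contests and can
        # end up stranded far from anything similar once both clusters are used up
        pool = [tuple(p) for p in pixels]
        ordered = [pool.pop(0)]
        while pool:
            last = np.array(ordered[-1], dtype=float)
            i = min(range(len(pool)),
                    key=lambda k: np.linalg.norm(np.array(pool[k], dtype=float) - last))
            ordered.append(pool.pop(i))
        return ordered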
In one Iain M Banks story, it is mentioned that a character was ordered not to destroy a library, so he had all the paintings given the treatment provided by this software and all the books cut up and arranged so that individual letters were in alphabetical order.
My first thought was "paintings don't have pixels"; but colour is quantised, I guess. I wonder how many pixels you would need to get the same gradation between colour tones as the original image, presumably enough to provide atom-level resolution.
I believe the term you want is "discretized", not "quantized". They're quantized too, but that's about there being a finite number of shades rather than a finite number of subdivisions of the image.