The inspiration for this project was https://allrgb.com/, a site featuring images in which no two pixels are the same colour. Many different approaches are demonstrated there, but I wanted to see if I could achieve similar results with an evolutionary algorithm.
My idea was simple: I'd select an image to try to replicate in every colour, create an output image containing one pixel of every colour, and use a hill-climbing algorithm to adjust the output image to be more similar to the input image. The basic premise of all three algorithms I wrote is that two pixels are selected at random and swapped if the algorithm calculates that the swap would improve the image. Given that there are 16,777,216 pixels in one of these images, it may take a billion or more swaps to reach a result that looks good.
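The swap-if-better step described above can be sketched roughly as follows. This is an illustrative Python version, not the project's actual code; the names (hill_climb_step, the flat pixel lists, the pluggable error function) are all assumptions for the sketch.

```python
import random

def hill_climb_step(output, target, error):
    """Attempt one random pixel swap; keep it only if it lowers the error.

    `output` and `target` are equal-length lists of (r, g, b) tuples, and
    `error` scores one output pixel against one target pixel. Swapping the
    pixels at i and j only changes the error at those two positions, so we
    only need to compare those terms before and after.
    """
    i = random.randrange(len(output))
    j = random.randrange(len(output))
    before = error(output[i], target[i]) + error(output[j], target[j])
    after = error(output[j], target[i]) + error(output[i], target[j])
    if after < before:
        output[i], output[j] = output[j], output[i]
        return True   # swap improved the image and was kept
    return False      # swap rejected; image unchanged
```

Because each step only ever accepts improvements, the total error is non-increasing, which is what makes running the loop a billion or more times safe, if slow.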
The project is open source and can be found on GitHub.
Run it yourself
To run this program yourself on Windows, download and unzip the file below and double-click "allRGB.exe" inside the folder.

AllRGB_v2.0 (zip)
This is the image that each algorithm was attempting to replicate. It is perhaps not the best example, as it doesn't contain a balanced range of colours, but it does highlight the pros and cons of each algorithm.
Due to the large size of the output files, the following results have been scaled to 2048x2048 pixels.
Each pixel is scored by how closely its individual colour channels match the target image. Each pixel has three 0-255 values representing how much red, green, and blue it contains, and the error for a channel is found by subtracting, for example, the red value of the target pixel from the red value of the output pixel; the three channel errors are combined into a single score.
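A minimal sketch of this per-channel scoring might look like the following. The sum of absolute differences is an assumption here; the original could equally use squared differences.

```python
def rgb_error(pixel, target):
    """Score a single output pixel against its target pixel by summing the
    absolute difference of each 0-255 colour channel. Lower is better;
    0 means the pixels match exactly."""
    return sum(abs(p - t) for p, t in zip(pixel, target))
```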
This image contains glaring issues in the dark areas, losing the shadows amongst the grass and on the roses. Also, because the target image contains more green than any other colour, the reds and blues are forced into unnatural places.
Similar to the RGB error, except each pixel is converted to the HSV colourspace first. This means we can bias the error calculation to prefer pixels that are closer in brightness, which may be desirable because human eyesight is much more adept at perceiving brightness differences than colour differences.
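One way to express that brightness bias is to weight the value (V) channel more heavily than hue and saturation. The weights below are purely illustrative; the post doesn't say what ratio the project actually uses.

```python
import colorsys

# Assumed weights: brightness counts four times as much as hue or saturation.
H_WEIGHT, S_WEIGHT, V_WEIGHT = 1.0, 1.0, 4.0

def hsv_error(pixel, target):
    """Weighted error between two (r, g, b) pixels, computed in HSV space
    so that differences in brightness (V) can be penalised more than
    differences in hue or saturation."""
    h1, s1, v1 = colorsys.rgb_to_hsv(*(c / 255 for c in pixel))
    h2, s2, v2 = colorsys.rgb_to_hsv(*(c / 255 for c in target))
    # Hue is an angle, so 0.99 and 0.01 are close; take the wrapped distance.
    dh = min(abs(h1 - h2), 1 - abs(h1 - h2))
    return H_WEIGHT * dh + S_WEIGHT * abs(s1 - s2) + V_WEIGHT * abs(v1 - v2)
```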
While this image does improve the dark regions and brings back the shadows in the grass and on the roses, the trade-off of sacrificing colour accuracy for brightness accuracy is very evident.
This uses the same algorithm as the RGB error, except it compares the average colour of a small area around each pixel rather than just the individual pixel. This does, however, slow down the algorithm greatly.
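The area comparison could be sketched like this, again with assumed names and a flat list-of-tuples image layout. The slowdown is visible in the code: every candidate swap now has to average a whole window of pixels instead of reading one.

```python
def area_error(output, target, x, y, size, radius=1):
    """Compare the mean colour of a small square window centred on (x, y)
    in the output image against the same window in the target image.
    Both images are flat lists of (r, g, b) tuples, `size` pixels per row.
    `radius=1` gives a 3x3 window, clipped at the image edges."""
    def window_mean(img):
        total = [0, 0, 0]
        count = 0
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < size and 0 <= ny < size:
                    for c in range(3):
                        total[c] += img[ny * size + nx][c]
                    count += 1
        return [t / count for t in total]
    out_mean = window_mean(output)
    tgt_mean = window_mean(target)
    return sum(abs(a - b) for a, b in zip(out_mean, tgt_mean))
```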
This is by far the most natural-looking image; however, it sacrifices contrast, making the colours look washed out.