How palette extraction works under the hood
2026-02-21 · Ellie Wagner
The core of DuskRidge is a colour quantisation step that takes an arbitrary JPEG or PNG and produces a small palette. Here's how that works, and a few decisions that weren't obvious.
Median cut vs k-means
The classic algorithm for palette extraction is median cut, which recursively splits the colour space along the axis with the highest variance. It's fast and deterministic, and it's what most older tools use.
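To make the splitting concrete, here is a minimal sketch of median cut. It is not DuskRidge's code: the function name is hypothetical, and where the text describes splitting along the highest-variance axis, this sketch uses the common widest-range approximation of that rule.

```python
def median_cut(pixels, depth=4):
    """Recursively split a list of (r, g, b) tuples into 2**depth buckets
    and return one average colour per bucket.

    Hypothetical sketch; splits on the widest-range channel, a common
    stand-in for the highest-variance axis."""
    if depth == 0 or len(pixels) <= 1:
        n = len(pixels)
        return [tuple(sum(p[c] for p in pixels) // n for c in range(3))]
    # Pick the channel with the widest range as the split axis.
    ranges = [max(p[c] for p in pixels) - min(p[c] for p in pixels)
              for c in range(3)]
    axis = ranges.index(max(ranges))
    pixels = sorted(pixels, key=lambda p: p[axis])
    mid = len(pixels) // 2  # split at the median along that axis
    return median_cut(pixels[:mid], depth - 1) + median_cut(pixels[mid:], depth - 1)
```

Because every bucket holds roughly the same number of pixels, a large flat region claims several buckets for itself, which is exactly the skew described below.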
DuskRidge uses k-means instead, because median cut tends to over-represent colours that occupy large flat areas of an image. A landscape photo with a big grey sky produces a palette skewed towards sky greys, which is rarely what you want for a terminal theme.
k-means with a good initialisation (k-means++ in our case) clusters by perceptual density rather than spatial coverage, which gives palettes that better represent the visually interesting parts of the image.
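The clustering step can be sketched as follows. This is an illustrative stand-in, not DuskRidge's implementation: the function name is invented, and it clusters raw RGB tuples for brevity where the real pipeline works in CIELAB (next section).

```python
import random

def kmeans_palette(pixels, k=5, iters=10, seed=0):
    """Cluster (r, g, b) pixels with k-means, seeded by k-means++.
    Hypothetical sketch; DuskRidge runs the same idea in CIELAB."""
    rng = random.Random(seed)

    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # k-means++ initialisation: each new centre is sampled with
    # probability proportional to its squared distance from the
    # nearest centre chosen so far.
    centres = [rng.choice(pixels)]
    while len(centres) < k:
        weights = [min(d2(p, c) for c in centres) for p in pixels]
        centres.append(rng.choices(pixels, weights=weights)[0])

    for _ in range(iters):
        # Assign every pixel to its nearest centre...
        buckets = [[] for _ in range(k)]
        for p in pixels:
            buckets[min(range(k), key=lambda i: d2(p, centres[i]))].append(p)
        # ...then move each centre to the mean of its bucket.
        centres = [
            tuple(sum(ch) / len(b) for ch in zip(*b)) if b else centres[i]
            for i, b in enumerate(buckets)
        ]
    return centres
```

The k-means++ seeding matters in practice: random initial centres can land in the same dense region and leave a distinctive accent colour unrepresented.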
Perceptual colour space
The clustering runs in CIELAB rather than RGB. Euclidean distance in CIELAB correlates much better with perceived colour difference than in RGB, which means the cluster boundaries end up where a human would draw them.
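For reference, the input-side transform looks roughly like this. It is a sketch using the standard published sRGB/D65 constants; the function name is an assumption, not DuskRidge's API.

```python
def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (D65 white point).
    Hypothetical sketch of the input-side transform."""
    def lin(u):  # undo the sRGB gamma curve
        u /= 255
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4

    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear sRGB -> CIE XYZ (D65 matrix).
    X = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    Y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    Z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    def f(t):  # CIELAB's cube-root compression, with linear toe
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(X / 0.95047), f(Y / 1.0), f(Z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

Once pixels are in this space, plain Euclidean distance between two Lab triples serves as the perceptual distance the clustering optimises.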
Converting back to sRGB for output is straightforward, though some CIELAB points fall outside the sRGB gamut, so highly saturated extracted colours occasionally need clipping back into range.
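The output step can be sketched like so, with the clip folded into the gamma encode. Again the function name is hypothetical and the constants are the standard D65 values; this is an illustration of where clipping happens, not DuskRidge's exact code.

```python
def lab_to_srgb(L, a, b):
    """Convert CIELAB (D65) to 8-bit sRGB, clamping out-of-gamut values.
    Hypothetical sketch of the output-side transform."""
    # Lab -> XYZ
    fy = (L + 16) / 116
    fx = fy + a / 500
    fz = fy - b / 200

    def f_inv(t):  # inverse of CIELAB's cube-root compression
        return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)

    X = 0.95047 * f_inv(fx)
    Y = 1.00000 * f_inv(fy)
    Z = 1.08883 * f_inv(fz)
    # XYZ -> linear sRGB (inverse D65 matrix).
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    bl = 0.0557 * X - 0.2040 * Y + 1.0570 * Z

    def gamma(u):
        # Clip here: out-of-gamut Lab points yield linear values
        # outside [0, 1] before encoding.
        u = min(max(u, 0.0), 1.0)
        return 12.92 * u if u <= 0.0031308 else 1.055 * u ** (1 / 2.4) - 0.055

    return tuple(round(255 * gamma(u)) for u in (r, g, bl))
```

Clipping each channel independently can shift the hue slightly for extreme colours; gamut mapping would preserve hue better, but for five-or-so palette entries the simple clamp is rarely visible.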