When you upload a photo to Spectrimage, the image is drawn onto a hidden canvas scaled so the longest side is 300 pixels, preserving the original aspect ratio. Every pixel is read and converted from RGB to HSL. Black, white, and gray pixels with no discernible hue go into an achromatic bucket; everything else is binned by hue into 2-degree slices, giving 180 bins across the color wheel.
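The classification step can be sketched as a standard RGB-to-HSL conversion followed by a threshold test. The cutoffs below (`SAT_MIN`, `LIGHT_MIN`, `LIGHT_MAX`) are illustrative guesses, not Spectrimage's actual values:

```javascript
// Convert 0-255 RGB channels to hue (0-360 degrees), saturation and
// lightness (both 0-1), using the standard HSL formulas.
function rgbToHsl(r, g, b) {
  r /= 255; g /= 255; b /= 255;
  const max = Math.max(r, g, b), min = Math.min(r, g, b);
  const l = (max + min) / 2;
  const d = max - min;
  if (d === 0) return { h: 0, s: 0, l }; // pure gray, hue undefined
  const s = d / (1 - Math.abs(2 * l - 1));
  let h;
  if (max === r)      h = ((g - b) / d) % 6;
  else if (max === g) h = (b - r) / d + 2;
  else                h = (r - g) / d + 4;
  h *= 60;
  if (h < 0) h += 360;
  return { h, s, l };
}

// Illustrative thresholds (my assumption): low saturation or extreme
// lightness means the pixel reads as achromatic.
const SAT_MIN = 0.1, LIGHT_MIN = 0.05, LIGHT_MAX = 0.95;

// Map a pixel to one of 180 two-degree hue bins, or -1 for achromatic.
function binIndex({ h, s, l }) {
  if (s < SAT_MIN || l < LIGHT_MIN || l > LIGHT_MAX) return -1;
  return Math.floor(h / 2) % 180; // 360 degrees / 2-degree slices
}
```

In the real pipeline the pixel data would come from `ctx.getImageData()` on the hidden canvas; here the conversion is shown standalone.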
For each hue bin, the pixels are sorted by lightness. The darkest 20% are averaged to produce the shade color (bottom of the column), the middle 20% produce the pure color, and the lightest 20% produce the tint (top of the column). Averaging 20% slices rather than taking the extremes suppresses outlier noise (such as a single nearly-black pixel from a deep crack between two oranges), so the shade matches what your eye actually perceives as "the dark version of this orange."
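One way to sketch the shade/pure/tint extraction for a single bin, assuming each pixel is an `{ h, s, l }` object; averaging channels directly in HSL is a simplification of whatever averaging the tool really uses:

```javascript
// Average the h, s, l channels of a slice of pixels.
function avgHsl(pixels) {
  const n = pixels.length;
  const sum = pixels.reduce(
    (a, p) => ({ h: a.h + p.h, s: a.s + p.s, l: a.l + p.l }),
    { h: 0, s: 0, l: 0 }
  );
  return { h: sum.h / n, s: sum.s / n, l: sum.l / n };
}

// Given all pixels in one hue bin, return representative shade, pure,
// and tint colors from the darkest, middle, and lightest 20% slices.
function binColors(pixels) {
  const sorted = [...pixels].sort((a, b) => a.l - b.l);
  const n = sorted.length;
  const k = Math.max(1, Math.floor(n * 0.2)); // 20% slice, at least 1 pixel
  const mid = Math.floor((n - k) / 2);        // center the middle slice
  return {
    shade: avgHsl(sorted.slice(0, k)),         // darkest 20%
    pure:  avgHsl(sorted.slice(mid, mid + k)), // middle 20%
    tint:  avgHsl(sorted.slice(n - k)),        // lightest 20%
  };
}
```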
The bins are sorted into ROYGBIV order using a continuous hue remapping that handles red's wrap-around (red straddles both ends of the 0–360 degree scale, appearing at both 355 and 5 degrees). The achromatic bin is appended at the end.
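The wrap-around can be handled with a single modular shift of the hue wheel. The 330-degree cut point below is my assumption about where red begins, not the tool's actual boundary:

```javascript
// Hues at or above RED_START wrap around to join hues near 0, so both
// ends of red sort together at the front of the spectrum.
const RED_START = 330;

// Continuous sort key: rotates the wheel so red comes first and
// violet comes last.
function roygbivKey(hue) {
  return (hue - RED_START + 360) % 360;
}

// Sort bin hue centers into ROYGBIV order; the achromatic bin
// (marked -1) is appended at the end.
function sortBins(hues) {
  return [...hues].sort((a, b) => {
    if (a === -1) return 1;
    if (b === -1) return -1;
    return roygbivKey(a) - roygbivKey(b);
  });
}
```

With this key, a red at 355 degrees (key 25) sorts just before a red at 5 degrees (key 35), so both ends of the wheel land together at the start.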
The spectrum is rendered on an HTML Canvas. Each bin gets an equal-width column. The column's height is proportional to its pixel count relative to the most common hue. A linear gradient paints each column from tint at top, through pure at center, to shade at bottom. The gradient produces a smoother visual result than plotting every pixel individually.
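The layout can be expressed as plain data before any drawing happens. The bin shape `{ count, tint, pure, shade }` is hypothetical; in the real renderer these stops would feed straight into `ctx.createLinearGradient()` and `ctx.fillRect()`:

```javascript
// Compute each column's position, size, and gradient stops.
function layoutColumns(bins, canvasWidth, canvasHeight) {
  const maxCount = Math.max(...bins.map((b) => b.count));
  const colWidth = canvasWidth / bins.length; // equal-width columns
  return bins.map((b, i) => ({
    x: i * colWidth,
    width: colWidth,
    // Height scales with pixel count relative to the most common hue.
    height: (b.count / maxCount) * canvasHeight,
    // Gradient runs tint (top) -> pure (middle) -> shade (bottom).
    stops: [
      { offset: 0.0, color: b.tint },
      { offset: 0.5, color: b.pure },
      { offset: 1.0, color: b.shade },
    ],
  }));
}
```

Separating layout from drawing like this also makes the geometry testable without a canvas context.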
The whole process runs client-side in the browser. No server round-trip, no image upload to external services, no dependencies beyond the Canvas API and my HSL conversion utilities. For a 4000x3000 photograph, analysis completes in under a second.