A large format XY scanning hyperspectral camera
A hyperspectral camera captures spectral data for each pixel. To create my version of a hyperspectral camera I used a 200um fibre optic cable attached to a spectrometer, which is moved in both the X and Y axes behind a large format camera lens.
I make use of the https://www.colour-science.org library to create an RGB image from my spectral data.
Note: I recommend watching ‘Do It Yourself Hyperspectral’ for a much more sensible approach, which uses linear scanning, meaning a column of the image is scanned and a spectral graph for each row of the column is produced. Also see the links at the bottom of the page.
The following parts were used:
I 3D printed a plastic bracket to connect the two stages together and a long piece of plastic with screw holes, which I used to bolt the X stage to the Sinar stand.

You can see how I printed a plastic part to clip the fibre cable onto the Y stage.

I designed the 3D printed parts using OpenSCAD and printed them using a Bambu A1 mini in PETG. The following image depicts the fibre clip:

The following image depicts the bracket to attach the two stages together. As it is printed as 5mm thick plastic with 5 walls it feels pretty sturdy, but it would probably make sense to add support triangles for extra rigidity. I also need to drill an extra 2 holes in the metal of one of the stages to make it less wobbly when attached.

You can see how the X stage is bolted to the Sinar stand using the following bracket.

You can see how the spectrometer is connected to the SMA905 fibre in the following image.

To control the X and Y stages I made a simple PCB that uses a Raspberry Pi Pico and TMC2130 stepper driver (TMC SilentStepStick SPI) modules. I wrote a simple MicroPython program which listens on the USB UART for commands such as:
After a command has been received I use the Pico’s PIO to generate pulses at a fixed frequency, one pulse per step requested (4000 in this example). These pulses are sent to the appropriate stepper driver. Once the pulses have been generated, the message ‘done’ is sent back over the UART. Currently I’m not using a trapezoidal motion profile.
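The sketch below gives a rough idea of the approach (the pin numbers, state machine frequency and command parsing are simplified placeholders rather than the exact firmware):

# Rough MicroPython sketch of the pulse generation. Pin numbers, the state
# machine frequency and the command parsing are placeholders, not the exact firmware.
import sys
import rp2
from machine import Pin

@rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
def step_pulses():
    pull()                  # wait for the pulse count (N - 1) from the TX FIFO
    mov(x, osr)
    label("pulse")
    set(pins, 1) [1]        # STEP high for 2 PIO cycles
    set(pins, 0) [1]        # STEP low for 2 PIO cycles
    jmp(x_dec, "pulse")     # repeat until the counter runs out
    push()                  # push a word so the host knows the burst is finished

# 5 PIO cycles per step, so freq=20_000 gives a 4kHz step rate (placeholder values)
sm = rp2.StateMachine(0, step_pulses, freq=20_000, set_base=Pin(2))
sm.active(1)

while True:
    line = sys.stdin.readline()       # e.g. "X 4000" received over the USB UART
    axis, steps = line.split()
    # ...select the STEP/DIR pins of the appropriate driver for 'axis' here...
    sm.put(int(steps) - 1)            # the post-decrement loop runs exactly N times
    sm.get()                          # blocks until the PIO signals completion
    print("done")

With 5 PIO cycles per step, the step rate is simply the state machine frequency divided by 5.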

On a couple of computers I got the error “usb.core.USBError: [Errno 5] Input/Output Error” from the spectrometer after a few minutes; this appeared to be related to using an overly long USB-C cable.
In order to convert the output of my spectrometer to RGB values, I normalise the spectrum to values 0 to 1 and make use of the excellent colour science Python library (https://www.colour-science.org/) to handle the conversion.
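A minimal sketch of that conversion for a single pixel (the wavelength range and array names are illustrative, not the actual processing code):

# Minimal sketch of converting one pixel's spectrum to sRGB with colour-science.
# The wavelength range and the raw counts are illustrative placeholders.
import numpy as np
import colour

wavelengths = np.arange(340, 901)            # nm
counts = np.random.rand(wavelengths.size)    # stand-in for the measured spectrum

intensities = counts / counts.max()          # normalise to 0..1
sd = colour.SpectralDistribution(dict(zip(wavelengths, intensities)))

XYZ = colour.sd_to_XYZ(sd)                   # integrate against the CIE 1931 observer
rgb = np.clip(colour.XYZ_to_sRGB(XYZ / 100), 0, 1)   # sd_to_XYZ returns values in 0..100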
The following image was created with a 0.2s exposure per pixel, a 0.1s delay after stepper movement and 4000 steps between each pixel.
In order to get the focus approximately correct I placed ground glass near the back of the bellows and mainly adjusted the front part of the bellows where the lens is attached. I then removed the ground glass and covered the linear stages and fibre with a black cloth.
To minimise the effects of external light sources I kept the curtains closed.
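The capture itself is just a raster scan: expose, store the spectrum, step the stage, wait for it to settle and repeat. A rough sketch, assuming a pyserial connection to the Pico and a placeholder read_spectrum() helper (not the actual capture script):

# Rough sketch of the acquisition loop. The serial port, command format and
# read_spectrum() helper are placeholders, not the actual capture script.
import time
import pickle
import numpy as np
import serial

STEPS = 4000     # stepper steps between pixels
SETTLE = 0.1     # delay after stepper movement, in seconds

def read_spectrum(exposure):
    # placeholder for the spectrometer read (0.2s exposure per pixel)
    return np.random.rand(2048)

ser = serial.Serial("/dev/ttyACM0", 115200, timeout=30)
image = {}

for y in range(400):
    for x in range(400):
        image[(x, y)] = read_spectrum(exposure=0.2)
        ser.write(f"X {STEPS}\n".encode())
        ser.readline()               # wait for 'done' from the Pico
        time.sleep(SETTLE)           # let vibrations die down before exposing
    ser.write(f"Y {STEPS}\n".encode())   # move down to the next row
    ser.readline()
    # ...return the X stage to the start of the row here...

with open("scan.pkl", "wb") as f:
    pickle.dump(image, f)            # pickled spectra, one per pixel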

I then tried lowering the delay after stepper movement –
The following image was created with a 0.2s exposure per pixel, a 0.04s delay after stepper movement and 4000 steps between each pixel; you can see there is noticeable linear distortion.

The following graph depicts the spectral output of one of the pixels of the cat’s rose-tinted glasses, which as you can see is very noisy (I believe with longer exposure times and/or averaging in the spectrometer there would be less noise).

The following is the first full image I obtained, at 400 x 400 pixels. It took approximately 1126 minutes (about 18.8 hours) to capture and produced a pickled file of around 6.8GB. You will notice a fair amount of noise and low contrast. For each pixel I normalised the spectral values between 0 and 1 based on the maximum intensity of any wavelength for that pixel.
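In numpy terms the per-pixel normalisation is roughly the following, assuming a hypothetical (height, width, wavelengths) cube of raw counts:

# Per-pixel vs global normalisation, roughly. 'spectra' is a hypothetical
# (height, width, wavelengths) cube of raw counts standing in for the scan data.
import numpy as np

spectra = np.random.rand(10, 10, 2048)                      # placeholder data
per_pixel = spectra / spectra.max(axis=2, keepdims=True)    # each spectrum scaled by its own peak
global_norm = spectra / spectra.max()                       # the alternative used below: one global peak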

If I instead normalise the pixel values based on the maximum intensity across all of the spectra, I get the following image –

This was lightened using imagemagick via –
convert first-global.png -linear-stretch 30x30 first-global-stretch.png
According to the documentation – “while performing the stretch, black-out at most black-point pixels and white-out at most white-point pixels.”

I chose to generate an image using just the 550nm wavelength data; you can see how noisy the result is. Note that I used a multiplication gain of 15 to increase the brightness of the following images.
# global normalisation with a x15 software gain, clipped to the 8-bit range
import numpy as np
from PIL import Image

minv = np.min(wavlength_imgs[w])
maxv = np.max(wavlength_imgs[w])
arr = (wavlength_imgs[w] - minv) / (maxv - minv) * 15 * 255
arr = np.clip(arr, 0, 255)   # the gain pushes bright pixels past 255, so clip
img = Image.fromarray(arr.astype(np.uint8), 'L')

By using intensity information from the 25 wavelengths either side of the central wavelength and averaging, the noise is significantly reduced.
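Continuing from the snippet above, a rough version of that band average (assuming wavlength_imgs can be indexed by adjacent sample numbers):

# Band average around the central sample w: mean of the 25 samples either side.
# This assumes wavlength_imgs can be indexed by adjacent sample numbers.
half_window = 25
band = np.stack([wavlength_imgs[i] for i in range(w - half_window, w + half_window + 1)])
arr = band.mean(axis=0)   # averaged image, normalised and gained as above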

The following video shows frames for wavelengths ranging from 300nm to 900nm under a 10.5W LED light (using a multiplication gain of 15).
The following video shows frames for wavelengths ranging from 300nm to 900nm under an 850nm 12W light, a 395nm 10W light and a 10.5W LED light. You will notice how the cat’s rose-tinted glasses look black under the UV light (I believe they’re acting like sunglasses and blocking UV).
The lines near the top of the image possibly come from changes in sunlight (this video used a multiplication gain of 5).
The following still frame shows the data at 400nm, brightened using imagemagick ‘-linear-stretch 300x300’; you can see how the sunglasses look black.

The following video, with a software gain of 7, shows data from all the wavelengths the spectrometer captures, using an 850nm 12W light, a 395nm 10W light and a 10.5W LED light (this took 1125.36 minutes to capture).
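One way to assemble frames like these into a video is to write out a gained PNG per wavelength and stitch them together with ffmpeg; roughly:

# One way to write a gained 8-bit PNG per wavelength frame; 'wavlength_imgs'
# is assumed to be a dict of 2D arrays as in the earlier snippets.
gain = 7
global_max = max(img.max() for img in wavlength_imgs.values())
for i, w in enumerate(sorted(wavlength_imgs)):
    frame = np.clip(wavlength_imgs[w] / global_max * gain * 255, 0, 255)
    Image.fromarray(frame.astype(np.uint8), 'L').save(f"frame_{i:04d}.png")

The PNGs can then be stitched into a video with, for example:

ffmpeg -framerate 25 -i frame_%04d.png wavelengths.mp4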
Source code
You can find the MCU firmware, the processing code and the 3D print files here.
To Do
- Try different sized pinholes in front of fibre
- Try different widths of fibre
- Try exposures longer than 0.2s to see if this decreases noise in the spectral data
- Try adjusting averaging of data in spectrometer
- Software to browse spectral data of each pixel
Hyperspectral imaging links
Line Scan Hyperspectral Imaging Framework for Open Source Low-Cost Platforms – much more sensible approach
Smartphone Cameras Go Hyperspectral – very interesting looking approach using a colour chart and algorithm
HyperCam: Hyperspectral Imaging for Ubiquitous Computing Applications – changing the colour of light