Show HN: A large format XY scanning hyperspectral camera

原始链接: https://www.anfractuosity.com/projects/waverider/

## Summary: DIY hyperspectral camera

This project details how a custom hyperspectral camera was built from a spectrometer, a 200 µm fibre optic cable and a large format lens. The system uses a two-dimensional scanning approach, with X and Y stages driven by a Raspberry Pi Pico and stepper motors, to capture spectral data for every pixel. 3D-printed parts, designed in OpenSCAD and printed on a Bambu A1 mini, hold and align the components. Data processing uses the `colour-science` Python library to convert the spectral readings into RGB images. Initial results show the system works, but the images are noisy and acquisition times are long (over 18 hours for a 400x400 pixel image). Noise-reduction techniques, such as averaging data across multiple wavelengths, are being explored. The write-up highlights challenges such as USB connection problems and image distortion, as well as potential improvements including longer exposure times, different fibre configurations and dedicated software for browsing the spectral data. The author acknowledges that more efficient linear scanning approaches exist and provides links to related projects for further exploration.


Original article

A large format XY scanning hyperspectral camera

A hyperspectral camera captures spectral data for each pixel. To create a version of a hyperspectral camera I made use of a 200 µm fibre optic cable attached to a spectrometer, which is moved along both the X and Y axes behind a large format camera lens.

I make use of the https://www.colour-science.org library to create an RGB image from my spectral data.

Note: I recommend watching ‘Do It Yourself Hyperspectral‘ for a much more sensible approach, which uses linear scanning, meaning a column of the image is scanned and a spectral graph for each row of the column is produced. Also see the links at the bottom of the page.

The following parts were used:

I 3D printed a plastic bracket to connect the two stages together and a long piece of plastic with screw holes, which I used to bolt the X stage to the Sinar stand.

You can see how I printed a plastic part to clip the fibre cable onto the Y stage.

I designed the 3D printed parts using OpenSCAD and printed them using a Bambu A1 mini in PETG. The following image depicts the fibre clip:

The following image depicts the bracket to attach the two stages together. As it is printed as 5mm thick plastic with 5 walls it feels pretty sturdy, but it would probably make sense to add extra support triangles for increased rigidity. Additionally, I need to drill two extra holes in the metal of one of the stages to make it less wobbly when attached.

You can see how the X stage is bolted to the Sinar stand using the following bracket.

You can see how the spectrometer is connected to the SMA905 fibre in the following image.

To control the X and Y stages I made a simple PCB that uses a Raspberry Pi Pico and TMC2130 driver modules (TMC SilentStepStick SPI). I wrote a simple MicroPython program which listens on the USB UART for movement commands.

After a command has been received, I use the Pico’s PIO to generate a train of pulses at a fixed frequency for the requested number of steps (for example, 4000). These pulses are sent to the appropriate stepper driver. Once the pulses have been generated, the message ‘done’ is sent back over the UART. Currently I’m not using a trapezoidal motion profile.
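The rough idea can be sketched in MicroPython as follows; the pin number, PIO clock frequency, pulse timing and the `move()` helper are my own illustrative choices rather than the project’s actual firmware:

    import rp2
    from machine import Pin

    @rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
    def step_pulses():
        pull()                     # wait for the pulse count from the CPU
        mov(x, osr)
        label("loop")
        set(pins, 1) [15]          # STEP pin high for 16 PIO cycles
        set(pins, 0) [14]          # STEP pin low for 15 PIO cycles
        jmp(x_dec, "loop")         # repeat until X reaches zero
        in_(x, 32)                 # push a word back so the CPU can block until done
        push()

    # Assumed wiring: STEP pin on GP2; ~32 PIO cycles per pulse at 20 kHz
    sm = rp2.StateMachine(0, step_pulses, freq=20_000, set_base=Pin(2))
    sm.active(1)

    def move(steps):
        sm.put(steps - 1)          # jmp(x_dec) runs the loop X+1 times
        sm.get()                   # blocks until the PIO program finishes
        print("done")              # reply over the USB serial console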

I found that on a couple of computers the spectrometer gave the error “usb.core.USBError: [Errno 5] Input/Output Error” after a few minutes; this appeared to be related to using an overly long USB-C cable.

In order to convert the output of my spectrometer to RGB values, I normalise the spectrum to values 0 to 1 and make use of the excellent colour science Python library (https://www.colour-science.org/) to handle the conversion.
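A minimal sketch of that conversion, assuming the spectrometer readings arrive as parallel wavelength and intensity arrays (the function name is mine):

    import numpy as np
    import colour

    def spectrum_to_rgb(wavelengths, intensities):
        # Normalise the spectrum to the range 0..1
        intensities = np.asarray(intensities, dtype=float)
        intensities = intensities / intensities.max()

        # Build a spectral distribution, convert to CIE XYZ, then to sRGB
        sd = colour.SpectralDistribution(dict(zip(wavelengths, intensities)))
        XYZ = colour.sd_to_XYZ(sd) / 100.0   # sd_to_XYZ returns values scaled ~0-100
        return np.clip(colour.XYZ_to_sRGB(XYZ), 0.0, 1.0)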

The following image was created with a 0.2s exposure per pixel, with 0.1s delay after stepper movement using 4000 steps between each pixel.
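For context, the capture itself is conceptually just a raster scan over the two stages. The sketch below shows the idea; `move()` and `read_spectrum()` are hypothetical placeholders (the post does not give the actual command format or spectrometer driver calls), and the row-return logic is a guess.

    import time
    import numpy as np

    def move(axis, steps):
        """Hypothetical helper: send a step command for `axis` to the Pico over serial."""

    def read_spectrum(exposure_s):
        """Hypothetical helper: trigger an exposure and return a 1-D array of intensities."""

    WIDTH, HEIGHT = 400, 400
    STEPS_PER_PIXEL = 4000
    SETTLE_DELAY_S = 0.1
    EXPOSURE_S = 0.2

    rows = []
    for y in range(HEIGHT):
        row = []
        for x in range(WIDTH):
            time.sleep(SETTLE_DELAY_S)              # let the stage settle after moving
            row.append(read_spectrum(EXPOSURE_S))   # one spectrum per pixel
            move("X", STEPS_PER_PIXEL)              # step to the next column
        move("X", -STEPS_PER_PIXEL * WIDTH)         # return to the start of the row (guessed)
        move("Y", STEPS_PER_PIXEL)                  # advance to the next row
        rows.append(row)

    cube = np.array(rows)                           # shape: (height, width, n_wavelengths)

With these settings, 0.2s of exposure plus 0.1s of settling per pixel already comes to roughly 13 hours for a 400 x 400 scan before any stage movement, which is consistent with the capture times reported below.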

In order to get the focus approximately correct I placed ground glass near the back of the bellows and mainly adjusted the front part of the bellows where the lens is attached. I then removed this and covered the linear stages and fibre with a black cloth.

In order to minimise the effects of external light sources I kept the curtains closed.

I then tried lowering the delay after stepper movement –

The following image was created with a 0.2s exposure per pixel and a 0.04s delay after stepper movement, using 4000 steps between each pixel; you can see there is noticeable linear distortion.

The following graph depicts the spectral output of one of the pixels of the cat’s rose-tinted glasses, which as you can see is very noisy (I believe that with longer exposure times or averaging in the spectrometer there would be less noise).

The following is the first full image I obtained, at 400 x 400 pixels. It took approximately 1126 minutes (about 18.8 hours) to generate and created a pickled file of around 6.8GB. You will notice a fair amount of noise and low contrast. For each pixel I normalised the spectral values between 0 and 1 based on the maximum intensity of a wavelength for that pixel.

If I instead normalised the pixel values based on the maximum intensity of a wavelength out of all the spectra, we get the following image –

This was lightened using imagemagick via –

 convert first-global.png -linear-stretch 30x30 first-global-stretch.png 

According to the documentation – “while performing the stretch, black-out at most black-point pixels and white-out at most white-point pixels.”
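Stepping back, the difference between the two normalisation schemes above is just which maximum is used as the divisor; a small sketch, assuming the spectra are stored as a (height, width, n_wavelengths) array (names are mine):

    import numpy as np

    def normalise_per_pixel(cube):
        # Each pixel's spectrum is divided by that pixel's own peak intensity
        peak = cube.max(axis=2, keepdims=True)
        return cube / np.maximum(peak, 1e-12)

    def normalise_global(cube):
        # Every spectrum is divided by the single brightest reading in the whole capture
        return cube / cube.max()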

I chose to generate an image using the 550nm wavelength data; you can see how noisy the result is. Note that I used a multiplication gain of 15 to increase the exposure of the following images.

    # Global normalisation: wavlength_imgs[w] is the 2D intensity array for wavelength w
    import numpy as np
    from PIL import Image

    minv = np.min(wavlength_imgs[w])
    arr = ((wavlength_imgs[w] - minv) / (np.max(wavlength_imgs[w]) - minv)) * 15 * 255
    arr[arr > 255] = 255
    img = Image.fromarray(arr.astype(np.uint8), 'L')

By using intensity information from 25 wavelengths either side of the central wavelength and averaging, the noise is significantly reduced.
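A minimal sketch of that averaging, again assuming a (height, width, n_wavelengths) cube and addressing the chosen wavelength by index:

    import numpy as np

    def band_average(cube, centre_idx, half_window=25):
        # Average intensities over +/- half_window bands around the chosen wavelength
        lo = max(centre_idx - half_window, 0)
        hi = min(centre_idx + half_window + 1, cube.shape[2])
        return cube[:, :, lo:hi].mean(axis=2)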

The following video shows frames ranging from wavelengths 300nm to 900nm under a 10.5W LED light (used multiplication gain of 15).

The following video shows frames ranging from wavelengths 300nm to 900nm under an 850nm 12W light, a 395nm 10W light and a 10.5W LED light. You will notice how the cat’s rose-tinted glasses look black under the UV light (I believe they’re possibly acting like sunglasses and blocking UV).

The lines near the top of the image possibly come from changes in sunlight.

(used multiplication gain of 5)

The following still frame shows data at 400nm, using imagemagick ‘-linear-stretch 300x300’ to brighten; you can see how the sunglasses look black.

The following video with a software gain of 7, shows data from all wavelengths the spectrometer captures using an 850nm 12W light, 395nm 10W light and 10.5W LED light (this took 1125.36 mins to capture).

Source code

You can find the source code for the MCU code, processing code and 3D prints here.

To Do

  • Try different sized pinholes in front of fibre
  • Try different widths of fibre
  • Try exposures longer than 0.2s to see if this decreases noise in the spectral data
  • Try adjusting averaging of data in spectrometer
  • Software to browse spectral data of each pixel

Hyperspectral imaging links

Line Scan Hyperspectral Imaging Framework for Open Source Low-Cost Platforms – much more sensible approach

Smartphone Cameras Go Hyperspectral – very interesting looking approach using a colour chart and algorithm

HyperCam: Hyperspectral Imaging for Ubiquitous Computing Applications – changing the colour of light
