Purdue Researchers Convert Smartphone to Hyperspectral Camera Using AI

A new breakthrough in mobile healthcare
Created on June 19 | Last edited on June 20
The emergence of artificial intelligence has profoundly impacted nearly every industry, and healthcare is no exception. A team of researchers at Purdue University has created an AI-driven technology that repurposes a smartphone into a hyperspectral imaging tool. This development could enable swift and accurate medical diagnostics, such as the detection of anemia, a deficiency in the number of red blood cells or the amount of hemoglobin in the blood.

Previous Sensors

To understand why this development in mobile health technology matters, it helps to review the existing state of hyperspectral imaging. These systems traditionally use an array of sensors to capture a wide range of light wavelengths, far beyond the basic Red, Green, and Blue (RGB) channels that smartphone cameras typically record. This ability to capture comprehensive light data makes hyperspectral imaging a potential diagnostic tool for various conditions, including skin and retinal diseases and certain cancers.
Nonetheless, hyperspectral imaging systems have traditionally faced a trade-off between spectral and spatial resolution, leading to large, slow, and costly devices. This situation is analogous to the challenges lidar sensors encounter in self-driving vehicles, where similar trade-offs exist between resolution, size, and cost. Despite substantial efforts to refine this specialized equipment, the size, complexity, and expense are barriers to wider access.

Just a Camera

A solution to this predicament comes from Professor Young Kim's team at Purdue University, who developed an innovative algorithm. They employed deep learning and statistical techniques, combined with insights about light-tissue interactions, to reconstruct the full spectrum of visible light in each pixel of a standard smartphone camera image.
With this methodology, the team effectively converts regular smartphone cameras, which capture only RGB wavelengths of light, into hyperspectral imaging devices. This progress aligns with the broader trend of using just a smartphone camera for healthcare purposes, as seen in recent advancements in measuring Heart Rate Variability (HRV) and ongoing projects like Apple's research into determining blood sugar levels via sensors in the Apple Watch.
The method involves capturing RGB color intensity in each pixel using a smartphone camera's ultra-slow-motion setting, which produces video at roughly 1,000 frames per second. The algorithm uses this information to infer full-spectrum data for each pixel, enabling a detailed analysis of blood flow and oxygenated and deoxygenated hemoglobin levels.
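To make the idea of inferring a full spectrum from three RGB values concrete, here is a minimal toy sketch. It is not the Purdue team's actual algorithm (which combines deep learning, statistics, and light-tissue physics); it simply fits a per-band linear mapping from simulated RGB readings back to a coarse 31-band visible spectrum, using toy Gaussian sensitivity curves that are assumptions, not real sensor data:

```python
import numpy as np

np.random.seed(0)
N_BANDS = 31  # 400-700 nm sampled in 10 nm steps
wavelengths = np.linspace(400, 700, N_BANDS)

def gaussian(center, width):
    """Toy spectral sensitivity curve centered at `center` nm."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Assumed (hypothetical) R, G, B sensor sensitivity curves
sensitivities = np.stack([gaussian(c, 40) for c in (610, 540, 460)])

# Simulated training data: random scene spectra and the RGB values
# a camera would record through those sensitivity curves
spectra = np.random.rand(1000, N_BANDS)
rgb = spectra @ sensitivities.T

# Fit linear reconstruction weights W so that rgb @ W ≈ spectra
W, *_ = np.linalg.lstsq(rgb, spectra, rcond=None)

# Reconstruct the estimated spectrum for one pixel from its RGB reading
est = rgb[:1] @ W
print(est.shape)  # one estimated 31-band spectrum
```

A linear fit like this badly underdetermines the true spectrum, which is exactly why the real system leans on learned priors about how light interacts with tissue.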
This technique enables the smartphone to produce hyperspectral images in a mere millisecond, a feat that conventional hyperspectral imaging systems take three minutes to accomplish.

The Future

The implications of this technology are exciting. By leveraging AI in this way, we edge closer to a future where our smartphones could act as comprehensive medical diagnostic tools, akin to having a doctor at our fingertips. The scenario echoes the transformation brought about by Apple's iPod, which promised "1,000 songs in your pocket."
Tags: ML News