DARPA ReImagine: Reconfigurable Imaging Sensor with Smart Pixels Sees Like Never Before


January 30, 2017

Picture a sensor pixel about the size of a red blood cell. Now envision a million of these pixels—a megapixel’s worth—in an array that covers a thumbnail. Take one more mental trip: dive down onto the surface of the semiconductor hosting all of these pixels and marvel at each pixel’s associated tech-mesh of more than 1,000 integrated transistors, which give each pixel a tiny reprogrammable brain of its own. That is the vision for DARPA’s new Reconfigurable Imaging (ReImagine) program.

“What we are aiming for,” said Jay Lewis, program manager for ReImagine, “is a single, multi-talented camera sensor that can detect visual scenes as familiar still and video imagers do, but that can also adapt and change its personality, effectively morphing into the type of imager that provides the most useful information for a given situation.” This could mean selecting among different thermal (infrared) emissions, resolutions, or frame rates, or even collecting 3-D LIDAR data for mapping and other jobs that increase situational awareness.

The camera ultimately would rely on machine learning to autonomously notice what is happening in its field of view and reconfigure the imaging sensor based on the context of the situation. The future sensor Lewis has in mind would even be able to perform many of these functions simultaneously, because different patches of the sensor’s carpet of pixels could be reconfigured in software to work in different imaging modes. That same reconfigurability should let the sensor toggle between modes from one lightning-quick frame to the next. No single camera can do that today.
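To make the idea of patch-by-patch, frame-by-frame reconfiguration more concrete, here is a minimal software sketch of such a sensor. It is not DARPA's design: the mode names, the 4x4 tile grid, and the `ReconfigurableSensor` class are illustrative assumptions standing in for the per-pixel programmable logic described above.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Dict, Tuple


class Mode(Enum):
    """Hypothetical imaging modes a reconfigurable pixel tile might support."""
    VISIBLE_VIDEO = auto()
    THERMAL_IR = auto()
    HIGH_FRAME_RATE = auto()
    LIDAR_3D = auto()


@dataclass
class ReconfigurableSensor:
    """Toy model of a pixel array split into tiles, each of which can be
    assigned a different imaging mode and re-assigned between frames."""
    tiles_x: int = 4
    tiles_y: int = 4
    # Mode assignment per (x, y) tile; unassigned tiles default to ordinary video.
    assignments: Dict[Tuple[int, int], Mode] = field(default_factory=dict)

    def configure_tile(self, x: int, y: int, mode: Mode) -> None:
        """Software-reconfigure one tile, standing in for reprogramming
        the transistor logic attached to each pixel."""
        if not (0 <= x < self.tiles_x and 0 <= y < self.tiles_y):
            raise ValueError("tile out of range")
        self.assignments[(x, y)] = mode

    def capture_frame(self) -> Dict[Tuple[int, int], Mode]:
        """Return the mode map used for this frame; a real sensor would read
        out each tile's pixels according to its active mode."""
        return {
            (x, y): self.assignments.get((x, y), Mode.VISIBLE_VIDEO)
            for x in range(self.tiles_x)
            for y in range(self.tiles_y)
        }


if __name__ == "__main__":
    sensor = ReconfigurableSensor()
    # Frame 1: one patch of tiles watches in thermal IR while the rest stays in video mode.
    sensor.configure_tile(0, 0, Mode.THERMAL_IR)
    sensor.configure_tile(0, 1, Mode.THERMAL_IR)
    frame1 = sensor.capture_frame()
    # Frame 2: the same patch toggles to 3-D LIDAR on the very next frame.
    sensor.configure_tile(0, 0, Mode.LIDAR_3D)
    sensor.configure_tile(0, 1, Mode.LIDAR_3D)
    frame2 = sensor.capture_frame()
    print(frame1[(0, 0)], "->", frame2[(0, 0)])
```

In the envisioned hardware, a machine-learning layer rather than hand-written calls would decide which tiles get which modes, based on what the camera sees from frame to frame.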