Pentagon Wants an Imaging Sensor That Can Think


October 10, 2016

Digital imagery has come a long way in the last 20 years or so, from early images where you could see the square edges of the pixels to cameras with top-quality lenses and 160 million or more pixels, and phone cameras good enough to shoot professional sporting events. The Energy Department also is developing a 3.2-gigapixel camera to serve as the eye of the Large Synoptic Survey Telescope.

What could be next? How about an imaging sensor that essentially can think, combining data from multiple sensors and using machine learning to adjust the image based on what's happening in the frame? That's the idea behind a new Defense Advanced Research Projects Agency program called ReImagine, or Reconfigurable Imaging.

The goal of the program isn't so much to improve everyday photography as to improve situational awareness for warfighters by combining such inputs as infrared emissions, different resolutions or frame rates, and 3D LIDAR (light detection and ranging). The system would use a million pixels in an array the size of a thumbnail, with more than 1,000 transistors per pixel, giving each pixel a programmable ability to adjust to the image being delivered.
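As an illustration only, the per-pixel reconfiguration idea can be sketched in Python. The sensing modes, signal thresholds, and selection policy below are invented for the example; they are not drawn from DARPA's actual ReImagine design:

```python
# Hypothetical sketch of the per-pixel reconfiguration idea: each pixel in a
# small array selects a sensing mode based on what is happening in its patch
# of the scene. Mode names and thresholds are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Pixel:
    """One software-programmable pixel with a selectable sensing mode."""
    mode: str = "visible"  # assumed default mode

    def reconfigure(self, motion: float, heat: float) -> str:
        # Illustrative policy: fast local motion favors a high-frame-rate
        # mode, a strong thermal signal favors infrared, otherwise the
        # pixel stays in the visible-light mode.
        if motion > 0.5:
            self.mode = "high_frame_rate"
        elif heat > 0.5:
            self.mode = "infrared"
        else:
            self.mode = "visible"
        return self.mode


# A tiny 2x2 "array": each pixel adapts independently to local conditions.
array = [[Pixel() for _ in range(2)] for _ in range(2)]
readings = [[(0.9, 0.1), (0.1, 0.8)],   # (motion, heat) per pixel
            [(0.2, 0.2), (0.6, 0.7)]]

modes = [[array[r][c].reconfigure(*readings[r][c]) for c in range(2)]
         for r in range(2)]
print(modes)
# [['high_frame_rate', 'infrared'], ['visible', 'high_frame_rate']]
```

The point of the sketch is the design idea the article describes: instead of one fixed readout for the whole chip, each pixel carries enough logic to change its own behavior as the scene changes.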