Intel Gets Serious About Neuromorphic, Cognitive Computing Future


March 13, 2017

Like all hardware device makers eager to meet the newest market opportunity, Intel is placing multiple bets on the future of machine learning hardware. The chipmaker has already cast its Xeon Phi and future integrated Nervana Systems chips into the deep learning pool while touting regular Xeons to do the heavy lifting on the inference side.

However, a recent conversation we had with Intel turned up a surprising new addition to its machine learning story: an emphasis on neuromorphic devices and what Intel is openly calling “cognitive computing” (a term used primarily, and heavily, for IBM’s Watson-driven AI technologies). This is the first time we have heard the company make any definitive claims about where neuromorphic chips might fit into its strategy to capture machine learning, and it marks a bold grab for “cognitive computing,” which has long served as an umbrella term for Big Blue’s AI business.

Intel has been developing neuromorphic devices for some time, with one of its first widely known prototypes appearing in 2012. At the same time, IBM was still building out efforts around its own “TrueNorth” neuromorphic architecture, which we do not generally hear much about these days outside of its role as a reference point for the new neuro-inspired devices we have watched roll out over the last couple of years. Some might suggest that Intel’s renewed interest in neuromorphic computing aligns with the DoE’s assertion that at least one of the forthcoming exascale machines must utilize a novel architecture (although just what qualifies as “novel” is still up for debate), and some believe neuromorphic is a strong contender. The problem is that if neuromorphic is one of the stronger bets, some big challenges lie ahead. Near term, no neuromorphic devices are being produced at a scale sufficient to warrant an already-risky DoE investment; longer term, programming such devices, even to handle offload workloads for existing large-scale scientific simulations, is a tall order.