"New Hardware Offers Faster Computation For Artificial Intelligence, With Much Less Energy"

As scientists push the boundaries of machine learning, the amount of money, energy, and time required to train increasingly complex neural network models is skyrocketing. A new area of artificial intelligence called analog deep learning promises faster computation with a fraction of the energy usage. Researchers at MIT explain that programmable resistors are the key building blocks of analog deep learning, just as transistors are the core elements of digital processors. By repeating arrays of programmable resistors in complex layers, researchers can create a network of analog artificial "neurons" and "synapses" that execute computations just like a digital neural network. This network can then be trained to perform complex AI tasks such as image recognition and natural language processing.

The MIT team set out to push the speed limits of a type of human-made analog synapse it had previously developed. By using a practical inorganic material in the fabrication process, the researchers built devices that run 1 million times faster than previous versions, which is also about 1 million times faster than the synapses in the human brain, while remaining extremely energy-efficient. Unlike the materials used in the earlier version of the device, the new material is compatible with silicon fabrication techniques. That compatibility has enabled fabrication at the nanometer scale and could pave the way for integration into commercial computing hardware for deep-learning applications.
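
To make the idea of resistor arrays acting as neural-network layers concrete, here is a minimal Python sketch of the general principle behind analog crossbar computation. It is not the MIT device: the array size, conductance values, and the ReLU stand-in for the "neuron" are all illustrative assumptions. The point is that, in a crossbar, each programmable conductance plays the role of a synaptic weight, and Ohm's and Kirchhoff's laws perform the matrix-vector product in one analog step.

import numpy as np

# Toy illustration (not the MIT device): a crossbar of programmable
# resistors computing one neural-network layer in a single analog step.
# G[i, j] is the conductance of the resistor joining input row i to
# output column j; it acts as the synaptic weight. Applying input
# voltages v along the rows produces column currents G.T @ v, because
# each column simply sums the currents flowing through its resistors.

rng = np.random.default_rng(0)

n_inputs, n_neurons = 4, 3
G = rng.uniform(0.0, 1e-3, size=(n_inputs, n_neurons))  # conductances in siemens (hypothetical values)
v = rng.uniform(0.0, 0.5, size=n_inputs)                # input voltages in volts (hypothetical values)

# Analog matrix-vector multiply: each output column sums its resistor currents.
currents = G.T @ v

# A downstream circuit would apply the nonlinearity (the analog "neuron");
# a ReLU is used here purely as a stand-in.
activations = np.maximum(currents, 0.0)

print("column currents (A):", currents)
print("activations:", activations)

In such a scheme, training amounts to reprogramming the conductances G, which is why the speed and energy cost of updating each programmable resistor matter so much.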


MIT News reports: "New Hardware Offers Faster Computation For Artificial Intelligence, With Much Less Energy"
