[comp.research.japan] Neurochip announced

shaun@isr.recruit.co.jp (Shaun Lawson) (03/12/91)

(The following is a summarized translation of an article found in the 3/12/91
edition of the Nikkei Sangyou Newspaper, prepared by Shaun Lawson of the 
Institute for Supercomputing Research [shaun@isr.recruit.co.jp].  Please
include this notice when forwarding.)
===============================================================================
Mitsubishi Electric has announced what it claims is the world's fastest
and largest scale learning neurochip.  The chip they have developed
contains 336 neurons (units) and 28,000 synapses (connections).  The
chip area required for a single synapse has been halved, and a new
circuit architecture has been adopted in which system performance does
not degrade as the scale is increased by connecting many chips together.
They plan to create a prototype neural network during this year, which
has 1,000 units and 1,000,000 connections.

Synapse weights are expressed as the amount of electricity stored in
capacitors, and learning takes place through changes in these amounts.
Conventional synapse circuits are all digital; the new chip, however,
calculates and updates the weights in an analog fashion.  As a result,
it was possible to hold the area needed for a single synapse to 70
square microns, without decrease in efficiency.

The execution speed is 1 TCPS and 28 GCUPS [*], far faster than Mitsubishi's
previous chip, which was also the world's fastest when it was released
last year.

In order to overcome the problem of decreased performance with larger
scale due to the increase in connections, a new architecture which
splits a single neuron over several chips was developed.  For example,
if a neural net is created with four chips, then the circuits for a
single neuron will be distributed throughout all four chips.

Evaluation tests with connection of several chips have revealed that
it is possible to create neural networks of up to several hundred
chips.   It is therefore possible to create a neural network made up
of 200 chips which will contain 3,000 neurons and 5,600,000 synapses.
It is said that the performance in such a case would be 200 TCPS and
5.6 TCUPS.

[*: CPS  = Connections Per Second          (speed without learning)
    CUPS = Connections Updated Per Second  (speed with learning)
    G    = Giga
    T    = Tera ]
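The 200-chip figures quoted above follow from scaling the single-chip
numbers (28,000 synapses, 1 TCPS, 28 GCUPS) linearly by the chip count.
A quick sketch (in Python, not from the article) to verify the
arithmetic:

```python
# Check the 200-chip figures, assuming synapse count and throughput
# scale linearly with the number of chips (per-chip numbers from the
# article's single-chip description).

CHIPS = 200
SYNAPSES_PER_CHIP = 28_000   # synapses (connections) on one chip
CPS_PER_CHIP = 1e12          # 1 TCPS  (connections per second)
CUPS_PER_CHIP = 28e9         # 28 GCUPS (connections updated per second)

synapses = CHIPS * SYNAPSES_PER_CHIP
cps = CHIPS * CPS_PER_CHIP
cups = CHIPS * CUPS_PER_CHIP

print(f"{synapses:,} synapses")    # 5,600,000 synapses
print(f"{cps / 1e12:g} TCPS")      # 200 TCPS
print(f"{cups / 1e12:g} TCUPS")    # 5.6 TCUPS
```

Note that the quoted neuron count (3,000 for 200 chips) is far below a
linear 200 x 336, consistent with the split-neuron architecture
described above, in which one neuron's circuitry is distributed across
several chips.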

-------------------------------------------------------------------------------
Shaun Lawson                                  Tel : (03)3536-7770
Institute for Supercomputing Research         Fax : (03)3536-7769
1-13-1 Kachidoki, Chuo-ku                Internet : shaun@isr.recruit.co.jp 
Tokyo, Japan 104                      Disclaimers : Standard