WO1991019259A1 - Architecture distributive et numerique de maximalisation, et procede - Google Patents
Architecture distributive et numerique de maximalisation, et procede
- Publication number
- WO1991019259A1 WO1991019259A1 PCT/US1990/003068 US9003068W WO9119259A1 WO 1991019259 A1 WO1991019259 A1 WO 1991019259A1 US 9003068 W US9003068 W US 9003068W WO 9119259 A1 WO9119259 A1 WO 9119259A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- processor node
- node
- data
- processor
- value
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims description 15
- 230000001902 propagating effect Effects 0.000 claims description 2
- 230000007246 mechanism Effects 0.000 abstract description 9
- 230000006870 function Effects 0.000 description 24
- 238000013528 artificial neural network Methods 0.000 description 8
- 239000011159 matrix material Substances 0.000 description 7
- 238000010586 diagram Methods 0.000 description 5
- 238000004891 communication Methods 0.000 description 3
- 210000000653 nervous system Anatomy 0.000 description 3
- 210000002569 neuron Anatomy 0.000 description 3
- 230000000644 propagated effect Effects 0.000 description 3
- 241001465754 Metazoa Species 0.000 description 2
- 238000013459 approach Methods 0.000 description 2
- 238000003491 array Methods 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 238000005094 computer simulation Methods 0.000 description 2
- 230000001934 delay Effects 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 210000003050 axon Anatomy 0.000 description 1
- 230000003925 brain function Effects 0.000 description 1
- 230000019771 cognition Effects 0.000 description 1
- 230000001149 cognitive effect Effects 0.000 description 1
- 230000003111 delayed effect Effects 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000008450 motivation Effects 0.000 description 1
- 230000008520 organization Effects 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 230000000946 synaptic effect Effects 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored program computers
- G06F15/80—Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
- G06F15/8007—Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors single instruction multiple data [SIMD] multiprocessors
- G06F15/8015—One dimensional arrays, e.g. rings, linear arrays, buses
Definitions
- the instant invention relates to a computer processor architecture, and specifically to an architecture which provides a maximization structure and method of determining which of several processor nodes in an array contains a maximum data value.
- Neural networks are a form of architecture which enables a computer to closely approximate human thought processes.
- One form of neural network architecture enables single instruction stream, multiple data stream (SIMD) operation.
- A nervous system, and a neurocomputational computer modeled on one, is characterized by a continuous, non-symbolic, and massively parallel structure that is tolerant of input noise and hardware failure.
- Representations, i.e., the inputs, are distributed among groups of computing elements, which independently reach a result or conclusion, and which then generalize and interpolate information to reach a final output conclusion.
- Connectionist/neural networks search for "good" solutions using massively parallel computations of many small computing elements.
- the model is one of parallel hypothesis generation and relaxation to the dominant, or “most-likely” hypothesis.
- the search speed is more or less independent of the size of the search space.
- Learning is a process of incrementally changing the connection (synaptic) strengths, as opposed to allocating data structures.
- "Programming" in such a neural network is by example.
- One particularly useful operation which is performed by neural networks is character recognition, which may be used to input printed or handwritten material into an electronic storage device. It is necessary for the system to recognize a particular character from among hundreds of possible characters.
- a matrix may be provided for each possible character, which matrix is stored in a processor node, for comparison to a similar matrix which is generated by analyzing the character to be stored.
- The input matrix is compared, by all processor nodes, to the values contained therein, and the best match, i.e., the maximum correlation between the stored data and the input data, determines how the input matrix will be interpreted and stored electronically.
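- As a hedged illustration of this best-match step (not the patent's code; template count, matrix size, and function names are assumptions), the following C sketch correlates an input matrix against each stored template and selects the index of the maximum correlation:

```c
#define N_TEMPLATES 4   /* assumed number of stored character templates      */
#define N_PIXELS    16  /* assumed matrix size (e.g., 4x4), stored flattened */

/* Correlate one stored template with the input matrix. */
static int correlate(const int *stored, const int *input)
{
    int sum = 0;
    for (int i = 0; i < N_PIXELS; i++)
        sum += stored[i] * input[i];
    return sum;
}

/* Return the index of the template with the maximum correlation,
 * i.e., the "best match" that decides how the input is interpreted. */
int best_match(const int templates[N_TEMPLATES][N_PIXELS], const int *input)
{
    int best = 0, best_score = correlate(templates[0], input);
    for (int t = 1; t < N_TEMPLATES; t++) {
        int score = correlate(templates[t], input);
        if (score > best_score) {
            best_score = score;
            best = t;
        }
    }
    return best;
}
```

In the invention this comparison is distributed, with each PN holding one or more templates and the winner determined by the maximization architecture described below.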
- the problem of how to distribute the comparison function and determination of best match across a processor array with potentially thousands of processor nodes may be solved in a variety of ways.
- the most efficient way to determine a best match would be to use an analog system, where the magnitude of a current is proportional to the number being maximized.
- There are a number of electrical problems with this approach. The analog approach has limited precision and will not work well across multiple integrated circuits, i.e., it is usually limited to a single chip and is therefore limited to the number of PNs which may be placed on a single chip.
- An object of the invention is to provide a maximization architecture which determines, in an array of processor nodes, which processor node has the maximum data figure contained therein.
- Another object of the invention is to provide a maximization architecture which allows selectable, arbitrarily large, precision of determining maximization.
- A further object of the invention is to provide a maximization architecture which analyzes a data value on a bit-by-bit basis to produce a winner-take-all result without necessarily analyzing every bit of every data figure contained in the array of processor nodes.
- Yet another object of the invention is to provide a method of determining, without necessarily examining every bit of every data value, which is the maximum data value contained in an array of processor nodes which extend over multiple integrated circuits.
- the maximization architecture of the invention includes an array of processor nodes wherein each node has a manipulation unit contained therein. Each node is connected to an input bus and to an output bus.
- a data register is located in each processor node and contains a data figure, which consists of a plurality of segments, or bits, wherein each segment or bit has a value.
- A maximization mechanism is located in each processor node and is connected to an arbitration bus, which extends between adjacent processor nodes, and to an arbitration mechanism, which is connected to the arbitration bus, for comparing the value of a bit to a signal which is transmitted on the arbitration bus and for subsequently transmitting a comparison indicator.
- Each processor node includes first and second indicator retainers, which are connected to the arbitration means for retaining the comparison indicator.
- a flag mechanism is provided and flags the processor node which contains the maximum data figure.
- the method of the invention includes providing the previously identified structure and initially setting the flag mechanism of each processor node with a positive flag.
- The value of a subject bit in the data register is examined to determine whether it has a value of one or zero. If and only if the value is zero, and the value of the subject segment of at least one other processor node is one, the flag register of the designated processor node is set with a negative flag. The value of the flag register is placed into the arbitration mechanism and transmitted to the indicator retainers. The steps are reiterated until only one processor node has a positive flag. The remaining positive flag indicates that the particular processor node contains the maximum data figure.
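- To make the bit-by-bit method concrete, here is a hedged software sketch (illustrative only, not the patent's code; the bit width, array bound, and names are assumptions) that scans from the most significant bit downward, clearing the flag of any still-flagged node that holds a 0 while some other still-flagged node holds a 1:

```c
#include <stdint.h>

#define N_BITS 16   /* assumed width of each data figure     */
#define MAX_PN 64   /* assumed upper bound on the array size */

/* Return the index of a PN holding the maximum value.
 * All flags start positive; on each bit, any flagged PN holding a 0
 * while some other flagged PN holds a 1 sets its flag negative. */
int find_max_pn(const uint16_t *data, int n_pn)
{
    int flag[MAX_PN];
    for (int p = 0; p < n_pn; p++)
        flag[p] = 1;                          /* positive flag in every PN */

    for (int bit = N_BITS - 1; bit >= 0; bit--) {
        int any_one = 0;                      /* stands in for the arbitration bus */
        for (int p = 0; p < n_pn; p++)
            if (flag[p] && ((data[p] >> bit) & 1))
                any_one = 1;
        if (!any_one)
            continue;                         /* no flagged PN held a 1; flags unchanged */
        for (int p = 0; p < n_pn; p++)
            if (flag[p] && !((data[p] >> bit) & 1))
                flag[p] = 0;                  /* negative flag */
    }
    for (int p = 0; p < n_pn; p++)
        if (flag[p])
            return p;                         /* ties resolved here by lowest index */
    return -1;
}
```

Consistent with the text above, the outer loop can stop early once only one flag remains set, so every bit of every data figure need not be examined.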
- FIG. 1 is a schematic diagram of a broadcast communication pattern of communication nodes contained within processor nodes of the invention.
- Fig. 2 is a block diagram of a portion of an array of processor nodes of the invention containing the maximization architecture of the invention.
- Fig. 3 is a block diagram of a single processor node of the invention.
- Fig. 4 is a block diagram of the maximization function of the invention.
- a CN is a state associated with an emulated node in a neural network located in a PN.
- Each PN may have several CNs located therein.
- the CNs are often arranged in "layers", with CN0 - CN3 comprising one layer, while CN4 - CN7 comprise a second layer.
- The array depicted would generally include four PNs, such as PN0, PN1, PN2 and PN3 (28, 30, 32 and 34, respectively).
- There may be more than two layers of connection nodes in any one processor node or in any array of processor nodes.
- a typical array of processor nodes may include hundreds or thousands of individual PNs.
- Connection nodes operate in what is referred to as a broadcast hierarchy, wherein each of connection nodes 0-3 broadcasts to each of connection nodes 4-7.
- An illustrative technique for arranging such a broadcast hierarchy is disclosed in U.S. Patent No. 4,796,199, NEURAL-MODEL INFORMATION-HANDLING ARCHITECTURE AND METHOD, to Hammerstrom et al., January 3, 1989, which is incorporated herein by reference.
- the available processor nodes may be thought of as a "layer" of processors, each executing its function (multiply, accumulate, and increment weight index) for each input, on each clock, wherein one processor node broadcasts its output to all other processor nodes.
- By using the output processor node arrangement described herein, it is possible to provide n² connections in n clocks using only a two-layer arrangement.
- Conventional SIMD structures may accomplish n² connections in n clocks, but require a three-layer configuration, or 50% more structure.
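- As an illustrative check of the n²-in-n claim (numbers assumed for the example): with n = 4 PNs, each PN broadcasts its output on one of 4 successive clocks, and on every clock all 4 PNs multiply and accumulate the broadcast value, so 4 × 4 = 16 = n² connections are serviced in n = 4 clocks using only the broadcasting layer and the receiving layer.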
- the boundaries of the individual chips do not interrupt broadcast through processor node arrays, as the arrays may span as many chips as are provided in the architecture.
- Each processor node includes a manipulation unit 40, which is depicted in greater detail in Fig. 3 and will be described later herein.
- Each PN includes a data register 44 which contains a data figure. Subscripts are used in connection with the reference numeral to designate the particular PN in which a structure is located. For instance, each PN includes a data register 44; reference numeral 28 refers to PN0, which includes data register 44₀ therein. The data figure consists of a plurality of segments or bits, each segment or bit having a value. The technique which is used to arrive at the value in data register 44 will be described subsequently herein. Each PN also includes a left flip-flop 46 and a right flip-flop 48.
- Each left flip-flop is connected to the data register in the immediately adjacent processor node to the left by a connection 50.
- Each right flip-flop is connected to the data register in the immediately adjacent processor node to the right by a connection 52.
- Connections 50 and 52 comprise what is referred to herein as an arbitration bus 53. It should be appreciated that while the connections are shown extending directly between the appropriate flip-flops and the data register, the arbitration bus may be formed as part of the input/output bus structure.
- Each processor node includes an OR gate 54, which is also referred to herein as arbitration means.
- the inputs to OR gates 54 are designated by reference numerals 56, 58 and come from connections 50, 52 respectively.
- the outputs 60 of OR gates 54 are connected to the left and right flip-flops.
- A max flag register 62, also referred to herein as flag means or maximum value indicator, receives and holds a value based on the comparison and arbitration which take place amongst the maximization architecture of the various processor nodes.
- Manipulation unit 40 includes an input unit 62 which is connected to input bus 36 and output bus 38.
- a processor node controller 64 is provided to establish operational parameters for each processor node.
- An addition unit 66 provides for addition operations and receives input from input unit 62.
- a multiplier unit 68 is provided and is connected to both the input and output buses and addition unit 66.
- a register unit 70 contains an array of registers, which may include data register 44 as well as flag register 62.
- each processor node includes an array of 32 16-bit registers. A number of other arrangements may be utilized.
- a weight address generation unit 72 is provided and computes the next address for a weight memory unit 74.
- The address may be set in one of two ways: (1) by a direct write to a weight address register, or (2) by asserting a command which causes the contents of a weight offset register to be added to the current contents of the memory address register, thereby producing a new address; a sketch of these two modes follows.
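- The following C fragment is a hedged model of the two addressing modes just described (register names and widths are assumptions, not the patent's code):

```c
#include <stdint.h>

/* Illustrative model of weight address generation unit 72. */
typedef struct {
    uint16_t weight_addr;    /* current weight memory address        */
    uint16_t weight_offset;  /* offset added on an "advance" command */
} weight_agu_t;

/* Mode (1): direct write to the weight address register. */
static void agu_write_addr(weight_agu_t *agu, uint16_t addr)
{
    agu->weight_addr = addr;
}

/* Mode (2): command causing the weight offset to be added to the
 * current address, producing the next weight memory address. */
static void agu_advance(weight_agu_t *agu)
{
    agu->weight_addr = (uint16_t)(agu->weight_addr + agu->weight_offset);
}
```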
- An output unit 76 is provided to store data prior to the data being transmitted on output bus 38.
- Output unit 76 may include an output buffer, which receives data from the remainder of the output unit prior to the data being transmitted on the output bus.
- The data is transmitted to output bus 38 by means of one or more connection nodes, such as CN0 or CN4, for instance, which are part of the output unit of PN0. While only single input and output buses are depicted in the drawings for the sake of simplicity, it should be appreciated that multiple input and/or output buses may be provided and, if so, the various components of the manipulation unit will have connections to each input and output bus. Additionally, the input/output bus may be arranged to connect only to the input and output units, and a separate internal PN bus may be provided to handle communications among the various components of the PN.
- How processor node array 10 determines which processor node has a maximum data figure contained therein will now be described. This description begins with the assumption that each PN in the array has a data figure therein, which is the result of the manipulation of data by the PN, and that the maximization function of the invention will determine which PN in the array has the maximum data figure contained in its data register.
- the maximization function of the invention is depicted generally at 78.
- the first step in the maximization function is to set max flag (mxflg) 62 to "1" in each PN, block 80.
- flip-flops 46 and 48 also referred to as flag registers, are cleared, i.e., set to zero, block 82.
- Block 86 asks whether the most significant bit is equal to one. If the answer to 86 is "yes", the value of mxflg is loaded into OR gate 54, which results in a "1" being propagated throughout the right and left flip-flops to all of the processor nodes in the array, block 88. This step enables any processor node that answered "no" to block 86 to inquire whether any other most significant bit was equal to one, block 90. If block 90 is answered in the affirmative, the mxflg is set to zero for that designated processor node, block 92.
- Tie-breaker routine 100 may be determined by individual programmers who decide, according to the criteria being measured, which PN will be determined to hold the value of interest, and consequently, the data of interest.
- Each flip-flop acts similarly to an axon in an animal neuron: it is set by the first signal, and the signal "remains" in the flip-flop until the flip-flop is instructed to change the signal. Therefore, when one PN propagates a "1", the "1" is propagated indefinitely to all other PNs in the network until the mechanism is cleared.
- If block 94 is answered "no", i.e., the routine has been iterated fewer times than there are bits in the data register, the segments of the data registers will be shifted, block 96, the counter incremented, block 98, and the max function iterated again beginning with block 82, clearing the flip-flops.
- PN0 will set its mxflg to zero.
- PN2 will set its mxflg to zero in the fourth iteration, which will result in the end of the max function.
- The function will be iterated until block 94 is answered "yes". If there is no tie, as in the case being described, the tie breaker routine, block 100, if present, will ignore the data, and the maximization function will be ended, block 102.
- PN2 will be determined to have the largest data figure by the maximization function and architecture of the invention. This means that PN2 contains the best match to the input data and represents the particular matrix value being sought.
- the following code is a simplification of the code that describes the actual CMOS implementation of the PN structure in a neurocomputer chip.
- the code shown below is in the C programming language embellished by certain predefined macros.
- the code is used as a register transfer level description language in the actual implementation of the circuitry described here.
- Bolded text indicates a signal, hardware or firmware component, or a phase or clock cycle.
- The ph1 and ph2 variables simulate the two phases in the two-phase, non-overlapping clock used to implement dynamic MOS devices.
- toright_B and toleft_B are signals which propagate 1's to the right or left along the arbitration bus.
- mxsw is used to disconnect one region from another so that local maximization functions may be performed.
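- The code itself is not reproduced in this extract; as a stand-in, the following is a hedged, simplified C sketch (illustrative only, not the patent's register-transfer code) of one per-bit arbitration pass for a linear array of PNs, using the toright_B and toleft_B propagation just described. The mxsw regional disconnect and the ph1/ph2 clock phases are omitted, and the array size, bit width, and function names are assumptions:

```c
#include <stdint.h>

#define N_PN   8    /* assumed number of PNs in the (sub)array */
#define N_BITS 16   /* assumed data figure width               */

/* One arbitration pass over the current bit position.  Each PN whose
 * mxflg is still set drives a 1 if its current data bit is 1; the 1 is
 * propagated left and right through the flip-flop chain (modeled here
 * as one sweep in each direction). */
static void arbitrate_bit(const uint16_t data[N_PN], int bit, int mxflg[N_PN])
{
    int toright_B[N_PN + 1] = {0};   /* 1's propagating to the right */
    int toleft_B[N_PN + 1]  = {0};   /* 1's propagating to the left  */

    for (int p = 0; p < N_PN; p++) {        /* left-to-right sweep */
        int drive = mxflg[p] && ((data[p] >> bit) & 1);
        toright_B[p + 1] = toright_B[p] | drive;
    }
    for (int p = N_PN - 1; p >= 0; p--) {   /* right-to-left sweep */
        int drive = mxflg[p] && ((data[p] >> bit) & 1);
        toleft_B[p] = toleft_B[p + 1] | drive;
    }
    for (int p = 0; p < N_PN; p++) {
        int my_bit     = (data[p] >> bit) & 1;
        int others_one = toright_B[p] | toleft_B[p + 1];
        /* A still-flagged PN whose bit is 0 drops out when another PN drove a 1. */
        if (mxflg[p] && !my_bit && others_one)
            mxflg[p] = 0;
    }
}

/* Full maximization: blocks 80-98, scanning from the most significant bit. */
void run_max(const uint16_t data[N_PN], int mxflg[N_PN])
{
    for (int p = 0; p < N_PN; p++)
        mxflg[p] = 1;
    for (int bit = N_BITS - 1; bit >= 0; bit--)
        arbitrate_bit(data, bit, mxflg);
}
```

This refines the earlier find_max_pn sketch by modeling the left and right propagation along the arbitration bus explicitly rather than as a single global OR.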
- The provision of the maximization architecture of the processor node array provides a structure in which all PNs in the array have general information about the state of other PNs in the array. As previously noted, one way to provide this information would be to charge a bus or line which connects to all PNs and let all of the PNs compare the value on the line or bus with the value contained in the PN data register. However, this requires an extraordinary amount of energy and, in a fault tolerant system, such as a neural network, electrical delays and other electrical considerations may affect the integrity of the signal on such a widespread bus.
- By providing the flag registers, or flip-flops, in each PN, data is shared amongst all of the PNs in the array through the use of very short connections, the right and left flip-flops acting as latches whose outputs go to a known state at a given time, thereby providing synchronous data to all of the processor nodes in the array.
- a maximization can be performed upon several data figures contained in a single PN to determine which of the data figures has the largest value, and the maximization function amongst the PNs then can be run to determine which PN contains the maximum data figure.
- The maximization architecture and function may also be used to determine a minimum data figure by providing an additional step of subtracting each segment value from 1, i.e., complementing each bit, and then performing the previously described maximization function, as sketched below.
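- A minimal sketch of this complement-then-maximize idea, assuming the 16-bit data figures used in the sketches above:

```c
#include <stdint.h>

/* Complement each bit (subtract each segment value from 1).  Running the
 * maximization function on the complemented figures flags the PN holding
 * the minimum original data figure. */
uint16_t complement_for_min(uint16_t value)
{
    return (uint16_t)~value;
}
```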
- Processors constructed according to the invention are useful in neural network systems which may be used to simulate human brain functions in analysis and decision making applications, such as character recognition and robotic control.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Biophysics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Neurology (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Multi Processors (AREA)
Abstract
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2511266A JPH05501460A (ja) | 1990-05-30 | 1990-05-30 | 分散ディジタル最大化機能アーキテクチャおよびその方法 |
PCT/US1990/003068 WO1991019259A1 (fr) | 1990-05-30 | 1990-05-30 | Architecture distributive et numerique de maximalisation, et procede |
EP19900911963 EP0485466A4 (en) | 1990-05-30 | 1990-05-30 | Distributive, digital maximization function architecture and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US1990/003068 WO1991019259A1 (fr) | 1990-05-30 | 1990-05-30 | Architecture distributive et numerique de maximalisation, et procede |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1991019259A1 true WO1991019259A1 (fr) | 1991-12-12 |
Family
ID=22220892
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1990/003068 WO1991019259A1 (fr) | 1990-05-30 | 1990-05-30 | Architecture distributive et numerique de maximalisation, et procede |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP0485466A4 (fr) |
JP (1) | JPH05501460A (fr) |
WO (1) | WO1991019259A1 (fr) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1993019431A1 (fr) * | 1992-03-20 | 1993-09-30 | Maxys Circuit Technology Ltd. | Architecture de processeur vectoriel en parallele |
EP0694855A1 (fr) * | 1994-07-28 | 1996-01-31 | International Business Machines Corporation | Circuit de recherche/triage pour réseaux neuronaux |
EP0619557A3 (fr) * | 1993-03-31 | 1996-06-12 | Motorola Inc | Système et méthode de traitement des données. |
GB2393286A (en) * | 2002-09-17 | 2004-03-24 | Micron Europe Ltd | Method for finding a local extreme of a set of values associated with a processing element by separating the set into an odd and an even position pair of sets |
DE10260176A1 (de) * | 2002-12-20 | 2004-07-15 | Daimlerchrysler Ag | Verfahren und Vorrichtung zur Datenerfassung |
US7447720B2 (en) | 2003-04-23 | 2008-11-04 | Micron Technology, Inc. | Method for finding global extrema of a set of bytes distributed across an array of parallel processing elements |
US7454451B2 (en) | 2003-04-23 | 2008-11-18 | Micron Technology, Inc. | Method for finding local extrema of a set of values for a parallel processing element |
US7574466B2 (en) | 2003-04-23 | 2009-08-11 | Micron Technology, Inc. | Method for finding global extrema of a set of shorts distributed across an array of parallel processing elements |
FR3015068A1 (fr) * | 2013-12-18 | 2015-06-19 | Commissariat Energie Atomique | Module de traitement du signal, notamment pour reseau de neurones et circuit neuronal |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3484749A (en) * | 1964-08-21 | 1969-12-16 | Int Standard Electric Corp | Adaptive element |
US3613084A (en) * | 1968-09-24 | 1971-10-12 | Bell Telephone Labor Inc | Trainable digital apparatus |
US4858147A (en) * | 1987-06-15 | 1989-08-15 | Unisys Corporation | Special purpose neurocomputer system for solving optimization problems |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3970993A (en) * | 1974-01-02 | 1976-07-20 | Hughes Aircraft Company | Cooperative-word linear array parallel processor |
US4843540A (en) * | 1986-09-02 | 1989-06-27 | The Trustees Of Columbia University In The City Of New York | Parallel processing method |
US5093781A (en) * | 1988-10-07 | 1992-03-03 | Hughes Aircraft Company | Cellular network assignment processor using minimum/maximum convergence technique |
-
1990
- 1990-05-30 JP JP2511266A patent/JPH05501460A/ja active Pending
- 1990-05-30 EP EP19900911963 patent/EP0485466A4/en not_active Withdrawn
- 1990-05-30 WO PCT/US1990/003068 patent/WO1991019259A1/fr not_active Application Discontinuation
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3484749A (en) * | 1964-08-21 | 1969-12-16 | Int Standard Electric Corp | Adaptive element |
US3613084A (en) * | 1968-09-24 | 1971-10-12 | Bell Telephone Labor Inc | Trainable digital apparatus |
US4858147A (en) * | 1987-06-15 | 1989-08-15 | Unisys Corporation | Special purpose neurocomputer system for solving optimization problems |
Non-Patent Citations (1)
Title |
---|
See also references of EP0485466A4 * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1993019431A1 (fr) * | 1992-03-20 | 1993-09-30 | Maxys Circuit Technology Ltd. | Architecture de processeur vectoriel en parallele |
EP0619557A3 (fr) * | 1993-03-31 | 1996-06-12 | Motorola Inc | Système et méthode de traitement des données. |
US5664134A (en) * | 1993-03-31 | 1997-09-02 | Motorola Inc. | Data processor for performing a comparison instruction using selective enablement and wired boolean logic |
EP0694855A1 (fr) * | 1994-07-28 | 1996-01-31 | International Business Machines Corporation | Circuit de recherche/triage pour réseaux neuronaux |
US5740326A (en) * | 1994-07-28 | 1998-04-14 | International Business Machines Corporation | Circuit for searching/sorting data in neural networks |
GB2393286A (en) * | 2002-09-17 | 2004-03-24 | Micron Europe Ltd | Method for finding a local extreme of a set of values associated with a processing element by separating the set into an odd and an even position pair of sets |
GB2393286B (en) * | 2002-09-17 | 2006-10-04 | Micron Europe Ltd | Method for finding local extrema of a set of values for a parallel processing element |
DE10260176B4 (de) * | 2002-12-20 | 2006-05-18 | Daimlerchrysler Ag | Vorrichtung zur Datenerfassung |
US6970071B2 (en) | 2002-12-20 | 2005-11-29 | Daimlerchrysler Ag | Method and device for acquiring data |
DE10260176A1 (de) * | 2002-12-20 | 2004-07-15 | Daimlerchrysler Ag | Verfahren und Vorrichtung zur Datenerfassung |
US7447720B2 (en) | 2003-04-23 | 2008-11-04 | Micron Technology, Inc. | Method for finding global extrema of a set of bytes distributed across an array of parallel processing elements |
US7454451B2 (en) | 2003-04-23 | 2008-11-18 | Micron Technology, Inc. | Method for finding local extrema of a set of values for a parallel processing element |
US7574466B2 (en) | 2003-04-23 | 2009-08-11 | Micron Technology, Inc. | Method for finding global extrema of a set of shorts distributed across an array of parallel processing elements |
FR3015068A1 (fr) * | 2013-12-18 | 2015-06-19 | Commissariat Energie Atomique | Module de traitement du signal, notamment pour reseau de neurones et circuit neuronal |
WO2015090885A1 (fr) * | 2013-12-18 | 2015-06-25 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Module de traitement du signal, notamment pour reseau de neurones et circuit neuronal. |
US11017290B2 (en) | 2013-12-18 | 2021-05-25 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Signal processing module, especially for a neural network and a neuronal circuit |
Also Published As
Publication number | Publication date |
---|---|
EP0485466A1 (fr) | 1992-05-20 |
EP0485466A4 (en) | 1992-12-16 |
JPH05501460A (ja) | 1993-03-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Patel | Performance of processor-memory interconnections for multiprocessors | |
EP0075593B1 (fr) | Processeur micro-programmable par tranches de bits pour des applications de traitement de signaux | |
US7930517B2 (en) | Programmable pipeline array | |
Akl | Parallel sorting algorithms | |
US4974169A (en) | Neural network with memory cycling | |
US5517600A (en) | Neuro-chip and neurocomputer having the chip | |
Petrowski et al. | Performance analysis of a pipelined backpropagation parallel algorithm | |
US4228498A (en) | Multibus processor for increasing execution speed using a pipeline effect | |
CA2189148A1 (fr) | Ordinateur utilisant un reseau neuronal et procede d'utilisation associe | |
Kung et al. | Synchronous versus asynchronous computation in very large scale integrated (VLSI) array processors | |
EP0591286A1 (fr) | Architecture de reseau neuronal. | |
WO1991019259A1 (fr) | Architecture distributive et numerique de maximalisation, et procede | |
WO1993014459A1 (fr) | Systeme de traitement parallele modulaire | |
JPH10134033A (ja) | コンボリューション操作を行うための電子装置 | |
USRE31287E (en) | Asynchronous logic array | |
EP0557675B1 (fr) | Commande électronique en logique floue et procédé associé d'organisation des mémoires | |
EP0544629B1 (fr) | Architecture d'un contrôleur électronique basé sur la logique floue | |
Skubiszewski | An Extact Hardware Implementation of the Boltzmann Machine. | |
Linde et al. | Using FPGAs to implement a reconfigurable highly parallel computer | |
Wilson | Neural Computing on a One Dimensional SIMD Array. | |
den Bout | A stochastic architecture for neural nets | |
Lew et al. | Dynamic programming on a functional memory computer | |
US5958001A (en) | Output-processing circuit for a neural network and method of using same | |
Romanchuk | Evaluation of effectiveness of data processing based on neuroprocessor devices of various models | |
Ahn | Implementation of a 12-Million Hodgkin-Huxley Neuron Network on a Single Chip |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CA JP KR US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH DE DK ES FR GB IT LU NL SE |
|
WWP | Wipo information: published in national office |
Ref document number: 1990911963 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1990911963 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: CA |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 1990911963 Country of ref document: EP |