WO1991002323A1 - Adaptive network for classifying time-varying data - Google Patents
- Publication number
- WO1991002323A1 (PCT/US1990/004487)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- input
- domain
- neurons
- network
- information processor
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G01S7/415—Identification of targets based on measurements of movement associated with the target
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
Abstract
An information processor (20) for classifying a set of two-dimensional data. The data represents information from at least two domains. The processor (20) utilizes a neural network architecture having at least N + 1 input neurons (22), where N is the number of values in the first domain. The network (20) is trained to produce an output state that classifies a plurality of input signals belonging to a particular class. In the preferred embodiment the second domain is time.
Description
ADAPTIVE NETWORK FOR CLASSIFYING TIME-VARYING DATA
Background of the Invention
1. Technical Field
This invention relates to information processors and, more particularly, to a method and apparatus for classifying time varying data.
2. Discussion
Classifying complex time-varying data poses a number of difficult problems for conventional information processors. The task of classification typically involves recognizing patterns typical of known classes from large amounts of two-dimensional data. Where the patterns to be recognized have subtle variations between the known classes, traditional classifiers often fail to correctly distinguish between the classes. This is due, in part, to the strong assumptions which must be made concerning the underlying distributions of the input data. Algorithms must then be developed to extract characteristic features and to match known features with the input features for classification.
The success of the classifier is dependent on the correctness of these underlying assumptions. Many problems are not susceptible to explicit assumptions in algorithms, due to the subtlety of the patterns involved, as well as the wide variations of such patterns within each class. A further disadvantage with traditional classifiers is the extensive preprocessing normally required and the extensive time required to develop the algorithm and software to accomplish the pattern matching. Examples of such classification problems include classifying time-varying signals from various sources such as speech, image data, radar, sonar, etc. Also, conventional information processors are generally not fault tolerant, and cannot handle certain variations in the input signals such as changes in the orientation of a visual pattern or, in the case of speech recognition, differences in speakers.
In recent years it has been realized that conventional Von Neumann computers, which operate serially, bear little resemblance to the parallel processing that takes place in biological systems such as the brain. It is not surprising, therefore, that conventional information classification techniques fail to adequately perform the pattern recognition tasks performed by humans. Consequently, new methods based on neural models of the brain are being developed to perform perceptual tasks. These systems are known variously as neural networks, neuromorphic systems, learning machines, parallel distributed processors, self-organizing systems, or adaptive logic systems. Whatever the name, these models utilize numerous nonlinear computational elements operating in parallel and arranged in patterns reminiscent of biological neural networks. Each computational element or "neuron" is connected via weights or "synapses" that typically are adapted during training to improve performance. Thus, these systems exhibit self-learning by changing their synaptic weights until the correct output is achieved in response to a particular input. Once trained, neural
nets are capable of recognizing a pattern and producing a desired output even where the input is incomplete or hidden in background noise. Also, neural nets exhibit greater robustness, or fault tolerance, than Von Neumann sequential computers because there are many more processing nodes, each with primarily local connections. Damage to a few nodes or links need not impair overall performance significantly.
There are a wide variety of neural net models utilizing various topologies, neuron characteristics, and training, or learning, algorithms. Learning algorithms specify an internal set of weights and indicate how weights should be adapted during use, or training, to improve performance. By way of illustration, some of these neural net models include the Perceptron, described in U.S. Patent No. 3,287,649 issued to F. Rosenblatt; the Hopfield Net, described in U.S. Patent Nos. 4,660,166 and 4,719,591 issued to J. Hopfield; the Hamming Net and Kohonen self-organizing maps, described in R. Lippmann, "An Introduction to Computing with Neural Nets", IEEE ASSP Magazine, April 1987, pages 4-22; and the generalized delta rule for multilayer perceptrons, described in Rumelhart, Hinton, and Williams, "Learning Internal Representations by Error Propagation", in D. E. Rumelhart and J. L. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations, MIT Press (1986). While each of these neural net models achieves varying degrees of success at the particular task to which it is best suited, a number of difficulties are still encountered in classifying time-varying data when using neural network processors. For example, where the time-varying data is complex and involves
large quantities of data, a major problem is in developing a technique for representing the data to the neural network for processing. For example, in classifying radar or sonar doppler time signatures from objects, the minimum amount of data required to adequately represent classifications may involve, for example, fifty time slices of sixteen frequency bands of the doppler data. One way to present this data to a neural net processor would be to utilize a neural network with 800 (50 × 16) input neurons and to present each of the 800 input neurons with one sample of doppler data. The disadvantage of this approach is that such a large number of input neurons, and the corresponding large number of total neurons and interconnections, would result in a neural network that is very complex and expensive. Further, such a complex network takes a greater period of time to process information and to learn.
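The sizing trade-off above can be made concrete with a short sketch; the helper names are illustrative, not taken from the patent:

```python
def naive_input_count(time_slices: int, freq_bins: int) -> int:
    """One input neuron per sample: every frequency bin of every time slice."""
    return time_slices * freq_bins

def sliced_input_count(freq_bins: int) -> int:
    """The per-time-slice scheme: N frequency values plus one time value (N + 1)."""
    return freq_bins + 1

# Fifty time slices of sixteen frequency bands, as in the example above.
print(naive_input_count(50, 16))  # → 800 input neurons
print(sliced_input_count(16))     # → 17 input neurons
```

Presenting one time slice at a time thus shrinks the input layer from 800 neurons to 17.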
Thus, it would be desirable to provide a processor for classifying time-varying data with a minimum of preprocessing and requiring a minimum of algorithm and software development. It would also be desirable to provide a classification processor that is not based on explicit assumptions but instead can adapt by training to recognize patterns. It would also be desirable to provide a means for representing time-varying data to an adaptive processor in a simplified manner which reduces the total number of input values presented to the processor.
SUMMARY OF THE INVENTION
In accordance with the teachings of the present invention, an adaptive network is provided with at least N + 1 input neurons, where N equals the number of values in a first domain associated with a given
value in a second domain. The processor receives one of each of the N values in the first domain in the input neurons, and receives a single associated value from a second domain in the remaining input neuron. The network is trained using known training data to produce an output that serves to classify the known data. The network training is repeated for each value in the second domain by presenting that value together with each of the N values in the first domain as input. Once trained, the adaptive network will produce an output which classifies an unknown input when that input is from a class the adaptive network was trained to recognize.
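A minimal sketch of how such an N + 1 input vector might be assembled; the function name is a hypothetical illustration, not from the patent:

```python
def make_input_vector(first_domain_values, second_domain_value):
    """Assemble the N + 1 network inputs: the N first-domain values
    (e.g. doppler frequencies for one time slice) plus the single
    associated second-domain value (e.g. the time of that slice)."""
    return list(first_domain_values) + [second_domain_value]

# One time slice: 16 frequency values plus the slice's (normalized) time.
vec = make_input_vector([0.1] * 16, 0.5)
print(len(vec))  # → 17
```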
BRIEF DESCRIPTION OF THE DRAWINGS

The various advantages of the present invention will become apparent to those skilled in the art after reading the following specification and by reference to the drawings, in which:
FIG. 1(A-D) are representative doppler signatures from four classes of multiple moving objects;

FIG. 2 is a diagram of the adaptive network in accordance with the teachings of the present invention;

FIG. 3 is a representation of doppler data for four classes of objects; and

FIG. 4 is a drawing of an additional embodiment of the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENT

In accordance with the teachings of the present invention, a method and apparatus is provided for classifying two-dimensional data. The two-dimensional data can be derived from a variety of
signal sources such as infrared, optical, radar, sonar, etc. The data may be raw, that is unprocessed, or it may be processed. One example of such processing is doppler processing, wherein the difference in frequency between an outgoing and an incoming signal is analyzed. In general, if the object reflecting the transmitted energy back is stationary with respect to the source, there will be no shift in frequency observed in the returning energy. If the object is moving toward the source, the reflected energy will have a higher frequency, and if the object is moving away, the reflected energy will be lowered in frequency.
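The behavior just described follows the standard narrow-band, two-way (echo) doppler approximation for velocities much smaller than the propagation speed; the formula below is common radar practice, not something stated in the patent:

```python
def doppler_return_frequency(f0_hz: float, radial_velocity_mps: float,
                             c_mps: float = 3.0e8) -> float:
    """Two-way doppler approximation for v << c: an approaching object
    (v > 0) raises the returned frequency, a receding object (v < 0)
    lowers it, and a stationary object leaves it unchanged."""
    return f0_hz * (1.0 + 2.0 * radial_velocity_mps / c_mps)

print(doppler_return_frequency(10e9, 0.0))          # stationary: unchanged
print(doppler_return_frequency(10e9, 15.0) > 10e9)  # approaching: True
```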
In FIGS. 1(A-D), four doppler signatures from four different classes of objects are shown. In these figures the doppler frequency, that is, the shift in frequency of the signal returning from the object or objects, is represented along the horizontal axis. Time is represented along the vertical axis. It can be seen that FIGS. 1(A-D) each have a characteristic shape or pattern. The fact that the pattern changes from the lower portion of each figure to the upper portion indicates changes in the detected doppler frequencies over time. This would indicate changes in the motion of multiple objects in the particular instance for each of the four classes of objects.
It should be noted that two different instances of multiple objects within a given class will have doppler signatures which resemble, but are not exactly identical to, each other. Thus, while it may be relatively easy for an observer, upon visual inspection, to identify a doppler signature from a given class, because of the subtle variations from instance to instance within a class, it is difficult, if not impossible, for conventional processors to correctly identify the class of a doppler
signature. For this reason the pattern recognition capabilities of a neural network would seem to be well suited to solving the doppler time signature classifying problem. However, one problem that is encountered is that, due to practical limitations in the number of neurons in working neural networks, it would be difficult to provide a neural network with all of the information contained in the doppler signatures shown in FIGS. 1(A-D). To simplify the information, one could compress the data to its most essential characteristics. In this way, the data would be reduced to manageable proportions for processing by a neural network.
Accordingly, in FIG. 3 there is shown a representative simplified doppler signature for four different classes of objects. As in FIGS. 1(A-D), the horizontal axis represents the doppler frequency and the vertical axis represents time. Each horizontal line 10 in FIG. 3 represents the doppler frequencies received at a given time. There are 32 horizontal lines in FIG. 3, each representing a time slice of the doppler signal. The doppler signals in FIG. 3 are divided by means of vertical lines into four classes: a first class 12, a second class 14, a third class 15 and a fourth class 18. Like the four classes shown in FIGS. 1(A-D), the four classes in FIG. 3 represent doppler signals from four different types of objects, and each has a pattern that is characteristic of that object, or objects. Even though FIG. 3 represents much more simplified doppler data than that shown in FIGS. 1(A-D), representation of the four patterns in FIG. 3 to a neural network would still involve a large amount of data. In particular, each time slice 10 in each class is drawn from doppler frequencies from 16 frequency bins. There are 32 time slices 10 for each
class. Consequently, there would be 512 individual pieces of information for each class. Using conventional neural network techniques, a neural network having 512 input neurons might be required to process all of the information in each class shown in FIG. 3.
In order to simplify the representation of this data for presentation to the neural network, in accordance with the present invention, the data shown in FIG. 3 may be represented as indicated by FIG. 2. In FIG. 2 an adaptive network 20 in accordance with the preferred embodiment of the present invention is shown. The adaptive network 20 utilizes a conventional neural network architecture known as a multilayer perceptron. It will be appreciated by those skilled in the art that a multilayer perceptron utilizes a layer of input neurons 22, one or more layers of inner neurons 24 and a layer of output neurons 26. Each neuron in each layer is connected to every neuron in the adjacent layer by means of synaptic connections 27, but neurons in the same layer are not typically connected to each other. Each neuron accepts as input either a binary or a continuous-valued input and produces an output which is some transfer function of the inputs to that neuron. The multilayer perceptron shown in FIG. 2 may be trained by the conventional back propagation technique as is known in the art. This technique is described in detail in the above-mentioned article by D. E. Rumelhart and J. L. McClelland, which is incorporated herein by reference.
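The forward pass of such a multilayer perceptron can be sketched as follows. This is a minimal illustration only: the layer sizes, random weights, and the sigmoid transfer function are assumptions for the sketch, since the patent requires only that each neuron apply some transfer function to its weighted inputs.

```python
import math
import random

def sigmoid(x: float) -> float:
    """A common choice of neuron transfer function."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: every input feeds every neuron."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def mlp(inputs, layers):
    """Forward pass: input layer -> inner layer(s) -> output layer."""
    for weights, biases in layers:
        inputs = layer(inputs, weights, biases)
    return inputs

def random_layer(n_in, n_out, rng):
    """Untrained layer with random weights and zero biases."""
    return ([[rng.uniform(-1.0, 1.0) for _ in range(n_in)]
             for _ in range(n_out)], [0.0] * n_out)

# 8 inputs (7 frequency values + 1 time), 6 inner neurons, 8 outputs.
rng = random.Random(0)
net = [random_layer(8, 6, rng), random_layer(6, 8, rng)]
out = mlp([0.5] * 8, net)
print(len(out))  # → 8
```

Training by back propagation would then adjust the weight matrices until the output state matches the desired class encoding.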
In accordance with the present invention the adaptive network 20 is configured so that it has a particular number of input neurons 22 determined by the input data. In particular, in the example in FIG. 2, the doppler data contains seven frequency bins. It
will be appreciated that, for example, in FIG. 3 there will be 16 frequency bins, and that the number of doppler frequency bins will depend on the particular data to be analyzed, and the desired complexity of the adaptive network 20.
The doppler frequency curve 28, like the doppler frequency curves in FIG. 3, represents one time slice of doppler data. That is, it represents the doppler frequencies received at a given time. It is preferred that the range of frequencies be normalized so that they may be represented by a signal within a range that is acceptable to the input neurons 22. For example, the doppler frequencies may be normalized to have particular relative values between zero and one. As shown in FIG. 2, seven input neurons each receive a single doppler frequency value from the doppler frequency curve 28. An eighth input neuron 30 receives a signal which is representative of the time at which the doppler frequency curve 28 was received. The magnitude of the signal used for the time input neuron 30 may be normalized so that the entire range of time values falls within the acceptable range for the input neuron 22. For example, in the data from FIG. 3 there are 32 doppler frequency curves from 32 different time slices, and these times may simply be numbered 1 through 32, wherein the range 1 through 32 is normalized for the signal transmitted to input neuron 30. When the doppler frequency curve 28, together with the time, is transmitted to the input neurons 22 and 30, the adaptive network 20 will produce some output state at its output neurons 26. To train the adaptive network 20 to produce a desired output, the learning algorithm known as backward error propagation may be used. In this technique a known doppler frequency and time input will be presented to the input neurons and the adaptive
network 20 will be trained to produce an output corresponding to the class of the doppler frequency curve. For example, assuming that the training input is from a first class, the desired output may be to have the first two output neurons 26 produce binary ones and all the other output neurons produce binary zero values. After repeated training procedures the adaptive network 20 will adapt the weights of the synaptic connections 27 until it produces the desired output state. Once the adaptive network 20 is trained with the first doppler frequency curve 28 at a first time slice, it may then be trained for all the successive time slices. For example, the adaptive network 20 may be trained for each of the 32 doppler frequency curves 10 in FIG. 3 to produce an output indicating the first class. Once the training for the first class is complete, an unknown set of doppler frequency curves and times may be transmitted to the adaptive network 20. If the unknown doppler signature has the general characteristics of that of the first class, the adaptive network 20 will produce an output state for each time slice corresponding to the first class.
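The normalization and desired output states described above can be sketched together. The helper names are illustrative, and the layout of two output neurons per class (giving eight outputs for four classes) is an assumption consistent with, but not mandated by, the example in the text:

```python
def normalize(values, lo, hi):
    """Map raw values into [0, 1] so they fall within the input neurons'
    acceptable range."""
    return [(v - lo) / (hi - lo) for v in values]

def class_target(class_index, num_classes=4, neurons_per_class=2):
    """Desired output state: the output neurons associated with the given
    class produce binary ones; all other output neurons produce zeros."""
    out = [0] * (num_classes * neurons_per_class)
    for i in range(neurons_per_class):
        out[class_index * neurons_per_class + i] = 1
    return out

# Time slices numbered 1 through 32, normalized for the time input neuron.
times = normalize(list(range(1, 33)), 1, 32)
print(times[0], times[-1])  # → 0.0 1.0
print(class_target(0))      # → [1, 1, 0, 0, 0, 0, 0, 0]
```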
Further, the adaptive network 20 may be trained to recognize multiple classes of doppler signatures. To accomplish this, the steps used to train the adaptive network 20 to recognize the first class of doppler frequency curves are simply repeated for the second, third and fourth classes. As shown in FIG. 2, the adaptive network 20 may be trained to indicate the second, third and fourth classes by producing binary ones in the output neurons 26 associated with those classes as indicated in FIG. 2. The number of classes which the adaptive network 20 may be trained to recognize will depend on a number of
variables such as the complexity of the doppler signals, and the number of neurons, layers and interconnections in the adaptive network 20.
Referring now to FIG. 4, an adaptive network 20 in accordance with the present invention is shown. This embodiment is similar to the one shown in FIG. 2, except that it utilizes 18 input neurons 22, 24 inner neurons 24, and 26 output neurons 26. It will be appreciated that with a larger number of neurons and synaptic connections 27, time-varying data of greater complexity can be classified.
Once the adaptive network 20 has been trained, it can be reproduced an unlimited number of times by making copies of the adaptive network 20. For example, the copies may have identical, but fixed, weight values for the synaptic connections 27. In this way, mass production of adaptive networks 20 is possible without repeating the training process.
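Reproducing a trained network then amounts to duplicating its weight arrays and disabling further weight adjustment. A minimal sketch of that idea, with illustrative class and method names not drawn from the patent:

```python
import numpy as np

class Network:
    """Toy two-layer network whose weight matrices stand in for the
    trained synaptic connection weights."""
    def __init__(self, W1, W2):
        self.W1, self.W2 = W1, W2
        self.frozen = False          # a frozen copy never adjusts its weights

    def forward(self, x):
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        return sigmoid(self.W2 @ sigmoid(self.W1 @ x))

    def clone_fixed(self):
        """Mass-production step: copy the weights and freeze them."""
        twin = Network(self.W1.copy(), self.W2.copy())
        twin.frozen = True
        return twin

rng = np.random.default_rng(2)
master = Network(rng.normal(size=(10, 8)), rng.normal(size=(8, 10)))
copy1 = master.clone_fixed()

x = rng.random(8)
# A fixed-weight copy behaves identically to the trained original.
assert np.allclose(master.forward(x), copy1.forward(x))
```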
In view of the foregoing, those skilled in the art should appreciate that the present invention provides an adaptive network that can be used in a wide variety of applications. The various advantages should become apparent to those skilled in the art after having the benefit of studying the specification, drawings and following claims.
Claims
1. An information processor (20) for classifying a set of two-dimensional data, said data representing information from at least two domains, including a first and second domain, said information processor including a network of neurons including input (22) and output (26) neurons, there being at least N + 1 input neurons, a plurality of synaptic connections (27) providing weighted interconnections between selected ones of said neurons, characterized by: said network (20) having at least N + 1 input neurons (22), where N is the number of values in said first domain; means for transmitting a set of input signals to said input neurons (22), each signal being received by at least one input neuron, said set of input signals including at least a single value from said second domain, and said set of input signals also including N values from said first domain, said N values all being associated with said single value in said second domain; and means for training (22, 24, 26) said network (20) to produce a desired output including means for presenting a known input signal to said input neurons (22, 30), and means for adjusting (24, 26) said weighted synaptic interconnections (27) in repeated training sessions to cause said network (20) to produce said desired output.
2. The information processor (20) of Claim 1 wherein said second domain is time, and the N values from the first domain associated with a given time value represent the values of those N values at a given time.
3. The information processor of Claim 2 wherein said first domain represents doppler data.
4. The information processor of Claim 3 wherein said classification represents different types of objects from which said doppler signals originate.
5. The information processor of Claim 1 wherein the total number of input neurons (22) is N + 1.
6. The information processor of Claim 1 wherein said desired output represents a classification for a plurality of said known input signals.
7. The information processor of Claim 1 wherein said means for training said network (24, 26) trains said network with multiple input signals that include a plurality of inputs from the second domain along with the associated N inputs from the first domain.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US39267489A | 1989-08-11 | 1989-08-11 | |
US392,674 | 1989-08-11 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1991002323A1 true WO1991002323A1 (en) | 1991-02-21 |
Family
ID=23551548
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1990/004487 WO1991002323A1 (en) | 1989-08-11 | 1990-08-09 | Adaptive network for classifying time-varying data |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP0438569A1 (en) |
JP (1) | JPH04501328A (en) |
WO (1) | WO1991002323A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9449272B2 (en) * | 2013-10-29 | 2016-09-20 | Qualcomm Incorporated | Doppler effect processing in a neural network model |
1990
- 1990-08-09 WO PCT/US1990/004487 patent/WO1991002323A1/en not_active Application Discontinuation
- 1990-08-09 EP EP90912233A patent/EP0438569A1/en not_active Withdrawn
- 1990-08-09 JP JP2511554A patent/JPH04501328A/en active Pending
Non-Patent Citations (3)
Title |
---|
IEEE First International Conference on Neural Networks, San Diego, California, 21-24 June 1987, H. BOULARD et al.: "Multilayer Perceptrons and Automatic Speech Recognition", pages IV-407-IV-416 see page IV-413, lines 3-20, page IV-414, lines 1-13; figure 2 * |
IEEE International Conference on Neural Networks, San Diego, California, 24-27 July 1988, P.F. CASTELAZ: "Neural Networks in Defense Applications", pages 473-480 see page 476, lines 10-31; figure 2 * |
IJCNN International Joint Conference on Neural Networks, Sheraton Washington Hotel, 19-22 June 1989, A. KHOTANZAD et al.: "Target Detection using a Neural Network Based Passive Sonar System", pages I-335-I-340 see Abstract; page I-336, column 1, lines 15-32; column 2, lines 30-36; pages I-337, column 1, lines 1-31; column 2, lines 35-49; figure 2 * |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5345539A (en) * | 1990-11-02 | 1994-09-06 | The Secretary Of State For Defence In Her Britannic Majesty's Government Of The United Kingdom Of Great Britain And Northern Ireland | Radar apparatus using neural network for azimuth and elevation detection |
WO1992008149A1 (en) * | 1990-11-02 | 1992-05-14 | The Secretary Of State For Defence In Her Britannic Majesty's Government Of The United Kingdom Of Great Britain And Northern Ireland | Radar apparatus |
FR2669489A1 (en) * | 1990-11-16 | 1992-05-22 | Thomson Csf | METHOD AND DEVICE FOR RECOGNIZING MODULATIONS. |
EP0487376A1 (en) * | 1990-11-16 | 1992-05-27 | Thomson-Csf | Method and apparatus for modulation recognition |
GB2261511A (en) * | 1991-11-16 | 1993-05-19 | British Nuclear Fuels Plc | Ultrasonic ranging device |
US6894639B1 (en) * | 1991-12-18 | 2005-05-17 | Raytheon Company | Generalized hebbian learning for principal component analysis and automatic target recognition, systems and method |
DE4223346A1 (en) * | 1992-07-16 | 1994-01-20 | Vega Grieshaber Gmbh & Co | Contactless distance measurement arrangement using pulse=echo signals - uses neural network to evaluate transition time of transmitted and reflected signal |
WO1994008258A1 (en) * | 1992-10-07 | 1994-04-14 | Octrooibureau Kisch N.V. | Apparatus and a method for classifying movement of objects along a passage |
US5519784A (en) * | 1992-10-07 | 1996-05-21 | Vermeulen; Pieter J. E. | Apparatus for classifying movement of objects along a passage by type and direction employing time domain patterns |
DE19515666A1 (en) * | 1995-04-28 | 1996-10-31 | Daimler Benz Ag | Radar-based object detection and classification method |
DE19518993A1 (en) * | 1995-05-29 | 1996-12-05 | Sel Alcatel Ag | Device and method for automatic detection or classification of objects |
EP1643264A1 (en) * | 1996-09-18 | 2006-04-05 | MacAleese Companies, Inc. | Concealed weapons detection system |
DE19649618A1 (en) * | 1996-11-29 | 1998-06-04 | Alsthom Cge Alcatel | Method and device for automatic classification of objects |
DE19649563A1 (en) * | 1996-11-29 | 1998-06-04 | Alsthom Cge Alcatel | Device and method for automatic classification of objects |
US5949367A (en) * | 1997-02-20 | 1999-09-07 | Alcatel Alsthom Compagnie Generale D'electricite | Device and method for classifying objects in an environmentally adaptive manner |
US7167123B2 (en) | 1999-05-25 | 2007-01-23 | Safe Zone Systems, Inc. | Object detection method and apparatus |
US7450052B2 (en) | 1999-05-25 | 2008-11-11 | The Macaleese Companies, Inc. | Object detection method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
JPH04501328A (en) | 1992-03-05 |
EP0438569A1 (en) | 1991-07-31 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A1; Designated state(s): JP |
| AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE CH DE DK ES FR GB IT LU NL SE |
| WWE | Wipo information: entry into national phase | Ref document number: 1990912233; Country of ref document: EP |
| WWP | Wipo information: published in national office | Ref document number: 1990912233; Country of ref document: EP |
| WWW | Wipo information: withdrawn in national office | Ref document number: 1990912233; Country of ref document: EP |