US20020198711A1 - Speech feature extraction system - Google Patents
Speech feature extraction system
- Publication number
- US20020198711A1 (application US09/882,744)
- Authority
- US
- United States
- Prior art keywords
- signal
- input
- processing
- input speech
- speech
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/0204—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders using subband decomposition
Description
- This invention relates to a speech feature extraction system for use in speech recognition, voice identification, and voice authentication systems. More specifically, this invention relates to a speech feature extraction system that can be used to create a speech recognition system or other speech processing system with a reduced error rate.
- Generally, a speech recognition system is an apparatus that attempts to identify spoken words by analyzing the speaker's voice signal. Speech is converted into an electronic form from which features are extracted. The system then attempts to match a sequence of features to a previously stored sequence of models associated with known speech units. When a sequence of features matches a sequence of models in accordance with specified rules, the corresponding words are deemed to be recognized by the speech recognition system.
- However, background sounds such as radios, car noise, or other nearby speakers can make it difficult to extract useful features from the speech. In addition, ambient conditions, such as the use of a different microphone or telephone handset, a different telephone line, or the speaker's distance from the microphone, can interfere with system performance. Differences between speakers, changes in speaker intonation or emphasis, and even the speaker's health can also adversely impact system performance. For a further description of some of these problems, see Richard A. Quinnell, “Speech Recognition: No Longer a Dream, But Still a Challenge,” EDN Magazine, Jan. 19, 1995, pp. 41-46.
- In most speech recognition systems, the speech features are extracted by cepstral analysis, which generally involves measuring the energy in specific frequency bands. The product of that analysis reflects the amplitude of the signal in those bands; these amplitude changes over successive time periods can be modeled as an amplitude modulated signal.
- Whereas the human ear is sensitive to frequency modulation as well as amplitude modulation in received speech signals, this frequency modulated content is only partially reflected in systems that perform cepstral analysis.
- Accordingly, it would be desirable to provide a speech feature extraction system capable of capturing the frequency modulation characteristics of speech, as well as previously known amplitude modulation characteristics.
- It also would be desirable to provide speech recognition and other speech processing systems that incorporate feature extraction systems that provide information on frequency modulation characteristics of the input speech signal.
- In view of the foregoing, it is an object of the present invention to provide a speech feature extraction system capable of capturing the frequency modulation characteristics of speech, as well as previously known amplitude modulation characteristics.
- It also is an object of this invention to provide speech recognition and other speech processing systems that incorporate feature extraction systems that provide information on frequency modulation characteristics of the input speech signal.
- The present invention provides a speech feature extraction system that reflects the frequency modulation characteristics of speech as well as its amplitude characteristics. This is done by a feature extraction stage that includes a plurality of complex band pass filters in adjacent frequency bands. The output of alternate complex band pass filters is multiplied by the conjugate of the output of the band pass filter in the adjacent lower frequency band, and the resulting signal is low pass filtered.
- Each of the low pass filter outputs is processed to compute two components: an FM component that is substantially sensitive to the frequency of the signal passed by the adjacent band pass filters from which the low pass filter output was generated, and an AM component that is substantially sensitive to the amplitude of that signal. The FM component reflects the difference in phase between the outputs of the adjacent band pass filters used to generate the low pass filter output.
- The AM and FM components are then processed using known feature enhancement techniques, such as discrete cosine transform, melscale translation, mean normalization, delta and acceleration analysis, linear discriminant analysis and principal component analysis, to generate speech features suitable for statistical processing or other recognition or identification methods.
- The above and other objects and advantages of the present invention will be apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which like reference characters refer to like parts throughout, and in which:
- FIG. 1 is a block diagram of an illustrative speech recognition system incorporating the speech feature extraction system of the present invention;
- FIG. 2 is a detailed block diagram of the speech recognition system of FIG. 1; and
- FIG. 3 is a detailed block diagram of a band pass filter suitable for implementing the feature extraction system of the present invention; and
- FIG. 4 is a detailed block diagram of an alternative embodiment of a speech recognition including an alternative speech feature extraction system of the present invention.
- Referring to FIG. 1, a generalized depiction of illustrative speech recognition system 5 is described that incorporates the speech feature extraction system of the present invention. As will be apparent to one of ordinary skill in the art, the speech feature extraction system of the present invention also may be used in speaker identification, authentication and other voice processing systems.
- System 5 illustratively includes four stages: pre-filtering stage 10, feature extraction stage 12, statistical processing stage 14, and energy stage 16. Pre-filtering stage 10, statistical processing stage 14 and energy stage 16 employ speech processing techniques known in the art and do not form part of the present invention. Feature extraction stage 12 incorporates the speech feature extraction system of the present invention, and further includes feature enhancement techniques which are known in the art, as described hereinafter.
- An audio speech signal is converted into an electrical signal by a microphone, telephone receiver or other device, and provided as an input speech signal to system 5. In a preferred embodiment of the present invention, the electrical signal is sampled or digitized to provide a digital signal (IN) representative of the audio speech. Pre-filtering stage 10 amplifies the high frequency components of audio signal IN, and the prefiltered signal is then provided to feature extraction stage 12.
- Feature extraction stage 12 processes pre-filtered signal X to generate a sequence of feature vectors related to characteristics of input signal IN that may be useful for speech recognition. The output of feature extraction stage 12 is used by statistical processing stage 14, which compares the sequence of feature vectors to predefined statistical models to identify words or other speech units in the input signal IN. The feature vectors are compared to the models using known techniques, such as the Hidden Markov Model (HMM) described in Jelinek, “Statistical Methods for Speech Recognition,” The MIT Press, 1997, pp. 15-37. The output of statistical processing stage 14 is the recognized word, or other suitable output depending upon the specific application.
- Statistical processing at stage 14 may be performed locally, or at a remote location relative to where the processing of stages 10, 12, and 16 is performed. For example, the sequence of feature vectors may be transmitted to a remote server for statistical processing.
- The illustrative speech recognition system of FIG. 1 preferably also includes energy stage 16, which provides an output signal indicative of the total energy in a frame of input signal IN. Statistical processing stage 14 may use this total energy information to provide improved recognition of speech contained in the input signal.
- Referring now to FIG. 2, pre-filtering stage 10 and feature extraction stage 12 are described in greater detail. Pre-filtering stage 10 is a high pass filter that amplifies high frequency components of the input signal. Pre-filtering stage 10 comprises one-sample delay element 21, multiplier 23 and adder 24. Multiplier 23 multiplies the one-sample delayed signal by constant Kf, which typically has a value of −0.97.
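The patent gives only the block structure of this stage (delay element, multiplier, adder), so the following is a minimal sketch under the usual pre-emphasis wiring, X[n] = IN[n] + Kf·IN[n−1]; the function name and array handling are ours:

```python
import numpy as np

def pre_filter(signal_in: np.ndarray, kf: float = -0.97) -> np.ndarray:
    """First-order pre-emphasis: X[n] = IN[n] + Kf * IN[n-1], Kf = -0.97."""
    delayed = np.concatenate(([0.0], signal_in[:-1]))  # one-sample delay element 21
    return signal_in + kf * delayed                    # multiplier 23 and adder 24
```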
- The output of pre-filtering stage 10, X, is input at the sampling rate into a bank of band pass filters 30 1, 30 2, . . . 30 n having adjacent frequency bands. The number of band pass filters and the width of the frequency bands preferably are selected according to the application for the speech processing system. For example, a system useful in telephony applications preferably will employ about forty band pass filters having center frequencies approximately 100 Hz apart: filter 30 1 may have a center frequency of 50 Hz, filter 30 2 a center frequency of 150 Hz, filter 30 3 a center frequency of 250 Hz, and so on, so that the center frequency of filter 30 40 is 3950 Hz. The bandwidth of each filter may be several hundred Hertz.
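Written out, the illustrative telephony bank's center frequencies follow directly from the text (the variable name is ours):

```python
# 40 adjacent bands, centers 100 Hz apart: 50, 150, ..., 3950 Hz
center_frequencies_hz = [50.0 + 100.0 * k for k in range(40)]
```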
- Blocks 40 1-20 provide the complex conjugates of the output signals of band pass filters 30 1, 30 3, . . . 30 n−1. Multiplier blocks 42 1-20 multiply these complex conjugates by the outputs of the adjacent higher frequency band pass filters 30 2, 30 4, 30 6, . . . 30 40 to provide output signals Z 1-20. Output signals Z 1-20 then are passed through a series of low pass filters 44 1-20. The outputs of the low pass filters typically are generated only at the feature frame rate. For example, at an input speech sampling rate of 8 kHz, the output of the low pass filters is computed at a feature frame rate of only once every 10 msec.
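A sketch of this pairing step, assuming the bank outputs are stacked row-wise; the patent does not detail low pass filters 44, so a plain per-frame average (80 samples = 10 msec at 8 kHz) stands in for them here, and the function and parameter names are ours:

```python
import numpy as np

def conjugate_products(bank_outputs: np.ndarray, frame: int = 80) -> np.ndarray:
    """bank_outputs: complex array (n_filters, n_samples), n_filters even."""
    lower = bank_outputs[0::2]   # filters 30 1, 30 3, ... (lower band of each pair)
    upper = bank_outputs[1::2]   # filters 30 2, 30 4, ... (adjacent higher band)
    z = upper * np.conj(lower)   # multiplier blocks 42: signals Z_i
    n_frames = z.shape[1] // frame
    z = z[:, :n_frames * frame].reshape(z.shape[0], n_frames, frame)
    return z.mean(axis=2)        # stand-in low pass, evaluated at the frame rate
```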
- Each output of low pass filters 44 1-20 is a complex signal having real component R and imaginary component I. Blocks 46 1-20 process the real and imaginary components of the low pass filter outputs to provide output signals A 1-20 and F 1-20 as shown in equations (1) and (2), wherein R i and I i are the real and imaginary components of the corresponding low pass filter output. Output signals A i are a function of the amplitude of the low pass filter output, and signals F i are a function of the frequency of the signal passed by the adjacent band pass filters from which the low pass filter output was generated. By computing two sets of signals indicative of the amplitude and the frequency of the input signal, a speech recognition system incorporating the speech feature extraction system of the present invention provides a reduced error rate.
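Equations (1) and (2) themselves are not reproduced in this text. Since A is described as a function of amplitude and F reflects the phase difference between adjacent bands, one plausible realization — an assumption, not the patent's exact formulas — is the magnitude and phase angle of each low pass filter output:

```python
import numpy as np

def am_fm_components(z_lp: np.ndarray):
    """z_lp: complex low pass outputs, e.g. shape (n_pairs, n_frames)."""
    r, i = z_lp.real, z_lp.imag
    a = np.sqrt(r**2 + i**2)   # A_i: sensitive to amplitude
    f = np.arctan2(i, r)       # F_i: sensitive to the inter-band phase difference
    return a, f
```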
- The amplitude and frequency signals A 1-20 and F 1-20 then are processed using conventional feature enhancement techniques in feature enhancement component 12 b, using, for example, discrete cosine transform, melscale translation, mean normalization, delta and acceleration analysis, linear discriminant analysis and principal component analysis techniques that are per se known in the art. A preferred embodiment of a speech recognition system of the present invention incorporating the speech feature extraction system of the present invention employs a discrete cosine transform and delta features technique, as described hereinafter.
- Still referring to FIG. 2, feature enhancement component 12 b receives output signals A 1-20 and F 1-20, and processes those signals using discrete cosine transform (DCT) blocks 50 and 54, respectively. DCTs 50 and 54 attempt to diagonalize the covariance matrices of signals A 1-20 and F 1-20; this helps to uncorrelate the features in output signals B 0-19 of DCT 50 and output signals C 0-19 of DCT 54. Each set of output signals B 0-19 and C 0-19 then is input into statistical processing stage 14. The function performed by DCT 50 on input signals A 1-20 to provide output signals B 0-19 is shown by equation (3), and the function performed by DCT 54 on input signals F 1-20 to provide output signals C 0-19 is shown by equation (4).
- In equations (3) and (4), N equals the length of the input signal vectors A and F (e.g., N=20 in FIG. 2), n is an index from 0 to N−1 (e.g., n=0 to 19 in the embodiment of FIG. 2), and r is the index of output signals B and C (e.g., r=0 to 19 in the embodiment of FIG. 2). Thus, for each output signal Br, the vector of input signals A 1-20 is multiplied by a cosine function and by D(r), and summed, as shown in equation (3). For each output signal Cr, the vector of input signals F 1-20 is multiplied by a cosine function and by D(r), and summed, as shown in equation (4). The coefficients D(r) are given by the following equations:
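Equations (3) through (6) are likewise not reproduced in this text. Given the description — each input vector multiplied by a cosine function and by D(r), then summed — the standard DCT-II with orthonormal scaling matches; the following is a reconstruction under that assumption, not a copy of the patent's drawings:

$$B_r = D(r)\sum_{n=0}^{N-1} A_{n+1}\,\cos\!\left(\frac{\pi r\,(2n+1)}{2N}\right),\qquad C_r = D(r)\sum_{n=0}^{N-1} F_{n+1}\,\cos\!\left(\frac{\pi r\,(2n+1)}{2N}\right)$$

$$D(0)=\sqrt{1/N},\qquad D(r)=\sqrt{2/N}\quad(r\geq 1)$$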
- Output signals B 0-19 and C 0-19 also are input into delta blocks 52 and 56, respectively. Each of delta blocks 52 and 56 takes the difference between feature vector values in consecutive feature frames, and this difference may be used to enhance speech recognition performance. Several difference formulas may be used by delta blocks 52 and 56, as are known in the art; for example, delta blocks 52 and 56 may take the difference between two consecutive feature frames. The output signals of delta blocks 52 and 56 are input into statistical processing stage 14.
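A sketch of the simplest difference formula mentioned above (the edge handling and names are ours):

```python
import numpy as np

def delta_features(frames: np.ndarray) -> np.ndarray:
    """frames: (n_frames, n_coeffs), e.g. the B or C outputs of the DCT blocks."""
    padded = np.vstack([frames[:1], frames])  # repeat the first frame at the edge
    return padded[1:] - padded[:-1]           # difference of consecutive frames
```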
energy block 16 takes the sum of the squares of the values of the input signal IN during the previous K sampling intervals (e.g., K=220, T={fraction (1/8000)} seconds), divides the sum by K, and takes the logarithm of the final result.Energy block 16 performs this calculation every frame (e.g., 10 msec), and provides the result as an input tostatistical processing block 14. - Referring now to FIG. 3, illustrative
- Referring now to FIG. 3, illustrative complex band pass filter 30′ suitable for use in the feature extraction system of the present invention is described. Filter 30′ comprises adder 31, multiplier 32 and one-sample delay element 33. Multiplier 32 multiplies the one-sample delayed output Y by complex coefficient G, and the resultant is added to the input signal X to generate output signal Y.
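This recursion, Y[n] = X[n] + G·Y[n−1], is a one-pole complex resonator. A sketch follows; the patent specifies only the topology, so the choice G = r·exp(j2πfc/fs), with pole radius r setting the bandwidth, is our assumption rather than a value from the text:

```python
import numpy as np

def complex_bandpass(x: np.ndarray, center_hz: float, fs: float = 8000.0,
                     r: float = 0.98) -> np.ndarray:
    """One-pole complex resonator of FIG. 3: Y[n] = X[n] + G * Y[n-1]."""
    g = r * np.exp(2j * np.pi * center_hz / fs)   # complex coefficient G (assumed form)
    y = np.zeros(len(x), dtype=complex)
    prev = 0j                                     # one-sample delay element 33
    for n, xn in enumerate(x):
        prev = xn + g * prev                      # adder 31 and multiplier 32
        y[n] = prev
    return y
```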
- An alternative embodiment of the feature extraction system of the present invention is described with respect to FIG. 4. The embodiment of FIG. 4 is similar to the embodiment of FIG. 2 and includes pre-filtering stage 10, statistical processing stage 14, and energy stage 16 that operate substantially as described above. However, the embodiment of FIG. 4 differs from the previously described embodiment in that feature extraction stage 12′ includes additional circuitry within feature extraction system 12 a′, so that the feature vectors include additional information.
- For example, feature extraction stage 12 a′ includes a bank of 41 band pass filters 30 1-41 and conjugate blocks 40 1-40. The output of each band pass filter is combined with the conjugate of the output of the lower adjacent band pass filter by multipliers 42 1-40. Low pass filters 44 1-40 and computation blocks 46 1-40 compute vectors A and F as described above, except that the vectors have a length of forty elements instead of twenty. The DCTs and delta blocks of feature enhancement component 12 b′ each accept the forty element input vectors and output forty element vectors to statistical processing block 14.
- The present invention includes feature extraction stages which may include any number of band pass filters 30, depending upon the intended voice processing application, and corresponding numbers of conjugate blocks 40, multipliers 42, low pass filters 44 and blocks 46 to provide output signals A and F for each low pass filter. In addition, signals A and F may be combined in a weighted fashion to generate melscale outputs, or only part of the signals may be used; for example, it may be advantageous to use only the amplitude signals in one frequency range and a combination of the amplitude and frequency signals in another. The foregoing therefore is merely illustrative of the principles of this invention, and various modifications can be made by those skilled in the art without departing from the scope and spirit of the invention.
Claims (22)
Priority Applications (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/882,744 US6493668B1 (en) | 2001-06-15 | 2001-06-15 | Speech feature extraction system |
AT02744395T ATE421137T1 (en) | 2001-06-15 | 2002-06-14 | LANGUAGE FEATURE EXTRACTION SYSTEM |
DE60230871T DE60230871D1 (en) | 2001-06-15 | 2002-06-14 | VOICE FEATURE EXTRACTION SYSTEM |
PCT/US2002/019182 WO2002103676A1 (en) | 2001-06-15 | 2002-06-14 | Speech feature extraction system |
US10/173,247 US7013274B2 (en) | 2001-06-15 | 2002-06-14 | Speech feature extraction system |
JP2003505912A JP4177755B2 (en) | 2001-06-15 | 2002-06-14 | Utterance feature extraction system |
EP02744395A EP1402517B1 (en) | 2001-06-15 | 2002-06-14 | Speech feature extraction system |
CA002450230A CA2450230A1 (en) | 2001-06-15 | 2002-06-14 | Speech feature extraction system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/882,744 US6493668B1 (en) | 2001-06-15 | 2001-06-15 | Speech feature extraction system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/173,247 Continuation-In-Part US7013274B2 (en) | 2001-06-15 | 2002-06-14 | Speech feature extraction system |
Publications (2)
Publication Number | Publication Date |
---|---|
US6493668B1 US6493668B1 (en) | 2002-12-10 |
US20020198711A1 true US20020198711A1 (en) | 2002-12-26 |
Family
ID=25381249
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/882,744 Expired - Lifetime US6493668B1 (en) | 2001-06-15 | 2001-06-15 | Speech feature extraction system |
US10/173,247 Expired - Lifetime US7013274B2 (en) | 2001-06-15 | 2002-06-14 | Speech feature extraction system |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/173,247 Expired - Lifetime US7013274B2 (en) | 2001-06-15 | 2002-06-14 | Speech feature extraction system |
Country Status (7)
Country | Link |
---|---|
US (2) | US6493668B1 (en) |
EP (1) | EP1402517B1 (en) |
JP (1) | JP4177755B2 (en) |
AT (1) | ATE421137T1 (en) |
CA (1) | CA2450230A1 (en) |
DE (1) | DE60230871D1 (en) |
WO (1) | WO2002103676A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080010067A1 (en) * | 2006-07-07 | 2008-01-10 | Chaudhari Upendra V | Target specific data filter to speed processing |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3673507B2 (en) * | 2002-05-16 | 2005-07-20 | 独立行政法人科学技術振興機構 | APPARATUS AND PROGRAM FOR DETERMINING PART OF SPECIFIC VOICE CHARACTERISTIC CHARACTERISTICS, APPARATUS AND PROGRAM FOR DETERMINING PART OF SPEECH SIGNAL CHARACTERISTICS WITH HIGH RELIABILITY, AND Pseudo-Syllable Nucleus Extraction Apparatus and Program |
JP4265908B2 (en) * | 2002-12-12 | 2009-05-20 | アルパイン株式会社 | Speech recognition apparatus and speech recognition performance improving method |
DE102004008225B4 (en) * | 2004-02-19 | 2006-02-16 | Infineon Technologies Ag | Method and device for determining feature vectors from a signal for pattern recognition, method and device for pattern recognition and computer-readable storage media |
US20070041517A1 (en) * | 2005-06-30 | 2007-02-22 | Pika Technologies Inc. | Call transfer detection method using voice identification techniques |
US20070118364A1 (en) * | 2005-11-23 | 2007-05-24 | Wise Gerald B | System for generating closed captions |
US20070118372A1 (en) * | 2005-11-23 | 2007-05-24 | General Electric Company | System and method for generating closed captions |
US8345890B2 (en) | 2006-01-05 | 2013-01-01 | Audience, Inc. | System and method for utilizing inter-microphone level differences for speech enhancement |
US8194880B2 (en) | 2006-01-30 | 2012-06-05 | Audience, Inc. | System and method for utilizing omni-directional microphones for speech enhancement |
US9185487B2 (en) | 2006-01-30 | 2015-11-10 | Audience, Inc. | System and method for providing noise suppression utilizing null processing noise subtraction |
US8204252B1 (en) | 2006-10-10 | 2012-06-19 | Audience, Inc. | System and method for providing close microphone adaptive array processing |
US8744844B2 (en) | 2007-07-06 | 2014-06-03 | Audience, Inc. | System and method for adaptive intelligent noise suppression |
US7778831B2 (en) * | 2006-02-21 | 2010-08-17 | Sony Computer Entertainment Inc. | Voice recognition with dynamic filter bank adjustment based on speaker categorization determined from runtime pitch |
US8204253B1 (en) | 2008-06-30 | 2012-06-19 | Audience, Inc. | Self calibration of audio device |
US8259926B1 (en) | 2007-02-23 | 2012-09-04 | Audience, Inc. | System and method for 2-channel and 3-channel acoustic echo cancellation |
US8189766B1 (en) | 2007-07-26 | 2012-05-29 | Audience, Inc. | System and method for blind subband acoustic echo cancellation postfiltering |
MX2010001394A (en) * | 2007-08-27 | 2010-03-10 | Ericsson Telefon Ab L M | Adaptive transition frequency between noise fill and bandwidth extension. |
US20090150164A1 (en) * | 2007-12-06 | 2009-06-11 | Hu Wei | Tri-model audio segmentation |
US8180064B1 (en) | 2007-12-21 | 2012-05-15 | Audience, Inc. | System and method for providing voice equalization |
US8194882B2 (en) | 2008-02-29 | 2012-06-05 | Audience, Inc. | System and method for providing single microphone noise suppression fallback |
US8355511B2 (en) | 2008-03-18 | 2013-01-15 | Audience, Inc. | System and method for envelope-based acoustic echo cancellation |
US8521530B1 (en) | 2008-06-30 | 2013-08-27 | Audience, Inc. | System and method for enhancing a monaural audio signal |
US8626516B2 (en) * | 2009-02-09 | 2014-01-07 | Broadcom Corporation | Method and system for dynamic range control in an audio processing system |
US9838784B2 (en) | 2009-12-02 | 2017-12-05 | Knowles Electronics, Llc | Directional audio capture |
US9008329B1 (en) | 2010-01-26 | 2015-04-14 | Audience, Inc. | Noise reduction using multi-feature cluster tracker |
US8767978B2 (en) | 2011-03-25 | 2014-07-01 | The Intellisis Corporation | System and method for processing sound signals implementing a spectral motion transform |
US8620646B2 (en) | 2011-08-08 | 2013-12-31 | The Intellisis Corporation | System and method for tracking sound pitch across an audio signal using harmonic envelope |
US8548803B2 (en) * | 2011-08-08 | 2013-10-01 | The Intellisis Corporation | System and method of processing a sound signal including transforming the sound signal into a frequency-chirp domain |
US9183850B2 (en) | 2011-08-08 | 2015-11-10 | The Intellisis Corporation | System and method for tracking sound pitch across an audio signal |
WO2013184667A1 (en) | 2012-06-05 | 2013-12-12 | Rank Miner, Inc. | System, method and apparatus for voice analytics of recorded audio |
US9536540B2 (en) | 2013-07-19 | 2017-01-03 | Knowles Electronics, Llc | Speech signal separation and synthesis based on auditory scene analysis and speech modeling |
US9280968B2 (en) * | 2013-10-04 | 2016-03-08 | At&T Intellectual Property I, L.P. | System and method of using neural transforms of robust audio features for speech processing |
DE112015004185T5 (en) | 2014-09-12 | 2017-06-01 | Knowles Electronics, Llc | Systems and methods for recovering speech components |
US9922668B2 (en) | 2015-02-06 | 2018-03-20 | Knuedge Incorporated | Estimating fractional chirp rate with multiple frequency representations |
US9842611B2 (en) | 2015-02-06 | 2017-12-12 | Knuedge Incorporated | Estimating pitch using peak-to-peak distances |
US9870785B2 (en) | 2015-02-06 | 2018-01-16 | Knuedge Incorporated | Determining features of harmonic signals |
US9820042B1 (en) | 2016-05-02 | 2017-11-14 | Knowles Electronics, Llc | Stereo separation and directional suppression with omni-directional microphones |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4300229A (en) * | 1979-02-21 | 1981-11-10 | Nippon Electric Co., Ltd. | Transmitter and receiver for an othogonally multiplexed QAM signal of a sampling rate N times that of PAM signals, comprising an N/2-point offset fourier transform processor |
US4221934A (en) * | 1979-05-11 | 1980-09-09 | Rca Corporation | Compandor for group of FDM signals |
GB8307702D0 (en) * | 1983-03-21 | 1983-04-27 | British Telecomm | Digital band-split filter means |
NL8400677A (en) * | 1984-03-02 | 1985-10-01 | Philips Nv | TRANSMISSION SYSTEM FOR THE TRANSMISSION OF DATA SIGNALS IN A MODULAR TIRE. |
-
2001
- 2001-06-15 US US09/882,744 patent/US6493668B1/en not_active Expired - Lifetime
-
2002
- 2002-06-14 AT AT02744395T patent/ATE421137T1/en not_active IP Right Cessation
- 2002-06-14 EP EP02744395A patent/EP1402517B1/en not_active Expired - Lifetime
- 2002-06-14 WO PCT/US2002/019182 patent/WO2002103676A1/en active Application Filing
- 2002-06-14 CA CA002450230A patent/CA2450230A1/en not_active Abandoned
- 2002-06-14 US US10/173,247 patent/US7013274B2/en not_active Expired - Lifetime
- 2002-06-14 JP JP2003505912A patent/JP4177755B2/en not_active Expired - Fee Related
- 2002-06-14 DE DE60230871T patent/DE60230871D1/en not_active Expired - Lifetime
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080010067A1 (en) * | 2006-07-07 | 2008-01-10 | Chaudhari Upendra V | Target specific data filter to speed processing |
US20090043579A1 (en) * | 2006-07-07 | 2009-02-12 | International Business Machines Corporation | Target specific data filter to speed processing |
US7831424B2 (en) | 2006-07-07 | 2010-11-09 | International Business Machines Corporation | Target specific data filter to speed processing |
Also Published As
Publication number | Publication date |
---|---|
JP2004531767A (en) | 2004-10-14 |
US20030014245A1 (en) | 2003-01-16 |
JP4177755B2 (en) | 2008-11-05 |
CA2450230A1 (en) | 2002-12-27 |
EP1402517A1 (en) | 2004-03-31 |
EP1402517A4 (en) | 2007-04-25 |
US6493668B1 (en) | 2002-12-10 |
WO2002103676A1 (en) | 2002-12-27 |
ATE421137T1 (en) | 2009-01-15 |
US7013274B2 (en) | 2006-03-14 |
DE60230871D1 (en) | 2009-03-05 |
EP1402517B1 (en) | 2009-01-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6493668B1 (en) | Speech feature extraction system | |
JP2004531767A5 (en) | ||
CA2247364C (en) | Method and recognizer for recognizing a sampled sound signal in noise | |
US6266633B1 (en) | Noise suppression and channel equalization preprocessor for speech and speaker recognizers: method and apparatus | |
US7035797B2 (en) | Data-driven filtering of cepstral time trajectories for robust speech recognition | |
CN101014997B (en) | Method and system for generating training data for an automatic speech recogniser | |
US6721698B1 (en) | Speech recognition from overlapping frequency bands with output data reduction | |
CA2184256A1 (en) | Speaker identification and verification system | |
WO2001031635A1 (en) | Speech recognition | |
KR101414233B1 (en) | Apparatus and method for improving intelligibility of speech signal | |
JP4816711B2 (en) | Call voice processing apparatus and call voice processing method | |
US5806022A (en) | Method and system for performing speech recognition | |
Wickramasinghe et al. | Auditory inspired spatial differentiation for replay spoofing attack detection | |
Maganti et al. | An auditory based modulation spectral feature for reverberant speech recognition. | |
KR20050051435A (en) | Apparatus for extracting feature vectors for speech recognition in noisy environment and method of decorrelation filtering | |
Sahidullah et al. | On the use of distributed dct in speaker identification | |
CN116312561A (en) | Method, system and device for voice print recognition, authentication, noise reduction and voice enhancement of personnel in power dispatching system | |
KR101610708B1 (en) | Voice recognition apparatus and method | |
Magrin-Chagnolleau et al. | Application of time–frequency principal component analysis to speaker verification | |
Tiwari et al. | Wavelet based noise robust features for speaker recognition | |
EP1354312B1 (en) | Method, device, terminal and system for the automatic recognition of distorted speech data | |
Kubo et al. | Recognizing reverberant speech based on amplitude and frequency modulation | |
CN117079666A (en) | Song scoring method, song scoring device, terminal equipment and storage medium | |
Mashao | Matching feature distributions for the Robust Speaker Verification | |
Sharma et al. | Speaker verification using time cepstral principal components derived from a pole-zero model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: PHONETACT, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BRANDMAN, YIGAL;REEL/FRAME:013390/0594 Effective date: 20021009 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: PHONETACT, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BRANDMAN, YIGAL;REEL/FRAME:016297/0233 Effective date: 20050517 |
|
AS | Assignment |
Owner name: BRANDMAN, YIGAL, CALIFORNIA Free format text: CORRECTED COVER SHEET (ASSIGNMENT);ASSIGNOR:PHONETACT, INC.;REEL/FRAME:016603/0209 Effective date: 20050517 |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
FEPP | Fee payment procedure |
Free format text: PAT HOLDER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: LTOS); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
REFU | Refund |
Free format text: REFUND - PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: R1552); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
FPAY | Fee payment |
Year of fee payment: 12 |