
WO2008105661A1 - Method and device for sound processing and hearing aid - Google Patents

Method and device for sound processing and hearing aid

Info

Publication number
WO2008105661A1
WO2008105661A1 (PCT/NL2008/050119)
Authority
WO
WIPO (PCT)
Prior art keywords
sound
signal
signals
transformation
electroacoustic converter
Prior art date
Application number
PCT/NL2008/050119
Other languages
French (fr)
Inventor
Willem Van Keulen
Original Assignee
Exsilent Research B.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Exsilent Research B.V. filed Critical Exsilent Research B.V.
Publication of WO2008105661A1 publication Critical patent/WO2008105661A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40Arrangements for obtaining a desired directivity characteristic
    • H04R25/407Circuits for combining signals of a plurality of transducers
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • G10L21/0216Noise filtering characterised by the method used for estimating noise
    • G10L2021/02161Number of inputs available containing the signal or the noise to be suppressed
    • G10L2021/02165Two microphones, one receiving mainly the noise signal and the other one mainly the speech signal
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L2021/065Aids for the handicapped in understanding
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/06Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being correlation coefficients
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00Signal processing covered by H04R, not provided for in its groups
    • H04R2430/20Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
    • H04R2430/23Direction finding using a sum-delay beam-former
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/005Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones

Definitions

  • the invention relates to a method for processing sound from a sound source in the midst of ambient sound, wherein a first sound signal is taken from a first electroacoustic converter, wherein a second sound signal is taken from a second electroacoustic converter spatially separated from the first electroacoustic converter, wherein said sound signals are brought into digital form and subjected to a transformation to a frequency domain.
  • the invention also relates to a sound processing device which is able and adapted to perform such a method, and to a hearing aid which comprises such a sound processing device.
  • Such a method is known from an American patent application with publication number US 2001/0031053 Al.
  • This document describes a method in which a useful source signal from a desired sound source is selected and source signals from other, unwanted sound sources are suppressed.
  • For this purpose different sound signals are received by a set of spatially separated microphones and presented to multiple delay lines after digitization and Fourier transformation.
  • Output signals are taken from the delay lines, which signals are mutually combined to determine coincidence patterns in order to localize therefrom the position of an undesired sound source.
  • Source signals of the thus localized, undesired sound sources are then selectively suppressed in order to produce a less disturbed output signal.
  • A drawback of this method is that the delay lines applied here impose a significant time delay between the input signals and the output signals, which results in synchronization loss. This is particularly disturbing and undesirable in the case of speech signals.
  • The known method and device moreover require relatively heavy, complex arithmetical functions.
  • An object of the invention includes providing a method and device for sound processing, with which a useful source signal can be isolated in unambiguous and simple manner and to a satisfactory extent from undesired source signals caused by the ambient sound.
  • A method of the type stated in the preamble has the feature according to the invention that said sound signals are combined into a first aggregate function in the frequency domain, that both sound signals are translated over time differences Δτ_i or phase differences Δφ_i and are combined to form further aggregate functions in the frequency domain, that said first and further aggregate functions per time difference Δτ_i or phase difference Δφ_i are at least added over the frequency domain in order to determine therefrom a specific time difference Δτ_s or specific phase difference Δφ_s at which a maximum occurs, and that the aggregate function corresponding to said specific time difference Δτ_s or phase difference Δφ_s is retransformed from the frequency domain to a time domain in order to obtain an output signal therefrom.
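The selection scheme just described can be sketched numerically. The following is a minimal illustration only, assuming uniformly sampled signals and using NumPy as a stand-in for the signal processor; the function name, the candidate-delay grid and the toy signals are ours, not the patent's:

```python
import numpy as np

def select_and_reconstruct(x1, x2, fs, taus):
    """Sketch of the claimed scheme: form an aggregate function per
    candidate time difference tau in the frequency domain, add it over
    the frequency domain, pick the tau giving the maximum, and
    retransform that aggregate to the time domain."""
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    f = np.fft.rfftfreq(len(x1), d=1.0 / fs)
    best = (-np.inf, None, None)
    for tau in taus:
        # a translation over tau in time is a multiplication with a
        # complex function exp(2j*pi*f*tau) in the frequency domain
        aggregate = X1 + X2 * np.exp(2j * np.pi * f * tau)
        score = np.sum(np.abs(aggregate) ** 2)  # add over the frequency domain
        if score > best[0]:
            best = (score, tau, aggregate)
    _, tau_s, aggregate_s = best
    return tau_s, np.fft.irfft(aggregate_s, n=len(x1))

# toy check: a source arriving 4 samples later at the second microphone
fs, n = 8000, 256
src = np.sin(2 * np.pi * 440 * np.arange(n) / fs)
x1, x2 = src, np.roll(src, 4)
tau_s, out = select_and_reconstruct(x1, x2, fs, np.arange(-8, 9) / fs)
```

At the correct delay the two translated spectra add coherently, so the reconstructed output is essentially the doubled source signal.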
  • a natural process is thus simulated, such as takes place in the human brain and in which a direction of origin of sound is manifested as an internal time delay or phase difference between the sound received by the right ear and the sound received by the left ear.
  • The invention is based here on the recognition that the described superposition of signals translated in time or phase will substantially provide an amplification, and a corresponding maximum, for the direction of the probably desired sound source, while for the other directions it results in a statistical averaging of the signals originating from independent, undesired sound sources, which are thereby not, or hardly, mutually correlated.
  • the said translations with a time delay in the time domain are reduced in the frequency domain to relatively simple and easily performed mathematical processes on both input signals, whereby a retardation of the signal can be limited to a minimum and adverse synchronization errors can be avoided.
  • the invention is based more particularly on the recognition that a translation over a time delay in the time domain is translated into a corresponding phase shift in the frequency domain, which can be performed in relatively simple manner as a multiplication with a complex function for the frequencies of the Fourier or other frequency series which resulted from the transformation.
  • a particular embodiment of the method according to the invention has the feature here that the transformation to the frequency domain comprises a Fourier transformation and the transformation to the time domain an associated inverse Fourier transformation. Although other transformations to the frequency domain also enable per se the desired selection of the preferred direction according to the invention, a Fourier transformation is standardized and can therefore be retransformed directly to the original signal.
  • the computational load hereby remains limited.
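As a quick sanity check of that recognition (a generic DFT property, not code from this application), one can verify that a delay of d samples in the time domain equals a per-bin multiplication with a complex exponential in the frequency domain:

```python
import numpy as np

# delaying by d samples in time equals multiplying the spectrum by
# exp(-2j*pi*nu*d), with nu the normalized frequency of each DFT bin
rng = np.random.default_rng(0)
x = rng.standard_normal(128)
d = 5
nu = np.fft.fftfreq(len(x))              # cycles per sample
shifted = np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * nu * d)).real
```

For an integer d the result coincides exactly with a circular shift of the samples, which is why the translations cost only one complex multiplication per bin.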
  • A particularly practical embodiment of the method according to the invention is however characterized in that the further aggregate functions are obtained by a cross-correlation of functions derived by translation over discrete time differences Δτ_i or discrete phase differences Δφ_i of the sound signals. Making use of standard obtainable components or software modules, such a cross-correlation can be implemented in relatively simple manner.
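As an illustration of such an implementation from standard components, a circular cross-correlation can be assembled from off-the-shelf FFT routines (NumPy here; the function name and test signals are ours):

```python
import numpy as np

def circular_cross_correlation(x1, x2):
    """Circular cross-correlation via the frequency domain; the index
    of the peak estimates the discrete time difference between the
    two signals."""
    X1 = np.fft.rfft(x1)
    X2 = np.fft.rfft(x2)
    return np.fft.irfft(X2 * np.conj(X1), n=len(x1))

rng = np.random.default_rng(1)
src = rng.standard_normal(512)
cc = circular_cross_correlation(src, np.roll(src, 7))  # mic 2 lags by 7 samples
lag = int(np.argmax(cc))
```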
  • A preferred embodiment of the method according to the invention has the feature that a chosen time difference Δτ_man or chosen phase difference Δφ_man is set manually in order to be imposed respectively as specific time difference Δτ_s or specific phase difference Δφ_s.
  • the automatically selected time difference or phase difference can thus be overruled within the method by the value thus set by the user him/herself, so as to thus process primarily the sound from the direction corresponding therewith.
  • The output signal can be directly converted and supplied to the user, although a further preferred embodiment of the method according to the invention has the feature that the output signal is processed and/or amplified on the basis of an optionally predetermined processing characteristic, and is then fed to at least one further electroacoustic converter for the purpose of generating an output sound therewith. An improvement in the quality of the output signal can thus be implemented before sound is produced therefrom.
  • electroacoustic converters can per se be applied for the purpose of receiving or generating sound.
  • a specific embodiment of the method according to the invention has the feature however that the first and second electroacoustic converters comprise a microphone, and the at least one further electroacoustic converter comprises at least one loudspeaker.
  • a sound processing device comprises a first electroacoustic converter from which a first sound signal can be taken, a second electroacoustic converter which is spatially separated from the first electroacoustic converter and from which a second sound signal can be taken, and electronic processing means which are able and adapted to perform the method according to the invention and from which the output signal can be taken.
  • a device can be integrated compactly in a shared housing or be distributed as desired among individual components, which are in that case mutually coupled for wired or wireless operation.
  • the invention is particularly suitable for application in or with a hearing aid.
  • a hearing aid comprising a further electroacoustic converter which is able and adapted to receive an electronic sound signal and generate an output sound to an ear of the user, and comprising a sound processing device with an input coupled to the processing means in order to receive the output signal and with an output coupled to the further converter in order to supply the processed and/or amplified output signal to the further converter, is therefore characterized in that a sound processing device according to the invention is provided therein or therewith.
  • the invention thus provides an aid with which a hearing-impaired person may regain to at least a certain extent a natural binaural perception.
  • A particular embodiment of the hearing aid is characterized according to the invention in that a portable expansion unit is provided, which is coupled operationally to the sound processing device, and that the expansion unit comprises manually operated setting means for the purpose of setting a chosen time difference Δτ_man or chosen phase difference Δφ_man and imposing thereof on the sound processing device as respectively specific time difference Δτ_s or specific phase difference Δφ_s.
  • a preferred direction can thus be set manually and imposed on the sound processing in order to direct the device with priority in this preferred direction.
  • a particularly user-friendly embodiment of the hearing aid according to the invention has the feature here that the expansion unit is coupled wirelessly to the sound processing device.
  • the expansion unit can otherwise also be provided with further functions and modules.
  • The sound processing device according to the invention can thus be wholly or partially accommodated therein, and the same applies for sound processing means such as a digital signal processor (DSP) for further modelling and optional further amplifying of the output signal.
  • a further embodiment of the hearing aid according to the invention has the feature here that the expansion unit comprises at least one input for connection of at least one further sound source.
  • a method for sound processing in which a first signal from a first sound sensor and a second signal from a second sound sensor are processed, which signals comprise sound from one or more sound sources, wherein these sound sources comprise one or more undesired sound sources and a desired sound source generating a desired source signal, and wherein the first signal and the second signal are digitized, subjected to a Fourier transformation and subsequently to a cross-correlation, characterized according to the invention in that a fictitious Central Activities Pattern is created by means of the cross-correlation, from which a Central Amplitude Spectrum is determined, this Central Amplitude Spectrum being subjected to an inverse Fourier transformation in order to recreate the desired source signal.
  • Persons with reasonable to good hearing can otherwise also use the method and device advantageously. It can for instance be used to listen to a sound source such as music by a band performing in the midst of interference sounds. It is also possible to envisage ear protection in physical engineering or mechanical engineering operations, in which loud, harmful disturbing sounds can be effectively suppressed and useful signals still transmitted. The ear of the user is then spared while he/she can still discern relevant sounds.
  • a further preferred embodiment of the method for sound processing according to the invention is characterized in that the desired sound source can be adjusted by a user. The user can then select as desired sound source a sound source other than the most obvious (normally that located in the viewing direction) and has more freedom in listening to the surrounding area.
  • a further embodiment of the method for sound processing according to the invention is characterized in that the desired source signal is supplied to at least a loudspeaker. In this way the desired source signal coming from the desired sound source can be recreated at a location desired by the user. The user can thus create a situation in which substantially only the desired source signal is audible.
  • Yet another embodiment of the method for sound processing according to the invention is characterized in that the desired source signal is supplied wirelessly to the loudspeaker. This results in an increased flexibility in the use of the method. It is easier to allow a greater distance between the sound sensors and the loudspeaker. The sound sensors and the loudspeaker can moreover be moved more easily relative to each other without cords forming a limitation or becoming entangled.
  • the invention also relates to a device adapted to perform the method according to the invention, comprising at least two sound sensors and two first transformation units which are connected thereto and in which the sound signals from the sound sensors are digitized and subjected to a Fourier transformation.
  • the invention is characterized in that the device also comprises an electronic unit which applies a cross-correlation per frequency to both transformed signals, then adds them and subsequently determines the time delay at which the maximum of the resulting graph lies, and a second transformation unit which subjects the signal frequency spectrum associated with this time delay to an inverse Fourier transformation.
  • An embodiment of the device according to the invention is characterized in that the device comprises means for supplying the desired source signal to at least one ear.
  • the device is hereby suitable to serve as an aid such as a hearing aid for the hearing.
  • a hearing-impaired person can function better in his/her social environment because he/she will be better able to discern speech in his/her surroundings.
  • Source signals from different sound sources hereby blend into a disordered mass of sound. It is then very difficult, if not almost impossible, to understand the speech of a determined person in a noisy environment.
  • the invention is highly suitable for the purpose of obviating this drawback.
  • the simulation of a Central Spectrum as according to a particular embodiment of the invention provides a spatial perception which approximates to at least a certain extent natural binaural hearing.
  • a further embodiment of the device according to the invention is characterized in that it comprises an interruption unit which is adapted to perform automatic selection of a priority signal.
  • an alarm source signal can for instance still reach the user even if he/she has less than average hearing.
  • the interruption unit can for instance thus ensure that a hearing-impaired person hears the siren of an ambulance or an air-raid alarm, even if he/she is listening to a different sound.
  • the siren is recognized by the interruption unit as a priority sound source, and the Central Spectrum shifts to the signal thereof.
  • this can also be advantageously applied for those of normal hearing, who are for instance wearing hearing protection in order to counteract potentially dangerous sound levels.
  • the device according to the invention can then for instance be utilized as noise suppressor and be integrated into such hearing protection means. The user thus remains receptive and accessible to alarm and emergency signals.
  • Another further embodiment of the device according to the invention is characterized in that it comprises a connecting means for connection to a communication means, in particular a mobile telephone.
  • Yet another embodiment of the device according to the invention is characterized in that it comprises a connecting means for connection to an audio player, in particular an MP3 player or the like. Through the combination with an audio player the user can also be provided with a multifunctional device.
  • Figure 1 shows a schematic representation of an embodiment of the method according to the invention
  • Figure 2 shows a Central Activities Pattern
  • Figure 3 shows a basic model of a processing module in which the device according to the invention is applied; and
  • Figure 4 shows the implementation of the processing module of figure 3 in a hearing aid.
  • FIG. 1 shows a schematic representation of an embodiment of the method according to the invention.
  • Two sound sensors (S1, S2) receive sound from a plurality of sources, including a desired source (not shown).
  • The first signal of the first sound sensor (S1) and the second signal of the second sound sensor (S2) are each digitized and subjected to a Fourier transformation (for instance an FFT).
  • a cross-correlation (CES) is then applied to both signals.
  • a simulation of a Central Activities Pattern (CAP) is composed from the results of the cross-correlation (CES).
  • From the CAP a Central Amplitude Spectrum is determined and selected (SEL). The Central Amplitude Spectrum is subjected to an inverse Fourier transformation (IFFT), whereby the desired source signal from the desired sound source is recreated.
  • This signal is supplied to two loudspeakers (L1, L2). If desired, use can otherwise also be made of only one loudspeaker in order to simplify the device, or more loudspeakers can be selected.
  • FIG. 2 shows a simulation of a Central Activities Pattern (abbreviated CAP).
  • This brain activity provides an image of the ambient sound, this image being composed of the sound detected by both ears of a person.
  • Hearing with two ears, binaural hearing, is of essential importance for building up a direction-sensitive sound image of the surrounding area. Sound from a determined angle reaches the closest ear before the other ear.
  • the brain can hereby distinguish between sound from different directions.
  • The direction from which a sound comes is registered by the brain as a so-called internal delay (Δτ_i).
  • Attention is usually focussed naturally on a sound from the direction in which the face is pointed, such as a person with whom someone is conversing. Attention can however also be consciously focussed on another sound from a different direction.
  • the CAP then changes such that the Central Amplitude Spectrum corresponds with the source signal of the sound source from this determined angle relative to the nose plane.
  • The nose plane is the plane through the nose perpendicular to an imaginary line connecting the two ears.
  • the consciousness has a mechanism which can break this attention in case of emergency.
  • a siren for instance shifts attention immediately.
  • a person listens consciously to the emergency signal from this siren. Motorists can for instance thus respond adequately and immediately to an approaching ambulance.
  • The Central Activities Pattern is a three-dimensional graph in which the neural activity in the brain (P) is plotted against the internal time delay (Δτ_i) for different frequencies (f).
  • The Central Amplitude Spectrum can be determined by determining at which internal time delay all frequencies (f) display a peak in neural activity (P), in this case at Δτ_s. This can be done in practice by superimposing all P(Δτ_i) graphs and locating the highest peak, or the maximum, of the sum graph. Mathematically, this corresponds to a squaring of the original amplitude spectrum in order to determine the absolute intensity therefrom, followed by an addition over the frequency domain in order to determine said maximum.
  • The Central Amplitude Spectrum is then determined by plotting P against f for the relevant Δτ_s at which this maximum occurs. This is then the spectrum associated with the sound to which the user is listening and which is obtained by retransformation of the thus distilled Central Amplitude Spectrum.
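The selection of the Central Amplitude Spectrum from the CAP can be mimicked on a synthetic activity matrix (illustrative numbers only; in the device the real pattern would come from the cross-correlation stage):

```python
import numpy as np

# synthetic CAP: activity P indexed by frequency (rows) and internal
# delay (columns); all frequencies peak at one and the same delay
rng = np.random.default_rng(2)
n_freq, n_tau, i_true = 32, 41, 25
P = 0.1 * rng.random((n_freq, n_tau))   # incoherent background activity
P[:, i_true] += 1.0                     # coherent peak at delay index 25

# square the amplitudes, add over the frequency domain per delay,
# and take the delay index at which the sum is maximal
i_s = int(np.argmax(np.sum(P ** 2, axis=0)))
central_amplitude_spectrum = P[:, i_s]  # spectrum at the specific delay
```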
  • the source sound to which a person wishes to listen is situated halfway between the two sound sensors, so equally remote from both sound sensors.
  • the user of the device can him/herself set a different sound by selecting a different angle.
  • the device provides manually adjustable setting means for this purpose.
  • the brain activity is determined by means of an electronic circuit which mimics the brain structure for the hearing.
  • the electronic circuit comprises a mathematical cross- correlation unit which performs a cross-correlation per frequency on the signals received by both sound sensors. This cross-correlation unit produces a relation per frequency between intensity and angle which forms a mimicking of the relation between neural activity and internal delay. With these graphs per frequency a CAP can be simulated, from which a Central Amplitude Spectrum can be derived in the above indicated manner.
  • the signal for processing is obtained by Fourier transformations or a comparable analysis to the frequency domain. These Fourier analyses take place per signal input and are added together for a number of delay values in order to produce, as it were, a kind of cross-correlation thereof. Because the Fourier analysis produces narrow-band components, the cross-correlation per component is a periodic function. These functions are added together in the frequency domain. Selection of the addition information then takes place on the basis of the maximum to be determined therefrom. This selection corresponds to the direction of the most important, and therefore probably desired source. Signal reconstruction is carried out by means of inverse analysis although, if desired, it can be replaced by or supplemented with other methods. This reconstruction is performed on the selected spectrum.
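The statement that each narrow-band cross-correlation is periodic in the trial delay, while the addition over the frequency domain shows one dominant maximum, can be illustrated with a synthetic single-source example (all parameters are illustrative assumptions):

```python
import numpy as np

fs, n = 8000.0, 256
f = np.fft.rfftfreq(int(n), 1 / fs)     # the narrow-band components
true_delay = 3 / fs                     # time difference of the source
taus = np.arange(-10, 11) / fs          # trial delays

# per component k, the cross-correlation versus trial delay is a
# cosine, hence periodic: row k is cos(2*pi*f_k*(tau - true_delay))
per_component = np.cos(2 * np.pi * np.outer(f, taus - true_delay))

# adding the periodic functions over the frequency domain leaves a
# single dominant maximum at the true delay
summed = per_component.sum(axis=0)
best_tau = taus[int(np.argmax(summed))]
```

The individual cosines disagree everywhere except at the true delay, where every component contributes its maximum simultaneously.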
  • FIG. 3-5 show schematic representations of different implementation options of the sound processing device according to the invention in a hearing aid.
  • Such an apparatus normally comprises an in-the-ear part with at least an output for the sound from a further electroacoustic converter, such as a loudspeaker, optionally coupled to a behind-the-ear part (BTE) with other components.
  • FIG. 3 shows a schematic view of a multi-application sound processing module, designated a processing unit, in which the sound processing device according to the invention is applied, for instance one as specified above and shown with reference to figure 1, with signal inputs and outputs IN_R, IN_L and OUT.
  • the processing module can comprise one or more outputs OUT. These are not necessarily connected to a further converter L, but can also serve as signal source for other devices, for instance other hearing aids or implants, or an expansion unit.
  • Converters M_R, M_L on the input can take a direction-sensitive form.
  • Headsets, in particular as BTE part, can be applied in combination with the transfer module of figure 3, as shown in figure 4.
  • the headsets comprise respectively the first and second electroacoustic converter (microphone) for receiving the signal at respectively the left and right ear.
  • the signal analysis can take place in both headset/BTE parts, wherein the analysis information is sent to a separate processing unit.
  • the information could here be processed in the matrix and the selection information sent back to the headset/BTE part, wherein this part can make the correct reconstruction. Selection could also take place in a single apparatus, wherein the other apparatus only supplies the phase-related information.
  • the one apparatus sends the selection information to the other apparatus, and vice versa.
  • the transfer of information and audio can optionally take place wirelessly between headsets and module and/or mutually between headsets.
  • the headsets and/or module can be equipped with a plurality of microphones.
  • Another variant could be that all information is processed by a separate processing unit.
  • the processing unit carries out both the selection and the signal reconstruction.
  • the signal is then sent to the headsets, optionally with hearing aid operations.
  • A further variant of the system could be provided with a direction-sensitive electroacoustic converter. This can consist of a plurality of microphones, for a better directing action of the converters such as in a microphone array, or a single direction-sensitive converter.
  • a further variant could be that a plurality of converters are placed separately and are individually coupled to the processing unit.
  • the processing unit could also be situated in a headset/BTE. Use can also be made in these variants of both wired and wireless connections for the mutual information transfer.
  • the device can be built into a housing with an attractive design.
  • the classic hearing aid can hereby be replaced by a more appealing and fashionable one.
  • a hearing- impaired person hereby attracts positive attention.
  • When he/she wears the hearing aid the impression is given that he/she is wearing one of the most advanced telephones or audio players, which may result in admiring reactions from his/her surroundings.
  • an optionally distributed system which comprises a sound processing device according to the invention as well as a mobile telephone, an audio player and/or an associated headset, wherein the device can moreover have a provision for responding to emergency signals.
  • a user can for instance then listen comfortably to his/her MP3 player in a noisy building excavation, optionally phone home and still be alert to significant sounds such as alarm and emergency signals, which are always given priority.
  • the device will ensure that he/she discerns substantially only this important priority signal.
  • the necessary logic and experience can also be incorporated into the apparatus.
  • Known emergency signals can thus be stored in the device so that they are recognized.
  • a possibility could also be created for the user to add important signals and input priorities into the unit. In consultation with the civil authorities fixed priority could optionally be given to for instance the air-raid alarm.
  • Instead of a Fourier transformation, or for instance a Laplace transformation or other standard mathematical translation to the frequency domain, use can for instance also be made of a so-called wavelet transformation.
  • In a wavelet transformation, as in Fourier analysis, the signal is represented in different bins. The distance of the bins in relation to the frequency is however not uniform. Different sample rates (so-called levels) are used for different frequency ranges. This means that a wavelet has multiple resolutions, wherein the low frequencies are proportionally accurate in relation to the higher ones, while in Fourier analysis the higher frequencies are more accurate.
  • There are different types of wavelet transformation method. The advantage hereof is always the relatively great accuracy of the lower frequencies at short frame lengths.
  • the invention is based on the selection of frequency spectra by means of phase information in a manner comparable to a natural process as takes place in our brain.
  • the phase information is supplied from two (or more) separate and different signals by a transformation from time domain to frequency domain, in particular a Fourier transformation, such as the brain also receives different signals from both ears.
  • the phase information is determined per frequency and can be arranged in a matrix. From this matrix the frequencies with corresponding phase are selected for signal reconstruction. This selection can be preset or take place automatically (tracking mode). In the latter case the phase is chosen at which an addition of the signals over the frequency domain, which have optionally been further processed mathematically, produces a maximum.


Abstract

A method for sound processing employs two sound sensors. The signals from these sensors are digitized, subjected to a Fourier transformation and then to a cross-correlation. With this cross-correlation a fictitious Central Activities Pattern is created, from which a Central Spectrum is determined which represents a desired signal from a desired sound source. This Central Spectrum is subjected to an inverse Fourier transformation in order to recreate the desired signal.

Description

Method and device for sound processing and hearing aid
The invention relates to a method for processing sound from a sound source in the midst of ambient sound, wherein a first sound signal is taken from a first electroacoustic converter, wherein a second sound signal is taken from a second electroacoustic converter spatially separated from the first electroacoustic converter, wherein said sound signals are brought into digital form and subjected to a transformation to a frequency domain. The invention also relates to a sound processing device which is able and adapted to perform such a method, and to a hearing aid which comprises such a sound processing device.
Such a method is known from an American patent application with publication number US 2001/0031053 Al. This document describes a method in which a useful source signal from a desired sound source is selected and source signals from other, unwanted sound sources are suppressed. For this purpose different sound signals are received by a set of spatially separated microphones and presented to multiple delay lines after digitization and Fourier transformation. Output signals are taken from the delay lines, which signals are mutually combined to determine coincidence patterns in order to localize therefrom the position of an undesired sound source. Source signals of the thus localized, undesired sound sources are then selectively suppressed in order to produce a less disturbed output signal.
A drawback of this method is that the delay lines applied here impose a significant time delay between the input signals and the output signals, which results in synchronization loss. This is particularly disturbing and undesirable in the case of speech signals. The known method and device moreover require relatively heavy, complex arithmetical functions.
An object of the invention includes providing a method and device for sound processing, with which a useful source signal can be isolated in unambiguous and simple manner and to a satisfactory extent from undesired source signals caused by the ambient sound. In order to achieve the intended object a method of the type stated in the preamble has the feature according to the invention that said sound signals are combined into a first aggregate function in the frequency domain, that both sound signals are translated over time differences Δτi or phase differences Δφi and are combined to form further aggregate functions in the frequency domain, that said first and further aggregate functions per time difference Δτi or phase difference Δφi over the frequency domain are at least added in order to determine therefrom a specific time difference Δτs or specific phase difference Δφs at which a maximum occurs, and that the aggregate function corresponding to said specific time difference Δτs or phase difference Δφs is retransformed from the frequency domain to a time domain in order to obtain an output signal therefrom.
A natural process is thus simulated, such as takes place in the human brain and in which a direction of origin of sound is manifested as an internal time delay or phase difference between the sound received by the right ear and the sound received by the left ear. The invention is based here on the recognition that the described superposition of signals translated in time or phase will substantially provide an amplification and corresponding maximum for the direction of the probably desired sound source, while from the other directions a statistical averaging occurs of the signals originating from independent, undesired sound sources, which are thereby not, or hardly, mutually correlated. By determining and isolating this preferred direction from the total amplitude spectrum in accordance with the invention, the disturbing influence of the ambient sound can thus be suppressed. The said translations with a time delay in the time domain are reduced in the frequency domain to relatively simple and easily performed mathematical processes on both input signals, whereby a retardation of the signal can be limited to a minimum and adverse synchronization errors can be avoided. The invention is based more particularly on the recognition that a translation over a time delay in the time domain is translated into a corresponding phase shift in the frequency domain, which can be performed in relatively simple manner as a multiplication with a complex function for the frequencies of the Fourier or other frequency series which resulted from the transformation. A particular embodiment of the method according to the invention has the feature here that the transformation to the frequency domain comprises a Fourier transformation and the transformation to the time domain an associated inverse Fourier transformation.
Although other transformations to the frequency domain also enable per se the desired selection of the preferred direction according to the invention, a Fourier transformation is standardized and can therefore be retransformed directly to the original signal. The computational load hereby remains limited.
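By way of illustration, and not forming part of the original disclosure, the equivalence between a delay in the time domain and a multiplication with a complex function in the frequency domain can be verified with a short numerical sketch (the sample rate, test tone and delay below are arbitrary assumptions):

```python
import numpy as np

# A delay of tau seconds in the time domain equals multiplication by
# exp(-j*2*pi*f*tau) in the frequency domain (exact for integer-sample
# circular shifts).
fs = 8000                                  # assumed sample rate (Hz)
t = np.arange(256) / fs
x = np.sin(2 * np.pi * 440 * t)            # example tone

delay = 4                                  # delay in whole samples
X = np.fft.rfft(x)
f = np.fft.rfftfreq(x.size, d=1 / fs)      # frequency of each bin (Hz)
X_shifted = X * np.exp(-2j * np.pi * f * delay / fs)
x_delayed = np.fft.irfft(X_shifted, n=x.size)

# The result matches an explicit circular shift in the time domain
assert np.allclose(x_delayed, np.roll(x, delay), atol=1e-9)
```

Because the phase multiplication operates on the already-computed spectrum, no additional buffering of the time signal is needed, which is consistent with the minimal signal retardation claimed above.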
Different mathematical methods can per se be followed in order to form said aggregate functions from the two input signals. A particularly practical embodiment of the method according to the invention is however characterized in that the further aggregate functions are obtained by a cross-correlation of functions derived by translation over discrete time differences Δτi or discrete phase differences Δφi of the sound signals. Making use of readily obtainable standard components or software modules, such a cross-correlation can be implemented in relatively simple manner.
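As a hedged illustration of such a simple implementation (the sample rate, signals and delay below are assumptions for demonstration only), the cross-correlation of the two converter signals can be computed directly from their spectra with standard FFT routines:

```python
import numpy as np

fs = 8000                                  # assumed sample rate
rng = np.random.default_rng(0)
s = rng.standard_normal(1024)              # broadband source signal
true_delay = 7                             # assumed delay (samples) at the
x1 = s                                     # second electroacoustic converter
x2 = np.roll(s, true_delay)

# Circular cross-correlation via the frequency domain: IFFT(conj(X1) * X2)
X1, X2 = np.fft.fft(x1), np.fft.fft(x2)
xcorr = np.fft.ifft(np.conj(X1) * X2).real
lag = int(np.argmax(xcorr))
if lag > x1.size // 2:                     # wrap circular lag to a signed value
    lag -= x1.size
print(lag)                                 # estimated delay: 7
```

The lag at which the correlation peaks corresponds to the time difference between the two converters, i.e. to the direction of the dominant source.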
In some cases the dynamic, autonomous choice of direction according to the invention can result in an unintended focussing on an at least temporarily undesired sound source, while the user would in fact like to focus on a different sound source. In order to make provision for this, a preferred embodiment of the method according to the invention has the feature that a chosen time difference Δτman or chosen phase difference Δφman is set manually in order to be imposed respectively as specific time difference Δτs or specific phase difference Δφs. The automatically selected time difference or phase difference can thus be overruled within the method by the value thus set by the user him/herself, so as to thus process primarily the sound from the direction corresponding therewith.
The output signal can be directly converted and supplied to the user, although a further preferred embodiment of the method according to the invention has the feature that the output signal is processed and/or amplified on the basis of an optionally predetermined processing characteristic, and is then fed to at least one further electroacoustic converter for the purpose of generating an output sound therewith. An improvement in the quality of the output signal can thus be implemented before sound is produced therefrom.
Different forms of electroacoustic converters can per se be applied for the purpose of receiving or generating sound. A specific embodiment of the method according to the invention has the feature however that the first and second electroacoustic converters comprise a microphone, and the at least one further electroacoustic converter comprises at least one loudspeaker. These components are commercially available in a selection of quality variants and in size variations so that a suitable choice will be possible in practically all cases.
In order to achieve the stated objective, a sound processing device according to the invention comprises a first electroacoustic converter from which a first sound signal can be taken, a second electroacoustic converter which is spatially separated from the first electroacoustic converter and from which a second sound signal can be taken, and electronic processing means which are able and adapted to perform the method according to the invention and from which the output signal can be taken. Such a device can be integrated compactly in a shared housing or be distributed as desired among individual components, which are in that case mutually coupled for wired or wireless operation.
The invention is particularly suitable for application in or with a hearing aid. According to the invention a hearing aid, comprising a further electroacoustic converter which is able and adapted to receive an electronic sound signal and generate an output sound to an ear of the user, and comprising a sound processing device with an input coupled to the processing means in order to receive the output signal and with an output coupled to the further converter in order to supply the processed and/or amplified output signal to the further converter, is therefore characterized in that a sound processing device according to the invention is provided therein or therewith. The invention thus provides an aid with which a hearing-impaired person may regain to at least a certain extent a natural binaural perception.
A particular embodiment of the hearing aid is characterized according to the invention in that a portable expansion unit is provided, which is coupled operationally to the sound processing device, and that the expansion device comprises manually operated setting means for the purpose of setting a chosen time difference Δτman or chosen phase difference Δφman and imposing thereof on the sound processing device as respectively specific time difference Δτs or chosen phase difference Δφs. A preferred direction can thus be set manually and imposed on the sound processing in order to direct the device with priority in this preferred direction. A particularly user-friendly embodiment of the hearing aid according to the invention has the feature here that the expansion unit is coupled wirelessly to the sound processing device.
In addition to or instead of being used for the adjustment of the preferred direction, the expansion unit can otherwise also be provided with further functions and modules. The sound processing device according to the invention can thus be wholly or partially accommodated therein, and the same applies for sound processing means such as a digital signal processor (DSP) for further modelling and optional further amplifying of the output signal. With a view to the addition of further functions and options for use, a further embodiment of the hearing aid according to the invention has the feature here that the expansion unit comprises at least one input for connection of at least one further sound source.
In a further particular embodiment a method for sound processing is provided, in which a first signal from a first sound sensor and a second signal from a second sound sensor are processed, which signals comprise sound from one or more sound sources, wherein these sound sources comprise one or more undesired sound sources and a desired sound source generating a desired source signal, and wherein the first signal and the second signal are digitized, subjected to a Fourier transformation and subsequently to a cross-correlation, characterized according to the invention in that a fictitious Central Activities Pattern is created by means of the cross-correlation, from which a Central Amplitude Spectrum is determined, this Central Amplitude Spectrum being subjected to an inverse Fourier transformation in order to recreate the desired source signal. Nor is it necessary here to deliberately suppress undesired source signals. Because almost only the spectrum of the desired source signal is retransformed, an undesired source signal is no longer automatically present in a processed sound, or hardly so. With this method a user can comfortably receive a desired source signal among disturbing, undesired source signals. This is particularly useful for hearing-impaired people. It is known that, as the hearing deteriorates, the skill of signal selection also decreases. This is no longer found to be resolved by amplifying all sound. Owing to the method according to the invention a hearing-impaired person can once again distinguish a desired source signal from undesired signals.
Persons with reasonable to good hearing can otherwise also use the method and device advantageously. It can for instance be used to listen to a sound source such as music by a band performing in the midst of interference sounds. It is also possible to envisage ear protection in physical engineering or mechanical engineering operations, in which loud, harmful disturbing sounds can be effectively suppressed and useful signals still transmitted. The ear of the user is then spared while he/she can still discern relevant sounds.
A further preferred embodiment of the method for sound processing according to the invention is characterized in that the desired sound source can be adjusted by a user. The user can then select as desired sound source a sound source other than the most obvious (normally that located in the viewing direction) and has more freedom in listening to the surrounding area.
A further embodiment of the method for sound processing according to the invention is characterized in that the desired source signal is supplied to at least a loudspeaker. In this way the desired source signal coming from the desired sound source can be recreated at a location desired by the user. The user can thus create a situation in which substantially only the desired source signal is audible. Yet another embodiment of the method for sound processing according to the invention is characterized in that the desired source signal is supplied wirelessly to the loudspeaker. This results in an increased flexibility in the use of the method. It is easier to allow a greater distance between the sound sensors and the loudspeaker. The sound sensors and the loudspeaker can moreover be moved more easily relative to each other without cords forming a limitation or becoming entangled.
The invention also relates to a device adapted to perform the method according to the invention, comprising at least two sound sensors and two first transformation units which are connected thereto and in which the sound signals from the sound sensors are digitized and subjected to a Fourier transformation. In respect of the device the invention is characterized in that the device also comprises an electronic unit which applies a cross-correlation per frequency to both transformed signals, then adds them and subsequently determines the time delay at which the maximum of the resulting graph lies, and a second transformation unit which subjects the signal frequency spectrum associated with this time delay to an inverse Fourier transformation.
An embodiment of the device according to the invention is characterized in that the device comprises means for supplying the desired source signal to at least one ear. The device is hereby suitable to serve as an aid for the hearing, such as a hearing aid. Using the device a hearing-impaired person can function better in his/her social environment because he/she will be better able to discern speech in his/her surroundings. As the hearing of a person deteriorates, he/she thereby surprisingly also loses the ability to hear from which direction a sound originates. Source signals from different sound sources hereby blend into a disordered mass of sound. It is then very difficult, if not almost impossible, to understand the speech of a particular person in a noisy environment. The invention is highly suitable for the purpose of obviating this drawback. The simulation of a Central Spectrum as according to a particular embodiment of the invention provides a spatial perception which approximates to at least a certain extent natural binaural hearing. A further embodiment of the device according to the invention is characterized in that it comprises an interruption unit which is adapted to perform automatic selection of a priority signal. Thus achieved is that an alarm source signal can for instance still reach the user even if he/she has less than average hearing. The interruption unit can for instance thus ensure that a hearing-impaired person hears the siren of an ambulance or an air-raid alarm, even if he/she is listening to a different sound. In this case the siren is recognized by the interruption unit as a priority sound source, and the Central Spectrum shifts to the signal thereof. In addition to use by the hearing-impaired, this can also be advantageously applied for those of normal hearing, who are for instance wearing hearing protection in order to counteract potentially dangerous sound levels.
The device according to the invention can then for instance be utilized as noise suppressor and be integrated into such hearing protection means. The user thus remains receptive and accessible to alarm and emergency signals.
Another further embodiment of the device according to the invention is characterized in that it comprises a connecting means for connection to a communication means, in particular a mobile telephone. Through the combination with a mobile telephone a user can for instance be provided with a multifunctional device.
Yet another embodiment of the device according to the invention is characterized in that it comprises a connecting means for connection to an audio player, in particular an MP3 player or the like. Through the combination with an audio player the user can also be provided with a multifunctional device.
The invention will be further elucidated hereinbelow on the basis of exemplary embodiments of a method for sound processing and a device according to the invention, shown in the drawings. Herein:
Figure 1 shows a schematic representation of an embodiment of the method according to the invention; and Figure 2 shows a Central Activities Pattern;
Figure 3 shows a basic model of a processing module in which the device according to the invention is applied; and Figure 4 shows the implementation of the processing module of figure 3 in a hearing aid.
The figures are otherwise purely schematic and not drawn to scale. For the sake of clarity some dimensions in particular may be exaggerated to a greater or lesser extent. Corresponding parts are designated in the figures with the same reference numeral, unless expressly stated otherwise.
Figure 1 shows a schematic representation of an embodiment of the method according to the invention. Two sound sensors (S1, S2) receive sound from a plurality of sources, including a desired source (not shown). The first signal of the first sound sensor (S1) and the second signal of a second sound sensor (S2) are each digitized and subjected to a Fourier transformation (for instance an FFT). A cross-correlation (CES) is then applied to both signals. A simulation of a Central Activities Pattern (CAP) is composed from the results of the cross-correlation (CES). A Central Amplitude Spectrum, which shows the desired source signal, is determined herefrom by means of selection (SEL). The Central Amplitude Spectrum is subjected to an inverse Fourier transformation (IFFT), whereby the desired source signal from the desired sound source is recreated. This signal is supplied to two loudspeakers (L1, L2). If desired, use can otherwise also be made of only one loudspeaker in order to simplify the device, or more loudspeakers can be selected.
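Purely as an illustration of this processing chain, and under simplifying assumptions that are not part of the original disclosure (single-frame processing, integer-sample delays, hand-chosen test signals), the selection of the desired source can be sketched as follows:

```python
import numpy as np

fs, n = 8000, 512
t = np.arange(n) / fs
rng = np.random.default_rng(1)

# Desired source (two tones on exact FFT bins) and a weaker broadband
# interferer, arriving at the second sensor with different delays.
desired = np.sin(2 * np.pi * 20 * fs / n * t) + np.sin(2 * np.pi * 50 * fs / n * t)
interferer = 0.2 * rng.standard_normal(n)
d_des, d_int = 3, -5                       # assumed inter-sensor delays (samples)

x1 = desired + interferer
x2 = np.roll(desired, d_des) + np.roll(interferer, d_int)

# Fourier transformation of both digitized signals
X1, X2 = np.fft.fft(x1), np.fft.fft(x2)
f = np.fft.fftfreq(n, d=1 / fs)

# Apply candidate time differences as phase shifts, add the aggregate
# functions over the frequency domain and keep the one of maximal power.
candidates = np.arange(-8, 9)
best_power, tau_s, best_S = -1.0, 0, None
for tau in candidates:
    S = X1 + X2 * np.exp(2j * np.pi * f * tau / fs)
    p = np.sum(np.abs(S) ** 2)
    if p > best_power:
        best_power, tau_s, best_S = p, int(tau), S

y = np.fft.ifft(best_S).real / 2           # retransform the selected spectrum
print(tau_s)                               # the delay of the desired source (here 3)
```

At the delay of the desired source both spectra add coherently, so the summed power is maximal there, and the retransformed signal y approximates the desired source; a practical device would repeat this over successive windowed frames.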
Use is made in this example of two microphones as electroacoustic converters, from which both the input signals are taken. If desired, more microphones can be applied in order to optimize the directivity. This could be optimized by using filter techniques and/or existing directional techniques, such as for instance in the case of a microphone array. The input signals of the algorithm do not otherwise have to come directly from transducers, other electrical sources also being possible, such as for instance a pre-filter or digital sound processor (DSP), for instance for optimizing the input signals. The same applies mutatis mutandis for the output signal of the algorithm.

Figure 2 shows a simulation of a Central Activities Pattern (abbreviated CAP). Such a CAP can be conceived as a schematic representation of human brain activity in response to ambient sound. This brain activity provides an image of the ambient sound, this image being composed of the sound detected by both ears of a person. Hearing with two ears, binaural hearing, is of essential importance for building up a direction-sensitive sound image of the surrounding area. Sound from a particular angle reaches the closest ear before the other ear. The brain can hereby distinguish between sound from different directions. The direction from which a sound comes is registered by the brain as a so-called internal delay (Δτi).
Attention is usually focussed naturally on a sound from the direction in which the face is pointed, such as a person with whom someone is conversing. Attention can however also be consciously focussed on another sound from a different direction. The CAP then changes such that the Central Amplitude Spectrum corresponds with the source signal of the sound source from this particular angle relative to the nose plane. The nose plane is the plane through the nose perpendicular to an imaginary line connecting the two ears. In addition, the consciousness has a mechanism which can break this attention in case of emergency. A siren for instance shifts attention immediately. A person then listens consciously to the emergency signal from this siren. Motorists can for instance thus respond adequately and immediately to an approaching ambulance.
The Central Activities Pattern is a three-dimensional graph in which the neural activity in the brain (P) is plotted against the internal time delay (Δτi) for different frequencies (f). The Central Amplitude Spectrum can be determined by determining at which internal time delay all frequencies (f) display a peak in neural activity (P), in this case at Δτs. This can be done in practice by superimposing all P(Δτi) graphs and locating the highest peak, or the maximum, of the sum graph. Mathematically, this corresponds to a squaring of the original amplitude spectrum in order to determine the absolute intensity therefrom, followed by an addition over a frequency domain in order to determine said maximum. The Central Amplitude Spectrum is then determined by plotting the P against f for the relevant Δτs at which this maximum occurs. This is then the spectrum associated with the sound to which the user is listening and which is obtained by retransformation of the thus distilled Central Amplitude Spectrum.
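The described superposition of the P(Δτi) graphs can be mimicked numerically; the sketch below (an illustration with an assumed broadband source and an assumed internal delay, not part of the original disclosure) builds a CAP-like matrix of per-frequency cross-correlations and locates Δτs as the maximum of their sum:

```python
import numpy as np

fs, n = 8000, 256
rng = np.random.default_rng(2)
s = rng.standard_normal(n)                 # broadband source
d = 6                                      # internal delay to recover (samples)
x1, x2 = s, np.roll(s, d)

X1, X2 = np.fft.fft(x1), np.fft.fft(x2)
taus = np.arange(-16, 17)                  # candidate internal delays
k = np.arange(n)

# CAP-like matrix P[frequency, delay]: each row is the narrow-band
# cross-correlation of one frequency component, periodic in the delay.
P = np.real(np.conj(X1)[:, None] * X2[:, None]
            * np.exp(2j * np.pi * np.outer(k, taus) / n))
summed = P.sum(axis=0)                     # superimpose all P(delta-tau) graphs
tau_s = int(taus[np.argmax(summed)])
print(tau_s)                               # highest peak of the sum graph: 6
```

Each row of P is periodic because the Fourier analysis yields narrow-band components, exactly as noted further below; only the sum over the frequency domain exhibits a single dominant maximum.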
In a device according to the invention it is assumed in the first instance that the source sound to which a person wishes to listen is situated halfway between the two sound sensors, so equally remote from both sound sensors. The user of the device can him/herself set a different sound by selecting a different angle. The device provides manually adjustable setting means for this purpose.
The brain activity is determined by means of an electronic circuit which mimics the brain structure for the hearing. The electronic circuit comprises a mathematical cross-correlation unit which performs a cross-correlation per frequency on the signals received by both sound sensors. This cross-correlation unit produces a relation per frequency between intensity and angle which mimics the relation between neural activity and internal delay. With these graphs per frequency a CAP can be simulated, from which a Central Amplitude Spectrum can be derived in the above indicated manner.
The signal for processing is obtained by Fourier transformations or a comparable analysis to the frequency domain. These Fourier analyses take place per signal input and are added together for a number of delay values in order to produce, as it were, a kind of cross-correlation thereof. Because the Fourier analysis produces narrow-band components, the cross-correlation per component is a periodic function. These functions are added together in the frequency domain. Selection of the addition information then takes place on the basis of the maximum to be determined therefrom. This selection corresponds to the direction of the most important, and therefore probably desired source. Signal reconstruction is carried out by means of inverse analysis although, if desired, it can be replaced by or supplemented with other methods. This reconstruction is performed on the selected spectrum. The sound processing device according to the invention can be applied particularly advantageously in a hearing aid for the hearing-impaired. Figures 3-5 show schematic representations of different implementation options of the sound processing device according to the invention in a hearing aid. Such an apparatus normally comprises an in-the-ear part with at least an output for the sound from a further electroacoustic converter, such as a loudspeaker, optionally coupled to a behind-the-ear part (BTE) with other components. These parts are designated individually or together as headset.
Figure 3 shows a schematic view of a multi-application sound processing module, designated a processing unit, in which the sound processing device according to the invention is applied, for instance one as specified above and shown with reference to figure 1, with signal inputs and outputs INR, INL and OUT. The processing module can comprise one or more outputs OUT. These are not necessarily connected to a further converter L, but can also serve as signal source for other devices, for instance other hearing aids or implants, or an expansion unit. Converters MR, ML on the input can take a direction-sensitive form.
As application in a hearing aid use can for instance be made of two headsets, in particular as BTE part, in combination with the processing module of figure 3 as shown in figure 4. The headsets comprise respectively the first and second electroacoustic converter (microphone) for receiving the signal at respectively the left and right ear. This thus provides for the spatial distribution of the two converters within the scope of the invention. The signal analysis can take place in both headset/BTE parts, wherein the analysis information is sent to a separate processing unit. The information could here be processed in the matrix and the selection information sent back to the headset/BTE part, wherein this part can make the correct reconstruction. Selection could also take place in a single apparatus, wherein the other apparatus only supplies the phase-related information. The one apparatus sends the selection information to the other apparatus, and vice versa. The transfer of information and audio can optionally take place wirelessly between headsets and module and/or mutually between headsets. The headsets and/or module can be equipped with a plurality of microphones.
Another variant could be that all information is processed by a separate processing unit. The processing unit carries out both the selection and the signal reconstruction. The signal is then sent to the headsets, optionally with hearing aid operations. A further variant of the system could be provided with a direction-sensitive electroacoustic converter. This can consist of a plurality of microphones, for a better directing action of the converters such as in a microphone array, or a single direction- sensitive converter. A further variant could be that a plurality of converters are placed separately and are individually coupled to the processing unit. The processing unit could also be situated in a headset/BTE. Use can also be made in these variants of both wired and wireless connections for the mutual information transfer.
The device can be built into a housing with an attractive design. The classic hearing aid can hereby be replaced by a more appealing and fashionable one. A hearing-impaired person hereby attracts positive attention. When he/she wears the hearing aid the impression is given that he/she is wearing one of the most advanced telephones or audio players, which may result in admiring reactions from his/her surroundings.
In addition, many functions can be combined in the device according to the invention. It is thus possible to provide an optionally distributed system which comprises a sound processing device according to the invention as well as a mobile telephone, an audio player and/or an associated headset, wherein the device can moreover have a provision for responding to emergency signals. A user can for instance then listen comfortably to his/her MP3 player in a noisy building excavation, optionally phone home and still be alert to significant sounds such as alarm and emergency signals, which are always given priority. As soon as the user is addressed, or when an emergency signal sounds, the device will ensure that he/she discerns substantially only this important priority signal. The necessary logic and experience can also be incorporated into the apparatus. Known emergency signals can thus be stored in the device so that they are recognized. A possibility could also be created for the user to add important signals and input priorities into the unit. In consultation with the civil authorities fixed priority could optionally be given to for instance the air-raid alarm.
Although the invention has been elucidated in the foregoing on the basis of a single exemplary embodiment, it will be apparent that the invention is by no means limited thereto. On the contrary, many variations and embodiments are still possible within the scope of the invention.
Instead of a Fourier transformation, or for instance a Laplace transformation or other standard mathematical translation to the frequency domain, use can for instance also be made of a so-called wavelet transformation. In wavelet transformation, as in Fourier analysis, the signal is represented in different bins. The distance of the bins in relation to the frequency is however not uniform. Different sample rates (so-called levels) are used for different frequency ranges. This means that a wavelet has multiple resolutions, wherein the low frequencies are proportionally accurate in relation to the higher ones, while in Fourier analysis the higher frequencies are more accurate. There are different types of wavelet transformation method. The advantage hereof is always the relatively great accuracy of the lower frequencies at short frame lengths.
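The multi-resolution character of such a transformation can be illustrated with the simplest case, the Haar wavelet (an illustrative sketch only, not part of the original disclosure; practical implementations would typically use richer wavelet families):

```python
import numpy as np

def haar_dwt(x, levels):
    """Return [detail_level1, ..., detail_levelN, approximation].

    Each level halves the sample rate: level-1 details cover the highest
    frequency band with many coefficients, deeper levels cover ever lower
    and proportionally narrower bands with fewer coefficients.
    """
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)   # low-pass, downsample
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)   # high-pass, downsample
        coeffs.append(detail)
        a = approx
    coeffs.append(a)
    return coeffs

x = np.arange(8, dtype=float)
c = haar_dwt(x, 3)
print([len(v) for v in c])   # [4, 2, 1, 1]: fewer, finer bands at low frequency
```

Because the Haar transform is orthonormal, the signal energy is preserved across all levels, so the decomposition loses no information while trading time resolution for frequency resolution per band.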
The invention is based on the selection of frequency spectra by means of phase information in a manner comparable to a natural process as takes place in our brain. The phase information is supplied from two (or more) separate and different signals by a transformation from time domain to frequency domain, in particular a Fourier transformation, such as the brain also receives different signals from both ears. The phase information is determined per frequency and can be arranged in a matrix. From this matrix the frequencies with corresponding phase are selected for signal reconstruction. This selection can be preset or take place automatically (tracking mode). In the latter case the phase is chosen at which an addition of the signals over the frequency domain, which have optionally been further processed mathematically, produces a maximum.

Claims
1. Method for processing sound from a sound source in the midst of ambient sound, wherein a first sound signal is taken from a first electroacoustic converter, wherein a second sound signal is taken from a second electroacoustic converter spatially separated from the first electroacoustic converter, wherein said sound signals are brought into digital form and subjected to a transformation to a frequency domain, characterized in that said sound signals are combined into a first aggregate function in the frequency domain, that both sound signals are translated over time differences Δτi or phase differences Δφi and are combined to form further aggregate functions in the frequency domain, that said first and further aggregate functions per time difference Δτi or phase difference Δφi over the frequency domain are at least added in order to determine therefrom a specific time difference Δτs or specific phase difference Δφs at which a maximum occurs, and that the aggregate function corresponding to said specific time difference Δτs or phase difference Δφs is retransformed from the frequency domain to a time domain in order to obtain an output signal therefrom.
2. Method as claimed in claim 1, characterized in that the transformation to the frequency domain comprises a Fourier transformation and the transformation to the time domain an associated inverse Fourier transformation.
3. Method as claimed in claim 1 or 2, characterized in that the further aggregate functions are obtained by a cross-correlation of functions derived by translation of the sound signals over discrete time differences Δτi or discrete phase differences Δφi.
4. Method as claimed in claim 1, 2 or 3, characterized in that a chosen time difference Δτman or chosen phase difference Δφman is set manually in order to be imposed respectively as specific time difference Δτs or specific phase difference Δφs.
5. Method as claimed in one or more of the foregoing claims, characterized in that the output signal is processed and/or amplified on the basis of an optionally predetermined processing characteristic, and is then fed to at least one further electroacoustic converter for the purpose of generating an output sound therewith.
6. Method as claimed in claim 5, characterized in that the first and second electroacoustic converters comprise a microphone, and the at least one further electroacoustic converter comprises at least one loudspeaker.
7. Sound processing device, comprising a first electroacoustic converter from which a first sound signal can be taken, a second electroacoustic converter which is spatially separated from the first electroacoustic converter and from which a second sound signal can be taken, and comprising electronic processing means which are able and adapted to perform the method as claimed in one or more of the foregoing claims and from which the output signal can be taken.
8. Hearing aid comprising the sound processing device as claimed in claim 7, comprising a further electroacoustic converter which is able and adapted to receive an electronic sound signal and generate an output sound to an ear of the user, and comprising a sound processing device with an input coupled to the processing means in order to receive the output signal and with an output coupled to the further converter in order to supply the processed and/or amplified output signal to the further converter.
9. Hearing aid as claimed in claim 8, characterized in that a portable expansion unit is provided, which is coupled operationally to the sound processing device, and that the expansion unit comprises manually operated setting means for the purpose of setting a chosen time difference Δτman or chosen phase difference Δφman and imposing thereof on the sound processing device as respectively specific time difference Δτs or specific phase difference Δφs.
10. Hearing aid as claimed in claim 9, characterized in that the expansion unit is coupled wirelessly to the sound processing device.
11. Hearing aid as claimed in claim 9 or 10, characterized in that the expansion unit comprises at least one input for connection of at least one further sound source.
12. Method for sound processing, wherein a first signal from a first sound sensor and a second signal from a second sound sensor are processed, which signals comprise sound from one or more sound sources, wherein these sound sources comprise one or more undesired sound sources and a desired sound source generating a desired source signal, and wherein the first signal and the second signal are digitized, subjected to a Fourier transformation and subsequently to a cross-correlation, characterized in that a fictitious Central Activities Pattern is created by means of the cross-correlation by adding the cross-correlations per frequency, subsequently determining the time delay at which the maximum of the resulting graph lies, and then determining the frequency spectrum (Central Spectrum) associated with this time delay and subjecting it to an inverse Fourier transformation in order to recreate the desired source signal.
13. Method for sound processing as claimed in claim 12, characterized in that the desired sound source can be adjusted by a user.
14. Method as claimed in claim 12, characterized in that the desired signal is supplied to at least one loudspeaker.
15. Method as claimed in claim 14, characterized in that the desired signal is supplied wirelessly to the loudspeaker.
16. Device adapted to perform the method as claimed in any of the claims 12-15, comprising at least two sound sensors and two first transformation units which are connected thereto and in which the sound signals from the sound sensors are digitized and subjected to a Fourier transformation, characterized in that the device also comprises an electronic unit which applies a cross-correlation per frequency to both transformed signals, then adds them and subsequently determines the time delay at which the maximum of the resulting graph lies, and a second transformation unit which subjects the frequency spectrum associated with this time delay to an inverse Fourier transformation.
17. Device as claimed in claim 16, characterized in that the device comprises means for supplying the desired source signal to at least one ear.
18. Device as claimed in claim 16 or 17, characterized in that the device comprises an interruption unit which is adapted to perform automatic selection of a priority signal.
19. Device as claimed in claim 16, 17 or 18, characterized in that it comprises a connecting means for connection to a communication means, in particular a mobile telephone.
20. Device as claimed in claim 16, 17, 18 or 19, characterized in that it comprises a connecting means for connection to an audio player, in particular an MP3 player.
PCT/NL2008/050119 2007-02-28 2008-02-28 Method and device for sound processing and hearing aid WO2008105661A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NL2000510 2007-02-28
NL2000510A NL2000510C1 (en) 2007-02-28 2007-02-28 Method and device for sound processing.

Publications (1)

Publication Number Publication Date
WO2008105661A1 true WO2008105661A1 (en) 2008-09-04

Family

ID=39495845

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NL2008/050119 WO2008105661A1 (en) 2007-02-28 2008-02-28 Method and device for sound processing and hearing aid

Country Status (2)

Country Link
NL (1) NL2000510C1 (en)
WO (1) WO2008105661A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8693719B2 (en) 2010-10-08 2014-04-08 Starkey Laboratories, Inc. Adjustment and cleaning tool for a hearing assistance device
WO2014113891A1 (en) * 2013-01-25 2014-07-31 Hu Hai Devices and methods for the visualization and localization of sound
EP2928214B1 (en) 2014-04-03 2019-05-08 Oticon A/s A binaural hearing assistance system comprising binaural noise reduction
CN113470680A (en) * 2020-03-31 2021-10-01 新唐科技股份有限公司 Sound signal processing system and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030081504A1 (en) * 2001-10-25 2003-05-01 Mccaskill John Automatic camera tracking using beamforming
WO2004028203A2 (en) * 2002-09-18 2004-04-01 Stichting Voor De Technische Wetenschappen Spectacle hearing aid
EP1460769A1 (en) * 2003-03-18 2004-09-22 Phonak Communications Ag Mobile Transceiver and Electronic Module for Controlling the Transceiver
US20040240680A1 (en) * 2003-05-28 2004-12-02 Yong Rui System and process for robust sound source localization
EP1571875A2 (en) * 2004-03-02 2005-09-07 Microsoft Corporation A system and method for beamforming using a microphone array

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030081504A1 (en) * 2001-10-25 2003-05-01 Mccaskill John Automatic camera tracking using beamforming
WO2004028203A2 (en) * 2002-09-18 2004-04-01 Stichting Voor De Technische Wetenschappen Spectacle hearing aid
EP1460769A1 (en) * 2003-03-18 2004-09-22 Phonak Communications Ag Mobile Transceiver and Electronic Module for Controlling the Transceiver
US20040240680A1 (en) * 2003-05-28 2004-12-02 Yong Rui System and process for robust sound source localization
EP1571875A2 (en) * 2004-03-02 2005-09-07 Microsoft Corporation A system and method for beamforming using a microphone array

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8693719B2 (en) 2010-10-08 2014-04-08 Starkey Laboratories, Inc. Adjustment and cleaning tool for a hearing assistance device
US8848956B2 (en) 2010-10-08 2014-09-30 Starkey Laboratories, Inc. Standard fit hearing assistance device with removable sleeve
US9002049B2 (en) 2010-10-08 2015-04-07 Starkey Laboratories, Inc. Housing for a standard fit hearing assistance device
WO2014113891A1 (en) * 2013-01-25 2014-07-31 Hu Hai Devices and methods for the visualization and localization of sound
CN105073073A (en) * 2013-01-25 2015-11-18 胡海 Devices and methods for the visualization and localization of sound
US10111013B2 (en) 2013-01-25 2018-10-23 Sense Intelligent Devices and methods for the visualization and localization of sound
EP2928214B1 (en) 2014-04-03 2019-05-08 Oticon A/s A binaural hearing assistance system comprising binaural noise reduction
CN113470680A (en) * 2020-03-31 2021-10-01 新唐科技股份有限公司 Sound signal processing system and method
CN113470680B (en) * 2020-03-31 2023-09-29 新唐科技股份有限公司 Sound signal processing system and method

Also Published As

Publication number Publication date
NL2000510C1 (en) 2008-09-01

Similar Documents

Publication Publication Date Title
EP2873251B1 (en) An audio signal output device and method of processing an audio signal
JP5642851B2 (en) Hearing aid
US10123134B2 (en) Binaural hearing assistance system comprising binaural noise reduction
EP3013070B1 (en) Hearing system
KR101779641B1 (en) Personal communication device with hearing support and method for providing the same
EP3160162B2 (en) Hearing aid device for hands free communication
US8526649B2 (en) Providing notification sounds in a customizable manner
CN109951785A (en) Hearing devices and binaural hearing system including ears noise reduction system
EP2262285A1 (en) A listening device providing enhanced localization cues, its use and a method
US9253571B2 (en) Hearing apparatus for binaural supply and method for providing a binaural supply
CN109845296B (en) Binaural hearing aid system and method of operating a binaural hearing aid system
JP2019041382A (en) Acoustic device
WO2008105661A1 (en) Method and device for sound processing and hearing aid
CN105744455A (en) Method for superimposing spatial auditory cues on externally picked-up microphone signals
US8218800B2 (en) Method for setting a hearing system with a perceptive model for binaural hearing and corresponding hearing system
KR101971608B1 (en) Wearable sound convertor
EP4429276A1 (en) Synchronous binaural user controls for hearing instruments
EP4425958A1 (en) User interface control using vibration suppression
JPH08205293A (en) Hearing aid

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08712643

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC

122 Ep: pct application non-entry in european phase

Ref document number: 08712643

Country of ref document: EP

Kind code of ref document: A1

点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载