WO2021068000A1 - Breathing assistance based on real-time audio analysis - Google Patents
Breathing assistance based on real-time audio analysis
- Publication number: WO2021068000A1 (PCT/US2020/070608)
- Authority: WIPO (PCT)
- Prior art keywords
- breathing
- indicators
- tempo
- determining
- visual
- Prior art date
Classifications
- G16H20/70—ICT specially adapted for therapies or health-improving plans relating to mental therapies, e.g. psychological therapy or autogenous training
- A61B5/0816—Measuring devices for examining respiratory frequency
- A61B5/486—Biofeedback
- A61M21/02—Devices or methods for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis, analgesia
- G16H20/30—ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
- G16H50/20—ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
- A61M2021/0022—Stimulus by the tactile sense, e.g. vibrations
- A61M2021/0027—Stimulus by the hearing sense
- A61M2021/0044—Stimulus by the sight sense
- A61M2021/0088—Stimulus modulated by a simulated respiratory frequency
- A61M2205/3375—Acoustical, e.g. ultrasonic, measuring means
- A61M2205/502—User interfaces, e.g. screens or keyboards
Definitions
- the present invention generally relates to indicators for assisting with deep breathing and meditation.
- the present invention is directed to a system and method for providing breathing guidance based on real-time audio analysis.
- a method of providing visual breathing indicators that are displayed to correlate with audio being listened to by a user includes receiving an audio signal, determining a current tempo of the audio signal, determining a time index with a plurality of time indices associated with times that audio events are predicted to occur based on the current tempo, selecting one of a plurality of visual breathing indicators to display at one of the plurality of time indices, determining a duration to display the selected visual breathing indicator, and displaying the selected visual breathing indicator at the selected time index for the determined duration.
- a system for displaying visual breathing indicators that are coordinated with audio being listened to by a user includes an audio detector, the audio detector detecting sounds and producing a signal representative of the detected sounds, a display, a computing device coupled to the audio detector and the display, the computing device configured to receive the signal and including a processor and a memory, the memory including a plurality of visual breathing indicators and a set of instructions executable by the processor, the instructions including determining a current tempo of the audio sounds based on the signal, determining a time index that includes a plurality of predicted time indices when musical events will occur based on the current tempo, selecting one of the plurality of visual breathing indicators to display at a selected one of the plurality of predicted time indices, determining a duration to display the selected visual breathing indicator, and sending the selected visual breathing indicator to the display for display at the selected time index for the determined duration.
- a method of providing breathing indicators that are correlated with audio being listened to by a user includes receiving data about a song to be played to a user, generating an FFT derived format of the song from the data, determining a tempo for the song from the FFT derived format using a convolutional neural network, selecting one of a plurality of breathing indicators to activate at a selected time index based on the determined tempo, determining a duration to activate the selected breathing indicator based on the tempo and a breathing guide protocol, activating the selected breathing indicator at the selected time index for the determined duration, selecting a second one of the plurality of breathing indicators to display at a second selected time index based on a change in the determined tempo, determining a second duration to activate the second selected breathing indicator based on the changed tempo and the breathing guide protocol, and activating the second selected breathing indicator at the second selected time index for the second determined duration.
- FIG. 1 is a diagram showing a system for providing indicators for breathing based on real-time audio analysis in accordance with an embodiment of the present invention
- FIG. 2 is a process diagram outlining a method for determining tempo of audio in accordance with an aspect of the present invention
- FIG. 3 is a process diagram outlining another method for determining tempo of audio in accordance with an aspect of the present invention
- FIGS. 4A-4D illustrate exemplary visual breathing indicators provided sequentially on a display in accordance with an embodiment of the present invention
- FIG. 5 is a process diagram for providing visual indicators for breathing based on real-time audio analysis in accordance with an embodiment of the present invention
- FIG. 6 is a process diagram outlining a process for determining visual indicators to display based on a determined current tempo of audio in accordance with an embodiment of the present invention.
- FIG. 7 is an exemplary computing system for use with a system providing visual indicators for breathing based on real-time audio analysis according to an embodiment of the present invention.
- breathing guidance is provided to a user based on real-time audio detection of music or other audio being listened to by the user.
- An audio detection system analyzes current audio being played (or to be played in the near future), which may be selected by the user or other audio provider, and determines a current tempo. Breathing indicators are then provided to the user based on and in correlation with the determined tempo to coincide with the audio being played by the user. The system continuously re-determines the tempo (or the tempo of music to be played in the near future). Thus, if the tempo changes within a song, for example, or a new song begins, the system will adjust the breathing indicators that are to be displayed accordingly in real-time.
- a user can choose the audio to which to meditate while still receiving appropriate visual breathing indicators for that selected audio.
- a user can choose a certain type of music to set the tone for a particular type of meditation, such as for calming down, getting inspired, or trying to fall asleep.
- users can engage in calming breathing exercises while listening to their favorite music from any source, including whichever music app they choose even if the app plays songs at random. This provides an enjoyable and creative experience, which can have the effect of increasing the duration and frequency of meditative activity.
- a system for providing indicators for guiding the breathing of a user (i.e., a listener) is shown in FIG. 1.
- the user plays, or is otherwise in the presence of, audio, typically music.
- the audio is detected, either via a direct connection to a device that is playing the audio or by first capturing sounds from the user’s surroundings.
- the audio is continuously analyzed to determine a tempo of the audio. For simplicity, the following description will assume that the audio is a musical song, but it will be understood that other sounds may be used even if they do not constitute what may traditionally be considered music.
- an indicator including a visual indicator, such as an animation, and/or an auditory overlay or vibration loop, is provided on a display or other suitable apparatus that guides the breathing of the user.
- the indicators guide the user to breathe based on the current tempo, i.e., the number of beats per unit time of the song, and are presented in synchrony with the tempo of the music in terms of the introduction and duration of the indicators.
- the indicators may correspond with a variety of time signatures for breathing.
- the visual indicators may create a breathing guide schema that coaches the user to breathe in for four beats, hold it in for the next four beats, breathe out during the next four beats, then hold it out for the next four beats, and so on.
- the visual indicators may create a breathing guide schema that coaches the user to breathe in for eight beats, hold it in for the next four beats, then breathe out for the next four beats.
- Any variation of breathing loop duration can be used; preferably, however, it fits into a loop that adds up to 16 beats for songs in 4/4 time or 12 beats for songs in 3/4 time.
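As an illustrative sketch only (not part of the claimed method), the mapping from a determined tempo to such a 16-beat breathing loop could look like the following; the function name and phase labels are hypothetical:

```python
def breathing_loop_durations(bpm, phases=(("inhale", 4), ("hold", 4),
                                          ("exhale", 4), ("hold", 4))):
    """Convert a detected tempo (beats per minute) into a breathing loop.

    Each phase is a (label, beats) pair; the result pairs each label with
    its duration in seconds so the loop stays synchronized with the music.
    """
    seconds_per_beat = 60.0 / bpm
    return [(label, beats * seconds_per_beat) for label, beats in phases]

# At 120 bpm, each 4-beat phase lasts 2 seconds and the 16-beat loop lasts 8 seconds.
loop = breathing_loop_durations(120)
```

A 12-beat loop for 3/4 time would simply use different `phases`, e.g. three 4-beat phases.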
- the visual indications are preferably presented on the display such that the indicators correspond in time with any detected beat.
- the system continues to monitor the audio being played (or to be played). If changes are detected in the tempo, either because a new song begins or the tempo of the current song changes, the visual indicators displayed to the user are changed to remain appropriate for and synchronized with the changed tempo.
- system 100 includes an audio detector 104, a computing device 108, and a display 112.
- audio detector 104 detects audio signals, either by detecting sounds present in the user’s surrounding environment (e.g., live music or music played from a device to which audio detector 104 is not in direct electronic communication) or by receiving a signal of the music that is being played as it is being played (or before it is scheduled to be played), for example, by monitoring the output signal being sent to headphones or speaker.
- audio detector 104 Upon detecting sounds or receiving a signal representative of the music being played (or scheduled to be played), audio detector 104 sends a signal representative of the current music to computing device 108. If audio detector 104 is used to detect sounds from the environment, the sounds are transformed into a signal representative of the detected sounds in time and that signal is sent to computing device 108.
- Computing device 108 receives the signal from audio detector 104 and analyzes the signal to determine the current tempo of the music.
- the current tempo may be determined, for example, by analyzing features in the signal that carries the musical information after converting the audio signal into an image (e.g., a spectrogram) or other format using, but not limited to, the Fast Fourier Transform (FFT) algorithm.
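A minimal sketch of such an audio-to-spectrogram conversion, assuming numpy; the frame size and hop size are illustrative choices, not values specified by this disclosure:

```python
import numpy as np

def spectrogram(signal, frame_size=1024, hop=512):
    """Split the signal into overlapping windowed frames and take the
    magnitude FFT of each, yielding a (frames x frequency-bins) image."""
    window = np.hanning(frame_size)
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)

# Example: one second of a 440 Hz tone sampled at 8 kHz.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
```

Each row of `spec` is one time frame; the 440 Hz energy concentrates near bin 440 * frame_size / sr ≈ 56.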
- a tempo determination process 140 for estimating tempo includes receiving, at step 142, an audio signal.
- the audio signal is converted to an image, such as a spectrogram produced by an FFT, or another FFT-derived format, at step 144, followed by a preprocessing routine that removes non-music noise from the signal at step 145.
- peaks are detected at step 146.
- filtering is used to eliminate the counting of peaks that are too close together in time.
- Each detected peak for each frequency band is associated with a relative time of occurrence, and for every pair of peaks in a given band, a time interval between peaks is determined at step 148.
- Each interval is then inverted to give a tempo value, preferably in beats per minute (bpm) at step 150.
- the bpm for each pair of peaks in a frequency band is compiled and an estimated tempo for the frequency band is selected statistically (e.g., by removing outliers, selecting the average of values that fall within a range containing the most values) at step 152.
- This process can be repeated for each frequency band such that a plurality of tempo estimates are generated. These tempo estimates may be increased in number as sampling continues, until a best estimate tempo emerges, which is then deemed the current tempo. This process will be ongoing so that the current tempo may change over time.
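The interval-and-invert estimation of steps 146-152 can be sketched as follows for a single frequency band; the median here stands in for the statistical selection described above, and the minimum-interval filter is an assumed, illustrative threshold:

```python
import numpy as np

def estimate_bpm(peak_times, min_interval=0.25):
    """Estimate tempo from onset-peak times in one frequency band.

    Intervals shorter than min_interval seconds are filtered out (peaks
    too close together are not counted), each remaining interval is
    inverted to a bpm value, and a statistical pick (the median) gives
    the band's estimated tempo.
    """
    peak_times = np.sort(np.asarray(peak_times, dtype=float))
    intervals = np.diff(peak_times)
    intervals = intervals[intervals >= min_interval]
    bpms = 60.0 / intervals
    return float(np.median(bpms))

# Peaks spaced 0.5 s apart (with one spurious double-trigger at 0.52 s) -> 120 bpm.
bpm = estimate_bpm([0.0, 0.5, 0.52, 1.0, 1.5, 2.0])
```

Running this per band yields the plurality of tempo estimates from which a best estimate emerges.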
- the real-time tempo may be determined through the implementation of a pre-processing routine in conjunction with a deep learning architecture trained to recognize the tempo of a song and output tempo (in bpm, for example), beat location, and measure start location.
- tempo determination process 160 an audio signal is received at step 162.
- the audio signal is converted to an image, such as a spectrogram produced by an FFT, or another FFT-derived format, at step 164.
- Non-music noise is removed from the audio signal at step 166 by detecting and isolating the likely musical portion from the rest of the audio signal.
- at step 168, a convolutional neural network, trained on segments of audio files represented as images with labeled time-stamp metadata (using, but not limited to, spectrograms derived from the Fast Fourier Transform algorithm or another FFT-derived format), is implemented that correlates the images or other FFT-derived representations of the detected audio signal with those formed from segments of audio with known tempos.
- the training dataset used to build the convolutional neural network is created by one or more humans determining which portions of songs include viable music (e.g., excluding extended silences), followed by a script that converts the song into usable training data. As more data is accumulated, this process can be automated, further reducing the amount of human involvement in training data creation.
- the neural network is built to be used by devices with limited computing resources.
- Original audio image resolution can be utilized by many mobile device models.
- a reduced resolution audio image or other FFT derived format is created to be processed by mobile device models with fewer computational resources, including CPU and memory.
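One way to produce such a reduced-resolution audio image is simple average pooling of the spectrogram; the numpy sketch below is an assumption about implementation, and the pooling factors are illustrative:

```python
import numpy as np

def downsample_spectrogram(spec, time_factor=2, freq_factor=4):
    """Average-pool a (time x frequency) spectrogram so devices with
    limited CPU and memory can run the tempo network on a smaller input.

    Trailing rows/columns that do not fill a full pooling window are
    trimmed before reshaping into pooling blocks.
    """
    t, f = spec.shape
    t_trim, f_trim = t - t % time_factor, f - f % freq_factor
    spec = spec[:t_trim, :f_trim]
    return spec.reshape(t_trim // time_factor, time_factor,
                        f_trim // freq_factor, freq_factor).mean(axis=(1, 3))

# A 14 x 513 spectrogram pools down to 7 x 128 (513 trims to 512).
small = downsample_spectrogram(np.ones((14, 513)))
```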
- computing device 108 selects from stored options or generates a series of visual breathing indicators for guiding the breathing of the user who is listening to that music.
- the breathing indicators typically are displayed in small groups or a series and are designed to be synchronized with the tempo/beat of the detected music and include breathing guidance that is appropriate for the current tempo, such as visual instructions conveying to the user to breathe in for eight beats, hold it in for the next four beats, then breathe out for the next four beats.
- the generated or selected series of breathing indicators are then sent to display 112 for display to the user.
- the breathing indicators are displayed in essentially real-time with the music being played or detected.
- referring to FIG. 4D, an illustrative diagram depicts indicators 116 being displayed at determined times, e.g., T1-T6.
- audio detector 104 continues to detect the music being played and sends updated signals representative of the music to computing device 108.
- Computing device 108 analyzes the signal and modifies, or maintains the status quo of, as applicable, the series of breathing indicators to be displayed on display 112.
- computing device 108 analyzes the metrics of the music that is scheduled to be played to the user and determines which series of breathing indicators is to be shown and when so the series corresponds to the music as it is played.
- computing device 108 can be, but is not limited to, a smartphone, smartwatch, tablet, portable computer, or augmented or virtual reality player. It will be understood that computing device 108 may include display 112 and audio detector 104 as built-in components.
- Breathing visualization process 200 involves detecting audio sounds being played or recognizing an audio signal being played or scheduled to be played at step 204, determining a current tempo of the detected music at step 208 as outlined above, selecting or preparing a series of visual breathing indicators to guide a user’s breathing based on the determined current tempo at step 212, and displaying the series of visual indicators on a display for the user at step 216 such that the indicators are shown at appropriate time indices to be in sync with the determined tempo.
- This synchronization may be based on a known time index if the music is in a digital format and recognized, or based on predicted time indices based on the determined current tempo of the music being played.
- steps 204, 208, 212, and 216 are repeated continuously so that the displayed breathing indicators remain synchronized with the current tempo of the music even if the tempo (or song) changes.
- "continuously," in the context of this invention, means with sufficient frequency that the displayed indicators are updated, if necessary, at appropriate times within several seconds or less as the music is being played.
- if a digital audio signal is accessible and analyzed in advance of being played, "continuously" means that the displayed indicators are updated in order to coincide with changes in the music scheduled to be played.
- a process for determining appropriate series of visual breathing indicators to display based on the determined current tempo is outlined in FIG. 6.
- process 300 a signal representative of music being played by or for the user is received at step 304.
- the signal is analyzed at step 308 to determine the tempo as described above.
- the determined tempo is analyzed to determine whether it is new or has otherwise changed. If the tempo remains the same, the signal for the now current music is again received at step 304 and an updated current tempo is determined at step 308. If the tempo has changed, which includes the music starting or a new song being played, a time index is created or updated at step 312.
- the time index includes the time indices in the future at which certain musical events will likely occur based on the current tempo, assuming the song doesn’t change or stop.
- the time index may include a beat to occur at 30 seconds from the current clock time.
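The predictive time index of step 312 can be sketched as follows: given the current tempo and the clock time of the last detected beat, upcoming beat times are extrapolated (function and parameter names are hypothetical, and the tempo is assumed to hold steady over the horizon, as the text notes):

```python
def predict_beat_times(bpm, last_beat_time, horizon=30.0):
    """Predict clock times (seconds) of upcoming beats, assuming the
    song's tempo holds constant over the prediction horizon."""
    period = 60.0 / bpm
    times = []
    t = last_beat_time + period
    while t <= last_beat_time + horizon:
        times.append(round(t, 6))
        t += period
    return times

# At 60 bpm the index predicts a beat every second, including one at 30 s.
index = predict_beat_times(60, last_beat_time=0.0)
```

If the tempo changes, the index is simply rebuilt from the new tempo, as in the re-analysis loop of steps 304-312.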
- a series of visual breathing indicators is developed or selected at step 316.
- the breathing indicators for the selected series are then assigned time indices at which they are to be shown and durations for which they are to be shown at step 320.
- a first breathing indicator instructing the user to breathe in could be set to be displayed at a time index that would correspond with the playing of a predicted beat peak and would be scheduled to be displayed for 8 seconds. Then a second breathing indicator instructing the user to hold the breath would be scheduled to be displayed immediately after the first visual indicator and be shown for 4 seconds, at which point a third indicator in this series of breathing indicators would be displayed instructing the user to breathe out for 4 seconds (corresponding to 16 beats).
- these breathing indicators will be timed to display consecutively and such that they are synchronized with the appropriate musical occurrences based on the analysis of the current music signal in step 308 and the predictive time index for the relevant musical occurrences that includes predicted time indices for future relevant musical occurrences developed at step 312.
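The consecutive scheduling described in steps 316-324 could be sketched like this (indicator labels and the function name are illustrative, not terms of the disclosure):

```python
def schedule_indicators(series, start_time):
    """Assign consecutive (start, end) display windows to a series of
    (indicator, duration_seconds) pairs, beginning at a predicted beat
    time so the series stays synchronized with the music."""
    schedule, t = [], start_time
    for indicator, duration in series:
        schedule.append((indicator, t, t + duration))
        t += duration
    return schedule

# Breathe in 8 s, hold 4 s, breathe out 4 s, starting at a predicted beat at t = 10 s.
plan = schedule_indicators([("breathe in", 8), ("hold", 4), ("breathe out", 4)], 10.0)
```

For non-visual indicators, the same (start, end) windows would instead trigger sound or vibration.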
- the series to be displayed, and the time indices at which each indicator in the series is to be shown, are updated at step 324, which schedules the selected indicators to be shown on a display at the designated times. Note that if the indicators are not visual, then the indicators will be scheduled to be activated (e.g., emit sound or vibrate) at appropriate times for the selected durations.
- the accuracy of these predicted time indices depends on the continuation and constancy of the music. Therefore, the signal is continuously analyzed so that changes in tempo can be accounted for in the selection of appropriate breathing indicators and the time indices at which they are to be shown (as well as their duration).
- when data about the song to be played is available in advance, the breathing indicators and the time indices for display of those indicators can be based on that data, and can thus be correlated without any lag, since any changes in tempo would be determined prior to those changes being played.
- the time indices would not be predicted based on the current tempo, but based on available data about the song being played. However, the musical selections would still be monitored for any changes (e.g., skipped songs, changed playlist), so that the selected indicators and times to be shown could be updated or altered as appropriate.
- FIG. 7 shows a diagrammatic representation of one embodiment of a computing system, e.g., computing device 108, in the exemplary form of a system 400, within which a set of instructions for causing a processor 404 to perform any one or more of the aspects and/or methodologies of the present disclosure, such as process 300, may be executed.
- System 400 can also include a memory 408 that communicates with other components, via a bus 412.
- Bus 412 may include any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures.
- Memory 408 may include various components (e.g., machine readable media) including, but not limited to, a random access memory component (e.g., a static RAM “SRAM”, a dynamic RAM “DRAM”, etc.), a read only component, and any combinations thereof.
- a basic input/output system 416 (BIOS), including basic routines that help to transfer information between elements within system 400, such as during start-up, may be stored in memory 408.
- Memory 408 may also include (e.g., stored on one or more machine-readable media) instructions (e.g., software) 420 embodying any one or more of the aspects and/or methodologies of the present disclosure.
- memory 408 may further include any number of program modules including, but not limited to, an operating system, one or more application programs, other program modules, program data, and any combinations thereof.
- System 400 may also include a storage device 424, such as, but not limited to, the machine-readable storage medium described above.
- Storage device 424 may be connected to bus 412 by an appropriate interface (not shown).
- Example interfaces include, but are not limited to, SCSI, advanced technology attachment (ATA), serial ATA, universal serial bus (USB), IEEE 1394 (FIREWIRE), and any combinations thereof.
- storage device 424 (or one or more components thereof) may be removably interfaced with system 400 (e.g., via an external port connector (not shown)).
- Storage device 424 and an associated machine-readable medium 425 may provide nonvolatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for system 400.
- Instructions 420 may reside, completely or partially, within non-transitory machine-readable medium 425.
- Instructions 420 may reside, completely or partially, within processor 404.
- System 400 may be connected via a network interface 440 to a network 444 so that music provider services 448 can be accessed for music selection.
- System 400 may further include a video display adapter 452 for communicating a displayable image to a display device, such as display 112.
- Examples of a display device include, but are not limited to, a liquid crystal display (LCD), a cathode ray tube (CRT), a plasma display, a light emitting diode (LED) display, and any combinations thereof.
- a method of providing visual breathing indicators that are displayed to correlate with audio being listened to by a user includes receiving an audio signal, determining a current tempo of the audio signal, determining a plurality of time indices associated with times at which audio events are predicted to occur based on the current tempo, selecting one of a plurality of visual breathing indicators to display at one of the plurality of time indices, determining a duration to display the selected visual breathing indicator, and displaying the selected visual breathing indicator at the selected time index for the determined duration.
- the method further includes selecting a second visual breathing indicator from the plurality of visual breathing indicators and displaying the second visual breathing indicator at a second time index for a second determined duration.
- the method further includes converting the audio signal to a spectrogram or other FFT derived format.
- the method further includes removing non-music noise from the spectrogram.
- determining a current tempo includes determining a plurality of frequency peaks in a frequency band.
- the method further includes determining a time interval between each pair of peaks in the plurality of frequency peaks in the frequency band.
- determining a current tempo includes determining a second plurality of frequency peaks in a second frequency band.
- the method further includes determining a second time interval between each pair of peaks in the second plurality of frequency peaks in the second frequency band.
- the method further includes determining the tempo based on the time interval and the second time interval.
- determining the tempo includes correlating the digital file with a set of pre-selected digital files representing segments of audio with known tempos.
- the digital file is a spectrogram or other FFT derived format.
- the first visual breathing indicator instructs the user to breathe in, wherein the first time index corresponds to when a beat peak is predicted, and wherein the first duration is eight seconds.
- the second visual breathing indicator instructs the user to hold a breath, wherein the second time index corresponds to when the first duration ends, and wherein the second duration is four seconds.
- the method further includes selecting a third visual breathing indicator from the plurality of visual breathing indicators to display at a third time index, determining a third duration to display the third visual breathing indicator, and displaying the third visual breathing indicator, wherein the third time index corresponds to when the second duration ends, wherein the third visual breathing indicator instructs the user to breathe out, and wherein the third duration is four seconds.
- the instructions further include selecting a second one of the plurality of visual breathing indicators to display at a selected second one of the plurality of predicted time indices, determining a second duration to display the second selected visual breathing indicator, and sending the second selected visual breathing indicator to the display for display at the second selected time index for the second determined duration.
- the instructions further include determining an updated current tempo.
- determining the current tempo includes converting the signal to a spectrogram or other FFT derived format, determining a frequency for each of a plurality of peaks in the spectrogram or other FFT derived format in a frequency band, and determining a time interval between pairs of each of the plurality of peaks.
- non-music noise is removed from the signal.
- the current tempo is determined based on multiple time interval determinations between pairs of each of the plurality of peaks.
- determining the current tempo includes converting the signal to a spectrogram or other FFT derived format, removing non-music noise from the signal, and determining the current tempo by using a pre-trained convolutional neural network trained from segments of audio files with known tempos.
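One of the tempo strategies described above, measuring time intervals between energy peaks in a single frequency band, can be sketched as follows. This is a minimal illustration under assumed inputs (a precomputed per-band energy envelope and its frame rate); the function name and the thresholding heuristic are hypothetical, not taken from the disclosure:

```python
import numpy as np

def estimate_tempo_bpm(band_energy, frame_rate):
    """Estimate tempo from the median interval between energy peaks
    in one frequency band of a spectrogram-like representation."""
    # Treat a frame as a peak if it exceeds both neighbors and a
    # loudness threshold (mean + one standard deviation, an assumption).
    threshold = band_energy.mean() + band_energy.std()
    peaks = [i for i in range(1, len(band_energy) - 1)
             if band_energy[i] > threshold
             and band_energy[i] > band_energy[i - 1]
             and band_energy[i] >= band_energy[i + 1]]
    if len(peaks) < 2:
        return None  # not enough peaks to measure an interval
    # Intervals between each adjacent pair of peaks, in frames;
    # the median is robust to missed or spurious beats.
    intervals = np.diff(peaks)
    seconds_per_beat = np.median(intervals) / frame_rate
    return 60.0 / seconds_per_beat

# Synthetic check: an envelope with a peak every 0.5 s should read 120 BPM.
frame_rate = 100            # envelope frames per second (assumed)
env = np.zeros(1000)
env[::50] = 1.0             # one peak every 50 frames = every 0.5 s
print(estimate_tempo_bpm(env, frame_rate))  # 120.0
```

A second band (e.g., a bass band and a snare band) would be processed the same way, with the two interval estimates reconciled into one tempo, as the claims describe.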
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Public Health (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Physics & Mathematics (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Veterinary Medicine (AREA)
- Heart & Thoracic Surgery (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Animal Behavior & Ethology (AREA)
- Pulmonology (AREA)
- Anesthesiology (AREA)
- Surgery (AREA)
- Psychology (AREA)
- Molecular Biology (AREA)
- Data Mining & Analysis (AREA)
- Physical Education & Sports Medicine (AREA)
- Child & Adolescent Psychology (AREA)
- Developmental Disabilities (AREA)
- Hospice & Palliative Care (AREA)
- Psychiatry (AREA)
- Physiology (AREA)
- Databases & Information Systems (AREA)
- Biodiversity & Conservation Biology (AREA)
- Pain & Pain Management (AREA)
- Acoustics & Sound (AREA)
- Hematology (AREA)
- Social Psychology (AREA)
- Electrophonic Musical Instruments (AREA)
Abstract
A breathing assistance system based on real-time audio detection and/or analysis is disclosed. Tempo information is determined for audio content played for a user, which may be selected by the user or by another audio provider. A series of visual, auditory, or vibratory breathing indicators is then displayed based on and in correlation with the determined tempo, at times and for durations that are synchronized with the tempo and with predicted audio events. The system re-determines the tempo as the audio plays and adjusts the breathing indicators as needed. Because tempo-correlated indicators are displayed that guide users to breathe in sync with the music, users can perform breathing exercises while listening to their favorite music from any source, including a music application of their choice, even if that application plays songs at random. This allows for an enjoyable and creative experience, which can have the effect of increasing the duration and frequency of meditative activity.
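The cue sequence in the claims (breathe in at a predicted beat peak for eight seconds, hold for four, breathe out for four) can be sketched as a simple scheduler. The function name, parameters, and cue labels are illustrative assumptions, not part of the disclosure:

```python
def breathing_cues(beat_time, cycles=1, inhale=8.0, hold=4.0, exhale=4.0):
    """Return (start_time, cue, duration) tuples for inhale/hold/exhale
    cycles anchored at a predicted beat peak (the 8-4-4 pattern)."""
    cues = []
    t = beat_time
    for _ in range(cycles):
        for cue, duration in (("breathe in", inhale),
                              ("hold", hold),
                              ("breathe out", exhale)):
            cues.append((t, cue, duration))
            t += duration  # each cue starts when the previous one ends
    return cues

# One cycle anchored at a beat peak predicted at t = 2.5 s.
for start, cue, duration in breathing_cues(beat_time=2.5):
    print(f"{start:5.1f}s  {cue:<11} {duration:.0f}s")
```

In the described system the anchor time would come from the tempo analysis, and the schedule would be rebuilt whenever an updated tempo is determined.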
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962909357P | 2019-10-02 | 2019-10-02 | |
US62/909,357 | 2019-10-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021068000A1 (fr) | 2021-04-08 |
Family
ID=75337531
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2020/070608 WO2021068000A1 (fr) | 2020-10-02 | Breathing assistance based on real-time audio analysis |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2021068000A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116617636A (zh) * | 2023-05-30 | 2023-08-22 | 深圳市联华电子有限公司 | An electronic device for practicing correct breathing through light and sound |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070173684A1 (en) * | 2004-03-18 | 2007-07-26 | Coherence Llc | Method of presenting audible and visual cues for synchronizing the breathing cycle with an external timing reference for purposes of synchronizing the heart rate variability cycle with the breathing cycle |
US20080115656A1 (en) * | 2005-07-19 | 2008-05-22 | Kabushiki Kaisha Kawai Gakki Seisakusho | Tempo detection apparatus, chord-name detection apparatus, and programs therefor |
US20100017034A1 (en) * | 2008-07-16 | 2010-01-21 | Honda Motor Co., Ltd. | Beat tracking apparatus, beat tracking method, recording medium, beat tracking program, and robot |
US20140357960A1 (en) * | 2013-06-01 | 2014-12-04 | James William Phillips | Methods and Systems for Synchronizing Repetitive Activity with Biological Factors |
US20160151603A1 (en) * | 2013-07-08 | 2016-06-02 | Resmed Sensor Technologies Limited | Methods and systems for sleep management |
US20180220901A1 (en) * | 2015-07-15 | 2018-08-09 | Valencell, Inc. | Methods of controlling biometric parameters via musical audio |
- 2020-10-02: WO PCT/US2020/070608 patent WO2021068000A1 (fr), active Application Filing
Non-Patent Citations (1)
Title |
---|
SCHONEVELD: "Detecting Music BPM using Neural Networks", NLML.GITHUB.IO, 28 June 2017 (2017-06-28), Retrieved from the Internet <URL:https://web.archive.org/web/20170628134905/https://nlml.github.io/neural-networks/detecting-bpm-neural-networks> [retrieved on 20210109] * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9330680B2 (en) | Biometric-music interaction methods and systems | |
- CN1937462A | Content preference score determination method, content playback apparatus, and content playback method | |
US9646625B2 (en) | Audio correction apparatus, and audio correction method thereof | |
US10942563B2 (en) | Prediction of the attention of an audience during a presentation | |
- WO2020052665A1 | Live broadcast interaction method and apparatus, and storage medium | |
- JP2005518594A | System for selling a product using audio content identification | |
- CN104768049B | Method, system, and computer-readable storage medium for synchronizing audio data and video data | |
US11511200B2 (en) | Game playing method and system based on a multimedia file | |
- CN110324726B | Model generation and video processing method and apparatus, electronic device, and storage medium | |
US11468867B2 (en) | Systems and methods for audio interpretation of media data | |
- CN112995736A | Speech subtitle synthesis method and apparatus, computer device, and storage medium | |
- EP4250291A1 | Audio detection method and apparatus, computer device, and readable storage medium | |
- KR101648931B1 | Rhythm game production method and apparatus, and computer program for executing same on a computer | |
- CN112119456A | Arbitrary signal insertion method and arbitrary signal insertion system | |
- WO2014141413A1 | Information processing device, output method, and program | |
- WO2021068000A1 | Breathing assistance based on real-time audio analysis | |
- KR102429108B1 | Electronic device, method, and computer program for performing auditory training based on singing | |
- JP2010237257A | Evaluation device | |
- KR20190004215A | Sight-singing evaluation system and sight-singing evaluation method using same | |
- CN103531220A | Lyrics correction method and apparatus | |
- KR20250048809A | Audio synthesis for synchronous communication | |
Wagner et al. | Induced loudness reduction as a function of exposure time and signal frequency | |
- JPWO2019043871A1 | Display timing determination device, display timing determination method, and program | |
- CN107452408A | Audio playback method and apparatus | |
- CN117222364A | Method and device for hearing training | |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20870502; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20870502; Country of ref document: EP; Kind code of ref document: A1 |