
US20160189566A1 - System and method for enhancing remote speech fluency therapy via a social media platform - Google Patents

System and method for enhancing remote speech fluency therapy via a social media platform

Info

Publication number
US20160189566A1
US20160189566A1 (U.S. Application No. 14/981,110)
Authority
US
United States
Prior art keywords
fluency
social networking
devices
practice
therapist
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/981,110
Inventor
Moshe Rot
Lilach Rothschild
Smadar LERNER
Eli LERNER
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Novotalk Ltd
Original Assignee
Novotalk Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Novotalk Ltd filed Critical Novotalk Ltd
Priority to US14/981,110
Assigned to Novotalk, Ltd. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LERNER, ELI; LERNER, SMADAR; ROT, MOSHE; ROTHSCHILD, LILACH
Publication of US20160189566A1
Current legal status: Abandoned


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4803 Speech analysis specially adapted for diagnostic purposes
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/04 Speaking
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0002 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B5/0015 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
    • A61B5/0022 Monitoring a patient using a global network, e.g. telephone networks, internet
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/486 Biofeedback
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271 Specific aspects of physiological measurement analysis
    • A61B5/7282 Event detection, e.g. detecting unique waveforms indicative of a medical condition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 Details of notification to user or communication with user or patient; User input means
    • A61B5/742 Details of notification to user or communication with user or patient; User input means using visual displays
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 Details of notification to user or communication with user or patient; User input means
    • A61B5/7465 Arrangements for interactive communication between patient and care services, e.g. by using a telephone network
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/02 Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00 Electrically-operated teaching apparatus or devices working with questions and answers
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/66 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/40 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16Z INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS, NOT OTHERWISE PROVIDED FOR
    • G16Z99/00 Subject matter not provided for in other main groups of this subclass
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/1066 Session management
    • H04L65/1069 Session establishment or de-establishment
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network

Definitions

  • the present disclosure relates generally to the field of speech therapy, and more particularly to engaging speech therapy patients over social media networks.
  • Speech disorders are one of the most prevalent disabilities in the world. Generally, speech disorders may be classified as fluency disorders, voice disorders, motor speech disorders, and speech sound disorders. As one example, stuttering is classified as a fluency disorder in the rhythm of speech in which a person knows precisely what to say, but is unable to communicate or speak in accordance with his or her intent.
  • one common stutter therapy technique is fluency shaping, in which a therapist trains a person (e.g., a stuttering patient) to improve his or her speech fluency through the altering of various motor skills.
  • such skills include the abilities to control breathing; to gently increase, at the beginning of each phrase, vocal volume and laryngeal vibration; to continue phonation through the end of the phrase by keeping the vocal folds relaxed and air flowing; to speak slower and with prolonged vowel sounds; to enable continuous phonation; and to reduce articulatory pressure.
  • the speech motor skills are taught in the clinic while the therapist models the behavior and provides verbal feedback as the person learns to perform the motor skill. As the person develops speech motor control, the person increases rate and prosody of her/his speech until it sounds normal. During the final stage of the therapy, when the speech is fluent and sounds normal in the clinic, the person is trained to practice the acquired speech motor skills in her/his everyday life activities.
  • when fluency shaping therapy is successful, the stuttering is significantly improved or even eliminated.
  • this therapy requires continuous training and practice in order to maintain effective speech fluency.
  • the conventional techniques for practicing fluency shaping therapy are not effective for people suffering from stuttering. This is mainly because not all persons are capable of developing the target speech motor skills in the clinic, and even if such skills are developed, such skills are not easily transferable into everyday conversations. In other words, a patient can learn to speak fluently in the clinic, but will likely revert to stuttering outside of the clinic.
  • therapists and other patients can contribute to motivating the patient to continue practicing regularly. To maintain motivation during the time periods between live sessions with a therapist, the patient should receive encouragement during those time periods. Patient motivation may further be spurred by sharing patient progress with appropriate people who can acknowledge the patient's efforts and encourage additional progress.
  • Existing techniques for providing speech therapy face challenges in providing encouragement between therapy sessions because they rely on the availability of therapists and other motivating individuals between sessions.
  • Social media networks may further provide automatic updates for users such as, e.g., birthday and holiday messages, as well as web-based activity of a user.
  • a social media network may automatically (or by user selection) share media content viewed by a user, items purchased by the user, games played by the user, and so on.
  • such social media networks lack the ability to provide feedback and support for speech therapy patients.
  • the disclosed embodiments include a social networking platform for enhancing fluency training, comprising: a plurality of fluency practice devices; a plurality of therapist devices; and a server communicatively connected to the plurality of fluency practice devices and the plurality of therapist devices, wherein the server is configured to facilitate communications among the plurality of fluency practice devices and between the plurality of fluency practice devices and the plurality of therapist devices, wherein the server is further configured to share social networking feeds related to practicing speech fluency, wherein the social networking feeds are generated by at least each of the plurality of fluency practice devices.
  • the disclosed embodiments further include a method for enhancing speech fluency training via a social networking platform.
  • the method comprises facilitating communications among a plurality of fluency practice devices and between the plurality of fluency practice devices and a plurality of therapist devices; and sharing social networking feeds related to practicing speech fluency, wherein the social networking feeds are generated by at least each of the plurality of fluency practice devices.
  • FIG. 1 is a network diagram utilized to describe the various disclosed embodiments.
  • FIG. 2 is a flowchart illustrating a method for enhancing remote speech therapy via social media according to an embodiment.
  • FIG. 3 is a screenshot illustrating groupings of users.
  • FIG. 6 is a screenshot illustrating a social networking message from a user.
  • Social networking elements are used to support stuttering/stammering treatment using fluency shaping techniques, as follows. Online practice between users: with the aid of the video chat and chat platforms, fluency practice sessions between users of the system are created, for example in the form of templates. Feed creation in the area of fluency shaping: achievements of the users are used to create the feed as well as to improve motivation to practice the technique. Users can reward the achievements of other users based on their successes and various reports.
  • Speech is a social activity. Therefore, in order to gain mastery of the fluency shaping techniques in spontaneous speech, it is important to practice using the new speech patterns in various communicative situations which challenge speech fluency (for example, with new people, with strangers, in situations already identified as difficult for that individual, in group discussions), and not just alone in front of the computer or the clinician. Elements of social networking can be used within the system to help achieve this goal.
  • the main purpose of the system is to provide a platform for the practice and application of the fluency shaping techniques in conversational spontaneous speech with other people.
  • an additional advantage is that it provides a community in which persons practicing speech fluency can support each other before, during and after treatment.
  • Practicing fluent speech at the standardized/regulated rate can occur between two users on the system of the invention (the users do not need to know each other previously).
  • the practice is conducted online at convenient times for the users and in any location from which they can connect to the system.
  • the conversations are conducted using video conferencing as well as the various system-based indicators (visual monitor, rate monitor, etc.).
  • a feed is created for fluency shaping where users' achievements can be broadcast on an online “feed” and used to increase motivation for practicing the techniques. Users of the system have the option to “reward” the achievements of other users. Achievements are published in an internal news feed (available only to registered users). All users can encourage the feed events by means of comments and an encouragement counter.
  • FIG. 1 shows an exemplary and non-limiting diagram of a remote speech therapy system 100 utilized to describe the various disclosed embodiments.
  • the system 100 includes a network 110, a first plurality of user devices 120-1 through 120-n (hereinafter referred to individually as a fluency practice device 120 and collectively as fluency practice devices 120, merely for simplicity purposes), a server 130, a database 140, a feedback generator system (FGS) 150, and a second plurality of user devices 160-1 through 160-n (hereinafter referred to individually as a therapist device 160 and collectively as therapist devices 160, merely for simplicity purposes).
  • the network 110 may be the Internet, the world-wide-web (WWW), a local area network (LAN), a wide area network (WAN), and other networks configured to enable communication between the elements of the system 100.
  • Each fluency practice device 120 and each therapist device 160 may be, but is not limited to, a personal computer (PC), a personal digital assistant (PDA), a mobile phone, a smart phone, a tablet computer, a wearable computer device, a game console, and the like.
  • the fluency practice devices 120 are utilized by people (e.g., speech therapy patients) practicing to improve existing speech disorders, and the therapist devices 160 are utilized by speech therapists. It should be noted that one or more fluency practice devices 120 can communicate with a single therapist device 160 , and multiple therapist devices 160 can communicate with one or more fluency practice devices 120 . It should be noted that the fluency practice device 120 can be operated by any person who may or may not suffer from a speech disorder. It should be further noted that the therapist device 160 may be operated by any person who may or may not be a therapist. Typically, the therapist device 160 is operated by either a therapist or any other person who may be permitted to observe the fluency-practicing user's progress (e.g., a friend, a family member, a guardian, and so on).
  • Each of the fluency practice devices 120 and the therapist devices 160 is configured to communicate with the server 130 .
  • the server 130 is configured to allow fluency practice sessions between the user devices 120 and/or 160 , to obtain feedback based on the fluency practice sessions, to store fluency practice progress, and to share content respective of the fluency practice sessions.
  • Each fluency practice session may be joined by one or more of the fluency practice devices 120 and/or one or more of the therapist devices 160 .
  • the fluency practice sessions may be created according to a template.
  • the template defines parameters for exercises included in the fluency practice sessions.
  • Each exercise may further have a difficulty level that may correspond to a therapy stage of a fluency-practicing user. Exercises having higher difficulty levels are therefore typically for fluency-practicing users at higher therapy stages as determined based on, e.g., past performances.
  • the server 130 may be configured to select one or more exercises from a plurality of exercises based on the user profiles of the patient(s) participating in the fluency practice session. For example, the exercises may be selected based on the therapy stage of the patients participating in a particular fluency practice session.
  • an audio/video communication channel may be established between any of the fluency practice devices 120 and/or the therapist devices 160 .
  • This enables fluency practice sessions such as, for example, remote therapy sessions, group therapy sessions between multiple patients and/or therapists, combinations thereof, and so on.
  • the audio/video communication channel can be a peer-to-peer connection between the fluency practice devices 120 and/or the therapist devices 160 , or can be through the server 130 via, e.g., a website, a mobile application, and so on.
  • an audio/video channel may be established between the fluency practice devices 120 and/or the therapist devices 160 to allow direct communication between the patients and/or the therapists.
  • the channel, in one embodiment, is established over HTTP.
  • the agent 125 or 165 of each respective device 120 or 160 is configured to stream video streams from one device to another over the established communication channel.
  • the interface between each of the devices 120 or 160 and the server 130 may be realized through, for example, a web interface (e.g., a web portal), an application installed on the device 120 or 160, a script executed on the device 120 or 160, and the like.
  • each fluency practice device 120 is installed with an agent 125 and each therapist device 160 is installed with an agent 165 .
  • Each of the agents 125 and 165 may be configured to communicate with the server 130 .
  • each agent 125 or 165 can operate and be implemented as stand-alone programs and/or can communicate and be integrated with other programs or applications executed in the fluency practice device 120 and the therapist device 160 , respectively. Examples for a stand-alone program may include a web application, a mobile application, and the like.
  • the server 130 may be configured to store a user profile associated with each user of the fluency practice devices 120 and/or therapist devices 160 in the database 140 .
  • Each user profile may include, but is not limited to, a name of a user, a classification of a user (e.g., a user may be classified as either a patient, a therapist, or a guardian), friends lists, supervised users lists (e.g., a list of patients that a therapist supervises), a fluency proficiency level of a patient, a therapy stage of the patient, an assignment for a therapist (e.g., a therapist may be assigned to work with patients at a particular therapy stage), content shared among users of the devices 120 and 160, feedback related to a patient's vocal performances, and so on.
  • the server 130 may be configured to utilize the user profiles to automatically determine and provide appropriate motivational content to each patient.
  • the motivational content may include, but is not limited to, prizes, achievements, shared content, performance evaluations, additional exercises, notifications of therapy stage updates, and so on.
  • FIG. 3 shows an exemplary and non-limiting screenshot 300 illustrating a supervised users list for a therapist as displayed in a social networking application.
  • the screenshot 300 includes a supervised users list 310 .
  • the supervised users list 310 indicates patients whose progress the therapist is currently supervising.
  • the agent 125 may be configured to capture sound samples from its respective fluency practice device 120 during a fluency practice session and to send the captured sound samples to the server 130 .
  • the server 130 may be configured to receive the sound samples from the agent 125 and to send the sound samples to the feedback generator system 150 .
  • the feedback generator system 150 analyzes the patient's performance respective of the captured sound samples and generates feedback respective of the performance. The analysis and generation of feedback based on patient performances is described further in U.S. patent application Ser. No. 14/978,274 titled “A METHOD AND SYSTEM FOR ONLINE AND REMOTE SPEECH DISORDERS THERAPY” (the '274 Application).
  • the feedback generator system 150 may send the feedback to the server 130 .
  • the server 130 may store the feedback in the database 140 .
  • the server 130 may be configured to select one or more of the devices 120 and/or 160 and to send the feedback to the selected devices 120 and 160 .
  • the selected devices 120 and 160 may display the feedback.
  • the server 130 may generate a feed for the patient based, in part, on the obtained feedback.
  • the feed may be a collection of previously generated feedbacks respective of past voice productions of a patient.
  • the feed may further indicate progress of the patient based on, for example, time periods of practice, regularity of practice, successes, attaining particular stages in therapy, and so on.
  • the progress may further be respective of one or more challenges undertaken by the patient.
  • FIG. 5 shows an exemplary and non-limiting screenshot 500 illustrating a challenge from another user presented as motivational content.
  • the screenshot 500 includes a challenge box 510 including an image and text related to a challenge from another patient or from a therapist.
  • the challenge asks the patient to practice for 15 minutes on at least 4 days and to complete 4 medium difficulty speech tasks.
  • the server 130 may be configured to receive social networking messages (e.g., “posts”) respective of the feedback.
  • FIG. 6 is an exemplary and non-limiting screenshot 600 of a social networking message from a user.
  • the screenshot 600 includes a message entry field 610 as well as message boxes 620 .
  • the message boxes 620 include posts by a patient Moshe Rot.
  • the posts include images and text entered in response to fluency practice sessions.
  • each of the fluency practice device 120 , the server 130 , and the therapist device 160 typically includes a processing system (not shown) connected to a memory (not shown).
  • the memory contains a plurality of instructions that are executed by the processing system.
  • the memory may include machine-readable media for storing software.
  • Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the one or more processors, cause the processing system to perform the various functions described herein.
  • the processing system may comprise or be a component of a larger processing system implemented with one or more processors.
  • the one or more processors may be implemented with any combination of general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, dedicated hardware finite state machines, or any other suitable entities that can perform calculations or other manipulations of information.
  • the server 130 may reside in a cloud computing platform, a datacenter, and the like. Moreover, in an embodiment, there may be a plurality of servers 130 operating as described hereinabove and configured to either have one as a standby, to share the load between them, or to split the functions between them.
  • FIG. 2 is an exemplary and non-limiting flowchart 200 illustrating a method for enhancing remote speech therapy via social media according to an embodiment.
  • the method may be performed by the server 130 .
  • a voice production by the patient is received from the fluency practice device respective of a fluency practice session.
  • the fluency practice session may be individual to the fluency practice device (i.e., the patient may be engaged in a solo session that does not require a communication channel between devices), or may be based on communications between the fluency practice device and other patient and/or therapist devices.
  • the fluency practice session may include one or more exercises, with each exercise having a difficulty level indicating the relative degree of fluency required to perform the exercise well. The relative degree of fluency required may be indicated by, but not limited to, a fluency proficiency level of the user, a therapy stage of the user, and so on.
  • the training session may be based on selections made via the fluency practice device.
  • the selections may include initiating a session without communicating with other devices (i.e., a solo session), inviting one or more friends to join a current group training session, and scheduling a public (i.e., no invitation required to join) or semi-public (i.e., by invitation) group training session at a later time.
  • the selections may be received via a user interface displayed on the user device.
  • results of the fluency practice session including the voice production may be received after the fluency practice session.
  • the scheduled training session may include establishing a communication channel between the fluency practice device and any of the invited devices at the scheduled start time.
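  • By way of illustration only, the following Python sketch shows one way a server such as the server 130 might represent these session selections; the PracticeSession class, its field names, and the channel-identifier format are hypothetical and are not taken from the disclosure.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

# Hypothetical session model: a solo session needs no channel, while group
# sessions may be by invitation ("semi-public") or open to anyone ("public").
@dataclass
class PracticeSession:
    host_user_id: str
    visibility: str = "solo"            # "solo", "semi-public", or "public"
    invited_user_ids: List[str] = field(default_factory=list)
    scheduled_start: Optional[datetime] = None
    channel_id: Optional[str] = None    # set when the audio/video channel opens

    def may_join(self, user_id: str) -> bool:
        """Return True if the given user may join this session."""
        if self.visibility == "solo":
            return user_id == self.host_user_id
        if self.visibility == "semi-public":
            return user_id == self.host_user_id or user_id in self.invited_user_ids
        return True  # public sessions require no invitation

def open_channel_if_due(session: PracticeSession, now: datetime) -> None:
    """At the scheduled start time, establish a communication channel
    between the host device and any invited devices (sketched as an ID)."""
    if session.visibility != "solo" and session.scheduled_start and now >= session.scheduled_start:
        session.channel_id = f"channel-{session.host_user_id}-{int(now.timestamp())}"

if __name__ == "__main__":
    session = PracticeSession("patient-1", visibility="semi-public",
                              invited_user_ids=["patient-2"],
                              scheduled_start=datetime(2015, 12, 28, 18, 0))
    print(session.may_join("patient-2"))   # True: invited
    print(session.may_join("patient-3"))   # False: not invited
    open_channel_if_due(session, datetime(2015, 12, 28, 18, 5))
    print(session.channel_id)
```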
  • an analysis of the voice production is caused.
  • the analysis may be performed by a feedback generator system (e.g., the feedback generator system 150 ).
  • the analysis may be performed in real-time during the fluency practice session, or may be performed after the fluency practice session has ended.
  • the analysis includes processing the voice production to evaluate a correct execution of the voice production respective of the exercise difficulty levels. Processing the voice production to evaluate a correct execution is described further in the above-referenced '274 Application.
  • a feedback is obtained.
  • the feedback may be received or retrieved from the feedback generator system.
  • the feedback may be a visual feedback illustrating differences between a patient's voice production and a predefined target template.
  • the feedback may be sent for display on any of the fluency practice device, the other fluency practice devices, and/or the therapist devices.
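  • The following sketch illustrates, under stated assumptions, what a comparison against a predefined target template could look like; the TargetTemplate class and the use of per-syllable durations are invented for this example, since the disclosure does not specify the template format.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical comparison of a voice production against a predefined target
# template. Here the "template" is reduced to a target syllable duration in
# seconds; a real feedback generator would analyze the recorded waveform.
@dataclass
class TargetTemplate:
    target_syllable_duration: float   # e.g., prolonged speech at ~0.5 s per syllable
    tolerance: float = 0.1

def compare_to_template(measured_durations: List[float],
                        template: TargetTemplate) -> List[str]:
    """Return one remark per syllable describing how it deviates from the target."""
    remarks = []
    for i, duration in enumerate(measured_durations, start=1):
        delta = duration - template.target_syllable_duration
        if abs(delta) <= template.tolerance:
            remarks.append(f"syllable {i}: on target ({duration:.2f}s)")
        elif delta < 0:
            remarks.append(f"syllable {i}: too short by {-delta:.2f}s")
        else:
            remarks.append(f"syllable {i}: too long by {delta:.2f}s")
    return remarks

if __name__ == "__main__":
    template = TargetTemplate(target_syllable_duration=0.5)
    for line in compare_to_template([0.48, 0.31, 0.66], template):
        print(line)
```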
  • a feed is generated or updated based on the obtained feedback.
  • the feed is a collection of previous feedbacks respective of past user performances.
  • at S250, a display of the motivational content is caused.
  • S250 may further include sending a notification respective of the motivational content.
  • the notification may be sent via email, short message service, an agent installed on a device, and so on.
  • the notification may be accessed via the fluency practice device, another fluency practice device, a therapist device, and so on.
  • the notification may indicate information related to the patient's vocal production such as, but not limited to, termination of a fluency practice session, completion of a therapy course, achievements, performance graphs, audio and video clips of the patient's performance, and so on.
  • the notification may further include the motivational content.
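  • A minimal sketch of the notification step follows; the transport names mirror the options listed above (email, short message service, an installed agent), while the helper functions are stand-ins rather than real delivery APIs.

```python
from typing import Dict, Iterable, List

# Hypothetical notification dispatch for S250: each recipient declares a
# preferred transport, and the send_* helpers only format strings here.
def send_email(address: str, body: str) -> str:
    return f"email to {address}: {body}"

def send_sms(number: str, body: str) -> str:
    return f"sms to {number}: {body}"

def push_to_agent(device_id: str, body: str) -> str:
    return f"agent push to {device_id}: {body}"

TRANSPORTS = {"email": send_email, "sms": send_sms, "agent": push_to_agent}

def notify(recipients: Iterable[Dict[str, str]], body: str) -> List[str]:
    """Send the same notification body over each recipient's preferred transport."""
    results = []
    for recipient in recipients:
        transport = TRANSPORTS[recipient["transport"]]
        results.append(transport(recipient["address"], body))
    return results

if __name__ == "__main__":
    recipients = [
        {"transport": "email", "address": "therapist@example.com"},
        {"transport": "agent", "address": "fluency-device-120-1"},
    ]
    print(notify(recipients, "Practice session completed: 15 minutes, 4 exercises."))
```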
  • the various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof.
  • the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices.
  • the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
  • the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces.
  • the computer platform may also include an operating system and microinstruction code.
  • a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • Business, Economics & Management (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Theoretical Computer Science (AREA)
  • Epidemiology (AREA)
  • Signal Processing (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Primary Health Care (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Artificial Intelligence (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Nursing (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Urology & Nephrology (AREA)

Abstract

A social networking platform for enhancing fluency training is presented. The platform includes a plurality of fluency practice devices; a plurality of therapist devices; and a server communicatively connected to the plurality of fluency practice devices and the plurality of therapist devices, wherein the server is configured to facilitate communications among the plurality of fluency practice devices and between the plurality of fluency practice devices and the plurality of therapist devices, wherein the server is further configured to share social networking feeds related to practicing speech fluency, wherein the social networking feeds are generated by at least each of the plurality of fluency practice devices.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 62/098,355 filed on Dec. 31, 2014, the contents of which are hereby incorporated by reference.
  • TECHNICAL FIELD
  • The present disclosure relates generally to the field of speech therapy, and more particularly to engaging speech therapy patients over social media networks.
  • BACKGROUND
  • Speech disorders are one of the most prevalent disabilities in the world. Generally, speech disorders may be classified as fluency disorders, voice disorders, motor speech disorders, and speech sound disorders. As one example, stuttering is classified as a fluency disorder in the rhythm of speech in which a person knows precisely what to say, but is unable to communicate or speak in accordance with his or her intent.
  • Many clinical therapy techniques for speech disorders are disclosed in the related art. Conventional techniques for treating speech disorders and, in particular, anti-stuttering techniques, are commonly based on regulating the breath and controlling the rate of speech. To this end, speech therapists train their patients to improve their fluency. Such conventional techniques were found effective, in the short-term, as a speech disorder is predominantly a result of poorly coordinated speech production muscles.
  • In more detail, one common stutter therapy technique is fluency shaping, in which a therapist trains a person (e.g., a stuttering patient) to improve his or her speech fluency through the altering of various motor skills. Such skills include the abilities to control breathing; to gently increase, at the beginning of each phrase, vocal volume and laryngeal vibration; to continue phonation through the end of the phrase by keeping the vocal folds relaxed and air flowing; to speak slower and with prolonged vowel sounds; to enable continuous phonation; and to reduce articulatory pressure.
  • The speech motor skills are taught in the clinic while the therapist models the behavior and provides verbal feedback as the person learns to perform the motor skill. As the person develops speech motor control, the person increases rate and prosody of her/his speech until it sounds normal. During the final stage of the therapy, when the speech is fluent and sounds normal in the clinic, the person is trained to practice the acquired speech motor skills in her/his everyday life activities.
  • When fluency shaping therapy is successful, the stuttering is significantly improved or even eliminated. However, this therapy requires continuous training and practice in order to maintain effective speech fluency. As a result, the conventional techniques for practicing fluency shaping therapy are not effective for people suffering from stuttering. This is mainly because not all persons are capable of developing the target speech motor skills in the clinic, and even if such skills are developed, such skills are not easily transferable into everyday conversations. In other words, a patient can learn to speak fluently in the clinic, but will likely revert to stuttering outside of the clinic.
  • Thus, the continuous practice of speech motor skills is key to successful fluency shaping therapy. Consequently, the dependency on therapists and on frequent visits to clinics reduces the success rate of the fluency-shaping therapy. For example, a patient who waits a few days or weeks between therapy sessions may be more likely to stutter than patients who more frequently attend therapy. Lack of continuous practice between sessions further deteriorates the effectiveness of the therapy.
  • In addition to the need for access to continuous practice, the patient must remain motivated to actually participate in such continuous practice. Motivation may come in the form of, e.g., encouragement, celebrations of progress, setting progress goals, and so on. If the patient does not remain sufficiently motivated, then the patient will not continuously practice even if he or she has access to tools enabling remote practice. Consequently, motivation is a key factor in ensuring speech therapy success.
  • Therapists and other patients can contribute to motivating the patient to continue practicing regularly. To maintain motivation during the time periods between live sessions with a therapist, the patient should receive encouragement during those time periods. Patient motivation may further be spurred by sharing patient progress with appropriate people who can acknowledge the patient's efforts and encourage additional progress. Existing techniques for providing speech therapy face challenges in providing encouragement between therapy sessions because they rely on the availability of therapists and other motivating individuals between sessions.
  • With the advent of the Internet and, in particular, social media, people can now communicate with large groups of friends, family, and acquaintances 24 hours a day, 7 days a week, 365 days a year. Social media networks may further provide automatic updates for users such as, e.g., birthday and holiday messages, as well as web-based activity of a user. As an example, a social media network may automatically (or by user selection) share media content viewed by a user, items purchased by the user, games played by the user, and so on. However, such social media networks lack the ability to provide feedback and support for speech therapy patients.
  • It would therefore be advantageous to provide a solution that would overcome the deficiencies of the prior art.
  • SUMMARY
  • A summary of several example embodiments of the disclosure follows. This summary is provided for the convenience of the reader to provide a basic understanding of such embodiments and does not wholly define the breadth of the disclosure. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all embodiments nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later. For convenience, the term “some embodiments” may be used herein to refer to a single embodiment or multiple embodiments of the disclosure.
  • The disclosed embodiments include a social networking platform for enhancing fluency training, comprising: a plurality of fluency practice devices; a plurality of therapist devices; and a server communicatively connected to the plurality of fluency practice devices and the plurality of therapist devices, wherein the server is configured to facilitate communications among the plurality of fluency practice devices and between the plurality of fluency practice devices and the plurality of therapist devices, wherein the server is further configured to share social networking feeds related to practicing speech fluency, wherein the social networking feeds are generated by at least each of the plurality of fluency practice devices.
  • The disclosed embodiments further include a method for enhancing speech fluency training via a social networking platform. The method comprises facilitating communications among a plurality of fluency practice devices and between the plurality of fluency practice devices and a plurality of therapist devices; and sharing social networking feeds related to practicing speech fluency, wherein the social networking feeds are generated by at least each of the plurality of fluency practice devices.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The subject matter disclosed herein is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the disclosed embodiments will be apparent from the following detailed description taken in conjunction with the accompanying drawings.
  • FIG. 1 is a network diagram utilized to describe the various disclosed embodiments.
  • FIG. 2 is a flowchart illustrating a method for enhancing remote speech therapy via social media according to an embodiment.
  • FIG. 3 is a screenshot illustrating groupings of users.
  • FIG. 4 is a screenshot illustrating progress indicators presented as motivational content.
  • FIG. 5 is a screenshot illustrating a challenge presented as motivational content.
  • FIG. 6 is a screenshot illustrating a social networking message from a user.
  • DETAILED DESCRIPTION
  • It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts through several views.
  • Social networking elements are used to support stuttering/stammering treatment using fluency shaping techniques, as follows. Online practice between users: with the aid of the video chat and chat platforms, fluency practice sessions between users of the system are created, for example in the form of templates. Feed creation in the area of fluency shaping: achievements of the users are used to create the feed as well as to improve motivation to practice the technique. Users can reward the achievements of other users based on their successes and various reports.
  • Speech is a social activity. Therefore, in order to gain mastery of the fluency shaping techniques in spontaneous speech, it is important to practice using the new speech patterns in various communicative situations which challenge speech fluency (for example, with new people, with strangers, in situations already identified as difficult for that individual, in group discussions), and not just alone in front of the computer or the clinician. Elements of social networking can be used within the system to help achieve this goal.
  • It is important to note that the main purpose of the system is to provide a platform for the practice and application of the fluency shaping techniques in conversational spontaneous speech with other people. However, an additional advantage is that it provides a community in which persons practicing speech fluency can support each other before, during and after treatment.
  • Practicing fluent speech at the standardized/regulated rate can occur between two users on the system of the invention (the users do not need to know each other previously). The practice is conducted online at convenient times for the users and in any location from which they can connect to the system. The conversations are conducted using video conferencing as well as the various system-based indicators (visual monitor, rate monitor, etc.).
  • A feed is created for fluency shaping where users' achievements can be broadcast on an online “feed” and used to increase motivation for practicing the techniques. Users of the system have the option to “reward” the achievements of other users. Achievements are published in an internal news feed (available only to registered users). All users can encourage the feed events by means of comments and an encouragement counter.
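  • As a rough illustration only, an internal feed with an encouragement counter and comments might be modeled as in the following Python sketch; the InternalFeed and FeedEntry names are hypothetical and are not taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical internal news feed: achievements are posted as entries that
# other registered users can "reward" (an encouragement counter) or comment on.
@dataclass
class FeedEntry:
    author_id: str
    achievement: str
    encouragements: int = 0
    comments: List[str] = field(default_factory=list)

class InternalFeed:
    def __init__(self) -> None:
        self._entries: List[FeedEntry] = []

    def publish(self, author_id: str, achievement: str) -> FeedEntry:
        entry = FeedEntry(author_id, achievement)
        self._entries.append(entry)
        return entry

    def encourage(self, entry: FeedEntry, comment: str = "") -> None:
        entry.encouragements += 1
        if comment:
            entry.comments.append(comment)

    def latest(self, limit: int = 10) -> List[FeedEntry]:
        return self._entries[-limit:]

if __name__ == "__main__":
    feed = InternalFeed()
    entry = feed.publish("patient-1", "Completed a week of daily 15-minute practice")
    feed.encourage(entry, "Well done!")
    feed.encourage(entry)
    print(entry.encouragements, entry.comments)
```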
  • FIG. 1 shows an exemplary and non-limiting diagram of a remote speech therapy system 100 utilized to describe the various disclosed embodiments. The system 100 includes a network 110, a first plurality of user devices 120-1 through 120-n (hereinafter referred to individually as a fluency practice device 120 and collectively as fluency practice devices 120, merely for simplicity purposes), a server 130, a database 140, a feedback generator system (FGS) 150, and a second plurality of user devices 160-1 through 160-n (hereinafter referred to individually as a therapist device 160 and collectively as therapist devices 160, merely for simplicity purposes).
  • The network 110 may be the Internet, the world-wide-web (WWW), a local area network (LAN), a wide area network (WAN), and other networks configured to enable communication between the elements of the system 100. Each fluency practice device 120 and each therapist device 160 may be, but is not limited to, a personal computer (PC), a personal digital assistant (PDA), a mobile phone, a smart phone, a tablet computer, a wearable computer device, a game console, and the like.
  • In a non-limiting example, the fluency practice devices 120 are utilized by people (e.g., speech therapy patients) practicing to improve existing speech disorders, and the therapist devices 160 are utilized by speech therapists. It should be noted that one or more fluency practice devices 120 can communicate with a single therapist device 160, and multiple therapist devices 160 can communicate with one or more fluency practice devices 120. It should be noted that the fluency practice device 120 can be operated by any person who may or may not suffer from a speech disorder. It should be further noted that the therapist device 160 may be operated by any person who may or may not be a therapist. Typically, the therapist device 160 is operated by either a therapist or any other person who may be permitted to observe the fluency-practicing user's progress (e.g., a friend, a family member, a guardian, and so on).
  • Each of the fluency practice devices 120 and the therapist devices 160 is configured to communicate with the server 130. The server 130, according to the disclosed embodiments, is configured to allow fluency practice sessions between the user devices 120 and/or 160, to obtain feedback based on the fluency practice sessions, to store fluency practice progress, and to share content respective of the fluency practice sessions. Each fluency practice session may be joined by one or more of the fluency practice devices 120 and/or one or more of the therapist devices 160.
  • In an embodiment, the fluency practice sessions may be created according to a template. The template defines parameters for exercises included in the fluency practice sessions. Each exercise may further have a difficulty level that may correspond to a therapy stage of a fluency-practicing user. Exercises having higher difficulty levels are therefore typically for fluency-practicing users at higher therapy stages as determined based on, e.g., past performances. In a further embodiment, the server 130 may be configured to select one or more exercises from a plurality of exercises based on the user profiles of the patient(s) participating in the fluency practice session. For example, the exercises may be selected based on the therapy stage of the patients participating in a particular fluency practice session.
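  • A simplified sketch of template-driven exercise selection by therapy stage follows; the Exercise class, the numeric stage scale, and the rule of matching the least advanced participant are assumptions made for illustration, not details from the disclosure.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical template-driven exercise selection: each exercise carries a
# difficulty level, and exercises are chosen to match the participants'
# therapy stages.
@dataclass
class Exercise:
    name: str
    difficulty: int          # e.g., 1 = gentle onset drills, 4 = structured conversation

def pick_exercises(catalog: List[Exercise],
                   participant_stages: List[int],
                   count: int = 3) -> List[Exercise]:
    """Select exercises whose difficulty does not exceed the least advanced participant's stage."""
    stage = min(participant_stages)
    suitable = [e for e in catalog if e.difficulty <= stage]
    suitable.sort(key=lambda e: e.difficulty, reverse=True)
    return suitable[:count]

if __name__ == "__main__":
    catalog = [
        Exercise("Breathing control", 1),
        Exercise("Gentle voice onset", 2),
        Exercise("Prolonged vowels in phrases", 3),
        Exercise("Structured conversation", 4),
    ]
    for exercise in pick_exercises(catalog, participant_stages=[3, 4]):
        print(exercise.name)
```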
  • In an embodiment, an audio/video communication channel may be established between any of the fluency practice devices 120 and/or the therapist devices 160. This enables fluency practice sessions such as, for example, remote therapy sessions, group therapy sessions between multiple patients and/or therapists, combinations thereof, and so on. The audio/video communication channel can be a peer-to-peer connection between the fluency practice devices 120 and/or the therapist devices 160, or can be through the server 130 via, e.g., a website, a mobile application, and so on. To this end, an audio/video channel may be established between the fluency practice devices 120 and/or the therapist devices 160 to allow direct communication between the patients and/or the therapists. The channel, in one embodiment, is established over HTTP. In an embodiment, the agent 125 or 165 of each respective device 120 or 160 is configured to stream video streams from one device to another over the established communication channel.
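  • The choice between a peer-to-peer connection and a server-relayed channel could be sketched as follows; the negotiate() helper, the Channel fields, and the relay URL are placeholders for illustration and are not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical channel set-up decision: the server either brokers a direct
# (peer-to-peer) connection between two devices or relays the audio/video itself.
@dataclass
class Channel:
    device_a: str
    device_b: str
    mode: str                 # "peer-to-peer" or "server-relayed"
    relay_url: Optional[str] = None

def negotiate(device_a: str, device_b: str, both_reachable_directly: bool) -> Channel:
    """Prefer a direct connection; fall back to relaying through the server."""
    if both_reachable_directly:
        return Channel(device_a, device_b, mode="peer-to-peer")
    return Channel(device_a, device_b, mode="server-relayed",
                   relay_url="https://server.example/relay")   # placeholder URL

if __name__ == "__main__":
    print(negotiate("fluency-device-120-1", "therapist-device-160-1", True))
    print(negotiate("fluency-device-120-2", "fluency-device-120-3", False))
```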
  • The interface between each of the devices 120 or 160 and the server 130 may be realized through, for example, a web interface (e.g., a web portal), an application installed on the device 120 or 160, a script executed on the device 120 or 160, and the like. In an embodiment, each fluency practice device 120 is installed with an agent 125 and each therapist device 160 is installed with an agent 165. Each of the agents 125 and 165 may be configured to communicate with the server 130. In certain configurations, each agent 125 or 165 can operate and be implemented as stand-alone programs and/or can communicate and be integrated with other programs or applications executed in the fluency practice device 120 and the therapist device 160, respectively. Examples for a stand-alone program may include a web application, a mobile application, and the like.
  • The server 130 may be configured to store a user profile associated with each user of the fluency practice devices 120 and/or therapist devices 160 in the database 140. Each user profile may include, but is not limited to, a name of a user, a classification of a user (e.g., a user may be classified as either a patient, a therapist, or a guardian), friends lists, supervised users lists (e.g., a list of patients that a therapist supervises), a fluency proficiency level of a patient, a therapy stage of the patient, an assignment for a therapist (e.g., a therapist may be assigned to work with patients at a particular therapy stage), content shared among users of the devices 120 and 160, feedback related to a patient's vocal performances, and so on. The server 130 may be configured to utilize the user profiles to automatically determine and provide appropriate motivational content to each patient. The motivational content may include, but is not limited to, prizes, achievements, shared content, performance evaluations, additional exercises, notifications of therapy stage updates, and so on.
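  • A hypothetical user-profile record mirroring the fields listed above might look like the following; the field names and the numeric therapy-stage values are illustrative only.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical profile record; the disclosure lists the kinds of fields but
# does not prescribe a schema, so these names are assumptions.
@dataclass
class UserProfile:
    name: str
    classification: str                                          # "patient", "therapist", or "guardian"
    friends: List[str] = field(default_factory=list)
    supervised_users: List[str] = field(default_factory=list)    # for therapists
    fluency_proficiency: Optional[int] = None                    # for patients
    therapy_stage: Optional[int] = None
    assigned_stage: Optional[int] = None                         # for therapists

if __name__ == "__main__":
    patient = UserProfile("Dana", "patient", friends=["Avi"], therapy_stage=2)
    therapist = UserProfile("Noa", "therapist", supervised_users=["Dana"], assigned_stage=2)
    print(patient, therapist, sep="\n")
```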
  • FIG. 3 shows an exemplary and non-limiting screenshot 300 illustrating a supervised users list for a therapist as displayed in a social networking application. The screenshot 300 includes a supervised users list 310. The supervised users list 310 indicates patients whose progress the therapist is currently supervising.
  • The agent 125 may be configured to capture sound samples from its respective fluency practice device 120 during a fluency practice session and to send the captured sound samples to the server 130. In an embodiment, the server 130 may be configured to receive the sound samples from the agent 125 and to send the sound samples to the feedback generator system 150. The feedback generator system 150 analyzes the patient's performance respective of the captured sound samples and generates feedback respective of the performance. The analysis and generation of feedback based on patient performances is described further in U.S. patent application Ser. No. 14/978,274 titled “A METHOD AND SYSTEM FOR ONLINE AND REMOTE SPEECH DISORDERS THERAPY,” (hereinafter the '274 Application) assigned to the common assignee, which is hereby incorporated by reference for all that it contains. The feedback generator system 150 may send the feedback to the server 130.
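  • The capture-and-analyze flow could be sketched as below, with stand-in functions for the agent, the server, and the feedback generator system; the sample format and the returned feedback fields are invented for this example.

```python
from typing import Callable, Dict, List

# Hypothetical sound-sample pipeline: an agent captures samples and hands them
# to the server, which forwards them to a feedback generator and stores the result.
def agent_capture_samples() -> List[float]:
    """Stand-in for audio captured by the agent on a fluency practice device."""
    return [0.01, 0.04, 0.02, 0.05]

def feedback_generator(samples: List[float]) -> Dict[str, float]:
    """Stand-in for the feedback generator system's analysis of a performance."""
    average_level = sum(samples) / len(samples)
    return {"average_level": average_level, "num_samples": float(len(samples))}

def server_handle_samples(samples: List[float],
                          analyze: Callable[[List[float]], Dict[str, float]],
                          store: List[Dict[str, float]]) -> Dict[str, float]:
    """The server forwards samples for analysis and stores the returned feedback."""
    feedback = analyze(samples)
    store.append(feedback)
    return feedback

if __name__ == "__main__":
    database: List[Dict[str, float]] = []
    feedback = server_handle_samples(agent_capture_samples(), feedback_generator, database)
    print(feedback, len(database))
```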
  • The server 130 may store the feedback in the database 140. In an embodiment, the server 130 may be configured to select one or more of the devices 120 and/or 160 and to send the feedback to the selected devices 120 and 160. In response, the selected devices 120 and 160 may display the feedback.
  • In a further embodiment, the server 130 may select the devices 120 and 160 based on a user profile associated with the fluency practice device 120 from which the sound samples were captured. As an example, the user profile may indicate a therapy stage of the patient, and the selected devices may be utilized by patients at the same therapy stage. As another example, the user profile may indicate friends of the patient, and the selected devices may be utilized by the indicated friends.
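  • One possible reading of this selection logic is sketched here; the profile dictionary keys (therapy_stage, friends, device_id) are assumptions chosen to match the examples in the text.

```python
from typing import Dict, List

# Hypothetical selection of which devices receive a patient's feedback, based
# on the practicing patient's user profile (same therapy stage or friends).
def select_recipients(source_profile: Dict, all_profiles: List[Dict]) -> List[str]:
    recipients = []
    for profile in all_profiles:
        if profile["user_id"] == source_profile["user_id"]:
            continue
        same_stage = profile.get("therapy_stage") == source_profile.get("therapy_stage")
        is_friend = profile["user_id"] in source_profile.get("friends", [])
        if same_stage or is_friend:
            recipients.append(profile["device_id"])
    return recipients

if __name__ == "__main__":
    source = {"user_id": "p1", "therapy_stage": 2, "friends": ["p3"], "device_id": "d1"}
    others = [
        {"user_id": "p2", "therapy_stage": 2, "device_id": "d2"},
        {"user_id": "p3", "therapy_stage": 4, "device_id": "d3"},
        {"user_id": "p4", "therapy_stage": 1, "device_id": "d4"},
    ]
    print(select_recipients(source, others))   # ['d2', 'd3']
```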
  • In an embodiment, the server 130 may generate a feed for the patient based, in part, on the obtained feedback. The feed may be a collection of previously generated feedbacks respective of past voice productions of a patient. The feed may further indicate progress of the patient based on, for example, time periods of practice, regularity of practice, successes, attaining particular stages in therapy, and so on. The progress may further be respective of one or more challenges undertaken by the patient.
  • Based on the generated feedback and/or the feed, the server 130 may be configured to determine motivational content for display on the fluency practice device 120 respective of the patient's performance. The social feeds, including the motivational content, may include, but are not limited to, a textual message (e.g., an email, an SMS message, or a message displayed on the fluency practice device 120 via the agent 125), a video clip, an audio clip, an image, an achievement, a user-created exercise, a performance graph, combinations thereof, and so on. The motivational content may provide encouragement and/or motivation to the patient in the form of, for example, patient progress, challenges, achievements, rewards, and any other motivating factors.
  • FIG. 4 shows an exemplary and non-limiting screenshot 400 illustrating progress indicators presented as motivational content. The screenshot 400 includes circular progress meters 410-1 and 410-2 representing a user's progress with respect to the challenges of “practice at least 3 days for 15 minutes” and “complete 5 medium speech tasks.” In the exemplary screenshot 400, the progress meter 410-1 indicates that the user has not practiced for 15 minutes on any day, while the progress meter 410-2 indicates that the user has completed 9 medium difficulty speech tasks.
  • FIG. 5 shows an exemplary and non-limiting screenshot 500 illustrating a challenge from another user presented as motivational content. The screenshot 500 includes a challenge box 510 including an image and text related to a challenge from another patient or from a therapist. The challenge asks the patient to practice for 15 minutes on at least 4 days and to complete 4 medium difficulty speech tasks.
  • In an embodiment, the server 130 may be configured to determine and send motivational content automatically. The motivational content may be determined based on, but not limited to, the user profile, the feedback, and so on. In another embodiment, the server 130 may be configured to receive motivational content from one or more of the devices 120 and/or 160 and to send the received motivational content to the fluency practice device 120. As an example, another patient observing the patient's feedback may send a congratulatory video after a successful fluency practice session. As another example, a therapist may create and send an additional exercise to further help improve the patient's performance after a fluency practice session.
  • In an embodiment, the server 130 may be further configured to automatically update a social networking profile of the patient based on the most recent feedback. In another embodiment, the server 130 may be further configured to automatically send a notification regarding the most recent feedback. The notification may be sent via, e.g., email, short message service, the agent 125, the agent 165, and so on. The notification may be sent to select patients and/or therapists based on, e.g., the user profile. For example, only therapists assigned to the therapy stage of the patient and/or other patients that are friends of the patient may be sent the notification.
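  • The following exemplary and non-limiting Python sketch illustrates one possible rule for limiting such notifications to therapists assigned to the patient's therapy stage and to patients on the patient's friends list; the profile fields are illustrative assumptions.

```python
def notification_recipients(patient, all_profiles):
    """Select users who should be notified about the patient's most recent feedback."""
    recipients = []
    for profile in all_profiles:
        is_assigned_therapist = (
            profile["classification"] == "therapist"
            and patient["therapy_stage"] in profile.get("assigned_stages", [])
        )
        is_friend_patient = (
            profile["classification"] == "patient"
            and profile["name"] in patient["friends"]
        )
        if is_assigned_therapist or is_friend_patient:
            recipients.append(profile["name"])
    return recipients

patient = {"name": "Dana", "therapy_stage": 2, "friends": ["Noa"]}
profiles = [
    {"name": "Dr. Levi", "classification": "therapist", "assigned_stages": [2, 3]},
    {"name": "Noa", "classification": "patient"},
    {"name": "Avi", "classification": "patient"},
]
print(notification_recipients(patient, profiles))   # -> ['Dr. Levi', 'Noa']
```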
  • In yet another embodiment, the server 130 may be configured to receive social networking messages (e.g., “posts”) respective of the feedback. FIG. 6 is an exemplary and non-limiting screenshot 600 of a social networking message from a user. The screenshot 600 includes a message entry field 610 as well as message boxes 620. In the exemplary screenshot 600, the message boxes 620 include posts by a patient Moshe Rot. The posts include images and text entered in response to fluency practice sessions.
  • It should be noted that the feedback, the feed, and/or the motivational content may be sent to the fluency practice device 120 or the therapist device 160 for display in response to activity such as, but not limited to, launching of the agent 125 or 165, a user interaction with the device 120 or 160 during execution of the agent 125 or 165, and so on. The displayed items may be displayed as part of a social networking profile of a member of a social media network.
  • It should be noted that the feedback generator system 150 may comprise or be a component of the server 130 without departing from the scope of the disclosure.
  • In some implementations, each of the fluency practice device 120, the server 130, and the therapist device 160 typically includes a processing system (not shown) connected to a memory (not shown). The memory contains a plurality of instructions that are executed by the processing system. Specifically, the memory may include machine-readable media for storing software. Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the one or more processors, cause the processing system to perform the various functions described herein.
  • The processing system may comprise or be a component of a larger processing system implemented with one or more processors. The one or more processors may be implemented with any combination of general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, dedicated hardware finite state machines, or any other suitable entities that can perform calculations or other manipulations of information.
  • It should be understood that the embodiments disclosed herein are not limited to the specific architecture illustrated in FIG. 1, and other architectures may be equally used without departing from the scope of the disclosed embodiments. Specifically, the server 130 may reside in a cloud computing platform, a datacenter, and the like. Moreover, in an embodiment, there may be a plurality of servers 130 operating as described hereinabove and configured such that one operates as a standby, the load is shared among them, or the functions are split among them.
  • FIG. 2 is an exemplary and non-limiting flowchart 200 illustrating a method for enhancing remote speech therapy via social media according to an embodiment. In an embodiment, the method may be performed by the server 130.
  • In optional S205, a communication channel may be established between a fluency practice device and one or more other devices. The other devices may include, but are not limited to, other fluency practice devices, therapist devices, and so on. In an embodiment, the communication channel may be established based on a user profile of a patient using the fluency practice device as described further herein above with respect to FIG. 1.
  • In S210, a voice production by the patient is received from the fluency practice device respective of a fluency practice session. The fluency practice session may be individual to the fluency practice device (i.e., the patient may be engaged in a solo session that does not require a communication channel between devices), or may be based on communications between the fluency practice device and other patient and/or therapist devices. The fluency practice session may include one or more exercises, with each exercise having a difficulty level indicating the relative degree of fluency required to perform the exercise well. The relative degree of fluency required may be indicated by, but not limited to, a fluency proficiency level of the user, a therapy stage of the user, and so on.
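  • The following exemplary and non-limiting Python sketch shows one possible way exercises could be filtered so that their difficulty levels match a user's fluency proficiency level or therapy stage; the catalogue format and matching rule are illustrative assumptions.

```python
def exercises_for_user(exercises, proficiency_level, therapy_stage):
    """Filter an exercise catalogue to difficulty levels the user can be expected
    to perform well, using proficiency level and therapy stage as the indicators."""
    allowed = max(proficiency_level, therapy_stage)   # illustrative rule only
    return [ex for ex in exercises if ex["difficulty"] <= allowed]

catalogue = [
    {"name": "gentle onset - single syllable", "difficulty": 1},
    {"name": "controlled rate - sentence", "difficulty": 2},
    {"name": "spontaneous speech - paragraph", "difficulty": 4},
]
print(exercises_for_user(catalogue, proficiency_level=2, therapy_stage=1))
```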
  • In an embodiment, the training session may be based on selections made via the fluency practice device. The selections may include initiating a session without communicating with other devices (i.e., a solo session), inviting one or more friends to join a current group training session, and scheduling a public (i.e., no invitation required to join) or semi-public (i.e., by invitation) group training session at a later time. The selections may be received via a user interface displayed on the user device. In a further embodiment, results of the fluency practice session including the voice production may be received after the fluency practice session. The scheduled training session may include establishing a communication channel between the fluency practice device and any of the invited devices at the scheduled start time.
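  • The following exemplary and non-limiting Python sketch illustrates one possible representation of the session selections described above, including scheduling a public or semi-public group session for a later time; the class and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class PracticeSession:
    """One fluency practice session, created from selections made on the device."""
    mode: str                                  # "solo", "group", or "scheduled"
    visibility: str = "private"                # "public" or "semi-public" when scheduled
    invited_devices: List[str] = field(default_factory=list)
    start_time: Optional[datetime] = None

def schedule_group_session(invited, start_time, public=False):
    """Create a later group session; a communication channel would be established
    between the fluency practice device and the invited devices at start_time."""
    return PracticeSession(
        mode="scheduled",
        visibility="public" if public else "semi-public",
        invited_devices=list(invited),
        start_time=start_time,
    )

session = schedule_group_session(["device-2"], datetime(2016, 1, 5, 17, 0))
print(session.visibility, session.start_time)   # -> semi-public 2016-01-05 17:00:00
```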
  • In S220, an analysis of the voice production is caused. In an embodiment, the analysis may be performed by a feedback generator system (e.g., the feedback generator system 150). The analysis may be performed in real-time during the fluency practice session, or may be performed after the fluency practice session has ended. The analysis includes processing the voice production to evaluate a correct execution of the voice production respective of the exercise difficulty levels. Processing the voice production to evaluate a correct execution is described further in the above-referenced '274 Application.
  • In S230, respective of the analysis, a feedback is obtained. In an embodiment, the feedback may be received or retrieved from the feedback generator system. The feedback may be a visual feedback illustrating differences between a patient's voice production and a predefined target template. In another embodiment, the feedback may be sent for display on any of the fluency practice device, the other fluency practice devices, and/or the therapist devices.
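  • The following exemplary and non-limiting Python sketch illustrates one simplified way differences between a voice production and a predefined target template could be computed and rendered as feedback; the per-frame intensity representation, tolerance value, and textual rendering are illustrative assumptions, and the actual evaluation is described in the above-referenced '274 Application.

```python
def compare_to_template(production, template):
    """Compute per-frame differences between a voice production and a predefined
    target template (both given here as simple per-frame intensity values)."""
    length = min(len(production), len(template))
    return [round(production[i] - template[i], 3) for i in range(length)]

def render_feedback(differences, tolerance=0.1):
    """Render a crude textual 'visual' feedback: '=' where the production tracks
    the template, '^' or 'v' where it is above or below tolerance."""
    marks = []
    for diff in differences:
        if abs(diff) <= tolerance:
            marks.append("=")
        elif diff > 0:
            marks.append("^")
        else:
            marks.append("v")
    return "".join(marks)

production = [0.20, 0.45, 0.80, 0.60, 0.30]
template   = [0.25, 0.50, 0.60, 0.55, 0.35]
print(render_feedback(compare_to_template(production, template)))   # -> ==^==
```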
  • In optional S235, a feed is generated or updated based on the obtained feedback. The feed is a collection of previous feedbacks respective of past user performances.
  • In S240, motivational content is determined based on the feedback and/or the feed. In an embodiment, the motivational content is further based on the user profile of the patient. The motivational content may be received from one of the other devices, or may be automatically selected. In another embodiment, additional motivational content may be determined in response to motivational content received from one of the other devices.
  • In S250, a display of the motivational content is caused. In an embodiment, S250 may further include sending a notification respective of the motivational content. The notification may be sent via email, short message service, an agent installed on a device, and so on. The notification may be accessed via the fluency practice device, another fluency practice device, a therapist device, and so on. The notification may indicate information related to the patient's vocal production such as, but not limited to, termination of a fluency practice session, completion of a therapy course, achievements, performance graphs, audio and video clips of the patient's performance, and so on. The notification may further include the motivational content.
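  • The following exemplary and non-limiting Python sketch shows one possible dispatch of such a notification over email, short message service, or an installed agent; the channel handlers are placeholders and the recipient fields are illustrative assumptions.

```python
def send_notification(recipient, message, channel, agents=None):
    """Dispatch a notification about new motivational content over the selected channel.
    The handlers below only print; a real deployment would call an e-mail gateway,
    an SMS gateway, or push the message through an installed agent (e.g., agent 125/165)."""
    if channel == "email":
        print(f"[email to {recipient['email']}] {message}")
    elif channel == "sms":
        print(f"[sms to {recipient['phone']}] {message}")
    elif channel == "agent":
        # Fall back to printing if no agent callback is registered for the device.
        (agents or {}).get(recipient["device_id"], print)(message)
    else:
        raise ValueError(f"unknown channel: {channel}")

recipient = {"email": "dana@example.com", "phone": "+972-50-0000000", "device_id": "device-1"}
send_notification(recipient, "New achievement: a full fluent practice session!", "email")
```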
  • It should be noted that the embodiments disclosed herein are described with respect to speech therapy patients merely for simplicity purposes and without limitation on the disclosure. Any fluency-practicing user seeking to improve a speech disorder may participate in fluency practice sessions and/or be motivated by social networking respective thereof without departing from the scope of the disclosure.
  • It should be noted that a portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent & Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
  • The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.
  • All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosed embodiment and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosed embodiments, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

Claims (20)

What is claimed is:
1. A social networking platform for enhancing speech fluency training, comprising:
a plurality of fluency practice devices;
a plurality of therapist devices; and
a server communicatively connected to the plurality of fluency practice devices and the plurality of therapist devices, wherein the server is configured to facilitate communications among the plurality of fluency practice devices and between the plurality of fluency practice devices and the plurality of therapist devices, wherein the server is further configured to share social networking feeds related to practicing speech fluency, wherein the social networking feeds are generated by at least each of the plurality of fluency practice devices.
2. The social networking platform of claim 1, wherein each of the social networking feeds includes any of: a feedback generated respective of a fluency practice session, and motivational content.
3. The social networking platform of claim 2, wherein the motivational content includes at least one of: an achievement respective of the feedback, and an automatic update of a user profile respective of the feedback.
4. The social networking platform of claim 2, wherein the motivational content is generated by a therapist using a therapist device respective of a user using a fluency practice device, wherein the therapist and the user are part of the same social connection.
5. The social networking platform of claim 2, wherein the motivational content is generated by a first user using a first fluency practice device and a second user using a second fluency practice device, wherein the first user and the second user are part of the same social connection.
6. The social networking platform of claim 5, wherein each fluency practice device is configured to generate the feedback respective of the fluency practice session.
7. The social networking platform of claim 5, wherein each fluency practice device is further configured to:
receive voice productions captured during the fluency practice session, wherein the fluency practice session includes at least one fluency shaping exercise;
process at least one of the received voice productions to evaluate a correct execution of the at least one fluency shaping exercise; and
generate the feedback based on the evaluated execution of the at least one fluency shaping exercise.
8. The social networking platform of claim 7, wherein the feedback is at least a visual feedback, wherein the motivational content includes the visual feedback.
9. The social networking platform of claim 7, wherein each of the server, the plurality of fluency practice devices, and the plurality of therapist devices is further configured to:
generate at least a reward respective of the correct execution of the at least one fluency shaping exercise; and
share the generated reward via at least one of the social networking feeds.
10. The social networking platform of claim 7, wherein the at least one fluency shaping exercise allows practicing fluent speech at a standardized speech rate or at a regulated speech rate in spontaneous speech.
11. The social networking platform of claim 7, wherein each fluency shaping exercise has a difficulty, wherein the difficulty of each fluency shaping exercise is based on a therapy stage of a user of one of the plurality of fluency practice devices.
12. The social networking platform of claim 2, wherein the server is further configured to establish the fluency practice session between at least one of the fluency practice devices and at least one of the therapist devices over a communication channel.
13. The social networking platform of claim 12, wherein the communication channel is any of: a text communication channel, an audio communication channel, and an audio-visual communication channel.
14. The social networking platform of claim 2, wherein each social networking feed includes any of: a textual message, a video clip, an audio clip, an image, and a user-created exercise.
15. The social networking platform of claim 2, wherein the fluency practice session is between at least two participating fluency practice devices or between at least one participating fluency practice device and at least one participating therapist device.
16. The social networking platform of claim 15, wherein the server is further configured to select each participating fluency practice device and each participating therapist device based on a fluency progress of a user of each participating fluency practice device.
17. The social networking platform of claim 2, wherein the server is further configured to store a progress of a user of each fluency practice device respective of the feedback.
18. The social networking platform of claim 1, wherein the server is further configured to communicate with each fluency practice device and each therapist device via any of: a web interface, an application installed on the device, and a script executed by the device.
19. The social networking platform of claim 1, wherein each fluency practice device and each therapist device is any one of: a personal computer, a personal digital assistant, a mobile phone, a smart phone, a tablet computer, a wearable computer device, and a game console.
20. A method for enhancing speech fluency training via a social networking platform, comprising:
facilitating communications among a plurality of fluency practice devices and between the plurality of fluency practice devices and a plurality of therapist devices; and
sharing social networking feeds related to practicing speech fluency, wherein the social networking feeds are generated by at least each of the plurality of fluency practice devices.
US14/981,110 2014-12-31 2015-12-28 System and method for enhancing remote speech fluency therapy via a social media platform Abandoned US20160189566A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/981,110 US20160189566A1 (en) 2014-12-31 2015-12-28 System and method for enhancing remote speech fluency therapy via a social media platform

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462098355P 2014-12-31 2014-12-31
US14/981,110 US20160189566A1 (en) 2014-12-31 2015-12-28 System and method for enhancing remote speech fluency therapy via a social media platform

Publications (1)

Publication Number Publication Date
US20160189566A1 true US20160189566A1 (en) 2016-06-30

Family

ID=56162876

Family Applications (5)

Application Number Title Priority Date Filing Date
US14/978,274 Abandoned US20160183867A1 (en) 2014-12-31 2015-12-22 Method and system for online and remote speech disorders therapy
US14/981,110 Abandoned US20160189566A1 (en) 2014-12-31 2015-12-28 System and method for enhancing remote speech fluency therapy via a social media platform
US14/981,072 Abandoned US20160189565A1 (en) 2014-12-31 2015-12-28 System and method for automatic provision and creation of speech stimuli for treatment of speech disorders
US14/982,230 Active 2036-10-04 US10188341B2 (en) 2014-12-31 2015-12-29 Method and device for detecting speech patterns and errors when practicing fluency shaping techniques
US16/251,872 Active 2038-06-16 US11517254B2 (en) 2014-12-31 2019-01-18 Method and device for detecting speech patterns and errors when practicing fluency shaping techniques

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/978,274 Abandoned US20160183867A1 (en) 2014-12-31 2015-12-22 Method and system for online and remote speech disorders therapy

Family Applications After (3)

Application Number Title Priority Date Filing Date
US14/981,072 Abandoned US20160189565A1 (en) 2014-12-31 2015-12-28 System and method for automatic provision and creation of speech stimuli for treatment of speech disorders
US14/982,230 Active 2036-10-04 US10188341B2 (en) 2014-12-31 2015-12-29 Method and device for detecting speech patterns and errors when practicing fluency shaping techniques
US16/251,872 Active 2038-06-16 US11517254B2 (en) 2014-12-31 2019-01-18 Method and device for detecting speech patterns and errors when practicing fluency shaping techniques

Country Status (5)

Country Link
US (5) US20160183867A1 (en)
EP (2) EP3241206A4 (en)
CN (2) CN107111961A (en)
AU (2) AU2015374409A1 (en)
WO (2) WO2016109334A1 (en)


Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6511860B2 (en) * 2015-02-27 2019-05-15 富士通株式会社 Display control system, graph display method and graph display program
US20180054688A1 (en) * 2016-08-22 2018-02-22 Dolby Laboratories Licensing Corporation Personal Audio Lifestyle Analytics and Behavior Modification Feedback
EP3288035B1 (en) * 2016-08-22 2022-10-12 Dolby Laboratories Licensing Corp. Personal audio analytics and behavior modification feedback
GB201701052D0 (en) * 2017-01-20 2017-03-08 Oromi Jordi Fernandez An elecronic fluency device
CA3054362A1 (en) * 2017-02-22 2018-08-30 Snorex Llc Systems and methods for reducing snoring and/or sleep apnea
US10629200B2 (en) * 2017-03-07 2020-04-21 Salesboost, Llc Voice analysis training system
EP3776410A4 (en) 2018-04-06 2021-12-22 Korn Ferry SYSTEM AND PROCEDURE FOR INTERVIEW TRAINING WITH TIME-ADAPTED FEEDBACK
US11120795B2 (en) * 2018-08-24 2021-09-14 Dsp Group Ltd. Noise cancellation
TWI673691B (en) * 2018-11-02 2019-10-01 龎國臣 System for immersive programming language learning
US10817251B2 (en) 2018-11-29 2020-10-27 Bose Corporation Dynamic capability demonstration in wearable audio device
US10922044B2 (en) * 2018-11-29 2021-02-16 Bose Corporation Wearable audio device capability demonstration
CN109658776A (en) * 2018-12-17 2019-04-19 广东小天才科技有限公司 Recitation fluency detection method and electronic equipment
US10923098B2 (en) 2019-02-13 2021-02-16 Bose Corporation Binaural recording-based demonstration of wearable audio device functions
CN112116832A (en) * 2019-06-19 2020-12-22 广东小天才科技有限公司 Spoken language practice method and device
CN110876608A (en) * 2019-06-27 2020-03-13 上海慧敏医疗器械有限公司 Sound production rehabilitation instrument and method based on real-time fundamental frequency measurement and audio-visual feedback technology
CN110876609A (en) * 2019-07-01 2020-03-13 上海慧敏医疗器械有限公司 Voice treatment instrument and method for frequency band energy concentration rate measurement and audio-visual feedback
US11727949B2 (en) * 2019-08-12 2023-08-15 Massachusetts Institute Of Technology Methods and apparatus for reducing stuttering
US11188718B2 (en) * 2019-09-27 2021-11-30 International Business Machines Corporation Collective emotional engagement detection in group conversations
CN111554324A (en) * 2020-04-01 2020-08-18 深圳壹账通智能科技有限公司 Intelligent language fluency identification method and device, electronic equipment and storage medium
EP3967223A1 (en) * 2020-09-09 2022-03-16 Beats Medical Limited A system and method for providing tailored therapy to a user
WO2022159983A1 (en) * 2021-01-25 2022-07-28 The Regents Of The University Of California Systems and methods for mobile speech therapy
US11594149B1 (en) * 2022-04-07 2023-02-28 Vivera Pharmaceuticals Inc. Speech fluency evaluation and feedback
ES2973663A1 (en) * 2022-10-21 2024-06-21 Frau Pedro Sabater METHOD AND SYSTEM FOR THE RECOGNIZATION OF ATYPICAL DISFLUENCES IN THE STUTTERED SPEECH OF A USER (Machine-translation by Google Translate, not legally binding)
GB2632286A (en) * 2023-07-31 2025-02-05 Sony Interactive Entertainment Europe Ltd Method of audio error detection


Family Cites Families (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4685448A (en) 1983-10-11 1987-08-11 University Of Pittsburgh Vocal tactile feedback method and associated apparatus
IL108908A (en) 1994-03-09 1996-10-31 Speech Therapy Systems Ltd Speech therapy system
US5794203A (en) 1994-03-22 1998-08-11 Kehoe; Thomas David Biofeedback system for speech disorders
US6231500B1 (en) 1994-03-22 2001-05-15 Thomas David Kehoe Electronic anti-stuttering device providing auditory feedback and disfluency-detecting biofeedback
US6109923A (en) * 1995-05-24 2000-08-29 Syracuase Language Systems Method and apparatus for teaching prosodic features of speech
US5647834A (en) * 1995-06-30 1997-07-15 Ron; Samuel Speech-based biofeedback method and system
US5733129A (en) * 1997-01-28 1998-03-31 Fayerman; Izrail Stuttering treatment technique
US6353809B2 (en) 1997-06-06 2002-03-05 Olympus Optical, Ltd. Speech recognition with text generation from portions of voice data preselected by manual-input commands
US7203649B1 (en) * 1998-04-15 2007-04-10 Unisys Corporation Aphasia therapy system
US6296489B1 (en) * 1999-06-23 2001-10-02 Heuristix System for sound file recording, analysis, and archiving via the internet for language training and other applications
US6963841B2 (en) * 2000-04-21 2005-11-08 Lessac Technology, Inc. Speech training method with alternative proper pronunciation database
US7085370B1 (en) 2000-06-30 2006-08-01 Telefonaktiebolaget Lm Ericsson (Publ) Ringback detection circuit
US7031922B1 (en) 2000-11-20 2006-04-18 East Carolina University Methods and devices for enhancing fluency in persons who stutter employing visual speech gestures
CN1293533C (en) * 2004-06-09 2007-01-03 四川微迪数字技术有限公司 Speech signal processing method for correcting stammer
US7258660B1 (en) * 2004-09-17 2007-08-21 Sarfati Roy J Speech therapy method
KR20060066416A (en) * 2004-12-13 2006-06-16 한국전자통신연구원 Device for laryngeal remote diagnosis service using voice codec and method thereof
US20060183964A1 (en) 2005-02-17 2006-08-17 Kehoe Thomas D Device for self-monitoring of vocal intensity
US9271074B2 (en) * 2005-09-02 2016-02-23 Lsvt Global, Inc. System and method for measuring sound
US20070168187A1 (en) * 2006-01-13 2007-07-19 Samuel Fletcher Real time voice analysis and method for providing speech therapy
WO2008130658A1 (en) * 2007-04-20 2008-10-30 Master Key, Llc System and method for speech therapy
US20090138270A1 (en) * 2007-11-26 2009-05-28 Samuel G. Fletcher Providing speech therapy by quantifying pronunciation accuracy of speech signals
GB2458461A (en) * 2008-03-17 2009-09-23 Kai Yu Spoken language learning system
ITGE20090037A1 (en) * 2009-06-08 2010-12-09 Linear Srl METHOD AND DEVICE TO MODIFY THE REPRODUCTION SPEED OF AUDIO-VIDEO SIGNALS
US8457967B2 (en) 2009-08-15 2013-06-04 Nuance Communications, Inc. Automatic evaluation of spoken fluency
US9532897B2 (en) * 2009-08-17 2017-01-03 Purdue Research Foundation Devices that train voice patterns and methods thereof
EP2512484A4 (en) * 2009-12-17 2013-07-24 Liora Emanuel Methods for the treatment of speech impediments
CN201741384U (en) * 2010-07-30 2011-02-09 四川微迪数字技术有限公司 Anti-stammering device for converting Chinese speech into mouth-shaped images
US8744856B1 (en) * 2011-02-22 2014-06-03 Carnegie Speech Company Computer implemented system and method and computer program product for evaluating pronunciation of phonemes in a language
US20140038160A1 (en) * 2011-04-07 2014-02-06 Mordechai Shani Providing computer aided speech and language therapy
WO2012161657A1 (en) * 2011-05-20 2012-11-29 Nanyang Technological University Systems, apparatuses, devices, and processes for synergistic neuro-physiological rehabilitation and/or functional development
WO2013108255A1 (en) * 2012-01-18 2013-07-25 Steinberg-Shapira Shirley Method and device for stuttering alleviation
US8682678B2 (en) * 2012-03-14 2014-03-25 International Business Machines Corporation Automatic realtime speech impairment correction
US20150154980A1 (en) * 2012-06-15 2015-06-04 Jemardator Ab Cepstral separation difference
US20160117940A1 (en) 2012-09-12 2016-04-28 Lingraphicare America Incorporated Method, system, and apparatus for treating a communication disorder
WO2014115115A2 (en) * 2013-01-24 2014-07-31 B. G. Negev Technologies And Applications Ltd. Determining apnea-hypopnia index ahi from speech
WO2014188408A1 (en) * 2013-05-20 2014-11-27 Beyond Verbal Communication Ltd Method and system for determining a pre-multisystem failure condition using time integrated voice analysis
US9911358B2 (en) * 2013-05-20 2018-03-06 Georgia Tech Research Corporation Wireless real-time tongue tracking for speech impairment diagnosis, speech therapy with audiovisual biofeedback, and silent speech interfaces
US9691296B2 (en) * 2013-06-03 2017-06-27 Massachusetts Institute Of Technology Methods and apparatus for conversation coach
WO2015019345A1 (en) * 2013-08-06 2015-02-12 Beyond Verbal Communication Ltd Emotional survey according to voice categorization
EP3063751A4 (en) * 2013-10-31 2017-08-02 Haruta, Pau-San Computing technologies for diagnosis and therapy of language-related disorders

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110125844A1 (en) * 2009-05-18 2011-05-26 Telcordia Technologies, Inc. mobile enabled social networking application to support closed, moderated group interactions for purpose of facilitating therapeutic care
US20120116772A1 (en) * 2010-11-10 2012-05-10 AventuSoft, LLC Method and System for Providing Speech Therapy Outside of Clinic

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11017693B2 (en) 2017-01-10 2021-05-25 International Business Machines Corporation System for enhancing speech performance via pattern detection and learning
US11322172B2 (en) 2017-06-01 2022-05-03 Microsoft Technology Licensing, Llc Computer-generated feedback of user speech traits meeting subjective criteria
US11224782B2 (en) * 2017-06-04 2022-01-18 Apple Inc. Physical activity monitoring and motivating with an electronic device
DE102019217109A1 (en) * 2019-11-06 2021-05-06 Volkswagen Aktiengesellschaft System and method for supporting vehicle occupants in a vehicle during speech therapy
WO2021127348A1 (en) * 2019-12-18 2021-06-24 Darroh Steven Voice training therapy app system and method
US20220015691A1 (en) * 2019-12-18 2022-01-20 Steven Darroh Voice training therapy app system and method

Also Published As

Publication number Publication date
US10188341B2 (en) 2019-01-29
US20160189565A1 (en) 2016-06-30
AU2015374409A1 (en) 2017-07-06
EP3241206A1 (en) 2017-11-08
EP3241206A4 (en) 2018-08-08
CN107111961A (en) 2017-08-29
US20190150826A1 (en) 2019-05-23
US20160183867A1 (en) 2016-06-30
WO2016109334A1 (en) 2016-07-07
CN107112029A (en) 2017-08-29
US11517254B2 (en) 2022-12-06
EP3241215A4 (en) 2018-08-08
AU2015374230A1 (en) 2017-07-06
WO2016109491A1 (en) 2016-07-07
EP3241215A1 (en) 2017-11-08
US20160183868A1 (en) 2016-06-30

Similar Documents

Publication Publication Date Title
US20160189566A1 (en) System and method for enhancing remote speech fluency therapy via a social media platform
MacPherson et al. An art gallery access programme for people with dementia:‘You do it for the moment’
JP2023540856A (en) Video streaming via multiplex communication and display via smart mirror
van Leer et al. Use of portable digital media players increases patient motivation and practice in voice therapy
Hwang et al. TalkBetter: family-driven mobile intervention care for children with language delay
Ybarra et al. Design considerations in developing a text messaging program aimed at smoking cessation
T VALENTINE Stuttering intervention in three service delivery models (direct, hybrid, and telepractice): Two case studies
CN104874087B (en) A kind of hand terminal system for the training of autism children's rehabilitation
Kuhlen et al. Anticipating distracted addressees: How speakers' expectations and addressees' feedback influence storytelling
Luerssen et al. Virtual agents as a service: Applications in healthcare
US9802125B1 (en) On demand guided virtual companion
US20200152304A1 (en) Systems And Methods For Intelligent Voice-Based Journaling And Therapies
US20230099519A1 (en) Systems and methods for managing stress experienced by users during events
Oudshoorn et al. Psychological eHealth interventions for people with intellectual disabilities: A scoping review
Odhammar et al. Children in psychodynamic psychotherapy: changes in global functioning
US20220148452A1 (en) User interface system
Ferguson et al. The efficacy of using telehealth to coach parents of children with autism spectrum disorder on how to use naturalistic teaching to increase mands, tacts and intraverbals
Tye-Murray et al. Hearing health care digital therapeutics: patient satisfaction evidence
Shuper Engelhard Dance movement psychotherapy for couples (DMP-C): systematic treatment guidelines based on a wide-ranging study
Araiba et al. Preliminary practice recommendations for telehealth direct applied behavior analysis services with children with autism
Craig et al. Social support behaviours and barriers in group online exercise classes for adults living with and beyond cancer: A qualitative study
Tran et al. Adaptation of a problem-solving program (Friendship Bench) to treat common mental disorders among people living with HIV and AIDS and on methadone maintenance treatment in Vietnam: formative study
Lalios ConnectHear TeleIntervention Program.
Silverman McGuire et al. Simulated laughter, perceived stress, and discourse in adults with aphasia
KR20210106271A (en) Apparatus and method for providing congitive reinforcement training game based on mediaservice

Legal Events

Date Code Title Description
AS Assignment

Owner name: NOVOTALK, LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ROT, MOSHE;ROTHSCHILD, LILACH;LERNER, SMADAR;AND OTHERS;REEL/FRAME:037368/0698

Effective date: 20151224

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION
