US20030064716A1 - Multi-modal callback - Google Patents
Multi-modal callback
- Publication number
- US20030064716A1 (application Ser. No. 10/263,501)
- Authority
- US
- United States
- Prior art keywords
- wireless terminal
- modal
- response
- callback
- text
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/16—Communication-related supplementary services, e.g. call-transfer or call-hold
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/75—Media network packet handling
- H04L65/762—Media network packet handling at the source
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/42195—Arrangements for calling back a calling subscriber
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/50—Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers ; Centralised arrangements for recording messages
- H04M3/53—Centralised arrangements for recording incoming messages, i.e. mailbox systems
- H04M3/5322—Centralised arrangements for recording incoming messages, i.e. mailbox systems for recording text messages
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2201/00—Electronic components, circuits, software, systems or apparatus used in telephone systems
- H04M2201/60—Medium conversion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2203/00—Aspects of automatic or semi-automatic exchanges
- H04M2203/45—Aspects of automatic or semi-automatic exchanges related to voicemail messaging
- H04M2203/4509—Unified messaging with single point of access to voicemail and other mail or messaging systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/50—Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers ; Centralised arrangements for recording messages
- H04M3/53—Centralised arrangements for recording incoming messages, i.e. mailbox systems
- H04M3/5307—Centralised arrangements for recording incoming messages, i.e. mailbox systems for recording messages comprising any combination of audio and non-audio components
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/029—Location-based management or tracking services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/12—Messaging; Mailboxes; Announcements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/18—Information format or content conversion, e.g. adaptation by the network of the transmitted or received information for the purpose of wireless delivery to users or terminals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W88/00—Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
- H04W88/18—Service support devices; Network management devices
- H04W88/184—Messaging devices, e.g. message centre
Definitions
- the present invention relates generally to wireless communication systems and more particularly, to a multi-modal callback system that is capable of generating text-based messages that may be verbally read back to users of wireless terminals.
- Wireless communication devices have recently evolved from a technology used by an elite segment of the population to one used by the masses. Worldwide, the number of wireless communication device users has reached a staggering figure and continues to grow. In the near future, it is envisioned that almost everyone will own or use some sort of wireless communication device capable of performing a variety of functions. In addition to traditional wireless communication devices, many different types of portable electronic devices are in use today; in particular, notebook computers, palm-top computers, and personal digital assistants (PDAs) are commonplace.
- Wireless terminal users may receive services through their respective wireless terminals by calling an automated or operator-assisted service. These services may respond to the caller by allowing the user to navigate through a menu of items that are presented by the automated operator.
- users can now receive messages in a wide variety of formats. However, some of these formats are more easily and effectively comprehended in the form of human speech rather than text.
- a preferred embodiment of the present invention discloses a method for audibly reproducing messages and text-based messages for a remote terminal in a wireless communication system.
- a text-based message is generated on a wireless terminal that includes a callback request indicator. Selection of the callback request indicator causes the wireless terminal to transmit a callback request to a multi-modal callback server.
- the multi-modal callback server then converts the text-based message into a voice-based message and transmits the voice-based message to the wireless terminal.
- the text-based message may merely indicate that a second message may be read to the user. Selection of the callback request indicator will cause the multi-modal callback server to connect with the wireless terminal and audibly reproduce a second message to the user.
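The indicator-driven read-back described above can be sketched roughly as follows. All names, the message store, and the sample messages are hypothetical illustrations, not taken from the specification:

```python
from dataclasses import dataclass

@dataclass
class TextMessage:
    """A text-based message as delivered to the wireless terminal."""
    message_id: str
    body: str
    has_callback_indicator: bool  # selectable "read this to me" item

# Hypothetical server-side store mapping message ids to the "second
# message" that the callback server reads aloud on request.
PENDING_AUDIO_MESSAGES = {
    "msg-42": "Your flight now departs at 6:45 PM from gate B12.",
}

def handle_callback_request(msg: TextMessage) -> str:
    """Return the text the multi-modal callback server would convert to
    voice and read back when the callback indicator is selected."""
    if not msg.has_callback_indicator:
        raise ValueError("message carries no callback request indicator")
    # Read back either a stored second message or the message body itself.
    return PENDING_AUDIO_MESSAGES.get(msg.message_id, msg.body)
```

Either path ends with the server calling the terminal back and playing synthesized speech.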
- Another preferred embodiment of the present invention discloses a method of providing multi-modal callback in a wireless communication system.
- the preferred method discloses generating a request for information using a wireless terminal and transmitting the request for information to a multi-modal callback server.
- a multi-modal response is generated to the request for information with a response generation application and is transmitted to the wireless terminal.
- Selecting a callback request item contained in an interaction menu of the multi-modal response causes the wireless terminal to transmit a callback request to the multi-modal callback server.
- a callback response is generated by the multi-modal callback server that is based on the callback request. The callback response is then transmitted to the wireless terminal.
- the callback response is an automated voice-based response.
- the callback response is preferentially transformed into a voice-based response with a text-to-voice application.
- the wireless terminal is disconnected from the multi-modal callback server to conserve resources and reduce costs to the user.
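The request/response/callback sequence of this embodiment can be summarized as an ordered trace. This is a purely illustrative sketch with no real networking:

```python
def multimodal_callback_session(request: str) -> list[tuple[str, str]]:
    """Return the ordered (actor, event) steps of the preferred callback
    flow described above."""
    return [
        ("terminal", f"transmit request for information: {request}"),
        ("server", "generate multi-modal response with callback menu item"),
        ("terminal", "select callback item; transmit callback request"),
        ("server", "generate callback response; transform text to voice"),
        ("server", "deliver voice response, then disconnect to save cost"),
    ]
```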
- the preferred request for information is a voice-based request for information.
- a plurality of words contained in the voice-based request for information are identified using a voice recognition application.
- An intent associated with the identified words is determined using a natural language processing application.
- the multi-modal response is generated based on the identity of the words and their respective intent.
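A toy sketch of this recognize-then-interpret pipeline, with keyword rules standing in for real speech recognition and natural language processing (all logic here is an illustrative assumption):

```python
def recognize_words(utterance: str) -> list[str]:
    """Stand-in for the voice recognition application: assume the audio
    has already been transcribed and just tokenize it."""
    return utterance.lower().replace("?", "").split()

def determine_intent(words: list[str]) -> str:
    """Stand-in for the natural language processing application:
    keyword rules in place of real language understanding."""
    if "directions" in words:
        return "navigation"
    if "order" in words:
        return "purchase"
    return "information"

def generate_multimodal_response(utterance: str) -> dict:
    """Tie the two stages together, as the server would before handing
    the identified words and intent to response generation."""
    words = recognize_words(utterance)
    return {"words": words, "intent": determine_intent(words)}
```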
- a geographic location of the wireless terminal can be determined using a geographic location application.
- the multi-modal response can also be based at least in part on the geographic location of the wireless terminal.
- the preferred multi-modal callback system includes a wireless terminal that is connected to an access network.
- the wireless terminal is operable to generate a request for information.
- a multi-modal callback server is connected to the access network, thereby connecting the multi-modal callback server to the wireless terminal.
- a response generation application located on the multi-modal callback server is operable to generate a multi-modal response to the request for information that is sent to the wireless terminal.
- the multi-modal response preferentially includes a text-based response that includes a means for having a predetermined portion or all of the text-based response read aloud to a user of the wireless terminal.
- the means for having the text-based response read aloud to the user of the wireless terminal preferentially includes an interaction menu selection item that is generated on a display of the wireless terminal, a predetermined keypad key of the wireless terminal, a voice-based command generated by a user of the wireless terminal or selection of an item generated on the display with a pointing device of the wireless terminal. Selection of one of these means for having the text-based response read aloud to the user of the wireless terminal will cause a call to be made to the multi-modal callback server, which will in turn call the wireless terminal back and read aloud the text-based response.
- the text-based response is read aloud by processing the text-based response with a text-to-voice application located on the multi-modal callback server, which allows the text of the text-based response to be read to the user.
- the preferred embodiment may also include a geographic location application that is used to determine a geographic location of the wireless terminal.
- the multi-modal response is preferentially also generated as a function of the geographic location of the wireless terminal.
- the multi-modal responses that are generated by the response generation application of the multi-modal callback server can be geographically tailored to provide responses that are related to the geographic location of the wireless terminal. For example, if a user wants directions to a particular establishment, the response generation application will use the geographic location of the wireless terminal as a starting point so that more accurate directions can be provided.
- the request for information is a voice-based request for information.
- the multi-modal callback system includes a voice recognition application that is operable to identify a plurality of words contained in the voice-based request for information.
- a natural language processing application that is operable to determine an intent associated with the words can also be used to generate the multi-modal response. This allows the multi-modal callback system to provide more relevant answers to consumer requests by targeting the response generation application to specific areas of information contained in a data content database.
- Another preferred embodiment of the present invention discloses a method of generating multi-modal messages for a user of a wireless terminal connected to an access network.
- a multi-modal response is generated in response to a request for information received from the wireless terminal.
- the multi-modal response is then transmitted to the wireless terminal.
- a text-based response is included in the preferred multi-modal response that includes a means for allowing a predetermined portion of the text-based response to be read aloud to the user of the wireless terminal.
- the means for having the text-based response read aloud to the user of the wireless terminal preferentially includes an interaction menu selection item generated on a display of the wireless terminal, a designated keypad key of the wireless terminal, a voice-based command generated by a user of the wireless terminal or a link that may be selected by a pointing device of the wireless terminal.
- selection of these means for having the text-based response read aloud to the user of the wireless terminal causes the multi-modal callback server to establish a connection with the wireless terminal and then the text is read aloud to the user.
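The four trigger mechanisms enumerated above all converge on a single callback action, which might be dispatched as follows (the event names are invented for illustration):

```python
# The four means of requesting read-aloud named in this embodiment.
CALLBACK_TRIGGERS = {"menu_item", "keypad_key", "voice_command", "pointer_link"}

def on_user_input(event_type: str) -> str:
    """Return the action the terminal takes for a given input event:
    any of the four trigger types yields the same callback request."""
    return "send_callback_request" if event_type in CALLBACK_TRIGGERS else "ignore"
```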
- FIG. 1 illustrates a preferred embodiment of a multi-modal messaging system for a wireless communication system.
- FIG. 2 illustrates the general process steps performed by a preferred embodiment of the multi-modal messaging system during an illustrative operation.
- FIG. 3 illustrates a preferred embodiment of a multi-modal callback system for use in a wireless communication system.
- FIG. 4 illustrates the general process steps performed by a preferred embodiment of the multi-modal callback system during an illustrative operation.
- the present invention discloses a multi-modal messaging system 10 for a wireless communication system 12 .
- the wireless communication system 12 includes at least one wireless terminal 14 that is connected to at least one wireless access network 16 .
- the wireless access network 16 generally includes a base station transceiver that is connected to a base station server.
- the base station server is connected to a network connection that may be a publicly switched telephone network or a private network.
- the wireless access network 16 is connected to at least one switch 18 , thereby connecting the wireless terminal 14 to a multi-modal message server 20 .
- the wireless access network 16 could also be connected to a router 19 in an IP-based wireless access network as the function of transferring data between the wireless terminal 14 and the multi-modal message server 20 is provided by both types of devices.
- the multi-modal messaging system 10 discloses a method of communicating with a wireless terminal 14 using multiple modes of communication including, but not limited to, human speech and text-based messages during a single transaction or call.
- wireless terminals 14 that are connected to the wireless access network 16 preferentially communicate with the multi-modal message server 20 via the wireless access network 16 to which the wireless terminal 14 is connected.
- the multi-modal messaging system 10 also includes an automated speech recognition application with which the user of the wireless terminal 14 interacts to request and receive information from various databases containing information from a plurality of businesses.
- the wireless terminal 14 is capable of transmitting and receiving messages that may come in several formats.
- the preferred formats include human speech, which is produced using a speaker and a microphone, and text and graphic formats that are generated on a display of the wireless terminal 14 .
- the wireless terminal 14 preferentially transmits a tailored request for information to the multi-modal message server 20 in either human speech or text based message formats.
- Speech-based tailored requests for information are transmitted by means of a wireless telephone call as known in the art.
- Text-based tailored requests for information are transmitted in the form of a text message that is transmitted using a wireless communication protocol, including but not limited to a short message service (“SMS”), any wireless application protocol (“WAP”), or any email protocol.
- a user of the wireless terminal 14 establishes a connection with the multi-modal message server 20 by dialing a phone number that is associated with a participating company that operates the multi-modal message server 20 .
- the act of dialing a predefined phone number associated with the multi-modal message server 20 causes the wireless access network 16 to connect the call to the multi-modal message server 20 .
- the user of the wireless terminal 14 is capable of establishing a connection with the multi-modal message server 20 from an interactive menu that is generated on the wireless terminal 14 through a wireless application protocol or by predefined user or factory settings.
- Selecting a link or prompt to a respective multi-modal message server 20 contained in the interaction menu thereby establishes the connection between the remote terminal 14 and the multi-modal message server 20 .
- the user may enter an address or universal resource locator (“URL”) of the multi-modal message server 20 to establish the connection between the wireless terminal 14 and the multi-modal message server 20 .
- the operator of the multi-modal message server 20 may or may not be the actual company from which data is sought by the user of the wireless terminal 14 .
- the company operating the multi-modal message server 20 may be a third-party that is licensed or granted permission to provide certain types of data to consumers having remote terminals 14 that are associated with the company operating the multi-modal messaging system 10 .
- the provider of the wireless communication system 12 may have a contract with the operator of the multi-modal message server 20 and in turn, another company from which the user is seeking information may also have a contract with the operator of multi-modal message server 20 .
- the cooperation of all parties in these embodiments enables the multi-modal messaging system 10 to function properly despite the varying types of contractual arrangements made between respective parties.
- the multi-modal message server 20 may house the data files that contain the information requested by the user or the multi-modal message server 20 may be connected to several different company file servers that contain the desired information that is responsive to the requests for information that are generated by the wireless terminals 14 .
- In response to the requests for information that are generated by the wireless terminal 14, the multi-modal message server 20 generates structured responses that contain data responsive to those requests. In transmitting the structured responses to the wireless terminal 14, the multi-modal messaging system 10 can select from a group of modes of communication including, but not limited to, text modes, graphic modes, animation modes, multi-media modes, pre-recorded and synthesized sounds including synthesized human speech modes, music modes, and noise modes. In particular, the preferred multi-modal messaging system 10 uses at least two of the above-referenced modes to transmit responses to the wireless terminals 14 during a single transaction or user interaction.
- the methods and protocols for transmitting information in the form of text from the multi-modal messaging system 10 to the wireless terminal 14 include, but are not limited to, SMSs, WAPs, and email protocols.
- the response is preferentially transmitted from the multi-modal message server 20 to the remote terminal 14 during a wireless telephone call that may be initiated by either the remote terminal 14 or the multi-modal message server 20 .
- the audible information contained in a response may be transmitted in an automated fashion using applications capable of synthesizing human speech and directing the synthesized human speech to a voice mail system associated with the intended recipient's wireless terminal 14 .
- the term voice mail system includes any system that is capable of receiving, storing and retrieving audible messages in an automated fashion either autonomously or on-demand via a telephone network. These include voice mail servers and both analog and digital answering machines.
- the present invention discloses the use of more than one mode of communication during the course of a single interaction between the wireless terminal 14 and the multi-modal message server 20 .
- a single interaction is defined as a set of messages required to meet the needs of a consumer or user of the wireless terminal 14 that is requesting a specific service, specific content, or specific information from the multi-modal message server 20 and the response or responses that are delivered by the multi-modal message server 20 in response to the requests for information from the wireless terminal 14 .
- the present invention discloses methods of using multiple modes of communication between a respective remote terminal 14 and a respective multi-modal message server 20 during a single interaction, thereby allowing the multi-modal message server 20 to respond to the demands of the user using both voice and text-based messages, for example.
- the wireless terminal 14 is operable to generate tailored requests for information about a particular product or service.
- the multi-modal message server 20 responds to the wireless terminal 14 by sending content responsive to the tailored requests for information via messages that are formatted as a text-based message and a voice-based message.
- the wireless terminal 14 may only be capable of conducting a wireless telephone call or the transmission or receipt of text messages, but not both operations at the same time.
- the multi-modal messaging system 10 is designed to provide the wireless terminal 14 with text-based messages that are responsive to the requests for information after the wireless telephone call has been disconnected and the user has already received the voice-based messages that are responsive to the requests for information.
- the voice call connection between the wireless terminal 14 and the multi-modal message server 20 and the text-based messages that are sent to the wireless terminal 14 from the multi-modal message server 20 may use dissimilar wireless communication protocols.
- the multi-modal messaging system 10 preferentially also includes a voice recognition application 22 .
- the voice recognition application 22 is preferentially located on the multi-modal message server 20 , but may also be located on a separate server that is connected with the multi-modal message server 20 .
- the voice recognition application 22 determines the identity of or recognizes respective words that are contained in voice-based requests for information that are generated by users of the wireless terminal 14 .
- the words that are identified by the voice recognition application 22 are used as inputs to a response generation application 28 in one preferred embodiment of the present invention.
- the response generation application 28 is capable of generating multi-modal responses that contain data responsive to the requests for information that are generated by the users of the wireless terminal 14 .
- the words that are identified may also be used as an input to a natural language processing application 26 that determines the intent of the words contained in the requests for information and not just the identity of the words.
- the multi-modal messaging system 10 includes a voice print application 24 that provides security to users of the wireless terminals 14 by analyzing voice prints of the user that are obtained by sampling segments of the user's speech. If the user is authenticated, access to the multi-modal messaging system 10 is provided; if the user is not authenticated, access is denied. Further, if the user desires to limit access to the multi-modal messaging system 10 to only themselves or select individuals, a preference setting may be set by the owner of the wireless terminal 14 that restricts access to pre-authorized users.
- the voice print application 24 can also be used to limit use of the wireless terminal 14 so that, if the wireless terminal 14 is stolen, it cannot be used by the thief.
- the voice print application 24 can also be used to determine if the user is an authorized user that can be provided with information related to a specific account by providing authorization and authentication.
- the voice print application 24 can be located on the multi-modal message server 20 or on a voice print application server that is connected to the multi-modal message server 20 .
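The gating role of the voice print application can be illustrated with a toy similarity check. Real speaker verification is far more sophisticated; the threshold and feature vectors below are invented for the sketch:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def authenticate(sample_print: list[float],
                 enrolled_prints: list[list[float]],
                 threshold: float = 0.9) -> bool:
    """Accept the caller if the sampled voice-print features are close
    enough to any enrolled print; otherwise deny access."""
    return any(cosine_similarity(sample_print, p) >= threshold
               for p in enrolled_prints)
```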
- the multi-modal messaging system 10 includes a natural language processing application 26 .
- the natural language processing application 26 works in conjunction with the voice recognition application 22 to ascertain the meaning of natural language requests for information that are received from the wireless terminals 14 .
- the natural language processing application 26 processes the identified words contained in the voice signals to ascertain the meaning or intent of the words that are contained in the voice signals.
- the voice recognition application 22 identifies or recognizes the particular words that are contained in the voice signals and the natural language processing application 26 interprets the meaning or intent of the recognized words contained in the voice signals.
- the natural language processing application 26 provides functionality to the multi-modal messaging system 10 that allows users to enter requests for information using natural language that is normally used in conversations between two human subjects.
- the natural language processing application 26 may be located on the multi-modal message server 20 , but, in an effort to increase the level of performance, could also be located on a separate server or a separate set of servers connected with the multi-modal message server 20 .
- For further details regarding the natural language processing application, please refer to U.S. application Ser. No. 10/131,898, entitled Natural Language Processing for a Location-Based Services System, filed on Apr. 25, 2002, which is hereby incorporated by reference in its entirety.
- the natural language processing application 26 is connected to a response generation application 28 that uses a plurality of programmed rules in combination with the command or word contained in the request to determine what information should be retrieved and returned to the wireless terminal 14 .
- the response generation application 28 uses the words identified by the voice recognition application 22 and the intent or meaning of the words determined by the natural language processing application 26 to generate a search query that retrieves the appropriate information from a content database 34 . In other preferred embodiments, only the words identified from the voice recognition application 22 are used by the response generation application 28 to generate a response to the tailored requests for information.
- a location information application 30 is used to determine a geographic location of the wireless terminal 14 .
- the location information application 30 may be located on the multi-modal message server 20 or on another server that is connected to the multi-modal message server 20 .
- the geographic location of the user can be used to focus or narrow responses that are generated by the response generation application 28 to a specific geographic area that is appropriate to the user of the wireless terminal 14 .
- Certain types of requests for information generated by users of the wireless terminals 14 will be dependent on the current geographic location of the wireless terminal 14 and the location information application 30 is used to provide the response generation application 28 with location data that is needed to generate a geographically tailored response to requests for information that are dependent on the geographic location of the wireless terminal 14 .
- the response generation application 28 may also be connected to a virtual customer database 32 that may use application and customer proprietary information to determine user preferences for modes of communication.
- the virtual customer database 32 may include customer data that includes information about the wireless terminal 14 that the user is using such as limitations for the amount or type of data content that the wireless terminal 14 can receive or the type of display used by the wireless terminal 14 so that responses can be structured in a format that is compatible with the display.
- the user may choose not to receive certain types of large files, such as multimedia files and so forth, and these settings may be found in the virtual customer database 32 in the profile of the user.
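- The profile-based restrictions described above might be applied as in the following sketch; the profile fields and content-part structure are illustrative assumptions, not part of the disclosed virtual customer database 32:

```python
def filter_response_parts(parts, profile):
    """Drop response parts that the user's profile excludes, such as
    large multimedia files (illustrative sketch)."""
    blocked = set(profile.get("blocked_types", []))
    max_bytes = profile.get("max_content_bytes")
    kept = []
    for part in parts:
        if part["type"] in blocked:
            continue  # user has opted out of this content type
        if max_bytes is not None and part["size"] > max_bytes:
            continue  # exceeds the terminal's stated content limit
        kept.append(part)
    return kept

profile = {"blocked_types": ["multimedia"], "max_content_bytes": 4096}
parts = [
    {"type": "text", "size": 200},
    {"type": "multimedia", "size": 100_000},
]
kept = filter_response_parts(parts, profile)
```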
- the response generation application 28 is used to generate structured responses to the tailored requests for information that are generated by the wireless terminal 14 .
- a query is generated and sent to the content database 34 that is connected to the response generation application 28 .
- the query is used to retrieve data that is responsive to the request for information from the content database 34 .
- the content database 34 may be located locally on the multi-modal message server 20 or housed on other servers that are connected to the multi-modal message server 20 . For example, if the wireless terminal 14 is connected to a multi-modal message server 20 provided by an airline company, the details of a flight that a user is booked on may be retrieved from the content database 34 if so desired.
- the user of the wireless terminal 14 is a regular customer of the airline company and is registered with the airline company.
- the virtual customer database 32 will know this fact and will assist the response generation application 28 by providing detailed information to the response generation application 28 about that particular user.
- the virtual customer database 32 may contain a customer identification number and a virtual key that is associated with that particular user. This information can be added to the query that is generated by the response generation application 28 , which allows the response generation application to more accurately generate responses.
- the airline company multi-modal messaging system will be able to use this information to more accurately provide responses to the user that contain accurate data related to that particular user's account and status. Further, this information can be used for authorization and authentication purposes.
- the multi-modal messaging system 10 prepares this data for transmission to the wireless terminal 14 .
- a unified messaging application 36 preferentially combines the information retrieved into a unified response that can be sent to the wireless terminal 14 if the response generation application 28 does not format the response into the predefined message formats.
- the unified response that is generated contains a text-based response and a voice-based response that is created using the data that is provided by the response generation application 28 .
- the unified message application 36 prepares the multi-modal response by generating a response in at least two formats that are suitable for the wireless terminal 14 . As set forth above, these formats may include a text-based message, a graphics-based message, a voicemail message, and an email message.
- a transcoding application 38 may be used to format the unified message into a format that is suitable for the wireless terminal 14 using information already known about the wireless terminal 14 , which is preferentially retrieved from the virtual customer database 32 .
- the transcoding application 38 may convert the text-based response into an SMS or WAP format.
- the transcoding application 38 may use a voice synthesis application to convert the speech-based response into a format suitable for the wireless terminal 14 .
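- The transcoding step can be sketched as a simple dispatch on the target format. Real SMS and WAP encoding is considerably more involved; the snippet below is an illustrative assumption only:

```python
def transcode_text(text, device_format):
    """Convert a text-based response into a format suitable for the
    wireless terminal 14 (illustrative sketch of the transcoding
    application 38)."""
    if device_format == "SMS":
        # GSM short messages carry at most 160 characters.
        return text[:160]
    if device_format == "WAP":
        # Wrap the text in a minimal WML deck for a WAP browser.
        return "<wml><card><p>%s</p></card></wml>" % text
    return text

sms = transcode_text("Directions to Bud and Joe's...", "SMS")
wml = transcode_text("Hi", "WAP")
```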
- the response is then sent to the wireless access network 16 , which thereby transmits the multi-modal response to the wireless terminal 14 .
- Users of the wireless terminals 14 can define how they want the multi-modal messaging system 10 to send responses to them, or the multi-modal messaging system 10 may contain information, preferably stored in the virtual customer database 32 , about each user of the multi-modal messaging system 10 and their respective remote terminals 14 . This allows the multi-modal messaging system 10 to generate and transmit responses that are in the preferred format of the user. The multi-modal messaging system 10 allows users to determine what types of services and modes of communication will be used to transmit responses to the wireless terminal 14 .
- a call is placed on the wireless access network 16 from the wireless terminal 14 to the multi-modal message server 20 .
- a connection may be established between the wireless terminal 14 and the multi-modal message server 20 through the selection of a menu item or the entry of an address on the wireless terminal 14 .
- the wireless terminal 14 also preferentially passes information to the multi-modal message server 20 about the wireless terminal 14 using SS7, ISDN, or other in-band or out-of-band messaging protocols.
- a calling number identification (“CNI”) is preferentially passed as well as a serial number for the wireless terminal 14 . This information can be used to determine the identity of the user to which the wireless terminal 14 belongs.
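- The identification step described above can be sketched as a lookup that checks the calling number identification against the device serial number; the subscriber-database structure is an illustrative assumption:

```python
def identify_caller(cni, serial, subscriber_db):
    """Determine the user to whom the wireless terminal 14 belongs from
    the calling number identification (CNI) and the terminal's serial
    number (illustrative sketch)."""
    record = subscriber_db.get(cni)
    if record is None:
        return None
    # Checking the serial number guards against a mismatched CNI.
    if record["serial"] != serial:
        return None
    return record["user"]

db = {"+15551234567": {"serial": "SN-001", "user": "alice"}}
user = identify_caller("+15551234567", "SN-001", db)
```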
- the multi-modal message server 20 uses an interface to detect the call and ‘answers’ the call from the wireless terminal 14 using text-to-speech messages or recorded speech prompts.
- the prompts can ask the user to speak the request for information using some set of predefined commands or may ask the user to utter the request for information using natural language, which will later be processed by the voice recognition application 22 and the natural language application 26 .
- the text-to-speech messages or recorded speech prompts are transmitted across the wireless access network 16 to the wireless terminal 14 .
- the user speaks the request for information into the wireless terminal 14 and the wireless terminal 14 and wireless access network 16 transmit the voice signal representing the request for information to the multi-modal message server 20 .
- the user speaks one of a set of pre-defined command phrases or words, which is then interpreted by the voice recognition application 22 and used to generate a response.
- the user's speech is converted to text using the voice recognition application 22 , which is then used as an input to a search query that interprets the user's command.
- a response is generated by the response generation application 28 that is sent to the user.
- the multi-modal messaging system 10 incorporates a voice printing application 24 in conjunction with the database of proprietary customer information 34 to determine if the caller using the wireless terminal 14 is the owner of (or assigned to) the wireless terminal 14 . If the caller is not the owner of the wireless terminal 14 , (which may occur if someone borrows the wireless terminal 14 from the owner) the multi-modal messaging system 10 proceeds with the call but does not personalize any of the services based on proprietary customer information associated with the assigned user. Therefore, at any point in the process where the multi-modal messaging system 10 would use customer proprietary information, the multi-modal messaging system 10 could use additional prompts to request this information from the caller. The multi-modal messaging system 10 could also restrict access to the multi-modal messaging system 10 and the wireless terminal 14 altogether if the assigned user has preset a user preference indicating the restriction of access to unauthorized users.
- the multi-modal messaging system 10 can handle requests for information that are entered using natural speech.
- the multi-modal messaging system 10 passes the text identified from the voice recognition application 22 to a natural language processing application 26 that is used to determine the intent or meaning of the words contained in the request.
- the interpreted intent is processed by the multi-modal messaging system 10 in the same way the pre-defined commands are processed. This is made possible because the natural language processing application 26 is programmed to generate search queries based on the words identified in the request and the intent of the words contained in the request.
- the response generation application 28 uses programmed rules in combination with the commands to determine what information should be retrieved and returned to the wireless terminal 14 . These rules are stored in executable code or in a content database 34 . In one preferred embodiment of the present invention, if the multi-modal messaging system 10 determines that location information about the wireless terminal 14 is necessary to generate an appropriate response to the request for information, the multi-modal messaging system 10 uses the location information application 30 to determine the geographic location of the wireless terminal 14 .
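- The rule-driven decision of whether location data is needed, and its inclusion in the search query, might be sketched as follows (the rule set and field names are illustrative assumptions):

```python
def needs_location(intent):
    """Decide whether a request's intent requires the geographic
    location of the wireless terminal 14 (rule set is illustrative)."""
    location_dependent = {"directions", "nearby_search", "local_weather"}
    return intent in location_dependent

def build_response_query(intent, words, location=None):
    """Assemble the search query, attaching location data only for
    location-dependent requests (illustrative sketch)."""
    query = {"intent": intent, "terms": words}
    if needs_location(intent):
        # Supplied by the location information application 30.
        query["location"] = location
    return query

q = build_response_query("directions", ["bud", "and", "joe's"],
                         location=(41.88, -87.63))
```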
- the wireless access network 16 can use several location determining applications that are designed to sufficiently determine the geographic location of the wireless terminal 14 to the accuracy necessary to successfully generate a response that is responsive to the request for information.
- the location information that is generated by the location information application 30 is used as part of the search query that is used to locate the desired information.
- Upon determining the data to be returned to the wireless terminal 14 and retrieving this data from a content database 34 , the response generation application 28 of the multi-modal messaging system 10 prepares the content to be sent to the wireless terminal 14 .
- the multi-modal messaging system 10 may use an application and customer proprietary information to determine the customer's preferences for modes of communication. Additionally, this customer data may include information about the wireless terminal 14 assigned to the user such as limitations for the amount or type of data content the device can receive. These methods for storing and accessing the customer proprietary data include those disclosed in a co-pending application entitled Virtual Customer Database, which was filed on the same day as the present application and assigned application Ser. No.: ______, which is hereby incorporated by reference in its entirety.
- the multi-modal messaging system 10 formats the content contained in the response for the wireless terminal 14 using available information about the wireless terminal 14 and individual preferences of the users.
- a unified messaging application 36 preferentially formats the content into multiple messages, if necessary, to respond to the wireless terminal 14 in the most informative way that is compatible with the wireless terminal 14 to which the user is assigned or has purchased.
- the multi-modal messaging system 10 preferentially uses a transcoding application 38 to format the content contained in the response into a suitable format for the user's wireless terminal 14 and is capable of generating responses using formats such as WML, HTML, and plain text.
- the multi-modal messaging system 10 then transmits the content to the wireless access network 16 operated by the carrier and indicates the recipient and the method for transferring the message(s) to the recipient or user.
- the messages are sent as a text message to the wireless terminal 14 using any of (but not limited to) the following protocols: SMS, CDPD, or Mobitex.
- the wireless terminal 14 receives the message(s) and the user is allowed to interact with the content contained in the response from the multi-modal messaging system 10 .
- the multi-modal messaging system 10 is used in combination with a location-based services system where the content of the messages between the system and the wireless terminal 14 contain information that is based on the current geographic location of the wireless terminal 14 .
- the location-based services system may be of the type by which the indicator of the location of the wireless terminal 14 is generated by the wireless terminal 14 and transmitted to the multi-modal messaging system 10 , determined by the multi-modal messaging system 10 , or by some combination thereof.
- For more information about location-based service systems, refer to U.S. application Ser. No.: 09/946,111, which was filed on Sep. 4, 2002, entitled Location-Based Services, and is hereby incorporated by reference in its entirety.
- Referring to FIG. 2, an illustrative example of a preferred embodiment of the present invention is set forth below.
- a user of wireless terminal 14 is planning a trip and would like to check with his or her airline to determine their flight itinerary.
- the user of wireless terminal 14 connects to the multi-modal messaging system 10 of the airline through the wireless access network 16 .
- the multi-modal messaging server 20 transmits a command prompt to the user requesting information from the user of the wireless terminal 14 .
- the user states a voice request for information, which in this example is illustrated as “Flight itinerary please”, which is transmitted to the multi-modal messaging server 20 at step 46 .
- the multi-modal messaging system 10 takes this voice request for information and uses automated speech recognition, which in the preferred embodiment includes processing the voice request for information with a voice recognition application 22 and a natural language processing application 26 , to generate a plurality of responses to the request for information.
- a voice-based response is generated that states “It will be sent to your phone” and a text-based response is generated that provides the user with the appropriate itinerary information that is tailored for that particular user.
- the multi-modal message server 20 transmits the multi-modal response to the user, which in FIG. 2 is represented as a voice-based response and a text-based response.
- the preferred embodiment uses customer information that is received from the virtual customer database 32 to determine that the user of the wireless terminal 14 has a profile with the airline.
- the profile is capable of providing the user's customer ID and possibly a virtual key that is associated with that customer that authorizes the wireless terminal 14 to receive data from the airline's database. This information allows the multi-modal messaging system 10 to authenticate and identify the user of the wireless terminal 14 in order to generate an appropriate response from the airline's data files.
- Referring to FIG. 3, wherein like reference numbers refer to the same elements set forth in the previous embodiments, another preferred embodiment of the present invention discloses a multi-modal callback system 100 for a wireless terminal 14 that is connected to at least one wireless access network 16 .
- the wireless communication system 12 is connected to at least one switch 18 and/or a router 19 , which is in turn, connected to a multi-modal callback server 102 .
- the multi-modal callback server 102 may be the same server as the multi-modal message server 20 set forth in the previous embodiments or may be another server.
- the multi-modal callback server 102 preferentially includes many of the same applications as the multi-modal message server 20 .
- the multi-modal callback system 100 provides a method for initiating a telephone call between the wireless terminal 14 and the multi-modal callback server 102 for transmitting a predefined speech-based message to the user of the wireless terminal 14 .
- the call is preferentially initiated in an automated fashion by the wireless terminal 14 after the wireless access network 16 receives a message that is transmitted from the wireless terminal 14 to the multi-modal callback server 102 requesting a callback.
- the wireless terminal 14 receives a voice-based message that reads a text-based message to the user of the wireless terminal 14 .
- the user of the wireless terminal 14 preferentially generates a request for information that is transmitted to the multi-modal callback server 102 .
- the preferred request for information is in the form of a voice-based request for information that is generated using normal speech.
- the voice request for information can be transmitted in the form of a short message that is sent from the wireless terminal 14 to the multi-modal callback server 102 .
- the wireless terminal 14 does not establish a permanent connection with the multi-modal callback server 102 when the request for information is sent to the multi-modal callback server 102 .
- the wireless terminal 14 can also transmit the request for information to the multi-modal callback server 102 in the form of a text message.
- the preferred wireless terminal 14 is illustrated as a wireless phone, but those skilled in the art should recognize that other wireless communication devices (e.g., PDAs, laptops, and various other types of personal communication devices) could be used as a wireless terminal 14 .
- a multi-modal response to the request for information is preferentially generated by the multi-modal callback server 102 and sent to the wireless terminal 14 .
- the multi-modal response preferentially includes at least a text-based response and a speech-based response.
- Other types of responses may also be included in the multi-modal response including an email response, an instant message response, and a fax response.
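- A multi-modal response of the kind described, containing at least a text-based and a speech-based part plus optional additional modes, might be assembled as in this illustrative sketch (the structure and names are assumptions, not part of the disclosure):

```python
def make_multimodal_response(text, extra_modes=()):
    """Assemble a multi-modal response with at least a text-based and a
    speech-based part; the speech part here is simply the text destined
    for a text-to-speech engine (illustrative sketch)."""
    response = {
        "text": text,
        "speech": {"tts_input": text},
    }
    # Optional additional modes, e.g. "email", "instant_message", "fax".
    for mode in extra_modes:
        response[mode] = text
    return response

r = make_multimodal_response("Turn left on Main St.", extra_modes=("email",))
```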
- a voice recognition application 22 is used to identify a plurality of words contained in the request for information if the request is in the form of a voice-based request for information.
- a voice print application 24 can be used to verify that the user has access rights to the multi-modal callback system 100 .
- a natural language processing application 26 can be used to determine an intent associated with the words contained in the voice-based request for information. The identity of the words and the intent of the words are then used to generate an input to a response generation application 28 .
- the response generation application 28 uses the input to generate a response to the request for information that is sent by the user of the wireless terminal 14 .
- the response generation application 28 preferentially accesses a data content database 34 to retrieve information that is responsive to the request for information.
- the data content database 34 may be located on the multi-modal callback server 102 or on a data server that is connected to the multi-modal callback server 102 .
- a location information application 30 may also be included that is used to determine the geographic location of the wireless terminal 14 .
- the geographic location of the wireless terminal 14 is used for requests for information that are dependent upon the geographic location of the user.
- a virtual customer database 32 may also be included that contains a plurality of user profiles. The user profiles can be used to grant access to the data content database 34 and to authorize the user of the wireless terminal 14 .
- For more information about the virtual customer database 32 , reference is made to a co-pending application filed by the same inventors and assigned U.S. application Ser. No.: ______ and entitled Virtual Customer Database, which is hereby incorporated by reference in its entirety.
- the preferred multi-modal callback system 100 also includes a voice synthesis application 104 .
- the voice synthesis application 104 is a text-to-speech application that is used to convert text-based responses into a synthesized human voice.
- when the response generation application 28 generates a text-based response to the request for information, the user of the wireless terminal 14 is capable of having the text contained therein read back to them over the wireless terminal 14 , as set forth in greater detail below. It is worth noting that the present invention could also be used to audibly play back any kind of text-based message that is sent to the wireless terminal 14 , such as short messages or instant messages.
- the response generation application 28 is used to generate a multi-modal response to the user's request for information.
- the multi-modal response includes a text-based response that is displayed on the display of the wireless terminal 14 .
- users of the wireless terminal 14 may not be able to read the text-based response or may just want to have the text-based response stated in a voice-based response.
- the preferred embodiment of the present invention allows users of the wireless terminal 14 to convert the text-based response into an audible response if desired.
- the preferred multi-modal response includes an interaction menu that is generated on a display of the wireless terminal 14 and allows the user to obtain additional information related to the categories of information contained in the text-based response of the multi-modal response.
- the text-based response may also include graphic information that is representative of a response, such as a trademark or service mark of a respective company.
- the interaction menu is preferentially setup so that a keypad of the wireless terminal 14 can be used to allow the user to select items from the interaction menu.
- a pointing device such as a mouse or touch-pad, may also be used to allow the user to select an item from the interaction menu.
- the user of the wireless terminal 14 can also use voice-based commands to select items contained in the interaction menu.
- the connection between the wireless terminal 14 and the multi-modal callback server 102 is preferentially terminated. This may be done for several reasons that relate to cost and efficiency of the multi-modal callback system 100 , amongst other reasons. For example, the connection may be terminated so that the multi-modal callback server 102 can focus on other requests from other users, thereby processing requests faster. In addition, there is typically a charge associated with the use of air or access time from the wireless communication system 12 and, as such, the user will likely want to minimize use in order to keep charges down. In IP-based wireless access networks, the wireless terminal 14 is always connected to the wireless access network. In these types of networks, it is sufficient to note that the connection between the two devices is no longer current or active and must be re-established.
- a menu selection request is sent to the multi-modal server 102 using a wireless communication protocol, such as SMS.
- a predefined callback number or address is embedded into each item on the interaction menu so that the wireless terminal 14 knows where to locate and obtain the information that is associated with each item listed in the interaction menu.
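- The embedding of a predefined callback number or address in each interaction menu item can be sketched as follows; the menu structure and the address scheme are illustrative assumptions:

```python
def build_interaction_menu(items, callback_base):
    """Attach a predefined callback address to each interaction menu
    item so the wireless terminal 14 knows where to obtain the
    information associated with that item (illustrative sketch)."""
    menu = []
    for key, label in enumerate(items, start=1):
        menu.append({
            "key": str(key),  # keypad digit that selects the item
            "label": label,
            "callback": "%s/item/%d" % (callback_base, key),
        })
    return menu

menu = build_interaction_menu(["Read directions aloud"], "sms://5550100")
```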
- the wireless terminal 14 establishes a connection to the multi-modal server 102 that is indicated by the predefined callback number or address.
- the multi-modal server 102 may simply receive a short message from the wireless terminal 14 that causes the multi-modal server 102 to establish a connection with the wireless terminal 14 .
- the multi-modal callback system 100 preferentially uses a voice synthesis application 104 to generate a voice-based message that is sent to the wireless terminal 14 .
- the voice-based message is based on a previous interaction between the wireless terminal 14 and the multi-modal callback system 100 .
- This previous interaction includes a set of transmissions including but not limited to a text message transmitted from the multi-modal messaging system 10 to the wireless terminal 14 containing instructions to the user of the wireless terminal 14 regarding: 1) the procedure for replying to the text message, 2) the use of a special code, and 3) the resulting telephone call and/or voice communication that will be initiated by the multi-modal callback system 100 .
- a user of wireless terminal 14 generates a voice-based request for information.
- the voice-based request for information is sent to the multi-modal callback server 102 , which is illustrated at step 112 .
- the user asks the multi-modal callback server 102 for “directions to Bud and Joe's”.
- the request for information is received by the multi-modal server 102 which in turn, uses automated speech processing applications to generate a response to the request for information from the user.
- a voice recognition application 22 determines the identity of the words contained in the voice-based request for information.
- a natural language processing application 26 may be used to determine an intent or meaning behind the words identified by the voice recognition application 22 . It is important to note that the multi-modal callback server 102 is also capable of handling text-based requests for information that are generated by the wireless terminal 14 .
- the response that is generated by the multi-modal callback system 100 may include a voice-based response and a text-based response.
- the response generation application 28 is used to generate a search query that searches the data content database 34 in order to retrieve the required information needed to generate a response, which is illustrated at step 118 .
- the voice recognition application 22 and the natural language processing application 26 are simply bypassed and the user's text-based request for information is used by the response generation application 28 to generate the multi-modal response.
- the voice-based response might be as follows: “Will be sent to your phone. You can have them read back to you by replying to the message.”
- the text-based response might be: “To have these directions read back just respond to this message by pressing 1 on your keypad. Directions to Bud and Joe's . . . Turn left on Main St. . . . ”
- the responses are both transmitted to the wireless terminal 14 , which is represented at step 120 .
- the call or connection between the multi-modal callback server 102 and the wireless terminal 14 is terminated so that the user is no longer charged for access.
- the user enters a callback request by selecting “1” on the keypad of the wireless terminal 14 in the present example.
- the callback request is then transmitted to the multi-modal callback server 102 , which is illustrated at step 122 .
- the callback request indicator may either be in an interactive menu or in the text-based response.
- Based on the callback request, at step 124 the multi-modal callback server 102 generates a voice-based response that is based on the text-based response that was previously sent to the wireless terminal 14 as part of the multi-modal response, which is illustrated at step 126 .
- the multi-modal callback server 102 establishes a connection with the wireless terminal 14 .
- the voice-based response is transmitted to the wireless terminal 14 , which is illustrated at step 128 .
- a voice synthesis application 104 is used to generate the voice-based response.
- the voice synthesis application 104 is preferentially capable of converting text to speech and may contain predefined voice files that may be used as responses.
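- The overall callback flow, in which the server retains the text-based response and synthesizes it to speech when the callback request arrives, can be sketched as follows. The class and method names are illustrative assumptions, and synthesize() is a stand-in for a real text-to-speech engine such as the voice synthesis application 104:

```python
class CallbackServer:
    """Minimal sketch of the multi-modal callback flow: store the
    text-based response per terminal, then synthesize it to speech
    when a callback request arrives (illustrative only)."""

    def __init__(self):
        self.last_text = {}

    def send_multimodal(self, terminal_id, text):
        # Retain the text so a later callback can read it back.
        self.last_text[terminal_id] = text
        return {"text": text, "voice_notice": "Will be sent to your phone."}

    def handle_callback(self, terminal_id):
        # On callback, convert the stored text into a voice-based response.
        text = self.last_text.get(terminal_id, "")
        return self.synthesize(text)

    @staticmethod
    def synthesize(text):
        # Stand-in for a real text-to-speech engine.
        return "SPEECH[%s]" % text

srv = CallbackServer()
srv.send_multimodal("term-14", "Turn left on Main St.")
spoken = srv.handle_callback("term-14")
```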
- the multi-modal callback server 102 may also generate a second text-based response that is also sent to the wireless terminal 14 .
- the second text-based response may be sent instead of the voice-based response or with the voice-based response.
- the multi-modal callback server 102 may have already sent a text-based message to the user of the wireless terminal 14 .
- the text-based message could be pushed to the wireless terminal 14 or pulled by the wireless terminal 14 depending upon the particular circumstances.
- a text-based message that might be pushed to the wireless terminal 14 could be “Pizza special at Joe's Pizza, which is near your location. Press 1 for directions.”
- An interaction item is contained in the text-based message that allows the user to select an option that is presented. In this example, the user is allowed to press 1 on the keypad for directions.
- the multi-modal callback server 102 will connect with the wireless terminal 14 and audibly reproduce directions to Joe's Pizza over the wireless terminal 14 using the voice synthesis application 104 .
- the text-based message that is presented to the user is read back to the user.
- a different message is read to the user in response to a selection of an item in the interaction menu.
- the message that is read to the user does not necessarily have to be the same as the text-based message that is presented to the user of the wireless terminal 14 .
Description
- This application claims the benefit under 35 U.S.C. §119 of U.S. Provisional Application Serial No. 60/326,902 which was filed on Oct. 3, 2001 and entitled Multi-Modal Callback.
- The present invention relates generally to wireless communication systems and more particularly, to a multi-modal callback system that is capable of generating text-based messages that may be verbally read back to users of wireless terminals.
- Wireless communication devices have recently evolved from a technology used by an elite segment of the population to a technology that is used by the masses. Worldwide, the number of wireless communication device users has reached a staggering number and is growing all of the time. In the near future, it is envisioned that almost everyone will own or use some sort of wireless communication device that is capable of performing a variety of functions. In addition to traditional wireless communication devices, many different types of portable electronic devices are in use today. In particular, notebook computers, palm-top computers, and personal digital assistants (PDA) are commonplace.
- Users of wireless telephones and other wireless devices have recently been able to place a phone call to an automated system to request information by speaking to a basic automated speech recognition system. The basic automated speech recognition system typically responds to the caller using text-to-speech and/or recorded speech prompts. This method of information delivery is cumbersome and challenging for the caller as well as very time consuming, thereby causing callers unnecessary frustration. In some cases, the system returns too much information and the caller must listen to the entire response in order to get the information they want. In other systems the caller must verbally navigate through a deep hierarchy of prompts to get to the specific piece of information they seek.
- Wireless terminal users may receive services through their respective wireless terminals by calling an automated or operator-assisted service. These services may respond to the caller by allowing the user to navigate through a menu of items that are presented by the automated operator. With the advent of multi-modal messaging, users can now receive messages in a variety of formats. However, some of these formats can more easily and effectively be comprehended in the form of human speech rather than text.
- As such, a need exists for a method of enabling the caller to reply via text to the text message sent by the system in order to initiate a phone call to the wireless terminal during which the information in the message, or additional information, is read aloud to the caller.
- A preferred embodiment of the present invention discloses a method for audibly reproducing messages and text-based messages for a remote terminal in a wireless communication system. In the preferred embodiment, a text-based message is generated on a wireless terminal that includes a callback request indicator. Selection of the callback request indicator causes the wireless terminal to transmit a callback request to a multi-modal callback server.
- In one embodiment of the present invention, the multi-modal callback server then converts the text-based message into a voice-based message and transmits the voice-based message to the wireless terminal. In another preferred embodiment of the present invention, the text-based message may merely indicate that a second message may be read to the user. Selection of the callback request indicator will cause the multi-modal callback server to connect with the wireless terminal and audibly reproduce a second message to the user.
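The two callback variants described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the class name, `synthesize_speech` stand-in, and message store are all assumptions.

```python
# Illustrative sketch of a multi-modal callback server handling the two
# variants above: read back the original text-based message, or read back
# a separately stored second message. All names are hypothetical.

def synthesize_speech(text):
    """Stand-in for a text-to-speech engine; returns a fake audio tag."""
    return f"<audio:{text}>"

class MultiModalCallbackServer:
    def __init__(self):
        self.pending = {}  # message_id -> (text, optional second message)

    def store_message(self, message_id, text, second_message=None):
        self.pending[message_id] = (text, second_message)

    def handle_callback_request(self, message_id):
        """On a callback request, call the terminal back and read aloud."""
        text, second = self.pending[message_id]
        # If a second message was stored, read that; otherwise read the
        # original text-based message.
        return synthesize_speech(second if second is not None else text)
```

In the first variant the stored text itself is synthesized; in the second, selecting the indicator triggers synthesis of the separate second message.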
- Another preferred embodiment of the present invention discloses a method of providing multi-modal callback in a wireless communication system. The preferred method discloses generating a request for information using a wireless terminal and transmitting the request for information to a multi-modal callback server. A multi-modal response to the request for information is generated with a response generation application and is transmitted to the wireless terminal. When the user selects a callback request item contained in an interaction menu of the multi-modal response, the wireless terminal transmits a callback request to the multi-modal callback server. A callback response that is based on the callback request is generated by the multi-modal callback server. The callback response is then transmitted to the wireless terminal.
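The sequence of steps above can be reduced to a minimal request/response flow. This sketch is an assumption for illustration; the function names, menu labels, and message shapes are invented, not taken from the patent.

```python
# Minimal sketch of the end-to-end callback method: the server answers a
# request with a multi-modal response containing an interaction menu, the
# terminal's menu selection yields a callback request, and the server
# produces the callback response. All names are illustrative.

def make_multi_modal_response(request):
    """Server side: answer a request with text plus an interaction menu."""
    return {"text": f"Answer to: {request}",
            "menu": ["read aloud", "dismiss"]}

def select_menu_item(response, item):
    """Terminal side: selecting the callback item emits a callback request."""
    if item == "read aloud":
        return {"callback_for": response["text"]}
    return None  # other selections do not trigger a callback

def make_callback_response(callback_request):
    """Server side: generate the voice-based callback response."""
    return f"<speech:{callback_request['callback_for']}>"
```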
- In the preferred embodiment of the present invention, the callback response is an automated voice-based response. The callback response is preferentially transformed into a voice-based response with a text-to-voice application. After the multi-modal response is generated and sent to the wireless terminal, the wireless terminal is disconnected from the multi-modal callback server to conserve resources and reduce costs to the user.
- The preferred request for information is a voice-based request for information. A plurality of words contained in the voice-based request for information are identified using a voice recognition application. An intent associated with the identified words is determined using a natural language processing application. The multi-modal response is generated based on the identity of the words and their respective intent. A geographic location of the wireless terminal can be determined using a geographic location application. The multi-modal response can also be based at least in part on the geographic location of the wireless terminal.
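The pipeline above (recognized words → intent determination → response generation, optionally tailored by terminal location) can be sketched as below. The keyword rules and function names are invented for illustration; a real system would use trained speech recognition and natural language processing models.

```python
# Hedged sketch of the request pipeline: words identified by a voice
# recognition step feed an intent determination step, and the intent
# (plus an optional geographic location) drives the multi-modal response.

INTENT_KEYWORDS = {
    "directions": "get_directions",
    "hours": "get_business_hours",
}

def determine_intent(words):
    """Toy stand-in for a natural language processing application."""
    for word in words:
        if word in INTENT_KEYWORDS:
            return INTENT_KEYWORDS[word]
    return "unknown"

def generate_multi_modal_response(words, location=None):
    """Build a response from the words, their intent, and the location."""
    intent = determine_intent(words)
    response = {"text": f"intent={intent}", "intent": intent}
    if location is not None:
        response["origin"] = location  # geographically tailored response
    return response
```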
- Another preferred embodiment of the present invention discloses a multi-modal callback system. The preferred multi-modal callback system includes a wireless terminal that is connected to an access network. The wireless terminal is operable to generate a request for information. A multi-modal callback server is connected to the access network, thereby connecting the multi-modal callback server to the wireless terminal. A response generation application located on the multi-modal callback server is operable to generate a multi-modal response to the request for information that is sent to the wireless terminal. The multi-modal response preferentially includes a text-based response that includes a means for having a predetermined portion or all of the text-based response read aloud to a user of the wireless terminal.
- The means for having the text-based response read aloud to the user of the wireless terminal preferentially includes an interaction menu selection item that is generated on a display of the wireless terminal, a predetermined keypad key of the wireless terminal, a voice-based command generated by a user of the wireless terminal or selection of an item generated on the display with a pointing device of the wireless terminal. Selection of one of these means for having the text-based response read aloud to the user of the wireless terminal will cause a call to be made to the multi-modal callback server, which will in turn call the wireless terminal back and read aloud the text-based response. The text-based response is read aloud by processing the text-based response with a text-to-voice application located on the multi-modal callback server, which allows the text of the text-based response to be read to the user.
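The several input means above (menu item, designated keypad key, voice command, pointing device) all resolve to the same callback trigger. The sketch below illustrates that normalization; the specific key and command values are assumptions, not part of the patent.

```python
# Sketch: four different input means on the wireless terminal all map to
# the single action of placing a call to the multi-modal callback server.
# The trigger values here are invented for illustration.

TRIGGERS = {
    ("menu", "read aloud"): True,       # interaction menu selection item
    ("keypad", "5"): True,              # a predetermined keypad key
    ("voice", "read it to me"): True,   # a voice-based command
    ("pointer", "read-aloud-link"): True,  # selection with a pointing device
}

def is_callback_trigger(means, value):
    """Return True if this input should initiate the callback call."""
    return TRIGGERS.get((means, value), False)
```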
- The preferred embodiment may also include a geographic location application that is used to determine a geographic location of the wireless terminal. The multi-modal response is preferentially also generated as a function of the geographic location of the wireless terminal. As such, the multi-modal responses that are generated by the response generation application of the multi-modal callback server can be geographically tailored to provide responses that are related to the geographic location of the wireless terminal. For example, if a user wants directions to a particular establishment, the response generation application will use the geographic location of the wireless terminal as a starting point so that more accurate directions can be provided.
- In the preferred multi-modal callback system, the request for information is a voice-based request for information. The multi-modal callback system includes a voice recognition application that is operable to identify a plurality of words contained in the voice-based request for information. A natural language processing application that is operable to determine an intent associated with the words can also be used to generate the multi-modal response. This allows the multi-modal callback system to provide more relevant answers to consumer requests by targeting the response generation application to specific areas of information contained in a data content database.
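The targeting described above can be sketched as a query keyed on the determined intent plus the recognized words. The rule and the toy content store below are assumptions for illustration only.

```python
# Illustrative sketch: the response generation step combines the recognized
# words and the determined intent into a lookup against a content store,
# narrowing the search to one area of information. Toy data throughout.

CONTENT_DB = {
    ("flight_status", "AA100"): "Flight AA100 is on time.",
}

def generate_response(identified_words, intent):
    """Programmed rule: for flight-status intent, a word is the flight key."""
    if intent == "flight_status":
        for word in identified_words:
            key = (intent, word)
            if key in CONTENT_DB:
                return CONTENT_DB[key]
    return "No matching content found."
```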
- Another preferred embodiment of the present invention discloses a method of generating multi-modal messages for a user of a wireless terminal connected to an access network. In this preferred embodiment, a multi-modal response is generated in response to a request for information received from the wireless terminal. The multi-modal response is then transmitted to the wireless terminal. A text-based response is included in the preferred multi-modal response that includes a means for allowing a predetermined portion of the text-based response to be read aloud to the user of the wireless terminal.
- The means for having the text-based response read aloud to the user of the wireless terminal preferentially includes an interaction menu selection item generated on a display of the wireless terminal, a designated keypad key of the wireless terminal, a voice-based command generated by a user of the wireless terminal or a link that may be selected by a pointing device of the wireless terminal. As set forth above in the other preferred embodiments, selection of these means for having the text-based response read aloud to the user of the wireless terminal causes the multi-modal callback server to establish a connection with the wireless terminal and then the text is read aloud to the user.
- Further objects and advantages of the present invention will be apparent from the following description, reference being made to the accompanying drawings wherein preferred embodiments of the invention are clearly illustrated.
- FIG. 1 illustrates a preferred embodiment of a multi-modal messaging system for a wireless communication system.
- FIG. 2 illustrates the general process steps performed by a preferred embodiment of the multi-modal messaging system during an illustrative operation.
- FIG. 3 illustrates a preferred embodiment of a multi-modal callback system for use in a wireless communication system.
- FIG. 4 illustrates the general process steps performed by a preferred embodiment of the multi-modal callback system during an illustrative operation.
- Referring to FIG. 1, the present invention discloses a
multi-modal messaging system 10 for a wireless communication system 12. The wireless communication system 12 includes at least one wireless terminal 14 that is connected to at least one wireless access network 16. Although not illustrated, the wireless access network 16 generally includes a base station transceiver that is connected to a base station server. The base station server is connected to a network connection that may be a publicly switched telephone network or a private network. In the embodiment illustrated in FIG. 1, the wireless access network 16 is connected to at least one switch 18, thereby connecting the wireless terminal 14 to a multi-modal message server 20. However, as further illustrated in FIG. 1, the wireless access network 16 could also be connected to a router 19 in an IP-based wireless access network, as the function of transferring data between the wireless terminal 14 and the multi-modal message server 20 is provided by both types of devices. - The
multi-modal messaging system 10 discloses a method of communicating with a wireless terminal 14 using multiple modes of communication including, but not limited to, human speech and text-based messages during a single transaction or call. As set forth in detail below, wireless terminals 14 that are connected to the wireless access network 16 preferentially communicate with the multi-modal message server 20 via the wireless access network 16 to which the wireless terminal 14 is connected. Preferentially, the multi-modal messaging system 10 also includes an automated speech recognition application with which the user of the wireless terminal 14 interacts to request and receive information from various databases containing information from a plurality of businesses. - Referring to FIG. 1, during operation the
wireless terminal 14 is capable of transmitting and receiving messages that may come in several formats. The preferred formats include human speech, which is produced using a speaker and a microphone, and text and graphic formats that are generated on a display of the wireless terminal 14. In the preferred embodiment of the present invention, the wireless terminal 14 preferentially transmits a tailored request for information to the multi-modal message server 20 in either human speech or text-based message formats. Speech-based tailored requests for information are transmitted by means of a wireless telephone call as known in the art. Text-based tailored requests for information are transmitted in the form of a text message that is transmitted using a wireless communication protocol, including but not limited to a short message service (“SMS”), any wireless application protocol (“WAP”), or any email protocol. - In one preferred embodiment of the present invention, a user of the
wireless terminal 14 establishes a connection with the multi-modal message server 20 by dialing a phone number that is associated with a participating company that operates the multi-modal message server 20. The act of dialing a predefined phone number associated with the multi-modal message server 20 causes the wireless access network 16 to connect the call to the multi-modal message server 20. In yet another preferred embodiment, the user of the wireless terminal 14 is capable of establishing a connection with the multi-modal message server 20 from an interactive menu that is generated on the wireless terminal 14 through a wireless application protocol or by predefined user or factory settings. Selecting a link or prompt to a respective multi-modal message server 20 contained in the interaction menu thereby establishes the connection between the remote terminal 14 and the multi-modal message server 20. In yet another preferred embodiment, the user may enter an address or universal resource locator (“URL”) of the multi-modal message server 20 to establish the connection between the wireless terminal 14 and the multi-modal message server 20. - Although not specifically illustrated, the operator of the
multi-modal message server 20 may or may not be the actual company from which data is sought by the user of the wireless terminal 14. The company operating the multi-modal message server 20 may be a third party that is licensed or granted permission to provide certain types of data to consumers having remote terminals 14 that are associated with the company operating the multi-modal messaging system 10. For example, the provider of the wireless communication system 12 may have a contract with the operator of the multi-modal message server 20 and, in turn, another company from which the user is seeking information may also have a contract with the operator of the multi-modal message server 20. The cooperation of all parties in these embodiments enables the multi-modal messaging system 10 to function properly despite the varying types of contractual arrangements made between respective parties. Further, the multi-modal message server 20 may house the data files that contain the information requested by the user, or the multi-modal message server 20 may be connected to several different company file servers that contain the desired information that is responsive to the requests for information that are generated by the wireless terminals 14. - In response to the requests for information that are generated by the
wireless terminal 14, the multi-modal message server 20 generates structured responses that contain data that is responsive to the requests for information. In transmitting the structured responses to the wireless terminal 14, the multi-modal messaging system 10 can select from a group of modes of communication including, but not limited to, text modes, graphic modes, animation modes, multi-media modes, pre-recorded and synthesized sounds including synthesized human speech modes, music modes, and noise modes. In particular, the preferred multi-modal messaging system 10 uses at least two of the above-referenced modes to transmit responses to the wireless terminals 14 during a single transaction or user interaction. - As set forth above, the methods and protocols for transmitting information in the form of text from the
multi-modal messaging system 10 to the wireless terminal 14 include, but are not limited to, SMS, WAP, and email protocols. In the case of audible information, the response is preferentially transmitted from the multi-modal message server 20 to the remote terminal 14 during a wireless telephone call that may be initiated by either the remote terminal 14 or the multi-modal message server 20. In yet another preferred embodiment of the present invention, the audible information contained in a response may be transmitted in an automated fashion using applications capable of synthesizing human speech and directing the synthesized human speech to a voice mail system associated with the intended recipient's wireless terminal 14. As used herein, the term voice mail system includes any system that is capable of receiving, storing and retrieving audible messages in an automated fashion, either autonomously or on-demand, via a telephone network. These include voice mail servers and both analog and digital answering machines. - As set forth above, the present invention discloses the use of more than one mode of communication during the course of a single interaction between the
wireless terminal 14 and the multi-modal message server 20. A single interaction is defined as the set of messages required to meet the needs of a consumer or user of the wireless terminal 14 who is requesting a specific service, specific content, or specific information from the multi-modal message server 20, together with the response or responses that are delivered by the multi-modal message server 20 in response to the requests for information from the wireless terminal 14. The present invention discloses methods of using multiple modes of communication between a respective remote terminal 14 and a respective multi-modal message server 20 during a single interaction, thereby allowing the multi-modal message server 20 to respond to the demands of the user using both voice and text-based messages, for example. - As set forth above, during operation the
wireless terminal 14 is operable to generate tailored requests for information about a particular product or service. In the preferred embodiment, the multi-modal message server 20 responds to the wireless terminal 14 by sending content responsive to the tailored requests for information via messages that are formatted as a text-based message and a voice-based message. In other embodiments, the wireless terminal 14 may only be capable of conducting a wireless telephone call or the transmission or receipt of text messages, but not both operations at the same time. As such, in these embodiments of the present invention the multi-modal messaging system 10 is designed to provide the wireless terminal 14 with text-based messages that are responsive to the requests for information after the wireless telephone call has been disconnected and the user has already received the voice-based messages that are responsive to the requests for information. In addition, the voice call connection between the wireless terminal 14 and the multi-modal message server 20 and the text-based messages that are sent to the wireless terminal 14 may be transmitted from the multi-modal message server 20 using a dissimilar wireless communication protocol. - The
multi-modal messaging system 10 preferentially also includes a voice recognition application 22. The voice recognition application 22 is preferentially located on the multi-modal message server 20, but may also be located on a separate server that is connected with the multi-modal message server 20. The voice recognition application 22 determines the identity of, or recognizes, respective words that are contained in voice-based requests for information that are generated by users of the wireless terminal 14. The words that are identified by the voice recognition application 22 are used as inputs to a response generation application 28 in one preferred embodiment of the present invention. As set forth in greater detail below, the response generation application 28 is capable of generating multi-modal responses that contain data responsive to the requests for information that are generated by the users of the wireless terminal 14. As further set forth in detail below, the words that are identified may also be used as an input to a natural language processing application 26 that determines the intent of the words contained in the requests for information and not just the identity of the words. - In another preferred embodiment of the present invention, the
multi-modal messaging system 10 includes a voice print application 24 that provides security to users of the wireless terminals 14 by analyzing voice prints of the user that are obtained by sampling segments of the user's speech. If the user is authenticated, access to the multi-modal messaging system 10 is provided to the user; if the user is not authenticated, access is denied. Further, if the user desires to limit access to the multi-modal messaging system 10 to only themselves or select individuals, a preference setting may be set by the owner of the wireless terminal 14 that restricts access to only pre-authorized users. The voice print application 24 can also be used to limit use of the wireless terminal 14 so that, if the remote terminal 14 is stolen, it cannot be used by the person who steals the wireless terminal 14. The voice print application 24 can also be used to determine whether the user is an authorized user who can be provided with information related to a specific account, by providing authorization and authentication. The voice print application 24 can be located on the multi-modal message server 20 or on a voice print application server that is connected to the multi-modal message server 20. - As briefly set forth above, in yet another preferred embodiment of the present invention the
multi-modal messaging system 10 includes a natural language processing application 26. The natural language processing application 26 works in conjunction with the voice recognition application 22 to ascertain the meaning of natural language requests for information that are received from the wireless terminals 14. The natural language processing application 26 processes the identified words contained in the voice signals to ascertain the meaning or intent of the words that are contained in the voice signals. As such, during operation the voice recognition application 22 identifies or recognizes the particular words that are contained in the voice signals and the natural language processing application 26 interprets the meaning or intent of the recognized words contained in the voice signals. The natural language processing application 26 provides functionality to the multi-modal messaging system 10 that allows users to enter requests for information using natural language that is normally used in conversations between two human subjects. - The natural
language processing application 26 may be located on the multi-modal message server 20 but, in an effort to increase the level of performance, could also be located on a separate server or a separate set of servers connected with the multi-modal message server 20. For a more detailed discussion of the preferred natural language processing application, please refer to U.S. application Ser. No. 10/131,898, entitled Natural Language Processing for a Location-Based Services System, filed on Apr. 25, 2002, which is hereby incorporated by reference in its entirety. - As illustrated in FIG. 1, the natural
language processing application 26 is connected to a response generation application 28 that uses a plurality of programmed rules in combination with the command or word contained in the request to determine what information should be retrieved and returned to the wireless terminal 14. The response generation application 28 uses the words identified by the voice recognition application 22 and the intent or meaning of the words determined by the natural language processing application 26 to generate a search query that retrieves the appropriate information from a content database 34. In other preferred embodiments, only the words identified by the voice recognition application 22 are used by the response generation application 28 to generate a response to the tailored requests for information. - In another preferred embodiment of the
multi-modal messaging system 10, a location information application 30 is used to determine a geographic location of the wireless terminal 14. The location information application 30 may be located on the multi-modal message server 20 or on another server that is connected to the multi-modal message server 20. The geographic location of the user can be used to focus or narrow responses that are generated by the response generation application 28 to a specific geographic area that is appropriate to the user of the wireless terminal 14. Certain types of requests for information generated by users of the wireless terminals 14 will be dependent on the current geographic location of the wireless terminal 14, and the location information application 30 is used to provide the response generation application 28 with location data that is needed to generate a geographically tailored response to requests for information that are dependent on the geographic location of the wireless terminal 14. - The
response generation application 28 may also be connected to a virtual customer database 32 that may use application and customer proprietary information to determine user preferences for modes of communication. In addition, the virtual customer database 32 may include customer data that includes information about the wireless terminal 14 that the user is using, such as limitations on the amount or type of data content that the wireless terminal 14 can receive or the type of display used by the wireless terminal 14, so that responses can be structured in a format that is compatible with the display. In addition, the user may choose not to receive certain types of large files, such as multimedia files, and these settings may be found in the virtual customer database 32 in the profile of the user. - As set forth above, the
response generation application 28 is used to generate structured responses to the tailored requests for information that are generated by the wireless terminal 14. Once the customer preferences and identification have been determined using the virtual customer database 32, and possibly the geographic location of the wireless terminal 14 has been determined using the location information application 30, a query is generated and sent to the content database 34 that is connected to the response generation application 28. The query is used to retrieve data that is responsive to the request for information from the content database 34. The content database 34 may be located locally on the multi-modal message server 20 or housed on other servers that are connected to the multi-modal message server 20. For example, if the wireless terminal 14 is connected to a multi-modal message server 20 provided by an airline company, the details of a flight that a user is booked on may be retrieved from the content database 34 if so desired. - Expanding on the example set forth above, let's say that the user of the
wireless terminal 14 is a regular customer of the airline company and is registered with the airline company. The virtual customer database 32 will know this fact and will assist the response generation application 28 by providing detailed information to the response generation application 28 about that particular user. For example, the virtual customer database 32 may contain a customer identification number and a virtual key that is associated with that particular user. This information can be added to the query that is generated by the response generation application 28, which allows the response generation application 28 to more accurately generate responses. The airline company multi-modal messaging system will be able to use this information to more accurately provide responses to the user that contain accurate data related to that particular user's account and status. Further, this information can be used for authorization and authentication purposes. - Once the data for the response to the user's request has been located by the
response generation application 28, the multi-modal messaging system 10 prepares this data for transmission to the wireless terminal 14. A unified messaging application 36 preferentially combines the information retrieved into a unified response that can be sent to the wireless terminal 14 if the response generation application 28 does not format the response into the predefined message formats. In a preferred embodiment, the unified response that is generated contains a text-based response and a voice-based response that are created using the data that is provided by the response generation application 28. In essence, the unified messaging application 36 prepares the multi-modal response by generating a response in at least two formats that are suitable for the wireless terminal 14. As set forth above, these formats may include a text-based message, a graphics-based message, a voicemail message, and an email message. - After the unified message is created, a
transcoding application 38 may be used to format the unified message into a format that is suitable for the wireless terminal 14 using information already known about the wireless terminal 14, which is preferentially retrieved from the virtual customer database 32. For example, for a text-based message, the transcoding application 38 may convert the text-based response into an SMS or WAP format. For a voice-based message, the transcoding application 38 may use a voice synthesis application to convert the speech-based response into a format suitable for the wireless terminal 14. The response is then sent to the wireless access network 16, which thereby transmits the multi-modal response to the wireless terminal 14. - Users of the
wireless terminals 14 can define how they want the multi-modal messaging system 10 to send responses to them, or the multi-modal messaging system 10 may contain information, preferably stored in the virtual customer database 32, about each user of the multi-modal messaging system 10 and their respective remote terminals 14. This allows the multi-modal messaging system 10 to generate and transmit responses that are in the preferred format of the user. The multi-modal messaging system 10 allows users to determine what types of services and modes of communication will be used to transmit responses to the wireless terminal 14. - Referring to FIG. 1, in the preferred embodiment of the present invention a call is placed on the
wireless access network 16 from the wireless terminal 14 to the multi-modal message server 20. In other preferred embodiments, a connection may be established between the wireless terminal 14 and the multi-modal message server 20 through the selection of a menu item or the entry of an address on the wireless terminal 14. The wireless terminal 14 also preferentially passes information to the multi-modal message server 20 about the wireless terminal 14 using SS7, ISDN, or other in-band or out-of-band messaging protocols. A calling number identification (“CNI”) is preferentially passed, as well as a serial number for the wireless terminal 14. This information can be used to determine the identity of the user to whom the wireless terminal 14 belongs. - In one preferred embodiment, the
multi-modal message server 20 uses an interface to detect the call and ‘answers’ the call from the wireless terminal 14 using text-to-speech messages or recorded speech prompts. The prompts can ask the user to speak the request for information using some set of predefined commands or may ask the user to utter the request for information using natural language, which will later be processed by the voice recognition application 22 and the natural language processing application 26. The text-to-speech messages or recorded speech prompts are transmitted across the wireless access network 16 to the wireless terminal 14. - During operation, the user speaks the request for information into the
wireless terminal 14, and the wireless terminal 14 and wireless access network 16 transmit the voice signal representing the request for information to the multi-modal message server 20. Under one mode of operation, the user speaks one of a set of pre-defined command phrases or words, which is then interpreted and used by the voice recognition application 22 to generate a response. The user's speech is converted to text using the voice recognition application 22, which is then used as an input to a search query that interprets the user's command. As set forth below, based on the user's command, a response is generated by the response generation application 28 that is sent to the user. - In one embodiment of the present invention, the
multi-modal messaging system 10 incorporates a voice print application 24 in conjunction with the database of proprietary customer information 34 to determine whether the caller using the wireless terminal 14 is the owner of (or assigned to) the wireless terminal 14. If the caller is not the owner of the wireless terminal 14 (which may occur if someone borrows the wireless terminal 14 from the owner), the multi-modal messaging system 10 proceeds with the call but does not personalize any of the services based on proprietary customer information associated with the assigned user. Therefore, at any point in the process where the multi-modal messaging system 10 would use customer proprietary information, the multi-modal messaging system 10 could use additional prompts to request this information from the caller. The multi-modal messaging system 10 could also restrict access to the multi-modal messaging system 10 and the wireless terminal 14 altogether if the assigned user has preset a user preference indicating the restriction of access to unauthorized users. - In another preferred embodiment of the present invention, the
multi-modal messaging system 10 can handle requests for information that are entered using natural speech. In this embodiment, the multi-modal messaging system 10 passes the text identified by the voice recognition application 22 to a natural language processing application 26 that is used to determine the intent or meaning of the words contained in the request. The interpreted intent is processed by the multi-modal messaging system 10 in the same way the pre-defined commands are processed. This is made possible because the natural language processing application 26 is programmed to generate search queries based on the words identified in the request and the intent of the words contained in the request. - The
response generation application 28 uses programmed rules in combination with the commands to determine what information should be retrieved and returned to the wireless terminal 14. These rules are stored in executable code or in a content database 34. In one preferred embodiment of the present invention, if the multi-modal messaging system 10 determines that location information about the wireless terminal 14 is necessary to generate an appropriate response to the request for information, the multi-modal messaging system 10 uses the location information application 30 to determine the geographic location of the wireless terminal 14. The wireless access network 16 can use several location determining applications designed to determine the geographic location of the wireless terminal 14 with sufficient accuracy to generate a response to the request for information. The location information that is generated by the location information application 30 is used as part of the search query that locates the desired information. - Upon determining the data to be returned to the
wireless terminal 14 and retrieving this data from a content database 34, the response generation application 28 of the multi-modal messaging system 10 prepares the content to be sent to the wireless terminal 14. The multi-modal messaging system 10 may use an application and customer proprietary information to determine the customer's preferences for modes of communication. Additionally, this customer data may include information about the wireless terminal 14 assigned to the user, such as limitations on the amount or type of data content the device can receive. Methods for storing and accessing the customer proprietary data include those disclosed in a co-pending application entitled Virtual Customer Database, which was filed on the same day as the present application and assigned application Ser. No.: ______, and which is hereby incorporated by reference in its entirety. - The
multi-modal messaging system 10 formats the content contained in the response for the wireless terminal 14 using available information about the wireless terminal 14 and the individual preferences of the user. A unified messaging application 36 preferentially formats the content into multiple messages, if necessary, to respond to the wireless terminal 14 in the most informative way that is compatible with the wireless terminal 14 that the user has been assigned or has purchased. The multi-modal messaging system 10 preferentially uses a transcoding application 38 to format the content contained in the response into a format suitable for the user's wireless terminal 14 and is capable of generating responses using formats such as WML, HTML, and plain text. - The
multi-modal messaging system 10 then transmits the content to the wireless access network 16 operated by the carrier and indicates the recipient and the method for transferring the message(s) to the recipient or user. Preferably, the messages are sent as a text message to the wireless terminal 14 using any of (but not limited to) the following: SMS, CDPD, Mobitex. The wireless terminal 14 receives the message(s), and the user is allowed to interact with the content contained in the response from the multi-modal messaging system 10. - In yet another preferred embodiment of the present invention, the
multi-modal messaging system 10 is used in combination with a location-based services system where the content of the messages between the system and the wireless terminal 14 contains information that is based on the current geographic location of the wireless terminal 14. The location-based services system may be of the type in which the indicator of the location of the wireless terminal 14 is generated by the wireless terminal 14 and transmitted to the multi-modal messaging system 10, determined by the multi-modal messaging system 10, or some combination thereof. For a more detailed description of location-based services systems, refer to U.S. application Ser. No.: 09/946,111, which was filed on Sep. 4, 2002, entitled Location-Based Services, and is hereby incorporated by reference in its entirety. - Referring to FIG. 2, an illustrative example of a preferred embodiment of the present invention is set forth below. As an example, let's say that a user of
wireless terminal 14 is planning a trip and would like to check with his or her airline to determine the flight itinerary. At step 40, the user of wireless terminal 14 connects to the multi-modal messaging system 10 of the airline through the wireless access network 16. At step 42, the multi-modal messaging server 20 transmits a command prompt requesting information from the user of the wireless terminal 14. In response, at step 44 the user states a voice request for information, which in this example is illustrated as “Flight itinerary please” and is transmitted to the multi-modal messaging server 20 at step 46. - At
step 48, the multi-modal messaging system 10 takes this voice request for information and uses automated speech recognition, which in the preferred embodiment includes processing the voice request for information with a voice recognition application 22 and a natural language processing application 26, to generate a plurality of responses to the request for information. As an example, in the preferred embodiment illustrated in FIG. 2, a voice-based response is generated that states “It will be sent to your phone” and a text-based response is generated that provides the user with the appropriate itinerary information that is tailored for that particular user. At step 50, the multi-modal message server 20 transmits the multi-modal response to the user, which in FIG. 2 is represented as a voice-based response and a text-based response. - To generate the response, the preferred embodiment uses customer information that is received from the
virtual customer database 32 to determine that the user of the wireless terminal 14 has a profile with the airline. The profile can provide the user's customer ID and possibly a virtual key associated with that customer that authorizes the wireless terminal 14 to receive data from the airline's database. This information allows the multi-modal messaging system 10 to authenticate and identify the user of the wireless terminal 14 in order to generate an appropriate response from the airline's data files. - Referring to FIG. 3, wherein like reference numbers refer to the same elements set forth in the previous embodiments, another preferred embodiment of the present invention discloses a
multi-modal callback system 100 for a wireless terminal 14 that is connected to at least one wireless access network 16. As illustrated, the wireless communication system 12 is connected to at least one switch 18 and/or a router 19, which, in turn, is connected to a multi-modal callback server 102. The multi-modal callback server 102 may be the same server as the multi-modal message server 20 set forth in the previous embodiments or may be another server. As illustrated, the multi-modal callback server 102 preferentially includes many of the same applications as the multi-modal message server 20. - The
multi-modal callback system 100 provides a method for initiating a telephone call between the wireless terminal 14 and the multi-modal callback server 102 for transmitting a predefined speech-based message to the user of the wireless terminal 14. The call is preferentially initiated in an automated fashion by the wireless terminal 14 after the wireless access network 16 receives a message that is transmitted from the wireless terminal 14 to the multi-modal callback server 102 requesting a callback. During the callback, the wireless terminal 14 receives a voice-based message that reads a text-based message to the user of the wireless terminal 14. - During normal operation, the user of the
wireless terminal 14 preferentially generates a request for information that is transmitted to the multi-modal callback server 102. The preferred request for information is in the form of a voice-based request for information that is generated using normal speech. The voice request for information can be transmitted in the form of a short message that is sent from the wireless terminal 14 to the multi-modal callback server 102. In one preferred embodiment, the wireless terminal 14 does not establish a permanent connection with the multi-modal callback server 102 when the request for information is sent to the multi-modal callback server 102. The wireless terminal 14 can also transmit the request for information to the multi-modal callback server 102 in the form of a text message. In the preferred embodiment illustrated in FIG. 1, the preferred wireless terminal 14 is illustrated as a wireless phone, but those skilled in the art should recognize that other wireless communication devices (e.g., PDAs, laptops, and various other types of personal communication devices) could be used as a wireless terminal 14. - In the preferred embodiment of the present invention, a multi-modal response to the request for information is preferentially generated by the
multi-modal callback server 102 and sent to the wireless terminal 14. The multi-modal response preferentially includes at least a text-based response and a speech-based response. Other types of responses may also be included in the multi-modal response, including an email response, an instant message response, and a fax response. - Referring to FIG. 3, once the voice request for information is received by the
multi-modal callback server 102, a voice recognition application 22 is used to identify a plurality of words contained in the request for information if the request is in the form of a voice-based request for information. After the words in the voice-based request for information are identified, a voice print application 24 can be used to verify that the user has access rights to the multi-modal callback system 100. A natural language processing application 26 can be used to determine an intent associated with the words contained in the voice-based request for information. The identity of the words and the intent of the words are then used to generate an input to a response generation application 28. - The
response generation application 28 uses the input to generate a response to the request for information that is sent by the user of the wireless terminal 14. The response generation application 28 preferentially accesses a data content database 34 to retrieve information that is responsive to the request for information. The data content database 34 may be located on the multi-modal callback server 102 or on a data server that is connected to the multi-modal callback server 102. - A
location information application 30 may also be included that is used to determine the geographic location of the wireless terminal 14. The geographic location of the wireless terminal 14 is used for requests for information that are dependent upon the geographic location of the user. A virtual customer database 32 may also be included that contains a plurality of user profiles. The user profiles can be used to grant access to the data content database 34 and to authorize the user of the wireless terminal 14. For more information about the virtual customer database 32, reference is made to a co-pending application filed by the same inventors and assigned U.S. application Ser. No.: ______ and entitled Virtual Customer Database, which is hereby incorporated by reference in its entirety. - The preferred
multi-modal callback system 100 also includes a voice synthesis application 104. The voice synthesis application 104 is a text-to-speech application that is used to convert text-based responses into a synthesized human voice. As such, if the response generation application 28 generates a text-based response to the request for information, the user of the wireless terminal 14 can have the text contained therein read back over the wireless terminal 14, as set forth in greater detail below. It is worth noting that the present invention could also be used to audibly play back any kind of text-based message that is sent to the wireless terminal 14, such as short messages or instant messages. - The
response generation application 28 is used to generate a multi-modal response to the user's request for information. In the preferred embodiment of the present invention, the multi-modal response includes a text-based response that is displayed on the display of the wireless terminal 14. At certain times, such as when driving, users of the wireless terminal 14 may not be able to read the text-based response or may simply want to have the text-based response stated in a voice-based response. The preferred embodiment of the present invention allows users of the wireless terminal 14 to convert the text-based response into an audible response if desired. - The preferred multi-modal response includes an interaction menu that is generated on a display of the
wireless terminal 14 that allows the user to obtain additional information related to the information contained in the text-based response of the multi-modal response. The text-based response may also include graphic information that is representative of a response, such as a trademark or service mark of a respective company. The interaction menu is preferentially set up so that a keypad of the wireless terminal 14 can be used to allow the user to select items from the interaction menu. A pointing device, such as a mouse or touch-pad, may also be used to allow the user to select an item from the interaction menu. The user of the wireless terminal 14 can also use voice-based commands to select items contained in the interaction menu. - After the multi-modal response has been sent to the
wireless terminal 14, the connection between the wireless terminal 14 and the multi-modal callback server 102 is preferentially terminated. This may be done for several reasons relating to the cost and efficiency of the multi-modal callback system 100, among other reasons. For example, the connection may be terminated so that the multi-modal callback server 102 can focus on other requests from other users, thereby processing requests faster. In addition, there is typically a charge associated with the use of air or access time from the wireless communication system 12 and, as such, the user will likely want to minimize use in order to keep charges down. In IP-based wireless access networks, the wireless terminal 14 is always connected to the wireless access network. In these types of networks, it is sufficient to note that the connection between the two devices is no longer current or active and must be re-established. - Once the user selects an item from the interaction menu generated on the
wireless terminal 14, a menu selection request is sent to the multi-modal callback server 102 using a wireless communication protocol, such as SMS. In the first response to the request for information, a predefined callback number or address is embedded into each item on the interaction menu so that the wireless terminal 14 knows where to locate and obtain the information that is associated with each item listed in the interaction menu. In response to this selection by the user, the wireless terminal 14 establishes a connection to the multi-modal callback server 102 that is indicated by the predefined callback number or address. In an alternative preferred embodiment of the present invention, the multi-modal callback server 102 may simply receive a short message from the wireless terminal 14 that causes the multi-modal callback server 102 to establish a connection with the wireless terminal 14. - After establishing a connection with the
wireless terminal 14, the multi-modal callback system 100 preferentially uses a voice synthesis application 104 to generate a voice-based message that is sent to the wireless terminal 14. As set forth above, the voice-based message is based on a previous interaction between the wireless terminal 14 and the multi-modal callback system 100. This previous interaction includes a set of transmissions including, but not limited to, a text message transmitted from the multi-modal messaging system 10 to the wireless terminal 14 containing instructions to the user of the wireless terminal 14 regarding: 1) the procedure for replying to the text message, 2) the use of a special code, and 3) the resulting telephone call and/or voice communication that will be initiated by the multi-modal callback system 100. - Referring to FIG. 4, an illustrative operational example of the preferred
multi-modal callback system 100 will be set forth below. At step 110, a user of wireless terminal 14 generates a voice-based request for information. Once generated, the voice-based request for information is sent to the multi-modal callback server 102, which is illustrated at step 112. In the present example, the user asks the multi-modal callback server 102 for “directions to Bud and Joe's”. The request for information is received by the multi-modal callback server 102, which, in turn, uses automated speech processing applications to generate a response to the request for information from the user. At step 114, a voice recognition application 22 determines the identity of the words contained in the voice-based request for information. At step 116, a natural language processing application 26 may be used to determine an intent or meaning behind the words identified by the voice recognition application 22. It is important to note that the multi-modal callback server 102 is also capable of handling text-based requests for information that are generated by the wireless terminal 14. - As set forth above, the response that is generated by the
multi-modal callback system 100 may include a voice-based response and a text-based response. The response generation application 28 is used to generate a search query that searches the data content database 34 in order to retrieve the information needed to generate a response, which is illustrated at step 118. In the case of a text-based request for information, the voice recognition application 22 and the natural language processing application 26 are simply bypassed and the user's text-based request for information is used by the response generation application 28 to generate the multi-modal response. - In our current example, the voice-based response might be as follows: “Will be sent to your phone. You can have them read back to you by replying to the message.” The text-based response might be: “To have these directions read back just respond to this message by pressing 1 on your keypad. Directions to Bud and Joe's . . . Turn left on Main St. . . . ” After the responses are generated, they are both transmitted to the
wireless terminal 14, which is represented at step 120. Preferentially, at that point the call or connection between the multi-modal callback server 102 and the wireless terminal 14 is terminated so that the user is no longer charged for access. - After some time has elapsed, at
step 122 the user enters a callback request by selecting “1” on the keypad of the wireless terminal 14 in the present example. The callback request is then transmitted to the multi-modal callback server 102, which is illustrated at step 122. The callback request indicator may either be in an interactive menu or in the text-based response. Based on the callback request, at step 124 the multi-modal callback server 102 generates a voice-based response that is based on the text-based response that was previously sent to the wireless terminal 14 as part of the multi-modal response, which is illustrated at step 126. - At
step 126, the multi-modal callback server 102 establishes a connection with the wireless terminal 14. After the connection is established with the wireless terminal 14, the voice-based response is transmitted to the wireless terminal 14, which is illustrated at step 128. As set forth above, a voice synthesis application 104 is used to generate the voice-based response. The voice synthesis application 104 is preferentially capable of converting text to speech and may contain predefined voice files that may be used as responses. - Although a
voice synthesis application 104 is used to generate a voice-based response in the preferred embodiment, the multi-modal callback server 102 may also generate a second text-based response that is also sent to the wireless terminal 14. The second text-based response may be sent instead of the voice-based response or along with the voice-based response. - In yet another preferred embodiment of the present invention, the
multi-modal callback server 102 may have already sent a text-based message to the user of the wireless terminal 14. The text-based message could be pushed to the wireless terminal 14 or pulled by the wireless terminal 14 depending upon the particular circumstances. For example, a text-based message that might be pushed to the wireless terminal 14 could be “Pizza special at Joe's Pizza, which is near your location. Press 1 for directions.” An interaction item is contained in the text-based message that allows the user to select an option that is presented. In this example, the user is allowed to press 1 on the keypad for directions. - If the user of the
wireless terminal 14 presses 1 on the keypad, the multi-modal callback server 102 will connect with the wireless terminal 14 and audibly reproduce directions to Joe's Pizza over the wireless terminal 14 using the voice synthesis application 104. In other words, in the embodiments set forth above, the text-based message that is presented to the user is read back to the user. In this embodiment, a different message is read to the user in response to a selection of an item in the interaction menu. The message that is read to the user does not necessarily have to be the same as the text-based message that is presented to the user of the wireless terminal 14. - While the invention has been described in its currently best-known modes of operation and embodiments, other modes, embodiments and advantages of the present invention will be apparent to those skilled in the art and are contemplated herein.
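The general process set forth above (a spoken or text request, automated recognition and interpretation, generation of a multi-modal response carrying an interaction menu, and a later callback in which stored text is read back to the user) can be summarized in outline. The following Python sketch is purely illustrative and is not part of the claimed invention: the function names, intents, menu layout, and callback address are assumptions, and trivial stand-ins take the place of real speech recognition (application 22), natural language processing (application 26), and voice synthesis (application 104).

```python
# Illustrative sketch only: stand-ins model the applications named in
# the description above; no real ASR, NLP, or TTS engine is invoked.

def recognize_speech(audio):
    # Stand-in for voice recognition application 22: the audio input is
    # modeled as the spoken words themselves.
    return audio.lower().split()

def interpret(words):
    # Stand-in for natural language processing application 26: map
    # free-form words to an intent (intent names are assumptions).
    if "directions" in words:
        return "get_directions"
    if "itinerary" in words:
        return "get_itinerary"
    return "unknown"

def generate_multimodal_response(intent):
    # Stand-in for response generation application 28: return a
    # speech-based part, a text-based part, and an interaction menu
    # whose item embeds a predefined (hypothetical) callback address.
    body = {
        "get_directions": "Directions to Bud and Joe's: Turn left on Main St.",
        "get_itinerary": "Flight 123, departs 9:05 AM.",
    }.get(intent, "Sorry, your request was not understood.")
    return {
        "speech": "It will be sent to your phone.",
        "text": body + " To have this read back, press 1.",
        "menu": {"1": {"action": "read_back", "callback": "tel:+15550100"}},
    }

def process_request(request):
    # Voice requests pass through recognition and interpretation; text
    # requests bypass both, as the description above states.
    if request["kind"] == "voice":
        intent = interpret(recognize_speech(request["payload"]))
    else:
        intent = request["payload"]
    return generate_multimodal_response(intent)

def handle_callback(response, key):
    # When the user selects a menu item, the server re-establishes a
    # connection and hands the stored text to a text-to-speech engine;
    # here the text to be spoken is simply returned.
    item = response["menu"].get(key)
    if item and item["action"] == "read_back":
        return response["text"]
    return None
```

In this sketch, a voice request such as "Directions to Bud and Joe's" produces a response whose menu item "1" causes `handle_callback` to return the stored text for read-back, mirroring the flow of steps 110 through 128 of FIG. 4.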
Claims (35)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/263,501 US7233655B2 (en) | 2001-10-03 | 2002-10-03 | Multi-modal callback |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US32690201P | 2001-10-03 | 2001-10-03 | |
US10/263,501 US7233655B2 (en) | 2001-10-03 | 2002-10-03 | Multi-modal callback |
Publications (2)
Publication Number | Publication Date |
---|---|
US20030064716A1 (en) | 2003-04-03 |
US7233655B2 US7233655B2 (en) | 2007-06-19 |
Family
ID=26949894
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/263,501 Expired - Lifetime US7233655B2 (en) | 2001-10-03 | 2002-10-03 | Multi-modal callback |
Country Status (1)
Country | Link |
---|---|
US (1) | US7233655B2 (en) |
Families Citing this family (56)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6603835B2 (en) | 1997-09-08 | 2003-08-05 | Ultratec, Inc. | System for text assisted telephony |
US6253061B1 (en) | 1997-09-19 | 2001-06-26 | Richard J. Helferich | Systems and methods for delivering information to a transmitting and receiving device |
US6636733B1 (en) | 1997-09-19 | 2003-10-21 | Thompson Trust | Wireless messaging method |
US7003304B1 (en) | 1997-09-19 | 2006-02-21 | Thompson Investment Group, Llc | Paging transceivers and methods for selectively retrieving messages |
US6826407B1 (en) | 1999-03-29 | 2004-11-30 | Richard J. Helferich | System and method for integrating audio and visual messaging |
US6983138B1 (en) | 1997-12-12 | 2006-01-03 | Richard J. Helferich | User interface for message access |
US6848542B2 (en) * | 2001-04-27 | 2005-02-01 | Accenture Llp | Method for passive mining of usage information in a location-based services system |
US7698228B2 (en) * | 2001-04-27 | 2010-04-13 | Accenture Llp | Tracking purchases in a location-based services system |
US7437295B2 (en) * | 2001-04-27 | 2008-10-14 | Accenture Llp | Natural language processing for a location-based services system |
US6944447B2 (en) | 2001-04-27 | 2005-09-13 | Accenture Llp | Location-based services |
US7970648B2 (en) * | 2001-04-27 | 2011-06-28 | Accenture Global Services Limited | Advertising campaign and business listing management for a location-based services system |
US7881441B2 (en) * | 2005-06-29 | 2011-02-01 | Ultratec, Inc. | Device independent text captioned telephone service |
US8416925B2 (en) | 2005-06-29 | 2013-04-09 | Ultratec, Inc. | Device independent text captioned telephone service |
US7441016B2 (en) * | 2001-10-03 | 2008-10-21 | Accenture Global Services Gmbh | Service authorizer |
US7640006B2 (en) * | 2001-10-03 | 2009-12-29 | Accenture Global Services Gmbh | Directory assistance with multi-modal messaging |
US7472091B2 (en) * | 2001-10-03 | 2008-12-30 | Accenture Global Services Gmbh | Virtual customer database |
US7286651B1 (en) * | 2002-02-12 | 2007-10-23 | Sprint Spectrum L.P. | Method and system for multi-modal interaction |
US7292689B2 (en) | 2002-03-15 | 2007-11-06 | Intellisist, Inc. | System and method for providing a message-based communications infrastructure for automated call center operation |
US8170197B2 (en) | 2002-03-15 | 2012-05-01 | Intellisist, Inc. | System and method for providing automated call center post-call processing |
US8068595B2 (en) | 2002-03-15 | 2011-11-29 | Intellisist, Inc. | System and method for providing a multi-modal communications infrastructure for automated call center operation |
US8515024B2 (en) | 2010-01-13 | 2013-08-20 | Ultratec, Inc. | Captioned telephone service |
US7873654B2 (en) * | 2005-01-24 | 2011-01-18 | The Intellection Group, Inc. | Multimodal natural language query system for processing and analyzing voice and proximity-based queries |
US8150872B2 (en) * | 2005-01-24 | 2012-04-03 | The Intellection Group, Inc. | Multimodal natural language query system for processing and analyzing voice and proximity-based queries |
GB2424790A (en) * | 2005-03-29 | 2006-10-04 | Hewlett Packard Development Co | Communication assistance system responsive to user identification data |
US11258900B2 (en) | 2005-06-29 | 2022-02-22 | Ultratec, Inc. | Device independent text captioned telephone service |
US8019057B2 (en) * | 2005-12-21 | 2011-09-13 | Verizon Business Global Llc | Systems and methods for generating and testing interactive voice response applications |
US20090124272A1 (en) | 2006-04-05 | 2009-05-14 | Marc White | Filtering transcriptions of utterances |
US8117268B2 (en) * | 2006-04-05 | 2012-02-14 | Jablokov Victor R | Hosted voice recognition system for wireless devices |
US9436951B1 (en) | 2007-08-22 | 2016-09-06 | Amazon Technologies, Inc. | Facilitating presentation by mobile device of additional content for a word or phrase upon utterance thereof |
US8510109B2 (en) * | 2007-08-22 | 2013-08-13 | Canyon IP Holdings LLC | Continuous speech transcription performance indication |
US20100030557A1 (en) * | 2006-07-31 | 2010-02-04 | Stephen Molloy | Voice and text communication system, method and apparatus |
US8145493B2 (en) | 2006-09-11 | 2012-03-27 | Nuance Communications, Inc. | Establishing a preferred mode of interaction between a user and a multimodal application |
US20080148014A1 (en) * | 2006-12-15 | 2008-06-19 | Christophe Boulange | Method and system for providing a response to a user instruction in accordance with a process specified in a high level service description language |
US8352261B2 (en) * | 2008-03-07 | 2013-01-08 | Canyon IP Holdings LLC | Use of intermediate speech transcription results in editing final speech transcription results |
US20090076917A1 (en) * | 2007-08-22 | 2009-03-19 | Victor Roditis Jablokov | Facilitating presentation of ads relating to words of a message |
US8326636B2 (en) | 2008-01-16 | 2012-12-04 | Canyon IP Holdings LLC | Using a physical phenomenon detector to control operation of a speech recognition engine |
US8611871B2 (en) | 2007-12-25 | 2013-12-17 | Canyon IP Holdings LLC | Validation of mobile advertising from derived information |
US8352264B2 (en) * | 2008-03-19 | 2013-01-08 | Canyon IP Holdings LLC | Corrective feedback loop for automated speech recognition |
US9973450B2 (en) * | 2007-09-17 | 2018-05-15 | Amazon Technologies, Inc. | Methods and systems for dynamically updating web service profile information by parsing transcribed message strings |
US9053489B2 (en) | 2007-08-22 | 2015-06-09 | Canyon IP Holdings LLC | Facilitating presentation of ads relating to words of a message |
US8335830B2 (en) * | 2007-08-22 | 2012-12-18 | Canyon IP Holdings LLC | Facilitating presentation by mobile device of additional content for a word or phrase upon utterance thereof |
EP2088548A1 (en) | 2008-02-11 | 2009-08-12 | Accenture Global Services GmbH | Point of sale payment method |
US8676577B2 (en) | 2008-03-31 | 2014-03-18 | Canyon IP Holdings LLC | Use of metadata to post process speech recognition output |
US8301454B2 (en) | 2008-08-22 | 2012-10-30 | Canyon IP Holdings LLC | Methods, apparatuses, and systems for providing timely user cues pertaining to speech recognition |
US8594296B2 (en) * | 2009-05-20 | 2013-11-26 | Microsoft Corporation | Multimodal callback tagging |
US9219774B2 (en) * | 2009-11-16 | 2015-12-22 | Sap Se | Exchange of callback information |
US20130244685A1 (en) | 2012-03-14 | 2013-09-19 | Kelly L. Dempski | System for providing extensible location-based services |
US20180034961A1 (en) | 2014-02-28 | 2018-02-01 | Ultratec, Inc. | Semiautomated Relay Method and Apparatus |
US10878721B2 (en) | 2014-02-28 | 2020-12-29 | Ultratec, Inc. | Semiautomated relay method and apparatus |
US10748523B2 (en) | 2014-02-28 | 2020-08-18 | Ultratec, Inc. | Semiautomated relay method and apparatus |
US10389876B2 (en) | 2014-02-28 | 2019-08-20 | Ultratec, Inc. | Semiautomated relay method and apparatus |
US20180270350A1 (en) | 2014-02-28 | 2018-09-20 | Ultratec, Inc. | Semiautomated relay method and apparatus |
US10007947B2 (en) | 2015-04-16 | 2018-06-26 | Accenture Global Services Limited | Throttle-triggered suggestions |
US9239987B1 (en) | 2015-06-01 | 2016-01-19 | Accenture Global Services Limited | Trigger repeat order notifications |
US10650437B2 (en) | 2015-06-01 | 2020-05-12 | Accenture Global Services Limited | User interface generation for transacting goods |
US11539900B2 (en) | 2020-02-21 | 2022-12-27 | Ultratec, Inc. | Caption modification and augmentation systems and methods for use by hearing assisted user |
Citations (74)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5561769A (en) * | 1994-05-10 | 1996-10-01 | Lucent Technologies Inc. | Method and apparatus for executing a distributed algorithm or service on a simple network management protocol based computer network |
US5675507A (en) * | 1995-04-28 | 1997-10-07 | Bobo, II; Charles R. | Message storage and delivery system |
US5764762A (en) * | 1995-06-08 | 1998-06-09 | Wave System Corp. | Encrypted data package record for use in remote transaction metered data system |
US5850517A (en) * | 1995-08-31 | 1998-12-15 | Oracle Corporation | Communication link for client-server having agent which sends plurality of requests independent of client and receives information from the server independent of the server |
US5870549A (en) * | 1995-04-28 | 1999-02-09 | Bobo, II; Charles R. | Systems and methods for storing, delivering, and managing messages |
US5884262A (en) * | 1996-03-28 | 1999-03-16 | Bell Atlantic Network Services, Inc. | Computer network audio access and conversion system |
US5882325A (en) * | 1996-04-30 | 1999-03-16 | Medtronic, Inc. | Electrostatic blood defoamer for heart-lung machine |
US5905736A (en) * | 1996-04-22 | 1999-05-18 | At&T Corp | Method for the billing of transactions over the internet |
US5920835A (en) * | 1993-09-17 | 1999-07-06 | Alcatel N.V. | Method and apparatus for processing and transmitting text documents generated from speech |
US5953392A (en) * | 1996-03-01 | 1999-09-14 | Netphonic Communications, Inc. | Method and apparatus for telephonically accessing and navigating the internet |
US6052367A (en) * | 1995-12-29 | 2000-04-18 | International Business Machines Corp. | Client-server system |
US6070189A (en) * | 1997-08-26 | 2000-05-30 | International Business Machines Corporation | Signaling communication events in a computer network |
US6119167A (en) * | 1997-07-11 | 2000-09-12 | Phone.Com, Inc. | Pushing and pulling data in networks |
US6138158A (en) * | 1998-04-30 | 2000-10-24 | Phone.Com, Inc. | Method and system for pushing and pulling data using wideband and narrowband transport systems |
US6157941A (en) * | 1998-03-18 | 2000-12-05 | Oracle Corporation | Architecture for client-server communication over a communication link |
US6161139A (en) * | 1998-07-10 | 2000-12-12 | Encommerce, Inc. | Administrative roles that govern access to administrative functions |
US6173259B1 (en) * | 1997-03-27 | 2001-01-09 | Speech Machines Plc | Speech to text conversion |
US6181781B1 (en) * | 1996-11-12 | 2001-01-30 | International Business Machines Corp. | Voice mail system that downloads an applet for managing voice mail messages |
US6182144B1 (en) * | 1997-12-12 | 2001-01-30 | Intel Corporation | Means and method for switching between a narrow band communication and a wide band communication to establish a continuous connection with mobile computers |
US6219638B1 (en) * | 1998-11-03 | 2001-04-17 | International Business Machines Corporation | Telephone messaging and editing system |
US6236768B1 (en) * | 1997-10-14 | 2001-05-22 | Massachusetts Institute Of Technology | Method and apparatus for automated, context-dependent retrieval of information |
US6249291B1 (en) * | 1995-09-22 | 2001-06-19 | Next Software, Inc. | Method and apparatus for managing internet transactions |
US6263358B1 (en) * | 1997-07-25 | 2001-07-17 | British Telecommunications Public Limited Company | Scheduler for a software system having means for allocating tasks |
US6282270B1 (en) * | 1995-05-26 | 2001-08-28 | International Business Machines Corp. | World wide web voice mail system |
US6301245B1 (en) * | 1998-06-09 | 2001-10-09 | Unisys Corporation | Universal Messaging system providing integrated voice, data and fax messaging services to PC/web-based clients, including a large object server for efficiently distributing voice/fax messages to web-based clients |
US6314108B1 (en) * | 1998-04-30 | 2001-11-06 | Openwave Systems Inc. | Method and apparatus for providing network access over different wireless networks |
US6333973B1 (en) * | 1997-04-23 | 2001-12-25 | Nortel Networks Limited | Integrated message center |
US6345245B1 (en) * | 1997-03-06 | 2002-02-05 | Kabushiki Kaisha Toshiba | Method and system for managing a common dictionary and updating dictionary data selectively according to a type of local processing system |
US20020035607A1 (en) * | 2000-05-25 | 2002-03-21 | Daniel Checkoway | E-mail gateway system |
US6385586B1 (en) * | 1999-01-28 | 2002-05-07 | International Business Machines Corporation | Speech recognition text-based language conversion and text-to-speech in a client-server configuration to enable language translation devices |
US6393467B1 (en) * | 1998-08-31 | 2002-05-21 | Nortel Networks Limited | Network interconnected computing device, server and notification method |
US20020091829A1 (en) * | 2000-02-22 | 2002-07-11 | Wood Christopher (Noah) | Internet message management portal |
US20020107925A1 (en) * | 2001-02-05 | 2002-08-08 | Robert Goldschneider | Method and system for e-mail management |
US20020112007A1 (en) * | 1999-11-03 | 2002-08-15 | Christopher (Noah) Wood | Personal message management system |
US20020119793A1 (en) * | 2001-02-27 | 2002-08-29 | Daniel Hronek | Mobile originated interactive menus via short messaging services |
US6453337B2 (en) * | 1999-10-25 | 2002-09-17 | Zaplet, Inc. | Methods and systems to manage and track the states of electronic media |
US20020137491A1 (en) * | 1998-09-24 | 2002-09-26 | Heimo Pentikainen | Method and system for an answering service |
US6473612B1 (en) * | 1994-04-28 | 2002-10-29 | Metro One Telecommunications, Inc. | Method for providing directory assistance services via an alphanumeric page |
US6483899B2 (en) * | 1998-06-19 | 2002-11-19 | At&T Corp | Voice messaging system |
US6504910B1 (en) * | 2001-06-07 | 2003-01-07 | Robert Engelke | Voice and text transmission system |
US20030008661A1 (en) * | 2001-07-03 | 2003-01-09 | Joyce Dennis P. | Location-based content delivery |
US6513003B1 (en) * | 2000-02-03 | 2003-01-28 | Fair Disclosure Financial Network, Inc. | System and method for integrated delivery of media and synchronized transcription |
US6516316B1 (en) * | 1998-02-17 | 2003-02-04 | Openwave Systems Inc. | Centralized certificate management system for two-way interactive communication devices in data networks |
US6523063B1 (en) * | 1999-08-30 | 2003-02-18 | Zaplet, Inc. | Method system and program product for accessing a file using values from a redirect message string for each change of the link identifier |
US20030064709A1 (en) * | 2001-10-03 | 2003-04-03 | Gailey Michael L. | Multi-modal messaging |
US20030065620A1 (en) * | 2001-10-03 | 2003-04-03 | Gailey Michael L. | Virtual customer database |
US6546005B1 (en) * | 1997-03-25 | 2003-04-08 | At&T Corp. | Active user registry |
US6587835B1 (en) * | 2000-02-09 | 2003-07-01 | G. Victor Treyz | Shopping assistance with handheld computing device |
US6594348B1 (en) * | 1999-02-24 | 2003-07-15 | Pipebeach Ab | Voice browser and a method at a voice browser |
US6598018B1 (en) * | 1999-12-15 | 2003-07-22 | Matsushita Electric Industrial Co., Ltd. | Method for natural dialog interface to car devices |
US6610417B2 (en) * | 2001-10-04 | 2003-08-26 | Oak-Mitsui, Inc. | Nickel coated copper as electrodes for embedded passive devices |
US6647257B2 (en) * | 1998-01-21 | 2003-11-11 | Leap Wireless International, Inc. | System and method for providing targeted messages based on wireless mobile location |
US6697474B1 (en) * | 2001-05-16 | 2004-02-24 | Worldcom, Inc. | Systems and methods for receiving telephone calls via instant messaging |
US6721288B1 (en) * | 1998-09-16 | 2004-04-13 | Openwave Systems Inc. | Wireless mobile devices having improved operation during network unavailability |
US6725252B1 (en) * | 1999-06-03 | 2004-04-20 | International Business Machines Corporation | Method and apparatus for detecting and processing multiple additional requests from a single user at a server in a distributed data processing system |
US6728758B2 (en) * | 1997-12-16 | 2004-04-27 | Fujitsu Limited | Agent for performing process using service list, message distribution method using service list, and storage medium storing program for realizing agent |
US6742022B1 (en) * | 1995-12-11 | 2004-05-25 | Openwave Systems Inc. | Centralized service management system for two-way interactive communication devices in data networks |
US6757718B1 (en) * | 1999-01-05 | 2004-06-29 | Sri International | Mobile navigation of network-based electronic information using spoken input |
US6775360B2 (en) * | 2000-12-28 | 2004-08-10 | Intel Corporation | Method and system for providing textual content along with voice messages |
US6782419B2 (en) * | 2000-07-24 | 2004-08-24 | Bandai Co., Ltd. | System and method for distributing images to mobile phones |
US6782253B1 (en) * | 2000-08-10 | 2004-08-24 | Koninklijke Philips Electronics N.V. | Mobile micro portal |
US6816835B2 (en) * | 2000-06-15 | 2004-11-09 | Sharp Kabushiki Kaisha | Electronic mail system and device |
US6820204B1 (en) * | 1999-03-31 | 2004-11-16 | Nimesh Desai | System and method for selective information exchange |
US6826407B1 (en) * | 1999-03-29 | 2004-11-30 | Richard J. Helferich | System and method for integrating audio and visual messaging |
US6826692B1 (en) * | 1998-12-23 | 2004-11-30 | Computer Associates Think, Inc. | Method and apparatus to permit automated server determination for foreign system login |
US6829334B1 (en) * | 1999-09-13 | 2004-12-07 | Microstrategy, Incorporated | System and method for the creation and automatic deployment of personalized, dynamic and interactive voice services, with telephone-based service utilization and control |
US6859451B1 (en) * | 1998-04-21 | 2005-02-22 | Nortel Networks Limited | Server for handling multimodal information |
US6895084B1 (en) * | 1999-08-24 | 2005-05-17 | Microstrategy, Inc. | System and method for generating voice pages with included audio files for use in a voice page delivery system |
US6898571B1 (en) * | 2000-10-10 | 2005-05-24 | Jordan Duvac | Advertising enhancement using the internet |
US6907112B1 (en) * | 1999-07-27 | 2005-06-14 | Nms Communications | Method and system for voice messaging |
US6912582B2 (en) * | 2001-03-30 | 2005-06-28 | Microsoft Corporation | Service routing and web integration in a distributed multi-site user authentication system |
US6925307B1 (en) * | 2000-07-13 | 2005-08-02 | Gtech Global Services Corporation | Mixed-mode interaction |
US6950947B1 (en) * | 2000-06-20 | 2005-09-27 | Networks Associates Technology, Inc. | System for sharing network state to enhance network throughput |
US7020251B2 (en) * | 1999-09-13 | 2006-03-28 | Microstrategy, Incorporated | System and method for the creation and automatic deployment of personalized, dynamic and interactive voice services, with real-time drilling via telephone |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5862325A (en) | 1996-02-29 | 1999-01-19 | Intermind Corporation | Computer-based communication system and method using metadata defining a control structure |
DE19756851A1 (en) | 1997-12-19 | 1999-07-01 | Siemens Ag | Method and telecommunications network for the exchange of information between a subscriber and a service provider |
AU1025399A (en) | 1998-09-22 | 2000-04-10 | Nokia Networks Oy | Method and system of configuring a speech recognition system |
IL142363A0 (en) | 1998-10-02 | 2002-03-10 | Ibm | System and method for providing network coordinated conversational services |
WO2001003011A2 (en) | 1999-07-01 | 2001-01-11 | Netmorf, Inc. | Cross-media information server |
US7167830B2 (en) | 2000-03-10 | 2007-01-23 | Entrieva, Inc. | Multimodal information services |
US6510417B1 (en) | 2000-03-21 | 2003-01-21 | America Online, Inc. | System and method for voice access to internet-based information |
- 2002-10-03: US application US 10/263,501 filed; granted as US7233655B2 (status: Expired - Lifetime)
Patent Citations (75)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5920835A (en) * | 1993-09-17 | 1999-07-06 | Alcatel N.V. | Method and apparatus for processing and transmitting text documents generated from speech |
US6473612B1 (en) * | 1994-04-28 | 2002-10-29 | Metro One Telecommunications, Inc. | Method for providing directory assistance services via an alphanumeric page |
US5561769A (en) * | 1994-05-10 | 1996-10-01 | Lucent Technologies Inc. | Method and apparatus for executing a distributed algorithm or service on a simple network management protocol based computer network |
US5675507A (en) * | 1995-04-28 | 1997-10-07 | Bobo, II; Charles R. | Message storage and delivery system |
US5870549A (en) * | 1995-04-28 | 1999-02-09 | Bobo, II; Charles R. | Systems and methods for storing, delivering, and managing messages |
US6282270B1 (en) * | 1995-05-26 | 2001-08-28 | International Business Machines Corp. | World wide web voice mail system |
US5764762A (en) * | 1995-06-08 | 1998-06-09 | Wave System Corp. | Encrypted data package record for use in remote transaction metered data system |
US5850517A (en) * | 1995-08-31 | 1998-12-15 | Oracle Corporation | Communication link for client-server having agent which sends plurality of requests independent of client and receives information from the server independent of the server |
US6249291B1 (en) * | 1995-09-22 | 2001-06-19 | Next Software, Inc. | Method and apparatus for managing internet transactions |
US6742022B1 (en) * | 1995-12-11 | 2004-05-25 | Openwave Systems Inc. | Centralized service management system for two-way interactive communication devices in data networks |
US6052367A (en) * | 1995-12-29 | 2000-04-18 | International Business Machines Corp. | Client-server system |
US5953392A (en) * | 1996-03-01 | 1999-09-14 | Netphonic Communications, Inc. | Method and apparatus for telephonically accessing and navigating the internet |
US5884262A (en) * | 1996-03-28 | 1999-03-16 | Bell Atlantic Network Services, Inc. | Computer network audio access and conversion system |
US5905736A (en) * | 1996-04-22 | 1999-05-18 | At&T Corp | Method for the billing of transactions over the internet |
US5882325A (en) * | 1996-04-30 | 1999-03-16 | Medtronic, Inc. | Electrostatic blood defoamer for heart-lung machine |
US6181781B1 (en) * | 1996-11-12 | 2001-01-30 | International Business Machines Corp. | Voice mail system that downloads an applet for managing voice mail messages |
US6345245B1 (en) * | 1997-03-06 | 2002-02-05 | Kabushiki Kaisha Toshiba | Method and system for managing a common dictionary and updating dictionary data selectively according to a type of local processing system |
US6546005B1 (en) * | 1997-03-25 | 2003-04-08 | At&T Corp. | Active user registry |
US6173259B1 (en) * | 1997-03-27 | 2001-01-09 | Speech Machines Plc | Speech to text conversion |
US6333973B1 (en) * | 1997-04-23 | 2001-12-25 | Nortel Networks Limited | Integrated message center |
US6119167A (en) * | 1997-07-11 | 2000-09-12 | Phone.Com, Inc. | Pushing and pulling data in networks |
US6263358B1 (en) * | 1997-07-25 | 2001-07-17 | British Telecommunications Public Limited Company | Scheduler for a software system having means for allocating tasks |
US6070189A (en) * | 1997-08-26 | 2000-05-30 | International Business Machines Corporation | Signaling communication events in a computer network |
US6236768B1 (en) * | 1997-10-14 | 2001-05-22 | Massachusetts Institute Of Technology | Method and apparatus for automated, context-dependent retrieval of information |
US6182144B1 (en) * | 1997-12-12 | 2001-01-30 | Intel Corporation | Means and method for switching between a narrow band communication and a wide band communication to establish a continuous connection with mobile computers |
US6728758B2 (en) * | 1997-12-16 | 2004-04-27 | Fujitsu Limited | Agent for performing process using service list, message distribution method using service list, and storage medium storing program for realizing agent |
US6647257B2 (en) * | 1998-01-21 | 2003-11-11 | Leap Wireless International, Inc. | System and method for providing targeted messages based on wireless mobile location |
US6516316B1 (en) * | 1998-02-17 | 2003-02-04 | Openwave Systems Inc. | Centralized certificate management system for two-way interactive communication devices in data networks |
US6157941A (en) * | 1998-03-18 | 2000-12-05 | Oracle Corporation | Architecture for client-server communication over a communication link |
US6859451B1 (en) * | 1998-04-21 | 2005-02-22 | Nortel Networks Limited | Server for handling multimodal information |
US6138158A (en) * | 1998-04-30 | 2000-10-24 | Phone.Com, Inc. | Method and system for pushing and pulling data using wideband and narrowband transport systems |
US6314108B1 (en) * | 1998-04-30 | 2001-11-06 | Openwave Systems Inc. | Method and apparatus for providing network access over different wireless networks |
US6301245B1 (en) * | 1998-06-09 | 2001-10-09 | Unisys Corporation | Universal Messaging system providing integrated voice, data and fax messaging services to PC/web-based clients, including a large object server for efficiently distributing voice/fax messages to web-based clients |
US6483899B2 (en) * | 1998-06-19 | 2002-11-19 | At&T Corp | Voice messaging system |
US6182142B1 (en) * | 1998-07-10 | 2001-01-30 | Encommerce, Inc. | Distributed access management of information resources |
US6161139A (en) * | 1998-07-10 | 2000-12-12 | Encommerce, Inc. | Administrative roles that govern access to administrative functions |
US6393467B1 (en) * | 1998-08-31 | 2002-05-21 | Nortel Networks Limited | Network interconnected computing device, server and notification method |
US6721288B1 (en) * | 1998-09-16 | 2004-04-13 | Openwave Systems Inc. | Wireless mobile devices having improved operation during network unavailability |
US20020137491A1 (en) * | 1998-09-24 | 2002-09-26 | Heimo Pentikainen | Method and system for an answering service |
US6219638B1 (en) * | 1998-11-03 | 2001-04-17 | International Business Machines Corporation | Telephone messaging and editing system |
US6826692B1 (en) * | 1998-12-23 | 2004-11-30 | Computer Associates Think, Inc. | Method and apparatus to permit automated server determination for foreign system login |
US6757718B1 (en) * | 1999-01-05 | 2004-06-29 | Sri International | Mobile navigation of network-based electronic information using spoken input |
US6385586B1 (en) * | 1999-01-28 | 2002-05-07 | International Business Machines Corporation | Speech recognition text-based language conversion and text-to-speech in a client-server configuration to enable language translation devices |
US6594348B1 (en) * | 1999-02-24 | 2003-07-15 | Pipebeach Ab | Voice browser and a method at a voice browser |
US6826407B1 (en) * | 1999-03-29 | 2004-11-30 | Richard J. Helferich | System and method for integrating audio and visual messaging |
US6820204B1 (en) * | 1999-03-31 | 2004-11-16 | Nimesh Desai | System and method for selective information exchange |
US6725252B1 (en) * | 1999-06-03 | 2004-04-20 | International Business Machines Corporation | Method and apparatus for detecting and processing multiple additional requests from a single user at a server in a distributed data processing system |
US6907112B1 (en) * | 1999-07-27 | 2005-06-14 | Nms Communications | Method and system for voice messaging |
US6895084B1 (en) * | 1999-08-24 | 2005-05-17 | Microstrategy, Inc. | System and method for generating voice pages with included audio files for use in a voice page delivery system |
US6523063B1 (en) * | 1999-08-30 | 2003-02-18 | Zaplet, Inc. | Method system and program product for accessing a file using values from a redirect message string for each change of the link identifier |
US7020251B2 (en) * | 1999-09-13 | 2006-03-28 | Microstrategy, Incorporated | System and method for the creation and automatic deployment of personalized, dynamic and interactive voice services, with real-time drilling via telephone |
US6829334B1 (en) * | 1999-09-13 | 2004-12-07 | Microstrategy, Incorporated | System and method for the creation and automatic deployment of personalized, dynamic and interactive voice services, with telephone-based service utilization and control |
US6453337B2 (en) * | 1999-10-25 | 2002-09-17 | Zaplet, Inc. | Methods and systems to manage and track the states of electronic media |
US20020112007A1 (en) * | 1999-11-03 | 2002-08-15 | Christopher (Noah) Wood | Personal message management system |
US6598018B1 (en) * | 1999-12-15 | 2003-07-22 | Matsushita Electric Industrial Co., Ltd. | Method for natural dialog interface to car devices |
US6513003B1 (en) * | 2000-02-03 | 2003-01-28 | Fair Disclosure Financial Network, Inc. | System and method for integrated delivery of media and synchronized transcription |
US6587835B1 (en) * | 2000-02-09 | 2003-07-01 | G. Victor Treyz | Shopping assistance with handheld computing device |
US20020091829A1 (en) * | 2000-02-22 | 2002-07-11 | Wood Christopher (Noah) | Internet message management portal |
US20020035607A1 (en) * | 2000-05-25 | 2002-03-21 | Daniel Checkoway | E-mail gateway system |
US6816835B2 (en) * | 2000-06-15 | 2004-11-09 | Sharp Kabushiki Kaisha | Electronic mail system and device |
US6950947B1 (en) * | 2000-06-20 | 2005-09-27 | Networks Associates Technology, Inc. | System for sharing network state to enhance network throughput |
US6925307B1 (en) * | 2000-07-13 | 2005-08-02 | Gtech Global Services Corporation | Mixed-mode interaction |
US6782419B2 (en) * | 2000-07-24 | 2004-08-24 | Bandai Co., Ltd. | System and method for distributing images to mobile phones |
US6782253B1 (en) * | 2000-08-10 | 2004-08-24 | Koninklijke Philips Electronics N.V. | Mobile micro portal |
US6898571B1 (en) * | 2000-10-10 | 2005-05-24 | Jordan Duvac | Advertising enhancement using the internet |
US6775360B2 (en) * | 2000-12-28 | 2004-08-10 | Intel Corporation | Method and system for providing textual content along with voice messages |
US20020107925A1 (en) * | 2001-02-05 | 2002-08-08 | Robert Goldschneider | Method and system for e-mail management |
US20020119793A1 (en) * | 2001-02-27 | 2002-08-29 | Daniel Hronek | Mobile originated interactive menus via short messaging services |
US6912582B2 (en) * | 2001-03-30 | 2005-06-28 | Microsoft Corporation | Service routing and web integration in a distributed multi-site user authentication system |
US6697474B1 (en) * | 2001-05-16 | 2004-02-24 | Worldcom, Inc. | Systems and methods for receiving telephone calls via instant messaging |
US6504910B1 (en) * | 2001-06-07 | 2003-01-07 | Robert Engelke | Voice and text transmission system |
US20030008661A1 (en) * | 2001-07-03 | 2003-01-09 | Joyce Dennis P. | Location-based content delivery |
US20030064709A1 (en) * | 2001-10-03 | 2003-04-03 | Gailey Michael L. | Multi-modal messaging |
US20030065620A1 (en) * | 2001-10-03 | 2003-04-03 | Gailey Michael L. | Virtual customer database |
US6610417B2 (en) * | 2001-10-04 | 2003-08-26 | Oak-Mitsui, Inc. | Nickel coated copper as electrodes for embedded passive devices |
Cited By (69)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080288586A1 (en) * | 2002-03-29 | 2008-11-20 | Koch Robert A | Remote access and retrieval of electronic files |
US8615555B2 (en) | 2002-03-29 | 2013-12-24 | Wantage Technologies Llc | Remote access and retrieval of electronic files |
US7197537B2 (en) * | 2002-03-29 | 2007-03-27 | Bellsouth Intellectual Property Corp | Remote access and retrieval of electronic files |
US20030187955A1 (en) * | 2002-03-29 | 2003-10-02 | Koch Robert A. | Remote access and retrieval of electronic files |
US20080126922A1 (en) * | 2003-06-30 | 2008-05-29 | Hiroshi Yahata | Recording medium, reproduction apparatus, recording method, program and reproduction method |
US11019199B2 (en) | 2003-12-08 | 2021-05-25 | Ipventure, Inc. | Adaptable communication techniques for electronic devices |
US11800329B2 (en) | 2003-12-08 | 2023-10-24 | Ingenioshare, Llc | Method and apparatus to manage communication |
US11792316B2 (en) * | 2003-12-08 | 2023-10-17 | Ipventure, Inc. | Adaptable communication techniques for electronic devices |
US12143901B2 (en) | 2003-12-08 | 2024-11-12 | Ingenioshare, Llc | Method and apparatus to manage communication |
US11711459B2 (en) | 2003-12-08 | 2023-07-25 | Ipventure, Inc. | Adaptable communication techniques for electronic devices |
US8804758B2 (en) | 2004-03-11 | 2014-08-12 | Hipcricket, Inc. | System and method of media over an internet protocol communication |
US20060112063A1 (en) * | 2004-11-05 | 2006-05-25 | International Business Machines Corporation | System, apparatus, and methods for creating alternate-mode applications |
US7920681B2 (en) * | 2004-11-05 | 2011-04-05 | International Business Machines Corporation | System, apparatus, and methods for creating alternate-mode applications |
US7609669B2 (en) | 2005-02-14 | 2009-10-27 | Vocollect, Inc. | Voice directed system and method configured for assured messaging to multiple recipients |
US9653072B2 (en) * | 2005-05-13 | 2017-05-16 | Nuance Communications, Inc. | Apparatus and method for forming search engine queries based on spoken utterances |
US20140288935A1 (en) * | 2005-05-13 | 2014-09-25 | At&T Intellectual Property Ii, L.P. | Apparatus and method for forming search engine queries based on spoken utterances |
US20080262995A1 (en) * | 2007-04-19 | 2008-10-23 | Microsoft Corporation | Multimodal rating system |
US10133372B2 (en) * | 2007-12-20 | 2018-11-20 | Nokia Technologies Oy | User device having sequential multimodal output user interface |
US20090164207A1 (en) * | 2007-12-20 | 2009-06-25 | Nokia Corporation | User device having sequential multimodal output user interface |
US20090327422A1 (en) * | 2008-02-08 | 2009-12-31 | Rebelvox Llc | Communication application for conducting conversations including multiple media types in either a real-time mode or a time-shifted mode |
US8509123B2 (en) | 2008-02-08 | 2013-08-13 | Voxer Ip Llc | Communication application for conducting conversations including multiple media types in either a real-time mode or a time-shifted mode |
US8412845B2 (en) | 2008-02-08 | 2013-04-02 | Voxer Ip Llc | Communication application for conducting conversations including multiple media types in either a real-time mode or a time-shifted mode |
US8321582B2 (en) * | 2008-02-08 | 2012-11-27 | Voxer Ip Llc | Communication application for conducting conversations including multiple media types in either a real-time mode or a time-shifted mode |
US9054912B2 (en) | 2008-02-08 | 2015-06-09 | Voxer Ip Llc | Communication application for conducting conversations including multiple media types in either a real-time mode or a time-shifted mode |
US8831580B2 (en) * | 2008-08-15 | 2014-09-09 | Hipcricket, Inc. | Systems and methods of initiating a call |
US8831581B2 (en) | 2008-08-15 | 2014-09-09 | Hipcricket, Inc. | System and methods of initiating a call |
US20100048191A1 (en) * | 2008-08-15 | 2010-02-25 | Bender Douglas F | Systems and methods of initiating a call |
EP2335240B1 (en) * | 2008-09-09 | 2014-07-16 | Deutsche Telekom AG | Voice dialog system with reject avoidance process |
US9009056B2 (en) | 2008-09-09 | 2015-04-14 | Deutsche Telekom Ag | Voice dialog system with reject avoidance process |
US20100298009A1 (en) * | 2009-05-22 | 2010-11-25 | Amazing Technologies, Llc | Hands free messaging |
US11381415B2 (en) * | 2009-11-13 | 2022-07-05 | Samsung Electronics Co., Ltd. | Method and apparatus for providing remote user interface services |
US11979252B2 (en) | 2009-11-13 | 2024-05-07 | Samsung Electronics Co., Ltd. | Method and apparatus for providing remote user interface services |
WO2012031246A1 (en) * | 2010-09-03 | 2012-03-08 | Hulu Llc | Method and apparatus for callback supplementation of media program metadata |
US8392452B2 (en) | 2010-09-03 | 2013-03-05 | Hulu Llc | Method and apparatus for callback supplementation of media program metadata |
US8914409B2 (en) | 2010-09-03 | 2014-12-16 | Hulu, LLC | Method and apparatus for callback supplementation of media program metadata |
US20120237009A1 (en) * | 2011-03-15 | 2012-09-20 | Mitel Networks Corporation | Systems and methods for multimodal communication |
US9699632B2 (en) | 2011-09-28 | 2017-07-04 | Elwha Llc | Multi-modality communication with interceptive conversion |
US9503550B2 (en) | 2011-09-28 | 2016-11-22 | Elwha Llc | Multi-modality communication modification |
US9762524B2 (en) | 2011-09-28 | 2017-09-12 | Elwha Llc | Multi-modality communication participation |
US9788349B2 (en) | 2011-09-28 | 2017-10-10 | Elwha Llc | Multi-modality communication auto-activation |
US9794209B2 (en) | 2011-09-28 | 2017-10-17 | Elwha Llc | User interface for multi-modality communication |
US20130078975A1 (en) * | 2011-09-28 | 2013-03-28 | Royce A. Levien | Multi-party multi-modality communication |
US9477943B2 (en) | 2011-09-28 | 2016-10-25 | Elwha Llc | Multi-modality communication |
US9002937B2 (en) * | 2011-09-28 | 2015-04-07 | Elwha Llc | Multi-party multi-modality communication |
US9635067B2 (en) | 2012-04-23 | 2017-04-25 | Verint Americas Inc. | Tracing and asynchronous communication network and routing method |
US8880631B2 (en) | 2012-04-23 | 2014-11-04 | Contact Solutions LLC | Apparatus and methods for multi-mode asynchronous communication |
US9172690B2 (en) | 2012-04-23 | 2015-10-27 | Contact Solutions LLC | Apparatus and methods for multi-mode asynchronous communication |
US10015263B2 (en) | 2012-04-23 | 2018-07-03 | Verint Americas Inc. | Apparatus and methods for multi-mode asynchronous communication |
US20130339455A1 (en) * | 2012-06-19 | 2013-12-19 | Research In Motion Limited | Method and Apparatus for Identifying an Active Participant in a Conferencing Event |
US9222788B2 (en) * | 2012-06-27 | 2015-12-29 | Microsoft Technology Licensing, Llc | Proactive delivery of navigation options |
US20140005921A1 (en) * | 2012-06-27 | 2014-01-02 | Microsoft Corporation | Proactive delivery of navigation options |
US11821735B2 (en) | 2012-06-27 | 2023-11-21 | Uber Technologies, Inc. | Proactive delivery of navigation options |
US10365114B2 (en) | 2012-06-27 | 2019-07-30 | Uber Technologies, Inc. | Proactive delivery of navigation options |
US11320274B2 (en) | 2012-06-27 | 2022-05-03 | Uber Technologies, Inc. | Proactive delivery of navigation options |
US10506101B2 (en) | 2014-02-06 | 2019-12-10 | Verint Americas Inc. | Systems, apparatuses and methods for communication flow modification |
US9218410B2 (en) | 2014-02-06 | 2015-12-22 | Contact Solutions LLC | Systems, apparatuses and methods for communication flow modification |
US11574621B1 (en) * | 2014-12-23 | 2023-02-07 | Amazon Technologies, Inc. | Stateless third party interactions |
US9166881B1 (en) | 2014-12-31 | 2015-10-20 | Contact Solutions LLC | Methods and apparatus for adaptive bandwidth-based communication management |
US9641684B1 (en) | 2015-08-06 | 2017-05-02 | Verint Americas Inc. | Tracing and asynchronous communication network and routing method |
US10063647B2 (en) | 2015-12-31 | 2018-08-28 | Verint Americas Inc. | Systems, apparatuses, and methods for intelligent network communication and engagement |
US10848579B2 (en) | 2015-12-31 | 2020-11-24 | Verint Americas Inc. | Systems, apparatuses, and methods for intelligent network communication and engagement |
US11004452B2 (en) * | 2017-04-14 | 2021-05-11 | Naver Corporation | Method and system for multimodal interaction with sound device connected to network |
CN107767856A (en) * | 2017-11-07 | 2018-03-06 | 中国银行股份有限公司 | Speech processing method, device, and server |
US20220122608A1 (en) * | 2019-07-17 | 2022-04-21 | Google Llc | Systems and methods to verify trigger keywords in acoustic-based digital assistant applications |
US11869504B2 (en) * | 2019-07-17 | 2024-01-09 | Google Llc | Systems and methods to verify trigger keywords in acoustic-based digital assistant applications |
US12230269B2 (en) | 2019-07-17 | 2025-02-18 | Google Llc | Systems and methods to verify trigger keywords in acoustic-based digital assistant applications |
US11818785B2 (en) * | 2020-10-01 | 2023-11-14 | Zebra Technologies Corporation | Reestablishment control for dropped calls |
US20220358916A1 (en) * | 2021-05-10 | 2022-11-10 | International Business Machines Corporation | Creating a virtual context for a voice command |
US11646024B2 (en) * | 2021-05-10 | 2023-05-09 | International Business Machines Corporation | Creating a virtual context for a voice command |
Also Published As
Publication number | Publication date |
---|---|
US7233655B2 (en) | 2007-06-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7233655B2 (en) | Multi-modal callback | |
US7254384B2 (en) | Multi-modal messaging | |
US7640006B2 (en) | Directory assistance with multi-modal messaging | |
US8755494B2 (en) | Method and apparatus for voice interactive messaging | |
US20020069060A1 (en) | Method and system for automatically managing a voice-based communications systems | |
US20080059179A1 (en) | Method for centrally storing data | |
EP2367334B1 (en) | Service authorizer | |
AU2002347406A1 (en) | Multi-modal messaging and callback with service authorizer and virtual customer database | |
US20050114139A1 (en) | Method of operating a speech dialog system | |
US7689425B2 (en) | Quality of service call routing system using counselor and speech recognition engine and method thereof | |
JP2002542727A (en) | Method and system for providing Internet-based information in audible form | |
US6640210B1 (en) | Customer service operation using wav files | |
EP1708470B1 (en) | Multi-modal callback system | |
KR100376409B1 (en) | Service method for recording a telephone message and the system thereof | |
AU2007216929C1 (en) | Multi-modal callback | |
KR100506395B1 (en) | Information Offering Method and System using Agents and Automatic Speech Recognition Server in Telematics Service | |
WO2008100420A1 (en) | Providing network-based access to personalized user information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: ACCENTURE GLOBAL SERVICES GMBH, SWITZERLAND; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GAILEY, MICHAEL L.;PORTMAN, ERIC A.;BURGISS, MICHAEL J.;REEL/FRAME:013364/0410;SIGNING DATES FROM 20021001 TO 20021003 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| CC | Certificate of correction | |
| CC | Certificate of correction | |
| FPAY | Fee payment | Year of fee payment: 4 |
| AS | Assignment | Owner name: ACCENTURE GLOBAL SERVICES LIMITED, IRELAND; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ACCENTURE GLOBAL SERVICES GMBH;REEL/FRAME:025700/0287; Effective date: 20100901 |
| FPAY | Fee payment | Year of fee payment: 8 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 12 |