US8712779B2 - Information retrieval system, information retrieval method, and information retrieval program - Google Patents
- Publication number
- US8712779B2 (Application No. US12/530,765 / US53076508A)
- Authority
- US
- United States
- Prior art keywords
- information
- speech
- unit
- similarity
- degree
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
- G06F16/432—Query formulation
- G06F16/433—Query formulation using audio data
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
Definitions
- the present invention relates to an information retrieval system, an information retrieval method, and an information retrieval program, and in particular, to an information retrieval system, an information retrieval method, and an information retrieval program for presenting appropriate information matching content of inputted speech from a large quantity of stored information.
- Technology for rapidly retrieving and presenting information according to content of a video or speech and the like is very useful in many kinds of situations. For example, if a system for sequentially presenting related material matching the course of proceedings during a meeting, or a system for automatically obtaining information related to content of inquiries to a telephone call center can be realized, a user can focus on the meeting or response itself, and productivity is improved.
- An example of a conventional system for performing information retrieval based on speech in this way is disclosed in Patent Document 1.
- This conventional information retrieval system is configured from a speech input means; a speech recognition means; an action data storage means in which keywords and the actions corresponding to them are stored; a keyword extraction means for extracting keywords registered in advance from the speech recognition result text; an action data condition judging means for judging whether or not the extracted keywords satisfy any action data condition; and an action execution means for executing an action when a condition is established (the action data condition judging means and the action execution means of FIG. 5 correspond to an "information presentation processing means" in Patent Document 1).
- the conventional information retrieval system having this type of configuration operates as follows. Specifically, a speech signal received from a speech input unit is converted to text by the speech recognition means. A keyword registered in advance is extracted from the converted text. Finally, a judgment is made as to whether or not the extracted keyword satisfies a condition for execution of an action that has been designed in advance, and if the judgment result is true, a prescribed action is executed. A principal action is to display text that has been determined in advance.
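- As an illustration of this conventional keyword-based flow, a minimal sketch follows; the keyword list, the conditions, and the actions are hypothetical examples and are not taken from Patent Document 1.

```python
# Minimal sketch of the conventional keyword-based flow described above.
# The keyword list, the action conditions, and the actions are hypothetical
# examples; they are not taken from Patent Document 1.

ACTION_DATA = {
    # condition (all keywords must appear) -> action to execute
    ("budget", "schedule"): "display the project plan document",
    ("price", "catalog"): "display the product catalog",
}
KEYWORDS = {"budget", "schedule", "price", "catalog"}


def extract_keywords(recognized_text: str) -> set:
    """Extract pre-registered keywords from speech recognition result text."""
    return set(recognized_text.lower().split()) & KEYWORDS


def run_actions(recognized_text: str) -> list:
    """Judge the action-data conditions and return the actions to execute."""
    found = extract_keywords(recognized_text)
    return [action for condition, action in ACTION_DATA.items()
            if found.issuperset(condition)]


print(run_actions("let us check the budget and the schedule for next month"))
# -> ['display the project plan document']
```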
- The entire disclosures of Patent Document 1 and Non-Patent Document 1 are incorporated herein by reference. An analysis of the related technology in view of the present invention is given as follows.
- A first problem is that, when using a keyword-based method in which keywords are assigned as retrieval tags to the information that is the target of search, it is very difficult to register in advance keywords that are appropriate and of sufficient quantity for practical use.
- A second problem is that, with a method based on similar-text retrieval technology, the information that can be presented is significantly limited.
- Finding meeting minutes that are similar, as text, to meeting speech can be realized to some extent by similar-text retrieval technology.
- a “spoken word” expression refers to an expression used in “what is called natural conversation with high spontaneity” (Non-Patent Document 1).
- a written word expression refers to a linguistic expression used in a newspaper, research paper, memo, or the like.
- Test set perplexity is a measure of the similarity between texts; the higher the value, the lower the similarity that is indicated.
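- For reference, the standard definition of test set perplexity is sketched below; the patent text itself does not spell out this formula.

```latex
% Standard definition of test set perplexity for a test word sequence
% w_1 ... w_N scored with a language model P estimated from the other text:
\[
  \mathrm{PP}(w_1,\dots,w_N)
    = P(w_1,\dots,w_N)^{-\frac{1}{N}}
    = 2^{-\frac{1}{N}\sum_{i=1}^{N}\log_2 P(w_i \mid w_1,\dots,w_{i-1})}
\]
% A poorly matching (dissimilar) text yields a high perplexity value.
```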
- A third problem is that, in both methods, information retrieval accuracy is not robust against misrecognition.
- the present invention has been made in view of the abovementioned circumstances, and provides an information retrieval system that can automatically detect with good accuracy and present appropriate information matching content of input speech, without requiring the effort of registering keywords in advance.
- There is provided an information retrieval system that comprises: a speech input unit for inputting speech; an information storage unit for storing information with which speech information, of a length with which text degree of similarity is computable, is associated as a retrieval tag; an information selection unit for comparing a feature of spoken content extracted from each item of the speech information with a feature of spoken content extracted from the input speech, to select information with which speech information similar to the input speech is associated; and an output unit for outputting the information selected by the information selection unit as information associated with the input speech.
- There is also provided an information retrieval method of retrieving information associated with input speech in an information retrieval system provided with a speech input unit for inputting speech, an information storage unit for storing information with which speech information, of a length with which text degree of similarity is computable, is associated as a retrieval tag, and an output unit for outputting information associated with the input speech; wherein the information retrieval system compares a feature of spoken content extracted from each speech information item stored by the information storage unit with a feature of spoken content extracted from the input speech, selects information with which speech information similar to the input speech is associated, and outputs that information as a retrieval result.
- There is further provided an information retrieval program for execution in an information retrieval system provided with a speech input unit for inputting speech, an information storage unit for storing information with which speech information, of a length with which text degree of similarity is computable, is associated as a retrieval tag, and an output unit for outputting information associated with the input speech; the information retrieval program causing the information retrieval system to execute processing of comparing a feature of spoken content extracted from each speech information item stored by the information storage unit with a feature of spoken content extracted from the input speech, selecting information with which speech information similar to the input speech is associated, and passing that information to the output unit.
- According to the present invention, it is possible to retrieve and present information appropriate for the input speech without registering a large number of keywords in advance.
- The reason is that information retrieval is performed using the degree of similarity between the input speech and the speech information, without depending on specific keywords.
- According to the present invention, it is also possible to retrieve and present information that has no direct relation or similarity to the input speech.
- The reason is that information retrieval is performed using the degree of similarity between the input speech and the speech information associated as a retrieval tag (index) with the information that is the target of retrieval, rather than with that information itself.
- FIG. 2 is a block diagram representing a configuration of the information retrieval system according to a second exemplary embodiment of the present invention.
- FIG. 3 is a block diagram representing a configuration of the information retrieval system according to a third exemplary embodiment of the present invention.
- FIG. 4 is a block diagram representing a configuration of an exemplary embodiment (a meeting material automatic presentation device) to which the present invention is applied.
- FIG. 5 is a block diagram representing a configuration of a conventional information retrieval system operating according to input speech.
- As described above, the present invention provides an information retrieval system that is provided with: a speech input unit for inputting speech; an information storage unit for storing information with which speech information, of a length with which text degree of similarity is computable, is associated as a retrieval tag; an information selection unit for comparing a feature of spoken content extracted from each item of the speech information with a feature of spoken content extracted from the input speech, to select information with which speech information similar to the input speech is associated; and an output unit for outputting information selected by the information selection unit, as information associated with the input speech.
- the information selection unit of the information retrieval system can operate to select the speech information that has a word set similar to a word set included in the input speech.
- The information retrieval system is further provided with a speech recognition unit for converting a speech signal into text data, and a degree of similarity computation unit for computing the degree of similarity between two or more speech recognition results generated by the speech recognition unit; wherein speech recognition of each of a speech signal stored as the speech information in the information storage unit and of the input speech is performed by the speech recognition unit; the degree of similarity between the respective speech recognition results is computed by the degree of similarity computation unit; and the information selection unit can be operated so as to select information with which speech information having a high degree of similarity to the input speech is associated, based on the degree of similarity computed by the degree of similarity computation unit.
- The information retrieval system is further provided with a speech recognition unit for converting a speech signal into text data, and a degree of similarity computation unit for computing the degree of similarity between two or more speech recognition results generated by the speech recognition unit; wherein speech recognition of the input speech is performed by the speech recognition unit; the degree of similarity between a speech recognition result stored in advance as speech information in the information storage unit and the speech recognition result of the input speech is computed by the degree of similarity computation unit; and the information selection unit can be operated so as to select information with which speech information having a high degree of similarity to the input speech is associated, based on the degree of similarity computed by the degree of similarity computation unit.
- The information retrieval system is further provided with a speech recognition unit for converting a speech signal into text data; a text feature vector generation unit for computing a text feature vector from a speech recognition result generated by the speech recognition unit; and a degree of similarity computation unit for computing the degree of similarity between two or more text feature vectors computed by the text feature vector generation unit; wherein, after speech recognition of each of the input speech and of the speech signal stored as speech information in the information storage unit is performed by the speech recognition unit, respective text feature vectors are computed by the text feature vector generation unit; the degree of similarity between the text feature vectors is computed by the degree of similarity computation unit; and the information selection unit can be operated so as to select information with which speech information having a high degree of similarity to the input speech is associated, based on the degree of similarity computed by the degree of similarity computation unit.
- The information retrieval system is further provided with a speech recognition unit for converting a speech signal into text data; a text feature vector generation unit for computing a text feature vector from a speech recognition result generated by the speech recognition unit; and a degree of similarity computation unit for computing the degree of similarity between two or more text feature vectors computed by the text feature vector generation unit; wherein, after speech recognition of the input speech is performed by the speech recognition unit, respective text feature vectors are computed by the text feature vector generation unit from the speech recognition result of the input speech and from a speech recognition result stored in advance as speech information in the information storage unit; the degree of similarity between the text feature vectors is computed by the degree of similarity computation unit; and the information selection unit can be operated so as to select information with which speech information having a high degree of similarity to the input speech is associated, based on the degree of similarity computed by the degree of similarity computation unit.
- The information retrieval system is further provided with a speech recognition unit for converting a speech signal into text data; a text feature vector generation unit for computing a text feature vector from a speech recognition result generated by the speech recognition unit; and a degree of similarity computation unit for computing the degree of similarity between two or more text feature vectors computed by the text feature vector generation unit; wherein, after speech recognition of the input speech is performed by the speech recognition unit, a text feature vector is computed by the text feature vector generation unit; the degree of similarity between a text feature vector stored in advance as speech information in the information storage unit and the text feature vector computed from the input speech is computed by the degree of similarity computation unit; and the information selection unit can be operated so as to select information with which speech information having a high degree of similarity to the input speech is associated, based on the degree of similarity computed by the degree of similarity computation unit.
- It is possible to use, as the text feature vector stored in advance in the information storage unit of the information retrieval system, a text feature vector generated by a speech recognition unit equivalent to the above speech recognition unit and a text feature vector generation unit equivalent to the above text feature vector generation unit.
- The speech recognition unit of the information retrieval system can divide the input speech into blocks of arbitrary size and sequentially output a speech recognition result for each block; for each speech recognition result output, the degree of similarity computation unit can re-compute the degree of similarity between the speech information and the speech recognition results of all blocks that have been outputted; and the information selection unit can be operated so as to re-select information with which speech information having a high degree of similarity to the input speech is associated, based on the degree of similarity that has been re-computed.
- the speech recognition unit of the information retrieval system can divide the input speech into blocks of arbitrary size, sequentially output a speech recognition result for each block; and for each speech recognition result output, the text feature vector generation unit can generate a text feature vector for a speech recognition result of each of the blocks; and in addition, the degree of similarity computation unit can re-compute the degree of similarity of the speech information and the speech recognition result of all blocks that have been outputted; and the information selection unit can be operated so as to re-select information with which speech information having a high degree of similarity to the input speech is associated, based on the degree of similarity that has been re-computed.
- The information retrieval system is further provided with a buffer unit for holding speech recognition results obtained by the speech recognition unit, and a feature selection unit for selecting a speech recognition result from the buffer unit, according to a prescribed feature selection rule, to be inputted to the degree of similarity computation unit; the degree of similarity computation unit can be operated so as to use the speech recognition result selected by the feature selection unit to compute the respective degree of similarity.
- The information retrieval system is further provided with a buffer unit for holding text feature vectors generated by the text feature vector generation unit, and a feature vector selection unit for selecting a text feature vector from the buffer unit, according to a prescribed feature selection rule, to be inputted to the degree of similarity computation unit; the degree of similarity computation unit can be operated so as to use the text feature vector selected by the feature vector selection unit to compute the respective degree of similarity.
- the feature selection unit or the feature vector selection unit of the information retrieval system can be operated so as to change the prescribed feature selection rule, based on feedback from the information selection unit.
- An information registration unit for registering, in the information storage unit, the information to be retrieved and the speech information can also be provided in the information retrieval system.
- the information retrieval system can further be provided with an information registration unit for registering the text feature vector or the speech recognition result held in the buffer unit, as the speech information associated with the information that is to be retrieved, in the information storage unit.
- the information retrieval system can further be provided with an information registration unit for registering a set of text feature vectors or speech recognition results selected by the feature selection unit, as the speech information associated with the information that is to be retrieved, in the information storage unit.
- the information registration unit of the information retrieval system can be operated so as to receive, and register in the information storage unit, input of speech information newly associated with information selected by the information selection unit.
- The information registration unit of the information retrieval system can also be made to operate so as to record, in the information storage unit, arbitrary information other than the information selected by the information selection unit, among the information referred to by a telephone reception agent during a call, as information related to the content of the conversation speech between the customer and that telephone reception agent.
- FIG. 1 is a block diagram representing a configuration of the information retrieval system according to a first exemplary embodiment of the present invention.
- the information retrieval system according to the present exemplary embodiment is configured by including a speech input unit 101 , a speech recognition unit 102 , an information storage unit 103 , a degree of similarity computation unit 104 , and an information selection unit 105 .
- the speech input unit 101 receives as input, in sequence, digital speech signals from file input, a microphone device, or the like.
- The speech recognition unit 102 performs speech recognition processing with the digital speech signals as input, and performs conversion to a speech recognition result.
- the speech recognition result includes, besides text obtained as the result of the speech recognition processing, the likelihood thereof, time of appearance and part of speech information of each word composing the text, second or lower rank recognition candidates, and the like.
- the information storage unit 103 stores various types of information finally presented to a user and digital speech signals used as retrieval tags of each item of information.
- the form of the stored information is arbitrary. For example, forms such as text, speech, image, video, hyperlink, and the like can be considered. Furthermore, there is no particular limitation to this content.
- speech (tag speech) used as the retrieval tags must fulfill the following 2 conditions.
- the speech used as a retrieval tag must have a length of an extent such that text degree of similarity with the input speech can be computed.
- the speech used as the retrieval tag must be such that similarity with the input speech can be expected.
- It is preferable that the speech used as the retrieval tag be speech recorded in the same environment and for the same task as the input speech.
- For example, if the input speech is conversational speech in a meeting, recorded speech of a similar meeting is suitable as the tag speech.
- a relationship between the information and the retrieval tag is not limited to being 1 to 1.
- a plurality of information items may be linked to one retrieval tag, and one information item may be linked to a plurality of retrieval tags.
- the degree of similarity computation unit 104 computes the degree of similarity of speech recognition results (two items of text data) outputted by the speech recognition unit 102 based on an arbitrary text degree of similarity computation algorithm.
- An arbitrary known method can be used as the text degree of similarity computation algorithm, and, for example, a method may be considered in which, with a frequency distribution of words appearing in the text as a feature amount vector, a cosine distance thereof is obtained.
- A limitation according to time or the like may be applied to the inputted text data, so that only parts of the data are compared.
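- A minimal sketch of the word-frequency / cosine-distance method mentioned above is given below; the function names and the toy inputs are illustrative assumptions, not part of the patent.

```python
# Minimal sketch of the word-frequency / cosine-distance method mentioned
# above; function names and the toy inputs are illustrative assumptions.
import math
from collections import Counter


def word_frequency_vector(text: str) -> Counter:
    """Frequency distribution of the words appearing in the text."""
    return Counter(text.lower().split())


def cosine_similarity(x: Counter, y: Counter) -> float:
    """Inner product of x and y divided by their respective norms."""
    dot = sum(x[w] * y[w] for w in x)
    norm_x = math.sqrt(sum(v * v for v in x.values()))
    norm_y = math.sqrt(sum(v * v for v in y.values()))
    if norm_x == 0.0 or norm_y == 0.0:
        return 0.0
    return dot / (norm_x * norm_y)


query = word_frequency_vector("recognition result of the input speech")
tag = word_frequency_vector("speech recognition result used as a retrieval tag")
print(cosine_similarity(query, tag))  # a value close to 1.0 means high similarity
```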
- The information selection unit 105 presents appropriate information to the user, based on the degree of similarity computed by the degree of similarity computation unit 104. Moreover, although omitted in the present exemplary embodiment, an output device such as a display, speakers, or the like is selected as appropriate in accordance with the form or presentation mode of the information presented to the user.
- With the information retrieval system of the present exemplary embodiment configured as described above, it is possible to directly compare the speech recognition result of the input speech with the speech recognition result of the speech that is a retrieval tag and, based on their degree of similarity, to present the information tagged by speech similar to the input speech. As a result, the task of selecting keywords in advance becomes unnecessary.
- Since the configuration is such that no direct comparison is performed between the information that is a target of retrieval and the input speech, it is also possible to include in the retrieval target information that has no direct relationship or similarity with the input speech. For example, by inputting a phrase (input speech) remembered in relation to a product raised as a topic in a previous meeting, it is possible to find a catalog (information) of the specific product associated with the recorded speech (retrieval tag) of that previous meeting.
- Since the configuration is such that speech recognition is performed on both the input speech and the speech that is a retrieval tag using the same speech recognition unit 102, their recognition error tendencies coincide, so that it is possible to inhibit the effect of misrecognition on the similar-text retrieval.
- Since the configuration is such that the association between the information that is a target of retrieval and the retrieval tag is not limited to being 1 to 1, it is possible to improve retrieval accuracy in comparison with a case limited to 1 to 1. That is, by linking a plurality of information items to one retrieval tag, the information presented for certain input speech increases, so the probability that information appropriate to the input speech will be presented becomes higher. Furthermore, by linking one information item to a plurality of retrieval tags, it is possible to present this information provided the input speech is similar to any one of the retrieval tag speeches.
- FIG. 2 is a block diagram representing a configuration of the information retrieval system according to the second exemplary embodiment of the present invention.
- the information retrieval system according to the present exemplary embodiment is configured by including a speech input unit 201 , a speech recognition unit 202 , an information storage unit 203 , a degree of similarity computation unit 204 , an information selection unit 205 , and an information registration unit 206 .
- Since the speech input unit 201, the speech recognition unit 202, the degree of similarity computation unit 204, and the information selection unit 205 basically operate in the same way as the respective units in the abovementioned first exemplary embodiment, the description below focuses on points of difference from the abovementioned first exemplary embodiment.
- the information storage unit 203 stores various types of information finally presented to the user and speech recognition results used as retrieval tags of the respective items of information. There is no particular limitation to the stored information, in the same way as for the first exemplary embodiment, but the speech recognition results used as the retrieval tags must satisfy the following two conditions.
- A speech recognition result used as a retrieval tag must have a length of an extent such that the text degree of similarity with the speech recognition result of the input speech can be computed.
- The speech recognition result used as the retrieval tag must be obtainable, by speech recognition by the speech recognition unit 202 itself or by a speech recognition unit operating under approximately the same conditions, from speech for which similarity with the input speech can be expected.
- the information registration unit 206 registers a recognition result of the input speech in question as a new retrieval tag.
- The information registration unit 206 may also be made to operate such that arbitrary speech input is received and a recognition result of that arbitrary input speech is registered in the information storage unit 203 as a new retrieval tag.
- the information registration unit 206 registers a recognition result of speech inputted by the speech input unit 201 when the information is received, as a retrieval tag for this information, in the information storage unit 203 .
- the information registration unit 206 receives a text feature amount from the degree of similarity computation unit 204 , and performs registration in the information storage unit 203 .
- Since the speech recognition result of the speech, rather than the speech signal itself, is stored as the retrieval tag registered in the information storage unit 203, the storage capacity necessary for the information storage unit 203 is economized in comparison with the first exemplary embodiment. Furthermore, since the target for recognition in the speech recognition unit 202 is only the input speech obtained from the speech input unit 201, this is also advantageous with regard to computation amount.
- Since the information registration unit 206 is arranged, it is possible to automatically register a recognition result (or a text feature amount) of speech inputted once as a new retrieval tag for the information presented with regard to that speech, to be used in retrieval from the next time onwards. As a result, the database is automatically strengthened merely by the user using the system, and a significant effect is obtained in that the hit ratio improves.
- FIG. 3 is a block diagram representing a configuration of the information retrieval system according to a third exemplary embodiment of the present invention.
- the information retrieval system according to the present exemplary embodiment is configured by including a speech input unit 301 , a speech recognition unit 302 , a recognition result holding unit 303 , an information storage unit 304 , a degree of similarity computation unit 305 , an information selection unit 306 , and an information registration unit 307 .
- Since the speech input unit 301, the speech recognition unit 302, and the information storage unit 304 basically operate in the same way as the respective units in the abovementioned first and second exemplary embodiments, the description below focuses on points of difference from each of the abovementioned exemplary embodiments.
- the recognition result holding unit 303 sequentially records a speech recognition result outputted by the speech recognition unit 302 as a block. In addition, the recognition result holding unit 303 redoes similar text retrieval by calling the degree of similarity computation unit 305 and the information selection unit 306 each time output from the speech recognition unit 302 is recorded.
- the degree of similarity computation unit 305 and the information selection unit 306 basically operate in the same way as the first and the second exemplary embodiments, but in the present exemplary embodiment, it is possible to apply feedback so that the degree of similarity computation unit 305 attempts re-computation, based on an instruction of the information selection unit 306 .
- For example, the degree of similarity computation unit 305 may re-compute the degree of similarity using only speech recognition results added relatively recently to the recognition result holding unit 303, or may re-compute it after applying weighting to the speech recognition results according to the time of the corresponding input speech.
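- The following sketch illustrates how such re-computation over recent or time-weighted blocks could look; the data structures and the exponential-decay weighting are assumptions made for this illustration, not the patent's prescribed algorithm.

```python
# Sketch of the two re-computation strategies mentioned above: use only
# recently added blocks, or weight all blocks by the age of the corresponding
# input speech. The exponential-decay weighting is an assumption made here.
from collections import Counter


def combined_feature(blocks, now, recent_only=False, half_life=120.0):
    """blocks: list of (timestamp, word-frequency Counter) per recognized block."""
    combined = Counter()
    for timestamp, vector in blocks:
        age = now - timestamp
        if recent_only and age > half_life:
            continue  # strategy 1: drop blocks that are no longer recent
        weight = 0.5 ** (age / half_life)  # strategy 2: time-decayed weight
        for word, count in vector.items():
            combined[word] += weight * count
    return combined
```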
- the information registration unit 307 basically performs an operation similar to the second exemplary embodiment, but the speech recognition result associated with information, as a retrieval tag, is a speech recognition result held by the recognition result holding unit 303 at the exact time at which the information registration unit 307 operates. Therefore, even if a similar speech signal is received as input from the speech input unit 301 , according to timing at which the information registration unit 307 operates, a different retrieval tag (speech recognition result) is given to the information.
- By the recognition result holding unit 303 sequentially calling the degree of similarity computation unit 305 and the information selection unit 306 according to the output of the speech recognition unit 302, it is possible to present appropriate information, based on the content at each point in the speech, for an input speech signal received sequentially.
- This type of operation is an operation that is particularly suited to speech that proceeds while moving between/among several topics, as in a meeting or in a telephone call response.
- In the information retrieval system of the present exemplary embodiment, by receiving feedback from the information selection unit 306 and appropriately selecting and outputting, from the recognition result holding unit 303, the speech recognition results used by the degree of similarity computation unit 305, it is possible to present appropriate information that follows local (topical) changes in the speech content.
- By the information registration unit 307 using, as a retrieval tag, the speech recognition result held in the recognition result holding unit 303 at the exact time at which it operates, it becomes possible to generate a retrieval tag that more strictly represents the local speech content.
- FIG. 4 is a block diagram representing a configuration of a meeting material automatic presentation device that performs automatic presentation of meeting material based on input speech.
- the meeting material automatic presentation device is configured by including a microphone 401 , a speech recognition unit 402 , a text feature vector generation unit 403 , a text feature vector buffer 404 , a text feature vector selection unit 405 , a degree of similarity computation unit 406 , an information selection unit 407 , a display device 408 , a knowledge database 409 , and an information registration interface 410 .
- The microphone 401 receives speech as input, performs A/D conversion of it to a digital signal, and introduces it to the system.
- Clearly, a configuration is also possible in which speech from a telephone line, a television, or the like is inputted, and the system can also be used for video conferences and the like.
- the speech recognition unit 402 analyzes a digital speech signal received from the microphone 401 , performs speech recognition processing based on an acoustic model or language model given in advance, and outputs a recognition result.
- For example, LVCSR (Large Vocabulary Continuous Speech Recognition) can be used for this speech recognition processing.
- the stochastic language model is one that models distribution of probability that a certain string of words is observed, but since this type of mechanism is used, there is a disadvantage in that unknown words cannot be recognized.
- When words not known to the language model are included in the input speech, misrecognition occurs, involving the preceding and subsequent words, as another string of words that is acoustically close and has a high linguistic appearance probability.
- Generally, non-speech segments are removed from the input speech as recognition pre-processing, and a clipping process is performed for each utterance segment. After that, matching with the model is performed for each clipped utterance segment. As a result, speech recognition results are often outputted in order, one for each utterance segment.
- a speech recognition result outputted by the speech recognition unit 402 is not limited to the text (string of words) only. There is also output of recognition likelihood of each word, reading information, part of speech information, class information, time information, and the like. Furthermore, there are also cases of secondary or lower rank recognition candidates and reliability of the speech recognition result being outputted.
- the text feature vector generation unit 403 generates a feature amount of a type characterizing the content of text, from an input text that has been given.
- A most classical and widely used method is to use the appearance frequency distribution of the words appearing in the text. In the case of combining with speech recognition, it is possible not merely to count the frequency of words included in the speech recognition result text, but also to use second or lower rank recognition candidates, and to weight the word appearance frequencies by likelihood or reliability.
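- The weighting idea can be illustrated as follows; the (word, confidence) input format and the toy n-best lists are assumptions made for this sketch.

```python
# Illustrative sketch of the weighting idea above: word frequencies are
# accumulated from the first and lower-rank recognition candidates, weighted
# by confidence. The (word, confidence) format is an assumption of this sketch.
from collections import defaultdict


def text_feature_vector(candidates):
    """candidates[k] is the k-th ranked hypothesis as (word, confidence) pairs."""
    vector = defaultdict(float)
    for hypothesis in candidates:
        for word, confidence in hypothesis:
            vector[word] += confidence  # confidence-weighted appearance count
    return dict(vector)


nbest = [
    [("meeting", 0.92), ("minutes", 0.85), ("budget", 0.80)],  # 1st candidate
    [("meeting", 0.90), ("minutes", 0.70), ("budged", 0.30)],  # 2nd candidate
]
print(text_feature_vector(nbest))
```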
- the text feature vector buffer 404 sequentially records the text feature vector generated by the text feature vector generation unit 403 .
- the text feature vector generation unit 403 when new input is given, generates the text feature vector not only by this input but also by combining with a past text feature vector recorded in the text feature vector buffer 404 .
- the text feature vector selection unit 405 determines the text feature vector to be taken according to a prescribed rule from the text feature vector buffer 404 , and gives this to the following degree of similarity computation unit 406 .
- The prescribed rule may, for example, take a predetermined number (for example, 10) of the most recently added text feature vectors from the text feature vector buffer 404, or may take all text feature vectors obtained from input speech inputted within a fixed period before the present time.
- This type of windowing is well known in the field of text processing.
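- A minimal sketch of the two selection rules described above, assuming a buffer of (timestamp, vector) entries, is given below.

```python
# Minimal sketch of the two selection rules described above, assuming the
# buffer holds (timestamp, text feature vector) entries appended in order.

def select_last_n(buffer, n=10):
    """Rule 1: the n most recently added text feature vectors."""
    return [vector for _, vector in buffer[-n:]]


def select_time_window(buffer, now, window_seconds=300.0):
    """Rule 2: all vectors obtained within the last `window_seconds`."""
    return [vector for timestamp, vector in buffer if now - timestamp <= window_seconds]
```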
- the degree of similarity computation unit 406 compares a text feature vector obtained from the input speech, and a text feature vector stored in the knowledge database 409 .
- Various methods can be considered for the comparison, but a classical method is one of obtaining cosine distance between vectors.
- The cosine similarity of vectors X and Y is the inner product of X and Y divided by their respective norms. Text feature vectors whose cosine similarity is large (equivalently, whose cosine distance is small) are judged to have a high similarity.
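- Written out explicitly (this formula is standard and not quoted from the patent):

```latex
\[
  \cos(X, Y) = \frac{X \cdot Y}{\lVert X \rVert \, \lVert Y \rVert},
  \qquad
  d_{\cos}(X, Y) = 1 - \cos(X, Y)
\]
% A large cosine value (equivalently, a small cosine distance) indicates
% high similarity between two text feature vectors.
```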
- a distance measure other than the cosine distance may be used as the distance between the text feature vectors, and a completely different algorithm may also be used.
- the information selection unit 407 selects apparently appropriate information (meeting minutes, catalog, URL, etc.), based on the degree of similarity of the text feature vector of input speech, obtained by the degree of similarity computation unit 406 , and each text feature vector stored in the knowledge database 409 .
- the information selection unit 407 may select all information for which a tag is given to a text feature vector having a degree of similarity with the text feature vector of input speech exceeding a threshold, or may select only a number of items determined in advance in order of high degree of similarity.
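- A sketch of the two selection policies (threshold versus top-N) follows; the item names and the threshold value are illustrative assumptions.

```python
# Sketch of the two selection policies described above (threshold vs. top-N);
# the item names and the threshold value are illustrative assumptions.

def select_by_threshold(scored, threshold=0.5):
    """All information whose tag vector exceeds the similarity threshold."""
    return [info for info, similarity in scored if similarity > threshold]


def select_top_n(scored, n=3):
    """Only a predetermined number of items, in order of high similarity."""
    ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
    return [info for info, _ in ranked[:n]]


scored = [("minutes_2007-02.txt", 0.81), ("catalog_model_x.pdf", 0.64),
          ("http://example.com/spec", 0.32)]
print(select_by_threshold(scored), select_top_n(scored, n=2))
```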
- the display device 408 presents information selected by the information selection unit 407 to the user. Content of the information may be displayed as it is, or a portion of the information may be extracted.
- The knowledge database 409 stores a set of text feature vectors generated by the speech recognition unit 402 and the text feature vector generation unit 403, the information tagged by those text feature vectors, and the meeting speech on which the text feature vectors are based.
- Any type of information can be stored in the knowledge database 409 .
- For example, the meeting minutes of a certain meeting, a catalog of a product that is a topic of discussion, a URL of reference material, or the like may be considered.
- The text feature vector associated with such information, and the meeting speech on which it is based, are taken from that meeting itself.
- the text feature vector and the information are not limited to a relationship of 1 to 1.
- a plurality of information items may be associated with one text feature vector, or a plurality of separate text feature vectors may be associated with the same information item.
- a plurality of text feature vectors generated from one meeting speech item may be associated with information items that are completely different from one another.
- This type of text feature vector is obtained, for example, by inputting the first half and the second half of a meeting separately into the text feature vector generation unit 403.
- When one meeting includes several topics and it is appropriate to present different information for each of them, this mechanism demonstrates its effectiveness.
- When arbitrary information is given, the information registration interface 410 forms a set of that information and the closest (most recent) text feature vector of the input speech held in the text feature vector buffer 404, and registers it in the knowledge database 409.
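- A rough sketch of this registration step is given below; the in-memory structures stand in for the buffer and the knowledge database and are assumptions of the illustration.

```python
# Rough sketch of the registration step above: the given information item is
# paired with the most recent text feature vector in the buffer and stored in
# the knowledge database. The in-memory structures stand in for the actual
# buffer and database and are assumptions of this illustration.

knowledge_database = []            # list of (text feature vector, information)
text_feature_vector_buffer = []    # newest vector is appended last


def register_information(information):
    """Associate `information` with the latest feature vector and store it."""
    if not text_feature_vector_buffer:
        raise RuntimeError("no input speech has been processed yet")
    latest_vector = text_feature_vector_buffer[-1]
    knowledge_database.append((latest_vector, information))


text_feature_vector_buffer.append({"budget": 2.0, "schedule": 1.0})
register_information("minutes_2007-02.txt")
```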
- Information may be given explicitly (for example, via a text box in which a text file name is inputted, a Register button, and the like), or material referenced in another system at that point in time (for example, a URL displayed in a web browser) may be used.
- A mode may also be considered in which, at the timing at which information is registered in an external system, that information is transferred to the information registration interface 410 and registered.
- a speech signal obtained from the microphone 401 is sequentially received as input at the speech recognition unit 402 .
- The speech recognition unit 402 detects utterance segments from the input speech, and outputs a speech recognition result for each detected utterance segment.
- the speech recognition results obtained in this way are inputted one after another to the text feature vector generation unit 403 .
- the text feature vector generation unit 403 generates a text feature vector for the speech recognition result for each new speech recognition result that is given.
- the text feature vectors generated by the text feature vector generation unit 403 are sequentially stored in the text feature vector buffer 404 .
- the (relative time of the) input speech equivalent to this text feature vector and the speech recognition result itself may be stored together.
- The text feature vectors may continue to be held as long as speech is inputted from the microphone 401, or only recent items may be held by limiting the retention period and the number stored. It is sufficient if the text feature vectors necessary for the text feature vector generation unit 403 and the information registration interface 410 can be stored.
- Each time a new text feature vector is stored in the text feature vector buffer 404, the text feature vector selection unit 405 and the degree of similarity computation unit 406 are started.
- First, the text feature vector selection unit 405 operates: an appropriate text feature vector is taken from the text feature vector buffer 404 and given to the degree of similarity computation unit 406.
- The degree of similarity computation unit 406, which receives this, computes the degree of similarity between the text feature vector of the input speech and each text feature vector stored in the knowledge database 409, according to a predetermined algorithm.
- Words that hardly contribute to the similarity of the content, such as fillers, particles, and the like, may be removed from the text feature vector in advance of the computation. Furthermore, some type of normalizing process is often performed before the computation.
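- A minimal sketch of this preprocessing (stop-word removal followed by L2 normalization) follows; the stop-word list is an illustrative assumption.

```python
# Minimal sketch of the preprocessing mentioned above: remove words that
# contribute little to content similarity and L2-normalize the vector.
# The stop-word list is an illustrative assumption.
import math

STOP_WORDS = {"uh", "um", "well", "the", "a", "of", "to"}  # fillers / function words


def clean_and_normalize(vector):
    """Drop stop words, then scale the remaining vector to unit L2 norm."""
    kept = {word: value for word, value in vector.items() if word not in STOP_WORDS}
    norm = math.sqrt(sum(value * value for value in kept.values()))
    if norm == 0.0:
        return kept
    return {word: value / norm for word, value in kept.items()}


print(clean_and_normalize({"um": 3.0, "budget": 2.0, "schedule": 1.0}))
```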
- the information selection unit 407 uses the degree of similarity of the text feature vector of the input speech obtained by the degree of similarity computation unit 406 and of each text feature vector of the knowledge database, and selects information considered appropriate to the content of the input speech.
- There are also cases where the information selection unit 407 cannot select appropriate information from the degree of similarity alone.
- a case may be envisaged where there are several text feature vectors with degrees of similarity to a certain extent, but the difference between these degrees of similarity is small, and it is difficult to select any thereof.
- In such cases, the information selection unit 407 may apply feedback to the text feature vector generation unit 403.
- By this feedback, a text feature vector equivalent to input speech of an earlier time is included in the degree of similarity computation.
- the knowledge database 409 is built in advance using text feature vectors obtained from past meeting speech, and material used in the meeting minutes and in the meeting.
- The meeting speech on which the text feature vectors are based need not be held in the knowledge database 409, but may be held as one item of information. Furthermore, if the meeting speech is stored, there is the merit of being able to handle the situation in which the operation of the speech recognition unit 402 is later changed (for example, when a dictionary or an acoustic model is strengthened).
- the information registration interface 410 is useful for strengthening the knowledge database 409 in a simple manner.
- a user of the system of the present example inputs speech to the microphone 401 to obtain a presentation of appropriate information.
- The user can use the information registration interface 410 to newly associate (a text feature vector of) the current input speech with the presented information, and give an instruction to add this association to the knowledge database 409.
- the input speech at this time is used as a retrieval tag for subsequent times, and a retrieval hit rate of this information is improved.
- A recorder of meeting minutes can, by using the information registration interface 410, easily associate the meeting speech with the meeting minutes and register them in the knowledge database 409.
- a meeting proceedings support system that retrieves and presents related material based on the content of a meeting from speech of participants in the meeting.
- A reception support system that retrieves and presents material and model replies based on the content of customer enquiries, from conversation speech between customers and telephone reception agents in a call center.
- a learning support system for presenting, as needed, reference information based on lecture content, from what is spoken by a lecturer, such as in a lecture or class.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2007-070758 | 2007-03-19 | ||
JP2007070758 | 2007-03-19 | ||
PCT/JP2008/055048 WO2008114811A1 (fr) | 2007-03-19 | 2008-03-19 | Système, procédé et programme de recherche d'informations |
Publications (2)
Publication Number | Publication Date |
---|---|
US20100114571A1 US20100114571A1 (en) | 2010-05-06 |
US8712779B2 true US8712779B2 (en) | 2014-04-29 |
Family
ID=39765915
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/530,765 Active 2029-03-22 US8712779B2 (en) | 2007-03-19 | 2008-03-19 | Information retrieval system, information retrieval method, and information retrieval program |
Country Status (3)
Country | Link |
---|---|
US (1) | US8712779B2 (fr) |
JP (1) | JPWO2008114811A1 (fr) |
WO (1) | WO2008114811A1 (fr) |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2008114811A1 (ja) * | 2007-03-19 | 2010-07-08 | 日本電気株式会社 | 情報検索システム、情報検索方法及び情報検索用プログラム |
JP2011109292A (ja) * | 2009-11-16 | 2011-06-02 | Canon Inc | 撮像装置、その制御方法及びプログラム並びに記憶媒体 |
WO2011156719A1 (fr) * | 2010-06-10 | 2011-12-15 | Logoscope, Llc | Système et procédé de conversion de la parole en données multimédias affichées |
BR112014008457A2 (pt) * | 2011-10-18 | 2017-04-11 | Unify Gmbh & Co Kg | processo e dispositivo para obtenção de dados gerados em uma conferência |
US9836177B2 (en) | 2011-12-30 | 2017-12-05 | Next IT Innovation Labs, LLC | Providing variable responses in a virtual-assistant environment |
US10177926B2 (en) * | 2012-01-30 | 2019-01-08 | International Business Machines Corporation | Visualizing conversations across conference calls |
JP2014032532A (ja) * | 2012-08-03 | 2014-02-20 | Advanced Media Inc | オペレータ支援システム |
US9519858B2 (en) * | 2013-02-10 | 2016-12-13 | Microsoft Technology Licensing, Llc | Feature-augmented neural networks and applications of same |
US9672822B2 (en) * | 2013-02-22 | 2017-06-06 | Next It Corporation | Interaction with a portion of a content item through a virtual assistant |
US20140245140A1 (en) * | 2013-02-22 | 2014-08-28 | Next It Corporation | Virtual Assistant Transfer between Smart Devices |
JP2014232907A (ja) * | 2013-05-28 | 2014-12-11 | 雄太 安藤 | 現在位置に基づくサイトページを所望条件順に携帯端末に表示する方法及びシステム |
JP6208631B2 (ja) * | 2014-07-04 | 2017-10-04 | 日本電信電話株式会社 | 音声ドキュメント検索装置、音声ドキュメント検索方法及びプログラム |
JP2016095399A (ja) * | 2014-11-14 | 2016-05-26 | 日本電信電話株式会社 | 音声認識結果整形装置、方法及びプログラム |
JP6083654B2 (ja) * | 2015-02-23 | 2017-02-22 | 株式会社プロフィールド | データ処理装置、データ構造、データ処理方法、およびプログラム |
US9641680B1 (en) * | 2015-04-21 | 2017-05-02 | Eric Wold | Cross-linking call metadata |
JP6389795B2 (ja) * | 2015-04-24 | 2018-09-12 | 日本電信電話株式会社 | 音声認識結果整形装置、方法及びプログラム |
US10043517B2 (en) * | 2015-12-09 | 2018-08-07 | International Business Machines Corporation | Audio-based event interaction analytics |
US10091545B1 (en) * | 2016-06-27 | 2018-10-02 | Amazon Technologies, Inc. | Methods and systems for detecting audio output of associated device |
US11232101B2 (en) * | 2016-10-10 | 2022-01-25 | Microsoft Technology Licensing, Llc | Combo of language understanding and information retrieval |
JP6551848B2 (ja) * | 2016-12-13 | 2019-07-31 | 株式会社プロフィールド | データ処理装置、データ構造、データ処理方法、およびプログラム |
US20190207946A1 (en) * | 2016-12-20 | 2019-07-04 | Google Inc. | Conditional provision of access by interactive assistant modules |
US20180218729A1 (en) * | 2017-01-31 | 2018-08-02 | Interactive Intelligence Group, Inc. | System and method for speech-based interaction resolution |
US11436417B2 (en) | 2017-05-15 | 2022-09-06 | Google Llc | Providing access to user-controlled resources by automated assistants |
US10127227B1 (en) | 2017-05-15 | 2018-11-13 | Google Llc | Providing access to user-controlled resources by automated assistants |
US10417328B2 (en) * | 2018-01-05 | 2019-09-17 | Searchmetrics Gmbh | Text quality evaluation methods and processes |
JP6660974B2 (ja) * | 2018-03-30 | 2020-03-11 | 本田技研工業株式会社 | 情報提供装置、情報提供方法、およびプログラム |
WO2020032927A1 (fr) | 2018-08-07 | 2020-02-13 | Google Llc | Assemblage et évaluation de réponses d'assistant automatisé pour des préoccupations de confidentialité |
KR102345625B1 (ko) * | 2019-02-01 | 2021-12-31 | 삼성전자주식회사 | 자막 생성 방법 및 이를 수행하는 장치 |
US11710480B2 (en) * | 2019-08-07 | 2023-07-25 | International Business Machines Corporation | Phonetic comparison for virtual assistants |
KR20190113693A (ko) * | 2019-09-18 | 2019-10-08 | 엘지전자 주식회사 | 단어 사용 빈도를 고려하여 사용자의 음성을 인식하는 인공 지능 장치 및 그 방법 |
CN112185418B (zh) * | 2020-11-12 | 2022-05-17 | 度小满科技(北京)有限公司 | 音频处理方法和装置 |
-
2008
- 2008-03-19 JP JP2009505231A patent/JPWO2008114811A1/ja active Pending
- 2008-03-19 US US12/530,765 patent/US8712779B2/en active Active
- 2008-03-19 WO PCT/JP2008/055048 patent/WO2008114811A1/fr active Application Filing
Patent Citations (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0628415A (ja) | 1992-03-24 | 1994-02-04 | Nec Corp | 電子ファイリング装置 |
JPH06164871A (ja) | 1992-11-20 | 1994-06-10 | Ricoh Co Ltd | 文書画像保存装置 |
JP2000020551A (ja) | 1998-06-30 | 2000-01-21 | Brother Ind Ltd | 音声データ検索装置および記憶媒体 |
US20030125926A1 (en) * | 1998-10-09 | 2003-07-03 | Antonius M. W. Claassen | Automatic inquiry method and system |
JP2000222425A (ja) | 1999-02-02 | 2000-08-11 | Hitachi Ltd | 音声検索システム |
US20020128821A1 (en) * | 1999-05-28 | 2002-09-12 | Farzad Ehsani | Phrase-based dialogue modeling with particular application to creating recognition grammars for voice-controlled user interfaces |
US20050080614A1 (en) * | 1999-11-12 | 2005-04-14 | Bennett Ian M. | System & method for natural language processing of query answers |
US7406415B1 (en) * | 2000-03-04 | 2008-07-29 | Georgia Tech Research Corporation | Phonetic searching |
US20020032564A1 (en) * | 2000-04-19 | 2002-03-14 | Farzad Ehsani | Phrase-based dialogue modeling with particular application to creating a recognition grammar for a voice-controlled user interface |
JP2002278579A (ja) | 2001-03-16 | 2002-09-27 | Ricoh Co Ltd | 音声データ検索装置 |
US20080243514A1 (en) * | 2002-07-31 | 2008-10-02 | International Business Machines Corporation | Natural error handling in speech recognition |
JP2004295396A (ja) | 2003-03-26 | 2004-10-21 | Osaka Gas Co Ltd | 受付処理支援装置 |
US20070136067A1 (en) * | 2003-11-10 | 2007-06-14 | Scholl Holger R | Audio dialogue system and voice browsing method |
JP2005215726A (ja) | 2004-01-27 | 2005-08-11 | Advanced Media Inc | 話者に対する情報提示システム及びプログラム |
JP2005341015A (ja) | 2004-05-25 | 2005-12-08 | Hitachi Hybrid Network Co Ltd | 議事録作成支援機能を有するテレビ会議システム |
JP2007018389A (ja) | 2005-07-08 | 2007-01-25 | Just Syst Corp | データ検索装置、データ検索方法、データ検索プログラムおよびコンピュータに読み取り可能な記録媒体 |
US20070094007A1 (en) * | 2005-10-21 | 2007-04-26 | Aruze Corp. | Conversation controller |
US20070198250A1 (en) * | 2006-02-21 | 2007-08-23 | Michael Mardini | Information retrieval and reporting method system |
US20070294084A1 (en) * | 2006-06-13 | 2007-12-20 | Cross Charles W | Context-based grammars for automated speech recognition |
WO2008016102A1 (fr) | 2006-08-03 | 2008-02-07 | Nec Corporation | dispositif de calcul de similarité et dispositif de recherche d'informations |
US7747443B2 (en) * | 2006-08-14 | 2010-06-29 | Nuance Communications, Inc. | Apparatus, method, and program for supporting speech interface design |
US20080091412A1 (en) * | 2006-10-13 | 2008-04-17 | Brian Strope | Business listing search |
US20100114571A1 (en) * | 2007-03-19 | 2010-05-06 | Kentaro Nagatomo | Information retrieval system, information retrieval method, and information retrieval program |
US20090030680A1 (en) * | 2007-07-23 | 2009-01-29 | Jonathan Joseph Mamou | Method and System of Indexing Speech Data |
US8077984B2 (en) * | 2008-01-04 | 2011-12-13 | Xerox Corporation | Method for computing similarity between text spans using factored word sequence kernels |
Non-Patent Citations (4)
Title |
---|
"Converting voice to text data, Operators concentrate on their resonding", Nikkei Monozukuri, Japan, Nlkkei BP, Aug. 1, 2005, No. 611, pp. 62-63, Concise English Language explanation found in English Translation of Japanese Office Action. |
International Search Report for PCT/JP2008/055048 mailed Apr. 22, 2008. |
Japanese Office Action for JP 2009-505231 mailed on Feb. 26, 2013, with Partial English Translation. |
M. Nakamura et al., "The Analysis of Acoustic and Linguistic Characteristics in Spontaneous Japanese", The Institute of Electronics, Information and Communication Engineers, Technical Report of IEICE: SP2006-4 (May 2006). |
Also Published As
Publication number | Publication date |
---|---|
WO2008114811A1 (fr) | 2008-09-25 |
US20100114571A1 (en) | 2010-05-06 |
JPWO2008114811A1 (ja) | 2010-07-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NEC CORPORATION,JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NAGATOMO, KENTARO;REEL/FRAME:023215/0402 Effective date: 20090907 Owner name: NEC CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NAGATOMO, KENTARO;REEL/FRAME:023215/0402 Effective date: 20090907 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551) Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
AS | Assignment |
Owner name: NEC ASIA PACIFIC PTE LTD., SINGAPORE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NEC CORPORATION;REEL/FRAME:066867/0886 Effective date: 20240313 |