WO1993003453A1 - System for interactive performance and animation of prerecorded audiovisual sequences - Google Patents
- Publication number
- WO1993003453A1 (PCT/US1992/006368)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
- G09B5/065—Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
Abstract
A system for the sequential performance of a prerecorded story including text, animations or video, and audio information. The system, preferably implemented in a personal computer, has a continuous mode, in which it performs the story linearly and unbroken; a wait mode, wherein it performs loops of animation or otherwise is on standby for commands from the user; and an interactive mode, in which the system performs animations, sounds or other activities which are tangential to the linear story. Text is displayed on a screen of the system, along with graphics and/or video. The text is pronounced by the system in the course of the sequential performance, and when the computer is in the interactive mode the user may command it to repeat words which are in the text. The repronunciation of the words is the same as the pronunciation in the originally pronounced context. In both the continuous mode and the interactive mode, the pronounced words are highlighted. Certain animations are made inaccessible to the user, even in the interactive mode, until the user has executed prerequisite steps; thus, certain animations are interdependent or nested. The performance of a given animation may depend on whether a particular action has been carried out, or on whether another animation has already been performed, or on a random factor generated by the computer.
Description
SYSTEM FOR INTERACTIVE PERFORMANCE AND ANIMATION OF PRERECORDED AUDIOVISUAL SEQUENCES
Background of the Invention This invention relates to interactive audiovisual systems, and in particular to a new system which provides spoken words and other audio for performing a story in a sequential manner, coupled with interactively accessible animations. There are systems presently in use which provide animation to a user, and which play animations in response to input from the user, such as by means of a mouse in a computer system. However, animations which appear in prior story playing systems have not been sequentially dependent upon one another, and thus have lacked flexibility.
There is also at least one system presently in use which plays an audio recording of text which appears on a display, and which allows the user of the system to select particular words to be spoken. However, the words are not spoken in the context of the text, but rather in a different and contextually irrelevant manner.
Thus, there is a lack of highly flexible and interactive linear story performance systems, which would provide multiple interactive capabilities to the user.
- Summary of the Invention It is therefore an object of this invention to provide an interactive audiovisual system which offers the user multiple modes of animation and audio in response to user input.
It is a particular object of the invention to provide such a system which also performs a story stored in memory in a sequential fashion, providing the interactive animations at
appropriate places in the story. Several alternative animations may play at random times, or they may appear in a particular sequence. In addition, the playing of certain animations may depend upon a series of actions taken by the user. The sequentiality of the performance of the stories is an important feature of the present invention. As discussed in greater detail below, this feature is combined in a new manner with a variety of interactive animation capabilities and audio, including contextual text pronunciation.
Brief Description of the Drawings Figure 1 is a block diagram of a system according to the invention. Figures 2 through 26 are reproductions of actual screen-capture shots of an exemplary implementation of the invention in a personal computer, illustrating the interactive capabilities of the invention.
Description of the Preferred Embodiments
Figure 1 shows a basic system 5 for implementing the present invention, including a controller 10, a display 20, and an audio output 30. The display 20 and audio output 30 are coupled to the controller 10 by cables 40 and 50, respectively, and are driven by the controller. Input to the controller 10 is made by means of a mouse 60 having a mouse button 65; or input may be made by another conventional input device, such as a keyboard or the like. The system 5 may be implemented by means of a conventional personal computer or other microprocessor-based system.
The controller 10 displays a cursor 70 on a screen 80 of the display 20. As discussed in detail below, the cursor 70 allows the user of the system to interact with the text, animations, and other visual information displayed on the screen 80.
The system 5 may be implemented by means of a conventional personal computer or other microprocessor-based system, using one of a variety of applications available for creating textual, graphic and animation or video sequences. Many such computer systems are available, such as the Macintosh™ system by Apple Computer. Performance sequences according to the invention may be implemented using conventional applications and techniques.
Figures 2 through 26 are exemplary screen captures of portions of sequences implemented in one actual embodiment of the present invention. Figure 2 shows a title page 90, which appears on the screen 80 (shown in Figure 1), along with the cursor 70. Figure 2 represents a screen which appears on the display 20, which is not separately shown. A sequence of text, graphics, animations, and audio recordings is stored in a memory in the controller 10. Starting the appropriate application causes the title page 90 to appear. In the preferred embodiment, two interactive buttons are provided on the title page: READ ONLY (100) and INTERACTIVE (110). The user of the system positions the cursor 70 over one of these buttons and clicks the mouse button 65 to access the chosen sequence.
Clicking on the READ ONLY button 100 causes a linear, uninterruptable story sequence to be performed, along with animations, graphics and audio. Clicking on the
INTERACTIVE button 110 causes essentially the same story sequence to be performed, but in an interactive fashion.
The following description applies to both the READ ONLY and INTERACTIVE modes, with the differences being in the interactive capability of the INTERACTIVE mode. In the INTERACTIVE mode, the user is given the option of interrupting the story at various times to play animations, replay portions of the text, and then proceed with the performance. This is discussed in detail below. Once the INTERACTIVE button 110 is clicked, the first page 120 of the story is displayed, as shown in Figure 3, and
includes graphics 130, text (in this case one sentence) 140, and various "live" or interactive regions on the screen which may be clicked on to access the different features of the invention. These live regions may be defined in any of a variety of conventional manners, such as by predefining coordinates on the screen such that the desired response is generated if the user clicks within those coordinates.
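The coordinate-based scheme for defining live regions described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the names `LiveRegion` and `hit_test`, and all coordinate values, are assumptions.

```python
from dataclasses import dataclass

@dataclass
class LiveRegion:
    """A clickable screen rectangle associated with a named graphic object."""
    name: str
    left: int
    top: int
    right: int
    bottom: int

    def contains(self, x: int, y: int) -> bool:
        # The desired response is generated if the click falls inside
        # the predefined coordinates.
        return self.left <= x <= self.right and self.top <= y <= self.bottom

def hit_test(regions, x, y):
    """Return the name of the first live region containing the click, or None."""
    for region in regions:
        if region.contains(x, y):
            return region.name
    return None

# Invented rectangles for two of the objects mentioned in the text.
regions = [LiveRegion("lamp", 10, 10, 60, 80), LiveRegion("hat", 100, 20, 140, 60)]
print(hit_test(regions, 30, 40))    # a click inside the lamp's rectangle
print(hit_test(regions, 500, 500))  # a click on no live region
```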
The story is performed according to prerecorded animation and audio sequences which are associated with one another, and are preferably loaded in RAM in the controller or computer 10. Thus, in Figure 3, a voice pronounces the sentence appearing on the displayed page 120. As the sentence is pronounced, groups of words are highlighted in the pronounced sequence. Thus, the phrase "Early in the morning" (indicated by the numeral 150 in Figure 3) is highlighted while those words are pronounced from the audio track in memory, followed by highlighting (not separately shown) of the wording "Mom wakes me" while it is pronounced, and so on. Thus, in Figure 4, the phrase "Little Monster" (indicated by the numeral 160) is highlighted, and in the system of the invention that phrase is simultaneously pronounced, and associated animation is performed (such as the "Mom" character 170 walking in the door in Figure 4).
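The highlight-while-pronouncing behavior can be sketched by pairing each word group with the start time of its clip within the sentence's stored audio track. This is a hedged sketch under that assumption; the timing values and the function name `group_to_highlight` are invented for illustration.

```python
# Authored timing table: (word group, start offset in ms within the
# sentence's prerecorded audio track). Values are invented.
timings = [
    ("Early in the morning", 0),
    ("Mom wakes me", 1400),
]

def group_to_highlight(elapsed_ms):
    """Return the word group whose audio is playing at elapsed_ms,
    so the display can highlight it in the pronounced sequence."""
    current = timings[0][0]
    for group, start in timings:
        if elapsed_ms >= start:
            current = group
    return current

print(group_to_highlight(500))    # first word group still highlighted
print(group_to_highlight(2000))   # second group highlighted
```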
In this way, an entire story is performed automatically by the system of the invention. The system is at first in a continuous play mode, in which it will proceed to perform a story absent any interruption by the user, in a predetermined, linear fashion. The story sequence can then proceed to the end, while allowing the user to interrupt at certain predetermined times to enter an interactive mode, wherein certain tangential sequences are performed.
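The mode structure described above (a continuous linear performance, interruptible into interactive tangents that then resume the story) can be sketched as a small state machine. The class and method names are assumptions, not taken from the patent.

```python
CONTINUOUS, WAIT, INTERACTIVE = "continuous", "wait", "interactive"

class StoryPlayer:
    """Toy state machine for the three playback modes."""
    def __init__(self):
        self.mode = CONTINUOUS  # story begins in the continuous play mode

    def finish_sentence(self):
        # After a sentence and its animation, loop idly awaiting input.
        if self.mode == CONTINUOUS:
            self.mode = WAIT

    def user_click(self):
        # A click on a live region interrupts into the interactive mode.
        if self.mode in (CONTINUOUS, WAIT):
            self.mode = INTERACTIVE

    def tangent_done(self):
        # When the tangential sequence ends, the linear story resumes.
        self.mode = CONTINUOUS

player = StoryPlayer()
player.finish_sentence()
player.user_click()
player.tangent_done()
print(player.mode)  # back in the continuous mode
```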
In a preferred embodiment, the continuous play mode includes a number of performance loops, where certain animations and sounds are repeated until the user interrupts the mode by clicking the cursor 70 on one of a plurality of "live" regions on the screen 80. The live regions (some of which will be
discussed below) are preferably correlated with an identifiable graphic object displayed on the screen, such as the lamp 180, the hat 190, or the poster 200 shown in Figure 4 (as well as Figures 2 through 13). Each sentence is normally pronounced once automatically by the system, while the associated animation sequence is simultaneously performed on the screen. However, the user has the option of interrupting the continuous mode sequence to cause the system to pronounce any words again which the user wishes, by clicking the cursor 70 on such words. For example, in Figure 5, the user has clicked on the word "Monster", and that word is repronounced by the system. In the preferred embodiment, the pronunciation in the interactive mode is the same as the pronunciation in the continuous mode; that is, each individual word selected for a repeated pronunciation is pronounced by the system exactly as it is in the continuous mode. This can be done by accessing the same stored audio sequences while in the interactive mode as in the continuous mode.
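Reusing the same stored audio in both modes, as the paragraph above describes, can be sketched as a shared lookup: a clicked word resolves to the clip recorded for the continuous performance, so the repronunciation is identical. The clip file name and offsets here are hypothetical.

```python
# Hypothetical store: word -> (audio file, start_ms, end_ms) of its clip
# within the originally pronounced sentence.
stored_clips = {
    "Monster": ("sentence1_audio", 900, 1300),
}

def repronounce(word):
    """Look up the original clip; the interactive mode accesses the same
    stored audio sequence as the continuous mode, so the word sounds
    exactly as it did in context."""
    return stored_clips.get(word)

print(repronounce("Monster"))   # same slice used in the continuous mode
print(repronounce("unknown"))   # a word with no stored clip
```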
The entire sentence 140 may be repeated by clicking on the repeat button 210, as shown in Figure 6. The word groups, such as group 150, are again highlighted, as they were during their first performance in the continuous mode, and the sentence is pronounced identically. The associated animation may also be reperformed. Figures 7 through 12 demonstrate an interdependent, hierarchical sequence of interactive animations which are achieved by the present invention. In Figure 7, the user clicks the cursor 70 on a bird 220 which appears outside a window 230. Nothing happens, because the bird 220 does not constitute a live region so long as the window 230 is closed. The user may, however, move the cursor 70 to the window 230, and open the window in a conventional drag operation. With a mouse 60 connected to a computer 10, this is typically done by holding down the mouse button 65 when the cursor 70 is in place, and then moving the cursor by dragging the mouse. The window 230
will move along with the cursor, and will remain in the position it was when the mouse button 65 is released.
In Figure 9, the window 230 is shown after dragging to a half-open position. With the window in this position, the bird 220 is still not a live, interactive region.
In Figure 10, the window 230 has been dragged to its fully open position. With the window in this position, the bird 220 is now a live region for interactive animation. When the cursor is moved to a position over the bird 220, as shown in Figure 11, and the mouse button 65 is clicked, the bird then performs a predetermined animation sequence, including an associated soundtrack. A frame from this animation (which is a chirping sequence) is shown in Figure 12.
Figures 7 through 12 thus illustrate an interdependent set of sequences which are executed as part of the story performed by the system of the invention. Without first opening the window 230, the user could not get the system to perform the chirping bird sequence. The chirping bird sequence is a sequence which has a condition which must be fulfilled before it can be executed, in this case an action to be taken by the user. Other conditions precedent may be used, including random events generated by the computer or points in the story which must be reached by the system before a region becomes active or before a given interactive sequence becomes available to the user. Figure 13 shows a frame from another animation sequence which may be accessed by the user. When the cursor 70 is clicked on the drawer 240, a mouse 250 appears, executes a sequence with squeaking sounds, and disappears. Any number of similar sequences may be accessed by the user in other live regions defined on the screen, such as those discussed above (the lamp 180, the hat 190, and the poster 200, as well as others).
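The conditions-precedent idea in the passage above (an animation that is unavailable until the user has taken a prerequisite action, such as fully opening the window before the bird will chirp) can be sketched as a gated animation. The class name and state values are illustrative assumptions.

```python
class GatedAnimation:
    """An animation sequence with a condition precedent."""
    def __init__(self, name, precondition):
        self.name = name
        self.precondition = precondition  # callable returning True/False

    def try_play(self):
        # The sequence executes only once its condition is fulfilled;
        # otherwise the click falls on a dead region and nothing happens.
        return f"playing {self.name}" if self.precondition() else None

# Invented state: the window may be closed, half-open, or open.
state = {"window": "closed"}
bird = GatedAnimation("chirping bird", lambda: state["window"] == "open")

print(bird.try_play())   # window closed: bird is not yet a live region
state["window"] = "half-open"
print(bird.try_play())   # still not live, as with Figure 9
state["window"] = "open"
print(bird.try_play())   # condition fulfilled: the sequence plays
```

Other conditions precedent mentioned in the text, such as random events or reaching a given point in the story, would simply be different `precondition` callables.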
Figure 13 shows another live region, namely a page turn tab 260. When the user clicks on this tab, the system accesses the next page of the story. The second page 270 of the story appears in Figures 14 through 26. The sentence 280 is performed in the same manner
as the sentence 140 shown in Figure 4, and again has an associated animation track and other audio which are performed along with the performance of the sentence. A repeat button 290 is provided, and fills the same function as the repeat button 210 shown in Figure 6.
Once the sentence and associated animation and audio have been performed, the system enters a wait mode of repeated animation and sounds, during which the user has the option of clicking on a variety of live regions. This wait mode may be a continuous loop, or a timeout may be provided, so that the story will continue in the continuous mode if the user does not interact with the system within a predetermined period of time.
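The timeout variant of the wait mode described above can be sketched as a loop that repeats the idle animation until either a user event arrives or a predetermined number of passes elapses, after which the continuous mode resumes automatically. Event delivery is simulated here with a simple list; all names are invented.

```python
def wait_mode(events, timeout_loops=3):
    """Loop the idle animation; return which mode follows and why.

    events[i] simulates the input (or None) observed during loop i.
    """
    for loop in range(timeout_loops):
        # One pass of the repeated animation and sounds would play here.
        if loop < len(events) and events[loop] is not None:
            # User interaction interrupts into the interactive mode.
            return ("interactive", events[loop])
    # No interaction within the predetermined period: story continues.
    return ("continuous", None)

print(wait_mode([None, "click:eggs"]))  # user interrupts on second loop
print(wait_mode([None, None, None]))    # timeout: story resumes on its own
```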
Figures 15 through 17 show an interactive animation sequence which is executed in response to the user clicking on the arm 300 of the "Mom" character 170. Mom 170 reaches up with her arm to stir the items in the pan 310, as shown by the progression between Figures 15 and 16, and then moves the pan 310 back and forth on the burner 320, as shown by the progression between the frames represented by Figures 16 and 17. This is another example of the type of interactive animation which can be implemented by the system of the invention, similar to that discussed above relative to Figure 13.
In Figure 14, the text 280 is different in a number of respects from the text 140 shown in Figures 2-13. First, there are graphic representations of eggs, cereal and milk, rather than the words themselves appearing in the text. When the system pronounces the sentence 280, it pronounces the correct names of these objects as they are highlighted. Thus, while the phrase 330 is highlighted as shown in Figure 14, the wording "cereal with milk?" is pronounced.
Secondly, the "eggs" 340 and "cereal" 350 constitute live, interactive regions. If the user clicks on the eggs 340, as in Figure 18, they are animated (again, with audio) to crack open. Mom 170 catches the falling eggs 360 in her pan 310, then turns around and cooks them on the burner 320 as shown in Figure 19.
Then, she flings them over her shoulder, as shown in Figure 20, whereupon the "Dad" character 370 catches them in his plate 380, as shown in Figure 21. Dad then serves them to the Little Monster 390, as shown in Figure 22. Thus, the sequence of Figures 18 through 22 illustrates an animation sequence which is executed in response to a command by the user (in this case, a click of the mouse) during a wait mode, upon which the computer enters an interactive mode which interrupts the normal, continuous sequence of the performance of the story. The story may then automatically proceed, or may proceed after a timeout, or another page turn tab (similar to tab 260 shown in Figure 13) may be used.
An interactive animation sequence similar to the foregoing is illustrated in Figures 23 through 25. When the user clicks on the cereal 350, it is served in an animation sequence to the Little Monster 390. In the frame represented by Figure 24, the bowl of cereal 350 has moved beneath the milk 400, which is poured over the cereal. The bowl of cereal 350 then drops to the table 410 in front of the Little Monster 390. After the sequence, the cereal 350 may return to, or reappear at, its original place as in Figure 23, and the interactive animation sequence is again available to the user.
The milk 400 may fulfill the same function as actual words in the text 280 by causing the repronunciation of the word "milk", in the context of the originally pronounced sentence, when the user clicks on it. This is illustrated in Figure 26, where the cursor 70 is clicked on the milk 400, causing it to be highlighted and causing the computer 10 to reperform the pronunciation of the word. As demonstrated by Figures 18 through 26, words and their illustrations may perform similar functions in the system of the invention.
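The idea that an illustration can fulfill the same function as a word in the text can be sketched as a two-step lookup: a clicked graphic first resolves to its word, and then both paths share the same stored pronunciation. The dictionary keys and clip identifiers below are invented for illustration.

```python
# Hypothetical stores: word -> its clip from the original sentence, and
# pictured object -> the word it stands for in the text.
word_clips = {"milk": "clip_milk_in_context", "eggs": "clip_eggs_in_context"}
graphic_to_word = {"milk_graphic": "milk", "eggs_graphic": "eggs"}

def click(target):
    """Pronounce a clicked word or a clicked illustration identically."""
    # A graphic resolves to its word first; a word passes through unchanged.
    word = graphic_to_word.get(target, target)
    return word_clips.get(word)

print(click("milk"))          # clicking the word in the text
print(click("milk_graphic"))  # clicking the illustration: same clip
```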
Thus, the present invention provides a combination of interactive features which have hitherto been unavailable, in the context of a system which performs a story in a predetermined, linear sequence from beginning to end. Coupled with this is the capability to choose between the continuous
performance mode, a wait mode, and an interactive mode, in which the user can cause the computer to execute animation sequences, pronounce individual words of the text, repeat portions of the performance, and carry out other functions. Variations on the foregoing embodiments will be apparent in light of the present disclosure, and may be constructed without departing from the scope of this invention.
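The three modes summarized above behave like a small state machine. The sketch below is illustrative only; the event names are hypothetical stand-ins for the user actions described in the foregoing figures.

```python
# Illustrative mode state machine for the continuous performance mode,
# the wait mode, and the interactive mode; event names are hypothetical.

TRANSITIONS = {
    ("continuous", "page_end"):   "wait",         # performance pauses at a page
    ("wait", "click_region"):     "interactive",  # e.g. a click on the eggs 340
    ("wait", "page_turn"):        "continuous",   # a page turn tab such as 260
    ("interactive", "done"):      "wait",         # animation sequence finishes
    ("interactive", "resume"):    "continuous",   # resume the performance
}

def next_mode(mode, event):
    """Return the mode after an event; unknown events leave the mode unchanged."""
    return TRANSITIONS.get((mode, event), mode)
```

The fallback in `next_mode` reflects the behavior sketched earlier: input that has no meaning in the current mode does not interrupt the performance.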
Claims
1. A performance system for performing a story stored in memory, the system including a display and an audio system, including: first means for displaying on the display text relating to the story; means for generating pronunciations of words from the text on the audio system; second means for displaying on the display graphics related to the story; means for interactive animation of graphics appearing on the display; and means for carrying out the performance of the story, including said displaying of text and graphics and pronunciation of said text, in a sequential fashion from a beginning of the story to an end of the story.
2. The system of claim 1, further including means for controlling the first and second displaying means and the generating means in each of a first and second mode, wherein: the first mode is a continuous mode for performing the story in a sequential fashion; the controlling means includes means for entering the second mode which is an interactive mode, including executing interruptions to the performance of the story and executing commands during said interruptions, including a command relating to resumption of the performance of the story according to the first mode.
3. The system of claim 2, wherein said commands further include a first animation command for executing a first animation sequence relating to a graphic displayed on the screen.
4. The system of claim 3, wherein said commands further include a second animation command for executing a second animation sequence selected from a plurality of animation sequences, the selected second animation sequence being determined by the first animation sequence.
5. The system of claim 3, wherein said first animation sequence is selected from a plurality of animation sequences, the selected first animation sequence depending upon a random factor determined by the performance system.
6. The system of claim 3, including means for executing the first animation sequence repeatedly.
7. The system of claim 6, including means for ceasing the repeated execution of the first animation sequence.
8. The system of claim 2, wherein said commands further include a pronunciation command for executing pronunciations of individual words of the text.
9. The system of claim 8, wherein the pronunciations of the individual words in the interactive mode are the same as the pronunciations of the words of the text in the continuous mode.
10. The system of claim 1, further including means for generating audio on the audio system relating to graphics appearing on the display.
11. The system of claim 1, further including means for highlighting pronounced portions of the text simultaneously with their pronunciation.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US74038991A | 1991-08-02 | 1991-08-02 | |
US740,389 | 1991-08-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1993003453A1 (en) | 1993-02-18 |
Family
ID=24976304
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1992/006368 WO1993003453A1 (en) | 1991-08-02 | 1992-07-30 | System for interactve performance and animation of prerecorded audiovisual sequences |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP0576628A1 (en) |
AU (1) | AU2419092A (en) |
WO (1) | WO1993003453A1 (en) |
- 1992
- 1992-07-30 EP EP19920917234 patent/EP0576628A1/en not_active Withdrawn
- 1992-07-30 WO PCT/US1992/006368 patent/WO1993003453A1/en not_active Application Discontinuation
- 1992-07-30 AU AU24190/92A patent/AU2419092A/en not_active Abandoned
Non-Patent Citations (2)
Title |
---|
COMPUTER GRAPHICS WORLD vol. 12, no. 8, August 1989, USA pages 39 - 46 MCMILLAN T. 'INTERACTIVE MULTIMEDIA MEETS THE REAL WORLD' * |
PRESTON J. (ED) 'COMPACT DISC INTERACTIVE - A DESIGNER'S OVERVIEW' November 1988 , KLUWER , DEVENTER-ANTWERPEN * |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1086484C (en) * | 1993-04-21 | 2002-06-19 | 国际商业机器公司 | Interactive computer system recognizing spoken commands |
WO1995002207A1 (en) * | 1993-07-07 | 1995-01-19 | Boyer, Cyril | Process for producing films with subtitles |
EP0721727A1 (en) * | 1993-09-24 | 1996-07-17 | Readspeak, Inc. | Method for associating oral utterances meaningfully with writings seriatim in the audio-visual work |
EP0721727A4 (en) * | 1993-09-24 | 1997-03-05 | Readspeak Inc | Method for associating oral utterances meaningfully with writings seriatim in the audio-visual work |
US5741136A (en) * | 1993-09-24 | 1998-04-21 | Readspeak, Inc. | Audio-visual work with a series of visual word symbols coordinated with oral word utterances |
US5938447A (en) * | 1993-09-24 | 1999-08-17 | Readspeak, Inc. | Method and system for making an audio-visual work with a series of visual word symbols coordinated with oral word utterances and such audio-visual work |
US5915256A (en) * | 1994-02-18 | 1999-06-22 | Newsweek, Inc. | Multimedia method and apparatus for presenting a story using a bimodal spine |
US6388665B1 (en) | 1994-07-08 | 2002-05-14 | Microsoft Corporation | Software platform having a real world interface with animated characters |
EP0691609A1 (en) * | 1994-07-08 | 1996-01-10 | Microsoft Corporation | Software platform having a real world interface with animated characters |
US5682469A (en) * | 1994-07-08 | 1997-10-28 | Microsoft Corporation | Software platform having a real world interface with animated characters |
EP0730272A2 (en) * | 1995-02-28 | 1996-09-04 | Kabushiki Kaisha Toshiba | Recording medium, apparatus and method of recording data on the same, and apparatus and method of reproducing data from the recording medium |
EP0730272A3 (en) * | 1995-02-28 | 1999-03-31 | Kabushiki Kaisha Toshiba | Recording medium, apparatus and method of recording data on the same, and apparatus and method of reproducing data from the recording medium |
EP0797173A1 (en) * | 1996-03-22 | 1997-09-24 | Koninklijke Philips Electronics N.V. | Virtual environment navigation and interaction apparatus and method |
EP0917689A4 (en) * | 1996-08-02 | 2000-05-03 | Microsoft Corp | Method and system for virtual cinematography |
CN1319026C (en) * | 1996-08-02 | 2007-05-30 | 微软公司 | Method and system for virtual cinematography |
EP1018100A1 (en) * | 1997-04-25 | 2000-07-12 | Readspeak, Inc. | Method and system for making an audio-visual work with a series of visual word symbols coordinated with oral word utterances and such audio-visual work |
EP1018100A4 (en) * | 1997-04-25 | 2004-03-17 | Readspeak Inc | Method and system for making an audio-visual work with a series of visual word symbols coordinated with oral word utterances and such audio-visual work |
FR2765370A1 (en) * | 1997-06-27 | 1998-12-31 | City Media | Image processing system |
US6324511B1 (en) | 1998-10-01 | 2001-11-27 | Mindmaker, Inc. | Method of and apparatus for multi-modal information presentation to computer users with dyslexia, reading disabilities or visual impairment |
WO2000021057A1 (en) * | 1998-10-01 | 2000-04-13 | Mindmaker, Inc. | Method and apparatus for displaying information |
US6564186B1 (en) * | 1998-10-01 | 2003-05-13 | Mindmaker, Inc. | Method of displaying information to a user in multiple windows |
WO2009052553A1 (en) * | 2007-10-24 | 2009-04-30 | Michael Colin Gough | Method and system for generating a storyline |
Also Published As
Publication number | Publication date |
---|---|
AU2419092A (en) | 1993-03-02 |
EP0576628A1 (en) | 1994-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0596823B1 (en) | Method and system for accessing associated data sets in a multimedia environment in a data processing system | |
US5697789A (en) | Method and system for aiding foreign language instruction | |
US5630017A (en) | Advanced tools for speech synchronized animation | |
US6113394A (en) | Reading aid | |
US6388665B1 (en) | Software platform having a real world interface with animated characters | |
US5758093A (en) | Method and system for a multimedia application development sequence editor using time event specifiers | |
US6128010A (en) | Action bins for computer user interface | |
JP3378759B2 (en) | Method and system for multimedia application development sequence editor using spacer tool | |
US5914717A (en) | Methods and system for providing fly out menus | |
US6188396B1 (en) | Synchronizing multimedia parts with reference to absolute time, relative time, and event time | |
JP3411305B2 (en) | Method and apparatus for multimedia authoring system and presentation system | |
EP0859996B1 (en) | Virtual environment navigation | |
US5889519A (en) | Method and system for a multimedia application development sequence editor using a wrap corral | |
CA2705907C (en) | Visual scene displays, uses thereof, and corresponding apparatuses | |
US20040201610 | Video player and authoring tool for presentations with tangential content | |
WO1993003453A1 (en) | System for interactve performance and animation of prerecorded audiovisual sequences | |
US8127238B2 (en) | System and method for controlling actions within a programming environment | |
US5999172A (en) | Multimedia techniques | |
US6040842A (en) | Process control with evaluation of stored referential expressions in a multi-agent system adapted for use with virtual actors which are directed by sequentially enabled script agents | |
Corradini et al. | Animating an interactive conversational character for an educational game system | |
JP2002530724A (en) | Apparatus and method for training with an interpersonal interaction simulator | |
CN112698899B (en) | A data conversion method, device, equipment and medium based on data visualization | |
Prendinger et al. | MPML and SCREAM: Scripting the bodies and minds of life-like characters | |
JPH05242166A (en) | Multi-media data editing and display system | |
Ahad | Neva: A Conversational Agent Based Interface for Library Information Systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AU CA JP |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH DE DK ES FR GB GR IT LU MC NL SE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1992917234 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1992917234 Country of ref document: EP |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 1992917234 Country of ref document: EP |