US20110181607A1 - System and method for controlling animation by tagging objects within a game environment - Google Patents
- Publication number
- US20110181607A1 (application Ser. No. 13/064,531)
- Authority: US (United States)
- Prior art keywords
- tag
- character
- animation
- proximity
- virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- A63F13/56—Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
- A63F13/10 (A63F13/45—Controlling the progress of the video game)
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- A63F2300/6018—Game content authored by the player, e.g. level editor, or created by the game device at runtime
- A63F2300/64—Computing dynamical parameters of game objects, e.g. motion determination or computation of frictional forces for a virtual car
- A63F2300/6607—Rendering three dimensional images for animating game characters, e.g. skeleton kinematics
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
Definitions
- the present invention relates to computer graphics and more particularly to the use of computers to generate animated displays. Still more particularly, the invention relates to techniques for automatically controlling animation within a video game or other graphical presentation.
- the invention provides a reactive animation system that enables game characters or other graphical characters to appear much more realistic as they interact with a virtual world in which they are displayed.
- the reactive animation system enables, for example, the creation of virtual worlds where the character(s) therein will do things, such as have facial, body or other physical or emotional reactions, in response to the character coming within proximity of a “tagged” element (such as a point of interest in a 3D world).
- the invention enables characters to appear much more realistic by giving the character a personality and appearing to bring the character to life in its virtual environment without having to script or animate each scene in advance.
- One approach that has appeal is to make the animation engine responsible for animated characters increasingly intelligent. For example, it is possible to define an “intelligent” animated character within a three-dimensional environment and allow the character to react to the environment based on its programmed qualities. If the character is sufficiently intelligent, rather complex reactions can be dynamically created “on the fly” by the real time animation engine—saving the game developer the massive amount of time and effort that might otherwise be required to script out the animation sequence. See, for example, U.S. patent application Ser. No. 09/382,819 of Comair et al., filed 25 Aug. 1999, entitled “Object Modeling For Computer Simulation And Animation,” incorporated by reference herein.
- the tag-based animation engine can, for example, animate the character to turn toward or face the tagged object in the process of paying attention to it—creating a very realistic visual effect without the typical programming overhead normally required to specify which direction the animated character should face and when.
- the tags can be defined by designers at any location in the virtual world and given certain characteristics that are designed to cause a character that comes into a defined proximity of the tag to have some sort of reaction to the tag.
- the animation engine makes the character much more realistic and makes it appear as if the character is coming to life through its reactions to the tags.
- the tags are preferably associated with virtual objects of the type that would typically cause a human to have a reaction in the real world.
- the tags are preferably defined to cause the same type of reaction in the character's animation, as a typical human would have in the same circumstance in the real world. In this way, the character has much more human-like reactions to its environment while moving through the virtual world, and the character can be made to appear as if it has “come to life.”
- the tagging system of the animation engine is preferably priority based.
- each tag is assigned a priority value that is used by the animation engine to control which tag will be used when more than one tag is active.
- By prioritizing the tags in the environment, the animation engine is able to display the character as paying attention to or reacting to the tagged object that is of highest interest to the character, based on the character's current environment and/or state, from among several tags that may be in proximity to the character at any one time.
- This tag prioritization feature further helps to make the character appear more realistic by enabling the character to prioritize its reactions in the same or similar way to that of a human.
- humans typically are confronted with numerous objects (e.g., interesting painting, view, other object etc.) or events (loud noise, flashing light, movement, etc.) that may cause a reaction at any one time.
- humans by their nature, typically react to the one thing that seems to be the most important at each instant in time. For instance, a human would typically stop looking at a piece of art when a loud noise comes from another object, and then quickly turn in the direction of the loud noise. Upon determining that the noise is not a problem, a human would then typically resume looking at the piece of art.
- the object can be tagged with a tag that inspires an emotion in the character while paying attention to the tagged object.
- the emotion can, for example, be fear, happiness, or any other discernible emotion. If an object is tagged to inspire fear, the character can be animated to turn toward the object and react with a look of horror. If an object is tagged to inspire happiness, the character can be animated to turn toward the object and react with a big smile. Other emotions and reactions are possible.
- the tag can be defined to cause any type of response that corresponds to any variable or role-playing element that the character may have, as well as to cause emotional and/or physical reactions. For example, the tag could modify the animation of the character so that the character appears injured, sick or proficient while under the influence of an active tag.
- the character's animation is adapted to the tag when the tag is activated. Activation of the tag can occur when the user gets within a selected distance from the tag and/or based on some other defined event.
- the adaptation of the animation is preferably done by defining key frames for use in creating a dynamic animation sequence using the information provided by the tag.
- the dynamic animation sequence is preferably generated using the techniques known in the art as “Inbetweening” and “Inverse Kinematics.” Inbetweening enables the frames between the key frames to be generated for the dynamic animation, and inverse kinematics is used to assure that the character's movements are natural during the animation.
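The inbetweening step described above can be sketched as simple interpolation between key-frame values. The representation below (a single head-yaw parameter and the function name) is an assumption for illustration only; the patent does not specify a concrete data layout.

```python
# Minimal sketch of "inbetweening": generate the intermediate frames between
# two key frames by interpolating a joint parameter. Here the parameter is a
# hypothetical head-yaw angle in degrees.

def inbetween(key_start, key_end, num_frames):
    """Linearly interpolate from key_start to key_end over num_frames frames."""
    frames = []
    for i in range(num_frames):
        t = i / (num_frames - 1) if num_frames > 1 else 0.0
        frames.append(key_start + (key_end - key_start) * t)
    return frames

# e.g. turn the head from facing forward (0 degrees) toward a tag at 45 degrees
head_yaw_frames = inbetween(0.0, 45.0, 5)
```

In a real engine the interpolation would typically be eased and combined with inverse kinematics so that the rest of the body follows the head naturally.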
- the animation of the character is adapted from its generic or canned-animation to a dynamic animation based on the type of tag that has been encountered.
- the tag triggers a dynamic modification of the character's animation for the period of time it takes for the tag to become inactive, such as by the character moving out of range of the tag.
- the dynamic animation provided by the reactive animation engine of the instant invention provides a character (or other object) with realistic reactions as the character moves through the virtual environment, without having to handcraft the animation for every possible scene in advance.
- the invention enables animation to be generated on-the-fly and in an unpredictable and realistic manner.
- the character's animation can constantly change in a variety of ways and depending on many possible variables. This makes the character's animation unpredictable and greatly enhances the visual effect of the display.
- a significant advantage of the instant invention is that the character can be displayed with a myriad of animations without having to script, hand-craft or store each of the animations in advance. Instead, the reactive animations are dynamically generated on-the-fly and in real time based on the tag.
- FIGS. 1-5 show example screen effects for a first exemplary animation sequence provided by a preferred embodiment of the invention
- FIG. 5A shows an example conceptual display illustrating the location of a tag and a vector from the character's eyes to the tag
- FIGS. 6, 7A, 7B, 8 and 9 show example screen effects for a second exemplary animation sequence provided by a preferred embodiment of the invention
- FIGS. 10A-10B illustrate an example system that may be used to create the displays of FIGS. 1-9 ;
- FIG. 11 is an example flowchart of steps performed by a tag-based animation engine of the instant invention.
- FIG. 12 illustrates an example tag data structure for tags used in accordance with the instant invention
- FIG. 13 is a more detailed example flow chart of steps performed by the tag-based animation engine of the instant invention.
- FIG. 14 is an exemplary flow chart of the steps performed by the tag-based animation engine of the instant invention in order to generate a dynamic animation sequence
- FIG. 15 is an exemplary flow chart of the steps performed by the tag-based animation engine for tag priority management.
- FIGS. 1-5 show example screen effects provided by a preferred exemplary embodiment of this invention.
- These Figures show an animated character 10 moving through an illustrative video game environment such as a corridor of a large house or castle.
- Hanging on the wall 11 of the corridor is a 3D object 12 representing a painting.
- This object 12 is “tagged” electronically to indicate that character 10 should pay attention to it when the character is within a certain range of the painting.
- As the character 10 moves down the corridor (e.g., in response to user manipulation of a joystick or other interactive input device) (see FIG. 1) and into proximity to tagged object 12, the character's animation is dynamically adapted so that the character appears to be paying attention to the tagged object by, for example, facing the tagged object 12 (see FIG. 2).
- the character 10 continues to face and pay attention to the tagged object 12 while it remains in proximity to the tagged object (see FIG. 3). As the character moves out of proximity to the tagged object 12 (see FIG. 4), it ceases paying attention to the tagged object by ceasing to turn towards it. Once the animated character 10 is more than a predetermined virtual distance away from the tagged object 12, the character no longer pays attention to the object and the object no longer influences the character.
- When the character first enters the corridor, as shown in FIG. 1, the character is animated using an existing or generic animation that simply shows the character walking.
- the reactive animation engine of the instant invention adapts or modifies the animation so that the character pays attention to the painting in a natural manner.
- the animation is preferably adapted from the existing animation by defining key frames and using the tag information (including the location and type of tag). More particularly, inbetweening and inverse kinematics are used to generate (i.e., calculate) a dynamic animation sequence for the character using the key frames and based on the tag.
- the dynamic animation sequence (rather than the existing or generic animation) is then displayed while the character is within proximity to the tag.
- When the tag is no longer active, the character's animation returns to the stored or canned animation (e.g., a scripted and stored animation that simply shows the character walking down the hallway and looking straight ahead).
- the object 12 is tagged with a command for character 10 to pay attention to the object but with no additional command eliciting emotion.
- FIGS. 1-5 show the character 10 paying attention to the tagged object 12 without any change of emotion.
- the tagged object 12 elicits an emotion or other reaction (e.g., fear, happiness, belligerence, submission, etc.)
- the tagged object can repel rather than attract character 10 —causing the character to flee, for example.
- Any physical, emotional or combined reaction can be defined by the tag, such as facial expressions or posture change, as well as changes in any body part of the character (e.g., position of head, shoulders, feet, arms etc.).
- FIG. 5A is an example conceptual drawing showing the theory of operation of the preferred embodiment.
- the “tag” T associated with an item in the 3D world is specified based on its coordinates in 3D space.
- FIG. 5A shows a “tag” T (having a visible line from the character to the tag for illustration purposes) defined on the painting in the 3D virtual world.
- the animated character 10 automatically responds by turning its head toward the “tag”, thereby appearing to pay attention to the tagged object.
- the dotted line in FIG. 5A illustrates a vector from the center of the character 10 to the tag T.
- the animation engine can calculate this vector based on the relative positions of character 10 and tag T in 3D space and use the vector in connection with dynamically animating the character.
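A minimal sketch of that vector computation follows. The coordinates, function name, and the convention that yaw is measured in the XZ ground plane are all assumptions; the patent describes the vector only conceptually.

```python
import math

# Compute the vector from the character to a tag and the yaw angle the
# character must turn through to face it, as suggested by the dotted line
# in FIG. 5A. Yaw is measured in the XZ ground plane, 0 degrees along +Z.

def facing_yaw(char_pos, tag_pos):
    """Return (vector, yaw_degrees) from the character's position to the tag."""
    vx = tag_pos[0] - char_pos[0]
    vy = tag_pos[1] - char_pos[1]
    vz = tag_pos[2] - char_pos[2]
    yaw = math.degrees(math.atan2(vx, vz))
    return (vx, vy, vz), yaw

# Character at the origin; tag on a wall up and to the side.
vec, yaw = facing_yaw((0.0, 0.0, 0.0), (3.0, 1.5, 3.0))
```

The engine would feed an angle like this into the dynamic animation (e.g., as the target for a head-turn key frame) rather than snapping the character instantly.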
- Any number of animated characters 10 (or any subsets of such characters, with different characters potentially being sensitive to different tags T) can react to the tags as they travel through the 3D world.
- FIGS. 6-9 illustrate another embodiment of the invention, wherein two tags are defined in the corridor through which the character is walking.
- a first tag T 1 is provided on the painting as described above in connection with the display sequence of FIGS. 1-5 .
- a second tag T2 is provided on the wall-mounted candle. This second tag is different from the first tag in that it is defined to cause a reaction from the character only when the candle is animated to flare up like a powerful torch (see FIG. 7A).
- the second tag T 2 is given a higher priority than the first tag T 1 .
- the reactive animation engine is programmed to allow the character to react to only one tag at a time, that one tag being the tag that has the highest priority of any active tags.
- when the character 10 is walking down the corridor and gets within proximity of the two tags, the second tag is not yet active because the candle is not flaring up. Thus, the character turns to look at the only active tag T1 (i.e., the painting) (see FIG. 6).
- When the candle flares up, the second tag T2, which has a higher priority than T1, also becomes active, thereby causing the character to stop looking at the painting and turn its attention to the flaring torch (i.e., the active tag with the highest priority) (see FIG. 7A).
- Once the candle stops flaring, the second tag T2 is no longer active, and the reactive animation engine then causes the character to again turn its attention to the painting (i.e., the only active tag) (see FIG. 7B).
- the character's head then begins to turn naturally back (see FIG. 8 ) to the forward or uninterested position corresponding to the stored animation (see FIG. 9 ).
- the character responds to active tags based on their assigned priority. In this way, the character is made to look very realistic and appears as if it has come to life within its environment.
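The priority-based arbitration in the candle-and-painting scenario above can be sketched as choosing the highest-priority member of the set of currently active tags. The dictionary field names loosely follow the tag data structure of FIG. 12 but are otherwise assumptions.

```python
# Among all currently active tags, the character reacts only to the one
# with the highest priority; with no active tags it reacts to nothing.

def select_tag(active_tags):
    """Return the active tag with the highest priority, or None."""
    if not active_tags:
        return None
    return max(active_tags, key=lambda tag: tag["priority"])

painting = {"id": "T1", "priority": 1, "reaction": "pay_attention"}
torch = {"id": "T2", "priority": 2, "reaction": "pay_attention"}

# Candle not flaring: only the painting tag is active (FIG. 6).
assert select_tag([painting])["id"] == "T1"

# Candle flares up: both tags are active and the higher-priority torch wins (FIG. 7A).
chosen = select_tag([painting, torch])
```

When the torch tag deactivates again, the same selection naturally falls back to the painting, matching the sequence of FIGS. 7A-7B.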
- the reactive animation engine E dynamically generates the character's animation to make the character react in a priority-based manner to the various tags that are defined in the environment.
- FIG. 10A shows an example interactive 3D computer graphics system 50 .
- System 50 can be used to play interactive 3D video games with interesting animation provided by a preferred embodiment of this invention.
- System 50 can also be used for a variety of other applications.
- system 50 is capable of processing, interactively in real time, a digital representation or model of a three-dimensional world.
- System 50 can display some or all of the world from any arbitrary viewpoint.
- system 50 can interactively change the viewpoint in response to real time inputs from handheld controllers 52 a , 52 b or other input devices. This allows the game player to see the world through the eyes of someone within or outside of the world.
- System 50 can be used for applications that do not require real time 3D interactive display (e.g., 2D display generation and/or non-interactive display), but the capability of displaying quality 3D images very quickly can be used to create very realistic and exciting game play or other graphical interactions.
- To play a video game or other application using system 50, the user first connects a main unit 54 to his or her color television set 56 or other display device by connecting a cable 58 between the two.
- Main unit 54 produces both video signals and audio signals for controlling color television set 56 .
- the video signals control the images displayed on the television screen 59, and the audio signals are played back as sound through television stereo loudspeakers 61L, 61R.
- the user also needs to connect main unit 54 to a power source.
- This power source may be a conventional AC adapter (not shown) that plugs into a standard home electrical wall socket and converts the house current into a lower DC voltage signal suitable for powering the main unit 54 . Batteries could be used in other implementations.
- Controllers 52 can be used, for example, to specify the direction (up or down, left or right, closer or further away) that a character displayed on television 56 should move within a 3D world. Controls 60 also provide input for other applications (e.g., menu selection, pointer/cursor control, etc.). Controllers 52 can take a variety of forms. In this example, the controllers 52 shown each include controls 60 such as joysticks, push buttons and/or directional switches. Controllers 52 may be connected to main unit 54 by cables or wirelessly via electromagnetic (e.g., radio or infrared) waves.
- Storage medium 62 may, for example, be a specially encoded and/or encrypted optical and/or magnetic disk.
- the user may operate a power switch 66 to turn on main unit 54 and cause the main unit to begin running the video game or other application based on the software stored in the storage medium 62 .
- the user may operate controllers 52 to provide inputs to main unit 54.
- operating a control 60 may cause the game or other application to start.
- Moving other controls 60 can cause animated characters to move in different directions or change the user's point of view in a 3D world.
- the various controls 60 on the controller 52 can perform different functions at different times.
- mass storage device 62 stores, among other things, a tag-based animation engine E used to animate characters based on tags stored in the character's video game environment.
- tag-based animation engine E makes use of various components of system 50 shown in FIG. 10B including:
- main processor 110 receives inputs from handheld controllers 52 (and/or other input devices) via graphics and audio processor 114 .
- Main processor 110 interactively responds to user inputs, and executes a video game or other program supplied, for example, by external storage media 62 via a mass storage access device 106 such as an optical disk drive.
- main processor 110 can perform collision detection and animation processing in addition to a variety of interactive and control functions.
- main processor 110 generates 3D graphics and audio commands and sends them to graphics and audio processor 114 .
- the graphics and audio processor 114 processes these commands to generate interesting visual images on display 59 and interesting stereo sound on stereo loudspeakers 61 R, 61 L or other suitable sound-generating devices.
- Main processor 110 and graphics and audio processor 114 also perform functions to support and implement the preferred embodiment tag-based animation engine E based on instructions and data E′ relating to the engine that is stored in DRAM main memory 112 and mass storage device 62 .
- example system 50 includes a video encoder 120 that receives image signals from graphics and audio processor 114 and converts the image signals into analog and/or digital video signals suitable for display on a standard display device such as a computer monitor or home color television set 56 .
- System 50 also includes an audio codec (compressor/decompressor) 122 that compresses and decompresses digitized audio signals and may also convert between digital and analog audio signaling formats as needed.
- Audio codec 122 can receive audio inputs via a buffer 124 and provide them to graphics and audio processor 114 for processing (e.g., mixing with other audio signals the processor generates and/or receives via a streaming audio output of mass storage access device 106 ).
- Graphics and audio processor 114 in this example can store audio related information in an audio memory 126 that is available for audio tasks. Graphics and audio processor 114 provides the resulting audio output signals to audio codec 122 for decompression and conversion to analog signals (e.g., via buffer amplifiers 128 L, 128 R) so they can be reproduced by loudspeakers 61 L, 61 R.
- Graphics and audio processor 114 has the ability to communicate with various additional devices that may be present within system 50 .
- a parallel digital bus 130 may be used to communicate with mass storage access device 106 and/or other components.
- a serial peripheral bus 132 may communicate with a variety of peripheral or other devices including, for example:
- a further external serial bus 142 may be used to communicate with additional expansion memory 144 (e.g., a memory card) or other devices. Connectors may be used to connect various devices to busses 130 , 132 , 142 .
- FIG. 11 shows a simplified example flowchart of the tag-based animation engine E of the instant invention.
- Animation engine E may be implemented for example by software executing on main processor 110 .
- Tag-based animation engine E may first initialize a 3D world and animation game play (block 1002 ), and may then accept user inputs supplied for example via handheld controller(s) 52 (block 1004 ). In response to such user inputs, engine E may animate one or more animated characters 10 in a conventional fashion to cause such characters to move through the 3D world based on the accepted user inputs (block 1006 ).
- Tag-based animation engine E also detects whether any moving character is in proximity to a tag T defined within the 3D world (decision block 1008 ).
- If so, the animation engine E reads the tag and computes (e.g., through mathematical computation and associated modeling, such as by using inbetweening and inverse kinematics) a dynamic animation sequence for the character 10 to make the character realistically turn toward or otherwise react to the tag (block 1010). Processing continues (blocks 1004-1010) until the game is stopped or some other event causes an interruption.
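The proximity-detection loop of blocks 1006-1010 might be compressed into a per-frame update like the following sketch. The data representation, distance test, and return values are assumptions for illustration, not the patent's actual implementation.

```python
import math

# Each frame: check whether any tag is within its proximity radius of the
# character, and switch between the canned animation (block 1006) and a
# dynamically generated reaction (block 1010) accordingly.

def update_character(char, tags):
    """Return the animation mode for this frame."""
    for tag in tags:
        if math.dist(char["pos"], tag["pos"]) <= tag["proximity"]:
            # In range: generate a dynamic animation sequence for this tag.
            return "dynamic:" + tag["reaction"]
    # No tag in range: keep using the standard animation.
    return "canned"

painting_tag = {"pos": (5.0, 2.0, 0.0), "proximity": 4.0, "reaction": "pay_attention"}
mode_far = update_character({"pos": (0.0, 0.0, 0.0)}, [painting_tag])
mode_near = update_character({"pos": (4.0, 2.0, 0.0)}, [painting_tag])
```

A fuller version would also honor tag priority and activation events (e.g., the flaring candle) rather than taking the first tag in range.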
- FIG. 12 shows an illustrative exemplary data structure 1100 for a tag T.
- data structure 1100 includes a tag ID field 1102 that identifies the tag; three-dimensional (i.e., X, Y, Z) positional coordinate fields 1104, 1106, 1108 (plus further optional additional information if necessary) specifying the position of the tag in the 3D world; a proximity field 1110 (if desired) specifying how close character 10 must be to the tag in order to react to the tag; a type-of-tag or reaction code field 1112 specifying the type of reaction to be elicited (e.g., pay attention to the tag, flee from the tag, react with a particular emotion, etc.); and a priority field 1114 that defines a priority for the tag relative to other tags that may be activated at the same time as the tag.
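One illustrative way to mirror data structure 1100 in code is shown below. The class, types, and default-free layout are assumptions; only the field meanings come from the description of FIG. 12.

```python
from dataclasses import dataclass

# A code-level rendering of tag data structure 1100 (FIG. 12).

@dataclass
class Tag:
    tag_id: str        # field 1102: identifies the tag
    x: float           # field 1104: X position in the 3D world
    y: float           # field 1106: Y position
    z: float           # field 1108: Z position
    proximity: float   # field 1110: how close the character must be to react
    reaction: str      # field 1112: reaction code (e.g. "pay_attention", "flee")
    priority: int      # priority relative to other tags active at the same time

# e.g. the painting tag from FIGS. 1-5 (values are invented for illustration)
painting = Tag("T1", 5.0, 2.0, 0.0, 4.0, "pay_attention", 1)
```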
- FIG. 13 shows a more detailed exemplary flow chart of the steps performed by the reactive animation engine E of the instant invention.
- the system accepts user inputs to control the character within the environment in a conventional manner (step 1304 ).
- the system initially uses scripted or canned animation that is provided with the game for the character (step 1306 ).
- the animation engine checks the character's position relative to the tags that have been defined in the 3D world by the designers of the game (step 1308). If the character is not within proximity to a tag, then the standard animation continues for the character (step 1310).
- If the character is within proximity to a tag, the tag is read to determine the type of reaction that the tag is supposed to elicit from the character and the exact location of the tag in the 3D world (step 1312).
- the animation engine E uses key frames (some or all of which may come from the scripted animation) and the tag information to dynamically adapt or alter the animation of the character to the particular tag encountered (step 1314 ).
- the dynamic animation is preferably generated using a combination of inbetweening and inverse kinematics to provide a smooth and realistic animation showing a reaction to the tag.
- Particular facial animations may also be used to give the character facial emotions or reactions to the tag.
- facial animations can be selected from a defined pool of facial animations, and inbetweening or other suitable animation techniques can be used to further modify or dynamically change the facial expressions of the character in response to the tag.
- the dynamic animation then continues until the tag is no longer active (step 1316 ), as a result of, for example, the character moving out of range of the tag.
- the standard or scripted animation is then used for the character until another tag is activated (step 1318 ).
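The proximity-and-reversion loop of steps 1308-1318 can be sketched as follows. The function names, dictionary keys, and animation labels are assumptions for illustration; the patent itself does not specify them, and the nearest-tag choice here stands in for the priority handling described later in connection with FIG. 15:

```python
import math

def in_range(char_pos, tag):
    # Step 1308: is the character within the tag's defined proximity?
    tag_pos = (tag["x"], tag["y"], tag["z"])
    return math.dist(char_pos, tag_pos) <= tag["proximity"]

def select_animation(char_pos, tags):
    """Return "standard" when no tag is in range (step 1310); otherwise a
    dynamic animation keyed to the nearest in-range tag's reaction code
    (steps 1312-1314). Reverting to "standard" once the character moves
    away corresponds to steps 1316-1318."""
    active = [t for t in tags if in_range(char_pos, t)]
    if not active:
        return "standard"
    nearest = min(active,
                  key=lambda t: math.dist(char_pos, (t["x"], t["y"], t["z"])))
    return "dynamic:" + nearest["reaction"]
```

Calling `select_animation` once per frame reproduces the flow chart: the canned animation plays until a tag comes into range, and resumes as soon as the character leaves it.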
- FIG. 14 shows a simplified flow chart of the steps performed by the reactive animation engine E of the instant invention in order to generate the dynamic animation sequence in response to an activated tag.
- the animation engine reads the tag to determine the type of tag, its exact location and any other information that is associated with the tag (step 1404 ).
- the engine defines key frames for use in generating the dynamic animation (step 1406 ).
- the key frames and tag information are then used, together with inbetweening and inverse kinematics, to create an animation sequence for the character on-the-fly (step 1408 ).
- the dynamic animation sequence is adapted from the standard animation, so that only part of the animation needs to be modified, thereby reducing the overall work that must be done to provide the dynamic animation.
- the dynamic animation is preferably generated as an adaptation or alteration of the stored or standard animation. The dynamic animation then continues until the tag is no longer active (step 1410), at which time the character's animation returns to the standard animation.
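The simplest form of the inbetweening used in step 1408 is linear interpolation between two key-frame poses. The pose representation below (a dictionary of joint angles) is an assumption made for illustration; a production engine would typically interpolate quaternions or curves:

```python
def inbetween(key_a, key_b, t):
    """Generate one in-between frame at parameter t (0.0 at key_a, 1.0 at
    key_b) by linearly interpolating each joint angle. A minimal sketch
    assuming poses are dicts of joint angles in degrees."""
    return {joint: key_a[joint] + (key_b[joint] - key_a[joint]) * t
            for joint in key_a}
```

Sampling `t` over several frames between a key frame from the scripted animation and a key frame oriented toward the tag yields the smooth turn described in the text, with inverse kinematics then keeping the joint chain natural.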
- FIG. 15 shows an exemplary flow chart of the priority-based tagging feature of the instant invention.
- This feature enables several or many tags to be activated simultaneously while still having the character react in a realistic and priority-based manner.
- the animation engine determines the priority of the tag (step 1502 ), as well as doing the other things described above.
- the animation engine determines if any other tags are currently active (step 1506 ). If no other tags are active, the animation engine dynamically adapts or alters the animation, in the manner described above, to correspond to the active tag (step 1508 ).
- the reactive animation engine determines the priority of each of the other active tags (step 1510 ) to determine if the current tag has a higher priority relative to each of the other currently active tags (step 1512 ). If the current tag does have the highest priority, then the animation engine dynamically generates the character's animation based on the current tag (step 1514 ). If, on the other hand, another active tag has a higher priority than the currently active tag, then the animation engine E adapts the animation in accordance with the other tag having the highest priority (step 1516 ).
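Steps 1510-1516 amount to selecting the maximum-priority member of the active tag set. A sketch, with names and the tie-breaking rule assumed (the text does not specify how equal priorities are resolved; here the first-listed tag wins):

```python
def winning_tag(active_tags):
    """Among the currently active tags, return the one with the highest
    priority (steps 1510-1516); None when no tag is active."""
    winner = None
    for tag in active_tags:
        if winner is None or tag["priority"] > winner["priority"]:
            winner = tag
    return winner
```

The engine then adapts the animation to `winning_tag(...)` each frame, so a newly activated high-priority tag immediately takes over and the lower-priority tag resumes control when it deactivates.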
- FIGS. 6-9 illustrate an exemplary priority-based display sequence as just described.
- the reactive animation engine E of the instant invention can be used in a variety of video games and/or other graphical applications to improve realism and game play.
- the invention enables a character to appear as if it has “come to life” in the game environment.
- the instant invention is particularly advantageous when incorporated into role playing games wherein a character interacts with a 3D world and encounters a variety of objects and/or other characters that can have certain effects on a character.
- the animation engine of the instant invention can also be implemented such that the same tag has a different effect on the character depending on the state of a variable of the character at the time the tagged object is encountered.
- a tag may be defined such that it does not cause much of a reaction from the character when the character has a high sanity level.
- the same tag may cause a drastic reaction from the character (such as eyes bulging) when the character is going insane, i.e., when having a low sanity level.
- the same approach can be used with any other variable or role playing element, such as health or strength.
- Other characters, such as monsters, can also be tagged with prioritized tags as described above in order to cause the character to react to other characters as well as to other objects.
- Tags can also be defined such that factors other than proximity (such as timing, as in the candle/torch example above) can be used alone or in addition to proximity to cause activation of the tag.
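The state-dependent behaviour described above (the same tag eliciting different reactions at different sanity levels) reduces to a lookup on a character variable. The threshold and labels below are illustrative assumptions:

```python
def reaction_to(tag_reaction, sanity):
    """Same tag, different effect depending on a character variable: a mild
    response at high sanity, a drastic one (e.g. eyes bulging) at low
    sanity. The 0.25 threshold is an assumption for illustration."""
    if sanity < 0.25:
        return "drastic_" + tag_reaction
    return "mild_" + tag_reaction
```

Any other role-playing variable (health, strength) could be substituted for `sanity` without changing the structure.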
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
- This application is a continuation of application Ser. No. 12/654,844, filed Jan. 6, 2010, entitled “System and Method for Controlling Animation by Tagging Objects Within a Game Environment,” which is a continuation of application Ser. No. 10/078,526, now allowed, filed Feb. 21, 2002, entitled “System and Method for Controlling Animation by Tagging Objects Within a Game Environment,” which claims the benefit of U.S. provisional application Ser. No. 60/290,688 filed May 15, 2001, and U.S. provisional application Ser. No. 60/314,289 filed Aug. 24, 2001, the contents of which are incorporated by reference herein in their entirety.
- The present invention relates to computer graphics and more particularly to the use of computers to generate animated displays. Still more particularly, the invention relates to techniques for automatically controlling animation within a video game or other graphical presentation. In particular, the invention provides a reactive animation system that enables game characters or other graphical characters to appear much more realistic as they interact with a virtual world in which they are displayed. The reactive animation system enables, for example, the creation of virtual worlds where the character(s) therein will do things, such as have a facial, body or other type of physical or emotional reaction, in response to the character coming within proximity of a “tagged” element (such as a point of interest in a 3D world). The invention enables characters to appear much more realistic by giving the character a personality and appearing to bring the character to life in its virtual environment without having to script or animate each scene in advance.
- Many of us have seen films containing remarkably realistic dinosaurs, aliens, animated toys and other fanciful creatures. Such animations are made possible by computer graphics. Using such techniques, a computer graphics artist can specify how each object should look and how it should change in appearance over time, and a computer then models the objects and displays them on a display such as your television or a computer screen. The computer takes care of performing the many tasks required to make sure that each part of the displayed image is colored and shaped just right based on the position and orientation of each object in a scene, the direction in which light seems to strike each object, the surface texture of each object, and other factors.
- Because computer graphics generation is complex, computer-generated three-dimensional graphics just a few years ago were mostly limited to expensive specialized flight simulators, high-end graphics workstations and supercomputers. The public saw some of the images generated by these computer systems in movies and expensive television advertisements, but most of us couldn't actually interact with the computers doing the graphics generation. All this has changed with the availability of relatively inexpensive 3D graphics platforms such as, for example, the NINTENDO 64®, the NINTENDO GAMECUBE® and various 3D graphics cards now available for personal computers. It is now possible to interact with exciting 3D animations and simulations on relatively inexpensive computer graphics systems in your home or office.
- A problem graphics system designers have confronted is how to efficiently model and render realistic looking animations in real time or close to real time. To achieve more interesting dynamic animation, a number of video and computer games have used various animation techniques such as key frame transformations, inverse kinematics and the like to model and animate people, animals and other objects. See for example O'Rourke, Principles of Three-Dimensional Computer Animation (W. W. Norton 1998) at chapters 3 and 4 especially. While such techniques have been highly successful, animators have searched for ways to make animations more realistic without the need to control or map out each and every movement of an animated character beforehand.
- One approach that has appeal is to make the animation engine responsible for animated characters increasingly more intelligent. For example, it is possible to define an “intelligent” animated character within a three-dimensional environment and allow the character to react to the environment based on its programmed qualities. If the character is sufficiently intelligent, rather complex reactions can be dynamically created “on the fly” by the real time animation engine—saving the game developer the massive amount of time and effort that might otherwise be required to script out the animation sequence. See, for example, U.S. patent application Ser. No. 09/382,819 of Comair et al filed 25 Aug. 1999 entitled “Object Modeling For Computer Simulation And Animation” incorporated by reference herein.
- While such approaches have been successful, further improvements are possible. In particular, we have developed a new, efficient technique for causing an animated character to pay attention to an object within a virtual world by tagging the object. When the animated character moves into proximity with an object (e.g., in response to user control), the system checks whether the object is tagged. If the object is tagged, the animation engine animates the character to pay attention to the tagged object (e.g., by animating the character to look or stare at the tagged object so long as the character remains close to the tagged object). The tag-based animation engine can, for example, animate the character to turn toward or face the tagged object in the process of paying attention to it—creating a very realistic visual effect without the typical programming overhead normally required to specify which direction the animated character should face and when. In other words, in accordance with the invention, the tags can be defined by designers at any location in the virtual world and given certain characteristics that are designed to cause a character that comes into a defined proximity of the tag to have some sort of reaction to the tag. By defining several tags in a scene, such as, for example, in a virtual hallway through which a character is walking, the animation engine makes the character appear much more realistic, as if the character is coming to life through its reactions to the tags. The tags are preferably associated with virtual objects of the type that would typically cause a human to have a reaction in the real world. The tags are preferably defined to cause the same type of reaction in the character's animation, as a typical human would have in the same circumstance in the real world.
In this way, the character has much more human-like reactions to its environment while moving through the virtual world, and the character can be made to appear as if it has “come to life.”
- In accordance with the invention, the tagging system of the animation engine is preferably priority based. In other words, each tag is assigned a priority value that is used by the animation engine to control which tag will be used when more than one tag is active. By prioritizing the tags in the environment, the animation engine is able to display the character as paying attention to or reacting to the tagged object that is of highest interest to the character, based on the character's current environment and/or state, from among several tags that may be in proximity to the character at any one time. This tag prioritization feature further helps to make the character appear more realistic by enabling the character to prioritize its reactions in the same or similar way to that of a human. For example, in the real world, humans typically are confronted with numerous objects (e.g., interesting painting, view, other object etc.) or events (loud noise, flashing light, movement, etc.) that may cause a reaction at any one time. However, humans, by their nature, typically react to the one thing that seems to be the most important at each instant in time. For instance, a human would typically stop looking at a piece of art when a loud noise comes from another object, and then quickly turn in the direction of the loud noise. Upon determining that the noise is not a problem, a human would then typically resume looking at the piece of art. These same human-like movements and reactions can be generated by the reactive animation system of the invention, by giving the object that makes the noise a higher priority tag while active as compared to the tag associated with the piece of art. In this way, all of the tagged objects in the environment can have relative priorities assigned thereto based on, for example, the nature of the object.
- In one particular embodiment, the object can be tagged with a tag that inspires an emotion in the character while paying attention to the tagged object. The emotion can, for example, be fear, happiness, or any other discernible emotion. If an object is tagged to inspire fear, the character can be animated to turn toward the object and react with a look of horror. If an object is tagged to inspire happiness, the character can be animated to turn toward the object and react with a big smile. Other emotions and reactions are possible. In fact, the tag can be defined to cause any type of response that corresponds to any variable or role-playing element that the character may have, as well as to cause emotional and/or physical reactions. For example, the tag could modify the animation of the character so that the character appears injured, sick or insane while under the influence of an active tag.
- In accordance with the invention, the character's animation is adapted to the tag when the tag is activated. Activation of the tag can occur when the user gets within a selected distance from the tag and/or based on some other defined event. The adaptation of the animation is preferably done by defining key frames for use in creating a dynamic animation sequence using the information provided by the tag. The dynamic animation sequence is preferably generated using the techniques known in the art as “Inbetweening” and “Inverse Kinematics.” Inbetweening enables the frames between the key frames to be generated for the dynamic animation, and inverse kinematics is used to assure that the character's movements are natural during the animation. Once a tag is activated, the animation of the character is adapted from its generic or canned animation to a dynamic animation based on the type of tag that has been encountered. Thus, the tag triggers a dynamic modification of the character's animation for the period of time it takes for the tag to become inactive, such as by the character moving out of range of the tag.
- The dynamic animation provided by the reactive animation engine of the instant invention provides a character (or other object) with realistic reactions as the character moves through the virtual environment, without having to handcraft the animation for every possible scene in advance. Thus, the invention enables animation to be generated on-the-fly and in an unpredictable and realistic manner. As a result, the character's animation can constantly change in a variety of ways and depending on many possible variables. This makes the character's animation unpredictable and greatly enhances the visual effect of the display. A significant advantage of the instant invention is that the character can be displayed with a myriad of animations without having to script, hand-craft or store each of the animations in advance. Instead, the reactive animations are dynamically generated on-the-fly and in real time based on the tag.
- These and other features and advantages of the present invention may be better and more completely understood by referring to the following detailed description of presently preferred example embodiments in conjunction with the drawings, of which:
-
FIGS. 1-5 show example screen effects for a first exemplary animation sequence provided by a preferred embodiment of the invention; -
FIG. 5A shows an example conceptual display illustrating the location of a tag and a vector from the character's eyes to the tag; -
FIGS. 6, 7A, 7B, 8 and 9 show example screen effects for a second exemplary animation sequence by a preferred embodiment of the invention; -
FIGS. 10A-10B illustrate an example system that may be used to create the displays of FIGS. 1-9; -
FIG. 11 is an example flowchart of steps performed by a tag-based animation engine of the instant invention; -
FIG. 12 illustrates an example tag data structure for tags used in accordance with the instant invention; -
FIG. 13 is a more detailed example flow chart of steps performed by the tag-based animation engine of the instant invention; -
FIG. 14 is an exemplary flow chart of the steps performed by the tag-based animation engine of the instant invention in order to generate a dynamic animation sequence; and -
FIG. 15 is an exemplary flow chart of the steps performed by the tag-based animation engine for tag priority management. -
FIGS. 1-5 show example screen effects provided by a preferred exemplary embodiment of this invention. These Figures show an animated character 10 moving through an illustrative video game environment such as a corridor of a large house or castle. Hanging on the wall 11 of the corridor is a 3D object 12 representing a painting. This object 12 is “tagged” electronically to indicate that character 10 should pay attention to it when the character is within a certain range of the painting. As the character 10 moves down the corridor (e.g., in response to user manipulation of a joystick or other interactive input device) (see FIG. 1) and into proximity to tagged object 12, the character's animation is dynamically adapted so that the character appears to be paying attention to the tagged object by, for example, facing the tagged object 12 (see FIG. 2). In the example embodiment, the character 10 continues to face and pay attention to the tagged object 12 while it remains in proximity to the tagged object (see FIG. 3). As the character moves out of proximity to the tagged object 12 (see FIG. 4), it ceases paying attention to the tagged object by ceasing to turn towards it. Once the animated character 10 is more than a predetermined virtual distance away from the tagged object 12, the character no longer pays attention to the object and the object no longer influences the character. - When the character first enters the corridor, as shown in
FIG. 1, the character is animated using an existing or generic animation that simply shows the character walking. However, when the tag becomes active, i.e., the character approaches the painting 12, the reactive animation engine of the instant invention adapts or modifies the animation so that the character pays attention to the painting in a natural manner. The animation is preferably adapted from the existing animation by defining key frames and using the tag information (including the location and type of tag). More particularly, inbetweening and inverse kinematics are used to generate (i.e., calculate) a dynamic animation sequence for the character using the key frames and based on the tag. The dynamic animation sequence (rather than the existing or generic animation) is then displayed while the character is within proximity to the tag. However, when the tag is no longer active, the character's animation returns to the stored or canned animation (e.g., a scripted and stored animation that simply shows the character walking down the hallway and looking straight ahead). - In the screen effects shown in
FIGS. 1-5, the object 12 is tagged with a command for character 10 to pay attention to the object but with no additional command eliciting emotion. Thus, FIGS. 1-5 show the character 10 paying attention to the tagged object 12 without any change of emotion. However, in accordance with the invention, it is also possible to tag object 12 with additional data or command(s) that cause character 10 to do something in addition to (or instead of) paying attention to the tagged object. In one illustrative example, the tagged object 12 elicits an emotion or other reaction (e.g., fear, happiness, belligerence, submission, etc.). In other illustrative examples, the tagged object can repel rather than attract character 10—causing the character to flee, for example. Any physical, emotional or combined reaction can be defined by the tag, such as facial expressions or posture change, as well as changes in any body part of the character (e.g., position of head, shoulders, feet, arms, etc.). -
FIG. 5A is an example conceptual drawing showing the theory of operation of the preferred embodiment. Referring to FIG. 5A, the “tag” T associated with an item in the 3D world is specified based on its coordinates in 3D space. Thus, to tag a particular object 12, one specifies the location of a “tag point” or “tag surface” in 3D space to coincide with the position of a desired object in 3D space. FIG. 5A shows a “tag” T (having a visible line from the character to the tag for illustration purposes) defined on the painting in the 3D virtual world. In accordance with the invention, the animated character 10 automatically responds by turning its head toward the “tag”, thereby appearing to pay attention to the tagged object. The dotted line in FIG. 5A illustrates a vector from the center of the character 10 to the tag T. The animation engine can calculate this vector based on the relative positions of character 10 and tag T in 3D space and use the vector in connection with dynamically animating the character. - In accordance with a preferred embodiment of the invention, one can place any number of tags T at any number of locations within the 3D space. Any number of animated characters 10 (or any subsets of such characters, with different characters potentially being sensitive to different tags T) can react to the tags as they travel through the 3D world.
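The character-to-tag vector of FIG. 5A reduces to a coordinate subtraction and, for a head turn about the vertical axis, an `atan2`. A minimal sketch, assuming a Y-up coordinate system with 0 degrees of yaw pointing along +Z (both assumptions, not stated in the patent):

```python
import math

def yaw_toward(char_pos, tag_pos):
    """Yaw angle in degrees that points the character's head along the
    character-to-tag vector (the dotted line of FIG. 5A), using the X/Z
    ground-plane components of the vector."""
    dx = tag_pos[0] - char_pos[0]
    dz = tag_pos[2] - char_pos[2]
    return math.degrees(math.atan2(dx, dz))
```

The engine would feed this angle (or the full 3D vector, for pitch as well) into the key frames used for the dynamic turn-and-look animation.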
-
FIGS. 6-9 illustrate another embodiment of the invention, wherein two tags are defined in the corridor through which the character is walking. A first tag T1 is provided on the painting as described above in connection with the display sequence of FIGS. 1-5. However, in this embodiment, a second tag T2 is provided on the wall mounted candle. This second tag is different from the first tag in that it is defined to only cause a reaction from the character when the candle is animated to flare up like a powerful torch (see FIG. 7A). The second tag T2 is given a higher priority than the first tag T1. The reactive animation engine is programmed to only allow the player to react to one tag at a time, that one tag being the tag that has the highest priority of any active tags. As a result, when the character 10 is walking down the corridor and gets within proximity of the two tags, the second tag is not yet active due to the fact that the candle is not flaring up. Thus, the character turns to look at the only active tag T1 (i.e., the painting) (see FIG. 6). However, when the candle flares up, the second tag T2, which has a higher priority than T1, also becomes active, thereby causing the character to stop looking at the painting and turn its attention to the flaring torch (i.e., the active tag with the highest priority) (see FIG. 7A). Once the torch stops flaring and returns to a normal candle, the second tag T2 is no longer active and the reactive animation engine then causes the character to again turn its attention to the painting (i.e., the only active tag) (see FIG. 7B). Once the character begins to move past the painting, the character's head then begins to turn naturally back (see FIG. 8) to the forward or uninterested position corresponding to the stored animation (see FIG. 9). Thus, in accordance with the invention, the character responds to active tags based on their assigned priority.
In this way, the character is made to look very realistic and appears as if it has come to life within its environment. As explained above, the reactive animation engine E dynamically generates the character's animation to make the character react in a priority-based manner to the various tags that are defined in the environment. -
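The candle-and-painting sequence of FIGS. 6-9 can be traced in a few lines. The tag records and names below are illustrative assumptions; the point is that toggling the higher-priority tag's active flag is all it takes to redirect the character's attention:

```python
def attention_target(tags):
    # The character attends to the highest-priority active tag, if any.
    active = [t for t in tags if t["active"]]
    if not active:
        return None
    return max(active, key=lambda t: t["priority"])["name"]

painting = {"name": "painting", "priority": 1, "active": True}   # tag T1
torch = {"name": "torch", "priority": 2, "active": False}        # tag T2

sequence = []
for flaring in (False, True, False):   # candle normal, flares up, normal again
    torch["active"] = flaring
    sequence.append(attention_target([painting, torch]))
# sequence == ["painting", "torch", "painting"]
```

This mirrors the described displays: painting (FIG. 6), flaring torch (FIG. 7A), then back to the painting (FIG. 7B).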
FIG. 10A shows an example interactive 3D computer graphics system 50. System 50 can be used to play interactive 3D video games with interesting animation provided by a preferred embodiment of this invention. System 50 can also be used for a variety of other applications. - In this example,
system 50 is capable of processing, interactively in real time, a digital representation or model of a three-dimensional world. System 50 can display some or all of the world from any arbitrary viewpoint. For example, system 50 can interactively change the viewpoint in response to real time inputs from handheld controllers. System 50 can be used for applications that do not require real time 3D interactive display (e.g., 2D display generation and/or non-interactive display), but the capability of displaying quality 3D images very quickly can be used to create very realistic and exciting game play or other graphical interactions. - To play a video game or other
application using system 50, the user first connects a main unit 54 to his or her color television set 56 or other display device by connecting a cable 58 between the two. Main unit 54 produces both video signals and audio signals for controlling color television set 56. The video signals control the images displayed on the television screen 59, and the audio signals are played back as sound through television stereo loudspeakers. - The user also needs to connect
main unit 54 to a power source. This power source may be a conventional AC adapter (not shown) that plugs into a standard home electrical wall socket and converts the house current into a lower DC voltage signal suitable for powering the main unit 54. Batteries could be used in other implementations. - The user may use
hand controllers to control main unit 54. Controls 60 can be used, for example, to specify the direction (up or down, left or right, closer or further away) that a character displayed on television 56 should move within a 3D world. Controls 60 also provide input for other applications (e.g., menu selection, pointer/cursor control, etc.). Controllers 52 can take a variety of forms. In this example, controllers 52 shown each include controls 60 such as joysticks, push buttons and/or directional switches. Controllers 52 may be connected to main unit 54 by cables or wirelessly via electromagnetic (e.g., radio or infrared) waves. - To play an application such as a game, the user selects an
appropriate storage medium 62 storing the video game or other application he or she wants to play, and inserts that storage medium into a slot 64 in main unit 54. Storage medium 62 may, for example, be a specially encoded and/or encrypted optical and/or magnetic disk. The user may operate a power switch 66 to turn on main unit 54 and cause the main unit to begin running the video game or other application based on the software stored in the storage medium 62. The user may operate controllers 52 to provide inputs to main unit 54. For example, operating a control 60 may cause the game or other application to start. Moving other controls 60 can cause animated characters to move in different directions or change the user's point of view in a 3D world. Depending upon the particular software stored within the storage medium 62, the various controls 60 on the controller 52 can perform different functions at different times. - As also shown in
FIG. 10A, mass storage device 62 stores, among other things, a tag-based animation engine E used to animate characters based on tags stored in the character's video game environment. The details of the preferred embodiment tag-based animation engine E will be described shortly. Such tag-based animation engine E in the preferred embodiment makes use of various components of system 50 shown in FIG. 10B including: -
- a main processor (CPU) 110,
- a
main memory 112, and - a graphics and
audio processor 114.
- In this example, main processor 110 (e.g., an enhanced IBM Power PC 750) receives inputs from handheld controllers 52 (and/or other input devices) via graphics and
audio processor 114.Main processor 110 interactively responds to user inputs, and executes a video game or other program supplied, for example, byexternal storage media 62 via a massstorage access device 106 such as an optical disk drive. As one example, in the context of video game play,main processor 110 can perform collision detection and animation processing in addition to a variety of interactive and control functions. - In this example,
main processor 110 generates 3D graphics and audio commands and sends them to graphics andaudio processor 114. The graphics andaudio processor 114 processes these commands to generate interesting visual images ondisplay 59 and interesting stereo sound onstereo loudspeakers Main processor 110 and graphics andaudio processor 114 also perform functions to support and implement the preferred embodiment tag-based animation engine E based on instructions and data E′ relating to the engine that is stored in DRAMmain memory 112 andmass storage device 62. - As further shown in
FIG. 10B ,example system 50 includes avideo encoder 120 that receives image signals from graphics andaudio processor 114 and converts the image signals into analog and/or digital video signals suitable for display on a standard display device such as a computer monitor or homecolor television set 56.System 50 also includes an audio codec (compressor/decompressor) 122 that compresses and decompresses digitized audio signals and may also convert between digital and analog audio signaling formats as needed.Audio codec 122 can receive audio inputs via abuffer 124 and provide them to graphics andaudio processor 114 for processing (e.g., mixing with other audio signals the processor generates and/or receives via a streaming audio output of mass storage access device 106). Graphics andaudio processor 114 in this example can store audio related information in anaudio memory 126 that is available for audio tasks. Graphics andaudio processor 114 provides the resulting audio output signals toaudio codec 122 for decompression and conversion to analog signals (e.g., viabuffer amplifiers loudspeakers - Graphics and
audio processor 114 has the ability to communicate with various additional devices that may be present withinsystem 50. For example, a paralleldigital bus 130 may be used to communicate with massstorage access device 106 and/or other components. A serialperipheral bus 132 may communicate with a variety of peripheral or other devices including, for example: -
- a programmable read-only memory and/or real time clock 134,
- a modem 136 or other networking interface (which may in turn connect system 50 to a telecommunications network 138 such as the Internet or other digital network from/to which program instructions and/or data can be downloaded or uploaded), and
- flash memory 140.
- A further external
serial bus 142 may be used to communicate with additional expansion memory 144 (e.g., a memory card) or other devices. Connectors may be used to connect various devices to the busses. For further details concerning system 50, see for example U.S. patent application Ser. No. 09/723,335 filed Nov. 28, 2000 entitled “EXTERNAL INTERFACES FOR A 3D GRAPHICS SYSTEM”, incorporated by reference herein. -
FIG. 11 shows a simplified flowchart of the tag-based animation engine E of the instant invention. Animation engine E may be implemented for example by software executing on main processor 110. Tag-based animation engine E may first initialize a 3D world and animation game play (block 1002), and may then accept user inputs supplied for example via handheld controller(s) 52 (block 1004). In response to such user inputs, engine E may animate one or more animated characters 10 in a conventional fashion to cause such characters to move through the 3D world based on the accepted user inputs (block 1006). Tag-based animation engine E also detects whether any moving character is in proximity to a tag T defined within the 3D world (decision block 1008). If a character 10 is in proximity to a tag T, the animation engine E reads the tag and computes (e.g., through mathematical computation and associated modeling, such as by using inbetweening and inverse kinematics) a dynamic animation sequence for the character 10 to make the character realistically turn toward or otherwise react to the tag (block 1010). Processing continues (blocks 1004-1010) until the game is stopped or some other event causes an interruption. -
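The loop of FIG. 11 can be sketched as follows. This is a minimal illustration under stated assumptions — the dictionary layout, field names, and the string "react:…" stand-in for the computed animation sequence are all hypothetical, not taken from the patent:

```python
import math

def run_engine(world, frames, get_input):
    """Sketch of blocks 1002-1010: accept input, move the character,
    detect tag proximity, and switch to a dynamic reaction animation."""
    character = world["character"]
    for _ in range(frames):
        dx, dy, dz = get_input()                           # block 1004: user input
        character["pos"] = tuple(p + d for p, d in
                                 zip(character["pos"], (dx, dy, dz)))  # block 1006
        near = [t for t in world["tags"]
                if math.dist(character["pos"], t["pos"]) <= t["proximity"]]  # block 1008
        # block 1010: a real engine computes the sequence via inbetweening and
        # inverse kinematics; here we merely record which tag drives the reaction
        character["animation"] = "react:" + near[0]["id"] if near else "scripted"
    return character

world = {"character": {"pos": (0.0, 0.0, 0.0), "animation": "scripted"},
         "tags": [{"id": "statue", "pos": (3.0, 0.0, 0.0), "proximity": 1.5}]}
result = run_engine(world, frames=3, get_input=lambda: (1.0, 0.0, 0.0))
print(result["animation"])  # character has walked into range: react:statue
```

The proximity test (decision block 1008) is the only coupling between user-driven movement and tag-driven animation, which is what lets designers add tags without touching the scripted animation.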
FIG. 12 shows an illustrative data structure 1100 for a tag T. In the example shown, data structure 1100 includes a tag ID field 1102 that identifies the tag; three-dimensional (i.e., X, Y, Z) positional coordinate fields; a proximity field defining how close a character 10 must be to the tag in order to react to the tag; a type of tag or reaction code 1112 specifying the type of reaction to be elicited (e.g., pay attention to the tag, flee from the tag, react with a particular emotion, etc.); and a priority field 114 that defines a priority for the tag relative to other tags that may be activated at the same time as the tag. -
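Data structure 1100 might be modeled as below. Field names paraphrase FIG. 12; the reaction-code strings and the `in_range` helper are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class TagRecord:
    """Sketch of tag data structure 1100 (FIG. 12)."""
    tag_id: str        # tag ID field 1102
    x: float           # X, Y, Z positional coordinate fields
    y: float
    z: float
    proximity: float   # how close a character must be before the tag fires
    reaction: str      # reaction code 1112, e.g. "attention", "flee", "fear"
    priority: int      # priority field, compared when several tags are active

    def in_range(self, px: float, py: float, pz: float) -> bool:
        # squared-distance comparison avoids a square root per check
        return ((px - self.x) ** 2 + (py - self.y) ** 2
                + (pz - self.z) ** 2) <= self.proximity ** 2

painting = TagRecord("painting", 5.0, 0.0, 2.0, proximity=3.0,
                     reaction="attention", priority=2)
print(painting.in_range(4.0, 0.0, 1.0))   # True: within 3 units of the tag
```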
FIG. 13 shows a more detailed exemplary flow chart of the steps performed by the reactive animation engine E of the instant invention. Once the 3D world and game play are initialized (step 1302), the system accepts user inputs to control the character within the environment in a conventional manner (step 1304). The system initially uses scripted or canned animation that is provided with the game for the character (step 1306). The animation engine checks the character's position relative to the tags that have been defined in the 3D world by the designers of the game (step 1308). If the character is not within proximity to a tag, then the standard animation continues for the character (step 1310). However, when a tag is detected (step 1308), the tag is read to determine the type of reaction that the tag is supposed to elicit from the character and the exact location of the tag in the 3D world (step 1312). The animation engine E then uses key frames (some or all of which may come from the scripted animation) and the tag information to dynamically adapt or alter the animation of the character to the particular tag encountered (step 1314). The dynamic animation is preferably generated using a combination of inbetweening and inverse kinematics to provide a smooth and realistic animation showing a reaction to the tag. Particular facial animations may also be used to give the character facial emotions or reactions to the tag. These facial animations can be selected from a defined pool of facial animations, and inbetweening or other suitable animation techniques can be used to further modify or dynamically change the facial expressions of the character in response to the tag. The dynamic animation then continues until the tag is no longer active (step 1316), as a result of, for example, the character moving out of range of the tag. 
Once the dynamic animation is completed, the standard or scripted animation is then used for the character until another tag is activated (step 1318). -
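The inbetweening mentioned in step 1314 can be reduced to its simplest form — linear interpolation between two key-frame poses. The joint names and angle values below are illustrative; a production engine would blend spline curves or quaternions rather than raw angles:

```python
def inbetween(key_a, key_b, t):
    """Generate an intermediate pose between two key frames (0 <= t <= 1).
    Each pose is a dict of joint name -> angle in degrees."""
    return {joint: key_a[joint] + (key_b[joint] - key_a[joint]) * t
            for joint in key_a}

neutral = {"neck_yaw": 0.0, "neck_pitch": 0.0}          # scripted pose
look_at_tag = {"neck_yaw": 45.0, "neck_pitch": -10.0}   # pose facing the tag
# five frames sweeping the head from the scripted pose to the tag
frames = [inbetween(neutral, look_at_tag, i / 4) for i in range(5)]
print(frames[2])  # midpoint pose: {'neck_yaw': 22.5, 'neck_pitch': -5.0}
```

Because the end key frame can come from the scripted animation itself, only the frames between the two poses need to be generated, which matches the patent's point that the dynamic animation is an adaptation of the stored animation rather than a wholesale replacement.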
FIG. 14 shows a simplified flow chart of the steps performed by the reactive animation engine E of the instant invention in order to generate the dynamic animation sequence in response to an activated tag. As seen in FIG. 14, once a tag is activated (step 1402), the animation engine reads the tag to determine the type of tag, its exact location and any other information that is associated with the tag (step 1404). The engine then defines key frames for use in generating the dynamic animation (step 1406). The key frames and tag information are then used, together with inbetweening and inverse kinematics, to create an animation sequence for the character on-the-fly (step 1408). Preferably, the dynamic animation sequence is adapted from the standard animation, so that only part of the animation needs to be modified, thereby reducing the overall work that must be done to provide the dynamic animation. In other words, the dynamic animation is preferably generated as an adaptation or alteration of the stored or standard animation. The dynamic animation then continues until the tag is no longer active (step 1410), at which time the character's animation returns to the standard animation. -
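One ingredient of step 1408 — orienting the character toward the tag's exact location — can be sketched as a clamped yaw computation. This is a simple stand-in for full inverse kinematics, and every name and the per-frame turn limit are assumptions for illustration:

```python
import math

def yaw_toward(char_pos, char_yaw, tag_pos, max_turn=30.0):
    """Steer the character's head yaw toward a tag position, clamping the
    per-frame turn so the reaction spreads smoothly over several frames."""
    dx = tag_pos[0] - char_pos[0]
    dz = tag_pos[2] - char_pos[2]
    target = math.degrees(math.atan2(dx, dz))              # yaw facing the tag
    delta = (target - char_yaw + 180.0) % 360.0 - 180.0    # shortest arc
    return char_yaw + max(-max_turn, min(max_turn, delta))

# tag at 45 degrees to the character's right, reached over two frames
yaw = yaw_toward((0.0, 0.0, 0.0), 0.0, (1.0, 0.0, 1.0))   # clamped to 30
yaw = yaw_toward((0.0, 0.0, 0.0), yaw, (1.0, 0.0, 1.0))   # settles at 45
print(round(yaw, 1))
```

Clamping the per-frame rotation is what gives the "realistically turn toward" quality the description calls for: the head tracks the tag instead of snapping to it.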
FIG. 15 shows an exemplary flow chart of the priority-based tagging feature of the instant invention. This feature enables several or many tags to be activated simultaneously while still having the character react in a realistic, priority-based manner. As seen in FIG. 15, when a tag is activated, the animation engine determines the priority of the tag (step 1502), in addition to performing the other operations described above. The animation engine then determines if any other tags are currently active (step 1506). If no other tags are active, the animation engine dynamically adapts or alters the animation, in the manner described above, to correspond to the active tag (step 1508). If, on the other hand, one or more other tags are currently active, the reactive animation engine determines the priority of each of the other active tags (step 1510) to determine if the current tag has a higher priority relative to each of the other currently active tags (step 1512). If the current tag does have the highest priority, then the animation engine dynamically generates the character's animation based on the current tag (step 1514). If, on the other hand, another active tag has a higher priority than the currently active tag, then the animation engine E adapts the animation in accordance with the other tag having the highest priority (step 1516). When the other tag having a higher priority is no longer active, but the original tag (i.e., from step 1502) is still active, the animation engine dynamically generates the character's animation based on the original tag as soon as the higher-priority tag has become inactive. In this way, the character's attention can be smoothly and realistically changed from one tagged object to another tagged object, as well as from no tagged object to a tagged object. FIGS. 6-9 illustrate an exemplary priority-based display sequence as just described. 
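The arbitration of steps 1506-1516 amounts to selecting the highest-priority member of the active set each frame. A minimal sketch, assuming dict-shaped tags; the patent does not specify tie-breaking, so first-encountered wins here:

```python
def select_tag(active_tags):
    """Among simultaneously active tags, the highest-priority one drives
    the character's animation; returns None when no tag is active."""
    if not active_tags:
        return None
    return max(active_tags, key=lambda tag: tag["priority"])

active = [{"id": "candle", "priority": 1}, {"id": "monster", "priority": 5}]
print(select_tag(active)["id"])   # the monster outranks the candle
active = [t for t in active if t["id"] != "monster"]  # higher-priority tag deactivates
print(select_tag(active)["id"])   # attention returns to the still-active candle
```

Re-running the selection whenever the active set changes is what yields the behavior described above: when the higher-priority tag goes inactive, the original tag automatically takes over again.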
- As can be seen from the description above, the reactive animation engine E of the instant invention can be used in a variety of video games and/or other graphical applications to improve realism and game play. The invention enables a character to appear as if it has “come to life” in the game environment. The instant invention is particularly advantageous when incorporated into role playing games wherein a character interacts with a 3D world and encounters a variety of objects and/or other characters that can have certain effects on the character. The animation engine of the instant invention can also be implemented such that the same tag has a different effect on the character depending on the state of a variable of the character at the time the tagged object is encountered. One such variable could be the “sanity” level of the player in a sanity-based game, such as described in U.S. provisional application Ser. No. 60/184,656 filed Feb. 24, 2000 and entitled “Sanity System for Video Game”, the disclosure of which is incorporated by reference herein. In other words, a tag may be defined such that it does not cause much of a reaction from the character when the character has a high sanity level. On the other hand, the same tag may cause a drastic reaction from the character (such as eyes bulging) when the character is going insane, i.e., when having a low sanity level. Any other variable or role playing element, such as health or strength, could also be used to control the type of reaction that a particular tag has on the particular character at any given time during the game. Other characters, such as monsters, can also be tagged, with prioritized tags as described above, in order to cause the character to react to other characters as well as to other objects. Tags can also be defined such that factors other than proximity (such as timing, as in the candle/torch example above) can be used alone or in addition to proximity to cause activation of the tag.
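The state-dependent reaction described above — the same tag producing a mild or drastic response depending on the character's sanity level — can be sketched as a lookup gated by the character variable. The threshold value and reaction labels are illustrative assumptions, not values from the patent:

```python
def reaction_for(tag, character, sanity_threshold=30):
    """Map a tag's reaction code to an actual reaction, modulated by the
    character's current 'sanity' variable (per the sanity-based game example)."""
    if tag["reaction"] == "fear":
        # low sanity: drastic reaction (e.g., eyes bulging); high sanity: mild
        return "drastic" if character["sanity"] < sanity_threshold else "mild"
    return tag["reaction"]

scary_tag = {"id": "monster", "reaction": "fear"}
print(reaction_for(scary_tag, {"sanity": 80}))  # mild
print(reaction_for(scary_tag, {"sanity": 10}))  # drastic
```

The same gating pattern extends to any other role-playing variable the passage mentions, such as health or strength — only the variable read and the threshold change.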
- While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
Claims (16)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/064,531 US8319779B2 (en) | 2001-05-15 | 2011-03-30 | System and method for controlling animation by tagging objects within a game environment |
US13/657,290 US8593464B2 (en) | 2001-05-15 | 2012-10-22 | System and method for controlling animation by tagging objects within a game environment |
US14/049,668 US8976184B2 (en) | 2001-05-15 | 2013-10-09 | System and method for controlling animation by tagging objects within a game environment |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US29068801P | 2001-05-15 | 2001-05-15 | |
US31428901P | 2001-08-24 | 2001-08-24 | |
US10/078,526 US7667705B2 (en) | 2001-05-15 | 2002-02-21 | System and method for controlling animation by tagging objects within a game environment |
US12/654,844 US7928986B2 (en) | 2001-05-15 | 2010-01-06 | System and method for controlling animation by tagging objects within a game environment |
US13/064,531 US8319779B2 (en) | 2001-05-15 | 2011-03-30 | System and method for controlling animation by tagging objects within a game environment |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/654,844 Continuation US7928986B2 (en) | 2001-05-15 | 2010-01-06 | System and method for controlling animation by tagging objects within a game environment |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/657,290 Continuation US8593464B2 (en) | 2001-05-15 | 2012-10-22 | System and method for controlling animation by tagging objects within a game environment |
Publications (2)
Publication Number | Publication Date |
---|---|
US20110181607A1 true US20110181607A1 (en) | 2011-07-28 |
US8319779B2 US8319779B2 (en) | 2012-11-27 |
Family
ID=27373303
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/078,526 Expired - Lifetime US7667705B2 (en) | 2001-05-15 | 2002-02-21 | System and method for controlling animation by tagging objects within a game environment |
US12/654,844 Expired - Fee Related US7928986B2 (en) | 2001-05-15 | 2010-01-06 | System and method for controlling animation by tagging objects within a game environment |
US13/064,531 Expired - Fee Related US8319779B2 (en) | 2001-05-15 | 2011-03-30 | System and method for controlling animation by tagging objects within a game environment |
US13/657,290 Expired - Lifetime US8593464B2 (en) | 2001-05-15 | 2012-10-22 | System and method for controlling animation by tagging objects within a game environment |
US14/049,668 Expired - Lifetime US8976184B2 (en) | 2001-05-15 | 2013-10-09 | System and method for controlling animation by tagging objects within a game environment |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/078,526 Expired - Lifetime US7667705B2 (en) | 2001-05-15 | 2002-02-21 | System and method for controlling animation by tagging objects within a game environment |
US12/654,844 Expired - Fee Related US7928986B2 (en) | 2001-05-15 | 2010-01-06 | System and method for controlling animation by tagging objects within a game environment |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/657,290 Expired - Lifetime US8593464B2 (en) | 2001-05-15 | 2012-10-22 | System and method for controlling animation by tagging objects within a game environment |
US14/049,668 Expired - Lifetime US8976184B2 (en) | 2001-05-15 | 2013-10-09 | System and method for controlling animation by tagging objects within a game environment |
Country Status (1)
Country | Link |
---|---|
US (5) | US7667705B2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100082798A1 (en) * | 2008-09-26 | 2010-04-01 | International Business Machines Corporation | Virtual universe avatar activities review |
US8593464B2 (en) | 2001-05-15 | 2013-11-26 | Nintendo Co., Ltd. | System and method for controlling animation by tagging objects within a game environment |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4785283B2 (en) * | 2000-07-31 | 2011-10-05 | キヤノン株式会社 | Server computer, control method and program |
US8037150B2 (en) | 2002-11-21 | 2011-10-11 | Aol Inc. | System and methods for providing multiple personas in a communications environment |
US7636755B2 (en) | 2002-11-21 | 2009-12-22 | Aol Llc | Multiple avatar personalities |
ATE395671T1 (en) * | 2002-11-25 | 2008-05-15 | Mentorwave Technologies Ltd | METHOD AND APPARATUS FOR VIRTUAL TOUR |
US7913176B1 (en) | 2003-03-03 | 2011-03-22 | Aol Inc. | Applying access controls to communications with avatars |
US20040179037A1 (en) | 2003-03-03 | 2004-09-16 | Blattner Patrick D. | Using avatars to communicate context out-of-band |
US7908554B1 (en) | 2003-03-03 | 2011-03-15 | Aol Inc. | Modifying avatar behavior based on user action or mood |
US20050168485A1 (en) * | 2004-01-29 | 2005-08-04 | Nattress Thomas G. | System for combining a sequence of images with computer-generated 3D graphics |
JP4559092B2 (en) * | 2004-01-30 | 2010-10-06 | 株式会社エヌ・ティ・ティ・ドコモ | Mobile communication terminal and program |
US9652809B1 (en) | 2004-12-21 | 2017-05-16 | Aol Inc. | Using user profile information to determine an avatar and/or avatar characteristics |
US20070162862A1 (en) * | 2005-07-06 | 2007-07-12 | Gemini Mobile Technologies, Inc. | Selective user monitoring in an online environment |
US20070011617A1 (en) * | 2005-07-06 | 2007-01-11 | Mitsunori Akagawa | Three-dimensional graphical user interface |
JP4116039B2 (en) * | 2006-01-27 | 2008-07-09 | 株式会社スクウェア・エニックス | GAME DEVICE, GAME PROGRESSING METHOD, PROGRAM, AND RECORDING MEDIUM |
JP4118920B2 (en) * | 2006-02-22 | 2008-07-16 | 株式会社スクウェア・エニックス | Game device, field boundary display method, program, and recording medium |
US9329743B2 (en) * | 2006-10-04 | 2016-05-03 | Brian Mark Shuster | Computer simulation method with user-defined transportation and layout |
US9098167B1 (en) | 2007-02-26 | 2015-08-04 | Qurio Holdings, Inc. | Layered visualization of content representations |
EP2132650A4 (en) * | 2007-03-01 | 2010-10-27 | Sony Comp Entertainment Us | System and method for communicating with a virtual world |
US20080215975A1 (en) * | 2007-03-01 | 2008-09-04 | Phil Harrison | Virtual world user opinion & response monitoring |
US9111285B2 (en) | 2007-08-27 | 2015-08-18 | Qurio Holdings, Inc. | System and method for representing content, user presence and interaction within virtual world advertising environments |
US8261307B1 (en) | 2007-10-25 | 2012-09-04 | Qurio Holdings, Inc. | Wireless multimedia content brokerage service for real time selective content provisioning |
US8495505B2 (en) | 2008-01-10 | 2013-07-23 | International Business Machines Corporation | Perspective based tagging and visualization of avatars in a virtual world |
US8819565B2 (en) * | 2008-05-14 | 2014-08-26 | International Business Machines Corporation | Describing elements in a virtual world based on changes since a previous encounter |
US9418330B2 (en) * | 2008-09-23 | 2016-08-16 | International Business Machines Corporation | System and method for enhancing user accessibility in a virtual universe |
KR101515859B1 (en) * | 2008-12-05 | 2015-05-06 | 삼성전자 주식회사 | Display apparatus and display method of contents list thereof |
US8988437B2 (en) | 2009-03-20 | 2015-03-24 | Microsoft Technology Licensing, Llc | Chaining animations |
US8262474B2 (en) | 2009-04-21 | 2012-09-11 | Mcmain Michael Parker | Method and device for controlling player character dialog in a video game located on a computer-readable storage medium |
US20110040555A1 (en) * | 2009-07-21 | 2011-02-17 | Wegner Peter Juergen | System and method for creating and playing timed, artistic multimedia representations of typed, spoken, or loaded narratives, theatrical scripts, dialogues, lyrics, or other linguistic texts |
US20140257806A1 (en) * | 2013-03-05 | 2014-09-11 | Nuance Communications, Inc. | Flexible animation framework for contextual animation display |
US20160062373A1 (en) * | 2014-08-29 | 2016-03-03 | Intel Corporation | Adaptive loading and cooling |
US20170228929A1 (en) * | 2015-09-01 | 2017-08-10 | Patrick Dengler | System and Method by which combining computer hardware device sensor readings and a camera, provides the best, unencumbered Augmented Reality experience that enables real world objects to be transferred into any digital space, with context, and with contextual relationships. |
US10552761B2 (en) * | 2016-05-04 | 2020-02-04 | Uvic Industry Partnerships Inc. | Non-intrusive fine-grained power monitoring of datacenters |
US10713832B2 (en) * | 2016-05-17 | 2020-07-14 | Disney Enterprises, Inc. | Precomputed environment semantics for contact-rich character animation |
CA2968589C (en) * | 2016-06-10 | 2023-08-01 | Square Enix Ltd. | System and method for placing a character animation at a location in a game environment |
CN106850650B (en) * | 2017-02-21 | 2021-06-04 | 网易(杭州)网络有限公司 | Method for accessing data by game client and client game system |
CN107577661B (en) * | 2017-08-07 | 2020-12-11 | 北京光年无限科技有限公司 | Interactive output method and system for virtual robot |
US10668382B2 (en) * | 2017-09-29 | 2020-06-02 | Sony Interactive Entertainment America Llc | Augmenting virtual reality video games with friend avatars |
US10748342B2 (en) | 2018-06-19 | 2020-08-18 | Google Llc | Interaction system for augmented reality objects |
GB2576213A (en) * | 2018-08-10 | 2020-02-12 | Sony Corp | A method for mapping an object to a location in virtual space |
CN114402264B (en) | 2019-09-17 | 2024-06-07 | Asml控股股份有限公司 | Laser module, metrology system and lithographic apparatus as alignment source |
JP7134197B2 (en) * | 2020-05-01 | 2022-09-09 | グリー株式会社 | Video distribution system, information processing method and computer program |
US12020555B2 (en) | 2022-10-17 | 2024-06-25 | Motorola Solutions, Inc. | System and method for detecting and tracking a status of an object relevant to an incident |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5563988A (en) * | 1994-08-01 | 1996-10-08 | Massachusetts Institute Of Technology | Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual environment |
US7006098B2 (en) * | 1998-02-13 | 2006-02-28 | Fuji Xerox Co., Ltd. | Method and apparatus for creating personal autonomous avatars |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4661810A (en) * | 1985-02-19 | 1987-04-28 | International Business Machines Corporation | Method for interactive rotation of displayed graphic objects |
US4698625A (en) * | 1985-05-30 | 1987-10-06 | International Business Machines Corp. | Graphic highlight adjacent a pointing cursor |
US6031549A (en) * | 1995-07-19 | 2000-02-29 | Extempo Systems, Inc. | System and method for directed improvisation by computer controlled characters |
US6009458A (en) | 1996-05-09 | 1999-12-28 | 3Do Company | Networked computer game system with persistent playing objects |
JPH10290886A (en) | 1997-02-18 | 1998-11-04 | Sega Enterp Ltd | Image processing device and image processing method |
US6191798B1 (en) * | 1997-03-31 | 2001-02-20 | Katrix, Inc. | Limb coordination system for interactive computer animation of articulated characters |
US6366285B1 (en) * | 1997-11-21 | 2002-04-02 | International Business Machines Corporation | Selection by proximity with inner and outer sensitivity ranges |
US6545682B1 (en) * | 2000-05-24 | 2003-04-08 | There, Inc. | Method and apparatus for creating and customizing avatars using genetic paradigm |
US7136786B2 (en) * | 2001-04-12 | 2006-11-14 | Mitsubishi Electric Research Laboratories, Inc. | Method and system for modeling interaction of objects |
US7667705B2 (en) | 2001-05-15 | 2010-02-23 | Nintendo Of America Inc. | System and method for controlling animation by tagging objects within a game environment |
US6791549B2 (en) * | 2001-12-21 | 2004-09-14 | Vrcontext S.A. | Systems and methods for simulating frames of complex virtual environments |
-
2002
- 2002-02-21 US US10/078,526 patent/US7667705B2/en not_active Expired - Lifetime
-
2010
- 2010-01-06 US US12/654,844 patent/US7928986B2/en not_active Expired - Fee Related
-
2011
- 2011-03-30 US US13/064,531 patent/US8319779B2/en not_active Expired - Fee Related
-
2012
- 2012-10-22 US US13/657,290 patent/US8593464B2/en not_active Expired - Lifetime
-
2013
- 2013-10-09 US US14/049,668 patent/US8976184B2/en not_active Expired - Lifetime
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5563988A (en) * | 1994-08-01 | 1996-10-08 | Massachusetts Institute Of Technology | Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual environment |
US7006098B2 (en) * | 1998-02-13 | 2006-02-28 | Fuji Xerox Co., Ltd. | Method and apparatus for creating personal autonomous avatars |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8593464B2 (en) | 2001-05-15 | 2013-11-26 | Nintendo Co., Ltd. | System and method for controlling animation by tagging objects within a game environment |
US20100082798A1 (en) * | 2008-09-26 | 2010-04-01 | International Business Machines Corporation | Virtual universe avatar activities review |
US8285790B2 (en) * | 2008-09-26 | 2012-10-09 | International Business Machines Corporation | Virtual universe avatar activities review |
US8635303B2 (en) | 2008-09-26 | 2014-01-21 | International Business Machines Corporation | Virtual universe avatar activities review |
US9623337B2 (en) | 2008-09-26 | 2017-04-18 | International Business Machines Corporation | Virtual universe avatar activities review |
Also Published As
Publication number | Publication date |
---|---|
US20100115449A1 (en) | 2010-05-06 |
US7667705B2 (en) | 2010-02-23 |
US20020171647A1 (en) | 2002-11-21 |
US8593464B2 (en) | 2013-11-26 |
US7928986B2 (en) | 2011-04-19 |
US8319779B2 (en) | 2012-11-27 |
US8976184B2 (en) | 2015-03-10 |
US20130044116A1 (en) | 2013-02-21 |
US20140035932A1 (en) | 2014-02-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7928986B2 (en) | System and method for controlling animation by tagging objects within a game environment | |
Maes et al. | The ALIVE system: Wireless, full-body interaction with autonomous agents | |
US8555164B2 (en) | Method for customizing avatars and heightening online safety | |
JP5785254B2 (en) | Real-time animation of facial expressions | |
US8672753B2 (en) | Video game including effects for providing different experiences of the same video game world and a storage medium storing software for the video game | |
CN102458595B (en) | The system of control object, method and recording medium in virtual world | |
Cavazza et al. | Motion control of virtual humans | |
US7497779B2 (en) | Video game including time dilation effect and a storage medium storing software for the video game | |
CN102129343A (en) | Directed performance in motion capture system | |
US11957995B2 (en) | Toy system for augmented reality | |
WO2024244666A1 (en) | Animation generation method and apparatus for avatar, and electronic device, computer program product and computer-readable storage medium | |
Fu et al. | Real-time multimodal human–avatar interaction | |
JP3558288B1 (en) | System and method for video control by tagging objects in a game environment | |
JPH06236432A (en) | Virtual-reality system and generation method of virtual-reality world of virtual-reality image | |
Jung et al. | Extending H-Anim and X3D for advanced animation control | |
JP4159060B2 (en) | Image generating apparatus and information storage medium | |
Wadgaonkar et al. | Exploring behavioral anthropomorphism with robots in virtual reality | |
JP2003196679A (en) | Method for creating photo-realistic animation that expresses a plurality of emotions | |
JP2001229398A (en) | Method and device for acquiring performance animation gesture and reproducing the same on animation character | |
Balet et al. | The VISIONS project | |
Monzani | An architecture for the behavioural animation of virtual humans | |
Luengo et al. | Reusable virtual elements for virtual environment simulations | |
Madsen | Supporting Interactive Dramaturgy in a Virtual Environment for Small Children | |
Orvalho | Sketch-based Facial Modeling and Animation: an approach based on mobile devices | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ZAAA | Notice of allowance and fees due |
Free format text: ORIGINAL CODE: NOA |
|
ZAAB | Notice of allowance mailed |
Free format text: ORIGINAL CODE: MN/=. |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: NINTENDO CO., LTD., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NINTENDO OF AMERICA INC.;REEL/FRAME:031009/0642 Effective date: 20130812 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20241127 |