WO2018124280A1 - Simulation system, image processing method, and information storage medium
- Publication number: WO2018124280A1
- Application number: PCT/JP2017/047250
- Authority: WIPO (PCT)
- Prior art keywords: virtual space, user, image, virtual, position information
Classifications
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
Definitions
- The present invention relates to a simulation system, an image processing method, an information storage medium, and the like.
- Conventionally, there are known simulation systems that generate an image seen from a virtual camera in a virtual space. In particular, simulation systems that realize virtual reality (VR) by displaying the image seen from the virtual camera on an HMD (head-mounted display device) are known; a conventional technique of this type is disclosed in, for example, Patent Document 1.
- An object of the present invention is to provide a simulation system, an image processing method, an information storage medium, and the like that can realize virtual reality in which the user can move through a plurality of virtual spaces.
- One aspect of the present invention relates to a simulation system including: a virtual space setting unit that performs a setting process of a virtual space in which objects are arranged; a moving body processing unit that performs a process of moving a user moving body corresponding to a user in the virtual space; and a display processing unit that performs a drawing process of an image seen from a virtual camera in the virtual space. The virtual space setting unit sets, as the virtual space, a first virtual space and a second virtual space connected to the first virtual space via a singular point. Before a given switching condition is established, the display processing unit sets the position information of the user moving body or the virtual camera as position information of the first virtual space and, as a first drawing process, performs a process of drawing an image of the second virtual space in addition to an image of the first virtual space, thereby generating an image in which the image of the second virtual space is displayed in a region corresponding to the singular point. When the user moving body or the virtual camera passes through a place corresponding to the singular point in a first traveling direction and the switching condition is established, the display processing unit sets the position information of the user moving body or the virtual camera as position information of the second virtual space and, as a second drawing process, performs a process of drawing the image of the first virtual space in addition to the image of the second virtual space, thereby generating an image in which the image of the first virtual space is displayed in the region corresponding to the singular point when the line-of-sight direction of the virtual camera is directed in the direction opposite to the first traveling direction.
- the present invention also relates to a program that causes a computer to function as each of the above-described units, or a computer-readable information storage medium that stores the program.
- According to one aspect of the present invention, an image seen from a virtual camera is generated in a virtual space in which a user moving body moves, and a first virtual space and a second virtual space are set as the virtual space.
- Before the switching condition is established, the position information of the user moving body or the virtual camera is set as position information of the first virtual space, the image of the second virtual space is drawn in addition to the image of the first virtual space, and an image in which the image of the second virtual space is displayed in the region corresponding to the singular point is generated.
- When the switching condition is established, the position information of the user moving body or the virtual camera is set as position information of the second virtual space, the image of the first virtual space is drawn in addition to the image of the second virtual space, and an image in which the image of the first virtual space is displayed in the region corresponding to the singular point is generated. For example, an image in which the image of the first virtual space is displayed in the region corresponding to the singular point is generated when the line-of-sight direction of the virtual camera faces the side opposite to the first traveling direction.
- That is, when the user moving body or the virtual camera passes through the region corresponding to the singular point, the position of the user moving body or the virtual camera is switched from a position in the first virtual space to a position in the second virtual space.
- Before the switch, the image of the second virtual space is drawn in addition to the image of the first virtual space, so that the image of the second virtual space is displayed in the region corresponding to the singular point; after the switch, the image of the first virtual space is drawn in addition to the image of the second virtual space, so that the image of the first virtual space is displayed in the region corresponding to the singular point. This makes it possible to provide a simulation system or the like that realizes virtual reality in which the user can move through a plurality of virtual spaces.
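- As a minimal sketch of the association switching described above (all names such as `SpaceID`, `SingularPoint`, and `update_space_association` are hypothetical, and the singular point is modeled, for illustration only, as a plane whose forward axis is the first traveling direction):

```python
from dataclasses import dataclass
from enum import Enum

class SpaceID(Enum):
    FIRST = 1   # e.g., the room space described later
    SECOND = 2  # e.g., the ice-country space described later

@dataclass
class SingularPoint:
    position: tuple  # location of the door (gate) in field coordinates
    forward: tuple   # unit vector of the first traveling direction

@dataclass
class UserMovingBody:
    position: tuple
    space: SpaceID = SpaceID.FIRST  # which space the position info belongs to

def side_of(point, sp):
    """Signed distance of `point` along the singular point's forward axis."""
    return sum((p - o) * f for p, o, f in zip(point, sp.position, sp.forward))

def update_space_association(body, prev_pos, sp):
    """Switch the position-information association when the body crosses the
    place corresponding to the singular point.

    Crossing in the first traveling direction establishes the switching
    condition (FIRST -> SECOND); crossing in the second (opposite) traveling
    direction switches the association back (SECOND -> FIRST).
    """
    before, after = side_of(prev_pos, sp), side_of(body.position, sp)
    if body.space is SpaceID.FIRST and before < 0 <= after:
        body.space = SpaceID.SECOND
    elif body.space is SpaceID.SECOND and before >= 0 > after:
        body.space = SpaceID.FIRST
```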
- In one aspect of the present invention, when the user moving body or the virtual camera passes through the place corresponding to the singular point in a second traveling direction different from the first traveling direction and the switching condition is established, the display processing unit may set the position information of the user moving body or the virtual camera as position information of the first virtual space and, as the first drawing process, perform a process of drawing at least an image of the first virtual space.
- In this way, when the user moving body or the virtual camera passes through the region corresponding to the singular point in the second traveling direction different from the first traveling direction, the position of the user moving body or the virtual camera is switched from a position in the second virtual space back to a position in the first virtual space.
- In one aspect of the present invention, when the switching condition is satisfied by the user moving body or the virtual camera passing through a place corresponding to a singular point, the display processing unit may set the position information of the user moving body or the virtual camera as position information of a third virtual space and, as a third drawing process, perform a process of drawing at least an image of the third virtual space.
- In one aspect of the present invention, when a plurality of users play, the display processing unit may permit the third drawing process on condition that the plurality of user moving bodies corresponding to the plurality of users, or the plurality of virtual cameras, have passed through the place corresponding to the singular point and returned to the first virtual space.
- the display processing unit may determine whether or not the switching condition is satisfied based on the input information of the user or the detection information of the sensor.
- In one aspect of the present invention, an object corresponding to the singular point may be arranged in a play field in the real space in which the user moves, and the display processing unit may determine that the switching condition is satisfied when the user passes the location of the object in the real space.
- the display processing unit may perform processing for setting or not setting the singular point.
- One aspect of the present invention may include a sensation device control unit that controls a sensation device for allowing the user to experience virtual reality (a computer may be caused to function as the sensation device control unit), and the control of the sensation device before the switching condition is satisfied may be different from the control of the sensation device after the switching condition is satisfied.
- One aspect of the present invention may include a notification processing unit that performs a process of outputting notification information about a collision between users in the real space (a computer may be caused to function as the notification processing unit).
- One aspect of the present invention may include an information acquisition unit that acquires position information of the user in the real space (a computer may be caused to function as the information acquisition unit), and the moving body processing unit may perform the process of moving the user moving body based on the acquired position information. The display processing unit generates a display image for a head-mounted display device worn by the user. Before the switching condition is satisfied, the display processing unit may set the position information of the user moving body or the virtual camera, specified by the position information of the user in the real space, as position information of the first virtual space; when the switching condition is satisfied, it may set the position information of the user moving body or the virtual camera, specified by the position information of the user in the real space, as position information of the second virtual space.
- In this way, the position information of the user in the real space is acquired, and the user moving body or the like is moved in the virtual space based on the acquired position information. Then, when the switching condition is satisfied, the position of the user moving body or the virtual camera is switched from a position in the first virtual space to a position in the second virtual space, which makes it possible to further improve the virtual reality of moving through a plurality of virtual spaces.
- Another aspect of the present invention relates to a simulation system including: an information acquisition unit that acquires position information of a user in real space; a virtual space setting unit that performs a setting process of a virtual space in which objects are arranged; a moving body processing unit that performs a process of moving a user moving body corresponding to the user in the virtual space based on the acquired position information; and a display processing unit that performs a drawing process of an image seen from a virtual camera in the virtual space and generates an image to be displayed on a head-mounted display device worn by the user. The virtual space setting unit sets, as the virtual space, a first virtual space and a second virtual space connected to the first virtual space via a singular point. Before a given switching condition is established, the display processing unit sets the position information of the user moving body or the virtual camera, specified by the position information of the user in the real space, as position information of the first virtual space and, as a first drawing process, performs a process of drawing at least an image of the first virtual space. When the user moving body or the virtual camera passes through a place corresponding to the singular point in a first traveling direction and the switching condition is established, the display processing unit sets the position information of the user moving body or the virtual camera, specified by the position information of the user in the real space, as position information of the second virtual space and, as a second drawing process, performs a process of drawing at least an image of the second virtual space.
- the present invention also relates to a program that causes a computer to function as each of the above-described units, or a computer-readable information storage medium that stores the program.
- user position information in real space is acquired, and a user moving body or the like is moved in the virtual space based on the acquired position information.
- the position of the user moving body or the virtual camera is switched from the position of the first virtual space to the position of the second virtual space.
- Another aspect of the present invention relates to an image processing method that performs: a virtual space setting process of setting a virtual space in which objects are arranged; a moving body process of moving a user moving body corresponding to a user in the virtual space; and a display process of drawing an image seen from a virtual camera in the virtual space. In the virtual space setting process, a first virtual space and a second virtual space connected to the first virtual space via a singular point are set as the virtual space. In the display process, before a given switching condition is established, the position information of the user moving body or the virtual camera is set as position information of the first virtual space and, as a first drawing process, an image of the second virtual space is drawn in addition to an image of the first virtual space, so that an image in which the image of the second virtual space is displayed in a region corresponding to the singular point is generated; when the user moving body or the virtual camera passes through a place corresponding to the singular point in a first traveling direction and the switching condition is established, the position information of the user moving body or the virtual camera is set as position information of the second virtual space and, as a second drawing process, the image of the first virtual space is drawn in addition to the image of the second virtual space, so that an image in which the image of the first virtual space is displayed in the region corresponding to the singular point is generated when the line-of-sight direction of the virtual camera faces the direction opposite to the first traveling direction.
- Another aspect of the present invention relates to an image processing method that performs: an information acquisition process of acquiring position information of a user in real space; a virtual space setting process of setting a virtual space in which objects are arranged; a moving body process of moving a user moving body corresponding to the user in the virtual space based on the acquired position information; and a display process of drawing an image seen from a virtual camera in the virtual space and generating an image to be displayed on a head-mounted display device worn by the user. In the virtual space setting process, a first virtual space and a second virtual space connected to the first virtual space via a singular point are set as the virtual space. In the display process, before a given switching condition is established, the position information of the user moving body or the virtual camera, specified by the position information of the user in the real space, is set as position information of the first virtual space and, as a first drawing process, at least an image of the first virtual space is drawn; when the user moving body or the virtual camera passes through a place corresponding to the singular point in a first traveling direction and the switching condition is established, the position information of the user moving body or the virtual camera, specified by the position information of the user in the real space, is set as position information of the second virtual space and, as a second drawing process, at least an image of the second virtual space is drawn.
- FIGS. 2A and 2B show an example of the HMD used in the present embodiment.
- FIGS. 3A and 3B show other examples of the HMD used in the present embodiment.
- FIGS. 6A to 6E are explanatory diagrams of an example of the virtual space switching method.
- Explanatory diagram of the second virtual space.
- An example of an image when the door is viewed from the second virtual space side.
- FIGS. 10A and 10B are explanatory diagrams of a control example of the sensation device in the second virtual space.
- FIGS. 15A and 15B are explanatory diagrams of the switching method to the third virtual space.
- Explanatory diagram of the switching method to the third virtual space.
- FIGS. 17A and 17B are explanatory diagrams of the technique of the present embodiment when a plurality of users play.
- FIGS. 18A and 18B are explanatory diagrams of the technique of the present embodiment when a plurality of users play.
- FIGS. 19A and 19B are explanatory diagrams of the switching method based on user input information and sensor detection information.
- FIGS. 20A and 20B are explanatory diagrams of the method of switching the virtual space by arranging an object corresponding to the singular point.
- FIGS. 21A and 21B are explanatory diagrams of the method of setting or not setting a singular point.
- FIGS. 22A and 22B are explanatory diagrams of the control method for the sensation device.
- FIGS. 23A and 23B are explanatory diagrams of the method of outputting collision notification information.
- Explanatory diagram of the output method of notification information.
- FIGS. 25A and 25B are explanatory diagrams of a special image display example in the present embodiment.
- FIGS. 26A and 26B are explanatory diagrams of the method of acquiring position information of the user wearing the HMD.
- Flowchart showing a detailed processing example of the present embodiment.
- FIG. 1 is a block diagram illustrating a configuration example of a simulation system (a simulator, a game system, and an image generation system) according to the present embodiment.
- The simulation system of the present embodiment is a system that simulates virtual reality (VR), and can be applied to various systems such as a game system that provides game content, a real-time simulation system such as a sports competition simulator or a driving simulator, a system that provides SNS services, a content providing system that provides content such as video, and an operating system that implements remote work.
- the simulation system of the present embodiment is not limited to the configuration shown in FIG. 1, and various modifications such as omitting some of the components (each unit) or adding other components are possible.
- the operation unit 160 is for a user (player) to input various operation information (input information).
- the operation unit 160 can be realized by various operation devices such as an operation button, a direction instruction key, a joystick, a handle, a pedal, a lever, or a voice input device.
- the storage unit 170 stores various types of information.
- The storage unit 170 serves as a work area for the processing unit 100, the communication unit 196, and the like.
- the game program and game data necessary for executing the game program are held in the storage unit 170.
- the function of the storage unit 170 can be realized by a semiconductor memory (DRAM, VRAM), HDD (Hard Disk Drive), SSD, optical disk device, or the like.
- the storage unit 170 includes an object information storage unit 172 and a drawing buffer 178.
- the information storage medium 180 (a computer-readable medium) stores programs, data, and the like, and its function can be realized by an optical disk (DVD, BD, CD), HDD, semiconductor memory (ROM), or the like.
- The processing unit 100 performs the various processes of the present embodiment based on a program (and data) stored in the information storage medium 180. That is, the information storage medium 180 stores a program for causing a computer (an apparatus including an input device, a processing unit, a storage unit, and an output unit) to function as each unit of the present embodiment (a program for causing the computer to execute the processing of each unit).
- the HMD 200 (head-mounted display device) is a device that is mounted on the user's head and displays an image in front of the user's eyes.
- the HMD 200 is preferably a non-transmissive type, but may be a transmissive type.
- the HMD 200 may be a so-called glasses-type HMD.
- the HMD 200 includes a sensor unit 210, a display unit 220, and a processing unit 240. A modification in which a light emitting element is provided in the HMD 200 is also possible.
- the sensor unit 210 is for realizing tracking processing such as head tracking, for example.
- the position and direction of the HMD 200 are specified by tracking processing using the sensor unit 210.
- the user's viewpoint position and line-of-sight direction can be specified.
- In the first tracking method, which is one example, a plurality of light receiving elements are provided as the sensor unit 210, as will be described in detail later with reference to FIGS. 2A and 2B, and the position and direction of the HMD 200 (the user's head) in the three-dimensional space of the real world are identified using these light receiving elements.
- In the second tracking method, a plurality of light emitting elements (LEDs) are provided on the HMD 200, as will be described in detail later with reference to FIGS. 3A and 3B, and the position and direction of the HMD 200 are identified by imaging the light from these light emitting elements.
- In the third tracking method, a motion sensor is provided as the sensor unit 210, and the position and direction of the HMD 200 are identified using this motion sensor.
- the motion sensor can be realized by, for example, an acceleration sensor or a gyro sensor.
- the position and direction of the HMD 200 in a three-dimensional space in the real world can be specified.
- the position and direction of the HMD 200 may be specified by a combination of the first tracking method and the second tracking method, or a combination of the first tracking method and the third tracking method.
- tracking processing that directly specifies the user's viewpoint position and line-of-sight direction may be employed.
- the display unit 220 of the HMD 200 can be realized by, for example, an organic EL display (OEL) or a liquid crystal display (LCD).
- The display unit 220 of the HMD 200 is provided with a first display or first display area set in front of the user's left eye and a second display or second display area set in front of the right eye, so that stereoscopic display is possible.
- When stereoscopic display is performed, for example, a left-eye image and a right-eye image with different parallax are generated; the left-eye image is displayed on the first display and the right-eye image is displayed on the second display, or the left-eye image is displayed in the first display area of one display and the right-eye image is displayed in the second display area.
- Further, the HMD 200 is provided with two eyepiece lenses (fisheye lenses) for the left eye and the right eye, thereby presenting a VR space that extends over the entire periphery of the user's field of view. Correction processing for correcting the distortion produced by an optical system such as the eyepiece lenses is applied to the left-eye image and the right-eye image; this correction processing is performed by the display processing unit 120.
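- A minimal sketch of how the two parallax viewpoints can be derived from a single tracked head pose, assuming a hypothetical `eye_viewpoints` helper and a typical interpupillary distance of 0.064 m; the eyepiece distortion correction itself is only indicated by comments:

```python
import numpy as np

def eye_viewpoints(head_pos, head_rot, ipd=0.064):
    """Derive left/right eye viewpoints from one tracked head pose by
    offsetting half the interpupillary distance (IPD) along the head's
    local right axis (+X of the 3x3 rotation matrix `head_rot`)."""
    right = head_rot @ np.array([1.0, 0.0, 0.0])
    return head_pos - 0.5 * ipd * right, head_pos + 0.5 * ipd * right

# Example: identity head orientation at the origin.
left_eye, right_eye = eye_viewpoints(np.zeros(3), np.eye(3))
# render(left_eye) -> first display, render(right_eye) -> second display,
# each followed by the eyepiece distortion-correction pass.
```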
- The processing unit 240 of the HMD 200 performs various processes necessary for the HMD 200. For example, the processing unit 240 performs control processing of the sensor unit 210, display control processing of the display unit 220, and the like. The processing unit 240 may also perform three-dimensional sound (stereophonic sound) processing to reproduce the direction, distance, and spread of sound in three dimensions.
- Note that the display unit of the simulation system may be of a type other than an HMD: for example, a display of an arcade game device (an ordinary 2D monitor or a dome-shaped screen), a television of a home game device, or a monitor of a personal computer (PC).
- the sound output unit 192 outputs the sound generated by the present embodiment, and can be realized by, for example, a speaker or headphones.
- the I / F (interface) unit 194 performs interface processing with the portable information storage medium 195, and its function can be realized by an ASIC for I / F processing or the like.
- the portable information storage medium 195 is for a user to save various types of information, and is a storage device that retains storage of such information even when power is not supplied.
- the portable information storage medium 195 can be realized by an IC card (memory card), a USB memory, a magnetic card, or the like.
- The communication unit 196 communicates with the outside (other devices) via a wired or wireless network, and its function can be realized by hardware such as a communication ASIC or a communication processor, or by communication firmware.
- Note that a program (data) for causing a computer to function as each unit of the present embodiment may be distributed from an information storage medium of a server (host device) to the information storage medium 180 (or the storage unit 170) via the network and the communication unit 196. Use of an information storage medium by such a server (host device) can also be included within the scope of the present invention.
- The processing unit 100 performs game processing (simulation processing), virtual space setting processing, moving body processing, virtual camera control processing, display processing, sound processing, and the like, based on operation information from the operation unit 160, tracking information from the HMD 200 (information on at least one of the position and direction of the HMD; information on at least one of the viewpoint position and line-of-sight direction), a program, and the like.
- each process (each function) of this embodiment performed by each unit of the processing unit 100 can be realized by a processor (a processor including hardware).
- each process of the present embodiment can be realized by a processor that operates based on information such as a program and a memory that stores information such as a program.
- the function of each unit may be realized by individual hardware, or the function of each unit may be realized by integrated hardware.
- the processor may include hardware, and the hardware may include at least one of a circuit that processes a digital signal and a circuit that processes an analog signal.
- the processor can be configured by one or a plurality of circuit devices (for example, ICs) mounted on a circuit board or one or a plurality of circuit elements (for example, resistors, capacitors, etc.).
- The processor may be, for example, a CPU (Central Processing Unit). However, the processor is not limited to a CPU, and various processors such as a GPU (Graphics Processing Unit) or a DSP (Digital Signal Processor) can be used.
- the processor may be an ASIC hardware circuit.
- the processor may include an amplifier circuit, a filter circuit, and the like that process an analog signal.
- The memory (storage unit 170) stores instructions that can be read by a computer, and the processing (functions) of each unit of the processing unit 100 is realized by the processor executing these instructions.
- the instruction here may be an instruction set constituting a program, or an instruction for instructing an operation to the hardware circuit of the processor.
- the processing unit 100 includes an input processing unit 102, an arithmetic processing unit 110, and an output processing unit 140.
- the arithmetic processing unit 110 includes an information acquisition unit 111, a virtual space setting unit 112, a moving body processing unit 113, a virtual camera control unit 114, a game processing unit 115, a notification processing unit 116, a sensation device control unit 117, a display processing unit 120, A sound processing unit 130 is included.
- each process of the present embodiment executed by these units can be realized by a processor (or a processor and a memory). Various modifications such as omitting some of these components (each unit) or adding other components are possible.
- the input processing unit 102 performs processing for receiving operation information and tracking information, processing for reading information from the storage unit 170, and processing for receiving information via the communication unit 196 as input processing.
- For example, the input processing unit 102 performs, as input processing, a process of acquiring operation information input by the user using the operation unit 160 and tracking information detected by the sensor unit 210 of the HMD 200, a process of reading information specified by a read command from the storage unit 170, and a process of receiving information from an external device (such as a server) via a network.
- the reception process includes a process of instructing the communication unit 196 to receive information, a process of acquiring information received by the communication unit 196, and writing the information in the storage unit 170, and the like.
- the arithmetic processing unit 110 performs various arithmetic processes. For example, arithmetic processing such as information acquisition processing, virtual space setting processing, moving body processing, virtual camera control processing, game processing (simulation processing), display processing, or sound processing is performed.
- the information acquisition unit 111 (program module for information acquisition processing) performs various information acquisition processing. For example, the information acquisition unit 111 acquires position information of the user wearing the HMD 200 and the like. The information acquisition unit 111 may acquire user direction information and the like.
- the virtual space setting unit 112 (program module for virtual space setting processing) performs setting processing for a virtual space (object space) in which objects are arranged.
- For example, a process of arranging and setting, in the virtual space, objects representing display objects such as moving bodies (people, robots, cars, trains, airplanes, ships, monsters, animals, etc.), maps (terrain), buildings, auditoriums, courses (roads), trees, walls, and water surfaces is performed.
- That is, the position and rotation angle of an object in the world coordinate system are determined, and the object is arranged at that position (X, Y, Z) with that rotation angle (rotation angles around the X, Y, and Z axes).
- Specifically, object information, which is information such as the position, rotation angle, moving speed, and moving direction of an object (part object) in the virtual space, is stored in the object information storage unit 172 of the storage unit 170 in association with an object number.
- the virtual space setting unit 112 performs a process of updating the object information for each frame, for example.
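- A minimal sketch of such an object information record and its per-frame update, with hypothetical names (`ObjectInfo`, `object_info`, `update_object_info`); the actual fields and storage layout of the object information storage unit 172 are not limited to this form:

```python
from dataclasses import dataclass

@dataclass
class ObjectInfo:
    """Per-object record, keyed by object number (cf. storage unit 172)."""
    position: list   # (X, Y, Z) in world coordinates
    rotation: list   # rotation angles around the X, Y, Z axes
    velocity: list   # moving speed per axis
    direction: list  # moving direction

object_info = {
    0: ObjectInfo([0.0, 0.0, 0.0], [0.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0], [0.0, 0.0, 1.0]),
}

def update_object_info(dt=1.0 / 60.0):
    """Per-frame update of the object information (e.g., dt = 1/60 s)."""
    for info in object_info.values():
        for i in range(3):
            info.position[i] += info.velocity[i] * dt
```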
- The moving body processing unit 113 performs various processes for a moving body that moves in the virtual space. For example, a process of moving the moving body in the virtual space (object space, game space) and a process of causing the moving body to perform a motion are performed.
- For example, the moving body processing unit 113 performs control processing for moving a moving body (model object) in the virtual space or causing the moving body to perform a motion (animation), based on operation information input by the user through the operation unit 160, acquired tracking information, a program (movement/motion algorithm), various data (motion data), and the like. Specifically, simulation processing is performed that sequentially obtains, for each frame (for example, 1/60 second), the movement information (position, rotation angle, speed, or acceleration) and motion information (position or rotation angle of a part object) of the moving body.
- a frame is a unit of time for performing a moving / movement process (simulation process) and an image generation process of a moving object.
- the moving body is, for example, a user moving body corresponding to a user (player) in real space.
- the user moving body is a virtual user (virtual player, avatar) in the virtual space, or a boarding moving body (operation moving body) on which the virtual user is boarded (operated).
- the virtual camera control unit 114 controls the virtual camera. For example, a process for controlling the virtual camera is performed based on user operation information, tracking information, and the like input by the operation unit 160.
- For example, the virtual camera control unit 114 controls the virtual camera set as the first-person viewpoint or the third-person viewpoint of the user. For example, the virtual camera is set at a position corresponding to the viewpoint (first-person viewpoint) of the user moving body moving in the virtual space, and the viewpoint position and line-of-sight direction of the virtual camera are set, thereby controlling the position (position coordinates) and attitude (rotation angle around a rotation axis) of the virtual camera. Alternatively, the virtual camera is set at the position of a viewpoint (third-person viewpoint) that follows the user moving body, and the viewpoint position and line-of-sight direction of the virtual camera are set, thereby controlling the position and attitude of the virtual camera.
- For example, the virtual camera control unit 114 controls the virtual camera to follow changes in the user's viewpoint based on tracking information (viewpoint tracking information) of the user's viewpoint information acquired by viewpoint tracking. This tracking information can be acquired, for example, by performing tracking processing of the HMD 200. For example, the virtual camera control unit 114 changes the viewpoint position and line-of-sight direction of the virtual camera based on the acquired tracking information (information on at least one of the user's viewpoint position and line-of-sight direction). That is, the virtual camera control unit 114 sets the virtual camera so that its viewpoint position and line-of-sight direction (position and attitude) in the virtual space change in accordance with changes in the user's viewpoint position and line-of-sight direction in the real space. In this way, the virtual camera can be controlled to follow the user's viewpoint changes based on the tracking information of the user's viewpoint information.
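- A minimal sketch of a virtual camera following the tracked viewpoint, assuming the tracking information is delivered as a mapping with hypothetical keys `viewpoint_position` and `line_of_sight_direction`:

```python
class VirtualCamera:
    def __init__(self):
        self.viewpoint_position = (0.0, 1.7, 0.0)
        self.line_of_sight = (0.0, 0.0, 1.0)

def follow_user_viewpoint(camera, tracking):
    """Make the virtual camera follow the change of the user's viewpoint.

    `tracking` is assumed to carry the viewpoint position and line-of-sight
    direction obtained from the HMD tracking processing.
    """
    camera.viewpoint_position = tracking["viewpoint_position"]
    camera.line_of_sight = tracking["line_of_sight_direction"]

# Usage: feed each frame's tracking result into the camera.
follow_user_viewpoint(VirtualCamera(),
                      {"viewpoint_position": (1.0, 1.6, 2.0),
                       "line_of_sight_direction": (0.0, 0.0, -1.0)})
```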
- The game processing unit 115 performs various game processes for the user to play the game. In other words, the game processing unit 115 (simulation processing unit) executes various simulation processes for the user to experience virtual reality. The game process is, for example, a process of starting the game when a game start condition is satisfied, a process of advancing the started game, a process of ending the game when a game end condition is satisfied, or a process of calculating a game result.
- the notification processing unit 116 performs various types of notification processing. For example, a warning notification process for the user is performed.
- the notification process may be a notification process using an image or sound, for example, or may be a notification process using a sensation device such as a vibration device, sound, or an air cannon.
- the sensation device control unit 117 performs various control processes of the sensation device. For example, the sensation device is controlled to allow the user to experience virtual reality.
- The display processing unit 120 performs display processing of the game image (simulation image). For example, drawing processing is performed based on the results of the various processes (game process, simulation process) performed by the processing unit 100, thereby generating an image and displaying it on the display unit 220. Specifically, geometry processing such as coordinate transformation (world coordinate transformation, camera coordinate transformation), clipping processing, perspective transformation, or light source processing is performed, and drawing data (position coordinates of the vertices of primitive surfaces, texture coordinates, color data, normal vectors, α values, etc.) is created based on the processing result.
- Based on this drawing data, the object (one or a plurality of primitive surfaces) after perspective transformation (after geometry processing) is drawn in the drawing buffer 178 (a buffer, such as a frame buffer or work buffer, that can store image information in units of pixels). Thereby, an image seen from the virtual camera (a given viewpoint; first and second viewpoints for the left eye and the right eye) is generated in the virtual space.
- the drawing processing performed by the display processing unit 120 can be realized by vertex shader processing, pixel shader processing, or the like.
- the sound processing unit 130 performs sound processing based on the results of various processes performed by the processing unit 100. Specifically, game sounds such as music (music, BGM), sound effects, or sounds are generated, and the game sounds are output to the sound output unit 192. Note that part of the sound processing of the sound processing unit 130 (for example, three-dimensional sound processing) may be realized by the processing unit 240 of the HMD 200.
- the output processing unit 140 performs various types of information output processing. For example, the output processing unit 140 performs processing for writing information in the storage unit 170 and processing for transmitting information via the communication unit 196 as output processing. For example, the output processing unit 140 performs a process of writing information specified by a write command in the storage unit 170 or a process of transmitting information to an external apparatus (server or the like) via a network.
- The transmission process is a process of instructing the communication unit 196 to transmit information, and the like.
- As shown in FIG. 1, the simulation system of the present embodiment includes the virtual space setting unit 112, the moving body processing unit 113, and the display processing unit 120.
- the virtual space setting unit 112 performs processing for setting a virtual space in which objects are arranged. For example, a process is performed in which an object of a user moving object corresponding to the user, an object of an opponent moving object such as an enemy, and an object constituting a map or background are set in the virtual space.
- the user moving body corresponding to the user is, for example, a moving body that the user operates with the operation unit 160 or a moving body that moves in the virtual space following the movement of the user in the real space. This user moving body is called, for example, a character or an avatar.
- the user mobile body may be a boarding mobile body such as a robot on which the user is boarding. Further, the user moving body may be a display object on which the image is displayed, or may be a virtual object on which the image is not displayed.
- the moving body processing unit 113 performs processing for moving a user moving body (virtual camera) corresponding to the user in the virtual space.
- the user moving body is moved in the virtual space based on the operation information input by the user through the operation unit 160.
- Alternatively, the moving body processing unit 113 performs a process of moving the user moving body in the virtual space based on the acquired position information (viewpoint tracking information). For example, the user moving body is moved in the virtual space so as to follow the movement of the user in the real space. For example, based on the moving speed and moving acceleration of the user moving body, a process of updating the position of the user moving body for each frame is performed to move the user moving body in the virtual space (virtual field).
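- A minimal sketch of the two movement styles described above: following the user's acquired real-space position, and per-frame integration from moving speed and acceleration. The calibration parameters `field_origin` and `scale` are assumptions for illustration:

```python
def follow_real_space(body, real_pos, field_origin=(0.0, 0.0, 0.0), scale=1.0):
    """Map the user's acquired real-space position into the virtual field so
    that the user moving body follows the user's real movement."""
    body.position = tuple(o + scale * p for o, p in zip(field_origin, real_pos))

def integrate_motion(body, dt=1.0 / 60.0):
    """Alternative per-frame update from moving speed and acceleration
    (semi-implicit Euler): v += a*dt, then p += v*dt."""
    body.velocity = tuple(v + a * dt for v, a in zip(body.velocity, body.accel))
    body.position = tuple(p + v * dt for p, v in zip(body.position, body.velocity))
```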
- the display processing unit 120 performs drawing processing of an image (object) in the virtual space. For example, a drawing process of an image viewed from a virtual camera (a given viewpoint) in the virtual space is performed. For example, a drawing process of an image seen from a virtual camera set as a viewpoint (first person viewpoint) of a user moving body such as a character (avatar) is performed. Alternatively, a drawing process of an image seen from a virtual camera set to a viewpoint (third person viewpoint) that follows the user moving body is performed.
- The generated image is desirably a stereoscopic image composed of, for example, a left-eye image and a right-eye image.
- the virtual space setting unit 112 sets a plurality of first to Mth virtual spaces (M is an integer of 2 or more) as the virtual space. Specifically, the virtual space setting unit 112 sets a first virtual space and a second virtual space as virtual spaces. For example, an arrangement setting process for objects constituting the first virtual space and an arrangement setting process for objects constituting the second virtual space are performed.
- For example, when the first virtual space is the room space described later, an arrangement setting process is performed for objects corresponding to the objects placed in the room; when the second virtual space is the ice-country space described later, an arrangement setting process is performed for objects corresponding to the glaciers and sea of the ice country.
- the virtual space setting unit 112 sets, for example, a third virtual space.
- When the third virtual space is, for example, the space on top of a train described later, an arrangement setting process is performed for objects corresponding to the train, tunnel, background, and the like.
- the second virtual space is a virtual space that is linked to the first virtual space via a singular point (in other words, a passing point; hereinafter the same).
- The third virtual space may be a virtual space connected to the first virtual space via a singular point, or a virtual space connected to the second virtual space via a singular point.
- Before a given switching condition is established, the display processing unit 120 sets the position information (position coordinates, direction, etc.) of the user moving body or the virtual camera as position information (position coordinates, direction, etc.) of the first virtual space.
- the position information of the user moving body or the virtual camera is associated as the position information of the first virtual space.
- the display processing unit 120 performs a process of drawing at least an image of the first virtual space as the first drawing process.
- For example, the display processing unit 120 performs a process of drawing an image of the second virtual space in addition to the image of the first virtual space, and generates an image in which the image of the second virtual space is displayed in the region corresponding to the singular point (the region including the singular point).
- the drawing process of the image in the first virtual space is realized, for example, by drawing an object configuring the first virtual space and drawing an image that can be seen from the virtual camera in the first virtual space.
- That is, in addition to drawing the objects constituting the first virtual space, the objects constituting the second virtual space are drawn for the region (display region) corresponding to the singular point, whereby an image seen from the virtual camera in the first virtual space is drawn.
- When the switching condition is established, the display processing unit 120 sets the position information of the user moving body or the virtual camera as position information of the second virtual space. For example, when the user moving body or the virtual camera passes through the place corresponding to the singular point in the first traveling direction and the switching condition is thereby satisfied, the position information of the user moving body or the virtual camera is set as position information of the second virtual space; that is, it is associated as position information of the second virtual space instead of the first virtual space. The display processing unit 120 then performs, as the second drawing process, a process of drawing at least an image of the second virtual space.
- For example, the display processing unit 120 performs a process of drawing the image of the first virtual space in addition to the image of the second virtual space, and generates an image in which the image of the first virtual space is displayed in the region corresponding to the singular point.
- the drawing process of the image in the second virtual space is realized, for example, by drawing an object constituting the second virtual space and drawing an image that can be seen from the virtual camera in the second virtual space.
- That is, in addition to drawing the objects constituting the second virtual space, the objects constituting the first virtual space are drawn for the region (display region) corresponding to the singular point, whereby an image seen from the virtual camera in the second virtual space is drawn.
- For example, the region corresponding to the singular point (the object or area corresponding to the singular point) is the region of the door (gate) described later. In the first drawing process, in which the image of the second virtual space is drawn in addition to the image of the first virtual space, drawing is performed so that the image of the second virtual space on the other side of the door is displayed in the door region. In the second drawing process, in which the image of the first virtual space is drawn in addition to the image of the second virtual space, drawing is performed so that the image of the first virtual space on the other side of the door is displayed in the door region.
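- One common way to realize this "the other virtual space is visible through the door region" effect is stencil-based portal rendering. The following sketch only illustrates the two-pass drawing order; every method on the `renderer` object is an assumed placeholder, not an API of the embodiment:

```python
def draw_frame(renderer, current_space, other_space, door, camera):
    """Two-pass order: draw the current space for the main region, then draw
    the other space only inside the screen region covered by the door."""
    renderer.clear()
    renderer.draw_space(current_space, camera)   # main region
    renderer.stencil_mask(door.polygon, camera)  # mark the door region
    renderer.clear_depth_inside_mask()           # reopen depth behind the door
    renderer.draw_space(other_space, camera)     # visible "through" the door
    renderer.stencil_off()
```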
- The display processing unit 120 generates an image in which the image of the first virtual space is displayed in the region corresponding to the singular point when the line-of-sight direction of the virtual camera faces the side opposite to the first traveling direction. For example, after the user moving body or the virtual camera passes through the place corresponding to the singular point, when the line-of-sight direction of the virtual camera turns to the side opposite to the traveling direction, an image in which the image of the first virtual space is displayed in the region corresponding to the singular point is generated as the image seen in the line-of-sight direction of the virtual camera.
- That is, an image is generated in which the objects constituting the second virtual space are drawn in the region (main region) other than the region corresponding to the singular point, while the objects constituting the first virtual space are drawn in the region corresponding to the singular point.
- Note that only a process of drawing the image of the second virtual space may be performed as the second drawing process, without drawing the image of the first virtual space; in this case, the image of the second virtual space is drawn also in the region corresponding to the singular point.
- The side opposite to the first traveling direction does not have to be the exactly reverse direction of the first traveling direction; for example, it corresponds to any direction on the negative side when the first traveling direction is taken as the positive direction.
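- This "opposite direction side" test reduces to the sign of a dot product, as sketched below (the direction vectors are assumed to be unit vectors expressed in the same coordinate system):

```python
def faces_opposite_side(line_of_sight, first_travel_dir):
    """True when the line-of-sight has a negative component along the first
    traveling direction (it need not be exactly the reverse direction)."""
    return sum(g * t for g, t in zip(line_of_sight, first_travel_dir)) < 0.0

# Looking back toward the door after passing through it:
assert faces_opposite_side((0.2, 0.0, -0.98), (0.0, 0.0, 1.0))
```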
- In general, a singular point is a point at which a rule that serves as a reference ceases to apply. In the present embodiment, before the switching condition is established, the rule of moving only in the first virtual space serves as the reference applied to the user moving body or the virtual camera; that is, the user moving body or the virtual camera moves under the rule of moving in the first virtual space.
- The singular point of the present embodiment is a point at which this reference rule no longer applies and is no longer followed.
- That is, when the user moving body or the virtual camera passes through the place corresponding to the singular point and the switching condition is established, the rule (reference) of moving in the first virtual space is no longer applied to the user moving body or the virtual camera, and the user moving body or the virtual camera moves in the second virtual space, which is different from the first virtual space.
- the singular point can be said to be a switching point for associating the position information of the user moving body or the virtual camera with the position information of the first virtual space or the position information of the second virtual space.
- the first virtual space and the second virtual space are linked via a singular point so that the association with the position information of the user moving body or the virtual camera is switched by the establishment of the switching condition.
- information for connecting the first virtual space and the second virtual space via a singular point is stored in the storage unit 170.
- the switching condition of the present embodiment is a condition for switching the correspondence between the position information of such a user moving body or virtual camera and the position information of the first and second virtual spaces by a singular point.
- Note that the place corresponding to the singular point need not be a point; it may be, for example, a surface or a region including the singular point.
- When the user moving body or the virtual camera passes through the place corresponding to the singular point in a second traveling direction and the switching condition is established, the display processing unit 120 sets the position information of the user moving body or the virtual camera as position information of the first virtual space. Then, as the first drawing process, the display processing unit 120 performs a process of drawing at least an image of the first virtual space; for example, it draws an image of the first virtual space, or draws an image of the second virtual space in addition to the image of the first virtual space. That is, when a user moving body or virtual camera that has moved from the first virtual space to the second virtual space via the singular point returns to the first virtual space, its position information is again associated with the position information of the first virtual space, and the first drawing process for the first virtual space is performed.
- In this way, the user moving body and the like can come and go between the first virtual space and the second virtual space via the singular point.
- In other words, the user moving body or the virtual camera, to which the rule (reference) of moving only in the second virtual space has been applied, passes through the place corresponding to the singular point in the second traveling direction, whereupon that rule ceases to apply and movement in the first virtual space becomes possible.
- Here, the second traveling direction is a direction different from the first traveling direction. For example, when the first traveling direction is the positive direction with respect to the singular point (the place of the singular point), the second traveling direction is the negative direction with respect to the singular point; for example, the second traveling direction is the direction opposite to the first traveling direction.
- When the user moving body or the virtual camera passes through a place corresponding to a singular point (second singular point) and a switching condition (second switching condition) is satisfied, the display processing unit 120 sets the position information of the user moving body or the virtual camera as position information of the third virtual space. For example, the position information of the user moving body or the like is associated as position information of the third virtual space.
- The movement to the third virtual space may be a movement from the first virtual space via a singular point that connects the first virtual space and the third virtual space, or a movement from the second virtual space via a singular point that connects the second virtual space and the third virtual space.
- the display process part 120 performs the process which draws the image of a 3rd virtual space at least as a 3rd drawing process. For example, when the third virtual space is connected to the first virtual space via a singular point, for example, an image of the third virtual space is drawn as the third drawing process, In addition to the virtual space image, a process of drawing the first virtual space image is performed. Further, when the third virtual space is linked to the second virtual space via a singular point, for example, an image of the third virtual space is drawn as the third drawing process, In addition to the virtual space image, a process of drawing the second virtual space image is performed.
- When a plurality of users play, the display processing unit 120 permits the third drawing process on condition that the plurality of user moving bodies corresponding to the plurality of users, or the plurality of virtual cameras, have passed through the place corresponding to the singular point and returned to the first virtual space. For example, suppose that the first user moving body or first virtual camera corresponding to a first user has returned from the second virtual space to the first virtual space, but the second user moving body or second virtual camera corresponding to a second user has not yet returned from the second virtual space to the first virtual space. In this case, even when the switching condition (second switching condition) is satisfied, the movement of the first user moving body or the first virtual camera to the third virtual space via the singular point is not permitted, and the third drawing process is not permitted.
- Then, after both the first and second user moving bodies, or both the first and second virtual cameras, have returned from the second virtual space to the first virtual space, when the switching condition is satisfied, the movement of the first and second user moving bodies or the first and second virtual cameras to the third virtual space via the singular point is permitted, and the third drawing process is permitted.
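- A minimal sketch of this all-users-returned condition, with hypothetical names (`third_space_allowed`, `try_enter_third_space`) and spaces represented as plain strings:

```python
def third_space_allowed(user_bodies):
    """Permit the third drawing process only after every user moving body has
    returned from the second virtual space to the first virtual space."""
    return all(body.space == "FIRST" for body in user_bodies)

def try_enter_third_space(user_bodies, switching_condition):
    if switching_condition and third_space_allowed(user_bodies):
        for body in user_bodies:
            body.space = "THIRD"  # associate position info with the third space
        return True
    return False  # keep waiting in the first virtual space
```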
- For example, the display processing unit 120 determines whether the switching condition is satisfied based on whether the user moving body or the virtual camera has passed the singular point: when the user moving body or the virtual camera has not passed the singular point, it is determined that the switching condition is not satisfied, and when it has passed the singular point, it is determined that the switching condition is satisfied.
- the display processing unit 120 may determine whether or not the switching condition is satisfied based on user input information or sensor detection information. For example, the display processing unit 120 determines whether the switching condition is satisfied based on user operation information, voice input information, and the like input via the operation unit 160. For example, the determination is made based on input information of a user instructing establishment of the switching condition. Alternatively, the display processing unit 120 determines whether the switching condition is satisfied based on the detection information of the sensor. For example, a sensor is provided on an object corresponding to a singular point, and it is determined whether a switching condition is satisfied based on detection information of the sensor. When the object is a door (gate), it is determined whether the switching condition is satisfied by detecting the open / closed state of the door based on the detection information of the sensor.
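- As a minimal illustration of the three determination sources described above (passage through the singular point, user input information, and sensor detection information), the following Python sketch combines them into one check. All names (`crossed_door_plane`, `voice_command`, `door_sensor_open`) are hypothetical helpers introduced here for illustration, not part of the disclosed system:

```python
def crossed_door_plane(prev_pos, cur_pos, door_x):
    """True if the moving body crossed the plane of the door between frames.
    A real system would also check the door's vertical/horizontal extent."""
    return (prev_pos[0] - door_x) * (cur_pos[0] - door_x) < 0

def switching_condition_satisfied(prev_pos, cur_pos, door_x,
                                  voice_command=None, door_sensor_open=None):
    # (1) Passage of the user moving body / virtual camera through the singular point.
    if crossed_door_plane(prev_pos, cur_pos, door_x):
        return True
    # (2) User input information, e.g. a voice command captured by a microphone.
    if voice_command == "go to the ice country":
        return True
    # (3) Sensor detection information, e.g. the open/closed state of the real door
    #     (here naively: an open door satisfies the condition).
    if door_sensor_open:
        return True
    return False

# Example: the body moved from x=0.4 to x=0.6 across a door plane at x=0.5.
print(switching_condition_satisfied((0.4, 0.0), (0.6, 0.0), 0.5))  # True
```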
- an object corresponding to a singular point is arranged in a play field (play space) in a real space where the user moves.
- a door object corresponding to a singular point is arranged.
- the display processing unit 120 determines that the user moving object or the virtual camera has passed the singular point when the user passes the location of the object in the real space.
- the passage condition of the singular point is determined by whether or not the user has passed the location of the object in the real space.
- when the switching condition is satisfied, the position information of the user moving body or the virtual camera is set as position information of the second virtual space, and the second drawing process is performed.
- the display processing unit 120 may perform processing for setting or not setting a singular point.
- a singular point that has not been set is set to a set state, or a singular point that has been set is set to a non-set state.
- the singularity setting process is, for example, a process of displaying an object corresponding to a singularity in a virtual space such as the first to third virtual spaces or permitting movement between virtual spaces via singularities.
- the singularity non-setting process is, for example, a process of hiding an object corresponding to a singularity in a virtual space such as the first to third virtual spaces, or of disallowing movement between virtual spaces via the singularity.
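- A minimal sketch of this set/unset toggling, assuming a hypothetical `door_object` with a `visible` flag (none of these names come from the disclosure):

```python
class SingularPoint:
    """Hypothetical holder for a singular point linking two virtual spaces."""

    def __init__(self, door_object, space_a, space_b):
        self.door_object = door_object   # object displayed at the singular point
        self.space_a = space_a
        self.space_b = space_b
        self.active = False              # starts in the non-set state

    def set(self):
        """Singularity setting process: show the object and permit transit."""
        self.active = True
        self.door_object.visible = True

    def unset(self):
        """Singularity non-setting process: hide the object and forbid transit."""
        self.active = False
        self.door_object.visible = False

    def transit_allowed(self):
        return self.active
```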
- the simulation system of the present embodiment includes a sensation device control unit 117.
- the sensation device control unit 117 controls the sensation device for allowing the user to experience virtual reality.
- the bodily sensation device is a device for causing a user to experience virtual reality by working on a sensory organ other than the visual organ of the user, for example.
- the sensation apparatus is, for example, a blower or a vibration device described later.
- the blower can be realized by, for example, a sirocco fan or a propeller fan.
- the blower may blow hot air or cold air.
- the vibration device can be realized by, for example, a transducer or a vibration motor.
- the body sensation device may be a halogen heater or the like.
- the body sensation device may also be realized by a mechanical mechanism such as an air spring or an electric cylinder, that is, a bodily sensation device that makes the user feel shaking or tilting by such a mechanical mechanism.
- the sensation device control unit 117 makes the control of the sensation device when the switching condition is not satisfied different from the control of the sensation device when the switching condition is satisfied.
- for example, the sensation device control unit 117 does not operate the sensation device when the switching condition is not satisfied (before the switching condition is satisfied), and may operate the sensation device when the switching condition is satisfied (after the switching condition is satisfied).
- the control mode of the sensation apparatus and the type of sensation apparatus to be controlled may be different depending on whether the switching condition is not satisfied or when the switching condition is satisfied. For example, when the switching condition is not satisfied, the sensation apparatus is controlled in the first control mode, and when the switching condition is satisfied, the sensation apparatus is controlled in the second control mode different from the first control mode. Control.
- the first control mode and the second control mode differ in the degree of sensation experienced by the user.
- the first control mode and the second control mode have different blower strengths or different wind temperatures.
- the first control mode and the second control mode differ in vibration intensity and vibration time.
- when the switching condition is not satisfied, the first type of sensation apparatus may be controlled, and when the switching condition is satisfied, the second type of sensation apparatus may be controlled. That is, the type of sensation apparatus to be controlled is varied depending on whether or not the switching condition is satisfied.
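- By way of illustration, a sketch of the two control modes under the assumption of hypothetical `blower`/`vibrator` driver objects (the method names are invented for this example):

```python
def control_sensation_devices(switching_satisfied, blower, vibrator):
    """First control mode before the switching condition is satisfied,
    second control mode after it is satisfied."""
    if not switching_satisfied:
        # First control mode: devices idle (or set to a weak level).
        blower.set_strength(0.0)
        vibrator.set_amplitude(0.0)
    else:
        # Second control mode: e.g. cold wind and floor vibration so that
        # the user bodily feels the move to the ice country.
        blower.set_strength(0.8)        # stronger airflow
        blower.set_temperature("cold")  # cold air rather than hot air
        vibrator.set_amplitude(0.5)     # noticeable floor vibration
```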
- the simulation system also includes a notification processing unit 116.
- the notification processing unit 116 performs an output process of notification information on collision between users in real space. For example, based on the user's position information acquired by the information acquisition unit 111, a warning notification process for a collision between users is performed. For example, a prediction process is performed as to whether or not the user is in a collision positional relationship (approaching relationship). If such a positional relationship is reached, a notification process for warning that there is a possibility of a collision is performed.
- the prediction process can be realized by determining whether or not there is a possibility that the users have a positional relationship of collision based on the position, speed, acceleration, or the like of each user moving body corresponding to each user.
- the warning notification process can be realized by an image displayed on the HMD 200, by sound output from headphones or from a speaker installed in the play field, by vibration from a vibration device provided in equipment such as the user's weapon, clothing, or accessories, or by various types of sensation devices (sensation devices using wind, vibration, light, an air cannon, sound, etc.) provided in the real-space field.
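- The collision prediction described above can be sketched as a closest-approach test on each user's position and velocity; the thresholds below (`horizon`, `radius`) are illustrative assumptions:

```python
import math

def collision_risk(p1, v1, p2, v2, horizon=1.0, radius=0.6):
    """Predict whether two users will come within `radius` metres of each
    other inside `horizon` seconds, assuming constant velocity.
    p*, v* are (x, y) tuples in real-space floor coordinates."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]   # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    vv = vx * vx + vy * vy
    # Time of closest approach, clamped to [0, horizon].
    t = 0.0 if vv == 0 else max(0.0, min(horizon, -(rx * vx + ry * vy) / vv))
    dx, dy = rx + vx * t, ry + vy * t
    return math.hypot(dx, dy) < radius

# Two users walking toward each other trigger the warning notification.
if collision_risk((0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (-1.0, 0.0)):
    print("warn users: possible collision")
```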
- the simulation system of this embodiment includes an information acquisition unit 111 that acquires position information of the user in real space.
- the information acquisition unit 111 acquires user position information through user viewpoint tracking or the like.
- the moving body processing unit 113 performs processing for moving the user moving body based on the acquired position information, and the display processing unit 120 generates a display image of the HMD 200 worn by the user.
- the user moving body in the virtual space is moved so as to follow the movement of the user in the real space. Then, an image that can be seen from the virtual camera corresponding to the user moving object is generated as a display image of the HMD 200.
- the display processing unit 120 sets the position information of the user moving body or the virtual camera, specified by the position information of the user in the real space, as position information of the first virtual space.
- the first drawing process is performed by associating the position information of the user moving body or the virtual camera as the position information of the first virtual space.
- as the first drawing process, at least a process of drawing an image of the first virtual space is performed.
- the position information of the user moving body or the virtual camera specified by the position information of the user in the real space is set as the position information of the second virtual space.
- the second drawing process is performed by associating the position information of the user moving body or the virtual camera as the position information of the second virtual space.
- a process of drawing at least an image of the second virtual space is performed.
- the information acquisition unit 111 acquires position information of a user who wears the HMD 200 so as to cover the field of view. For example, the information acquisition unit 111 acquires the position information of the user in real space based on the tracking information of the HMD 200 and the like. For example, the position information of the HMD 200 is acquired as the position information of the user wearing the HMD 200. Specifically, when the user is located in a play field (simulation field, play area) in the real space (real world), position information in the play field is acquired. Note that the position information of the user may be acquired by directly tracking the user or a part of the user such as the head, instead of by the tracking process of the HMD 200.
- the virtual camera control unit 114 controls the virtual camera so as to follow the change of the user's viewpoint based on the tracking information of the user's viewpoint information.
- the input processing unit 102 acquires tracking information (viewpoint tracking information) of the viewpoint information of a user wearing the HMD 200. For example, viewpoint information that is at least one of the user's viewpoint position and line-of-sight direction is acquired.
- This tracking information can be acquired by performing tracking processing of the HMD 200, for example.
- the user's viewpoint position and line-of-sight direction may be directly acquired by tracking processing.
- the tracking information includes at least one of change information of the viewpoint position from the user's initial viewpoint position (change values of the coordinates of the viewpoint position) and change information of the line-of-sight direction from the user's initial line-of-sight direction (change values of the rotation angle around the rotation axis of the line-of-sight direction). Based on the change information of the viewpoint information included in such tracking information, the user's viewpoint position and line-of-sight direction (information on the position and posture of the user's head) can be specified.
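- A minimal sketch of recovering the current viewpoint from the initial values plus the change information carried by the tracking data; this simplification tracks only yaw, whereas a real HMD pipeline would use a full rotation (e.g. a quaternion):

```python
def current_viewpoint(initial_pos, initial_yaw, delta_pos, delta_yaw):
    """Apply the change information (delta_pos, delta_yaw) from the tracking
    data to the initial viewpoint position and line-of-sight direction."""
    pos = tuple(p0 + dp for p0, dp in zip(initial_pos, delta_pos))
    yaw = initial_yaw + delta_yaw   # rotation about the vertical axis
    return pos, yaw

# Example: user started at head height 1.6 m, then walked and turned a little.
pos, yaw = current_viewpoint((0.0, 1.6, 0.0), 0.0, (0.3, 0.0, 1.2), 0.5)
print(pos, yaw)   # (0.3, 1.6, 1.2) 0.5
```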
- a virtual reality simulation process is performed as the game process of the game played by the user.
- the virtual reality simulation process is a simulation process for simulating an event in the real space in the virtual space, and is a process for causing the user to experience the event virtually. For example, a virtual user corresponding to a user in real space or a moving body such as a boarding moving body is moved in the virtual space, or processing for causing the user to experience changes in the environment and surroundings associated with the movement is performed.
- the processing of the simulation system of this embodiment in FIG. 1 can be realized by a processing device such as a PC installed in a facility, by a processing device worn by the user, or by distributed processing of these processing devices.
- FIG. 2A shows an example of the HMD 200 used in the simulation system of this embodiment.
- the HMD 200 is provided with a plurality of light receiving elements 201, 202, and 203 (photodiodes).
- the light receiving elements 201 and 202 are provided on the front side of the HMD 200, and the light receiving element 203 is provided on the right side of the HMD 200.
- a light receiving element (not shown) is also provided on the left side, upper surface, and the like of the HMD.
- a controller (not shown) is attached to the HMD 200 or the like, and based on this controller, a motion detection process for hands and fingers is realized.
- This controller has, for example, a light emitting unit such as an LED that emits infrared rays, and a plurality of infrared cameras that photograph a hand or a finger illuminated by infrared rays. Based on the image analysis result of the image captured by the infrared camera, the movement of the hand or finger is detected. By providing such a controller, it becomes possible to detect the movement of the user's hand or finger when the door is opened.
- the HMD 200 is provided with a headband 260 and the like so that the user US can stably wear the HMD 200 on the head with a better wearing feeling.
- the HMD 200 is provided with a headphone terminal (not shown). By connecting a headphone 270 (sound output unit 192) to the headphone terminal, for example, processing of three-dimensional sound (three-dimensional audio) is performed.
- the user US can listen to the game sound.
- the operation information of the user US may be input by detecting the head movement or the swinging movement of the user US by the sensor unit 210 of the HMD 200 or the like.
- the user US wears a processing device (backpack PC) (not shown) on his back, for example.
- a processing device is realized by an information processing device such as a notebook PC.
- this processing apparatus and HMD200 are connected by the cable not shown.
- the processing device performs processing for generating an image (game image or the like) displayed on the HMD 200, and the generated image data is sent to the HMD 200 via a cable and displayed on the HMD 200.
- this processing apparatus is also capable of performing each process of this embodiment (information acquisition processing, virtual space setting processing, moving body processing, virtual camera control processing, game processing, notification processing, sensation device control processing, and the like).
- each process of the present embodiment may instead be realized by a processing device (not shown) such as a PC installed in a facility, or by distributed processing of that processing device and the processing device worn by the user US.
- base stations 280 and 284 are installed around the simulation system.
- the base station 280 is provided with light emitting elements 281 and 282, and the base station 284 is provided with light emitting elements 285 and 286.
- the light emitting elements 281, 282, 285, and 286 are realized by LEDs that emit laser (infrared laser or the like), for example.
- the base stations 280 and 284 use these light emitting elements 281, 282, 285, and 286 to emit, for example, a laser beam radially.
- the light receiving elements 201 to 203 and the like provided in the HMD 200 in FIG. 2A receive the laser beams from the base stations 280 and 284, thereby realizing tracking of the HMD 200 so that the position and direction of the head of the user US (viewpoint position, line-of-sight direction) can be detected.
- FIG. 3A shows another example of the HMD 200.
- a plurality of light emitting elements 231 to 236 are provided for the HMD 200. These light emitting elements 231 to 236 are realized by LEDs, for example.
- the light emitting elements 231 to 234 are provided on the front side of the HMD 200, and the light emitting element 235 and the light emitting element 236 (not shown) are provided on the back side.
- These light emitting elements 231 to 236 emit (emit) light in a visible light band, for example. Specifically, the light emitting elements 231 to 236 emit light of different colors.
- the imaging unit 150 shown in FIG. 3B is installed in at least one place around the user US (for example, on the front side, or on the front side and the rear side).
- the light from the light emitting elements 231 to 236 is captured by the imaging unit 150; that is, spot images of these light emitting elements 231 to 236 appear in the captured image of the imaging unit 150. By performing image processing on this captured image, tracking of the head (HMD) of the user US is implemented. The imaging unit 150 includes first and second cameras 151 and 152, and by using the first and second captured images of these first and second cameras 151 and 152, the position of the head of the user US in the depth direction and the like can be detected.
- the rotation angle (line-of-sight direction) of the head of the user US can also be detected. Therefore, by using such an HMD 200, even when the user US faces in any direction over the full 360 degrees around, an image of the virtual space (virtual three-dimensional space) viewed from the virtual camera corresponding to the user's viewpoint can be displayed on the display unit 220 of the HMD 200.
- as the light emitting elements 231 to 236, infrared LEDs may be used instead of visible-light LEDs.
- the position or movement of the user's head may be detected by another method such as using a depth camera.
- the tracking processing method for detecting the user's viewpoint position and line-of-sight direction is not limited to the method described with reference to FIGS.
- the tracking process may be realized by the HMD 200 alone, using a motion sensor or the like provided in the HMD 200; that is, the tracking process is realized without providing external devices such as the base stations 280 and 284 in FIG. 2B or the imaging unit 150 in FIG. 3B. Alternatively, viewpoint information such as the user's viewpoint position and line-of-sight direction may be detected by various known viewpoint tracking methods such as eye tracking, face tracking, or head tracking.
- the display unit on which the image generated by the simulation system is displayed is not limited to the HMD, and may be a normal display used in a home game device, a business game device, or a PC.
- in the following description, the case where the determination target for passage of the singular point, the setting target of the virtual space position information, and the like is mainly the user moving body is used as an example; however, the determination target and the setting target may instead be the virtual camera. The user moving body corresponding to the user is described below as a user character.
- the method of the present embodiment can be applied to various games (virtual experience games, battle games, RPGs, action games, competition games, sports games, horror experience games, simulation games of vehicles such as trains and airplanes, puzzle games, communication games, music games, and the like), and can also be applied to uses other than games.
- this game is a VR (virtual reality) experience game in which the user can move among a plurality of virtual spaces (virtual worlds).
- FIG. 4 and 5 are explanatory diagrams of the play field FL in the room used in the simulation system of this embodiment.
- FIG. 4 is a perspective view illustrating the play field FL
- FIG. 5 is a top view.
- in the play field FL (play area, play space) simulating a room, a door DR, a desk DK, a bookshelf BS, and the like are arranged, and windows WD1 and WD2 are provided on the wall.
- the user enters the play field FL simulating this room and enjoys a VR game.
- a plurality of users US1 and US2 enter the room, and these two users US1 and US2 can enjoy a VR game.
- Each user US1, US2 wears a processing device (backpack PC) on his / her back, for example, and an image generated by this processing device is displayed on HMD1, HMD2 (head-mounted display device).
- a management processing device (not shown) is arranged in the play field FL, and this management processing device performs data synchronization processing (communication processing) between the processing devices worn by the users US1 and US2. For example, a synchronization process is performed for displaying a user character (user moving body in a broad sense) corresponding to the user US2 on the HMD1 of the user US1 and displaying a user character corresponding to the user US1 on the HMD2 of the user US2.
- the management processing device can also perform control processing of the sensation devices.
- an operator waits in the room, operating the management processing device, helping the users US1 and US2 put on the HMD1, HMD2 and jackets, and performing operation and guidance work for game progress.
- the base stations 280 and 284 described with reference to FIG. 2B are installed in the room of the play field FL, so that the position information of the users US1 and US2 can be acquired using these base stations 280 and 284.
- the play field FL is provided with a blower BL and a vibration device VB, which are sensation devices for allowing the user to experience virtual reality.
- the vibration device VB is realized by, for example, a transducer installed under the floor of a room.
- in this simulation system, when the real-world door DR is opened, another world (a world different from the room scenery) spreads out beyond the door DR.
- the user can experience a virtual reality that can pass through the door DR and go to another world.
- the images of the first virtual space corresponding to the room are displayed on the HMD1 and HMD2 of the users US1 and US2.
- in the first virtual space (first object space), objects corresponding to the objects placed and installed in the room are arranged; for example, objects corresponding to the door DR, the desk DK, the bookshelf BS, and the windows WD1 and WD2 are arranged. Then, an image seen from the virtual camera (first or second virtual camera) corresponding to the viewpoint (first or second viewpoint) of the user US1 or US2 is generated and displayed on the HMD1 or HMD2.
- the users US1 and US2 who move while wearing the HMD1 and HMD2 can experience virtual reality as if they were walking around a real room.
- as shown in FIG. 6A, when the user (US1, US2) opens the door DR for the first time, the other side of the door DR is still the room, and a room image is displayed in the region of the door DR (the region corresponding to the singular point, in a broad sense). Then, after the user closes the door DR as shown in FIG. 6B, when the door DR is opened again as shown in FIG. 6C, the other side of the door DR changes to the ice country. That is, as shown in FIG. 7, an image of the ice country, which is an image of the second virtual space VS2 (second object space), is displayed in the region of the door DR (door opening region). At this time, as shown in FIG. 7, an image of the room, such as the bookshelf BS and the windows WD1 and WD2, is displayed around the door DR. That is, an image of the ice country, which is an image of the second virtual space VS2, is displayed in the region of the door DR (the region corresponding to the singular point), while an image of the room, which is an image of the first virtual space VS1, is displayed in the region other than the door DR.
- the user character UC1 (user moving body in a broad sense) corresponding to the user US1 moves from the room, which is the first virtual space VS1, to the ice country, which is the second virtual space VS2.
- the user character UC1 is a character (display object) that moves in the virtual space as the user US1 moves in the real space, and is also called an avatar.
- the present embodiment mainly deals with the case where the position information of the user US1 in the real space is acquired and the user character UC1 is moved in the virtual space (first and second virtual spaces) based on the acquired position information.
- the present embodiment is not limited to this.
- the user character UC1 may be moved in the virtual space (first and second virtual spaces) based on operation information from the operation unit 160 (game controller or the like) in FIG.
- the movement of the hand or finger of the user US1 (US2) is detected by, for example, a Leap Motion process, and based on this detection result, a motion process that moves the corresponding part (hand or finger) of the user character UC1 (UC2) is performed. Thus, when the user US1 opens the door DR, the user US1 can visually recognize the movement of his or her own hand or finger by looking at the movement of the hand or finger part of the user character UC1.
- FIG. 9 is an example of an image displayed when the user character UC1 that has moved to the ice country, which is the second virtual space VS2, looks back toward the door DR as shown in FIG. 8.
- in the region of the door DR, an image of the room, which is an image of the first virtual space VS1, is displayed, while around the door DR, an image of the ice country, which is an image of the second virtual space VS2, is displayed. That is, an image of the room, which is an image of the first virtual space VS1, is displayed in the area of the door DR, while an image of the ice country, which is an image of the second virtual space VS2, is displayed in the area other than the door DR.
- when the user character UC1 moves toward the door DR and passes through the door DR again, the user character UC1 can return to the room that is the first virtual space VS1. That is, by passing through the door DR in the first traveling direction in FIG. 7, it is possible to move (warp) from the first virtual space VS1 (room) to the second virtual space VS2 (ice country). On the other hand, in FIG. 9, by passing through the door DR in the second traveling direction different from the first traveling direction, it is possible to move from the second virtual space VS2 (ice country) to the first virtual space VS1 (room). That is, the user character UC1 can freely go back and forth between the first virtual space VS1 and the second virtual space VS2 through the door DR (the place corresponding to the singular point).
- FIG. 10A and FIG. 10B show a situation when the users US1 and US2 pass through the door DR.
- the user characters UC1 and UC2 corresponding to the users US1 and US2 are located in the ice country that is the second virtual space. That is, the position information of the user characters UC1 and UC2 is set as the position information of the second virtual space.
- in the play field FL, a blower BL and a vibration device VB, which are sensation devices, are installed.
- the blower BL is installed on the front side of the users US1 and US2, and the vibration device VB is installed under the floor around the users US1 and US2.
- the blower BL starts to blow air, and the users US1 and US2 are exposed to wind (cold wind).
- a whistling sound of the snowstorm is output from the speakers of the headphones worn by the users US1 and US2.
- the users US1 and US2 can feel a virtual reality as if they came to a real ice country.
- the range in which the users US1 and US2 can actually move is the same as the room area, and they cannot move beyond the room walls shown in FIG. 4.
- notification information warning of a collision with the wall or the like is displayed on the HMD1 and HMD2 so that such collisions can be avoided.
- when the user opens the door, the scenery of the ice country spreads out beyond it, and the user sees a completely different landscape of sunlight and white ice. The dazzling light and the sound of the blowing wind heighten the sense of virtual reality.
- the user can step out beyond the door, look around to see the sea, icebergs, the sun in a clear sky, and diamond dust, and enjoy the scenery of the ice country. Also, as shown in FIGS. 10A and 10B, the sea extends below the cliff on the far side of the door, and the user shudders at the height. The user can further enjoy a thrilling experience in which the floor under the user's feet is vibrated by the vibration device while the cliff in front of the user collapses.
- the user can go back and forth between the two virtual spaces (virtual worlds); for example, one user can stand in front of the door while another walks around behind it and confirms that the first user cannot be seen from there. By standing sideways in the middle of the doorway, the user can also experience the difference in field of view and sound between the left and right halves.
- when the door DR is closed as shown in FIG. 6D and opened again as shown in FIG. 6E, the other side of the door DR changes to the space above a train. That is, as shown in FIG. 11A, an image as seen when riding on the train TR, which is an image of the third virtual space VS3, is displayed in the area of the door DR.
- an image of the room, such as the bookshelf BS and the windows WD1 and WD2, is displayed in the area other than the door DR.
- the virtual space switching process may be suppressed when the door is opened only slightly and then closed immediately. For example, depending on the user, when an image as shown in FIG. 11A is displayed, the user may be surprised and immediately close the door. In such a case, it would be undesirable for the train scene image as shown in FIG. 11A to no longer be displayed the next time the door is opened.
- an image as shown in FIG. 12 is also displayed. In the region of the door DR, an image of the room, which is an image of the first virtual space VS1, is displayed, while around the door DR, an image on the train (an image of the tunnel, the train, and so on), which is an image of the third virtual space VS3, is displayed. That is, an image of the room, which is an image of the first virtual space VS1, is displayed in the area of the door DR, while an image on the train, which is an image of the third virtual space VS3, is displayed in the area other than the door DR.
- an effect image is displayed in which sparks are scattered when the upper end of the door DR hits the roof of the tunnel TN.
- when the user character UC1 moves toward the door DR and passes through the door DR again, the user character UC1 can return to the room that is the first virtual space VS1. As shown in FIGS. 6A to 6E, every time the door DR is opened and closed, the virtual space on the other side of the door DR is switched among the room, the ice country, the top of the train, and so on.
- the user can enjoy a thrilling experience that is suddenly thrown into a fast and dangerous place.
- the train repeatedly enters and exits the tunnel, and when the top edge of the door hits the tunnel roof, sparks scatter, further heightening the sense of thrill. The vibration of the floor can give the user the feeling of really being on the train, and the blower can convey a sense of speed.
- the first virtual space and the second virtual space that is linked to the first virtual space via a singular point are set as the virtual space.
- the first virtual space is, for example, a virtual space corresponding to a room
- the second virtual space is a virtual space corresponding to an ice country.
- These first and second virtual spaces are connected via a singular point corresponding to the door DR.
- here, a singular point is a point at which a law that serves as a reference ceases to apply. Normally, the law (rule) that the user character moves only within the first virtual space is applied; the singular point of the present embodiment is a point at which such a law (standard) is not applied and is not followed.
- when the switching condition is established by passing through the singular point, the law of moving only within the first virtual space is no longer applied to the user character, and the user character moves within the second virtual space (the ice country), which is different from the first virtual space.
- until the switching condition is satisfied, the position information of the user character is set as position information of the first virtual space.
- for example, in FIG. 13A, information on the position P1 of the user character UC1 is set as position information of the first virtual space VS1 (room). Then, as the first drawing process, at least a process of drawing an image of the first virtual space VS1 (room) is performed; that is, an image seen from the virtual camera (user's viewpoint) is generated in the first virtual space VS1.
- in addition to the image of the first virtual space VS1, an image of the second virtual space VS2 (ice country) is drawn, and an image is generated in which the image of the second virtual space VS2 is displayed in the region of the door DR, which is the region corresponding to the singular point. That is, as shown in FIG. 7, an image is generated in which an image of the ice country is displayed in the area of the door DR and an image of the room is displayed in the area other than the door DR.
- the switching condition is satisfied when the user character UC1 (virtual camera) passes through the location of the door DR corresponding to the singular point (SG) in the first traveling direction D1. That is, in FIG. 13A, the switching condition is established when the user character UC1 moves from the position P1 through the door DR to the position P2.
- the position information of the user character UC1 (virtual camera) is set as the position information of the second virtual space VS2 (ice country).
- information on the positions P2 and P3 of the user character UC1 is set as position information on the second virtual space VS2 (ice country).
- then, as the second drawing process, at least a process of drawing an image of the second virtual space VS2 is performed; that is, an image seen from the virtual camera (user's viewpoint) is generated in the second virtual space VS2. In addition to the image of the second virtual space VS2, an image of the first virtual space VS1 (room) is drawn, and an image is generated in which the image of the first virtual space VS1 is displayed in the region of the door DR, which is the region corresponding to the singular point. That is, as shown in FIG. 9, an image is generated in which an image of the room is displayed in the area of the door DR and an image of the ice country is displayed in the area other than the door DR.
- for example, suppose that the user character UC1 (virtual camera) has passed through the location of the door DR (the place corresponding to the singular point SG) in the first traveling direction D1, and that the line-of-sight direction SL of the virtual camera faces the direction opposite to the first traveling direction D1. That is, in FIG. 13A, after passing through the door DR in the first traveling direction D1, the user looks back toward the door DR, so that the line-of-sight direction SL of the virtual camera is directed toward the door DR, for example in the direction opposite to the first traveling direction D1. In such a case, in the present embodiment, as shown in FIG. 9, an image is generated in which an image of the first virtual space VS1 (room) is displayed in the region of the door DR (the region corresponding to the singular point SG).
- FIG. 13A it is assumed that the user character UC1 moves from the position P2 to the position P3 in the second virtual space VS2, and the visual line direction SL of the virtual camera is directed toward the door DR.
- when the user character moves from the position P2 to the position P3 on the back side of the door DR with the line-of-sight direction SL directed toward the door DR, the room image is not displayed in the area of the door DR. In contrast, when the line-of-sight direction SL of the virtual camera is directed toward the door DR at the position P2 in FIG. 13A, the room on the other side of the door DR can be seen through the opening area of the door DR, as shown in FIG. 9.
- the reason why such an image is displayed is that, in the present embodiment, the first virtual space and the second virtual space are connected discontinuously via the singular point SG. Therefore, as shown in FIG. 13A, when the room position P1, which is the movement source position, is viewed from the position P2, the room image can be seen through the opening area of the door DR; however, at the position P3 on the back side of the door DR, an image is displayed in which only the frame of the door DR (the open door) stands on the glacier. In this way, the user can have a virtual reality experience of warping from the first virtual space to the second virtual space, which is like a different dimension from the first virtual space, enabling an unprecedented realization of virtual reality.
- the images of FIGS. 7 and 9 can be generated by the following method, for example. For a pixel in an area other than the door DR (an area other than the area corresponding to the singular point), an image of the virtual space in which the user character is located is drawn, while for a pixel in the area of the door DR (the area corresponding to the singular point), an image of the virtual space on the other side of the singular point is drawn.
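- The per-pixel composition just described can be sketched as follows. The `renderer` API and the per-pixel loop are illustrative assumptions; an actual implementation would typically use a GPU stencil buffer rather than looping over pixels in software:

```python
def compose_frame(camera, current_space, other_space, renderer):
    """Pixels inside the door (singular-point) region show the space on the
    other side of the singular point; all other pixels show the space in
    which the user character is currently located."""
    img_current = renderer.draw(current_space, camera)   # full-frame render
    img_other = renderer.draw(other_space, camera)       # full-frame render
    door_mask = renderer.draw_mask(current_space.door_opening, camera)

    frame = renderer.blank_frame()
    for y in range(frame.height):
        for x in range(frame.width):
            # Singular-point region -> other space; elsewhere -> current space.
            frame.set(x, y, img_other.get(x, y) if door_mask.get(x, y)
                      else img_current.get(x, y))
    return frame
```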
- FIG. 13B shows an example in which the user character UC1 does not pass through the location of the door DR (singular point SG) and the switching condition is not satisfied.
- in this case, the switching condition is not satisfied, so the virtual space is not switched, and the information of the position P2 is set as position information of the first virtual space VS1 (room). Accordingly, even if the line-of-sight direction SL of the virtual camera is directed toward the door DR at the position P2, a room image is displayed in the opening of the door DR; that is, an image in which only the frame of the door DR stands on the floor of the room is displayed.
- in FIG. 13B, even if the user character UC1 moves from the position P2 to the position P3 and the line-of-sight direction SL of the virtual camera is directed toward the door DR, an image in which only the frame of the door DR stands on the floor of the room is displayed.
- next, consider the case where the user character passes through the door DR in the second traveling direction D2, which is different from the first traveling direction D1. For example, if passing through the door DR in the first traveling direction D1 is passage in the positive direction, passing through the door DR in the second traveling direction D2 is passage in the negative direction, that is, in the direction opposite to (of reverse polarity from) the first traveling direction D1. In this case, information on the position P4 of the user character UC1 is set as position information of the first virtual space VS1 (room).
- then, as the first drawing process, at least a process of drawing an image of the first virtual space VS1 is performed; that is, a process of drawing an image of the room that is the first virtual space VS1 is performed.
- in addition, a process of drawing an image of the second virtual space VS2 is performed, and an image is generated in which the image of the second virtual space VS2 is displayed in the region of the door DR (singular point SG). For example, when the line-of-sight direction SL of the virtual camera of the user character UC1 at the position P4 faces the direction opposite to the second traveling direction D2, that is, toward the door DR, an image is generated in which an image of the second virtual space VS2 (ice country) is displayed in the area of the door DR.
- on the other hand, when the user character has moved from the position P2 to the position P4 of the second virtual space VS2 without passing through the door DR, the position P4 remains a position of the second virtual space VS2. That is, while in FIG. 13A the position P4 is switched to a position of the first virtual space VS1, in FIG. 13B the same position P4 remains a position of the second virtual space VS2.
- the user character (virtual camera) position information is set as position information of the third virtual space VS3 (on the train), and at least an image of the third virtual space VS3 is drawn as the third drawing process.
- for example, the user character UC1 has moved from the position P1 of the first virtual space VS1 to the position P2 by passing through the door DR (singular point SG) in the first traveling direction D1 (positive direction). Since the switching condition by passage through the door DR is thereby established, the position P2 is set as a position of the second virtual space VS2 (ice country). Then, the user character UC1 passes through the door DR in the second traveling direction D2 (negative direction) from the position P2 of the second virtual space VS2 and returns to the position P1. Since the switching condition by passage through the door DR is thereby established, the position P1 is set as a position of the first virtual space VS1 (room).
- then, when the door DR is opened and closed as shown in FIG. 15B, the world beyond the door DR changes from the second virtual space VS2 (ice country) to the third virtual space VS3 (on the train), and an image as shown in FIG. 11A is displayed.
- when the user character UC1 then moves from the position P1 of the first virtual space VS1 through the door DR to the position P2, the position P2 is set as a position of the third virtual space VS3 (on the train).
- as the third drawing process, at least an image of the third virtual space VS3 is drawn, thereby generating an image as shown in FIG. 11B, for example.
- an image as shown in FIG. 12 may also be generated. That is, as the third drawing process, a process of drawing the image of the first virtual space VS1 (room) in addition to the image of the third virtual space VS3 (on the train) is performed, and an image is generated in which the image of the first virtual space VS1 is displayed in the region of the door DR. In this way, not only the switching process between the first and second virtual spaces but also, for example, a switching process between the first and third virtual spaces can be realized. Alternatively, a switching process between the second and third virtual spaces may be performed.
- in FIG. 17A, a plurality of users corresponding to the user characters UC1 and UC2 are playing. The user character UC1 passes through the door DR from the first virtual space VS1, moves to the second virtual space VS2, then passes through the door DR again and returns to the first virtual space VS1. On the other hand, after the user character UC2 passes through the door DR and moves to the second virtual space VS2, the user character UC2 remains in the second virtual space VS2.
- in this case, the third drawing process in the third virtual space is permitted on the condition that both of the user characters UC1 and UC2 have returned to the first virtual space VS1.
- in FIG. 18A, not only the user character UC1 but also the user character UC2 passes through the door DR from the position of the second virtual space VS2 and returns to the first virtual space VS1. Then, after the user characters UC1 and UC2 have returned to the first virtual space VS1, the door DR is opened and closed.
- in FIG. 18B, when the user characters UC1 and UC2 pass through the door DR and the switching condition is satisfied, the positions of the user characters UC1 and UC2 are set as positions of the third virtual space VS3 (on the train). Then, the third drawing process in the third virtual space VS3 is performed, and images as shown in FIGS. 11B and 12 are generated. In this way, it is possible to prevent a situation in which the game progresses while some of the user characters corresponding to the plurality of users are left behind in the second virtual space, so that appropriate and smooth game progress can be realized.
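- A minimal sketch of this gating logic, assuming hypothetical `users`, `door`, and space objects (none of these names come from the disclosure):

```python
def third_drawing_permitted(users, first_space):
    """Permit the move to the third virtual space only when every user
    moving body (or virtual camera) has returned to the first space."""
    return all(u.current_space is first_space for u in users)

def on_door_opened(users, door, first_space, second_space, third_space):
    """Decide which world appears beyond the door when it is next opened."""
    if third_drawing_permitted(users, first_space):
        door.far_side = third_space    # e.g. the train world becomes reachable
    else:
        door.far_side = second_space   # someone is still in the ice country
```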
- for example, voice information such as "I want to go to the ice country" is input as input information of the user US1 using a microphone or the like. Based on such input information of the user US1, it may be determined whether the switching condition is satisfied and, for example, the virtual space switching process may be performed. In this way, the virtual space switching process can be realized simply by entering input information.
- the user input information used for determining the switching condition may be, for example, operation information input by a game controller (operation unit in a broad sense).
- alternatively, the switching condition may be determined using input information based on the movement of the user's hand or finger as the user's input information.
- the movement of the user's head may be detected based on an HMD sensor unit or the like and used as user input information.
- regarding the switching condition, it may also be determined whether the switching condition is satisfied based on the detection information of a sensor.
- a sensor SE is provided for the door DR.
- the opening / closing state of the door DR may be determined based on the detection information of the sensor SE to determine whether the switching condition is satisfied.
- as the sensor SE, for example, a light receiving element such as the one described with reference to FIG. 2A can be used. In this way, the open/closed state of the door DR can be determined by detecting, with the light receiving element serving as the sensor SE, the light from the base stations 280 and 284 provided in the room as shown in FIG. 4. Therefore, the switching condition can be determined with simple processing.
- a motion sensor composed of an acceleration sensor, a gyro sensor, or the like may be used as the sensor SE.
- the open / close state of the door DR may be determined to determine whether or not the virtual space switching condition is satisfied.
- various types of sensors can be employed as the sensor SE.
- an object corresponding to a singular point is arranged in the play field FL in the real space where the users (US 1, US 2) move.
- a door DR is arranged as an object corresponding to a singular point.
- then, when the user character (virtual camera) passes through the location of the door object simulating the real-space door in the virtual space, the door of the real space and the door object of the virtual space can be properly associated with each other, and the user's sense of virtual reality can be further improved.
- since the door in the real space can actually be opened and closed, the door object in the virtual space is also opened and closed in conjunction with it, and the user's sense of virtual reality can be greatly improved.
- in FIG. 20A, as shown by B1, only the head and upper body of the user character UC1 have passed through the door DR and moved to the second virtual space VS2 (ice country).
- the lower body part of the user character UC1 remains in the first virtual space VS1 (room).
- in this case, when seen by another user, the head and upper body of the user character UC1 appear to have disappeared, conveying the strange sensation that the other side of the door DR is a different world. As shown in FIG. 20B, the appearance differs further depending on which parts have passed through; to realize such display, it is desirable to also detect the position information of lower body parts such as the feet. Note that, as shown in FIGS. 20A and 20B, it suffices that at least the virtual camera corresponding to the user's viewpoint passes through the region corresponding to the singular point; the entire user character does not need to pass through.
- in FIG. 21A, the user character UC1 is located in the second virtual space VS2 (ice country), and, unlike FIG. 8, the door DR corresponding to the singular point SG is not displayed. In this case, the user US1 corresponding to the user character UC1 inputs, for example, a voice command meaning "make the door appear". Thereby, the process of setting the singular point SG is performed, and the door DR corresponding to the singular point SG appears in the second virtual space VS2 and is displayed as shown in FIG. 21B. Conversely, in FIG. 21B, the user US1 may input, for example, a voice command meaning "make the door disappear" to perform the process of unsetting the singular point SG, so that the door DR is hidden as shown in FIG. 21A.
- in this way, the setting or non-setting of singular points can be switched arbitrarily based on user input information or the like; for example, an object corresponding to a singular point can be displayed or hidden. As a result, it becomes possible, for example, to hide singular points, and more varied game processes and game effects can be realized.
- the setting and non-setting of singular points may also be switched according to, for example, the result of the game processing or the game level of the user, in addition to the user input information.
- processing for controlling the sensation device for allowing the user to experience virtual reality is performed, and the control of the sensation device when the switching condition is not satisfied may be made different from the control of the sensation device when the switching condition is satisfied.
- for example, the user character UC1 moves to the area on the other side of the door DR without passing through the door DR, so the virtual space switching condition is not satisfied. In this case, sensation devices such as the blower BL and the vibration device VB are not operated.
- the user character UC1 passes through the door DR and moves to the other side of the door DR, and the virtual space switching condition is satisfied. In this case, sensation devices such as the blower BL and the vibration device VB are operated. For example, control such as blowing cool air by the blower BL or vibrating the floor by the vibration device VB is performed. By doing so, as described in FIG. 10A and FIG. 10B, the virtual space is switched from the first virtual space (room) to the second virtual space (ice country). It is possible to let the user experience using a sensation apparatus such as the blower BL and the vibration device VB.
- the control mode of the sensation apparatus and the type of sensation apparatus to be controlled may also be varied depending on whether or not the switching condition is satisfied.
- the power of the blower BL may be varied, or the vibration level of the vibration device VB may be varied.
- alternatively, a first sensation device for the first virtual space and a second sensation device for the second virtual space may be prepared, with the first sensation device operated when the switching condition is not satisfied and the second sensation device operated when the switching condition is satisfied.
- notification information output processing for collision between users in real space is performed.
- in FIG. 23A, a plurality of users US1 and US2 corresponding to the user characters UC1 and UC2 are playing the game. The user character UC1 passes through the door DR and moves to the second virtual space VS2, while the user character UC2 does not pass through the door DR and remains in the first virtual space VS1. Then, in FIG. 23B, the user character UC1 moves within the second virtual space VS2 and approaches the user character UC2.
- FIG. 24 is an example of a display image of the HMD1 of the user US1. In this display image, a hand, a finger, and the like of the user US1 are displayed.
- since the user character UC2 is located in a different virtual space, it would normally not be displayed on the HMD1 of the user US1. In the present embodiment, however, the user character UC2 is displayed faintly, like a ghost character; for example, the user character UC2 is displayed semi-transparently, synthesized with the background or the like at a given translucency.
- the user US1 can visually recognize that the user US2 corresponding to the user character UC2 is nearby, and can avoid a collision between users in the real world.
- similarly, the HMD2 of the user US2 displays the user character UC1 corresponding to the user US1 faintly, like a ghost character, by, for example, semi-transparent composition with the background or the like. By doing so, the user US2 can visually recognize that the user US1 corresponding to the user character UC1 is nearby, and a collision between the users in the real world can be avoided.
- the display mode of the user character UC2 in FIG. 24 may be changed according to the risk of collision. For example, as the distance between the user characters UC1 and UC2 decreases, or as the approach speed of the user characters UC1 and UC2 increases, the display of the user character UC2 may be gradually darkened to increase its visibility.
- alternatively, information notifying of a collision warning may be superimposed on the display image of the HMD, or the user may be alerted by a warning sound, vibration, or the like.
- various types of output processing can be assumed as the notification information output processing.
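- The risk-dependent display mode above can be sketched as a simple mapping from distance and approach speed to the ghost character's opacity; the thresholds `d_max` and `v_max` are illustrative assumptions:

```python
def ghost_alpha(distance, approach_speed, d_max=4.0, v_max=2.0):
    """Return the opacity (0 = invisible, 1 = opaque) of the 'ghost'
    character: it darkens as the other user gets closer or approaches faster."""
    closeness = max(0.0, 1.0 - distance / d_max)
    speed = min(1.0, max(0.0, approach_speed) / v_max)
    return min(1.0, 0.2 + 0.8 * max(closeness, speed))

print(ghost_alpha(distance=0.8, approach_speed=1.5))  # close and fast -> almost opaque
```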
- in FIG. 25A, after the user characters UC1 and UC2 have moved to the second virtual space VS2, the user character UC2 moves to a position behind the door DR. Then, as shown in FIG. 25B, the user character UC2 moves from the position on the back side of the door DR toward the user character UC1 and passes through the door DR.
- the visual line direction SL of the virtual camera of the user character UC1 faces the direction of the door DR.
- an image in which the region of the door DR becomes an image of the room is displayed.
- in FIG. 25B, when the user character UC2 in the second virtual space VS2 passes through the door DR, an image in which the user character UC2 suddenly pops out of the room image is displayed, so that correct images are displayed even in such a situation.
- FIG. 26A information on the position PR of the user US1 in the real space is acquired.
- the information on the position PR in the real space can be acquired by, for example, the tracking process of the HMD 1 described with reference to FIGS. 2 (A) to 3 (B).
- based on the acquired position information, a process of moving the user character UC1 in the virtual space shown in FIG. 26B is performed. That is, the position PR of the user US1 in the real space shown in FIG. 26A and the position PV of the user character UC1 in the virtual space shown in FIG. 26B are associated with each other, and the position PV is changed in conjunction with the movement of the position PR, thereby moving the user character UC1.
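- A minimal sketch of this one-to-one mapping between the tracked real-space position PR and the virtual-space position PV; the origin offsets and `scale` parameter are illustrative assumptions (for room-scale VR the scale is normally 1):

```python
def real_to_virtual(pr, origin_real, origin_virtual, scale=1.0):
    """Map the tracked real-space position PR to the virtual-space position PV
    so that the user character moves in lockstep with the user."""
    return tuple(ov + (p - orl) * scale
                 for p, orl, ov in zip(pr, origin_real, origin_virtual))

# Example: anchor the real play field origin to a point inside the virtual room.
pv = real_to_virtual(pr=(1.2, 0.0, 0.5),
                     origin_real=(0.0, 0.0, 0.0),
                     origin_virtual=(10.0, 0.0, -3.0))
print(pv)  # (11.2, 0.0, -2.5)
```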
- the display image of the HMD1 worn by the user US1 is then generated by the method of the present embodiment.
- that is, until the switching condition is satisfied, the information on the position PV of the user character UC1 (virtual camera), which is specified by the information on the position PR of the user US1 in the real space, is set as position information of the first virtual space. Then, when the switching condition is satisfied, the information on the position PV of the user character UC1 (virtual camera), which is specified by the information on the position PR of the user US1 in the real space, is set as position information of the second virtual space.
- consequently, the image displayed on the HMD1 differs between the case of moving to the positions P2 and P3 by passing through the door DR as shown in FIG. 13A and the case of moving to the positions P2 and P3 without passing through the door DR as shown in FIG. 13B. That is, even at the same real-space positions P2 and P3, an image of the second virtual space VS2 (ice country) is displayed on the HMD1 in FIG. 13A, whereas an image of the first virtual space VS1 (room) is displayed on the HMD1 in FIG. 13B. Therefore, it becomes possible to provide the user with a mysterious VR experience that could not be realized with conventional simulation systems.
- in step S1, it is determined whether or not the switching condition is satisfied. When the user character does not pass through the place corresponding to the singular point and the switching condition is not satisfied, the position information of the user character or the virtual camera is set as position information of the first virtual space (step S2); for example, position information is set as shown in FIG. 13B. Then, in step S3, in addition to the image of the first virtual space, a process of drawing the image of the second virtual space is performed to generate an image in which the image of the second virtual space is displayed in the region corresponding to the singular point. For example, as shown in FIG. 7, an image is generated in which an image of the ice country, which is the second virtual space VS2, is displayed in the region of the door DR, which is the region corresponding to the singular point. On the other hand, when the switching condition is satisfied, the position information of the user character or the virtual camera is set as position information of the second virtual space (step S4); for example, position information is set as shown in FIG. 13A. Then, in step S5, in addition to the image of the second virtual space, a process of drawing the image of the first virtual space is performed to generate an image in which the image of the first virtual space is displayed in the region corresponding to the singular point. For example, as shown in FIG. 9, an image is generated in which an image of the room, which is the first virtual space VS1, is displayed in the region of the door DR, which is the region corresponding to the singular point.
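- Steps S1 to S5 can be summarized in the following sketch, which reuses the hypothetical `compose_frame` helper from the earlier per-pixel composition example:

```python
def update_and_draw(character, camera, door, first_space, second_space, renderer):
    # Step S1: determine whether the switching condition is satisfied
    # (here: whether the character has passed the place of the singular point).
    if not door.passed_by(character):
        # Step S2: position information stays in the first virtual space.
        character.space = first_space
        # Step S3: draw the first space, with the second space shown
        # inside the region corresponding to the singular point.
        return compose_frame(camera, first_space, second_space, renderer)
    else:
        # Step S4: position information is set in the second virtual space.
        character.space = second_space
        # Step S5: draw the second space, with the first space shown
        # inside the region corresponding to the singular point.
        return compose_frame(camera, second_space, first_space, renderer)
```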
- the present invention is not limited to what is described in the above embodiment, and techniques, processes, and configurations equivalent to these are also included in the scope of the present invention.
- the present invention can be applied to various games. Further, the present invention can be applied to various simulation systems such as a business game device, a home game device, or a large attraction system in which a large number of users participate.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
- Controls And Circuits For Display Device (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention relates to a simulation system comprising a virtual space setting unit, a moving body processing unit, and a display processing unit. The virtual space setting unit sets a first virtual space and a second virtual space as virtual spaces. Until a switching condition is established, the display processing unit sets position information of a user's moving body or of a virtual camera as position information in the first virtual space, performs a process of rendering an image of the second virtual space such that the image of the second virtual space is added to an image of the first virtual space, and generates an image in which the image of the second virtual space is displayed in a region corresponding to a singular point. When the switching condition has been established, the display processing unit sets the position information of the user's moving body or of the virtual camera as position information in the second virtual space, performs a process of rendering an image of the first virtual space such that the image of the first virtual space is added to an image of the second virtual space, and generates an image in which the image of the first virtual space is displayed in a region corresponding to the singular point.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016256772A JP6761340B2 (ja) | 2016-12-28 | 2016-12-28 | Simulation system and program |
JP2016-256772 | 2016-12-28 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018124280A1 (fr) | 2018-07-05 |
Family
ID=62709645
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/047250 WO2018124280A1 (fr) | Simulation system, image processing method, and information storage medium | 2016-12-28 | 2017-12-28 |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP6761340B2 (fr) |
WO (1) | WO2018124280A1 (fr) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2020026419A1 (ja) * | 2018-08-02 | 2021-08-12 | Sony Interactive Entertainment Inc. | Image generation device and image generation method |
US11373379B2 (en) * | 2018-08-23 | 2022-06-28 | Sony Interactive Entertainment Inc. | Image generation apparatus and image generation method for generating augmented reality images based on user interaction |
JP7574647B2 | 2019-01-30 | 2024-10-29 | Sony Group Corporation | Information processing device, information processing method, and recording medium storing a program |
US10978019B2 (en) | 2019-04-15 | 2021-04-13 | XRSpace CO., LTD. | Head mounted display system switchable between a first-person perspective mode and a third-person perspective mode, related method and related non-transitory computer readable storage medium |
JP7351638B2 | 2019-04-23 | 2023-09-27 | Sony Interactive Entertainment Inc. | Image generation device, image display system, and information presentation method |
WO2021044745A1 (fr) * | 2019-09-03 | 2021-03-11 | Sony Corporation | Display processing device, display processing method, and recording medium |
JP7561009B2 | 2020-11-16 | 2024-10-03 | Nintendo Co., Ltd. | Information processing system, information processing program, information processing device, and information processing method |
JP7050884B6 | 2020-12-01 | 2022-05-06 | GREE, Inc. | Information processing system, information processing method, and information processing program |
JP7467810B2 * | 2021-05-07 | 2024-04-16 | Kyoto's 3D Studio Inc. | Mixed reality providing system and mixed reality providing method |
JP7324469B2 | 2021-06-28 | 2023-08-10 | GREE, Inc. | Information processing system, information processing method, and information processing program |
JP6989199B1 | 2021-10-06 | 2022-01-05 | Cluster, Inc. | Information processing device |
JP2023070149A * | 2021-11-03 | 2023-05-18 | 狂點軟體開發股份有限公司 | Location-based "Metaverse" community system of real-virtual interaction combining the real world with multiple virtual worlds |
US20230419617A1 (en) | 2022-06-22 | 2023-12-28 | Meta Platforms Technologies, Llc | Virtual Personal Interface for Control and Travel Between Virtual Worlds |
JP7449508B2 (ja) | 2022-07-14 | 2024-03-14 | グリー株式会社 | 情報処理システム、情報処理方法、及びプログラム |
US12277301B2 (en) | 2022-08-18 | 2025-04-15 | Meta Platforms Technologies, Llc | URL access to assets within an artificial reality universe on both 2D and artificial reality interfaces |
US12175603B2 (en) | 2022-09-29 | 2024-12-24 | Meta Platforms Technologies, Llc | Doors for artificial reality universe traversal |
US12218944B1 (en) | 2022-10-10 | 2025-02-04 | Meta Platform Technologies, LLC | Group travel between artificial reality destinations |
WO2024101038A1 * | 2022-11-11 | 2024-05-16 | NTT DOCOMO, Inc. | Avatar movement device |
WO2024111123A1 * | 2022-11-25 | 2024-05-30 | Abal Inc. | Virtual space experience system and virtual space experience method |
KR20240084992A * | 2022-12-07 | 2024-06-14 | Samsung Electronics Co., Ltd. | Wearable device for displaying visual objects for entering virtual spaces, and method therefor |
WO2024161463A1 * | 2023-01-30 | 2024-08-08 | Sony Interactive Entertainment Inc. | Metaverse management device, metaverse management method, and computer program |
- 2016
  - 2016-12-28: JP application JP2016256772A, patent JP6761340B2 (ja), status: Active
- 2017
  - 2017-12-28: WO application PCT/JP2017/047250, publication WO2018124280A1 (fr), status: Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006301924A (ja) * | 2005-04-20 | 2006-11-02 | Canon Inc | Image processing method and image processing apparatus |
JP2016062486A (ja) * | 2014-09-19 | 2016-04-25 | Sony Computer Entertainment Inc. | Image generation device and image generation method |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110163977A (zh) * | 2018-07-13 | 2019-08-23 | Tencent Digital (Tianjin) Co., Ltd. | Virtual channel rendering method and device in multi-world virtual scene |
US11263814B2 (en) * | 2018-07-13 | 2022-03-01 | Tencent Technology (Shenzhen) Company Limited | Method, apparatus, and storage medium for rendering virtual channel in multi-world virtual scene |
CN110163977B (zh) * | 2018-07-13 | 2024-04-12 | Tencent Digital (Tianjin) Co., Ltd. | Virtual channel rendering method and device in multi-world virtual scene |
EP3968143A1 (fr) * | 2020-09-15 | 2022-03-16 | Nokia Technologies Oy | Audio processing |
US11647350B2 (en) | 2020-09-15 | 2023-05-09 | Nokia Technologies Oy | Audio processing |
CN118451473A (zh) * | 2021-12-22 | 2024-08-06 | Celsys, Inc. | Video generation method and image generation program |
CN118451473B (zh) * | 2021-12-22 | 2024-11-26 | Celsys, Inc. | Video generation method and image generation program |
Also Published As
Publication number | Publication date |
---|---|
JP6761340B2 (ja) | 2020-09-23 |
JP2018109835A (ja) | 2018-07-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018124280A1 (fr) | | Simulation system, image processing method, and information storage medium |
JP6754678B2 (ja) | | Simulation system and program |
US11094106B2 (en) | | Simulation system, processing method, and information storage medium for changing a display object in response to a movement of a field of view |
JP6306442B2 (ja) | | Program and game system |
US10183220B2 (en) | | Image generation device and image generation method |
WO2018012395A1 (fr) | | Simulation system, processing method, and information storage medium |
US11738270B2 (en) | | Simulation system, processing method, and information storage medium |
JP6298130B2 (ja) | | Simulation system and program |
JP2019175323A (ja) | | Simulation system and program |
JP7144796B2 (ja) | | Simulation system and program |
JP2019152899A (ja) | | Simulation system and program |
CN112104857A (zh) | | Image generation system, image generation method, and information storage medium |
JP7104539B2 (ja) | | Simulation system and program |
JP6774260B2 (ja) | | Simulation system |
JP6794390B2 (ja) | | Simulation system and program |
JP6622832B2 (ja) | | Program and game system |
JP2018171309A (ja) | | Simulation system and program |
JP6660321B2 (ja) | | Simulation system and program |
JP6918189B2 (ja) | | Simulation system and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17889153; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17889153; Country of ref document: EP; Kind code of ref document: A1 |