
WO2018113759A1 - Detection system and method based on a positioning system and AR/MR - Google Patents

Detection system and method based on a positioning system and AR/MR

Info

Publication number
WO2018113759A1
Authority
WO
WIPO (PCT)
Prior art keywords
display device
preset
coordinates
positioning
virtual image
Prior art date
Application number
PCT/CN2017/117880
Other languages
English (en)
Chinese (zh)
Inventor
李凯
潘杰
郑浩
Original Assignee
大辅科技(北京)有限公司
Priority date
Filing date
Publication date
Application filed by 大辅科技(北京)有限公司
Publication of WO2018113759A1

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V9/00Prospecting or detecting by methods not provided for in groups G01V1/00 - G01V8/00
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V8/00Prospecting or detecting by optical means
    • G01V8/10Detecting, e.g. by using light barriers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering

Definitions

  • the present invention relates to the field of detection, and more particularly to a detection system and a detection method based on a positioning system and an AR/MR technique.
  • Detection technology is a very traditional field: surveying, exploration, and pipeline inspection all rely on it. Depending on the specific detection purpose, conventional methods determine the position of an invisible object by receiving reflected waves, using techniques such as sonar, radar, and infrared. It takes an ordinary person a long time to master these conventional methods, and the associated equipment is relatively cumbersome and complex, so a more concise detection system that fits the habits of modern users is needed.
  • a detection system based on a positioning system and AR/MR, comprising:
  • AR/MR display device for displaying AR/MR images
  • a geolocation system for determining a geographic location of the AR/MR display device
  • An infrared laser positioning system for determining a 3D coordinate and a posture of the AR/MR display device in the determined area
  • the AR/MR detection system includes a database and a processing unit; the database stores data of the object to be tested in the area to be tested and a virtual image of the object to be tested, and the processing unit superimposes the virtual image of the object to be tested on the real image displayed by the AR/MR display device based on the geographic location, 3D coordinates, and posture of the AR/MR display device.
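As a rough illustration of the superimposition step above, the Python sketch below projects the preset world-space position of the object to be tested into the display, given the device's 3D coordinates and posture. The pinhole projection model, the yaw/pitch/roll convention, and all names are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def rotation_from_yaw_pitch_roll(yaw, pitch, roll):
    """Device-to-world rotation built from the device posture (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return rz @ ry @ rx

def project_virtual_object(obj_world, device_pos, device_rot, focal_px, center_px):
    """Return the pixel at which the virtual image should be drawn on the display,
    or None if the object lies behind the viewer."""
    p_device = device_rot.T @ (obj_world - device_pos)  # world frame -> device frame
    if p_device[2] <= 0:                                # behind the display plane
        return None
    u = focal_px * p_device[0] / p_device[2] + center_px[0]
    v = focal_px * p_device[1] / p_device[2] + center_px[1]
    return (u, v)

# Example with placeholder numbers: an object 5 m straight ahead of the device.
R = rotation_from_yaw_pitch_roll(0.0, 0.0, 0.0)
print(project_virtual_object(np.array([0.0, 0.0, 5.0]),
                             np.array([0.0, 0.0, 0.0]), R, 1000.0, (640.0, 360.0)))
# -> (640.0, 360.0): the virtual image is drawn at the center of the display.
```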
  • the geolocation system comprises one or more of a GPS, GSM, or LTE positioning system.
  • the geolocation system comprises a terrestrial base station and/or a terrestrial signal enhancement point.
  • a portion of the AR/MR detection system can be located in the cloud.
  • the AR/MR display device includes a head mounted display device, a smart phone, or a tablet computer.
  • the system further includes a detection device.
  • the detection device includes, but is not limited to, a sensor.
  • the detection device includes a capture device for collecting scene information.
  • the capture device includes, but is not limited to, a depth camera.
  • the AR/MR display device can receive interactive information of the user.
  • the present invention provides a detection method based on a positioning system and AR/MR, including:
  • the preset virtual image is displayed in the AR/MR display device based on the positioning of the geolocation system and the infrared laser positioning system.
  • the virtual image has preset positioning coordinates.
  • the preset positioning coordinates of the virtual image include geographic coordinates.
  • the preset positioning coordinates of the virtual image include relative position coordinates.
  • the relative position coordinates include relative geographic coordinates and/or relative 3D coordinates.
  • the infrared laser positioning system is used to determine 3D coordinates and attitude of the AR/MR display device within the field region.
  • the AR/MR display device can be positioned using a relative position within a certain area, and the display coordinates of the virtual image can also be preset with the relative position.
  • the preset virtual image is displayed at the relative position.
  • the feature point in the real image is confirmed by image recognition, and the virtual image is displayed at a preset relative position with the feature point.
  • the geographic location of the AR/MR display device is determined according to the geolocation system, and when the AR/MR display device is in the preset geographic coordinate range, the virtual image is displayed at the preset relative position.
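A minimal sketch of the display rule in the preceding bullets: render the preset virtual image only when the device's geographic position falls inside the preset coordinate range, then place it at a preset offset from a recognized feature point. The equirectangular distance approximation, the 50 m radius, and all names are assumptions for illustration.

```python
import math

EARTH_RADIUS_M = 6371000.0

def within_geofence(device_lat, device_lon, fence_lat, fence_lon, radius_m):
    """Approximate check that the AR/MR display device is inside the preset
    geographic coordinate range (equirectangular approximation, fine at short range)."""
    dlat = math.radians(device_lat - fence_lat)
    dlon = math.radians(device_lon - fence_lon) * math.cos(math.radians(fence_lat))
    return EARTH_RADIUS_M * math.hypot(dlat, dlon) <= radius_m

def place_virtual_image(feature_point_xyz, preset_offset_xyz):
    """Place the virtual image at a preset relative position to a recognized feature point."""
    return tuple(f + o for f, o in zip(feature_point_xyz, preset_offset_xyz))

# Render only when the device is inside the preset range around the area to be tested.
if within_geofence(39.9042, 116.4074, 39.9040, 116.4072, radius_m=50.0):
    anchor = (1.2, 0.0, 3.5)                 # feature point found by image recognition
    target = place_virtual_image(anchor, (2.0, -1.0, 0.0))  # e.g. 2 m right, 1 m down
    print(target)
```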
  • a non-transitory computer readable medium storing program instructions, which when executed by a processing device causes the apparatus to:
  • the preset virtual image is displayed in the AR/MR display device based on the positioning of the geolocation system and the infrared laser positioning system.
  • the virtual image has preset positioning coordinates.
  • the preset positioning coordinates of the virtual image include geographic coordinates.
  • the preset positioning coordinates of the virtual image include relative position coordinates.
  • the relative position coordinates include relative geographic coordinates and/or relative 3D coordinates.
  • the infrared laser positioning system is used to determine 3D coordinates and attitude of the AR/MR display device within the field region.
  • the AR/MR display device can be positioned using a relative position within a certain area, and the display coordinates of the virtual image can also be preset with the relative position.
  • the preset virtual image is displayed at the relative position.
  • the feature point in the real image is confirmed by image recognition, and the virtual image is displayed at a preset relative position with the feature point.
  • the geographic location of the AR/MR display device is determined according to the geolocation system, and when the AR/MR display device is in the preset geographic coordinate range, the virtual image is displayed at the preset relative position.
  • Figure 1 shows an example of a detection system based on a positioning system and an AR/MR
  • Figure 2 shows an example of a global positioning system used in the present invention
  • Figure 3 illustrates an embodiment of an AR/MR display device
  • Figure 4 illustrates an embodiment of a processing unit associated with an AR/MR display device
  • Figure 5 illustrates an embodiment of a computer system implementing a detection system of the present invention
  • Figure 6 is a flow chart showing the cooperation of the AR/MR detection system, the infrared laser positioning/scanning system, and the display device.
  • GPS or mobile communication signals are used as a global positioning system.
  • the signal can be enhanced by setting a plurality of locators near the position to be detected, each periodically transmitting a positioning signal to its surroundings; the coverage of a locator's positioning signal serves as that locator's positioning region.
  • the locator periodically transmits a spherical low frequency electromagnetic field to the surroundings (the coverage radius is determined by the corresponding environment and the transmission power).
  • the positioning tag is mainly used for positioning the positioned object; its function is to receive the low-frequency magnetic field signal emitted by a locator and to resolve from it the ID number of that locator.
  • one or more positioning communication base stations provide wireless signal coverage for the positioning regions of all the locators and transmit the ID number of the base station, the received ID number of the locator, the ID number of the positioning tag, and the positioning time (the time at which the base station received the signal transmitted by the positioning tag) to the positioning engine server.
  • the positioning engine server is connected to the positioning communication base stations via Ethernet, receives the ID number of the positioning communication base station, the ID number of the locator, the ID number of the positioning tag, and the positioning time, and after processing obtains the movement trajectory of the positioning tag (that is, the movement track of the tag carrier).
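The locator / positioning tag / base station / positioning engine chain described in the bullets above can be modeled roughly as follows; the record fields and class names are assumptions, and the trajectory is reduced to the time-ordered sequence of locator positions the tag passed through.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class PositioningReport:
    base_station_id: str   # ID of the positioning communication base station
    locator_id: str        # ID of the locator whose low-frequency field the tag received
    tag_id: str            # ID of the positioning tag carried by the positioned object
    timestamp: float       # positioning time recorded by the base station

class PositioningEngineServer:
    """Accumulates reports into each tag's movement trajectory, i.e. the time-ordered
    sequence of locator positions the tag carrier passed through."""

    def __init__(self, locator_positions):
        self.locator_positions = locator_positions      # locator_id -> (x, y)
        self.tracks = defaultdict(list)                 # tag_id -> [(t, (x, y)), ...]

    def ingest(self, report: PositioningReport):
        position = self.locator_positions[report.locator_id]
        self.tracks[report.tag_id].append((report.timestamp, position))
        self.tracks[report.tag_id].sort(key=lambda entry: entry[0])

    def trajectory(self, tag_id):
        return [position for _, position in self.tracks[tag_id]]
```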
  • a system for implementing a mixed reality environment in the present invention can include a mobile display device in communication with a hub computing system.
  • the mobile display device can include a mobile processing unit coupled to a head mounted display device (or other suitable device).
  • the head mounted display device can include a display element.
  • the display element is transparent to a degree such that a user can see a real world object within the user's field of view (FOV) through the display element.
  • the display element also provides the ability to project a virtual image into the user's FOV such that the virtual image can also appear next to a real world object.
  • the system automatically tracks where the user is looking so that the system can determine where to insert the virtual image into the user's FOV. Once the system knows where to project the virtual image, the display element is used to project the image.
  • the hub computing system and one or more processing units may cooperate to construct a model of an environment including x, y, z Cartesian locations for all users in a room or other environment, real world objects, and virtual three dimensional objects.
  • the location of each head mounted display device worn by a user in the environment can be calibrated to the model of the environment and calibrated to each other. This allows the system to determine the line of sight of each user and the FOV of the environment.
  • a virtual image can be displayed to each user, but the system determines the display of the virtual image from the perspective of each user, thereby adjusting the virtual image for any parallax and occlusion from or due to other objects in the environment.
  • the model of the environment (referred to herein as a scene graph) and the tracking of the user's FOV and objects in the environment may be generated by a hub or mobile processing unit that works in concert or independently.
  • interaction encompasses both physical and linguistic interactions of a user with a virtual object.
  • a user simply looking at a virtual object is another example of a user's physical interaction with a virtual object.
  • the head mounted display device 2 can include an integrated processing unit 4.
  • the processing unit 4 can be separate from the head mounted display device 2 and can communicate with the head mounted display device 2 via wired or wireless communication.
  • the eyeglass-shaped head mounted display device 2 is worn on the user's head so that the user can view through the display and thus have an actual direct view of the space in front of the user.
  • actual direct view is used to refer to the ability to see a real world object directly with the human eye, rather than seeing the created image representation of the object. For example, viewing a room through glasses allows the user to get an actual direct view of the room, while watching a video on a television is not an actual direct view of the room. More details of the head mounted display device 2 are provided below.
  • the processing unit 4 may include many of the computing powers for operating the head mounted display device 2.
  • processing unit 4 communicates wirelessly (eg, WiFi, Bluetooth, infrared, or other wireless communication means) with one or more hub computing systems 12.
  • the hub computing system 12 can be provided remotely from the processing unit 4 such that the hub computing system 12 and the processing unit 4 communicate via a wireless network, such as a LAN or WAN.
  • hub computing system 12 may be omitted to provide a mobile mixed reality experience using head mounted display device 2 and processing unit 4.
  • the hub computing system 12 can be a computer, gaming system or console, and the like.
  • hub computing system 12 may include hardware components and/or software components such that hub computing system 12 may be used to execute applications such as gaming applications, non-gaming applications, and the like.
  • the hub computing system can include processors such as standardized processors, special purpose processors, microprocessors, and the like, which can execute instructions stored on a processor-readable storage device to perform the processes described herein.
  • the hub computing system 12 further includes a capture device for capturing image data from portions of the scene within its FOV.
  • a scene is an environment in which a user moves around, this environment being captured within the FOV of the capture device and/or within the FOV of each head mounted display device 2.
  • the capture device 20 can include one or more cameras that visually monitor the user 18 and surrounding space such that the gestures and/or movements performed by the user and the structure of the surrounding space can be captured, analyzed, and executed within the application.
  • the hub computing system 12 can be connected to an audiovisual device 16 such as a television, monitor, high definition television (HDTV), etc. that can provide gaming or application vision.
  • the audiovisual device 16 includes a built-in speaker.
  • the audiovisual device 16 and the hub computing system 12 can be connected to the external speaker 22.
  • FIG. 1 illustrates an example of a plant 23 or a user's hand 23 as a real world object appearing within a user's FOV.
  • Control circuitry 136 provides various electronic devices that support other components of head mounted display device 2. More details of control circuit 136 are provided below with reference to FIG. Inside the temple 102 or mounted to the temple 102 are an earpiece 130, an inertial measurement unit 132, and a temperature sensor 138.
  • inertial measurement unit 132 (or IMU 132) includes inertial sensors, such as a three-axis magnetometer 132A, a three-axis gyroscope 132B, and a three-axis accelerometer 132C.
  • the inertial measurement unit 132 senses the position, orientation, and sudden acceleration (pitch, roll, and yaw) of the head mounted display device 2.
  • IMU 132 may also include other inertial sensors.
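The IMU bullets above list a three-axis magnetometer, gyroscope, and accelerometer. One common way to turn such readings into the pitch/roll part of the device posture is a complementary filter, sketched below; this particular fusion scheme is an assumption for illustration and is not prescribed by the disclosure (yaw would additionally use the magnetometer).

```python
import math

def complementary_filter(prev_pitch, prev_roll, gyro, accel, dt, alpha=0.98):
    """Fuse gyroscope rates (rad/s) with the gravity direction measured by the
    accelerometer (m/s^2) to estimate the pitch and roll of the display device."""
    # Short-term estimate: integrate the gyroscope rates over the time step.
    pitch_gyro = prev_pitch + gyro[1] * dt
    roll_gyro = prev_roll + gyro[0] * dt
    # Long-term correction: tilt angles implied by the measured gravity vector.
    pitch_acc = math.atan2(-accel[0], math.hypot(accel[1], accel[2]))
    roll_acc = math.atan2(accel[1], accel[2])
    # Blend: trust the gyro over short intervals, the accelerometer over long ones.
    pitch = alpha * pitch_gyro + (1.0 - alpha) * pitch_acc
    roll = alpha * roll_gyro + (1.0 - alpha) * roll_acc
    return pitch, roll

# Example with placeholder readings: device held nearly level, slowly pitching up.
print(complementary_filter(0.0, 0.0, gyro=(0.0, 0.05, 0.0),
                           accel=(0.0, 0.0, 9.81), dt=0.01))
```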
  • Microdisplay 120 projects an image through lens 122.
  • image generation techniques that can be used to implement the microdisplay 120.
  • the microdisplay 120 can be implemented using a transmissive projection technique in which the light source is modulated by an optically active material and illuminated from behind with white light. These techniques are typically implemented using LCD type displays with powerful backlighting and high optical energy density.
  • Microdisplay 120 can also be implemented using a reflective technique in which external light is reflected and modulated by an optically active material. Depending on the technology, the illumination is illuminated forward by a white light source or RGB source.
  • microdisplay 120 can also be implemented using an emissive technology in which light is generated by the display.
  • the PicoP (TM) display engine from Microvision, Inc. emits a laser signal, steered by a micro mirror, onto a small screen that acts as a transmissive element, or beams the light (e.g., a laser) directly into the eye.
  • FIG. 3 is a block diagram depicting various components of the head mounted display device 2.
  • FIG. 4 is a block diagram depicting various components of processing unit 4.
  • head mounted display device 2, the components of which are depicted in Figure 3, is used to provide a mixed reality experience to a user by seamlessly blending one or more virtual images with the user's view of the real world. Additionally, the head mounted display device assembly of Figure 3 includes a number of sensors that track various conditions.
  • the head mounted display device 2 will receive an instruction for the virtual image from the processing unit 4 and will provide the sensor information back to the processing unit 4.
  • Processing unit 4, the components of which are depicted in FIG. 4, receives the sensor information from head mounted display device 2 and exchanges information and data with hub computing device 12. Based on this exchange of information and data, processing unit 4 determines where and when to provide a virtual image to the user and sends instructions to head mounted display device 2 accordingly.
  • All components of control circuit 200 are in communication with one another via dedicated lines or one or more buses. In another embodiment, each component of control circuit 200 is in communication with processor 210.
  • Camera interface 216 provides an interface to the two room-facing cameras 112 and stores the images received from those cameras in camera buffer 218.
  • Display driver 220 will drive microdisplay 120.
  • the display formatter 222 provides information about the virtual image being displayed on the microdisplay 120 to the opacity control circuit 224 that controls the opacity filter 114.
  • Timing generator 226 is used to provide timing data to the system.
  • Display output interface 228 is a buffer for providing images from the room-facing cameras 112 to processing unit 4.
  • the display input interface 230 is a buffer for receiving an image such as a virtual image to be displayed on the microdisplay 120.
  • Display output interface 228 and display input interface 230 are in communication with band interface 232, which is an interface to processing unit 4.
  • the power management circuit 202 includes a voltage regulator 234, an eye tracking illumination driver 236, an audio DAC and amplifier 238, a microphone preamplifier and audio ADC 240, a temperature sensor interface 242, and a clock generator 244.
  • the voltage regulator 234 receives power from the processing unit 4 via band interface 232 and provides this power to the other components of the head mounted display device 2.
  • Each eye tracking illumination driver 236 provides an IR source for the eye tracking illumination 134A as described above.
  • the audio DAC and amplifier 238 output audio information to the headphones 130.
  • the mic preamplifier and audio ADC 240 provide an interface for the microphone 110.
  • Temperature sensor interface 242 is an interface for temperature sensor 138.
  • the power management circuit 202 also provides power to and receives data from the three-axis magnetometer 132A, the three-axis gyroscope 132B, and the three-axis accelerometer 132C.
  • FIG. 4 is a block diagram depicting various components of processing unit 4.
  • FIG. 4 shows control circuit 304 in communication with power management circuit 306.
  • the control circuit 304 includes a central processing unit (CPU) 320, a graphics processing unit (GPU) 322, a cache 324, a RAM 326, a memory controller 328 in communication with memory 330 (e.g., D-RAM), a flash memory controller 332 in communication with flash memory 334 (or other type of non-volatile storage), a display output buffer 336 in communication with head mounted display device 2 via band interface 302 and band interface 232, a display input buffer 338 in communication with head mounted display device 2 via band interface 302 and band interface 232, a microphone interface 340 in communication with an external microphone connector 342 for connecting to a microphone, a PCI Express interface for connecting to wireless communication device 346, and one or more USB ports 348.
  • wireless communication device 346 can include a Wi-Fi enabled communication device, a Bluetooth communication device, an infrared communication device, and the like.
  • a USB port can be used to interface processing unit 4 to hub computing system 12 to load data or software onto processing unit 4 and to charge processing unit 4.
  • CPU 320 and GPU 322 are the primary forces used to determine where, when, and how to insert a virtual three-dimensional object into the user's field of view. More details are provided below.
  • the power management circuit 306 includes a clock generator 360, an analog-to-digital converter 362, a battery charger 364, a voltage regulator 366, a head mounted display power supply 376, and a temperature sensor interface 372 in communication with temperature sensor 374 (which may be located on the wristband of processing unit 4).
  • Analog to digital converter 362 is used to monitor battery voltage, temperature sensors, and control battery charging functions.
  • Voltage regulator 366 is in communication with battery 368 for providing electrical energy to the system.
  • Battery charger 364 is used to charge battery 368 upon receipt of electrical energy from charging jack 370 (via voltage regulator 366).
  • the HMD power supply 376 provides power to the head mounted display device 2.
  • Camera component 423 can include an infrared (IR) light component 425, a three-dimensional (3D) camera 426, and an RGB (visual image) camera 428 that can be used to capture depth images of a scene.
  • the IR light component 425 of the capture device 20 can emit infrared light onto the scene, and sensors (in some embodiments including sensors not shown), for example the 3-D camera 426 and/or RGB camera 428, can then be used to detect the backscattered light from the surface of one or more targets and objects in the scene.
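The bullet above describes emitting infrared light and detecting the backscattered return. One way such a depth camera can work, assumed here purely for illustration since the disclosure does not fix the method, is time-of-flight: depth is half the round-trip path of the light pulse.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def depth_from_round_trip(round_trip_seconds: float) -> float:
    """Time-of-flight depth estimate: the IR pulse travels to the surface and back,
    so the distance to the surface is half the total path length."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2.0

# Example: a round trip of about 23.3 ns corresponds to roughly 3.5 m of depth.
print(depth_from_round_trip(23.3e-9))
```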
  • capture device 20 may further include a processor 432 that is communicable with image camera component 423.
  • processor 432 can include a standardized processor, a special purpose processor, a microprocessor, or the like that can execute instructions including, for example, instructions for receiving a depth image, generating a suitable data format (e.g., a frame), and transmitting the data to hub computing system 12.
  • Capture device 20 may further include a memory 434 that may store instructions executed by processor 432, images or image frames captured by a 3-D camera and/or RGB camera, or any other suitable information, images, and the like.
  • memory 434 may include random access memory (RAM), read only memory (ROM), cache, flash memory, a hard disk, or any other suitable storage component.
  • memory 434 can be a separate component in communication with image camera component 423 and processor 432.
  • memory 434 can be integrated into processor 432 and/or image capture component 423.
  • Capture device 20 is in communication with hub computing system 12 via communication link 436.
  • Communication link 436 may be a wired connection including, for example, a USB connection, a FireWire connection, an Ethernet cable connection, and/or a wireless connection such as a wireless 802.11b, 802.11g, 802.11a, or 802.11n connection.
  • hub computing system 12 may provide capture device 20 via communication link 436 with a clock that may be used to determine when to capture, for example, a scene.
  • capture device 20 provides depth information and visual (eg, RGB) images captured by, for example, 3-D camera 426 and/or RGB camera 428 to hub computing system 12 via communication link 436.
  • the depth image and the visual image are transmitted at a rate of 30 frames per second; however, other frame rates may be used.
  • the hub computing system 12 can then create models and use the models, depth information, and captured images to, for example, control applications such as games or word processing programs and/or animate avatars or on-screen characters.
  • the hub computing system 12 described above, together with the head mounted display device 2 and the processing unit 4, is capable of inserting a virtual three-dimensional object into the FOV of one or more users such that the virtual three-dimensional object expands and/or replaces the view of the real world.
  • the head mounted display device 2, the processing unit 4, and the hub computing system 12 work together because each of these devices includes a subset of the sensors used to obtain the data for determining where, when, and how to insert the virtual three-dimensional object.
  • the calculation of where, when, and how to insert the virtual three-dimensional object is performed by the hub computing system 12 and processing unit 4 that work in cooperation with each other. However, in still other embodiments, all calculations may be performed by the separately functioning hub computing system 12 or the processing unit(s) operating separately. In other embodiments, at least some of the calculations may be performed by the head mounted display device 2.
  • the hub 12 may further include a skeletal tracking module 450 for identifying and tracking users within another user's FOV.
  • the hub 12 can further include a gesture recognition engine 454 for identifying gestures performed by the user.
  • hub computing device 12 and processing unit 4 work together to create a scene graph or model of the environment in which the one or more users are located, as well as to track various moving objects in the environment.
  • hub computing system 12 and/or processing unit 4 tracks the FOV of head mounted display device 2 by tracking the position and orientation of head mounted display device 2 worn by user 18.
  • the sensor information obtained by the head mounted display device 2 is transmitted to the processing unit 4.
  • this information is communicated to the hub computing system 12, which updates the scene model and transmits it back to the processing unit.
  • Processing unit 4 uses the additional sensor information it receives from head mounted display device 2 to refine the user's FOV and provide instructions to head mounted display device 2 as to where, when, and how to insert the virtual object.
  • the scene model and the tracking information can be updated periodically between the hub computing system 12 and the processing unit 4 in a closed-loop feedback system, as explained below.
  • FIG. 5 illustrates an example embodiment of a computing system that can be used to implement hub computing system 12.
  • the multimedia console 500 has a central processing unit (CPU) 501 having a level one cache 502, a level two cache 504, and a flash ROM (read only memory) 506.
  • the level one cache 502 and the level two cache 504 temporarily store data, and thus reduce the number of memory access cycles, thereby improving processing speed and throughput.
  • the CPU 501 can be equipped with more than one core and thus with additional primary and secondary caches 502 and 504.
  • the flash ROM 506 can store executable code that is loaded during the initialization phase of the boot process when the multimedia console 500 is powered on.
  • a graphics processing unit (GPU) 508 and a video encoder/video codec (encoder/decoder) 514 form a video processing pipeline for high speed and high resolution graphics processing.
  • Data is transferred from graphics processing unit 508 to video encoder/video codec 514 via a bus.
  • the video processing pipeline outputs data to an A/V (audio/video) port 540 for transmission to a television or other display.
  • Memory controller 510 is coupled to GPU 508 to facilitate processor access to various types of memory 512 such as, but not limited to, RAM (Random Access Memory).
  • the multimedia console 500 includes an I/O controller 520, a system management controller 522, an audio processing unit 523, a network interface 524, a first USB host controller 526, a second USB controller 528, and a front panel I/O sub-assembly 530, preferably implemented on the module 518.
  • USB controllers 526 and 528 serve as hosts for peripheral controllers 542(1)-542(2), wireless adapter 548, and external memory device 546 (e.g., flash memory, external CD/DVD ROM drive, removable media, etc.).
  • Network interface 524 and/or wireless adapter 548 provide access to a network (e.g., the Internet, a home network, etc.) and may be any of a wide variety of wired or wireless adapter components, including an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like.
  • System memory 543 is provided to store application data that is loaded during the boot process.
  • a media drive 544 is provided and may include a DVD/CD drive, a Blu-ray drive, a hard drive, or other removable media drive or the like.
  • Media drive 544 can be located internal or external to multimedia console 500.
  • Application data may be accessed via media drive 544 for execution, playback, etc. by multimedia console 500.
  • the media drive 544 is connected to the I/O controller 520 via a bus such as a Serial ATA bus or other high speed connection (eg, IEEE 1394).
  • the system management controller 522 provides various service functions related to ensuring the availability of the multimedia console 500.
  • Audio processing unit 523 and audio codec 532 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is transmitted between the audio processing unit 523 and the audio codec 532 via a communication link.
  • the audio processing pipeline outputs the data to the A/V port 540 for reproduction by an external audio player or an audio-capable device.
  • the front panel I/O sub-assembly 530 supports the functions of the power button 550 and the eject button 552 exposed on the outer surface of the multimedia console 500, as well as any LEDs (light emitting diodes) or other indicators.
  • System power supply module 536 provides power to the components of multimedia console 500.
  • Fan 538 cools the circuitry within multimedia console 500.
  • CPU 501, GPU 508, memory controller 510, and various other components within multimedia console 500 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include a Peripheral Component Interconnect (PCI) bus, a PCI-Express bus, and the like.
  • application data can be loaded from the system memory 543 into the memory 512 and/or the caches 502, 504 and executed on the CPU 501.
  • the application can present a graphical user interface that provides a consistent user experience when navigating to different media types available on the multimedia console 500.
  • applications and/or other media contained in media drive 544 can be launched or played from media drive 544 to provide additional functionality to multimedia console 500.
  • the multimedia console 500 can operate as a stand-alone system by simply connecting the system to a television or other display.
  • Capture device 20 may define additional input devices for console 500 via USB controller 526 or other interface.
  • hub computing system 12 can be implemented using other hardware architectures. No single hardware architecture is required.
  • Head mounted display device 2 and processing unit 4 are in communication with a hub computing system 12 (also referred to as hub 12).
  • Each of the mobile display devices can communicate with the hub using wireless communication as described above. In such an embodiment, it is contemplated that much of the information used by the mobile display devices will be computed and stored at the hub and transmitted to each mobile display device.
  • the hub will generate a model of the environment and provide that model to all mobile display devices in communication with the hub. Additionally, the hub can track the position and orientation of the mobile display device as well as the moving objects in the room, and then transmit this information to each mobile display device.
  • the system can include a plurality of hubs 12, each of which includes one or more mobile display devices.
  • the hubs can communicate directly with each other or via the Internet (or other network).
  • the hub 12 can be omitted altogether.
  • All of the functions performed by the hub 12 in the following description may alternatively be performed by one of the processing units 4, by some of the processing units 4 working cooperatively, or by all of the processing units 4 working cooperatively.
  • the respective mobile display device 2 performs all functions of the system 10, including generating and updating state data, the scene graph, each user's view of the scene graph, all texture and rendering information, video and audio data, and other information needed to perform the operations described herein.
  • the hub 12 and processing unit 4 collect data from the scene.
  • this may be image and audio data sensed by depth camera 426 and RGB camera 428 of capture device 20.
  • this may be image data sensed by head mounted display device 2 at step 656, and in particular, image data sensed by camera 112, eye tracking component 134, and IMU 132.
  • the data collected by the head mounted display device 2 is sent to the processing unit 4.
  • Processing unit 4 processes this data in step 630 and sends it to hub 12.
  • the hub 12 performs various setup steps that allow the hub 12 to coordinate image data of its capture device 20 and one or more processing units 4.
  • the camera on the head mounted display device 2 is also moved around in the scene.
  • the position and time capture of each of the imaging cameras needs to be calibrated to the scene, calibrated to each other, and calibrated to the hub 12.
  • the clock offsets of the various imaging devices in system 10 are first determined. In particular, to coordinate image data from each of the cameras in the system, it can be confirmed that the coordinated image data is from the same time.
  • image data from capture device 20 and image data incoming from one or more processing units 4 are time-stamped with a single master clock in hub 12. Using the time stamps for all such data for a given frame, and using the known resolution of each of the cameras, the hub 12 determines the time offset of each of the imaging cameras in the system. Accordingly, the hub 12 can determine the differences between the images received from each camera and the adjustments needed to those images.
  • the hub 12 can select a reference timestamp from the frames received by one of the cameras. The hub 12 can then add time to or subtract time from the image data received from all other cameras to synchronize with the reference timestamp. It is understood that for the calibration process, various other operations can be used to determine the time offset and/or to synchronize different cameras together. The determination of the time offset can be performed once when image data from all cameras is initially received. Alternatively, it may be performed periodically, such as for example every frame or a certain number of frames.
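The calibration bullets above stamp all incoming frames with a single master clock, pick one camera's timestamp as the reference, and shift the other streams to match. A simplified sketch of that offset computation (camera names and timestamps are made up for the example):

```python
def compute_time_offsets(frame_timestamps, reference_camera):
    """frame_timestamps maps camera_id -> master-clock timestamp of the same frame.
    Returns, per camera, the time to add to its data so it lines up with the
    reference camera chosen for the calibration."""
    reference = frame_timestamps[reference_camera]
    return {camera: reference - t for camera, t in frame_timestamps.items()}

# Example: the hub's depth camera is the reference; the HMD camera lags by 12 ms.
offsets = compute_time_offsets(
    {"hub_depth": 10.000, "hub_rgb": 10.004, "hmd_front": 9.988},
    reference_camera="hub_depth",
)
print(offsets)   # {'hub_depth': 0.0, 'hub_rgb': -0.004..., 'hmd_front': 0.012...}
```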
  • the hub 12 and/or one or more processing units 4 can form a scene map or model that identifies the geometry of the scene and the geometry and location of objects (including users) within the scene. Depth and/or RGB data can be used when calibrating image data of all cameras to each other.
  • the hub 12 can then convert the distortion corrected image data points captured by each camera from a camera view to an orthogonal 3D world view.
  • This orthogonal 3D world view is a point cloud map of all image data captured by the capture device 20 and the head mounted display device camera in an orthogonal x, y, z Cartesian coordinate system. Matrix transformation formulas for converting camera views into orthogonal 3D world views are known.
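A minimal sketch of the conversion described in the last two bullets, transforming distortion-corrected camera-view points into the shared orthogonal x, y, z world coordinate system using each camera's calibrated extrinsics (the rotation/translation values below are placeholders):

```python
import numpy as np

def camera_to_world(points_camera, rotation, translation):
    """Transform an (N, 3) array of points from one camera's view into the shared
    orthogonal 3D world view using that camera's calibrated extrinsics."""
    return points_camera @ np.asarray(rotation).T + np.asarray(translation)

# Example: merge point clouds from two cameras into one world-space point cloud map.
clouds = [
    (np.random.rand(100, 3), np.eye(3), np.zeros(3)),                 # capture device 20
    (np.random.rand(100, 3), np.eye(3), np.array([0.0, 1.6, 0.0])),   # HMD front camera
]
world_points = np.vstack([camera_to_world(pts, R, t) for pts, R, t in clouds])
print(world_points.shape)   # (200, 3)
```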
  • a preset virtual image is displayed in the AR/MR display device based on the positioning of the geolocation system and the infrared laser positioning system.
  • the virtual image has preset positioning coordinates, including but not limited to geographic coordinates, 3D coordinates, or relative coordinates.
  • the relative coordinates include relative geographic coordinates and/or relative 3D coordinates.
  • the virtual image allows the user to see, in the AR/MR display device, the object to be detected or to be built, for example a deeply buried underground pipeline or a location to be drilled. Therefore, the setting of the virtual image coordinates is closely related to the positioning system.
  • the geographic coordinates of the virtual image can be set.
  • when the AR/MR display device is within the preset geographic coordinate range, the virtual image appears at the corresponding position in the field of view according to the 3D coordinates and posture of the AR/MR display device.
  • the relative coordinates can also be set.
  • a certain marker can be used as a feature point, and the relative coordinates of the object to be detected or constructed can be preset with respect to that feature point: for example, 2 m from the right side of a manhole cover and 1 m deep; or in the middle of a wooden sign, 20 cm from its upper and lower edges, and so on.
  • Multiple markers can be used as feature points for positioning, which is more accurate.
  • the feature point is confirmed by image recognition, and the preset virtual image is superimposed on the real image.
  • the position of the virtual image is adjusted in real time with the position and posture of the AR/MR display device, and the user can see the virtual image corresponding to the real image.
  • using geographic coordinates to preset virtual images is relatively straightforward, but its accuracy depends on the accuracy of the geolocation system used and the surrounding environmental conditions.
  • alternatively, the position of the virtual image can be preset by combining geographic coordinates with a relative position. Those skilled in the art can choose freely according to the needs and budget of the actual application.
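The bullets above note that multiple markers can be used as feature points for more accurate positioning. One simple (assumed, not prescribed) way to combine them: each recognized feature point predicts a position for the target from its preset offset, and the predictions are averaged.

```python
import numpy as np

def locate_from_markers(markers):
    """markers: list of (detected_marker_position, preset_offset_from_marker) pairs,
    both 3-vectors in the device's 3D coordinate frame. Each marker predicts one
    position for the object to be detected; averaging damps single-detection errors."""
    predictions = [np.asarray(pos) + np.asarray(offset) for pos, offset in markers]
    return np.mean(predictions, axis=0)

# Example: a manhole cover and a wooden sign both anchor the same buried pipe joint.
print(locate_from_markers([
    ((0.0, 0.0, 4.0), (2.0, -1.0, 0.0)),    # 2 m right of the manhole cover, 1 m down
    ((3.1, 0.2, 4.1), (-1.1, -1.2, -0.1)),  # preset offset from the wooden sign
]))
```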
  • virtual images may be added remotely through the cloud.
  • the capture device (for example, the infrared laser scanning system) obtains the scene information of the environment in which the user is located (assumed by default to coincide with the position of the AR/MR display device), and the coordinates of the preset virtual image in the scene are then added remotely according to the newly acquired information.
  • An infrared positioner is provided at several fixed locations near the detector to receive infrared laser signals.
  • the database of the AR/MR detection system is formed by combining the GPS positioning technology provided by Trimble, the 3D modeling technology of Google Project Tango, and the GIS data of the pipeline.
  • a virtual pipeline image is displayed on the display screen.
  • the position of the virtual pipeline image in the display screen needs to match the GIS data corresponding to the FOV.
  • the explorer can easily identify the location of hidden pipelines and construct or detect them at the appropriate locations.
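In the pipeline-detection scenario above, the virtual pipeline drawn on the display must match the GIS records that fall inside the current FOV. A toy sketch of that lookup; the GIS record layout and coordinate bounds are assumptions for illustration.

```python
def pipelines_in_fov(gis_records, fov_bounds):
    """gis_records: iterable of dicts with 'id', 'lat', 'lon', 'depth_m' keys.
    fov_bounds: (min_lat, max_lat, min_lon, max_lon) covered by the current FOV.
    Returns the pipeline records whose virtual images should be superimposed."""
    min_lat, max_lat, min_lon, max_lon = fov_bounds
    return [
        record for record in gis_records
        if min_lat <= record["lat"] <= max_lat and min_lon <= record["lon"] <= max_lon
    ]

# Example: only the segment inside the surveyed block is drawn on the display.
segments = pipelines_in_fov(
    [{"id": "P-01", "lat": 39.9041, "lon": 116.4073, "depth_m": 2.5},
     {"id": "P-02", "lat": 39.9100, "lon": 116.4200, "depth_m": 1.8}],
    fov_bounds=(39.9035, 39.9050, 116.4065, 116.4080),
)
print([segment["id"] for segment in segments])   # ['P-01']
```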
  • a geolocation device is disposed near the area to be tested, and receives a mobile communication signal for positioning.
  • the database of the AR/MR detection system is formed by combining the 3D modeling technology of Google Project Tango, the data of the mobile communication base stations, and the GIS data of the pipeline.
  • a virtual pipeline image is displayed on the display screen based on the GPS data of the AR/MR display device and the position sensor data of the smartphone.
  • the position of the virtual pipeline image in the display screen must match the GIS data corresponding to the location of the smartphone.
  • An infrared positioner is provided at several fixed locations near the constructor to receive infrared laser signals.
  • the GPS site positioning technology provided by Trimble and the 3D modeling technology of Google Project Tango are used to model the construction site, the virtual images of the targets to be constructed are set at the corresponding positions, and the results are superimposed to form the database of the AR/MR construction system.
  • a virtual image of the target to be constructed is displayed on the display screen.
  • the position of the image in the FOV needs to match the position where the actual construction is required.
  • the explorer can see the location to be constructed through Microsoft HoloLens, such as the location of the hole, the location of the item, and so on. This eliminates the need for measurement and is suitable for construction sites where the work cannot be completely replaced by machines.
  • embodiments of the present disclosure may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the present disclosure may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors.
  • a machine-readable medium can include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device).
  • a machine-readable medium can include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); and others.
  • firmware, software, routines, instructions may be described herein as performing certain actions. However, it should be appreciated that such descriptions are for convenience only, and that such acts actually result from a computing device, processor, controller, or other device executing firmware, software, routines, instructions, and the like.
  • references to "one embodiment", "an embodiment", "an example embodiment", or similar phrases mean that the described embodiment may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include that particular feature, structure, or characteristic. In addition, these phrases are not necessarily referring to the same embodiment. Furthermore, it is within the knowledge of a person skilled in the relevant art to incorporate such features, structures, or characteristics into other embodiments, whether or not they are explicitly described herein.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geophysics (AREA)
  • Computer Graphics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Disclosed are a detection system and method based on a positioning system and AR (augmented reality)/MR (mixed reality). The detection system comprises: an AR/MR display device for displaying an AR/MR image; a geographic positioning system for determining a geographic position of the AR/MR display device; an infrared laser positioning system for determining 3D coordinates and a posture of the AR/MR display device within a determined area; and an AR/MR detection system comprising a database and a processing unit, the database storing data of an object to be detected in an area to be detected and a virtual image of the object to be detected, and the processing unit superimposing the virtual image of the object to be detected on a real image displayed by the AR/MR display device according to the geographic position, the 3D coordinates, and the posture of the AR/MR display device.
PCT/CN2017/117880 2016-12-22 2017-12-22 Système et procédé de détection basés sur un système de positionnement et l'ar/mr WO2018113759A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611200201 2016-12-22
CN201611200201.1 2016-12-22

Publications (1)

Publication Number Publication Date
WO2018113759A1 true WO2018113759A1 (fr) 2018-06-28

Family

ID=62392147

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/117880 WO2018113759A1 (fr) 2016-12-22 2017-12-22 Système et procédé de détection basés sur un système de positionnement et l'ar/mr

Country Status (2)

Country Link
CN (1) CN108132490A (fr)
WO (1) WO2018113759A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111935275A (zh) * 2020-08-06 2020-11-13 杭州巨骐信息科技股份有限公司 一种智能井盖的管理系统
CN112333491A (zh) * 2020-09-23 2021-02-05 字节跳动有限公司 视频处理方法、显示装置和存储介质
CN113239446A (zh) * 2021-06-11 2021-08-10 重庆电子工程职业学院 一种室内信息量测方法及系统
CN115348542A (zh) * 2021-05-12 2022-11-15 中移雄安信息通信科技有限公司 基于视频网络与无线网络mr测量结合的定位方法及系统

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020244576A1 (fr) * 2019-06-05 2020-12-10 北京外号信息技术有限公司 Procédé de superposition d'objet virtuel sur la base d'un appareil de communication optique, et dispositif électronique correspondant
CN111242704B (zh) * 2020-04-26 2020-12-08 北京外号信息技术有限公司 用于在现实场景中叠加直播人物影像的方法和电子设备
WO2022036472A1 (fr) * 2020-08-17 2022-02-24 南京翱翔智能制造科技有限公司 Système d'interaction coopératif basé sur un avatar virtuel à échelle mixte
CN112866672B (zh) * 2020-12-30 2022-08-26 深圳卡乐星球数字娱乐有限公司 一种用于沉浸式文化娱乐的增强现实系统及方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010046123A1 (fr) * 2008-10-23 2010-04-29 Lokesh Bitra Procédé et système virtuels de balisage
US20130278633A1 (en) * 2012-04-20 2013-10-24 Samsung Electronics Co., Ltd. Method and system for generating augmented reality scene
CN104660995A (zh) * 2015-02-11 2015-05-27 尼森科技(湖北)有限公司 一种救灾救援可视系统
CN104702871A (zh) * 2015-03-19 2015-06-10 世雅设计有限公司 无人机投影显示方法、系统及装置
CN105212418A (zh) * 2015-11-05 2016-01-06 北京航天泰坦科技股份有限公司 基于红外夜视功能的增强现实智能头盔研制

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201673267U (zh) * 2010-05-18 2010-12-15 山东师范大学 基于增强现实技术的生命探测与救援系统
CN101833115B (zh) * 2010-05-18 2013-07-03 山东师范大学 基于增强现实技术的生命探测与救援系统及其实现方法
US9329286B2 (en) * 2013-10-03 2016-05-03 Westerngeco L.L.C. Seismic survey using an augmented reality device
CN106019364B (zh) * 2016-05-08 2019-02-05 大连理工大学 煤矿开采过程中底板突水预警系统及方法
CN205680051U (zh) * 2016-05-13 2016-11-09 哲想方案(北京)科技有限公司 一种虚拟现实系统
CN106019265A (zh) * 2016-05-27 2016-10-12 北京小鸟看看科技有限公司 一种多目标定位方法和系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010046123A1 (fr) * 2008-10-23 2010-04-29 Lokesh Bitra Procédé et système virtuels de balisage
US20130278633A1 (en) * 2012-04-20 2013-10-24 Samsung Electronics Co., Ltd. Method and system for generating augmented reality scene
CN104660995A (zh) * 2015-02-11 2015-05-27 尼森科技(湖北)有限公司 一种救灾救援可视系统
CN104702871A (zh) * 2015-03-19 2015-06-10 世雅设计有限公司 无人机投影显示方法、系统及装置
CN105212418A (zh) * 2015-11-05 2016-01-06 北京航天泰坦科技股份有限公司 基于红外夜视功能的增强现实智能头盔研制

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111935275A (zh) * 2020-08-06 2020-11-13 杭州巨骐信息科技股份有限公司 一种智能井盖的管理系统
CN112333491A (zh) * 2020-09-23 2021-02-05 字节跳动有限公司 视频处理方法、显示装置和存储介质
CN112333491B (zh) * 2020-09-23 2022-11-01 字节跳动有限公司 视频处理方法、显示装置和存储介质
CN115348542A (zh) * 2021-05-12 2022-11-15 中移雄安信息通信科技有限公司 基于视频网络与无线网络mr测量结合的定位方法及系统
CN113239446A (zh) * 2021-06-11 2021-08-10 重庆电子工程职业学院 一种室内信息量测方法及系统

Also Published As

Publication number Publication date
CN108132490A (zh) 2018-06-08

Similar Documents

Publication Publication Date Title
WO2018113759A1 (fr) Système et procédé de détection basés sur un système de positionnement et l'ar/mr
US11010965B2 (en) Virtual object placement for augmented reality
US10083540B2 (en) Virtual light in augmented reality
US10062213B2 (en) Augmented reality spaces with adaptive rules
KR102493749B1 (ko) 동적 환경에서의 좌표 프레임의 결정
US9230368B2 (en) Hologram anchoring and dynamic positioning
CN102591449B (zh) 虚拟内容和现实内容的低等待时间的融合
KR102227229B1 (ko) 추적 및 맵핑 오차에 강한 대규모 표면 재구성 기법
JP6391685B2 (ja) 仮想オブジェクトの方向付け及び可視化
US8933931B2 (en) Distributed asynchronous localization and mapping for augmented reality
US20180046874A1 (en) System and method for marker based tracking
US20180182160A1 (en) Virtual object lighting
CN102419631A (zh) 虚拟内容到现实内容中的融合
KR20150093831A (ko) 혼합 현실 환경에 대한 직접 상호작용 시스템
US20230252691A1 (en) Passthrough window object locator in an artificial reality system
US11385856B2 (en) Synchronizing positioning systems and content sharing between multiple devices
US11494997B1 (en) Augmented reality system with display of object with real world dimensions
Piérard et al. I-see-3D! An interactive and immersive system that dynamically adapts 2D projections to the location of a user's eyes
US10621789B1 (en) Tracking location and resolving drift in augmented reality head mounted displays with downward projection
WO2022129646A1 (fr) Environnement de réalité virtuelle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17882809

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.09.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17882809

Country of ref document: EP

Kind code of ref document: A1

点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载