WO2018148076A1 - System and method for automated positioning of augmented reality content - Google Patents
System and method for automated positioning of augmented reality content
- Publication number
- WO2018148076A1 (PCT/US2018/016197)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- content
- display
- render
- display device
- hmd
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 126
- 230000003190 augmentative effect Effects 0.000 title claims abstract description 25
- 230000000007 visual effect Effects 0.000 claims abstract description 84
- 230000004308 accommodation Effects 0.000 claims description 17
- 230000008569 process Effects 0.000 description 70
- 238000009877 rendering Methods 0.000 description 19
- 238000004458 analytical method Methods 0.000 description 12
- 238000001514 detection method Methods 0.000 description 11
- 230000008901 benefit Effects 0.000 description 7
- 238000010586 diagram Methods 0.000 description 7
- 230000006870 function Effects 0.000 description 7
- 230000004044 response Effects 0.000 description 6
- 238000004891 communication Methods 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 4
- 230000009471 action Effects 0.000 description 3
- 239000011449 brick Substances 0.000 description 3
- 238000013500 data storage Methods 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 239000003086 colorant Substances 0.000 description 2
- 230000004424 eye movement Effects 0.000 description 2
- 238000005259 measurement Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 230000004075 alteration Effects 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000000740 bleeding effect Effects 0.000 description 1
- 230000001427 coherent effect Effects 0.000 description 1
- 239000002131 composite material Substances 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 230000001815 facial effect Effects 0.000 description 1
- 230000002349 favourable effect Effects 0.000 description 1
- 238000007667 floating Methods 0.000 description 1
- 230000004886 head movement Effects 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000002955 isolation Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 230000003278 mimic effect Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 239000004575 stone Substances 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 230000014616 translation Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/816—Monomedia components thereof involving special video data, e.g. 3D video
Definitions
- the AR device used for viewing the AR content is embedded with sensors capable of producing depth information from the environment.
- a sensor, or combination of sensors, may include RGB-D cameras, stereo cameras, infrared cameras, lidar, radar, sonar, or any other sort of sensor known to those skilled in the art of image and depth sensing. Combinations of sensor types and enhanced processing methods may be employed for depth detection.
- sensors collect point cloud data from the environment as the user moves the device and its sensors through the environment. Sensor observations from varying points of view are combined to form a coherent 3D reconstruction of the complete environment, as in the sketch below.
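- A minimal sketch of this fusion step, using only NumPy; the camera-to-world poses, the voxel size, and the voxel-hash deduplication are illustrative assumptions rather than the patent's prescribed method:

```python
import numpy as np

def fuse_observations(frames, voxel_size=0.05):
    """frames: list of (points, pose) pairs, where points is an Nx3 array in
    camera space and pose is a 4x4 camera-to-world transform."""
    world = []
    for points, pose in frames:
        homogeneous = np.hstack([points, np.ones((len(points), 1))])
        world.append((homogeneous @ pose.T)[:, :3])  # camera -> world space
    cloud = np.vstack(world)
    # Merge overlapping viewpoints by voxel hashing so revisited areas do not
    # inflate the reconstruction.
    voxels = np.unique(np.floor(cloud / voxel_size).astype(np.int64), axis=0)
    return voxels * voxel_size  # one representative point per occupied voxel
```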
- the AR device proceeds to send the reconstructed model to the server along with a request for AR content.
- the level of completeness may be measured, for example, as the percentage of the surrounding area covered, the number of discrete observations, the duration of sensing, or any similar quality value that may be used as a threshold.
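- A sketch of that completeness gate; the threshold values are purely illustrative assumptions, and whether one or all measures must pass is left open in the description:

```python
def reconstruction_complete(coverage_pct, num_observations, sensing_seconds,
                            min_coverage=70.0, min_observations=200,
                            min_duration_s=30.0):
    """Return True once any quality measure crosses its threshold, at which
    point the device sends the model to the server with a content request."""
    return (coverage_pct >= min_coverage
            or num_observations >= min_observations
            or sensing_seconds >= min_duration_s)
```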
- 3D reconstruction of the environment may be carried out with any known reconstruction method, such as ones featured in KinectFusion or Point Cloud Library (PCL).
- at the beginning of an AR viewing session, the AR device starts to continuously stream RGB-D data to the server.
- the server performs the 3D reconstruction process using the received RGB-D data stream and stores the reconstructed environment model.
- as the server constructs the per-client environment model, it also begins to filter available AR content, removing content that is not preferable given that client's environment model.
- the content selection processing becomes more efficient.
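- One way to picture this filtering step is the following sketch; the metadata fields (`requires`, `features`) are hypothetical illustrations, not part of the patent:

```python
def filter_content(catalog, environment):
    """catalog: content items, each with a 'requires' set of environment
    features; environment: per-client model exposing detected 'features'."""
    return [item for item in catalog
            if item["requires"] <= environment["features"]]

# Example: a 3D model needing a flat surface is dropped when the
# reconstructed room offers none, so only the 2D overlay remains.
catalog = [{"name": "3d_car", "requires": {"flat_surface"}},
           {"name": "2d_stats", "requires": set()}]
environment = {"features": set()}
print(filter_content(catalog, environment))  # -> [{'name': '2d_stats', ...}]
```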
- Another embodiment takes the form of a system that includes a communication interface, a processor, and data storage containing instructions executable by the processor for causing the system to carry out at least the functions described in the preceding paragraph.
- AR content comprises 3D virtual content. This includes but is not limited to virtual models of objects associated with the primary media, such as a racecar for an F1 event or a solar system for a Neil deGrasse Tyson show. User preferences may be used to look up which racer is the user's favorite, and the system may then provide the 3D model of that racer's car. If the primary media is footage from a security camera, then the AR content may be a 3D virtual model of the secured building.
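- The favorite-racer look-up could be as simple as the sketch below, where the preference store, asset catalog, and identifiers are all hypothetical:

```python
FAVORITE_RACERS = {"user_42": "racer_17"}             # user preference store
CAR_MODELS = {"racer_17": "models/racer_17_car.glb"}  # 3D asset catalog

def select_car_model(user_id):
    """Return the 3D model of the user's favorite racer's car, if any."""
    racer = FAVORITE_RACERS.get(user_id)
    return CAR_MODELS.get(racer)

print(select_car_model("user_42"))  # -> models/racer_17_car.glb
```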
- An exemplary process described herein comprises analyzing the real-world environment to measure visual characteristics.
- this involves hardware components, such as sensors, working together with software components, such as object classifiers.
- the form of analysis executed and the visual characteristics measured vary with the use case: different optimizations may be leveraged based on detectable differences between use-case scenarios.
- the analysis of the real-world environment may be carried out by the AR headset, an external sensor, an external computing device, or a combination thereof. For example, the analysis may not search for surfaces suitable for rendering virtual 3D content if the available AR content does not include any virtual 3D content types.
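- A sketch of that content-driven analysis selection; the analyzer registry and the measurement functions are illustrative stand-ins for the real pipeline:

```python
import numpy as np

def analyze_environment(frame, available_types, analyzers):
    """analyzers: name -> (content types that need it, measurement fn).
    An analysis step runs only when some available content can use it."""
    results = {}
    for name, (needed_types, measure) in analyzers.items():
        if not needed_types or needed_types & available_types:
            results[name] = measure(frame)
    return results

analyzers = {
    "mean_color": (set(), lambda f: f.mean(axis=(0, 1))),           # always runs
    "surfaces": ({"3d"}, lambda f: "plane fitting would go here"),  # 3D only
}
frame = np.zeros((480, 640, 3))
# No 3D content is available, so the surface search is skipped.
print(analyze_environment(frame, {"2d_planar"}, analyzers))
```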
- generating AR content render parameters comprises comparing each AR content in the selection with colors around the display to avoid render locations with poor contrast.
- One example is not rendering a Christmas tree over a green wall.
- generating AR content render parameters comprises comparing each AR content in the selection with lighting conditions around the display to avoid render locations with poor contrast.
- One example is not rendering black text over a dark wall.
- generating AR content render parameters comprises comparing each AR content in the selection with a visual complexity around the display to avoid visually complex render locations.
- generating AR content render parameters comprises comparing each AR content in the selection with textures around the display to avoid render locations with poor textures (e.g., stone or brick walls and curtains).
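- The color and lighting comparisons above amount to a legibility test. A minimal sketch follows, assuming a simple luminance model (gamma linearization omitted) and borrowing the 4.5:1 minimum ratio from the WCAG guideline as an illustrative threshold:

```python
import numpy as np

def relative_luminance(rgb):
    """Approximate luminance of an RGB color in [0, 1]."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    la, lb = relative_luminance(color_a), relative_luminance(color_b)
    return (max(la, lb) + 0.05) / (min(la, lb) + 0.05)

def location_is_legible(content_color, region_pixels, min_ratio=4.5):
    """Reject a candidate render location when the content color is too close
    in luminance to the mean background color behind it."""
    background = np.asarray(region_pixels, dtype=float).reshape(-1, 3).mean(axis=0)
    return contrast_ratio(content_color, background) >= min_ratio

# Black text over a dark wall fails; the same text over a bright wall passes.
dark_wall = np.full((10, 10, 3), 0.1)
bright_wall = np.full((10, 10, 3), 0.9)
print(location_is_legible((0, 0, 0), dark_wall))    # False
print(location_is_legible((0, 0, 0), bright_wall))  # True
```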
- generating AR content render parameters comprises (i) virtually testing the available AR content in a plurality of potential render locations with a plurality of potential render styles, (ii) generating AR content-location-style compatibility scores, and (iii) generating the AR content render parameters based on the AR content-location-style compatibility scores.
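- Steps (i)-(iii) can be pictured as an exhaustive scoring loop; the sketch below assumes a caller-supplied `score_fn`, since the description does not fix a particular scoring function:

```python
from itertools import product

def generate_render_parameters(contents, locations, styles, score_fn):
    """(i) virtually test every placement, (ii) score each
    content-location-style triple, (iii) keep the best triple per content.
    Content items must be hashable (e.g., content IDs)."""
    best = {}
    for content, location, style in product(contents, locations, styles):
        score = score_fn(content, location, style)
        if content not in best or score > best[content][0]:
            best[content] = (score, location, style)
    return {content: {"location": loc, "style": style, "score": score}
            for content, (score, loc, style) in best.items()}
```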
- Output of AR content per the render parameters 306 may comprise output for side-stream content, output for 2D planar content, output for 3D virtual content, and output for 360-degree immersive content.
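- A sketch of dispatching output by content type per the render parameters; the handler bodies are illustrative stubs standing in for the real render paths:

```python
def output_content(item, params):
    handlers = {
        "side_stream":   lambda: print("side-stream beside the display"),
        "2d_planar":     lambda: print("2D plane at", params["location"]),
        "3d_virtual":    lambda: print("3D model at", params["location"]),
        "360_immersive": lambda: print("switching to 360-degree view"),
    }
    handlers[item["type"]]()  # render with the chosen output path

output_content({"type": "3d_virtual"}, {"location": (1.0, 0.5, 2.0)})
```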
- Content streaming and viewing 532 may commence.
- the AR content server 506 optimizes 524 the requested AR content by removing content that is not preferable for the present viewing conditions and viewing hardware.
- an optimized content stream 534 is sent from the AR content server 506 to the AR viewer client 504.
- Display content 536 may be displayed to the user 502 by the AR viewer client 504.
- FIG. 8 is a depiction of an example real-world environment 800 comprising a display 802 depicting a primary media content, in accordance with at least one embodiment.
- the real-world environment 800 is the inside of a room.
- the room is the user's chosen viewing location for TV supplemented with AR content.
- the room includes a TV display 802 depicting a soccer match, two blocks 804, 806 on the floor, and a window 808.
- Behind the left side of the display is a brick wall 810 and behind the right side of the display is a wall clock 812 mounted near the ceiling.
- FIG. 8 is a reference image for use with the subsequent descriptions of FIGs. 9-14.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
In one embodiment, the present invention relates to systems and methods that generate and display augmented reality (AR) content for a real-world environment in which a separate display is detected by a head-mounted display (HMD). AR content may be selected based on media content identified on the separate display. AR content may be displayed at locations, near the separate display device, that are selected based on visual characteristics of those locations. AR content may be displayed with render parameters that increase the visibility of the AR content. An embodiment may track the position and orientation of the HMD and may select a location for displaying the AR content based on the position and orientation of the HMD. An embodiment may display virtual connectors between AR content and objects identified in the media content of the separate display. AR content may be displayed at locations that minimize intersections of the virtual connectors.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762457442P | 2017-02-10 | 2017-02-10 | |
US62/457,442 | 2017-02-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
- WO2018148076A1 (fr) | 2018-08-16 |
Family
ID=61244696
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- PCT/US2018/016197 WO2018148076A1 (fr) | 2018-01-31 | System and method for automated positioning of augmented reality content
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018148076A1 (fr) |
- 2018-01-31: WO PCT/US2018/016197 patent/WO2018148076A1/fr (active Application Filing)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140132484A1 (en) * | 2012-11-13 | 2014-05-15 | Qualcomm Incorporated | Modifying virtual object display properties to increase power performance of augmented reality devices |
US20140168262A1 (en) * | 2012-12-18 | 2014-06-19 | Qualcomm Incorporated | User Interface for Augmented Reality Enabled Devices |
US20160147492A1 (en) * | 2014-11-26 | 2016-05-26 | Sunny James Fugate | Augmented Reality Cross-Domain Solution for Physically Disconnected Security Domains |
EP3096517A1 (fr) * | 2015-05-22 | 2016-11-23 | TP Vision Holding B.V. | Verres intelligents portables |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220237913A1 (en) * | 2019-05-22 | 2022-07-28 | Pcms Holdings, Inc. | Method for rendering of augmented reality content in combination with external display |
US11727321B2 (en) * | 2019-05-22 | 2023-08-15 | InterDigital VC Holdings Inc. | Method for rendering of augmented reality content in combination with external display |
US11995578B2 (en) | 2019-05-22 | 2024-05-28 | Interdigital Vc Holdings, Inc. | Method for rendering of augmented reality content in combination with external display |
US12179091B2 (en) | 2019-08-22 | 2024-12-31 | NantG Mobile, LLC | Virtual and real-world content creation, apparatus, systems, and methods |
- CN111986276A (zh) * | 2019-08-29 | 2020-11-24 | Yutou Technology (Hangzhou) Co., Ltd. | Content generation in a visual enhancement device |
US20210390765A1 (en) * | 2020-06-15 | 2021-12-16 | Nokia Technologies Oy | Output of virtual content |
US11636644B2 (en) * | 2020-06-15 | 2023-04-25 | Nokia Technologies Oy | Output of virtual content |
US20220230396A1 (en) * | 2021-01-15 | 2022-07-21 | Arm Limited | Augmented reality system |
US11544910B2 (en) * | 2021-01-15 | 2023-01-03 | Arm Limited | System and method for positioning image elements in augmented reality system |
- CN112734941A (zh) * | 2021-01-27 | 2021-04-30 | Shenzhen Dilepu Intelligent Technology Co., Ltd. | Method and apparatus for modifying attributes of AR content, computer device, and storage medium |
- EP4312108A1 (fr) * | 2022-07-25 | 2024-01-31 | Sony Interactive Entertainment Europe Limited | Identification device in a mixed reality environment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- WO2018148076A1 (fr) | System and method for automated positioning of augmented reality content | |
US11210838B2 (en) | Fusing, texturing, and rendering views of dynamic three-dimensional models | |
- WO2015192585A1 (fr) | Method and apparatus for playing an advertisement in a video | |
US9460351B2 (en) | Image processing apparatus and method using smart glass | |
CN108475180B (zh) | 在多个显示区域之间分布视频 | |
- CN110249291A (zh) | System and method for augmented reality content delivery in pre-captured environments | |
US20120287233A1 (en) | Personalizing 3dtv viewing experience | |
US9392248B2 (en) | Dynamic POV composite 3D video system | |
US20120068996A1 (en) | Safe mode transition in 3d content rendering | |
US20180204340A1 (en) | A depth map generation apparatus, method and non-transitory computer-readable medium therefor | |
- KR20140082610A (ko) | Method and apparatus for playing augmented reality exhibition content using a portable terminal | |
US10453244B2 (en) | Multi-layer UV map based texture rendering for free-running FVV applications | |
- EP3295372A1 (fr) | Facial signature methods, systems and software | |
US10764493B2 (en) | Display method and electronic device | |
US20200304713A1 (en) | Intelligent Video Presentation System | |
US20230152883A1 (en) | Scene processing for holographic displays | |
- CN110730340B (zh) | Virtual auditorium display method, system and storage medium based on lens transformation | |
- CN108076359B (zh) | Business object display method and apparatus, and electronic device | |
- CN113795863A (zh) | Processing of depth maps for images | |
US20230125961A1 (en) | Systems and methods for displaying stereoscopic rendered image data captured from multiple perspectives | |
US20230122149A1 (en) | Asymmetric communication system with viewer position indications | |
- CN114501127B (zh) | Inserting digital content in multi-picture video | |
US20220207848A1 (en) | Method and apparatus for generating three dimensional images | |
US20200265622A1 (en) | Forming seam to join images | |
US20180095347A1 (en) | Information processing device, method of information processing, program, and image display system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 18706022; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | EP: PCT application non-entry into the European phase | Ref document number: 18706022; Country of ref document: EP; Kind code of ref document: A1 |