CN110249291A - System and method for augmented reality content delivery in a pre-capture environment - Google Patents
- Publication number: CN110249291A
- Application number: CN201880009301.7A
- Authority
- CN
- China
- Prior art keywords
- content
- environment
- data
- client
- requested
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/40—Hidden part removal
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
- H04L65/612—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/16—Indexing scheme for image data processing or generation, in general involving adaptation to the client's capabilities
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/08—Bandwidth reduction
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Human Computer Interaction (AREA)
- Geometry (AREA)
- Signal Processing (AREA)
- Architecture (AREA)
- Computer Networks & Wireless Communication (AREA)
- Processing Or Creating Images (AREA)
- Information Transfer Between Computers (AREA)
Abstract
Description
Cross-Reference to Related Applications
This application is a non-provisional of, and claims the benefit under 35 U.S.C. § 119(e) of, U.S. Provisional Patent Application Serial No. 62/453,317, entitled "System and Method for Augmented Reality Content Delivery in Pre-Captured Environments," filed February 1, 2017, which is hereby incorporated by reference in its entirety.
Background
Storing digital media in the cloud makes it accessible from anywhere in the world, as long as the client has an Internet connection. Digital media with greater levels of realism is encoded in high-resolution formats that require large file sizes. Transmitting this information (e.g., streaming the digital media to a client device) requires a commensurately large allocation of communication resources. Visually rich virtual reality (VR) content and augmented reality (AR) content both consume large amounts of data, which is problematic for AR and VR content delivery. The limited bandwidth of the data connection between the service delivering the content and the clients consuming it is a significant bottleneck.
For example, the spherical video used in cinematic VR dramatically increases the resolution requirements of online video feeds. In cinematic VR, the delivered video stream contains not only the pixels seen on screen, as in ordinary 2D video, but the full 360° field of view from a single viewing point. When cinematic VR content is consumed, only a small region of the entire video is visible at any given moment. The small cropped region of the spherical view displayed on the client device needs to provide full HD (i.e., 1080p) resolution for the viewing experience to match that of the high-resolution 2D video in use today. When monoscopic 360-degree spherical video is compared with stereoscopic 360-degree 3D spherical video, in which a separate image feed is required for each eye, the bandwidth required for content delivery becomes a major bottleneck. Next-generation immersive content formats, which not only provide a 360-degree stereoscopic view from a single viewpoint but also allow the user to move within a restricted region of the content, will consume exponentially more communication resources.
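As a rough illustration of why spherical video inflates bandwidth so sharply, the following back-of-envelope arithmetic scales a full-HD viewport up to a complete sphere. The field-of-view figures are illustrative assumptions, not values taken from this disclosure:

```python
# If the region the viewer actually sees (assumed 90° x 59° here) must itself
# be full HD (1920x1080), the whole 360° x 180° sphere must carry
# proportionally more pixels.
viewport_w, viewport_h = 1920, 1080   # target resolution of the visible crop
fov_h, fov_v = 90, 59                 # assumed headset field of view, degrees

sphere_w = viewport_w * 360 / fov_h   # pixels around the full equator
sphere_h = viewport_h * 180 / fov_v   # pixels pole to pole

monoscopic = sphere_w * sphere_h
stereoscopic = 2 * monoscopic         # separate feed per eye

print(int(sphere_w), int(sphere_h))                 # 7680 3294
print(round(stereoscopic / (viewport_w * viewport_h), 1))  # ~24x plain 1080p
```

Under these assumptions, stereoscopic spherical video carries roughly two dozen times the pixels of an ordinary 1080p stream, before any allowance for free viewpoint movement.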
The ability to move within the VR space is not the only driver behind the size of these next-generation formats, as next-generation AR display technologies will preferably exhibit the same level of spatial realism. Current content streaming methods for delivering AR and VR experiences rely on traditional data transmission optimization techniques (e.g., spatial 2D image pixel compression and temporal motion compensation). Traditional data compression techniques do not address the nuances of AR use cases and can be improved upon.
Summary
This application describes systems and methods for selectively providing augmented reality (AR) information to AR display client devices. In some embodiments, the bandwidth used to provide AR information to a client device is reduced by selectively delivering to the client device only those AR content elements that are not occluded by physical objects in the AR viewing location.
For example, in one embodiment, a system (e.g., an AR content server) determines the position of an AR display device within an AR viewing environment. For at least a first element of AR content (e.g., a video or an AR object), the system selects a display position within the AR viewing environment. The system determines whether any physical object in the AR viewing environment lies along the line of sight between the AR display device and the display position of the first element. Only in response to determining that no physical object is in the line of sight does the system transmit the first element of AR content to the AR display device. The system may process multiple elements of AR content with different display positions (including elements that may be in motion and thus have changing display positions), and may send to the AR display device only those elements whose positions are not blocked by physical objects from the AR display device's perspective. In some embodiments, an AR element may be a portion of an AR object or video, whereby the unoccluded portion of the object or video is sent to the client and the occluded portion is not.
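The line-of-sight test just described can be sketched as follows. This is a minimal illustration rather than the disclosed implementation: physical objects are approximated by bounding spheres, and an element is considered transmittable only if the segment from the device to the element's display position hits no sphere:

```python
import math

def segment_hits_sphere(p0, p1, center, radius):
    """True if the segment p0->p1 passes through a sphere (a stand-in
    for a physical object's bounding volume)."""
    d = [b - a for a, b in zip(p0, p1)]          # segment direction
    f = [a - c for a, c in zip(p0, center)]      # start relative to center
    aa = sum(x * x for x in d)
    bb = 2 * sum(x * y for x, y in zip(f, d))
    cc = sum(x * x for x in f) - radius * radius
    disc = bb * bb - 4 * aa * cc
    if disc < 0:
        return False                              # ray line misses the sphere
    t1 = (-bb - math.sqrt(disc)) / (2 * aa)
    t2 = (-bb + math.sqrt(disc)) / (2 * aa)
    return (0 <= t1 <= 1) or (0 <= t2 <= 1)       # hit within the segment

def visible_elements(device_pos, elements, obstacles):
    """Keep only AR elements whose line of sight from the device is clear."""
    out = []
    for name, pos in elements:
        blocked = any(segment_hits_sphere(device_pos, pos, c, r)
                      for c, r in obstacles)
        if not blocked:
            out.append(name)
    return out

device = (0.0, 0.0, 0.0)
elements = [("poster", (10.0, 0.0, 0.0)), ("lamp", (0.0, 10.0, 0.0))]
obstacles = [((5.0, 0.0, 0.0), 1.0)]   # a pillar between device and poster
print(visible_elements(device, elements, obstacles))   # ['lamp']
```

The server would run such a test per element (re-running it as elements or the viewer move) and stream only the surviving elements.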
An AR headset may perform an exemplary method according to an embodiment. The method includes obtaining a digitally reconstructed three-dimensional (3D) environment of an augmented reality (AR) viewing location, detecting an object in the digitally reconstructed 3D environment, determining the depth and geometry of the object, sending a request for an AR content stream to a server, the request including information indicating the depth and the geometry of the object, and receiving the AR content stream from the server, the AR content stream excluding AR content occluded by the object.
According to an embodiment, the method further includes determining the AR viewing location using one or more of Global Positioning System (GPS) data, wireless data, and image recognition. In one embodiment, the digitally reconstructed 3D environment is obtained via one or more of a stored previous AR session, a combination of stored data and data collected in real time, or a point cloud of depth data.
In another embodiment, the digitally reconstructed 3D environment is obtained by scanning the AR viewing location, by examining the AR viewing location from multiple viewpoints.
In one embodiment, the method further includes tracking viewpoint position and orientation within the digitally reconstructed 3D environment.
In one embodiment, the information indicating the depth and the geometry of the object includes one or more of real-time raw sensor data, a depth map of the object, or an RGB-D data stream.
In one embodiment, the type of AR content requested is immersive video, and the information indicating the depth and the geometry of at least one real-world object is formatted as a spherical depth map of the real-world AR viewing location.
Another embodiment relates to an augmented reality (AR) content server that includes a processor, a communication interface, and data storage. In this embodiment, the AR content server is configured to receive a request for AR content via the communication interface, retrieve the requested AR content from the data storage, obtain a digitally reconstructed three-dimensional (3D) environment of an AR viewing location, receive via the communication interface a viewpoint position and orientation within the AR viewing location, perform a visibility analysis by comparing the requested AR content with the digitally reconstructed 3D environment and the viewpoint position and orientation, and modify the requested AR content by removing, in accordance with the visibility analysis, data associated with one or more objects that are not visible from the viewpoint position and orientation.
According to one embodiment, the digitally reconstructed 3D environment can be obtained from one or more of the data storage, an AR client, or a combination of the data storage and the AR client.
In another embodiment, or in the same embodiment, the modification of the requested AR content can be a function of the content type of the requested AR content, where the content type includes one or more of a three-dimensional (3D) scene, a light field, and immersive video with depth data.
Further, in one or more embodiments, the modified requested AR content can include a previously removed three-dimensional (3D) object reinserted into the requested AR content based on an updated viewpoint position and orientation.
In one embodiment, the visibility analysis includes identifying where the digitally reconstructed 3D environment has depth values smaller than the depth data associated with the requested AR content.
Another embodiment relates to a method that includes receiving a request for an augmented reality (AR) content video stream, retrieving the requested AR content video stream from data storage, obtaining a digitally reconstructed 3D environment of a real-world AR viewing location, receiving the viewpoint position and orientation of an AR client at the real-world AR viewing location, analyzing the digitally reconstructed 3D environment and the viewpoint position and orientation to determine whether the requested AR content video stream includes one or more objects that are occluded in the digitally reconstructed 3D environment, and removing the one or more objects that are occluded in the digitally reconstructed 3D environment.
In one embodiment, the method includes sending the modified AR content video stream to an AR display device.
In one embodiment, analyzing the digitally reconstructed 3D environment and the viewpoint position and orientation includes determining the position of the AR display device, identifying one or more physical objects in the digitally reconstructed 3D environment along a line of sight determined from the viewpoint position and orientation of the AR display device, and, based on the line of sight, determining whether the one or more objects in the AR content stream are occluded by the one or more physical objects.
In one embodiment, analyzing the digitally reconstructed 3D environment and the viewpoint position and orientation includes comparing depth values of the digitally reconstructed 3D environment with depth values in the AR content video stream, and discarding portions of the AR content video stream whose depth values are greater than the corresponding depth values in the digitally reconstructed 3D environment, to account for occlusion, by one or more dynamic and/or static objects, that would otherwise be missing from the digitally reconstructed 3D environment.
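A minimal sketch of this pixel-wise depth comparison follows; the flattened one-dimensional "images" and placeholder pixel values are purely illustrative:

```python
# env_depth: depth of the reconstructed real environment from the viewpoint.
# content_depth: depth at which each AR content pixel would be rendered.
# A content pixel that is deeper than the real surface in front of it is
# occluded and can be dropped before transmission.
INF = float("inf")

def cull_occluded(content_rgb, content_depth, env_depth):
    culled = []
    for rgb, cd, ed in zip(content_rgb, content_depth, env_depth):
        if cd > ed:                # real surface is closer: pixel is hidden
            culled.append(None)    # not transmitted
        else:
            culled.append(rgb)
    return culled

rgb  = ["r0", "r1", "r2", "r3"]
cdep = [2.0, 5.0, 1.0, 4.0]    # AR content depths (meters)
edep = [INF, 3.0, INF, 4.5]    # environment depths (INF = no real surface)
print(cull_occluded(rgb, cdep, edep))   # ['r0', None, 'r2', 'r3']
```

Only the second pixel is dropped: it sits 5 m away, behind a real surface at 3 m along the same ray.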
In a variation of the embodiments of the present application, the client knows the type of content it is requesting from the server and, based on this, can provide environment information in as compact a format as possible. For example, for immersive video, the client may not send the entire environment model, but only a spherical depth map of the environment rendered from the current viewpoint.
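Producing such a spherical depth map from point data might look like the toy sketch below; the equirectangular grid resolution and nearest-depth binning scheme are illustrative assumptions, not part of this disclosure:

```python
import math

def spherical_depth_map(points, viewpoint, width=8, height=4):
    """Render nearby geometry as an equirectangular depth map: one nearest
    depth per (azimuth, elevation) cell, instead of the full 3D model.
    Toy resolution; a real client would use a much denser grid."""
    depth = [[float("inf")] * width for _ in range(height)]
    vx, vy, vz = viewpoint
    for x, y, z in points:
        dx, dy, dz = x - vx, y - vy, z - vz
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        if r == 0:
            continue
        az = math.atan2(dy, dx)          # -pi .. pi
        el = math.asin(dz / r)           # -pi/2 .. pi/2
        col = min(width - 1, int((az + math.pi) / (2 * math.pi) * width))
        row = min(height - 1, int((el + math.pi / 2) / math.pi * height))
        depth[row][col] = min(depth[row][col], r)   # keep nearest surface
    return depth

# A single wall point 3 m in front of the viewer.
dm = spherical_depth_map([(3.0, 0.0, 0.0)], (0.0, 0.0, 0.0))
print(dm[2][4])   # 3.0
```

The grid (height x width floats) is far smaller than a full point cloud, which is the compactness advantage the text describes.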
In a variation of the embodiments of the present application, the AR client device does not perform viewpoint and orientation tracking itself, but instead uses an external outside-looking-in tracking solution. In a first further variation, the AR client device first receives tracking information from the external tracking solution, transforms it into the same coordinate system as used in the 3D reconstruction, and then sends it to the content server. In a second further variation, the external tracking solution transforms the viewpoint and orientation into the same coordinate system as used in the 3D reconstruction and then sends them to the content server. In a third further variation, the AR content server receives the tracking information from the external tracking solution, and the server transforms it into the same coordinate system as used in the 3D reconstruction.
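Whichever party performs it, the coordinate-system conversion in these variations amounts to a rigid transform. The calibration below (a rotation matrix R and offset t between the tracker frame and the reconstruction frame) is made up for the example:

```python
# Hypothetical sketch: re-express an externally tracked viewpoint in the
# coordinate system of the 3D reconstruction. Assumes the calibration
# between the two frames is known as rotation R and translation t.
def transform_pose(position, R, t):
    x, y, z = position
    return tuple(R[i][0] * x + R[i][1] * y + R[i][2] * z + t[i]
                 for i in range(3))

# 90° rotation about the z axis plus a 1 m shift along x (made-up calibration).
R = [[0, -1, 0],
     [1,  0, 0],
     [0,  0, 1]]
t = (1.0, 0.0, 0.0)
print(transform_pose((2.0, 0.0, 0.0), R, t))   # (1.0, 2.0, 0.0)
```

A full implementation would transform orientation (e.g., a quaternion) alongside position, but the principle is the same.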
In a variation of the embodiments of the present application, the AR content server generates depth information for a light field once the light field data has been uploaded to the server. At runtime, when the AR content server receives a request for light field content, the content can be modified more efficiently by using the previously generated depth information (eliminating the need to perform depth detection during content delivery).
Any of the embodiments, variations, and permutations described in the subsequent paragraphs and anywhere else in this disclosure can be implemented with respect to any embodiment, including with respect to any method embodiment and with respect to any system embodiment.
The disclosed systems and methods can significantly reduce the amount of data transferred between an AR content service and viewing client devices in many online AR content delivery use cases. The reduction is based on the physical characteristics of the environment in which the client consumes the content, and can be performed on a per-client basis. This approach can be combined with other traditional data compression methods, thereby providing further content delivery optimization without sacrificing the benefits achievable with traditional content delivery compression techniques.
Brief Description of the Drawings
In the accompanying drawings, like reference numerals refer to identical or functionally similar elements throughout the separate views. The drawings, together with the detailed description below, are incorporated in and form part of the specification and serve to further illustrate embodiments of concepts that include the claimed invention, and to explain various principles and advantages of those embodiments.
FIG. 1 is a visual overview of a method for AR content delivery in a pre-capture environment, according to at least one embodiment.
FIG. 2 is a flowchart of a method performed by an AR client for AR content delivery in a pre-capture environment, according to at least one embodiment.
FIG. 3 is a flowchart of a method performed by an AR content server for AR content delivery in a pre-capture environment, according to at least one embodiment.
FIG. 4 is a sequence diagram of a method for AR content delivery in a pre-capture environment, according to at least one embodiment.
FIG. 5 is an example perspective view of a user at an example real-world AR viewing location, according to at least one embodiment.
FIG. 6 is a plan view of a digitally reconstructed 3D environment, according to at least one embodiment.
FIG. 7 is a plan view of requested AR content and the digitally reconstructed 3D environment of FIG. 6, according to at least one embodiment.
FIG. 8 is a plan view illustrating a visibility analysis performed on the requested AR content of FIG. 7, according to at least one embodiment.
FIG. 9 is a plan view illustrating modified AR content and the digitally reconstructed 3D environment of FIG. 6, according to at least one embodiment.
FIG. 10 is an example perspective view illustrating the modified AR content of FIG. 9 as seen by the user of FIG. 5, according to at least one embodiment.
FIG. 11 illustrates an exemplary wireless transmit/receive unit (WTRU) that may be used as an AR client viewer in some embodiments.
FIG. 12 illustrates an exemplary network entity that may be used as an AR content server or as an AR content store in some embodiments.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of embodiments of the present invention.
The apparatus and method components have been represented, where appropriate, by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention, so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
Detailed Description
Before proceeding with this detailed description, it is noted that the entities, connections, arrangements, and the like that are depicted in, and described in connection with, the various figures are presented by way of example and not by way of limitation. As such, any and all statements or other indications as to what a particular figure "depicts," what a particular element or entity in a particular figure "is" or "has," and any and all similar statements (which in isolation and out of context might be read as absolute and therefore limiting) may only properly be read as being constructively preceded by a clause such as "In at least one embodiment, …".
The present application discloses methods and systems for augmented reality content delivery in pre-captured environments. This disclosure relates to content data transmission between an augmented reality (AR) content server and an AR client (e.g., an AR client running on an AR device such as an optical see-through AR headset or a video see-through AR headset). The exemplary methods described herein have the client device provide the content server with information about the environment in which the augmented content is to be displayed. The AR content server adapts AR content delivery based on the environment characteristics of each individual client, whereby redundant data that is not visible to a particular client is removed from the content stream transmitted to that client.
In an exemplary embodiment, at the start of an AR session, the AR client device reconstructs a digital model of the current 3D environment. Generation of this model may use previously collected data corresponding to the same location. In other cases, the model is produced by scanning the environment using sensors built into the AR client device (e.g., RGB-D or other depth sensors). A combination of previously collected data and currently collected data may be used to improve the accuracy of the model. A copy of the model is then sent to the content server, giving the server immediate access to a digital version of the current 3D environment. The server's copy of the model may be updated and improved over time by incorporating further sensor data from sensing client devices. The model can be used to perform virtual-object visibility analysis. The client device also uses the collected environment data to assist position and orientation tracking (pose tracking). The pose of the device (viewpoint position and orientation) is used to keep the augmented elements displayed by the AR client synchronized with the translation of the user's head movements. The pose information is sent to the server, or is computed by the server using the environment model or other data, to allow the server to estimate the visibility of elements of the AR content from the user's viewpoint. From the user's viewpoint, real-world objects (e.g., tables, buildings, trees, doors, and walls) can occlude AR content. This occurs when the depth of the AR content is farther from the user than a real-world object that lies on the same line of sight. Realistic AR content that includes high-resolution depth information allows for an enhanced perception of reality.
In one embodiment, the AR device used for viewing AR content is equipped with sensors capable of generating depth information from the environment. The sensor or combination of sensors may include one or more of an RGB-D camera, a stereo camera, an infrared camera, lidar, radar, sonar, and any other kind of sensor known to those skilled in the art of depth sensing. In some embodiments, a combination of sensor types and enhanced processing methods is used for depth detection. When a user initiates a viewing session, the viewing client reconstructs the environment model by collecting depth data from the environment.
While the reconstruction process executes on the client device, the sensors collect point cloud data from the environment as the user moves the device and sensors within it. Sensor observations from varying viewpoints are combined to form a coherent 3D reconstruction of the complete environment. Once the 3D reconstruction reaches a completeness threshold, the AR client sends the reconstructed model and a request for particular AR content to the AR content server. In exemplary embodiments, the completeness of the 3D reconstruction may be measured, for example, by the percentage of the surrounding area covered, the number of discrete observations, the sensing duration, or any similar quality value that can serve as a threshold. The 3D reconstruction of the environment may be performed using any known reconstruction method, such as those described for KinectFusion™ or the Point Cloud Library (PCL).
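The completeness threshold might, for instance, be evaluated as an angular-coverage ratio. In the sketch below, the 70% threshold and the ten-degree azimuth sectors are arbitrary example values, not figures from this disclosure:

```python
# Count how many distinct angular sectors of the surroundings have been
# observed, and compare the coverage ratio against a chosen threshold.
def coverage_ratio(observed_cells, total_cells):
    return len(set(observed_cells)) / total_cells

def reconstruction_complete(observed_cells, total_cells, threshold=0.70):
    return coverage_ratio(observed_cells, total_cells) >= threshold

# 36 ten-degree azimuth sectors; the user has swept 30 distinct ones.
seen = list(range(30))
print(reconstruction_complete(seen, 36))   # True (30/36 ≈ 0.83)
```

Observation count or sensing duration, the other measures the text mentions, would slot into the same pattern with a different metric function.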
In a variation of this process, at the start of the AR viewing session the client begins continuously streaming RGB-D data to the server. The AR client then sends a content request. In this variation, the AR content server performs the 3D reconstruction process using the received RGB-D data stream and stores the reconstructed environment model. As the server builds the per-client environment model, it also begins modifying the AR content by removing virtual elements that are occluded by the current environment. As the 3D reconstruction becomes more complete, the content removal process becomes more precise.
A user operating an AR client device may initiate a request for an AR session. This request is sent to the AR content server and indicates the specific content that the user has selected. The AR content to be displayed may be defined as a web link to the AR content server together with a reference to the specific AR content held within the AR content server. The operating system may automatically handle initializing the AR client based on detection of the type of link activated by the user. The client may be embedded in an application that receives the content link, such as a web browser or a mobile messaging application. Once the user starts an AR session, the previously described 3D reconstruction of the environment is initiated.
After the reconstruction process, the client starts a pose tracking process. The purpose of pose tracking is to estimate where the client device is located and which direction it is facing relative to the previously reconstructed environment. Pose tracking can be performed using any known tracking technique and can be assisted by the reconstructed environment model and client device sensor data. In some embodiments, once the client device pose has been determined, the client sends the reconstructed environment model and the determined pose, along with a request for specific AR content, to the AR content server.
The AR content server receives the AR content request from the AR client and retrieves the requested content from a content store. The AR content server then modifies the requested AR content to remove redundant data. In this case, content delivery optimization refers to a process performed by the AR content server. During content delivery optimization, portions of the requested AR content that would be rendered at locations not visible to the viewer are removed from the content. In AR, the virtual elements of AR content are preferably perceived as part of (or merged with) the real world that the viewer sees. To enhance the perception of reality, embodiments disclosed herein reproduce in the virtual content the occlusion caused by physical objects that block the user's view of virtual objects. By removing from the content all elements that are occluded by real-world elements, the size of the content can be reduced without a loss of quality observable by the user. Depending on the type of content requested, the AR content server uses slightly different content delivery optimization processes. Indeed, even the client device can apply further refinements based on the type of AR content it requests. Different approaches for optimizing light fields, photorealistic video, and synthetic 3D scenes are described in the following sections.
Light field data can comprise an array of sub-images acquired simultaneously from several different viewpoints of a scene. Based on this sub-image array, new viewpoints between the original viewpoints can be generated computationally. The sub-image array can also be processed to refocus the assembled final image. Most relevantly, the sub-image array can be used to extract depth information corresponding to the captured scene, e.g., via analysis of the parallax and disparity maps of the sub-images.
For light field optimization, the AR content server extracts depth information from the light field and aligns the light field depth map with the environment model received from the AR client. For each sub-image of the light field, the AR server transforms the viewpoints of the combined light field depth map and the client environment model to match the viewpoint of that light field sub-image. For each pixel of the light field sub-image, the AR content server compares the depth value from the client's environment model with the corresponding depth value extracted from the light field data. If the light field depth map has a depth value greater than the depth value from the client environment model, the corresponding pixels in the light field sub-image are discarded, because they are occluded by real-world objects and are therefore not visible to the client.
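The per-pixel comparison for one sub-image can be sketched as follows, assuming the depth maps have already been transformed to the sub-image's viewpoint. The function name and the choice of zeroing discarded pixels (rather than, say, marking them transparent) are assumptions of this sketch:

```python
import numpy as np

def cull_occluded_pixels(subimage, lf_depth, env_depth):
    """Discard sub-image pixels whose light-field depth is greater than the
    depth of the client's environment model along the same ray, i.e. the
    pixel lies behind a real-world surface and is occluded. Returns the
    culled sub-image and the boolean occlusion mask.

    subimage:  (H, W, 3) color array of one light-field sub-image
    lf_depth:  (H, W) depth map extracted from the light field
    env_depth: (H, W) depth map of the client environment model,
               rendered from the same sub-image viewpoint."""
    occluded = lf_depth > env_depth
    culled = subimage.copy()
    culled[occluded] = 0  # discarded pixels; large uniform regions compress well
    return culled, occluded
```

The server would run this over every sub-image of the light field before repackaging the remaining data for streaming.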
For photorealistic video (which has associated depth information available), optimization can be performed by comparing the depth values of the video with the depth values of the environment relative to the tracked camera pose. To effect the content removal, the AR content server re-renders the video. During re-rendering, portions of the video data that are occluded by real-world elements in the user's environment are discarded. These large discarded regions can be compressed efficiently using conventional video compression methods.
In some embodiments, a synthetic 3D scene comprising virtual 3D information may be optimized by performing a visibility check on the virtual 3D elements against the geometry captured from the real-world viewing environment. In the visibility check, the visibility of the content from the user's viewpoint is determined. Depending on the geometry description of the virtual AR elements, the visibility check can be performed per object or per vertex of each object. In some embodiments, object-level removal (as opposed to vertex-level removal) is performed where removing part of an object's geometry would require significant reorganization of the object data. Such reorganization would prevent 3D rendering algorithms from using buffer objects, which store parts of the content as static entities in GPU memory. As soon as the content delivery optimization is complete, the modified AR content is sent to the AR client.
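Object-level removal can be sketched as below. The data layout (per-object sample depths paired with environment depths along the same rays) and all names are hypothetical; the patent does not prescribe how samples are chosen:

```python
def object_visible(sample_depths, env_depths, eps=1e-3):
    """Per-object visibility check: the object stays if ANY of its sampled
    points lies at or in front of the real-world surface along the same ray.
    Only when every sample is occluded is the whole object removed, which is
    the object-level removal that keeps GPU buffer objects intact."""
    return any(d <= e + eps for d, e in zip(sample_depths, env_depths))

def cull_scene(scene):
    """scene: list of (name, sample_depths, env_depths) triples, a
    hypothetical structure for illustration. Returns surviving object names."""
    return [name for name, d, e in scene if object_visible(d, e)]
```

A vertex-level variant would instead filter individual vertices, at the cost of rebuilding the vertex buffers that the renderer caches in GPU memory.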
The AR client receives the modified AR content from the AR content server and displays it to the viewer. To display the content, the AR client determines the device pose from the client device depth sensors. Based on the orientation information, the display process aligns the coordinate system of the received content with the user's coordinate system. In some embodiments, after aligning the content, a sub-process compares the depth values received from the client device sensors with the depth values of the modified content received from the AR content server. Regions of the content whose depth values are greater than the corresponding depth values from the client device depth sensor can be discarded and left unrendered, because they would otherwise be rendered at spatial locations that are occluded by real physical elements in the environment. The AR content server has already removed AR content elements occluded by elements present in the environment model that the client sent to the server, but this run-time depth comparison can additionally handle occlusions caused by dynamic elements and by static elements missing from the environment model sent to the content server.
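The value of the client-side run-time test can be illustrated with a small sketch. The scenario (a dynamic occluder entering part of the view) and all array values are hypothetical:

```python
import numpy as np

def render_mask(content_depth, live_depth):
    """Per-pixel mask of content that survives the run-time depth comparison:
    a content sample is rendered only if it is at or nearer than the depth
    the sensor currently reports along the same ray."""
    return content_depth <= live_depth

# Example: a virtual object at 3.0 m passes the server's filtering against
# the static environment model (nearest static surface at 4.0 m), but at
# run time a dynamic occluder at 2.0 m hides the lower half of the view.
static_depth = np.full((2, 2), 4.0)          # static model sent to the server
live_depth = np.array([[4.0, 4.0],
                       [2.0, 2.0]])          # live sensor: occluder below
content_depth = np.full((2, 2), 3.0)

server_mask = render_mask(content_depth, static_depth)  # everything survives
client_mask = render_mask(content_depth, live_depth)    # lower half discarded
```

The server-side filter and this run-time test are complementary: the first reduces the streamed data, the second corrects for whatever the streamed environment model did not capture.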
FIG. 1 is a visual overview of a method for AR content delivery in a pre-captured environment, according to at least one embodiment. The visual overview depicts a user 100 wearing an AR wearable client device 102. The user 100 is in a real-world AR viewing location 104 that includes a large table 106. The client device 102 begins reconstructing the current environment via the data-collection cameras and sensors within the AR wearable client device 102. The client 102 collects data about the shape and position of the table using the cameras and sensors in the AR wearable device and sends this data to the AR content server 108.
The user 100 wishes to consume AR content depicting two virtual characters. The AR client sends a content request 114 to the AR content server 108. The AR content server 108 performs a visibility analysis on the selected content. Based on the user's position and orientation with respect to the current environment, the AR content server 108 determines that the top virtual character 110 is visible and that the bottom virtual character 112 would be occluded by the table 106.
The AR content server 108 then modifies the AR content by removing the bottom virtual character. Standard compression techniques can be used to reduce the size of the modified frames. The modified content stream is then sent from the server to the user 100 for consumption.
One embodiment includes a process performed by the AR wearable client device 102. The process includes using the AR client device 102 to generate a digitally reconstructed 3D environment of the real-world AR viewing location. The process includes detecting an object in the digitally reconstructed 3D environment. The process also includes determining the depth and geometry of the object. The process also includes sending information indicating the depth and geometry of the object to the AR content server 108. The process also includes sending a request for AR content from the AR client to the AR content server 108. The process also includes receiving, at the AR client device 102, an AR content stream 116 from the AR content server 108, wherein the received AR content stream 116 has been filtered by the AR content server 108 to exclude AR content 112 that would be occluded by the object.
FIG. 2 is a flowchart of a method, performed by an AR client, for AR content delivery in a pre-captured environment, according to at least one embodiment. FIG. 2 depicts a process 200 that includes elements 202-210. At element 202, the AR client (e.g., client 102) determines the real-world AR viewing location. At element 204, the process 200 includes identifying and tracking the viewpoint position and orientation of the AR client (e.g., client 102) within the real-world AR viewing location 104. At element 206, the process 200 includes sending information indicating the viewpoint position and orientation to the AR content server 108. At element 208, the process 200 includes sending a request for AR content to the AR content server 108. At element 210, the process 200 includes receiving, at the AR client device 102, an AR content stream (e.g., stream 116) from the AR content server, wherein the received AR content stream (e.g., stream 116) has been filtered by the AR content server (e.g., server 108) to exclude AR content that is not visible in the real-world AR viewing location from the viewpoint position and orientation.
FIG. 3 is a flowchart of a method, performed by an AR content server, for AR content delivery in a pre-captured environment, according to at least one embodiment. FIG. 3 depicts a process 300 that includes elements 302-310. At element 302, the AR content server (e.g., server 108) receives a request for AR content from an AR client (e.g., client device 102). At element 304, the process 300 includes obtaining a digitally reconstructed 3D environment of the real-world AR viewing location (e.g., location 104). At element 306, the process 300 includes receiving the viewpoint position and orientation of the AR client in the real-world AR viewing location. At element 308, the process 300 includes comparing the requested AR content with the digitally reconstructed 3D environment and the viewpoint position and orientation to perform a visibility analysis of the requested AR content, and modifying the requested AR content by removing the portions that the visibility analysis indicates are not visible from the viewpoint position and orientation. At element 310, the process 300 includes fulfilling the request for AR content by sending the modified AR content to the AR client.
In at least one embodiment, obtaining the digitally reconstructed 3D environment of the real-world AR viewing location includes obtaining the reconstructed 3D environment from a data store. In at least one embodiment, obtaining the digitally reconstructed 3D environment of the real-world AR viewing location includes obtaining the reconstructed 3D environment from the AR client. In at least one embodiment, obtaining the digitally reconstructed 3D environment of the real-world AR viewing location includes generating the reconstructed 3D environment using data from a data store and real-time data from the AR client device.
In at least one embodiment of the AR content server, how the requested AR content is modified varies with the content type. In one such embodiment, the content type is a 3D scene, and modifying the requested AR content includes: (i) removing occluded 3D objects from the requested AR content, (ii) obtaining updates to the viewpoint position and orientation, and (iii) based on the updated viewpoint position and orientation, sending to the AR client AR objects that were previously removed but are now visible. In another such embodiment, the content type is a light field, and modifying the requested AR content includes: (i) processing the light field content frame by frame to remove occluded portions, and (ii) repackaging the remaining data for streaming. In another such embodiment, the content type is photorealistic video with depth data, and modifying the requested AR content includes: (i) removing the portions of the video where the depth values of the digitally reconstructed 3D environment are smaller than the depth values associated with the video, (ii) re-rendering the frames, and (iii) repackaging the video for streaming.
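The per-content-type dispatch can be sketched as follows. The handler names, placeholder content structures, and the dispatch-table design are assumptions of this sketch; each handler stands in for one of the three embodiments above:

```python
def remove_occluded_objects(content, env_model, pose):
    # 3D scene: drop whole objects flagged as occluded (placeholder logic;
    # a real server would run the visibility analysis here)
    return [obj for obj in content if not obj.get("occluded")]

def cull_light_field_frames(content, env_model, pose):
    # Light field: process frame by frame, removing occluded portions
    return [{k: v for k, v in frame.items() if k != "occluded_part"}
            for frame in content]

def rerender_and_repackage_video(content, env_model, pose):
    # Video with depth: re-render frames with occluded regions blanked,
    # then repackage for streaming (placeholder)
    return {"frames": content["frames"], "repackaged": True}

def optimize_for_delivery(content, content_type, env_model, pose):
    """Dispatch the content-type-specific optimization described above."""
    handlers = {
        "3d_scene": remove_occluded_objects,
        "light_field": cull_light_field_frames,
        "video_with_depth": rerender_and_repackage_video,
    }
    try:
        handler = handlers[content_type]
    except KeyError:
        raise ValueError(f"unsupported content type: {content_type}")
    return handler(content, env_model, pose)
```

Keeping the optimizations behind one dispatch point mirrors the text's observation that the server uses "slightly different" processes per content type while the surrounding request flow stays the same.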
FIG. 4 is a sequence diagram of a method for AR content delivery in a pre-captured environment, according to at least one embodiment. As a further resource, the following paragraphs walk through an example traversal of the sequence diagram of FIG. 4.
The client device 102 reconstructs the current AR viewing location (reconstruct environment 404). The user 100 starts the AR viewing client 102 at step 406 and scans the current environment 408 using an AR headset, which can be coupled to the viewing client directly or wirelessly. Alternatively, the client device 102 may determine the current AR viewing location and send this information to an AR content server 108 that already has a model of the environment. Scanning and reconstructing the environment is also useful for tracking purposes. In FIG. 4, when the digital reconstruction is complete, the user 100 is notified at step 410 that the client is ready.
The user 100 can then initiate an AR content streaming (viewing) session, as shown at step 412. The user 100 enters or selects an AR content link 414 and provides it to the AR client 102. Upon receiving the link, the AR client performs pose tracking 414 (although this step may have been initialized at an earlier time). The AR client sends a content request 416, together with the digitally reconstructed 3D environment and the current AR client pose, to the AR content server 108.
At fetch content 418 and step 420, the AR content server 108 retrieves the requested AR content from the AR content store 402.
Content streaming and viewing 422 may begin after retrieval. The AR content server 108 optimizes the requested AR content 424 by removing elements occluded by the real-world environment. This may involve modifying content frames, re-rendering the modified frames, and so on. The AR content server 108 performs a visibility analysis (the AR client may also perform this), using the digitally reconstructed 3D environment and the client pose, to determine which AR content is not visible to the user 100. The optimized content 424 is then provided to the AR viewing client 102 via an optimized content stream 426, whereby the user 100 can see the displayed content 428.
At least one embodiment includes a process for digitally reconstructing an augmented reality (AR) viewing location using an AR client. The process also includes sending information describing the digitally reconstructed AR viewing location from the AR client to an AR content server. The process also includes sending a request for AR content from the AR client to the AR content server. The process also includes sending, from the AR client to the AR content server, information describing the position and orientation of the AR client within the AR viewing location. The process also includes determining, at the AR content server, the visibility of the requested AR content based on the received information describing the digitally reconstructed AR viewing location and the received information describing the position and orientation of the AR client. The process also includes modifying, at the AR content server, the requested AR content by removing the portions of the requested AR content determined to be not visible. The process also includes sending the modified AR content from the AR content server to the AR client. The process also includes augmenting the AR viewing location with the modified AR content using the AR client.
FIG. 5 is an illustration of a user 500 and an exemplary real-world AR viewing location 502, according to at least one embodiment. FIG. 5 is a reference image used to support the descriptions of FIGS. 6-10. FIG. 5 depicts the user 500 in the exemplary real-world AR viewing location. The viewing location 502 is a room containing two doorways 504 and 506, a dresser 508, a sofa 510, and a chair 512. The user 500 scans the real-world AR viewing location using an AR headset 514.
FIG. 6 is a plan view of a digitally reconstructed 3D environment, according to at least one embodiment. The user 600 scans the AR viewing location using the AR client and generates the digitally reconstructed 3D environment. The digital reconstruction includes the two doorways 602 and 604, the dresser 606, the sofa 608, and the chair 610. FIG. 6 is a plan view of this reconstruction. The AR headset worn by the user 600 determines its pose and sends the pose and the digital reconstruction to the AR content server 108.
FIG. 7 is an illustration of requested AR content and the digitally reconstructed 3D environment of FIG. 6, according to at least one embodiment. The user 600 has requested AR content depicting nine stars 700 (e.g., in an AR planetarium application). The coordinate system of the AR content is aligned with the real-world viewing location using the digitally reconstructed 3D environment and the pose information. This alignment can be performed in a number of ways. For example, the alignment may be chosen to minimize the amount of AR content (e.g., the number of stars) that would not be displayed due to occlusion. The alignment may be selected using user input; for example, the user may adjust the alignment until a preferred alignment is selected. The alignment may be selected based on other inputs; for example, one or more accelerometers in the AR display device may be used to ensure that the "up" direction of the AR content is aligned with the "up" direction of the AR viewing environment. In some embodiments, the alignment may be selected such that the direction the user is facing at the start of the AR session is aligned with the preferred viewing direction of the AR content.
Based on the selected alignment, the display position of each element of the AR content (e.g., the display position of each star) can be determined by using the selected alignment to transform the coordinates of those elements from the coordinate system of the AR content to the coordinate system of the 3D viewing environment.
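One common way to encode such an alignment is as a rigid transform (a rotation plus a translation); the sketch below assumes that encoding, which the patent does not prescribe, and uses an illustrative 90-degree rotation:

```python
import numpy as np

def content_to_environment(points, rotation, translation):
    """Transform AR-content coordinates into the 3D viewing environment's
    coordinate system using the selected alignment, expressed here as a
    3x3 rotation matrix and a 3-vector translation.

    points: (N, 3) array of element positions in content coordinates."""
    return points @ rotation.T + translation

# Example: a 90-degree rotation about the vertical (z) axis plus an offset.
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]])
stars_content = np.array([[1.0, 0.0, 2.0]])  # one star, content frame
stars_env = content_to_environment(stars_content, Rz,
                                   np.array([0.0, 0.0, 1.0]))
```

Each star's transformed position is what the visibility analysis then tests against the reconstructed room geometry.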
FIG. 8 is an illustration of a visibility analysis performed on the requested AR content of FIG. 7, according to at least one embodiment. The AR content server 108 performs the visibility analysis using the digitally reconstructed 3D environment and the pose information. The AR content server 108 determines that four stars 802, 804, 806, and 808 included in the original AR content are not visible from the viewpoint of the user 600, and removes them from the content.
FIG. 9 is an illustration of the modified AR content and the digitally reconstructed 3D environment of FIG. 6, according to at least one embodiment. The user has requested AR content depicting nine stars, but the AR content server has modified the AR content, and only five stars now appear in the modified data stream.
FIG. 10 is an illustration of the modified AR content of FIG. 9 as seen by the user of FIG. 5, according to at least one embodiment. FIG. 10 shows the real-world environment augmented with the modified AR content. The five remaining stars 1002, 1004, 1006, 1008, and 1010 are all visible. The four removed stars 802, 804, 806, and 808 are not merely rendered hidden behind the wall, chair, and other objects; they do not appear in the data stream at all.
Exemplary embodiments disclosed herein are implemented using one or more wired and/or wireless network nodes, such as a wireless transmit/receive unit (WTRU) or other network entity.
FIG. 11 is a system diagram of an exemplary WTRU 1102 that may be used as an AR client viewer in embodiments described herein. As shown in FIG. 11, the WTRU 1102 may include a processor 1118, a communication interface 1119 including a transceiver 1120, a transmit/receive element 1122, a speaker/microphone 1124, a keypad 1126, a display/touchpad 1128, non-removable memory 1130, removable memory 1132, a power source 1134, a global positioning system (GPS) chipset 1136, and sensors 1138. It will be appreciated that the WTRU 1102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.
The processor 1118 may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) circuit, any other type of integrated circuit (IC), a state machine, and the like. The processor 1118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 1102 to operate in a wireless environment. The processor 1118 may be coupled to the transceiver 1120, which may be coupled to the transmit/receive element 1122. While FIG. 11 depicts the processor 1118 and the transceiver 1120 as separate components, it will be appreciated that the processor 1118 and the transceiver 1120 may be integrated together in an electronic package or chip.
The transmit/receive element 1122 may be configured to transmit signals to, or receive signals from, a base station over the air interface 1116. For example, in one embodiment, the transmit/receive element 1122 may be an antenna configured to transmit and/or receive RF signals. In another embodiment, the transmit/receive element 1122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, as examples. In yet another embodiment, the transmit/receive element 1122 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 1122 may be configured to transmit and/or receive any combination of wireless signals.
In addition, although the transmit/receive element 1122 is depicted in FIG. 11 as a single element, the WTRU 1102 may include any number of transmit/receive elements 1122. More specifically, the WTRU 1102 may employ MIMO technology. Thus, in one embodiment, the WTRU 1102 may include two or more transmit/receive elements 1122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 1116.
The transceiver 1120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 1122 and to demodulate the signals that are received by the transmit/receive element 1122. As noted above, the WTRU 1102 may have multi-mode capabilities. Thus, the transceiver 1120 may include multiple transceivers that enable the WTRU 1102 to communicate via multiple RATs, such as UTRA and IEEE 802.11, as examples.
The processor 1118 of the WTRU 1102 may be coupled to, and may receive user input data from, the speaker/microphone 1124, the keypad 1126, and/or the display/touchpad 1128 (e.g., a liquid crystal display (LCD) display unit or an organic light-emitting diode (OLED) display unit). The processor 1118 may also output user data to the speaker/microphone 1124, the keypad 1126, and/or the display/touchpad 1128. In addition, the processor 1118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 1130 and/or the removable memory 1132. The non-removable memory 1130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 1132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 1118 may access information from, and store data in, memory that is not physically located on the WTRU 1102, such as on a server or a home computer (not shown), as examples.
The processor 1118 may receive power from the power source 1134 and may be configured to distribute and/or control the power to the other components in the WTRU 1102. The power source 1134 may be any suitable device for powering the WTRU 1102. For example, the power source 1134 may include one or more dry cell batteries (e.g., nickel-cadmium (Ni-Cd), nickel-zinc (Ni-Zn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
The processor 1118 may also be coupled to the GPS chipset 1136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 1102. In addition to, or in lieu of, the information from the GPS chipset 1136, the WTRU 1102 may receive location information over the air interface 1116 from a base station and/or determine its location based on the timing of signals received from two or more nearby base stations. It will be appreciated that the WTRU 1102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
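The idea of deriving a position from the timing of signals received from several base stations can be sketched as follows. The station coordinates, the delay values, and the linearized least-squares approach are all invented for illustration; the disclosure does not prescribe any particular positioning algorithm:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def locate_from_toa(stations, delays):
    """Estimate a 2-D position from time-of-arrival delays to base stations
    at known coordinates: convert delays to ranges, linearize the range
    equations by subtracting the first station's equation, and solve the
    resulting linear system in a least-squares sense."""
    p = np.asarray(stations, dtype=float)
    d = C * np.asarray(delays, dtype=float)          # delays -> ranges (m)
    # (x - xi)^2 + (y - yi)^2 = di^2, minus the equation for station 0:
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Synthetic check: delays computed from a known "true" position.
stations = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]
true_pos = np.array([300.0, 400.0])
delays = [np.linalg.norm(true_pos - np.asarray(s)) / C for s in stations]
est = locate_from_toa(stations, delays)
```

Real systems typically measure time *differences* of arrival rather than absolute delays, but the least-squares structure is similar.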
The processor 1118 may further be coupled to other peripherals 1138, which may include one or more software and/or hardware modules that provide additional features, functionality, and/or wired or wireless connectivity. For example, the peripherals 1138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands-free headset, a module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.
FIG. 12 depicts an exemplary network entity 1290 that may be used in embodiments of the present disclosure, for example as an AR content server or as an AR content store. As depicted in FIG. 12, the network entity 1290 includes a communication interface 1292, a processor 1294, and non-transitory data storage 1296, all of which are communicatively linked by a bus, network, or other communication path 1928.
The communication interface 1292 may include one or more wired communication interfaces and/or one or more wireless communication interfaces. With respect to wired communication, the communication interface 1292 may include one or more interfaces such as Ethernet interfaces. With respect to wireless communication, the communication interface 1292 may include components such as one or more antennas, one or more transceivers/chipsets designed and configured for one or more types of wireless (e.g., LTE) communication, and/or any other components deemed suitable by those of skill in the relevant art. Further with respect to wireless communication, the communication interface 1292 may be equipped at a scale and with a configuration appropriate for acting on the network side, as opposed to the client side, of wireless communications (e.g., LTE communications, Wi-Fi communications, and the like). Thus, the communication interface 1292 may include the appropriate equipment and circuitry (perhaps including multiple transceivers) for serving multiple mobile stations, UEs, or other access terminals in a coverage area.
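Since the network entity may serve as an AR content server, a toy request-resolution sketch may help make that role concrete. Everything below, the store layout, the identifier `anchor-42`, the `/content/<id>` path scheme, and the JSON fields, is invented for illustration only; the disclosure does not specify any particular API or data format:

```python
import json

# Hypothetical in-memory AR content store standing in for the non-transitory
# data storage 1296. Identifiers, URLs, and fields are illustrative only.
CONTENT_STORE = {
    "anchor-42": {
        "model_url": "https://example.com/models/chair.glb",
        "position": [1.0, 0.0, -2.5],
    },
}

def handle_content_request(path):
    """Resolve a request path of the form '/content/<id>' against the store,
    returning an (HTTP status code, JSON body) pair."""
    content_id = path.rsplit("/", 1)[-1]
    item = CONTENT_STORE.get(content_id)
    if item is None:
        return 404, json.dumps({"error": "unknown content id"})
    return 200, json.dumps(item)

status, body = handle_content_request("/content/anchor-42")
assert status == 200
```

In a deployment matching the description above, a handler like this would sit behind the communication interface 1292 and run as part of the program instructions 1297.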
The processor 1294 may include one or more processors of any type deemed suitable by those of skill in the relevant art, some examples including a general-purpose microprocessor and a dedicated DSP.
The data storage 1296 may take the form of any non-transitory computer-readable medium or combination of such media, some examples including flash memory, read-only memory (ROM), and random-access memory (RAM), as well as any one or more other types of non-transitory data storage deemed suitable by those of skill in the relevant art. As depicted in FIG. 12, the data storage 1296 contains program instructions 1297 executable by the processor 1294 for carrying out various combinations of the network-entity functions described herein.
In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present teachings.
The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all of the claims. The invention is defined solely by the appended claims, including any amendments made during the pendency of this application and all equivalents of those claims as issued.
Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," "has," "having," "includes," "including," "contains," "containing," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, or contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such a process, method, article, or apparatus. An element preceded by "comprises," "has," "includes," or "contains" does not, without further constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, or contains that element. The terms "a" and "an" are defined as one or more, unless explicitly stated otherwise herein. The terms "substantially," "essentially," "approximately," "about," or any other version thereof are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1%, and in another embodiment within 0.5%. The term "coupled" as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is "configured" in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
It will be appreciated that some embodiments may comprise one or more generic or specialized processors (or "processing devices") such as microprocessors, digital signal processors, customized processors, and field-programmable gate arrays (FPGAs), together with unique stored program instructions (including both software and firmware) that control the one or more processors, in conjunction with certain non-processor circuits, to implement some, most, or all of the functions of the methods and/or apparatus described herein. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application-specific integrated circuits (ASICs), in which each function, or some combinations of certain of the functions, is implemented as custom logic. Of course, a combination of the two approaches could be used.
Accordingly, some embodiments of the present disclosure, or portions thereof, may combine one or more processing devices with one or more software components (e.g., program code, firmware, resident software, micro-code, etc.) stored in a tangible computer-readable storage device, and these combinations form a specifically configured apparatus that performs the functions described herein. These combinations that form specially programmed devices may be generally referred to herein as "modules." The software component portions of a module may be written in any computer language and may be a portion of a monolithic code base, or may be developed in more discrete code portions, as is typical in object-oriented computer languages. In addition, the modules may be distributed across a plurality of computer platforms, servers, terminals, and the like. A given module may even be implemented such that separate processor devices and/or computing hardware platforms perform the described functions.
Moreover, an embodiment can be implemented as a computer-readable storage medium having computer-readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage media include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (read-only memory), a PROM (programmable read-only memory), an EPROM (erasable programmable read-only memory), an EEPROM (electrically erasable programmable read-only memory), and flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein, will be readily capable of generating such software instructions and programs and ICs with minimal experimentation.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the claims are hereby incorporated into the Detailed Description, with each claim standing on its own as separately claimed subject matter.
Claims (16)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411868107.8A CN119806321A (en) | 2017-02-01 | 2018-01-25 | Systems and methods for augmented reality content delivery in a pre-capture environment |
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201762453317P | 2017-02-01 | 2017-02-01 | |
| US62/453,317 | 2017-02-01 | ||
| PCT/US2018/015264 WO2018144315A1 (en) | 2017-02-01 | 2018-01-25 | System and method for augmented reality content delivery in pre-captured environments |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411868107.8A Division CN119806321A (en) | 2017-02-01 | 2018-01-25 | Systems and methods for augmented reality content delivery in a pre-capture environment |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN110249291A true CN110249291A (en) | 2019-09-17 |
Family
ID=61198899
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201880009301.7A Pending CN110249291A (en) | 2017-02-01 | 2018-01-25 | System and method for augmented reality content delivery in a pre-capture environment |
| CN202411868107.8A Pending CN119806321A (en) | 2017-02-01 | 2018-01-25 | Systems and methods for augmented reality content delivery in a pre-capture environment |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411868107.8A Pending CN119806321A (en) | 2017-02-01 | 2018-01-25 | Systems and methods for augmented reality content delivery in a pre-capture environment |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US11024092B2 (en) |
| EP (1) | EP3577631A1 (en) |
| CN (2) | CN110249291A (en) |
| WO (1) | WO2018144315A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2020248777A1 (en) * | 2019-06-10 | 2020-12-17 | Oppo广东移动通信有限公司 | Control method, head-mounted device, and server |
Families Citing this family (35)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10204530B1 (en) | 2014-07-11 | 2019-02-12 | Shape Matrix Geometric Instruments, LLC | Shape-matrix geometric instrument |
| US10430147B2 (en) * | 2017-04-17 | 2019-10-01 | Intel Corporation | Collaborative multi-user virtual reality |
| CN115564900A (en) * | 2018-01-22 | 2023-01-03 | 苹果公司 | Method and apparatus for generating a synthetic reality reconstruction of planar video content |
| US10915781B2 (en) * | 2018-03-01 | 2021-02-09 | Htc Corporation | Scene reconstructing system, scene reconstructing method and non-transitory computer-readable medium |
| US10783230B2 (en) * | 2018-05-09 | 2020-09-22 | Shape Matrix Geometric Instruments, LLC | Methods and apparatus for encoding passwords or other information |
| US10984600B2 (en) | 2018-05-25 | 2021-04-20 | Tiff's Treats Holdings, Inc. | Apparatus, method, and system for presentation of multimedia content including augmented reality content |
| US10818093B2 (en) | 2018-05-25 | 2020-10-27 | Tiff's Treats Holdings, Inc. | Apparatus, method, and system for presentation of multimedia content including augmented reality content |
| US20190385372A1 (en) * | 2018-06-15 | 2019-12-19 | Microsoft Technology Licensing, Llc | Positioning a virtual reality passthrough region at a known distance |
| US10569164B1 (en) * | 2018-09-26 | 2020-02-25 | Valve Corporation | Augmented reality (AR) system for providing AR in video games |
| US11006091B2 (en) * | 2018-11-27 | 2021-05-11 | At&T Intellectual Property I, L.P. | Opportunistic volumetric video editing |
| US10665037B1 (en) | 2018-11-28 | 2020-05-26 | Seek Llc | Systems and methods for generating and intelligently distributing forms of extended reality content |
| US11074697B2 (en) | 2019-04-16 | 2021-07-27 | At&T Intellectual Property I, L.P. | Selecting viewpoints for rendering in volumetric video presentations |
| US10970519B2 (en) | 2019-04-16 | 2021-04-06 | At&T Intellectual Property I, L.P. | Validating objects in volumetric video presentations |
| US11012675B2 (en) | 2019-04-16 | 2021-05-18 | At&T Intellectual Property I, L.P. | Automatic selection of viewpoint characteristics and trajectories in volumetric video presentations |
| US11153492B2 (en) | 2019-04-16 | 2021-10-19 | At&T Intellectual Property I, L.P. | Selecting spectator viewpoints in volumetric video presentations of live events |
| US10460516B1 (en) | 2019-04-26 | 2019-10-29 | Vertebrae Inc. | Three-dimensional model optimization |
| US11546721B2 (en) | 2019-06-18 | 2023-01-03 | The Calany Holding S.À.R.L. | Location-based application activation |
| CN112102497B (en) * | 2019-06-18 | 2024-09-10 | 卡兰控股有限公司 | System and method for attaching applications and interactions to static objects |
| US11341727B2 (en) | 2019-06-18 | 2022-05-24 | The Calany Holding S. À R.L. | Location-based platform for multiple 3D engines for delivering location-based 3D content to a user |
| US11516296B2 (en) * | 2019-06-18 | 2022-11-29 | THE CALANY Holding S.ÀR.L | Location-based application stream activation |
| CN112102498A (en) | 2019-06-18 | 2020-12-18 | 明日基金知识产权控股有限公司 | System and method for virtually attaching applications to dynamic objects and enabling interaction with dynamic objects |
| US11282288B2 (en) | 2019-11-20 | 2022-03-22 | Shape Matrix Geometric Instruments, LLC | Methods and apparatus for encoding data in notched shapes |
| KR102629990B1 (en) * | 2019-12-03 | 2024-01-25 | 엘지전자 주식회사 | Hub and Electronic device including the same |
| US10777017B1 (en) | 2020-01-24 | 2020-09-15 | Vertebrae Inc. | Augmented reality presentation using a uniform resource identifier |
| US11995870B2 (en) * | 2020-01-27 | 2024-05-28 | Citrix Systems, Inc. | Dynamic image compression based on perceived viewing distance |
| US11055049B1 (en) * | 2020-05-18 | 2021-07-06 | Varjo Technologies Oy | Systems and methods for facilitating shared rendering |
| US11250629B2 (en) | 2020-05-22 | 2022-02-15 | Seek Xr, Llc | Systems and methods for optimizing a model file |
| TWI815021B (en) * | 2020-07-06 | 2023-09-11 | 萬達人工智慧科技股份有限公司 | Device and method for depth calculation in augmented reality |
| US12327277B2 (en) | 2021-04-12 | 2025-06-10 | Snap Inc. | Home based augmented reality shopping |
| US11943227B2 (en) | 2021-09-17 | 2024-03-26 | Bank Of America Corporation | Data access control for augmented reality devices |
| US11928783B2 (en) | 2021-12-30 | 2024-03-12 | Snap Inc. | AR position and orientation along a plane |
| US12412205B2 (en) | 2021-12-30 | 2025-09-09 | Snap Inc. | Method, system, and medium for augmented reality product recommendations |
| US11887260B2 (en) * | 2021-12-30 | 2024-01-30 | Snap Inc. | AR position indicator |
| US11954762B2 (en) | 2022-01-19 | 2024-04-09 | Snap Inc. | Object replacement system |
| EP4625941A1 (en) * | 2024-03-26 | 2025-10-01 | InterDigital CE Patent Holdings, SAS | Processing management of real environment data in extended reality applications |
Citations (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6246415B1 (en) * | 1998-04-30 | 2001-06-12 | Silicon Graphics, Inc. | Method and apparatus for culling polygons |
| CN101339667A (en) * | 2008-05-27 | 2009-01-07 | 中国科学院计算技术研究所 | A Visibility Judgment Method for Virtual Dynamic Groups |
| CN102113003A (en) * | 2008-06-03 | 2011-06-29 | 索尼电脑娱乐公司 | Hint-based streaming of auxiliary content assets for an interactive environment |
| WO2012012161A2 (en) * | 2010-06-30 | 2012-01-26 | Barry Lynn Jenkins | System and method of from-region visibility determination and delta-pvs based content streaming using conservative linearized umbral event surfaces |
| US20120092328A1 (en) * | 2010-10-15 | 2012-04-19 | Jason Flaks | Fusing virtual content into real content |
| CN102509343A (en) * | 2011-09-30 | 2012-06-20 | 北京航空航天大学 | Binocular image and object contour-based virtual and actual sheltering treatment method |
| WO2015027105A1 (en) * | 2013-08-21 | 2015-02-26 | Jaunt Inc. | Virtual reality content stitching and awareness |
| EP2863336A2 (en) * | 2013-10-17 | 2015-04-22 | Samsung Electronics Co., Ltd. | System and method for reconstructing 3d model |
| WO2015090420A1 (en) * | 2013-12-19 | 2015-06-25 | Metaio Gmbh | Slam on a mobile device |
| WO2015100490A1 (en) * | 2014-01-06 | 2015-07-09 | Sensio Technologies Inc. | Reconfiguration of stereoscopic content and distribution for stereoscopic content in a configuration suited for a remote viewing environment |
| US20160104452A1 (en) * | 2013-05-24 | 2016-04-14 | Awe Company Limited | Systems and methods for a shared mixed reality experience |
| US20160210787A1 (en) * | 2015-01-21 | 2016-07-21 | National Tsing Hua University | Method for Optimizing Occlusion in Augmented Reality Based On Depth Camera |
| CN105814626A (en) * | 2013-09-30 | 2016-07-27 | Pcms控股公司 | Method, apparatus, system, device and computer program product for providing augmented reality display and/or user interface |
| CN105869215A (en) * | 2016-03-28 | 2016-08-17 | 上海米影信息科技有限公司 | Virtual reality imaging system |
| CN106155311A (en) * | 2016-06-28 | 2016-11-23 | 努比亚技术有限公司 | AR helmet, AR interactive system and the exchange method of AR scene |
| US20160360104A1 (en) * | 2015-06-02 | 2016-12-08 | Qualcomm Incorporated | Systems and methods for producing a combined view from fisheye cameras |
Family Cites Families (21)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6525726B1 (en) * | 1999-11-02 | 2003-02-25 | Intel Corporation | Method and apparatus for adaptive hierarchical visibility in a tiled three-dimensional graphics architecture |
| US20020080143A1 (en) | 2000-11-08 | 2002-06-27 | Morgan David L. | Rendering non-interactive three-dimensional content |
| US8042094B2 (en) | 2004-07-08 | 2011-10-18 | Ellis Amalgamated LLC | Architecture for rendering graphics on output devices |
| US20080225048A1 (en) * | 2007-03-15 | 2008-09-18 | Microsoft Corporation | Culling occlusions when rendering graphics on computers |
| KR101545008B1 (en) * | 2007-06-26 | 2015-08-18 | 코닌클리케 필립스 엔.브이. | Method and system for encoding a 3d video signal, enclosed 3d video signal, method and system for decoder for a 3d video signal |
| US20120075433A1 (en) | 2010-09-07 | 2012-03-29 | Qualcomm Incorporated | Efficient information presentation for augmented reality |
| HUE047021T2 (en) | 2010-09-20 | 2020-04-28 | Qualcomm Inc | An adaptable framework for cloud assisted augmented reality |
| US8908911B2 (en) | 2011-03-04 | 2014-12-09 | Qualcomm Incorporated | Redundant detection filtering |
| US8964025B2 (en) * | 2011-04-12 | 2015-02-24 | International Business Machines Corporation | Visual obstruction removal with image capture |
| KR20130000160A (en) | 2011-06-22 | 2013-01-02 | 광주과학기술원 | User adaptive augmented reality mobile device and server and method thereof |
| US9070216B2 (en) * | 2011-12-14 | 2015-06-30 | The Board Of Trustees Of The University Of Illinois | Four-dimensional augmented reality models for interactive visualization and automated construction progress monitoring |
| US9378591B2 (en) * | 2012-07-27 | 2016-06-28 | Nokia Technologies Oy | Method and apparatus for detecting occlusion in an augmented reality display |
| US20160307374A1 (en) * | 2013-12-19 | 2016-10-20 | Metaio Gmbh | Method and system for providing information associated with a view of a real environment superimposed with a virtual object |
| US20150262412A1 (en) * | 2014-03-17 | 2015-09-17 | Qualcomm Incorporated | Augmented reality lighting with dynamic geometry |
| US9977495B2 (en) * | 2014-09-19 | 2018-05-22 | Utherverse Digital Inc. | Immersive displays |
| CN104539925B (en) | 2014-12-15 | 2016-10-05 | 北京邮电大学 | The method and system of three-dimensional scenic augmented reality based on depth information |
| US9369689B1 (en) * | 2015-02-24 | 2016-06-14 | HypeVR | Lidar stereo fusion live action 3D model video reconstruction for six degrees of freedom 360° volumetric virtual reality video |
| US10491711B2 (en) | 2015-09-10 | 2019-11-26 | EEVO, Inc. | Adaptive streaming of virtual reality data |
| CN105931288A (en) | 2016-04-12 | 2016-09-07 | 广州凡拓数字创意科技股份有限公司 | Construction method and system of digital exhibition hall |
| US20180053352A1 (en) * | 2016-08-22 | 2018-02-22 | Daqri, Llc | Occluding augmented reality content or thermal imagery for simultaneous display |
| WO2018068236A1 (en) | 2016-10-10 | 2018-04-19 | 华为技术有限公司 | Video stream transmission method, related device and system |
2018
- 2018-01-25 CN CN201880009301.7A patent/CN110249291A/en active Pending
- 2018-01-25 US US16/480,277 patent/US11024092B2/en active Active
- 2018-01-25 WO PCT/US2018/015264 patent/WO2018144315A1/en not_active Ceased
- 2018-01-25 EP EP18705041.4A patent/EP3577631A1/en active Pending
- 2018-01-25 CN CN202411868107.8A patent/CN119806321A/en active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| CN119806321A (en) | 2025-04-11 |
| US11024092B2 (en) | 2021-06-01 |
| EP3577631A1 (en) | 2019-12-11 |
| US20190371073A1 (en) | 2019-12-05 |
| WO2018144315A1 (en) | 2018-08-09 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11024092B2 (en) | System and method for augmented reality content delivery in pre-captured environments | |
| US11711504B2 (en) | Enabling motion parallax with multilayer 360-degree video | |
| CN112927362B (en) | Map reconstruction method and device, computer readable medium and electronic device | |
| CN110869980B (en) | Distributing and rendering content as a spherical video and 3D portfolio | |
| US9684953B2 (en) | Method and system for image processing in video conferencing | |
| CN104504671B (en) | Method for generating virtual-real fusion image for stereo display | |
| EP4038478A1 (en) | Systems and methods for video communication using a virtual camera | |
| KR101609486B1 (en) | Using motion parallax to create 3d perception from 2d images | |
| EP3942796A1 (en) | Method and system for rendering a 3d image using depth information | |
| WO2017189490A1 (en) | Live action volumetric video compression / decompression and playback | |
| KR102455468B1 (en) | Method and apparatus for reconstructing three dimensional model of object | |
| You et al. | Internet of Things (IoT) for seamless virtual reality space: Challenges and perspectives | |
| US9380263B2 (en) | Systems and methods for real-time view-synthesis in a multi-camera setup | |
| US11758101B2 (en) | Restoration of the FOV of images for stereoscopic rendering | |
| CN106101575A (en) | A method, device and mobile terminal for generating augmented reality photos | |
| CN113253845A (en) | View display method, device, medium and electronic equipment based on eye tracking | |
| KR20120129313A (en) | System and method for transmitting three-dimensional image information using difference information | |
| WO2016184285A1 (en) | Article image processing method, apparatus and system | |
| WO2018175217A1 (en) | System and method for relighting of real-time 3d captured content | |
| US20240015264A1 (en) | System for broadcasting volumetric videoconferences in 3d animated virtual environment with audio information, and procedure for operating said device | |
| CN106875478A (en) | Experience the AR devices of mobile phone 3D effect | |
| US20150085086A1 (en) | Method and a device for creating images | |
| KR20110060180A (en) | Method and apparatus for generating 3D model by selecting object of interest | |
| US20240185511A1 (en) | Information processing apparatus and information processing method | |
| Tran et al. | A personalised stereoscopic 3D gallery with virtual reality technology on smartphone |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| TA01 | Transfer of patent application right |
Effective date of registration: 20230505 Address after: Delaware Applicant after: Interactive Digital VC Holdings Address before: Wilmington, Delaware, USA Applicant before: PCMS HOLDINGS, Inc. |
|