
WO2018175217A1 - System and method for relighting real-time 3D captured content - Google Patents

System and method for relighting real-time 3D captured content

Info

Publication number
WO2018175217A1
WO2018175217A1 (PCT/US2018/022779)
Authority
WO
WIPO (PCT)
Prior art keywords
geometry
real
btfs
time
captured
Prior art date
Application number
PCT/US2018/022779
Other languages
English (en)
Inventor
Tatu V. J. HARVIAINEN
Original Assignee
Pcms Holdings, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pcms Holdings, Inc. filed Critical Pcms Holdings, Inc.
Publication of WO2018175217A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • VR and AR content may be obtained by a VR or AR device, such as a head-mounted display.
  • Modern VR and AR devices are contextually aware of their surroundings. Advanced camera and sensor arrays combined with intelligent data processing enable the reconstruction of a digital 3D model of a real-world viewing environment. Indeed, this is a prerequisite for accurately and precisely tracking the position and orientation of the device within the real-world environment. Many modern devices create the digital 3D reconstruction of the present viewing environment, and various analyses are performed on this data to facilitate enhanced functionalities.
  • When 3D content is rendered using an AR device, that content is rendered within the real-world environment (e.g., an AR environment).
  • When 3D content is rendered using a VR device, that content is rendered within a present VR experience (e.g., a VR environment).
  • the present VR environment may be a virtual telepresence meeting room, a VR game, a live stream of the current real-world environment or even of a distant real-world environment, etc.
  • Visual inharmoniousness can be a result of inserting 3D cartoon content into a photo-realistic setting, black and white content into a colorful setting, contemporary content into a historic setting, and inferior parameters being applied during rendering.
  • a virtual 3D model of a real-world object is captured and a rendering of this object is displayed on a client device in real-time.
  • Relighting the object includes rendering the object under modified lighting conditions.
  • One embodiment takes the form of a process that includes sending respective bidirectional texture functions (BTFs) associated with a plurality of surface regions of an object to a client, and, iteratively: capturing a plurality of images of the object; generating a three-dimensional (3D) geometry of the object based on the captured plurality of images; determining information regarding one or more of the BTFs associated with the plurality of surface regions of the object; and sending the generated 3D geometry and the determined information regarding the one or more BTFs to the client to enable the client to render a representation of the 3D geometry using the BTFs associated with the plurality of surface regions, the determined information regarding the BTFs, and information about the local lighting of the environment in which the object is to be displayed.
  • BTFs bidirectional texture functions
  • the object is a person.
  • the sending respective BTFs associated with a plurality of surface regions of the object to the client includes sending at least one of spatially varying bidirectional reflectance distribution functions (SVBRDFs) and spatially varying bidirectional surface-scattering reflectance distribution functions (SVBSSRDFs).
  • SVBRDFs spatially varying bidirectional reflectance distribution functions
  • SVBSSRDFs spatially varying bidirectional surface-scattering reflectance distribution functions
  • the sending respective BTFs associated with the plurality of surface regions of the object to the client includes sending one or more of separated diffuse color, specular, reflectance, and normal maps.
  • the generating the 3D geometry of the object based on the captured plurality of images includes obtaining point-cloud data from a set of red, green, blue and depth (RGB-D) sensors; identifying surfaces depicted within the point-cloud data; and selecting a portion of the identified surfaces to define the generated 3D geometry.
  • RGB-D red, green, blue and depth
  • the information regarding the one or more BTFs is a mapping of BTF to texture coordinates.
  • the determining information regarding the one or more BTFs can include determining per vertex texture coordinates for the 3D geometry of the object at least in part by iteratively (i) comparing current RGB-D data of a content capture environment with an augmented image comprising a relit rendered 3D geometry and (ii) adjusting texture coordinates based on the comparison.
  • At least the generating 3D geometry of the object based on the captured plurality of images and the determining information regarding the one or more BTFs are carried out in realtime.
  • respective BTFs associated with the plurality of surface regions of the object are generated at least in part by: performing an initialization sub process to: capture the respective BTFs for each surface region of the object, and capture an initial three dimensional (3D) geometry of the object; and responsive to a request, performing the (i) sending the BTFs to the client and (ii) generating a lighting model of an environment; and performing a run-time sub process to: capture a subsequent 3D geometry of the object; and determine per vertex texture coordinates associated with each captured 3D geometry of the object at least in part by (i) comparing current RGB-D data of the environment with a collocated image augmented by a rendered 3D geometry of the object and (ii) adjusting the texture coordinates based on the comparison, wherein the rendered 3D geometry is rendered using the captured BTFs and the generated lighting model of the environment.
  • Another embodiment is directed to a method including determining a present lighting model of a render environment; entering a real-time three-dimensional (3D) content session; receiving bidirectional texture functions (BTFs) associated with an object; and, during a run-time sub process: receiving a real-time captured 3D geometry of the object; receiving texture coordinates associated with the received 3D geometry; and rendering the real-time captured 3D geometry of the object using the determined present lighting model, the received BTFs, and the received texture coordinates.
  • BTFs bidirectional texture functions
  • the entering the real-time 3D content session is carried out by a real-time 3D rendering client.
  • the render environment is one or more of a physical real-world environment and a virtual digital environment.
  • requesting to enter the real-time 3D content session includes requesting the 3D content session from real-time 3D content capture server.
  • the method also includes mapping the BTFs to the real-time captured 3D geometry.
  • Another embodiment is directed to a system including a processor, an RGB-D sensor array, a communication interface, and data storage containing instructions executable by the processor for: receiving bidirectional texture functions (BTFs) associated with an object from an initialization sub process; receiving a request for a real-time 3D content capture session via the communication interface; capturing the object using the RGB-D sensor array and generating 3D geometry of the object using the processor; determining an alignment between the generated 3D geometry and the BTFs; and transmitting the determined alignment, the generated 3D geometry, and the BTFs to a rendering client to enable lighting correction of the captured real-time 3D object.
  • BTFs bidirectional texture functions
  • Some embodiments herein are directed to performing an initial analysis step before starting a live-streaming and viewing session.
  • Material-appearance descriptors such as a bidirectional texture function (BTF) and an initial 3D geometry of selected objects can be captured and stored during the initial analysis.
  • the stored object data can be sent to a client at the start of a media viewing session, for example.
  • BTF bidirectional texture function
  • a rendering client begins a media viewing session and executes a dynamic relighting of the objects in view of local lighting conditions using the previously-captured BTF or other descriptors.
  • a real-time 3D geometry of the object is captured and compared against the initial 3D geometry of the object.
  • the rendering client receives material appearance information (e.g., BTF) separately from a real-time 3D geometry of the object and relights the object in view of client-environment lighting conditions.
  • the relighted content can then be rendered and displayed by the rendering client.
  • FIG. 1 depicts a method for relighting real-time 3D captured content executed by a real-time 3D content capture server, in accordance with at least one embodiment.
  • FIG. 2 depicts a method for relighting real-time 3D captured content executed by a real-time 3D content rendering client, in accordance with at least one embodiment.
  • FIG. 3 depicts various elements of an example 3D capture system initialization sub process, in accordance with at least one embodiment.
  • FIG. 4 depicts a user holding a light probe within a 3D capture environment for real-time 3D capture setup, in accordance with at least one embodiment.
  • FIGs. 5A-5J depict a visual-sequence-representation of elements of a run-time process executed by a real-time 3D content capture server, in accordance with at least one embodiment.
  • FIG. 6 depicts an exemplary wireless transmit/receive unit (WTRU) that may be employed as a real-time 3D content capture server in some embodiments and as a real-time 3D content rendering client in other embodiments.
  • WTRU wireless transmit/receive unit
  • FIG. 7 depicts an exemplary network entity that may be employed as a real-time 3D content capture server in some embodiments and as a real-time 3D content rendering client in other embodiments.
  • the system and process disclosed herein includes a relighting of real-time 3D captured content.
  • Real-time 3D captured content, such as a full 3D appearance of a person captured with RGB-D sensors, is relighted so that the lighting used for rendering the real-time 3D captured content matches the lighting of the physical or virtual environment into which the content is augmented or rendered.
  • the lighting that is to be matched is the lighting of a real-world environment of a viewing-client.
  • the lighting that is to be matched is the lighting of a digital environment of a viewing-client, such as the lighting of a presently displayed VR scene.
  • BTFs bidirectional texture functions
  • BRDF bidirectional reflectance distribution function
  • Use cases include 3D telepresence, digital teleportation of virtual 3D content, first-person replay of previously recorded sessions, second-person replay of previously recorded sessions, third-person replay of previously recorded sessions, large-scale environment modulation (e.g., an insertion of lifelike mountain ranges in the distance), and the like.
  • the space in which the captured digital representation of the person is augmented and the space from which the digital representation is captured can have drastically different lighting conditions. Without a process that includes relighting, augmented elements will look out of place. The illusion of the virtual elements being physically present in the real-world viewing environment will break down.
  • the previously recorded session may have been captured indoors under fluorescent lighting and a user may be viewing a recorded session outdoors under sunlight. Corrections to the optical qualities of the previously recorded session can include relighting the content to better suit the present environment.
  • the initialization sub process provides additional information aside from merely capturing the full 3D geometry and RGB textures as seen by the RGB camera.
  • the initialization sub process captures the material properties that describe the appearance of a material (i) under any lighting condition and (ii) inspected from any viewing direction.
  • the material properties may be embodied as a BTF.
  • the material properties may be embodied as a spatially varying bidirectional reflectance distribution function (SVBRDF).
  • SVBRDF spatially varying bidirectional reflectance distribution function
  • SVBSSRDFs spatially varying bidirectional surface-scattering reflectance distribution functions
  • this disclosure describes a multifaceted system and process including the above-described initialization sub process, a real-time data capture and data migration sub process, and a rendering and display sub process.
  • the described system and process includes objects digitally captured in a first location under certain lighting conditions, being rendered at a second location in accordance with present lighting conditions at the second location.
  • One or more embodiments include a process that occurs in realtime after the initialization sub process.
  • the process has three separate steps: (i) material properties and initial geometry capture which is carried out as an individual preprocessing step (such as the initialization sub process), (ii) lighting model capture done at the beginning of a real-time capture session, and (iii) run-time processing of captured real-time 3D data, wherein object geometry is captured and transmitted to the rendering clients.
  • BTF materials are remapped to the real-time captured 3D geometry during the run-time processing step.
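  • As a non-normative illustration of this three-step flow, the following Python-style sketch outlines the run-time loop on the capture-server side; the callables reconstruct_geometry and map_btfs_to_geometry and the client/sensor interfaces are hypothetical placeholders rather than anything prescribed by this disclosure.

```python
# Illustrative sketch only; every interface here is a hypothetical placeholder.
def run_capture_session(client, sensors, btfs_by_region,
                        reconstruct_geometry, map_btfs_to_geometry,
                        stop_requested):
    # Output of step (i): BTFs captured in the initialization sub process are
    # sent once, since the material descriptions do not change during a session.
    client.send_btfs(btfs_by_region)

    # Step (iii): run-time processing of captured real-time 3D data.
    while not stop_requested():
        frames = [s.capture_frame() for s in sensors]        # RGB-D frames
        geometry = reconstruct_geometry(frames)               # per time step
        uv_info = map_btfs_to_geometry(geometry, btfs_by_region, frames)
        client.send_frame(geometry, uv_info)                  # stream to clients
```
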
  • At least one embodiment includes a system.
  • a viewing client or a rendering client executes elements of the process disclosed herein.
  • a content capture server executes elements of the process disclosed herein.
  • Other embodiments of the disclosed systems and methods operate in a client-server fashion. Real-time 3D capture of content can be performed by a content capture server, to which rendering clients and viewing clients connect to receive and display captured content.
  • information regarding present lighting conditions in a viewing client's environment can be sent to the content capture server.
  • the content capture server uses the BTFs and the received present lighting conditions to appropriately render the content (for example, to render the content to appear naturally inserted) and then streams the rendered content in realtime to the viewing client.
  • the content capture server sends BTFs to the rendering client and then streams the captured 3D geometry in real-time to the rendering client.
  • the BTFs and streamed 3D geometry are used for dynamic relighting of the captured objects at the rendering client side.
  • the process disclosed herein may be described as an initialization sub process, a process executed by a content capturing server, and/or a process executed by a rendering client.
  • the following paragraphs relate to such a description and are provided as further examples of the system and process disclosed herein.
  • the initialization sub process includes BTF and initial geometry capture.
  • the sub process identifies materials featured in the object to be 3D captured during run-time in the form of BTFs.
  • the sub process also captures a 3D geometry of the object that will be used for finding correct texture coordinates for the real-time captured geometry during run-time.
  • BTFs and the initial captured geometry are sent to the content capturing server and stored for future use.
  • the process executed by the 3D capturing server includes capturing and reconstructing lighting in a present environment (such as after the initialization sub process and before the start of a 3D capture session).
  • the content capturing server after reconstruction, sends BTFs to a rendering client that has requested a session.
  • the process can include capturing and reconstructing 3D geometry of the selected object.
  • the process can include solving per vertex texture coordinates for the captured object. Solving of texture coordinates can be based on comparing rendering of BTFs using prevailing environment lighting and image data captured by sensors used for realtime 3D capture.
  • the capturing server reconstructs the 3D geometry by aligning and/or assigning applicable BTFs with the captured 3D content based on real-time data and the information obtained in the initialization sub process. Furthermore, the process can include transmitting the captured 3D geometry with the solved texture coordinates to the rendering client.
  • the process executed by the rendering client can include capturing a real-world environment lighting model. In VR embodiments, this is replaced by capturing a digital environment (e.g., VR scene) lighting model.
  • the rendering client receives BTFs from the content capturing server.
  • the process includes receiving captured 3D geometry with texture coordinates from the content capturing server.
  • the process includes rendering the captured 3D content for the viewer.
  • the content can be rendered at a viewpoint matching a viewpoint of the viewer and the content can be rendered using a virtual lighting model based on the captured environment lighting model. This process for relighting results in the rendered virtual elements having correct viewpoint dependent lighting effects.
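  • For illustration only, the sketch below relights a single surface sample from separated diffuse, specular, and normal maps (one of the material representations mentioned above) under a list of directional lights and a view direction; it is a simple Lambert plus Blinn-Phong stand-in for a full BTF evaluation, not the claimed rendering method, and the shininess value is an arbitrary assumption.

```python
import numpy as np

def relight_sample(diffuse, specular, normal, lights, view_dir, shininess=32.0):
    """Relight one surface sample from separated diffuse/specular/normal maps.

    `lights` is a list of (direction, rgb_intensity) pairs taken from the
    render environment's lighting model; a crude stand-in for a BTF lookup.
    """
    n = normal / np.linalg.norm(normal)
    v = view_dir / np.linalg.norm(view_dir)
    color = np.zeros(3)
    for light_dir, intensity in lights:
        l = light_dir / np.linalg.norm(light_dir)
        n_dot_l = max(float(np.dot(n, l)), 0.0)
        h = l + v
        h = h / (np.linalg.norm(h) + 1e-8)                 # half vector
        spec = max(float(np.dot(n, h)), 0.0) ** shininess
        color += np.asarray(intensity) * (np.asarray(diffuse) * n_dot_l
                                          + np.asarray(specular) * spec)
    return np.clip(color, 0.0, 1.0)
```
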
  • One embodiment takes the form of a process in which, before real-time capture of an object's geometry, material properties and an initial geometry of the object are captured in a format that includes information used in a dynamic relighting step during run-time.
  • the geometry of the object is captured and a mapping between material properties captured in a preprocessing stage and real-time captured geometry is resolved.
  • Preprocessing enables a capturing server to transmit universal material appearance data separately from the geometry that changes during each time step of the capture session.
  • FIG. 1 depicts a flow diagram of a method 100 for relighting real-time 3D captured content executed by a real-time 3D content capture server, in accordance with at least one embodiment.
  • a basic architecture of the system is arranged as a client-server model.
  • a real-time 3D capture server creates a new real-time 3D capture session to which rendering clients can connect.
  • block 102 provides for capturing per object BTFs.
  • BTFs for a plurality of surface regions and an initial geometry for each object to be captured in real-time can be acquired.
  • Initial UV coordinates can be selected at this point or at a later point.
  • a new real-time 3D capture session is initialized.
  • a prerequisite step of the process is to acquire a model of the current lighting conditions in the physical environment (i.e., the environment wherein the real-time 3D capturing is to take place).
  • the process provides for defining which objects are to be 3D captured. For example, a selection of objects to be captured can be made automatically via system settings, user preferences, and the like or is made manually by a user via a user interface.
  • the environment lighting model is captured and can be stored at block 110 Environment lighting model.
  • Block 112 provides that clients (e.g., rendering devices) requesting to join a real-time session are connected to the content capture server.
  • Block 114 provides that appropriate BTFs are identified and sent to each client requesting to join. Appropriate BTFs are the BTFs associated with the real-time content requested by the client. For example, BTFs and object geometry with texture coordinates (UV coordinates) captured and stored at block 116 can be provided to the rendering clients at block 114.
  • block 118 provides that the content capture server receives data from an RGB-D sensor (or multiple sensors) and performs a 3D reconstruction of the captured object(s) using the sensor data.
  • the content capture server continuously receives data from sensors 120. Since a vertex order and count of the 3D reconstruction of the captured object can vary for each time step (e.g., from RGB-D sensor frame to frame), a further step can include mapping BTFs with the vertices of the 3D reconstruction of the object. As shown in block 122, mapping BTFs can be performed at least in part by iteratively adjusting UV coordinates and comparing images, and is discussed in greater detail below.
  • Block 124 provides that the 3D reconstructed geometry and selected UV coordinates are sent to the rendering client.
  • Decision block 130 provides for determining whether a user requests to terminate a session. If no, the process returns to block 112 wherein the client connects or reconnects to the server to enable rendering. If the user does request to terminate a session at decision block 130, the real-time 3D capture session is ended at block 140.
  • BTF capture is considered a universal term describing various approaches that capture view dependent characteristics of a material in addition to a diffuse texture of the material. Materials can include various fabrics, human constituents such as hair and skin, metals, and many other material types as well.
  • the disclosed systems and methods employ the captured BTF data for virtual relighting. By using BTF data, the methods and systems in embodiments herein can accurately reproduce an appearance of original material from any view angle and under any lighting.
  • Various methods for capturing, storing and rendering BTFs are known by those with skill in the art.
  • certain typical BTF capturing methods employ lighting and camera domes to capture image data from a variety of viewpoints under a variety of lighting conditions.
  • spatially varying bidirectional reflectance distribution functions (SVBRDFs) may be generated from a handful of typical smartphone camera images.
  • an SVBRDF is one type of BTF.
  • diffuse and specular maps can be captured with a relatively simple setup.
  • techniques are known for generating diffuse and specular maps of human faces.
  • diffuse and specular maps do not have the absolute appearance reproduction power of a full BTF.
  • systems and methods disclosed herein are not dependent on using a particular method of BTF capture.
  • Different formats may be used to represent a captured material appearance (e.g., using one particular BTF format as opposed to another). Any of the material appearance capture solutions described above can be used.
  • appearances of the various object materials can be captured and represented in a manner which enables relighting, and in turn, a correct reproduction of viewpoint-dependent visual characteristics from any viewing direction.
  • Capturing the spatially-varying appearances of the various object materials may be performed as a pre-processing step such as an initialization sub process.
  • An initialization sub process may employ systems and hardware that are not used in the rest of the process disclosed herein. For example, a full-room camera array with digitally controlled lighting fixtures may be used for BTF capture, and a single RGB-D camera may be used for real-time 3D content capture.
  • the resulting material appearance properties (e.g., BTFs) may be stored for later use during run-time processing.
  • Block 108 of FIG. 1 provides for environment lighting model capture and storing the environment lighting model in block 110.
  • Any available method that produces a description of the directions and intensities of the primary light sources in an environment can be used, both for (i) the 3D content capture location, typically a real-world environment, and (ii) the render location, typically a real-world environment for AR embodiments and a digital VR scene for VR embodiments.
  • One step of an exemplary process disclosed herein includes a run-time process for capturing 3D content.
  • block 104, initialize new real-time 3D capture session, can include initializing a run-time process.
  • the content capture server continuously captures (e.g., measures and/or receives) data from an RGB-D sensor or from multiple sensors as shown in block 120.
  • the means by which the RGB-D data is captured need not be limited to the cases discussed herein. Any system capable of reconstructing a 3D geometry of the object may be utilized by the content capture server. For example, reconstructed 3D geometry may be segmented.
  • Block 106, which provides that the objects to be 3D captured be defined, in one embodiment can include identifying and selecting objects to be captured in the current session using the reconstructed 3D geometry.
  • the content capture server can define texture coordinates (UV coordinates) that help with accurately mapping BTFs to the object.
  • BTFs have been captured at the pre-processing step / initialization sub process such as block 116.
  • a mapping of the BTFs onto various material surface regions of the selected segmented objects can be explicitly generated for substantially each individual time step of the capture as described in block 122.
  • Texture coordinates can be determined by tracking the pose (relative position and orientation) of the captured object. Further explaining block 122, tracking can be performed by iteratively comparing the initial object geometry, captured during the initialization sub process along with the BTFs, with a real-time geometry reconstructed using current sensor data received from block 120.
  • An initial pose of the object can also be produced by any other tracking method, such as a skeleton tracking of human body (e.g., provided by the KinectTM sensor or similar).
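  • One conventional way to obtain such an initial alignment, assuming corresponding 3D points are already available (e.g., from skeleton joints or matched features), is a least-squares rigid fit; the sketch below shows the standard Kabsch solution and is illustrative rather than prescribed by this disclosure.

```python
import numpy as np

def best_fit_rigid_transform(src, dst):
    """Least-squares rotation and translation mapping src points onto dst.

    src and dst are N x 3 arrays of corresponding points; real systems would
    typically wrap this inside an ICP-style iteration over the point clouds.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    trans = dst.mean(axis=0) - rot @ src.mean(axis=0)
    return rot, trans                  # per point: dst_i ~ rot @ src_i + trans
```
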
  • the method can include tuning the UVs to improve reproduction of the material appearances at the rendering step.
  • UVs are updated with an iterative approach, wherein the real-time captured 3D geometry is rendered with the BTFs using the UVs created in the initial or previous step and relighted using the lighting model captured at the 3D content capture environment (such as, for example, the physical environment where the 3D content capture takes place rather than the rendering environment).
  • the resulting augmented image (e.g., the relighted virtual content and any supplemental elements) is compared with the original image captured by the RGB-D sensors.
  • a comparison is performed to quantify an amount of visual deviation between the original and augmented images.
  • the comparison uses edge detectors such as a Sobel filter to quantify difference and/or to identify regions for further analysis. Iteratively, minor adjustments to the UVs may be made. Adjustments are selected to reduce (in ideal embodiments, minimize) the quantified visual deviation between the augmented scene and the image captured by the RGB-D sensor. Smaller deviations indicate that the final UVs assigned to the geometry are more accurate.
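  • A minimal sketch of such an edge-weighted deviation measure is shown below; it uses a Sobel filter to emphasize differences near edges and is only one possible choice of metric, not the metric mandated by this disclosure.

```python
import numpy as np
from scipy import ndimage

def edge_weighted_deviation(rendered_rgb, captured_rgb):
    """Quantify visual deviation between an augmented (relit, rendered) image
    and the raw RGB image from the RGB-D sensor, weighting pixels near edges."""
    gray = captured_rgb.astype(float).mean(axis=2)
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    edge_weight = np.hypot(gx, gy)
    edge_weight /= edge_weight.max() + 1e-8

    per_pixel_diff = np.abs(rendered_rgb.astype(float)
                            - captured_rgb.astype(float)).sum(axis=2)
    return float((edge_weight * per_pixel_diff).mean())
```
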
  • block 124 provides for sending the 3D reconstructed geometry with the final resulting UVs to a rendering client.
  • UV Mapping
  • UV mapping is a 3D modeling process for projecting a 2D image onto a 3D model's surface for texture mapping.
  • a UV mapping process projects a texture map onto a 3D object.
  • the letters "U" and "V" denote the axes of a 2D texture because "X", "Y" and "Z" are already used to denote the axes of the 3D object in model space.
  • UV texturing permits polygons that make up a 3D object to be painted with color (and other surface attributes) from an ordinary image. The image is called a UV texture map.
  • the UV mapping process involves assigning pixels in the image to surface mappings on the polygon, usually done by "programmatically" copying a triangular piece of the image map and pasting it onto a triangle on the object.
  • UV is an alternative to projection mapping in that UV mapping only maps into a texture space rather than into the geometric space of the object.
  • a rendering computation uses the UV texture coordinates to determine how to paint the three-dimensional surface.
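  • As a concrete illustration of that computation, the sketch below performs a bilinear sample of a 2D texture map at a given UV coordinate; GPUs do this in dedicated texture units, so the plain Python here is purely explanatory.

```python
import numpy as np

def sample_texture(texture, u, v):
    """Bilinearly sample an H x W x C texture at UV coordinates in [0, 1]."""
    h, w = texture.shape[:2]
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bottom = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bottom
```
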
  • a flow diagram 200 illustrates a method for relighting real-time 3D captured content executed by a real-time 3D content rendering client, in accordance with at least one embodiment.
  • lighting conditions, material properties and a 3D geometry of captured content are separated so that the rendering client dynamically relights the captured content.
  • block 210 provides that a rendering client starts a new viewing session.
  • the rendering client captures a lighting model of the render environment as shown in block 212.
  • the render environment may be a real-world location or a digital VR scene.
  • the environment lighting model can be stored at block 214, and the rendering client connects to the content capture server and requests a connection to an existing capture session, as shown in block 216.
  • the rendering client receives BTFs associated with the captured object(s) as shown in block 218.
  • the BTFs 220 are used during rendering.
  • the rendering client When the rendering client receives a new temporal instance of the real-time captured object (e.g., a frame of the captured 3D geometry), it performs a rendering step.
  • the pose (i.e., the position and orientation of the device in relation to the real-world environment) is tracked; in some embodiments the tracking is performed by an HMD worn by the viewer.
  • External, outside-looking-in tracking systems may be employed as well.
  • any sufficiently accurate tracking method can be used, including but not limited to optical, inertial measuring, magnetic, and sonic tracking solutions.
  • the rendering client begins tracking its own position and orientation in order to align the captured content.
  • when the client receives a new frame of the captured geometry, as shown in block 224, the client renders the geometry as seen from the viewpoint defined by the latest pose given by the tracking solution.
  • Received geometry is also rendered per the BTFs 220 and environment lighting model 214 as shown in block 226.
  • block 226 includes the rendering client relighting the content received from the real-time capturing server in accordance with the lighting model that is present in the physical environment of the rendering client.
  • block 228 provides for outputting the rendered image to a display.
  • the rendering client receives a 3D geometry with UVs for each captured object that is to be relit from the content capture server.
  • the received 3D geometry, tracked pose, BTFs, and the environment lighting model of the render environment are used by the rendering client to render the realtime 3D content from the point of view of the viewer with desired lighting effects.
  • Decision block 230 provides for determining whether a user requests to terminate a session or whether a server has terminated a session. If yes, the viewing session ends at block 232; if not, the session continues at block 224 with the rendering client receiving a 3D geometry with UVs from the real-time capturing server.
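  • The following sketch summarizes this client-side loop; the server, tracker, renderer, and display objects are hypothetical placeholders standing in for whatever components a particular rendering client uses.

```python
# Illustrative sketch of the viewing-session loop of FIG. 2 (hypothetical interfaces).
def run_viewing_session(server, tracker, lighting_model, renderer, display,
                        session_active):
    btfs = server.receive_btfs()                       # block 218
    while session_active():
        geometry, uvs = server.receive_frame()         # block 224
        pose = tracker.latest_pose()                   # viewpoint of the viewer
        image = renderer.render(geometry, uvs, btfs,
                                lighting_model, pose)  # relighting, block 226
        display.show(image)                            # block 228
```
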
  • FIG. 3 depicts various elements of an example 3D capture system initialization sub process 300 in accordance with at least one embodiment.
  • the example depicted in FIG. 3 is a use case wherein a user captures BTFs and geometry of his/her face.
  • the user has his/her face first captured by a dedicated BTF capture setup called a lightstage capture setup 310.
  • the same lightstage capture setup also creates a 3D model of the user for reference.
  • the lightstage capture setup 310 captures image and depth data of the user from a plurality of viewpoints and generates raw data.
  • the raw data is processed 320 to create a virtual 3D reconstruction of the user's head 330.
  • the raw data is used to generate material appearance data for relighting 340.
  • the materials can be characterized using diffuse color, reflectance and normal maps (e.g., BTFs).
  • the capture session depicted in FIG. 3 can be carried out at a separate location from where the actual real-time capture is to take place.
  • the real-time capture session is done at a location where there are several calibrated RGB-D sensors capable of capturing the user from multiple directions.
  • block 104 provides for a user to start the real-time capture system by requesting a real-time capture server to initiate a new session.
  • the user may indicate that the real-time 3D content capture server should capture his/her face as part of block 106 in which objects to be 3D captured are defined or selected.
  • the rendering device may receive material property data and 3D reference geometry (e.g. BTFs and initial reference geometry captured at the initialization sub process).
  • the capture session guides the user to assist in capturing an environment lighting model of the capture environment.
  • FIG. 4 depicts a user 402 holding a light probe within a 3D capture environment for real-time 3D capture setup 404, in accordance with at least one embodiment.
  • the user helps the system, which includes RGB-D sensors 408, capture the lighting model of the capture environment by holding a light probe (which may be a spherical object reflecting the lighting of the environment) in his/her hand at the beginning of the real-time capture session.
  • the content capturing server collects samples from the light probe and creates the environment lighting model from the samples. After the environment lighting model has been acquired, the content capturing server begins a run-time process and waits for render clients to join a session.
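  • As a rough illustration of how light-probe samples can become an environment lighting model, the sketch below converts the brightest pixels of a mirrored-sphere probe image into a handful of directional lights; it assumes a tight orthographic crop of the sphere and is far simpler than the image-based lighting techniques a production system would likely use.

```python
import numpy as np

def lighting_model_from_probe(probe_rgb, num_lights=4):
    """Approximate dominant light directions and intensities from a light probe."""
    h, w = probe_rgb.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs + 0.5) / w * 2.0 - 1.0            # sphere coordinates in [-1, 1]
    y = 1.0 - (ys + 0.5) / h * 2.0
    r2 = x * x + y * y
    nz = np.sqrt(np.clip(1.0 - r2, 0.0, 1.0))
    # Reflect the view direction (0, 0, -1) about the sphere normal (x, y, nz).
    rx, ry, rz = 2 * nz * x, 2 * nz * y, 2 * nz * nz - 1.0

    luminance = probe_rgb.astype(float).mean(axis=2)
    luminance[r2 > 1.0] = -np.inf             # ignore pixels off the sphere
    idx = np.argsort(luminance, axis=None)[-num_lights:]
    iy, ix = np.unravel_index(idx, luminance.shape)

    lights = []
    for py, px in zip(iy, ix):
        direction = np.array([rx[py, px], ry[py, px], rz[py, px]])
        lights.append((direction / np.linalg.norm(direction),
                       probe_rgb[py, px].astype(float)))
    return lights
```
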
  • FIGs. 5A-5J depict a visual-sequence-representation of elements of a run-time process executed by a real-time 3D content capture server, in accordance with at least one embodiment.
  • a user is captured using several RGB-D sensors. With each frame, the system produces point-cloud data from the sensors. The system reconstructs a 3D geometry by merging and triangulating the point clouds produced by the RGB-D sensors. The system determines a best-fit texture mapping and sends an indication of such to a render client along with real-time 3D captured content.
  • a user 502 is captured with several RGB-D sensors 504. These sensors are positioned so that all sides of the user are captured even as the user moves within their own physical environment.
  • point clouds 506 are produced by the RGB-D sensors 504, and such point clouds 506 are captured at each time step of the real-time capture process.
  • a time step may be a time for single frame capture of the RGB-D sensors 504.
  • point cloud data 506 may be used by a plurality of analysis modules.
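  • For reference, the per-sensor point clouds come from a standard pinhole back-projection of each depth frame; a minimal sketch, with assumed camera intrinsics fx, fy, cx, cy, is shown below.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) to a camera-space point cloud."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    z = depth
    x = (xs - cx) * z / fx
    y = (ys - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[z.reshape(-1) > 0]          # drop invalid (zero-depth) pixels
```
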
  • 3D geometry 508 indicates a boundary between the user and the user's physical environment and can be a 3D geometry representing an outline of the user 502.
  • an initial 3D geometry 512 captured during a preprocessing step (such as an initialization sub process) is compared against the real-time 3D geometry 510 and an initial alignment is determined.
  • image and depth processing techniques are known by those of skill in the art for determining the position and alignment of a known object within a virtual space.
  • the object of interest 514 is isolated from the rest of the captured data based on the alignment and fitting determined in FIG. 5D.
  • a texture mapping 518 using UV coordinates for example, captured during the preprocessing step and associated with the initial 3D geometry, is projected onto the real-time 3D geometry. This projection is based at least in part on the determined initial alignment.
  • in FIG. 5G, several renderings of the object of interest are created 520, each with slightly different parameters, as shown with the forward-facing view 522 versus the profile view 524. These renderings need not be displayed, but are created to be analyzed.
  • the real-time 3D geometry is rendered with slight variations in the texture coordinates from the point of view of each RGB-D sensor. The rendered images may then be compared with the unprocessed images acquired from the RGB-D sensors.
  • the set of texture coordinates, such as the UV texture mapping discussed above, is shown at 528.
  • the set of texture coordinates yielding the smallest measured difference with respect to the unprocessed RGB-D images is used as a starting point for the next iteration of the texture coordinate refinement process.
  • Quantifying an amount of difference between the images may be accomplished via a variety of means. In some cases, an RGB difference is calculated for each corresponding pixel pair. In some cases, an RGB difference is calculated only for pixels near the edges of the rendered/augmented/inserted content. In other cases, a metric other than a simple RGB difference is employed.
  • when the texture coordinate refinement process is complete, for example when texture mapping adjustment iterations have been carried out to a system-defined precision, the refined texture coordinates and the real-time 3D geometry 540 are sent to a rendering client.
  • an object to be captured is detected and tracked using captured data.
  • One embodiment uses model based tracking, wherein the initial reference geometry acquired in an initialization sub process stage is compared against real-time 3D geometry reconstructed from present RGB-D sensor data. Based on a best fit between the initial reference geometry and the real-time 3D geometry reconstructed from the present RGB-D sensor data, a cropped portion of the reconstructed geometry representing the user's head is selected for rendering, such as the head 540 shown in FIG. 5J.
  • texture coordinates are projected from an initial reference geometry to vertices of a reconstructed geometry.
  • a further step in one embodiment of a run-time process includes iterative texture mapping refinement.
  • the reconstructed geometry is rendered from the viewpoint of each RGB-D sensor using the lighting model of the capture environment and the captured BTFs as referred to in block 122 in FIG. 1.
  • the resulting augmented images are compared with the RGB images captured from the RGB-D sensors. Deviations between image areas depicting the user's head are calculated and quantified.
  • the calculating and quantifying can be repeated many times in accordance with system requirements, adding small variations to the texture mapping coordinates with a bias toward UV mappings that reduce the deviation between the rendered augmented images and the images acquired from the RGB-D sensors.
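  • A toy version of this biased, iterative refinement is sketched below as a random search over small UV perturbations; render_fn and deviation_fn are hypothetical callbacks (for example, the edge-weighted metric sketched earlier), and the step size and iteration count are arbitrary assumptions.

```python
import numpy as np

def refine_uvs(uvs, render_fn, captured_images, deviation_fn,
               iterations=20, step=0.002, rng=None):
    """Perturb per-vertex UVs, keeping changes that reduce the total deviation
    between the rendered augmented images and the captured RGB-D images."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_uvs = uvs.copy()
    best_score = sum(deviation_fn(r, c)
                     for r, c in zip(render_fn(best_uvs), captured_images))
    for _ in range(iterations):
        candidate = best_uvs + rng.normal(scale=step, size=best_uvs.shape)
        score = sum(deviation_fn(r, c)
                    for r, c in zip(render_fn(candidate), captured_images))
        if score < best_score:                # bias toward lower deviation
            best_uvs, best_score = candidate, score
    return best_uvs
```
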
  • the content capturing server sends the captured 3D geometry and the selected texture coordinates to the rendering clients as shown in block 124 of FIG. 1.
  • the real-time 3D capture of an object in one physical location is then streamed as 3D data to another physical location, where it is augmented as if being part of the real physical space investigated by a viewer.
  • 3D content captured in real-time from a physical location is then streamed to a fully synthetic virtual world.
  • the process executed by the rendering client is slightly different, as the environment lighting is already known because it is part of the existing 3D scene data.
  • the system does not need to redundantly acquire a model of the lighting and can instead use the available lighting data.
  • various hardware elements of one or more of the described embodiments are referred to as "modules" that carry out (i.e., perform, execute, and the like) various functions that are described herein in connection with the respective modules.
  • a module includes hardware (e.g., one or more processors, one or more microprocessors, one or more microcontrollers, one or more microchips, one or more application-specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more memory devices) deemed suitable by those of skill in the relevant art for a given implementation.
  • ASICs application-specific integrated circuits
  • FPGAs field programmable gate arrays
  • Each described module may also include instructions executable for carrying out the one or more functions described as being carried out by the respective module, and it is noted that those instructions could take the form of or include hardware (i.e., hardwired) instructions, firmware instructions, software instructions, and/or the like, and may be stored in any suitable non-transitory computer-readable medium or media, such as those commonly referred to as RAM, ROM, etc.
  • Exemplary embodiments disclosed herein are implemented using one or more wired and/or wireless network nodes, such as a wireless transmit/receive unit (WTRU) or other network entity.
  • WTRU wireless transmit/receive unit
  • Any of the disclosed real-time 3D content capture server embodiments and real-time 3D content rendering client embodiments may be implemented using one or both of the systems depicted in FIGs. 6 and 7. All other embodiments discussed in this detailed description may be implemented using either or both of FIG. 6 and FIG. 7 as well.
  • various hardware and software elements required for the execution of the processes described in this disclosure such as sensors, dedicated processing modules, user interfaces, important algorithms, etc., may be omitted from FIGs. 6 and 7 for the sake of visual simplicity.
  • FIG. 6 depicts an exemplary wireless transmit/receive unit (WTRU) that may be employed as a real-time 3D content capture server in some embodiments and as a real-time 3D content rendering client in other embodiments.
  • WTRU wireless transmit/receive unit
  • FIG. 6 may be employed to execute any of the processes disclosed herein (e.g., (i) the method described in relation to FIG. 1 and (ii) the method described in relation to FIG. 2). As shown in FIG.
  • the WTRU 602 may include a processor 618, a communication interface 619 including a transceiver 620, a transmit/receive element 622, a speaker/microphone 624, a keypad 626, a display/touchpad 628, a nonremovable memory 630, a removable memory 632, a power source 634, a global positioning system (GPS) chipset 636, and sensors 638. It will be appreciated that the WTRU 602 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.
  • GPS global positioning system
  • the processor 618 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like.
  • the processor 618 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 602 to operate in a wireless environment.
  • the processor 618 may be coupled to the transceiver 620, which may be coupled to the transmit/receive element 622. While FIG. 6 depicts the processor 618 and the transceiver 620 as separate components, it will be appreciated that the processor 618 and the transceiver 620 may be integrated together in an electronic package or chip.
  • the transmit/receive element 622 may be configured to transmit signals to, or receive signals from, a base station over the air interface 616.
  • the transmit/receive element 622 may be an antenna configured to transmit and/or receive RF signals.
  • the transmit/receive element 622 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, as examples.
  • the transmit/receive element 622 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 622 may be configured to transmit and/or receive any combination of wireless signals.
  • the WTRU 602 may include any number of transmit/receive elements 622. More specifically, the WTRU 602 may employ MIMO technology. Thus, in one embodiment, the WTRU 602 may include two or more transmit/receive elements 622 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 616.
  • the transceiver 620 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 622 and to demodulate the signals that are received by the transmit/receive element 622.
  • the WTRU 602 may have multi-mode capabilities.
  • the transceiver 620 may include multiple transceivers for enabling the WTRU 602 to communicate via multiple RATs, such as UTRA and IEEE 802.11 , as examples.
  • the processor 618 of the WTRU 602 may be coupled to, and may receive user input data from, the speaker/microphone 624, the keypad 626, and/or the display/touchpad 628 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit).
  • the processor 618 may also output user data to the speaker/microphone 624, the keypad 626, and/or the display/touchpad 628.
  • the processor 618 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 630 and/or the removable memory 632.
  • the non-removable memory 630 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device.
  • the removable memory 632 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like.
  • SIM subscriber identity module
  • SD secure digital
  • the processor 618 may access information from, and store data in, memory that is not physically located on the WTRU 602, such as on a server or a home computer (not shown).
  • the processor 618 may receive power from the power source 634, and may be configured to distribute and/or control the power to the other components in the WTRU 602.
  • the power source 634 may be any suitable device for powering the WTRU 602.
  • the power source 634 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), and the like), solar cells, fuel cells, and the like.
  • the processor 618 may also be coupled to the GPS chipset 636, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 602.
  • the WTRU 602 may receive location information over the air interface 616 from a base station and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 602 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
  • the processor 618 may further be coupled to other peripherals 638, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity.
  • the peripherals 638 may include sensors such as an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.
  • FIG. 7 depicts an exemplary network entity that may be employed as a real-time 3D content capture server in some embodiments and as a real-time 3D content rendering client in other embodiments.
  • FIG. 7 may be employed to execute any of the processes disclosed herein.
  • network entity 790 includes a communication interface 792, a processor 794, and non-transitory data storage 796, all of which are communicatively linked by a bus, network, or other communication path 798.
  • Communication interface 792 may include one or more wired communication interfaces and/or one or more wireless-communication interfaces. With respect to wired communication, communication interface 792 may include one or more interfaces such as Ethernet interfaces, as an example. With respect to wireless communication, communication interface 792 may include components such as one or more antennae, one or more transceivers/chipsets designed and configured for one or more types of wireless (e.g., LTE) communication, and/or any other components deemed suitable by those of skill in the relevant art. And further with respect to wireless communication, communication interface 792 may be equipped at a scale and with a configuration appropriate for acting on the network side (as opposed to the client side) of wireless communications (e.g., LTE communications, Wi-Fi communications, and the like). Thus, communication interface 792 may include the appropriate equipment and circuitry (perhaps including multiple transceivers) for serving multiple mobile stations, UEs, or other access terminals in a coverage area.
  • Processor 794 may include one or more processors of any type deemed suitable by those of skill in the relevant art, some examples including a general-purpose microprocessor and a dedicated DSP.
  • Data storage 796 may take the form of any non-transitory computer-readable medium or combination of such media, some examples including flash memory, read-only memory (ROM), and random-access memory (RAM) to name but a few, as any one or more types of non-transitory data storage deemed suitable by those of skill in the relevant art could be used. As depicted in FIG. 7, data storage 796 contains program instructions 797 executable by processor 794 for carrying out various combinations of the various network-entity functions described herein.
  • Examples of suitable computer-readable media include a ROM (read-only memory), a RAM (random-access memory), registers, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs).
  • a processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
  • some embodiments may be implemented using one or more processors (or "processing devices") such as microprocessors, digital signal processors, customized processors, and field programmable gate arrays (FPGAs), together with unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein.
  • processors or “processing devices”
  • FPGAs field programmable gate arrays
  • unique stored program instructions including both software and firmware
  • some embodiments of the present disclosure may combine one or more processing devices with one or more software components (e.g., program code, firmware, resident software, micro-code, etc.) stored in a tangible computer-readable memory device, which in combination form a specifically configured apparatus that performs the functions as described herein.
  • software components e.g., program code, firmware, resident software, micro-code, etc.
  • modules may be written in any computer language and may be a portion of a monolithic code base, or may be developed in more discrete code portions such as is typical in object- oriented computer languages.
  • the modules may be distributed across a plurality of computer platforms, servers, terminals, and the like. A given module may even be implemented such that separate processor devices and/or computing hardware platforms perform the described functions.
  • an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein.
  • Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Systems and methods for relighting real-time 3D captured content include an initialization sub-process and a run-time sub-process. During the initialization sub-process, material-appearance descriptors such as bidirectional texture functions (BTFs) and an initial geometry of selected objects are captured and stored. A rendering client performs dynamic relighting of the objects according to local lighting conditions using the complex material-appearance descriptors. During the run-time process, a 3D geometry is captured and compared against the initial geometry of the object. Through an iterative process, a mapping between the materials identified in the initialization sub-process and the real-time captured 3D geometry is resolved. A rendering client receives material-appearance information (e.g., a BTF) separately from the real-time 3D geometry of the object and renders/relights the object according to the client-environment lighting conditions.
PCT/US2018/022779 2017-03-24 2018-03-16 System and method for relighting real-time 3D captured content WO2018175217A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762476412P 2017-03-24 2017-03-24
US62/476,412 2017-03-24

Publications (1)

Publication Number Publication Date
WO2018175217A1 true WO2018175217A1 (fr) 2018-09-27

Family

ID=61906859

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/022779 WO2018175217A1 (fr) 2017-03-24 2018-03-16 System and method for relighting real-time 3D captured content

Country Status (1)

Country Link
WO (1) WO2018175217A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465945A (zh) * 2020-12-07 2021-03-09 网易(杭州)网络有限公司 Model generation method and apparatus, storage medium, and computer device
CN114272602A (zh) * 2021-12-27 2022-04-05 福建天晴在线互动科技有限公司 Light-probe-based dynamic game object baking method and system
CN115293960A (zh) * 2022-07-28 2022-11-04 珠海视熙科技有限公司 Lighting adjustment method, apparatus, device, and medium for fused images
US11503270B1 (en) * 2021-08-10 2022-11-15 Varjo Technologies Oy Imaging systems and methods for facilitating local lighting

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
FILIP J ET AL: "Bidirectional Texture Function Modeling: A State of the Art Survey", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE COMPUTER SOCIETY, USA, vol. 31, no. 11, 1 November 2009 (2009-11-01), pages 1921 - 1940, XP011266778, ISSN: 0162-8828, DOI: 10.1109/TPAMI.2008.246 *
HADRIEN CROUBOIS ET AL: "Fast Image Based Lighting for Mobile Realistic AR", 1 January 2014 (2014-01-01), LIRIS UMR CNRS 5205; Lyon, France, XP055484551, Retrieved from the Internet <URL:https://hal.archives-ouvertes.fr/hal-01534711/document> [retrieved on 20180614] *
MARTIN HATKA ET AL: "BTF rendering in blender", PROCEEDINGS VRCAI. ACM SIGGRAPH INTERNATIONAL CONFERENCE ONVIRTUAL REALITY CONTINUUM AND ITS APPLICATIONS IN INDUSTRY (VRCAI), 11 December 2011 (2011-12-11), XX, pages 265 - 272, XP055485929, ISBN: 978-1-4503-1060-4, DOI: 10.1145/2087756.2087794 *
MICHAEL ZOLLHÖFER ET AL: "Real-time non-rigid reconstruction using an RGB-D camera", ACM TRANSACTIONS ON GRAPHICS (TOG), ACM, US, vol. 33, no. 4, 27 July 2014 (2014-07-27), pages 1 - 12, XP058051979, ISSN: 0730-0301, DOI: 10.1145/2601097.2601165 *
MINGSONG DOU ET AL: "Fusion4D", ACM TRANSACTIONS ON GRAPHICS (TOG), ACM, US, vol. 35, no. 4, 11 July 2016 (2016-07-11), pages 1 - 13, XP058275854, ISSN: 0730-0301, DOI: 10.1145/2897824.2925969 *
NEWCOMBE RICHARD A ET AL: "DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time", 2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), IEEE, 7 June 2015 (2015-06-07), pages 343 - 352, XP032793463, DOI: 10.1109/CVPR.2015.7298631 *
THEOBALT C ET AL: "Seeing People in Different Light-Joint Shape, Motion, and Reflectance Capture", IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US, vol. 13, no. 4, 1 July 2007 (2007-07-01), pages 663 - 674, XP011190839, ISSN: 1077-2626, DOI: 10.1109/TVCG.2007.1006 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465945A (zh) * 2020-12-07 2021-03-09 网易(杭州)网络有限公司 Model generation method and apparatus, storage medium, and computer device
CN112465945B (zh) * 2020-12-07 2024-04-09 网易(杭州)网络有限公司 Model generation method and apparatus, storage medium, and computer device
US11503270B1 (en) * 2021-08-10 2022-11-15 Varjo Technologies Oy Imaging systems and methods for facilitating local lighting
CN114272602A (zh) * 2021-12-27 2022-04-05 福建天晴在线互动科技有限公司 Light-probe-based dynamic game object baking method and system
CN115293960A (zh) * 2022-07-28 2022-11-04 珠海视熙科技有限公司 Lighting adjustment method, apparatus, device, and medium for fused images
CN115293960B (zh) * 2022-07-28 2023-09-29 珠海视熙科技有限公司 Lighting adjustment method, apparatus, device, and medium for fused images

Similar Documents

Publication Publication Date Title
US11024092B2 (en) System and method for augmented reality content delivery in pre-captured environments
US11636613B2 (en) Computer application method and apparatus for generating three-dimensional face model, computer device, and storage medium
US10474227B2 (en) Generation of virtual reality with 6 degrees of freedom from limited viewer data
US10607567B1 (en) Color variant environment mapping for augmented reality
US10559121B1 (en) Infrared reflectivity determinations for augmented reality rendering
CN110148204B Method and system for representing a virtual object in a view of a real environment
US11189043B2 (en) Image reconstruction for virtual 3D
CN106375748B Stereoscopic virtual reality panoramic view stitching method, apparatus, and electronic device
US10818064B2 (en) Estimating accurate face shape and texture from an image
US11184599B2 (en) Enabling motion parallax with multilayer 360-degree video
CN110869980B Distribution and presentation of content as a combination of spherical video and 3D assets
US8953022B2 (en) System and method for sharing virtual and augmented reality scenes between users and viewers
US20180276882A1 (en) Systems and methods for augmented reality art creation
WO2018175217A1 (fr) System and method for relighting real-time 3D captured content
US10171785B2 (en) Color balancing based on reference points
WO2019143572A1 Method and system for AR and VR collaboration in shared spaces
KR101885090B1 Image processing apparatus, lighting processing apparatus, and method therefor
CN105474263A System and method for generating a three-dimensional face model
CN106231292B Stereoscopic virtual reality live-streaming method, apparatus, and device
CN112509117A Method and apparatus for reconstructing a three-dimensional hand model, electronic device, and storage medium
US20220237913A1 (en) Method for rendering of augmented reality content in combination with external display
WO2018093661A1 System and method for matching lighting conditions for a shared virtual presence
WO2018148076A1 System and method for automated positioning of augmented reality content
WO2019133505A1 Method and system for maintaining color calibration using common objects
CN114080582A System and method for sparse distributed rendering

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18716056

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18716056

Country of ref document: EP

Kind code of ref document: A1

点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载