CN119323656A - Method for synthesizing real virtual image - Google Patents
Method for synthesizing real virtual image
- Publication number
- CN119323656A (application CN202411844429.9A)
- Authority
- CN
- China
- Prior art keywords
- virtual
- texture
- illumination
- intensity
- lighting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/506—Illumination models
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
Abstract
The invention belongs to the technical field of image processing and particularly relates to a method for synthesizing realistic virtual images. The method first acquires multi-modal data of a target scene, including ambient illumination, depth and texture features, and extracts features of the real scene. The virtual illumination is then calibrated against the real illumination, adjusting the light-source direction, the intensity and the ambient light. Edge processing that combines adaptive texture mapping with illumination fusion optimizes texture transitions according to illumination and edge strength, and adjusts illumination intensity and reflectivity. Finally, depth information guides the optimization of the texture and illumination weights. This process effectively overcomes the defects of the prior art, improves image-synthesis quality, enhances the fidelity and immersion of augmented-reality virtual-real fusion scenes, broadens the application scope of virtual-reality technology, and improves the quality and efficiency of virtual-reality image synthesis.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a realistic virtual image synthesis method.
Background
At a time when virtual reality technology is developing rapidly, image synthesis is a core element of building immersive virtual environments, and its importance is self-evident. However, current image synthesis techniques expose a number of drawbacks that are hard to ignore when blending complex real scenes with virtual elements. Existing methods oversimplify the lighting stage: they typically set the lighting of virtual objects with fixed illumination parameters and do not account for the dynamically changing illumination of a real scene. This coarse treatment makes the light and shadow of virtual objects in the synthesized scene look unnatural, with disordered shadow distribution and unbalanced brightness, which greatly undermines the visual harmony of virtual-real fusion and weakens the realism and credibility of the whole scene. In texture mapping and edge processing, traditional methods mostly rely on a fixed ratio to fuse virtual and real textures and lack sensitivity and adaptability to differences in illumination conditions and object-edge strength. In regions of uneven illumination or at object edges, this rigid fusion easily produces abrupt texture transitions, with defects such as blurring and sudden breaks. The lack of depth information is an even more critical bottleneck limiting image-synthesis quality. Previous schemes cannot optimize texture and illumination weights according to the complex depth differences between virtual objects and the real scene during synthesis. This omission directly deprives the composite image of spatial layering: when a near-field virtual object is fused with a distant real background, the result is severely distorted and visually cluttered, and the positional and occlusion relationships of virtual objects in real space cannot be presented accurately, so the industry's urgent demand for high-quality image synthesis is difficult to meet.
Disclosure of Invention
Aiming at the technical problems described in the background art, the invention provides a method for synthesizing realistic virtual images that is simple and can effectively improve image quality.
In order to achieve the above purpose, the technical scheme adopted by the invention comprises the following steps:
S1, firstly, acquiring an actual image of the target scene, acquiring multi-modal data comprising ambient illumination, depth information and texture features, and extracting features of the real-scene image;
S2, performing illumination matching on the virtual image according to the illumination information of the real scene, including calibrating the direction and intensity of the light source, and adjusting the virtual illumination based on a physical illumination model;
S3, combining texture mapping with illumination-fused edge processing, and adopting an adaptive texture-mapping method to adaptively adjust the transition region of the texture map according to the illumination and the edge strength;
S4, finally, through texture and illumination adjustment guided by the depth information, synthesizing the depth-aware optimized texture and illumination information into the final image;
The edge processing combining texture mapping and illumination fusion in step S3 is specifically implemented as follows:
S31, firstly, the synthesized texture value is obtained by weighted fusion of the virtual texture and the real texture: T_syn(x,y) = α(x,y)·T_v(x,y) + (1 − α(x,y))·T_r(x,y), where T_syn is the final synthesized texture value, T_v is the texture value of the virtual object, T_r is the texture value of the real scene, α is the smoothing weight factor, and ∇I_r is the gradient value of the real scene, representing the edge strength;
S32, the illumination intensity and the reflectivity are then incorporated into the synthesis formula to obtain the final synthesized illumination value L_syn, which takes into account the combined effects of the texture, the illumination intensity and the reflectivity, where R(θ) is the reflectivity that varies with the angle of incidence of the illumination.
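For illustration only, steps S31 and S32 might be realized as in the following Python sketch. The exponential dependence of the smoothing weight α on the real-scene gradient, the falloff parameter, and the multiplicative combination of texture, intensity and reflectivity are assumptions made for this example; the description above does not fix these exact forms.

```python
import numpy as np

def fuse_textures(t_virtual, t_real, grad_real, falloff=4.0):
    """Adaptive weighted fusion of virtual and real textures (cf. S31).

    The smoothing weight alpha is assumed here to decay with the
    real-scene gradient magnitude, so strong edges favour the real
    texture; 'falloff' is a hypothetical parameter of this sketch.
    """
    alpha = np.exp(-falloff * np.abs(grad_real))
    return alpha * t_virtual + (1.0 - alpha) * t_real

def synthesize_illumination(t_syn, intensity, reflectance):
    """Combine texture, illumination intensity and reflectivity (cf. S32).

    A simple multiplicative combination is assumed for illustration.
    """
    return t_syn * intensity * reflectance
```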
Preferably, extracting features of the real-scene image in step S1 comprises illumination estimation, namely analyzing the light-source direction, intensity and color-temperature information in the scene; depth-information extraction, namely acquiring the depth of objects in the scene through stereoscopic vision; and texture analysis, namely extracting the texture features of the scene.
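A minimal sketch of this multi-modal feature extraction is given below, assuming a rectified stereo pair of BGR images as input and using OpenCV's semi-global block matcher as one possible way to obtain a depth proxy. The helper name, parameter choices, and the deliberately simplistic intensity and color-temperature estimators are illustrative assumptions, not the estimators prescribed by the method.

```python
import cv2
import numpy as np

def extract_scene_features(left_bgr, right_bgr):
    """Rough multi-modal feature extraction for step S1 (illustrative only)."""
    gray_l = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    # Illumination estimation: overall intensity and a gray-world
    # color-temperature proxy (ratio of red to blue channel means).
    intensity = float(gray_l.mean()) / 255.0
    b_mean, _, r_mean = [float(m) for m in cv2.mean(left_bgr)[:3]]
    color_temp_proxy = r_mean / max(b_mean, 1e-6)

    # Depth information via stereo matching (disparity used as a depth proxy).
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disparity = matcher.compute(gray_l, gray_r).astype(np.float32) / 16.0

    # Texture analysis: gradient magnitude as a simple texture/edge map.
    gx = cv2.Sobel(gray_l, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_l, cv2.CV_32F, 0, 1, ksize=3)
    texture = cv2.magnitude(gx, gy)

    return {"intensity": intensity, "color_temp": color_temp_proxy,
            "depth": disparity, "texture": texture}
```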
Preferably, adjusting the virtual illumination based on the physical illumination model in step S2 is implemented as follows:
S21, firstly, the error between the light-source directions is calculated from the illumination direction L_r in the real scene and the virtual light-source direction L_v, and the virtual direction is adjusted accordingly: L_cal = L_v + k·(L_r − L_v), where L_cal is the calibrated virtual light-source direction and k is a calibration coefficient;
S22, the illumination intensity of the virtual light source is calibrated to match the illumination intensity of the real scene: the actual illumination intensity I_r in the real scene is compared with the illumination intensity I_v of the virtual object, and the adjustment is made according to the difference between them, giving the adjusted virtual illumination intensity I'_v;
S23, the virtual image is adjusted according to the ambient-light information of the real scene, the virtual illumination intensity being corrected by the ambient light: I_final = I'_v + λ·I_amb, where I_final is the final synthesized illumination intensity, I_amb is the ambient-light intensity and λ is the ambient-light adjustment factor.
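As an illustration of steps S21–S23, the sketch below assumes linear corrections for both the direction and the intensity and an additive ambient term; the coefficient names (k, beta, lam) and the intensity-adjustment form are assumptions of this example rather than the exact formulas of the method.

```python
import numpy as np

def calibrate_virtual_lighting(l_real, l_virtual, i_real, i_virtual,
                               i_ambient, k=0.5, beta=0.5, lam=0.2):
    """Illumination matching in the spirit of steps S21-S23 (a sketch).

    k is the direction calibration coefficient, beta a hypothetical
    intensity-correction coefficient, and lam the ambient-light
    adjustment factor.
    """
    l_real = np.asarray(l_real, dtype=float)
    l_virtual = np.asarray(l_virtual, dtype=float)

    # S21: move the virtual light direction toward the real one by k times the error.
    l_cal = l_virtual + k * (l_real - l_virtual)
    l_cal = l_cal / np.linalg.norm(l_cal)          # keep a unit direction

    # S22: pull the virtual intensity toward the real intensity via their difference.
    i_cal = i_virtual + beta * (i_real - i_virtual)

    # S23: fold in the ambient light with adjustment factor lam.
    i_final = i_cal + lam * i_ambient
    return l_cal, i_final
```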
Preferably, the reflectivity R(θ), which varies with the angle of incidence of the illumination, is determined from R_0, the reflectivity at normal incidence, and θ, the angle between the incident ray and the normal of the object surface.
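An angle-dependent reflectivity built from exactly these two quantities can be illustrated with Schlick's approximation; whether this particular formula is the one intended here is not stated, so it should be read as an assumption of the sketch.

```python
import math

def schlick_reflectance(theta, r0=0.04):
    """Reflectivity versus illumination incidence angle (Schlick's approximation, assumed).

    r0 is the reflectivity at normal incidence and theta is the angle
    (in radians) between the incident ray and the surface normal.
    """
    return r0 + (1.0 - r0) * (1.0 - math.cos(theta)) ** 5

# Example: reflectivity rises sharply toward grazing angles.
print(schlick_reflectance(math.radians(0)), schlick_reflectance(math.radians(80)))
```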
Preferably, the texture and illumination adjustment guided by the depth information in step S4 is implemented as follows: under the guidance of the depth information, the depth difference between the virtual object and the real scene is calculated, and the weights of the texture and the illumination are adjusted according to this difference to obtain the final synthesized image value I_out(x,y), which incorporates the depth-guided illumination and texture adjustment; D_r(x,y) and D_v(x,y) are respectively the depth value of the real-scene image and the depth value of the virtual object at position (x,y), and μ is a depth adjustment factor that controls the influence of the depth difference on the final synthesis result.
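One way to realize this depth-guided weighting, again as a sketch: the weight of the virtual contribution is assumed to decay exponentially with the per-pixel depth difference, scaled by the depth adjustment factor μ; this exact weighting function is an assumption of the example, not a disclosed formula.

```python
import numpy as np

def depth_guided_blend(i_virtual, i_real, d_virtual, d_real, mu=0.1):
    """Depth-guided adjustment of texture/illumination weights (cf. S4).

    The per-pixel virtual weight is assumed to decay exponentially with
    the virtual/real depth difference, scaled by mu.
    """
    depth_diff = np.abs(np.asarray(d_real, dtype=float) - np.asarray(d_virtual, dtype=float))
    w = np.exp(-mu * depth_diff)
    return w * np.asarray(i_virtual, dtype=float) + (1.0 - w) * np.asarray(i_real, dtype=float)
```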
Compared with the prior art, the method has the following advantages and positive effects. In illumination processing, the virtual light-source direction, intensity and ambient light are calibrated according to the illumination information of the real scene, breaking the limitation of fixed parameters so that the light and shadow of virtual objects follow the dynamic changes of reality and realism is improved. In texture mapping, the adaptive strategy adjusts the weights according to illumination and edge strength, overcoming the defect of fixed-ratio fusion and achieving natural texture transitions under complex illumination and at edges. In the use of depth information, the texture and illumination weights are optimized by the depth difference, which enhances the sense of spatial layering and accurately presents positional and occlusion relationships. The method thus effectively overcomes defects of the prior art such as rigid fusion, distortion and weak layering, and provides better technical support for virtual-reality image synthesis.
Detailed Description
In order that the above objects, features and advantages of the application may be more clearly understood, a further description of the application will be provided with reference to the following examples. It should be noted that, without conflict, the embodiments of the present application and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention; however, the invention may be practiced otherwise than as described herein, and it is therefore not limited to the specific embodiments disclosed below.
In this embodiment, at a time when virtual reality technology is developing rapidly, constructing highly realistic virtual-real fusion scenes has become a key requirement. Traditional image-synthesis techniques show obvious shortcomings when facing complex, changing real environments and fusing rich and varied virtual elements: illumination mismatch causes disordered shadows and unbalanced brightness of virtual objects, misaligned texture mapping causes hard transitions and detail distortion, and deficient depth processing causes confused fusion layering and violated positional relationships. The invention therefore provides a method for synthesizing realistic virtual images. First, an actual image of the target scene is acquired, multi-modal data including ambient illumination, depth information and texture features is obtained, and features of the real-scene image are extracted. Feature extraction comprises illumination estimation, namely analyzing the light-source direction, intensity and color-temperature information in the scene; depth-information extraction, namely acquiring the depth of objects in the scene through stereoscopic vision; and texture analysis, namely extracting the texture features of the scene.
To achieve accurate matching between virtual objects and the illumination of a complex real scene and to solve the unnatural fusion caused by illumination differences, the virtual illumination is comprehensively adjusted based on a physical illumination model. First, the error between the light-source directions is calculated from the illumination direction L_r in the real scene and the virtual light-source direction L_v, and the virtual direction is adjusted accordingly: L_cal = L_v + k·(L_r − L_v), where L_cal is the calibrated virtual light-source direction and k is a calibration coefficient. The illumination intensity of the virtual light source is then calibrated to match that of the real scene by comparing the actual illumination intensity I_r in the real scene with the illumination intensity I_v of the virtual object and adjusting according to their difference, giving the adjusted virtual illumination intensity I'_v. Finally, the virtual image is adjusted according to the ambient-light information of the real scene, the virtual illumination intensity being corrected by the ambient light: I_final = I'_v + λ·I_amb, where I_final is the final synthesized illumination intensity, I_amb is the ambient-light intensity, and λ is the ambient-light adjustment factor.
Next, considering that conventional texture mapping fuses textures at a fixed ratio and lacks flexibility, the invention adopts adaptive texture mapping: texture transitions are optimized according to illumination and edge strength, edge processing is more accurate, texture weights are adjusted according to edge strength, blurring and abrupt changes are eliminated, and a smooth, natural transition is achieved. The edge processing combining texture mapping and illumination fusion is implemented as follows. First, the synthesized texture value is obtained by weighted fusion of the virtual texture and the real texture: T_syn(x,y) = α(x,y)·T_v(x,y) + (1 − α(x,y))·T_r(x,y), where T_syn is the final synthesized texture value, T_v is the texture value of the virtual object, T_r is the texture value of the real scene, α is the smoothing weight factor, and ∇I_r is the gradient value of the real scene, representing the edge strength. The illumination intensity and the reflectivity are then incorporated into the synthesis formula to obtain the final synthesized illumination value L_syn, which takes into account the combined effects of the texture, the illumination intensity and the reflectivity, where R(θ) is the reflectivity that varies with the angle of incidence of the illumination.
Finally, to enhance the spatial layering of the synthesized image and to reduce the fusion distortion caused by mishandling the virtual-real depth difference, the texture and illumination weights are adjusted under the guidance of the depth information. The depth difference between the virtual object and the real scene is calculated, and the weights of the texture and the illumination are adjusted according to this difference to obtain the final synthesized image value I_out(x,y), which incorporates the depth-guided illumination and texture adjustment; D_r(x,y) and D_v(x,y) are respectively the depth value of the real-scene image and the depth value of the virtual object at position (x,y), and μ is a depth adjustment factor that controls the influence of the depth difference on the final synthesis result. In this way, the fidelity and immersion of the virtual-real fusion scene are improved, and the application of virtual-reality technology in multiple fields is promoted.
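Putting the sketches together, a hypothetical end-to-end flow over synthetic stand-in data could look as follows; all helper functions are the illustrative ones defined earlier in this description, not routines disclosed by the method.

```python
import numpy as np

# Assumes fuse_textures, synthesize_illumination, schlick_reflectance,
# calibrate_virtual_lighting and depth_guided_blend from the sketches
# above are in scope; all data below is synthetic stand-in data.
rng = np.random.default_rng(0)
h, w = 240, 320
real_tex = rng.random((h, w))                  # captured real-scene texture (stand-in)
virt_tex = rng.random((h, w))                  # rendered virtual-object texture (stand-in)
grad_real = np.abs(np.gradient(real_tex)[0])   # edge strength of the real scene

# S2: calibrate virtual lighting against the (assumed) real lighting.
l_cal, i_final = calibrate_virtual_lighting(
    l_real=[0.3, 0.8, 0.5], l_virtual=[0.0, 1.0, 0.0],
    i_real=0.9, i_virtual=0.6, i_ambient=0.2)

# S3: adaptive texture fusion plus illumination/reflectance synthesis.
t_syn = fuse_textures(virt_tex, real_tex, grad_real)
l_syn = synthesize_illumination(t_syn, i_final, schlick_reflectance(np.deg2rad(30.0)))

# S4: depth-guided blending of the virtual and real contributions.
d_real = rng.random((h, w)) * 5.0
d_virt = d_real + rng.normal(0.0, 0.5, (h, w))
final_image = depth_guided_blend(l_syn, real_tex, d_virt, d_real, mu=0.5)
print(final_image.shape, float(final_image.min()), float(final_image.max()))
```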
The present invention is not limited to the above-described embodiments. Equivalent embodiments that change or modify the technical content disclosed above may be applied to other fields; however, any simple modification or equivalent change made to the above embodiments according to the technical substance of the present invention, without departing from the technical content of the present invention, still falls within the protection scope of the technical solution of the present invention.
Claims (5)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411844429.9A CN119323656B (en) | 2024-12-16 | 2024-12-16 | A method for synthesizing realistic virtual images |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202411844429.9A CN119323656B (en) | 2024-12-16 | 2024-12-16 | A method for synthesizing realistic virtual images |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN119323656A (en) | 2025-01-17 |
| CN119323656B CN119323656B (en) | 2025-06-20 |
Family
ID=94232741
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202411844429.9A Active CN119323656B (en) | 2024-12-16 | 2024-12-16 | A method for synthesizing realistic virtual images |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119323656B (en) |
- 2024-12-16: CN application CN202411844429.9A filed; granted as CN119323656B (status: Active)
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101246600A (en) * | 2008-03-03 | 2008-08-20 | 北京航空航天大学 | A Method of Real-time Generating Augmented Reality Environment Illumination Model Using Spherical Panoramic Camera |
| CN101710429A (en) * | 2009-10-12 | 2010-05-19 | 湖南大学 | Illumination algorithm of augmented reality system based on dynamic light map |
| US20120320039A1 (en) * | 2011-06-14 | 2012-12-20 | Samsung Electronics Co., Ltd. | apparatus and method for image processing |
| CN111199573A (en) * | 2019-12-30 | 2020-05-26 | 成都索贝数码科技股份有限公司 | Virtual-real mutual reflection method, device, medium and equipment based on augmented reality |
| CN118135152A (en) * | 2023-12-14 | 2024-06-04 | 联通沃音乐文化有限公司 | Virtual-real fusion processing method for AR implantation in XR system |
| CN118945487A (en) * | 2024-07-22 | 2024-11-12 | 广州磐碟塔信息科技有限公司 | Virtual image synthesis method, device, equipment and storage medium |
Non-Patent Citations (3)
| Title |
|---|
| DAVID R. WALTON: "Synthesis of environment maps for mixed reality", ISMAR, 9 October 2017 (2017-10-09) * |
| XIA Lin; DONG Zilong; ZHANG Guofeng: "High-quality virtual-real fusion based on automatic alignment of environment maps", Journal of Computer-Aided Design & Computer Graphics, no. 10, 15 October 2011 (2011-10-15) * |
| CHEN Baoquan; QIN Xueying: "Virtual-real fusion and human-machine intelligence integration in mixed reality", Scientia Sinica Informationis, no. 12, 20 December 2016 (2016-12-20) * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN119323656B (en) | 2025-06-20 |
Similar Documents
| Publication | Title |
|---|---|
| CN102663741B | Method for carrying out visual stereo perception enhancement on color digit image and system thereof |
| JP2013127774A | Image processing device, image processing method, and program |
| CN102982538A | Nature color simulating method of resource satellite multi-spectral image |
| CN107886552A | Stick picture disposing method and apparatus |
| CN102436666A | Object and scene fusion method based on IHS (Intensity, Hue, Saturation) transform |
| CN112508812A | Image color cast correction method, model training method, device and equipment |
| Wang et al. | End-to-end exposure fusion using convolutional neural network |
| US12020094B2 | Image processing device, printing system, and non-transitory computer-readable storage medium storing image processing program that render three dimensional (3D) object |
| CN120014171B | Model construction method and device based on holder scanning identification |
| CN119323656B | A method for synthesizing realistic virtual images |
| CN118474549A | Image processing method and processing system based on regional exposure control |
| CN117319807B | Light and shadow imaging method and system for karst cave dome |
| CN113962851A | A Realistic Color Pencil Drawing Generation Method |
| WO2023102189A2 | Iterative graph-based image enhancement using object separation |
| KR20160001897A | Image Processing Method and Apparatus for Integrated Multi-scale Retinex Based on CIELAB Color Space for Preserving Color |
| Heckaman et al. | Brighter, more colorful colors and darker, deeper colors based on a theme of brilliance |
| CN118799207B | Mobile terminal scene-in-one picture generation system and method based on artificial intelligence |
| Zou et al. | Underwater image enhancement method based on illumination correction and color correction |
| CN120182159B | Underwater image enhancement method and system |
| Lakshmi et al. | Analysis of tone mapping operators on high dynamic range images |
| CN118096544B | Light field image enhancement method based on gray color fusion |
| CN118334531B | Remote sensing image fusion method, system and storage medium based on vegetation coverage |
| CN118411319B | Single-frame exposure image HDR enhancement method and device based on multidimensional mapping |
| Gabrijelčič Tomc et al. | Colorimetric accuracy of color reproductions in the 3D scenes |
| CN119722479A | Fusion method of visible light and near-infrared images based on illumination difference and reflection characteristics |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |