+

CN119323656A - Method for synthesizing real virtual image - Google Patents

Method for synthesizing real virtual image Download PDF

Info

Publication number
CN119323656A
CN119323656A CN202411844429.9A CN202411844429A CN119323656A CN 119323656 A CN119323656 A CN 119323656A CN 202411844429 A CN202411844429 A CN 202411844429A CN 119323656 A CN119323656 A CN 119323656A
Authority
CN
China
Prior art keywords
virtual
texture
illumination
intensity
lighting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411844429.9A
Other languages
Chinese (zh)
Other versions
CN119323656B (en
Inventor
郭洪涛
褚丽丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Tian Jing Electronic Technology Co ltd
Original Assignee
Shandong Tian Jing Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Tian Jing Electronic Technology Co ltd filed Critical Shandong Tian Jing Electronic Technology Co ltd
Priority to CN202411844429.9A priority Critical patent/CN119323656B/en
Publication of CN119323656A publication Critical patent/CN119323656A/en
Application granted granted Critical
Publication of CN119323656B publication Critical patent/CN119323656B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/506Illumination models
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention belongs to the technical field of image processing, and particularly relates to a realistic virtual image synthesis method. The method comprises the steps of firstly obtaining multi-mode data of a target scene containing environmental illumination, depth and texture characteristics and extracting the characteristics of a real scene. Then, the virtual illumination is calibrated according to the real illumination, and the coverage direction, intensity and ambient light are adjusted. And processing the edge by combining the self-adaptive texture mapping and illumination fusion, optimizing texture transition according to illumination and edge intensity, and adjusting the illumination intensity and reflectivity. And finally guiding and optimizing textures and illumination weights by means of depth information. Through the process, the defects in the prior art are effectively overcome, the image synthesis quality is improved, the fidelity and immersion sense of the virtual fusion scene of the augmented reality are enhanced, the application scope of the virtual reality technology is expanded, and the virtual reality image synthesis quality and application efficiency are improved.

Description

Method for synthesizing real virtual image
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a realistic virtual image synthesis method.
Background
At the moment of the vigorous development of virtual reality technology, the importance of image synthesis technology as a core element for constructing an immersive virtual environment is self-evident. However, current image synthesis techniques expose a number of drawbacks that are difficult to ignore when dealing with complex real scenes blended with virtual elements. The existing technical means is excessively simplified in the light treatment link, the fixed illumination parameters are generally adopted to set the illumination effect of the virtual object, and the characteristic of illumination dynamic change in the real scene is not considered. The rough processing causes the light and shadow of the virtual object in the synthesized scene to be very unnatural, the shadow is distributed disorderly and disorderly, the adjustment of brightness is unbalanced and misaligned, the visual harmony of virtual and reality fusion is greatly destroyed, and the sense of reality and the credibility of the whole scene are weakened. In texture mapping and edge processing, the traditional method mostly relies on fixed proportion to fuse virtual and real textures, and lacks sensitivity and adaptability to illumination conditions and object edge intensity differences. At the area of uneven illumination or the edge of an object, the hard fusion mode is extremely easy to cause abrupt texture transition, and defects such as blurry, abrupt fracture and the like appear. The shortage of depth information is more the key short plate that restricts the improvement of image synthesis quality. In the previous technical scheme, texture and illumination weight cannot be optimized according to complex depth difference between a virtual object and a real scene in the synthesis process. This omission directly leads to the lack of spatial layering of the composite image, and distortion is serious when the near-view virtual object and the distant-view real background are fused, visual mess is clustered, and the position relation and the shielding logic of the virtual object in the real space cannot be accurately and finely presented, so that the urgent requirement of industry on the high-quality image synthesis technology is difficult to meet.
Disclosure of Invention
Aiming at the technical problems in the background technology, the invention provides a realistic virtual image synthesis method which is simple in method and can effectively improve the image quality.
In order to achieve the above purpose, the technical scheme adopted by the invention comprises the following steps:
S1, firstly, acquiring an actual image of a target scene, acquiring multi-mode data comprising environmental illumination, depth information and texture features, and extracting features of the actual scene image;
s2, performing illumination matching on the virtual image according to illumination information of the real scene, including calibrating the direction and intensity of a light source, and adjusting virtual illumination based on a physical illumination model;
s3, combining the texture mapping with the illumination fusion edge processing, and adopting a self-adaptive texture mapping method to self-adaptively adjust the transition area of the texture mapping according to the illumination and the edge intensity;
S4, finally synthesizing texture and illumination information subjected to depth perception optimization into a final image through texture and illumination adjustment guided by the depth information;
The specific implementation method of the edge processing combining texture mapping and illumination fusion in the step S3 is as follows:
s31, firstly, realizing weighted fusion of the virtual texture and the real texture by the synthesized texture value: Wherein For the final synthesized texture value,Is the texture value of the virtual object,For a texture value in a real scene,In order to smooth the weight factor,The gradient value of the real scene represents the edge strength;
S32, adjusting the illumination intensity and the reflectivity into a synthetic formula: Wherein For the final resultant illumination value, the effects of texture, illumination intensity and reflectivity are taken into account,Is the reflectance based on the change in the angle of incidence of the illumination.
Preferably, the step S1 of extracting features of the image of the real scene includes illumination estimation, analysis of light source direction, intensity and color temperature information in the scene, depth information extraction, acquisition of object depth information in the scene through stereoscopic vision, and texture analysis, namely extraction of texture features in the scene.
Preferably, the adjusting the virtual illumination based on the physical illumination model in step S2 is implemented as follows:
s21, firstly, according to the illumination direction in the real scene And virtual light source directionCalculating an error of the direction of the light source and adjusting: Wherein For the virtual light source direction after calibration,Is a calibration coefficient;
S22, calibrating the illumination intensity of the virtual light source to match the illumination intensity of the real scene, and comparing the actual illumination intensity in the real scene And the illumination intensity of the virtual objectAdjusting according to the difference of illumination intensity: Wherein The adjusted virtual illumination intensity is obtained;
s23, adjusting the virtual image according to the ambient light information of the real scene, wherein the virtual illumination intensity is required to be adjusted according to the ambient light information: Wherein, the method comprises the steps of, wherein, For the final resultant intensity of the illumination to be achieved,Is the intensity of the ambient light,Is the adjustment factor for the ambient light.
Preferably, the reflectance based on the change in the incident angle of the illuminationThe specific formula of (2) is: Wherein Is the reflectivity at normal incidence,Is the angle between the incident ray and the normal to the object surface.
Preferably, the implementation of texture and illumination adjustment guided by the depth information in step S4 is that the depth difference between the virtual object and the real scene is calculated under the guidance of the depth information, and the weights of the texture and illumination are adjusted according to the difference: Wherein For the final synthesized image values, including depth-guided illumination and texture adjustment,The depth value of the position (x, y) in the image of the real scene and the depth value of the virtual object at the position (x, y) are respectively,Is a depth adjustment factor that controls the effect of depth differences on the final synthesis result.
Compared with the prior art, the method has the advantages and positive effects that in illumination processing, the virtual light source direction, the intensity and the ambient light are calibrated according to the illumination information of the real scene, the limitation of fixed parameters is broken through, the light shadow of the virtual object is changed along with the dynamic change of the reality, and the sense of reality is improved. In the aspect of texture mapping, the self-adaptive strategy adjusts the weight according to illumination and edge intensity, changes the fixed proportion fusion defect, realizes the natural transition of textures at complex illumination and edges, and improves the texture. On the utilization of depth information, texture and illumination weight are optimized by means of depth difference, spatial layering sense is enhanced, position and shielding relation is accurately presented, the defects of hard fusion, distortion, weak layering sense and the like in the prior art are effectively overcome, and better technical support is provided for virtual reality image synthesis.
Detailed Description
In order that the above objects, features and advantages of the application may be more clearly understood, a further description of the application will be provided with reference to the following examples. It should be noted that, without conflict, the embodiments of the present application and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced otherwise than as described herein, and therefore the present invention is not limited to the specific embodiments of the disclosure that follow.
In the embodiment, at the moment of the vigorous development of the virtual reality technology, the construction of a highly realistic reality virtual fusion scene becomes a key requirement. The traditional image synthesis technology has obvious defects when facing complex and changeable real environments and fusing rich and various virtual elements. The shadow disorder and brightness imbalance of the virtual object caused by the mismatch of illumination, the transition hardness and detail distortion caused by the misalignment of texture mapping, and the fusion hierarchy confusion and position relation violation caused by the depth processing defect. The invention provides a method for synthesizing a real virtual image. Firstly, acquiring an actual image of a target scene, acquiring multi-mode data comprising environmental illumination, depth information and texture features, and extracting features of the actual scene image, wherein the extracting of the features of the actual scene image comprises illumination estimation, namely analyzing the direction, intensity and color temperature information of a light source in the scene, extracting the depth information, namely acquiring object depth information in the scene through stereoscopic vision, and analyzing the texture, namely extracting the texture features in the scene.
In order to realize accurate matching of virtual objects and illumination of complex reality scenes, the problem of unnatural fusion caused by illumination difference is solved, and a scheme of comprehensively adjusting virtual illumination based on a physical illumination model is adopted. First according to the illumination direction in the real sceneAnd virtual light source directionCalculating an error of the direction of the light source and adjusting: Wherein For the virtual light source direction after calibration,The illumination intensity of the virtual light source is calibrated to match the illumination intensity of the real scene by comparing the actual illumination intensity in the real sceneAnd the illumination intensity of the virtual objectAdjusting according to the difference of illumination intensity: Wherein Then, the virtual image is required to be adjusted according to the ambient light information of the real scene, and the virtual illumination intensity is required to be adjusted according to the ambient light information: Wherein, the method comprises the steps of, wherein, For the final resultant intensity of the illumination to be achieved,Is the intensity of the ambient light,Is the adjustment factor for the ambient light.
Next, considering that the conventional generic texture map fuses textures at a fixed ratio, there is no flexibility. The invention adopts self-adaptive texture mapping, optimizes texture transition according to illumination and edge intensity, has more accurate edge processing, adjusts texture weight according to the edge intensity, eliminates blurring and mutation, and realizes smooth and natural transition. The specific implementation method of the edge processing combining texture mapping and illumination fusion comprises the steps that firstly, synthesized texture values are realized by weighting and fusing virtual textures and real textures: Wherein For the final synthesized texture value,Is the texture value of the virtual object,For a texture value in a real scene,In order to smooth the weight factor,Representing edge intensity for gradient values of a real scene, and then incorporating the adjustment of illumination intensity and reflectivity into a synthesis formula: Wherein For the final resultant illumination value, the effects of texture, illumination intensity and reflectivity are taken into account,Is the reflectance based on the change in the angle of incidence of the illumination.
Finally, in order to enhance the space layering sense of the synthesized image, the condition of fusion distortion caused by improper processing of the virtual and real depth difference is improved, and a scheme of adjusting textures and illumination weights by guiding depth information is adopted. Under the guidance of the depth information, calculating the depth difference between the virtual object and the real scene, and adjusting the weights of textures and illumination according to the difference: Wherein For the final synthesized image values, including depth-guided illumination and texture adjustment,The depth value of the position (x, y) in the image of the real scene and the depth value of the virtual object at the position (x, y) are respectively,Is a depth adjustment factor that controls the effect of depth differences on the final synthesis result. The fidelity and immersion sense of the reality virtual fusion scene are improved, and the virtual reality technology is promoted to be applied in multiple fields.
The present invention is not limited to the above-mentioned embodiments, and any equivalent embodiments which can be changed or modified by the technical content disclosed above can be applied to other fields, but any simple modification, equivalent changes and modification made to the above-mentioned embodiments according to the technical substance of the present invention without departing from the technical content of the present invention still belong to the protection scope of the technical solution of the present invention.

Claims (5)

1.一种现实虚拟图像合成方法,其特征在于,包括以下步骤:1. A method for synthesizing a real virtual image, characterized in that it comprises the following steps: S1、首先获取目标场景的实际图像,获取包含环境光照、深度信息、纹理特征的多模态数据,并对现实场景图像进行特征提取;S1. First, obtain the actual image of the target scene, obtain multimodal data including ambient lighting, depth information, and texture features, and perform feature extraction on the real scene image; S2、根据现实场景的光照信息对虚拟图像进行光照匹配,包括光源方向、强度的校准,基于物理光照模型对虚拟光照进行调整;S2. Perform lighting matching on the virtual image according to the lighting information of the real scene, including calibration of the light source direction and intensity, and adjusting the virtual lighting based on the physical lighting model; S3、接着结合纹理映射与光照融合的边缘处理,采用自适应纹理映射方法,根据光照和边缘的强度自适应调整纹理映射的过渡区域;S3, then combining the edge processing of texture mapping and illumination fusion, adopting an adaptive texture mapping method, and adaptively adjusting the transition area of texture mapping according to the intensity of illumination and edge; S4、最后通过深度信息引导的纹理和光照调整,将经过深度感知优化后的纹理和光照信息合成到最终图像中;S4, finally, through the texture and lighting adjustment guided by the depth information, the texture and lighting information optimized by depth perception are synthesized into the final image; 所述步骤S3结合纹理映射与光照融合的边缘处理的具体实现方法为:The specific implementation method of step S3 combining edge processing with texture mapping and illumination fusion is: S31、首先合成后的纹理值通过对虚拟纹理和现实纹理的加权融合来实现:,其中为最终合成后的纹理值,是虚拟物体的纹理值,为现实场景中的纹理值,为平滑权重因子,为现实场景的梯度值,表示边缘强度;S31, firstly, the synthesized texture value is realized by weighted fusion of the virtual texture and the real texture: ,in is the final synthesized texture value, is the texture value of the virtual object, is the texture value in the real scene, is the smoothing weight factor, is the gradient value of the real scene, indicating the edge strength; S32、然后将光照强度与反射率的调整纳入合成公式中:,其中为最终合成的光照值,考虑了纹理、光照强度和反射率的影响,是基于光照入射角度的变化的反射率。S32. Then incorporate the adjustment of light intensity and reflectivity into the synthesis formula: ,in The final synthetic lighting value takes into account the effects of texture, light intensity and reflectivity. It is the reflectivity that changes based on the angle of incidence of light. 2.根据权利要求1所述的一种现实虚拟图像合成方法,其特征在于,所述步骤S1中对现实场景图像进行特征提取包括光照估计:分析场景中的光源方向、强度、颜色温度信息,深度信息提取:通过立体视觉获取场景中的物体深度信息,纹理分析:提取场景中的纹理特征。2. According to the method for synthesizing a real virtual image according to claim 1, it is characterized in that the feature extraction of the real scene image in step S1 includes lighting estimation: analyzing the direction, intensity, and color temperature information of the light source in the scene, depth information extraction: obtaining the depth information of objects in the scene through stereoscopic vision, and texture analysis: extracting texture features in the scene. 3.根据权利要求1所述的一种现实虚拟图像合成方法,其特征在于,所述步骤S2中基于物理光照模型对虚拟光照进行调整的实现为:3. The method for synthesizing a real virtual image according to claim 1, wherein the adjustment of the virtual lighting based on the physical lighting model in step S2 is implemented as follows: S21、首先根据现实场景中的光照方向和虚拟光源方向计算光源方向的误差并进行调整:,其中为校准后的虚拟光源方向,为校准系数;S21. First, according to the lighting direction in the real scene and virtual light source direction Calculate the error in the light source direction and make adjustments: ,in is the direction of the virtual light source after calibration, is the calibration factor; S22、接着对虚拟光源的光照强度进行校准,使其与现实场景的光照强度匹配,通过比较现实场景中实际的光照强度和虚拟物体的光照强度,根据光照强度的差异进行调整:,其中为调整后的虚拟光照强度;S22, then calibrate the illumination intensity of the virtual light source to match the illumination intensity of the real scene, and compare the actual illumination intensity in the real scene. and the lighting intensity of virtual objects , adjusting for differences in light intensity: ,in is the adjusted virtual light intensity; S23、然后需要根据现实场景的环境光信息对虚拟图像进行调节,虚拟光照强度需要根据环境光信息进行调节:,其中,为最终合成的光照强度,是环境光强度,是环境光的调节系数。S23. Then the virtual image needs to be adjusted according to the ambient light information of the real scene, and the virtual light intensity needs to be adjusted according to the ambient light information: ,in, is the final synthesized light intensity, is the ambient light intensity, is the adjustment factor for ambient light. 4.根据权利要求1所述的一种现实虚拟图像合成方法,其特征在于,所述基于光照入射角度的变化的反射率的具体公式为:,其中是在垂直入射时的反射率,是入射光线与物体表面法线之间的夹角。4. A method for synthesizing a real virtual image according to claim 1, characterized in that the reflectivity based on the change of the incident angle of light The specific formula is: ,in is the reflectivity at normal incidence, It is the angle between the incident light ray and the surface normal of the object. 5.根据权利要求1所述的一种现实虚拟图像合成方法,其特征在于,所述步骤S4中通过深度信息引导的纹理和光照调整的具体实现为在深度信息的引导下,计算虚拟物体与现实场景的深度差异,并根据该差异调整纹理和光照的权重:,其中为最终合成的图像值,包括深度引导的光照与纹理调整,分别为现实场景图像中位置(x,y)的深度值和虚拟物体在位置(x,y)的深度值,是深度调整系数,控制深度差异对最终合成结果的影响。5. A method for synthesizing a virtual image according to claim 1, characterized in that the texture and illumination adjustment guided by the depth information in step S4 is specifically implemented by calculating the depth difference between the virtual object and the real scene under the guidance of the depth information, and adjusting the weights of the texture and illumination according to the difference: ,in The final composite image value, including depth-guided lighting and texture adjustments, are the depth value of the position (x, y) in the real scene image and the depth value of the virtual object at the position (x, y). is the depth adjustment coefficient, which controls the impact of depth difference on the final synthesis result.
CN202411844429.9A 2024-12-16 2024-12-16 A method for synthesizing realistic virtual images Active CN119323656B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411844429.9A CN119323656B (en) 2024-12-16 2024-12-16 A method for synthesizing realistic virtual images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411844429.9A CN119323656B (en) 2024-12-16 2024-12-16 A method for synthesizing realistic virtual images

Publications (2)

Publication Number Publication Date
CN119323656A true CN119323656A (en) 2025-01-17
CN119323656B CN119323656B (en) 2025-06-20

Family

ID=94232741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411844429.9A Active CN119323656B (en) 2024-12-16 2024-12-16 A method for synthesizing realistic virtual images

Country Status (1)

Country Link
CN (1) CN119323656B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246600A (en) * 2008-03-03 2008-08-20 北京航空航天大学 A Method of Real-time Generating Augmented Reality Environment Illumination Model Using Spherical Panoramic Camera
CN101710429A (en) * 2009-10-12 2010-05-19 湖南大学 Illumination algorithm of augmented reality system based on dynamic light map
US20120320039A1 (en) * 2011-06-14 2012-12-20 Samsung Electronics Co., Ltd. apparatus and method for image processing
CN111199573A (en) * 2019-12-30 2020-05-26 成都索贝数码科技股份有限公司 Virtual-real mutual reflection method, device, medium and equipment based on augmented reality
CN118135152A (en) * 2023-12-14 2024-06-04 联通沃音乐文化有限公司 Virtual-real fusion processing method for AR implantation in XR system
CN118945487A (en) * 2024-07-22 2024-11-12 广州磐碟塔信息科技有限公司 Virtual image synthesis method, device, equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246600A (en) * 2008-03-03 2008-08-20 北京航空航天大学 A Method of Real-time Generating Augmented Reality Environment Illumination Model Using Spherical Panoramic Camera
CN101710429A (en) * 2009-10-12 2010-05-19 湖南大学 Illumination algorithm of augmented reality system based on dynamic light map
US20120320039A1 (en) * 2011-06-14 2012-12-20 Samsung Electronics Co., Ltd. apparatus and method for image processing
CN111199573A (en) * 2019-12-30 2020-05-26 成都索贝数码科技股份有限公司 Virtual-real mutual reflection method, device, medium and equipment based on augmented reality
CN118135152A (en) * 2023-12-14 2024-06-04 联通沃音乐文化有限公司 Virtual-real fusion processing method for AR implantation in XR system
CN118945487A (en) * 2024-07-22 2024-11-12 广州磐碟塔信息科技有限公司 Virtual image synthesis method, device, equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DAVID R. WALTON: "Synthesis of environment maps for mixed reality", ISMAR, 9 October 2017 (2017-10-09) *
夏麟;董子龙;章国锋;: "基于环境映照自动对齐的高质量虚实融合技术", 计算机辅助设计与图形学学报, no. 10, 15 October 2011 (2011-10-15) *
陈宝权;秦学英;: "混合现实中的虚实融合与人机智能交融", 中国科学:信息科学, no. 12, 20 December 2016 (2016-12-20) *

Also Published As

Publication number Publication date
CN119323656B (en) 2025-06-20

Similar Documents

Publication Publication Date Title
CN102663741B (en) Method for carrying out visual stereo perception enhancement on color digit image and system thereof
JP2013127774A (en) Image processing device, image processing method, and program
CN102982538A (en) Nature color simulating method of resource satellite multi-spectral image
CN107886552A (en) Stick picture disposing method and apparatus
CN102436666A (en) Object and scene fusion method based on IHS (Intensity, Hue, Saturation) transform
CN112508812A (en) Image color cast correction method, model training method, device and equipment
Wang et al. End-to-end exposure fusion using convolutional neural network
US12020094B2 (en) Image processing device, printing system, and non-transitory computer-readable storage medium storing image processing program that render three dimensional (3D) object
CN120014171B (en) Model construction method and device based on holder scanning identification
CN119323656B (en) A method for synthesizing realistic virtual images
CN118474549A (en) Image processing method and processing system based on regional exposure control
CN117319807B (en) Light and shadow imaging method and system for karst cave dome
CN113962851A (en) A Realistic Color Pencil Drawing Generation Method
WO2023102189A2 (en) Iterative graph-based image enhancement using object separation
KR20160001897A (en) Image Processing Method and Apparatus for Integrated Multi-scale Retinex Based on CIELAB Color Space for Preserving Color
Heckaman et al. Brighter, more colorful colors and darker, deeper colors based on a theme of brilliance
CN118799207B (en) Mobile terminal scene-in-one picture generation system and method based on artificial intelligence
Zou et al. Underwater image enhancement method based on illumination correction and color correction
CN120182159B (en) Underwater image enhancement method and system
Lakshmi et al. Analysis of tone mapping operators on high dynamic range images
CN118096544B (en) Light field image enhancement method based on gray color fusion
CN118334531B (en) Remote sensing image fusion method, system and storage medium based on vegetation coverage
CN118411319B (en) Single-frame exposure image HDR enhancement method and device based on multidimensional mapping
Gabrijelčič Tomc et al. Colorimetric accuracy of color reproductions in the 3D scenes
CN119722479A (en) Fusion method of visible light and near-infrared images based on illumination difference and reflection characteristics

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载