CN115205349A - A Deep Neural Network-Based Interferogram Automatic Registration and Phase Unpacking Method - Google Patents
- Publication number
- CN115205349A (application CN202210969274.6A)
- Authority
- CN
- China
- Prior art keywords
- phase
- neural network
- deep neural
- interferogram
- noise
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Multimedia (AREA)
- Databases & Information Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Investigating Or Analysing Materials By Optical Means (AREA)
Abstract
The invention discloses a deep-neural-network-based method for automatic interferogram registration and phase unwrapping. The method comprises: generating true phases and wrapped phases by simulation and adding noise; generating interferograms from the wrapped phases, and translating and rotating the interferograms to simulate the system jitter present in real acquisition, thereby generating a data set; configuring a deep neural network model and an optimization algorithm, and training the network with a hybrid loss function on the generated data set; judging from evaluation metrics whether the phase recovery quality meets the requirements, and if so proceeding to the next step, otherwise changing the network structure, adjusting parameters such as the optimization function, learning rate, and loss function, and retraining; and finally feeding actually acquired interferograms into the trained model to compute the predicted true phase. The proposed method requires no explicit interferogram registration, reduces computation time, offers good noise robustness, and improves the accuracy of phase unwrapping.
Description
Technical Field
The invention relates to the technical field of phase-shifting interferometry, and in particular to a method for automatic interferogram registration and phase unwrapping based on a deep neural network.
Background Art
A phase-shifting laser interferometer typically generates the modulated phase either by tuning the laser wavelength or by driving the reference mirror with a piezoelectric transducer (PZT); an image acquisition system then uses a photodetector (such as a CCD) to record interferograms at different phase shifts. Phase-shifting interferometry commonly employs the four-step phase-shifting method, and processing of the acquired interferograms generally involves two steps: phase calculation and phase unwrapping.
During phase calculation, external disturbances and vibration in the acquisition system cause the interferograms at different phase shifts to be rotated and offset relative to one another, making accurate registration difficult and introducing phase recovery errors into the computed wrapped phase; on the other hand, designing a highly stable interferometric system is very costly and strongly dependent on environmental stability. To address these problems, researchers have proposed a series of methods, such as giving the four interferograms similar grayscale distributions and using mutual coherence operations to determine the positional matching between interferograms, or using circular-carrier-frequency processing to extract the fundamental-frequency energy and phase information of each sub-image and correcting position-matching and phase-shift errors from the phase differences between sub-images. These methods, however, involve complex computation and are difficult to apply in dynamic measurement systems.
During phase unwrapping, which recovers the true phase from the wrapped phase, commonly used algorithms such as path-following methods produce unwrapping errors when the noise level is high; when the system is undersampled, the difference between adjacent samples in the acquired data can exceed π, which also causes unwrapping errors. In addition, commonly used minimum-norm methods such as the preconditioned conjugate gradient algorithm involve a very complex iterative process and are time-consuming. In short, existing phase calculation and phase unwrapping procedures suffer from poor phase recovery accuracy and long computation times.
Summary of the Invention
The object of the present invention is to provide a fast and accurate method for automatic interferogram registration and phase unwrapping based on a deep neural network.
The technical solution for achieving the object of the present invention is a deep-neural-network-based method for automatic interferogram registration and phase unwrapping, comprising the following steps:
Step S1: generate a two-dimensional true phase by simulation, compute the two-dimensional wrapped phase, and add noise.
Step S2: compute the corresponding intensity interferograms from the two-dimensional wrapped phase, and randomly translate and rotate the images to simulate the system jitter present in real acquisition, thereby generating a data set.
Step S3: configure the structure, parameters, and optimization algorithm of the deep neural network model, and train the model with a hybrid loss function on the data set generated in step S2.
Step S4: judge from evaluation metrics such as structural similarity and peak signal-to-noise ratio whether the phase recovery quality meets the requirements; if so, proceed to step S5; otherwise, change the network structure, adjust parameters such as the optimization function, learning rate, and loss function, and return to step S3 for retraining.
Step S5: feed the intensity interferograms acquired by the real system into the trained deep neural network model and compute the predicted true phase.
Compared with the prior art, the present invention has the following significant advantages. (1) The true phase is computed directly from the interferograms, without registering the interferograms at different phase shifts before computing the wrapped and true phases; this reduces computation time and enables use in dynamic measurement. (2) Strong noise robustness: commonly used phase unwrapping algorithms such as the minimum-path algorithm produce unwrapping errors at high noise levels, whereas the present invention adds different kinds and levels of noise to the wrapped phase during data set generation, improving the network's generalization across noise levels and the accuracy of phase unwrapping.
Brief Description of the Drawings
FIG. 1 is a flow chart of the deep-neural-network-based interferogram automatic registration and phase unwrapping method of the present invention.
FIG. 2 is a three-dimensional histogram of the initial matrix generated by the present invention.
FIG. 3 is a surface plot of the true phase generated by the present invention.
FIG. 4 is a two-dimensional plan view of the true phase generated by the present invention and of the wrapped phase with added noise.
FIG. 5 shows the intensity interferograms after translation-rotation transformation and cropping of the central region.
FIG. 6 is a structural diagram of the convolutional neural network used.
FIG. 7 is a structural diagram of the residual block used.
FIG. 8 is the true phase map output by the network.
Detailed Description
The purpose of the present invention is to provide a deep-neural-network-based method for automatic interferogram registration and phase unwrapping, solving the problems of traditional intensity-interferometry recovery algorithms: poor noise robustness, inaccurate phase recovery caused by the phase translation and rotation that system jitter introduces, and long computation time.
With reference to FIG. 1, the deep-neural-network-based method for automatic interferogram registration and phase unwrapping of the present invention comprises the following steps:
Step S1: generate a two-dimensional true phase by simulation, compute the two-dimensional wrapped phase, and add noise.
Step S2: compute the corresponding intensity interferograms from the two-dimensional wrapped phase, and randomly translate and rotate the images to simulate the system jitter present in real acquisition, thereby generating a data set.
Step S3: configure the structure, parameters, and optimization algorithm of the deep neural network model, and train the model with a hybrid loss function on the data set generated in step S2.
Step S4: judge from evaluation metrics such as structural similarity and peak signal-to-noise ratio whether the phase recovery quality meets the requirements; if so, proceed to step S5; otherwise, change the network structure, adjust parameters such as the optimization function, learning rate, and loss function, and return to step S3 for retraining.
Step S5: feed the intensity interferograms acquired by the real system into the trained deep neural network model and compute the predicted true phase.
As a specific example, in step S1 the two-dimensional true phase is generated by simulation, the two-dimensional wrapped phase is computed, and noise is added, as follows:
S11: randomly generate a matrix whose size lies within a specified range, whose values lie within a set range, and which follows either a Gaussian or a uniform distribution; randomly select an interpolation algorithm and expand this initial matrix to obtain the true phase ω.
S12: compute the wrapped phase according to $\varphi=\operatorname{angle}(e^{j\omega})$, and randomly select one of salt-and-pepper noise or Gaussian noise to add to the wrapped phase $\varphi$.
As a specific example, in S12 one of salt-and-pepper noise or Gaussian noise is randomly selected and added to the wrapped phase as follows:
The function angle computes the phase angle of a complex number, with values in [−π, π]. The wrapped phase is first divided by π, and one of salt-and-pepper noise or Gaussian noise is then randomly selected and added, where the salt-and-pepper noise density is a random number between 0.01 and 0.2 and the Gaussian noise standard deviation is a random number between 0.01 and 0.20; after the noise has been added, the phase is multiplied by π to restore its value range.
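The wrap-then-add-noise procedure above can be sketched in NumPy as follows; the function name `wrap_and_add_noise` and the test array are illustrative, not part of the patent:

```python
import numpy as np

def wrap_and_add_noise(omega, kind="gaussian", seed=None):
    """Wrap a true phase omega to [-pi, pi] and add one kind of noise."""
    rng = np.random.default_rng(seed)
    phi = np.angle(np.exp(1j * omega))            # wrapped phase in [-pi, pi]
    x = phi / np.pi                               # normalize before adding noise
    if kind == "gaussian":
        sigma = rng.uniform(0.01, 0.20)           # random standard deviation
        x = x + rng.normal(0.0, sigma, x.shape)
    else:                                         # salt-and-pepper
        density = rng.uniform(0.01, 0.2)
        u = rng.random(x.shape)
        x = np.where(u < density / 2, -1.0, x)    # "pepper" pixels
        x = np.where(u > 1 - density / 2, 1.0, x) # "salt" pixels
    return x * np.pi                              # restore the value range

omega = np.linspace(0, 20, 64 * 64).reshape(64, 64)
noisy_phi = wrap_and_add_noise(omega, "gaussian", seed=0)
```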
As a specific example, in step S2 the corresponding intensity interferograms are generated from the two-dimensional wrapped phase as follows:
S21: randomly generate a matrix whose size and values lie within set ranges and which follows a uniform distribution; select an interpolation algorithm and expand this initial matrix to obtain the background intensity A.
S22: randomly generate a matrix whose size and values lie within set ranges and which follows a uniform distribution; select an interpolation algorithm and expand this initial matrix to obtain the contrast term V.
S23: using the four-step phase-shifting method, generate four intensity interferograms with different phase shifts from the noise-added wrapped phase: $I_k=A\left(1+V\cos\left(\varphi+\frac{k\pi}{2}\right)\right),\ k=0,1,2,3$.
S24: randomly rotate the generated intensity interferograms by −10° to 10°, circularly translate them by −20 to 20 pixels in the vertical and horizontal directions, and crop a 256×256 region from the center as the final intensity interferograms; the true phase is likewise cropped to its central 256×256 region as the final true phase.
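Steps S23 and S24 can be sketched as follows, assuming SciPy's `ndimage.rotate` for the random rotation; the helper names and the contrast value are illustrative:

```python
import numpy as np
from scipy.ndimage import rotate

def four_step_interferograms(phi, A, V):
    # I_k = A * (1 + V * cos(phi + k*pi/2)), k = 0..3 (standard four-step form)
    return [A * (1.0 + V * np.cos(phi + k * np.pi / 2)) for k in range(4)]

def jitter_and_crop(img, rng, out=256):
    """Random rotation (-10..10 deg), circular shift (-20..20 px), center crop."""
    img = rotate(img, rng.uniform(-10, 10), reshape=False, mode="nearest")
    dy, dx = rng.integers(-20, 21, size=2)
    img = np.roll(img, (dy, dx), axis=(0, 1))   # circular translation
    h, w = img.shape
    t, l = (h - out) // 2, (w - out) // 2
    return img[t:t + out, l:l + out]

rng = np.random.default_rng(0)
phi = rng.uniform(-np.pi, np.pi, (320, 320))
frames = [jitter_and_crop(f, rng) for f in four_step_interferograms(phi, A=1.0, V=0.85)]
```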
As a specific example, step S3 sets the structure, parameters, and optimization algorithm of the deep neural network model; the optimization algorithm is Adam, and the network structure is as follows:
The deep neural network model is a U-Net built from residual blocks; the intensity interferograms serve as the network input, and the output corresponds to the true phase.
The model comprises an encoder, a bottleneck layer, and a decoder. The encoder has 4 layers, each consisting of 2 consecutive residual blocks; the output of each layer feeds both the next layer and the corresponding decoder layer. The bottleneck contains 2 consecutive residual blocks. The decoder has the same number of layers as the encoder, each containing an upsampling layer and 2 consecutive residual blocks for feature decoding; the feature map output by the last decoder layer passes through a 1×1 convolution to produce the true phase.
As a specific example, in step S3 the deep neural network model is trained with the hybrid loss function and the data set generated in step S2; the hybrid loss function $L_{mix}(x,y)$ is:
$L_{mix}(x,y)=\alpha_1 L_{l1}(x,y)+\alpha_2 L_{MS\text{-}SSIM}(x,y)$
where $\alpha_1$ and $\alpha_2$ are hyperparameters; $\alpha_1$ is set to 0.14 and $\alpha_2$ to 0.86.
The mean absolute error loss is
$L_{l1}(x,y)=\frac{1}{N}\sum_{i=1}^{N}\lvert x_i-y_i\rvert$
where x is the actual true phase, y is the predicted true phase output by the deep neural network model, and N is the number of matrix elements in the true phase.
The multi-scale structural similarity loss is
$L_{MS\text{-}SSIM}(x,y)=1-l_M(x,y)\prod_{j=1}^{M}c_j(x,y)\,s_j(x,y)$
where $c_j$ and $s_j$ denote the contrast and structure terms computed after applying j successive low-pass filterings and downsamplings with a sampling interval of 2 to the original image, and M is the total number of successive low-pass filterings.
The luminance term is
$l(x,y)=\dfrac{2\mu_x\mu_y+C_1}{\mu_x^2+\mu_y^2+C_1}$
the contrast term is
$c(x,y)=\dfrac{2\sigma_x\sigma_y+C_2}{\sigma_x^2+\sigma_y^2+C_2}$
and the structure term is
$s(x,y)=\dfrac{\sigma_{xy}+C_3}{\sigma_x\sigma_y+C_3}$
where $\mu_x$ and $\mu_y$ are the means of x and y, $\sigma_x$ and $\sigma_y$ their standard deviations, and $\sigma_{xy}$ their covariance; $C_1$, $C_2$, $C_3$ are constants satisfying
$C_1=(K_1L)^2$
$C_2=(K_2L)^2$
$C_3=C_2/2$
with $K_1=0.01$, $K_2=0.03$, $L=2^B-1$, and $B=8$.
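As a rough illustration of the hybrid loss, the sketch below computes the SSIM terms from global image statistics and uses a single-scale product as a stand-in for the multi-scale MS-SSIM term; a full implementation would use windowed statistics and M filtering scales, so the function names and this simplification are assumptions:

```python
import numpy as np

def ssim_terms(x, y, C1, C2, C3):
    """Luminance, contrast, structure terms from global image statistics."""
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()
    l = (2 * mx * my + C1) / (mx**2 + my**2 + C1)
    c = (2 * sx * sy + C2) / (sx**2 + sy**2 + C2)
    s = (sxy + C3) / (sx * sy + C3)
    return l, c, s

def mixed_loss(x, y, a1=0.14, a2=0.86, K1=0.01, K2=0.03, B=8):
    L = 2**B - 1                                # dynamic range, B = 8 bits
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    C3 = C2 / 2
    l, c, s = ssim_terms(x, y, C1, C2, C3)
    l1 = np.abs(x - y).mean()                   # mean absolute error term
    return a1 * l1 + a2 * (1 - l * c * s)       # single-scale stand-in for MS-SSIM

x = np.arange(16.0).reshape(4, 4)
loss_same = mixed_loss(x, x)                    # identical images give zero loss
```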
The present invention is described in further detail below with reference to the accompanying drawings and a specific embodiment.
Embodiment
With reference to FIG. 1, the deep-neural-network-based intensity-interferogram automatic registration and phase unwrapping method of this embodiment comprises the following steps:
Step S1: generate a random matrix of a specific size, generate the true phase by interpolation, generate the wrapped phase from the true phase, and add a certain amount of noise to the wrapped phase.
More specifically, the steps are as follows:
S11: randomly generate a square matrix (size between 2×2 and 25×25, values in the range 2–30) following either a uniform or a Gaussian distribution, as shown in FIG. 2; then randomly select one of nearest-neighbor, quadratic, or bicubic interpolation to expand the matrix to 320×320, which serves as the true phase. FIG. 3 is a surface plot of the true phase.
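Step S11 can be sketched with `scipy.ndimage.zoom`, where the spline `order` stands in for the choice of interpolation method (0 = nearest, 2 = quadratic, 3 = bicubic); the function name is illustrative:

```python
import numpy as np
from scipy.ndimage import zoom

def make_true_phase(seed=None, out=320):
    """Expand a small random matrix to out x out by spline interpolation."""
    rng = np.random.default_rng(seed)
    n = int(rng.integers(2, 26))                # square matrix, 2x2 .. 25x25
    init = rng.uniform(2.0, 30.0, size=(n, n))  # uniform case; values in [2, 30]
    order = int(rng.choice([0, 2, 3]))          # nearest / quadratic / bicubic
    return zoom(init, out / n, order=order)[:out, :out]

omega = make_true_phase(seed=1)
```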
S12: compute the wrapped phase according to $\varphi=\operatorname{angle}(e^{j\omega})$, where ω is the true phase and φ is the wrapped phase; the function angle computes the phase angle of a complex number, with values in [−π, π]. The wrapped phase is first divided by π, and one of salt-and-pepper noise or Gaussian noise is then randomly selected and added (the salt-and-pepper noise density is a random number between 0.01 and 0.2, and the Gaussian noise standard deviation lies between 0.01 and 0.20); after the noise has been added, the phase is multiplied by π to restore its original value range. In FIG. 4, the left image is the true phase and the right image is the wrapped phase with added noise.
Step S2: generate intensity interferograms from the wrapped phase; after random translation and rotation, crop the central region as the final interferograms.
More specifically, the steps are as follows:
S21: randomly generate a square matrix (size between 2×2 and 5×5, values in the range 0–1) following a uniform distribution; expand this initial matrix to a 320×320 matrix by linear interpolation, then linearly map the value range to 0.7–1 to obtain the background intensity A.
S22: randomly generate a square matrix (size between 2×2 and 5×5, values in the range 0–1) following a uniform distribution; expand this initial matrix to a 320×320 matrix by linear interpolation, then linearly map the value range to 0.7–1 to obtain the contrast term V.
S23: using the four-step phase-shifting method, generate four intensity interferograms with different phase shifts from the noise-added wrapped phase: $I_k=A\left(1+V\cos\left(\varphi+\frac{k\pi}{2}\right)\right),\ k=0,1,2,3$.
S24: randomly rotate the generated intensity interferograms by −10° to 10°, circularly translate them by −20 to 20 pixels in the vertical and horizontal directions, and crop a 256×256 region from the center of every interferogram as the final intensity interferograms; the true phase is likewise cropped to its central 256×256 region as the final true phase. FIG. 5 shows the four interferograms cropped to the central region.
Step S3: adjust the deep neural network structure, optimization algorithm, and loss function, and train on the generated data set.
The convolutional neural network is shown in FIG. 6. It comprises a four-layer encoder, a bottleneck layer, and a four-layer decoder. Each encoder layer consists of two consecutive residual blocks; the output of each encoder layer serves as the input of the next encoder layer and, via a skip connection, as an input to the decoder layer at the same level. Each decoder layer upsamples the output of the previous layer with a PixelShuffle operation, concatenates it with the output of the corresponding encoder layer, and passes the result through two consecutive residual blocks to produce the input of the next layer; the output feature map of the last decoder layer passes through a 1×1 convolution to give the final output. The residual block, shown in FIG. 7, contains four 3×3 convolutions and one 1×1 convolution. When the residual block halves the height and width of the feature map, the first 3×3 convolution has stride 2 and padding 1; otherwise its stride is 1. The first 3×3 convolution also changes the number of channels, while the last three 3×3 convolutions always have stride 1 with equal input and output channel counts.
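The layer-by-layer tensor shapes implied by this architecture can be checked with a small bookkeeping sketch. The channel schedule (64, 128, 256, 512) and the four-frame input are assumptions, since the patent does not state channel counts; only the layer counts and the halving/doubling of spatial size come from the text:

```python
def resunet_shapes(hw=256, in_ch=4, base_ch=64, depth=4):
    """Track (channels, H=W) through encoder, bottleneck, and decoder."""
    inp = (in_ch, hw)                  # four phase-shifted interferograms (assumed)
    enc, h = [], hw
    for d in range(depth):             # each stage: 2 residual blocks, first conv stride 2
        h //= 2
        enc.append((base_ch * 2 ** d, h))
    bottleneck = enc[-1]               # 2 residual blocks, size unchanged (assumed)
    dec = []
    for d in reversed(range(depth)):   # PixelShuffle x2, concat skip, 2 residual blocks
        h *= 2
        dec.append((base_ch * 2 ** d, h))
    out = (1, dec[-1][1])              # final 1x1 convolution -> one-channel phase map
    return inp, enc, bottleneck, dec, out

inp, enc, bottleneck, dec, out = resunet_shapes()
```

With the assumed schedule, a 256×256 four-frame input is encoded down to 16×16 and decoded back to a single 256×256 phase map.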
The hybrid loss function used is:
$L_{mix}(x,y)=\alpha_1 L_{l1}(x,y)+\alpha_2 L_{MS\text{-}SSIM}(x,y)$
where the mean absolute error loss is
$L_{l1}(x,y)=\frac{1}{N}\sum_{i=1}^{N}\lvert x_i-y_i\rvert$
in which x is the actual true phase, y is the predicted true phase output by the network, and N is the number of matrix elements in the true phase.
The multi-scale structural similarity loss is
$L_{MS\text{-}SSIM}(x,y)=1-l_M(x,y)\prod_{j=1}^{M}c_j(x,y)\,s_j(x,y)$
where the luminance term is
$l(x,y)=\dfrac{2\mu_x\mu_y+C_1}{\mu_x^2+\mu_y^2+C_1}$
the contrast term is
$c(x,y)=\dfrac{2\sigma_x\sigma_y+C_2}{\sigma_x^2+\sigma_y^2+C_2}$
and the structure term is
$s(x,y)=\dfrac{\sigma_{xy}+C_3}{\sigma_x\sigma_y+C_3}$
in which $\mu_x$ and $\mu_y$ are the means of x and y, $\sigma_x$ and $\sigma_y$ their standard deviations, and $\sigma_{xy}$ their covariance; $C_1$, $C_2$, $C_3$ are constants satisfying
$C_1=(K_1L)^2$
$C_2=(K_2L)^2$
$C_3=C_2/2$
where $K_1=0.01$, $K_2=0.03$, $L=2^B-1$, and here $B=8$.
$c_j$ and $s_j$ denote the contrast and structure computed after applying j successive low-pass filterings and downsamplings with a sampling interval of 2 to the original image; $\alpha_1$ and $\alpha_2$ are hyperparameters, set to 0.14 and 0.86.
Step S4: judge from evaluation metrics such as structural similarity, root mean square error, and peak signal-to-noise ratio whether the phase recovery quality meets the requirements; if so, proceed to step S5; otherwise adjust the model structure and parameters and return to step S3 for retraining. FIG. 8 shows the true phase output by the trained network.
Step S5: the intensity interferograms acquired by the real system serve as the network input, and the true phase is obtained by network computation.
In summary, the present invention computes the true phase directly from the interferograms, without registering the interferograms at different phase shifts before computing the wrapped and true phases; this reduces computation time and enables use in dynamic measurement. Moreover, the invention is highly noise-robust: commonly used phase unwrapping algorithms such as the minimum-path algorithm produce unwrapping errors at high noise levels, whereas the invention adds different kinds and levels of noise to the wrapped phase during data set generation, improving the network's generalization across noise levels and the accuracy of phase unwrapping.
Claims (6)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210969274.6A CN115205349A (en) | 2022-08-12 | 2022-08-12 | A Deep Neural Network-Based Interferogram Automatic Registration and Phase Unpacking Method |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210969274.6A CN115205349A (en) | 2022-08-12 | 2022-08-12 | A Deep Neural Network-Based Interferogram Automatic Registration and Phase Unpacking Method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN115205349A true CN115205349A (en) | 2022-10-18 |
Family
ID=83585109
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210969274.6A Pending CN115205349A (en) | 2022-08-12 | 2022-08-12 | A Deep Neural Network-Based Interferogram Automatic Registration and Phase Unpacking Method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115205349A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116911356A (en) * | 2023-07-04 | 2023-10-20 | 内蒙古工业大学 | InSAR phase unwrapping method and device based on deep convolutional neural network optimization and storage medium |
| CN118687603A (en) * | 2024-08-26 | 2024-09-24 | 齐鲁工业大学(山东省科学院) | A fiber optic signal demodulation algorithm based on fully connected neural network |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111208512A (en) * | 2020-01-15 | 2020-05-29 | 电子科技大学 | Interferometric measurement method based on video synthetic aperture radar |
| WO2021007812A1 (en) * | 2019-07-17 | 2021-01-21 | 深圳大学 | Deep neural network hyperparameter optimization method, electronic device and storage medium |
- 2022-08-12: Application CN202210969274.6A filed in China; published as CN115205349A, status Pending
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2021007812A1 (en) * | 2019-07-17 | 2021-01-21 | 深圳大学 | Deep neural network hyperparameter optimization method, electronic device and storage medium |
| CN111208512A (en) * | 2020-01-15 | 2020-05-29 | 电子科技大学 | Interferometric measurement method based on video synthetic aperture radar |
Non-Patent Citations (3)
| Title |
|---|
| LEI KONG et al.: "1D phase unwrapping based on the quasi-gramian matrix and deep learning for interferometric optical fiber sensing applications", JOURNAL OF LIGHTWAVE TECHNOLOGY, vol. 40, no. 1, 7 October 2021 (2021-10-07), pages 252 - 261 * |
| WANG CHENG et al.: "Research on surface deformation monitoring of mountainous cities based on COSMO-SkyMed radar images", ELECTRONIC MEASUREMENT TECHNOLOGY, vol. 41, no. 09, 27 April 2018 (2018-04-27), pages 103 - 108 * |
| MA LIANGTING et al.: "InSAR phase unwrapping algorithm combining phase blocking and fitting methods", REMOTE SENSING INFORMATION, vol. 35, no. 02, 20 April 2020 (2020-04-20), pages 115 - 120 * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN112381172B (en) | InSAR interference image phase unwrapping method based on U-net | |
| CN115205349A (en) | A Deep Neural Network-Based Interferogram Automatic Registration and Phase Unwrapping Method | |
| CN111402240A (en) | A deep-learning-based 3D surface measurement method using single-frame color fringe projection | |
| CN113311433B (en) | InSAR interferometric phase two-step unwrapping method combining quality map and minimum cost flow | |
| CN109633648B (en) | Multi-baseline phase estimation device and method based on likelihood estimation | |
| CN104730519B (en) | A high-precision phase unwrapping method using iterative error compensation | |
| CN102607465A (en) | Phase unwrapping method based on secondary encoding of color phase-shift fringes | |
| CN110109105A (en) | A time-series InSAR method for monitoring ground deformation | |
| CN115760598A (en) | Digital holographic wrapped phase distortion compensation method based on deep learning | |
| CN111043953A (en) | Two-dimensional phase unwrapping method based on deep learning semantic segmentation network | |
| CN113589286B (en) | Unscented Kalman filtering phase unwrapping method based on D-LinkNet | |
| CN103886582B (en) | A spaceborne interferometric synthetic aperture radar image registration method using feature-point Voronoi diagram optimization | |
| CN116148855A (en) | Time-series InSAR Atmospheric Phase Removal and Deformation Calculation Method and System | |
| CN111598929B (en) | Two-Dimensional Unwrapping Method Based on Time-Series Differential Interferometric Synthetic Aperture Radar Data | |
| Gao et al. | Two-dimensional phase unwrapping method using a refined D-LinkNet-based unscented Kalman filter | |
| CN107544069B (en) | Multi-baseline phase unwrapping method based on plane approximation model | |
| CN108548502A (en) | A three-dimensional measurement method for dynamic objects | |
| CN112859077A (en) | Multistage synthetic aperture radar interference phase unwrapping method | |
| Deng et al. | D-SRCAGAN: DEM super-resolution generative adversarial network | |
| CN113011107B (en) | One-dimensional optical fiber sensing signal phase recovery method based on deep convolutional neural network | |
| CN113240604B (en) | Iterative optimization method for time-of-flight depth images based on convolutional neural network | |
| CN103745489A (en) | Method for constructing base station signal field intensity map based on compressed sensing | |
| CN109859322A (en) | A spectral pose transfer method based on deformation maps | |
| Xiao et al. | Image representation on curved optimal triangulation | |
| CN107504919A (en) | Wrapped-phase three-dimensional digital imaging method and device based on phase mapping |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||