CN118424131A - Full-field three-dimensional deformation measurement method and device based on image features and seed points - Google Patents
Full-field three-dimensional deformation measurement method and device based on image features and seed points
- Publication number
- CN118424131A (application number CN202410507872.0A)
- Authority
- CN
- China
- Prior art keywords
- region
- reference image
- points
- camera reference
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/16—Measuring arrangements characterised by the use of optical techniques for measuring the deformation in a solid, e.g. optical strain gauge
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/02—Affine transformations
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/97—Determining parameters from multiple pictures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/44—Morphing
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Quality & Reliability (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention provides a full-field three-dimensional deformation measurement method and device based on image features and seed points, relating to the technical field of material deformation testing. Speckle treatment is applied to the surface of a sample to be tested, and a binocular camera acquires surface information of the speckled sample, the surface information comprising a left camera reference image and a right camera reference image. The left and right camera reference images are rectified and matched based on the external and internal parameters of the binocular camera to obtain the corresponding parallax maps; parallax values of all points to be measured on the sample are generated from the parallax maps, and the three-dimensional spatial coordinates of the corresponding points are calculated from the parallax values to obtain the three-dimensional point cloud coordinates before deformation. A region of interest is determined in the left camera reference image and a corresponding region to be matched is determined in the right camera reference image; the region of interest is uniformly divided into a plurality of calculation regions, the region feature points of each calculation region are extracted, and the affine transformation matrix corresponding to the center point of each calculation region is obtained. The displacement parameters of the center points are calculated from the affine transformation matrices of the calculation regions, the center points are taken as primary seed points, and the displacement parameters are diffused as the deformation parameters of the seed points to obtain the three-dimensional point cloud coordinates after deformation. Three-dimensional deformation data of all points to be measured are then obtained from the three-dimensional point cloud coordinates before and after deformation, completing the full-field three-dimensional deformation measurement of the sample. The invention helps improve the matching accuracy and matching efficiency of the matching algorithm.
Description
Technical Field
The invention relates to the technical field of material deformation testing, in particular to a full-field three-dimensional deformation measuring method and device based on image features and seed points.
Background
In the fields of science and engineering, the three-dimensional digital image correlation method is widely applied to three-dimensional deformation measurement owing to advantages such as a simple setup and non-contact operation. It can measure the three-dimensional shape and the full-field three-dimensional deformation of curved objects and has a very wide range of application scenarios. However, because it directly processes high-resolution digital images, its computational load is considerable and the calculation is time-consuming. As digital image acquisition technology develops and image resolution and sampling rates increase, this problem becomes increasingly prominent, limiting the application of the three-dimensional digital image correlation method in scenarios such as real-time monitoring. In addition, as a measurement method, maintaining high accuracy is equally important.
The Chinese patent with publication number CN556653500A discloses a full-field three-dimensional strain measurement method for unmarked structures that integrates a neural network and binocular vision. The binocular vision system is first calibrated, digital images of the unmarked structure before and after deformation are acquired directly, and a surface measurement area and measurement grid points are selected on the pre-deformation image of the left view; SIFT feature point detection and matching are performed on the left and right view images before deformation, and an adaptive stereo matching data set is established through three-level screening of feature point pairs; an adaptive stereo matching artificial neural network is constructed and trained, and the measurement grid points of the left view are input into the network to obtain the measurement grid points of the right view; the measurement grid points of the images before and after deformation are tracked under both views based on a multi-scale optical flow algorithm, and the three-dimensional coordinates of the measurement grid points before and after deformation are calculated; based on sub-field projection and least-squares fitting, the three-dimensional strain is calculated point by point. However, this solution cannot handle discontinuous deformation within the region of interest, which easily causes error propagation in the matching algorithm. It is therefore necessary to provide a full-field three-dimensional deformation measurement method and device based on image features and seed points to improve the matching accuracy and matching efficiency of the matching algorithm.
Disclosure of Invention
In view of the above, the invention provides a full-field three-dimensional deformation measurement method and device based on image features and seed points, which rapidly acquire region feature points and center points through region-wise image feature matching and obtain the displacement parameters of the center points for diffusion based on IC-GN iterative optimization, thereby effectively avoiding error propagation caused by discontinuous deformation and improving the accuracy and efficiency of the matching algorithm.
The invention provides a full-field three-dimensional deformation measurement method based on image features and seed points, which comprises the following steps:
Carrying out speckle treatment on the surface of a sample to be detected, and acquiring surface information of the sample to be detected with speckle marks by adopting a binocular camera, wherein the surface information of the sample to be detected comprises a left camera reference image and a right camera reference image;
Correcting and matching the left camera reference image and the right camera reference image based on external parameters and internal parameters of the binocular camera to obtain parallax maps corresponding to the left camera reference image and the right camera reference image;
Generating parallax values of all to-be-measured points in the to-be-measured sample through the parallax map, and calculating three-dimensional space coordinates of corresponding points according to the parallax values to obtain three-dimensional point cloud coordinates before deformation;
Determining a region of interest in the left camera reference image, determining a region to be matched corresponding to the region of interest in the right camera reference image, uniformly dividing the region of interest to obtain a plurality of calculation regions, extracting the region feature points corresponding to each calculation region, and obtaining the affine transformation matrix corresponding to the center point of each calculation region;
Calculating displacement parameters of the center points corresponding to the calculation areas respectively based on affine transformation matrixes of the calculation areas, and taking the center points as primary seed points, wherein the displacement parameters are used as deformation parameters of the seed points for diffusion to obtain deformed three-dimensional point cloud coordinates;
And acquiring three-dimensional deformation data of all to-be-measured points based on the three-dimensional point cloud coordinates before deformation and the three-dimensional point cloud coordinates after deformation, and completing full-field three-dimensional deformation measurement of the to-be-measured sample.
On the basis of the above technical solution, preferably, the correcting and matching the left camera reference image and the right camera reference image based on the external parameters and the internal parameters of the binocular camera, to obtain disparity maps corresponding to the left camera reference image and the right camera reference image, specifically includes:
correcting and matching the left camera reference image and the right camera reference image through an epipolar constraint based on the external parameters and internal parameters of the binocular camera, so that corresponding points in the left camera reference image and the right camera reference image lie on the same horizontal epipolar line;
Computing a ZNCC correlation function using an integral-image acceleration method within the parallax range on the horizontal epipolar line in the left camera reference image and the right camera reference image, taking the sub-regions divided in the left camera reference image and the right camera reference image as units;
And performing iterative optimization based on an inverse compositional Gauss-Newton algorithm and the ZNCC correlation function to obtain the disparity maps corresponding to the left camera reference image and the right camera reference image.
On the basis of the above technical solution, preferably, the expression of the ZNCC correlation function is:
C_ZNCC = (N·S_rt − S_r·S_t) / √[(N·S_rr − S_r²)(N·S_tt − S_t²)]
Wherein N represents the total number of pixels in the sub-region, S_r represents the sum of gray values of pixels in the sub-region of the reference image, S_t represents the sum of gray values of pixels in the sub-region of the deformed image, S_rr represents the sum of squares of gray values of pixels in the sub-region of the reference image, S_tt represents the sum of squares of gray values of pixels in the sub-region of the deformed image, and S_rt represents the sum of products of gray values of corresponding pixels in the sub-regions of the reference image and the deformed image.
Still further preferably, the determining a region of interest in the left camera reference image, determining a region to be matched corresponding to the region of interest in the right camera reference image, and uniformly dividing the region of interest to obtain a plurality of calculation regions, extracting region feature points corresponding to each calculation region, and obtaining an affine transformation matrix corresponding to a center point of each calculation region, which specifically includes:
manually selecting a region of interest with a box in the left camera reference image, selecting a region to be matched with a box in the right camera reference image, and uniformly dividing the region of interest into a plurality of calculation regions;
based on a SIFT feature extraction algorithm and a weighting function, correspondingly extracting the region feature points of the calculation region and the region to be matched;
And if the region feature points meet the correlation condition, taking the region feature points as the matching points, determining a matching calculation region corresponding to the matching points, and calculating an affine transformation matrix corresponding to the central point of the matching calculation region.
Still further preferably, taking the region feature point as a matching point if the region feature point satisfies the correlation condition specifically includes:
acquiring the descriptor vectors corresponding to the nearest neighbor point and the second-nearest neighbor point of the region feature point;
Judging whether the ratio of the Euclidean distance to the nearest neighbor's descriptor vector to the Euclidean distance to the second-nearest neighbor's descriptor vector is smaller than a preset distance ratio;
If the ratio of the Euclidean distance to the nearest neighbor's descriptor vector to the Euclidean distance to the second-nearest neighbor's descriptor vector is smaller than the preset distance ratio, judging whether the ZNSSD correlation coefficients of the nearest neighbor point are all greater than a preset cost threshold;
and if the ZNSSD correlation coefficients of the nearest neighbor point are all greater than the preset cost threshold, taking the nearest neighbor point of the region feature point as the matching point.
Still further preferably, calculating the displacement parameters of the center points corresponding to the calculation regions based on the affine transformation matrix of each calculation region and taking the center points as primary seed points specifically includes:
converting the affine matrix corresponding to the center point of each calculation region into a first-order shape function, substituting it into an inverse compositional Gauss-Newton algorithm for iterative optimization to obtain the displacement parameters corresponding to the center point, and taking the center point as a primary seed point.
Still further preferably, the expression of the weighting function is:
Wherein (x_n, y_n) represents the position coordinates of the target pixel point, (x_0, y_0) represents the position coordinates of the center point, σ represents the scale of the sub-region where the target pixel point is located, d represents the Euclidean distance between the target pixel point and the center point, and r represents the step size of the sub-region where the target pixel point is located.
In a second aspect of the application, a full-field three-dimensional deformation measurement device based on image features and seed points is provided, the full-field three-dimensional deformation measurement device comprises a speckle information acquisition module, a first coordinate processing module, a second coordinate processing module and a three-dimensional deformation measurement module which are sequentially connected, wherein,
The speckle information acquisition module is used for carrying out speckle treatment on the surface of a sample to be detected, and acquiring surface information of the sample to be detected with speckle marks by adopting a binocular camera, wherein the surface information of the sample to be detected comprises a left camera reference image and a right camera reference image;
The first coordinate processing module is used for correcting and matching the left camera reference image and the right camera reference image based on external parameters and internal parameters of the binocular camera, obtaining a parallax image corresponding to the left camera reference image and the right camera reference image, generating parallax values of all points to be measured in the sample to be measured through the parallax image, and calculating three-dimensional space coordinates of corresponding points according to the parallax values to obtain three-dimensional point cloud coordinates before deformation;
the second coordinate processing module is used for determining a region of interest in the left camera reference image, determining a region to be matched corresponding to the region of interest in the right camera reference image, uniformly dividing the region of interest to obtain a plurality of calculation regions, extracting the region feature points corresponding to the calculation regions, acquiring the affine transformation matrices corresponding to the center points of the calculation regions, respectively calculating the displacement parameters of the center points of the calculation regions based on the affine transformation matrices of the calculation regions, using the center points as primary seed points, and diffusing the displacement parameters as deformation parameters of the seed points to obtain the deformed three-dimensional point cloud coordinates;
The three-dimensional deformation measurement module is used for acquiring three-dimensional deformation data of all to-be-measured points based on the three-dimensional point cloud coordinates before deformation and the three-dimensional point cloud coordinates after deformation, and completing full-field three-dimensional deformation measurement of the to-be-measured sample.
In a third aspect of the application, an electronic device is provided, comprising a processor, a memory for storing instructions, a user interface, and a network interface for communicating with other devices, the processor being configured to execute the instructions stored in the memory.
In a fourth aspect of the application, a computer readable storage medium is provided having stored thereon a computer program for execution by a processor to perform the steps of implementing a full field three-dimensional deformation measurement method based on image features and seed points.
Compared with the prior art, the full-field three-dimensional deformation measurement method and device based on the image features and the seed points have the following beneficial effects:
(1) A temporal matching method based on image features and seed points is adopted: the region feature points and the affine transformation matrix corresponding to the center point of each region are rapidly obtained through region-wise image feature matching, the displacement parameters of the center points are obtained based on IC-GN iterative optimization, and these parameters are diffused as initial seed-point parameters according to the correlation-optimal principle, thereby effectively avoiding error propagation caused by discontinuous deformation, improving the accuracy and efficiency of the matching algorithm, and achieving high-precision, fast three-dimensional deformation field measurement;
(2) Mismatches caused by interference from noise, weak texture, parallax occlusion and other factors are effectively avoided; meanwhile, mismatched data are checked and rejected by exploiting the uniqueness constraint and the left-right consistency of the images captured by the left and right cameras, and the optimal integer-pixel matching points are obtained, thereby improving the robustness and accuracy of the matching algorithm.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a full-field three-dimensional deformation measurement method based on image features and seed points provided by the invention;
FIG. 2 is a schematic diagram of parameter diffusion based on correlation optimization principle according to the present invention;
FIG. 3 is a schematic diagram of a full-field three-dimensional deformation measurement system based on image features and seed points provided by the invention;
FIG. 4 is a schematic structural diagram of a full-field three-dimensional deformation measuring device provided by the invention;
fig. 5 is a schematic structural diagram of an electronic device provided by the present invention.
Reference numerals illustrate: 1. a CCD camera; 2. a light source; 3. a lens; 4. a light filter; 5. a full-field three-dimensional deformation measuring device; 51. a speckle information acquisition module; 52. a first coordinate processing module; 53. a second coordinate processing module; 54. a three-dimensional deformation measurement module; 6. an electronic device; 61. a processor; 62. a communication bus; 63. a user interface; 64. a network interface; 65. a memory.
Detailed Description
The following description of the embodiments of the present invention will clearly and fully describe the technical aspects of the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, are intended to fall within the scope of the present invention.
The application is described in further detail below with reference to fig. 1-5.
Referring to fig. 1, the application provides a full-field three-dimensional deformation measurement method based on image features and seed points, and the method comprises the steps of S1-S6.
Step S1, carrying out speckle treatment on the surface of a sample to be detected, and acquiring surface information of the sample to be detected with speckle marks by adopting a binocular camera, wherein the surface information of the sample to be detected comprises a left camera reference image and a right camera reference image.
In this step, speckle treatment is applied to the surface of the sample to be detected to enhance the feature information of the object surface. The specific speckle fabrication method can be chosen according to the working conditions and the shape of the object to be measured, for example the speckle-gun method, screen printing or a roller method. Before measurement, the system needs to be calibrated as a whole: a calibration plate with five large circles may be used and calibrated based on Zhang Zhengyou's calibration method to obtain the internal and external parameters of the binocular camera system. The calibrated binocular system can then be used to collect speckle images before and after the sample deforms, which are transmitted to a computer for processing and analysis.
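By way of illustration only, the following Python sketch shows how this whole-system calibration could be performed with OpenCV's implementation of Zhang Zhengyou's method; it assumes a chessboard target instead of the five-circle plate, and all file names, board dimensions and image counts are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical sketch of the calibration step: Zhang-style calibration of a
# binocular system with OpenCV. Board layout, image names and counts are assumptions.
pattern_size = (9, 6)                      # inner corners of an assumed chessboard target
square = 10.0                              # assumed square size in mm
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square

obj_pts, left_pts, right_pts = [], [], []
for i in range(15):                        # assumed number of calibration image pairs
    left = cv2.imread(f"calib_left_{i}.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(f"calib_right_{i}.png", cv2.IMREAD_GRAYSCALE)
    ok_l, corners_l = cv2.findChessboardCorners(left, pattern_size)
    ok_r, corners_r = cv2.findChessboardCorners(right, pattern_size)
    if ok_l and ok_r:
        obj_pts.append(objp)
        left_pts.append(corners_l)
        right_pts.append(corners_r)

image_size = left.shape[::-1]
# Intrinsics of each camera, then the extrinsics (R, T) between the two cameras.
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, image_size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, image_size, None, None)
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, image_size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```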
And S2, correcting and matching the left camera reference image and the right camera reference image based on external parameters and internal parameters of the binocular camera, and obtaining parallax maps corresponding to the left camera reference image and the right camera reference image.
In this embodiment, the left camera reference image and the right camera reference image acquired before deformation are taken as the images to be processed, a region of interest is manually selected with a box on the left camera reference image, and key parameters such as the point sampling interval, search step size, number of iterations and sub-region size are set.
The present step further includes steps S21 to S23.
Step S21, based on the external parameters and internal parameters of the binocular camera, correcting and matching the left camera reference image and the right camera reference image through the epipolar constraint, so that corresponding points in the left camera reference image and the right camera reference image lie on the same horizontal epipolar line.
In this step, according to the external and internal parameters of the binocular camera acquired in advance, the left and right camera reference images are rectified and aligned, and processing such as filtering and noise reduction is performed to further reduce the influence of image noise on speckle matching.
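A minimal sketch of this rectification step is given below, reusing the intrinsic matrices K1, K2, distortion vectors d1, d2 and stereo extrinsics R, T from the calibration sketch above; the image names and the optional Gaussian denoising are assumptions, not the patent's prescribed processing.

```python
import cv2

# left_ref / right_ref: raw reference images from the binocular camera (assumed loaded).
# After remapping, corresponding points lie on the same horizontal epipolar line.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, image_size, cv2.CV_32FC1)

left_rect = cv2.remap(left_ref, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right_ref, map2x, map2y, cv2.INTER_LINEAR)

# Optional mild denoising before speckle matching (an assumption, not mandated here).
left_rect = cv2.GaussianBlur(left_rect, (3, 3), 0)
right_rect = cv2.GaussianBlur(right_rect, (3, 3), 0)
```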
Step S22, taking the sub-regions divided in the left camera reference image and the right camera reference image as units, computing the ZNCC correlation function using an integral-image acceleration method within the parallax range on the horizontal epipolar line in the left camera reference image and the right camera reference image.
In this step, the ZNCC correlation function is computed with an integral-image acceleration method within the parallax range on the epipolar line, taking the sub-region as the unit, which further improves efficiency. A three-step search method is then used to obtain the corresponding matching points between the left and right images, and mismatched data are checked and rejected using the uniqueness constraint and left-right consistency, yielding the optimal integer-pixel matching points. The three-step search method mainly sets a certain step size according to the convergence radius of the IC-GN algorithm and searches at intervals, gradually approaching the extremum of the ZNCC correlation coefficient with different interval step sizes, so that the optimal matching point is obtained quickly and the efficiency of the algorithm is improved.
In one example, the step size may first be set to 5 pixels, and the first correlation calculation is performed every 5 pixels from the start of the search. The extremum of the correlation coefficient is then taken as the center of the second pass, the search step is reduced to 3 pixels, and the search proceeds every 3 pixels on both sides of the maximum point to obtain the center of the third pass. Finally, the search step is reduced to 1 pixel, and a point-by-point search on both sides of the maximum point yields the correlation-coefficient extremum, which is the optimal integer-pixel matching point. In addition, mismatches are inevitable due to interference from noise, weak texture, parallax occlusion and the like. To improve the robustness and accuracy of the matching algorithm, mismatched data are checked and rejected using the uniqueness constraint and left-right consistency, and the optimal integer-pixel matching points are obtained.
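The three-step search can be expressed schematically as follows; zncc(d) is a placeholder for the correlation value of the reference sub-region against the candidate sub-region at disparity d, and the 5/3/1-pixel step sizes follow the example above.

```python
def three_step_search(zncc, d_min, d_max):
    """Coarse-to-fine integer-pixel search for the disparity maximizing ZNCC.

    `zncc(d)` is assumed to return the ZNCC value of the reference sub-region
    against the candidate sub-region shifted by disparity d.
    """
    clip = lambda d: max(d_min, min(d_max, d))
    # Pass 1: evaluate every 5 pixels over the full disparity range.
    best = max(range(d_min, d_max + 1, 5), key=zncc)
    # Pass 2: step of 3 pixels on both sides of the current maximum.
    best = max({clip(best - 3), best, clip(best + 3)}, key=zncc)
    # Pass 3: point-by-point search on both sides of the current maximum.
    best = max(range(clip(best - 2), clip(best + 2) + 1), key=zncc)
    return best
```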
In this embodiment, the expression of the ZNCC correlation function is:
C_ZNCC = (N·S_rt − S_r·S_t) / √[(N·S_rr − S_r²)(N·S_tt − S_t²)]
Wherein N represents the total number of pixels in the sub-region, S_r represents the sum of gray values of pixels in the sub-region of the reference image, S_t represents the sum of gray values of pixels in the sub-region of the deformed image, S_rr represents the sum of squares of gray values of pixels in the sub-region of the reference image, S_tt represents the sum of squares of gray values of pixels in the sub-region of the deformed image, and S_rt represents the sum of products of gray values of corresponding pixels in the sub-regions of the reference image and the deformed image.
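As an illustrative sketch (not the patent's code), the sums N, S_r, S_t, S_rr, S_tt and S_rt can be read off integral images so that the ZNCC value of any sub-region costs only a handful of look-ups; the synthetic image pair and sub-region geometry below are assumptions.

```python
import numpy as np

def integral(a):
    """Integral image with a zero first row/column: the sum of a[y0:y1, x0:x1]
    is I[y1, x1] - I[y0, x1] - I[y1, x0] + I[y0, x0]."""
    return np.pad(np.cumsum(np.cumsum(a, axis=0), axis=1), ((1, 0), (1, 0)))

def box(I, y0, x0, y1, x1):
    return I[y1, x1] - I[y0, x1] - I[y1, x0] + I[y0, x0]

# Synthetic rectified pair (assumed data): the right image is the left image
# shifted by 7 pixels, so the true disparity of every point is 7.
rng = np.random.default_rng(0)
ref = rng.random((240, 320))
tgt = np.roll(ref, 7, axis=1)

I_r, I_rr = integral(ref), integral(ref * ref)
I_t, I_tt = integral(tgt), integral(tgt * tgt)

def zncc(y0, x0, y1, x1, d, I_rt):
    """ZNCC of reference sub-region [y0:y1, x0:x1] against the target sub-region
    shifted by disparity d, using only integral-image look-ups."""
    n = (y1 - y0) * (x1 - x0)
    s_r, s_rr = box(I_r, y0, x0, y1, x1), box(I_rr, y0, x0, y1, x1)
    s_t, s_tt = box(I_t, y0, x0 + d, y1, x1 + d), box(I_tt, y0, x0 + d, y1, x1 + d)
    s_rt = box(I_rt, y0, x0, y1, x1)
    num = n * s_rt - s_r * s_t
    den = np.sqrt((n * s_rr - s_r ** 2) * (n * s_tt - s_t ** 2))
    return num / den if den > 0 else 0.0

# One cross-product integral image per candidate disparity (the wrap-around
# columns introduced by np.roll are ignored in this sketch).
scores = {}
for d in range(0, 16):
    I_rt = integral(ref * np.roll(tgt, -d, axis=1))
    scores[d] = zncc(60, 80, 91, 111, d, I_rt)   # a 31x31 sub-region
best_d = max(scores, key=scores.get)             # expected: 7
```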
And S23, performing iterative optimization based on the inverse compositional Gauss-Newton algorithm and the ZNCC correlation function to obtain the disparity maps corresponding to the left camera reference image and the right camera reference image.
And S3, generating parallax values of all to-be-measured points in the to-be-measured sample through the parallax map, and calculating three-dimensional space coordinates of the corresponding points according to the parallax values to obtain three-dimensional point cloud coordinates before deformation.
In this step, iterative optimization is carried out based on the inverse compositional Gauss-Newton (IC-GN) algorithm to obtain high-precision sub-pixel optimal matching points, all image sub-regions are traversed to complete the matching of corresponding points between the left camera reference image and the right camera reference image, and the three-dimensional point cloud coordinates before deformation are obtained by calculation.
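A hedged sketch of this triangulation step is shown below; it assumes the disparity map produced by the matching above and the reprojection matrix Q returned by cv2.stereoRectify in the earlier rectification sketch.

```python
import cv2
import numpy as np

# Hypothetical sketch: turn the dense disparity map into the pre-deformation
# three-dimensional point cloud. `disparity` is assumed to be a float map from
# the epipolar matching step; Q comes from the rectification sketch above.
points_3d = cv2.reprojectImageTo3D(disparity.astype(np.float32), Q)

# Equivalent per-point relation for a rectified pair with focal length f,
# baseline B and principal point (cx, cy):
#   Z = f * B / d,  X = (x - cx) * Z / f,  Y = (y - cy) * Z / f
valid = disparity > 0
cloud_before = points_3d[valid]          # N x 3 array of (X, Y, Z) coordinates
```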
And S4, determining a region of interest in the left camera reference image, determining a region to be matched corresponding to the region of interest in the right camera reference image, uniformly dividing the region of interest to obtain a plurality of calculation regions, extracting the region feature points corresponding to each calculation region, and obtaining the affine transformation matrix corresponding to the center point of each calculation region.
The present step further includes steps S41 to S43.
Step S41, manually selecting a region of interest with a box in the left camera reference image, selecting a region to be matched with a box in the right camera reference image, and uniformly dividing the region of interest into a plurality of calculation regions.
In this step, the pre-framed region of interest is divided into several larger calculation regions A_0, A_1, …, A_n to perform multi-threaded parallel computation on the CPU, thereby improving the algorithm efficiency.
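The region splitting and parallel dispatch might look like the following sketch; process_region is a placeholder for the per-region work described in the following steps (feature extraction, matching and seed optimization), and the ROI geometry is hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def split_roi(x0, y0, width, height, n_cols, n_rows):
    """Uniformly cut the framed region of interest into n_cols * n_rows
    calculation regions, returned as (x, y, w, h) tuples."""
    w, h = width // n_cols, height // n_rows
    return [(x0 + j * w, y0 + i * h, w, h)
            for i in range(n_rows) for j in range(n_cols)]

def process_region(region):
    # Placeholder for the per-region work: SIFT feature matching, affine
    # estimation and IC-GN seed optimization described in steps S42-S43 and S5.
    x, y, w, h = region
    ...
    return region

regions = split_roi(100, 100, 800, 600, n_cols=4, n_rows=2)   # assumed ROI geometry
with ThreadPoolExecutor() as pool:    # heavy NumPy/OpenCV kernels release the GIL
    results = list(pool.map(process_region, regions))
```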
Step S42, based on the SIFT feature extraction algorithm and the weighted value function, the region feature points of the calculation region and the region to be matched are correspondingly extracted.
In this step, for each calculation region, matching feature points are extracted based on SIFT features. Since pixels closer to a feature point have a larger influence on its descriptor, when the feature descriptors are generated, pixels at different distances are divided into different regions and assigned different weighting coefficients.
The expression of the weighting function is:
Where (x_n, y_n) represents the position coordinates of the target pixel point and (x_0, y_0) represents the position coordinates of the center point; σ represents the scale of the sub-region where the target pixel point is located, d represents the Euclidean distance between the target pixel point and the center point, and r represents the step size of the sub-region where the target pixel point is located.
Step S43, if the region feature points meet the correlation condition, the region feature points are used as the matching points, the matching calculation regions corresponding to the matching points are determined, and the affine transformation matrix corresponding to the center points of the matching calculation regions is calculated.
In this embodiment, the descriptor vectors corresponding to the nearest neighbor point and the second-nearest neighbor point of a region feature point are acquired; it is judged whether the ratio of the Euclidean distance to the nearest neighbor's descriptor vector to the Euclidean distance to the second-nearest neighbor's descriptor vector is smaller than a preset distance ratio; if so, it is further judged whether the ZNSSD correlation coefficients of the nearest neighbor point are greater than a preset cost threshold; if they are, the nearest neighbor point of the region feature point is taken as the matching point.
It can be understood that when the features are matched, distance features and gray-level distribution features are adopted as a multidimensional similarity measure. The specific steps are as follows: first, it is determined whether the ratio D_P of the Euclidean distance to the nearest neighbor's descriptor vector to the Euclidean distance to the second-nearest neighbor's descriptor vector satisfies D_P < th (th is generally 0.6-0.8). If not, the point is directly discarded; if so, the gray-level matching cost is further judged using the ZNSSD correlation coefficients of the regions surrounding the two points (C_ZNSSD is generally taken to be greater than 0.8). If the correlation of the matching points meets the requirement, they are taken as final matching points. After all feature points of each region have been matched, the affine transformation matrix of the center point of each region is rapidly calculated.
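A sketch of this two-stage screening (descriptor distance ratio followed by a gray-level correlation check) is given below; the 0.7 ratio, the 15x15 subsets and the use of a ZNCC-style correlation in place of the ZNSSD cost are illustrative assumptions.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()
kp_l, des_l = sift.detectAndCompute(region_left, None)    # one calculation region (assumed image)
kp_r, des_r = sift.detectAndCompute(region_right, None)   # corresponding region to be matched

matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des_l, des_r, k=2)                 # nearest and second-nearest neighbours

def zncc_subset(a, b):
    """Zero-normalized correlation of two equal-size grey-level subsets
    (related to ZNSSD via C_ZNSSD = 2 * (1 - C_ZNCC))."""
    a, b = a.astype(np.float64).ravel(), b.astype(np.float64).ravel()
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

good = []
for m in knn:
    if len(m) < 2:
        continue
    nearest, second = m
    if nearest.distance / (second.distance + 1e-12) < 0.7:          # distance-ratio test
        pl = np.round(kp_l[nearest.queryIdx].pt).astype(int)
        pr = np.round(kp_r[nearest.trainIdx].pt).astype(int)
        a = region_left[pl[1]-7:pl[1]+8, pl[0]-7:pl[0]+8]           # 15x15 grey-level subsets
        b = region_right[pr[1]-7:pr[1]+8, pr[0]-7:pr[0]+8]
        if a.shape == b.shape == (15, 15) and zncc_subset(a, b) > 0.8:
            good.append((kp_l[nearest.queryIdx].pt, kp_r[nearest.trainIdx].pt))

# The affine transformation of the region centre can then be estimated from `good`,
# e.g. with a least-squares fit over the matched coordinates.
```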
And S5, respectively calculating displacement parameters of the center points corresponding to the calculation areas based on affine transformation matrixes of the calculation areas, and using the center points as primary seed points, and diffusing the displacement parameters as deformation parameters of the seed points to obtain deformed three-dimensional point cloud coordinates.
In this step, the affine matrix corresponding to the center point of each calculation region is converted into a first-order shape function, which is substituted into the inverse compositional Gauss-Newton algorithm for iterative optimization to obtain the displacement parameters corresponding to the center point, and the center point is taken as a primary seed point.
In this embodiment, the affine matrix at the center of each region is converted into a first-order shape function, and the conversion relationship is as follows:
Wherein (f_11, f_12, f_13, f_21, f_22, f_23) are the parameters of the affine matrix, which can be solved by the least-squares method, and (u, u_x, u_y, v, v_x, v_y) is the parameter vector of the first-order shape function.
Substituting the converted first-order shape function parameters into the IC-GN algorithm for iterative optimization yields the sub-pixel-accurate displacement parameters of the center point, which are taken as the deformation parameters of the primary seed point.
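Because the exact conversion formula is not reproduced in this text, the following sketch assumes the standard relation between an affine warp and the first-order shape function about the region center; the least-squares fit of the affine parameters from the matched feature points is also shown.

```python
import numpy as np

def affine_from_matches(src, dst):
    """Least-squares affine parameters (f11, f12, f13, f21, f22, f23) mapping
    the matched feature points src -> dst (both N x 2 arrays)."""
    A = np.hstack([src, np.ones((len(src), 1))])
    fx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    fy, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return np.concatenate([fx, fy])          # f11, f12, f13, f21, f22, f23

def shape_function_from_affine(f, x0, y0):
    """Assumed conversion of the affine warp into the first-order shape-function
    parameter vector (u, ux, uy, v, vx, vy) about the region centre (x0, y0)."""
    f11, f12, f13, f21, f22, f23 = f
    u = f11 * x0 + f12 * y0 + f13 - x0        # displacement of the centre point
    v = f21 * x0 + f22 * y0 + f23 - y0
    ux, uy = f11 - 1.0, f12                   # displacement gradients
    vx, vy = f21, f22 - 1.0
    return np.array([u, ux, uy, v, vx, vy])

# The resulting vector serves as the initial guess of the IC-GN iteration; the
# converged result becomes the deformation parameter of the primary seed point.
```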
In this embodiment, as shown in fig. 2, a flag matrix is defined to record the calculation state of the points in the image. It is judged whether there are points in the eight-neighborhood of a seed point that have not yet participated in the calculation, i.e., points whose flag is 0. If so, the shape-function parameters are transferred, and the reasonableness of the transfer is judged based on the correlation criterion. Parameter transfer and optimization are performed in parallel for each region in a multi-threaded manner until the number of calculated points is greater than or equal to 0.66M (where M represents the total number of points to be measured).
The seed point transfer mainly diffuses the deformation parameters based on the correlation-optimal principle: the parameters are transferred to an uncalculated point in the neighborhood, and the correlation coefficient C of that point is calculated. An upper threshold and a lower threshold are preset. If C is greater than the upper threshold, the point is taken as a new seed point for the next diffusion; if C is smaller than the lower threshold, it is directly discarded; if C lies between the two, the current sub-pixel displacement is used as an initial value, IC-GN iterative optimization is performed on it, and the judgment is made again.
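The diffusion strategy can be sketched as a reliability-guided region growing over a flag matrix, as below; zncc_of and icgn_refine are placeholders for the correlation evaluation and the IC-GN refinement, and the threshold values are assumptions.

```python
import numpy as np
from collections import deque

def diffuse_from_seed(seed, seed_params, grid_shape,
                      zncc_of, icgn_refine, c_high=0.9, c_low=0.6):
    """Reliability-guided diffusion of deformation parameters from one seed point.

    `zncc_of(point, params)` returns the correlation of a candidate point for the
    transferred parameters; `icgn_refine(point, params)` returns refined parameters
    and their correlation. c_high / c_low are the assumed upper and lower thresholds."""
    flags = np.zeros(grid_shape, dtype=np.uint8)     # 0: not yet calculated
    params = {seed: seed_params}
    flags[seed] = 1
    queue = deque([seed])
    while queue:
        y, x = p = queue.popleft()
        for dy in (-1, 0, 1):                        # eight-neighborhood of the point
            for dx in (-1, 0, 1):
                q = (y + dy, x + dx)
                if q == p or not (0 <= q[0] < grid_shape[0] and 0 <= q[1] < grid_shape[1]):
                    continue
                if flags[q]:                          # already calculated
                    continue
                guess = params[p]                     # transferred shape-function parameters
                c = zncc_of(q, guess)
                if c > c_high:                        # accept and keep diffusing
                    params[q], flags[q] = guess, 1
                    queue.append(q)
                elif c >= c_low:                      # refine with IC-GN, then re-check
                    refined, c_ref = icgn_refine(q, guess)
                    if c_ref > c_high:
                        params[q], flags[q] = refined, 1
                        queue.append(q)
                # below the lower threshold: discard the transfer
    return params, flags
```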
To address the possible discontinuity of deformation at region boundaries in the measurement result, a regional spatio-temporal weight distribution criterion is adopted for recalculation and optimization. First, boundary discontinuity points are found according to deformation continuity, and their flags are reset to 0. Sub-regions are divided for all uncalculated points, weights are assigned to the remaining pixel points of each sub-region according to the temporal correlation C_t, the spatial correlation C_d and the Euclidean distance D to the center, and the deformation information of the calculation points is obtained again. The calculation formula is as follows:
Wherein q(a) is the deformation information of the calculation point, q(n) is the deformation information of the remaining points in the sub-region, and T(n) represents the calculation weight of each point.
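Since the exact weight formula T(n) is not reproduced in this text, the sketch below only illustrates the weighted re-estimation q(a) = Σ T(n)·q(n) with an assumed weight built from the temporal correlation C_t, the spatial correlation C_d and the distance to the center.

```python
import numpy as np

def recompute_point(q_neighbors, c_t, c_d, dist, sigma_d=5.0):
    """Assumed spatio-temporal weighted re-estimation of a discontinuous boundary point.

    q_neighbors : (n, k) deformation parameters of the remaining points in the sub-region.
    c_t, c_d    : (n,) temporal and spatial correlation coefficients of those points.
    dist        : (n,) Euclidean distances to the center of the sub-region.
    The weight form (correlation product times a Gaussian distance kernel) is an
    assumption; the patent's exact formula for T(n) is not given here.
    """
    weights = c_t * c_d * np.exp(-(dist ** 2) / (2.0 * sigma_d ** 2))
    weights = weights / weights.sum()
    return weights @ q_neighbors          # q(a) = sum_n T(n) * q(n)
```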
And S6, based on the three-dimensional point cloud coordinates before deformation and the three-dimensional point cloud coordinates after deformation, acquiring three-dimensional deformation data of all to-be-measured points, and completing full-field three-dimensional deformation measurement of the to-be-measured sample.
By adopting a spatial-domain matching algorithm based on the epipolar-parallax constraint, the efficiency of the matching algorithm is greatly improved. A temporal matching algorithm based on image features and seed points is further provided: the initial integer-pixel displacement of the region center is rapidly obtained through region image feature matching, the sub-pixel displacement parameters of the center point are obtained based on IC-GN iterative optimization, and these parameters are diffused as initial seed-point parameters according to the correlation-optimal principle until the displacement parameters of all points to be calculated have been acquired. Error propagation caused by deformation discontinuity is effectively avoided, the accuracy and efficiency of the matching algorithm are improved, and high-precision, fast three-dimensional deformation field measurement is realized.
In this embodiment, a measurement system corresponding to a full-field three-dimensional deformation measurement method based on image features and seed points is also provided. As shown in fig. 3, the measurement system comprises a CCD camera, a lens, a light source, an optical filter and a computer; the CCD camera is used for acquiring a speckle image of the surface of the sample to be detected; the lens is matched with the camera for use and is used for acquiring a speckle image of the surface of the sample; the light source is used for illuminating the environment, so that the quality of the acquired image is ensured; the optical filter is arranged right in front of the CCD camera lens, so that interference of ambient light on speckle images is reduced; the computer is used for receiving the speckle image shot by the system and carrying out processing analysis to obtain the surface full-field three-dimensional deformation information of the object to be detected.
Based on the above method, the present application further provides a full-field three-dimensional deformation measurement device based on image features and seed points, referring to fig. 4, the full-field three-dimensional deformation measurement device 5 includes a speckle information acquisition module 51, a first coordinate processing module 52, a second coordinate processing module 53, and a three-dimensional deformation measurement module 54, which are sequentially connected, wherein,
The speckle information acquisition module 51 is used for carrying out speckle processing on the surface of the sample to be detected, and acquiring surface information of the sample to be detected with speckle marks by adopting a binocular camera, wherein the surface information of the sample to be detected comprises a left camera reference image and a right camera reference image;
The first coordinate processing module 52 is configured to correct and match the left camera reference image and the right camera reference image based on external parameters and internal parameters of the binocular camera, obtain parallax maps corresponding to the left camera reference image and the right camera reference image, generate parallax values of each point to be measured in the sample to be measured according to the parallax maps, and calculate three-dimensional space coordinates of the corresponding point according to the parallax values, so as to obtain three-dimensional point cloud coordinates before deformation;
The second coordinate processing module 53 is configured to determine a region of interest in the left camera reference image, determine a region to be matched corresponding to the region of interest in the right camera reference image, uniformly divide the region of interest to obtain a plurality of calculation regions, extract region feature points corresponding to each calculation region, obtain affine transformation matrices corresponding to center points of each calculation region, respectively calculate displacement parameters of the center points corresponding to the calculation regions based on the affine transformation matrices of each calculation region, and diffuse the displacement parameters with the center points as first generation seed points, so as to obtain deformed three-dimensional point cloud coordinates;
The three-dimensional deformation measurement module 54 is configured to obtain three-dimensional deformation data of all points to be measured based on the three-dimensional point cloud coordinates before deformation and the three-dimensional point cloud coordinates after deformation, and complete full-field three-dimensional deformation measurement of the sample to be measured.
In one possible example, based on external parameters and internal parameters of the binocular camera, performing correction matching on the left camera reference image and the right camera reference image, and obtaining disparity maps corresponding to the left camera reference image and the right camera reference image, specifically includes:
based on the external parameters and internal parameters of the binocular camera, correcting and matching the left camera reference image and the right camera reference image through the epipolar constraint, so that corresponding points in the left camera reference image and the right camera reference image lie on the same horizontal epipolar line;
Taking the sub-regions divided in the left camera reference image and the right camera reference image as units, computing the ZNCC correlation function using an integral-image acceleration method within the parallax range on the horizontal epipolar line in the left camera reference image and the right camera reference image;
And performing iterative optimization based on the inverse compositional Gauss-Newton algorithm and the ZNCC correlation function to obtain the disparity maps corresponding to the left camera reference image and the right camera reference image.
In one possible example, the expression for the ZNCC correlation function is:
C_ZNCC = (N·S_rt − S_r·S_t) / √[(N·S_rr − S_r²)(N·S_tt − S_t²)]
Wherein N represents the total number of pixels in the sub-region, S_r represents the sum of gray values of pixels in the sub-region of the reference image, S_t represents the sum of gray values of pixels in the sub-region of the deformed image, S_rr represents the sum of squares of gray values of pixels in the sub-region of the reference image, S_tt represents the sum of squares of gray values of pixels in the sub-region of the deformed image, and S_rt represents the sum of products of gray values of corresponding pixels in the sub-regions of the reference image and the deformed image.
In one possible example, a region of interest is determined in a left camera reference image, a region to be matched corresponding to the region of interest is determined in a right camera reference image, the region of interest is uniformly divided to obtain a plurality of calculation regions, region feature points corresponding to the calculation regions are extracted, and an affine transformation matrix corresponding to center points of the calculation regions is obtained, and the method specifically includes:
Manually selecting a region of interest with a box in the left camera reference image, selecting a region to be matched with a box in the right camera reference image, and uniformly dividing the region of interest into a plurality of calculation regions;
based on a SIFT feature extraction algorithm and a weighting function, correspondingly extracting the region feature points of the calculation region and the region to be matched;
If the region feature points meet the correlation condition, the region feature points are used as the matching points, a matching calculation region corresponding to the matching points is determined, and an affine transformation matrix corresponding to the center point of the matching calculation region is calculated.
In one possible example, if the region feature point satisfies the correlation condition, the region feature point is taken as a matching point, which specifically includes:
Acquiring the descriptor vectors corresponding to the nearest neighbor point and the second-nearest neighbor point of the region feature point;
Judging whether the ratio of the Euclidean distance to the nearest neighbor's descriptor vector to the Euclidean distance to the second-nearest neighbor's descriptor vector is smaller than a preset distance ratio;
if the ratio of the Euclidean distance to the nearest neighbor's descriptor vector to the Euclidean distance to the second-nearest neighbor's descriptor vector is smaller than the preset distance ratio, judging whether the ZNSSD correlation coefficients of the nearest neighbor point are all greater than a preset cost threshold;
if the ZNSSD correlation coefficients of the nearest neighbor point are all greater than the preset cost threshold, taking the nearest neighbor point of the region feature point as the matching point.
In one possible example, the method calculates displacement parameters of center points corresponding to the calculation regions based on affine transformation matrix of each calculation region, and uses the center points as primary seed points, specifically includes:
And converting the affine matrix corresponding to the center point of each calculation region into a first-order shape function, substituting it into the inverse compositional Gauss-Newton algorithm for iterative optimization to obtain the displacement parameters corresponding to the center point, and taking the center point as a primary seed point.
In one possible example, the expression of the weighting function is:
Wherein (x_n, y_n) represents the position coordinates of the target pixel point, (x_0, y_0) represents the position coordinates of the center point, σ represents the scale of the sub-region where the target pixel point is located, d represents the Euclidean distance between the target pixel point and the center point, and r represents the step size of the sub-region where the target pixel point is located.
Referring to fig. 5, a schematic structural diagram of an electronic device is provided in an embodiment of the present application. As shown in fig. 5, the electronic device 6 may include: at least one processor 61, at least one network interface 64, a user interface 63, a memory 65, at least one communication bus 62.
Wherein the communication bus 62 is used to enable connected communication between these components.
The user interface 63 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 63 may further include a standard wired interface and a standard wireless interface.
The network interface 64 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Wherein processor 61 may comprise one or more processing cores. The processor 61 connects the various parts of the entire server using various interfaces and lines, and performs the various functions of the server and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 65 and calling the data stored in the memory 65. Optionally, the processor 61 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA) and programmable logic array (Programmable Logic Array, PLA). The processor 61 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs and the like; the GPU is used for rendering and drawing the content to be displayed on the display screen; and the modem is used to handle wireless communication. It will be appreciated that the modem may not be integrated into the processor 61 and may be implemented by a separate chip.
The memory 65 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory). Optionally, the memory 65 includes a non-transitory computer-readable storage medium. The memory 65 may be used to store instructions, programs, code, a code set or an instruction set. The memory 65 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above method embodiments, and the like; the data storage area may store the data involved in the above method embodiments, and the like. Optionally, the memory 65 may also be at least one storage device located remotely from the aforementioned processor 61. As shown in fig. 5, the memory 65, as a computer storage medium, may include an operating system, a network communication module, a user interface module and an application program of the full-field three-dimensional deformation measurement method.
In the electronic device 6 shown in fig. 5, the user interface 63 is mainly used to provide an input interface for the user and acquire the data input by the user, and the processor 61 may be used to invoke the application program of the full-field three-dimensional deformation measurement method stored in the memory 65, which, when executed by one or more processors, causes the electronic device to perform the method of one or more of the above embodiments.
A computer-readable storage medium has instructions stored thereon which, when executed by one or more processors, cause a computer to perform the method of one or more of the above embodiments.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present application is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all of the preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of units is merely a division of logical functions, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some service interfaces, devices or units, and may be electrical or in other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in whole or in part in the form of a software product stored in a memory, comprising several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the method of the various embodiments of the present application. And the aforementioned memory includes: various media capable of storing program codes, such as a U disk, a mobile hard disk, a magnetic disk or an optical disk.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.
Claims (10)
1. A full-field three-dimensional deformation measurement method based on image features and seed points, the method comprising:
applying speckle processing to the surface of a sample to be measured, and acquiring, with a binocular camera, surface information of the sample to be measured carrying the speckle marks, wherein the surface information of the sample to be measured comprises a left camera reference image and a right camera reference image;
performing rectification and matching on the left camera reference image and the right camera reference image based on the extrinsic parameters and intrinsic parameters of the binocular camera to obtain a disparity map corresponding to the left camera reference image and the right camera reference image;
generating disparity values of all points to be measured in the sample to be measured from the disparity map, and calculating the three-dimensional spatial coordinates of the corresponding points from the disparity values to obtain the three-dimensional point cloud coordinates before deformation;
determining a region of interest in the left camera reference image, determining a region to be matched corresponding to the region of interest in the right camera reference image, uniformly dividing the region of interest to obtain a plurality of calculation regions, extracting the region feature points corresponding to each calculation region, and obtaining an affine transformation matrix corresponding to the center point of each calculation region;
calculating, based on the affine transformation matrix of each calculation region, the displacement parameters of the center point of that calculation region, taking the center points as the primary seed points, and diffusing the displacement parameters as the deformation parameters of the seed points to obtain the three-dimensional point cloud coordinates after deformation;
and acquiring three-dimensional deformation data of all points to be measured based on the three-dimensional point cloud coordinates before deformation and the three-dimensional point cloud coordinates after deformation, thereby completing the full-field three-dimensional deformation measurement of the sample to be measured.
2. The method according to claim 1, wherein performing rectification and matching on the left camera reference image and the right camera reference image based on the extrinsic parameters and intrinsic parameters of the binocular camera to obtain the disparity map corresponding to the left camera reference image and the right camera reference image specifically comprises:
performing rectification and matching on the left camera reference image and the right camera reference image through an epipolar constraint, based on the extrinsic parameters and intrinsic parameters of the binocular camera, so that corresponding points in the left camera reference image and the right camera reference image lie on the same horizontal epipolar line;
calculating a ZNCC correlation function, with the sub-regions divided in the left camera reference image and the right camera reference image as units, over the disparity range along the horizontal epipolar line, using an integral-image-based acceleration method;
and performing iterative optimization based on an inverse compositional Gauss-Newton algorithm and the ZNCC correlation function to obtain the disparity map corresponding to the left camera reference image and the right camera reference image.
3. The method of claim 2, wherein the expression of the ZNCC correlation function is:
Wherein N represents the total number of pixels in the sub-region, S_r represents the sum of gray values of pixels in the sub-region of the reference image, S_t represents the sum of gray values of pixels in the sub-region of the deformed image, S_rr represents the sum of squares of gray values of pixels in the sub-region of the reference image, S_tt represents the sum of squares of gray values of pixels in the sub-region of the deformed image, and S_rt represents the sum of products of gray values of corresponding pixels in the sub-regions of the reference image and the deformed image.
4. The method of claim 1, wherein the determining a region of interest in the left camera reference image, determining a region to be matched corresponding to the region of interest in the right camera reference image, uniformly dividing the region of interest to obtain a plurality of calculation regions, extracting region feature points corresponding to each calculation region, and obtaining an affine transformation matrix corresponding to a center point of each calculation region, comprises:
manually framing and selecting the region of interest in the left camera reference image, framing and selecting the region to be matched in the right camera reference image, and uniformly dividing the region of interest into a plurality of calculation regions;
extracting, based on a SIFT feature extraction algorithm and a weighting function, the region feature points of each calculation region and of the region to be matched correspondingly;
and if the region feature points satisfy a correlation condition, taking the region feature points as matching points, determining the matching calculation region corresponding to the matching points, and calculating the affine transformation matrix corresponding to the center point of the matching calculation region.
5. The method of claim 4, wherein if the region feature point satisfies a correlation condition, using the region feature point as a matching point specifically includes:
acquiring the descriptor vectors corresponding to the nearest neighbor point and the second nearest neighbor point of the region feature point;
judging whether the ratio of the Euclidean distance of the descriptor vector of the nearest neighbor point to the Euclidean distance of the descriptor vector of the second nearest neighbor point is smaller than a preset distance ratio;
if the ratio of the Euclidean distance of the descriptor vector of the nearest neighbor point to the Euclidean distance of the descriptor vector of the second nearest neighbor point is smaller than the preset distance ratio, judging whether the ZNSSD correlation coefficients of the nearest neighbor points are all larger than a preset cost threshold;
and if the ZNSSD correlation coefficients of the nearest neighbor points are all larger than the preset cost threshold, taking the nearest neighbor points of the region feature points as the matching points.
6. The method according to claim 1, wherein calculating, based on the affine transformation matrix of each calculation region, the displacement parameters of the center point of that calculation region, and taking the center points as the primary seed points specifically comprises:
converting the affine transformation matrix corresponding to the center point of each calculation region into a first-order shape function, substituting the first-order shape function into an inverse compositional Gauss-Newton algorithm for iterative optimization to obtain the displacement parameters corresponding to the center points, and taking the center points as the primary seed points.
7. The method of claim 4, wherein the expression of the weighting function is:
Wherein (x_n, y_n) represents the position coordinates of the target pixel point, (x_0, y_0) represents the position coordinates of the center point, σ represents the scale of the sub-region where the target pixel point is located, d represents the Euclidean distance between the target pixel point and the center point, and r represents the step size of the sub-region where the target pixel point is located.
8. A full-field three-dimensional deformation measurement device based on image features and seed points, wherein the full-field three-dimensional deformation measurement device (5) comprises a speckle information acquisition module (51), a first coordinate processing module (52), a second coordinate processing module (53), and a three-dimensional deformation measurement module (54) which are connected in sequence, wherein
the speckle information acquisition module (51) is configured to apply speckle processing to the surface of the sample to be measured and to acquire, with a binocular camera, surface information of the sample to be measured carrying the speckle marks, wherein the surface information of the sample to be measured comprises a left camera reference image and a right camera reference image;
the first coordinate processing module (52) is configured to perform rectification and matching on the left camera reference image and the right camera reference image based on the extrinsic parameters and intrinsic parameters of the binocular camera to obtain the disparity map corresponding to the left camera reference image and the right camera reference image, generate the disparity values of all points to be measured in the sample to be measured from the disparity map, and calculate the three-dimensional spatial coordinates of the corresponding points from the disparity values to obtain the three-dimensional point cloud coordinates before deformation;
the second coordinate processing module (53) is configured to determine a region of interest in the left camera reference image, determine a region to be matched corresponding to the region of interest in the right camera reference image, uniformly divide the region of interest to obtain a plurality of calculation regions, extract the region feature points corresponding to each calculation region, obtain an affine transformation matrix corresponding to the center point of each calculation region, calculate, based on the affine transformation matrix of each calculation region, the displacement parameters of the center point of that calculation region, take the center points as the primary seed points, and diffuse the displacement parameters as the deformation parameters of the seed points to obtain the three-dimensional point cloud coordinates after deformation;
the three-dimensional deformation measurement module (54) is configured to acquire three-dimensional deformation data of all points to be measured based on the three-dimensional point cloud coordinates before deformation and the three-dimensional point cloud coordinates after deformation, thereby completing the full-field three-dimensional deformation measurement of the sample to be measured.
9. An electronic device, comprising a processor (61), a memory (65), a user interface (63), and a network interface (64), wherein the memory (65) is configured to store instructions, the user interface (63) and the network interface (64) are configured to communicate with other devices, and the processor (61) is configured to execute the instructions stored in the memory (65) to cause the electronic device (6) to perform the method according to any one of claims 1-7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1-7.
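The following sketches illustrate, in Python, how parts of the matching pipeline described in the claims could be realised. They are illustrative only: all function names, parameter names, and default values are assumptions introduced here, not definitions taken from the patent. This first sketch corresponds to claims 2 and 3. The patent's own ZNCC expression is given as a formula image that is not reproduced above; the code assumes the conventional sum-based identity ZNCC = (N·S_rt − S_r·S_t) / √((N·S_rr − S_r²)·(N·S_tt − S_t²)), which is consistent with the variable definitions in claim 3. The integral-image acceleration of claim 2 is only indicated in a comment; the subset sums are computed directly to keep the sketch short.

```python
import numpy as np

def subset_sums(img, x, y, half):
    """Gray-value sums over a (2*half+1) x (2*half+1) subset centred at (x, y).
    In the accelerated variant of claim 2 these sums would be read from integral
    images (e.g. cv2.integral) instead of being recomputed for every subset."""
    sub = img[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    return sub, float(sub.sum()), float((sub * sub).sum())

def zncc(ref, tar, x, y, d, half=10):
    """ZNCC between the reference subset at (x, y) and the target subset shifted by
    an integer disparity d along the horizontal epipolar line, written purely in
    terms of the sums N, S_r, S_t, S_rr, S_tt, S_rt defined in claim 3.
    Assumes both subsets lie fully inside the images."""
    r_sub, S_r, S_rr = subset_sums(ref, x, y, half)
    t_sub, S_t, S_tt = subset_sums(tar, x - d, y, half)
    N = r_sub.size
    S_rt = float((r_sub * t_sub).sum())
    num = N * S_rt - S_r * S_t
    den = np.sqrt((N * S_rr - S_r ** 2) * (N * S_tt - S_t ** 2))
    return num / den if den > 0 else 0.0

def integer_disparity(ref, tar, x, y, d_min, d_max, half=10):
    """Exhaustive integer-disparity search over [d_min, d_max]; the winning disparity
    would then seed the sub-pixel IC-GN refinement described in claim 2."""
    scores = [zncc(ref, tar, x, y, d, half) for d in range(d_min, d_max + 1)]
    return d_min + int(np.argmax(scores))
```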
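The next sketch corresponds to the step of claim 1 that converts disparity values into three-dimensional spatial coordinates. It assumes the usual rectified pinhole model (Z = f·B/d, X = (x − cx)·Z/f, Y = (y − cy)·Z/f), which is what a rectified binocular setup normally yields; the parameter names are illustrative, and in practice the same result can be obtained from OpenCV's reprojectImageTo3D with the Q matrix produced by stereoRectify.

```python
import numpy as np

def disparity_to_points(disparity, f, baseline, cx, cy):
    """Back-project a disparity map from a rectified binocular pair into 3-D points.
    f: focal length in pixels, baseline: distance between the two optical centres,
    (cx, cy): principal point. Zero or negative disparities are marked as NaN."""
    h, w = disparity.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float64),
                         np.arange(h, dtype=np.float64))
    d = disparity.astype(np.float64)
    Z = np.where(d > 0, f * baseline / np.where(d > 0, d, 1.0), np.nan)
    X = (xs - cx) * Z / f
    Y = (ys - cy) * Z / f
    return np.dstack([X, Y, Z])   # (H, W, 3) point cloud, before or after deformation
```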
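The third sketch corresponds to claims 4 and 5: SIFT feature extraction in a calculation region and in the region to be matched, the nearest/second-nearest descriptor-distance ratio test, a conventional ZNSSD measure that could serve as the additional gate of claim 5, and a robust affine fit over the surviving matches. The weighting function of claim 7, whose expression is likewise given as a formula image, is not reproduced here; cv2.estimateAffine2D is used as a generic stand-in for the patent's affine-matrix computation.

```python
import cv2
import numpy as np

def znssd(ref_subset, tar_subset):
    """Conventional zero-normalised SSD between two equal-size, non-constant subsets
    (0 means a perfect match after removing mean and contrast differences; lower is better)."""
    r = ref_subset.astype(np.float64).ravel()
    t = tar_subset.astype(np.float64).ravel()
    r = (r - r.mean()) / np.linalg.norm(r - r.mean())
    t = (t - t.mean()) / np.linalg.norm(t - t.mean())
    return float(np.sum((r - t) ** 2))

def match_region(region_left, region_right, ratio=0.8):
    """SIFT keypoints in a calculation region (left reference image, 8-bit grayscale)
    and in the region to be matched (right image), filtered with the
    nearest/second-nearest descriptor-distance ratio test of claim 5. The ZNSSD gate
    of claim 5 can be applied afterwards with znssd() on subsets around each pair."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(region_left, None)
    kp2, des2 = sift.detectAndCompute(region_right, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    return src, dst

def region_affine(src, dst):
    """Robust least-squares 2x3 affine transform between the matched feature points;
    evaluated at the region centre it provides the initial displacement of that centre."""
    A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return A
```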
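The last sketch corresponds to claim 6 and to the seed-point diffusion of claim 1. It shows how a 2x3 affine matrix evaluated at a region centre maps to first-order shape-function parameters p = (u, ux, uy, v, vx, vy), and one common way to diffuse those parameters from the seed points over a grid (a best-first, reliability-guided ordering, which is an assumption here since the claims do not fix the ordering). The refine and neighbours callables are placeholders for the caller's IC-GN solver and grid topology.

```python
import heapq
import numpy as np

def affine_to_shape_params(A, center):
    """Read first-order shape-function parameters p = (u, ux, uy, v, vx, vy) off a
    2x3 affine matrix A (reference -> deformed coordinates), evaluated at the
    calculation-region centre (x0, y0); claim 6 feeds such parameters into the
    inverse compositional Gauss-Newton refinement."""
    x0, y0 = center
    xd, yd = A @ np.array([x0, y0, 1.0])
    return np.array([xd - x0, A[0, 0] - 1.0, A[0, 1],
                     yd - y0, A[1, 0], A[1, 1] - 1.0])

def diffuse_from_seeds(seeds, grid, refine, neighbours):
    """Best-first propagation: every solved point hands its parameters to its unsolved
    neighbours as the initial guess, so the displacement parameters of the seed points
    spread over the whole region of interest. refine(point, p_init) -> (p, score) is
    the caller's sub-pixel solver (e.g. an IC-GN routine, higher score = more reliable);
    neighbours(point) lists the adjacent grid points; grid is the set of valid points."""
    results, heap = {}, []
    for pt, p0 in seeds:
        p, score = refine(pt, p0)
        results[pt] = p
        heapq.heappush(heap, (-score, pt))
    while heap:
        _, pt = heapq.heappop(heap)
        for nb in neighbours(pt):
            if nb in results or nb not in grid:
                continue
            p_nb, s_nb = refine(nb, results[pt])   # inherit the parent's parameters
            results[nb] = p_nb
            heapq.heappush(heap, (-s_nb, nb))
    return results
```

A caller would typically build the seeds from region_affine() and affine_to_shape_params() for each matched calculation region, then pass its own IC-GN routine as refine.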
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410507872.0A CN118424131A (en) | 2024-04-25 | 2024-04-25 | Full-field three-dimensional deformation measurement method and device based on image features and seed points |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410507872.0A CN118424131A (en) | 2024-04-25 | 2024-04-25 | Full-field three-dimensional deformation measurement method and device based on image features and seed points |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN118424131A true CN118424131A (en) | 2024-08-02 |
Family
ID=92309980
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410507872.0A Pending CN118424131A (en) | 2024-04-25 | 2024-04-25 | Full-field three-dimensional deformation measurement method and device based on image features and seed points |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118424131A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119151781A (en) * | 2024-11-19 | 2024-12-17 | Southeast University | Method and system for uniform calibration of internal and external parameters of underwater camera and splicing of array images |
| CN119935003A (en) * | 2025-03-26 | 2025-05-06 | Hefei University of Technology | Real-time monitoring method of rock deformation in reservoir dam shoulder based on digital image correlation |
Similar Documents
| Publication | Title |
|---|---|
| CN106548462B (en) | A Nonlinear SAR Image Geometric Correction Method Based on Thin Plate Spline Interpolation |
| CN118424131A (en) | Full-field three-dimensional deformation measurement method and device based on image features and seed points |
| CN106875443B (en) | The whole pixel search method and device of 3-dimensional digital speckle based on grayscale restraint |
| EP3516625A1 (en) | A device and method for obtaining distance information from views |
| CN112489099B (en) | Point cloud registration method and device, storage medium and electronic equipment |
| CN111709977A (en) | Binocular depth learning method based on adaptive unimodal stereo matching cost filtering |
| CN103544492B (en) | Target identification method and device based on depth image three-dimension curved surface geometric properties |
| CN115294145B (en) | Method and system for measuring sag of power transmission line |
| CN105654547A (en) | Three-dimensional reconstruction method |
| JP5901447B2 (en) | Image processing apparatus, imaging apparatus including the same, image processing method, and image processing program |
| CN111681186A (en) | Image processing method and device, electronic equipment and readable storage medium |
| CN116129037B (en) | Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof |
| WO2021051382A1 (en) | White balance processing method and device, and mobile platform and camera |
| CN114383564A (en) | Depth measurement method, device, device and storage medium based on binocular camera |
| CN101894369B (en) | A Real-time Method for Computing Camera Focal Length from Image Sequence |
| CN109766896A (en) | A kind of method for measuring similarity, device, equipment and storage medium |
| CN118982516B (en) | A method and system for monitoring pyrenoid algae based on binocular images |
| CN109741389A (en) | A Local Stereo Matching Method Based on Region-Based Matching |
| CN110930344B (en) | Target quality determination method, device and system and electronic equipment |
| JP2014164525A (en) | Method, device and program for estimating number of object |
| CN114648544B (en) | A sub-pixel ellipse extraction method |
| CN115049976B (en) | A method, system, device and medium for predicting wind direction and wind speed of a transmission line |
| CN107358655B (en) | Identification method of hemispherical surface and conical surface models based on discrete stationary wavelet transform |
| CN108230377B (en) | Point cloud data fitting method and system |
| CN119851354B (en) | Multi-view-based joint data labeling method, device, equipment and medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |