WO2008045997A2 - Feature extraction from stereo imagery - Google Patents
Feature extraction from stereo imagery
- Publication number
- WO2008045997A2 (PCT/US2007/081084)
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
Definitions
- Stereo vision is a process for determining the depth or distance of points in a scene based on a change in position of the points in two images of the scene captured from different viewpoints in space.
- Stereo vision algorithms have been used in many computer-based applications to model terrain and objects for vehicle navigation, surveying, and geometric inspection, for example.
- Computer-based stereo vision uses computer processors executing various known stereo vision algorithms to recover a three-dimensional scene from multiple images of the scene taken from different perspectives (referred to hereinafter as a "stereo pair").
- As computer processing speeds increase, the applications for computer-based stereo vision analysis of imagery also increase.
- A captured digital image begins as a raster image.
- A raster image is a data file or structure representing a generally rectangular grid of pixels, or points of color, on a computer monitor, paper, or other display device.
- Each pixel of the image can be associated with an attribute, such as color.
- The color of each pixel, for example, can be individually defined.
- Images in the RGB color space, for instance, often consist of colored pixels defined by three bytes, one byte each for red, green, and blue. An image with only black and white pixels requires only a single bit for each pixel.
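The storage arithmetic described above (three bytes per RGB pixel, a single bit per black-and-white pixel) can be sketched as follows; the 640x480 image dimensions are an illustrative assumption, and real raster formats add headers and compression on top of these raw sizes.

```python
def rgb_raster_bytes(width, height):
    """Raw size of an uncompressed RGB raster: one byte each for R, G, B."""
    return width * height * 3

def bilevel_raster_bytes(width, height):
    """Raw size of a 1-bit-per-pixel black-and-white raster, rounded up
    to whole bytes."""
    return (width * height + 7) // 8

print(rgb_raster_bytes(640, 480))      # 921600
print(bilevel_raster_bytes(640, 480))  # 38400
```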
- Point cloud models, digital terrain models, and digital elevation models can be likened to rasters whose pixels carry data describing the location and elevation attributes of particular points in the scene.
- Computers have also been used to automate much of the analysis required for stereo vision analysis. For example, edge-based methods have been used for establishing correspondence between image points by matching image-intensity patterns along conjugate epipolar lines. Moreover, semi-automated methods have also been implemented where a computer first receives input from a human and then uses this input to establish correspondence between the images in a stereo pair. Thus, computers have become an important tool for generating three-dimensional digital models of scenes in stereo vision.
- Feature extraction includes the use of feature extraction algorithms that use cues to detect and isolate various areas of the geospatial data. These feature extraction algorithms may be used to extract features from the geospatial data, such as roads, railways, and water bodies, for example, that can be displayed on maps or in a Geographic Information System (GIS). A GIS user, a cartographer, or other person can then view the results displayed in the map or a rendered view of the GIS.
- A method for generating a three-dimensional vector object includes representing a feature within a scene from a stereo pair of images depicting the scene from different viewpoints.
- The method further includes establishing corresponding points between a first two-dimensional vector object representing the feature in a first image of the stereo pair and a second two-dimensional vector object representing the feature in a second image of the stereo pair.
- The method further includes analyzing disparities and similarities between the corresponding points of the first and second two-dimensional vector objects.
- The method further includes generating a three-dimensional vector object representing the feature in three dimensions based on results of the analysis of the disparities and similarities between the first and second two-dimensional vector objects.
- Figures 1A and 1B illustrate a method of extracting a three-dimensional vector object using stereo vision analysis;
- Figure 2A illustrates two cameras acquiring images representing a scene from different viewpoints;
- Figure 2B illustrates two-dimensional vector objects representing a road in vector format in each image of a stereo pair;
- Figure 3 illustrates a three-dimensional vector object generated by analyzing the two-dimensional vector objects of Figure 2B;
- Figure 4 illustrates the three-dimensional vector object of Figure 3 along with an associated three-dimensional digital point model;
- Figure 5 illustrates a method for generating a three-dimensional vector object from a stereo pair of images; and
- Figure 6 illustrates a suitable computing environment in which several embodiments may be implemented.
- The present invention relates to extracting three-dimensional feature lines and polygons using stereo imagery analysis.
- The principles of the embodiments described herein describe the structure and operation of several examples used to illustrate the present invention. It should be understood that the drawings are diagrammatic and schematic representations of such example embodiments and, accordingly, are not limiting of the scope of the present invention, nor are the drawings necessarily drawn to scale. Well-known devices and processes have been excluded so as not to obscure the discussion with details that would be known to one of ordinary skill in the art.
- Several embodiments disclosed herein use a combination of manual and automatic processes to produce a fast and accurate tool for at least semi-automated digitization of a three-dimensional model of a scene from a stereo pair.
- Several embodiments extract three-dimensional features and create a vector layer for a three-dimensional scene from the stereo imagery.
- Several embodiments also use pattern-recognition processes for extraction of features from a stereo pair to subsequently generate the three-dimensional vector objects. These three-dimensional vector objects can then be associated with the imagery as a three-dimensional vector layer.
- Referring to Figure 1A, a method of extracting a three-dimensional vector object representing a feature within a scene is illustrated.
- A three-dimensional scene 100 is illustrated where two cameras 110A and 110B are acquiring images 120A and 120B of the scene 100 from different viewpoints in space.
- The images 120A and 120B acquired from different viewpoints differ corresponding to the viewpoint from which each image was acquired.
- Three-dimensional digital point models can represent the topography of the Earth or another surface in digital format, for example by coordinates and numerical descriptions of altitude.
- Two-dimensional vector objects can be extracted and analyzed to generate three-dimensional vector objects representing features within the scene 100.
- A feature 130 is illustrated in the scene 100 of Figure 1A.
- The depicted feature 130 differs between the acquired images 120A and 120B depending on the viewpoint from which each image was acquired.
- Two-dimensional vector objects 125A and 125B have been extracted from the images 120A and 120B respectively.
- The differences between the feature 130 as depicted in images 120A and 120B are illustrated in an overlaid manner by comparing two-dimensional vector objects 125A and 125B.
- A stereo vision analysis algorithm includes a preprocessing step where matching points are associated within each of the two-dimensional vector objects 125A and 125B extracted from the stereo pair. This step is often referred to as "correspondence establishment." For example, in Figure 1B points 140, 150, 160, and 170 can be established as corresponding points of the vector objects 125A and 125B. The quality of a match can be measured by comparing windows centered at the two locations of the match, for example, using the sum of squared intensity differences (SSD). Many different methods for correspondence establishment, rectification of images, calibration, and recovering three-dimensional digital point models are known in the art and commonly implemented for deriving three-dimensional digital point models from a stereo pair. After correspondence is established, disparities and similarities are analyzed to generate a three-dimensional vector object representing the feature 130 in three dimensions.
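The SSD match score described above can be sketched as follows. This is a minimal illustration, not the patent's prescribed implementation: grayscale NumPy arrays are assumed, windows are assumed to lie fully inside both images, and the candidate search runs along a single row (i.e. along a rectified epipolar line).

```python
import numpy as np

def ssd_score(left, right, pl, pr, half=2):
    """Sum of squared intensity differences (SSD) between square windows of
    side 2*half+1 centered at pl in `left` and pr in `right`.
    Lower scores indicate better matches. Windows are assumed in-bounds."""
    (rl, cl), (rr, cr) = pl, pr
    wl = left[rl - half:rl + half + 1, cl - half:cl + half + 1].astype(float)
    wr = right[rr - half:rr + half + 1, cr - half:cr + half + 1].astype(float)
    return float(np.sum((wl - wr) ** 2))

def best_match_along_row(left, right, p, search, half=2):
    """Scan candidate columns on the same image row (a rectified epipolar
    line) and return the column with the lowest SSD score."""
    row, _ = p
    return min(search, key=lambda c: ssd_score(left, right, p, (row, c), half))
```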
- The three-dimensional vector object can include points, lines, and polygons, for example.
- Referring to Figure 2A, another example method for generating a three-dimensional vector object is illustrated.
- Two cameras 200A and 200B are shown acquiring images 205A and 205B respectively, representing a scene 210 from different viewpoints.
- The scene 210 can include any surface, object, geography, or any other view capable of image capture.
- The scene illustrated in Figure 2A includes a mountain 215 and a road 220 as features.
- The mountain 215 and the road 220 are merely examples of geographic objects within a scene.
- The images 205A and 205B captured by the cameras 200A and 200B are of the same scene 210 from different viewpoints, resulting in differences in the relative position of various points of the road 220, for example, as depicted within the different images 205A and 205B.
- The captured images 205A and 205B can be stored in a memory 225 and accessed by a data processing device 230, such as a conventional or special purpose computer.
- The memory 225 may be, but need not be, shared, local, remote, or otherwise associated with the cameras 200A and 200B or the data processing device 230.
- The data processing device 230 includes computer executable instructions for accessing and analyzing the images 205A and 205B stored on the memory 225 to extract two-dimensional features from the images 205A and 205B.
- The extraction of two-dimensional features can be at least semi-automated in that the data processing device 230 can receive inputs from a user, or operate fully autonomously from user input, to identify features within the images 205A and 205B.
- The data processing device 230 can receive an input, such as user selection of the road 220, as a cue for identifying pixels within the images 205A and 205B representing the road 220.
- The data processing device 230 identifies the road 220 within the different images 205A and 205B and extracts the road 220 as a two-dimensional feature from each image 205A and 205B.
- The data processing device 230 generates two-dimensional vector objects 235A and 235B representing the road in each image of the stereo pair.
- Figure 2B illustrates the two-dimensional vector objects 235A and 235B representing the road 220 in vector format, extracted from each image of the stereo pair 205A and 205B.
- The data processing device 230 can collect any two-dimensional geospatial feature from the imagery, such as roads, buildings, water bodies, vegetation, pervious and impervious surfaces, multi-class image classifications, and land cover.
- The data processing device 230 can use multiple spatial attributes (e.g., size, shape, texture, pattern, spatial association, and/or shadow) together with spectral information to collect geospatial features from the imagery and create vector data representing the features.
- A first two-dimensional vector object 235A and a second two-dimensional vector object 235B, representing the road 220 in each image of the stereo pair 205A and 205B respectively, have been generated.
- The two-dimensional vector objects can be vector shapefiles.
- The two-dimensional vector objects 235A and 235B differ in relative position and shape due to the different viewpoints from which the images 205A and 205B are acquired.
- The first and second vector objects 235A and 235B are compared and analyzed, using trigonometric stereo vision and image matching algorithms, to derive position attributes describing the road 220 in the scene 210. From the relative position attributes, a three-dimensional vector object 300 is generated as illustrated in Figure 3. This three-dimensional vector object 300 represents the road 220 in three dimensions in the vector domain. The first two-dimensional vector object 235A and the second two-dimensional vector object 235B need not both represent the entire road 220, however.
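The patent does not spell out its trigonometric algorithm, but the standard relation for a rectified stereo pair is a reasonable sketch of the kind of computation involved: depth follows from disparity as Z = f·B/d. The focal length, baseline, and vertex coordinates below are illustrative assumptions.

```python
def triangulate(xl, xr, y, focal_px, baseline_m):
    """Recover (X, Y, Z) in the left camera frame from one correspondence.
    xl, xr: image x-coordinates (pixels, relative to the principal point);
    y: shared image row of the rectified pair.
    Z = f*B/d, X = xl*Z/f, Y = y*Z/f, with disparity d = xl - xr."""
    d = xl - xr  # larger disparity => closer point
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = focal_px * baseline_m / d
    return (xl * z / focal_px, y * z / focal_px, z)

# Each pair of corresponding vertices of the two 2-D vector objects yields
# one 3-D vertex; collecting them builds the 3-D vector object's geometry.
road_3d = [triangulate(xl, xr, y, focal_px=1000.0, baseline_m=0.5)
           for (xl, y), (xr, _) in [((120.0, 40.0), (100.0, 40.0)),
                                    ((130.0, 10.0), (105.0, 10.0))]]
```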
- If a first two-dimensional vector object represents only a portion of the road 220 that a second two-dimensional vector object also represents, three-dimensional information may still be gathered describing the portion of the road 220 represented by both the first and second two-dimensional vector objects.
- A three-dimensional vector object can be generated representing the portion of the road represented by both the first and second two-dimensional vector objects.
- Thus, the entire feature, in this instance the road 220, need not have the same start and end points in each image of the stereo pair in order to derive three-dimensional information or three-dimensional vector objects describing the feature.
- Various other geospatial data can be generated by analyzing the stereo pair 205A and 205B.
- Three-dimensional digital point models can be generated describing the scene 210 in three dimensions using conventional stereo imagery analysis.
- The three-dimensional vector object 300 representing the road 220 can be associated as a vector layer with three-dimensional point models 400 as illustrated in Figure 4.
- The three-dimensional point models 400 can also be compared to the three-dimensional vector object 300 to confirm the accuracy of the three-dimensional vector object 300 and/or to confirm the accuracy of the three-dimensional point models 400.
- For example, a digital terrain model generated by analyzing a stereo pair can be compared to a three-dimensional vector object generated by analyzing the same stereo pair.
- The comparison of the digital terrain model to the three-dimensional vector object can be used to check the accuracy of the three-dimensional vector object and/or the digital terrain model.
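One way the accuracy check described above might look: sample the digital terrain model at each vertex of the three-dimensional vector object and flag vertices whose elevation disagrees beyond a tolerance. This is a sketch under assumptions of my own (a row-major elevation grid, nearest-cell lookup, and a 0.5 m tolerance), not the patent's prescribed comparison.

```python
def dtm_elevation(dtm, origin, cell, x, y):
    """Nearest-cell elevation lookup in a row-major DTM grid."""
    col = round((x - origin[0]) / cell)
    row = round((y - origin[1]) / cell)
    return dtm[row][col]

def flag_discrepancies(vertices, dtm, origin=(0.0, 0.0), cell=1.0, tol=0.5):
    """Return the (x, y, z) vertices whose elevation z differs from the
    terrain model by more than tol."""
    return [(x, y, z) for (x, y, z) in vertices
            if abs(z - dtm_elevation(dtm, origin, cell, x, y)) > tol]

dtm = [[10.0, 10.5],          # row 0: elevations at y=0, for x=0 and x=1
       [11.0, 11.5]]          # row 1: elevations at y=1
road = [(0.0, 0.0, 10.1),     # agrees with the terrain model
        (1.0, 1.0, 13.0)]     # disagrees by 1.5 m
print(flag_discrepancies(road, dtm))  # [(1.0, 1.0, 13.0)]
```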
- Referring to Figure 5, the stereo pair is acquired (500).
- The stereo pair can be acquired by a pair of cameras, or by the same camera from two different viewpoints in the field, for example.
- The stereo pair can depict geography including a feature from different viewpoints.
- The stereo pair can be digital images, or analog images later converted to a digital format, and can be stored in a computer readable medium, transmitted over a communications connection, or otherwise retained for analysis.
- A first two-dimensional vector object is generated by extracting the feature from a first image of the stereo pair (505).
- The first two-dimensional vector object can be generated in an at least semi-autonomous manner, using feature extraction software such as Feature Analyst for ERDAS IMAGINE by Leica Geosystems.
- In this example, the feature can be extracted using only limited input received from a user.
- For example, a user can select pixels representative of the feature in the first image of the stereo pair, and the feature extraction software can use the representative pixels to identify the feature in the first image of the stereo pair.
- The two-dimensional vector object can be generated as a vector file including lines and polygons representing the feature in the vector domain.
- A second two-dimensional vector object is generated in a similar manner to the first two-dimensional vector object by extracting the feature from a second image of the stereo pair (510). The feature is extracted and the second two-dimensional vector object can be generated in an at least semi-autonomous manner using software.
- Correspondence between the stereo pair is established (515). Correspondence can be established in a manual, semi-autonomous, or automated manner. For example, a user can select at least one corresponding point on each of the two-dimensional vector objects (e.g., see Figure 1B). Based on the corresponding point(s) selected by the user, software can identify additional corresponding points on the vector objects derived from the stereo pair.
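The semi-autonomous correspondence step above leaves the propagation heuristic open; one plausible sketch is to pair vertices of the two polylines at matching arc-length fractions, with the operator confirming or correcting the proposals. This heuristic is my assumption for illustration, not the algorithm the patent prescribes.

```python
import math

def arc_fractions(poly):
    """Cumulative arc-length fraction of each vertex along a 2-D polyline."""
    d = [0.0]
    for (x1, y1), (x2, y2) in zip(poly, poly[1:]):
        d.append(d[-1] + math.hypot(x2 - x1, y2 - y1))
    total = d[-1] or 1.0  # guard against zero-length polylines
    return [v / total for v in d]

def propose_correspondences(poly_a, poly_b):
    """Pair each vertex of poly_a with the vertex of poly_b at the nearest
    arc-length fraction; an operator would then confirm or correct."""
    fa, fb = arc_fractions(poly_a), arc_fractions(poly_b)
    return [(i, min(range(len(poly_b)), key=lambda j: abs(fb[j] - f)))
            for i, f in enumerate(fa)]
```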
- Features that are only partially represented in the stereo pair can be extended (517). For example, a feature may be partially represented in each image of the stereo pair, but not all of the feature is so represented.
- A feature such as a road may appear in each image of the stereo pair, and also in only one image where it extends out of the overlapping region.
- In such cases, the match can be extrapolated and used to describe the road in three dimensions even where the road cannot be visualized in stereo.
- Thus, the invention can be applied to describe features in three dimensions using stereo pairs, even when portions of the features cannot be visualized in the stereo imagery because those portions are represented in only one image.
- The features need not have the same start and end points in each image of the stereo pair; the teachings herein may still be implemented to gather information describing the feature in three dimensions for the portions of the feature that are represented in both images of the stereo pair.
- Interpolation or extrapolation algorithms may be used to extend the feature within an image where applicable.
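A minimal sketch of the extrapolation idea above: linearly extend the last stereo-derived segment of a 3-D feature toward a vertex that appears in only one image. Linear extension and the sample coordinates are illustrative assumptions; the patent only says interpolation or extrapolation "may be used where applicable."

```python
def extrapolate_linear(p1, p2, t):
    """Extend the segment p1 -> p2 by parameter t (t=1 gives p2; t>1
    continues beyond p2 along the same line)."""
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))

# Last two vertices recovered in stereo, then one vertex extrapolated into
# the region covered by only one image of the pair:
a, b = (0.0, 0.0, 10.0), (4.0, 0.0, 12.0)
beyond = extrapolate_linear(a, b, 1.5)
print(beyond)  # (6.0, 0.0, 13.0)
```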
- Disparities and similarities between the points of correspondence are analyzed (520) using trigonometric stereo vision and image matching algorithms to determine the three-dimensional position and elevation of various points of the feature represented by the pair of two-dimensional vector objects.
- In this manner, the feature is extracted in three dimensions.
- A three-dimensional vector object is generated (525), and the three-dimensional vector object can be stored in memory, saved as a vector layer associated with the feature and/or scene, or otherwise utilized.
- A three-dimensional digital point model, such as a point cloud, digital terrain model, or digital elevation model, can also be generated (530) using stereo imagery analysis of disparities and similarities between the stereo pair of images.
- The three-dimensional digital point model can be associated with the three-dimensional vector object (535).
- The three-dimensional digital point model can also be compared to the three-dimensional vector object to identify any disparities and similarities between the two.
- The disparities and similarities can be analyzed to determine the accuracy of the three-dimensional digital point model and/or the three-dimensional vector object representing the feature (540). For example, certain discontinuities in the images of the stereo pair, such as shadows, interference from other features, and changes in light conditions, may introduce error in either the three-dimensional digital point model or the three-dimensional vector object.
- A four-dimensional vector object can also be generated (545).
- The four-dimensional vector object can be generated by first generating a first three-dimensional vector object representing a feature at a first point in time, and then comparing the first three-dimensional vector object to a second three-dimensional vector object representing the same feature but generated at a second, later point in time.
- The four-dimensional vector object can illustrate three-dimensional changes to the feature over time.
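The four-dimensional vector object described above could be sketched as the per-vertex displacement of a feature between two observation epochs. The record layout and the assumption that vertices correspond by index are illustrative choices of mine, not details given in the patent.

```python
def four_d_object(obj_t1, obj_t2, t1, t2):
    """Per-vertex 3-D change of the same feature between two epochs.
    Vertices of the two 3-D vector objects are assumed to correspond
    by index."""
    if t2 <= t1:
        raise ValueError("second object must be observed later than the first")
    return [{"t1": t1, "t2": t2, "from": a, "to": b,
             "delta": tuple(q - p for p, q in zip(a, b))}
            for a, b in zip(obj_t1, obj_t2)]

changes = four_d_object([(0.0, 0.0, 5.0)], [(0.0, 0.2, 5.5)],
                        t1=2006.0, t2=2007.0)
print(changes[0]["delta"])  # (0.0, 0.2, 0.5)
```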
- Three-dimensional vector objects generated from different stereo pairs captured under different conditions can also be compared to determine accuracy of the three-dimensional vector objects.
- A first stereo pair may be acquired under a first set of conditions, such as particular lighting conditions, time of day, equipment, and angle of sunlight.
- A second stereo pair can be acquired under a different set of conditions.
- Three-dimensional vector objects generated from the different stereo pairs can be compared to determine whether errors exist in the three-dimensional vector objects.
- The methods described above can be carried out using stereo vision algorithms executed by a data processor.
- The data processor can be part of a conventional or special purpose computer system.
- Embodiments within the scope of those illustrated herein can also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer.
- Such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
- When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired and wireless) to a computer, the computer properly views the connection as a computer-readable medium.
- Thus, any such connection is properly termed a computer-readable medium.
- Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
- Figure 6 and the following discussion are intended to provide a brief, general description of a suitable computing environment in which several embodiments may be implemented. Although not required, several embodiments will be described in the general context of computer-executable instructions, such as program modules, being executed by computers in network environments.
- Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
- Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein.
- With reference to Figure 6, an exemplary system for implementing several embodiments includes a general purpose computing device in the form of a conventional computer 620, including a processing unit 621, a system memory 622, and a system bus 623 that couples various system components including the system memory 622 to the processing unit 621.
- The system bus 623 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
- The system memory includes read only memory (ROM) 624 and random access memory (RAM) 625.
- A basic input/output system (BIOS) 626, containing the basic routines that help transfer information between elements within the computer 620, such as during start-up, may be stored in ROM 624.
- The computer 620 may also include a magnetic hard disk drive 627 for reading from and writing to a magnetic hard disk 639, a magnetic disk drive 628 for reading from or writing to a removable magnetic disk 629, and an optical disk drive 630 for reading from or writing to a removable optical disk 631 such as a CD-ROM or other optical media.
- The magnetic hard disk drive 627, magnetic disk drive 628, and optical disk drive 630 are connected to the system bus 623 by a hard disk drive interface 632, a magnetic disk drive interface 633, and an optical drive interface 634, respectively.
- The drives and their associated computer-readable media provide nonvolatile storage of computer-executable instructions, data structures, program modules, and other data for the computer 620.
- Although the exemplary environment described herein employs a magnetic hard disk 639, a removable magnetic disk 629, and a removable optical disk 631, other types of computer readable media for storing data can be used, including magnetic cassettes, flash memory cards, digital versatile disks, Bernoulli cartridges, RAMs, ROMs, and the like.
- Program code means comprising one or more program modules may be stored on the hard disk 639, magnetic disk 629, optical disk 631, ROM 624 or RAM 625, including an operating system 635, one or more application programs 636, other program modules 637, and program data 638.
- A user may enter commands and information into the computer 620 through a keyboard 640, pointing device 642, or other input devices (not shown), such as a microphone, joystick, game pad, satellite dish, scanner, or the like.
- These and other input devices are often connected to the processing unit 621 through a serial port interface 646 coupled to the system bus 623.
- Alternatively, the input devices may be connected by other interfaces, such as a parallel port, a game port, or a universal serial bus (USB).
- A monitor 647 or another display device is also connected to the system bus 623 via an interface, such as a video adapter 648.
- In addition to the monitor, personal computers typically include other peripheral output devices (not shown), such as speakers and printers.
- The computer 620 may operate in a networked environment using logical connections to one or more remote computers, such as remote computers 649a and 649b.
- Remote computers 649a and 649b may each be another personal computer, a server, a router, a network PC, a peer device, or other common network node, and typically include many or all of the elements described above relative to the computer 620, although only memory storage devices 650a and 650b and their associated application programs 636a and 636b have been illustrated in Figure 6.
- The logical connections depicted in Figure 6 include a local area network (LAN) 651 and a wide area network (WAN) 652 that are presented here by way of example and not limitation.
- Such networking environments are commonplace in office-wide or enterprise-wide computer networks, intranets and the Internet.
- When used in a LAN networking environment, the computer 620 is connected to the local network 651 through a network interface or adapter 653. When used in a WAN networking environment, the computer 620 may include a modem 654, a wireless link, or other means for establishing communications over the wide area network 652, such as the Internet.
- The modem 654, which may be internal or external, is connected to the system bus 623 via the serial port interface 646.
- In a networked environment, program modules depicted relative to the computer 620, or portions thereof, may be stored in the remote memory storage device. It will be appreciated that the network connections shown are exemplary and other means of establishing communications over the wide area network 652 for analyzing a stereo pair of images can be used.
Abstract
The present invention concerns generating a three-dimensional vector object representing a feature within a scene by analyzing two-dimensional vector objects that represent the feature in a stereo pair. The two-dimensional vector objects are analyzed using stereo vision algorithms to generate a three-dimensional vector object. The results of this analysis yield the three-dimensional positions of the corresponding points of the two-dimensional vector objects. The three-dimensional vector object is created based on the results of the stereo vision analysis. The three-dimensional vector object can be compared to three-dimensional digital point models. The three-dimensional vector object can also be compared to another three-dimensional vector object created from a stereo pair captured under different conditions.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US5486206A | 2006-10-11 | 2006-10-11 | |
US11/548,62 | 2006-10-11 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2008045997A2 true WO2008045997A2 (fr) | 2008-04-17 |
WO2008045997A3 WO2008045997A3 (fr) | 2008-09-18 |
Family
ID=39283620
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2007/081084 WO2008045997A2 (fr) | 2006-10-11 | 2007-10-11 | Feature extraction from stereo imagery
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2008045997A2 (fr) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1998003021A1 (fr) * | 1996-06-28 | 1998-01-22 | Sri International | Small vision module for real-time stereo and motion analysis |
US6628819B1 (en) * | 1998-10-09 | 2003-09-30 | Ricoh Company, Ltd. | Estimation of 3-dimensional shape from image sequence |
US6426748B1 (en) * | 1999-01-29 | 2002-07-30 | Hypercosm, Inc. | Method and apparatus for data compression for three-dimensional graphics |
US6980690B1 (en) * | 2000-01-20 | 2005-12-27 | Canon Kabushiki Kaisha | Image processing apparatus |
US20020012472A1 (en) * | 2000-03-31 | 2002-01-31 | Waterfall Andrew E. | Method for visualization of time sequences of 3D optical fluorescence microscopy images |
JP4400808B2 (ja) * | 2000-09-11 | 2010-01-20 | Sony Corporation | Image processing apparatus and method, and recording medium |
-
2007
- 2007-10-11 WO PCT/US2007/081084 patent/WO2008045997A2/fr active Application Filing
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107705363A (zh) * | 2017-10-20 | 2018-02-16 | 北京世纪高通科技有限公司 | Road three-dimensional visualization modeling method and device |
CN107705363B (zh) * | 2017-10-20 | 2021-02-23 | 北京世纪高通科技有限公司 | Road three-dimensional visualization modeling method and device |
CN108491850A (zh) * | 2018-03-27 | 2018-09-04 | 北京正齐口腔医疗技术有限公司 | Method and device for automatically extracting feature points of a three-dimensional tooth mesh model |
CN108491850B (zh) * | 2018-03-27 | 2020-04-10 | 北京正齐口腔医疗技术有限公司 | Method and device for automatically extracting feature points of a three-dimensional tooth mesh model |
Also Published As
Publication number | Publication date |
---|---|
WO2008045997A3 (fr) | 2008-09-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 07844164 Country of ref document: EP Kind code of ref document: A2 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 07844164 Country of ref document: EP Kind code of ref document: A2 |