WO2006011255A1 - Panorama image synthesis method, object detection method, panorama image synthesis device, imaging system, object detection device, and panorama image synthesis program - Google Patents
Panorama image synthesis method, object detection method, panorama image synthesis device, imaging system, object detection device, and panorama image synthesis program Download PDF Info
- Publication number
- WO2006011255A1 (PCT/JP2005/001829, JP2005001829W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- images
- overlapping
- panoramic
- group
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims description 76
- 238000003384 imaging method Methods 0.000 title claims description 25
- 230000002194 synthesizing effect Effects 0.000 title claims description 21
- 238000001514 detection method Methods 0.000 claims description 66
- 238000011156 evaluation Methods 0.000 claims description 64
- 239000000203 mixture Substances 0.000 claims description 27
- 230000015572 biosynthetic process Effects 0.000 claims description 26
- 238000003786 synthesis reaction Methods 0.000 claims description 25
- 238000001308 synthesis method Methods 0.000 claims description 23
- 238000003702 image correction Methods 0.000 claims description 13
- 238000012937 correction Methods 0.000 claims description 11
- 230000003287 optical effect Effects 0.000 claims description 8
- 238000012545 processing Methods 0.000 abstract description 17
- 230000008569 process Effects 0.000 description 24
- 238000004364 calculation method Methods 0.000 description 17
- 238000010586 diagram Methods 0.000 description 14
- 230000008859 change Effects 0.000 description 5
- 230000006870 function Effects 0.000 description 5
- 238000009825 accumulation Methods 0.000 description 4
- 238000006243 chemical reaction Methods 0.000 description 4
- 230000033001 locomotion Effects 0.000 description 4
- 239000011159 matrix material Substances 0.000 description 4
- 230000003252 repetitive effect Effects 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 3
- 238000010191 image analysis Methods 0.000 description 3
- TZCXTZWJZNENPQ-UHFFFAOYSA-L barium sulfate Chemical compound [Ba+2].[O-]S([O-])(=O)=O TZCXTZWJZNENPQ-UHFFFAOYSA-L 0.000 description 2
- 238000004891 communication Methods 0.000 description 2
- 239000002131 composite material Substances 0.000 description 2
- 238000005259 measurement Methods 0.000 description 2
- 230000009467 reduction Effects 0.000 description 2
- 229920000995 Spectralon Polymers 0.000 description 1
- 230000002457 bidirectional effect Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 210000004556 brain Anatomy 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 238000009795 derivation Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 230000010365 information processing Effects 0.000 description 1
- 230000008520 organization Effects 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 238000005375 photometry Methods 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 238000013139 quantization Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000012827 research and development Methods 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 238000012552 review Methods 0.000 description 1
- 238000000926 separation method Methods 0.000 description 1
- 238000011895 specific detection Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2624—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
Definitions
- Panorama image synthesis method, object detection method, panorama image synthesis device, imaging device, object detection device, and panorama image synthesis program
- the present invention relates to a technique for synthesizing a so-called panorama image and a technique for detecting an object from an image using a panoramic image.
- synthesizing a large panoramic image from a plurality of captured images having different angles of view, and performing image processing such as background subtraction using the synthesized panoramic image, is an effective approach.
- a motion vector is obtained by image analysis from the position and orientation of the camera and the input image, and camera parameters are calculated based on the motion vector.
- a method is disclosed in which panoramic image synthesis is performed by combining images according to the calculated camera parameters. By this method, accurate alignment between the images is possible, and panorama images with little positional distortion can be synthesized.
- Patent Document 1 Japanese Patent Laid-Open No. 2002-150264
- the generated panoramic image may be one in which the light source state differs for each part, that is, one in which the light source environment is not the same over the image. If background subtraction is performed using a panoramic image whose light source environment is not uniform in this way, background regions other than the object are also detected as objects, and the accuracy of object detection is greatly reduced.
- the object of the present invention is to make it possible, even when the light source environment changes, to synthesize a panoramic image that has the same light source environment over the entire image and that can be used for object detection.
- the present invention provides, as a panorama image synthesis method, the following: an overlapping image group consisting of N images (N being an integer of 3 or more) that share a first overlapping region and each have a second overlapping region other than the first overlapping region is obtained, together with an extended image that does not have the second overlapping region; the linear coefficients of the extended image are calculated using the pixel values of each image of the overlapping image group in the first overlapping region; the size of the extended image is expanded into the second overlapping region using the calculated linear coefficients and the pixel values of each of the N images in the second overlapping region; and the panorama image is synthesized based on the extended image whose size has been expanded.
- the extended image can be expanded to the second overlapping region by using the linear coefficient obtained from each image of the first overlapping region. That is, an image in the same light source environment as the arbitrarily selected extended image can be extended to “extended image + second overlapping region”. Therefore, it is possible to easily generate a panoramic image having the same light source environment over the entire image.
- the "panorama image with the same light source environment" refers to a panorama image in which the estimated position, direction, intensity, and number of light sources are substantially the same in each part of the image. Theoretically, the light source environment can be made completely the same, but in reality it may not be, owing to errors in the calculation of the linear coefficients. The present invention includes such cases.
- the present invention also provides, as an object detection method using the panorama image synthesis method of the present invention, a method for detecting an object from an image.
- a plurality of panoramic images including the imaging range of the detection target image are generated by the synthesis method, and a predicted background image having the same light source environment as the detection target image is generated based on the detection target image and the plurality of panoramic images. By comparing this with the detection target image, object detection is performed.
- a predicted background image having the same light source environment as the detection target image is generated by using a plurality of panorama images, each of which has the same light source environment over its entire image.
- object detection with high accuracy can therefore be performed even for images whose light source environment differs from that of the panorama images.
- FIG. 1 is a configuration diagram of an object detection device according to a first embodiment of the present invention.
- FIG. 2 is a diagram showing a coordinate system that defines the position and orientation of the camera.
- FIG. 3 is a diagram showing a coordinate conversion method to a panoramic projection plane coordinate system.
- FIG. 4 is a flowchart showing the operation of the panorama image synthesis unit of FIG. 1, that is, the panorama image synthesis method according to the first embodiment of the present invention.
- FIG. 5 is a schematic diagram showing an example of a selected overlapping image group.
- FIG. 6 is a diagram for explaining a method for calculating a linearization coefficient.
- FIG. 7 is a diagram showing object detection using a panoramic image.
- FIG. 8 is a flowchart showing a panoramic image synthesis method according to the second embodiment of the present invention.
- FIG. 9 is a diagram showing an example of a panorama image synthesized using the panoramic image synthesis method according to the first embodiment of the present invention.
- FIG. 10 is a diagram showing an example of an evaluation image group.
- FIG. 11 is a diagram schematically showing FIG.
- FIG. 12 is a flowchart showing a panoramic image synthesis method according to the third embodiment of the present invention.
- FIG. 13 is a schematic diagram of an example of an evaluation image group.
- FIG. 14 is a flowchart showing evaluation image group determination processing.
- FIG. 15 is a diagram for explaining the process of panoramic image synthesis.
- FIG. 16 shows an example of a panoramic image.
- FIG. 17 shows a comparison result between the first embodiment and the third embodiment.
- FIG. 18 is a schematic diagram of an example of an evaluation image group.
- FIG. 19 is a flowchart showing evaluation image group determination processing.
- according to a first aspect of the present invention, there is provided a panorama image synthesis method comprising: a step of obtaining an overlapping image group consisting of N images (N being an integer of 3 or more) that share a first overlapping region and each have a second overlapping region other than the first overlapping region, together with an extended image that does not have the second overlapping region; a step of calculating the linear coefficients of the extended image using the pixel values of each image of the overlapping image group in the first overlapping region; and a step of expanding the size of the extended image into the second overlapping region using the linear coefficients and the pixel values of each of the N images in the second overlapping region; a panoramic image being synthesized based on the extended image whose size has been expanded.
- according to a second aspect, there is provided the panorama image synthesis method of the first aspect in which the N images acquired are images having different light source environments.
- according to a third aspect, there is provided the panorama image synthesis method of the first aspect, further comprising a step of determining the reliability of the calculated linear coefficients of the extended image, the acquisition of an overlapping image group being performed again when the reliability is determined to be low.
- according to a fourth aspect, there is provided the panorama image synthesis method of the first aspect in which the obtaining step selects the overlapping image group from images stored in an image storage unit; when a new image is input to the image storage unit, the overlapping image group is reselected, the suitability for image synthesis is compared between the new and the old overlapping image groups, and whether or not to update the panoramic image is determined based on the comparison result.
- according to a fifth aspect, there is provided the panorama image synthesis method of the fourth aspect in which the suitability for image synthesis is determined using the number of diffuse reflection pixels.
- according to a sixth aspect, there is provided the panorama image synthesis method of the fourth aspect in which the suitability for image synthesis is determined using the independence between the images.
- according to a seventh aspect, there is provided an object detection method comprising: a step of generating a plurality of panoramic images including the shooting range of the detection target image by the panoramic image synthesis method of the first aspect; a step of generating, based on the detection target image and the plurality of panoramic images, a predicted background image having the same light source environment as the detection target image; and a step of performing object detection by comparing the detection target image with the predicted background image.
- according to an eighth aspect, there is provided the object detection method of the seventh aspect in which the number of the plurality of panoramic images is at least three.
- according to a ninth aspect, there is provided the panoramic image synthesis method of the first aspect in which the reliability of the calculated linear coefficients is determined for each overlapping image group, and the overlapping image group with the highest reliability is selected.
- according to a tenth aspect, there is provided the panoramic image synthesis method of the first aspect, further comprising a step of determining whether or not an evaluation image group exists for the extended image, the linear coefficients of the extended image being calculated using the evaluation image group when it exists.
- according to an eleventh aspect, there is provided the panoramic image synthesis method of the tenth aspect in which, when there are a plurality of evaluation image groups, the one having the highest independence between the images in the second overlapping region included in the evaluation image group is selected.
- according to a twelfth aspect, there is provided the panorama image synthesis method of the tenth aspect in which, when there are a plurality of evaluation image groups, the one for which the number of pixels in the image region occupied by the evaluation image group is the largest is selected.
- according to a thirteenth aspect, there is provided a panorama image synthesis device comprising: means for obtaining an overlapping image group consisting of N images (N being an integer of 3 or more) that share a first overlapping region and each have a second overlapping region other than the first overlapping region, together with an extended image that does not have the second overlapping region; means for calculating the linear coefficients of the extended image using the pixel values of each image of the overlapping image group in the first overlapping region; and means for expanding the size of the extended image into the second overlapping region using the linear coefficients and the pixel values of each of the N images in the second overlapping region; the device synthesizing a panorama image based on the extended image whose size has been expanded.
- according to a fourteenth aspect, there is provided an imaging device comprising: an imaging unit; a captured image correction unit that performs geometric correction and optical correction on images captured by the imaging unit; an image storage unit that stores the images corrected by the captured image correction unit; and the panorama image synthesis device, which obtains the overlapping image group and the extended image from the images stored in the image storage unit.
- according to a fifteenth aspect, there is provided an object detection device comprising: the imaging device of the fourteenth aspect; an object detection unit that performs object detection using a panoramic image synthesized by the panorama image synthesis device included in the imaging device; and a detected object display unit that displays the region of the object detected by the object detection unit.
- according to a further aspect, there is provided a panorama image synthesis program that causes a computer to execute: obtaining an overlapping image group consisting of N images (N being an integer of 3 or more) that share a first overlapping region and each have a second overlapping region other than the first overlapping region, together with an extended image that does not have the second overlapping region; calculating the linear coefficients of the extended image; and expanding the size of the extended image into the second overlapping region.
- further, there is provided a panorama image synthesis device to which a plurality of partial images, whose image regions partially overlap and whose light source environments differ, are input, and which generates and outputs, based on the input partial images, a panorama image having the same light source environment over an area wider than each of the partial images.
- FIG. 1 is a configuration diagram of an object detection apparatus using the panoramic image synthesis method according to the first embodiment of the present invention.
- in FIG. 1, 101 is an imaging unit that captures still images or moving images, 102 is a captured image correction unit that corrects images captured by the imaging unit 101, 103 is an image storage unit that stores the images corrected by the captured image correction unit 102, 104 is a panorama image synthesis unit (a panorama image synthesis device) that synthesizes panorama images from the images stored in the image storage unit 103, 105 is an object detection unit that performs object detection using the images captured by the imaging unit 101 and the panorama images synthesized by the panorama image synthesis unit 104, and 106 is a detected object display unit that displays the region of the object detected by the object detection unit 105.
- the imaging unit 101, the captured image correction unit 102, the image storage unit 103, and the panoramic image composition unit 104 constitute an imaging apparatus of the present invention.
- the object detection described here refers to synthesizing at least three panoramic images with different light source environments and, using these as the background, detecting what appears other than the background in a captured image.
- as environmental conditions, it is assumed that the light source is a parallel light source and that the object surface is a completely diffuse reflecting surface.
- the imaging unit 101 captures images with different imaging directions. Specifically, for example, a video camera or digital still camera capable of pan/tilt (PT) operation, or a video camera or digital still camera fixed to a pan/tilt head, is used for shooting.
- the image capturing unit 101 sends camera position / posture information together with the captured image to the captured image correction unit 102.
- the camera position/posture information refers to the five parameters (Xw, Yw, Zw, θ, φ) shown in Fig. 2.
- Xw, Yw, Zw are the camera position coordinates (point O) in the world coordinate system, and θ and φ are the angles of counterclockwise rotation about the positive directions of the Y and X axes in the camera coordinate system (the pan angle and tilt angle, respectively).
- the position / posture of the camera may be detected using, for example, a position detection device such as GPS or a posture detection device such as a gyroscope.
- alternatively, the position/posture may be calculated from a pre-planned shooting plan.
- a pre-planned shooting plan is, for example, one in which the camera is rotated at a given angular velocity in the positive θ direction from the starting point (X, Y, Z, θ, φ).
- the captured image correction unit 102 performs geometric correction, optical correction, and coordinate conversion from the captured image coordinate system to the panoramic projection plane coordinate system on the image captured by the imaging unit 101.
- in the geometric correction of the captured image, lens distortion is corrected.
- for example, the lens distortion correction method by Weng et al. described in "Computer Vision: Technology Review and Future Prospects, pp. 42-44, New Technology Communications, 1998" may be used.
- in the optical correction of the captured image, peripheral light reduction is corrected; for example, the correction method described in the "Natural Vision R&D Project (Next Generation Video Display / Transmission System) Research and Development Report, pp." may be used, assuming that the luminance has no temporal variation.
- f is a function that converts the captured image coordinate system (i, j) to the panorama projection plane coordinate system (u, v) projected onto a cylindrical surface; f is determined from the camera position and orientation information sent from the imaging unit 101 and is expressed by the following equation.
- the conversion from the captured image coordinate system (i, j) to the panorama projection plane coordinate system (u, v) may instead be performed using a spherical coordinate system; if the pan/tilt angles are small, projection using a plane coordinate system may also be used.
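As a concrete illustration of the mapping f described above, the following is a minimal sketch of a captured-image-to-cylindrical-coordinate conversion under a simple pinhole model. The function name, the focal-length parameter `f` (in pixels), and the use of the pan angle alone (tilt omitted) are illustrative assumptions, not the patent's actual equation.

```python
import math

def to_cylinder(i, j, cx, cy, f, pan_deg):
    """Map a captured-image pixel (i, j) to cylindrical panorama
    coordinates (u, v). cx, cy: principal point; f: focal length in
    pixels (also the cylinder radius); pan_deg: camera pan angle.
    Illustrative sketch only."""
    x, y = i - cx, j - cy
    # azimuth on the cylinder = ray angle within the image + camera pan
    theta = math.atan2(x, f) + math.radians(pan_deg)
    u = f * theta                    # arc length along the cylinder
    v = f * y / math.hypot(x, f)     # height, with perspective foreshortening
    return u, v
```

The principal ray (i = cx, j = cy) with zero pan maps to the panorama origin, and panning the camera simply translates u along the cylinder, which is what makes overlapping shots line up.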
- each correction parameter used in the above-described captured image correction unit 102 is obtained in advance before performing object detection with this apparatus.
- the image storage unit 103 stores the image corrected by the captured image correction unit 102 and camera position / posture information in a storage medium.
- the storage medium is realized by, for example, a memory, a hard disk, or a film.
- the panorama image synthesis unit 104 generates a panorama image from a plurality of images stored in the image storage unit 103.
- the operation of the panorama image synthesis unit 104 will be described with reference to the flowchart of FIG. 4.
- in step S11, from among the images stored in the image storage unit 103, an overlapping image group consisting of N images (N being an integer of 3 or more) that have a first overlapping region (A region) in common and each have a second overlapping region (B region) other than the A region is selected, together with an image without the B region as the extended image (S11).
- in the following description, N is 3.
- FIG. 5 is a schematic diagram showing an example of the selected overlapping image group. In FIG. 5, I1, I2, and I3 are images selected from the image storage unit 103, and these images, together with the extended image I0, share the A region.
- the overlapping area of each image may be detected using, for example, the coordinate values of each image in the panorama projection plane coordinate system (u, v). Further, when there are a plurality of overlapping image groups for a certain pixel position, for example, the one having the highest independence between the three images having the B region may be selected. This specific method will be described later.
- in step S12, the overlapping image group selected in step S11 is used to calculate the linear coefficients used for the expansion of the extended image.
- the linear coefficients are calculated from the pixel values of each image of the overlapping image group in the A region, using Shashua's method ("Shashua A., "Geometry and Photometry in 3D Visual Recognition," PhD thesis, Dept. Brain and Cognitive Sciences, MIT, 1992").
- Shashua showed that, by assuming a parallel light source and a completely diffuse reflecting subject surface, an image under an arbitrary light source direction can be represented by a linear combination of three images with different light source directions.
- let I1, I2, I3 be the vector representations of three images with different light source directions. c = [c1 c2 c3]^T is calculated as the vector of linear coefficients for the image I.
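The equations referenced in the surrounding text did not survive this extraction; in the standard formulation of Shashua's result, using the symbols just defined and with P1, P2, P3 the three selected pixels, the linear combination and the coefficient computation read as follows (a reconstruction from the surrounding definitions, not the patent's verbatim equations):

```latex
I \approx c_1 I_1 + c_2 I_2 + c_3 I_3,
\qquad
D =
\begin{bmatrix}
I_1(P_1) & I_2(P_1) & I_3(P_1) \\
I_1(P_2) & I_2(P_2) & I_3(P_2) \\
I_1(P_3) & I_2(P_3) & I_3(P_3)
\end{bmatrix},
\qquad
\mathbf{c} = D^{-1}
\begin{bmatrix} I(P_1) \\ I(P_2) \\ I(P_3) \end{bmatrix}
```

A pixel satisfying this linear relation behaves as a diffuse reflection pixel, which is why the invertibility and conditioning of D govern how trustworthy c is.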
- a diffuse reflection pixel is a pixel that behaves as in (Equation 2).
- step S13 the reliability of the linear coefficient calculated in step S12 is determined.
- the reliability determination of the linear coefficients is a determination of the independence between the three images having the B region in the overlapping image group.
- for this determination, the condition number of the matrix D calculated in step S12 is used. That is, the condition number of D is obtained, and when it is smaller than a predetermined threshold, it is determined that the three images having the selected B region are independent and that the calculated linear coefficients are reliable.
- when it is larger than the predetermined threshold, it is determined that the calculated linear coefficients are not reliable, and the process returns to step S11 to reselect an overlapping image group that has the same A region but differs from the previous overlapping image group.
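Steps S12 and S13 together can be sketched as follows, assuming the images are NumPy arrays over the shared A region and the three points P1-P3 are given; the function name and point-selection interface are illustrative, not the patent's specification.

```python
import numpy as np

def linear_coefficients(I1, I2, I3, I, pts):
    """Solve D c = i for the linear coefficients of extended image I.
    I1, I2, I3: base images (2-D arrays) over the A region.
    I: the extended image over the same region.
    pts: three (row, col) pixel positions P1, P2, P3."""
    D = np.array([[I1[p], I2[p], I3[p]] for p in pts], dtype=float)
    i = np.array([I[p] for p in pts], dtype=float)
    c = np.linalg.solve(D, i)      # c = D^-1 i  (step S12)
    cond = np.linalg.cond(D)       # condition number used in step S13
    return c, cond
```

A caller would compare `cond` against the predetermined threshold and reselect the overlapping image group (or the three points) when it is too large.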
- low reliability arises, for example, when two or more of the selected three pixels have the same normal direction in some image, or when at least two of the images have the same light source state.
- if a panoramic image were synthesized using such low-reliability linear coefficients, a panoramic image with a large luminance error would be generated.
- conversely, a small condition number means that the three images having the B region satisfy the condition that their light source directions differ and that three or more pixels with different normal directions exist on the object surface.
- when a panoramic image is synthesized using such highly reliable linear coefficients, a panoramic image with a small luminance error can be generated.
- the above-described luminance error indicates a luminance error between an ideal panoramic image captured in the same light source environment as the extended image and the generated panoramic image.
- when a plurality of overlapping image group candidates exist, the one for which the calculated linear coefficients have the highest reliability may be selected. Specifically, for example, the candidate with the minimum condition number may be selected as the overlapping image group. This is because the smaller the condition number, the higher the reliability of the calculated linear coefficients, that is, the higher the independence of the three images with the B region included in the selected overlapping image group.
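The candidate selection just described can be sketched as follows; representing each candidate overlapping image group as a (label, D) pair is an illustrative assumption.

```python
import numpy as np

def best_group(candidates):
    """Pick the candidate overlapping image group whose matrix D has the
    smallest condition number, i.e. the most reliable linear coefficients.
    candidates: list of (group_id, D) pairs, D a 3x3 pixel-value matrix."""
    return min(candidates, key=lambda g: np.linalg.cond(g[1]))[0]
```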
- the selection of the overlapping image group and of the three points in the selected images is based on the existence of D^-1 in (Equation 3) and on the condition number of D, which indicates reliability; optimal linear coefficients are obtained by this selection. Other methods may therefore also be used, as long as they rely on the same criteria.
- in step S12, D^-1 and the condition number of D are calculated simultaneously.
- the reliability indicated by the calculated condition number of D is then determined; if it is not reliable, the three points P1(x1, y1), P2(x2, y2), P3(x3, y3) may be reselected.
- in step S14, the size of the extended image is expanded using the linear coefficients and the pixel values of the three images having the B region.
- the three images I1, I2, I3 cover the B region, an area outside the image region of the extended image I0.
- by steps S11-S14, the extended image included in the overlapping image group can thus be expanded to "original image region + B region". Therefore, by repeatedly performing such processing, a panorama image having the same light source environment over the entire image can be generated easily.
- in step S15, it is determined whether or not the image has been expanded to the entire region of the desired panoramic image. If it is determined that the entire area has not yet been expanded (No in S15), the image expanded in step S14 is stored in the storage medium in the image storage unit 103 (S16), and the process returns to step S11. On the other hand, when it is determined that the entire area has been expanded (Yes in S15), the panoramic image is stored in the storage medium in the image storage unit 103 (S17). As the determination method in step S15, for example, it may be determined that the entire region has been expanded when the number of pixels of the expanded image reaches the number of pixels of the entire region.
- the storage medium for storing the panorama image may be, for example, a memory, a hard disk, or a film. In this way, by repeating the processing of steps S11 to S16, one panoramic image having the same light source environment, expanded to the entire area, is stored in the image storage unit 103. Steps S11 to S16 realize the panoramic image synthesis method according to the present embodiment.
- an end determination is performed (S18).
- in S18 of this embodiment, as will be described later, in order to perform object detection it is necessary to generate three panorama images with different light source directions. For this reason, when there are three or more panoramic images having independence between images in the image storage unit 103 (Yes in S18), the processing is terminated; if the number of panorama images having independence between images is less than three (No in S18), the process returns to step S11 and a new panoramic image is synthesized.
- for this determination, the value of a determinant is used. Specifically, three arbitrary panorama images (I1^P, I2^P, I3^P) are selected, and when the corresponding determinant |D| has a nonzero value, the selected three panorama images can be determined to be independent of each other; they are then determined to be panorama images with different light source directions, and the process ends.
- when the determinant is 0, a combination of three other panorama images stored in the image storage unit 103 is selected and the same processing is performed. If, whichever three panorama images stored in the image storage unit 103 are selected, the value of the determinant becomes 0, the process returns to step S11. Of course, if there are two or fewer images, the process also returns to step S11.
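The independence determination used here can be sketched as follows; sampling the three panorama images at three pixel positions to build the matrix whose determinant is tested is an illustrative stand-in for the patent's matrix D.

```python
import numpy as np

def independent(Ip1, Ip2, Ip3, pts, eps=1e-6):
    """Treat three panorama images as independent (different light source
    directions) when the determinant of their pixel-value matrix at three
    points is nonzero (sketch of the S18 test)."""
    D = np.array([[Ip1[p], Ip2[p], Ip3[p]] for p in pts], dtype=float)
    return abs(np.linalg.det(D)) > eps
```

A small `eps` tolerance stands in for "the determinant is 0", since floating-point determinants of dependent images are only approximately zero.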
- as described above, the panorama image synthesis unit 104 synthesizes three panorama images having independence between images.
- FIG. 9 is an example of a panoramic image synthesized by the panoramic image synthesizing method according to the present embodiment.
- FIG. 9(a) shows the extended image I0 whose image size is to be expanded and three images I1, I2, I3 with different light source directions.
- Fig. 9(b) shows the panorama image synthesized from the four images of Fig. 9(a) by the method of this embodiment described above.
- FIG. 9(c) shows, for comparison, a panorama image synthesized using the histogram matching method (for details, see "Image Analysis Handbook", The University of Tokyo Press, pp. 463, 1991).
- the panorama image of Fig. 9(b) shows no seams and has less luminance error compared with the image of Fig. 9(c). That is, according to the panoramic image synthesis method of the present embodiment, a panoramic image with a small luminance error is synthesized even when images with different light source directions are given.
- the panoramic image synthesized by the method according to the present embodiment is a panoramic image composed only of diffuse reflection pixels. Such an image is called a panoramic base image.
- the object detection unit 105 performs object detection on the image received from the captured image correction unit 102, using the three panorama images input from the panorama image synthesis unit 104.
- the object detection here may use, for example, a background difference.
- FIG. 7 shows a case where the table 22 is photographed in a room where the refrigerator 21 and the table 22 are placed, and an object (empty can 23) placed on the table 22 is detected from the image.
- I1^P, I2^P, I3^P are panorama images input from the panorama image synthesis unit 104, in which the floor, the refrigerator 21, and the table 22 are captured as the scene.
- these panorama images are synthesized in advance, using images taken before the object detection image is captured.
- the images corresponding to the image region of image I, cut out from the panoramic images I1^P, I2^P, I3^P, are denoted I1, I2, I3.
- image I contains an object other than the background (the empty can 23 in the example of Fig. 7).
- if the three points were chosen at random, a point might be selected on something other than the background (for example, points P1, P2, and P3 might be selected in the area where the empty can 23 appears). In this case, the linear coefficients cannot be calculated correctly.
- RANSAC ⁇ MA.FischlerandR.C.Bolles, '' Random sample consensus: a paradigm for model fitting with application to image analysis and automated cartography, Commun, Assoc, Comp, Mach, 24, pp.381-395, 1981
- the three points PI, P2, and P3 can be selected on the background components of all four images, and linear coefficients that are not affected by anything other than the background can be calculated.
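The coefficient selection described above can be sketched as follows. This is a minimal illustration, not the patented implementation: images are flattened NumPy arrays, the function name, iteration count, inlier tolerance and condition-number cutoff are all assumptions, and each candidate triple of points is scored by counting inliers, in the spirit of RANSAC.

```python
import numpy as np

def estimate_linear_coefficients(target, i1, i2, i3, iters=200, tol=5.0, rng=None):
    """RANSAC-style estimate of c such that target ~ c1*i1 + c2*i2 + c3*i3.

    All images are flattened float arrays of equal length; the estimate is
    reliable when diffuse-reflection (background) pixels dominate.
    """
    rng = np.random.default_rng(rng)
    n = target.size
    best_c, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)         # candidate points P1, P2, P3
        D = np.stack([i1[idx], i2[idx], i3[idx]], axis=1)  # 3x3 system at those points
        if np.linalg.cond(D) > 1e6:                        # nearly dependent triple: skip
            continue
        c = np.linalg.solve(D, target[idx])
        pred = c[0] * i1 + c[1] * i2 + c[2] * i3
        inliers = np.count_nonzero(np.abs(pred - target) < tol)
        if inliers > best_inliers:                         # keep the best-supported triple
            best_c, best_inliers = c, inliers
    return best_c
```

Triples landing on a foreground object produce few inliers and are discarded, which is how the selection ends up on background components.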
- When the difference in pixel value between the predicted background image IE and the original image I is larger than a predetermined threshold, that area is detected as the object area.
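The background-difference step can be sketched as follows; a minimal illustration with NumPy in which the function name and the threshold value are assumptions:

```python
import numpy as np

def detect_object_region(predicted_background, observed, threshold=30):
    """Background difference: pixels whose absolute difference from the
    predicted background image IE exceeds the threshold form the object mask."""
    # cast to a signed type so uint8 subtraction cannot wrap around
    diff = np.abs(observed.astype(np.int32) - predicted_background.astype(np.int32))
    return diff > threshold
```

Applied to FIG. 7, the returned mask would cover the region of the empty can 23, which is absent from the predicted background.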
- the detected object display unit 106 displays the area of the object detected by the object detection unit 105 on a display device such as a monitor.
- In the above description, (Formula 3) is used to calculate the linear coefficients. However, (Formula 3) assumes that the selected three points P1, P2 and P3 are all diffuse reflections. In fact, correct linear coefficients can be calculated by using four images for panoramic image synthesis and object detection.
- Since the light source in an actual environment includes ambient light such as interreflection, the linear coefficients can be calculated with higher accuracy by using five or more images, and image expansion and object detection can then be performed using linearization. For example, even when ambient light is included, it has already been reported that a linearized image with little error can be created using ninety-nine images (for details, see "Separation of reflection components based on frequency characteristics of bidirectional reflectance distribution functions," IPSJ SIG Technical Report, CVIM 2002-134-1, pp. 1-8, 2002).
- an image captured in a real environment includes many pixels other than diffuse reflection pixels. These pixels become noise in the image compositing process and cause a luminance error of the generated panoramic image to increase.
- Accordingly, the luminance error of the panoramic image increases. For this reason, in a system in which images are input sequentially, such as a surveillance system, it is preferable to update the panoramic image whenever using a new input image can yield a panoramic image with a smaller luminance error than the current one. As a result, the influence of noise caused by pixels other than diffuse reflection pixels can be minimized.
- FIG. 8 is a flowchart showing the operation of the panoramic image synthesis unit 104 in this embodiment.
- steps common to FIG. 4 are denoted by the same reference numerals as in FIG. 4, and detailed description thereof is omitted here.
- The difference from the first embodiment is that a step S21 for determining whether or not to update the panoramic image is provided.
- In step S21, when a new image is input to the image storage unit 103, the overlapping image groups are reselected, and the suitability for image synthesis of the new overlapping image group and the old overlapping image group having the same overlapping area is compared. If the new overlapping image group is judged more suitable for image synthesis, it is determined that the panoramic image is to be updated (Yes in S21), and the process proceeds to step S11 and subsequent steps. Otherwise (No in S21), the panoramic image is not updated.
- A new overlapping image group is an overlapping image group formed, when a new image is input to the image storage unit 103, by combining the newly input image with the images stored so far.
- the combination method can be realized using the conditions and methods already described in step S11.
- Then, the processes of steps S12 to S14 are performed for each of the new and old overlapping image groups to create composite images.
- The numbers of diffuse reflection pixels in the composite images thus created are compared, and the group yielding the larger number may be considered the more suitable.
- For example, a pixel whose value is equal to or smaller than a certain threshold is regarded as a shadow pixel, and a pixel whose value is equal to or larger than another threshold is regarded as a specular reflection pixel; the number of pixels remaining after both are excluded may be used as the number of diffuse reflection pixels.
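The diffuse-pixel count described above can be sketched as follows; the function name and the two threshold values are assumptions for illustration:

```python
import numpy as np

def count_diffuse_pixels(image, shadow_th=15, specular_th=240):
    """Count pixels treated as diffuse reflections: values at or below
    shadow_th are shadow pixels, values at or above specular_th are
    specular reflection pixels, and both kinds are excluded."""
    img = np.asarray(image)
    diffuse = (img > shadow_th) & (img < specular_th)
    return int(np.count_nonzero(diffuse))
```

Comparing this count between the new and old composite images gives the suitability judgment of step S21.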
- the suitability of image composition may be judged using the total number of diffuse reflection pixels in each overlapping image group.
- When there are few diffuse reflection pixels, correct linear coefficients cannot be calculated. Therefore, the larger the number of diffuse reflection pixels in the A region of the overlapping images used, the more likely it is that correct linear coefficients are calculated. Accordingly, when the new overlapping image group has more diffuse reflection pixels, the panoramic image may be updated.
- Alternatively, the suitability for image synthesis may be judged using the independence between images. That is, when the new overlapping image group has higher independence than the old overlapping image group, the panoramic image may be updated.
- the determination of independence between images can be realized by, for example, the method described in step S13 (calculating the condition number, and determining that the smaller the condition number, the higher the independence).
- Specifically, the condition number of the new overlapping image group is calculated using the luminance values at the same three coordinate positions used when calculating the linear coefficients in the old overlapping image group, and when the calculated condition number is smaller than that of the old overlapping image group, the panoramic image may be updated.
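The update decision based on condition numbers can be sketched as follows. This is a minimal illustration: each group is assumed to hold three images as NumPy arrays, `points` holds the three flattened coordinate positions, and the function name is hypothetical.

```python
import numpy as np

def should_update_panorama(old_group, new_group, points):
    """Compare the independence of the old and new overlapping image groups
    at the same three coordinate positions: update (return True) when the
    new group's condition number is smaller, i.e. independence is higher."""
    def cond_at(group):
        # columns = luminance values of each image at the three positions
        D = np.stack([np.ravel(img)[points] for img in group], axis=1)
        return np.linalg.cond(D)
    return cond_at(new_group) < cond_at(old_group)
```

A group containing two nearly proportional images (e.g. the same light source direction twice) yields a huge condition number, so a newly captured, more varied group replaces it.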
- the panoramic image synthesis unit 104 calculates a linear coefficient by a processing method different from that of the first embodiment.
- An "evaluation image group" is defined as an image group, connected starting from an extended image, in which at least one image having the same light source environment as the extended image exists. More specifically, for a certain extended image I, the N images constituting its overlapping image group are first identified; then, taking the B region of those N images as a new extended image, the N images constituting the next overlapping image group are identified, and so on, identifying overlapping image groups sequentially. When the last identified N images overlap the original extended image I under the same light source environment, the multiple sets of N images identified so far, together with the original extended image I, are referred to as an "evaluation image group".
- By using this evaluation image group, the luminance error can be evaluated using a plurality of A regions together, and the image can be extended while the luminance error is evaluated sequentially in each A region. Compared with the first embodiment, a panoramic image with less luminance error can therefore be synthesized.
- the original extended image itself is used as an image having the same light source environment as the original extended image.
- An example of using an image different from the original extended image will be described later.
- FIG. 10 shows an example of a specific evaluation image group.
- FIG. 10 shows a specific example of an evaluation image group. FIG. 11 schematically represents the connected image groups of FIG. 10 as a single image: the extended image I is connected through two overlapping image groups, each having an A region and a B region, back to an image with the same light source environment.
- FIG. 12 is a flowchart showing the operation of the panoramic image synthesis unit 104 in the present embodiment.
- steps common to FIG. 4 are denoted by the same reference numerals as in FIG. 4, and detailed description thereof is omitted here.
- The differences from the first embodiment are step S31, which determines whether or not an evaluation image group exists, and step S32, which calculates linear coefficients with a small error when the evaluation image group exists.
- step S31 the presence / absence of the evaluation image group is determined from the images stored in the image storage unit 103. If there is an evaluation image group (Yes in S31), a linear coefficient with less error is calculated using an image belonging to this evaluation image group (S32). On the other hand, if not (No in S31), the processing after step S11 is performed as in the first embodiment.
- FIG. 13 shows an example of an evaluation image group for the image I consisting of n image groups, where kG denotes the kth image group.
- FIG. 14 is a flowchart showing a specific processing flow of the evaluation image group determination step S31. Hereinafter, the processing flow of S31 will be described with reference to FIG.
- step S41 one arbitrary image I is selected from the images stored in the image storage unit 103.
- step S42 it is determined whether or not all images stored in the image storage unit 103 in step S41 have been selected.
- For this determination, the history of the images I selected in step S41 is stored in a memory and compared against the images stored in the image storage unit 103.
- In step S43, it is determined whether or not at least two overlapping image groups containing the selected image exist. This determination can be realized using the overlapping region detection method described in the first embodiment. If two or more image groups do not exist (No in S43), the process returns to step S41 and image I is reselected. On the other hand, when at least two groups exist (Yes in S43), 0 is substituted for k in step S44.
- In step S45, one image group (Ak, Bk) is selected from the image groups determined in step S43. In step S46, the overlapping image group (Ak+1, Bk+1) whose A region is the Bk region of the selected group is detected in the same manner as in step S43.
- In step S48, if k is smaller than the threshold t (Yes in S48), k is incremented (step S49), and the process returns to step S45 and is repeated. On the other hand, if k is not smaller than the threshold t (No in S48), the process returns to step S41 and image I is reselected.
- The threshold t for the number of repetitions may be determined as follows, for example: when the total number of captured images stored in the image storage unit 103 is a, the quotient of a divided by the number N of images having the B region in an overlapping image group (N = 3 in the description of each embodiment) can be used as the threshold t.
- In addition, the condition number of the matrix D described in the first embodiment may be calculated in each B region, and the evaluation image group in which the condition number is equal to or less than the threshold in every B region and the sum of the condition numbers over the B regions is smallest may be selected as having the highest independence. Thereby, a panoramic image with less luminance error can be synthesized.
- Alternatively, among the evaluation image groups, the one whose images occupy the largest number of pixels in the image area may be selected. As a result, the number of calculations until the final panoramic image is synthesized is reduced, and the luminance error of the panoramic image can be reduced.
- As described above, by performing the processing of steps S41-S50 in step S31, the presence or absence of the evaluation image group can be determined.
- Here, Ck is the linear coefficient set relating the (k-1)th image group and the kth image group.
- In (Equation 8), by finding the coefficient set that gives the highest evaluation value e_val, linear coefficients can be calculated that minimize the accumulation of errors due to repeated calculations when synthesizing the panoramic image.
- The evaluation function E(Ak) is expressed using the images Ik and I'k in the kth A region, where i represents a pixel value in I.
- (Equation 9) is used to find three coefficient sets c for the A123 region; similarly, (Equation 10) is used to find three coefficient sets c for the A456 region. This is done using RANSAC, for example, based on the evaluation value e_val.
- FIG. 15 is a view for explaining the process of panoramic image synthesis according to the present embodiment.
- The extended image I is first extended using one overlapping region A and then extended further using the next overlapping region A. At each step, the top three sets of linear coefficients are retained, and the combination of linear coefficients with the highest final evaluation value is selected.
- In this way, the luminance error in the A region between the composite image and the image I used for composition (a luminance difference of 2 or less) is minimized.
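The selection over retained coefficient sets can be sketched as follows. This is a minimal illustration under assumptions not in the text: candidates are plain Python lists of the coefficient sets kept per A region, and `eval_error` is a hypothetical stand-in for the evaluation value e_val, expressed here as an error to be minimized rather than a value to be maximized.

```python
import itertools
import numpy as np

def best_coefficient_combination(candidates_per_region, eval_error):
    """candidates_per_region: one list of candidate coefficient sets per
    A region (e.g. the top three kept in each region).
    eval_error: callable mapping a combination (one set per region) to its
    accumulated luminance error; the combination minimising it is returned."""
    best, best_err = None, np.inf
    # exhaustively try every combination of one candidate per region
    for combo in itertools.product(*candidates_per_region):
        err = eval_error(combo)
        if err < best_err:
            best, best_err = combo, err
    return best, best_err
```

With three candidates per region and a handful of regions the product is small, so exhaustive search is practical.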
- FIG. 16 shows a panoramic image synthesized by the method described with reference to FIG. 15. FIG. 16(a) is the extended image I, and FIG. 16(b) is the panoramic image generated by this embodiment.
- FIG. 17 shows a comparison result between the first embodiment and this embodiment.
- FIG. 17 shows the mean square error of the luminance and the variance of the error when the correct image is used as a reference for the panoramic image synthesized in each embodiment. From FIG. 17, it can be seen that the panorama image generated by the present embodiment has less error and less error variance than the panorama image generated by the first embodiment. This shows the effectiveness of the panoramic image synthesis method according to the third embodiment.
- In the above description, a plurality of coefficient sets with high evaluation values e_val are held in each A region, and the combination of coefficient sets having the highest evaluation value among those combinations is selected as the optimal linear coefficients.
- However, instead of calculating and holding multiple linear coefficient sets in each A region, the point selection and linear coefficient calculation may be performed, the calculated linear coefficients evaluated using (Equation 8), and this process repeated multiple times, after which the combination of coefficients giving the highest evaluation value e_val is chosen.
- Furthermore, the linear coefficients may be obtained so that the square error between the pixel values calculated using the linear coefficients in each region of (Equation 4) and (Equation 6) and the pixel values of the original extended image is minimized.
- As the initial values of the linear coefficients, the highest-evaluation linear coefficients obtained in each A region are used, and the gradient method is used to minimize the error function E shown in (Equation 12).
- This square error function is shown in (Equation 13).
- Here, i_bk denotes the kth pixel position in image I, and m and n denote the numbers of pixels in the respective A regions.
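The gradient-method refinement can be sketched as follows. This is a minimal sketch, not the patented formulation: the squared-error objective stands in for (Equation 13), the fixed step size derived from the spectral norm is our choice (the text does not specify one), and the function name is hypothetical.

```python
import numpy as np

def refine_coefficients(c0, images, target, steps=2000):
    """Gradient descent on E(c) = || target - sum_k c_k * image_k ||^2,
    starting from the best per-region estimate c0 as the initial value."""
    c = np.asarray(c0, dtype=float).copy()
    X = np.stack([np.ravel(img) for img in images], axis=1)  # pixels x K
    y = np.ravel(target).astype(float)
    s = np.linalg.norm(X, 2)          # largest singular value of X
    lr = 0.9 / (2.0 * s * s)          # step size guaranteeing descent for this quadratic
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ c - y)  # gradient of the squared error
        c -= lr * grad
    return c
```

Because the objective is quadratic, plain gradient descent with this conservative step converges to the least-squares coefficients; the RANSAC estimate only determines the starting point.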
- The flow of step S31 for determining the presence or absence of an evaluation image group is slightly different from the flow in FIG. 14.
- FIG. 19 shows the case where the image having the same light source environment as the extended image I is an image different from the extended image I itself.
- step S41 one arbitrary image I is selected from the images stored in the image storage unit 103.
- In step S61, an image having the same light source environment as the image I selected in step S41 is detected. As a specific detection method, for example, a reference object may be placed in the imaged region and its change examined. For example, an object of barium sulfate or Spectralon, which produces complete diffuse reflection and no specular reflection, may be placed.
- Alternatively, since the light source environment can be considered to change with time, it may simply be judged that the light source environment has not changed when images are captured close together in time. This is particularly effective in an environment where sunlight enters through, for example, a window.
- Alternatively, the light source state may be imaged using a specular sphere and its change detected. Of course, the light source state may also be captured with a wide-angle camera facing the ceiling.
- Alternatively, the position of the pixel with the highest luminance in the imaged region may be tracked, and when the position moves, it may be determined that the light source environment has changed. Since the brightest pixel can be considered to be a specular reflection, a change in the light source position is detected by detecting the movement of this position.
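The brightest-pixel check above can be sketched as follows; a minimal illustration in which the function name and the movement threshold are assumptions:

```python
import numpy as np

def light_source_moved(prev_frame, cur_frame, min_shift=2):
    """Treat the brightest pixel as a specular highlight; if its position
    moves by more than min_shift pixels, assume the light source changed."""
    p_prev = np.unravel_index(np.argmax(prev_frame), prev_frame.shape)
    p_cur = np.unravel_index(np.argmax(cur_frame), cur_frame.shape)
    dist = np.hypot(p_cur[0] - p_prev[0], p_cur[1] - p_prev[1])
    return dist > min_shift
```

A small `min_shift` tolerance absorbs sensor noise so that a static light source is not flagged as moved.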
- step S42 it is determined whether all images stored in the image storage unit 103 in step S41 have been selected. As a result of the determination, if all the images have already been selected (Yes in S42), it is determined that there is no evaluation image group (S50).
- If an image having the same light source environment is found in step S62 (Yes in S62), 0 is substituted for k in step S44.
- In step S45, one image group (Ak, Bk) is selected from the image groups determined in step S62, and it is determined whether at least one further overlapping image group exists for the B region of the selected group. If it is determined that none exists (No in S45), the process returns to step S41 to reselect the image.
- In step S63, the overlapping image group (Ak+1, Bk+1) whose A region is the Bk region is detected.
- In step S48, if k is smaller than the threshold t (Yes in S48), k is incremented (step S49), and the process returns to step S45 and is repeated. On the other hand, if k is not smaller than the threshold t (No in S48), the process returns to step S41 and image I is reselected.
- In this way, the presence or absence of an evaluation image group can be determined.
- As described above, the step S31 of determining whether an evaluation image group exists and the step S32 of calculating linear coefficients with a small error when it exists allow linear coefficients to be calculated so as to minimize the accumulation of errors due to repeated calculations during image composition; therefore, panoramic image synthesis with fewer luminance errors can be performed, and the accuracy of object detection improves.
- According to the present invention, a panoramic image having a constant light source environment can be synthesized, and by using a plurality of such panoramic images, an object can be detected from an image captured in an arbitrary light source environment. The invention is therefore useful, for example, for detecting intruders in surveillance systems in places where the light source environment changes frequently, such as outdoors.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
- Closed-Circuit Television Systems (AREA)
- Editing Of Facsimile Originals (AREA)
Abstract
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006528343A JP3971783B2 (ja) | 2004-07-28 | 2005-02-08 | パノラマ画像合成方法および物体検出方法、パノラマ画像合成装置、撮像装置、物体検出装置、並びにパノラマ画像合成プログラム |
US11/132,654 US7535499B2 (en) | 2004-07-28 | 2005-05-19 | Panorama image synthesis method, object detection method, panorama image synthesis system, image shooting apparatus, and panorama image synthesis program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-220719 | 2004-07-28 | ||
JP2004220719 | 2004-07-28 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/132,654 Continuation US7535499B2 (en) | 2004-07-28 | 2005-05-19 | Panorama image synthesis method, object detection method, panorama image synthesis system, image shooting apparatus, and panorama image synthesis program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006011255A1 true WO2006011255A1 (fr) | 2006-02-02 |
Family
ID=35731677
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/001829 WO2006011255A1 (fr) | 2004-07-28 | 2005-02-08 | Procédé de synthèse d’image panoramique, procédé de détection d’objet, dispositif de synthèse d’image panoramique, système imageur, dispositif de détection d’objet et programme de synthèse d’image panoramique |
Country Status (3)
Country | Link |
---|---|
US (1) | US7535499B2 (fr) |
JP (1) | JP3971783B2 (fr) |
WO (1) | WO2006011255A1 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017059233A (ja) * | 2015-09-15 | 2017-03-23 | 株式会社リコー | 画像合成方法及び装置 |
CN107492084A (zh) * | 2017-07-06 | 2017-12-19 | 哈尔滨理工大学 | 基于随机性的典型成团细胞核图像合成方法 |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9805301B1 (en) * | 2005-03-04 | 2017-10-31 | Hrl Laboratories, Llc | Dynamic background estimation for video analysis using evolutionary optimization |
US20070047834A1 (en) * | 2005-08-31 | 2007-03-01 | International Business Machines Corporation | Method and apparatus for visual background subtraction with one or more preprocessing modules |
JP4671430B2 (ja) * | 2006-06-02 | 2011-04-20 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法、プログラム及び記録媒体 |
KR100866230B1 (ko) | 2007-04-12 | 2008-10-30 | 삼성전자주식회사 | 파노라마 사진 촬영 방법 |
JP4783461B2 (ja) * | 2007-09-25 | 2011-09-28 | 富士通株式会社 | 画像合成装置及び方法 |
US8934008B2 (en) | 2009-12-07 | 2015-01-13 | Cognitech, Inc. | System and method for determining geo-location(s) in images |
JPWO2011093031A1 (ja) * | 2010-02-01 | 2013-05-30 | 日本電気株式会社 | 携帯端末、行動履歴描写方法、及び行動履歴描写システム |
KR20110131949A (ko) * | 2010-06-01 | 2011-12-07 | 삼성전자주식회사 | 영상 처리 장치 및 방법 |
CN101872113B (zh) * | 2010-06-07 | 2014-03-19 | 中兴通讯股份有限公司 | 一种全景照片的拍摄方法及设备 |
US8861890B2 (en) | 2010-11-24 | 2014-10-14 | Douglas Alan Lefler | System and method for assembling and displaying individual images as a continuous image |
KR101784176B1 (ko) * | 2011-05-25 | 2017-10-12 | 삼성전자주식회사 | 영상 촬영 장치 및 그 제어방법 |
EP2803013A1 (fr) * | 2012-01-09 | 2014-11-19 | Qualcomm Incorporated | Mise à jour d'antémémoire d'ocr |
TWI578271B (zh) | 2012-10-23 | 2017-04-11 | 義晶科技股份有限公司 | 動態影像處理方法以及動態影像處理系統 |
KR101776706B1 (ko) * | 2012-11-30 | 2017-09-08 | 한화테크윈 주식회사 | 복수의 카메라 기반 사람계수장치 및 방법 |
US9639987B2 (en) * | 2013-06-27 | 2017-05-02 | Canon Information And Imaging Solutions, Inc. | Devices, systems, and methods for generating proxy models for an enhanced scene |
US10521100B2 (en) | 2015-08-28 | 2019-12-31 | Facebook, Inc. | Systems and methods for providing interactivity for panoramic media content |
US10521099B2 (en) * | 2015-08-28 | 2019-12-31 | Facebook, Inc. | Systems and methods for providing interactivity for panoramic media content |
CN108881702B (zh) * | 2017-05-09 | 2020-12-11 | 浙江凡后科技有限公司 | 一种多摄像头捕捉物体运动轨迹的系统及方法 |
US20220141384A1 (en) * | 2020-10-30 | 2022-05-05 | Flir Surveillance, Inc. | Situational awareness-based image annotation systems and methods |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07193746A (ja) * | 1993-10-20 | 1995-07-28 | Philips Electron Nv | 画像処理システム |
JPH0993430A (ja) * | 1995-09-26 | 1997-04-04 | Canon Inc | 画像合成方法及び画像合成装置 |
JPH11242373A (ja) * | 1998-02-26 | 1999-09-07 | Minolta Co Ltd | 荷電装置 |
JP2004139219A (ja) * | 2002-10-16 | 2004-05-13 | Seiko Instruments Inc | 画像処理方法および画像処理装置 |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5550641A (en) * | 1991-05-15 | 1996-08-27 | Gentech Corporation | System and method for rendering images |
US5598515A (en) * | 1994-01-10 | 1997-01-28 | Gen Tech Corp. | System and method for reconstructing surface elements of solid objects in a three-dimensional scene from a plurality of two dimensional images of the scene |
US6549681B1 (en) * | 1995-09-26 | 2003-04-15 | Canon Kabushiki Kaisha | Image synthesization method |
US5748194A (en) * | 1996-05-08 | 1998-05-05 | Live Picture, Inc. | Rendering perspective views of a scene using a scanline-coherent look-up table |
JPH09321972A (ja) | 1996-05-28 | 1997-12-12 | Canon Inc | 画像合成装置及び方法 |
IL119831A (en) * | 1996-12-15 | 2002-12-01 | Cognitens Ltd | A device and method for three-dimensional reconstruction of the surface geometry of an object |
US6552744B2 (en) * | 1997-09-26 | 2003-04-22 | Roxio, Inc. | Virtual reality camera |
JPH11242737A (ja) | 1998-02-26 | 1999-09-07 | Ricoh Co Ltd | 画像処理方法及び装置並びに情報記録媒体 |
US6278466B1 (en) * | 1998-06-11 | 2001-08-21 | Presenter.Com, Inc. | Creating animation from a video |
US7057650B1 (en) * | 1998-06-22 | 2006-06-06 | Fuji Photo Film Co., Ltd. | Image sensing apparatus and method for synthesizing a composite image |
JP3849385B2 (ja) * | 2000-02-03 | 2006-11-22 | コニカミノルタホールディングス株式会社 | 画像処理装置、画像処理方法および画像処理プログラムを記録したコンピュータ読取可能な記録媒体 |
US6978051B2 (en) * | 2000-03-06 | 2005-12-20 | Sony Corporation | System and method for capturing adjacent images by utilizing a panorama mode |
JP2002150264A (ja) | 2000-11-07 | 2002-05-24 | Nippon Telegr & Teleph Corp <Ntt> | モザイク画像合成方法及びモザイク画像合成装置並びにモザイク画像合成プログラムを記録した記録媒体 |
JP4551018B2 (ja) * | 2001-04-05 | 2010-09-22 | 富士通株式会社 | 画像結合装置 |
US20030002735A1 (en) * | 2001-06-29 | 2003-01-02 | Hideaki Yamamoto | Image processing method and image processing apparatus |
US7224387B2 (en) * | 2002-01-09 | 2007-05-29 | Hewlett-Packard Development Company, L.P. | Method and apparatus for correcting camera tilt distortion in panoramic images |
JP3940690B2 (ja) | 2002-03-25 | 2007-07-04 | 株式会社東芝 | 画像処理装置及びその方法 |
WO2006011261A1 (fr) * | 2004-07-26 | 2006-02-02 | Matsushita Electric Industrial Co., Ltd. | Procede de traitement d’image, dispositif de traitement d’image et programme de traitement d’image |
-
2005
- 2005-02-08 WO PCT/JP2005/001829 patent/WO2006011255A1/fr active Application Filing
- 2005-02-08 JP JP2006528343A patent/JP3971783B2/ja not_active Expired - Lifetime
- 2005-05-19 US US11/132,654 patent/US7535499B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07193746A (ja) * | 1993-10-20 | 1995-07-28 | Philips Electron Nv | 画像処理システム |
JPH0993430A (ja) * | 1995-09-26 | 1997-04-04 | Canon Inc | 画像合成方法及び画像合成装置 |
JPH11242373A (ja) * | 1998-02-26 | 1999-09-07 | Minolta Co Ltd | 荷電装置 |
JP2004139219A (ja) * | 2002-10-16 | 2004-05-13 | Seiko Instruments Inc | 画像処理方法および画像処理装置 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017059233A (ja) * | 2015-09-15 | 2017-03-23 | 株式会社リコー | 画像合成方法及び装置 |
CN107492084A (zh) * | 2017-07-06 | 2017-12-19 | 哈尔滨理工大学 | 基于随机性的典型成团细胞核图像合成方法 |
CN107492084B (zh) * | 2017-07-06 | 2021-06-25 | 哈尔滨理工大学 | 基于随机性的典型成团细胞核图像合成方法 |
Also Published As
Publication number | Publication date |
---|---|
US7535499B2 (en) | 2009-05-19 |
JP3971783B2 (ja) | 2007-09-05 |
US20060023090A1 (en) | 2006-02-02 |
JPWO2006011255A1 (ja) | 2008-05-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2006011255A1 (fr) | Procédé de synthèse d’image panoramique, procédé de détection d’objet, dispositif de synthèse d’image panoramique, système imageur, dispositif de détection d’objet et programme de synthèse d’image panoramique | |
US7860343B2 (en) | Constructing image panorama using frame selection | |
JP4435865B2 (ja) | 画像処理装置、画像分割プログラムおよび画像合成方法 | |
JP3882005B2 (ja) | 画像生成方法、物体検出方法、物体検出装置および画像生成プログラム | |
US8542741B2 (en) | Image processing device and image processing method | |
US9766057B1 (en) | Characterization of a scene with structured light | |
KR100790887B1 (ko) | 영상 처리장치 및 방법 | |
WO1999058927A1 (fr) | Dispositif et procede de generation d'images | |
WO2005084017A1 (fr) | Système multiprojection | |
JP2001227914A (ja) | 物体監視装置 | |
WO2019193859A1 (fr) | Procédé, dispositif, système et programme d'étalonnage de caméra | |
KR100596976B1 (ko) | 왜곡 영상 보정 장치 및 방법 및 이를 이용하는 영상디스플레이 시스템 | |
JP4860431B2 (ja) | 画像生成装置 | |
JP5151922B2 (ja) | 画素位置対応関係特定システム、画素位置対応関係特定方法および画素位置対応関係特定プログラム | |
EP1372341A2 (fr) | Système d'affichage d'informations et terminal d'informations portable | |
JP6099281B2 (ja) | 書籍読み取りシステム及び書籍読み取り方法 | |
JPH10320558A (ja) | キャリブレーション方法並びに対応点探索方法及び装置並びに焦点距離検出方法及び装置並びに3次元位置情報検出方法及び装置並びに記録媒体 | |
JP2005141655A (ja) | 3次元モデリング装置及び3次元モデリング方法 | |
WO2022044807A1 (fr) | Dispositif et procédé de traitement d'informations | |
KR20190127543A (ko) | 비디오 시퀀스에서 모션을 감지하기 위한 방법 | |
JP4902564B2 (ja) | マーカ検出識別装置およびそのプログラム | |
JP4776983B2 (ja) | 画像合成装置及び画像合成方法 | |
JP2006323139A (ja) | プロジェクタ・カメラサーバ、及び画像投影方法 | |
JP2002230585A (ja) | 3次元画像表示方法及び記録媒体 | |
JP2008311906A (ja) | 画像データ照合装置、画像合成装置及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 11132654 Country of ref document: US |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
WWP | Wipo information: published in national office |
Ref document number: 11132654 Country of ref document: US |
|
DPEN | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed from 20040101) | ||
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2006528343 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 05709881 Country of ref document: EP Kind code of ref document: A1 |