WO2000068890A1 - Rendering method using concentric mosaic images - Google Patents
Rendering method using concentric mosaic images
- Publication number
- WO2000068890A1 (PCT/US2000/012839)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
Definitions
- the invention is related to an image-based rendering system and process for rendering novel views of a 3D scene, and more particularly, to a system and process for rendering these novel views using concentric mosaics.
- the original Adelson and Bergen work defined a 7D plenoptic function as the intensity of light rays passing through the camera center at every location (V_x, V_y, V_z), at every possible angle (θ, φ), for every wavelength λ, at every time t, i.e., P_7 = P(V_x, V_y, V_z, θ, φ, λ, t).
- the 2D embodiment of the plenoptic function is the easiest to construct.
- the 2D parameterization of the plenoptic function does not allow novel views from different viewpoints within the scene to be rendered. It would be possible to render novel views using the 5D or 4D embodiments of the plenoptic function. It is, however, very difficult to construct a complete 5D plenoptic function [MB95, KS96] because feature correspondence is a difficult problem.
- while the 4D embodiment is not too difficult to construct because of its unique parameterization, the resulting lightfield is large even for a small object (and therefore a small convex hull). This presents problems when trying to render novel views, and to date a real-time walk-through of a real 3D scene has not been fully demonstrated.
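For reference, the successively reduced parameterizations discussed above can be written out explicitly. The symbols follow the Adelson and Bergen notation quoted earlier; the two-plane (u, v, s, t) form is the standard lightfield parameterization, and the 3D form anticipates the concentric-mosaic parameterization (radius, angle of rotation, vertical field of view) described later in this document.

```latex
% 7D plenoptic function (Adelson and Bergen)
P_7 = P(V_x, V_y, V_z, \theta, \phi, \lambda, t)
% 5D: drop wavelength \lambda and time t (static scene, fixed spectrum)
P_5 = P(V_x, V_y, V_z, \theta, \phi)
% 4D lightfield: rays indexed by their intersections with two planes
P_4 = P(u, v, s, t)
% 3D concentric mosaics: radius, rotation angle, vertical elevation
P_3 = P(r, \theta, v)
```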
- each concentric mosaic represents a collection of consecutive slit images of the surrounding 3D scene taken in a direction tangent to a viewpoint on a circle on a plane within the scene.
- the mosaics are concentric in that the aforementioned circles on the plane are concentric.
- This system and process is based on a unique 3D parameterization of the previously described plenoptic function that will hereafter be referred to as concentric mosaics. Essentially, this parameterization requires that the concentric mosaics be created by constraining "camera" motion on concentric circles in the circle plane within the 3D scene. Once created, the concentric mosaics can be used to create novel views of the 3D scene without explicitly recovering the scene geometry.
- each concentric mosaic is generated by capturing slit-shaped views of the 3D scene from different viewpoints along one of the concentric circles.
- Each captured slit image is uniquely indexed by its radial location (i.e., which concentric mosaic it belongs to) and its angle of rotation (i.e., where on the mosaic).
- the radial location is the radius of the concentric circle on which the slit image was captured, and the angle of rotation is defined as the number of degrees from a prescribed beginning on the concentric circle to the viewpoint from which the slit image was captured.
- the height of the slit image reflects the vertical field of view of the capturing camera.
- slit images are captured at each viewpoint on the concentric circle in both directions tangent to the circle. If so, the unique identification of each slit image also includes an indication as to which of the two tangential directions the slit image was captured.
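The indexing scheme just described can be sketched as a simple lookup structure. This is an illustrative sketch only; the class and field names below are assumptions, not part of the patent.

```python
# Minimal sketch of the slit-image index: each slit is uniquely keyed by
# its radial location, its angle of rotation, and (when both tangential
# directions are captured) the capture direction.
from dataclasses import dataclass

@dataclass(frozen=True)
class SlitKey:
    radius_index: int      # which concentric mosaic (radial location)
    rotation_index: int    # angular position on the circle
    direction: str         # 'cw' or 'ccw' tangential capture direction

# a concentric-mosaic store maps each key to a slit image
# (here just a placeholder column of pixel rows)
mosaics = {}
key = SlitKey(radius_index=2, rotation_index=135, direction='cw')
mosaics[key] = [[0, 0, 0]] * 240  # e.g. a 240-pixel-tall, 1-pixel-wide slit

print(key in mosaics)  # True
```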
- Novel views from viewpoints on the aforementioned circle plane are rendered using the concentric mosaics composed of slit images.
- the viewpoint of a novel view may be anywhere within a circular region defined by the intersection of the outermost concentric mosaic with the circle plane.
- the novel view image consists of a collection of side-by-side slit images, each of which is identified by a ray emanating from the viewpoint of the novel view on the circle plane and extending toward a unique one of the slit images needed to construct the novel view.
- Each of the rays associated with the slit images needed to construct the novel view will either coincide with a ray identifying one of the previously captured slit images in the concentric mosaics, or it will pass between two of the concentric circles on the circle plane.
- the previously captured slit image associated with the coinciding ray can be used directly to construct part of the novel view.
- a new slit image is formed by interpolating between the two previously captured slit images associated with the rays originating from the adjacent concentric circles that are parallel to the non-coinciding ray of the novel view.
- This interpolation process preferably includes determining the distance separating the non-coinciding ray from each of the two adjacent parallel rays. Once these distances are determined, a ratio is computed for each slit image associated with the parallel rays based on the relative distance between the non-coinciding ray and each of the parallel rays.
- This ratio represents how much weight the slit image will be given in a blending process that combines the two slit images associated with the parallel rays to form an interpolated slit image.
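The distance-weighted blending described above can be sketched as follows. The function name and data layout are illustrative assumptions; the weighting (the closer captured ray contributes more) follows the ratio computation in the text.

```python
# Sketch of interpolating a new slit from two captured slits, weighted by
# the distance of the novel view's non-coinciding ray to each parallel ray.
def blend_slits(slit_inner, slit_outer, d_inner, d_outer):
    """Linearly blend two slit images (flat lists of pixel values).

    d_inner / d_outer are the distances from the non-coinciding ray to the
    two parallel captured rays; the closer slit gets the larger weight.
    """
    total = d_inner + d_outer
    w_inner = d_outer / total   # smaller distance -> bigger weight
    w_outer = d_inner / total
    return [w_inner * a + w_outer * b for a, b in zip(slit_inner, slit_outer)]

# a ray 1 unit from the inner slit and 3 units from the outer slit
# weights the inner slit 0.75 and the outer slit 0.25
print(blend_slits([100.0], [200.0], 1.0, 3.0))  # [125.0]
```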
- the rendering system and process can also be modified to mitigate the effects of potential distortions in the rendered images. These distortions can arise when objects depicted in a novel view are relatively close because the slit images used to construct the novel views were captured at viewpoints on the circle plane that are different from the viewpoint of the novel view.
- the potential distortion can be compensated for using a scaling process.
- This scaling process includes first determining the average distance from the viewpoint of each captured slit image to the objects of the 3D scene depicted therein. Then, for each captured slit image found to coincide with one of the slit images needed to construct a novel view, and for each interpolated slit image, the average distance from the objects of the 3D scene depicted in the captured slit image to the viewpoint of the novel view is determined. A ratio is computed by dividing the first distance by the second distance. This ratio is used to scale the vertical dimension of the slit image.
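As a sketch of the ratio computation above (function name assumed, not from the patent): the vertical dimension of the slit is scaled by the captured-viewpoint distance divided by the novel-viewpoint distance, so objects closer to the novel viewpoint are drawn taller.

```python
# Depth-correction ratio: average object distance from the capture
# viewpoint divided by average object distance from the novel viewpoint.
def depth_scale(avg_dist_capture, avg_dist_novel):
    """Factor by which to scale the slit image's vertical dimension."""
    return avg_dist_capture / avg_dist_novel

# objects averaging 4 m from the capture viewpoint but only 2 m from the
# novel viewpoint are stretched vertically by a factor of 2
print(depth_scale(4.0, 2.0))  # 2.0
```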
- the system used to create the concentric mosaics typically uses multiple slit-image cameras that are rotated around a center point.
- a single camera, such as a video camera, may be used instead.
- a regular image is captured using the video camera, rather than a slit image, to obtain a sequence of images.
- a regular image includes multiple image lines wherein each image line is associated with a ray direction (a ray direction is an angle at which a light ray enters the lens of the camera).
- a concentric mosaic is formed by combining each image line having the same ray direction in the sequence of images. For example, the 20th image line taken from each image may be used to form a concentric mosaic.
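The rebinning step just described can be sketched in a few lines. Frame contents and the column number are illustrative; the point is that the same column, taken from every frame of the rotation, is laid side by side to form one mosaic.

```python
# Rebin a sequence of frames into one concentric mosaic by extracting the
# same column from every frame and placing the columns side by side.
def rebin(frames, column):
    """frames: list of HxW images (nested lists); returns a list of slits,
    one per frame, each slit being that frame's chosen column."""
    return [[row[column] for row in frame] for frame in frames]

# three tiny 2x3 'frames'; column 1 of each becomes one slit of the mosaic
frames = [[[0, 1, 2], [3, 4, 5]],
          [[6, 7, 8], [9, 10, 11]],
          [[12, 13, 14], [15, 16, 17]]]
mosaic = rebin(frames, 1)
print(mosaic)  # [[1, 4], [7, 10], [13, 16]]
```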
- An image from a novel viewpoint may then be rendered using the same techniques described for the slit-image embodiment.
- an image from a novel viewpoint may be rendered without rebinning the images into concentric mosaics.
- multiple ray directions are calculated from the novel viewpoint.
- an image line is obtained from the sequence of images that has a ray direction substantially aligning with the ray direction from the novel viewpoint.
- the obtained image lines are then combined to render an image from the novel viewpoint.
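The no-rebinning approach above amounts to a nearest-direction lookup per ray of the novel view. The sketch below assumes a flat table of (ray direction, image line) pairs for the captured sequence; the names and the angle representation are illustrative.

```python
# For each ray of the novel view, select the captured image line whose
# ray direction most closely aligns with it.
def nearest_line(captured, ray_angle):
    """captured: list of (angle_deg, image_line) pairs;
    returns the image line whose direction best matches ray_angle."""
    best = min(captured, key=lambda pair: abs(pair[0] - ray_angle))
    return best[1]

captured = [(10.0, 'line_a'), (20.0, 'line_b'), (30.0, 'line_c')]
print(nearest_line(captured, 19.2))  # line_b
```

In practice the lines selected this way for all rays of the novel view are concatenated side by side, exactly as in the slit-image embodiment.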
- FIG. 1 is a diagram depicting a general-purpose computing device constituting an exemplary system for implementing the present invention.
- FIG. 2 is a flow chart diagramming an overall process for creating concentric mosaics and rendering novel views according to one embodiment of the present invention.
- FIG. 3 is a diagram showing a side view of a simplified setup for capturing slit images of a real 3D scene using cameras.
- FIG. 4A is a diagram showing concentric circles on the circle plane and concentric mosaics associated with each circle.
- FIGS. 4B through 4D are images depicting concentric mosaics taken at three different radial locations (i.e., camera locations in this case).
- FIG. 5 is a flow chart diagramming a process for accomplishing the concentric mosaic creation module of the overall process of FIG. 2.
- FIG. 6A is a diagram showing one of the concentric circles and the associated concentric mosaic, as well as the slit images captured at two viewpoints on the circle in a first tangential direction.
- FIG. 6B is a diagram showing the concentric circle of FIG. 6A and a companion concentric mosaic, as well as slit images captured at the same two viewpoints on the circle, but in a second, opposite tangential direction.
- FIG. 7 is a flow chart diagramming a process for accomplishing the novel view rendering module of the overall process of FIG. 2.
- FIG. 8 is a diagram showing a top view of the simplified setup of FIG. 3 where back-to-back cameras are employed at every camera location on the rotating beam.
- FIG. 9 is a diagram showing a series of concentric circles and the concentric mosaics associated with two of them, as well as rays associated with a novel view's viewpoint that coincide with rays associated with the concentric mosaics, thereby indicating the slit images identified by the rays can be used to construct a part of the novel view.
- FIG.10 is a diagram showing a series of concentric circles on the circle plane, and a ray associated with a novel view's viewpoint that does not coincide with any rays associated with the concentric mosaics.
- FIG. 11 is a diagram illustrating the sequence of events associated with creating an interpolated slit image from two previously captured slit images.
- FIG. 12A is a diagram showing a side view of the circle plane with rays emanating from three viewpoints toward an object in the 3D scene to illustrate the depth distortion that can occur in a rendered novel view if objects in the novel view are too close to the various viewpoints.
- FIG. 12B is a diagram illustrating the problem with rendering novel views using previously captured slit images from viewpoints in back of and in front of the viewpoint of the novel view, when an object depicted in the novel view is too close to the various viewpoints, as shown in FIG. 12A.
- FIG. 13 is a flow chart diagramming a process for compensating for the depth distortion depicted in Figs. 12A and 12B.
- FIGS. 14A and 14B are diagrams showing how to render an image using concentric mosaics.
- FIG. 15 is a side elevational view of another embodiment of the present invention wherein a single camera is mounted to a rotating table to capture a real scene.
- FIG. 16 shows a top view of the camera and rotating table of FIG. 15.
- FIG. 17 is a diagram illustrating a sequence of images captured by the camera of FIG. 15.
- FIG. 18 is a diagram showing the construction of a concentric mosaic using the camera of FIG. 15, wherein the camera is mounted in a normal direction.
- FIG. 19 is a diagram showing the construction of a concentric mosaic using a camera similar to the camera of FIG. 15 wherein the camera is mounted in a tangential direction.
- FIG. 20 is a diagram illustrating constant depth correction for images rendered from a novel viewpoint.
- FIG. 21 is a flow chart of a method for rendering an image from a novel viewpoint using images captured from the camera of FIG. 15.
- FIG. 22 is a diagram illustrating a novel viewpoint and a viewing direction obtained from user input.
- FIG. 23 is a diagram illustrating a ray projected from the novel viewpoint of FIG. 22 through a point Q on a camera path.
- FIG. 24 is a diagram illustrating multiple rays projected from the novel viewpoint of FIG. 22 and using the projected rays to render an image.
- FIG. 25 is a flow chart of a method for rendering an image using the embodiment of FIG. 15.
- FIG. 26 is a flow chart of additional steps that may be used with the rendering technique of FIG. 25.
- FIG. 1 and the following discussion are intended to provide a brief, general description of a suitable computing environment in which the invention may be implemented.
- the invention will be described in the general context of computer-executable instructions, such as program modules, being executed by a personal computer.
- program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
- the invention may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like.
- the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
- program modules may be located in both local and remote memory storage devices.
- an exemplary system for implementing the invention includes a general purpose computing device in the form of a conventional personal computer 20, including a processing unit 21, a system memory 22, and a system bus 23 that couples various system components including the system memory to the processing unit 21.
- the system bus 23 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
- the system memory includes read only memory (ROM) 24 and random access memory (RAM) 25.
- the personal computer 20 further includes a hard disk drive 27 for reading from and writing to a hard disk, not shown, a magnetic disk drive 28 for reading from or writing to a removable magnetic disk 29, and an optical disk drive 30 for reading from or writing to a removable optical disk 31 such as a CD ROM or other optical media.
- the hard disk drive 27, magnetic disk drive 28, and optical disk drive 30 are connected to the system bus 23 by a hard disk drive interface 32, a magnetic disk drive interface 33, and an optical drive interface 34, respectively.
- the drives and their associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the personal computer 20.
- a number of program modules may be stored on the hard disk, magnetic disk 29, optical disk 31, ROM 24 or RAM 25, including an operating system 35, one or more application programs 36, other program modules 37, and program data 38.
- a user may enter commands and information into the personal computer 20 through input devices such as a keyboard 40 and pointing device 42 (such as a mouse).
- a camera 55 (such as a digital/electronic still or video camera, or film/photographic scanner) capable of capturing a sequence of images 56 can also be included as an input device to the personal computer 20.
- the images 56 are input into the computer 20 via an appropriate camera interface 57.
- This interface 57 is connected to the system bus 23, thereby allowing the images to be routed to and stored in the RAM 25, or one of the other data storage devices associated with the computer 20.
- image data can be input into the computer 20 from any of the aforementioned computer-readable media as well, without requiring the use of the camera 55.
- Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like.
- These and other input devices are often connected to the processing unit 21 through a serial port interface 46 that is coupled to the system bus, but may be connected by other interfaces, such as a parallel port, game port or a universal serial bus (USB).
- a monitor 47 or other type of display device is also connected to the system bus 23 via an interface, such as a video adapter 48.
- personal computers typically include other peripheral output devices (not shown), such as speakers and printers.
- the personal computer 20 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 49.
- the remote computer 49 may be another personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the personal computer 20, although only a memory storage device 50 has been illustrated in FIG. 1.
- the logical connections depicted in FIG. 1 include a local area network (LAN) 51 and a wide area network (WAN) 52.
- Such networking environments are commonplace in offices, enterprise-wide computer networks, Intranets and the Internet.
- the personal computer 20 When used in a LAN networking environment, the personal computer 20 is connected to the local network 51 through a network interface or adapter 53.
- the personal computer 20 When used in a WAN networking environment, the personal computer 20 typically includes a modem 54 or other means for establishing communications over the wide area network 52, such as the Internet.
- the modem 54 which may be internal or external, is connected to the system bus 23 via the serial port interface 46.
- program modules depicted relative to the personal computer 20, or portions thereof may be stored in the remote memory storage device. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
- the exemplary operating environment having now been discussed, the remaining part of this description section will be devoted to a description of the program modules embodying the invention.
- one concentric mosaic is created for each concentric circle defined within the "circle plane" of the 3D scene.
- the system and process according to one embodiment of the present invention are designed to perform the following steps, as shown by the high-level flow diagram of FIG. 2: a) generate a series of concentric mosaics each comprising a plurality of consecutive slit images that collectively depict the surrounding 3D scene (step 200); and b) render novel views of the 3D scene from viewpoints within the circular regions defined by the series of concentric mosaics using the slit images (step 202).
- one technique for accomplishing the constrained camera motion needed to capture images of a real 3D scene is to mount several cameras 300 (C0, Ck, Cn) on a rotating horizontal beam 302. While only three cameras 300 are shown in FIG. 3, there would actually be many more positioned along the rotating beam. As will be discussed later, it is desirable to create as many concentric mosaics as possible, hence the need for many cameras or camera locations.
- the cameras 300 can be of any type appropriate for capturing images of the scene. Of course, since the images captured by the camera will be manipulated digitally (as will be described later), it would be convenient if digital cameras were employed to capture the images of the scene.
- the beam 302 is raised up from "ground level" by a support 304. This expands the vertical field of view and provides a more realistic perspective. Preferably the beam is raised high enough to ensure the full vertical field of view possible from the camera is realized.
- the interface between the support 304 and the beam 302 is configured to allow the beam to be rotated a full 360 degrees. Thus, this interface defines a center of rotation 306.
- a counterweight 308 is placed at the other end of the beam to counter balance the weight of the cameras 300.
- the plane defined by the circular region swept out by rotating the beam 360 degrees is the aforementioned circle plane (which in the depicted case is a horizontally oriented plane), and the circles traced out by each camera on the beam when the beam is rotated constitute the aforementioned concentric circles on the circle plane.
- one of the cameras 300 is positioned at the center of rotation 306.
- the depicted multiple camera configuration is preferred as several of the concentric mosaics can be created simultaneously. However, it would also be feasible to use just one camera and move its radial location on the beam to create each concentric mosaic separately. In fact it is this latter set-up that was used in a tested embodiment of the present invention.
- the set-up included a tripod with a rotatable mounting, a horizontally oriented beam, and a digital video camera.
- the cameras 300 are so-called "slit" or "vertical line" cameras in that only the center line of each captured image of the 3D scene is used. These images will hereinafter be referred to as "slit images".
- the prescribed width of the slit images is as narrow as possible — ideally the width of a single pixel.
- the overall pixel size is variable, and while it is not preferred, the slit images could have a width of more than one pixel.
- each slit image preferably exhibits a "horizontal" field of view not exceeding about 0.36 degrees; ideally, the "horizontal" field of view should be about 0.12 degrees.
- the measure of the height of a slit image can be referred to as its longitudinal field of view and the measure of the width of a slit image can be referred to as its lateral field of view. This terminology applies because the height of a slit image will typically be much greater than its width.
- the slit images are captured periodically while the camera is rotated a full 360 degrees.
- the images are captured with a frequency which, given the prescribed width of the images and the speed of rotation, will produce images that, when placed side-by-side in the order they were taken, form a concentric mosaic.
- the camera motion is made continuous and at a uniform speed for the full 360 degrees of rotation. This will ensure the images will line up perfectly with no overlaps or gaps.
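The capture rate implied by the figures above can be checked with simple arithmetic: for the mosaic to close with no gaps or overlaps, one full rotation must supply 360 degrees' worth of slits at the slit's lateral field of view.

```python
# Number of slit images per full rotation at the two lateral
# field-of-view figures given above (0.12 and 0.36 degrees).
slit_fov_deg = 0.12
print(round(360 / slit_fov_deg))  # 3000 slits at the ideal 0.12-degree width
print(round(360 / 0.36))          # 1000 slits at the coarser 0.36-degree limit
```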
- the process of creating a series of consecutive slit images that collectively depict the surrounding 3D scene is performed for each of the concentric circles 400 on the circle plane to form a collection of concentric mosaics 402, as shown in FIG. 4A.
- the left hand side shows a series of concentric circles including the center of rotation C0 (i.e., a circle with a radius of zero), one of the intermediate circles Ck, and finally an outermost circle Cn.
- Slit images are captured by moving a camera along each of the concentric circles and then combined in a side-by-side manner to form the concentric mosaics CM0, CMk, and CMn, respectively.
- FIG. 4B is from the center camera (CM0)
- FIG. 4C is from the middle camera (CMk)
- FIG. 4D is from the outermost camera (CMn). Notice that there is significant occlusion around the green cylinder across the sequence of mosaics. Also notice the significant specular light change on the wall.
- each slit image is created by capturing the view of the 3D scene in directions tangent to one of the concentric circles in the circle plane (step 500).
- the geometry of this step is best seen in Figs. 6A and 6B.
- each slit image L (depicted as lines 600) is captured at a viewpoint v (identified by the short radial lines 602) on a concentric circle 604 in the circle plane, in a direction tangent to the circle.
- the next step 502 in the process of creating a concentric mosaic is to uniquely identify each of the captured slit images.
- a radial location (i.e., the distance from the center of rotation to the concentric circle from which the slit image was captured)
- an angle of rotation (i.e., the number of degrees from a prescribed beginning on the concentric circle to the viewpoint from which the slit image was captured)
- FIG. 6A depicts capturing views of the 3D scene in only one of the directions tangent to a concentric circle 604 at each of its viewpoints 602.
- slit images are captured in the opposite tangential direction as well to create a companion concentric mosaic CM (depicted as rectangular region 608') for each of the concentric circles 604 on the circle plane.
- step 500 of the flow diagram of FIG. 5 preferably includes capturing the view of the 3D scene in both directions tangent to the concentric circles in the circle plane. It is also noted that when the camera position is coincident with the center of rotation 306 (i.e., CM0 and its companion mosaic), the two concentric mosaics are identical except for being shifted by 180 degrees.
- Each slit image 600' is also uniquely identified in the same way as the slit images 600 associated with the first tangential direction. Specifically, each slit image 600' is identified by a radial location and an angle of rotation. However, some designation must also be included to indicate that the slit image 600' views the 3D scene from a tangential direction on the specified concentric circle that is opposite from the slit images 600. For example, this can be done by adding an additional identifier like clockwise/counterclockwise to the indexing of all slit images.
- the oppositely-directed slit images at each viewpoint on a concentric circle are ideally captured by using two cameras 800, 800' placed back-to-back on the rotating beam 802 at each camera position, as shown in FIG. 8.
- the cameras 800, 800' are pointed in a direction that is tangent to the circle traced out by the cameras as the beam 802 rotates, but at the same time in a direction exactly opposite each other. It is noted that here again while only three sets of cameras and camera positions are depicted, there would actually be many more. Of course, alternately, a single pair of back-to-back cameras could be used, where the radial position of this pair is changed between each complete rotation of the beam to create the respective concentric mosaics. Or, a single camera could be used and reversed in direction at each camera position.
- the slit images associated with a circle on the circle plane can be placed side-by-side in the order in which they were taken to produce the aforementioned concentric mosaics CMk.
- the term "concentric mosaic" as used in the context of the description of the present invention refers to a collection of uniquely identified, stored slit images associated with one of the concentric circles on the circle plane of the 3D scene. These concentric mosaics are depicted as the rectangular region 608 in FIG. 6A and the rectangular region 608' in FIG. 6B. As will become apparent from the next section, the concentric mosaics are all that is needed to render a novel view of the 3D scene from a viewpoint anywhere on the circle plane.
- a plenoptic function can be formed at any point in the aforementioned circular region on the circle plane using the concentric mosaics.
- the concentric mosaic representation is a 3D plenoptic function, parameterized by three variables, namely the radius, the angle of rotation, and the vertical field of view.
- the process of creating concentric mosaics of a surrounding 3D scene has involved capturing images of a real scene using cameras.
- the present invention is not limited to generating the concentric mosaics from a real 3D scene.
- the concentric mosaics can also be generated synthetically using any appropriate graphics program capable of producing a 3D scene.
- these programs can be employed to generate slit images of the desired dimensions, which depict a portion of a surrounding synthetic 3D scene, from a perspective that is tangent (in either direction) to a circle on a circle plane disposed within a synthetic 3D scene.
- the captured slit images can refer to either real images or synthetic images.
- once captured, the slit images are all that is needed so that any novel view of the 3D scene can be rendered, because these images will represent all the possible slit images that would be needed to generate a novel view.
- a convenient way of envisioning this concept is to think of the novel view as being made up of a series of side-by-side slit images, each having the dimensions described previously.
- Each of these slit images of the 3D scene can be indexed by a ray traversing the circular region formed by the concentric mosaics, where the ray originates at the viewpoint of the novel view on the circle plane and extends along the plane toward the portion of the 3D scene associated with the slit image.
- the location of a previously captured slit image can be indexed by a ray originating at its associated viewpoint on one of the concentric circles in the circle plane and extending towards the portion of the 3D scene captured by the slit image.
- the rays 606, 606' refer to the location of the previously captured slit images associated with a viewpoint 602. These rays 606, 606' are directed tangentially (in opposite directions) from the concentric circle 604 as discussed previously.
- a novel view from a viewpoint inside the aforementioned circular region can be rendered by first identifying all the rays emanating from the viewpoint that represent the locations of the slit images needed to construct the novel view (step 700).
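Purely as an illustration of step 700, the fan of rays for a novel view might be enumerated as follows. This is a minimal Python sketch; the function name, its parameters, and the even angular spacing across the field of view are assumptions of the sketch, not details taken from the patent:

```python
import math

def novel_view_rays(px, py, heading, fov, n_slits):
    """Enumerate the rays that index the slit images of a novel view.

    The novel view at viewpoint (px, py), looking along `heading` with a
    lateral field of view `fov` (both in radians), is treated as n_slits
    adjacent slit images, with one ray per slit fanned evenly across the
    field of view.  All names here are illustrative, not from the patent.
    """
    rays = []
    for i in range(n_slits):
        # Angle of the i-th slit, swept left to right across the view.
        theta = heading - fov / 2 + fov * (i + 0.5) / n_slits
        # Each ray is (origin x, origin y, direction x, direction y).
        rays.append((px, py, math.cos(theta), math.sin(theta)))
    return rays
```

Each returned tuple identifies one slit image of the novel view by its originating viewpoint and direction on the circle plane.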
- in step 702, it is determined whether any ray associated with a previously captured slit image coincides with one of the rays representing the locations of the slit images that are needed to form the novel view. This is accomplished by checking both the radial location and the rotation angle.
- any such coinciding ray will identify the associated previously captured slit image as one of the slit images needed to form a part of the novel view.
- the previously captured slit images associated with the coinciding rays are selected and associated with the appropriate ray emanating from the viewpoint of the novel view.
- this last step involves associating the unique identifier for a slit image (i.e., its radial location and angle of rotation) with the appropriate ray emanating from the novel view's viewpoint.
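The unique identifier mentioned above (radial location and angle of rotation) can be computed directly from a ray's geometry, because a ray on the circle plane is tangent to exactly one concentric circle. The following Python function is illustrative only: it assumes the circle plane is centered at the origin, and its name and signature are invented for this sketch rather than taken from the patent:

```python
import math

def index_ray(px, py, dx, dy):
    """Index a viewing ray by the concentric circle it is tangent to.

    A ray from viewpoint (px, py) with direction (dx, dy) is tangent to
    one circle centered at the origin; that circle's radius and the
    rotation angle of the tangent point form the slit-image identifier.
    """
    # Normalize the direction so distances along the ray are metric.
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    # The perpendicular distance from the center to the ray's supporting
    # line equals the radius of the tangent concentric circle.
    cross = px * dy - py * dx
    radius = abs(cross)
    # The foot of the perpendicular from the center is the tangent point.
    t = -(px * dx + py * dy)
    tx, ty = px + t * dx, py + t * dy
    angle = math.atan2(ty, tx) % (2 * math.pi)
    # The sign of the cross product distinguishes the two tangent
    # directions (the clockwise versus counterclockwise slit set).
    direction = 'ccw' if cross > 0 else 'cw'
    return radius, angle, direction
```

A captured slit image coincides with a needed one when both rays map to the same (radius, angle, direction) triple, up to the sampling resolution of the concentric mosaics.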
- This process is illustrated schematically in FIG. 9.
- the left hand side of FIG. 9 shows the circle plane 900 with several of the concentric circles (C0, C1, ..., Cn) 902, 904, 906, 908.
- the rays 910, 912 are shown emanating from viewpoint (P) 914 and respectively traversing the circle plane 900 toward two of the slit images needed to form the novel view.
- the captured rays that coincide with rays 910, 912 need to be found.
- a coinciding captured ray can be found by determining where a ray emanating from viewpoint 914 becomes tangent with one of the concentric circles.
- ray 910 is tangent to circle 904 at point 916 and ray 912 becomes tangent to circle 906 at point 918.
- the rays (not shown) that are associated with these points 916, 918, and have the same direction as rays 910, 912, identify the slit images needed to form part of the novel view. There may be, however, practical limits to the number of concentric mosaics that can be created.
- the optical characteristics of the camera(s) limit the number of distinct concentric mosaics that can be formed.
- a limitation on the number of concentric mosaics can have an impact on the rendering of novel views because some of the slit images needed to form a novel view may not exactly line up with one of the captured slit images.
- it is possible that a slit image needed to form a part of the novel view must contain portions of two adjacent captured views.
- the foregoing situation can be illustrated as a ray 1000 emanating from a viewpoint 1002 associated with a novel view, which never becomes tangent to any of the concentric circles 1004, 1006, 1008 on the circle plane 1010.
- Linear interpolation involves blending two captured slit images to form the slit image required for the novel view being created. These slit images are blended in proportion to the proximity of their associated rays to the non-coinciding ray, as will be explained in detail below.
- this is accomplished beginning with step 706 by identifying the two rays that are parallel to a non-coinciding ray and that respectively originate from viewpoints on the concentric circles of the circle plane lying on either side of the non-coinciding ray. This identification process can be accomplished by comparing radial locations.
- step 708 is to compute the distances separating the non-coinciding ray from each of its adjacent parallel rays.
- step 710 a ratio is established for each slit image whose location is represented by one of the pair of adjacent parallel rays, based on the relative distance between each parallel ray and the associated non-coinciding ray. This is accomplished via the following equation:
- I1000 = (d1 · I1012 + d2 · I1014) / (d1 + d2), where:
- d1 is the distance between ray 1000 and ray 1014,
- d2 is the distance between ray 1000 and ray 1012, and
- I1000, I1012 and I1014 are the pixel values (intensities) associated with rays 1000, 1012 and 1014, respectively.
- the computed ratio specifies the weight each of the aforementioned slit images associated with the pair of adjacent parallel rays is to be given in the blending process used to generate the interpolated slit image needed to construct the novel view. For example, referring to FIG. 10, if the ray 1000 identifying the needed slit image is halfway between the adjacent parallel rays 1012, 1014, then the aforementioned ratio for each of the slit images associated with the adjacent parallel rays would be 50 percent, and so each slit image would be given equal weight in the blending process. However, in a case where the non-coinciding ray is closer to one of the adjacent parallel rays than the other, the weight given each of the associated slit images would be different.
- for example, if the non-coinciding ray is twice as close to one of the adjacent parallel rays as to the other, the ratio computed for this situation would dictate that the slit image identified by the parallel ray closest to the non-coinciding ray be given a weight of about 67 percent. Likewise, the ratio would dictate that the slit image identified by the parallel ray furthest from the non-coinciding ray be given a weight of about 33 percent.
- the blended slit image would thus consist of pixels having characteristics approximately 67 percent attributable to the slit image associated with the closer of the two parallel rays and 33 percent attributable to the other slit image.
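The proximity weighting just described can be sketched in a few lines of Python. This is a hedged illustration only: the function name and the representation of a slit image as a list of pixel intensities are assumptions of this sketch, not details of the patent:

```python
def blend_slits(slit_a, slit_b, d_a, d_b):
    """Linearly interpolate a slit image from two captured neighbors.

    slit_a and slit_b are columns of pixel intensities captured along the
    two parallel rays adjacent to a non-coinciding ray; d_a and d_b are
    the distances from the non-coinciding ray to each of those rays.  The
    slit whose ray is closer (smaller distance) receives the larger weight.
    """
    total = d_a + d_b
    w_a = d_b / total  # weight of slit_a grows as d_a shrinks
    w_b = d_a / total
    # Blend each pair of longitudinally corresponding pixels.
    return [w_a * pa + w_b * pb for pa, pb in zip(slit_a, slit_b)]
```

With equal distances the weights are 50/50; with a 2:1 distance ratio the closer slit contributes two-thirds of each blended pixel, matching the example above.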
- each newly interpolated slit image is associated with the applicable non-coinciding ray (step 714). The foregoing interpolation process is illustrated graphically in FIG. 11.
- the captured slit image 1116 associated with concentric mosaic 1118 is selected as being one of the slit images identified by a pair of the aforementioned parallel rays (such as ray 1012 of FIG. 10).
- the captured slit image 1120 associated with concentric mosaic 1122 is selected as being the other slit image identified by the pair of parallel rays (such as ray 1014 of FIG. 10).
- the two slit images 1116, 1120 are blended, as dictated by their applicable ratios, to form an interpolated slit image 1124.
- each longitudinally corresponding pixel in the two slit images being combined is blended together.
- the top-most pixel of one slit image is blended with the corresponding top-most pixel of the other slit image, and so on.
- each laterally corresponding pixel in the two slit images would be blended together.
- the first occurring pixel in each row of pixels in the two slit images would be blended together, and so on.
- some additional blending between adjacent pixels in the combined image may also be performed to improve the quality of the interpolated slit image.
- the next step 716 in the process of rendering a novel view is to combine all the slit images identified by the coinciding rays, as well as any interpolated slit images, in the proper order to create the novel view.
- the combining process is illustrated for the case where rays associated with captured slit images coincide with rays identifying slit images needed to construct the novel view.
- the slit image 920 of the concentric mosaic 922 is the slit image identified by the captured ray associated with point 916.
- the slit image 924 of the concentric mosaic 926 is the slit image identified by the captured ray associated with point 918. Therefore, these slit images 920, 924 can be used to construct part of the novel view 928. Specifically, slit image 930 and slit image 932 are placed in the appropriate locations in the novel view being rendered. In the case where the previously described linear interpolation process is required to produce a slit image for the novel view from adjacent captured slit images, the slit image placement step would be performed after the interpolated slit image is constructed. This is illustrated at the bottom of FIG.
- a novel view can be constructed having an overall lateral field of view ranging from that provided by a single slit image up to a full 360 degree view of the 3D scene from the viewpoint associated with the novel view.
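The side-by-side placement of slit images described above can be sketched as follows. In this illustration (an assumption of the sketch, not the patent's data layout), each slit is a one-pixel-wide column stored top to bottom, and the novel view is returned as a row-major 2D image:

```python
def render_novel_view(slits):
    """Assemble a novel view from an ordered list of slit images.

    Each slit is a column of pixels (top to bottom); placing the columns
    side by side, in order, yields the novel view as a list of rows.
    """
    height = len(slits[0])
    return [[slit[row] for slit in slits] for row in range(height)]
```

The input order of the slits is what realizes the "proper order" of step 716: the slits are sorted by the angular position of their indexing rays across the novel view's lateral field of view.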
- while a novel view rendered in accordance with the foregoing process can have a wide range of possible lateral fields of view, its longitudinal field of view (or height) is more limited. Essentially, the maximum longitudinal field of view of a novel view will be limited to that of the camera(s) that was used to capture the original slit images making up the concentric mosaics.
- slit images were captured from viewpoints lying on concentric circles in the circle plane and from a perspective directed along the plane. This produces slit images that are longitudinally centered about the circle plane.
- the novel views were then constructed from these slit images.
- the slit images used to construct novel views were not captured at the novel view's viewpoint. If the objects associated with the 3D scene are all distant, then the separation between the aforementioned viewpoints would not matter, and the rendered view would be accurate. However, if there are objects in the 3D scene that are close, the resulting rendered view could exhibit depth distortions at pixel locations lying above and below the center of the height of the novel view. The reason for this potential distortion can best be explained in reference to FIGS. 12A and 12B.
- FIG. 12A depicts a side view of the circle plane 1200 and a point (O) 1202 where the plane would intersect an object 1204 in the 3D scene.
- Three viewpoints (PCM, Pnew, P'new) 1206, 1208, 1210 are depicted on the circle plane 1200, each of which has three rays 1212, 1214, 1216 directed outward therefrom.
- One of the rays 1212 associated with each viewpoint is directed along the circle plane 1200.
- Another ray 1214 is directed above the plane 1200, and the third ray 1216 is directed below the plane.
- assume that the rays 1214 directed above the circle plane 1200 represent the direction to a top-most pixel (or a row of pixels) in a slit image capturing a portion of the object 1204, and that the rays 1216 directed below the circle plane represent the direction to a bottom-most pixel (or a row of pixels) in the slit image.
- the angle separating the upward directed rays 1214 from the downward directed rays 1216 represents the longitudinal (vertical in this case) field of view associated with the system used to capture the slit images. This angle would be the same at each viewpoint 1206, 1208, 1210. Further, referring now to both Figs.
- this scaling process would preferably begin with the step 1300 of ascertaining the average distance to the objects associated with a slit image. This distance is known for a synthetic scene, or can be recovered using conventional computer vision algorithms for a real scene. In step 1302, this ascertained distance value would be attributed to the slit image and stored for future use.
- the process described previously is performed to the point where a slit image of a concentric mosaic is identified for use in constructing the novel view.
- in step 1304, for each such slit image identified, the average distance between the objects of the 3D scene depicted in the slit image and the viewpoint of the novel view is determined.
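A minimal sketch of the depth-correction scaling implied by these steps follows. The scaling rule (apparent height inversely proportional to distance), the nearest-neighbor resampling, and all names are assumptions of this illustration, not the patent's prescribed method:

```python
def depth_correct(slit, d_captured, d_novel):
    """Rescale a slit image to compensate for a change in object distance.

    The apparent height of an object varies inversely with its distance,
    so a slit whose objects lie at average distance d_captured is scaled
    vertically by d_captured / d_novel, about the slit's vertical center,
    before being placed in the novel view.
    """
    scale = d_captured / d_novel
    height = len(slit)
    out = []
    for row in range(height):
        # Sample the captured slit about its vertical center
        # (nearest-neighbor resampling keeps the sketch short).
        src = (row - height / 2) / scale + height / 2
        src = min(height - 1, max(0, int(round(src))))
        out.append(slit[src])
    return out
```

When d_novel equals d_captured no scaling occurs; when the novel viewpoint is closer to the objects than the capture viewpoint, the slit is stretched, reducing the depth distortion described above.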
- Another possibility is to have multiple cameras (not shown) mounted around the periphery of the circular table. These multiple cameras simultaneously activate so that an instantaneous image of the entire scene is captured. When multiple cameras are used, it is unnecessary for the table to rotate. For purposes of illustration, only the video camera embodiment is described hereinafter.
- Depth correction is described above in relation to FIGS. 12 and 13.
- the aspect ratios of objects should remain the same after depth correction regardless of the distance between the object and the camera.
- the aspect ratio is a good approximation for frontal parallel objects and can be easily computed by identifying the boundary of the object in the image.
- the cross ratio of four co-linear points [Fau93] may be computed to verify if the perspective projection is preserved.
- although the camera or cameras are shown as being moved in a circle, this is not necessarily the case.
- the invention is also applicable to camera or cameras moving in other configurations, such as squares, ellipses, etc.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU48371/00A AU4837100A (en) | 1999-05-11 | 2000-05-10 | Rendering method using concentric mosaics images |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/309,753 | 1999-05-11 | ||
US09/309,753 US6750860B1 (en) | 1998-12-28 | 1999-05-11 | Rendering with concentric mosaics |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2000068890A1 true WO2000068890A1 (fr) | 2000-11-16 |
Family
ID=23199538
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2000/012839 WO2000068890A1 (fr) | 1999-05-11 | 2000-05-10 | Procede de rendu utilisant des images mosaiques concentriques |
Country Status (2)
Country | Link |
---|---|
AU (1) | AU4837100A (fr) |
WO (1) | WO2000068890A1 (fr) |