WO1997034213A2 - Computerized graphics systems - Google Patents
Computerized graphics systems
- Publication number
- WO1997034213A2 (PCT/IL1997/000094)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- scene
- generating
- auxiliary
- point
- Prior art date
Links
- 238000000034 method Methods 0.000 claims abstract description 219
- 238000013507 mapping Methods 0.000 claims abstract description 52
- 238000009877 rendering Methods 0.000 claims description 52
- 239000013598 vector Substances 0.000 claims description 52
- 239000000872 buffer Substances 0.000 claims description 44
- 238000002156 mixing Methods 0.000 claims description 8
- 230000008859 change Effects 0.000 claims description 7
- 239000002131 composite material Substances 0.000 abstract 1
- 230000009466 transformation Effects 0.000 description 17
- 238000004880 explosion Methods 0.000 description 16
- 239000011159 matrix material Substances 0.000 description 11
- 238000012360 testing method Methods 0.000 description 10
- 238000009825 accumulation Methods 0.000 description 9
- 230000006870 function Effects 0.000 description 8
- 238000007796 conventional method Methods 0.000 description 7
- 238000001125 extrusion Methods 0.000 description 6
- 238000012545 processing Methods 0.000 description 6
- 238000010586 diagram Methods 0.000 description 4
- 230000000694 effects Effects 0.000 description 4
- 230000008569 process Effects 0.000 description 4
- 230000000739 chaotic effect Effects 0.000 description 2
- 230000000717 retained effect Effects 0.000 description 2
- 238000000205 computational method Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 230000004069 differentiation Effects 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
Definitions
- the present invention relates to computerized graphics systems in general, and in particular to computerized graphics systems for producing images of three-dimensional scenes.
- BACKGROUND OF THE INVENTION Producing a representation of a three-dimensional scene using a computerized graphics system is well known in the art.
- a three-dimensional scene is stored in the computerized graphics system.
- Conditions of lighting and a point of view of an observer of the scene are specified.
- Some or all of reflections, diffuse reflections, specular reflections, shadows, and refractions due to objects in the scene are computed. Based on the computed results, an image of the three-dimensional scene is generated.
- Lighting effects are generally implemented by affecting the color of surfaces in the scene by employing what is referred to in the art as local and global shading models.
- In local shading models, the color of each object is determined by analyzing the object itself and the light sources alone.
- In global shading models, the color of an object is also affected by that of other objects.
- The major local shading models determine the color of an object as either constant, constant plus a correction for ambient light (which in practice produces constant color), or as a function of the angle between the normal to the surface and the light source direction.
- In Gouraud shading, such a color is computed for each vertex of the object, and the color is then interpolated to yield the color inside the object.
- In Phong shading, the normal to the surface is interpolated and the shading model is invoked for each pixel.
- Ray tracing typically begins with a single ray of light through every image element, or pixel, from the eye of the observer to an object in the scene, and then recursively traces other rays of light beginning at the intersection between the first ray of light and the object in the scene, spawning reflection, refraction and shadow rays.
- Another global shading model known in the prior art is "radiosity." In the radiosity method, correct simulation of energy transfer in the scene is attempted.
- A method known in the prior art to accelerate the computation of ray tracing is "beam tracing."
- In beam tracing, beams of rays are traced instead of individual light rays. These beams are defined by reflecting the view point along the plane of a polygon in the scene, generating bounding planes passing through the view point and the edges of the polygon, and by computing, in object space, the polygons visible inside a beam.
- Beam tracing has two primary disadvantages: 1) it is limited to reflections on planar polygons, with refractions being computed by a crude approximation, and curved objects not handled at all; and 2) all computations are done in object space, thus making the method highly inefficient and very difficult to implement.
- beam tracing cannot be implemented using standard graphics APIs such as OpenGL, the standard API for 3-D graphics suggested by Silicon Graphics and adopted by most computer manufacturers.
- the present invention seeks to provide an improved computerized graphics system for producing representations, typically pictures, of three-dimensional scenes.
- the present invention provides an improved system which overcomes the known disadvantages of the prior art as discussed above.
- the present invention is not limited to computing reflections on planar polygons and allows for computing lighting effects for curved objects.
- the present invention is an image-space method, utilizing the z- buffer method and stencil bit planes, and can be implemented using standard graphics APIs.
- the present invention also seeks to provide improved methods for receiving 3D information regarding a scene and generating therefrom a 2D view of the scene including at least one of shadow, refractions and reflections.
- a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first planar surface and a second surface reflected in the first planar surface, from 3D information regarding the 3D scene the method including generating an auxiliary 2D image of the reflection of the second surface in the first planar surface, wherein the step of generating includes reflecting the second surface through the plane, thereby to define a virtual second surface, and using a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from the point of view of the first planar surface, generating a main 2D image of the scene including a 2D image of the first surface which does not include the reflection of the second surface in the first surface, and compositing the auxiliary 2D image with the main 2D image.
- a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first non-planar surface and a second surface reflected in the first non-planar surface, from 3D information regarding the 3D scene including generating an auxiliary 2D image of the reflection of the second surface in the first non-planar surface, generating a main 2D image of the scene including an image of the first non-planar surface which does not include the reflection of the second surface in the first non-planar surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
- the compositing step includes employing a stencil bit plane to differentiate pixels covered by the first surface from pixels not covered by the first surface.
- a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first non-planar surface and at least one second surface wherein at least one of the second surfaces is reflected in each of the first non-planar surfaces including generating a main 2D image of the scene including an image of the first non-planar surface which does not include any reflections of any one of the second surfaces in the first non-planar surface, and for each pair of first and second surfaces characterized in that the second surface is reflected in the first surface generating an auxiliary 2D image of the reflection of the second surface in the first non-planar surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
- the step of generating an auxiliary 2D image of the reflection includes computing a virtual surface including a 3D approximation of the reflection of the second surface in the first surface, and rendering the virtual surface, thereby to generate the auxiliary image.
- the step of computing a virtual second surface includes selecting a set of second surface locations within the second surface to be reflected in the first surface, for each second surface location in the set determining a polygon P on an approximation of the first surface, wherein a real reflection point of the second surface location falls within the polygon, and computing a possible reflection point for at least one point of the polygon, and computing a location within the virtual surface corresponding to the second surface location based at least partly on the possible reflection points.
- a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface reflected in the first surface, from 3D information regarding the 3D scene including generating an auxiliary 2D image, at a first resolution, of the reflection of the second surface in the first surface, and generating a main 2D image, at a second resolution which differs from the first resolution, of the scene including an image of the first surface which does not include the reflection of the second surface in the first surface including texture mapping the auxiliary image onto the first surface thereby to change the resolution of the auxiliary image to the second resolution.
- a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first non-planar surfaces and at least one second surfaces wherein at least one of the second surfaces is reflected in each of the first non-planar surfaces the method including for each of the at least one first non-planar surfaces, generating a main 2D image of the scene including an image of the first non-planar surface which does not include any reflections of any one of the second surfaces in the first non-planar surface, and for each pair of first and second surfaces characterized in that the second surface is reflected in the first surface generating an auxiliary 2D image of the reflection of the second surface in the first non-planar surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
- a method for computing shadows generated on a planar polygon by a light source in an image of a three-dimensional polygonal scene from a point of view of an observer including computing at least one shadowed polygon within a plane, wherein at least one shadow is cast on the shadowed polygon, computing a stencil which includes visible pixels of the at least one shadowed polygon, rendering objects in the scene, including computing a scene including the at least one shadowed polygon using hidden-surface removal, to produce a preliminary image, computing at least one shadow polygon by projecting a scene polygon in the direction of the light source such that shadow generating portions of the scene polygon are generally projected above the plane and non-shadow generating portions of the scene polygon are generally projected below the plane, and rendering the at least one shadow polygon in a shadow color using the stencil and using hidden-surface removal to produce at least one rendered shadow polygon.
- the at least one first surfaces includes at least two first surfaces A and B and the step of generating a main 2D image for surface A includes generating a main 2D image of the scene including an image of the first non-planar surface A which does not include any reflections of any one of the second surfaces in the first non-planar surface A, and wherein the step of generating a main 2D image for surface B includes rendering surface B into the main 2D image of the scene generated for surface A.
- a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first surfaces and at least one second surfaces wherein at least one of the second surfaces is reflected in each of the first surfaces including for each of the at least one first surfaces, generating a main 2D image of the scene including an image of the first surface which does not include any reflections of any one of the second surfaces in the first surface, and for each pair of first and second surfaces characterized in that the second surface is reflected in the first surface generating an auxiliary 2D image, at a first resolution, of the reflection of the second surface in the first surface, and generating a main 2D image, at a second resolution which differs from the first resolution, of the scene including an image of the first surface which does not include the reflection of the second surface in the first surface including texture mapping the auxiliary image onto the first surface thereby to change the resolution of the auxiliary image to the second resolution.
- a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first planar surfaces and at least one second surfaces wherein at least one of the second surfaces is reflected in each of the first planar surfaces the method including for each of the at least one first planar surfaces, generating a main 2D image of the scene including an image of the first planar surface which does not include any reflections of any one of the second surfaces in the first planar surface, and for each pair of first planar surfaces and second surfaces characterized in that the second surface is reflected in the first planar surface generating an auxiliary 2D image of the reflection of the second surface in the first surface, wherein the step of generating includes reflecting the second surface through a plane which includes the first plane in the pair, thereby to define a virtual second surface, and using a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from the point of view of the first planar surface.
- the at least one second surface includes a plurality of second surfaces and wherein the step of generating an auxiliary 2D image for each pair of first and second surfaces includes generating a single auxiliary 2D image which includes reflections of all of the plurality of second surfaces in the first non-planar surface.
- the steps of generating the main image and compositing include texture mapping the auxiliary image onto the first surface.
- the first non-planar surface includes a polygonal mesh.
- the compositing step includes the step of alpha-blending the auxiliary and main images.
- the step of generating an auxiliary 2D image includes generating an auxiliary 2D image of the reflections of a set of second surfaces including a plurality of second surfaces, in the first surface, thereby to generate a set of reflected second surfaces, and including rendering of the set of second surfaces so as to remove any hidden portions of the reflected second surfaces.
- the step of rendering includes using a z-buffer to remove any hidden portions of the reflected second surfaces.
- the step of generating an auxiliary 2D image includes recursively invoking the same method for different first surfaces in the scene.
- a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface refracted in the first surface, from 3D information regarding the 3D scene including generating an auxiliary 2D image of the refraction of the second surface in the first surface, generating a main 2D image of the scene including an image of the first surface which does not include the refraction of the second surface in the first surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
- the compositing step includes employing a stencil bit plane to differentiate pixels covered by the first surface from pixels not covered by the first surface.
- a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first surfaces and at least one second surfaces wherein at least one of the second surfaces is refracted in each of the first surfaces the method including generating a main 2D image of the scene including an image of the first surface which does not include any refractions of any one of the second surfaces in the first surface, and for each pair of first and second surfaces characterized in that the second surface is refracted in the first surface generating an auxiliary 2D image of the refraction of the second surface in the first surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
- the step of generating an auxiliary 2D image of the refraction includes computing a virtual surface including a 3D approximation of the refraction of the second surface in the first surface, and rendering the virtual surface, thereby to generate the auxiliary image.
- the step of computing a virtual second surface includes selecting a set of second surface locations within the second surface to be refracted in the first surface, for each second surface location in the set determining a polygon on an approximation of the first surface, wherein a real refraction point of the second surface location falls within the polygon, and computing a possible refraction point for at least one point of the polygon, and computing a location within the virtual surface corresponding to the second surface location based at least partly on the possible refraction point.
- a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface refracted in the first surface, from 3D information regarding the 3D scene including generating an auxiliary 2D image, at a first resolution, of the refraction of the second surface in the first surface, and generating a main 2D image, at a second resolution which differs from the first resolution, of the scene including an image of the first surface which does not include the refraction of the second surface in the first surface including texture mapping the auxiliary image onto the first surface thereby to change the resolution of the auxiliary image to the second resolution.
- a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first surfaces and at least one second surfaces wherein at least one of the second surfaces is refracted in each of the first surfaces including for each of the at least one first surfaces, generating a main 2D image of the scene including an image of the first surface which does not include any refractions of any one of the second surfaces in the first surface, and for each pair of first and second surfaces characterized in that the second surface is refracted in the first surface generating an auxiliary 2D image of the refraction of the second surface in the first surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
- a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first planar surface and a second surface refracted in the first planar surface, from 3D information regarding the 3D scene including generating an auxiliary 2D image of the refraction of the second surface in the first planar surface by refracting the second surface through the first planar surface, thereby to define a virtual second surface, and generating a main 2D image of the scene including a 2D image of the first planar surface which does not include the refraction of the second surface in the first planar surface, wherein the step of generating includes texture mapping the auxiliary image onto the first surface.
- the method also includes the step of using a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from the point of view of the first planar surface.
- the at least one first surface includes at least two first surfaces A and B and the step of generating a main 2D image for surface A includes generating a main 2D image of the scene including an image of the first surface A which does not include any refractions of any one of the second surfaces in the first surface A, and wherein the step of generating a main 2D image for surface B includes rendering surface B into the main 2D image of the scene generated for surface A.
- the step of generating includes texture mapping the auxiliary image onto the first surface.
- a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first planar surfaces and at least one second surfaces wherein at least one of the second surfaces is refracted in each of the first planar surfaces the method including for each of the at least one first planar surfaces, generating a main 2D image of the scene including an image of the first planar surface which does not include any refractions of any one of the second surfaces in the first planar surface, and for each pair of first planar surfaces and second surfaces characterized in that the second surface is refracted in the first planar surface generating an auxiliary 2D image of the refraction of the second surface in the first planar surface, wherein the step of generating includes refracting the second surface through a plane which includes the first planar surface in the pair, thereby to define a virtual second surface.
- the at least one second surfaces includes a plurality of second surfaces and wherein the step of generating an auxiliary 2D image for each pair of first and second surfaces includes generating a single auxiliary 2D image which includes refractions of all of the plurality of second surfaces in the first image. Additionally in accordance with a preferred embodiment of the present invention, the steps of generating the main image and compositing include texture mapping the auxiliary image onto the first surface.
- the first surface includes a polygonal mesh.
- the compositing step includes the step of alpha-blending the auxiliary and main images.
- the step of generating an auxiliary 2D image includes generating an auxiliary 2D image of the refractions of a set of second surfaces including a plurality of second surfaces, in the first surface, thereby to generate a set of refracted second surfaces, and including rendering of the set of second surfaces so as to remove any hidden portions of the refracted second surfaces.
- the step of rendering includes using a z-buffer to remove any hidden portions of the refracted second surfaces.
- the step of generating an auxiliary 2D image includes recursively invoking the same method for different first surfaces in the scene.
- the method also includes generating an additional auxiliary 2D image of the reflection of the second surface in the first surface, wherein the step of generating a main 2D image includes generating a main 2D image of the scene including an image of the first surface which does not include both of the reflection of the second surface in the first surface and the refraction of the second surface in the first surface, and wherein the compositing step includes compositing both auxiliary 2D images with the main 2D image including mapping both auxiliary 2D images onto the image of the first surface in the main 2D image.
- the method also includes generating an additional auxiliary 2D image of the shadow of the second surface on the first surface, wherein the step of generating a main 2D image includes generating a main 2D image of the scene including an image of the first surface which does not include both of the shadow of the second surface on the first surface and the refraction of the second surface in the first surface, and wherein the compositing step includes compositing both auxiliary 2D images with the main 2D image including mapping both auxiliary 2D images onto the image of the first surface in the main 2D image.
- the method also includes generating an additional auxiliary 2D image of the shadow of the second surface on the first surface, wherein the step of generating a main 2D image includes generating a main 2D image of the scene including an image of the first surface which does not include both of the shadow of the second surface on the first surface and the reflection of the second surface in the first surface, and wherein the compositing step includes compositing both auxiliary 2D images with the main 2D image including mapping both auxiliary 2D images onto the image of the first surface in the main 2D image.
- a method for generating a 2D image of a 3D scene from a point of view using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces, at least one of which is reflected in each of the at least one objects, wherein the 3D information regarding the 3D scene includes, for each of the at least one objects, at least one polygons which approximate the object, each of the at least one polygons having at least 3 vertices, the method including, for each of the at least one objects generating an environment map for a selected point, for each individual vertex from among the object's polygons' vertices, computing an ordered pair of texture coordinates within the environment map, and rendering the object's polygons using the environment map as a texture map, wherein the step of generating an environment map includes rendering at least one of the surfaces on each of four faces of a four-faced pyramid defined about the selected point.
- the method also includes mapping the faces of the pyramid, on each of which at least one of the surfaces are rendered, onto a 2D circle, thereby to generate a modified environment map.
- the step of computing an ordered pair of texture coordinates includes computing a reflection ray based on the point of view, the individual vertex and a normal to the object at the individual vertex, computing an intersection point of the ray with an object centered at the selected point, and computing a pair of texture coordinates as a function of a vector extending from the selected point to the intersection point.
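- By way of illustration, the following Python sketch computes a reflection ray and an ordered pair of texture coordinates for a single vertex. It assumes, since the text above does not fix these details, that the "object centered at the selected point" is a sphere of a given radius and that the texture coordinates are obtained from the resulting direction vector by a simple latitude/longitude mapping; the function name and parameters are hypothetical.

```python
import numpy as np

def reflection_texcoords(view_point, vertex, normal, selected_point, radius=1.0):
    """Sketch of the per-vertex texture-coordinate computation described above.

    Assumptions (not mandated by the text): the object centered at the selected
    point is a sphere of the given radius, and the (u, v) pair is obtained from
    the direction of the intersection point by a latitude/longitude mapping.
    """
    view_point, vertex = np.asarray(view_point, float), np.asarray(vertex, float)
    normal, selected_point = np.asarray(normal, float), np.asarray(selected_point, float)

    n = normal / np.linalg.norm(normal)
    incident = vertex - view_point
    incident /= np.linalg.norm(incident)
    reflected = incident - 2.0 * np.dot(incident, n) * n     # reflection ray direction

    # Intersect the ray vertex + t*reflected with the sphere around the selected point.
    oc = vertex - selected_point
    b = np.dot(oc, reflected)
    c = np.dot(oc, oc) - radius * radius
    t = -b + np.sqrt(max(b * b - c, 0.0))                    # far intersection (clamped)
    direction = vertex + t * reflected - selected_point
    direction /= np.linalg.norm(direction)                   # vector from selected point to hit

    u = 0.5 + np.arctan2(direction[2], direction[0]) / (2.0 * np.pi)
    v = 0.5 - np.arcsin(np.clip(direction[1], -1.0, 1.0)) / np.pi
    return u, v
```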
- the method also includes the step of generating a depth map of the surfaces as seen from the selected point, the depth map corresponding to the environment map, and wherein the combining step includes assigning a weight to each individual one of the directions based on a depth which the depth map assigns to the individual one of the directions.
- the selected point includes a user-selected location in the scene.
- the selected point, for which an environment map is generated for an individual one of the at least one objects includes a central location of the individual object.
- apparatus for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first non-planar surface and a second surface reflected in the first non-planar surface, from 3D information regarding the 3D scene including auxiliary 2D image apparatus which generates the reflection of the second surface in the first non-planar surface, main 2D image apparatus which generates the scene including an image of the first non-planar surface which does not include the reflection of the second surface in the first non-planar surface, and compositor apparatus which combines the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
- apparatus for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first planar surface and a second surface reflected in the first planar surface, from 3D information regarding the 3D scene including auxiliary 2D image apparatus which generates the reflection of the second surface in the first planar surface
- the auxiliary 2D image apparatus includes reflecting apparatus which reflects the second surface through the plane, thereby defining a virtual second surface, and removal apparatus which uses a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from the point of view of the first planar surface
- main 2D image apparatus for generating the scene including a 2D image of the first surface which does not include the reflection of the second surface in the first surface includes reflecting apparatus which reflects the second surface through the plane, thereby defining a virtual second surface
- removal apparatus which uses a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from the point of view of the first planar surface
- a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface reflected in the first surface, from 3D information regarding the 3D scene including recursively performing the following steps generating an auxiliary 2D image of the reflection of the second surface in the first surface, generating a main 2D image of the scene including an image of the first surface which does not include the reflection of the second surface in the first surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
- the total number of vertices defined by the object approximating polygons may be less than 3 times the number of object approximating polygons since two or more polygons may have one or more vertices in common.
- the number of polygons used to approximate different objects need not be the same.
- the "selected point”, also termed herein the "object point”, for which an environment map is generated in the course of rendering an object's polygons, may be interior to that object, but alternatively may be on the surface of that object or even exterior to that object.
- Fig. 1A is a simplified block diagram of a computerized graphics system constructed and operative in accordance with a preferred embodiment of the present invention;
- Fig. 1B is a simplified pictorial illustration of an example of a scene as operated upon in accordance with a preferred embodiment of the present invention;
- Figs. 2A and 2B, taken together, comprise a simplified flowchart illustration of a preferred method of operation of the system of Fig. 1A;
- Fig. 3 is a simplified flowchart illustration of a preferred implementation of step 110 of Fig. 2A;
- Fig. 4 is a simplified flowchart illustration of a preferred implementation of step 130 of Fig. 2A;
- Fig. 5 is a simplified flowchart illustration of a preferred implementation of steps 190 and 200 of Fig. 2B;
- Fig. 6 is a simplified flowchart illustration of a preferred method of implementation of step 210 of Fig. 2B;
- Fig. 7 is a simplified flowchart illustration of a preferred implementation of step 230 of Fig. 2B;
- Fig. 8 is a simplified flowchart illustration of a preferred implementation of step 220 of Fig. 2B;
- Fig. 9 is a simplified flowchart illustration of a preferred implementation of step 390 of Fig. 7;
- Fig. 10 is a simplified flowchart illustration of a preferred implementation of step 460 of Fig. 9;
- Figs. 11A and 11B, taken together, are a simplified flowchart illustration of a preferred implementation of step 480 of Fig. 10;
- Fig. 12 is a simplified flowchart illustration of a preferred implementation of step 490 of Fig. 10;
- Fig. 13 is a simplified flowchart illustration of a preferred implementation of step 500 of Fig. 10;
- Fig. 14 is a simplified flowchart illustration of a preferred implementation of step 510 of Fig. 10;
- Fig. 15 is a simplified flowchart illustration of a preferred implementation of steps 520 and 530 of Fig. 10;
- Fig. 18 is a pictorial illustration of a scene including four spheres each having a shadow rendered on the floor plane of the scene;
- Fig. 19 is a pictorial illustration of one of the spheres of Fig. 18, a point of view from which the sphere is being observed, and a four-faced pyramid surrounding the sphere;
- Figs. 20A - 20D are pictorial illustrations of the scene of Fig. 18 rendered onto each of the four faces, respectively, of the pyramid of Fig. 19
- Fig. 21 is a pictorial illustration of the pyramid of Fig. 19 unfolded onto a planar surface, where the scene of Fig. 18 is rendered onto each of the faces of the pyramid;
- Fig. 22 is a pictorial illustration of a tessellation of the front face of the pyramid;
- Fig. 23 is a pictorial illustration of the tessellation of Fig. 22 mapped into a 2D circle
- Fig. 24 is a circular environment map including a mapping, as illustrated in Figs. 22 - 23, of each of the faces of the pyramid onto the 2D circle of Fig. 23;
- Fig. 25 is a rendering of the entire scene of Fig. 18 including a reflection of the entire scene in the sphere of Fig. 19, the reflection having been generated by texture-mapping the environment map of Fig. 24 onto the sphere 1510;
- Fig. 26 is a simplified block diagram of a system for generating a 2D image, including reflections, of a 3D scene from a point of view, using 3D information regarding the 3D scene,
- Fig. 27 is a simplified flowchart illustration of a preferred method, constructed and operative in accordance with a preferred embodiment of the present invention, for generating a 2D image of a 3D scene using a pyramidal technique to compute an environment map
- Fig. 28 is a simplified flowchart illustration of a first preferred method, constructed and operative in accordance with a preferred embodiment of the present invention, for generating texture coordinates useful in generating a 2D image of a 3D scene either according to the method of Fig. 27 or according to conventional methods;
- Fig. 29 is a simplified flowchart illustration of a second preferred method, constructed and operative in accordance with a preferred embodiment of the present invention, for generating texture coordinates useful in generating a 2D image of a 3D scene either according to the method of Fig. 27 or according to conventional methods;
- Fig. 30 is a pictorial illustration of the relationship between geometric entities participating in the methods shown and described herein.
- Fig. 1A is a simplified block diagram of a computerized graphics system constructed and operative in accordance with a preferred embodiment of the present invention. It is appreciated that the apparatus of Fig. 1A may be implemented partly in computer hardware and partly in software, or entirely in custom hardware. Preferably, the apparatus of the present invention is implemented in suitably programmed computer hardware.
- the computer hardware may comprise, for example, a Silicon Graphics Indigo-2 Impact Graphics workstation, commercially available from Silicon Graphics, Mountain View, California, USA.
- the system of Fig. 1A comprises a control unit 10, receiving an input comprising a scene, as described more fully below with reference to Figs. 2A and 2B.
- the input to the control unit 10 typically comprises 3-D coordinates of all objects and light sources in the scene, color characteristics for all objects in the scene, textures, and viewing parameters.
- the system of Fig. 1A also comprises a scan-conversion unit 15, receiving input from the control unit 10.
- the input received by the scan-conversion unit 15 typically comprises 2-D coordinates of all polygons in the scene and depth information, and optionally also comprises color information, which color information may have been computed by the control unit 10 based on lighting information.
- the operation of the scan-conversion unit 15 is described more fully below with reference to Figs. 2A and 2B.
- the system of Fig. 1A also comprises a reflections, refractions and shadows unit (RRS) 20.
- the RRS 20 receives an input from the control unit 10 indicating what operation is to be carried out and, based on the received input, provides a control signal to a 3-D transformation and lighting unit (3-D unit) 25.
- the RRS 20 also provides a control signal to a stencils unit 30 and a z-buffer 35.
- the z-buffer 35 may comprise a plurality of z-buffers.
- the stencils unit 30 and the z-buffer 35 are also connected to the scan conversion unit 15.
- the system of Fig. 1A also comprises an image buffer 40, an optional accumulation buffer 45, an alpha bit planes unit 50, and a texture bit planes unit 55.
- the optional accumulation buffer 45 is used as a working area, preferably of the same size as the image buffer 40.
- the optional accumulation buffer 45 is referred to herein primarily for purposes of clarity of description; it is appreciated that the present invention need not include an accumulation buffer 45.
- the system of Fig. 1A also comprises an image output unit 60, receiving input from the image buffer 40.
- the output unit 60 typically produces the image contained in the image buffer 40 on a device such as a video monitor, printer, plotter, or other output device as is well known in the art.
- Each of the units 15, 30, 35, 40, 45, 50, and 55 is preferably connected to each of the other units 15, 30, 35, 40, 45, 50 and 55 with a two-way connection.
- the term "stencil” or “stencil bit plane,” as used herein, refers generally to an image used to mask another image; that is, to specify a sub-image comprising a subset of the pixels of an image.
- a stencil is a single bit image at the same resolution as that of the generated image, comprising a single bit per pixel.
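- For illustration only, a stencil in this sense can be pictured as a one-bit-per-pixel boolean mask at the image resolution; the following sketch (array sizes are arbitrary) uses numpy arrays as stand-ins for the bit planes and selects the sub-image defined by such a mask.

```python
import numpy as np

# A stencil is a one-bit-per-pixel image at the resolution of the generated image.
height, width = 480, 640
stencil = np.zeros((height, width), dtype=bool)
stencil[100:200, 150:300] = True            # pixels belonging to the sub-image

image = np.zeros((height, width, 3), dtype=np.float32)
image[stencil] = (1.0, 0.0, 0.0)            # operate only on the stenciled pixels
```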
- Fig. 1B is a simplified pictorial illustration of an example of a scene as operated upon in accordance with various preferred embodiments of the present invention.
- the scene of Fig. 1B is useful in understanding terms used in the present specification, as it depicts reflections, refractions, shadows, and other aspects of a scene as may be produced by various steps in various embodiments described below.
- the scene of Fig. 1B comprises a polygon 70 on which shadow is cast, a shadow of a polygonal object 74 as cast on polygon 70, a polygonal object 74, a reflecting surface 75, a refracting surface 76, a refracting beam 78, a reflection 80, and a light source 82.
- Figs. 2A and 2B, taken together, comprise a simplified flowchart illustration of a preferred method of operation of the system of Fig. 1A. It is appreciated that the method of Figs. 2A and 2B, as well as other methods described below, need not necessarily be performed in a particular order, and that in fact, for reasons of implementation, a particular implementation of the methods may be performed in a different order than another particular implementation.
- the method of Figs. 2A and 2B preferably includes the following steps:
- a scene to be processed is input by the control unit 10, along with viewing parameters which describe how the scene is to be viewed (step 100).
- the scene typically comprises both information about surfaces in the scene and lighting parameters.
- the information about surfaces typically includes both geometry and color characteristics of the surfaces, typically including the following: base color; diffuse reflection coefficients; specular reflection coefficients; refraction indices; and other characteristics.
- the specular reflection coefficients are particularly useful in performing the methods of the present invention.
- the specular reflection coefficients are usually described in RGBA form, as is well known in the art and as described in chapter 13 of Foley et al., referred to above.
- the lighting parameters typically define the location, orientation, and intensity of lighting in the scene.
- Viewing parameters typically comprise point of view parameters, which define the point of view of an observer of the scene, including location of the point of view, orientation of view, and field of view. Processing by the method of Figs. 2A and 2B is in accordance with the viewing parameters.
- A virtual world W, a stencil S, and a rendered surface RO are initialized (step 110).
- a preferred implementation of step 110 is described in more detail below with reference to Fig. 3.
- a stenciled portion SZ of the z-buffer, defined according to the stencil S, is initialized, typically by setting all values within SZ to the largest possible number, also referred to herein as setting to infinity (step 120).
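- A minimal sketch of step 120, assuming the z-buffer and stencil are modelled as numpy arrays (the actual buffers would normally live in the graphics hardware or API):

```python
import numpy as np

height, width = 480, 640
z_buffer = np.full((height, width), np.inf, dtype=np.float32)   # whole buffer at "infinity"

# Step 120: reset only the stenciled portion SZ for the current recursion level.
stencil_S = np.ones((height, width), dtype=bool)                # at the top level, S covers the image
z_buffer[stencil_S] = np.inf
```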
- All surfaces in W are rendered into a stenciled portion SB of the accumulation buffer 45 and into SZ, including computation of a color C for each pixel point P through which each surface O is visible (step 130).
- color includes information on color in the usual sense as well as on reflectance and on refraction characteristics.
- rendering may include the application of a clipping plane, which is a plane defining a portion of the scene, on one side of the plane, which is to be clipped or removed from the image. Methods for applying a clipping plane are well known in the art.
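- The core of a clipping-plane test is a signed-distance check against the plane's implicit equation; the sketch below shows only that per-point test (real renderers clip polygons edge by edge rather than discarding whole points), with an arbitrary choice of which half-space is removed.

```python
def clipped(point, plane):
    """Return True if `point` lies on the side of `plane` that is to be removed.

    `plane` is (a, b, c, d) for the implicit equation a*x + b*y + c*z + d = 0;
    here the positive half-space is (arbitrarily) taken as the clipped side.
    """
    a, b, c, d = plane
    return a * point[0] + b * point[1] + c * point[2] + d > 0.0
```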
- the stenciled portion SB of the accumulation buffer 45 is defined by the stencil S.
- a stenciled portion SI of the image buffer 40 is similarly defined.
- a preferred implementation of step 130 is described in more detail below with reference to Fig. 4.
- the current color of each pixel in the stenciled portion SB of the accumulation buffer 45 is replaced with a color computed as a weighted average of the current color of the pixel and of the corresponding pixel in the accumulation buffer AB (step 140).
- This process is known as alpha blending.
- the weights of the weighted average are determined based, at least in part, on characteristics of the rendered surface RO such as, for example, reflectance and opacity of RO. Other factors, such as level of recursion of the method may also have an effect on the weights.
- Typical methods, known in the art, for determining the weights are described in Foley et al., referred to above, in chapter 16.
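- In code, step 140 reduces to a per-pixel weighted average; the sketch below uses a single scalar weight, which is one plausible simplification rather than the exact weighting described in Foley et al. or prescribed above.

```python
import numpy as np

def alpha_blend(image_pixels, aux_pixels, weight):
    """Blend the stenciled image pixels with the corresponding auxiliary
    (accumulation-buffer) pixels: result = weight*aux + (1 - weight)*image.
    The weight would typically depend on the reflectance/opacity of the
    rendered surface RO and possibly on the recursion level."""
    return weight * np.asarray(aux_pixels) + (1.0 - weight) * np.asarray(image_pixels)
```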
- recursion criteria may be defined in order to determine when a given level of recursion is to be terminated. Such criteria may include: time spent on processing a particular level of the method; amount of change in the rendered surface RO computed in the current level of iteration; reaching a maximum recursion level, such as, for example, 4 levels of recursion; or screen space occupied by the surface being processed being smaller than a predetermined minimum size such as, for example 10 pixels. Examples of a scene after performing one or more recursions of steps 120 through 140 are described in more detail below with reference to Figs. 17A, 17B, 17D, 17E, 17G, and 17H.
- Recursion criteria are checked to see whether any recursion-ending criteria are satisfied (step 150). If no recursion-ending criteria are satisfied, processing continues with step 190. If some recursion criteria are satisfied, the current level of recursion is ended (step 160). As each recursion level is ended, a check is made as to whether processing is presently at the top recursion level (step 170). If so, the method of Figs. 2A and 2B terminates (step 180), and the present image in the image buffer 40 is output by the control unit 10, being the result of the method of Figs. 2A and 2B. Otherwise, processing continues at the next higher recursion level (step 185).
- a surface of interest is generally any visible surface which has an effect on computation of the rendered image.
- a surface of interest includes any visible surface for which we are interested in at least one of the following: computing shadows falling on the object; computing reflections from the object; computing refraction through the object. Whether or not a surface is of interest may be determined using methods well-known in the art, based partly on the color as described above.
- The next surface of interest O, found in step 190, is selected for processing (step 200).
- a preferred implementation of the method of steps 190 and 200 is described in more detail below with reference to Fig. 5.
- A new stencil S' is prepared (step 210).
- a preferred implementation of step 210 is described in more detail below with reference to Fig. 6.
- Step 210 may, in principle, be carried out at the same time as step 130, described above. Carrying out step 210 at the same time as step 130 may be preferable in order to increase the efficiency of the method of Figs. 2A and 2B.
- Shadows falling on surface O are computed if desired (step 220). Methods of shadow computation are well known in the art and are described, for example, in Reeves, Salesin, and Cook, referred to above. A preferred implementation of step 220, believed to be possibly preferable to those known in the art, is described in more detail below with reference to Fig. 8.
- If computation of reflections and/or refractions from O is desired, a new virtual world W' is computed, and the steps beginning at step 120 are then performed recursively, using S', W' and O instead of S, W, and RO (steps 230 and 240).
- W' is computed based in part on the surface O and the fact that reflections and/or refractions are to be computed.
- An example of a scene after performing all recursions of steps 120 through 240 is described in more detail below with reference to Fig. 17I.
- Fig. 3 is a simplified flowchart illustration of a preferred implementation of step 110 of Fig. 2A.
- the method of Fig. 3 preferably comprises the following steps:
- the virtual world W is set to contain all surfaces in the scene (step 250). Stencil S is initialized to cover the entire image (step 260).
- the rendered surface RO is set to be the background, that is, an infinite plane at a very great distance, further away from the point of view than all surfaces, (step 270)
- Fig. 4 is a simplified flowchart illustration of a preferred implementation of step 130 of Fig. 2A.
- the method of Fig. 4 preferably includes the following steps:
- a color C is computed for each pixel through which O is visible (step 290).
- the color computation takes the following into account: the color of O, optionally using a texture map; the color of ambient light; the normal to O at P, optionally perturbed using a bump map; the location and orientation of light sources; and other factors, as are well known in the art.
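- A minimal local-shading sketch in the spirit of step 290, using only an ambient term and a diffuse term that depends on the angle between the normal and the light direction (texture maps, bump maps and specular terms are omitted, and the function name is illustrative):

```python
import numpy as np

def shade_point(base_color, ambient, normal, light_dir, light_color):
    """Very small local shading model: ambient term plus a diffuse term that
    depends on the angle between the surface normal and the light direction."""
    n = np.asarray(normal, float) / np.linalg.norm(normal)
    l = np.asarray(light_dir, float) / np.linalg.norm(light_dir)
    diffuse = max(np.dot(n, l), 0.0)
    color = ambient * np.asarray(base_color, float) + diffuse * np.asarray(light_color, float) * np.asarray(base_color, float)
    return np.clip(color, 0.0, 1.0)
```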
- Fig. 5 is a simplified flowchart illustration of a preferred implementation of steps 190 and 200 of Fig. 2B.
- the method of Fig. 5 preferably includes the following steps:
- O is considered a surface of interest (step 330)
- Fig. 6 is a simplified flowchart illustration of a preferred implementation of step 210 of Fig. 2B.
- the method of Fig. 6 preferably includes the following steps:
- Pixels covered by O and through which O is visible are determined, using SZ (step 350). Typically, this determination is made using a z-buffer operating on the stenciled portion SZ. Computational methods using a z-buffer are well-known in the art and are described, for example, in Foley et al, referred to above, at pages 668-672.
- a new stencil S' is then computed as the intersection of stencil S and the pixels determined in step 350 (step 360).
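- A hedged sketch of steps 350 and 360, assuming per-pixel depth images are available for the surface O and for the scene already rendered into SZ: a pixel belongs to the new stencil S' when it is in S and O is visible there.

```python
import numpy as np

def new_stencil(stencil_S, depth_of_O, scene_z_buffer, eps=1e-5):
    """Step 350: a pixel is covered by O and O is visible there when O's depth
    is finite and not farther than the depth already stored in the z-buffer.
    Step 360: S' is the intersection of that pixel set with stencil S."""
    visible_O = np.isfinite(depth_of_O) & (depth_of_O <= scene_z_buffer + eps)
    return stencil_S & visible_O
```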
- Fig. 7 is a simplified flowchart illustration of a preferred implementation of step 230 of Fig. 2B.
- the method of Fig. 7 preferably includes the following steps:
- a surface O is determined (step 370).
- the viewer's location and orientation are determined from the initial viewing parameters (step 380).
- A new virtual world W' is computed as a transformation of the scene, based on the surface of O (step 390).
- a preferred implementation of step 390 is described in more detail below with reference to Fig. 9.
- the transformation depends on the type of surface O. If, for example, the surface O is planar polygonal, W' contains all surfaces in the scene reflected and/or refracted through the plane on which the surface O lies. In the case of reflection, W' does not contain the reflection of surfaces that lie on the side of the surface of O away from the viewer. In the case of refraction, W' does not contain the refraction of surfaces that lie on the side of the surface of O nearest the viewer. In the case of a planar polygonal O, each point in W is reflected through the plane in a uniform way; that is, the same reflection transformation is used for all points.
- Briefly, and as described below with reference to Figs. 10 - 16, in the case of a curved surface O2, a set of sample points is generated on O2. The sample points, when connected, form a subdivision of O2 into polygons. The subdivision is also known as a tessellation. For each point Q in the scene W, it is determined, as described below, in which tessellation polygon T the point Q is reflected and/or refracted.
- Q is refracted by taking refraction coefficients into account and applying Snell's law as described below with reference to Fig. 11B.
- a weighted average of the reflected and/or refracted points is then computed, and the final location of Q in the reflected and/or refracted world is taken to be the weighted average.
- the weights for the weighted average are determined as a function of the proximity of Q to the rays reflected and/or refracted from the vertices.
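- One plausible weighting scheme (inverse distance from Q to each vertex ray; the text above does not commit to a particular function) is sketched below; the function names are illustrative.

```python
import numpy as np

def point_to_ray_distance(q, origin, direction):
    """Distance from point q to the ray starting at `origin` along `direction`."""
    d = np.asarray(direction, float) / np.linalg.norm(direction)
    t = max(np.dot(np.asarray(q, float) - origin, d), 0.0)
    return np.linalg.norm(np.asarray(q, float) - (np.asarray(origin, float) + t * d))

def blend_reflections(q, vertex_rays, reflected_points, eps=1e-8):
    """Weighted average of candidate reflected/refracted images of Q, weighted
    by proximity of Q to the ray reflected/refracted from each tessellation
    vertex.  Inverse-distance weights are an assumption, used here only to
    illustrate the idea."""
    weights = np.array([1.0 / (point_to_ray_distance(q, o, d) + eps)
                        for (o, d) in vertex_rays])
    weights /= weights.sum()
    return (weights[:, None] * np.asarray(reflected_points, float)).sum(axis=0)
```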
- Fig. 8 is a simplified flowchart illustration of a preferred implementation of step 220 of Fig. 2B.
- the method of Fig. 8 is described for the case of a single shadowed polygon.
- the term "shadowed polygon,” as used herein, refers generally to a polygon on which shadow falls.
- the term “shadow polygon,” as used herein, refers generally to a representation of a shadow that is used to create shadowed polygons. It is appreciated that the method of Fig. 8 may be applied repeatedly to a plurality of shadowed polygons in the scene, and that shadow polygons may be used to approximate non-polygonal objects in the scene.
- the method of Fig. 8 preferably includes the following steps:
- Shadow polygons are computed (step 415).
- a preferred method for computing shadow polygons is as follows:
- Let G be the row vector (a, b, c, d) of the coefficients of the implicit equation of the plane in which the polygon PO, on which shadow is to fall, lies, and let L be a row vector giving the position of the light source in homogeneous coordinates.
- The shadow projection matrix is then M = (L dot G - epsilon) I - GT times L, where: dot indicates dot product; 0 < epsilon << 1; I is the 4 x 4 identity matrix; GT is the transpose of G; and times indicates matrix multiplication.
- The projection of a 3-D point Q, written as a homogeneous row vector, is given by QM.
- Taking epsilon > 0 maintains the spatial order between polygon PO and the shadow polygons, resulting in the removal of shadows of objects that are beyond polygon PO during the hidden-surface removal stage of the rendering.
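- The projection matrix above can be built directly from G and L; the following sketch follows the formula M = (L dot G - epsilon) I - GT times L with row vectors in homogeneous coordinates, and the epsilon value shown is an arbitrary small constant.

```python
import numpy as np

def shadow_matrix(G, L, epsilon=1e-4):
    """M = (L . G - epsilon) * I - G^T L, with G the plane row vector (a, b, c, d)
    and L the light-source position as a homogeneous row vector.  A 3-D point Q,
    written as a homogeneous row vector, is projected onto the plane by Q @ M."""
    G = np.asarray(G, dtype=float)
    L = np.asarray(L, dtype=float)
    return (np.dot(L, G) - epsilon) * np.eye(4) - np.outer(G, L)

def project_point(Q, M):
    q = np.asarray(Q, dtype=float) @ M
    return q / q[3]                      # back to affine coordinates

# Example: floor plane y = 0 (G = (0, 1, 0, 0)), point light at (2, 5, 1).
M = shadow_matrix((0.0, 1.0, 0.0, 0.0), (2.0, 5.0, 1.0, 1.0))
print(project_point((1.0, 2.0, 0.0, 1.0), M))   # shadow of the point, on (just below) the floor
```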
- a stencil, including the visible pixels of polygon PO is computed (step 420).
- Objects in the scene are rendered, preferably by computing a z-buffer of the scene including the shadowed polygon, preferably with hidden surfaces removed (step 425).
- the shadow polygon is then rendered in a shadow color, preferably by using the stencil and the z-buffer computed in step 425, to produce a rendered shadow polygon
- Preferably, alpha blending is used to blend the generated shadows with the image generated in step 425; the weights used are determined by the desired intensity of the shadows.
- the rendered shadow polygon is then blended with the preliminary shadowed image to produce a final shadowed image (step 435). Examples of a scene after performing step 435 are described in more detail below with reference to Figs. 17Q and 17R.
- Fig. 9 is a simplified flowchart illustration of a preferred implementation of step 390 of Fig. 7.
- the method of Fig. 9 preferably comprises the following steps: Each surface in the scene is approximated with a polygonal mesh (step 450).
- a polygonal mesh typically comprises: a set of vertices; a set of edges connecting the vertices and defining polygonal faces; and optionally a set of normals, one at each vertex.
- the approximation with a polygonal mesh may be an exact approximation such as, for example, when a planar polygon is represented by a mesh containing a plurality of polygons.
- the final accuracy of the generated image depends in large part on the density of the mesh, since a denser mesh generally gives a closer approximation and therefore a higher quality image.
- For each vertex P of the mesh, a reflected and/or refracted image P', or an approximation thereto, is computed (step 460).
- the collection of all such reflected and/or refracted images P' defines a mesh belonging to the reflected and/or refracted world W', and thus defines the reflected and/or refracted world W'.
- the computation of P' of a vertex P differs according to the type of reflecting and/or refracting surface SR, and is described as follows with reference to Fig. 10.
- Fig. 10 is a simplified flowchart illustration of a preferred implementation of step 460 of Fig. 9.
- the method of Fig. 10 preferably includes the following steps:
- Each reflecting and/or refracting surface is classified and a reflected and/or refracted image is computed accordingly (step 470).
- the reflecting and/or refracting surface is classified, and the reflected and/or refracted image is computed, as follows: for a planar surface (step 480), described in more detail below with reference to Figs. 11A and 11B; for an orthogonal linear 3-D extrusion of a convex planar curve (step 490), described in more detail below with reference to Fig. 12; for a cylinder (step 500), described in more detail below with reference to Fig. 13; for a sphere (step 510), described in more detail below with reference to Fig. 14; for a general convex 3-D polygonal mesh (step 520), described in more detail below with reference to Fig. 15; for a general concave 3-D polygonal mesh (step 530), described in more detail below with reference to Fig. 15; for a general 3-D polygonal mesh (step 540), described in more detail below with reference to Fig. 16.
- Figs. 11A and 11B, taken together, comprise a simplified flowchart illustration of a preferred implementation of step 480 of Fig. 10.
- the methods of Figs. 11A and 11B preferably comprise the following steps:
- P' lies on a straight line L defined by V and Q
- Q is between V and P' on the line L
- the distance between Q and P' is equal to the distance between Q and P.
- a point H on the reflecting surface SR is found, such that H is the point on surface SR nearest to P, the point to be reflected (step 550). Since surface SR is planar, the reflection P' lies on the straight line defined by P and H, on the opposite side of surface SR, and at the same distance from H as P is from H (step 553).
- the computation of P' can thus be carried out using methods well known in the art, such as projection of point P onto the plane in which surface SR lies, distance computation, and point and vector addition.
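- By way of illustration, the following is a minimal sketch (not part of the original disclosure) of the planar reflection just described, in which the plane is given by a point on it and a unit normal; the example values are hypothetical:

```python
def mirror_point_across_plane(p, plane_point, plane_normal):
    # Signed distance from p to the plane (plane_normal is assumed to be unit length);
    # H, the nearest point on the plane, lies d units from p along the normal, and
    # the reflection P' lies the same distance beyond H.
    d = sum((p[i] - plane_point[i]) * plane_normal[i] for i in range(3))
    return tuple(p[i] - 2.0 * d * plane_normal[i] for i in range(3))

# Example: reflect P = (1, 2, 3) across the plane z = 0.
print(mirror_point_across_plane((1.0, 2.0, 3.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
# -> (1.0, 2.0, -3.0)
```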
- the transformation necessary to compute the image P' of a point P is the same for all points P in the world W
- the matrix M depends neither on the position of the viewer nor on the position of the point P to be reflected.
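- As a sketch of this point, the planar reflection can be packed into a single 4x4 matrix applied unchanged to every point P; the Householder-style construction below is an assumed form, since the matrix M itself is not reproduced in the text (the plane is a*x + b*y + c*z + d = 0 with (a, b, c) of unit length):

```python
def planar_reflection_matrix(a, b, c, d):
    # Row-major 4x4 affine reflection matrix; it depends only on the plane,
    # not on the viewer position or on the point being reflected.
    return [
        [1 - 2*a*a,    -2*a*b,    -2*a*c, -2*a*d],
        [   -2*a*b, 1 - 2*b*b,    -2*b*c, -2*b*d],
        [   -2*a*c,    -2*b*c, 1 - 2*c*c, -2*c*d],
        [      0.0,       0.0,       0.0,    1.0],
    ]

def apply_affine(m, p):
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[r][k] * v[k] for k in range(4)) for r in range(3))

M = planar_reflection_matrix(0.0, 0.0, 1.0, 0.0)   # the plane z = 0
print(apply_affine(M, (1.0, 2.0, 3.0)))            # -> (1.0, 2.0, -3.0)
```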
- a refraction of a point P, denoted as P', according to a refracting surface SR, relative to a viewer at position V, obeys Snell's law of refraction, described, for example, in
- the refracted image for point P' lies on the line from V to H, on the opposite side of SR (relative to V), and the length of the segment from H to P' is equal to the length of the segment from H to P.
- the computation of P' can thus be carried out using methods well known in the art, such as equation solving, distance computation, and point and vector addition.
- the transformation necessary to compute the image P' of a point P is the same for all points P in the world W
- the transformation can be represented by an affine transformation as follows.
- the position of P' is given by
- let N be a 4x4 rotation matrix representing a coordinate transformation to such a coordinate system
- let N^-1 be the inverse rotation.
- the final refraction matrix is given by:
- SR but are between SR and the viewer.
- the refracted image W' of the polygon will lie wholly or partially on the wrong side of SR, the side of SR on which there should be no refraction seen.
- This case is dealt with during rendering such as, for example, in step 130 described above, by applying an appropriate clipping plane, such as the plane of SR as a front clipping plane. Surfaces lying on the other side of the surface of 0, not seen by the viewer, may be eliminated or ignored (step 561).
- Fig. 12 is a simplified flowchart illustration of a preferred implementation of step 490 of Fig. 10. It is appreciated that there is at most a single reflection P' for any point P on a convex curve, and there is at most a single refraction
- the method of Fig. 12 preferably comprises the following steps: The problem of computing reflection and/or refraction for an orthogonal linear 3-D extrusion is reduced to a 2-D problem (step 565): a direction parallel to the extrusion direction of the 3-D extrusion is chosen, and a coordinate transformation is performed to make the chosen direction the "z" direction; all further computations of Fig. 12 are performed in the x-y plane, until step 620, described below.
- the curve SR must preferably be represented in a form which supports generating a point at any location on SR and computing the normal, or an approximation thereof, at any point on SR.
- if SR is a parametric curve, point generation and normal computation are easily accomplished using the parametric equation.
- if SR is an implicit curve, any appropriate method for point generation may be used, such as the "marching cubes" method which is well known in the art, and which is described by Foley et al, referred to above, at pages 1035 - 1036.
- the normal to an implicit curve may be computed from the implicit equation by differentiation. If the curve SR is not already represented in a form supporting the operations described above, the curve SR is preferably represented in such a form (step 570).
- a plurality of sample points is generated along the curve SR (step 580).
- the number of sample points generated may depend on the desired degree of accuracy in the computation of the point P', which is an approximation to the true reflection and/or refraction of the point P. If a more precise approximation is desired, more points are computed.
- a typical method for producing the sample points is to sample the parameter space of the curve SR using uniform intervals whose size is determined by the desired number of sample points. Alternatively, it is possible to adaptively modify the parameter value associated with a given point if the given point is too close to or too far from a previous point, in accordance with some predetermined criterion.
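- As an illustrative sketch only (the curve, the number of samples and all names below are assumptions), uniform sampling of a simple parametric curve, here a circle of radius r, produces the sample points and normals used in the following steps:

```python
import math

def sample_circle(radius, m):
    # Sample the parameter space [0, 2*pi) at uniform intervals; for a circle
    # the outward normal at a sample point is the normalised position vector.
    points, normals = [], []
    for k in range(m):
        t = 2.0 * math.pi * k / m
        points.append((radius * math.cos(t), radius * math.sin(t)))
        normals.append((math.cos(t), math.sin(t)))
    return points, normals

T, N = sample_circle(1.0, 16)   # sample points T1,...,Tm and normals N1,...,Nm
```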
- For each sample point, the geometry of the sample point, that is, its location, and the normal to the curve SR at the sample point are computed (step 590).
- the sample points are referred to hereinbelow as T1,...,Tm, and the associated normals as N1,...,Nm.
- the discussion below assumes that the sample points T1,...,Tm were generated in a counter clockwise order, but it is appreciated that, more generally, the sample points T1,...,Tm may be generated in any order.
- a reflected and/or refracted ray Ri is generated for each sample point Ti and the associated normal Ni (step 600), dependent on the viewer location V.
- For reflection, the direction Di of Ri is the mirror of the vector (V-Ti) about the normal Ni; that is, Ri is such that the angle between Ni and Di is equal to the angle between Ni and (V-Ti). Rays of this type are referred to herein as "front-facing rays." Otherwise, Di is identical to (Ti-V).
- For refraction, the direction Di of Ri is such that the angle between the vector (V-Ti) and Di obeys Snell's law.
- if sin(alpha) * (N1/N2) > 1, the refraction is defined as total internal reflection; that is, the same transformation as reflection from surface SR.
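- A hedged 2-D sketch of the ray directions described above follows; it is not taken from the disclosure, and simply mirrors the vector (V-Ti) about Ni for reflection and applies Snell's law (falling back to the reflected direction on total internal reflection) for refraction:

```python
import math

def normalize(v):
    length = math.hypot(v[0], v[1])
    return (v[0] / length, v[1] / length)

def reflect_direction(V, Ti, Ni):
    # Mirror (V - Ti) about the unit normal Ni: the angle between Ni and the
    # result equals the angle between Ni and (V - Ti) ("front-facing ray").
    i = normalize((V[0] - Ti[0], V[1] - Ti[1]))
    d = i[0] * Ni[0] + i[1] * Ni[1]
    return (2.0 * d * Ni[0] - i[0], 2.0 * d * Ni[1] - i[1])

def refract_direction(V, Ti, Ni, n1, n2):
    # Snell's law: n1 * sin(alpha) = n2 * sin(beta); if sin(alpha) * (n1/n2) > 1
    # the ray undergoes total internal reflection instead.
    i = normalize((Ti[0] - V[0], Ti[1] - V[1]))      # incident direction
    cos_a = -(i[0] * Ni[0] + i[1] * Ni[1])
    eta = n1 / n2
    k = 1.0 - eta * eta * (1.0 - cos_a * cos_a)
    if k < 0.0:
        return reflect_direction(V, Ti, Ni)          # total internal reflection
    return (eta * i[0] + (eta * cos_a - math.sqrt(k)) * Ni[0],
            eta * i[1] + (eta * cos_a - math.sqrt(k)) * Ni[1])
```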
- a plurality of cells C(i,k) are determined by the rays and by line segments connecting adjacent sample points as follows (step 610). Let k be a number greater than 1.
- some cells may be non convex, in which case such cells are preferably split into two convex subcells for easier testing, typically by extending the defining rays of such a cell into lines and intersecting the lines.
- P' = (u/(u+v)) Pi' + (v/(u+v)) P(i+1)'
- the weights u and v in the above equation may be determined by any appropriate method such as, for example, one of the following:
- FIG. 13 is a simplified flowchart illustration of a preferred implementation of step 500 of Fig. 10.
- Fig. 17U is an example of a scene containing Ci, O, OV, OP, Q', V, B, R, P, P', dv, and dp as described hereinbelow.
- Fig. 17U is described in more detail below.
- the method of Fig. 13 preferably includes the following steps: The problem of computing reflection for a cylindrical surface is reduced to a 2-D problem as described above with reference to step 565 of Fig. 12. The result is a circle Ci which is the generating circle of the cylinder.
- a first ray OV from the center O of the circle Ci to the viewer V and a second ray OP from the center O of the circle Ci to the point P are generated, and a bisector ray B is generated from the center O of the circle Ci such that the angle between OV and B is equal to the angle between OP and B (step 650).
- the distances dv and dp of V and P respectively from the surface of the generating circle Ci are computed (step 660). Suppose that V is nearer to the generating circle Ci than P is.
- a ray R is generated from the center O of the circle Ci between OV and B such that the ratio of angle (R,OV) to angle (R,B) equals dv/dp (step 670). It is appreciated that, if
- V is not nearer to the generating circle Ci than P is, then R will lie between B and OP, and all computations will be similar to those described.
- step 690 It is appreciated that, optionally, repeated iterations of the method of Fig. 13 may lead to increasingly exact approximations P' (step 700), by repeated bisection of the angle using current and previous values of P'.
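- A sketch of the 2-D construction of steps 650 through 670 is given below for a generating circle centred at O with radius r; the point Q' is taken here to be the intersection of ray R with the circle, which is an assumption since the intervening steps are not reproduced above (the mirroring of P about the tangent at Q' is likewise omitted):

```python
import math

def approximate_reflection_point(O, r, V, P):
    # Rays OV and OP and the bisector B between them (step 650); angle
    # wrap-around is ignored for simplicity in this sketch.
    ang_v = math.atan2(V[1] - O[1], V[0] - O[0])
    ang_p = math.atan2(P[1] - O[1], P[0] - O[0])
    ang_b = 0.5 * (ang_v + ang_p)
    # Distances of V and P from the surface of the circle (step 660).
    dv = math.hypot(V[0] - O[0], V[1] - O[1]) - r
    dp = math.hypot(P[0] - O[0], P[1] - O[1]) - r
    # Ray R between OV and B with angle(R,OV) / angle(R,B) = dv / dp (step 670).
    t = dv / (dv + dp)
    ang_r = ang_v + t * (ang_b - ang_v)
    # Q' is taken as the point where R meets the circle.
    return (O[0] + r * math.cos(ang_r), O[1] + r * math.sin(ang_r))
```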
- Fig. 14 is a simplified flowchart illustration of a preferred implementation of step 510 of Fig. 10.
- the reflecting surface SR is a 3-D sphere.
- the problem is reduced to a 2-D problem (step 705) as follows.
- the center O of the sphere, the viewer V, the point P, and the point Q, described above in reference to Figs. 11A and 11B, will all lie on the same 2-D plane because of the geometry of a sphere.
- SR is a general convex polygonal 3-D mesh or a general concave polygonal 3-D mesh.
- the convex case for calculating reflection, which is generally the same as the concave case for calculating refraction, will be described first.
- all polygonal faces in the mesh are assumed to be triangular.
- any polygonal mesh may be reduced to a polygonal mesh with all faces triangular using methods well known in the art, such as that described in R. P. Ronfart and J. R. Rossignac, referred to above (step 720).
- a reflected and/or refracted ray is computed (step 725).
- the reflected and/or refracted ray Ri which has a direction vector Di, is computed for each mesh vertex Ti and associated normal Ni.
- the three rays Ri, Rj, Rk defined by the three vertices Ti, Tj, Tk of a mesh triangle T(i,j,k) define a space cell C(i,j,k).
- each pair of rays defining a bi-linear surface parameterized by the length of each ray, the surface starting at the vertices of the rays (steps 730 and 740).
- R2 is as follows.
- Ray Ri is defined by a point Ti and a vector Di.
- the parametric equations for the rays are P1(s) = T1 + s D1 and P2(t) = T2 + t D2.
- the surface B is called bilinear because it is linear in both parameters s and t.
- A2 = (T2_z - T1_z) (D2_x - D1_x) - (T2_x - T1_x) (D2_z - D1_z)
- B2 = (D2_z - D1_z) (X - T1_x) + (D2_x - D1_x) (Z - T1_z) +
- the coordinates of P are used as the x, y, and z variables in the implicit equation of the surface, giving the result n.
- the result n is equal to 0 if P is on the bi-linear surface, is positive if P lies on a first side of the bi-linear surface, and is negative if P lies on a second side of the bi-linear surface.
- each mesh triangle may be oriented such that the result n is positive for all three bi-linear surfaces of a triangle if P lies in the cell defined by that triangle.
- One method of orientation of the triangles is to orient all triangles in a counter clockwise order as seen by the viewer. By performing the indicated test on each cell, the cell C in which the point P lies can be determined. Each of the vertices of C is used to reflect and/or refract P, yielding images Pi',
- the P' which is thus computed is an approximation of the "true" P'.
- the accuracy of the approximation depends, in general, on the density of the mesh vertices and on the method used to compute the weighted average in step
- intersection points of the plane with the rays Ri are computed.
- the weight of each point Pi' is then taken as the distance of the point P from the corresponding intersection point, divided by the sum of the three distances.
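- A sketch of this weighting scheme follows; the orientation of the plane through P is an assumption here (for instance the mean direction of the three rays), since the sentence defining the plane is truncated above, and the weights are normalised exactly as stated in the text:

```python
import math

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    # Solve (origin + t * direction - plane_point) . plane_normal = 0 for t.
    denom = sum(direction[i] * plane_normal[i] for i in range(3))
    t = sum((plane_point[i] - origin[i]) * plane_normal[i] for i in range(3)) / denom
    return tuple(origin[i] + t * direction[i] for i in range(3))

def cell_weights(P, ray_origins, ray_directions, plane_normal):
    # Intersect the three cell rays with a plane through P, then take the weight
    # of each Pi' as its distance from P divided by the sum of the three distances.
    hits = [ray_plane_intersection(o, d, P, plane_normal)
            for o, d in zip(ray_origins, ray_directions)]
    dists = [math.dist(P, h) for h in hits]
    total = sum(dists)
    return [d / total for d in dists]
```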
- SR is a general concave 3-D polygonal mesh reflecting surface or a general convex 3-D polygonal mesh refracting surface
- a single point P can generate several reflected and/or refracted images P'. Therefore, it is possible that multiple images P' need to be computed in steps 760 and 770.
- Fig. 16 is a simplified flowchart illustration of a preferred implementation of step 540 of Fig. 10.
- the mesh is preferably partitioned into concave and convex pieces (step 780), and then the method of Fig. 15 is performed separately for each piece (step 790).
- when treating objects lying partially in front of and in back of SR, when SR is not a planar surface, one of two methods is preferably used:
- For each pixel of an object of W being rendered, in addition to comparing its depth against the ordinary z-buffer, it is also compared against the depth of that pixel in the depth map of SR. If the new pixel is closer to the viewer than the corresponding pixel in the depth map, the new pixel is preferably not rendered, nor is it used to update the z-buffer.
- This method is effectively a generalization of the concept of a front clipping plane; the part of the front clipping plane is played, in the case of this method, by a front clipping buffer.
- any available knowledge about the original scene may be used in order to accelerate the computations of the various methods of the present invention. If, for example, the only moving objects in the scene are known to always lie in front of every reflecting and/or refracting surface SR, it is appreciated that it is possible to pre-process the scene by clipping, once and for all, the objects lying partially behind SR, as in method 1) above.
- bounding box or sphere hierarchies around objects can be used, as is standard in many graphics methods.
- a preferable method of treating point light sources in the scene when generating W is as follows. Use the point light sources to compute diffuse color values at the vertices of all scene polygonal meshes at the beginning of the method of Fig. 2. These color values are then used when performing Gouraud or Phong shading during rendering in step 130 of Fig. 2.
- This method may be preferable to reflecting and/or refracting of each point light source in the same manner as any other point P, since in this case the normal at the point P, if there is one, should also be reflected and/or refracted.
- the normal may be reflected by generating any two points on the normal, reflecting and/or refracting them as points, subtracting the reflected and/or refracted points to obtain a vector, and then normalizing the vector.
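- A minimal sketch of this normal-handling procedure follows, using a planar mirror purely as a stand-in for whatever reflection and/or refraction is applied to the two points; all names are illustrative:

```python
import math

def mirror_point(p, plane_point, plane_normal):
    d = sum((p[i] - plane_point[i]) * plane_normal[i] for i in range(3))
    return tuple(p[i] - 2.0 * d * plane_normal[i] for i in range(3))

def reflect_normal(point, normal, plane_point, plane_normal):
    # Generate two points on the normal, reflect them as points, subtract the
    # reflected points to obtain a vector, and normalise the result.
    a = point
    b = tuple(point[i] + normal[i] for i in range(3))
    a_ref = mirror_point(a, plane_point, plane_normal)
    b_ref = mirror_point(b, plane_point, plane_normal)
    v = tuple(b_ref[i] - a_ref[i] for i in range(3))
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)
```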
- FIG. 17A is a simplified bird's-eye view illustration of a logical representation of an example of a scene after performing steps 100 through 140. Illustrated are a reflecting surface 800, rectangular objects 820 and 830, a reflective surface 810 of rectangular object 830, a light source 840, and a viewer's position 850.
- FIG. 17B is a simplified pictorial illustration of a representation of an example of the scene referred to in Fig. 17A after performing steps 100 through 140, and as contained in image buffer 40. Illustrated are a reflecting surface 800, rectangular objects 820 and 830, and the representation 860.
- FIG. 17C is a simplified illustration of a representation of an example of stencil S', designated by reference number 870, after performing step 360 of Fig. 6.
- FIG. 17D is a simplified bird's-eye view illustration of a logical representation of an example of a scene after recursively performing steps 120 through 140. Illustrated are a reflecting surface 800, rectangular objects 820 and 830, a reflective surface 810 of rectangular object 830, reflected rectangular objects 890 and 900, a reflective surface 910 of reflected rectangular object 900, a light source 840, a reflected light source 880, and a viewer's position 850.
- FIG. 17E is a simplified pictorial illustration of a representation of an example of the scene referred to in Fig. 17D after recursively performing steps 120 through 140, and as contained in image buffer 40. Illustrated are a reflecting surface 800, reflected rectangular objects 890 and 900, a reflected reflective surface 910 of reflected rectangular object 900, and the representation 860.
- Fig. 17F is a simplified illustration of a representation of an example of stencil S", designated by reference number 870, after performing step 360 of Fig. 6.
- Fig. 17G is a simplified bird's-eye view illustration of a logical representation of an example of a scene after recursively performing steps 120 through 140. Illustrated are a reflecting surface 800, rectangular objects 820 and 830, a reflective surface 910 of reflected rectangular object 900, a light source 840, a reflected light source 880, a reflected reflecting surface 940, a reflected light source 950 of reflected light source 880, and a viewer's position 850.
- Fig. 17H is a simplified pictorial illustration of a representation of an example of the scene referred to in Fig. 17G after recursively performing steps 120 through 140, and as contained in image buffer 40. Illustrated are a reflected reflective surface 910, a reflection 891, and the representation 860.
- Fig. 17I is a simplified pictorial representation of a representation of an example of a scene after performing all recursions of steps 120 through 240, and as contained in accumulation buffer 45. Illustrated are reflecting surface 800, rectangular objects 820 and 830, reflected rectangular objects 890 and 900, a reflected reflective surface 910 of reflected rectangular object 900, a reflection 891 of reflected rectangular object 890, and the representation 861.
- Reference is now made to Fig. 17J, which is a simplified bird's-eye view illustration of an example of steps 565 through 630 for calculating refraction and/or reflection of a point P on an orthogonal linear 3-D extrusion of a convex planar curve, consisting of a viewer's position 850, a curved surface SR, designated by reference number 960, point P to be reflected and/or refracted, designated by reference number 970, sample points 975 and 976, normals 977 and 978 to sample points 975 and 976 respectively, reflecting and/or refracting rays 980, reference rays 985, refracted points 990, and calculated refraction point 991.
- Fig. 17K is a simplified bird's-eye illustration of an example of a scene prior to performing step 415 of Fig. 8. Illustrated are rectangular objects 1000 and 1010, and light direction vector 1020. Reference is now made to Fig. 17L, which is a simplified pictorial illustration of an example of a scene prior to performing step 415 of Fig. 8. Illustrated are rectangular objects 1000 and 1010, surface 1015 of rectangular object 1010 that receives shadows, and light direction vector 1020.
- Fig. 17M is a simplified illustration of an example of the projection of objects onto a planar polygon as described in step 415 in order to create shadows of the projected objects onto the planar polygon. Illustrated in a two-dimensional representation are rectangular objects 1030, planar polygonal surface 1040, projected shadows 1050, projection vectors 1060, and light source 1070.
- Fig. 17N is a simplified pictorial illustration of an example of a scene after performing step 415 of Fig. 8. Illustrated are rectangular objects 1000 and 1010, surface 1015 of rectangular object 1010 that receives shadows, light direction vector 1020, and shadow polygon 1030.
- FIG. 17O is a simplified illustration of an example of a stencil 1040 created after performing step 420 of Fig. 8.
- Fig. 17P is a simplified illustration of an example of surface 1015 of rectangular object 1010 with projected shadow 1016 after performing step 425 of Fig. 8.
- Fig. 17Q is a simplified pictorial illustration of an example of a scene after performing step 435 of Fig. 8. Illustrated are rectangular objects
- FIG. 17R is a simplified pictorial illustration of an example of a scene after performing step 435 of Fig. 8 for all polygons in the scene on which a shadow or shadows are projected. Illustrated are rectangular objects 1000 and 1010, surface 1015 of rectangular object 1010 having received shadows 1016 and 1017, and light direction vector 1020.
- Fig. 17S is a simplified illustration of an example of a scene as described with reference to Fig.
- FIG. 17T is a simplified illustration of an example of a scene as described with reference to Fig. 17B, containing a reflecting and/or refracting surface SR, a viewer's position V, a normal vector N, a point H on reflecting and/or refracting surface SR, a point to be refracted P, and a refracted image P'.
- FIG. 17U is a simplified illustration of an example of a scene as described with reference to Fig. 13, containing a generating circle Ci of a cylinder, a center O of the circle Ci, a viewer's position V, rays OV, OP, B, and R, an intersection point Q', a point to be reflected and/or refracted P, and distances dv and dp.
- Fig. 17V is a simplified bird's-eye view illustration of an example of steps 565 through 630 for calculating refraction and/or reflection of a point P on an orthogonal linear 3-D extrusion of a convex planar curve, consisting of a viewer's position V, a curved surface SR, point P to be reflected and/or refracted, sample points Ti and Ti+1, normals Ni and Ni+1 to sample points Ti and Ti+1 respectively, reflecting and/or refracting rays Ri and Ri+1, reference rays V-Ti and V-Ti+1, refracted points P'i and P'i+1, cell C(i,1), and calculated refraction point P'.
- Fig. 18 is a pictorial illustration of a scene including four spheres 1510, 1520, 1530 and 1540 each having a shadow 1550, 1560, 1570 and 1580 respectively. The shadows are rendered on the floor plane 1590 of the scene.
- Fig. 19 is a pictorial illustration of sphere 1510 of Fig. 18, a center point 1610 of the sphere 1510, a point of view 1620 from which the sphere 1510 is being observed, and a four-faced pyramid 1630 surrounding the sphere 1510 and centered at the center point 1610.
- the pyramid 1630 is defined by four vertices 1632, 1634, 1636 and 1638.
- the pyramid 1630 includes a front face 1640 which is orthogonal to the line segment 1650 which connects the point of view 1620 and the center point 1610.
- Figs. 20A - 20D are pictorial illustrations of the scene of Fig. 18 rendered onto each of the four faces, respectively, of the pyramid 1630 of Fig. 18.
- the notation employed in Figs. 20A - 20D is that a non-primed reference numeral denotes the scene element which is rendered onto the relevant face whereas a primed reference numeral denotes a reflection of the scene element identified by that reference numeral, which reflection is rendered onto the relevant face.
- Fig. 21 is a pictorial illustration of the pyramid 1630 of Fig. 19 unfolded onto a planar surface, where the scene of Fig. 18 is rendered onto each of the faces of the pyramid.
- Fig. 22 is a pictorial illustration of a tessellation of the front face 1640 of the pyramid 1630.
- Fig. 23 is a pictorial illustration of the tessellation of Fig. 22 mapped into a 2D circle 1660.
- Fig. 24 is a circular environment map 1670 including a mapping, as illustrated in Figs. 22 - 23, of each of the faces of the pyramid 1630 onto the 2D circle 1660 of Fig. 23.
- Fig. 25 is a rendering of the entire scene of Fig. 18 including a reflection of the entire scene in sphere 1510. This reflection is generated by texture-mapping the environment map 1670 of Fig. 24 onto the sphere 1510.
- Fig. 26 is a simplified block diagram of a system for generating a 2D image, including reflections, of a 3D scene from a point of view, using 3D information regarding the 3D scene, which is constructed and operative in accordance with a preferred embodiment of the present invention.
- the apparatus of Fig. 26 is generally similar to the apparatus of Fig. 1A.
- 3D information regarding an artificial scene may be generated by a conventional workstation 1770 such as an IBM PC.
- 3D information regarding a natural scene may be generated using a state of the art measuring device such as a CyberWare measuring device, commercially available from Cyber-Ware Inc., 2110 De Monte Ave, Monterey, CA, 93940, USA, preferably in conjunction with a suitable camera 1790.
- Fig. 27 is a simplified flowchart illustration of a preferred method of operation for reflections, refractions and shadows unit 1720 of Fig. 26 in conjunction with units 1715 and
- the method of Fig. 27 preferably includes the following steps for each object in the scene from which a 2D image, seen from a selected viewpoint V, is generated. Preferably, the steps of Fig. 27 are performed only for those objects in the scene which are visible from the selected viewpoint V and on which it is desired to see reflections.
- Step 1800: A four-faced pyramid whose center is located at a selected (typically user-selected and/or central) location C within the current object is used to compute an environment map for that object.
- the selected location C of a point-symmetric object is preferably the point about which the object is symmetric.
- the selected location C for an object having an axis of symmetry is preferably the center of the axis of symmetry.
- the pyramid has four faces including a front face which is orthogonal to the line connecting the viewpoint and the selected object point C.
- Fig. 19 illustrates a four-faced pyramid 1630 whose center 1610 is located at a selected location C within the current object which may be used to compute an environment map for object 1600 in Fig. 19.
- the pyramid 1630 has four faces defined by vertices 1632, 1634, 1636, 1638 including a front face 1640 which is orthogonal to the line 1650 connecting the viewpoint 1620 and the selected object point 1610 (also termed point C).
- the environment map is computed as follows: a. Each of the triangular faces of the pyramid is circumscribed within a bounding rectangle. An example of this is illustrated in Figs. 20A - 20D. b. The scene is rendered onto each of the bounding rectangles. c. Each of the triangular faces, with the scene rendered thereupon, is mapped onto a single circle which forms the environment map. This step may be performed using tessellation, as described herein, or may be performed as follows:
- a vector R(x,y,z) is defined which is the difference between that point and the selected object point C.
- s and t are the coordinates on the circle corresponding to the point P on the pyramid.
- Figs. 20A - 20D are pictorial illustrations of four triangular faces with a single scene rendered thereupon.
- a reflection vector R(x,y,z) is computed for each vertex Q of the reflecting object, using Equation 3 where V is the viewpoint from which the object is viewed and N is a "normal" to the object at Q. "Normal” is a known term in the art of computer graphics which is not identical to the term "perpendicular”. Texture coordinates s,t for the vertex Q are computed by plugging the coordinates of R(x,y,z) into Equations 1 and 2.
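- A sketch of this computation for a single vertex Q is given below, assuming that Equation 3 is the usual mirror-reflection formula; the conversion of R into texture coordinates s, t via Equations 1 and 2 is not reproduced in the text and is therefore left abstract here:

```python
import math

def reflection_vector(V, Q, N):
    # Unit vector from the vertex Q toward the viewpoint V, mirrored about the
    # unit "normal" N at Q (the assumed content of Equation 3).
    e = [V[i] - Q[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in e))
    e = [c / length for c in e]
    d = sum(e[i] * N[i] for i in range(3))
    return tuple(2.0 * d * N[i] - e[i] for i in range(3))

R = reflection_vector((0.0, 0.0, 5.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
# R(x, y, z) would then be plugged into Equations 1 and 2 to obtain (s, t).
```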
- a conventional rendering process 1820 is performed in which the environment map is mapped as a texture map onto the object.
- Conventional rendering is described in Foley et al, Computer Graphics: Principles and Practice, published by Addison-Wesley, 1990.
- Fig. 28 is a simplified flowchart illustration of a preferred method for generating a 2D image of a 3D scene from a point of view including a location and a direction, using 3D information regarding the 3D scene, the 3D scene including at least one object and at least one surface, wherein at least one of the surfaces is reflected in each of the objects, and wherein the 3D information regarding the 3D scene includes, for each of the objects, a first plurality of polygons which approximate the surface of the object and which define a second plurality of vertices.
- the method comprises the following steps for each of the at least one objects, a.
- the step (b), in which an ordered pair of texture coordinates is computed for each object vertex Q, typically comprises the following steps:
- (i) Computing a reflection ray R based on the point of view, the vertex and a normal to the object at the vertex (step 1830 of Fig. 28; Equation 3); (ii) Computing an intersection point U of the ray R with a user-selected sphere centered at the selected point, using Equations 4 and 5, in which C is the selected point, r is the radius of the sphere and t is the positive solution of the quadratic equation termed herein Equation 5; a sketch of this intersection computation is given after step 1850 below.
- Step 1850 Computing a pair of texture coordinates as a function of a vector from the selected point C to the intersection point U, as described above with reference to step 1810 of Fig. 27 except that the role of R in step 1810 is replaced, in step 1850, by the final reflection vector F.
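- The sketch below illustrates the intersection computation of step (ii), on the assumption that Equation 4 is the parametric ray equation and Equation 5 the resulting quadratic in t; it is not the patent's own code:

```python
import math

def ray_sphere_intersection(origin, direction, C, r):
    # Ray X(t) = origin + t * direction; sphere |X - C|^2 = r^2.
    # Substituting gives a quadratic in t whose positive root is taken
    # (the vertex lies inside the bounding sphere, so one root is positive).
    oc = [origin[i] - C[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(d * d for d in oc) - r * r
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                              # the ray misses the sphere
    t = (-b + math.sqrt(disc)) / (2.0 * a)
    return tuple(origin[i] + t * direction[i] for i in range(3))

U = ray_sphere_intersection((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 0.0), 2.0)
# U = (0.0, 0.0, 2.0): the intersection point of the reflected ray with the sphere.
```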
- Fig. 29 is a simplified flowchart illustration of another preferred method for generating texture coordinates for each vertex Q of each reflecting object in a 3D scene, in order to generate a 2D image of that 3D scene, either according to the method of Fig. 27 or according to conventional methods.
- Steps 1860 and 1900 of Fig. 29 may be similar to steps 1830 and 1850 respectively.
- a preliminary step 1852 is added in which a depth map is computed and step 1840 of Fig. 29 is replaced by the following steps: a.
- Step 1870 The intersection U of the reflected ray R computed in step 1860 with a user-selected sphere centered at the selected point C is computed, using Equations 4 and 5 in which C is the selected point, r is the radius of the sphere and t is the positive solution of the quadratic equation termed herein Equation 5.
- Step 1880 Compute an intermediate ray I(x,y,z) by subtracting U - C.
- Step 1884 Compute intermediate texture coordinates (s', t') for the intermediate ray by applying Equations 1 and 2 to I(x,y,z).
- Step 1888 - Access the depth map and retrieve the value w stored at the coordinates (s', t'). e.
- Step 1890 - Compute a final ray L using Equation 6 which combines the intermediate ray I and the reflected ray R.
- w' is as defined in Equation 7.
- d_max in Equation 7 is an upper bound on the depth of the scene, e.g. the depth of a box which bounds the scene.
- Preliminary depth map computing step 1852 is performed as follows:
- a four-faced pyramid whose center is located at a selected (typically user-selected and/or central) location C within the current object is used to compute a depth map for that object.
- the depth map is computed as follows: a. Each of the triangular faces of the pyramid is circumscribed within a bounding rectangle. b. The scene is rendered, using conventional methods, onto each of the bounding rectangles and a z-buffer generated in the course of the rendering process is stored on each of the triangular faces. c. Each of the triangular faces, with the z-buffer stored thereupon, is mapped onto a single circle which forms the depth map. This step may be performed using tessellation, as described herein, or may be performed as follows:
- a vector R(x,y,z) is defined which is the difference between that point and the selected object point C.
- s and t are the coordinates on the circle corresponding to the point P on the pyramid.
- The 3-D Reflection Subdivision: The reflector is approximated using a triangular mesh. Each vertex of the mesh possesses a normal. This step is redundant when the reflector is already given as a triangular mesh.
- Mesh vertices can be classified into hidden and (potentially) visible, the visible ones being strictly front-facing, profile or back-facing. We force visible back-facing vertices to be profile vertices; profile and strictly front-facing vertices are called front-facing; and contour vertices that are strictly front-facing are called cut vertices.
- the explosion map method uses two maps: an explosion map and a hidden map.
- the reflection triangles do not cover the whole circle, because the concealed region was not mapped onto the circle yet (in addition, the line segment connecting two adjacent profile vertices leaves out a small portion of the circle).
- Each adjacent pair of contour vertices Vi, Vj defines an extension polygon whose vertices are Vi, Vj, Ei, Ej, where Ei, Ej are points on the rays Ri, Rj outside the circle.
- the distance between a point Ei and the circle center could be approximately 1.3 times the radius of the circle.
- the last step in building the explosion map is filling its polygons with a constant value denoting the index of the corresponding mesh triangles. Extension polygons are filled with the index of their adjacent mesh triangle, which, by construction, is unique. Polygon fill can be done by graphics hardware.
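- A software stand-in for this constant-value polygon fill is sketched below (the map layout and triangle representation are assumptions); it writes a triangle's index into every covered pixel of a 2-D index map:

```python
def fill_triangle_index(index_map, width, height, tri, index):
    # tri is three (x, y) vertices; index_map is a flat list of size width*height.
    (x0, y0), (x1, y1), (x2, y2) = tri
    def edge(ax, ay, bx, by, px, py):
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    xmin, xmax = int(min(x0, x1, x2)), int(max(x0, x1, x2)) + 1
    ymin, ymax = int(min(y0, y1, y2)), int(max(y0, y1, y2)) + 1
    for y in range(max(ymin, 0), min(ymax, height)):
        for x in range(max(xmin, 0), min(xmax, width)):
            w0 = edge(x1, y1, x2, y2, x, y)
            w1 = edge(x2, y2, x0, y0, x, y)
            w2 = edge(x0, y0, x1, y1, x, y)
            # Accept the pixel if it lies on one side of all three edges
            # (handles either triangle winding).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                index_map[y * width + x] = index
```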
- the explosion map represents a discrete projection of the reflection subdivision on a two-dimensional sheet around the object. The index of a reflection cell is written on all discrete map pixels covered by it. In a separate location, we store a list of the exact intersection points Ii, indexed by cell index.
- a hidden map which is simply an item buffer of the visible mesh triangles. That is, it is a discrete map (of a resolution much smaller than the frame buffer) in which a visible mesh triangle is mapped to a 2-D triangle filled with the index of the mesh triangle.
- Using the explosion map to compute virtual vertices:
- the explosion map circle represents a mapping of all possible reflection directions from the reflector's boundary. We use it to directly generate the final virtual vertex Q'.
- For each reflector we define two explosion maps (near and far), a hidden map, and an auxiliary z-buffer of itself.
- the near map is computed using a sphere that tightly bounds the object, and the far map is computed using a sphere that bounds all the scene. It is important to understand that although the topologies of the two maps are quite similar (because cells do not intersect each other), their geometries are different; reflection rays, which determine the geometry of map vertices, evolve non-linearly.
- V are now computed, and used as weights in a weighted average of the vertices and normals of V that produces a plane of reflection (H, NH) through which Q is mirrored to produce Q'.
- the second case is when Q is not hidden; we utilize the explosion maps.
- the pixels in which they fall contain indices of two (generally, different) mesh triangles Tn, Tf. In general, none of these triangles corresponds to the correct reflection cell in which Q is located, because In, If were computed using a line to the center of the sphere, which is rather arbitrary.
- the intersection point In is mapped to the correct cell when Q is very near (or on) the near bounding sphere (and similarly for If).
- Each of the triangles Tn, Tf defines an auxiliary virtual vertex Qn', Qf'.
- Those auxiliary virtual vertices are computed by (1) computing the barycentric coordinates of In, If relative to Tn, Tf, (2) using these coordinates as weights in a weighted average of the vertices and normals of Tn, Tf to produce planes of reflection, and (3) mirroring Q across the two planes of reflection.
- the final virtual vertex Q' is a weighted average of Qn', Qf', i.e., it is on the line segment connecting Qn', Qf'.
- the weights are determined, say, by the relative distances dn, df of Q from the near and far spheres:
- Q' = ( Qn'/dn + Qf'/df ) / ( 1/dn + 1/df ).
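- The sketch below strings these steps together: barycentric coordinates of an intersection point within a 2-D map triangle, a plane of reflection averaged from that triangle's vertices and normals, and the distance-weighted blend of Qn' and Qf'; it is an assumption-based illustration, not the patent's code:

```python
import math

def barycentric_2d(p, a, b, c):
    # Barycentric coordinates of the 2-D point p with respect to triangle (a, b, c).
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w1 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    w2 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return (w1, w2, 1.0 - w1 - w2)

def mirror_across_average_plane(Q, weights, tri_vertices, tri_normals):
    # Weighted averages of the mesh triangle's vertices and normals give the
    # plane of reflection (H, NH); Q is then mirrored through that plane.
    H = [sum(w * v[i] for w, v in zip(weights, tri_vertices)) for i in range(3)]
    NH = [sum(w * n[i] for w, n in zip(weights, tri_normals)) for i in range(3)]
    norm = math.sqrt(sum(c * c for c in NH))
    NH = [c / norm for c in NH]
    d = sum((Q[i] - H[i]) * NH[i] for i in range(3))
    return tuple(Q[i] - 2.0 * d * NH[i] for i in range(3))

def final_virtual_vertex(Qn, Qf, dn, df):
    # Q' = (Qn'/dn + Qf'/df) / (1/dn + 1/df): the nearer sphere receives more weight.
    denom = 1.0 / dn + 1.0 / df
    return tuple((Qn[i] / dn + Qf[i] / df) / denom for i in range(3))
```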
- When rendering these polygons, the stencil automatically masks the parts that do not belong to our finite cone.
- the hidden region can be represented as the union of the inside of the cone and a truncated pyramid defined by the intersection between (1) the two planes passing through the viewer and tangent to the cone, and (2) the plane passing through the two edges on the geometric contour.
- the hidden region of a finite cone is more complex, but we don't need it. Thus, a few simple 'plane side' tests and a cone containment test suffice in order to implement the hidden region test. If Q lies in the hidden region and is a vertex of a mixed polygon, we intersect the ray from the viewer to it with the front part of the cone, and create Q' by mirroring Q across the tangent plane at the intersection point.
- Spheres and cylinders allow a direct computation of the point of reflection H, by reducing the computation to a 2-D one.
- the method described below does not utilize the reflection subdivision at all.
- Beta = Alpha * dE / (dE + dQ). Assume without loss of generality that dE <= dQ.
- Define H' to be a point such that the angle between H'-C and E-C is Beta.
- H' is a first approximation to H.
- a second approximation is to take a point H" as a weighted average of H' and HO (the point at which Q-C intersects the circle). The weights are determined by the relative squared distances of Q to the reflection rays from H', HO.
- Concave Reflectors: The behavior of the reflection of a reflected object depends on which of three regions it is located in.
- the reflection of an object located in region A looks like an enlarged, deformed version of the object, which is otherwise similar to the object.
- the reflection of an object located in region B looks like an enlarged, deformed, upside-down version of the object.
- the reflection of objects located in region C is utterly chaotic. Reflections of objects lying in regions A and B can be computed exactly as for convex reflectors, because in these regions the reflection subdivision is well-defined (since the reflection cells are disjoint). Virtual vertices and polygons can be computed and rendered.
- Reflections of objects lying in region C or intersecting that region are unpredictable and chaotic anyway, so almost any policy for computing virtual vertices will be satisfactory. For example, we can simply select the first cell in which a vertex lies and create a virtual vertex according to that cell.
- the software components of the present invention may, if desired, be implemented in ROM (read-only memory) form.
- the software components may, generally, be implemented in hardware, if desired, using conventional techniques.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Generation (AREA)
Abstract
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9819870A GB2326806A (en) | 1996-03-14 | 1997-03-13 | Computerized graphics systems |
AU21057/97A AU2105797A (en) | 1996-03-14 | 1997-03-13 | Computerized graphics systems |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IL117502 | 1996-03-14 | ||
IL11750296A IL117502A0 (en) | 1996-03-14 | 1996-03-14 | System for processing graphic scenes |
IL11951096A IL119510A0 (en) | 1996-10-28 | 1996-10-28 | Apparatus and method for generating 2D images of 3D scenes including reflections |
IL119510 | 1996-10-28 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO1997034213A2 true WO1997034213A2 (en) | 1997-09-18 |
WO1997034213A3 WO1997034213A3 (en) | 1997-10-23 |
Family
ID=26323234
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IL1997/000094 WO1997034213A2 (en) | 1996-03-14 | 1997-03-13 | Computerized graphics systems |
Country Status (3)
Country | Link |
---|---|
AU (1) | AU2105797A (en) |
GB (1) | GB2326806A (en) |
WO (1) | WO1997034213A2 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2000049573A1 (en) * | 1999-02-19 | 2000-08-24 | Sony Computer Entertainment Inc. | System for and method of implementing refraction mapping |
WO2001020554A1 (en) * | 1999-09-10 | 2001-03-22 | Sony Computer Entertainment Inc. | Method and apparatus for rendering images with refractions |
WO2007129476A1 (en) | 2006-05-09 | 2007-11-15 | Sega Corporation | Image processing program and image processor |
AT503743B1 (en) * | 2002-10-09 | 2008-05-15 | Vrvis Zentrum Fuer Virtual Rea | METHOD FOR THE COMPUTER-BASED PRESENTATION OF OBJECTS |
EP2204776A1 (en) * | 2007-11-01 | 2010-07-07 | Konami Digital Entertainment Co., Ltd. | Image processing device, image processing method, information recording medium and program |
KR20170091710A (en) * | 2014-12-03 | 2017-08-09 | 노키아 테크놀로지스 오와이 | Digital video rendering |
KR101910893B1 (en) | 2017-04-21 | 2019-01-04 | 오스템임플란트 주식회사 | Apparatus And Method For Selecting Data Using Clipping Fuction |
US20220237852A1 (en) * | 2021-01-25 | 2022-07-28 | Nvidia Corporation | Techniques for texture filtering using refracted ray cones |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0727581B2 (en) * | 1988-09-09 | 1995-03-29 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Graphic processing device |
GB2240017A (en) * | 1990-01-15 | 1991-07-17 | Philips Electronic Associated | New, interpolated texture values are fed back to texture memories |
WO1993011500A1 (en) * | 1991-11-27 | 1993-06-10 | Seiko Epson Corporation | Pixel modification unit |
US5461712A (en) * | 1994-04-18 | 1995-10-24 | International Business Machines Corporation | Quadrant-based two-dimensional memory manager |
-
1997
- 1997-03-13 AU AU21057/97A patent/AU2105797A/en not_active Abandoned
- 1997-03-13 WO PCT/IL1997/000094 patent/WO1997034213A2/en active Application Filing
- 1997-03-13 GB GB9819870A patent/GB2326806A/en not_active Withdrawn
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6563508B1 (en) | 1999-02-19 | 2003-05-13 | Sony Computer Entertainment Inc. | System for and method of implementing refraction mapping |
US7075533B2 (en) | 1999-02-19 | 2006-07-11 | Sony Computer Entertainment Inc. | System for and method of implementing refraction mapping |
WO2000049573A1 (en) * | 1999-02-19 | 2000-08-24 | Sony Computer Entertainment Inc. | System for and method of implementing refraction mapping |
WO2001020554A1 (en) * | 1999-09-10 | 2001-03-22 | Sony Computer Entertainment Inc. | Method and apparatus for rendering images with refractions |
US6784882B1 (en) | 1999-09-10 | 2004-08-31 | Sony Computer Entertainment Inc. | Methods and apparatus for rendering an image including portions seen through one or more objects of the image |
US6972758B2 (en) | 1999-09-10 | 2005-12-06 | Sony Computer Entertaiment Inc. | Methods and apparatus for rendering an image including portions seen through one or more objects of the image |
AT503743B1 (en) * | 2002-10-09 | 2008-05-15 | Vrvis Zentrum Fuer Virtual Rea | METHOD FOR THE COMPUTER-BASED PRESENTATION OF OBJECTS |
JP4983793B2 (en) * | 2006-05-09 | 2012-07-25 | 株式会社セガ | Image processing program and image processing apparatus |
WO2007129476A1 (en) | 2006-05-09 | 2007-11-15 | Sega Corporation | Image processing program and image processor |
EP2034446A1 (en) * | 2006-05-09 | 2009-03-11 | Sega Corporation | Image processing program and image processor |
US8400445B2 (en) | 2006-05-09 | 2013-03-19 | Sega Corporation | Image processing program and image processing apparatus |
EP2034446A4 (en) * | 2006-05-09 | 2011-01-05 | Sega Corp | Image processing program and image processor |
EP2204776A4 (en) * | 2007-11-01 | 2010-12-01 | Konami Digital Entertainment | Image processing device, image processing method, information recording medium and program |
TWI384414B (en) * | 2007-11-01 | 2013-02-01 | Konami Digital Entertainment | Image processing device, image processing method, and information recording medium |
EP2204776A1 (en) * | 2007-11-01 | 2010-07-07 | Konami Digital Entertainment Co., Ltd. | Image processing device, image processing method, information recording medium and program |
CN101911129B (en) * | 2007-11-01 | 2013-08-28 | 科乐美数码娱乐株式会社 | Image processing device, image processing method, information recording medium and program |
KR20170091710A (en) * | 2014-12-03 | 2017-08-09 | 노키아 테크놀로지스 오와이 | Digital video rendering |
KR102059732B1 (en) * | 2014-12-03 | 2020-02-20 | 노키아 테크놀로지스 오와이 | Digital video rendering |
KR101910893B1 (en) | 2017-04-21 | 2019-01-04 | 오스템임플란트 주식회사 | Apparatus And Method For Selecting Data Using Clipping Fuction |
US20220237852A1 (en) * | 2021-01-25 | 2022-07-28 | Nvidia Corporation | Techniques for texture filtering using refracted ray cones |
US12229869B2 (en) * | 2021-01-25 | 2025-02-18 | Nvidia Corporation | Techniques for texture filtering using refracted ray cones |
Also Published As
Publication number | Publication date |
---|---|
GB2326806A (en) | 1998-12-30 |
GB9819870D0 (en) | 1998-11-04 |
WO1997034213A3 (en) | 1997-10-23 |
AU2105797A (en) | 1997-10-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7030879B1 (en) | System and method of improved calculation of diffusely reflected light | |
US7362332B2 (en) | System and method of simulating motion blur efficiently | |
Ofek et al. | Interactive reflections on curved objects | |
EP1128330B1 (en) | Visibility splatting and image reconstruction for surface elements | |
Zhang et al. | Visibility culling using hierarchical occlusion maps | |
US6618047B1 (en) | Visibility calculations for 3d computer graphics | |
US6583787B1 (en) | Rendering pipeline for surface elements | |
US7009608B2 (en) | System and method of using multiple representations per object in computer graphics | |
US5442733A (en) | Method and apparatus for generating realistic images using a discrete representation | |
EP1128375B1 (en) | Texture filtering for surface elements | |
US6825840B2 (en) | System and method of adjusting ray origins when shading vertices with rays | |
US7973790B2 (en) | Method for hybrid rasterization and raytracing with consistent programmable shading | |
EP1128331B1 (en) | Hierarchical data structures for surface elements | |
US6542154B1 (en) | Architectural extensions to 3D texturing units for accelerated volume rendering | |
WO1997034213A2 (en) | Computerized graphics systems | |
García et al. | Interactive ray tracing using the compute shader in directx 11 | |
US20030184546A1 (en) | Image processing method | |
Sobierajski | Global illumination models for volume rendering | |
Zhang et al. | Single-pass point rendering and transparent shading | |
KR100372901B1 (en) | Ray-tracing acceleration method using ZF-buffer | |
Han et al. | In-depth buffers | |
Měch | Real-Time Image-Based Rendering Using Surface Proxies and Texture Packing | |
Meseth et al. | Interactive fragment tracing | |
Frühauf | Joining volume with surface rendering | |
Kim et al. | A study on the ray-tracing acceleration technique based on the ZF-buffer algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AL AM AT AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN YU AM AZ BY KG KZ MD RU TJ TM |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
AK | Designated states |
Kind code of ref document: A3 Designated state(s): AL AM AT AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN YU AM AZ BY KG KZ MD RU TJ TM |
|
AL | Designated countries for regional patents |
Kind code of ref document: A3 Designated state(s): GH KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
ENP | Entry into the national phase |
Ref country code: GB Ref document number: 9819870 Kind code of ref document: A Format of ref document f/p: F |
|
NENP | Non-entry into the national phase |
Ref country code: JP Ref document number: 97532408 Format of ref document f/p: F |
|
NENP | Non-entry into the national phase |
Ref country code: CA |
|
122 | Ep: pct application non-entry in european phase |