
WO1998006067A1 - Hardware-accelerated photoreal rendering - Google Patents

Hardware-accelerated photoreal rendering

Info

Publication number
WO1998006067A1
Authority
WO
WIPO (PCT)
Prior art keywords
rendering
data
image data
tag
graphics
Prior art date
Application number
PCT/US1997/013563
Other languages
French (fr)
Other versions
WO1998006067A9 (en)
Inventor
George Randolph Smith, Jr.
Karin P. Smith
David John Stradley
Original Assignee
Intergraph Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intergraph Corporation filed Critical Intergraph Corporation
Priority to EP97936355A priority Critical patent/EP0920678A1/en
Publication of WO1998006067A1 publication Critical patent/WO1998006067A1/en
Publication of WO1998006067A9 publication Critical patent/WO1998006067A9/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/10 - Geometric effects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

A rendering apparatus provides, with respect to a defined viewer location and a defined viewport, a desired rendering of objects defined by object data having an object data format, in a three-dimensional object space. The apparatus may have a graphics accelerator for transforming object data into image data determined with respect to the defined viewer location and the defined viewport. The apparatus also may have a rendering processor for first converting at least one parameter characterizing the desired rendering into parameter data in object data format, feeding the parameter data to the graphics accelerator, and then converting resulting image data as to the at least one parameter to a further processed result pertinent to the desired rendering.

Description

Attorney Docket: 1247/156 WO Hardware-Accelerated Photoreal Rendering
Technical Field The present invention pertains to hardware-accelerated computer processing of multidimensional data and, particularly, to systems for the creation and display of photoreal interactive graphics.
Background of the Invention The rendering of graphical images is an application of digital signal processing which requires intensive computation at multiple levels of the process. The typical three dimensional graphics workstation architecture consists of multiple subsystems, each of which is allocated unique functions. General purpose computers are typically used for non-display rendering of three dimensional models into human viewable images. The non-display rendering process entails representation of the scene as a set of polygons, to which attributes such as texture and shadowing are applied through computation.
By applying some specialized architectural components of the three dimensional graphics workstation, the non-display rendering process can be accelerated. In the state-of-the-art graphics workstation, each sub-system is used for a different part of the processing. The CPU/Main memory is used for general algorithmic processing and temporary storage of data. Peripheral devices are used to allow human interaction with the workstation, and to transmit and permanently store digital information.
One such sub-system is the graphics display accelerator, typically used to take geometric shapes, mathematically place them in three dimensional mathematical space, associate simulated lighting and optical effects, and produce an electronic picture in a frame buffer for visible display using a two dimensional display component.
In this conventional architecture, the graphics display accelerator is a one-way state machine pipeline processor with low volume, highly abstracted data flowing in, and low level information displayed on the workstation monitor. The operation of the graphics display accelerator in the conventional architecture can be understood in the context of rendering processes.
One such rendering process is referred to as ray tracing (or ray casting) an image to be displayed. The ray trace rendering process consists of defining a series of polygons (typically triangles), at least one viewport or window through which the human views the simulated scene, and various light sources and other optical materials into a mathematical three dimensional space. A viewer eye location and a window on an arbitrary view plane are selected. The window is considered to be composed of small image elements (pixels) arranged into a grid at a desired output resolution. Often the window is chosen to correspond to the display monitor at a given resolution, and the eye location is chosen to be at some location outside the screen to approximate where a human observer's eye may actually be located. Then, for each pixel of the window, a ray is fired from the eye location, through the pixel, and into the scene; the pixel is then colored (and assigned other attributes) based upon the intersection of the ray with objects and light sources within the scene. The ray, in effect, bounces around the objects within the scene, and surface and optical effectors modify the simulated light rays, and alter the eventual characteristics of the display pixel. More information regarding ray tracing may be found in Computer Graphics: Principles and Practice, 2d Ed. in C, by Foley, van Dam, et al., Addison-Wesley (1996); this reference is hereby incorporated herein by reference.
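The ray-casting loop just described can be sketched in a few lines. The following Python sketch is illustrative only; spheres and a simple Lambert term stand in for the patent's polygonal scenes, and the window is assumed to lie on the z=0 view plane. For each pixel, a ray is fired from the eye through the pixel and the pixel is colored from its nearest intersection.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t, or None."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def render(width, height, eye, spheres, light_dir):
    image = np.zeros((height, width, 3))
    for y in range(height):
        for x in range(width):
            # Map the pixel onto a window on the z=0 view plane.
            px = (x + 0.5) / width * 2.0 - 1.0
            py = 1.0 - (y + 0.5) / height * 2.0
            ray = normalize(np.array([px, py, 0.0]) - eye)
            nearest, hit = None, None
            for center, radius, color in spheres:
                t = intersect_sphere(eye, ray, center, radius)
                if t is not None and (nearest is None or t < nearest):
                    nearest, hit = t, (center, radius, color)
            if hit is not None:
                center, radius, color = hit
                point = eye + nearest * ray
                n = normalize(point - center)
                # light_dir is the direction the light travels.
                image[y, x] = color * max(0.0, np.dot(n, -light_dir))
    return image

eye = np.array([0.0, 0.0, -3.0])
spheres = [(np.array([0.0, 0.0, 2.0]), 1.0, np.array([1.0, 0.2, 0.2]))]
img = render(64, 64, eye, spheres, normalize(np.array([1.0, -1.0, 1.0])))
print(img.shape)
```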
To reduce the amount of processing necessary, software "Z-Buffers" are frequently used to spatially sort the polygons and to reduce the processing of unnecessary (non-visible) polygons: if a polygon is obscured by another polygon, the ray is not processed. However, because ray tracing is so computationally expensive, even in a high-performance three dimensional workstation, the graphics display accelerator frequently uses a simpler type of rendering. Such accelerators commonly provide special purpose hardware components that provide Z-Buffer sorting, a stencil buffer, texture processing, and geometry (vertex coordinate) transformation calculation.
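The Z-Buffer sort named above reduces, per pixel, to a depth comparison. A minimal sketch of that test, assuming a software frame buffer:

```python
import numpy as np

def rasterize_fragment(color_buf, depth_buf, x, y, z, color):
    if z < depth_buf[y, x]:          # nearer than anything drawn so far
        depth_buf[y, x] = z
        color_buf[y, x] = color

color_buf = np.zeros((4, 4, 3))
depth_buf = np.full((4, 4), np.inf)
rasterize_fragment(color_buf, depth_buf, 1, 1, 5.0, (1.0, 0.0, 0.0))
rasterize_fragment(color_buf, depth_buf, 1, 1, 9.0, (0.0, 1.0, 0.0))  # rejected
print(color_buf[1, 1])  # still red: the farther fragment was discarded
```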
Using a typical graphic display accelerator generally consists of loading each polygon into local memory on the graphic display adapter, projecting the three dimensional coordinates onto the viewport two dimensional coordinate system, and tri-linearly interpolating (with perspective correction) any surface color and optional texture pattern. Current state-of-the-art rendering employs a high-performance three dimensional graphics library such as, for example, OpenGL, that is supported by numerous hardware and software vendors. OpenGL significantly speeds the process of preview rendering, but it has some limitations. These limitations include its inability to directly support Phong shading and bump maps, graphics effects which provide more realistic images than the simple Gouraud shading that OpenGL does support.
Summary of the Invention In a preferred embodiment there is provided a rendering apparatus for providing, with respect to a defined viewer location and a defined viewport, a desired rendering of objects defined by object data having an object data format, in a three dimensional object space. The apparatus in this embodiment has a graphics accelerator for transforming object data into image data determined with respect to the defined viewer location and the defined viewport, and a rendering processor for converting at least one parameter characterizing the desired rendering into parameter data in object data format, feeding the parameter data to the graphics accelerator, and converting resulting image data as to the at least one parameter to a further processed result pertinent to the desired rendering.
In a further embodiment, the apparatus has an intermediate memory in which is stored the image data from the graphics accelerator, wherein the rendering processor converts the image data stored within the intermediate memory into the further processed result. The image data may be defined by values associated with a plurality of pixel locations in an image. In a further embodiment, the rendering processor, before feeding the object data to the graphics accelerator, utilizes a tag assigned to each of the objects, so as to associate by tag pixel locations in the image with objects. Each of the objects has a surface that may be represented by a plurality of primitive polygons, and the rendering processor, before feeding the object data to the graphics accelerator, may utilize a tag assigned to the primitive polygons, so as to associate by tag pixel locations with primitive polygons. The tag may be a color. The rendering processor, as part of converting resulting image data, identifies by tag the portions of object surfaces present in the image, and restricts further processing associated with the desired rendering to such portions so as to reduce processing overhead associated with the desired rendering.
Related methods are provided. In a further embodiment, there is provided a graphics rendering program stored on a computer readable medium for providing a desired rendering of objects defined by object data having an object data format, in a three dimensional object space. The program is configured so as to be executable by a computer having a two dimensional graphics accelerator for transforming object data into image data determined with respect to a defined viewer location and a defined viewport. When loaded into the computer, the program causes the establishment of a rendering apparatus having a graphics accelerator for transforming object data into image data determined with respect to the defined viewer location and the defined viewport, and a rendering processor for converting at least one parameter characterizing the desired rendering into object data format, feeding the object data to the graphics accelerator, and converting resulting image data to a further processed result pertinent to the desired rendering.
The computer further includes an intermediate memory in which the rendering program causes the image data from the graphics accelerator to be stored, and wherein the rendering processor converts the image data stored within the intermediate memory into the further processed result.
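The claimed dataflow can be summarized in a short sketch. The class and method names below are illustrative assumptions, with the accelerator modeled as a black-box transform from object data to image data; it is not the patent's own interface.

```python
class GraphicsAccelerator:
    def transform(self, object_data, viewer, viewport):
        """Stand-in for the hardware pass: object data in, image data out."""
        return [("pixel", viewer, viewport, item) for item in object_data]

class RenderingProcessor:
    def __init__(self, accelerator):
        self.accelerator = accelerator
        self.intermediate_memory = None   # holds image data between passes

    def render(self, parameter, objects, viewer, viewport):
        # 1. Re-express the rendering parameter in object data format.
        parameter_data = [("encoded", parameter, obj) for obj in objects]
        # 2. Feed it through the accelerator like ordinary scene data.
        self.intermediate_memory = self.accelerator.transform(
            parameter_data, viewer, viewport)
        # 3. Convert the resulting image data into the further processed
        #    result pertinent to the desired rendering.
        return [("result", px) for px in self.intermediate_memory]

proc = RenderingProcessor(GraphicsAccelerator())
print(len(proc.render("phong_normals", ["cube"], viewer=(0, 0, -3),
                      viewport=(64, 64))))
```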
Brief Description of the Drawings The invention will be more readily understood by reference to the following description, taken with the accompanying drawings, in which:
FIG. 1 is a block diagram of the rendering graphics architecture with an open application programming interface, in accordance with the present invention.
FIGS. 2a and 2b illustrate the process of computing normals to polygon vertices of a surface to enable Gouraud shading of the surface. FIGS. 3a and 3b illustrate the process of perturbing surface normals to replicate the effect of surface texture, in accordance with the present invention. FIG. 4 illustrates the process of computing shadows. FIGS. 5a and 5b illustrate the process of computing procedural three dimensional texture. FIG. 6 is a flow diagram depicting the steps of image rendering in accordance with the present invention. FIG. 7 shows an eye looking towards a three-dimensional cube.
FIG. 8 shows a triangle for which the normal of an interior pixel is sought.
Detailed Description of the Preferred Embodiment Described now, with reference to FIG. 1, is the preferred embodiment of a novel workstation architecture employing rendering software 10. The rendering software is employed on a graphics workstation, designated generally by numeral 12. Many applications exist where the apparatus and methods taught by this invention are advantageously applied. In the preferred embodiment, the application is to the near real-time processing of three dimensional graphics, and the discussion is cast in graphics processing terms. It is to be understood, however, that the term "scene data," as used in this description and the appended claims, includes all multidimensional data structures which are advantageously operated upon in a highly parallel mode which treats elements, referred to as "pixels," of the processed data structure in a parallel manner. By casting the scene data of a digital signal processing problem in terms of graphics constructs, the high speed performance of a dedicated graphics display accelerator 14 can be applied to parallel processing problems otherwise requiring intensive computational resources. Similarly, as used in this description of the invention and in the appended claims, the term "image data" refers to the processed product of the scene data, after application, singly or recursively, of dedicated graphics display accelerator 14.
Graphics rendering software 10 is the first step toward the goal of Real Time Reality on the desktop (i.e., the ability to render photorealistic images in real-time or near real-time). For example, the rendering of "Toy Story" required a few hundred proprietary RISC/UNIX computers working 24 hours a day for a couple of years. In the future, the present invention will allow creative and technical professionals to render the same number of frames in a fraction of that time.
High speed performance in high-quality rendering is achieved by using graphics display accelerator 14 in a novel way. Instead of serving as a one-way pipeline process that ends in the human visible display of a simulated three dimensional space, graphics display accelerator 14 is now used to produce intermediate results, where these results may be utilized in a general purpose ray trace rendering algorithm, or other graphics algorithm. Such intermediate results are used to determine polygonal normals, texture coordinates for three dimensional texture algorithms, local coordinates for two dimensional surface texturing, and bump-map perturbations of the visible rays, and to determine interpolated world coordinates of polygonal surfaces. The complete rendering process is now split between two of the major subsystems, the graphics display accelerator 14 (or hardware accelerator) and the CPU/Main Memory subsystem, thus improving performance of the rendering process on the three dimensional graphics workstation over that of a general purpose computer. The rendering software includes a graphics library with an open application programming interface (API) which extends the capabilities of OpenGL, and accelerates the creation and display of photoreal interactive graphics. A sample API is provided in the related provisional application bearing serial number 60/023,513, which is incorporated herein by reference.
By virtue of the acceleration of the rendering process, the graphics workstation is now able to produce higher quality graphics, incorporating features of photoreal, or production, rendering. For example, in the shading process described hereinbelow, normals may be interpolated pixel-by-pixel (Phong shading), to produce higher quality three dimensional graphics than are available using Gouraud shading (provided by OpenGL), in which normals are computed for faces of polygons and then averaged to derive values for polygon vertices. The graphics rendering subsystem (which includes the rendering library) dramatically accelerates graphics attributes, such as Gouraud shading, which are standard to the existing high-performance three dimensional graphics library (such as OpenGL). Additionally, the invention accelerates features that OpenGL does not support, such as Phong shading, bump maps, shadows, and procedural three dimensional textures. Thus, for the first time, creative and technical professionals can perform fast preview rendering with richly textured high-quality images, rather than being constrained to simple Gouraud shading.
The additional attributes which can be applied to produce high-quality three dimensional graphics can be appreciated by reference to FIGS. 2-5. FIGS. 2a and 2b illustrate the process of computing normals to polygon vertices of a surface to enable Gouraud shading of the surface. Three dimensional models are usually represented as a large collection of smaller polygons, typically triangles, and the quality of the final rendered image depends on the sophistication of the algorithms used to shade these polygons. Shown in FIG. 2a are the faces of several polygons 20, and an associated normal vector 22 which is used to indicate how light will be reflected by that polygon. Shown in FIG. 2b is the first step in the Gouraud shading process, which is to calculate the normals 24 for each of the polygon's vertices (corners) by averaging the normals 22 from the surrounding faces. The computed normals for the vertices are then used to determine the RGB (red, green, and blue) color components for the vertices as seen by an observing eye or camera. These RGB components are based on the color of the material combined with the effect of any light sources. Gouraud shading is based on calculating the color components of each of the pixels forming the polygon by linear interpolation of the RGB values at the vertices. Note that the use of the RGB color scheme is for exemplary purposes only, and another coloring scheme, such as CMYK (cyan, magenta, yellow, and black), may also be used. Gouraud shading provides acceptable results, but the highest quality three dimensional graphics demand something more. The problem is that Gouraud shading simply calculates the normals at the vertices and then interpolates the colors of the polygon's interior pixels; especially for polygons approximating curved surfaces, the shading effect is not very realistic, as the interpolation does not account for different normal values across the polygon surface. A solution to this problem (referenced herein as Phong shading) is to derive a normal for every interior pixel, and to then apply a given shading model to that individual pixel based on its normal component. However, although providing much superior results over Gouraud shading, this solution is computationally intensive, as there may be many interior pixels (dependent upon display resolution) for each polygon. The present invention, by virtue of dramatically accelerated processing, makes practical the utilization of Phong shading on a graphics workstation.
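The distinction between the two shading rules can be made concrete. In this hedged sketch (a diffuse-only lighting model is assumed), Gouraud interpolates the colors computed at the vertices, while Phong interpolates the normals and re-lights each individual pixel:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def lambert(normal, light_dir, base_color):
    # light_dir is the direction the light travels.
    return base_color * max(0.0, np.dot(normalize(normal), -light_dir))

def gouraud_pixel(bary, vertex_normals, light_dir, base_color):
    # Light each vertex first, then linearly interpolate the colors.
    vertex_colors = [lambert(n, light_dir, base_color) for n in vertex_normals]
    return sum(w * c for w, c in zip(bary, vertex_colors))

def phong_pixel(bary, vertex_normals, light_dir, base_color):
    # Interpolate the normal, then light the individual pixel.
    n = sum(w * n_ for w, n_ in zip(bary, vertex_normals))
    return lambert(n, light_dir, base_color)

normals = [np.array([0.0, 0.0, -1.0]),
           np.array([0.7, 0.0, -0.7]),
           np.array([0.0, 0.7, -0.7])]
light = normalize(np.array([0.0, 0.0, 1.0]))
bary = (1/3, 1/3, 1/3)
red = np.array([1.0, 0.0, 0.0])
print(gouraud_pixel(bary, normals, light, red))
print(phong_pixel(bary, normals, light, red))
```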
FIGS. 3a and 3b illustrate the process of perturbing surface normals to replicate the effect of surface texture, in accordance with the present invention. Real-world objects are rarely geometrically smooth; instead, they often have distortions in their surfaces giving them a physical texture. Consider the surface of a strawberry, which is all one color, but whose dimples give it a rich physical texture. FIG. 3a shows one way to replicate this effect, where one explicitly builds geometrical distortions 30 into the model's surface. This solution, however, requires significant modeling effort and results in excessive amounts of computation. FIG. 3b shows an alternate solution, in which the surface normals 32 are perturbed, causing them to reflect the light so as to provide a similar effect. Generally, perturbing the surface normals requires the use of a bump map, which simulates the effect of displacing the points on the surface above or below their actual positions. Bump maps fully complement the process of Phong shading: once individual normals have been computed for each pixel on the surface, the bump map is used to modify these normals prior to the shading model being applied to each pixel.
Bump maps are distinct from the concept of texture (pattern) maps, in which an image is projected onto a surface. Texture maps range from flat images (such as geometric patterns) to actual photographs of rough surfaces. However, texture maps of rough surfaces don't look quite right, because they simply affect a surface's shading, and not its surface shape. They also tend to look incorrect because the direction of the light source used to create the texture map is typically different from the direction of the light illuminating the mapped three dimensional object. That is, unless the light sources for the rough pattern are the same as the one within the three dimensional object space, when viewing the texture mapped onto an object within the object space, one sees that something is wrong with the resultant image. Thus bump maps are not used for producing realistic output with OpenGL. Although bump maps provide superbly textured three dimensional images, overcoming the visual lighting problems requires use of Phong shading, and such shading is not directly supported by OpenGL. Thus, in addition to accelerating the rendering process of standard OpenGL effects, the invention provides the ability to create images that are not possible with OpenGL alone.
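One plausible reading of the bump-map step is sketched below, with a hypothetical height field and tangent frame (neither is specified in the disclosure): the height field's finite-difference slopes tilt the per-pixel normal before the shading model runs, so the surface shades as if it were dimpled without any extra geometry.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def bump_height(u, v):
    # Hypothetical dimple pattern over (u, v) texture space.
    return 0.05 * np.sin(40.0 * u) * np.sin(40.0 * v)

def perturbed_normal(n, tangent, bitangent, u, v, eps=1e-3):
    # Finite-difference the height field to get its slope in u and v,
    # then tilt the normal along the surface tangent directions.
    du = (bump_height(u + eps, v) - bump_height(u - eps, v)) / (2 * eps)
    dv = (bump_height(u, v + eps) - bump_height(u, v - eps)) / (2 * eps)
    return normalize(n - du * tangent - dv * bitangent)

n = np.array([0.0, 0.0, 1.0])
t = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
print(perturbed_normal(n, t, b, 0.3, 0.7))
```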
FIG. 4 shows a light source 40 causing a shadow 42 to be cast by a first object 44 onto a second object 46. An important component of realistic imaging is to ensure that objects within a given three dimensional object space cast proper shadows. However, doing so greatly increases the amount of computation that needs to be performed. Creating shadows is computationally intensive because an area in shadow is rarely pure black; instead, the area usually contains some amount of color content in a diminished form. A preferred embodiment provides special extensions for the acceleration of shadow creation.
FIGS. 5a and 5b illustrate the process of computing procedural three dimensional texture, textures not directly supported by OpenGL. FIG. 5a shows that the application of flat, two-dimensional textures 50 to three dimensional objects 52 usually produces unrealistic results, particularly when attempting to display a cross-sectional cut-away view 54 through an object. FIG. 5b shows, in contrast, that procedural three dimensional textures 56 provide the ability to define a more realistic texture that occupies three dimensions and understands the geometry of the object in question. For example, when a procedural three dimensional wood-grain texture 56 is applied to an object, taking a cross-sectional view of the object reveals the grain 58 of the wood inside the object (or whatever else, such as a knot hole, was defined to be within the texture). This provides for a more realistic image. In a preferred embodiment, the invention provides for accelerated generation and utilization of procedural three dimensional textures.
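A procedural three dimensional texture is simply a function of position within the solid. The ring formula below is a standard illustrative stand-in for a wood grain, not the patent's own procedure; the point is that any cross-section pixel can be shaded directly from its 3D location, so cut-aways expose coherent interior grain.

```python
import numpy as np

def wood_grain(point, rings_per_unit=8.0):
    """Concentric rings about the y axis, like a log's growth rings."""
    x, _, z = point
    r = np.hypot(x, z)
    ring = 0.5 + 0.5 * np.sin(2.0 * np.pi * rings_per_unit * r)
    light_wood = np.array([0.75, 0.55, 0.35])
    dark_wood = np.array([0.45, 0.28, 0.15])
    return light_wood * ring + dark_wood * (1.0 - ring)

# Any point of a cross-section is shaded from its 3D position alone.
print(wood_grain(np.array([0.20, 0.00, 0.10])))
print(wood_grain(np.array([0.20, 0.50, 0.10])))  # same grain deeper in the block
```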
FIG. 6 shows a flow diagram depicting the steps of image rendering in accordance with the present invention. Unlike with a conventional graphics processing system, where a host computer is required to compute all graphics, Z-Buffer, and stencil processing within its CPU and memory subsystem (a burden on the host), a preferred embodiment utilizes a graphics display adapter (preferably optimized for this task) to off-load such processing from the host computer so as to free the CPU and memory subsystem for other processing tasks. A preferred embodiment allows intermediate graphics processed images to be displayed and successive attributes to be applied without reinitiating the entire rendering process. In accordance with the invention, three dimensional-coded color data is written to the graphics accelerator (step 60) and polygon identification information is read back (step 62). By applying flat color shading information to each polygon as it is processed in the graphics display accelerator, each pixel in the output "image" from the graphics display accelerator uniquely identifies the front visible polygon at that position in the viewport.
The process consists of the steps of: establishing view parameters; artificially coding each polygon with a unique "color"; placing the polygon in the display space; "displaying" the view; and reading the resulting "image", which consists of coded polygon identifiers.
These pixel values are now the polygon identification codes and identify each front visible polygon as seen from the viewport.
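The coding and read-back of polygon identifiers might look as follows. The 24-bit, 8-bit-per-channel packing is an assumption about the adapter's frame buffer format, not a detail given in the disclosure:

```python
import numpy as np

def id_to_color(poly_id):
    """Pack a polygon ID into an (R, G, B) triple of 8-bit channels."""
    return ((poly_id >> 16) & 0xFF, (poly_id >> 8) & 0xFF, poly_id & 0xFF)

def color_to_id(rgb):
    r, g, b = (int(c) for c in rgb)
    return (r << 16) | (g << 8) | b

# Pretend frame buffer after the flat-shaded "display" of two polygons.
frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = id_to_color(7)
frame[0, 1] = id_to_color(7)
frame[1, 0] = id_to_color(42)     # a different polygon wins these pixels
frame[1, 1] = id_to_color(42)

visible = {color_to_id(frame[y, x]) for y in range(2) for x in range(2)}
print(visible)  # {7, 42}: only these polygons need further rendering work
```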
By applying coded vertex "color" values as each polygon is placed in the graphics display adapter, linearly interpolated Barycentric coordinates (u, v, w) can be determined (step 64) in the viewport two dimensional coordinate system. These Barycentric coordinates may then be used to calculate additional parameters during the rendering process, including the direction of normal vectors, the three dimensional texture coordinates, the two dimensional texture coordinates, and the world coordinates of each pixel in the output image.
This process consists of: establishing view parameters; coding the "color" value of each vertex of the polygon; placing the polygon in the display space; "displaying" the view; and reading the resulting "image", which consists of the polygon Barycentric coordinates.
This "image" is then used to directly identity the linear interpolated position ol the pixel on the polygon This linear interpolation value is then tor example, applied to the normal vector direction, used in the process ol calculation ol the three dimensional texture value, in looking up the two dimensional surlace image texture, and in calculating the glow coordinate of the polygon visible at that viewpoint pixel location
Multi-colored encoding, employed by the invention, provides increased precision of intermediate data. In the process of using texture maps to encode positional information, limitations of the hardware processing architecture can be encountered.
Typically, 8 bits per "color" is the design parameter of the graphics display adapter. Additional encoding information can be supplied by utilizing all three colors in a quadrature encoding algorithm to increase the precision of the returned positional information.
This process consists of a multi-pass algorithm to obtain (for example) first the u coordinates and then the v coordinates. The w coordinates are obtained by subtraction.
The process consists of the steps of: establishing view parameters; placing a coded "texture" in the texture memory (step 66); placing the polygon in the display space;
"displaying" the view; and reading (step 68) the resulting "image" which consists ot coarse Barycentπc coordinates and quadrature phase encoded values for additional precision. The coarse Barycentπc coordinates are then combined with a simple logic structure and applied to the quadrature phase encoded values tor additional precision
Intermediate processed image data are now available for the application of photorealistic rendering algorithms (step 70), such as shading, texturing, and shadowing, as discussed above.
FIG. 7 shows an eye looking towards a three-dimensional cube, and illustrates a situation in which a preferred embodiment may take a three dimensional problem, such as ray tracing (discussed hereinabove) the cube with respect to that eye location, and partially convert the ray tracing problem into a two dimensional problem, such as Z buffer sorting (discussed hereinabove), so as to allow a two dimensional accelerator to speed up portions of the more complex three dimensional ray tracing problem. The cube is one object of perhaps many in an object space, where the object space has defined within it all of the objects, light sources, etc., that are to be rendered to an output (display, film, etc.). Each object has associated characteristics such as color, texture, reflectivity, and opacity. For a given eye location 72, what the eye "sees" of the cube 74 is sections of sides A, C, and D (the side facing the eye but not visible to us), but side B is not visible to the eye. So, if we were to ray trace the cube with respect to the eye's indicated location, ray tracing algorithms will either waste computing resources to trace the B surface, only to then replace the computed data with different data corresponding to the nearest (to the eye) visible surface, or the algorithm will employ some three dimensional hidden-surface removal algorithm to first reduce the complexity of the ray-tracing problem. However, because the hidden-surface removal algorithm employs three dimensional techniques, such techniques require substantial computation resources to calculate the removal problem.
To reduce the complexity of the three dimensional calculations, a preferred embodiment implements all or parts of a given three dimensional rendering problem as one or more two dimensional operations, where the processing of the two dimensional operations is performed with fast two dimensional graphics accelerators. In the present example, the ray trace problem is broken into two parts, the first being a Z buffer sorting problem, a common procedure provided by two dimensional accelerators. For each polygon comprising the sides of the cube, the polygons are rendered to memory (i.e., rendered but not made visible to the display output) with the two dimensional accelerator. The resultant two dimensional image contains only those pixels visible from the given eye location 72. By virtue of having rendered this scene, it is now possible to easily identify the visible portions of all objects with respect to the defined viewing eye location. In one embodiment, the identification is performed by temporarily assigning a pseudo-color to each object (or polygon used to form the object), before passing the scene data to the graphics accelerator. The colors in the resultant image then indicate to which object each image pixel belongs. Thus, computing may then continue on with the three-dimensional ray tracing algorithm, where rays are only fired and computed for the pixels shown to be visible from the two dimensional rendering process. In this fashion, a tremendous amount of computing has been avoided, since the ray tracing did not process rays for the hidden portions of the cube's 74 polygons.
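The pruning effect of the two dimensional pre-pass can be sketched as follows, with trace_ray a hypothetical stand-in for the full ray tracer and the ID image standing in for what was read back from the accelerator:

```python
import numpy as np

def trace_ray(x, y, polygon_id):
    # Hypothetical: full shading of the known front polygon at this pixel.
    return polygon_id * 0.01

BACKGROUND = 0
id_image = np.array([[0, 0, 7, 7],
                     [0, 7, 7, 7],
                     [0, 7, 42, 42],
                     [0, 0, 42, 42]])   # read back from the accelerator

traced = 0
for y, x in np.ndindex(id_image.shape):
    if id_image[y, x] != BACKGROUND:    # hidden/empty pixels are skipped
        trace_ray(x, y, id_image[y, x])
        traced += 1
print(f"traced {traced} of {id_image.size} pixels")
```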
Similarly, for shadow processing during ray tracing, a preferred embodiment may utilize the stencil buffer to indicate whether a given pixel is in shadow from a given light source. If the pixel is in shadow, then the ray tracer does not need to trace back to that light source (although it may have to return a ray to a different light source).
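A sketch of that per-light stencil test, with illustrative mask contents (the per-light dictionary is an assumption about how masks might be organized, not the hardware's layout):

```python
import numpy as np

stencil = {  # one mask per light source: True means "pixel is in shadow"
    "key_light": np.array([[False, True], [False, False]]),
    "fill_light": np.array([[False, False], [True, False]]),
}

def lights_to_sample(x, y):
    """Only return lights whose stencil mask says the pixel is lit."""
    return [name for name, mask in stencil.items() if not mask[y, x]]

print(lights_to_sample(1, 0))  # ['fill_light']: no shadow ray to key_light
```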
FIG. 8 shows a triangle 80 for which the normal of an interior pixel 82 is sought, so as to allow the application of Phong shading to the triangle. As with the ray trace example, calculations for a given three dimensional object may be greatly reduced through use of the two dimensional accelerator rendering technique to identify visible polygon portions for a particular eye location. As noted hereinabove, OpenGL does not provide an operation for performing Phong shading, because Phong shading requires that a normal be calculated for every pixel to which the shading is to be applied. In OpenGL, a given surface is composed of a series of triangles that approximate the shape of the surface. (For a curved surface, it is assumed that the surface is sufficiently tessellated so that the human eye is unable to distinguish the triangle mesh from a surface having the true curvature.) OpenGL does not provide for obtaining a normal for each of the pixels within each triangle, so a preferred embodiment compensates for this as follows.

First, Barycentric coordinates are used to represent the coordinates of the vertices for each pixel within a given triangle. Then, a pre-determined function is utilized to encode the Barycentric coordinates as color values, and these color values are assigned to the vertices and used to set the color along the edges of the triangle. (The particular conversion function is not important, so long as the Barycentric values are reversibly and uniquely encoded into color values.) The set-color function results in a color spectrum, along one edge, that ranges in value between the two color values assigned to the vertices forming a given edge segment. Through encoding the Barycentric coordinates as colors, and assigning them along the triangle edges in this fashion, it is now possible to obtain the normal for any pixel interior to the triangle. In a preferred embodiment, different colors are used to encode the different edge segments of the triangle. Alternatively, different colors may be assigned to each of the X, Y, and Z axes, and the segment vertices assigned colors corresponding to their locations with respect to the origin of the three axes.

These color assignments allow the determination of the Barycentric coordinates for any interior pixel. For example, assume that a first edge segment lies on the X axis. The pixel color for any pixel along the X axis segment may now be used to determine the pixel's distance along the X axis. A similar computation may also be performed for a second edge segment of the triangle. Any interior pixel location may be determined from the combination of the first and second segments. That is, if a perpendicular line is drawn from a first pixel along the first edge segment toward the interior of the triangle, and a second perpendicular line is drawn from a second pixel along the second edge segment, the Barycentric coordinates for the point identified by where the two lines intersect may be calculated from the Barycentric coordinates of the first and second pixels. Once the Barycentric coordinates are known, it is relatively simple to calculate the normal for that point. With the normal for that pixel point, it is now possible to apply Phong shading. (This technique also applies to bump map processing.) Such processing is in stark contrast to OpenGL,
for OpenGL only provides normals for the vertices of the triangle, with a normal for an interior point computed as the average of the three defining vertices. The problem with this simplistic approach is that for curved surfaces, which yield a curved triangle, the center normal value is wrong. A preferred embodiment therefore allows for much more detailed and realistic representations.
More information regarding the use of Barycentric coordinates may be found in Curves and Surfaces for Computer Aided Geometric Design, A Practical Guide, 2d Ed., by Gerald Farin (1990); this publication is hereby incorporated herein by reference. In one preferred embodiment, for a triangle having vertices A, B, and C, every texture normal within the triangle is a weighted sum of A, B, and C. And each of the Barycentric coordinates u, v, w is assigned to one of the three sides of the triangle, where u + v + w = 1.0. Further, in alternate embodiments, the processing of Barycentric coordinates may be performed in hardware or software, as neither processing format affects the intrinsic nature of the invention.
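The per-pixel normal recovery can be illustrated with a short sketch. The patent leaves the color-encoding function open (requiring only that it be reversible), so the trivial packing of u and v into the red and green channels below is an assumption made for illustration, as are the type and function names.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Hypothetical decoder: the inverse of the reversible function used to
    // pack Barycentric weights into color values before the two dimensional
    // accelerator interpolated them across the triangle. Here u and v are
    // assumed to sit directly in the red and green channels, so that
    // w = 1 - u - v.
    void decodeBarycentric(float red, float green,
                           float& u, float& v, float& w) {
        u = red;
        v = green;
        w = 1.0f - u - v;
    }

    // Per-pixel normal as the Barycentric weighted sum of the three vertex
    // normals (u + v + w = 1), normalized -- the quantity Phong shading
    // needs at every interior pixel of the triangle.
    Vec3 interiorNormal(const Vec3& nA, const Vec3& nB, const Vec3& nC,
                        float u, float v, float w) {
        Vec3 n = { u * nA.x + v * nB.x + w * nC.x,
                   u * nA.y + v * nB.y + w * nC.y,
                   u * nA.z + v * nB.z + w * nC.z };
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        return { n.x / len, n.y / len, n.z / len };
    }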

Claims

What is claimed is:
1. A rendering apparatus for providing, with respect to a defined viewer location and a defined viewport, a desired rendering of objects defined by object data having an object data format, in a three dimensional object space, the apparatus comprising: a. a graphics accelerator for transforming object data into image data determined with respect to the defined viewer location and the defined viewport; and b. a rendering processor for converting at least one parameter characterizing the desired rendering into parameter data in object data format, feeding the parameter data to the graphics accelerator, and converting resulting image data as to the at least one parameter to a further processed result pertinent to the desired rendering.
2. A rendering apparatus according to claim 1, further comprising an intermediate memory in which is stored the image data from the graphics accelerator, wherein the rendering processor converts the image data stored within the intermediate memory into the further processed result.
3. A rendering apparatus according to claim 1, wherein the image data is defined by values associated with a plurality of pixel locations in an image.

4. A rendering apparatus according to claim 3, wherein each of the objects has a surface represented by a plurality of primitive polygons, and the rendering processor, before feeding the object data to the graphics accelerator, utilizes a tag assigned to the primitive polygons, so as to associate by tag pixel locations with primitive polygons.

5. A rendering apparatus according to claim 3, wherein the rendering processor, before feeding the object data to the graphics accelerator, utilizes a tag assigned to each of the objects, so as to associate by tag pixel locations in the image with objects.

6. A rendering apparatus according to claim 5, wherein the rendering processor, as part of converting resulting image data, identifies by tag the portions of object surfaces present in the image, and restricts further processing associated with the desired rendering to such portions so as to reduce processing overhead associated with the desired rendering.
7. A rendering apparatus according to claim 6, wherein the tag is a color.
8. A rendering apparatus according to claim 5, wherein the tag is a color.

9. A rendering apparatus according to claim 4, wherein the tag is a color.
10. A method for providing, with respect to a defined viewer location and a defined viewport, a desired rendering of objects defined by object data having an object data format, in a three dimensional object space, the method comprising: a. providing a graphics accelerator for transforming object data into image data determined with respect to the defined viewer location and the defined viewport; b. converting at least one parameter characterizing the desired rendering into parameter data in object data format; c. feeding the parameter data to the graphics accelerator; and d. converting resulting image data as to the at least one parameter to a further processed result pertinent to the desired rendering.
11. A method according to claim 10, further comprising providing an intermediate memory for storing the image data from the graphics accelerator, and wherein step (d) also includes converting the image data stored within the intermediate memory into the further processed result.
12. A method according to claim 10, wherein the image data is defined by values associated with a plurality of pixel locations in an image.
13. A method according to claim 12, wherein each of the objects has a surface represented by a plurality of primitive polygons, and wherein step (b) also includes the step of utilizing a tag assigned to the primitive polygons, so as to associate by tag pixel locations with primitive polygons.
14. A method according to claim 12, wherein step (b) also includes the step of utilizing a tag assigned to each of the objects, so as to associate by tag pixel locations in the image with objects.

15. A method according to claim 14, wherein converting resulting image data includes the step of identifying by tag portions of object surfaces present in the image, and restricting further processing associated with the desired rendering to such portions so as to reduce processing overhead associated with the desired rendering.

16. A method according to claim 15, wherein the tag is a color.

17. A method according to claim 14, wherein the tag is a color.

18. A method according to claim 13, wherein the tag is a color.

19. A method for rendering graphics data describing three dimensional objects defined within an object space, the method comprising the steps of: a. selecting a graphics effect which is output resolution dependent; b. rendering the plurality of objects with a two dimensional graphics accelerator, such rendering causing a memory to contain pixel data corresponding to a predetermined output resolution; and c. applying the graphics effect to the pixel data.

20. A graphics rendering program stored on a computer readable medium for providing a desired rendering of objects defined by object data having an object data format, in a three dimensional object space, the program configured so as to be executable by a computer having a two dimensional graphics accelerator for transforming object data into image data determined with respect to a defined viewer location and a defined viewport, the program when loaded into the computer causing the establishment of a rendering apparatus comprising: a. a graphics accelerator for transforming object data into image data determined with respect to the defined viewer location and the defined viewport; and b. a rendering processor for converting at least one parameter characterizing the desired rendering into parameter data in object data format, feeding the parameter data to the graphics accelerator, and converting resulting image data to a further processed result pertinent to the desired rendering.

21. A graphics rendering program according to claim 20, wherein the computer further includes an intermediate memory in which the rendering program causes to be stored the image data from the graphics accelerator, and wherein the rendering processor converts the image data stored within the intermediate memory into the further processed result.