
WO2007017834A2 - Disparity value generator

Disparity value generator

Info

Publication number
WO2007017834A2
Authority
WO
WIPO (PCT)
Prior art keywords
disparity
vertices
pixels
scene
attribute
Prior art date
Application number
PCT/IB2006/052730
Other languages
French (fr)
Other versions
WO2007017834A3 (en)
Inventor
Karl J. Wood
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2007017834A2 publication Critical patent/WO2007017834A2/en
Publication of WO2007017834A3 publication Critical patent/WO2007017834A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/122 Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation

Definitions

  • This invention relates to a method for displaying a three dimensional scene on a display apparatus. More particularly, it relates to a method of generating disparity values for synthesis of multiple scene views.
  • a typical stereoscopic display apparatus incorporates lenticular lenses for directing images of different scene views in different directions towards the viewer, as is described in the commonly assigned International patent application publication WO 1997/47142, although any display apparatus capable of presenting different images to each eye may be used.
  • the typical interocular distance between human eyes is 64mm, and so the images presented to the left and right eyes are nominally those that would be seen when viewing the 3-D scene from two different points 64mm apart.
  • the effective interocular distance between the two scene views may be reduced dramatically to improve viewer comfort while still maintaining a convincing 3-D effect.
  • pixels in the 2-D image can be shifted to the left or right according to their disparity value to synthesise left and right images which rely on the parallax introduced by the shifting to stimulate perception of depth.
  • the distance between a pixel in the left eye image and the corresponding pixel in the right eye image is known as the disparity.
  • the image that the left eye sees is disparate from the image that the right eye sees, and the two images that are seen (out of the nine that are displayed) depend on the angle from which the viewer views the display.
  • Some 3-D displays are only capable of delivering fatigue-free viewing over a limited range of display depths, and exceeding this range may result in unwanted visual artefacts and increased viewer discomfort.
  • the paper "Visualization of arbitrary-shaped 3D scenes on depth-limited 3D displays" by Andre Redert, ISBN: 0-7695-2223-8 describes a 3-D rendering process, whereby a depth map is first high-pass filtered and then scaled before synthesis with a 2-D image to create an enhanced perception of image depth.
  • the paper discloses that for creating the perception of image depth in humans, emphasising local depth differences between objects in a scene is more important than displaying the objects at their geometrically correct depths, and that the emphasis can be achieved by high-pass filtering the depth map for the scene.
  • the paper also discloses that the depth map can be scaled to bring the depth range of the map within the depth range capable of being displayed on the display apparatus to avoid severe image quality loss.
  • the filtered and scaled depth map is considered as a disparity map, each one of the disparity map's values corresponding to a pixel of the 2-D image and setting the disparity required for that pixel between the images of the left and right eye scene views to make the pixel appear at the correct depth in the 3-D image.
  • a 3-D effect is created by rendering a 2-D image with correct perspective and object depth ordering, i.e. making objects appear smaller the further away that they are and making objects in the foreground obscure other objects behind them in the background.
  • the viewer still has the impression of looking directly at the screen rather than some point in front of or behind it.
  • a 2-D computer graphics image could be synthesised with its pixel depth map (available as a by-product of the 2-D rendering process) to give multiple image views for display on a multiple view display.
  • 3-D computer graphics are typically rendered in a 3-D graphics pipeline as is well known to those skilled in the art.
  • the rendering process begins by traversing a scene graph comprising all of the objects to be rendered in the image. As the scene graph is traversed each object required for rendering is passed into the 3-D graphics pipeline in the form of vertices.
  • Each vertex has a position attribute setting the position of the vertex in 3-D space, and may also include other attributes like the vertex's colour or alpha (transparency) or a texture map coordinate. Operations like lighting and shading may be performed on the vertices, and vertices forming image portions that are outside the scene to be rendered or are occluded by other image portions in the scene may be discarded.
  • the vertices are rasterised into pixels, the rasterisation process typically comprising interpolating between the vertices' parameters and assigning the interpolated parameters to the corresponding pixels.
  • multiple pixels are rasterised across each triangular area that is defined by 3 vertices.
  • Some of the rasterised pixels may be textured by a texture map to alter one or more of each pixel's interpolated parameters to increase the realism or 3-D effect of the resulting image.
  • each vertex has a texture map coordinate pointing to a particular texture map value.
  • the texture map coordinates are interpolated to give texture map coordinates for each rasterised pixel.
  • one or more of a pixel's interpolated parameters may be textured by the texture map value pointed to by the pixel's texture map coordinate.
  • the depth of each pixel is held in a depth buffer and the pixels may be depth-tested for each image position by comparing the depths of the pixels at that position so that pixels hidden behind other pixels can be discarded.
  • the colour of the front-most pixel may be blended with the colours of the pixel(s) behind it, as will be apparent to the skilled reader.
  • the pixels are held in the frame buffer and when the whole scene has been rendered they are read out as a 2-D image for display on a display.
  • US Patent 6,664,958 discloses that a texture map may be applied to the depth buffer of the pixels to alter the depths of the pixels by varying amounts.
  • the disclosure asserts that the pixel depth variations introduced by the texture mapping alters the results of the occlusion (depth) test that follows to produce visualisation effects in the 2-D image where one object is partially occluded by another object.
  • this disclosure does not address creation of a disparity map, and texturing the pixel depth buffer necessarily causes distortion to the 2-D image.
  • a method for generating disparity values for pixels of a two dimensional (2-D) image of a three dimensional (3-D) scene, the 3-D scene comprising one or more objects, each object represented by a plurality of vertices, the method comprising: a) storing for each vertex of a portion of an object at least one disparity attribute according to a level of 3-D effect required for the portion of the object; b) rasterising the portion of the object into pixels, the rasterising comprising interpolating between the at least one disparity attribute of the vertices of the portion of the object to give at least one disparity attribute for each pixel; and c) calculating the pixels' respective disparity values according to the pixels' respective at least one disparity attributes; and repeating steps a), b), and c) for a portion of a different object, the portion of the different object requiring a different level of 3-D effect.
  • the first aspect of the invention provides a method for the generation of disparity values for a 3-D scene, wherein the disparity values and a 2-D image of the 3-D scene are used together to synthesise left and right eye image views of the 3-D scene for display on a stereoscopic or multiple view display to give a viewer the perception of viewing the scene in 3-D.
  • Each disparity value specifies how far apart a pixel of the 2-D image of the 3-D scene should be placed in images of left and right eye scene views to stimulate viewer perception of pixel depth.
  • the disparity attributes for the vertices of an object of the 3-D scene may be determined taking into account the level of 3-D effect required for the object.
  • a low level of 3-D effect could be applied for less important objects of the 3-D scene and a higher level of 3-D effect applied to other more important objects of the 3-D scene to enhance and draw attention to them.
  • the level of 3-D effect applied to objects may be dynamically altered with time over a series of images displaying the scene, for example at critical points during a 3-D game the strength of the 3-D effect applied to the image portion representing a game character's head could be suddenly increased to make the head appear to project out towards the viewer.
  • Such a scheme may be used to increase the interaction of the game with the viewer and enable implementation of exciting special effects.
  • the dynamic control of the level of 3-D effect enables an individual user to set the disparity at the level they are most comfortable with viewing.
  • the aforementioned paper "Just Enough Reality: Comfortable 3D Viewing via Microstereopsis" describes how the level of disparity between left and right eye scene views can be reduced (at the expense of 3-D effect) to improve viewer comfort, and to reduce the "lock-in" time for a viewer's eyes to lock onto the left and right eye scene views so the viewer can perceive a 3-D image.
  • the disparity values are generated distinct from the 2-D image, and so processing the depths of objects in the 3-D scene to generate disparity values does not also process (distort) the 2-D image.
  • the storing of the first aspect of the invention further comprises: determining a reference plane associated with the portion of the object in 3-D space, and calculating for vertices of the portion of the object respective disparity attributes according to the vertices' geometric distances from the reference plane.
  • an object's depths in the 3-D scene are preferably high-pass filtered by measuring the geometric distances of the object's vertices from a reference plane. As described earlier with reference to the paper of Andre Redert, high-pass filtering objects' depths results in an enhanced perception of 3-D image depth.
  • the calculation of the disparity attributes takes place using the vertices of the object to give geometric distances, and then the disparity attributes are interpolated to give disparity attributes for the pixels of the object using the existing graphics hardware. This makes the mathematics required for the calculation of the disparity attributes of the pixels straightforward to implement.
  • edges of an object when viewed from the position of the viewer may be textured to increase the 3-D effect between that object and other objects of the scene.
  • the texture map that is applied to the object's edge is a 'Transition' texture, giving a large and very sharp transition in disparity values at the object's edge to emphasise the depth difference between the object and other adjacent objects, increasing the 3-D effect.
  • Different amplitude Transition texture maps may be applied to objects' edges to increase or decrease the level of 3-D effect between the object's edge and other adjacent objects. Furthermore, the values read from the Transition texture maps may be dynamically scaled to give dynamic control of the level of 3-D effect.
  • an apparatus configured to generate disparity values for pixels of a two dimensional (2-D) image of a three dimensional (3-D) scene, the 3-D scene comprising one or more objects, each object represented by a plurality of vertices, the apparatus comprising storage means; and processing means, operable to: a) store for each vertex of a portion of an object at least one disparity attribute according to a level of 3-D effect required for the portion of the object; b) rasterise the portion of the object into pixels, the rasterising comprising interpolating between the at least one disparity attribute of the vertices of the portion of the object to give at least one disparity attribute for each pixel; and c) calculate the pixels' respective disparity values according to the pixels' respective at least one disparity attributes; and repeat steps a), b), and c) for a portion of a different object, the portion of the different object requiring a different level of 3-D effect.
  • the second aspect of the invention provides apparatus configured for the generation of disparity values for a 3-D scene, wherein the disparity values and a 2-D image of the 3-D scene are used together to synthesise left and right eye image views of the 3-D scene for display on a stereoscopic or multiple view display to give a viewer the perception of viewing the scene in 3-D.
  • the disparity attributes for the vertices of an object of the 3-D scene may be determined taking into account the level of 3-D effect required for the object.
  • Figure 1 shows a block diagram of the development of a 3-D graphics application.
  • Figure 2 shows a plan diagram of a user viewing a 3-D display and perceiving a pixel to be further away from them than the display screen.
  • Figure 3 shows a flow diagram of the method for rendering and displaying a 3-D image.
  • Figure 4 shows a block diagram of the architecture for a 3-D graphics pipeline.
  • Figure 5 shows a block diagram of the processing and storage elements of a computer system for running 3-D game middleware.
  • Figure 6 shows a plan view of an object, some of whose faces face towards a viewing position and some of whose faces face away from the viewing position and cannot be seen.
  • Figure 7 shows a flow chart of the method for rendering a 2-D image and disparity map.
  • Figures 8a and 8b show plan diagrams of the calculation of geometric distances.
  • Figure 9 shows a diagram of a disparity map of a scene having one object.
  • Embodiments of the invention will now be described with reference to a 3-D graphics pipeline, although the invention may also be implemented in embodiments using other 3-D graphics tools as will be apparent to the person skilled in the art. For example, steps from the method of the invention may be performed at different stages in the pipeline, and in different orders depending on the exact form of application.
  • FIG. 1 shows the development of the software for a typical 3-D graphics computer game.
  • the artist 10 decides how the objects appearing in the game should be drawn with the aid of 3-D modelling tools 11 to test the various possibilities.
  • once the artist is satisfied with the objects, they are all stored together in a software data structure 12 called a scene graph.
  • Each object in the scene graph comprises vertices that set the shape and colouring of the object.
  • Each vertex has a normal vector specifying the direction that is normal to the object face partially formed by the vertex.
  • each object in the scene graph also comprises the reference plane that is to be used for high-pass filtering the depth of that object in the 3-D graphics pipeline.
  • the programmer 13 writes the software application 14 that controls how the game works and how it should respond to user inputs.
  • the application 14 and scene graph 12 are combined together into the 3-D game middleware 15 which is made available to users on a storage media 16.
  • the storage media 16 is for example an optical disk or a memory on a server that a user can access over a network to obtain the middleware 15.
  • the software is sent to the user's equipment via a signal from a network, for example a user may use their Personal Computer (PC) to connect to an Internet site storing the middleware 15, and then download the middleware 15 to their PC's hard disk for execution at a later time.
  • the diagram of Figure 2 shows how displaying an image pixel 20 in a left eye scene view disparate from an image pixel 21 in a right eye scene view on a screen 25 creates the perception of viewing a virtual pixel 22.
  • the viewer perceives the virtual pixel 22 to be at a different distance away from them than the pixels 20 and 21 shown on the screen 25.
  • the left eye image view comprising pixel 20 is directed towards left eye 23 and the right eye image view comprising pixel 21 is directed towards right eye 24 using a lenticular lens 26.
  • the distance between pixels 20 and 21 on the screen is known as the disparity 27, and the level of disparity may be altered to make the virtual pixel 22 appear closer to or further away from the screen 25, altering the 3-D effect that is seen by the viewer.
  • the principle of displaying one image to the left eye and a different but correlated image to the right eye to create the perception of image depth may be effected by any means capable of displaying different images to the viewer's left and right eyes, like for example a head mounted display.
  • the flow diagram of Figure 3 shows the steps for generating images of the different scene views on a multiple view display.
  • a two dimensional image of the scene from a first viewpoint is rendered.
  • the two dimensional image is W pixels wide, H pixels high, and each pixel has RGBA (Red, Green, Blue, Alpha) components.
  • the 2-D RGBA image is written into a frame buffer.
  • at step 32 a disparity map of the scene having one disparity value D for each pixel is rendered, and at step 33 the disparity map is written into the frame buffer, overwriting the A components of the pixels with the disparity values.
  • each RGBA pixel in the frame buffer after storing the 2-D image in step 31 becomes an RGBD pixel in the frame buffer after overwriting the A components with D components in step 33.
  • because the 3-D graphics pipeline is designed to output pixels with only three components (RGB) one line at a time, in step 34 the W RGBD pixels are read out as though they were W + W/3 RGB pixels; hence three pixels of RGBD, RGBD, RGBD are read out as four pixels of RGB, DRG, BDR, GBD (a sketch of this repacking is given after this list).
  • in step 35 the image synthesiser receives each line of pixels and reads them as W RGBD pixels. It then shifts the pixels according to the disparity D values to synthesise different images of the scene as seen from different viewpoints. Then the image synthesiser combines the images so that when the images are displayed on a multiple view display in step 36 the viewer's eyes each see a different image view.
  • the D values are scaled to fit within the disparity range capable of being displayed on the display. This scaling is done within the image synthesiser, or within the 3-D graphics pipeline if knowledge of the display device capabilities is available.
  • the flow diagram of Figure 3 illustrates the case where the disparity map is rendered after the rendering of the 2-D image.
  • the disparity map is for example rendered before the 2-D image or rendered simultaneously with the 2-D image if sufficient processing power is available in the 3-D graphics pipeline.
  • FIG 4 shows the architecture of a 3-D graphics pipeline.
  • the vertex shader 41 runs vertex programs 511 (see Figure 5 described below) to process vertices; the vertex programs perform many operations on the vertices, like for example lighting them according to any light sources that are present and transforming their positions according to changes in the object's position.
  • the block 42 clips vertices that are outside the viewer's field of view and culls vertices that form back-facing object faces that are occluded from view.
  • the culling process is further discussed below in relation to Figure 6.
  • the process known as the homogeneous divide also occurs in this block, adding a fourth dimension to each vertex specifying how the X, Y, Z dimensions of the vertex should be scaled for perspective effects, causing objects that are further away from the viewer to be reduced in size.
  • the rasteriser 43 defines image pixels by interpolating between stored attributes of vertices forming each image face. For example if an image face is triangular having two vertices of a white colour and one vertex of a black colour, then the colour of the rasterised pixels of the object face will progressively change from white along one side of the triangle through grey and to black at the opposite corner of the triangle.
  • the other attributes of the vertices are also interpolated, like for example the vertex's position and texture map coordinates.
  • the depth (Z) dimensions of the rasterised pixels are stored in a Z buffer.
  • the pixel shader 44 runs pixel programs 510 (see Figure 5) to process pixels; the pixel programs may perform many operations on the pixels, like for example shading and texturing them so that the colours of rasterised pixels of a particular object face do not remain smoothly interpolated but become textured and more life-like.
  • the block 45 performs a pixel depth test: for each image position it compares the Z buffer depths of the pixels at that position and discards all the pixels except the one that is closest to the viewer. One pixel then remains for every image position, and the remaining pixels are stored in the frame buffer 46.
  • FIG. 5 shows the processing and storage elements of a computer system configured for running 3-D game middleware 15.
  • the Central Processing Unit (CPU) 50 handles the scene graph 52 and the 3-D graphics application 53 from data in memory 51.
  • the 3-D graphics application specifies when events should occur and the scene graph 52 specifies the 3-D objects as discussed above in relation to Figure 1.
  • the CPU 50 traverses the scene graph 52 and sends the objects in the scene graph required for rendering on the display to the Graphics Processing Unit (GPU) 55.
  • the GPU 55 stores the objects to be rendered in memory 56 by storing vertices defining the objects in the vertex buffer 57.
  • the vertices typically define triangles forming the object's surfaces, however the vertices may also define other shapes like for example quadrilaterals.
  • Each vertex has a series of attributes including the position of the vertex in 3-D space and the colour of the vertex. Many other attributes, like a texture map identifier for pointing to a particular texture map, may be stored with each vertex depending on how the artist 10 has defined the object.
  • the constants buffer 58 stores constant parameters like for example the position of light sources in the 3-D scene and procedural data for animation effects.
  • the constants buffer 58 also stores the position of the reference plane used for high-pass filtering an object's depth values, although in other embodiments the reference plane may be stored in other ways, like for example as an attribute to each vertex.
  • Vertex programs 511 are also stored in memory 56, and these define how vertices should be processed like for example to transform them from one position to another or alter their colour according to the light falling on them in the scene. Vertex programs 511 may be written by the artist 10 using modelling tools 11 to implement different graphical effects.
  • the memory 56 includes pixel programs 510 that control how pixels should be processed, like for example to texture or shade pixels so they form a more realistic and life-like image of the scene.
  • the memory 56 includes textures 59 that the pixel programs use to texture pixels, for example a pixel program 510 may use a brick wall texture 59 to texture a flat object so that it looks like a brick wall.
  • the frame buffer 512 stores the final image pixels, and when the scene has been fully rendered the frame buffer's contents are sent to the display device for display to the viewer.
  • the process of culling is now discussed further in relation to the plan diagram in Figure 6 of a viewer viewing an object.
  • the culling process comprises identifying the object faces 63, 610 that cannot be seen when viewed from the viewer's viewing point 60.
  • the operation of this process is not only relevant to culling vertices, but also to determining where the edges of an object appear to be from a particular viewing point.
  • the object 61 comprises face 62 facing towards the viewer 60 and face 63 facing away from the viewer 60.
  • the eye vectors 64, 67 point from the object faces 62, 63 towards the viewer's position 60.
  • the normal vectors 65, 68 point in the direction that is normal to the object faces 62, 63. For each object face the angle between the face's eye vector and the face's normal vector is obtained. If the angle is greater than 90 degrees, like angle 69 is, then the object's face is back-facing and cannot be seen by the viewer and so the vertices defining the object face should be discarded.
  • FIG. 7 shows a flow diagram of the process in the 3-D graphics pipeline for rendering a disparity map of the scene. This process is now described with reference to Figures 4 - 7.
  • the flow diagram begins at block 70 where objects to be rendered have just been broken down into vertices that are stored in the vertex buffer 57.
  • In block 71 the vertices of a portion of an object are passed into the vertex shader 41.
  • Block 71 is where disparity attributes are stored with vertices according to the level of 3-D effect required.
  • a vertex program 511 calculates for each object face the angle between the eye vector and the face's normal vector to determine whether the face is forward-facing or backward-facing, in the same manner as previously described in relation to the culling process.
  • the vertex program 511 identifies and stores texture map coordinates as disparity attributes to the vertices that lie on or close to the boundary 611 between forward facing 62 and backward facing 610 object faces when the object is viewed from the viewing point 60.
  • a vertex is determined to lie on or close to the boundary 611 if it has a calculated angle between the eye vector and normal vector of substantially 90 degrees.
  • texture map coordinates are assigned to vertices having angles between 75 and 105 degrees; however, this will vary according to the shape of the object and the number of vertices used to define it. For example, the size of the angle 613 between forward and backward facing object faces obviously influences the range of calculated angles that should be used. Identifying the vertices forming edges of objects is a process well known in the art as 'silhouette edge detection', and this process is described in the book 'Game Programming Gems 2', pages 436-443, ISBN 1-58450-054-9.
  • the texture map coordinates point to values in a texture map 59 that is stored in memory 56, and these values are later used to texture pixels' disparity attributes to create an enhanced perception of image depth. These texture map values are commonly known in the art as texels.
  • the texture map coordinates for different objects are set to point to different texture maps. The different texture maps have different texel values, enabling different levels of 3-D effect to be applied for the different objects.
  • a vertex program 511 calculates the geometric distance from a reference plane to each vertex of a portion of an object.
  • the determination of the reference plane and the calculation is later explained in relation to Figures 8a and 8b.
  • the geometric distances are effectively high-pass filtered versions of the depths of portions of objects and the vertices' disparity attributes are set according to the geometric distances. This high-pass filtering causes the disparity values to give the viewer an enhanced perception of image depth after the disparity values are calculated from the disparity attributes.
  • the disparity attributes are stored as vertices' colours, for example the geometric distance of a particular vertex from a reference plane could be stored as that vertex's colour.
  • the block 73 is where culling and clipping of vertices and the homogeneous divide takes place, as discussed above in relation to Figure 4 block 42.
  • the vertices are rasterised into pixels and the attributes of the vertices are interpolated to give pixel attributes as discussed earlier in relation to Figure 4 block 43.
  • the vertices' texture map coordinates are also interpolated, and if there is a first vertex with a texture map coordinate (disparity attribute) pointing at a first texture map value (texel), and a second vertex with a texture map coordinate (disparity attribute) pointing at a second texture map value (texel), then when a pixel is rasterised between the first and second vertices the pixel will have a texture map coordinate (disparity attribute) that points to a texture map value (texel) that is between the first texture map value and the second texture map value.
  • the rasterisation of pixels results in the pixels having interpolated colours that are representative of interpolated geometric distances.
  • if the geometric distances were not represented as colours in block 71, but as additional vertex attributes, the rasterisation of pixels results in the pixels having interpolated additional attributes that are representative of interpolated geometric distances.
  • the rasterised pixels are sent to the pixel shader 44, and a pixel shader program 510 retrieves the texture map values in texture map 59 pointed to by the pixels' texture map coordinates.
  • the texture map values are used to texture the pixels' disparity attributes to enhance the viewers' perception of image depth. For example, in a preferred embodiment where a pixel's interpolated geometric distance was earlier stored as the pixel's colour, the pixel's colour is then textured with the texture map value pointed to by the pixel's texture map coordinate. Hence the pixel's colour becomes a combination of the interpolated geometric distance and the texture map value.
  • in embodiments where texture map coordinates have been assigned to pixels and where geometric distances are stored as an additional attribute to each pixel, the texture map values are combined with this additional attribute. The pixels' disparity values are then set as the combined geometric distances and texture map values.
  • the level of 3-D effect applied to a portion of an object of the 3-D scene is changed by scaling the disparity values.
  • Many methods of achieving this scaling will be apparent to those skilled in the art, like for example changing the position of the reference plane to affect the size of the geometric distances, or multiplying the disparity values or geometric distances or texture map values by a factor having a value according to the level of 3-D effect required.
  • the texture map coordinates point to values in a Transition texture map.
  • the Transition texture map textures vertices' disparity attributes to give a sharp transition in disparity values at the edges of objects to emphasise the 3-D effect.
  • the choice of Transition texture map may be used to control the level of 3-D effect; for example the coordinates may point to a Transition texture map having very high texel values to give a high level of 3-D effect.
  • the disparity map values are stored into the frame buffer, forming W RGBD pixels as described earlier in relation to Figure 3.
  • Figure 7 describes a preferred embodiment where disparity attributes are stored for both the geometric distances and for the texture map coordinates.
  • the benefits of object level control of 3-D effects are still obtained by implementing only one of geometric distances and texture map coordinates.
  • geometric distances are not calculated and only the Transition texture map texel values are used in calculating the disparity values.
  • Transition texture map coordinates are not assigned and only the geometric distances are used in calculating the disparity values.
  • the reference plane 82, 812 is defined to be closer to the portion of the object 80, 810 than the viewer 85, 815 and to be substantially the same distance away from the viewer 85, 815 in all the directions of the vertices of the portion of the object 80, 810.
  • the distance from the viewing point 85, 815 to the reference plane 82, 812 is a low-frequency (i.e. slowly-varying) component of the distance from the viewing point 85, 815 to the six different vertices of the portion of the object 80, 810.
  • the distance from the reference plane 82, 812 to the vertices of the portions of the object 80, 810 is the high-frequency (i.e. fast-varying) component of the distance from the viewing point 85, 815 to the six different vertices of the portion of the object 80, 810. Therefore the geometric distances 84, 814 are a high-pass filtered version of the distances from the viewing point 85, 815 to the vertices of the object portion 80, 810.
  • the reference plane 82 intersects with the object 80 so that virtually all of the low frequency components of the distances from the viewing point 85 to the vertices of the portion of the object 80 are removed to give high-pass filtered geometric distances.
  • the reference plane may simply be closer to the portion of the object 810 than the viewing point 815 to remove a smaller portion of the low frequency component distances.
  • the eye vector 83 is the vector that points from the portion of the object 80 towards the viewing point 85.
  • the reference plane 82 is defined to be normal to the eye vector 83 of the portion of the object 80 in order to make all the distances from the viewing point to the reference plane in the directions of the vertices of the object 80 substantially the same.
  • the reference plane is defined to be normal to the Z-axis 816, and while this gives more variation between the distances from the viewing point to the reference plane in the directions of the vertices of the portion of the object 810, it is still sufficient to remove the low frequency component without having too adverse an effect on the accuracy of the measurement of the high frequency components (geometric distances).
  • the reference plane is closer to the portion of the object than the viewer and is substantially the same distance away from the viewer in the directions towards the vertices of the portion of the object.
  • the reference plane is stored as a constant 58, however in other embodiments the reference plane may be stored in other ways like for example as an attribute to each vertex.
  • each object has a different reference plane for straightforward and accurate high-pass filtering, however in a second embodiment the same reference plane may be applied to every object in the scene, or in a third embodiment there may be different reference planes for different portions of the same object.
  • These reference plane definitions still perform the objective of high-pass filtering the distance from the viewer to the vertices of the portion of the object.
  • the geometric distances are scaled during their calculation to set the level of image depth enhancement that is required for a particular object.
  • object level control of the level of depth enhancement is achieved by an additional calculation that scales the geometric distances at any stage after their calculation.
  • in an exemplary embodiment the disparity value for each pixel is calculated by adding, firstly, the pixel's interpolated disparity attribute derived from the vertices' disparity attributes set according to their geometric distances and, secondly, the Transition texture map value pointed to by the pixel's interpolated disparity attribute derived from the vertices' Transition texture map coordinates (a sketch of this combination is given after this list).
  • Figure 9 shows a plan diagram of exemplary scene objects 90, 91, 92 with reference planes 93, 94, 95 respectively being viewed from the direction of eye vector 96.
  • the diagrammatic guidelines 910, 911, 912 relate the transitions between objects 90, 91, 92 to the values plotted on axes 97, 98 and 99.
  • the geometric distances on axes 97 show for each object how far away the visible surfaces of the object are from the reference plane of the object.
  • the geometric distances are scaled according to the level of 3-D effect required.
  • the Transition texture map values on axes 98 show the Transition texture map being applied at the edges of each object.
  • the amplitude of the Transition texture map values is altered according to the level of 3-D effect required.
  • the texture map coordinates may be distributed over a larger or smaller range of pixels respectively to increase or reduce the number of pixels around the object's edge to which the Transition texture map is applied.
  • the disparity values are shown on axes 99.
  • the Transition texture map values and geometric distances are added to give disparity values, however in a further embodiment the Transition or geometric distance values may be scaled by a factor before addition to give a higher weighting to the Transition texture map values or the geometric distance in the disparity value.
  • only one of the geometric distances or the Transition texture map coordinates are stored as disparity attributes, and the disparity values are the same as (or a scale factor of) the geometric distances or the Transition texture map values respectively.
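Referring back to the combination described in the bullets above, the following C++ sketch adds a pixel's interpolated geometric distance to the Transition texture value selected by its interpolated texture coordinate, with two weights standing in for the per-object scaling of the level of 3-D effect. The one-dimensional texture, the nearest-texel lookup and the use of a negative coordinate to mean "no edge" are assumptions of the sketch, not the patented implementation.

```cpp
#include <cstddef>
#include <vector>

// Per-pixel values produced by rasterisation: the interpolated geometric
// distance (high-pass filtered depth) and the interpolated Transition texture
// coordinate (negative when the pixel is not near an object edge).
struct DisparityInputs {
    float geometricDistance;
    float transitionCoordinate;
};

// Calculate a pixel's disparity value as the weighted sum of its interpolated
// geometric distance and the Transition texture value its coordinate points
// to. The weights model the per-object scaling of the level of 3-D effect.
float disparityValue(const DisparityInputs& in,
                     const std::vector<float>& transitionTexture,
                     float geometricWeight,
                     float transitionWeight)
{
    float transition = 0.0f;
    if (in.transitionCoordinate >= 0.0f && !transitionTexture.empty()) {
        const std::size_t last = transitionTexture.size() - 1;
        std::size_t index = static_cast<std::size_t>(in.transitionCoordinate * last + 0.5f);
        if (index > last) index = last;                  // clamp to the texture
        transition = transitionTexture[index];
    }
    return geometricWeight * in.geometricDistance + transitionWeight * transition;
}
```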
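The frame buffer read-out of step 34 referred to above amounts to regrouping the same component stream into three-component pixels. The sketch below assumes 8-bit components and a line width W that is a multiple of 3; the struct names are illustrative only.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Rgbd { uint8_t r, g, b, d; };   // rendered pixel: colour plus disparity
struct Rgb  { uint8_t c0, c1, c2; };   // what the pipeline outputs per pixel

// Read a line of W RGBD pixels out of the frame buffer as W + W/3 RGB pixels:
// the component stream RGBD RGBD RGBD ... is simply re-divided into groups of
// three, giving RGB, DRG, BDR, GBD, ...
std::vector<Rgb> readOutLine(const std::vector<Rgbd>& line)
{
    // Flatten the four-component pixels into one component stream.
    std::vector<uint8_t> stream;
    stream.reserve(line.size() * 4);
    for (const Rgbd& p : line) {
        stream.push_back(p.r);
        stream.push_back(p.g);
        stream.push_back(p.b);
        stream.push_back(p.d);
    }
    // Re-divide the stream into three-component pixels: 4*W bytes -> W + W/3 pixels.
    std::vector<Rgb> out;
    out.reserve(stream.size() / 3);
    for (std::size_t i = 0; i + 2 < stream.size(); i += 3)
        out.push_back(Rgb{stream[i], stream[i + 1], stream[i + 2]});
    return out;
}
// The image synthesiser performs the inverse regrouping to recover the W RGBD
// pixels before shifting them according to their D values.
```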


Abstract

The invention discloses a method and apparatus configured for generating disparity values for pixels of a two dimensional (2-D) image of a three dimensional (3-D) scene. The 3-D scene comprises one or more objects and each object is represented by a plurality of vertices. The disparity values and 2-D image are subsequently used for synthesising left and right eye image views of the 3-D scene for display on a stereoscopic or multiple view display, giving a viewer the perception of viewing the scene in 3-D. The method comprises storing for each vertex of a portion of a first object at least one disparity attribute according to a level of 3-D effect required for the portion of the first object, and storing for each vertex of a portion of a second object at least one disparity attribute according to a different level of 3-D effect required for the portion of the second object. Then the vertices of the portion of the object are rasterised into pixels. The rasterising process comprises interpolating between the disparity attributes of the vertices to give disparity attributes for each rasterised pixel. Finally each pixel's disparity value is calculated according to the pixel's disparity attributes.

Description

DESCRIPTION
DISPARITY VALUE GENERATOR
This invention relates to a method for displaying a three dimensional scene on a display apparatus. More particularly, it relates to a method of generating disparity values for synthesis of multiple scene views.
It is well known in the art to display multiple views of a scene on a specially configured display apparatus to create the impression of viewing the scene in three dimensions (3-D) to a user. Typically an image showing one scene view is presented to the user's left eye and another image showing a different scene view is presented to the user's right eye, resulting in the user perceiving a 3-D image. This is commonly referred to as stereoscopy. A typical stereoscopic display apparatus incorporates lenticular lenses for directing images of different scene views in different directions towards the viewer, as is described in the commonly assigned International patent application publication WO 1997/47142, although any display apparatus capable of presenting different images to each eye may be used. The typical interocular distance between human eyes is 64mm, and so the images presented to the left and right eyes are nominally those that would be seen when viewing the 3-D scene from two different points 64mm apart. However, as described in the paper "Just Enough Reality: Comfortable 3D Viewing via Microstereopsis ", Nagata et al, IEEE Transactions on Circuits and Systems for Video Technology, Vol.10, No.3, pp.387-396, 2000, the effective interocular distance between the two scene views may be reduced dramatically to improve viewer comfort while still maintaining a convincing 3-D effect.
There are also multiple view autostereoscopic display devices, such as the Philips 3D LCD described in the paper "Image preparation for 3-D LCD" by Cees van Berkel, available in Proceedings of the SPIE Vol. 3639, pp. 84-91, Stereoscopic Displays and Virtual Reality Systems VI. Instead of separate images for each of just two scene views, there are in general N images respectively for N scene views. By having multiple views, the 3D effect may be enhanced: as the viewer moves laterally relative to the screen, the image may appear to turn as the 3D image presented is composed of different pairs of 2D images. Dedicating one 3-D graphics renderer to rendering each image can become expensive, and time multiplexing a renderer is often impractical because such a renderer is typically used to capacity in rendering one image per frame period in order to provide for smooth motion.
As described in the commonly assigned International patent application publication WO 98/43442, it is possible to instead synthesise the multiple images from a single two dimensional (2-D) image and a depth map holding the distances of each 2-D image pixel from the position of the viewer to give considerable savings in processing overheads. To synthesise the multiple images the depth map is considered as a disparity map holding a disparity value for each pixel of the 2-D image. Each pixel's disparity value sets how far apart that pixel in the left eye image should be from that pixel in the right eye image in order to make the pixel appear to the viewer to be at the correct distance (depth) away from them. For example, in a simple stereoscopic application where respective images are required for the left and right eyes, pixels in the 2-D image can be shifted to the left or right according to their disparity value to synthesise left and right images which rely on the parallax introduced by the shifting to stimulate perception of depth. The distance between a pixel in the left eye image and the corresponding pixel in the right eye image is known as the disparity. In an autostereoscopic display with for example 9 different images (9 different scene views) the image that the left eye sees is disparate from the image that the right eye sees, and the two images that are seen (out of the nine that are displayed) depend on the angle from which the viewer views the display.
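By way of illustration only, the following C++ sketch shows how one row of a 2-D image might be shifted according to a per-pixel disparity map to synthesise left-eye and right-eye view rows. The Rgb layout, the half-shift-per-eye convention and the treatment of disocclusion gaps (simply left black) are assumptions of the sketch, not the implementation described in WO 98/43442.

```cpp
#include <cstdint>
#include <vector>

// One pixel of an image row: packed RGB components.
struct Rgb { uint8_t r, g, b; };

// Synthesise left and right view rows from one 2-D image row and its disparity
// row by displacing each source pixel half its disparity in opposite
// directions. Pixels shifted outside the row are dropped; disocclusion gaps
// are left black in this sketch.
void synthesiseViews(const std::vector<Rgb>& image,
                     const std::vector<int>& disparity,   // disparity in pixels
                     std::vector<Rgb>& leftView,
                     std::vector<Rgb>& rightView)
{
    const int w = static_cast<int>(image.size());
    leftView.assign(image.size(), Rgb{0, 0, 0});
    rightView.assign(image.size(), Rgb{0, 0, 0});

    for (int x = 0; x < w; ++x) {
        const int half = disparity[x] / 2;    // half the disparity per eye (assumed convention)
        const int xl = x + half;              // destination in the left-eye view
        const int xr = x - half;              // destination in the right-eye view
        if (xl >= 0 && xl < w) leftView[xl]  = image[x];
        if (xr >= 0 && xr < w) rightView[xr] = image[x];
    }
}
```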
Some 3-D displays are only capable of delivering fatigue-free viewing over a limited range of display depths, and exceeding this range may result in unwanted visual artefacts and increased viewer discomfort. The paper "Visualization of arbitrary-shaped 3D scenes on depth-limited 3D displays" by Andre Redert, ISBN: 0-7695-2223-8 describes a 3-D rendering process, whereby a depth map is first high-pass filtered and then scaled before synthesis with a 2-D image to create an enhanced perception of image depth. The paper discloses that for creating the perception of image depth in humans, emphasising local depth differences between objects in a scene is more important than displaying the objects at their geometrically correct depths, and that the emphasis can be achieved by high-pass filtering the depth map for the scene. The paper also discloses that the depth map can be scaled to bring the depth range of the map within the depth range capable of being displayed on the display apparatus to avoid severe image quality loss. The filtered and scaled depth map is considered as a disparity map, each one of the disparity map's values corresponding to a pixel of the 2-D image and setting the disparity required for that pixel between the images of the left and right eye scene views to make the pixel appear at the correct depth in the 3-D image.
In the field of 3-D computer graphics, a 3-D effect is created by rendering a 2-D image with correct perspective and object depth ordering, i.e. making objects appear smaller the further away that they are and making objects in the foreground obscure other objects behind them in the background. However, the viewer still has the impression of looking directly at the screen rather than some point in front of or behind it. It has been proposed that a 2-D computer graphics image could be synthesised with its pixel depth map (available as a by-product of the 2-D rendering process) to give multiple image views for display on a multiple view display. However, it is difficult to take the depth map for the scene and identify the parts of the depth map that correspond to particular objects in the scene since information on individual objects is not present in the 2-D image or the depth map.
3-D computer graphics are typically rendered in a 3-D graphics pipeline as is well known to those skilled in the art. The rendering process begins by traversing a scene graph comprising all of the objects to be rendered in the image. As the scene graph is traversed each object required for rendering is passed into the 3-D graphics pipeline in the form of vertices. Each vertex has a position attribute setting the position of the vertex in 3-D space, and may also include other attributes like the vertex's colour or alpha (transparency) or a texture map coordinate. Operations like lighting and shading may be performed on the vertices, and vertices forming image portions that are outside the scene to be rendered or are occluded by other image portions in the scene may be discarded. The vertices are rasterised into pixels, the rasterisation process typically comprising interpolating between the vertices' parameters and assigning the interpolated parameters to the corresponding pixels. Typically multiple pixels are rasterised across each triangular area that is defined by 3 vertices.
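The interpolation performed during rasterisation can be pictured with barycentric weights over a triangle. The sketch below is a minimal illustration of that idea; the function names and the use of plain (non-perspective-correct) interpolation are assumptions of the example, since real pipelines typically apply perspective correction.

```cpp
#include <array>

struct Vec2 { float x, y; };

// Edge function: signed twice-area of the triangle (a, b, c).
static float edge(const Vec2& a, const Vec2& b, const Vec2& c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Interpolate a scalar vertex parameter (a colour channel, depth, texture map
// coordinate or disparity attribute) at pixel centre p inside the triangle
// v[0]-v[1]-v[2] using barycentric weights, as a rasteriser typically does.
float interpolateAttribute(const std::array<Vec2, 3>& v,
                           const std::array<float, 3>& attribute,
                           const Vec2& p)
{
    const float area = edge(v[0], v[1], v[2]);
    const float w0 = edge(v[1], v[2], p) / area;   // weight of vertex 0
    const float w1 = edge(v[2], v[0], p) / area;   // weight of vertex 1
    const float w2 = edge(v[0], v[1], p) / area;   // weight of vertex 2
    return w0 * attribute[0] + w1 * attribute[1] + w2 * attribute[2];
}
```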
Some of the rasterised pixels may be textured by a texture map to alter one or more of each pixel's interpolated parameters to increase the realism or 3-D effect of the resulting image. Typically for texture mapping each vertex has a texture map coordinate pointing to a particular texture map value. When the vertices are rasterised into pixels the texture map coordinates are interpolated to give texture map coordinates for each rasterised pixel. Then one or more of a pixel's interpolated parameters may be textured by the texture map value pointed to by the pixel's texture map coordinate.
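As a hedged sketch of that lookup, the fragment below shows how a pixel's already-interpolated texture map coordinate selects a texel that then modifies one of the pixel's interpolated parameters. The single-channel texture, nearest-neighbour sampling and multiplicative combination are illustrative assumptions, not the texturing method prescribed by the description.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// A single-channel texture map with texel values in [0, 1].
struct Texture {
    int width = 0, height = 0;
    std::vector<float> texels;   // row-major, width * height values

    // Nearest-neighbour sample at normalised coordinates (u, v) in [0, 1].
    float sample(float u, float v) const {
        const int x = std::clamp(static_cast<int>(std::lround(u * (width  - 1))), 0, width  - 1);
        const int y = std::clamp(static_cast<int>(std::lround(v * (height - 1))), 0, height - 1);
        return texels[static_cast<std::size_t>(y) * width + x];
    }
};

// Texture one of a pixel's interpolated parameters: (u, v) are the pixel's
// texture map coordinates, themselves interpolated from the vertices'
// coordinates, and the fetched texel scales the parameter.
float texturePixelParameter(float interpolatedParameter, float u, float v, const Texture& tex)
{
    return interpolatedParameter * tex.sample(u, v);
}
```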
The depth of each pixel is held in a depth buffer and the pixels may be depth-tested for each image position by comparing the depths of the pixels at that position so that pixels hidden behind other pixels can be discarded. In cases where the front-most pixel has an alpha value making the front-most pixel wholly or partially transparent, the colour of the front-most pixel may be blended with the colours of the pixel(s) behind it, as will be apparent to the skilled reader. The pixels are held in the frame buffer and when the whole scene has been rendered they are read out as a 2-D image for display on a display.
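A minimal sketch of such a depth test follows, assuming smaller depth values are closer to the viewer and ignoring the alpha-blending case: for each image position only the closest fragment survives into the frame buffer.

```cpp
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

// One rasterised fragment: its image position, depth and packed colour.
struct Fragment { int x, y; float depth; uint32_t rgba; };

// Depth-test a stream of rasterised fragments: a fragment replaces the stored
// pixel only if it is closer to the viewer (smaller depth is assumed to mean
// closer); hidden fragments are discarded. Blending of transparent front-most
// fragments is omitted from this sketch.
void depthTest(const std::vector<Fragment>& fragments, int width, int height,
               std::vector<float>& depthBuffer, std::vector<uint32_t>& frameBuffer)
{
    depthBuffer.assign(static_cast<std::size_t>(width) * height,
                       std::numeric_limits<float>::infinity());
    frameBuffer.assign(static_cast<std::size_t>(width) * height, 0u);

    for (const Fragment& f : fragments) {
        const std::size_t i = static_cast<std::size_t>(f.y) * width + f.x;
        if (f.depth < depthBuffer[i]) {     // closer than what is stored so far
            depthBuffer[i] = f.depth;
            frameBuffer[i] = f.rgba;
        }
    }
}
```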
US Patent 6,664,958 discloses that a texture map may be applied to the depth buffer of the pixels to alter the depths of the pixels by varying amounts. The disclosure asserts that the pixel depth variations introduced by the texture mapping alters the results of the occlusion (depth) test that follows to produce visualisation effects in the 2-D image where one object is partially occluded by another object. However this disclosure does not address creation of a disparity map, and texturing the pixel depth buffer necessarily causes distortion to the 2-D image.
It is therefore an object of the present invention to provide an improved method for generating disparity values.
According to a first aspect of the invention, there is provided a method for generating disparity values for pixels of a two dimensional (2-D) image of a three dimensional (3-D) scene, the 3-D scene comprising one or more objects, each object represented by a plurality of vertices, the method comprising: a) storing for each vertex of a portion of an object at least one disparity attribute according to a level of 3-D effect required for the portion of the object; b) rasterising the portion of the object into pixels, the rasterising comprising interpolating between the at least one disparity attribute of the vertices of the portion of the object to give at least one disparity attribute for each pixel; and c) calculating the pixels' respective disparity values according to the pixels' respective at least one disparity attributes; and repeating steps a), b), and c) for a portion of a different object, the portion of the different object requiring a different level of 3-D effect.
The first aspect of the invention provides a method for the generation of disparity values for a 3-D scene, wherein the disparity values and a 2-D image of the 3-D scene are used together to synthesise left and right eye image views of the 3-D scene for display on a stereoscopic or multiple view display to give a viewer the perception of viewing the scene in 3-D. Each disparity value specifies how far apart a pixel of the 2-D image of the 3-D scene should be placed in images of left and right eye scene views to stimulate viewer perception of pixel depth. Owing to the first aspect of the invention, the disparity attributes for the vertices of an object of the 3-D scene may be determined taking into account the level of 3-D effect required for the object. This enables object level control of the strength of the 3-D effect that the viewer sees. For example a low level of 3-D effect could be applied for less important objects of the 3-D scene and a higher level of 3-D effect applied to other more important objects of the 3-D scene to enhance and draw attention to them. The level of 3-D effect applied to objects may be dynamically altered with time over a series of images displaying the scene, for example at critical points during a 3-D game the strength of the 3-D effect applied to the image portion representing a game character's head could be suddenly increased to make the head appear to project out towards the viewer. Such a scheme may be used to increase the interaction of the game with the viewer and enable implementation of exciting special effects.
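Purely as an illustration of steps a) to c), and not of the claimed implementation, the sketch below stores a per-vertex disparity attribute scaled by an object-level 3-D effect factor, interpolates it with a pixel's barycentric weights during rasterisation, and derives the pixel's disparity value from the result. The choice of attribute (scaled depth) and the identity mapping in step c) are assumptions; the embodiments described later combine the attribute with texture map values and further scaling.

```cpp
#include <array>
#include <vector>

struct Vertex {
    float x, y, z;                   // position of the vertex in 3-D space
    float disparityAttribute = 0.f;  // step a): disparity attribute stored per vertex
};

// Step a): store a disparity attribute for each vertex of a portion of an
// object; here simply the vertex depth scaled by the level of 3-D effect
// required for that object (an assumed, minimal choice of attribute).
void storeDisparityAttributes(std::vector<Vertex>& portion, float effectLevel)
{
    for (Vertex& v : portion)
        v.disparityAttribute = v.z * effectLevel;
}

// Steps b) and c) for one rasterised pixel of a triangle of the portion:
// interpolate the vertices' disparity attributes with the pixel's barycentric
// weights, then calculate the pixel's disparity value from the interpolated
// attribute.
float pixelDisparityValue(const std::array<Vertex, 3>& triangle,
                          const std::array<float, 3>& barycentricWeights)
{
    const float attribute = barycentricWeights[0] * triangle[0].disparityAttribute
                          + barycentricWeights[1] * triangle[1].disparityAttribute
                          + barycentricWeights[2] * triangle[2].disparityAttribute; // step b)
    return attribute;                                                               // step c)
}
```

Repeating the same calls for a portion of a different object with a different effectLevel gives the object-level control of the 3-D effect described above.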
Furthermore, the dynamic control of the level of 3-D effect (disparity) enables an individual user to set the disparity at the level they are most comfortable with viewing. The aforementioned paper "Just Enough Reality: Comfortable 3D Viewing via Microstereopsis" describes how the level of disparity between left and right eye scene views can be reduced (at the expense of 3-D effect) to improve viewer comfort, and to reduce the "lock-in" time for a viewer's eyes to lock onto the left and right eye scene views so the viewer can perceive a 3-D image. Advantageously, the disparity values are generated distinct from the 2-D image, and so processing the depths of objects in the 3-D scene to generate disparity values does not also process (distort) the 2-D image.
In an embodiment, there is provided a method wherein the storing of the first aspect of the invention further comprises: - determining a reference plane associated with the portion of the object in 3-D space; and
- calculating for vertices of the portion of the object respective disparity attributes according to the vertices' geometric distances from the reference plane. Following the specific embodiment outlined above, an object's depths in the 3-D scene are preferably high-pass filtered by measuring the geometric distances of the object's vertices from a reference plane. As described earlier with reference to the paper of Andre Redert, high-pass filtering objects' depths results in an enhanced perception of 3-D image depth.
Advantageously, the calculation of the disparity attributes takes place using the vertices of the object to give geometric distances, and then the disparity attributes are interpolated to give disparity attributes for the pixels of the object using the existing graphics hardware. This makes the mathematics required for the calculation of the disparity attributes of the pixels straightforward to implement.
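A hedged sketch of that calculation follows, assuming the reference plane is represented by a point on the plane and a unit normal (in the embodiments it may equally be held as a pipeline constant or a per-vertex attribute): the signed distance of each vertex from the plane retains only the fast-varying part of the vertex's depth and is stored, optionally scaled per object, as the vertex's disparity attribute.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Reference plane held as a point on the plane and a unit normal
// (an assumed representation for this sketch).
struct ReferencePlane { Vec3 point; Vec3 unitNormal; };

// Signed geometric distance of a vertex from the reference plane. With the
// plane placed close to the object and roughly normal to the eye vector, this
// removes the slowly-varying part of the vertex's depth, i.e. it acts as a
// high-pass filter on the depth values.
float geometricDistance(const Vec3& vertex, const ReferencePlane& plane)
{
    const Vec3 d{vertex.x - plane.point.x, vertex.y - plane.point.y, vertex.z - plane.point.z};
    return dot(d, plane.unitNormal);
}

// Store the high-pass filtered depths as per-vertex disparity attributes,
// scaled by the level of 3-D effect required for this portion of the object.
void storeGeometricDistances(const std::vector<Vec3>& vertices, const ReferencePlane& plane,
                             float effectScale, std::vector<float>& disparityAttributes)
{
    disparityAttributes.resize(vertices.size());
    for (std::size_t i = 0; i < vertices.size(); ++i)
        disparityAttributes[i] = effectScale * geometricDistance(vertices[i], plane);
}
```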
In a further embodiment, there is provided a method wherein the storing of the first aspect of the invention further comprises:
- identifying the vertices of the portion of the object that form the edges of the object when the object is viewed from the direction of the viewer of the 3-D scene; and
- storing for the identified vertices respective texture map coordinates as the vertices' disparity attributes.
Following the further embodiment outlined above, the edges of an object when viewed from the position of the viewer may be textured to increase the 3-D effect between that object and other objects of the scene.
Advantageously, the texture map that is applied to the object's edge is a 'Transition' texture, giving a large and very sharp transition in disparity values at the object's edge to emphasise the depth difference between the object and other adjacent objects, increasing the 3-D effect.
Additionally, different amplitude Transition texture maps may be applied to objects' edges to increase or decrease the level of 3-D effect between the object's edge and other adjacent objects. Furthermore, the values read from the Transition texture maps may be dynamically scaled to give dynamic control of the level of 3-D effect.
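The edge identification can be illustrated with the same angle test used for culling. The sketch below flags vertices whose eye vector and normal are close to perpendicular, using the exemplary 75 to 105 degree window given later in the description, and stores for them a coordinate into an assumed one-dimensional Transition texture; the linear mapping of angle to coordinate is an assumption of the example.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(const Vec3& a) { return std::sqrt(dot(a, a)); }

// Angle in degrees between a vertex's eye vector (pointing towards the viewing
// position) and its normal vector; faces with an angle above 90 degrees are
// back-facing, and vertices near 90 degrees lie on or close to the silhouette.
float eyeNormalAngleDegrees(const Vec3& vertexPos, const Vec3& viewPos, const Vec3& normal)
{
    const Vec3 eye{viewPos.x - vertexPos.x, viewPos.y - vertexPos.y, viewPos.z - vertexPos.z};
    const float c = std::clamp(dot(eye, normal) / (length(eye) * length(normal)), -1.0f, 1.0f);
    return std::acos(c) * 180.0f / 3.14159265f;
}

// Store a Transition texture map coordinate (a single normalised coordinate
// into an assumed 1-D transition texture) as the disparity attribute of each
// vertex lying on or close to the silhouette edge; -1 marks non-edge vertices.
void assignTransitionCoordinates(const std::vector<Vec3>& positions,
                                 const std::vector<Vec3>& normals,
                                 const Vec3& viewPos,
                                 std::vector<float>& transitionCoordinates)
{
    transitionCoordinates.assign(positions.size(), -1.0f);
    for (std::size_t i = 0; i < positions.size(); ++i) {
        const float angle = eyeNormalAngleDegrees(positions[i], viewPos, normals[i]);
        if (angle >= 75.0f && angle <= 105.0f)
            transitionCoordinates[i] = (angle - 75.0f) / 30.0f;   // assumed mapping
    }
}
```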
According to a second aspect of the invention, there is provided an apparatus configured to generate disparity values for pixels of a two dimensional (2-D) image of a three dimensional (3-D) scene, the 3-D scene comprising one or more objects, each object represented by a plurality of vertices, the apparatus comprising storage means; and processing means, operable to: a) store for each vertex of a portion of an object at least one disparity attribute according to a level of 3-D effect required for the portion of the object; b) rasterise the portion of the object into pixels, the rasterising comprising interpolating between the at least one disparity attribute of the vertices of the portion of the object to give at least one disparity attribute for each pixel; and c) calculate the pixels' respective disparity values according to the pixels' respective at least one disparity attributes; and repeat steps a), b), and c) for a portion of a different object, the portion of the different object requiring a different level of 3-D effect.
The second aspect of the invention provides apparatus configured for the generation of disparity values for a 3-D scene, wherein the disparity values and a 2-D image of the 3-D scene are used together to synthesise left and right eye image views of the 3-D scene for display on a stereoscopic or multiple view display to give a viewer the perception of viewing the scene in 3-D.
Owing to the second aspect of the invention, the disparity attributes for the vertices of an object of the 3-D scene may be determined taking into account the level of 3-D effect required for the object.
Further features and advantages of the present invention will become apparent from reading of the following description of preferred embodiments of the present invention, given by way of example only, and with reference to the accompanying drawings, in which:
Figure 1 shows a block diagram of the development of a 3-D graphics application.
Figure 2 shows a plan diagram of a user viewing a 3-D display and perceiving a pixel to be further away from them than the display screen.
Figure 3 shows a flow diagram of the method for rendering and displaying a 3-D image.
Figure 4 shows a block diagram of the architecture for a 3-D graphics pipeline.
Figure 5 shows a block diagram of the processing and storage elements of a computer system for running 3-D game middleware.
Figure 6 shows a plan view of an object, some of whose faces face towards a viewing position and some of whose faces face away from the viewing position and cannot be seen.
Figure 7 shows a flow chart of the method for rendering a 2-D image and disparity map.
Figures 8a and 8b show plan diagrams of the calculation of geometric distances.
Figure 9 shows a diagram of a disparity map of a scene having one object.
Embodiments of the invention will now be described with reference to a 3-D graphics pipeline, although the invention may also be implemented in embodiments using other 3-D graphics tools as will be apparent to the person skilled in the art. For example, steps from the method of the invention may be performed at different stages in the pipeline, and in different orders depending on the exact form of application.
The block diagram of Figure 1 shows the development of the software for a typical 3-D graphics computer game. The artist 10 decides how the objects appearing in the game should be drawn with the aid of 3-D modelling tools 11 to test the various possibilities. Once the artist is satisfied with the objects they are all stored together in a software data structure 12 called a scene graph. Each object in the scene graph comprises vertices that set the shape and colouring of the object. Each vertex has a normal vector specifying the direction that is normal to the object face partially formed by the vertex. In a preferred embodiment, each object in the scene graph also comprises the reference plane that is to be used for high-pass filtering the depth of that object in the 3-D graphics pipeline. The programmer 13 writes the software application 14 that controls how the game works and how it should respond to user inputs. The application 14 and scene graph 12 are combined together into the 3-D game middleware 15 which is made available to users on a storage medium 16. The storage medium 16 is for example an optical disk or a memory on a server that a user can access over a network to obtain the middleware 15. In a further embodiment the software is sent to the user's equipment via a signal from a network; for example a user may use their Personal Computer (PC) to connect to an Internet site storing the middleware 15, and then download the middleware 15 to their PC's hard disk for execution at a later time.
The diagram of Figure 2 shows how displaying an image pixel 20 in a left eye scene view disparate from an image pixel 21 in a right eye scene view on a screen 25 creates the perception of viewing a virtual pixel 22. The viewer perceives the virtual pixel 22 to be at a different distance away from them than the pixels 20 and 21 shown on the screen 25. The left eye image view comprising pixel 20 is directed towards left eye 23 and the right eye image view comprising pixel 21 is directed towards right eye 24 using a lenticular lens 26. The distance between pixels 20 and 21 on the screen is known as the disparity 27, and the level of disparity may be altered to make the virtual pixel 22 appear closer to or further away from the screen 25, altering the 3-D effect that is seen by the viewer. For example, increasing the level of disparity 27 moves pixels 20 and 21 further apart and so the viewer perceives pixel 22 to be further away from them than before. The principle of displaying one image to the left eye and a different but correlated image to the right eye to create the perception of image depth may be effected by any means capable of displaying different images to the viewer's left and right eyes, like for example a head mounted display.
The flow diagram of Figure 3 shows the steps for generating images of the different scene views on a multiple view display. At step 30 a two dimensional image of the scene from a first viewpoint is rendered. The two dimensional image is W pixels wide, H pixels high, and each pixel has RGBA (Red, Green, Blue, Alpha) components. At step 31 the 2-D RGBA image is written into a frame buffer. At step 32 a disparity map of the scene having one disparity value D for each pixel is rendered, and at step 33 the disparity map is written into the frame buffer, overwriting the A components of the pixels with the disparity values. Hence each RGBA pixel in the frame buffer after storing the 2-D image in step 31 becomes an RGBD pixel in the frame buffer after overwriting the A components with D components in step 33. Since the 3-D graphics pipeline is designed to output pixels with only three components (RGB) one line at a time, in step 34 the W RGBD pixels are read out as though they were W + W/3 RGB pixels; hence three pixels of RGBD, RGBD, RGBD are read out as four pixels of RGB, DRG, BDR, GBD.
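By way of illustration only, the following Python sketch shows how a line of W four-component RGBD pixels reads out as W + W/3 three-component pixels; the function name and list-based data layout are choices made for this example and are not part of the pipeline described above.

```python
def repack_rgbd_line(rgbd_pixels):
    """Read a line of W four-component (R, G, B, D) pixels out as
    three-component groups: 4*W values become W + W/3 'RGB' pixels,
    e.g. RGBD, RGBD, RGBD -> RGB, DRG, BDR, GBD."""
    components = [c for pixel in rgbd_pixels for c in pixel]   # flatten the line
    return [tuple(components[i:i + 3]) for i in range(0, len(components), 3)]

line = [(1, 2, 3, 40), (4, 5, 6, 50), (7, 8, 9, 60)]           # three RGBD pixels
print(repack_rgbd_line(line))
# [(1, 2, 3), (40, 4, 5), (6, 50, 7), (8, 9, 60)]  -> four 3-component pixels
```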
In step 35 the image synthesiser receives each line of pixels and reads them as W RGBD pixels. It then shifts the pixels according to the disparity D values to synthesise different images of the scene as seen from different viewpoints. Then the image synthesiser combines the images so that when the images are displayed on a multiple view display in step 36 the viewer's eyes each see a different image view.
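A minimal sketch of the pixel shifting performed by the image synthesiser is given below; it assumes integer shifts proportional to D for a single scan line and omits the occlusion handling and hole filling that a practical synthesiser would require, and the function and variable names are illustrative only.

```python
def synthesise_view(rgbd_line, view_offset):
    """Shift the pixels of one scan line horizontally by a fraction of their
    disparity D to approximate the scene seen from a different viewpoint.
    Holes are left as None; occlusion handling and hole filling are omitted."""
    out = [None] * len(rgbd_line)
    for x, (r, g, b, d) in enumerate(rgbd_line):
        nx = x + int(round(view_offset * d))       # disparity-dependent shift
        if 0 <= nx < len(out):
            out[nx] = (r, g, b)
    return out

line = [(10, 10, 10, 2), (20, 20, 20, 0), (30, 30, 30, -2)]
left_view = synthesise_view(line, -0.5)            # one of several synthesised views
right_view = synthesise_view(line, +0.5)
```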
In a preferred embodiment the D values are scaled to fit within the disparity range capable of being displayed on the display. This scaling is done within the image synthesiser, or within the 3-D graphics pipeline if knowledge of the display device capabilities is available.
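As an illustration of such scaling only, the sketch below linearly maps rendered D values into an assumed display disparity range; the actual scaling applied by the image synthesiser or the 3-D graphics pipeline is not prescribed here.

```python
def scale_disparities(d_values, display_min, display_max):
    """Linearly map rendered D values into the disparity range that a given
    display can reproduce comfortably."""
    lo, hi = min(d_values), max(d_values)
    if hi == lo:                                    # flat disparity map: use the centre
        return [(display_min + display_max) / 2.0] * len(d_values)
    scale = (display_max - display_min) / float(hi - lo)
    return [display_min + (d - lo) * scale for d in d_values]

print(scale_disparities([0, 50, 100, 255], -8, 8))  # fits D into a [-8, 8] pixel range
```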
The flow diagram of Figure 3 illustrates the case where the disparity map is rendered after the rendering of the 2-D image. However, in a further embodiment, as will be apparent to the skilled reader, the disparity map is for example rendered before the 2-D image or rendered simultaneously with the 2-D image if sufficient processing power is available in the 3-D graphics pipeline.
Figure 4 shows the architecture of a 3-D graphics pipeline. The vertex shader 41 runs vertex programs 511 (see Figure 5 described below) to process vertices. The vertex programs perform many operations on the vertices, like for example lighting them according to any light sources that are present and transforming their positions according to changes in the object's position.
The block 42 clips vertices that are outside the viewer's field of view and culls vertices that form back-facing object faces that are occluded from view. The culling process is further discussed below in relation to Figure 6. The process known as the homogeneous divide also occurs in this block, adding a fourth dimension to each vertex specifying how the X, Y, Z dimensions of the vertex should be scaled for perspective effects, causing objects that are further away from the viewer to be reduced in size.
The rasteriser 43 defines image pixels by interpolating between stored attributes of the vertices forming each object face. For example, if an object face is triangular, having two vertices of a white colour and one vertex of a black colour, then the colour of the rasterised pixels of the object face will progressively change from white along one side of the triangle, through grey, to black at the opposite corner of the triangle. The other attributes of the vertices are also interpolated, like for example the vertices' positions and texture map coordinates. The depth (Z) dimensions of the rasterised pixels are stored in a Z buffer.
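The interpolation performed by the rasteriser can be illustrated with barycentric weights, as in the following sketch; this is one common way to interpolate per-vertex attributes over a triangle and is given as an example only, not as the rasteriser's actual implementation.

```python
def barycentric(p, a, b, c):
    """Barycentric weights of 2-D point p with respect to triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    wa = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    wb = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    return wa, wb, 1.0 - wa - wb

def interpolate_attribute(p, triangle, attributes):
    """Interpolate any per-vertex attribute (colour, texture coordinate,
    disparity attribute, ...) at pixel position p inside the triangle."""
    wa, wb, wc = barycentric(p, *triangle)
    return tuple(wa * x + wb * y + wc * z for x, y, z in zip(*attributes))

triangle = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
colours = [(1.0, 1.0, 1.0), (1.0, 1.0, 1.0), (0.0, 0.0, 0.0)]  # white, white, black
print(interpolate_attribute((2.0, 2.0), triangle, colours))    # a shade of grey
```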
The pixel shader 44 runs pixel programs 510 (see Figure 5) to process pixels; the pixel programs may perform many operations on the pixels, like for example shading and texturing them so that the colours of the rasterised pixels of a particular object face do not remain smoothly interpolated but become textured and more life-like.
The block 45 performs a pixel depth test: for each image position it compares the Z buffer depths of all the pixels at that image position and discards every pixel except the one that is closest to the viewer. One pixel then remains for each image position, and the remaining pixels are stored in the frame buffer 46.
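A simple illustration of such a depth test is sketched below; it assumes that smaller Z values are closer to the viewer and represents fragments as plain tuples in dictionaries, whereas real hardware operates on buffers.

```python
def depth_test(fragments):
    """Keep, for every image position, only the fragment closest to the
    viewer (smaller Z is taken to be closer in this sketch)."""
    z_buffer, frame_buffer = {}, {}
    for pos, z, rgb in fragments:
        if pos not in z_buffer or z < z_buffer[pos]:
            z_buffer[pos] = z
            frame_buffer[pos] = rgb
    return frame_buffer

fragments = [((0, 0), 5.0, (255, 0, 0)),    # red fragment, further away
             ((0, 0), 2.0, (0, 255, 0))]    # green fragment, closer: kept
print(depth_test(fragments))                # {(0, 0): (0, 255, 0)}
```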
The block diagram of Figure 5 shows the processing and storage elements of a computer system configured for running 3-D game middleware 15. The Central Processing Unit (CPU) 50 handles the scene graph 52 and the 3-D graphics application 53 from data in memory 51. The 3-D graphics application 53 specifies when events should occur and the scene graph 52 specifies the 3-D objects as discussed above in relation to Figure 1.
During the running of the 3-D graphics application 53, the CPU 50 traverses the scene graph 52 and sends the objects in the scene graph required for rendering on the display to the Graphics Processing Unit (GPU) 55. The GPU 55 stores the objects to be rendered in memory 56 by storing vertices defining the objects in the vertex buffer 57.
The vertices typically define triangles forming the object's surfaces; however, the vertices may also define other shapes, like for example quadrilaterals. Each vertex has a series of attributes including the position of the vertex in 3-D space and the colour of the vertex. Many other attributes, like a texture map identifier pointing to a particular texture map, may be stored with each vertex depending on how the artist 10 has defined the object.
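Purely as an illustration of the kind of record held in the vertex buffer 57, the following sketch lists typical per-vertex attributes together with the disparity attributes introduced above; the field names and types are assumptions made for this example rather than a prescribed layout.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Vertex:
    position: Tuple[float, float, float]              # X, Y, Z in 3-D space
    colour: Tuple[float, float, float]
    normal: Tuple[float, float, float]                # direction normal to the face
    texture_map_id: Optional[int] = None              # points at a texture 59
    # Disparity attributes stored at block 71 of Figure 7:
    geometric_distance: Optional[float] = None        # distance from the reference plane
    transition_tex_coord: Optional[Tuple[float, float]] = None

vertex_buffer = [Vertex(position=(0.0, 0.0, 5.0),
                        colour=(1.0, 1.0, 1.0),
                        normal=(0.0, 0.0, -1.0))]
```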
The constants buffer 58 stores constant parameters, like for example the position of light sources in the 3-D scene and procedural data for animation effects. In a preferred embodiment the constants buffer 58 stores the position of the reference plane used for high-pass filtering an object's depth values, although in other embodiments the reference plane may be stored in other ways, like for example as an attribute to each vertex.
Vertex programs 511 are also stored in memory 56, and these define how vertices should be processed, like for example to transform them from one position to another or alter their colour according to the light falling on them in the scene. Vertex programs 511 may be written by the artist 10 using modelling tools 11 to implement different graphical effects.
The memory 56 includes pixel programs 510 that control how pixels should be processed, like for example to texture or shade pixels so they form a more realistic and life-like image of the scene. The memory 56 includes textures 59 that the pixel programs use to texture pixels; for example a pixel program 510 may use a brick wall texture 59 to texture a flat object so that it looks like a brick wall. The frame buffer 512 stores the final image pixels, and when the scene has been fully rendered the frame buffer's contents are sent to the display device for display to the viewer.
The process of culling is now discussed further in relation to the plan diagram in Figure 6 of a viewer viewing an object. The culling process comprises identifying the object faces 63, 610 that cannot be seen when viewed from the viewer's viewing point 60. The operation of this process is not only relevant to culling vertices, but also to determining where the edges of an object appear to be from a particular viewing point. The object 61 comprises face 62 facing towards the viewer 60 and face 63 facing away from the viewer 60. Face 62 can be seen from the viewing position 60 and face 63 cannot be seen from the viewing position 60 as it is occluded by another part of the object and so should be culled. The eye vectors 64, 67 point from the object faces 62, 63 towards the viewer's position 60. The normal vectors 65, 68 point in the direction that is normal to the object faces 62, 63. For each object face the angle between the face's eye vector and the face's normal vector is obtained. If the angle is greater than 90 degrees, like angle 69 is, then the object's face is back-facing and cannot be seen by the viewer, and so the vertices defining the object face should be discarded. If the angle is less than 90 degrees, like angle 66 is, then the object's face is forward-facing and can be seen by the viewer, and so the vertices defining the object face should be retained. It is apparent that the boundary 611 between a face 62 having an angle of less than 90 degrees (forward-facing) and a face 610 having an angle greater than 90 degrees (backward-facing) will appear to be the edge of the object 61 when viewed from the viewing point 60. Identification of the vertices lying on the visible edges 611 of an object is important for storing disparity attributes in the vertex shader 41 stage of the pipeline, as is discussed below in relation to Figure 7 block 71.
Figure 7 shows a flow diagram of the process in the 3-D graphics pipeline for rendering a disparity map of the scene. This process is now described with reference to Figures 4 to 7. The flow diagram begins at block 70 where objects to be rendered have just been broken down into vertices that are stored in the vertex buffer 57.
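The forward-facing/backward-facing angle test of Figure 6, which block 71 reuses below, can be illustrated as follows; the sketch simply measures the angle between an eye vector and a face normal and flags faces whose angle exceeds 90 degrees, and is given as an example only.

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 3-D vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norms = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

def is_back_facing(eye_vector, face_normal):
    """A face whose eye-vector/normal angle exceeds 90 degrees faces away
    from the viewer, so its vertices are candidates for culling."""
    return angle_between(eye_vector, face_normal) > 90.0

print(is_back_facing((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))    # False: forward-facing
print(is_back_facing((0.0, 0.0, 1.0), (0.0, 0.0, -1.0)))   # True: back-facing, cull
```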
At block 71 the vertices of a portion of an object are passed into the vertex shader 41. Block 71 is where disparity attributes are stored with the vertices according to the level of 3-D effect required. At block 71 a vertex program 511 calculates for each object face the angle between the eye vector and the face's normal vector to determine whether the face is forward-facing or backward-facing, in the same manner as previously described in relation to the culling process. The vertex program 511 identifies the vertices that lie on or close to the boundary 611 between forward facing 62 and backward facing 610 object faces when the object is viewed from the viewing point 60, and stores texture map coordinates as disparity attributes for those vertices. A vertex is determined to lie on or close to the boundary 611 if its calculated angle between the eye vector and the normal vector is substantially 90 degrees. In an example embodiment texture map coordinates are assigned to vertices having angles between 75 and 105 degrees; however this will vary according to the shape of the object and the number of vertices used to define it. For example the size of the angle 613 between forward and backward facing object faces obviously influences the range of calculated angles that should be used. Identifying the vertices forming the edges of objects is a process well known in the art as 'silhouette edge detection', and this process is described in the book 'Game Programming Gems 2', pages 436-443, ISBN 1-58450-054-9.
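By way of example, the sketch below applies the same angle test to identify vertices lying on or near the silhouette edge and stores placeholder Transition texture map coordinates as their disparity attributes; the 75-105 degree band follows the example embodiment above, while the dictionary representation and the coordinate values written are illustrative assumptions only.

```python
import math

def mark_silhouette_vertices(vertices, lo_deg=75.0, hi_deg=105.0):
    """Store Transition texture map coordinates, as disparity attributes, on
    vertices whose eye-vector/normal angle is close to 90 degrees, i.e.
    vertices lying on or near the object's visible edge. Each vertex is a
    dict holding an 'eye' vector and a 'normal' vector."""
    for v in vertices:
        dot = sum(a * b for a, b in zip(v['eye'], v['normal']))
        norms = (math.sqrt(sum(a * a for a in v['eye'])) *
                 math.sqrt(sum(b * b for b in v['normal'])))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))
        if lo_deg <= angle <= hi_deg:
            v['transition_tex_coord'] = (0.0, 0.0)    # edge vertex: texture its disparity
        else:
            v['transition_tex_coord'] = None          # interior vertex: no edge texturing
    return vertices

edge_like = {'eye': (0.0, 0.0, 1.0), 'normal': (1.0, 0.0, 0.05)}   # angle near 90 degrees
print(mark_silhouette_vertices([edge_like])[0]['transition_tex_coord'])   # (0.0, 0.0)
```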
In a preferred embodiment, the texture map coordinates point to values in a texture map 59 that is stored in memory 56, and these values are later used to texture pixels' disparity attributes to create an enhanced perception of image depth. These texture map values are commonly known in the art as texels. In an exemplary embodiment, the texture map coordinates for different objects are set to point to different texture maps. The different texture maps have different texel values, enabling different levels of 3-D effect to be applied for the different objects.
At block 71 a vertex program 511 calculates the geometric distance from a reference plane to each vertex of a portion of an object. The determination of the reference plane and the calculation is later explained in relation to Figures 8a and 8b. The geometric distances are effectively high-pass filtered versions of the depths of portions of objects and the vertices' disparity attributes are set according to the geometric distances. This high-pass filtering causes the disparity values to give the viewer an enhanced perception of image depth after the disparity values are calculated from the disparity attributes. In a preferred embodiment, the disparity attributes are stored as vertices' colours, for example the geometric distance of a particular vertex from a reference plane could be stored as that vertex's colour. As an example, very large geometric distances could be stored as black, very small geometric distances could be stored as white and other geometric distances could be stored as shades of grey. Further embodiments for storing the disparity attributes are also possible as will be apparent to those skilled in the art, like for example using other colours or storing the geometric distance directly as an additional vertex attribute.
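For illustration only, the geometric distance calculation can be expressed as a signed point-to-plane distance, together with one possible grey-level encoding of the kind mentioned above; the plane representation (a point on the plane plus a normal) and the encoding range are assumptions of this sketch.

```python
def geometric_distance(vertex_pos, plane_point, plane_normal):
    """Signed distance of a vertex from the reference plane, i.e. the
    high-frequency part of the viewer-to-vertex distance."""
    length = sum(n * n for n in plane_normal) ** 0.5
    return sum((v - p) * (n / length)
               for v, p, n in zip(vertex_pos, plane_point, plane_normal))

def distance_to_grey(distance, max_abs_distance):
    """Encode a geometric distance as a grey level: large distances towards
    black (0.0), small distances towards white (1.0)."""
    return 1.0 - min(abs(distance), max_abs_distance) / max_abs_distance

# A vertex two units behind a reference plane whose normal lies along Z.
d = geometric_distance((0.0, 0.0, 12.0), (0.0, 0.0, 10.0), (0.0, 0.0, 1.0))
print(d, distance_to_grey(d, max_abs_distance=5.0))    # 2.0 0.6
```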
The block 73 is where culling and clipping of vertices and the homogeneous divide take place, as discussed above in relation to Figure 4 block 42.
In block 74 the vertices are rasterised into pixels and the attributes of the vertices are interpolated to give pixel attributes, as discussed earlier in relation to Figure 4 block 43. The vertices' texture map coordinates are also interpolated: if a first vertex has a texture map coordinate (disparity attribute) pointing at a first texture map value (texel), and a second vertex has a texture map coordinate pointing at a second texel, then a pixel rasterised between the first and second vertices will have a texture map coordinate that points to a texel between the first texture map value and the second texture map value.
In a preferred embodiment where the geometric distances are represented as vertex colours, the rasterisation of pixels results in the pixels having interpolated colours that are representative of interpolated geometric distances. In a further embodiment where the geometric distances were not represented as colours in block 71, but as additional vertex attributes, the rasterisation of pixels results in the pixels having interpolated additional attributes that are representative of interpolated geometric distances.
In block 75 the rasterised pixels are sent to the pixel shader 44, and a pixel shader program 510 retrieves the texture map values in texture map 59 pointed to by the pixels' texture map coordinates. The texture map values are used to texture the pixels' disparity attributes to enhance the viewers' perception of image depth. For example, in a preferred embodiment where a pixel's interpolated geometric distance was earlier stored as the pixel's colour, the pixel's colour is then textured with the texture map value pointed to by the pixel's texture map coordinate. Hence the pixel's colour becomes a combination of the interpolated geometric distance and the texture map value. In a further embodiment where texture map coordinates have been assigned to pixels and where geometric distances are stored as an additional attribute to each pixel, the texture map values are combined with this additional attribute. The pixels' disparity values are then set as the combined geometric distances and texture map values.
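An illustrative combination of the interpolated geometric distance with the Transition texture map value is sketched below; the one-dimensional texture layout, the nearest-texel lookup and the weighting factors are assumptions made for the example rather than the combination used by the pixel programs 510.

```python
def pixel_disparity(interp_distance, tex_coord, transition_texture,
                    distance_weight=1.0, texture_weight=1.0):
    """Combine a pixel's interpolated geometric distance with the Transition
    texture value pointed to by its interpolated texture coordinate."""
    texel = 0.0
    if tex_coord is not None:
        index = min(int(tex_coord * (len(transition_texture) - 1)),
                    len(transition_texture) - 1)      # nearest-texel lookup
        texel = transition_texture[index]
    return distance_weight * interp_distance + texture_weight * texel

transition = [0.0, 0.0, 4.0, 8.0, 8.0]                # a sharp ramp at an object's edge
print(pixel_disparity(2.0, 0.5, transition))          # 2.0 + 4.0 = 6.0
```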
In a preferred embodiment, the level of 3-D effect applied to a portion of an object of the 3-D scene is changed by scaling the disparity values. Many methods of achieving this scaling will be apparent to those skilled in the art, like for example changing the position of the reference plane to affect the size of the geometric distances, or multiplying the disparity values or geometric distances or texture map values by a factor having a value according to the level of 3-D effect required.
In a preferred embodiment the texture map coordinates point to values in a Transition texture map. The Transition texture map textures vertices' disparity attributes to give a sharp transition in disparity values at the edges of objects to emphasise the 3-D effect. Different variants of Transition texture map may be used to control the level of 3-D effect, for example the coordinates may point to a Transition texture map having very high texel values to give a high level of 3-D effect.
In block 76 the disparity map values are stored into the frame buffer, forming W RGBD pixels as described earlier in relation to Figure 3.
Figure 7 describes a preferred embodiment where disparity attributes are stored for both the geometric distances and for the texture map coordinates. However, the benefits of object level control of 3-D effects are still obtained by implementing only one of geometric distances and texture map coordinates. For example, in a further embodiment, geometric distances are not calculated and only the Transition texture map texel values are used in calculating the disparity values. In a still further embodiment Transition texture map coordinates are not assigned and only the geometric distances are used in calculating the disparity values. A diagram showing how the disparity map values for an object in a preferred embodiment are created from high-pass filtered depth values (geometric distances) and Transition texturing is described later in relation to Figure 9.
The calculation of the geometric distances is now explained with reference to Figures 8a and 8b, which show plan diagrams of a viewer viewing a portion of an object. Two example methods of defining the reference plane are described, the first method shown in Figure 8a and the second method shown in Figure 8b. Each of the two Figures shows a portion of an object 80, 810 having six vertices, a reference plane 82, 812, and a viewing point 85, 815. For each figure the calculation of the geometric distance 84, 814 for the vertex 81, 811 is now described.
The reference plane 82, 812 is defined to be closer to the portion of the object 80, 810 than the viewer 85, 815 and to be substantially the same distance away from the viewer 85, 815 in all the directions of the vertices of the portion of the object 80, 810. Hence the distance from the viewing point 85, 815 to the reference plane 82, 812 is a low-frequency (i.e. slowly-varying) component of the distance from the viewing point 85, 815 to the six different vertices of the portion of the object 80, 810. The distance from the reference plane 82, 812 to the vertices of the portion of the object 80, 810 is the high-frequency (i.e. fast-varying) component of the distance from the viewing point 85, 815 to the six different vertices of the portion of the object 80, 810. Therefore the geometric distances 84, 814 are a high-pass filtered version of the distances from the viewing point 85, 815 to the vertices of the object portion 80, 810.
In the embodiment of Figure 8a the reference plane 82 intersects with the object 80 so that virtually all of the low frequency components of the distances from the viewing point 85 to the vertices of the portion of the object 80 are removed to give high-pass filtered geometric distances. However, in the embodiment of Figure 8b the reference plane may simply be closer to the portion of the object 810 than the viewing point 815 to remove a smaller portion of the low frequency component distances.
The eye vector 83 is the vector that points from the portion of the object 80 to the viewing position 85. In the embodiment of Figure 8a the reference plane 82 is defined to be normal to the eye vector 83 of the portion of the object 80 in order to make all the distances from the viewing point to the reference plane in the directions of the vertices of the object 80 substantially the same. In the embodiment of Figure 8b the reference plane is defined to be normal to the Z-axis 816, and while this gives more variation between the distances from the viewing point to the reference plane in the directions of the vertices of the portion of the object 810, it is still sufficient to remove the low frequency component without having too adverse an effect on the accuracy of the measurement of the high frequency components (geometric distances). It will be understood that other definitions of the reference plane may also be used provided the reference plane is closer to the portion of the object than the viewer and is substantially the same distance away from the viewer in the directions towards the vertices of the portion of the object. In an exemplary embodiment, the reference plane is stored as a constant 58; however in other embodiments the reference plane may be stored in other ways, like for example as an attribute to each vertex.
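For illustration, the sketch below constructs a reference plane in the two ways described above, with its normal either along the eye vector (as in Figure 8a) or along the Z axis (as in Figure 8b); placing the plane through the portion's centroid is an assumption of this sketch, and the geometric distances of the vertices from the returned plane can then be computed as in the earlier point-to-plane sketch.

```python
import math

def reference_plane(vertices, viewpoint, normal_mode="eye"):
    """Return (point_on_plane, unit_normal) for an object portion: the plane
    passes through the portion's centroid, with its normal either along the
    eye vector (Figure 8a style) or along the Z axis (Figure 8b style)."""
    n = len(vertices)
    centroid = tuple(sum(v[i] for v in vertices) / n for i in range(3))
    if normal_mode == "eye":
        direction = tuple(c - e for c, e in zip(centroid, viewpoint))
    else:                                              # "z": plane normal to the Z axis
        direction = (0.0, 0.0, 1.0)
    length = math.sqrt(sum(d * d for d in direction))
    return centroid, tuple(d / length for d in direction)

verts = [(0.0, 0.0, 9.0), (1.0, 0.0, 11.0), (0.0, 1.0, 10.0)]
point, normal = reference_plane(verts, viewpoint=(0.0, 0.0, 0.0))
```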
In a preferred embodiment, each object has a different reference plane for straightforward and accurate high-pass filtering; however in a second embodiment the same reference plane may be applied to every object in the scene, or in a third embodiment there may be different reference planes for different portions of the same object. These reference plane definitions still achieve the objective of high-pass filtering the distance from the viewer to the vertices of the portion of the object.
In a preferred embodiment, to reduce the number of calculations required, the geometric distances are scaled during their calculation to set the level of image depth enhancement that is required for a particular object. However, in a further embodiment, object level control of the level of depth enhancement is achieved by an additional calculation that scales the geometric distances at any stage after their calculation.
The disparity value for each pixel in an exemplary embodiment is calculated as the addition of, firstly, the pixel's interpolated disparity attribute derived from the vertices' disparity attributes that were set according to the vertices' geometric distances, and, secondly, the Transition texture map value pointed to by the pixel's interpolated disparity attribute derived from the vertices' disparity attributes that were set according to the vertices' Transition texture map coordinates.
The blending of geometric distances and Transition texture map values is now described with reference to Figure 9. Figure 9 shows a plan diagram of exemplary scene objects 90, 91, 92 with reference planes 93, 94, 95 respectively, being viewed from the direction of eye vector 96. The diagrammatic guidelines 910, 911, 912 relate the transitions between objects 90, 91, 92 to the effect on the geometric distances shown on axes 97, the Transition texture map values shown on axes 98, and the disparity values (the addition of geometric distances and texture map values) shown on axes 99.
The geometric distances on axes 97 show for each object how far away the visible surfaces of the object are from the reference plane of the object. The geometric distances are scaled according to the level of 3-D effect required.
The Transition texture map values on axes 98 show the Transition texture map being applied at the edges of each object. The amplitude of the Transition texture map values is altered according to the level of 3-D effect required. The texture map coordinates may be distributed over a larger or smaller range of pixels to respectively increase or reduce the number of pixels around the object's edge to which the Transition texture map is applied.
The disparity values (the addition of geometric distances and texture map values) are shown on axes 99. In a preferred embodiment the Transition texture map values and geometric distances are added to give disparity values; however in a further embodiment the Transition texture map values or the geometric distances may be scaled by a factor before addition to give a higher weighting to the Transition texture map values or to the geometric distances in the disparity value. In a further embodiment only one of the geometric distances or the Transition texture map coordinates is stored as disparity attributes, and the disparity values are the same as (or a scale factor of) the geometric distances or the Transition texture map values respectively.

Claims

1. A method for generating disparity values (75) for pixels (20) of a two dimensional (2-D) image of a three dimensional (3-D) scene, the 3-D scene comprising one or more objects (90, 91, 92), each object represented by a plurality of vertices (81, 811), the method comprising: a) storing for each vertex of a portion (61, 80) of an object at least one disparity attribute according to a level of 3-D effect required for the portion of the object; b) rasterising (74) the portion of the object into pixels, the rasterising comprising interpolating between the at least one disparity attribute of the vertices of the portion of the object to give at least one disparity attribute for each pixel; and c) calculating (75) the pixels' respective disparity values according to the pixels' respective at least one disparity attributes; and repeating steps a), b), and c) for a portion of a different object, the portion of the different object requiring a different level of 3-D effect.
2. The method of claim 1, wherein the storing further comprises:
- determining a reference plane (82, 812) associated with the portion of the object in 3-D space; and
- calculating and storing for vertices of the portion of the object respective disparity attributes according to the vertices' geometric distances (84, 814) from the reference plane.
3. The method of claim 2, wherein the calculating and storing for vertices of the portion of the object further comprises scaling the disparity attributes according to the level of 3-D effect required.
4. The method of claim 2, wherein the determination of the reference plane includes reference to the level of 3-D effect required.
5. The method of claim 1, wherein the storing further comprises:
- identifying the vertices of the portion of the object that lie on the edges of the object (910, 911, 912) when the object is viewed from the direction (96) of the viewer of the 3-D scene; and
- storing for the identified vertices respective texture map coordinates as the vertices' disparity attributes.
6. The method of claim 5, wherein vertices have respective normal vector attributes, each normal vector attribute having a direction that is normal to a portion of the object at least partially defined by the respective vertex, and wherein the identifying comprises:
- calculating for the vertices of the portion of the object respective angles (66, 69) between the direction of the viewer (64, 67) and the vertices' normal vectors (68, 65); and
- identifying the vertices having a calculated angle of substantially 90 degrees.
7. The method of claim 5, wherein the disparity values for respective pixels are set according to the texture map values pointed to by the respective pixels' texture map coordinates.
8. The method of claim 7, wherein texture map values are scaled according to the level of 3-D effect required.
9. The method of claims 1, 2 and 5, wherein at least one disparity attribute is set according to a geometric distance and at least one disparity attribute is set as a texture map coordinate.
10. The method of claim 9, wherein the step of calculating the pixels' respective disparity values further comprises calculating pixels' respective disparity values firstly according to the disparity attribute set according to a geometric distance and secondly according to the texture map value pointed to by the disparity attribute set as a texture map coordinate.
11. The method of any preceding claim, wherein a disparity attribute is stored as a vertex colour.
12. The method of any preceding claim, wherein the disparity values are dynamically scaled with time over a series of images to increase or decrease the level of 3-D effect.
13. The method of any preceding claim, wherein the disparity values are scaled according to a depth range of a display device for displaying the 3-D scene.
14. Software stored in a storage device, the software for carrying out the method of any preceding claim.
15. Software sent as a signal, the software for carrying out the method of any of claims 1 to 13.
16. An apparatus configured to generate disparity values (75) for pixels (20) of a two dimensional (2-D) image of a three dimensional (3-D) scene, the 3-D scene comprising one or more objects (90, 91, 92), each object represented by a plurality of vertices (81, 811), the apparatus comprising storage means (51, 56); and processing means (50, 55), operable to: a) store for each vertex of a portion (61, 80) of an object at least one disparity attribute according to a level of 3-D effect required for the portion of the object; b) rasterise (74) the portion of the object into pixels, the rasterising comprising interpolating between the at least one disparity attribute of the vertices of the portion of the object to give at least one disparity attribute for each pixel; and c) calculate (75) the pixels' respective disparity values according to the pixels' respective at least one disparity attributes; and repeat steps a), b), and c) for a portion of a different object, the portion of the different object requiring a different level of 3-D effect.
PCT/IB2006/052730 2005-08-09 2006-08-08 Disparity value generator WO2007017834A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP05107320 2005-08-09
EP05107320.3 2005-08-09

Publications (2)

Publication Number Publication Date
WO2007017834A2 true WO2007017834A2 (en) 2007-02-15
WO2007017834A3 WO2007017834A3 (en) 2007-09-13

Family

ID=37685922

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/052730 WO2007017834A2 (en) 2005-08-09 2006-08-08 Disparity value generator

Country Status (1)

Country Link
WO (1) WO2007017834A2 (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997047142A2 (en) * 1996-06-07 1997-12-11 Philips Electronics N.V. Stereoscopic image display driver apparatus
WO2001084852A1 (en) * 2000-05-03 2001-11-08 Koninklijke Philips Electronics N.V. Autostereoscopic display driver
WO2005060271A1 (en) * 2003-12-18 2005-06-30 University Of Durham Method and apparatus for generating a stereoscopic image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
REDERT A: "Visualization of arbitrary-shaped 3D scenes on depth-limited 3D displays" 3D DATA PROCESSING, VISUALIZATION AND TRANSMISSION, 2004. 3DPVT 2004. PROCEEDINGS. 2ND INTERNATIONAL SYMPOSIUM ON THESSALONIKI, GREECE 6-9 SEPT. 2004, PISCATAWAY, NJ, USA,IEEE, 6 September 2004 (2004-09-06), pages 938-942, XP010725305 ISBN: 0-7695-2223-8 cited in the application *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8253737B1 (en) * 2007-05-17 2012-08-28 Nvidia Corporation System, method, and computer program product for generating a disparity map
WO2009096912A1 (en) * 2008-01-29 2009-08-06 Thomson Licensing Method and system for converting 2d image data to stereoscopic image data
US9137518B2 (en) 2008-01-29 2015-09-15 Thomson Licensing Method and system for converting 2D image data to stereoscopic image data
US20120032950A1 (en) * 2010-08-03 2012-02-09 Samsung Electronics Co., Ltd. Apparatus and method for synthesizing additional information while rendering object in 3d graphic-based terminal
KR20120012698A (en) * 2010-08-03 2012-02-10 삼성전자주식회사 Apparatus and method for synthesizing additional information when rendering objects in 3D graphic terminal
US10389995B2 (en) 2010-08-03 2019-08-20 Samsung Electronics Co., Ltd. Apparatus and method for synthesizing additional information while rendering object in 3D graphic-based terminal
US9558579B2 (en) * 2010-08-03 2017-01-31 Samsung Electronics Co., Ltd. Apparatus and method for synthesizing additional information while rendering object in 3D graphic-based terminal
KR101691034B1 (en) * 2010-08-03 2016-12-29 삼성전자주식회사 Apparatus and method for synthesizing additional information during rendering object in 3d graphic terminal
US9219902B2 (en) 2011-03-14 2015-12-22 Qualcomm Incorporated 3D to stereoscopic 3D conversion
CN103493102A (en) * 2011-03-14 2014-01-01 高通股份有限公司 Stereoscopic conversion for shader based graphics content
US9578299B2 (en) * 2011-03-14 2017-02-21 Qualcomm Incorporated Stereoscopic conversion for shader based graphics content
US20120235999A1 (en) * 2011-03-14 2012-09-20 Qualcomm Incorporated Stereoscopic conversion for shader based graphics content
WO2013078479A1 (en) * 2011-11-23 2013-05-30 Thomson Licensing Method and system for three dimensional visualization of disparity maps
US9571819B1 (en) 2014-09-16 2017-02-14 Google Inc. Efficient dense stereo computation
US9736451B1 (en) 2014-09-16 2017-08-15 Google Inc Efficient dense stereo computation
US9892496B2 (en) 2015-11-05 2018-02-13 Google Llc Edge-aware bilateral image processing

Also Published As

Publication number Publication date
WO2007017834A3 (en) 2007-09-13

Similar Documents

Publication Publication Date Title
JP5421264B2 (en) Improvement of rendering method of 3D display
EP1582074B1 (en) Video filtering for stereo images
EP3792876A1 (en) Apparatus, method and computer program for rendering a visual scene
US7528831B2 (en) Generation of texture maps for use in 3D computer graphics
EP1542167A1 (en) Computer graphics processor and method for rendering 3D scenes on a 3D image display screen
Didyk et al. Adaptive Image-space Stereo View Synthesis.
Bonatto et al. Real-time depth video-based rendering for 6-DoF HMD navigation and light field displays
Niem et al. Mapping texture from multiple camera views onto 3D-object models for computer animation
WO2007017834A2 (en) Disparity value generator
CA2540538C (en) Stereoscopic imaging
JPH07200870A (en) Stereoscopic three-dimensional image generator
Hübner et al. Multi-view point splatting
JP6898264B2 (en) Synthesizers, methods and programs
Hübner et al. Single-pass multi-view volume rendering
CN117635454A (en) Multi-source light field fusion rendering method, device and storage medium
AU2013237644B2 (en) Rendering improvement for 3D display
Andersson et al. Efficient multi-view ray tracing using edge detection and shader reuse
Leith Computer visualization of volume data in electron tomography
Boev et al. GPU-based algorithms for optimized visualization and crosstalk mitigation on a multiview display
Hübner et al. Single-pass multi-view rendering
Nozick et al. Multi-view Rendering using GPU for 3-D Displays
De Sorbier et al. Depth camera based system for auto-stereoscopic displays
Petz et al. Hardware-accelerated autostereogram rendering for interactive 3d visualization
O’Conor et al. 3D visualisation of confocal fluorescence microscopy data
Buchacher et al. Single-Pass Stereoscopic GPU Ray Casting Using Re-Projection Layers.

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06795611

Country of ref document: EP

Kind code of ref document: A2
