
WO1995011482A1 - Système de manipulation de surfaces orienté objet (Object-oriented surface manipulation system) - Google Patents


Info

Publication number
WO1995011482A1
Authority
WO
WIPO (PCT)
Prior art keywords
surface object
cursor
recited
virtual box
transform
Prior art date
Application number
PCT/US1994/000139
Other languages
English (en)
Inventor
Robert Seidl
Original Assignee
Taligent, Inc.
Priority date
Filing date
Publication date
Application filed by Taligent, Inc. filed Critical Taligent, Inc.
Priority to AU60831/94A
Publication of WO1995011482A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour

Definitions

  • the present invention relates to the field of manipulation of 3D objects on computer displays. More specifically, the present invention relates to the field of manipulation of a 3D curve object displayed on a computer display with kinesthetic feedback to the user directing the manipulation.
  • Objects can now be displayed in three-dimensional (3D) representation, for example in wireframe, solid and/or shaded forms. While a 3D trackball input controller device has been utilized for manipulating objects displayed in 3D representation, it is complex and expensive.
  • Various techniques utilizing two- dimensional (2D) input controllers such as a mouse have been developed for manipulating objects displayed in 3D representation.
  • a known technique utilizes graphically displayed X, Y, and Z sliders which are adjusted by the user (for example, with an input controller such as a mouse) to indicate the amount of rotation about each axis independently. Typically, only one slider is adjusted at any given time.
  • Another known technique involves the menu selection of the axis about which rotation is desired.
  • An input controller such as a mouse is then moved in one dimension to indicate the amount of rotation.
  • Still another technique involves holding down one of three buttons on a mouse or a keyboard to select the axis of rotation, and then moving a mouse in one dimension to indicate the amount of rotation.
  • a still further technique involves selecting the object by clicking on it with the mouse pointer and again using the mouse pointer to drag a handle on the selected object in order to move, re-shape, re-size, or rotate the object.
  • With 3D objects, only one or two dimensions can be altered with any given handle, and rotation only occurs around a central point in a world 3D space, as opposed to rotation around the centerpoint (or other axis) of the 3D object itself (sometimes referred to as model space).
  • An even further technique involves selecting a 3D object by clicking on it with the mouse pointer, using the mouse pointer to make a menu selection as to a predefined type of movement option desired and again using the mouse pointer to drag a handle on the selected object in order to define a movement of the selected predefined type of movement.
  • With 3D objects, typically only one predefined type of movement is available at a time, in what is commonly known as a modal form of operation.
  • a still further consideration is the inherent limitation of the modal form of 3D object manipulation, which further separates the user's expectations regarding moving a real-world 3D object from the experience of moving an image of the 3D object on a computer display, due to having to either select between alternative manipulation modes and/or operate in different windows each containing different views of the object to be manipulated.
  • An objective of the present invention is to provide an improved technique for manipulating surface objects displayed in 3D representation with 2D input controller devices which provides for kinesthetic correspondence between input controller motion and displayed surface object movement. Another objective of the present invention is to provide an improved technique for intuitively manipulating displayed 3D surface objects such that the displayed 3D surface object manipulation emulates physical 3D surface object manipulation. A still further objective of the present invention is to provide an improved technique for manipulation of displayed 3D surface objects which provides for de-coupled surface object rotation, both homogenous and non-homogenous surface object scaling and both translate-slide and translate-pull surface object translation. Another objective is to provide a technique for locally modifying surface shape, tangent plane orientation, parametric curvatures and orientation of surface derivative vectors.
  • a method for manipulating a surface object displayed in three-dimensional representation on a computer controlled display system having a computer and a display coupled to the computer, the method comprising the steps of providing a user actuated input controller for selectively positioning a cursor on the display, positioning the cursor over the displayed surface object and signaling the computer to activate a control movement mode, providing a three-dimensional representation of a virtual box enclosing the displayed surface object, positioning the cursor over a portion of the virtual box sensitive to the presence of the cursor, signaling the computer to activate a predefined control movement type specified by the sensitive portion of the virtual box under the cursor and repositioning the cursor to define a movement of the predefined control movement type, and re-displaying the displayed surface object in accordance with the defined movement of the predefined control movement type.
  • an apparatus for manipulating a surface object displayed in three-dimensional representation on a computer controlled display system having a computer and a display coupled to the computer, the apparatus comprising means for positioning a cursor over the displayed surface object and signaling the computer to activate a control movement mode, means for generating a three-dimensional representation of a virtual box enclosing the displayed surface object, means for signaling the computer to activate a predefined control movement type specified by the sensitive portion of the virtual box under the cursor and repositioning the cursor to define a movement of the predefined control movement type, and means for re-displaying the displayed surface object in accordance with the defined movement of the predefined control movement type.
  • Figure 1 depicts a generalized block diagram of a computer system as might be used by a preferred embodiment
  • Figure 2 depicts the object model coordinate system as used by a preferred embodiment
  • Figure 3 depicts a 3D representation of an object and some alternative embodiments of an object surrounded by a bounding box in accordance with a preferred embodiment
  • Figure 4 depicts the active zone layout of a preferred embodiment and some of the object and bounding box manipulations supported by a preferred embodiment
  • Figure 5 is a flowchart depicting the sequence of steps preparing to handle a user manipulation in accordance with a preferred embodiment
  • Figure 6 is a flowchart depicting the translation manipulation sequence of steps in accordance with a preferred embodiment
  • Figure 7 is a flowchart depicting the rotation manipulation sequence of steps in accordance with a preferred embodiment
  • Figure 8 is a flowchart depicting the scaling manipulation sequence of steps in accordance with a preferred embodiment
  • Figure 9 is a flowchart depicting the sequence of steps to re-display a manipulated object and bounding box in accordance with a preferred embodiment
  • Figure 10 illustrates a virtual box being used to move a 2D representation of a 3D space in accordance with a preferred embodiment
  • Figure 11 illustrates a virtual box being used to manipulate lights or cameras in accordance with a preferred embodiment
  • Figure 12 illustrates a 3D curve that has been selected to enter a shape modification mode in accordance with a preferred embodiment
  • Figure 13 illustrates the effect of movements of the virtual box on the curve in accordance with a preferred embodiment
  • Figure 14 illustrates scaling a virtual box in accordance with a preferred embodiment
  • Figure 15 illustrates a curve that has been locally flattened by scaling the virtual box in accordance with a preferred embodiment
  • Figure 16 illustrates the results of rotating the virtual box on the curve in accordance with a preferred embodiment
  • Figure 17 illustrates the effect of rotating the virtual box in the two non- osculating planes of rotation in accordance with a preferred embodiment
  • Figure 18 illustrates a virtual box appearing on a curve, ready for shape manipulation in accordance with a preferred embodiment
  • Figure 19 illustrates a virtual box tracking along a curve's path in accordance with a preferred embodiment
  • Figure 20 illustrates a virtual box appearing centered on the spot on the surface that was selected in accordance with a preferred embodiment
  • Figure 21 illustrates a virtual box moved straight up, causing a corresponding deflection in the surface in the direction and orientation of the movement of the virtual box in accordance with a preferred embodiment
  • Figure 22 illustrates a virtual box moved straight up some more causing a corresponding deflection in the surface in the direction and orientation of the movement of the virtual box in accordance with a preferred embodiment
  • Figure 23 illustrates a virtual box moved aside and up causing a corresponding deflection in the surface in the direction and orientation of the movement of the virtual box in accordance with a preferred embodiment
  • Figure 24 illustrates a virtual box scaled up uniformly, flattening the surface at the top and causing a corresponding deflection in the surface in the direction and orientation of the movement of the virtual box in accordance with a preferred embodiment
  • Figure 25 illustrates a virtual box scaled up non-uniformly, flattening the surface in one parametric direction, but not another causing a corresponding deflection in the surface in the direction and orientation of the movement of the virtual box in accordance with a preferred embodiment
  • Figure 26 illustrates a virtual box tilted, slanting the surface locally changing the surfaces tangent plane at the selected surface point and causing a corresponding deflection in the surface in the direction and orientation of the movement of the virtual box in accordance with a preferred embodiment
  • Figure 27 illustrates a virtual box rotated, causing a corresponding twisting of the surface in the direction and orientation of the movement of the virtual box in accordance with a preferred embodiment
  • Figure 28 is an illustration of a sweep object representative of a piece of a whale intestine in accordance with a preferred embodiment
  • Figure 29 illustrates the virtual box encompassing a sweep object in accordance with a preferred embodiment
  • Figure 30 illustrates the wireframe rendering of the sweep object in accordance with a preferred embodiment
  • Figure 31 illustrates the virtual box enclosing the contour in accordance with a preferred embodiment
  • Figure 32 illustrates a shrink operation on the contour in accordance with a preferred embodiment
  • Figure 33 illustrates a sweep object with the modifications (shrinking) made to the wireframe in accordance with a preferred embodiment
  • Figure 34 illustrates a rendering of a tape recorder and the operation of a tape recorder door
  • Figure 35 illustrates the default virtual box on a tape recorder door in accordance with a preferred embodiment
  • Figure 36 illustrates the result of rotation for a default virtual box on a tape recorder door
  • Figure 37 illustrates a virtual box with an adjusted origin and rotation axis in accordance with a preferred embodiment
  • Figure 38 illustrates a tape recorder door that is rotated around an adjusted axis in accordance with a preferred embodiment
  • Figure 39 illustrates the sequence of events associated with modifying the axis in accordance with a preferred embodiment.
  • a preferred embodiment generally involves the manipulation of a computer displayed object represented in three-dimensional form, and it would be helpful to provide a brief discussion of the pertinent computer environment.
  • the computer 10 has a system unit 12 and a high-resolution display device 14, such as a cathode ray tube (CRT) or, alternatively, a liquid crystal display (LCD).
  • the type of display is not important except that it should be a display capable of the high resolutions required for windowing systems typical of graphic user interfaces (GUIs).
  • User input to the computer is by means of a keyboard 16 and a cursor pointing device, such as the mouse 18.
  • the mouse 18 is connected to the keyboard 16 which, in turn, is connected to the system unit 12. Alternatively, the mouse 18 may be connected to a dedicated or serial port in the system unit 12.
  • Examples of general purpose computers of the type shown in Figure 1 are the Apple Macintosh® (registered trademark of Apple Computer) and the IBM PS/2. Other examples include various workstations such as the IBM RISC System/ 6000 and the Sun Microsystems computers.
  • the object viewed on the video display 14 can be referenced for convenience relative to an orthogonal coordinate system (having X, Y and Z axes) called the model coordinate system (or model space) that has its origin at the center of rotation of the object.
  • a mouse controls the position of a mouse pointer (e.g., a reference indicator such as a cursor) that is displayed on the video display.
  • the pointer is moved by moving the mouse over a flat surface, such as the top of a desk, in the desired direction of movement of the pointer.
  • the two-dimensional movement of the mouse on the flat surface translates into a corresponding two-dimensional movement of the mouse pointer on the video display.
  • a mouse typically has one or more finger actuated control buttons. While the control buttons can be utilized for different functions, such as selecting a menu option pointed to by the pointer, the disclosed invention advantageously utilizes a single mouse button to select a 3D object and to trace the movement of the pointer along a desired path.
  • the pointer is located at the desired starting location, the mouse button is depressed to signal the computer to activate a control movement mode, and the mouse is moved while maintaining the button depressed. After the desired path has been traced, the mouse button is released. This procedure is sometimes referred to as dragging the mouse pointer. It should be appreciated that a predetermined key on a keyboard could also be utilized to activate dragging the mouse pointer.
  • a 3D "virtual box" or "bounding box" appears on the visual display such that the bounding box is proximal to the 3D object.
  • the bounding box thus signals the user that the 3D object has been selected.
  • the bounding box allows for direct manipulation of the enclosed 3D object as will be explained below. Note that it is well within the scope of a preferred embodiment to provide a virtual box having a shape other than a generally rectangular or box shape.
  • Such a virtual box could be of any of a great number of shapes including oblong, oval, ovoid, conical, cubic, cylindrical, multi-hedronical, spherical, etc.
  • the virtual box could, for example, vary based on the geometry of the 3D object.
  • Direct manipulation of the 3D object, which generally comprises moving, scaling, or rotating the object, can be accomplished in various ways depending upon which embodiment the user has chosen and which implementation is supported by a given computer system.
  • Referring to Figure 3, a 3D representation of an object 301, in this case a chair, is shown as displayed on the display of a computer system.
  • When the user selects the chair 301 by moving the mouse until the pointer 302 is on the chair and clicking on it by pressing the mouse button (or using a keyboard equivalent), the chair is surrounded by a bounding box 300.
  • Alternative embodiments include a bounding box 305 with hands 313, a bounding box 307 with handles 315 & 317, and a bounding box 309 with active zones (or hot zones 319), as is explained more fully below.
  • the bounding box 300, which appears as a result of the user selecting the 3D object 301, is, as was stated above, a transparent 3D box that completely surrounds the selected 3D object 301 or is proximal thereto, as explained below.
  • the bounding box 303 is a visual clue to the user that the 3D object has been selected.
  • the user is given further clues. Not only is the user informed that the 3D object 301 has been selected, but the user is also given indications as to what manipulation operations might be possible with the selected object.
  • the top hand 311 of the bounding box 305 appears to be pulling the bounding box up (or pushing down or both) and thus indicates to the user that the 3D object can be lifted.
  • the hands 313 around the base of the bounding box 305 with hands appear to be pushing or pulling the bounding box around in a circle and thus indicate to the user that this 3D object can be spun around if so desired.
  • the top handle 315 of the bounding box 307 appears to be available for grabbing and pulling the bounding box up (and /or pushing the bounding box down) and thus tells the user that the 3D object can be lifted up or down.
  • the handles 317 around the base of the bounding box 307 appear to be available for pushing or pulling the bounding box around in a circle and thus tell the user that this 3D object can be spun around if so desired.
  • With the bounding box 309 and active zones, the user is given different clues (and, as will be explained below, some of these clues are user selectable to lessen any visual busyness which may exist with the visible active zones). Again, the bounding box tells the user that the 3D object has been selected. Further, additional lines on the bounding box tell the user that there are different active, or hot, zones available to be used.
  • Still further embodiments support spring-loaded object manipulations (as is explained below) by providing additional manipulation clues to the user.
  • a pointer changing to a curved arrow could indicate rotation manipulations in the case of a rotation active zone selection, to crossed arrows indicating the plane of movement in the case of a translation active zone selection and to an enlarging arrow indicating that dimensions are to be affected in the case of a scaling active zone selection.
  • a selected object's bounding box of a preferred embodiment could display a circle (or ellipse when the object and bounding box are in a perspective view) when a rotation active zone is selected to thus indicate the rotation possibilities with a given rotation active zone.
  • the displayed circle could further display a curved arrow around a portion of the circumference of the displayed circle to thus signal a user as to the manipulations possible with the selected rotation active zone.
  • a translucent plane could be displayed to indicate the plane of translation available with a given selected translation active zone.
  • Referring to Figure 4, a bounding box with active zones 401 is shown. It should be appreciated by one with ordinary skill in the art that although a preferred embodiment utilizes a bounding box represented as a wireframe with no back lines visible and with the object remaining visible within the bounding box (e.g., bounding box 309 in Figure 3), the back lines of the bounding box could also be displayed, or the bounding box could even be displayed as a solid (no back faces or lines visible) with the object inside either visible (a transparent solid bounding box), not visible (an opaque solid bounding box), or visible yet faint or greyed out (a translucent solid bounding box), etc., all as alternative embodiments which could be user selectable. Note, however, that no object is shown within the bounding box in Figure 4 so as to avoid any potential visual clutter (which option could be user selectable in a still further alternative embodiment). In a preferred embodiment, each face of the bounding box with active zones 401 is divided into nine active zones. Clicking the pointer in any one of these active zones and dragging will result in moving, rotating, or scaling the bounding box (along with the 3D object within the bounding box) depending upon which active zone is selected.
  • the bounding box 401 with active zones allows various paradigms for 3D object manipulation. To scale the 3D object, the user grabs a corner of the bounding box and pulls. To rotate the 3D object, the user grabs an edge of the bounding box and turns it. To move (translate) the 3D object, the user grabs a face of the bounding box and slides it.
  • the user need not worry about where to grab a particular 3D object (regardless of object shape) in order to perform any one of the desired manipulations because the bounding box provides a consistent user interface across all object shapes. For example, if the 3D object is a floor lamp, the user need not worry about whether it is "proper" to pick up the lamp by the shade, the base, or the pole. This is because the bounding box consistently defines the available actions and means to perform those actions.
  • the bounding box with active zones 401 shows the manipulations possible by clicking and dragging in the various active zones.
  • the chosen operation (scale, rotate, or move) is determined by the active zone in which the user clicks.
  • the axis or axes along which the bounding box is scaled, rotated, or translated is/are chosen according to the particular active zone and face which is clicked on.
  • these manipulations are not limited to only one face of the 3D object but rather are available for each visible face (which can range from one to three faces depending upon the current orientation of the 3D object). It is also important to note that the particular manipulations for each active zone, as well as the number and type of active zones, could be user selectable such that the corner active zones perform rotation manipulations instead of scaling manipulations as is shown in the embodiment of Figure 4.
  • the visible lines delineating the active zones on the bounding box are optionally drawn a user settable distance or percentage in from each face edge of the bounding box (the "inset" as is explained more fully below) thus providing users with an explicit view of the active zone layout as well as allowing users to alter the relative sizes of the active zones. It should be appreciated by one of ordinary skill in the art that alternative active zone layouts and locations as well as having greater or fewer active zones (or to not show the active zones on the bounding box) is well within the scope of a preferred embodiment.
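The nine-zone face layout described above can be sketched as a simple classification of the click position in a face's own 2D coordinates. The function and the default inset fraction below are illustrative assumptions, not the patent's exact layout: corners select scaling, edge bands select rotation, and the center selects translation.

```python
def classify_zone(u, v, inset=0.25):
    """Classify a normalized (0..1, 0..1) click position on a bounding-box
    face into a manipulation zone: 'scale' (corners), 'rotate' (edge
    bands), or 'move' (center). The inset fraction sets the user-settable
    distance in from each face edge that delineates the zones."""
    def band(t):
        if t < inset:
            return 0          # low-edge band
        if t > 1.0 - inset:
            return 2          # high-edge band
        return 1              # middle band
    bu, bv = band(u), band(v)
    if bu == 1 and bv == 1:
        return "move"         # center zone: translate in the face plane
    if bu == 1 or bv == 1:
        return "rotate"       # edge zone: rotate about one axis
    return "scale"            # corner zone: scale
```

Making the inset a parameter mirrors the user-settable zone sizes the text describes; a larger inset enlarges the corner and edge zones at the expense of the center.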
  • Bounding box with active zones 403 shows some of the move or translate manipulations available by clicking and dragging on the move active zone of the left front face of bounding box 401.
  • the move manipulation allows moving the bounding box along with the 3D object inside across the plane of the chosen face of the bounding box. Note that the bounding box 403 with active zones can be moved anywhere within the chosen plane and is not limited to the direction of the arrows in the figure.
  • Bounding boxes 405 and 407 with active zones show some of the rotate manipulations available by clicking and dragging on the rotate active zones of the left front, top, or right side faces of the bounding box 401 with active zones.
  • the rotate manipulation allows rotating the bounding box along with the 3D object inside around one of the three axes of the bounding box and the object within it.
  • Bounding box 405 depicts rotation around the object's Y axis using the left or right active zones.
  • Bounding box 407 depicts rotation around the object's X axis using the top or bottom active zones. Note that the rotation active zones are arranged so that clicking on either side near an edge will result in rotations around the same axis, which makes the selection less sensitive to minor locational inaccuracies by the user and also provides for greater user interface consistency.
  • a preferred embodiment provides for de-coupled rotations about the three (X, Y and Z) axes.
  • De-coupled rotations require rotations to occur around a single axis at a time.
  • In many 3D object manipulation tasks (for example, arranging a scene containing a number of objects), rotating objects around a single axis at a time can be more intuitive than dealing with a coupled rotation around two or more axes.
  • the manipulation is more predictable and thus a desired object orientation can be effected more quickly than would be the case otherwise.
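De-coupled rotation means each drag applies exactly one single-axis rotation in model space. A minimal sketch (the function name is illustrative, not from the patent):

```python
import math

def rotate_about_axis(p, axis, angle):
    """Rotate point p = (x, y, z) about a single model-space axis
    ('x', 'y', or 'z') by `angle` radians. De-coupled manipulation
    applies exactly one such rotation per drag operation."""
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = p
    if axis == 'x':
        return (x, y * c - z * s, y * s + z * c)
    if axis == 'y':
        return (x * c + z * s, y, -x * s + z * c)
    if axis == 'z':
        return (x * c - y * s, x * s + y * c, z)
    raise ValueError("axis must be 'x', 'y', or 'z'")
```

Because only one axis changes at a time, the user can predict the result of a drag, which is the intuitiveness benefit the text claims over coupled two- or three-axis rotation.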
  • Bounding box 409 with active zones shows some of the scaling manipulations available by clicking and dragging on the scaling active zones on the left front face of bounding box 401.
  • the scaling manipulation allows re-sizing the bounding box along with the 3D object inside across one or two dimensions of the chosen face of the bounding box.
  • re-sizing the bounding box along with the 3D object it contains across one or two dimensions of the chosen face alters the relative dimensions of the bounding box and object and is thus a non-homogenous scaling operation.
  • An alternative embodiment of a preferred embodiment (user selectable by depressing a key on the keyboard) provides re-sizing (as opposed to re-shaping) the bounding box along with the object it contains across all three dimensions, thus maintaining the relative dimensions of the bounding box and object, and is thus a homogenous scaling operation.
  • homogenous scaling operations would also tolerate greater user active zone selection inaccuracies because the same re-sizing operation would result from selecting any one of the up to three (depending upon bounding box orientation) displayed scaling active zones of a given bounding box corner.
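The two scaling variants reduce to how the separately stored scale vector is updated; a homogenous scale applies one factor to all three components, while a non-homogenous scale applies a per-axis factor. The function names below are illustrative:

```python
def scale_homogeneous(scale, factor):
    """Homogenous (uniform) re-sizing: all three dimensions share one
    factor, so the relative dimensions of box and object are preserved."""
    return tuple(s * factor for s in scale)

def scale_nonhomogeneous(scale, factors):
    """Non-homogenous re-shaping: each dimension gets its own factor,
    so the relative dimensions of box and object change."""
    return tuple(s * f for s, f in zip(scale, factors))
```

This also illustrates why homogenous scaling tolerates zone-selection inaccuracy: any corner zone of the box yields the same single-factor update.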
  • the bounding box with active zones thus provides what might be termed nine degrees of freedom: movement in three directions (up to two concurrently); rotation about any one of the three axes; and scaling along three directions.
  • each “mode” is a temporary condition which is entered “on-the-fly” by clicking on one of the active zones (or handles in the alternative embodiments) and is exited by releasing the mouse button and might thus be termed "spring-loaded.”
  • spring-loaded the particular manipulation mode chosen is only active while the mouse button remains pressed down.
  • the bounding box provides direct manipulation capability which further increases its intuitiveness. Because the manipulation, be it moving, scaling, or rotating, is constrained to only one or two of the three possible axes of the 3D object, every position on the screen specifies exactly one particular movement, rotation, or scaling value. If the user keeps the pointer "pinned" to the spot on the bounding box originally clicked, the bounding box will appear to smoothly track the pointer movement. This further provides the desired kinesthetic feedback of direct locational coupling between the user motion and the 3D object display motion, which thus increases user intuitiveness. Still further, it should be appreciated that the manipulations of a preferred embodiment are performed in an absolute sense rather than in a relative sense. An absolute manipulation bases the current object position on the difference between the current pointer position and the original pointer position.
  • the transformation for an absolute manipulation is a gross determination of the current position versus the original position of all of the object movements made by the current manipulation.
  • relative manipulations determine the current object position as an incremental difference from the previous object position.
  • the transformation for a relative manipulation is an incremental determination of the current position versus the last position, or each small incremental object movement made by the current manipulation.
  • the importance of using absolute manipulation determinations is the improved user intuitiveness. The improved user intuitiveness is due to the result of absolute determinations wherein when a user returns the pointer to the original location in an object manipulation, the object is returned to its original orientation because the gross difference is zero.
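The absolute-versus-relative distinction can be illustrated with a one-dimensional drag; the function names are illustrative. An absolute update always measures the gross pointer offset from the drag's start, so returning the pointer to its start returns the object exactly to its original position; a relative update accumulates per-event increments instead.

```python
def absolute_update(original, start_pointer, current_pointer):
    # Absolute: position depends only on the gross offset of the current
    # pointer from the pointer position where the drag began.
    return original + (current_pointer - start_pointer)

def relative_update(position, last_pointer, current_pointer):
    # Relative: position accumulates each small incremental pointer offset.
    return position + (current_pointer - last_pointer)
```

With the absolute form, when the pointer returns to its starting location the gross difference is zero, so the object lands exactly on its original position, which is the intuitiveness benefit the text describes.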
  • the object to be manipulated is either in world space coordinates or in model space coordinates which are passed through transforms in order to reach world space coordinates as is explained more fully below (and either way the object must also pass through a viewing transform as is well known in the art) in order to be displayed.
  • the object is stored in model space coordinates in order to facilitate more efficient manipulation calculations.
  • the object must first pass through a transformation, which translates the object to world space, before being displayed on the computer display.
  • this transformation from model space to world space is represented as three separate transformations: one for scaling, at least one for rotation (alternative embodiments support multiple rotation transforms, as is explained more fully below), and one for translation.
  • the concatenation of these three transforms forms the complete transformation from model space to world space.
  • the scale and translation transforms are separately stored as 3D vectors, and the rotation transform is stored as a 3 x 3 matrix. Storing the transforms separately allows for changing any component of the three separate transforms without affecting the other transforms. The alternative (storing a single transformation matrix) is less efficient because it would require additional matrix computations.
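The concatenation of the three separately stored transforms can be sketched as applying scale, then rotation, then translation to a model-space point. This is a minimal sketch under the storage scheme the text describes (scale and translation as 3-vectors, rotation as a 3 x 3 matrix):

```python
def model_to_world(p, scale, rot, trans):
    """Transform model-space point p to world space.
    scale/trans are 3-vectors; rot is a 3 x 3 matrix (row-major).
    Keeping the three components separate lets any one of them be
    changed without recomputing the others."""
    s = [p[i] * scale[i] for i in range(3)]                          # scale
    r = [sum(rot[i][j] * s[j] for j in range(3)) for i in range(3)]  # rotate
    return tuple(r[i] + trans[i] for i in range(3))                  # translate
```

For example, a rotation manipulation only overwrites `rot`, leaving the stored scale and translation vectors untouched, which is the efficiency argument made above against a single combined matrix.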
  • the user can manipulate the bounding box and the object it contains by clicking on one of the spring-loaded active zones and dragging the bounding box in the desired direction of manipulation. Referring to Figure 5, the user manipulation steps will now be described.
  • the next determination that needs to be made is whether the user is selecting an active zone in order to manipulate the bounding box and object or, alternatively, the user is de-selecting the bounding box.
  • the x and y coordinates of the pointer location when the user pressed the mouse button down are used to create a ray from the eyepoint into the screen at that x,y pointer location 503.
  • the ray is transformed into model space 505.
  • the ray is defined by its origin, which is equal to the camera position or "eyepoint," and its direction.
  • the direction vector of the ray is constructed by concatenating the vector (x,y,vd) with the 3 x 3 submatrix of M, where vd is the viewing distance (the distance from the projection plane to the eyepoint of the virtual camera) and M is the transpose of the 4 x 4 viewing matrix of the virtual camera.
  • the ray is then transformed into the object's local coordinate system (model space) by multiplying both the ray's origin and direction with the inverse of the current transformation matrix (formed by concatenating the three transforms, one for each of scale, rotate and translate, as is explained more fully below).
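In outline, the direction construction above reads as follows (a hedged sketch; `ray_direction` and the row-major matrix layout are assumptions, not the patent's code):

```python
def ray_direction(x, y, vd, view):
    # Multiply (x, y, vd) through the 3x3 submatrix of the transpose of
    # the 4x4 viewing matrix, as described above; vd is the viewing
    # distance from the projection plane to the eyepoint.
    v = (x, y, vd)
    # Using the transpose means indexing view[j][i] instead of view[i][j].
    return tuple(sum(view[j][i] * v[j] for j in range(3)) for i in range(3))

IDENTITY4 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```

With an identity viewing matrix the direction is simply (x, y, vd); the ray origin is the eyepoint, and both are then multiplied by the inverse of the concatenated scale/rotate/translate matrix to reach model space.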
  • the viewing ray is inverse-transformed into the model space of the axis-aligned bounding box. This causes intersections to then occur with axis-parallel planes which thus simplifies the calculations.
  • a 3D version of the Woo algorithm described in Graphics Gems (citation above) is used.
  • only those faces of the box visible to the virtual camera need to be tested for intersection. Thus, there is at most one intersection between the visible faces and the viewing ray. The position of this intersection point, as well as a number of other variables as discussed below with respect to particular manipulations, is then recorded.
  • the dimensions of the face planes are extended slightly outward when the intersection calculations are performed, because users might expect to hit a bounding box edge even when they click close to the edge but just outside the bounding box. If no face of the bounding box is hit by the ray (no intersection is found between the viewing ray and any visible face of the bounding box), which merely means the user moved the pointer to another area of the screen before pressing the mouse button, then in a preferred embodiment the object is de-selected and the bounding box disappears ("stop" at step 509).
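A simplified stand-in for the cited Woo intersection test is the standard slab method below. This is an illustration under assumptions (the patent tests only the faces visible to the camera, which is not modeled here), with `slack` playing the role of the slight outward extension of the face planes:

```python
def ray_box_hit(origin, direction, box_min, box_max, slack=0.0):
    # Nearest intersection of a ray with an axis-aligned box whose
    # faces have been pushed outward by `slack` on every side.
    lo = [m - slack for m in box_min]
    hi = [m + slack for m in box_max]
    t_near, t_far = float("-inf"), float("inf")
    for i in range(3):
        if direction[i] == 0:
            if not lo[i] <= origin[i] <= hi[i]:
                return None  # parallel to this slab and outside it
            continue
        t1 = (lo[i] - origin[i]) / direction[i]
        t2 = (hi[i] - origin[i]) / direction[i]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    if t_near > t_far or t_far < 0:
        return None  # slabs do not overlap, or box is behind the eye
    t = t_near if t_near >= 0 else t_far
    return tuple(origin[i] + t * direction[i] for i in range(3))
```

The last test below shows the effect of the outward extension: a ray passing just outside the box still registers a hit when `slack` is nonzero.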
  • each face of the bounding box is subdivided into nine rectangular active-zone subregions, which makes it a simple matter to determine in which of the nine regions the hitpoint lies 511.
  • an index scheme is used to indicate which active zone was selected. Since it is already known that a face of the bounding box was hit, a preferred embodiment determines which particular active zone was selected using the following steps for each coordinate axis of the hit face (the Y axis is used as the example below, with insety the inset of the Y edge bands):
  • Y axis: is Y > Ymin + insety? If yes: is Y > Ymax - insety? If yes: the hitpoint is in the top portion of the hit face, so ZONE = ZONE + 2 (10 in binary)
  • for example, if the X axis determination yields a value of 1 (01 in binary) and the Y axis determination yields a value of 0 (00 in binary), then the resulting index value of the selected active zone hitpoint would be 1 (or 0001 in binary).
  • the polarity is merely the face, of the two parallel faces of the bounding box, having the greater face value along the axis perpendicular to the hit face. Determination of the axis perpendicular to the hit face, determining the polarity of the hit face, and using indices for coordinate axes are techniques well known in the art. Together, these three variables completely determine the active zone that was hit (note that in total there are 9 active zones per box face and 6 box faces, and therefore a total of 54 active zones in a preferred embodiment).
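Putting the per-axis edge-band tests together, one plausible encoding of the 3 x 3 active-zone index is shown below. The two-bits-per-axis packing and all names are assumptions chosen to be consistent with the binary example given above, not the patent's exact scheme:

```python
def axis_zone(v, vmin, vmax, inset):
    # Classify one coordinate of the hitpoint on the hit face:
    # 0 = low edge band, 1 = middle band, 2 = high edge band.
    if v < vmin + inset:
        return 0
    if v > vmax - inset:
        return 2
    return 1

def active_zone(x, y, xmin, xmax, ymin, ymax, inset):
    # Pack the two per-axis classifications into one index; the X
    # classification occupies the low two bits, Y the next two.
    return axis_zone(x, xmin, xmax, inset) | (axis_zone(y, ymin, ymax, inset) << 2)
```

Together with the hit face's axis and polarity, such an index distinguishes the 9 zones on each of the 6 faces (54 zones in total, as noted above).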
  • a translation manipulation will be caused by any further movement of the mouse while the user continues to hold the mouse button down. This is discussed below with reference to Figure 6.
  • a rotation zone was hit by the viewing ray then a rotation manipulation will be caused by any further movement of the mouse while the user continues to hold the mouse button down. This is discussed below with reference to Figure 7.
  • a scaling zone was hit by the viewing ray then a scaling manipulation will be caused by any further movement of the mouse while the user continues to hold the mouse button down. This is discussed below with reference to Figure 8.
  • the difference between these two hitpoints is calculated 609.
  • This difference, which represents the amount of movement or translation the user has indicated via movement of the mouse pointer, is then transformed into scaled, rotated coordinates 611.
  • the new translation is added to the original translation transformation and the translation transformation is set to this sum 613.
  • the new translation is added to the original translation transformation in order to create a new combined translation transformation which includes the user's latest manipulations. Now that the translation transform includes the latest user manipulation, the bounding box and the object it contains can be re-displayed 615.
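The translation update above reduces to adding the hitpoint difference into the stored translation vector. A minimal sketch with assumed names (the step that carries the difference into scaled, rotated coordinates is omitted for brevity):

```python
def update_translation(original_translation, hit0, hit1):
    # Absolute-style update: the full difference between the original
    # hitpoint (at mouse-down) and the current hitpoint is added to the
    # translation stored at mouse-down, so returning the pointer to the
    # original location restores the original position.
    delta = tuple(b - a for a, b in zip(hit0, hit1))
    return tuple(t + d for t, d in zip(original_translation, delta))
```

The second assertion below captures the absolute-determination property: a zero difference leaves the translation unchanged.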
  • re-displaying the bounding box and the object within it is achieved by first creating a cumulative transform M from the current three separate scale, rotate and translate transforms 901. Then the object is re-displayed, in a preferred embodiment, by transforming each vertex of the object from model space to world space by multiplying the vertices by M, passing the transformed object vertices through a viewing matrix V (which defines where the camera is located, where the camera is pointed, the focal length of the lens, and the camera screen geometry; please note that these techniques are well known in the art) and drawing lines between the transformed object vertices 903. Finally, to re-display the bounding box, in a preferred embodiment, each vertex of the bounding box is transformed by multiplying the vertices by M and the viewing matrix V and drawing lines between the transformed bounding box vertices 905.
  • the mouse button is again checked 603 to determine whether the user has finished all current translation manipulations. If the user has finished all current translation manipulations, the user will no longer be pressing the mouse button. However, if the user has not yet finished all current manipulations then the user will still be pressing the mouse button and the same sequence of steps 605 through 615 will be followed. In this way, the bounding box and the object within it will appear to the user to continuously move, or translate, across the screen as the user moves the pointer with the mouse while continuing to hold the mouse button down. These continuous movements will only pause while the user stops moving the mouse and will only stop when the user stops pressing the mouse button down.
  • a gridding capability is provided whereby movements would be constrained along the box coordinate system axes so as to stay on the intersections of a three-dimensional grid.
  • the gridding is enabled after the intersection point is transformed into world space, resulting in gridding in world space which would thus not be affected by the orientation of the bounding box in model space.
  • a constrain mechanism is triggered by the user holding down the shift (or other) key when pressing the mouse button to select an active zone and manipulate the bounding box and the object within it. The shift-constrain mechanism constrains the bounding box and object to movements which lie along the one axis that has the larger translation component at the time the shift key is pressed.
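The shift-constrain mechanism can be sketched as zeroing every component of the translation delta except the dominant one; this is an assumed reading of constraining movement to the axis with the larger translation component:

```python
def constrain_to_dominant_axis(delta):
    # Keep only the component with the largest magnitude; the other
    # axes are zeroed for as long as the constraint is active.
    k = max(range(len(delta)), key=lambda i: abs(delta[i]))
    return tuple(d if i == k else 0 for i, d in enumerate(delta))
```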
  • a still further alternative embodiment would limit translation to a specified volume in 3D space. For example, when moving a chair in a room on the visual display the chair would be limited by the room boundaries.
  • the rotation sequence will now be described. Again, as was stated above, the calculated active zone classification, the axis and the polarity are stored. The original three separate scaling, rotation and translation transforms - at the time the mouse button was first clicked on the active zone - are also saved 701. Finally, the index of the axis around which rotation will occur is also stored. This is because in a rotation manipulation the bounding box and the object it contains will be rotated around this axis.
  • the center line of the bounding box, which passes through the origin of the model space coordinate system, is the axis of rotation.
  • Alternative embodiments support moving the axis of rotation elsewhere within the box, for instance to an edge of the bounding box, and even to an axis outside of the bounding box.
  • an indicator such as a cross-hair or visible rotation axis line would be displayed when a rotation manipulation was selected to thus inform the user of the current axis of rotation.
  • a ray through the current mouse x,y location is transformed into model space 705 (accomplished in the same manner as step 605 in the translation manipulation sequence). Then it is determined where the transformed ray intersects the plane of the selected face and active zone 707. This provides two intersection points in the plane in which the bounding box and the object it contains are to be rotated: the original hitpoint that was stored in step 701 and the current hitpoint just now determined in step 707.
  • both the original hitpoint A and the current hitpoint B are inverse-transformed through the scale transform 709. The points are passed through the scaling transform (as was stated above) when calculating the intersection points A and B because non-homogeneous scaling transforms would change the angles; therefore, in a preferred embodiment, angle a (explained below) is calculated in scaled space.
  • calculating this angle is essentially two-dimensional and is thus computationally more efficient than using a full 3D algorithm.
  • this angle a is used to construct a rotation matrix which is then preconcatenated into the original rotation matrix (earlier stored, as was discussed above with reference to step 701) and the rotation transform of the bounding box is then set to this compound transform 713.
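The essentially two-dimensional angle calculation and the preconcatenation step might be sketched as follows; the names are hypothetical and the rotation axis is fixed to Z purely for illustration:

```python
import math

def rotation_angle(a, b):
    # Signed angle (radians) from 2D hitpoint A to 2D hitpoint B around
    # the rotation axis, which projects to the origin of the face plane.
    return math.atan2(b[1], b[0]) - math.atan2(a[1], a[0])

def rotation_matrix_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def preconcatenate(r_delta, r_old):
    # Compose the incremental rotation into the rotation stored at
    # mouse-down, yielding the new rotation transform.
    return [[sum(r_delta[i][k] * r_old[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]
```

Dragging from (1, 0) to (0, 1) yields a quarter-turn; preconcatenating two quarter-turns gives a half-turn, matching the compound-transform behavior described above.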
  • the rotation transform includes the latest user manipulation, the bounding box and the object it contains can be re-displayed 715. Please note that this is accomplished in the same manner as step 615 in the translation manipulation sequence (and thus follows the sequence of steps discussed above with reference to Figure 9).
  • the mouse button is again checked 703 to determine whether the user has finished all current rotation manipulations. If the user has finished all current rotation manipulations, the user will no longer be pressing the mouse button.
  • the rotation manipulations are constrained to increments of multiples of a predefined angle. For instance, increment angles would be constrained to multiples of 45 degrees when the shift key is held down during rotations.
  • the rotation transform of the bounding box would be stored as three successive rotation angles, one of which would be added to angle a to constrain the result.
  • if the rotation transform were stored as three successively applied rotation angles around the X, Y and Z axes respectively, then the rotation angles could be limited or gridded separately for each of the three axes.
  • the ratio of the original hitpoint A and the current hitpoint B is determined 807. Note that neither of these points needs to be inverse transformed before this calculation can be made because they are both already in the same space and because they are both only one transform away from model space. In other words, because both the original hitpoint A and the current hitpoint B are only the scaling transform away from model space, they are in the same relative space and scaling calculations can be made directly on them.
  • these 2D points A and B are then used to update the scaling transform 811.
  • the ratio of B/A is then multiplied into the current scaling transform to yield a new scaling transform.
  • scaling along the one axis that is not involved remains unchanged by these computations.
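The scaling update multiplies the per-axis ratio B/A into the stored scale while leaving the uninvolved axis unchanged; a hedged sketch with assumed names:

```python
def update_scale(scale, a, b, face_axes):
    # a, b: original and current hitpoints (already in the same space,
    # as noted above). face_axes holds the indices of the two axes of
    # the hit face; the third axis is untouched.
    s = list(scale)
    for i in face_axes:
        s[i] *= b[i] / a[i]
    return tuple(s)
```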
  • the scaling transform includes the latest user manipulation, the bounding box and the object it contains can be re-displayed 813. Please note that this is accomplished in the same manner as step 615 in the translation manipulation sequence (and thus follows the sequence of steps discussed above with reference to Figure 9).
  • the mouse button is again checked 803 to determine whether the user has finished all current scaling manipulations. If the user has finished all current scaling manipulations, the user will no longer be pressing the mouse button. However, if the user has not yet finished all current scaling manipulations then the user will still be pressing the mouse button and the same sequence of steps 805 through 813 will be followed. In this way, the bounding box and the object within it will appear to the user to continuously scale on the screen as the user moves the pointer with the mouse while continuing to hold the mouse button down. These continuous movements will only pause while the user stops moving the mouse and will only stop when the user stops pressing the mouse button down.
  • scaling is gridded or constrained. This would prevent objects from becoming smaller than a predefined (or user settable) minimum size and would prevent a negative scaling manipulation which would otherwise cause an object to flip around within itself.
  • a further consideration arises when, due to the camera position (the user's viewpoint) and the physical display size or area available for object display (e.g., the object could be in a window on the display which window is smaller than the total display area), the object to be manipulated is larger than the area available to the user.
  • an alternative embodiment would, upon noting an object size larger than the available viewing area, provide a reduced size bounding box thus still providing the user with complete access to all of the available object manipulations.
  • the implementation of such an alternative embodiment would be readily apparent to one of ordinary skill in the art based upon the teachings of a preferred embodiment (e.g., a scaling factor could be applied to the object dimensions to fool the bounding box generation means into thinking that the object is smaller than it actually is).
  • Providing a bounding box which is not of equal size with the dimensions of the object to be manipulated provides further capabilities.
  • a reduced size bounding box based on a space size which encompasses all of the objects desired to be included in the group could be used.
  • objects for instance furniture in a room or scene
  • a scene of objects for instance a room containing furniture
  • a larger space than a scene could be selected thus providing an infinite sized bounding box which, again, is of a reduced size to facilitate full user manipulation.
  • the virtual box controller can be used for many applications. In fact, this versatility makes it an attractive choice as the "standard" or "ubiquitous" direct manipulation method for 3D in an operating system in accordance with a preferred embodiment. Having a single, powerful manipulator presents a far more consistent and easier to learn interface than using a variety of domain or task-specific manipulators as is common in other graphics applications or systems. As yet, no other interaction controller has proved as versatile and easy to use for this purpose.
  • Figure 10 illustrates a virtual box being used to move a 2D representation of a 3D space.
  • This processing is referred to as space handles.
  • the virtual box here moves all of the objects in the scene at the same time.
  • it moves the virtual camera or "space" the objects live in.
  • convenient locations for such "space handles" could be on floors, ceilings, staircases, etc. Note again that it is not just a particular object that moves: all of the objects move, according to the same convenient virtual box interface that is used to manipulate a single object.
  • the space can be moved (changing the camera location and center of interest), rotated (rotating the camera location around the center of interest) and scaled (changing the camera's focal length or "zoom").
  • space handles could be scattered sparsely around in 3D space (e.g. in a 3D grid) and any one of them could be used to manipulate the view.
  • the drawing of the controller would be persistent, i.e. not serve as selection feedback.
  • a different color code could be used for space handles, to distinguish a space handle from the virtual boxes serving as selection feedback and object controllers.
  • Figure 11 illustrates a virtual box being used to manipulate lights or cameras.
  • a virtual box can also be used to directly manipulate 3D curves. Curves in 3D are traditionally very hard to edit using direct manipulation techniques, because their projections look so two-dimensional and small movements or shape changes are very hard to interpret correctly when the curve is viewed in perspective.
  • a virtual box interface provides an intuitive, consistent and powerful way of changing such curves.
  • Figure 12 illustrates a 3D curve that has been selected to enter a shape modification mode.
  • a virtual box appears around the point where the curve was selected, represented by the dot on the display centered in the virtual box.
  • Figure 13 illustrates the effect of movements of the virtual box on the curve in accordance with a preferred embodiment. The usual movements of the virtual box cause the point on the curve to move along, modifying the curve's shape locally. Movement is predictable due to the virtual box controller.
  • Figure 14 illustrates scaling a virtual box in accordance with a preferred embodiment. Scaling the virtual box changes the local curvature.
  • Figure 15 illustrates a curve that has been locally flattened by scaling the virtual box.
  • Figure 16 illustrates the results of rotating the virtual box on the curve. The rotation results in a change of the tangent.
  • Figure 17 illustrates the effect of rotating the virtual box in the two non-osculating planes of rotation. The rotation results in a rotation of the osculating plane and a resulting modification of the curve's shape.
  • FIG 18 illustrates a moving virtual box curve modification in accordance with a preferred embodiment.
  • the virtual box appears on the curve, ready for shape manipulation. Instead of clicking anywhere on the virtual box, the user clicks on the black dot and drags the cursor along the curve.
  • the virtual box (and the dot) follow along the curve's path. The box's orientation is adjusted automatically to reflect the 3D curve's tangent direction and osculating plane orientation.
  • Figure 20 illustrates a virtual box appearing centered on the spot on the surface that was selected in accordance with a preferred embodiment.
  • Figure 21 illustrates a virtual box moved straight up, causing a corresponding deflection in the surface in the direction and orientation of the movement of the virtual box in accordance with a preferred embodiment.
  • Figure 22 illustrates a virtual box moved straight up some more causing a corresponding deflection in the surface in the direction and orientation of the movement of the virtual box in accordance with a preferred embodiment.
  • Figure 23 illustrates a virtual box moved aside and up causing a corresponding deflection in the surface in the direction and orientation of the movement of the virtual box in accordance with a preferred embodiment.
  • Figure 24 illustrates a virtual box scaled up uniformly, flattening the surface at the top and causing a corresponding deflection in the surface in the direction and orientation of the movement of the virtual box in accordance with a preferred embodiment.
  • Figure 25 illustrates a virtual box scaled up non-uniformly, flattening the surface in one parametric direction, but not another causing a corresponding deflection in the surface in the direction and orientation of the movement of the virtual box in accordance with a preferred embodiment.
  • Figure 26 illustrates a virtual box tilted, slanting the surface locally, changing the surface's tangent plane at the selected surface point, and causing a corresponding deflection in the surface in the direction and orientation of the movement of the virtual box in accordance with a preferred embodiment.
  • Figure 27 illustrates a virtual box rotated, causing a corresponding twisting of the surface in the direction and orientation of the movement of the virtual box in accordance with a preferred embodiment.
  • a virtual box controller can also be used to manipulate the parameters of a sweep.
  • a sweep is a 2D graphical object (a contour) that is moved through 3D space to create a representation of a solid object.
  • Figure 28 is an illustration of a sweep object representative of a piece of a whale intestine in accordance with a preferred embodiment. If the sweep object is selected, then a virtual box provides selection feedback and direct (but rigid) manipulation of the sweep surface.
  • Figure 29 illustrates the virtual box encompassing a sweep object in accordance with a preferred embodiment.
  • a menu command (for example, pressing the cmd-E keys for Edit) invokes a parameter editing mode. The surface disappears and the input parameters to the sweep surface appear in wireframe rendering.
  • Figure 30 illustrates the wireframe rendering of the sweep object in accordance with a preferred embodiment.
  • FIG. 31 illustrates the virtual box enclosing the contour in accordance with a preferred embodiment.
  • the contour can now be scaled, rotated or otherwise manipulated as discussed above using the familiar virtual box controls.
  • Figure 32 illustrates a shrink operation on the contour in accordance with a preferred embodiment. Clicking elsewhere in the scene or pressing cmd-E again to exit the parameter editing mode recreates the sweep surface and renders it using shading.
  • FIG. 33 illustrates a sweep object with the modifications (shrinking) made to the wireframe in accordance with a preferred embodiment.
  • the contour could also have been rotated within its plane or rotated out of its original plane.
  • An alternative embodiment would allow contour virtual boxes to be moved along the trajectory interactively to create the sweep object.
  • a virtual box has an internal coordinate system and origin, which are used during rotation and scaling operations. As described in detail above, rotations always occur around the origin, and scales are always relative to the origin as well. Sometimes it is convenient or more intuitive to move the origin.
  • Figure 34 illustrates a rendering of a tape recorder and the operation of a tape recorder door. Opening the tape recorder door is a natural function associated with using a tape recorder.
  • Figure 35 illustrates the default virtual box on a tape recorder door in accordance with a preferred embodiment.
  • Figure 36 illustrates the result of rotation for a default virtual box on a tape recorder door. Note that the rotation, centered around the default rotation axis, is not the required interaction. Thus, the rotation axis must be adjusted so that its origin is centered appropriately.
  • Figure 37 illustrates a virtual box with an adjusted origin and rotation axis in accordance with a preferred embodiment.
  • Figure 38 illustrates a tape recorder door that is rotated around an axis, but now the axis is in the right place.
  • a menu command, Show Axes, toggles the display of the virtual box's rotation axes. Dots appear on the visible faces where they intersect the axes.
  • Figure 39 illustrates the sequence of events associated with modifying the axis in accordance with a preferred embodiment:
  • first, the virtual box is shown before the axis modification is invoked;
  • next, the virtual box is shown after the axes are displayed;
  • finally, a modification of the axis is shown as the axis is dragged by the mouse pointer.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention concerns a method and apparatus for the direct manipulation of surface objects on computer displays. The method produces a 3-dimensional virtual box enclosing a selected surface object, the box presenting active zones that the user can select by means of a cursor, such that when the user manipulates the cursor after selecting an active zone, the 3-dimensional virtual box and the surface object within it are manipulated, with a direct kinesthetic correspondence between the user's manipulation of the cursor and the manipulation of the virtual box and the surface object.
PCT/US1994/000139 1993-10-21 1994-01-06 Systeme de manipulation de surfaces oriente objet WO1995011482A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU60831/94A AU6083194A (en) 1993-10-21 1994-01-06 Object-oriented surface manipulation system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13995293A 1993-10-21 1993-10-21
US08/139,952 1993-10-21

Publications (1)

Publication Number Publication Date
WO1995011482A1 true WO1995011482A1 (fr) 1995-04-27

Family

ID=22489056

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1994/000139 WO1995011482A1 (fr) 1993-10-21 1994-01-06 Systeme de manipulation de surfaces oriente objet

Country Status (2)

Country Link
AU (1) AU6083194A (fr)
WO (1) WO1995011482A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0824247A2 (fr) * 1996-08-09 1998-02-18 Genius Cad-Software GmbH Méthode pour modifier des objets tridimensionnels
WO2000031690A1 (fr) * 1998-11-20 2000-06-02 Opticore Ab Procede et dispositif de creation et de modification de modeles tridimensionnels numeriques
US6867771B2 (en) 2002-05-07 2005-03-15 Autodesk, Inc. Controlled face dragging in solid models
US6918087B1 (en) 1999-12-16 2005-07-12 Autodesk, Inc. Visual clues to navigate three-dimensional space in a computer-implemented graphics system
US7092859B2 (en) 2002-04-25 2006-08-15 Autodesk, Inc. Face modification tool
US8525838B2 (en) 2008-02-08 2013-09-03 Autodesk, Inc. Associative fillet

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2239773A (en) * 1990-01-04 1991-07-10 Apple Computer Graphical object, e.g. cursor, is rotated during translation
CA2077173A1 (fr) * 1991-11-22 1993-05-23 Michael Chen Methode et appareil de manipulation directe d'objets tridimensionnels sur un ecran d'ordinateur

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2239773A (en) * 1990-01-04 1991-07-10 Apple Computer Graphical object, e.g. cursor, is rotated during translation
CA2077173A1 (fr) * 1991-11-22 1993-05-23 Michael Chen Methode et appareil de manipulation directe d'objets tridimensionnels sur un ecran d'ordinateur

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
L. PIEGL: "Modifying the shape of rational B-splines. Part1: curves", COMPUTER AIDED DESIGN, vol. 21, no. 8, October 1989 (1989-10-01), LONDON, GB, pages 509 - 518, XP000088133, DOI: doi:10.1016/0010-4485(89)90059-6 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0824247A2 (fr) * 1996-08-09 1998-02-18 Genius Cad-Software GmbH Méthode pour modifier des objets tridimensionnels
EP0824247A3 (fr) * 1996-08-09 1999-08-04 Autodesk, Inc. Méthode pour modifier des objets tridimensionnels
US6281906B1 (en) 1996-08-09 2001-08-28 Autodesk, Inc. Method for the modification of three-dimensional objects
US6801217B2 (en) 1996-08-09 2004-10-05 Autodesk, Inc. Determining and displaying geometric relationship between objects in a computer-implemented graphics system
WO2000031690A1 (fr) * 1998-11-20 2000-06-02 Opticore Ab Procede et dispositif de creation et de modification de modeles tridimensionnels numeriques
US6918087B1 (en) 1999-12-16 2005-07-12 Autodesk, Inc. Visual clues to navigate three-dimensional space in a computer-implemented graphics system
US7092859B2 (en) 2002-04-25 2006-08-15 Autodesk, Inc. Face modification tool
US6867771B2 (en) 2002-05-07 2005-03-15 Autodesk, Inc. Controlled face dragging in solid models
US8525838B2 (en) 2008-02-08 2013-09-03 Autodesk, Inc. Associative fillet

Also Published As

Publication number Publication date
AU6083194A (en) 1995-05-08

Similar Documents

Publication Publication Date Title
US5583977A (en) Object-oriented curve manipulation system
CA2077173C (fr) Methode et appareil de manipulation directe d'objets tridimensionnels sur un ecran d'ordinateur
US5861889A (en) Three dimensional computer graphics tool facilitating movement of displayed object
US6448964B1 (en) Graphic object manipulating tool
US6426745B1 (en) Manipulating graphic objects in 3D scenes
US5689628A (en) Coupling a display object to a viewpoint in a navigable workspace
US6023275A (en) System and method for resizing an input position indicator for a user interface of a computer system
US5841440A (en) System and method for using a pointing device to indicate movement through three-dimensional space
US7528823B2 (en) Techniques for pointing to locations within a volumetric display
US5371845A (en) Technique for providing improved user feedback in an interactive drawing system
Mine Working in a virtual world: Interaction techniques used in the chapel hill immersive modeling program
US7110005B2 (en) Object manipulators and functionality
Liang et al. Geometric modeling using six degrees of freedom input devices
EP0219671A2 (fr) Fonction inverse en miroir dans un système graphique interactif
JPH10283158A (ja) ウィンドウの立体表示装置及びその方法
JPH0573661A (ja) 3次元情報会話システム
US6295069B1 (en) Three dimensional computer graphics tool facilitating movement of displayed object
JP2000298685A (ja) 選択ナビゲータ
Stork et al. Efficient and precise solid modelling using a 3D input device
WO2007035988A1 (fr) Interface pour des contrôleurs informatiques
US20060082597A1 (en) Systems and methods for improved graphical parameter definition
WO1995011482A1 (fr) Systeme de manipulation de surfaces oriente objet
WO1995011480A1 (fr) Systeme de manipulation graphique oriente objet
KR102392675B1 (ko) 3차원 스케치를 위한 인터페이싱 방법 및 장치
JPH08249500A (ja) 3次元図形の表示方法

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AT AU BB BG BR BY CA CH CN CZ DE DK ES FI GB HU JP KP KR KZ LK LU LV MG MN MW NL NO NZ PL PT RO RU SD SE SK UA UZ VN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA
