HK1194480A - User interface and method for implementing a floating-in-the-air user interface - Google Patents
- Publication number: HK1194480A
- Application number: HK14107935.7A
- Authority: HK (Hong Kong)
Description
This application is a divisional of the Chinese patent application with application number 200980135202.4 and invention name "Wide viewing angle displays and user interfaces", which entered the Chinese national phase on 3/9/2011.
RELATED APPLICATIONS
The present application claims priority and benefit under 35 U.S.C. 119(e) from U.S. provisional patent application serial No. 61/129,665 entitled "BROAD VIEWING ANGLE DISPLAYS," filed July 10, 2008, the contents of which are incorporated herein by reference.
Background
The present invention, in some embodiments thereof, relates to a method and apparatus for displaying an image, and more particularly, but not exclusively, to such a method and apparatus which allows an image to be viewed from a wide viewing angle (e.g. from 360 ° around the image).
The present invention, in some embodiments thereof, relates to computerized user interface systems, and more particularly, but not exclusively, to user interface systems including floating-in-the-air displays.
U.S. patent application publication No.2006-0171008 describes a three-dimensional (3D) display system. The 3D display system includes a projector device for projecting an image on a display medium to form a 3D image. The 3D image is formed such that a viewer can view the image from a plurality of angles up to 360 degrees. Various display media are described, namely rotating diffuser screens, circular diffuser screens, and aerogels. The rotating diffuser screen controls the image using a spatial light modulator so that a 3D image is displayed on the rotating screen in a time multiplexed manner. The circular diffuser screen includes a plurality of projectors operating simultaneously to project images from a plurality of locations onto the circular diffuser screen, thereby forming a 3D image. Aerogels may use a projection device described as applicable to a rotating diffuser screen or a circular diffuser screen. Although this disclosure sometimes refers to 3D images as holograms, in fact, the display media taught thereby generate non-holographic 3D images.
Some computer-generated three-dimensional displays are known. Some use microlenses on a flat screen. Some include computer-generated holograms that can be viewed from relatively narrow angles.
A class of three-dimensional (3D) displays, known as volumetric displays, is currently undergoing rapid development. Display types in this class include swept-volume displays and static-volume displays. Volumetric displays allow the display of three-dimensional (3D) graphical scenes within a true 3D volume. That is, rather than projecting the volumetric data onto a 2D display, a volumetric display occupies a true physical 3D volume.
Some user interfaces display the location of a user input indicator in a first display space, translating the location of a user input from a second space, which is the input space. An example of such a user interface is a mouse, where a pointer on a computer screen moves in correspondence with the motion of the mouse: the mouse moves on the desktop in one coordinate system and the pointer moves on the screen in a second coordinate system.
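Purely for illustration, a minimal sketch of the kind of input-space to display-space mapping such a prior-art interface performs; the gain and screen size are arbitrary assumptions:

```python
# Hypothetical illustration: mapping a relative motion in an input space (e.g. a
# mouse on a desktop) into a pointer position in a separate display space.
def map_input_to_display(pointer_xy, delta_xy, gain=2.5, display_size=(1920, 1080)):
    """Translate a relative input-space motion into display-space coordinates."""
    x = min(max(pointer_xy[0] + gain * delta_xy[0], 0), display_size[0] - 1)
    y = min(max(pointer_xy[1] + gain * delta_xy[1], 0), display_size[1] - 1)
    return (x, y)

pointer = (960, 540)
pointer = map_input_to_display(pointer, (12, -4))   # mouse moved right and up
print(pointer)
```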
U.S. patent 7,528,823 to Balakrishnan et al describes a system that creates a volume display and a user-controllable volume pointer within the volume display. The user may point by aiming a beam based on a tangent, a plane, or a vector, positioning the device in three dimensions in association with the display, touching a digitized surface of the display housing, or otherwise entering location coordinates. The cursor may take many different forms including a ray, a point, a volume, and a plane. The ray may include a ring, a bead, a segmented rod, a cone, and a cylinder. The user specifies an input position and the system maps the input position to a 3D cursor position within the volumetric display. The system also determines whether the cursor has designated any objects by determining whether the objects are within the area of influence of the cursor. The system also performs any functions that are activated in association with the designation.
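As an illustrative aside (not the patented implementation), a spherical "area of influence" test of this kind might look as follows; the object names and coordinates are invented for the example:

```python
import math

# Illustrative sketch: a point cursor in a volumetric display with a spherical
# "area of influence"; objects whose centers fall inside that sphere are
# considered designated by the cursor.
def designated_objects(cursor_pos, influence_radius, objects):
    """Return names of objects whose centers lie within the cursor's sphere of influence."""
    hits = []
    for name, center in objects.items():
        if math.dist(cursor_pos, center) <= influence_radius:
            hits.append(name)
    return hits

scene = {"vessel": (0.02, 0.05, 0.11), "tumor": (0.04, 0.01, 0.09)}
print(designated_objects((0.03, 0.02, 0.10), 0.02, scene))
```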
Whiteboarding (white-boarding) is a term used to describe the placement of shared files on an on-screen "shared notebook" or "whiteboard". Videoconferencing and data-conferencing software often includes tools that let the user mark up the electronic whiteboard much as he would a traditional wall-mounted board. The general nature of this type of software is to allow more than one person to work on the image at any one time, keeping the versions synchronized with each other in near real time.
Haptic feedback, often referred to simply as "haptics," is the use of touch sensations in user interface designs to provide information to end users. When referring to mobile phones and similar devices, this typically means using the vibration of a vibration alert from the device to indicate that a touch screen button has been pressed. In this particular example, the phone will vibrate slightly in response to user activation of the on-screen control to compensate for the lack of a normal tactile response that the user would experience when pressing the physical button. Some "force feedback" joysticks and video game steering wheels provide resistance that is another form of tactile feedback.
The background art includes:
U.S. patent 6,377,238 to McPheters;
U.S. patent 7,054,045 to McPheters;
U.S. patent 7,528,823 to Balakrishnan et al;
U.S. published patent application nos. 2006/0171008 to Mintz et al; and
an article entitled "Overview of three-dimensional shape measurement using optical methods" by Chen F., Brown G.M., Song M., published in Opt. Eng. 39(1), 10-22 (January 2000).
Disclosure of Invention
The present invention, in some embodiments thereof, involves displaying holograms to many viewers such that each viewer sees the hologram in exactly the same location, and if a certain portion of the hologram is touched, all other viewers see the image being touched at the same location, each from his own perspective.
The present invention, in some embodiments thereof, relates to projecting paraxial images around 360 °.
There is therefore provided, in accordance with an exemplary embodiment of the present invention, a method of displaying content to a plurality of viewers, the method including:
forming a plurality of volumetric images, each volumetric image having at least a portion of the content and each volumetric image being viewable from its own visibility space; and
a portion of one or more of the visibility spaces is made to overlap with each viewer's pupil.
In some embodiments, the content is a single scene; and each of said volumetric images has a face of said single scene, whether solid or partially transparent, viewable from a different viewpoint.
Optionally, the multiple volumetric images overlap or abut in space.
Optionally, all the volumetric images overlap in space.
Optionally, the overlap between the volume images is complete.
Optionally, the volumetric images are completely overlapped in space.
Alternatively, volumetric images are considered to overlap if image points of one image overlap or lie between image points of the other image. Similarly, full overlap may be defined as the state in which all image points of one image overlap or lie between image points of the other image. Optionally, each point in space identified by the viewer as part of the image is an image point.
In some embodiments of the invention, the viewer is at different azimuthal angles around the space occupied by one of the volumetric images.
Optionally, the different azimuthal angles span the entire circle, a half circle or a quarter circle.
In some embodiments, the two viewers are at a distance of at least 1 meter from each other.
In some embodiments, the viewer sees the images simultaneously.
Optionally, the visibility space overlaps the eyes of the viewer only during a sequence of short periods, and the short periods are spaced apart in time so that the viewer sees a continuous scene.
The present invention, in some embodiments thereof, relates to a user interface including a floating-in-the-air display.
The term floating-in-the-air display is used herein with respect to a bottomless display. A floating-in-the-air display is optionally generated so as not to require a bottom, and thus may appear to float in air, in water, or in a solid.
In some embodiments, the user interface allows a user to reach into the display space, up to and/or into the displayed object and/or scene. "Reaching in" provides natural hand-eye coordination for a user interacting with the user interface. For example, the user is enabled to "touch" the displayed object, and the user, and optionally other viewers, see the "touch".
In some prior art cases, a user manipulates an input device such as a mouse in one space and views the result of the manipulation in another space (display space). The user interface of the present invention enables manipulation of the input device and viewing of the results of the manipulation in the same space.
In some embodiments, the user provides input to the user interface, and the user interface effects some change in the display, whether marking part of the display or causing a larger change, such as cutting, uncovering layers, and so forth. Since the user reaches into the object and appears to manipulate the object, the user appears to have effected a change to the object itself.
In some embodiments, sensory feedback is provided when the user appears to touch the object being displayed. Since the displayed object is floating in the air and does not provide resistance to touch, the user may optionally use a device for pointing, wherein the user interface optionally causes the device to provide sensory feedback when the user "touches" the displayed object.
One example method for optionally providing sensory feedback to a user when "touching" an object in a display includes evoking an artificial touch sensation as known in the art of artificial reality, for example, by the user wearing a vibrating ring or glove. Another example approach is to cause the hand and/or fingers of the user to be heated by projecting a beam of radiation, such as infrared heat, at the hand and/or fingers. Yet another example method includes projecting an acoustic beam, e.g., ultrasound, modulated to induce perception.
Yet another example method for providing sensory feedback to a user includes visually marking a touch point, for example, by highlighting the touch point. Note that the user interface digitally defines the displayed image, so the user interface may optionally cause a location in the displayed image to be highlighted, flashed, change hue, and so forth.
Yet another exemplary method for providing sensory feedback to the user is through audible feedback, such as sounding a "tap" when the pointer "touches" an object, and/or optionally selecting from a variety of sounds for feedback depending on which object is "touched".
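A hedged sketch of how a user interface might dispatch the feedback modalities discussed above once a "touch" is detected; the three device calls are stubs standing in for real drivers (vibrating ring, display highlight, audio output), and all names are illustrative:

```python
# Hypothetical sketch of dispatching sensory feedback when the user interface
# decides a "touch" of a displayed object has occurred.  The three device calls
# below are stubs for hardware-specific drivers, not real APIs.
def vibrate_ring(duration_ms):        # stub for a wearable haptic driver
    print(f"vibrate {duration_ms} ms")

def highlight_point(point, color):    # stub: mark the touch point in the displayed image
    print(f"highlight {point} in {color}")

def play_sound(name):                 # stub: audible feedback, chosen per touched object
    print(f"play '{name}'")

def on_touch(touched_object, touch_point):
    """Provide sensory feedback for a detected 'touch' of a displayed object."""
    vibrate_ring(duration_ms=40)
    highlight_point(touch_point, color="yellow")
    play_sound("tap" if touched_object else "miss")

on_touch("valve", (0.01, 0.02, 0.10))
```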
Where "touch," "grab," and other such manipulation terms are used to describe user interaction, sensory feedback will be considered an option in place here.
Where the term "sensory feedback" is used herein, the term is intended to mean any of the methods listed above, as well as other methods of providing feedback to a user.
Some non-limiting examples of command forms for the user interface system described herein include: actuating an actuator on a tool that is also used for an interface within the display space, such as pressing a button on a pointer that is also used for extending into the display space; and voice commands.
In some embodiments, two or more user interfaces at different locations display the same object and/or scene. A user at one location interacts with the user interface at that location and all users see the interaction. Optionally, users at another location interact with the user interface at the same time, and all users see both interactions. This enables the above-mentioned natural hand-eye coordination interaction between remote locations, with many example uses. Some non-limiting examples include: telemedicine practice; remote teaching; remote robotic manipulation; arcade gaming; and interactive games. Distances by which one location may be far from another include: in another room; in another building; across town; across a country; across an ocean; two or more meters away; one hundred meters or more away; kilometers or more away; and hundreds or thousands of kilometers away.
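A minimal sketch, assuming a reliable message channel between two sites and a JSON message format (both assumptions, not the patent's protocol), of how two user interfaces at different locations could keep one shared scene and show each other's interactions:

```python
import json

# Minimal sketch: each site broadcasts its local interactions (in the shared
# scene coordinate system) and applies those it receives, so both displays show
# both users' "touches".  The channel and message format are illustrative.
class SharedSceneSite:
    def __init__(self, name, channel):
        self.name = name
        self.channel = channel        # any object with send(str) / poll() -> list[str]
        self.annotations = []         # points "touched" by any user, in scene coordinates

    def local_touch(self, point_xyz):
        self.annotations.append(point_xyz)
        self.channel.send(json.dumps({"site": self.name, "touch": point_xyz}))

    def apply_remote(self):
        for raw in self.channel.poll():
            msg = json.loads(raw)
            if msg["site"] != self.name:
                self.annotations.append(tuple(msg["touch"]))

class LoopbackChannel:                # stand-in transport so the sketch runs locally
    def __init__(self): self.queue = []
    def send(self, raw): self.queue.append(raw)
    def poll(self):
        out, self.queue = self.queue, []
        return out

channel = LoopbackChannel()
a, b = SharedSceneSite("clinic", channel), SharedSceneSite("hospital", channel)
a.local_touch((0.02, 0.00, 0.08))
b.apply_remote()
print(b.annotations)                  # the remote site now shows the same touch point
```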
In some applications of the user interface, the floating-in-the-air display utilizes embodiments of the volumetric display described herein. In other applications of the user interface, other volumetric displays are optionally used, provided their nature supports the particular application.
An exemplary embodiment of the present invention also provides a system for displaying content to a plurality of viewers, the system including:
means for generating volumetric images, each volumetric image having at least a portion of the content and each volumetric image viewable from its own visibility space; and
an optical system controlling a portion of one or more of the visibility spaces to overlap with a pupil of each viewer.
In some embodiments, the plurality of volumetric images generated by the unit overlap in space.
Optionally, all volume images generated by the unit overlap in space.
Optionally, there is a complete overlap between the two or more volumetric images.
In some exemplary embodiments, an optical system includes: an orientation determining element determining an orientation of at least one of the visibility spaces with respect to a volumetric image viewable from that visibility space.
Optionally, the orientation determining element comprises a rotating mirror.
Optionally, the orientation determining element is configured to make the orientations of different visibility spaces differ by up to 90°, up to 180°, or up to 360°.
In some embodiments, the system comprises: a time sharing control controlling each visibility space to overlap the pupil only during a sequence of short periods, the short periods being spaced in time so that the viewer sees a continuous scene.
Optionally, a time sharing control controls rotation of the rotating mirror.
According to an exemplary embodiment of the invention, a system is provided, comprising:
an image generation unit that generates a paraxial image; and
an optical system defining a stage and imaging the paraxial images onto the stage such that the images on the stage can be viewed from a visibility space,
wherein the optical system comprises an eyepiece and a mirror configured to direct light to the eyepiece at a plurality of different azimuthal angles, and wherein
each of the azimuthal angles determines a different position of the visibility space; and
the position of the stage is the same for each of said azimuth angles.
In some embodiments, the position of the stage is considered the same for two viewers if, when one viewer touches a given point, the other viewer sees the same point being touched. Optionally, this allows for a tolerance that depends on the viewer's ability to perceive differences in position.
Alternatively, the position of the stage is considered the same for all azimuthal angles if, when a point in the image is touched, viewers gazing from all azimuthal angles see the same point being touched. In this context, two points are the "same" if the viewer cannot resolve the difference between them.
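As a worked example, under the common assumption of roughly one arcminute of human visual acuity (a figure not given in the text), two touched points would count as the "same" when their angular separation at the viewing distance falls below that limit:

```python
import math

# Worked example (assumption: ~1 arcminute angular resolution of the eye) of
# when two points at a given viewing distance cannot be resolved by the viewer.
def resolvable(separation_m, viewing_distance_m, acuity_arcmin=1.0):
    angle_rad = 2 * math.atan(separation_m / (2 * viewing_distance_m))
    return math.degrees(angle_rad) * 60 > acuity_arcmin

print(resolvable(0.0002, 1.0))   # 0.2 mm apart at 1 m: below ~1 arcmin -> False (same point)
print(resolvable(0.0010, 1.0))   # 1.0 mm apart at 1 m: above ~1 arcmin -> True (resolvable)
```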
In some exemplary embodiments, the eyepiece has a light receiving surface that receives light from the paraxial image, and the light receiving surface has a shape obtained by rotating a curve residing in a plane about an axis out of that plane. In an exemplary embodiment of the invention, the light receiving surface is a cylinder, optionally with walls having a curvature that provides image magnification. Optionally, the curvature is not symmetrical up and down. Alternatively or additionally, the axis passes through the image, for example at the center of the image.
Optionally, the curve of the light receiving surface is rotated at least 90° around the axis. For example, if the curve is a semicircle, the surface is a quarter of a spherical shell.
Optionally, the curve is rotated 360° about the axis such that the eyepiece defines an inner cavity. For example, if the curve is a semicircle, the cavity defined is a sphere.
Optionally, the curve is an arc that forms a portion of a circle.
Optionally, the curvature is parabolic.
In some embodiments where the curve is an arc, the axis of rotation does not pass through the center of the arc. Optionally or alternatively, the axis passes through the image. Optionally, the axis passes through the image but is not perfectly vertical. Alternatively or additionally, the axis wobbles.
In some embodiments, the axis of rotation lies in the plane of the curve, at the middle of and perpendicular to the stage.
Optionally, the curvature is concave with respect to the axis of rotation.
Optionally, the curvature is convex with respect to the axis of rotation.
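For illustration only, such a light receiving surface can be sampled as a surface of revolution by sweeping a plane curve (here a circular arc, one of the options mentioned) about a vertical axis; the radius, arc extent and sampling counts are arbitrary:

```python
import math

# Illustrative sketch: sample points of a light-receiving surface obtained by
# rotating a plane curve (here a circular arc) about a vertical axis.
def surface_of_revolution(radius=0.5, arc_deg=90, sweep_deg=360, n_arc=5, n_sweep=8):
    points = []
    for i in range(n_arc + 1):
        elev = math.radians(-arc_deg / 2 + arc_deg * i / n_arc)   # position along the arc
        r, z = radius * math.cos(elev), radius * math.sin(elev)
        for j in range(n_sweep):
            azim = math.radians(sweep_deg * j / n_sweep)          # rotation about the axis
            points.append((r * math.cos(azim), r * math.sin(azim), z))
    return points

pts = surface_of_revolution()
print(len(pts), pts[0])   # a coarse point cloud of a spherical-shell section
```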
In some exemplary embodiments, the mirror rotates about an axis. Optionally, the axis about which the mirror rotates is the axis about which the curvature rotates to obtain the shape of the inner surface of the eyepiece.
Optionally, the axis about which the mirror rotates and/or the axis about which the curvature rotates to form the shape of the inner surface is the symmetry axis of the stage.
In some exemplary embodiments, the system includes an optical element and light going from the paraxial image to the mirror passes through the optical element.
Optionally, the optical element comprises a lens.
Optionally, the optical element comprises a curved mirror.
In some embodiments, the light source is located inside a cavity defined by the eyepiece.
Optionally, the mirror, the image forming unit and/or the optical element are located inside the cavity.
In some embodiments, at least a portion of the optical path between the light source and the stage is located inside the cavity.
In some embodiments of the present invention, the image forming unit includes a transmissive Liquid Crystal Display (LCD).
In some embodiments of the invention, the image forming unit comprises a reflective LCD.
In some embodiments of the invention, the image forming unit comprises a Spatial Light Modulator (SLM). Optionally, the paraxial image is a Computer Generated Hologram (CGH).
Optionally, the paraxial image is a paraxial parallax barrier image.
Optionally, the paraxial image is a two-dimensional image.
Optionally, the paraxial image is a three-dimensional image.
Optionally, the image on the stage is volumetric. In this context, a volumetric image is an image comprising image points which are not limited to a single plane but fill a three-dimensional space. Thus, a volumetric image is an image that occupies a volume, although the volume contains nothing other than air or the like, and light appears to be emitted from image points within the volume. Optionally, the three physical dimensions of the volumetric image have extents of the same order, e.g. each of the height, width and depth of the image measures between 1 cm and 20 cm, e.g. 10 cm. Optionally, larger measurements are provided for one or more dimensions, e.g., 30 cm, 50 cm, 80 cm, 100 cm or more. Optionally, this is provided using a viewer position located inside the imaging system. In an exemplary embodiment of the invention, the diameter of the light receiving surface and its height are selected to match the desired viewing angle and image size. In an exemplary embodiment of the invention, the stage is not curved or piecewise curved and the image forming unit and/or the optical element are thus used for compensation. Alternatively, the image forming unit is not located at the center of the curvature, so different magnifications and/or angular magnitudes can be generated for different viewing angles at the same distance from the system.
In an exemplary embodiment, the image generation unit is configured to generate the same image to be viewed from all of the different azimuthal angles. Optionally, the viewed images differ in size according to different distances. Optionally or alternatively, the image is moved up or down for different viewing heights. Alternatively, the images may be the same even if the viewer raises or lowers his head. However, in an exemplary embodiment of the invention, the system modifies and generates the display such that any movement, change of distance, orientation, or change in elevation produces the same visual effect for the onlooker as if a real image were floating in space and being viewed. As mentioned, in some embodiments, such perfect fidelity is not provided and may be degraded, for example, because one or more types of eye position change are not supported. For example, the same image may be provided from any perspective (rotated to fit), optionally with a different scene for each eye. In another example, vertical repositioning of the eyes does not provide a change in the observed image portion.
In some embodiments, the image generation unit is configured to generate different images to be viewed from different azimuthal angles. For example, the image generation unit may be configured to generate partial images of the scene, each partial image being viewable from a different angle, and the system is configured to image the partial images of the scene as being viewable from the different angles.
In some embodiments, the mirror is tilted about an axis about which the mirror rotates.
An aspect of some embodiments of the invention relates to a method of imaging a paraxial image to be seen by a viewer, the viewer having a pupil at a first location and gazing at a second location, the method comprising:
generating a paraxial image;
imaging the paraxial image to a position at which a viewer gazes such that the image of the paraxial image can be viewed from a visibility space having a widest portion and a narrower portion;
selecting a third position in response to the position of the viewer's pupil; and
the widest part of the visibility space is imaged to the selected third position. Optionally, the imaging comprises: the paraxial image is imaged into the image visibility space and at the same time the plane of the projector is imaged into the plane of the viewer's pupil. For example, the projector in the holographic configuration is an SLM.
In an exemplary embodiment of the invention, the paraxial image is a Computer Generated Hologram (CGH) generated with a Spatial Light Modulator (SLM); and the image of the SLM is in the widest part of the visibility space.
In an exemplary embodiment, the third position is selected to overlap the pupil of the viewer.
Optionally, the image of the CGH is viewable from a visibility space, and the third position is selected such that the visibility space overlaps the viewer's pupil. Optionally, the imaging comprises: the paraxial image is imaged into the image visibility space and at the same time the projector (e.g., SLM in a holographic configuration) plane is imaged into the plane of the viewer's pupil.
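A hedged sketch of the selection step: given the stage location and a tracked pupil location, compute the azimuth, elevation and distance at which the widest part of the visibility space (e.g. the image of the SLM) should be imaged so that it overlaps the pupil; the coordinates and names are illustrative:

```python
import math

# Hedged sketch of selecting the "third position": aim the widest part of the
# visibility space (e.g. the SLM image) toward a tracked pupil position.
def aim_visibility_space(stage_xyz, pupil_xyz):
    dx, dy, dz = (p - s for p, s in zip(pupil_xyz, stage_xyz))
    azimuth = math.degrees(math.atan2(dy, dx))            # where around the stage the viewer is
    elevation = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    distance = math.dist(stage_xyz, pupil_xyz)            # how far out to image the SLM image
    return azimuth, elevation, distance

print(aim_visibility_space((0.0, 0.0, 0.0), (0.7, 0.7, 0.3)))
```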
In some embodiments, the method comprises:
receiving an indication of a location of a viewer's pupil; and
defining a viewing window within which the pupil resides, in response to the indication,
wherein the third position is selected such that the visibility space at least partially overlaps said viewing window. Optionally, the third position is selected such that the visibility space overlaps the entire viewing window.
In some embodiments, receiving an indication of a location of a viewer's pupil comprises:
receiving an indication of a position of a viewer's face; and
the indication is analyzed to obtain an indication of the position of the viewer's pupil.
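A minimal sketch, assuming pupils sit at roughly fixed fractions of a detected face bounding box (a heuristic stand-in, not the method specified here), of deriving pupil position indications from a face position indication:

```python
# Minimal sketch: estimate left/right pupil positions from a face bounding box.
# The fractional offsets are a heuristic assumption, not values from the text.
def pupils_from_face(face_box):
    """face_box = (x, y, width, height) in camera pixels; returns two (x, y) estimates."""
    x, y, w, h = face_box
    eye_y = y + 0.4 * h                      # eyes typically lie ~40% down the face box
    left = (x + 0.3 * w, eye_y)
    right = (x + 0.7 * w, eye_y)
    return left, right

print(pupils_from_face((100, 80, 120, 160)))
```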
Optionally, imaging the SLM comprises generating an image larger than the SLM.
Alternatively or additionally, the image of the CGH is larger than the SLM.
In some exemplary embodiments, the method comprises:
(a) imaging the SLM in response to a position of one viewer's eye; and then
(b) Imaging the SLM in response to the position of the other viewer's eye; and
repeating (a) and (b) such that the viewer sees successive images.
Optionally, the first CGH is projected to a first viewer eye and the second CGH is projected to a second viewer eye.
In some embodiments, the first and second CGHs are holograms of the same scene as would be seen by the first and second eyes of a viewer, provided that the scene is at said second position on which the CGH is imaged.
In an exemplary embodiment, the viewer is one of a plurality of viewers having a plurality of eyes together, and the SLM is sequentially imaged each time in response to a position of another one of the plurality of eyes, such that each viewer sees a continuous scene.
Optionally, whenever the SLM is imaged to overlap with the eyes of the same viewer, the image or images imaged to the second position are the same, thus showing a still or similar (e.g. dynamic) image to the viewer.
In some embodiments of the invention, a method of imaging a paraxial image to be seen by a viewer having a pupil at a first location and gazing at a second location comprises:
projecting light from the SLM to a mirror; and
the mirror is moved to follow the movement of the viewer's eyes.
In some embodiments, the method comprises
Imaging the SLM to a rotating mirror; and
rotating the rotating mirror such that the viewer sees a succession of images.
Optionally, the SLM is one of a plurality of SLMs and images of said plurality of SLMs are projected onto the same rotating mirror.
Optionally, the mirror is at the focal point of a focusing element of the optical system.
Optionally, the imaging is for at least 24 cycles per second, each said cycle being 1 to 20 milliseconds long.
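An illustrative round-robin schedule consistent with the figures above: each tracked eye receives a short imaging slot and is revisited at least 24 times per second so that every viewer perceives a continuous scene; the slot length is taken from the 1-20 ms range, while the loop structure itself is an assumption, not the patent's controller:

```python
from itertools import cycle

# Illustrative round-robin schedule of short imaging slots across tracked eyes.
def schedule_slots(eyes, cycles_per_second=24, slot_ms=5, total_ms=100):
    period_ms = 1000.0 / cycles_per_second          # each eye must recur within this period
    assert slot_ms * len(eyes) <= period_ms, "too many eyes for the requested refresh rate"
    t, plan = 0.0, []
    for eye in cycle(eyes):
        if t >= total_ms:
            break
        plan.append((round(t, 1), eye))             # (start time in ms, eye to serve)
        t += slot_ms
    return plan

print(schedule_slots(["viewer1-left", "viewer1-right", "viewer2-left", "viewer2-right"])[:8])
```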
In some embodiments, the method includes performing eye tracking to receive an indication of a location of a pupil of a viewer.
In some embodiments, receiving an indication of a location of a viewer's pupil comprises:
receiving a light reflection from the viewer's eye; and
analyzing the reflections to estimate a position of the viewer's eye.
Optionally, receiving an indication of the position of the viewer's pupils comprises:
receiving an indication of a location in which a face of a viewer is identified; and
the indication is processed to obtain an indication of the position of the viewer's pupils.
In some embodiments, the paraxial image is a paraxial parallax barrier image.
In some embodiments, the paraxial image is a two-dimensional image.
In some embodiments, the paraxial image is a three-dimensional image.
Optionally, the image of the paraxial image is volumetric.
An exemplary embodiment of the present invention also provides a method of displaying a scene to a viewer facing a given location, the method comprising:
estimating a position of a viewer's eye;
estimating which parts of the scene a viewer will see if the scene is at a given position in a given orientation; and
imaging a computer-generated hologram of only a portion of a scene to a given location, said portion comprising said estimated portion of the scene,
wherein the imaging is such that the hologram is visible to a viewer.
In a preferred embodiment, the imaging is in the above-described manner.
Optionally, the method comprises: tracking a position of the viewer's eyes; and imaging the computer-generated hologram such that as the viewer moves, he remains seeing the hologram at the given location. In some embodiments, this is the case even when the viewer moves, for example, one or two meters.
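A sketch of the estimation step, assuming the scene is given as triangles with outward normals: standard back-face culling against the estimated eye position selects the portion of the scene that would then be encoded into the computer-generated hologram (the culling test is an illustrative stand-in, not the specific estimator described here):

```python
# Sketch: estimate which parts of a scene a viewer would see from an estimated
# eye position, using back-face culling over faces given as (centroid, normal).
def visible_faces(faces, eye_pos, scene_pos):
    """faces: list of (centroid, normal) in scene coordinates; returns indices facing the eye."""
    visible = []
    for i, (centroid, normal) in enumerate(faces):
        world = tuple(c + s for c, s in zip(centroid, scene_pos))  # place the scene at its given position
        to_eye = tuple(e - w for e, w in zip(eye_pos, world))
        if sum(n * v for n, v in zip(normal, to_eye)) > 0:         # facing toward the viewer
            visible.append(i)
    return visible

faces = [((0, 0, 0.05), (0, -1, 0)),   # front face, normal toward -y
         ((0, 0, -0.05), (0, 1, 0))]   # back face, normal toward +y
print(visible_faces(faces, eye_pos=(0.0, -1.0, 0.1), scene_pos=(0.0, 0.0, 0.0)))  # -> [0]
```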
An aspect of some embodiments of the invention relates to a system for imaging a Computer Generated Hologram (CGH), the system comprising:
a hologram generating unit including a Spatial Light Modulator (SLM);
an optical system configured to image the hologram generated by the unit to a first position and to image an image of the SLM to a second position; and
a controller configured to control imaging of said image of the SLM to a second position such that the CGH is imaged to a position between the optical system and the image of the SLM. Optionally, the controller controls at least one of the optical system and the hologram generating unit.
Optionally, the controller is configured to control the optical system to generate an image of the SLM at the desired position in a desired orientation.
In some embodiments, the controller is configured to change the desired position and/or orientation online.
In some embodiments, the system comprises an input for receiving online an indication of said desired position and/or orientation.
Optionally, the input comprises a receiver for receiving a signal from an eye tracking unit indicating a position of a viewer's eye, and the controller controls the optical system to project the image of the SLM to the desired position such that the viewer's eye is within a visibility space comprising the desired position.
Optionally, both eyes of the viewer are simultaneously in the visibility space.
In some embodiments, an optical system comprises: an objective lens, an eyepiece lens, and an optical path adjustment unit controllable to adjust an optical path between the objective lens and the eyepiece lens in response to a distance between the desired position and one or more of the eyepiece lens and the objective lens.
Optionally, the optical path adjusting unit is configured to adjust the optical path online.
In some embodiments, the system includes a mirror that reflects light reaching the mirror from the objective lens to a portion of the eyepiece, wherein the mirror is controllable to reflect the light to a different portion of the eyepiece.
Optionally, the eyepiece comprises a hollow or transparent solid and a fluid.
In some embodiments, the hollow body is shaped as a solid of revolution obtainable by rotating a parabola about an axis passing through the center of the image.
Optionally, the hollow body is part of a sphere.
Optionally, the hologram generating unit is inside the eyepiece.
In some embodiments, the system includes a rotating mirror at the center of the eyepiece that rotates about the central axis of the eyepiece.
Optionally, light from the objective lens to the turning mirror is reflected towards the eyepiece.
Optionally, light passes from the objective lens to the turning mirror via one or more optical elements.
Optionally, the one or more optical elements comprise an optical path adjustment unit controllable to adjust an optical path between the objective lens and the eyepiece.
Optionally, the one or more optical elements comprise a light direction adjustment element controllable to adjust the light direction in elevation for each particular azimuthal orientation toward the eyepiece.
An aspect of some embodiments of the invention relates to a system for imaging a hologram, the system comprising:
a hollow eyepiece having an inner wall defining a central cavity;
a hologram generating unit residing inside the cavity; and
an objective lens on an optical path from the hologram generated by the hologram generating unit to the inner wall.
Optionally, the system comprises a plurality of hologram generating units, each hologram generating unit being associated with an objective lens.
Optionally or alternatively, the system comprises a plurality of unit cells, each cell optionally rotated with respect to the view. Exemplary such cells include hologram generating elements and tracking elements, for example for tracking an eye or a finger or an input element.
Optionally, the system is configured to use said eyepiece to create, at a single location, an image of the holograms produced by different ones of the hologram generating units.
Optionally, the single position is inside said eyepiece, optionally at the centre of rotation of the eyepiece.
Optionally, the hollow eyepiece has an internal reflective surface.
In some embodiments, the shape of the inner surface may be obtained by rotating a curve residing in a first plane about an axis of rotation residing in the same plane.
Optionally, the axis of rotation is perpendicular to the middle of the stage. Alternatively, the axis is angled to the stage or wobbles as it rotates.
Optionally, the inner surface is shaped as a part of a spherical shell.
In some embodiments, each of the plurality of hologram generating units comprises a Spatial Light Modulator (SLM), and each of the plurality of unit cells comprises a light converging objective lens positioned with a focal point between the SLM and a hologram produced by the SLM.
Alternatively, each of the plurality of unit cells has an objective lens, and an optical path length determining element configured to determine an optical path length between the objective lens and the eyepiece lens.
Alternatively, each optical path determining element may be controlled independently of the other optical path determining elements.
Optionally, the system comprises a rotating mirror at the center of the eyepiece configured to receive light from the plurality of unit cells and reflect the light onto the eyepiece.
Optionally, the one or more optical elements comprise a light direction adjustment element controllable to adjust the light direction in elevation for each particular azimuthal orientation toward the eyepiece.
According to an aspect of some embodiments of the present invention, there is provided a method of floating-in-the-air image display, comprising:
providing a floating-in-the-air display device at a location, and projecting from the device one or more floating-in-the-air computer-generated images viewable over a range of angles encompassing at least 200 degrees around the location.
According to some embodiments of the invention, projecting comprises selectively projecting over a small angle in which a viewer is expected. According to some embodiments of the invention, projecting comprises selectively projecting using a plurality of image generation modules. According to some embodiments of the invention, projecting comprises projecting different images having the same coordinate system in different directions. According to some embodiments of the invention, projecting comprises projecting the 2D image.
According to some embodiments of the invention, projecting comprises projecting an image of the object such that the angle of presentation of the object changes with its perspective to match the effect of movement around the object. According to some embodiments of the invention, projecting comprises projecting the 3D image.
According to some embodiments of the invention, projecting comprises projecting a hologram. According to some embodiments of the invention, projecting comprises adjusting a projection distance of the image. According to some embodiments of the invention, projecting comprises adjusting a focal length of the image.
According to some embodiments of the invention, projecting comprises projecting different images for different eyes of the same viewer. According to some embodiments of the invention, the projection comprises a projection from a single point in the device. According to some embodiments of the invention, projecting comprises projecting the projected image with a shared coordinate system. According to some embodiments of the invention, projecting comprises imaging the image at a location not occupied by the display backplane.
According to an aspect of some embodiments of the present invention there is provided a hologram display device projecting floating-in-the-air computer-generated holograms simultaneously viewable over a range of viewing angles of at least 180 degrees.
According to some embodiments of the invention, the holograms share the same set of coordinates from viewpoints spaced at least 20 degrees apart. According to some embodiments of the invention, a hologram generating unit and at least one lens for projecting a hologram are included. According to some embodiments of the invention, at least one distance control unit is included. According to some embodiments of the invention, at least one hologram aiming mechanism is included.
According to an aspect of some embodiments of the present invention there is provided a method of displaying content to a plurality of viewers, the method comprising: forming a plurality of volumetric images, each volumetric image having at least a portion of the content and each volumetric image being viewable from its own visibility space; and overlapping a portion of the one or more visibility spaces with the pupil of each viewer.
According to some embodiments of the invention, the visibility space may cover more than 90 degrees.
According to some embodiments of the invention, the content is a single scene and each volumetric image has a face of the single scene viewable from a different viewpoint.
According to some embodiments of the invention, the viewer is at different azimuthal angles around the space occupied by one volumetric image. According to some embodiments of the invention, the different azimuth angles span the entire circle. According to some embodiments of the invention, the different azimuthal angles span at least a half circle.
According to some embodiments of the invention, the two viewers are at least 1 meter from each other.
According to some embodiments of the invention, the viewer sees the images simultaneously.
According to some embodiments of the invention, the visibility space overlaps only a short period sequence with the eyes of the viewer, and the short periods are spaced apart in time such that the viewer sees a continuous scene.
According to an aspect of some embodiments of the present invention there is provided a system for displaying content to a plurality of viewers, the system comprising: means for generating volumetric images, each volumetric image having at least a portion of the content and each volumetric image viewable from its own visibility space; and an optical system controlling a portion of the one or more visibility spaces to overlap with a pupil of each viewer.
According to an aspect of some embodiments of the present invention there is provided a system according to claim 29, wherein the optical system comprises an orientation determining element determining an orientation of the at least one visualization space with respect to the volumetric image viewable from said visualization space.
According to an aspect of some embodiments of the invention, there is provided a system comprising: an image generation unit that generates a paraxial image; and an optical system defining a stage and imaging the paraxial image to the stage such that the image on the stage is viewable from the visibility space, wherein the optical system comprises an eyepiece and a mirror configured to direct light to the eyepiece at a plurality of different azimuthal angles, and wherein each azimuthal angle determines a different position of the visibility space and the position of the stage is the same for each azimuthal angle.
According to some embodiments of the invention, two different elevation views are provided for at least two different azimuth angles.
According to an aspect of some embodiments of the present invention there is provided a method of imaging a paraxial image for viewing by a viewer, the viewer having a pupil at a first location and gazing at a second location, the method comprising: generating a paraxial image; imaging the paraxial image to a position at which the viewer gazes such that the image of the paraxial image can be viewed from a visibility space having a widest portion and a narrower portion; selecting a third position in response to the position of the viewer's pupil; and imaging the widest portion of the visibility space to the selected third position.
According to an aspect of some embodiments of the present invention there is provided a method of displaying a scene to a viewer facing a given location, the method comprising: estimating a position of a viewer's eye; estimating which parts of the scene a viewer will see if the scene is at a given position in a given orientation; and imaging a computer generated hologram of only a portion of the scene to a given location, the portion comprising the estimated portion of the scene, wherein the imaging is such that the hologram is visible to a viewer.
According to some embodiments of the invention, comprising: tracking a position of the viewer's eyes; and the computer-generated hologram is imaged such that as the viewer moves, he remains seeing the hologram at a given location.
According to an aspect of some embodiments of the present invention there is provided a system for imaging a Computer Generated Hologram (CGH), the system comprising: a hologram generating unit including a Spatial Light Modulator (SLM); an optical system configured to image the hologram generated by the cell to a first position and image the image of the SLM to a second position; and a controller configured to control imaging of the image of the SLM to the second position such that the CGH is imaged to a position between the optical system and the image of the SLM.
According to some embodiments of the invention, the controller is configured to control the optical system to generate an image of the SLM at a desired position in a desired orientation.
According to some embodiments of the invention, the controller is configured to change the desired position and/or orientation online.
According to some embodiments of the invention, an input is included for receiving online an indication of the desired position and/or orientation.
According to some embodiments of the invention, the input comprises a receiver for receiving a signal from the eye tracking unit indicating a position of the viewer's eye, and the controller controls the optical system to project the image of the SLM to the desired position such that the viewer's eye is within a visibility space comprising the desired position.
According to some embodiments of the invention, an optical system comprises: an objective lens, an eyepiece lens, and an optical path adjustment unit controllable to adjust an optical path between the objective lens and the eyepiece lens in response to a distance between a desired position and one or more of the eyepiece lens and the objective lens.
According to some embodiments of the invention, the optical path adjusting unit is configured to adjust the optical path online.
According to some embodiments of the invention, a mirror is included that reflects light reaching the mirror from the objective lens to a portion of the eyepiece, wherein the mirror is controllable to reflect light to a different portion of the eyepiece.
According to some embodiments of the invention, the eyepiece comprises a hollow body. According to some embodiments of the invention, the hollow body is shaped as a solid of revolution obtainable by rotating a parabola around an axis, which axis is not in the same plane as the parabola. According to some embodiments of the invention, the hollow body is part of a sphere.
According to some embodiments of the invention, the hologram generating unit is inside the eyepiece.
According to some embodiments of the invention, a rotating mirror is included that rotates about a central axis of the eyepiece.
According to some embodiments of the invention, light from the objective lens to the turning mirror is reflected towards the eyepiece.
According to some embodiments of the invention, the light passes from the objective lens to the turning mirror via one or more optical elements.
According to some embodiments of the invention, the one or more optical elements comprise an optical path length adjustment element controllable to adjust an optical path length between the objective lens and the eyepiece.
According to an aspect of some embodiments of the present invention, there is provided a system for imaging, the system comprising: an eyepiece having an inner reflective wall covering at least a 90 degree arc angle; and an image generation unit residing inside the cavity, wherein the eyepiece is configured to project an image from the image generation unit to one or more viewers.
According to some embodiments of the invention, the image generation unit comprises at least one element that moves so as to project the image over a range of angles.
According to some embodiments of the invention, the system is configured such that the viewer is surrounded by a wall.
According to some embodiments of the invention, the eyepiece is hollow, defines a cavity, and wherein the image generation unit comprises a hologram generation unit residing inside the cavity, and comprises an objective lens on an optical path from a hologram generated by the hologram generation unit to the inner wall.
According to some embodiments of the invention, a plurality of hologram generating units is included, each hologram generating unit being associated with an objective lens.
According to some embodiments of the invention, the system is configured to generate, using the eyepiece, an image of holograms produced by different ones of the hologram generating units at a single location.
According to some embodiments of the invention, the single location is inside the eyepiece. According to some embodiments of the invention, the hollow eyepiece has an internal reflective surface.
According to some embodiments of the invention, the shape of the inner surface may be obtained by rotating a curve residing on a first plane about an axis of rotation residing on a second plane other than the first plane.
According to some embodiments of the invention, the axis of rotation is perpendicular to the first plane.
According to some embodiments of the invention, the inner surface is shaped as a portion of a spherical shell.
According to some embodiments of the present invention, each of the plurality of hologram generating units includes a Spatial Light Modulator (SLM), and each of the plurality of unit cells includes a light converging objective lens disposed with a focal point between the SLM and a hologram produced by the SLM.
According to some embodiments of the present invention, each of the plurality of unit cells has an objective lens and an optical path length determining element configured to determine an optical path length between the objective lens and the eyepiece lens.
According to some embodiments of the invention, each optical path determining element may be controlled independently of the other optical path determining elements.
According to some embodiments of the present invention, a rotating mirror at the center of the eyepiece is included that is configured to receive light from the plurality of unit cells and reflect the light onto the eyepiece.
According to some embodiments of the invention, the controller controls the optical system. According to some embodiments of the invention, the controller controls the hologram generating unit.
According to an aspect of some embodiments of the present invention, there is provided a method for implementing a floating-in-the-air user interface, comprising: displaying a first image in a display space of a first floating-in-the-air display; inserting a real object into the display space of the first floating-in-the-air display; locating a position of the real object within the display space of the first floating-in-the-air display; and providing the location as input to the floating-in-the-air user interface.
According to some embodiments of the invention, further comprising displaying a second image in the display space of the first floating-in-the-air display based at least in part on the position.
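A hedged sketch of this method as an input loop: display an image, locate a real object (e.g. a fingertip) inside the display space, use its position as input, and redisplay within near real time; locate_fingertip() and render() are stubs for the hardware-specific parts:

```python
import time

# Hedged sketch of the floating-in-the-air user-interface loop: display, locate
# the real object in the display space, use its position as input, redisplay.
def locate_fingertip():                     # stub: position sensor / camera result, or None
    return (0.01, 0.02, 0.10)

def render(image, marker=None):             # stub: send the (possibly marked) image to the display
    print("render", image, "marker at", marker)

def ui_loop(image, frames=3, frame_period_s=1 / 24):
    for _ in range(frames):
        position = locate_fingertip()       # locate the real object in the display space
        render(image, marker=position)      # second image = first image plus a position indicator
        time.sleep(frame_period_s)          # keep the update within near real time

ui_loop("scene-0")
```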
According to some embodiments of the invention, the floating-in-the-air display is a volumetric display.
According to some embodiments of the invention, the second image is displayed in near real time after the real object is inserted into the display space. According to some embodiments of the invention, the delay is less than 1/24 of a second.
According to some embodiments of the invention, the first image is a blank image and the second image comprises a position display.
According to some embodiments of the invention, the real object is a finger.
According to some embodiments of the invention, further comprising: displaying an actuator in the first image, moving the position of the real object substantially close to the actuator, and interpreting the position input as the real object actuating the actuator.
According to some embodiments of the invention, further comprising: moving the position of the real object, tracking the position of the real object over time, and interpreting the position input as the real object manipulating at least a portion of the first image.
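A sketch of interpreting a tracked position as actuating a displayed actuator once it comes substantially close to it; the 5 mm threshold is an illustrative assumption, not a value from the text:

```python
import math

# Sketch: the displayed actuator fires when the tracked real-object position
# comes substantially close to it.
def interpret_positions(positions, actuator_pos, threshold_m=0.005):
    events = []
    for t, p in enumerate(positions):                    # positions tracked over time
        if math.dist(p, actuator_pos) <= threshold_m:
            events.append((t, "actuated"))
    return events

track = [(0.05, 0.05, 0.10), (0.02, 0.02, 0.10), (0.010, 0.011, 0.100)]
print(interpret_positions(track, actuator_pos=(0.01, 0.01, 0.10)))
```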
According to some embodiments of the invention, further comprising sending a control command to the robotic device based at least in part on the interpretation.
According to some embodiments of the invention, the real object further comprises a plurality of real objects, and the position of each real object is used as a position input for the volumetric user interface.
According to some embodiments of the invention, the second image is different from the first image.
According to some embodiments of the invention, the second image is substantially equal to the first image plus an added indicator of the location input.
According to some embodiments of the invention, the location comprises a location that is substantially a point on the real object.
According to some embodiments of the invention, further comprising capturing a sub-image based at least in part on the location. According to some embodiments of the invention, the sub-image comprises voxels.
According to some embodiments of the invention, the location further comprises a plurality of locations based at least in part on a plurality of locations of points on the real object.
According to some embodiments of the invention, the path connecting the plurality of locations is displayed by a first floating-in-the-air display.
According to some embodiments of the invention, the plurality of locations comprises two locations, and further comprising defining the line in three dimensions based at least in part on the two locations.
According to some embodiments of the invention, the plurality of locations comprises three locations that are not on a straight line, and further comprising defining a plane in three dimensions based at least in part on the three locations.
According to some embodiments of the invention, the plurality of locations comprises four locations that are not in a plane, and further comprising defining the volume in three dimensions based at least in part on the four locations.
According to some embodiments of the invention, further comprising implementing one of the following group of functions based at least in part on the plurality of locations: the method includes the steps of magnifying the first image, reducing the first image, cropping the first image, rotating the first image, segmenting the first image, measuring a length within the first image, measuring an area within the first image, and measuring a volume within the first image.
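A sketch of turning marked locations into the geometry and measurements listed above: two points give a line and a length, three points give a plane and an area, four points give a volume (here a tetrahedron); the formulas are standard and the helper names are illustrative:

```python
import math

# Sketch: geometry and measurements from locations marked in the display space.
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def cross(a, b): return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def length(p1, p2):                 # two locations -> line segment length
    return math.dist(p1, p2)

def triangle_area(p1, p2, p3):      # three locations -> planar area
    n = cross(sub(p2, p1), sub(p3, p1))
    return 0.5 * math.sqrt(dot(n, n))

def tetra_volume(p1, p2, p3, p4):   # four locations -> enclosed volume
    return abs(dot(sub(p2, p1), cross(sub(p3, p1), sub(p4, p1)))) / 6.0

a, b, c, d = (0, 0, 0), (0.1, 0, 0), (0, 0.1, 0), (0, 0, 0.1)
print(length(a, b), triangle_area(a, b, c), tetra_volume(a, b, c, d))
```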
According to some embodiments of the invention, further comprising performing sub-image capture based at least in part on the plurality of locations.
According to some embodiments of the invention, further comprising marking the point so that it substantially contrasts with the remainder of the real object.
According to some embodiments of the invention, the marking is by a substantially compact light source.
According to some embodiments of the invention, the location comprises a line defined by a long axis of the real object.
According to some embodiments of the invention, the location comprises a box corresponding to the shape of the real object.
According to some embodiments of the invention, the first floating-in-the-air display displays the second image at substantially the same time as the first floating-in-the-air display displays the first image, and wherein the first image is displayed to the first user and the second image is displayed to the second user.
According to some embodiments of the invention, the first image and the second image appear to be located at the same position in space.
According to some embodiments of the invention, the method further comprises displaying a second image at substantially the same time as the first image is displayed by the first floating-in-the-air display, and wherein the first image is displayed to the first user and the second image is displayed to the second user.
According to some embodiments of the invention, the first floating-in-the-air display is substantially remote from the second floating-in-the-air display, and further comprising a communication channel between the first floating-in-the-air display and the second floating-in-the-air display.
According to some embodiments of the invention, the first display and the second display are used to implement a telemedicine interaction between the first user and the second user.
According to some embodiments of the invention, the first display and the second display are used to implement whiteboard-like cooperative sharing between the first display and the second display.
According to some embodiments of the invention, the first display and the second display are used to implement a remote tutorial of the user at the first floating-in-the-air display.
According to some embodiments of the invention, the first display and the second display are used to implement a game in which the first user and the second user participate.
According to some embodiments of the invention, the first display is different from the second display. According to some embodiments of the invention, the first display displays more content than the second display.
According to an aspect of some embodiments of the present invention, there is provided a method for enabling viewing of dynamically generated floating-in-the-air displayed objects and real objects in the same display space, comprising: a volumetrically displayed object is displayed on the first floating-in-the-air display, and a real object is inserted into a display space of the first floating-in-the-air display.
According to some embodiments of the invention, the floating-in-the-air display is a volumetric display.
According to some embodiments of the invention, the dynamically generated comprises computer generated.
According to some embodiments of the invention, further comprising comparing the real object with at least a portion of the object displayed floating in the air.
According to some embodiments of the invention, the real object comprises a standard against which the object is measured, and the comparison enables a determination of compliance with the standard.
According to some embodiments of the invention, the real object is a medical device for insertion into a body, and the at least part of the object that is displayed floating in the air is at least part of the body generated from the three-dimensional data set.
According to some embodiments of the invention, the comparing further comprises measuring a difference in size between the real object and at least a portion of the floating-in-the-air displayed object.
According to some embodiments of the invention, the measured dimensional difference comprises at least one of the group consisting of: length differences, planar area differences, surface area differences, and volume differences.
According to an aspect of some embodiments of the present invention there is provided a method for enabling viewing, in the same display space, of a floating-in-the-air displayed body part generated from a three-dimensional data set of a body and a volumetrically displayed virtual object generated from a three-dimensional data set of one or more virtual objects, comprising: displaying the floating-in-the-air displayed body part on a first floating-in-the-air display, and superimposing the virtual object into a display space of the first floating-in-the-air display.
According to some embodiments of the invention, the virtual object and the floating-in-the-air displayed body part are moved relative to each other in the display space of the first floating-in-the-air display.
According to some embodiments of the invention, the method further comprises comparing the virtual object to at least a portion of the body part.
According to an aspect of some embodiments of the present invention, there is provided a user interface comprising: a first floating-in-the-air display; and a first input unit adapted to accept input from a first location within a first display space, the first display space being a volume within which an object displayed by the first floating-in-the-air display appears.
According to some embodiments of the invention, the floating-in-the-air display is a volumetric display. According to some embodiments of the invention, the floating-in-the-air display is a two-dimensional floating-in-the-air display.
According to some embodiments of the invention, the first floating-in-the-air display is adapted to display the first location.
According to some embodiments of the invention, further comprising a second floating-in-the-air display, wherein the second floating-in-the-air display displays the same display as the first floating-in-the-air display, including displaying the first location.
According to some embodiments of the invention, the user interface further comprises a second input unit adapted to accept input from a second location within a second display space, the second display space being a volume within which an object displayed by the second floating-in-the-air display appears, and wherein the first floating-in-the-air display is adapted to display the same display as the second floating-in-the-air display, including displaying the second location.
According to some embodiments of the invention, both the input from the first location and the input from the second location are displayed.
According to some embodiments of the invention, the first floating-in-the-air display is located in a different room than the second floating-in-the-air display. According to some embodiments of the invention, the first floating-in-the-air display is at least 100 meters away from the second floating-in-the-air display.
According to some embodiments of the invention, the first floating-in-the-air volume display is adapted to provide sensory feedback based at least in part on the location and on content displayed at the location.
According to some embodiments of the invention, the first floating-in-the-air volume display is adapted to display a hologram.
According to an aspect of some embodiments of the present invention, there is provided a method for implementing a floating-in-the-air user interface, comprising: displaying a first image in a display space of a first floating-in-the-air display, inserting a real object into the display space, detecting a location of the real object within the display space, using the location as an input to a user interface, and highlighting the location in the display space.
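By way of a non-limiting illustration only, the following Python sketch walks through the method steps just recited (display an image, detect the location of an inserted real object, use it as input, highlight it); the scripted positions and print-based display are stand-ins for a real sensor and volumetric display.

```python
# Schematic sketch of the recited method steps. The tracker and display used
# here are trivial stand-ins (a scripted position sequence and print statements);
# a real system would use an actual position sensor and a volumetric display.

def user_interface_loop(display_image, object_positions, on_input, highlight):
    """Run the method steps: show an image, then for each sensed position of the
    inserted real object, use it as input and highlight it in the display space."""
    print("displaying:", display_image)           # display a first image in the display space
    for position in object_positions:             # detect the real object's location over time
        if position is not None:                  # None means no real object detected this frame
            on_input(position)                    # use the location as user-interface input
            highlight(position)                   # highlight that location in the display space

# Scripted positions of a fingertip inserted into the display space (cm).
positions = [None, (2.0, 1.0, 30.0), (2.5, 1.2, 30.0), None]
user_interface_loop(
    display_image="heart model",
    object_positions=positions,
    on_input=lambda p: print("input at", p),
    highlight=lambda p: print("highlight at", p),
)
```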
According to an aspect of some embodiments of the present invention, there is provided a user interface comprising: means for displaying a floating-in-the-air display, means for accepting input from a location within a display space, the display space being a volume within which an object displayed by the floating-in-the-air display appears.
Unless defined otherwise, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and not intended to be necessarily limiting.
Implementation of the methods and/or systems of embodiments of the present invention may involve performing or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of embodiments of the method and/or system of the present invention, several selected tasks could be implemented by hardware, by software, by firmware or by a combination thereof using an operating system.
For example, hardware for performing selected tasks according to embodiments of the invention could be implemented as a chip or a circuit. As software, selected tasks according to embodiments of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In an exemplary embodiment of the invention, one or more tasks according to exemplary embodiments of the methods and/or systems as described herein are performed by a data processor, such as a computing platform for executing a plurality of instructions. Optionally, the data processor includes volatile memory for storing instructions and/or data and/or non-volatile storage, such as a magnetic hard disk and/or removable media, for storing instructions and/or data. Optionally, a network connection is also provided. A display and/or a user input device such as a keyboard or mouse are also optionally provided.
Drawings
Some embodiments of the invention are described herein, by way of example only, with reference to the accompanying drawings. Referring now to the drawings in detail, it is emphasized that the details shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how the embodiments of the invention may be practiced.
In the drawings:
FIG. 1A is a schematic diagram of a cell for generating a hologram;
FIG. 1B is a schematic illustration of the visibility space of some points in a hologram;
FIG. 2A is a schematic diagram of a system for imaging a hologram according to an embodiment of the present invention;
FIG. 2B is a schematic illustration of a visibility space of some points in an image hologram produced by the system of FIG. 2A;
FIG. 3A is a diagram illustrating ray tracing generated by an image hologram of a projection system according to an embodiment of the present invention;
FIG. 3B is a graph illustrating ray tracing produced by the image SLM of the same projection system referenced in FIG. 3A;
FIG. 4 is a schematic diagram of an optical system designed to allow adjustment of the projection of the SLM and hologram to a desired position according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a system for projecting a hologram such that the hologram has a wide visibility space, according to an embodiment of the present invention;
FIG. 6A is an illustration of a 360° walk-around image projection system according to an embodiment of the present invention;
FIG. 6B is a schematic diagram of the system illustrated in FIG. 6A;
FIG. 6C is a diagram of a 360° walk-around image projection system with tilted mirrors, according to an embodiment of the present invention;
FIG. 7 is a diagram of a projection system with two optical systems having a common eyepiece 320 and a common rotating mirror, according to an embodiment of the invention;
FIG. 8 is a schematic diagram of a mirror operable as a low pass filter;
FIG. 9 is a schematic diagram of a tracking unit according to an embodiment of the present invention;
FIG. 10 is a simplified block diagram of a system according to an embodiment of the present invention;
FIG. 11 is a flowchart of actions taken in performing a method according to an embodiment of the present invention;
FIG. 12 is a flow diagram of actions taken in a method of generating a hologram to be viewed by a viewer gazing at the hologram from a given angle in accordance with an embodiment of the present invention;
FIG. 13A is a simplified diagram of a user appearing to touch, with a finger, an object being displayed by a user interface constructed and operative in accordance with an embodiment of the present invention;
FIG. 13B is a simplified diagram of a user appearing to touch, with a pointer, an object being displayed by a user interface constructed and operative in accordance with an embodiment of the present invention;
FIG. 13C is a simplified diagram of a user inserting a box into the display space of a user interface constructed and operative in accordance with an embodiment of the present invention;
FIG. 14 is a simplified diagram of two users interacting with the same object being displayed by a user interface constructed and operative in accordance with an embodiment of the present invention;
FIG. 15 is a simplified diagram of two users interacting with the same object being displayed by a user interface constructed and operative in accordance with an embodiment of the present invention; and
FIG. 16 is a simplified diagram of a user comparing a real object with an object displayed by a user interface constructed and operative in accordance with an embodiment of the present invention.
Detailed Description
SUMMARY
The present invention, in some embodiments thereof, relates to methods and apparatus for displaying images, and more particularly, but not exclusively, to such methods and apparatus that allow three-dimensional images to be viewed from a wide viewing angle. Some embodiments of the invention also allow viewing of two-dimensional images from a wide viewing angle.
The present invention, in some embodiments thereof, relates to a computer-generated user interface, and more particularly, but not exclusively, to a floating-in-the-air user interface.
In some embodiments, a viewer may walk around a stage and view different faces of a scene projected on the stage, each face being viewable from another perspective, as when looking at the real original scene. For example, a viewer walking around a hologram of a globe may see Europe when viewing the hologram from one point, America when viewing the hologram from another point, and so on. In some embodiments, different viewers see different data that may be aligned with the same coordinates.
Additionally or alternatively, the viewer may walk around, closer to the stage or further away from the stage, adjusting the eyes for the distance from the image, similar to the focus adjustment needed when looking at a real object and changing the distance to it. In exemplary embodiments of the present invention, the display device may adjust the distance of projection according to the distance of the viewer, for example, by a range factor of 1.2, 2, 3, 4 or by an intermediate or greater amount, for example, moving the projection point by 5cm, 10cm, 20cm, 30cm or more as desired.
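By way of a non-limiting illustration only, the following sketch shows one way a projection distance might be scaled with a measured viewer distance within the range factors mentioned above; the nominal distance and clamping policy are assumptions for illustration.

```python
# Illustrative sketch: choose a projection distance from the measured viewer
# distance, clamped to the range factors mentioned above (1.2x to 4x a nominal
# distance). The nominal distance and the example viewer distances are assumed.

NOMINAL_DISTANCE_CM = 50.0        # assumed nominal image distance from the display
MIN_FACTOR, MAX_FACTOR = 1.2, 4.0

def projection_distance(viewer_distance_cm: float) -> float:
    """Scale the projection point with the viewer, within the allowed range."""
    factor = viewer_distance_cm / NOMINAL_DISTANCE_CM
    factor = max(MIN_FACTOR, min(MAX_FACTOR, factor))
    return NOMINAL_DISTANCE_CM * factor

for d in (40, 75, 120, 300):      # viewer distances in cm
    print(d, "->", round(projection_distance(d), 1), "cm")
```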
Alternatively, the viewer can walk freely and see the hologram from whatever location he is in, as long as he is looking at the stage within the field of view of the system.
Optionally, the stage is a physical construction. Alternatively, the stage is an imaged volume in space onto which the hologram is projected and towards which the viewer is facing frontally. The hologram on the imaging stage appears as if it is floating in the air.
In some embodiments, the viewer may touch the hologram. Such a viewer would not necessarily feel that his finger is touching anything, but would see that his finger is touching the hologram. Optionally, an artificial touch sensation is evoked in the viewer, for example by the user wearing a vibrating ring or glove or by projecting a light beam at the finger from a system or different location so that the finger is heated and experiences the heating, as is known in the art of artificial reality. Optionally or alternatively, an acoustic beam, e.g. ultrasound, is projected and/or modulated to induce perception.
Alternatively, only a viewer in the visibility space of the hologram may actually view the hologram. A person not in the visibility space of the hologram but looking at the stage will see the viewer's finger rather than the hologram that the viewer is touching. Alternatively, a hologram of the same scene viewable from the viewing angle of each of a plurality of viewers is displayed to each viewer, and when one viewer touches the hologram, all other viewers see the finger of the first viewer touching the hologram. Optionally, all viewers see the same hologram (or other image type). Alternatively, different viewers see different images, e.g., the same structure with different data thereon.
Optionally, all viewers seeing the finger touching the hologram see the finger touching the hologram at the same location (e.g., the hologram has a doll and all viewers see the finger touching the left eyelid of the doll).
In some embodiments, the viewer can walk around the hologram and see the hologram from all sides as if the viewer walks around a physical object. In some such embodiments, as long as the viewer's eyes are within a particular space, referred to as a first viewing window, the first hologram is imaged onto the stage, showing the scene as it would be seen from a point in the first viewing window. Due to the holographic nature of the image of the first hologram, a viewer whose eyes move within the viewing window can detect different features of the scene from different points within the viewing window. Optionally, when the viewer's eyes are outside the first viewing window, a second viewing window is defined that covers the viewer's eyes, and a second hologram of the scene is projected onto the stage, showing the scene as would be seen from a point in the second viewing window. In some embodiments, the holograms of the scene as seen from each possible viewing window are projected sequentially; however, these embodiments require more laborious computational effort and do not necessarily improve the viewer's viewing experience. In some more computationally efficient embodiments, the positions of the viewer's eyes (or both) are estimated, a viewing window is defined around them, and only holograms of the portions of the scene that can be viewed from within the estimated viewing window are projected onto the stage.
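By way of a non-limiting illustration only, the following sketch outlines the viewing-window logic described above: the hologram is recomputed only when the tracked eye leaves its current viewing window. The window size, scene placeholder, and rendering call are assumptions for illustration.

```python
# Sketch of the viewing-window logic: keep projecting the current hologram while
# the tracked eye stays inside its viewing window; when the eye leaves, define a
# new window around the eye and render the scene as seen from it. The window
# radius and the render placeholder are assumed, not values from the disclosure.

import math

WINDOW_RADIUS_CM = 4.0   # assumed half-width of a viewing window

def inside(window_center, eye):
    return math.dist(window_center, eye) <= WINDOW_RADIUS_CM

def hologram_for_window(scene, window_center):
    """Placeholder for computing the hologram of `scene` as seen from the window."""
    return ("hologram", scene, tuple(window_center))

def update(scene, eye_position, state):
    """state = {'window': center or None, 'hologram': ...}; returns updated state."""
    if state["window"] is None or not inside(state["window"], eye_position):
        state["window"] = list(eye_position)                           # re-center the window on the eye
        state["hologram"] = hologram_for_window(scene, eye_position)   # recompute only when needed
    return state

state = {"window": None, "hologram": None}
for eye in ([0, 0, 60], [1, 0, 60], [9, 0, 60]):   # eye positions in cm; the last one leaves the window
    state = update("globe", eye, state)
    print(eye, state["window"])
```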
In some embodiments, the viewer may manipulate the image. For example, the viewer may move, rotate, zoom, or otherwise manipulate the image. In some embodiments, the viewer may move the stage instead of or in addition to moving around the stage. Additionally or alternatively, the viewer may change the portion of the scene shown on the stage. For example, a viewer looking at a hologram of the globe may rotate the globe about an axis passing through two poles or about any other axis. Additionally or alternatively, the viewer may transform the hologram and/or stage, enlarge the image, reduce, and so forth. In some embodiments, magnification is not accompanied by a loss of resolution, since the larger hologram of the smaller portion viewed is imaged at the same resolution at which the larger portion was imaged prior to magnification by the viewer.
In some embodiments of the invention, holograms of scenes viewable from different viewing windows are projected simultaneously so that different viewers looking at the stage from different viewing windows can each view a hologram of a scene from his own viewpoint simultaneously. Alternatively, each viewer may walk around the stage independently of the other viewers. Optionally, each viewer is identified, for example, based on an image of each viewer's face, based on a tag (e.g., an infrared-readable tag on the face), and/or based on other identification techniques such as RFID. Optionally, each user is shown data and/or viewing parameters personalized for the viewer, for example the distance or size may be set for the user's acuity of vision and adaptability and the data content preferred by each user (e.g. the exterior view of the object or the interior view of the object).
In some embodiments, different holograms (each viewable from a different viewing window) are projected sequentially onto a single stage at a sufficiently high frequency to allow viewers (each viewer gazing through one viewing window) to see each different successive image hologram. In this way, different viewers may simultaneously and continuously see different holograms; in some embodiments different viewers may see different 2D content, such as video or TV, on a single display, or alternatively different (non-holographic, focus-controlled, or holographic) 3D content with a shared coordinate system.
An aspect of some embodiments of the invention relates to a method of displaying a paraxial image, such as a hologram or a paraxial parallax barrier image.
A paraxial image or object is one in which each point emits light rays that span a cone having a small solid angle, typically about 3°, and the axes of these cones are approximately parallel to each other (e.g., or otherwise matched to the rotation of a perpendicular line from the viewer's eye). Typically, only a viewer with an eye pupil that overlaps all of these cones can see the entire paraxial object or image. If the pupil overlaps only some of the cones, only some points on the paraxial image or object are viewed, i.e., those points from which the overlapping cones originate. Thus, the paraxial image or object may be viewed from a relatively narrow space, referred to herein as a visibility space.
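By way of a non-limiting illustration only, the following sketch expresses the visibility condition described above as a geometric test: a point is visible only if the pupil overlaps the narrow cone of rays emitted from it. The pupil radius and positions are assumed example values.

```python
# Geometric sketch of the visibility condition: a point on a paraxial image is
# visible only if the viewer's pupil overlaps the ~3 degree cone of rays emitted
# from that point. Pupil size and the example positions are assumptions.

import math

CONE_HALF_ANGLE_DEG = 1.5     # half of the ~3 degree cone mentioned in the text
PUPIL_RADIUS_MM = 2.0         # assumed pupil radius

def point_visible(point, cone_axis, pupil_center):
    """True if the pupil (approximated by its center plus radius) falls inside
    the cone of rays emitted from `point` along `cone_axis`."""
    to_pupil = [p - q for p, q in zip(pupil_center, point)]
    dist = math.sqrt(sum(c * c for c in to_pupil))
    norm_axis = math.sqrt(sum(c * c for c in cone_axis))
    cos_angle = sum(a * b for a, b in zip(to_pupil, cone_axis)) / (dist * norm_axis)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    # The finite pupil radius slightly extends the effective acceptance angle.
    extra = math.degrees(math.atan2(PUPIL_RADIUS_MM, dist))
    return angle <= CONE_HALF_ANGLE_DEG + extra

# A point at the origin emitting along +z; pupils 500 mm away, on-axis and off-axis.
print(point_visible((0, 0, 0), (0, 0, 1), (0, 0, 500)))    # True: on the cone axis
print(point_visible((0, 0, 0), (0, 0, 1), (100, 0, 500)))  # False: ~11 degrees off-axis
```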
In the following description, reference is sometimes made to a semi-paraxial image. The term refers to an image in which each point emits light rays that span a cone having a small solid angle but the axes of the cones are not parallel to each other. In some embodiments of the invention, the cones converge at the visibility space, so the entire image can be seen at the visibility space.
In some embodiments of the invention, the hologram is a reconstruction of a light field produced by the scene. In these embodiments, the hologram of the scene appears to the human audience to be identical to the scene itself. Optionally, the reproduced light field is the same as the light field generated by the scene. Optionally, the resemblance between the original light field and the light field reproduced by the hologram is in the phase and intensity of the field, forming a monochrome hologram. Alternatively, the wavelength of the emitted light is also reproduced, forming a color hologram.
In some embodiments, the hologram is a reconstruction of a fourier transform of the scene. When such a hologram is viewed through the lens, the scene appears in the fourier plane of the lens.
In some embodiments, the hologram is comprised of a beam of light that interacts with a Spatial Light Modulator (SLM). Spatial light modulators are media that have different optical behavior at different points. The term SLM is used herein to denote: static media, with different optical behavior at different points, such as slotted film; and dynamic media, with different points of controllable optical behavior. The latter category of SLMs is routinely used in the field of computerized generation of holograms. A Spatial Light Modulator (SLM) is designed or controlled such that a beam of light interacting with (e.g., reflecting from or passing through) the SLM creates a holographic reconstruction of a scene. Many ways of producing an SLM of a scene are known in the art of holography and each of these ways can be used to create a hologram to be projected or imaged according to various embodiments of the present invention. Note that when a non-holographic image is shown, an SLM (e.g., DMD or LCD) that does not modify the phase may be used. Alternatively or additionally, an incoherent light source may be used.
In the following, reference is mainly made to computer-controlled SLMs; however, other SLMs, such as plates or films that are grooved to form static holograms, may also be utilized in some embodiments.
Computer controlled SLMs are made of multiple pixels (e.g., 500 x 500 pixels) and the optical behavior of each pixel of the SLM can be computer controlled independently of the other pixels. These SLMs are currently commercially available from a variety of sources, such as the Fourth Dimension Displays of London. Some commercially available SLMs are based on the transmissive type, that is, light should be transmitted through them to create an object hologram, and some SLMs are reflective type, that is, light should be reflected from them to form an object hologram. One type of reflective SLM is known in the art as LCoS.
Some embodiments are limited to dealing with stationary scenes. In some embodiments, as in video motion pictures, the scene changes over time. In these embodiments, the hologram is optionally changed at a rate suitable to provide a continuously moving scene. As is well known in the cinematic arts, this rate is about 16 or 24 scenes per second or higher.
In some embodiments of the invention, the hologram is paraxial. That is, each point in the hologram emits light rays that span a cone having a small solid angle, typically about 3°, and the axes of these cones are approximately parallel to each other and to the optical axis of the system creating the hologram. Such a hologram is only visible to a viewer looking along the optical axis, directly facing the paraxial hologram. Thus, as illustrated in fig. 1B, the paraxial hologram and, in general, the paraxial image can be viewed from a relatively narrow visibility space.
As described above, an aspect of some embodiments of the invention relates to displaying a hologram. In some embodiments of the invention, displaying the hologram comprises generating a hologram (hereinafter referred to as an object hologram) and optically creating an image of the created hologram (hereinafter referred to as an image hologram). At least some embodiments of the invention relate to displaying paraxial objects, including, but not necessarily limited to, holograms. For convenience, reference will be made below primarily to holograms, but other paraxial images may be similarly processed unless explicitly stated otherwise. An image formed from a paraxial image or object and viewed by a viewer, such as an image hologram, in embodiments of the present invention, is optionally semi-paraxial.
An aspect of some embodiments of the invention relates to displaying an image to be shown from a wide angle around the display. In some embodiments, the angle is greater than 180 °, such as 270 °, or even 360 °, or an intermediate angle. Optionally, the image viewed from a wide viewing angle is a hologram. Examples of images displayed to viewers positioned around the display include: holograms, autostereoscopic images, stereoscopic images, controlled-focus 3D or other images (e.g., using optical elements to set the perceived focal length to the image), and 2D images.
In some embodiments, displaying the object hologram includes creating an image hologram different from the object hologram. For example, the image hologram may be larger than the object hologram and/or may be seen from a wider viewing angle than the object hologram.
In an exemplary embodiment, creating an image hologram viewable from a wide viewing angle involves imaging the hologram and the SLM with a single optical system such that the image SLM is wider than the object SLM. Projecting the hologram and SLM with a single optical system ensures that the image hologram can be viewed from any point in the image SLM.
The image SLM does not necessarily cover the entire space from which the entire image hologram can be viewed, which space is referred to herein as the visibility space.
It may be noted that paraxial holograms are a particular class of paraxial objects, and other paraxial objects may be similarly displayed. Accordingly, in some exemplary embodiments, an image of a paraxial object is created in a process that includes imaging, with a single optical system, a paraxial image and at least a portion of a space from which the paraxial object may be viewed. Optionally, the image of the visibility space is wider than the visibility space itself. Imaging the paraxial image and its visibility space with a single optical system ensures that the image hologram can be viewed from any point in the image of the visibility space.
In some embodiments, to ensure that the paraxial image is viewable by a particular viewer, it is sufficient that the viewer's pupil will overlap a portion of the image visibility space.
In some embodiments, the viewer sees and touches a non-holographic three-dimensional image of the scene, such as a 3D parallax barrier image. However, at least in some non-holographic embodiments, each viewer must select between focusing on the finger and focusing on the touch point, because the finger and touch point are not necessarily at the same focus.
In some embodiments, the eyes of the viewer are tracked and only holograms representing the original scene viewable from the viewpoint of the viewer are projected onto the stage while the image of the SLM is constantly projected onto the eyes of the viewer.
In some embodiments, the eye of the viewer is tracked to facilitate projection of the SLM onto the eye of the viewer. Projecting the larger image of the SLM allows less accurate tracking of the eye of the viewer, at least in embodiments where the local overlap between the image SLM and the eye of the viewer is sufficient to allow the viewer to see the complete hologram. Thus, projecting a large SLM image can help relax the demands from the tracking system. It should be noted that although the system as a whole optionally ensures that the eyes of the viewer overlap the image SLM, it is not necessary to track the eyes themselves. Optionally, the center of the face of the viewer is tracked and the position of the eyes is derived from the position of the center of the face. Optionally, the viewer wears headphones, and the headphones transmit a signal (or include a marker) indicative of the position of the headphones, and the eye position is determined in response to the headphone position. Optionally, the face of the viewer is identified in an image of the space surrounding the display, and the eye position is determined in response to the face identification. Thus, the term eye tracking as used herein means tracking any signal indicative of the position of the eye, not necessarily the eye itself. It should be noted that in some embodiments, tracking the signal indicative of the eye position is much easier than tracking the eye itself and the tracking system can be greatly simplified.
In some embodiments, the image viewing space is large enough to cover both eyes of the viewer. In some embodiments, two windows are defined, each window surrounding each eye, and a different SLM is imaged to each eye. Optionally, the two different SLMs are two parts of a single SLM. Optionally, two SLMs overlapping the eyes of the viewer create the same image hologram, optionally creating an image hologram that will be viewable from between the two eyes. A color hologram may be projected by the same SLM by sequential illumination in red, green and blue light. Alternatively, red, green and blue light can be projected in parallel to three different SLMs, all mechanically synchronized to the same window.
Optionally, the windows overlap or abut such that switching from one window to another is as smooth as going from one point to another within a single window.
In some embodiments, the different holograms are generated sequentially at a rate fast enough to allow each viewer to perceive a continuous image. For example, when holograms are generated for both eyes of each of 3 viewers (3 viewers × 2 eyes = 6 holograms per cycle) and each eye sees 30 images per second, each hologram is displayed for a period of 1/180 seconds or less.
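By way of a non-limiting illustration only, the rate arithmetic above can be written out as follows; the 3-viewer, 30-images-per-second figures are the ones used in the text.

```python
# Worked version of the rate arithmetic above: the SLM must cycle through one
# hologram per eye per viewer fast enough that each eye still perceives a
# continuous image. 30 images per second per eye is the figure used in the text.

def required_slm_rate(viewers: int, per_eye_rate_hz: float, eyes_per_viewer: int = 2) -> float:
    holograms_per_cycle = viewers * eyes_per_viewer   # e.g. 3 viewers x 2 eyes = 6
    return holograms_per_cycle * per_eye_rate_hz      # total holograms per second

rate = required_slm_rate(viewers=3, per_eye_rate_hz=30)
print(rate, "holograms per second")     # 180 -> each hologram is shown for 1/180 s or less
print(1 / rate, "seconds per hologram")
```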
In some embodiments, each viewer sees a hologram produced by a different optical system. In some embodiments, two or more viewers view holograms produced by the same optical system. Optionally, the optical system and SLM are repeatedly adjusted to the needs of different viewers. The SLM is adjusted to create a hologram of the scene currently viewed by the viewer; and the optical system is adjusted to project the image of the SLM to the eyes of different viewers at their current positions.
In some embodiments, for specific applications, the hologram is an image for viewing by a viewer such that the viewer may touch a portion of the hologram, for example with his finger or a man-machine interface (MMI) tool. Optionally, the hologram comprises a portion that is actuated by touch.
In some embodiments, multiple viewers may each touch the hologram that each viewer is viewing. For example, two viewers view holograms of the same house from different viewing windows, and the finger of one of them touches the handle of the main door. In some embodiments, if the second viewer is touching the same location (say, the same location on the handle) at the same time, then each of the two viewers sees two fingers touching the hologram. Optionally, the two fingers touching the hologram also touch each other. Optionally, the image manipulation of one user is transmitted to the other user's view, thus sharing, for example, zoom and orientation, if desired.
In some embodiments, one viewer may touch the hologram while another viewer may walk around (or move around) the hologram. In this way, a walking viewer can see the hologram and the touching finger from different angles. For example, the instructor may touch an arterial valve in a hologram of a heart model, and the student may walk around it and see the touched valve from a different angle.
An aspect of some embodiments of the invention relates to the design of a projection system in which the internal elements generate an image or hologram which is then projected on the inside of an imaging mirror which magnifies and/or aims the image or hologram for the user. Optionally, the imaging mirror is generally cylindrical and optionally curved to provide magnification. In some embodiments, the viewer is located outside the imaging mirror. In other embodiments, the viewer is located inside an imaging mirror, which may be mounted on a wall of a room, for example.
An aspect of some embodiments of the invention relates to the design of a projection system in which one or more modules generate an image and are rotated to help aim the image at the eyes of a viewer. Optionally, one or more modules rotate or aim at the rotating mirror. Optionally, the mirror rotates at a substantially fixed speed or oscillates at a fixed rate.
An aspect of some embodiments of the invention relates to a design of a projection system having a modular design such that each of a plurality of modules may have a line of sight to the same viewer's eye. Optionally, the shared line of sight is provided by a turning mirror. Alternatively or additionally, a shared line of sight is provided by rotating the modules and noting their positions so that they can behave as if they share the same coordinate system. Optionally, the module comprises a plurality of image or hologram generating modules. Optionally or alternatively, the modules include at least one viewer tracking/user interaction module. In exemplary embodiments of the invention, system capabilities are enhanced or reduced by replacing, adding, or removing modules.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the methods and/or components set forth in the following description and/or illustrated in the drawings and/or examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
Reference is now made to the construction and operation of the hologram generating unit as illustrated in fig. 1A and the visibility space of the paraxial object as illustrated in fig. 1B.
Exemplary computerized hologram Generation Unit
In an exemplary embodiment of the present invention, the hologram generating unit 10' includes a light source 15 and a Spatial Light Modulator (SLM) 20.
The SLM 20 is connected to a computerized control unit 22, which controls the optical behavior of each pixel of the SLM independently of the other pixels so that light reflected from the SLM reproduces the light field wavefront emanating from the scene 24 (in the illustrated case, a house). In this regard, light received from the scene 24 is detected and data representing it is input into the computerized unit 22, which processes the input data to obtain the desired optical behavior of the different pixels and controls the SLM accordingly.
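By way of a non-limiting illustration only, the following sketch shows the kind of computation such a control unit might perform: a generic textbook Fourier-transform phase pattern for a phase-only SLM. This is not the algorithm of the disclosure; calibration, aberrations, and color are ignored, and the target image is invented for the example.

```python
# Highly simplified illustration of computing an SLM pattern: the far-field
# (Fourier) hologram of a 2D target intensity, with a random phase to spread
# energy. This is a generic textbook scheme, not the patent's own algorithm.

import numpy as np

def fourier_phase_hologram(target_intensity: np.ndarray, seed: int = 0) -> np.ndarray:
    """Return a phase pattern (radians) for a phase-only SLM whose far field
    approximates the given target intensity."""
    rng = np.random.default_rng(seed)
    amplitude = np.sqrt(target_intensity)
    random_phase = np.exp(1j * 2 * np.pi * rng.random(target_intensity.shape))
    field = np.fft.ifft2(np.fft.ifftshift(amplitude * random_phase))
    return np.angle(field)   # keep phase only; a phase-only SLM discards amplitude

# A 500 x 500 pixel pattern (a pixel count of the order mentioned later in the text),
# here displaying a bright square as the target.
target = np.zeros((500, 500))
target[220:280, 220:280] = 1.0
phase = fourier_phase_hologram(target)
print(phase.shape, phase.min(), phase.max())
```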
In an exemplary embodiment of the invention, light from source 15 is deflected by polarizing beam splitter 25, passes through quarter wave plate 40, reaches SLM 20, and reflects from the SLM to create hologram 35. On the way back to the polarizing beam splitter 25, the beam passes once again through the quarter wave plate 40 and then continues through the polarizing beam splitter without deflection.
Optionally, the cell 10' further comprises an optical element 70 that alters the wavefront of the light source 15 such that the hologram 35 is larger after interacting with the SLM 20. Optionally, lensless magnification is used. In lensless magnification, a spherical wavefront beam illuminates the SLM, which is configured to produce an image from a planar wavefront beam illumination. The image produced with the spherical wavefront beam is scaled relative to the image produced with the planar wavefront beam. Optionally, the image produced with a spherical wavefront beam is larger than the image produced with a planar wavefront beam. Optionally, the system comprises several lenses, and one lens in use is selected to produce an object hologram of the desired size and position. Optionally, the selection of the lens is part of the adjustment of the optical system. Optionally, the selection of the lens is part of the design of the optical system, and the selection is permanent.
The technique of lensless magnification is described in detail, for example, in the book "Introduction to Fourier Optics" by J. W. Goodman, published by McGraw-Hill.
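By way of a non-limiting illustration only, the first-order scaling behind lensless magnification can be sketched as follows; the relation used (adding the quadratic phase of the diverging illumination to that of the designed image) is a standard thin-element approximation, and the numbers are assumed rather than taken from the disclosure.

```python
# Thin-element sketch of lensless magnification: a hologram computed for planar
# (collimated) illumination behaves, under diverging spherical illumination from
# a point source at distance R, as if a quadratic phase were added, which moves
# and scales the reconstructed image. First-order approximation; values made up.

def lensless_magnification(image_distance_cm: float, source_distance_cm: float):
    """Return (new_image_distance, lateral_magnification) for diverging
    illumination from a point source at distance R, when the hologram was
    designed to image at distance d under plane waves.
    Uses 1/d' = 1/d - 1/R and m = d'/d."""
    d, R = image_distance_cm, source_distance_cm
    d_new = 1.0 / (1.0 / d - 1.0 / R)
    return d_new, d_new / d

# Hologram designed to image at 50 cm, illuminated from a point source 200 cm away.
print(lensless_magnification(50.0, 200.0))   # image moves outward and is magnified ~1.33x
```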
The cell 10' is only one possible arrangement suitable for creating a hologram using a coherent light source and a spatial light modulator. Many other arrangements are known in the art and may be used in accordance with various embodiments of the present invention. Furthermore, at least in some embodiments, the cell 10' may be replaced by a cell for generating a non-holographic paraxial image or other image type. For easier understanding, in the following description, the unit for generating a paraxial object will be generally referred to as unit 10 to illustrate that the unit 10' described above is only one possible construction of such a unit. However, in an exemplary embodiment of the present invention, all cells 10 have a light source and a paraxial (or other type of) image forming unit, such as an SLM or a Liquid Crystal (LC) panel. In embodiments where a non-hologram image is used, the display panel may, for example, be luminescent.
In one exemplary variation, if, for example, the SLM is polarization sensitive, the design may be altered, for example, such that polarized light from source 15 is deflected by beam splitter 25 to strike SLM20 and reflect from the SLM to create hologram 35. The quarter wave plate 40 is optionally omitted.
In a further alternative design, the light is aimed at the SLM at a slightly off-axis angle, so it is reflected off-axis away from the SLM and no beam splitter is used.
In some embodiments, a transmissive SLM is used and light is also not reflected by the beam splitter.
Visibility space for paraxial images
Fig. 1B illustrates some principles when viewing a paraxial image such as, for example, object hologram 35.
The object hologram 35 is a paraxial image: each point (e.g., 36, 37) in the paraxial object emits light in a single direction (h36, h37) and within a certain narrow angle (α) around it, creating a cone (C36, C37). Point 36 is seen from each point within cone C36, and point 37 is seen from each point within cone C37. Thus, cones C36 and C37 are referred to herein as the visibility spaces of points 36 and 37, respectively.
Two points 36 and 37 can be seen simultaneously from each point forming part of both cone 36 and cone 37, which part forms a visibility space 60 from which both points can be viewed. Similarly, the space from which the entire hologram can be viewed can be determined and represented as the visibility space of the hologram 35.
Thus, eye 52, which overlaps a portion of space 60, can see two points 36 and 37, eye 54 can see point 37 but not point 36, and eye 56 cannot see any of points 36 and 37.
Exemplary optical System
FIG. 2A illustrates a basic system 200 for projecting an object hologram 35 according to an embodiment of the present invention.
The system 200 comprises a paraxial object generation unit 10 and an optical system 210, the unit 10 optionally being a hologram generation unit. The hologram generation unit 10 generates a hologram 35 (object hologram), and the optical system 210 images the object hologram 35 to be seen as an image hologram 235 (image hologram) standing on a stage 237, the stage 237 optionally being an empty space. Optical system 210 also projects SLM20 (shown as an object SLM) to provide image SLM 220. Image SLM220 is optionally larger than SLM20, and image hologram 235 is visible to a viewer looking at image hologram 235 from any point along image 220.
Fig. 2B illustrates some principles when viewing a semi-paraxial image generated by optical system 210 from a paraxial object.
Similar to the paraxial object 35 of FIG. 1B, each point (e.g., 236, 237) of the semi-paraxial image 235 can be viewed from a cone-like space (C236, C237), and the two points can be viewed from the space 260 where the cones 238 and 239 overlap. Unlike the paraxial object 35, however, the visibility spaces of the different points that together make up the image 235 are not parallel to one another. The effect of the optical system 210 is to break up the parallelism between the visibility spaces of the different points, thus providing a larger and optionally closer visibility space 260. Thus, in some embodiments, similar to a telescope, the system 210 brings the image hologram 235 closer to the viewer, but also widens the visibility space of the hologram from the relatively narrow space 60 illustrated in FIG. 1B to the larger visibility space 260 illustrated in FIG. 2B.
Visibility space 260 surrounds image SLM220 (fig. 2A); thus, in some embodiments of the invention, optical system 210 is adjusted to form an image SLM220 to be overlapped with the eye of the viewer. In this way, the image 235 can be assured to be viewable by the viewer. Alternatively or additionally, other portions of visibility space 260 are imaged to overlap the eyes of the viewer.
Fig. 3A and 3B illustrate an alternative construction of an optical system (300) that allows for the projection of an enlarged image of the SLM (20) and an enlarged image of the hologram (35), thereby enlarging the hologram and/or widening the space from which the hologram can be viewed.
FIG. 3A illustrates ray tracing of rays that produce an image hologram; and figure 3B shows ray tracing of the rays that produce the image SLM.
It should be noted that in some embodiments of the present invention, the only requirements from system 210 are: (i) imaging the object hologram 35 onto the stage 237, (ii) imaging the SLM20 onto a plane (or volume) outside the stage 237, and (iii) allowing the exact position of the plane to be adjustable. A wide variety of optical architectures may accomplish this task, and many alternatives to the construction illustrated in fig. 3A or 3B may be readily envisioned by one of ordinary skill in the art in view of the above requirements.
Shown in both fig. 3A and 3B: a hologram generation unit 10 comprising an SLM 20; and an optical system 210 including an objective lens 310 and an eyepiece lens 320.
In the illustrated embodiment, the objective lens 310 has two focal points: 311 and 312: and eyepiece 320 has two focal points: 321 and 322.
In the embodiment shown, the objective lens 310 and the hologram generating unit 10 are arranged such that the object hologram generated by the unit 10 lies between the objective lens 310 and its focal point 311. Optical elements 310 and 320 are positioned at a distance from each other greater than the sum of their focal lengths, such that focal point 321 of element 320 is located between focal point 312 of element 310 and element 320.
Optionally, objective lens 310 includes lenses and/or curved mirrors. Optionally, eyepiece 320 includes a lens and/or a curved mirror.
Each objective 310 and 320 may be independently a light converging element (e.g., a concave mirror) or a light diverging element (e.g., a convex mirror).
As shown in fig. 3A, an image hologram 235 is formed on stage 237 in front of the viewer's eye 250.
As shown in FIG. 3B, an image of SLM20 is formed at viewer's eye 250.
Thus, fig. 3A and 3B together show that optical system 210 images hologram 35 onto stage 237 and SLM20 as locally overlapping with viewer's eye 250.
In an exemplary embodiment of the invention, the optical system 210 is adjustable to change the position at which the image SLM is formed, for example by changing the distance between the two optical elements 310 and 320. Such adjustments may also change the position at which image hologram 235 appears. This may be compensated for by the computing unit 22 (FIG. 1A), which may drive the SLM20 to form the object hologram 35 at different positions without moving the SLM20, if desired.
In the illustrated embodiment, the eyepiece is a hollow mirror, however the eyepiece may also be a transmissive element (e.g., a lens) that optionally also changes the angle of the light (e.g., a prism) so that the hologram is not superimposed on the image generation system.
Alternative shapes for objective lens 310
The objective lens 310 is optionally a mirror in the form of a paraboloid of revolution, wherein the axis of rotation is the axis of symmetry of the parabola. Another alternative shape is a paraboloid of revolution wherein the axis of revolution is perpendicular to the axis of symmetry of the parabola. Optionally, the objective lens 310 is shaped as a spherical cap. Alternatively, the objective lens 310 is shaped as a segment of any of the above suggested shapes. A spherical cap is optionally preferred because it is easier to manufacture, and because the hologram 35 is paraxial, spherical aberration does not play a significant role in the system.
Alternatively, the eyepiece 320 has any of the above shapes that the objective lens may have. Eyepieces having particularly useful shapes are described below under the heading "exemplary eyepiece".
In some embodiments, objective lens 310 is a cylindrical mirror or an arcuate portion thereof. As mentioned, such a mirror may be parabolic rather than flat.
Exemplary dimensions of an optical System
In an exemplary embodiment, the setup illustrated in FIGS. 3A and 3B is used to provide a magnified image SLM using a lens 310 having a focal length of 50 cm and an eyepiece 320 positioned at a first side (the right side in the figure) of lens 310, 100 cm away from lens 310. The SLM is about several millimeters from lens 310, on the second side of lens 310.
The SLM receives input to generate a fourier transform of the object hologram and thus finds the object hologram at the focal plane of lens 310 (50 cm to the left of lens 310). The size of the object hologram is similar to the size of the part of the SLM used to form the hologram.
The eyepiece forms two images:
the image of the object hologram is formed 150 cm to the right of the eyepiece 320 and has the same size as the object hologram; and the image of the SLM is formed 200 cm to the right of the eyepiece 320 and is three times the size of the SLM.
When the eyes of the viewer overlap the image SLM, the viewer is 50cm away from the image hologram.
This example shows one arrangement for creating an image hologram having a larger visibility space than that of the object hologram. The image hologram can be viewed at least from any position at the image SLM, which is three times the size of the SLM and 50 cm from the image hologram.
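By way of a non-limiting illustration only, the following idealized thin-lens sketch shows how a single eyepiece images the hologram plane and the SLM plane to different distances and magnifications. The focal length assumed here is not taken from the disclosure, and this simple model does not reproduce every figure quoted above, which depends on the full optical design.

```python
# Idealized thin-lens illustration of how a single eyepiece images two different
# planes (the object hologram and the SLM) to two different distances with
# different magnifications. The eyepiece focal length below is assumed; the
# model only shows the mechanism, not the actual optical prescription.

def thin_lens_image(object_distance_cm: float, focal_length_cm: float):
    """1/f = 1/do + 1/di  ->  (image distance, lateral magnification)."""
    do, f = object_distance_cm, focal_length_cm
    di = 1.0 / (1.0 / f - 1.0 / do)
    return di, di / do

EYEPIECE_FOCAL_LENGTH_CM = 75.0   # assumed value
hologram_plane_cm = 150.0         # object hologram distance from the eyepiece (50 + 100 cm)
slm_plane_cm = 100.0              # SLM (near lens 310) distance from the eyepiece

print("hologram image:", thin_lens_image(hologram_plane_cm, EYEPIECE_FOCAL_LENGTH_CM))
print("SLM image:     ", thin_lens_image(slm_plane_cm, EYEPIECE_FOCAL_LENGTH_CM))
# With f = 75 cm the hologram images at 150 cm with unit magnification and the SLM
# images farther out at 3x magnification; the SLM image distance differs from the
# 200 cm quoted above, reflecting that the real system is not a single ideal thin lens.
```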
In some embodiments, generating an image SLM larger than the image hologram results in a visibility space that widens with increasing distance from the image until it reaches the image SLM, and then converges. In some such embodiments, the need to accurately track the distance of the viewer from the image is mitigated, and thus the need to accurately project the image of the SLM to the eye of the viewer is mitigated. However, in some such embodiments, information regarding the distance between the viewer and the image still helps to estimate the size of the visibility space and determine when the viewer moves from one viewing window to another. Ensuring that the image SLM is in the vicinity of the viewer's eyes (where the visibility space is widest) optionally relaxes the requirements on orientation tracking.
Exemplary adjustment of image SLM position
In order to display an image with a limited visibility space to a moving viewer, the image SLM should follow the eyes of the viewer. Some exemplary embodiments that provide such a follow-up are described below.
In some described embodiments, changing the position of the image SLM also changes the position of the image; however, the image movement is small compared to the SLM movement and can be compensated using the limited optical power of the SLM.
Fig. 4 is a schematic diagram of one possible configuration of an optical system 400 designed to allow adjustment of the projection of the SLM and hologram to the position of the stage and the viewer's eyes, respectively.
System 400 includes all of the components of system 210, including hologram generation unit 10, objective lens 310, and eyepiece 320. In the illustrated embodiment, objective lens 310 is a curved mirror and eyepiece 320 is a convex mirror.
The system 400 further comprises an adjustment unit 410 for facilitating control of the positions at which the SLM and the hologram are projected. The adjustment unit 410 is illustrated in the figure as a V-shaped mirror comprising mirror surfaces 420 and 430 fixed at, for example, 60° to each other, but many other implementations will be apparent to those skilled in the art.
As shown, light ray 405 passing through v-mirror 410 from objective lens 310 to eyepiece 320 first reflects from objective lens 310 to mirror 420 and from there to mirror 430, from mirror 430 toward eyepiece 320.
Moving mirror surfaces 420 and 430 back and forth in the direction of arrow 425 changes the distance between eyepiece 320 and the SLM image. Thus, moving mirror surfaces 420 and 430 along arrow 425 allows following the eye of a viewer moving away from or close to the hologram along the optical axis.
Optionally, v-mirror 410 is omitted and elements 310 and/or 320 are moved relative to each other to achieve a similar effect. Any other known means for changing the position of the back focal length of the system 400 may also be used in place of the v-mirror 410 to move the image SLM along the optical axis.
Rotating one of the mirrors making up V-mirror 410 in the direction shown by arrow 426 (i.e., about an axis parallel to the line of contact between surfaces 420 and 430, and in the V-plane) allows following the viewer's eye as it is tracked moving up and down.
In the exemplary embodiment, v-mirrors 410 are mounted on motors (not shown) that move the mirrors as indicated by arrows 425 and/or 426.
To follow the viewer's eyes as they move horizontally out of the image hologram visibility space (in the figure: into and out of the page), lens 320 is optionally moved to face the viewer.
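By way of a non-limiting illustration only, the following sketch maps a tracked eye position to the adjustments described above (translating the v-mirror, tilting it, and rotating the eyepiece); the geometry, gains, and actuator interfaces are assumptions for illustration.

```python
# Control-mapping sketch for following the viewer's eye: axial motion -> translate
# the v-mirror (arrow 425), vertical motion -> tilt a v-mirror surface (arrow 426),
# horizontal motion -> rotate eyepiece 320 to face the viewer. Geometry, gains, and
# the coordinate convention are assumptions, not taken from the disclosure.

import math
from dataclasses import dataclass

@dataclass
class MirrorCommands:
    v_mirror_translation_cm: float   # along arrow 425
    v_mirror_tilt_deg: float         # about the axis indicated by arrow 426
    eyepiece_azimuth_deg: float      # rotation of eyepiece 320 toward the viewer

def follow_eye(eye_xyz_cm, reference_distance_cm=100.0, translation_gain=0.5):
    """Map a tracked eye position (x: horizontal, y: vertical, z: axial distance)
    to actuator commands. The gains are illustrative only."""
    x, y, z = eye_xyz_cm
    translation = translation_gain * (z - reference_distance_cm)   # follow in/out motion
    tilt = math.degrees(math.atan2(y, z))                          # aim up/down at the eye
    azimuth = math.degrees(math.atan2(x, z))                       # face the viewer horizontally
    return MirrorCommands(translation, tilt, azimuth)

print(follow_eye((20.0, 10.0, 140.0)))
```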
Optionally, controlling the position of the image hologram comprises calculating and generating the holographic object such that the image hologram is generated exactly at a desired position, e.g. exactly at a position seen by another viewer.
Optionally, the optical system is adjusted to generate an image SLM near the viewer and the calculation unit calculates the object hologram to be formed such that the image hologram is exactly at the desired position. Optionally, such calculations are omitted, but at the expense of the accuracy of the image hologram positions.
FIG. 5 is a schematic diagram of a system 500 for showing different holograms for each eye of a viewer. The system 500 is similar to the system 400 except that there is an additional flat mirror 510 that rotates or revolves about its axis 515.
In one embodiment, mirror 510 is moved left and right at an angle of, for example, 3 degrees, and the SLM creates one hologram in the first half of each movement and another hologram in the second half. In this way, each eye sees a different hologram. Optionally, the movement is at a frequency such that each eye perceives the hologram projection as if it were continuous. Optionally, the central position around which the mirror moves is changed to follow the centre of the viewer's face.
In another embodiment, mirror 510 is rotated about its axis 515 at a frequency of at least about 15 Hz (e.g., 24 Hz, 30 Hz, 45 Hz, or 60 Hz), and the SLM creates one hologram during the first half of the rotation and another hologram during the second half. Optionally, the SLM creates one hologram during a first rotation and a second hologram during a second rotation. For example, a mirror that rotates at 30 Hz with an SLM that is updated twice per rotation provides a similar update rate to a mirror that rotates at 60 Hz with an SLM that is updated once per rotation.
The switching point at which the SLM is changed from creating one hologram to creating another hologram is optionally when no eye overlaps the image SLM. This optionally occurs twice per revolution: once between the transmission of the image SLM to the eyes of the viewer and once when the image SLM is projected away from the eyes of the viewer.
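By way of a non-limiting illustration only, the following sketch expresses the switching rule just described: with the mirror spinning at a fixed rate, SLM content is changed only at mirror angles where no eye overlaps the projected image SLM. The rotation rate is an example from the text; the eye-window angles are assumed.

```python
# Timing sketch of the switching rule: with mirror 510 spinning at a fixed rate,
# the SLM content is switched only at mirror angles where no eye overlaps the
# projected image SLM. The eye "windows" below are assumed example values.

ROTATION_HZ = 30.0                          # an example rate from the text
EYE_WINDOWS_DEG = [(85, 95), (265, 275)]    # mirror angles at which an eye is covered (assumed)

def mirror_angle_deg(t_seconds: float) -> float:
    return (360.0 * ROTATION_HZ * t_seconds) % 360.0

def safe_to_switch(angle_deg: float) -> bool:
    """True when the image SLM is not overlapping any tracked eye."""
    return not any(lo <= angle_deg <= hi for lo, hi in EYE_WINDOWS_DEG)

# Scan one revolution in 1-degree steps and report where switching is allowed.
switch_angles = [a for a in range(360) if safe_to_switch(float(a))]
print(len(switch_angles), "of 360 mirror positions allow switching the SLM content")
```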
Optionally, each eye overlaps an image of another SLM, and each SLM changes its object hologram once per revolution, before its image overlaps the viewer's eye.
Optionally, the image SLM overlaps both eyes of the viewer simultaneously.
Another difference between the embodiments shown in fig. 4 and 5 is that: eyepiece 320 is a lens in fig. 4, and a curved mirror in fig. 5. However, this difference is of secondary importance, with the embodiment of FIG. 4 working equally well with a mirror as element 320, and the embodiment of FIG. 5 working equally well with a lens as element 320.
Optionally, eyepiece 320 is mounted on a base (520) that is rigidly connected to shaft 515 by a bridge (525) such that eyepiece 320 follows the movement of plane mirror 510. In this option, eyepiece 320 optionally has any of the forms suggested above for objective lens 310, regardless of the shape of objective lens 310. Another possible shape of eyepiece 320 is discussed below in the context of fig. 6A and 6B.
Thus, in one embodiment, all of the image forming components move together to aim at the eyes of the viewer. Optionally, the objective lens also moves, and therefore need not extend a full 360 degrees. Moving the objective lens and/or the image generation module at such speeds may be energy inefficient or noisy and therefore is not practiced in some embodiments.
In another embodiment, mirror 510 is rotated at half the angular rotation of eyepiece 320 to compensate for the doubling of angular velocity caused by reflection.
Single viewer
In an exemplary embodiment of the invention, the system 500 is used to image a hologram for a single viewer, such that the hologram has a large image and a wide visibility space.
In one embodiment, the location (stage) where the hologram image is projected is fixed, and the viewer is free to look at the hologram from different locations and to see the hologram from all around. In this embodiment, the eyes of the viewer are tracked, and the visibility space of the hologram is projected to follow the eyes.
For example, the eye is tracked by an eye tracking system with sufficient accuracy to distinguish when the eye is in the visibility space of the hologram and when the eye is outside said space. The tracking system may be any commercially available eye tracking system with suitable accuracy, such as the TrackIR™ head tracking system available from Natural Point, headquartered in Corvallis, Oregon, USA. Optionally, the tracking system has common parts with the system 500, as will be described below.
The position of the viewer's eye as detected by the tracking system is passed to a computing unit which determines how the system should be adjusted to project the image SLM near the viewer's eye, such as the exact position of v-mirror 410, the exact point mirror 510 is facing, or any other adjustment that may be required depending on the particular setting being used.
When the eyes of the viewer move, their movement is tracked by the tracking unit and the optical system is controlled to keep projecting the image SLM near the eyes of the viewer. In this way, as long as the viewer is looking at the stage, he sees the entire hologram from whatever location he is in.
In some embodiments, the computing unit controls the SLM to create a hologram that reproduces the scene as it would be seen from the viewer's perspective. Alternatively, the hologram is displayed all around (for example, using a rotating mirror) so that all viewers see the same hologram regardless of where they are located, and the hologram changes in response to movement of one of them.
A single viewer, different holograms for each eye
In some embodiments of the invention, the visibility space of a single hologram overlaps both eyes of the viewer. In these embodiments, the viewer sees a complete 3D hologram, since the holographic nature of the gazed-at image provides all the depth cues (cue) of the original scene. These embodiments are based on an image SLM approximately 6-8cm wide to cover both eyes of an adult viewer.
However, in many embodiments, the image SLM is small and has a width of about 5 to about 20 mm. In these embodiments, a single hologram may be seen by only a single eye, and presenting holograms to both eyes requires presenting two holograms, one for each eye.
In some exemplary embodiments of the invention, the two holograms have two different aspects of the same scene: one aspect will be seen by the right eye of the viewer and the other aspect will be seen by the left eye of the viewer, provided the scene is on stage. In this way, the viewer may have a better depth perception of the scene.
A single hologram
In some embodiments, the system projects a single hologram in all directions, for example around 360°. Such a system can be made simpler and cheaper than the walk-around system described below. The system can operate fully without eye tracking and without adjustment for a particular viewer. An additional advantage of this system is that the computations required to control the SLM are very simple compared to the computations required to provide a complete holographic scene to each of a plurality of users.
In some embodiments, the computing unit controls the SLM to create a single hologram, which is made of two superimposed holograms. Optionally, the viewer is equipped with glasses that filter out a different image for each eye. One such embodiment uses a 2-color anaglyph, which is known per se in the art.
In some embodiments, a single hologram is projected such that it can be viewed from different heights. In some such embodiments, the same hologram is projected at different heights, and around 360 ° in each height. For example, the hologram is multiplied by a prism so that the viewer sees the same hologram at several different heights.
Multiple viewers and/or a 360° walk-around
The embodiment of fig. 5 allows the hologram to be shown to a viewer looking in the direction of eyepiece 320. To allow viewers to walk around the stage or to present holograms to different viewers viewing the stage from different locations, eyepiece 320 can be made circular, as illustrated in fig. 6A.
Fig. 6A is a diagram of a system 600 showing an image hologram (635) of a heart shown floating in the air. The figure primarily shows an eyepiece 320 that optionally has an internal reflecting surface shaped as a paraboloid of revolution. In this option, each vertical cross-section in eyepiece 320 has a parabolic shape and each horizontal cross-section has a circular shape. Alternatively, eyepiece 320 is a segment of a sphere. The horizontal cross-section is a circle and the vertical cross-section is a circular arc.
Fig. 6B is a schematic diagram of system 600. Eyepiece 320 is represented by two opposing vertical cross-sections of the circular eyepiece illustrated in fig. 6A. The figure also shows a cylindrical mirror 605 that is used to displace light away from the other optical elements and into eyepiece 320, so that the other optical elements do not prevent eyepiece 320 from receiving reflections from rotating mirror 510. The other optical elements shown in fig. 6B are similar in structure and function to those shown in fig. 4 or 5.
The mirror 605 may be omitted and the light displaced by other means. For example, as in fig. 6C, mirror 510 may be tilted about an axis about which it rotates, where fig. 6C does not show eyepiece 320 for convenience.
The cylindrical mirror 605 may be replaced by one or more flat mirrors.
The distortion introduced by the cylindrical mirror 605 or by the flat mirror replacing the mirror 605 is optionally corrected by pre-distortion of the image generated by the SLM.
Note that the use of mirror 605 optionally constrains the actual length of the optical path from the SLM to the viewer, and omitting mirror 605 optionally removes this constraint and removes the need for predistortion compensation of mirror 605.
Eyepiece 320 may be replaced by one or more flat mirrors.
The distortion introduced by eyepiece 320 is optionally corrected by pre-distortion of the image generated by the SLM.
When flat mirrors are used in place of mirror 605 and/or eyepiece 320, the locations where the mirrors abut each other are optionally not used to project an image. An optional encoder detects when the optical path crosses these locations, and the image is not projected during this time.
There may optionally be gaps in the spatial coverage of the optical system, and an optional encoder optionally detects when the optical path crosses these gaps and the image is not projected during this time.
Optionally, mirror 510 is rotated about its axis 515 at a frequency of at least about 24Hz, and the SLM creates a different hologram each time the image SLM is projected to a different position (whether it be a different eye of the same viewer, an eye of another viewer, or the same eye of the same viewer after movement of the viewer).
In some embodiments of the present invention, both sides of the mirror 510 are reflecting, such that an image may be projected all around the mirror in each rotation of the mirror, spanning a 360° projection angle or smaller angles, such as 150° or more, 170° or more, 180° or more, 220° or more, 260° or more, or intermediate angles. Optionally, there is no reflection point at the center of the mirror 510, to eliminate the 0th order reflection (i.e., reflection of the light source). The 0th order reflection may similarly be eliminated by blocking light from reaching the center of the mirror 510, or by preventing reflection from the center of the mirror 510 in any other way.
In an exemplary embodiment of the invention, the images viewed from different angles are distortion-free (e.g., as would be on a flat panel display or other standard 2D imaging system).
It is noted that the methods and apparatus described herein may also be used for small angle displays, particularly floating in the air, for example between 10 ° and 150 °, for example less than 100 °, less than 80 °, less than 45 °, less than 30 °, or intermediate angles.
In some embodiments, different types of images are shown to different viewers and/or different eyes, e.g., one eye may see a 2D image and one eye may see a 3D or hologram image. Optionally, different images are created by different image creation modules in the system that rotate into place and/or provide their line of sight in time by rotating a flat mirror (e.g., 510).
Exemplary light Source
In an exemplary embodiment of the invention, the light source provides collimated light (and in some embodiments coherent light) to generate a paraxial object. Non-limiting examples of collimated light sources include lasers and LEDs.
In some embodiments, a light source that provides 10-100 μW to the eye is used.
This light intensity is optionally selected for embodiments where the distance between the rotating mirror 510 and the eyepiece 320 is 1m and the mirror rotates at 30 Hz. The considerations for selecting the above-mentioned light intensities can be summarized as follows:
In order to display an image to a viewer with such a system, the optical power at the pupil of the viewer should be about 10 μW in a dark room and up to 100 μW in a bright room.
The estimate of 10 μW is based on the following considerations:
In the embodiment in question, light enters the pupil for about 5 μs in each rotation. The estimation is based on the following assumptions: the diameter of the pupil in a bright room is about 1 mm. Since the far point of the light traveling from the mirror to the eyepiece moves 6.28 m (2πr) per rotation of the mirror, and the mirror rotates 30 times per second, the far point travels approximately 30 × 6.28 ≈ 190 m = 190,000 mm per second.
Thus, the beam scans 1 mm in 1/190,000 seconds, which is approximately 5 μs.
To provide a sharp image, 1 nW of light should reach the pupil for 50 msec.
Since the light sweeps across the eye during 5 μs, the system must provide all the light in 5 μs instead of 50 msec. Therefore, 10,000 times as much power as 1 nW is required: 1 nW × 10,000 = 10 μW.
The above estimation is suitable for displaying images under average room lighting conditions. If the room is brighter, optionally a higher light intensity is provided, for example 2, 5 or 10 times the light intensity.
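The power estimate above can be reproduced with a few lines of arithmetic. The following sketch (in Python, using the example values assumed in the text: a 1 m mirror-to-eyepiece radius, a 30 Hz mirror, a 1 mm pupil, and 1 nW over 50 msec as the reference) is illustrative only and not part of the claimed system:

```python
import math

# Example values assumed in the estimate above (not limiting):
radius_m = 1.0             # distance from rotating mirror 510 to eyepiece 320
rotation_hz = 30.0         # mirror rotation rate
pupil_mm = 1.0             # pupil diameter in a bright room
baseline_power_w = 1e-9    # 1 nW needed over the reference integration time
reference_time_s = 50e-3   # 50 msec reference integration time

circumference_mm = 2 * math.pi * radius_m * 1000         # ~6,283 mm per rotation
sweep_speed_mm_s = circumference_mm * rotation_hz        # ~190,000 mm/sec
dwell_time_s = pupil_mm / sweep_speed_mm_s               # time the beam spends on the pupil

# All the light must arrive during the dwell time instead of the 50 msec reference,
# so the required instantaneous power scales up by the ratio of the two times.
required_power_w = baseline_power_w * (reference_time_s / dwell_time_s)
print(f"dwell time: {dwell_time_s * 1e6:.1f} us")          # about 5 us
print(f"required power: {required_power_w * 1e6:.0f} uW")  # on the order of 10 uW
```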
Exemplary eyepiece
In an exemplary embodiment, eyepiece 320 is a hollow body having an optically active inner surface, such as a curved reflective inner surface. Optionally, the optically active inner surface is a reflective surface, such as a curved mirror. Optionally, the inner surface is a surface of a lens.
Optionally, a hollow eyepiece converges light arriving from mirror 510. Optionally, the hollow eyepiece 320 defines two finite radii of curvature at each point. Optionally, the two radii of curvature are equal to each other, as in a spherical shell.
Optionally, the reflective inner surface of the eyepiece is a closed surface. Optionally, it is an open surface that allows the image to be viewed from a limited viewing angle. For example, in some embodiments, the eyepiece has the shape of a solid of revolution formed by rotating a 60° arc about an axis through a half circle or a 3/4 circle, such that the interior of the arc generally faces the axis. These embodiments may allow the image to be seen only from 180° or 270° around the stage, since the image is not visible to a viewer viewing the stage from a position where no eyepiece is present behind the stage, unless, in some embodiments, the eyepiece is moved, as it may be.
In some embodiments, the shape of the hollow eyepiece is a rotating solid formed by rotating an arc about an axis at which the concave side of the arc is generally aimed. Optionally, the distance between the axis of rotation and the arc is equal to the radius of the arc. Alternatively, the distance between the axis of rotation and the arc is different from the radius of the arc.
In some embodiments, the hollow eyepiece has the form of a rotating solid obtained by rotating an arc about an axis that is at a distance from the arc that is different than a radius of the arc.
In some embodiments, the hollow eyepiece has, for example, the form of a portion of a paraboloid of revolution, formed by rotating a parabola about an axis perpendicular to the parabola's axis of symmetry.
A spherical eyepiece can be constructed more easily than a parabolic eyepiece. On the other hand, a parabolic eyepiece may be less sensitive to aberrations. However, in some embodiments, the aberrations are small or even negligible due to the paraxial nature of the objects and images processed by the system. Alternatively or additionally, these aberrations or other aberrations in the optical system and/or the viewer's ability to see with the naked eye are compensated for by generating an adapted image that provides a pre-compensation.
Optionally, a rotating mirror 510 is centered in the eyepiece, directing light to different portions of the eyepiece as it rotates.
Optionally, the stage is inside the eyepiece.
Optionally, the entire optical path is inside the eyepiece.
In some embodiments, to see an image, a viewer must look at the stage from a position at which at least a portion of the eyepiece's reflective surface lies behind the stage.
Alternatively, the viewer looks at the eyepiece from the outside.
Optionally, the viewer is inside the eyepiece, for example sitting, standing, walking or lying down in a spherical room with reflective walls and/or ceiling, or with the eyepiece and/or part of the eyepiece mounted thereon. Optionally, a visual tracker images the eyepiece and uses the image to determine where the image may and may not be projected and/or to adjust imaging parameters such as distance and light level. Optionally, such an eyepiece includes one or more markings, such as dots or crosses, visible to such a tracker camera or other imaging module in the display system.
Exemplary Modular projection System
In some embodiments of the invention, two or more SLMs are used to provide different holograms to different eyes, viewers and/or locations. For example, in one embodiment, there are two different SLMs, each dedicated to creating an object hologram for one eye of the viewer. In some embodiments, each SLM creates several object holograms sequentially, e.g. up to 6 different holograms for three different viewers, one hologram for each eye, and a rotating mirror brings each hologram to the stage and each SLM image to the eye for which the hologram was generated.
Optionally, each SLM has its own optical system 210, and all systems are collectively tuned to provide image holograms to the same stage, optionally to exactly the same point on the stage. This option may be advantageous, for example, when the viewer is limited to some predefined area, such that a full coverage of 360 ° is not required.
In some embodiments of the invention, two or more SLMs are used simultaneously, each having its own objective lens and all having a common eyepiece and rotating mirror. Note that this allows the image generated by one SLM to be superimposed on the image generated by another SLM at the eye of the viewer, even with different types and/or colors.
One such configuration is depicted in FIG. 7, which is an illustration of a projection system 700 including two SLMs (20' and 20") with a common eyepiece 320 and a common rotating mirror 510. Hereinafter, the portion of the projection system dedicated to a single SLM, including the SLM itself, is referred to as a unit cell. In fig. 7, each unit cell is shown on its own base (710, 720, 730), allowing for modular construction of the system 700.
In some embodiments, each unit cell is dedicated to generating holograms to be viewed by a different viewer, or by different multiple viewers associated with that unit cell. Optionally, the association of particular unit cell(s) with particular viewer(s) does not change during operation.
Optionally, the viewer sits in a predetermined position, for example in a fixed seat arranged in concentric circles around the stage. Adjusting the optical path length, for example with element 410, in this case is only optional and may sometimes be omitted. Similarly, face detection/tracking may be omitted.
Alternatively, the association of a unit cell with a viewer changes according to the position of the viewer. For example, it may be convenient if switching from one viewer associated with a certain unit cell to another viewer does not require a large movement of the v-mirror. However, in some embodiments, the v-mirror must move as the distance between the viewer and the stage changes. Thus, in some embodiments, when two viewers associated with one SLM move apart so that one of them is much closer to the stage than the other, one of the viewers may be associated with another unit cell, to avoid the need to repeatedly move the v-mirror back and forth over a large distance.
360° sit-around holographic television
While the embodiments described in the previous sections allow each viewer (or even each eye of each viewer) to view different holograms or video streams, some embodiments of the invention allow all viewers to view the same hologram with both eyes. In this way, many people can gather and have exactly the same experience as when viewing a standard television or movie but with holographic pictures. Although full depth perception requires viewing different holograms with each eye, viewing the same hologram with both eyes provides some depth perception, which may sometimes be better than that obtainable with certain autostereoscopic displays.
Thus, in some embodiments of the invention, a single hologram is projected for the entire duration of each cycle of the rotating mirror, and people sitting around the stage can view the same holographic video stream.
Such a system does not require angular tracking of the viewer's face around the stage; knowing the distance of the viewer from the stage may be sufficient.
Optionally, in the sit-around configuration, the viewers are seated in concentric circles around the stage, such that each group of viewers is at a different distance from the stage. Optionally, the seats are fixed such that the distances are predetermined.
In some such embodiments, there is one unit cell dedicated to the viewer of each circle, so that online optical path adjustments may be omitted.
In the present description and claims, an action is said to be taken online if the action is taken while the viewer is viewing. Optionally, the online action changes the content being seen by the viewer, the quality of the picture, and/or the orientation of the picture.
In some embodiments, a concentric seating arrangement is utilized to present different viewing streams to different viewers. For example, each circle may see a different movie. This arrangement is particularly simple since there is one unit cell showing each movie, and the system shows different movies from different SLMs to viewers sitting in each different circle.
Alternatively, different movies are shown to different areas of the same circle. It should be noted that different content shown to different viewers may be of similar nature (e.g., two movies) or of different nature (e.g., one viewer watching a movie and the other viewer seeing a 2D still image).
Optionally, the projected image is a 2D image, such as a conventional television show, and the system allows viewing of the television from up to 360° around it. Alternatively, different content may be projected to different regions (e.g., 2, 3, 4, 5, 6, or more different contents/channels). For example, viewers viewing the display from angles between 0° and 90° may view a sports channel, viewers viewing the display from 91° to 180° may view a news channel, and so on. Alternatively, a 2D image stream is displayed, for example using a paraxial LCD display, which may combine the optical manipulation described herein with conventional control of the LCD. When different viewers are gazing at different media streams, it may be preferable to provide the audio via personal headphones. Note that angles smaller than 360° may also be supported, such as 100°, 160°, 180°, 210°, or smaller, larger or intermediate angles. For example, the viewing angle (effective, not necessarily instantaneous) may be, for example, at least 10°, 20°, 30°, 40°, 50°, or an intermediate angle. A feature of some embodiments of the invention is that a small-viewing-angle image generator is operated to provide a wide view, for example at least 5, 10, 30 or 100 times larger in area. A feature of some embodiments of the invention is that, rather than generating a hologram for all parts of the space in which a viewer may be located, a hologram (or other image) is calculated and/or displayed only for the part of the space in which an intended viewer is located.
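As an illustration of the angle-dependent content described above, the following minimal sketch maps a viewer's azimuth around the display to one of several content channels; the sector boundaries and channel names are assumptions chosen only for the example:

```python
channels = ["sports", "news", "movie", "still image"]  # example channels (assumed)

def channel_for_azimuth(azimuth_deg: float, n_sectors: int = 4) -> str:
    """Map a viewer azimuth in degrees to one of n equal angular sectors around the stage."""
    sector = int((azimuth_deg % 360.0) / (360.0 / n_sectors))
    return channels[sector % len(channels)]

print(channel_for_azimuth(45.0))   # sports (sector 0 deg to 90 deg)
print(channel_for_azimuth(120.0))  # news   (sector 90 deg to 180 deg)
```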
In some embodiments of the invention, the same general content (e.g., house) is provided, but different data layers (e.g., pipe work, cabling) are provided at different angles (e.g., rotation and/or change of orientation). Optionally, there is a seamless change in transparency of one or more data types as the viewing angle changes.
A feature of some embodiments of the invention is that a plurality of images are shown to one or more viewers substantially simultaneously, e.g., within less than 1 second, within less than 0.1 second, at a video rate, or faster. In an exemplary embodiment of the invention, the system generates (and projects) at least 10, at least 20, at least 30, at least 40, at least 80, at least 150, or an intermediate number of different images/holograms per second, using, for example, 1, 2, 3, 4, 5, 6, or more image generation modules.
Exemplary handling of smearing
To recognize possible smearing issues, it may be useful to consider the following embodiment: the image SLM is about 2cm wide, the viewer is at about 1m from the rotating mirror (i.e., a circumference of about 6m), and the rotating mirror rotates at 30 Hz. In this particular embodiment, if a single image is projected constantly and continuously, the image SLM scans the viewer's eyes at a linear speed of about 180 m/sec, and the sweep may cause smearing of the hologram.
One way to cope with this possible smearing is by having the system active only for a small fraction of the time during which the mirror directs the image SLM to the vicinity of the viewer's eyes (hereinafter referred to as the projection period). In the above example, the projection period is about 2cm/6m ≈ 1/300 of the mirror rotation period. The mirror rotation period at 30Hz is 1/30 sec. Thus, the projection period in this example is 1/9000 sec, which is about 100 μs. Activating the laser for only a small fraction of this time, say between about 1 and about 20 μs, generally reduces smearing. Optionally, the laser is activated several times per projection period, for example 5 activations of 2 μs each with 18 μs of inactivity in between. Optionally, the system is adjusted such that each eye is exposed to the system only once per mirror rotation. In the above example, this may be achieved, for example, by providing pulses with a width of 20 μs every 80 μs.
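The projection-period arithmetic above can be summarized in a short sketch. The values below are the example values from the text (a 2 cm image SLM, roughly 6 m of travel per rotation, and a 30 Hz mirror); the pulsing scheme is the example one mentioned above:

```python
slm_image_width_m = 0.02   # width of the image SLM near the eye (example from the text)
circumference_m = 6.0      # path length per mirror rotation at the viewer radius
rotation_hz = 30.0         # mirror rotation rate

rotation_period_s = 1.0 / rotation_hz                      # 1/30 sec
projection_fraction = slm_image_width_m / circumference_m  # ~1/300 of a rotation
projection_period_s = rotation_period_s * projection_fraction
print(f"projection period: {projection_period_s * 1e6:.0f} us")  # ~111 us, i.e. about 100 us

# Example pulsing scheme from the text: 5 activations of 2 us with 18 us of
# inactivity between them, i.e. a 10% duty cycle within the projection period.
pulse_on_s, pulses = 2e-6, 5
print(f"active time per projection period: {pulses * pulse_on_s * 1e6:.0f} us")  # 10 us
```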
In some embodiments, the laser is activated in pulses. Additionally or alternatively, the laser is continuous and a chopper is used to chop the light into shorter flashes.
Alternatively or in addition, the light source is shaped as a line, for example a vertical line that scans the SLM horizontally. In this embodiment, each vertical illumination defines a sub-hologram describing the scene from a different and extremely narrow angle. The sub-holograms arrive at the eye as vertical slit windows. Alternatively, the scan covers the entire SLM, thus presenting all SLM images near the eye, but the eye will only see one vertical segment of the SLM, i.e. the same segment that falls exactly into the eye.
One potential way to filter out the smear is by using a slit illumination source, which uses a slit 513 in a rotating mirror 510' (see fig. 8). Rotating mirror 510' is optionally partially reflective, and slit 513 in rotating mirror 510' passes light from mirror axis 515 to lens 310, from lens 310 to the SLM, from the SLM back to lens 310, from lens 310 back to slit 513, and from slit 513 to be reflected to eyepiece 320 and on to the eye of the viewer.
Systems that follow a single viewer without rotating the mirror are optionally gated conventionally to overcome the smearing problem, as smearing is less noticeable in these systems.
Exemplary eye tracking Unit
Eye tracking systems are well known in the art and any such known system may be suitable for use with embodiments of the present invention, as long as the tracking quality is compatible with the size of the SLM image: the tracking should be good enough to allow estimation of the position of the eye in each direction within a tolerance smaller than the size of the image visibility space in the same direction.
In an exemplary embodiment of the present invention, the tracking unit provides only the position of the center of the viewer's face, and the positions of the eyes are calculated based on knowledge of the viewer's face or using general information on the distance between the eyes of viewers. Such information may be provided for different viewing groups, e.g. children, adults, etc. Knowledge may be acquired, for example, by measuring the viewer's face prior to tracking. Optionally, the measuring comprises the viewer standing at a known distance from the tracking unit and looking at it. The tracking unit then detects the eyes of the viewer and the distance between the eyes as seen from the known distance, and uses this information during tracking to calculate the distance between the tracking unit and the viewer in response to the detected distance between the viewer's eyes.
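A minimal sketch of such a distance estimate is given below, assuming a simple pinhole-camera model in which the apparent eye separation is inversely proportional to distance; the calibration values and function names are illustrative assumptions, not part of any specific tracking unit:

```python
def calibrate(known_distance_m: float, apparent_separation: float) -> float:
    """One-time step: the viewer stands at a known distance; store k = distance * separation."""
    return known_distance_m * apparent_separation

def estimate_distance(k: float, apparent_separation: float) -> float:
    """Apparent eye separation is assumed inversely proportional to distance (pinhole model)."""
    return k / apparent_separation

# Calibration at 1 m where the eyes appear 64 units apart (assumed values);
# later, eyes appearing 32 units apart imply the viewer is about 2 m away.
k = calibrate(known_distance_m=1.0, apparent_separation=64.0)
print(estimate_distance(k, apparent_separation=32.0))  # -> 2.0
```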
In an exemplary embodiment of the present invention, the tracking unit is built to inherently have the same coordinate system as the optical system. For example, one unit cell depicted in FIG. 7 may hold a tracking unit optical element.
FIG. 9 is a schematic diagram of an exemplary tracking unit 800 useful in a tracking unit cell. The tracking unit 800 includes a light source 810 and a light detector 820 located at the back focal length of the tracking unit 800. Optionally, the tracking unit 800 further comprises a filter 805 that filters out light of wavelengths other than the wavelength provided by the light source 810.
The light provided by light source 810 and detected by detector 820 is of a type that is selectively reflected by the eye. It is known in the art to use infrared light for these purposes.
In an exemplary embodiment of the invention, light from light source 810 is split by beam splitter 825 such that a portion of the light goes to detector 820 and another portion goes to viewer's eye 830. Light reflected from the eye 830 returns to the detector and is detected. This may be the case if coherent light is used to detect the interference between the direct and reflected light or if the reflected light is used to provide a baseline of instantaneous light levels. In other embodiments, the light is not reflected directly back to the detector, instead only light reflected by the eyes or face (or artificial markers such as on hats, stickers, or glasses) is reflected back to the detector.
In the illustrated embodiment, light from the beam splitter passes through optical element 310, turning mirror 510, and eyepiece 320 (all of which are described in detail above) to the eye of the viewer. On the way from the viewer's eye to the IR detector, the light optionally travels through the same optical path, but in the reverse order.
The direction in which the rotating mirror faces when reflected light is detected at detector 820 corresponds to the direction of the eye. The vertical elevation of the viewer's eye is optionally estimated based on the point at which the reflected light strikes the detector 820. Optionally, the elevation of the image is adjusted by optical means and/or by moving the image itself (e.g. shifting its code on the SLM). Optionally, different viewing directions have different elevations (e.g., for different viewer heights and/or distances).
Alternatively, detecting both eyes within a certain predetermined distance (say about 6.4cm ± 1.5cm) is interpreted as detecting both eyes of the same viewer.
Optionally, the distance between the eyes of the viewer is measured before the tracking starts and the time difference between the signals received from the eyes of the viewer is used to estimate the distance between the viewer and the system.
In an exemplary embodiment of the invention, the detection of eyes is adjusted to be biased toward detection (e.g. with false detections at a rate of, for example, 10%, 30% or 100% of correct detections, or an intermediate or larger percentage). In an exemplary embodiment of the invention, sending the image to a location where no eyes are present has only a computational cost, possibly pre-compensated for by system component selection, while not sending the image to a location where eyes are present may prevent the display from operating correctly.
Exemplary adjustment of holograms
When a viewer is looking at an object while walking around the object, the viewer sees a different face of the object from each point.
Holograms displayed according to some embodiments of the invention provide a similar effect without any adjustment of the optical system or the hologram generating unit. In other words, the display of a single holographic frame provides a complete walk-around experience. However, this is limited to walking around in which the eye remains within the boundaries of the hologram visibility space.
In some embodiments, the optical system is adjusted to follow the eye as the viewer moves to where the eye would otherwise be outside the image visibility space, but the hologram displayed to the viewer does not change. In these embodiments, a viewer walking around a hologram of a globe, for example, always sees Europe, regardless of where he stands. Alternatively, in such an embodiment, instead of following the eyes of the viewer, the system simply makes the same hologram available all around.
In some exemplary embodiments, the scene displayed to the viewer is updated as he moves so that the viewer can see a different continent as the hologram surrounding the globe is moved. Optionally, this is achieved by updating the displayed hologram each time the tracked viewer's eye moves outside the visibility space of the image hologram. If a real object is located on the stage, the system estimates which part of the object the viewer will see and renders a hologram of that part on the stage.
A hologram that projects only that part of the scene that is viewable by a viewer at a time allows a significant amount of computational power to be saved without compromising the quality of the image seen by the viewer.
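The update rule described above (regenerate the hologram only when the tracked eye leaves the visibility space of the currently displayed hologram) can be sketched as follows; the box-shaped visibility space and all names are simplifying assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class VisibilitySpace:
    center: tuple     # (x, y, z) of the visibility space center, in meters
    half_size: tuple  # half extents along each axis

    def contains(self, eye_pos: tuple) -> bool:
        return all(abs(e - c) <= h
                   for e, c, h in zip(eye_pos, self.center, self.half_size))

def maybe_update_hologram(eye_pos, current_space, render_visible_part):
    """Recompute the hologram only when the tracked eye leaves the current visibility space."""
    if current_space.contains(eye_pos):
        return current_space              # eye still inside: keep showing the same hologram
    return render_visible_part(eye_pos)   # eye moved out: render the newly visible part
```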
In some embodiments of the invention, the scene itself changes over time. In one particular example, the scene is a video stream. In this case, the hologram will be adjusted even if the viewer's eye does not move. Optionally, these adjustments are made about 24 times per second, as it is well known in the art that the human brain sees smooth movement at such frequencies.
Alternatively, when a hologram of a video stream is shown, each time the hologram is adjusted (e.g., a hologram displaying another "frame" of the video stream), the system updates about the viewer's eye position and projects only those portions of the frame that will be viewable to the viewer from the viewer's current viewpoint. This allows, for example, a viewer watching a movie of a basketball game to change seats and see the game from different angles.
Exemplary System and control
FIG. 10 is a simplified block diagram of an exemplary system 900 illustrating some of the major elements of a projection system and the interactions between them.
The system 900 includes: a projection unit 910 driven by a driver 915 for generating and projecting a hologram. Projection unit 910 includes SLM 920.
The system 900 further includes: a calculation unit (930) for calculating a desired optical behavior of each pixel of SLM 920; and an SLM driver (940) for driving the optical behavior of each pixel in SLM920 according to the desired optical behavior as calculated by calculation unit 930.
The calculation unit 930 receives as input, for example, a dataset, an image or a video stream (optionally a 3-dimensional or volumetric image, optionally a stream of 3-dimensional or volumetric images). The input optionally has a digital form. Alternatively, the input is in analog form. In some embodiments, only the surface of the 3D image is provided. Optionally, the data is pre-computed for streaming to the SLM. Alternatively, unit 930 generates SLM data from the input, e.g. as described below. Optionally, unit 930 generates data and/or renders the input only for viewing directions in which the user is or is expected to be (e.g. assuming a certain movement speed of the human head). Optionally, there is a time delay (e.g., seconds or fractions of seconds) between the detection of a new user of the eye tracker and the presentation of an image (or a complete image) to the new user, e.g., due to delays in obtaining data and/or delays in rendering such data.
Optionally, volumetric 3D image stream data, or any image data, is stored in advance in a memory of the system, and during projection the stored data is accessed and used to control the system. Alternatively, the data is received online and only stored temporarily, as required for controlling the system during the projection.
A calculation unit 930 calculates, based on this input, what the optical behavior of each pixel of SLM920 should be, in order to reproduce with the hologram produced by the SLM the wave front corresponding to the wave front emanating from the scene. The SLM driver 940 drives the pixels of the SLM to the calculated optical behavior.
Optionally, unit 930 modifies the data it receives and/or the data to be displayed to take into account the optical properties of the system, the viewer and/or the calibration process. Optionally, the calibration process is visual and/or comprises detecting a pointing device of the user. In one example, a grid is shown and the user "touches" each point on the grid. In another example, a series of images are shown to a user and feedback is provided, for example, regarding color quality, shape, spatial distortion, and/or other image properties. Alternatively, the input is provided via an image or using an input device (e.g., a mouse or buttons not shown).
The system shown also comprises a tracking unit (950) which optionally provides information to the calculation unit 930 about the position of the eyes of the viewer, thus allowing the calculation unit 930 to estimate which parts of the scene the viewer will see from his viewpoint, and to calculate the optical behavior of the pixels to produce only the wavefronts emanating from these parts.
Additionally or alternatively, the tracking unit 950 provides information about the position of the viewer's eyes to the driver 915 of the projection system 910, allowing it to adjust the position of the visibility space of the generated hologram to the position of the viewer's eyes.
In an exemplary sequence of operations, a video stream is input into a computation unit 930 that computes desired optical behavior of individual pixels of SLM 920. The calculation unit 930 passes the calculated value to the SLM driver 940, which drives the SLM920 accordingly. The generation and projection unit 910 generates an object hologram using the SLM920 and projects the object hologram.
At the same time, the tracking unit tracks the position and orientation of the viewer and sends this information to the computing unit. The computing unit uses this information to compute a simplified hologram that reproduces only the light emanating from the scene in the direction of the viewer. The tracking unit also communicates the position and orientation of the viewer's eyes to the driver 915 of the generation and projection unit 910, and the driver 915 drives the projection unit to project the hologram to be viewable by the viewer.
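The interaction between the blocks of fig. 10 may be summarized, purely as an illustrative sketch, by the following per-frame loop; the object and method names are hypothetical placeholders for the units 910-950 described above:

```python
def run_frame(video_frame, tracking_unit, computation_unit, slm_driver, projection_driver):
    """One display frame: track, compute, drive the SLM, aim the visibility space, project."""
    eye_positions = tracking_unit.get_eye_positions()            # tracking unit (950)
    # compute per-pixel SLM behavior only for wavefronts headed toward the tracked eyes
    pixel_values = computation_unit.compute_slm(video_frame, eye_positions)  # unit (930)
    slm_driver.write(pixel_values)                               # SLM driver (940)
    projection_driver.aim_visibility_space(eye_positions)        # projection driver (915)
    projection_driver.project()                                  # generation and projection unit (910)
```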
Fig. 11 is a flow chart of actions taken in a method of generating and projecting a hologram to be seen by a viewer looking at a stage according to an embodiment of the present invention.
In 105, the position of the viewer's eyes is estimated. Optionally, the positions of both eyes are estimated. Optionally, the direction in which the viewer is gazing is also estimated. Optionally, the system is configured to project the hologram onto a predefined stage, and the position of the viewer's eyes and the stage determine the direction in which the viewer is looking.
At 110, based on the viewer's eye positions, the portion of the scene that would be viewable by the viewer, with the scene at the stage in a given orientation, is estimated.
At 115, holograms are generated for only those portions of the scene estimated at 110 to be seen by the viewer. Alternatively, the holograms are generated by calculating the optical behaviour of the individual SLM pixels required to generate the holograms and driving the SLM accordingly.
In 120, the hologram generated in 115 is projected such that the viewer will see the hologram from his position when gazing at the stage.
FIG. 12 is a flow chart of actions taken in a method 150 of generating a hologram to be seen by a viewer looking at the hologram from a given angle.
The viewer's position is captured at 152. Capturing the viewing position optionally includes receiving input from a tracking unit. Optionally, capturing further comprises processing the input. When the capture is complete, the system has determined the viewer's location, and the perspective from which the viewer will see the scene (if the scene is in fact on stage).
At 154, volumetric data is accessed. Optionally, the volumetric data is pre-stored in a memory of the system. Optionally, the volumetric data is received online (i.e., while the imaging process is in progress), for example from a 3D imaging device (e.g., a CT imager).
The system filters from the volumetric data the data needed to create the hologram of the portion of the scene, determined in 152, that will be seen by the viewer.
At 156, the computing system converts the volumetric data to holographic data, including, for example, setting a desired index of refraction for each active pixel of the SLM to generate the hologram. Alternatively, the SLM has inactive pixels that do not take part in generating the hologram. These pixels are optionally not illuminated in 160 (below). Alternatively, preventing illumination of these pixels is performed by an additional transmissive or reflective LCD or Digital Micromirror Device (DMD) placed between the light source and the SLM. The optional additional transmissive or reflective LCD or DMD is not depicted in the figures.
At 158, the SLM is controlled so that each pixel has in fact a refractive index set for it.
In 160, the SLM is illuminated to generate an object hologram.
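The sequence 152-160 of fig. 12 may be sketched, under assumed placeholder names for each step, roughly as follows:

```python
def filter_visible(volume, viewer_pos):
    """Placeholder: keep only the part of the volume visible from viewer_pos."""
    return volume

def to_hologram_data(visible_part):
    """Placeholder: convert volumetric data to per-pixel SLM settings (e.g., refractive indices)."""
    return visible_part

def generate_and_project(tracker, volume_store, slm, light_source):
    viewer_pos = tracker.capture_viewer_position()       # 152: capture the viewing position
    volume = volume_store.access()                        # 154: access the volumetric data
    visible_part = filter_visible(volume, viewer_pos)     # keep only what the viewer will see
    slm.set_pixels(to_hologram_data(visible_part))        # 156 and 158: compute and drive the SLM
    light_source.illuminate(slm)                          # 160: illuminate to form the object hologram
```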
Exemplary interaction with a hologram
Some embodiments of the invention allow a viewer to interact with the hologram. For example, the viewer may move his hand, or any other body part, or any object the viewer is holding (e.g., a pointer), to touch the hologram. Optionally, a sensor detects the position of the viewer's hand and controls an output device accordingly. In one example, a viewer touching a displayed clock causes the output device to sound a ring.
In another example, a viewer may interact with the hologram to manipulate the scene. For example, a viewer touching a car hologram at the bonnet may change the hologram to one inside the car engine.
Alternatively or additionally, a viewer touching a certain portion of the hologram may cause the output device to control the system to produce a hologram in which the touched portion is in front of the viewer. For example, a viewer facing swiss viewing the globe may touch the globe at spain, and the globe will rotate to bring spain in front of the viewer.
Alternatively or in addition, the viewer may interact with the hologram via a control panel (e.g., moving the hologram in space, rotating it about some predetermined axis, or defining a specific axis of rotation for the hologram and rotating the hologram about that axis), and perform any other manipulation of the orientation and/or position of the hologram in space.
Alternatively, two or more viewers may interact with holograms of the same scene at the same time. For example, two viewers may touch the same part of the scene in front of them, and although each viewer is looking at a different hologram (or even each eye of each viewer is looking at a different hologram), they also touch each other when they both touch the same part of the scene, e.g. both touch the clock.
Color image
Various embodiments are described above in the context of a monochrome image. However, multicolor images may be provided as well by systems and methods according to embodiments of the present invention.
In some embodiments, the color hologram is projected by a single SLM that is sequentially illuminated by red, green, and blue light. Alternatively, the color scene is processed into three monochromatic scenes (one red, one green and one blue), and the computing unit provides the SLM with data for sequentially generating monochromatic holograms which reproduce each monochromatic scene in its turn. Optionally, the light source is synchronized with the computing unit so that each monochrome hologram is formed with corresponding light (a hologram reproducing a red scene is generated with red light, etc.).
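A minimal sketch of the sequential (time-multiplexed) color scheme described above is given below; the API names are hypothetical and the hologram computation itself is passed in as a placeholder:

```python
def project_color_frame(scene_rgb, slm, light_source, compute_hologram):
    """Render the red, green and blue component scenes in turn with matching illumination."""
    for color in ("red", "green", "blue"):
        mono_scene = scene_rgb[color]                    # monochromatic component of the scene
        slm.load_hologram(compute_hologram(mono_scene))  # hologram reproducing that component
        light_source.set_color(color)                    # synchronize the illumination color
        light_source.flash()                             # illuminate the SLM with that color
```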
Optionally, the image of the SLM is projected to overlap with the eyes of the viewer while the hologram generated by the SLM is projected to the stage.
In some embodiments, the red, green, and blue light is projected onto three different SLMs, each SLM reproducing a single color scene. In some embodiments, each of the three SLMs is contained in a different unit cell, and in each rotation of the mirror, the images of all unit cells are projected sequentially to the stage so that the viewer sees a multi-color hologram.
In some embodiments, all three SLMs share a single optical system, such that synchronization between them is optical. For example, the three SLMs are three portions of a single SLM screen. Alternatively, the SLMs sharing the optical system are each contained in a single unit cell.
The light source used to generate the color hologram image comprises, for example, three different lasers. Another example is a light source comprising three different LEDs.
Exemplary use of multiple SLMs
In some embodiments of the invention, creating object holograms does not require activating a complete SLM unit. In these cases, it is possible to use one SLM as a plurality of SLM units. For example, one SLM of 1000 × 1000 pixels can be used as multiple (4) SLMs (each having 500 × 500 pixels), and all the advantages of using multiple SLMs discussed below are obtained.
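Addressing one large SLM as several smaller SLM units can be sketched, for example, by taking sub-array views of the full pixel array; the array shape and the use of numpy are assumptions for illustration only:

```python
import numpy as np

slm = np.zeros((1000, 1000))                 # full SLM pixel array (assumed shape)
units = [slm[r:r + 500, c:c + 500]           # four 500 x 500 sub-SLMs (views, not copies)
         for r in (0, 500) for c in (0, 500)]

units[0][:, :] = 1.0                         # drive "unit 0" independently of the others
print(len(units), units[0].shape)            # 4 (500, 500)
```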
Optionally, several SLMs are imaged at the same eye of the same viewer. This arrangement may have several uses.
For example, as mentioned above, in some embodiments, each SLM provides a single color image (red, green, or blue), and a viewer seeing three single color images perceives them as a single multi-color image. Note that if a non-hologram image is shown, a color SLM may be used. Alternatively, different source colors may be aimed at different parts of the hologram generating SLM, optionally with different colored SLM elements interleaved or otherwise mixed.
In another exemplary embodiment, multiple SLMs are used to provide a larger image than that provided by a single SLM. The system is controlled to form two half-objects (each formed by a single SLM) and image the half-objects in close proximity to each other on the stage. The two SLMs are imaged to overlap the same eye of the viewer. Thus, the viewer sees, with the same eye, an image composed of two images (one for each half-object). The composed image is optionally larger than either of the images that compose it.
In another exemplary embodiment, multiple SLMs are used to widen the angle from which an image is viewed. In one such embodiment, two SLMs are imaged close to each other, optionally with some overlap between the two SLM images, to the vicinity of the viewer. Each of the two SLMs optionally creates an object hologram of the same scene, and both holograms are imaged onto the stage. The viewer can see the image regardless of which image SLM overlaps his eye. This arrangement allows relaxing the requirements from the system tracking mechanism, since only relatively large movements of the viewer require system adjustments.
In another exemplary embodiment, multiple SLMs are used to widen the angle from which an image is viewed. In one such embodiment, two SLMs are imaged close to each other to the vicinity of the viewer, optionally with some gap between the two SLM images smaller than the viewer's pupil. Each of the two SLMs optionally creates an object hologram of the same scene, and both holograms are imaged onto the stage. The viewer can see the image regardless of which image SLM overlaps his eye. This arrangement allows relaxing even more requirements from the system tracking mechanism than the previous options, since only relatively large movements of the viewer require system adjustments.
Exemplary applications
Exemplary private content application
In some embodiments of the invention, the content is imaged to the eyes of only one viewer, and others in the vicinity of the viewer cannot see the content.
Alternatively, the viewer may view the content from any desired angle, and in some embodiments even move around the display, while others in his vicinity cannot see the content.
In some embodiments, the display is switched off when the tracking system loses track of the viewer. These embodiments are particularly useful for handling confidential material. For example, if a viewer views a confidential document on a state-of-the-art laptop computer display, the document can also be seen by a neighbor sitting next to the viewer. If the laptop is equipped with a display unit according to an embodiment of the present invention, the confidential document is displayed only to the viewer. However, the viewer is not stuck looking at the display from one angle. The viewer can leave the display, and a neighbor occupying the viewer's position in front of the display will still not be able to see anything, because the tracking system has lost track of the viewer and stops the display.
Exemplary medical applications
In many medical applications, physicians are provided with information about the three-dimensional structure of tissue. In some embodiments of the invention, this information is displayed to one or more physicians as a hologram with which the physicians may interact.
For example, as a preparation for minimally invasive cardiac surgery, a team of physicians uses existing ultrasound techniques to acquire dynamic 3D images of the heart. The team members may then view the acquired images from different perspectives, e.g., each viewer from his own perspective, while having the ability to point and mark specific areas within the images as part of their discussion and preparation for cardiac surgery.
In an exemplary embodiment, the image hologram has the same dimensions as the imaged scene (in the above example the heart). Thus, in case an external component, such as a stent, is to be inserted into a patient, the component may be fitted to the holographic image before the surgery is started, in order to minimize the need to fit it to the patient during surgery. This feature is optionally enhanced by having a virtual "floating in the air" image.
Exemplary computer aided design
In an exemplary embodiment of the invention, a computer-designed model is displayed to a team of designers, allowing one, several, or each team member to walk around, interact with, and/or manipulate the model. For example, in a model of a mechanical part such as a new cell phone housing in which the display is a touch screen, one designer may suggest modifying the illumination of the display, another designer may comment on the results of the modification and suggest adding real buttons, and those buttons may be presented immediately. Similarly, a single designer may look at the same detail from different angles. While viewing the design, a team member may point (with a dedicated pen or with his finger) to a specific portion within the image. Optionally, all team members see the portion pointed to, and the team members may discuss that portion of the scene as seen from their different perspectives. Optionally, the display includes a system-human interface that allows team members to manipulate specific components within the complete design, such as changing the color of a marked surface. Since the team can accurately view the image from all angles, the described process eliminates some of the rapid prototyping stages within the development process, thus reducing its overall time and cost.
Digital advertisement
Some embodiments of the invention may be utilized to obtain attention of a person exposed to the display without having to look at it intentionally. For example, some embodiments of the present invention may be used as an advertising display in public locations and obtain significantly more attention than the more traditional flat displays and posters.
For example, a display according to an embodiment of the invention may be positioned in an exhibition and viewers walking around it will see a holographic image of an advertised product, such as a cell phone, or the entire advertising film. Holographic displays attract more attention than ordinary posters or flat screens.
Optionally, in addition, one viewer manipulates the image while the other viewer is viewing. The manipulation is optionally by moving, rotating or zooming the image or interacting with the image in any other way. This embodiment optionally enhances the viewer's appeal to the presented image, product or service.
Optionally, different scenes of the product are displayed for viewing by viewers standing at different locations around the display. It is expected that people exposed to such advertising will be encouraged to walk around the display and pay more and more attention to the advertised product.
Optionally, an advertising display according to an embodiment of the present invention allows a viewer to manipulate the displayed scene as explained above under the heading "exemplary image manipulation". Allowing the viewer to manipulate the viewed image can be expected to increase the viewer's attention and involvement with the advertised product.
Optionally, an advertising display according to an embodiment of the invention displays the same hologram 360 ° around it and includes an input device to allow a user to indicate that the user is interested in looking more closely at the displayed scene. In response to receiving such an indication, the system begins tracking the viewer's face and allowing that particular viewer to see the advertised product from different angles as the viewer walks around the display.
User interface
A 3D interaction occurs when a user is able to move and perform the interaction in a 3D space. Human-computer interaction needs: both humans and machines receive and process information and then present the output of that process to each other. The user performs an action or gives a command to the machine in order to achieve the purpose. The machine takes the information provided by the user, performs some processing, and then presents the results back to the user.
Ideally, users could interact with a virtual image just as they can with a real object. In contrast to standard input devices like a keyboard or a 2D mouse, ideal operation in all three dimensions should allow six degrees of freedom, which is natural for the user. This type of 3D interaction device should recognize and interpret human actions and gestures and transform them into a corresponding manipulation of the image or virtual scene. Some embodiments of the present invention are much closer to this ideal than the standard input devices mentioned above.
While some devices enable three-dimensional interaction with up to six degrees of freedom, none of them enables this interaction to be performed on the actual projected image; the interaction takes place in another location in space, referred to herein as the input space, while the image is projected on a 2D screen or some form of 3D platform, referred to herein as the display space.
In some embodiments of the present invention, using the displays described herein, the images provide the viewer with the depth cues of real objects, making the user interface feel natural. Some embodiments of the invention enable a user to actually "touch" a projected 3D image while being provided with appropriate visual depth cues, optionally with a wide viewing angle, optionally allowing precise locations in space to be viewed, and optionally allowing viewing from different viewing angles. Some embodiments of the invention project a "floating in the air" image, so the image appears approximately 80cm from the viewer, within reach of the viewer's arm.
In some embodiments, the apparent distance from the viewer to the image is such that the user can reach the image, i.e., approximately the length of a bent arm, approximately the length of an outstretched arm, and/or approximately the length of an arm holding a cane or pointer.
In some embodiments, the size of the display space corresponds to the range of motion of the user's arm, i.e., spanning approximately 1 to 2 meters. In some embodiments, the size of the display space corresponds to the range of motion of the finger, i.e., spanning approximately 10 to 20 centimeters.
In some embodiments, the resolution of the input position corresponds to the movement of the user's arm, i.e., approximately 1 centimeter. In some embodiments, the resolution of the input location corresponds to the movement of the user's finger, i.e., approximately 1 millimeter. Coarser or finer resolutions are also possible optically, so some embodiments potentially operate at these resolutions.
In some embodiments, the user interface displays the floating-in-the-air object to one or more users in one or more locations. The floating object optionally does not appear to change position as the viewer moves. Floating objects may also optionally appear in the same position from different viewing directions.
In some embodiments, two user interfaces use two displays to display the same floating-in-the-air image in two different locations, enabling one or more users at a first location to perform partial or full walk-around at one or more walk-around rates, and simultaneously enabling one or more users at a second location to perform partial or full walk-around at one or more other walk-around rates.
In some embodiments, the user interface displays different floating-in-the-air images to different viewers. Displaying different images to different users optionally takes many forms: displaying entirely different images, such as a red balloon and a green Ferrari; displaying the same object using different coordinate systems, such as a first image showing a hammer placed in the center of the display and a second image showing the hammer placed at the side of the display; displaying a portion of the object in the first image and another portion of the object in the second image, the object optionally always being in the same position, the same coordinate system, and the same size; displaying different colors for the same object, such as displaying the object to a first user and displaying the same object to another user contrast-enhanced, color-enhanced, or in a different hue range; and displaying the same object in different sizes.
Note that displaying the same object in different sizes to different users poses a problem: when the first user points a pointer into the display space and "touches" a location on the object as seen by the first user, where should the pointer appear to touch in the display seen by the second user? One option is that the image displayed to the second user is displayed such that the touching tip of the pointer appears to touch the same location in the second image as in the first image, even when the second object appears to have a different size than the first object. That is, the second image is displayed in coordinates such that the tip of the pointer appears to touch the same location in the second image as in the first image.
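That option can be sketched as a simple coordinate transformation between the two displays; the uniform-scale model and the numeric values below are illustrative assumptions:

```python
def map_touch_point(tip_pos, center_a, scale_a, center_b, scale_b):
    """Convert a touch point from display A coordinates to display B coordinates."""
    # position of the touched point relative to the object, in object units
    object_point = tuple((p - c) / scale_a for p, c in zip(tip_pos, center_a))
    # the same object point expressed in display B coordinates
    return tuple(c + o * scale_b for o, c in zip(object_point, center_b))

# Object shown twice as large to user B; user A touches 2 cm to the right of center.
print(map_touch_point((0.02, 0.0, 0.0), (0, 0, 0), 1.0, (0, 0, 0), 2.0))  # (0.04, 0.0, 0.0)
```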
In some embodiments, the user interface displays different floating-in-the-air images to different eyes of the same viewer.
In some embodiments, the user interface enables walking partially or completely around the floating-in-the-air image as previously described to display different sides of the floating-in-the-air image as if the image were a real object floating in the air.
In some embodiments, the user interface allows a finger or some other object to be extended over or into the floating-in-the-air image.
In some applications of the user interface, the floating-in-the-air display utilizes embodiments of the volumetric display described herein. In other applications of the user interface, other volumetric displays are optionally used, provided their nature supports the particular application.
Reference is now made to fig. 13A, which is a simplified illustration of a user 1320 appearing to have a finger 1330 touching an object 1315 being displayed by a user interface 1300 constructed and operative in accordance with an embodiment of the present invention.
User interface 1300 includes a volumetric display 1305 that displays a first image in a three-dimensional display space 1310 that floats in the air. By way of non-limiting example, the image shows an object 1315 (a heart by way of non-limiting example).
Note that with reference to fig. 13A, 13B, 13C, 14, 15 and 16, where reference is made to a three-dimensional display and/or a floating-in-the-air display, the reference is meant to include, as non-limiting examples: holographic image display as described above; paraxial image display as described above; and other image displays suitable for volumetric display.
It is noted that two-dimensional displays, such as, by way of non-limiting example, television displays and/or computer monitor displays, are suitable for transformation into volumetric displays by generating two-dimensional images that float in the air similar to three-dimensional images.
User 1320 views object 1315 and extends finger 1330 to visibly "touch" object 1315. Since volume display 1305 is a floating-in-the-air volume display that displays floating-in-the-air images, volume display 1305 allows real objects to be inserted into display space 1310.
The user interface 1300 also optionally includes a computer 1335 that provides control and data 1340 to the volumetric display 1305.
The position of the finger 1330 is located by a position determining unit (not shown). The position determination unit optionally determines the position of the finger by recognizing a real object placed into the display space 1310.
The position determination unit optionally includes a unit for positioning an object (e.g., finger 1330) in three dimensions, such as, by way of non-limiting example, a camera mounted to pick up images in different directions and triangulate position in three dimensions, and/or a distance measurement unit that measures distances to objects in display space 1310.
In some embodiments of user interface 1300, a variant of a unit cell, such as depicted and described with reference to FIG. 7, operates as a location determination unit. The variation of the unit cell is positioned such that the reverse optical path from the display space 1310 leads to the unit cell. The unit cell optionally measures the position of an object (e.g., finger 1330) in the display space 1310 by a combination of the angle of rotation of the object relative to the base of the unit cell and the distance to the object. The angle of rotation optionally takes into account the rotation of the optical system.
In some embodiments of the present invention, the distance to the object (e.g., finger 1330) is measured by a distance measurement system, such as used in a camera auto-focus system. In some embodiments of the present invention, the distance to the object (e.g., finger 1330) is measured by a conoscopic distance measurement system.
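The combination of rotation angle and measured distance described above can be converted to a position, for example, as in the following sketch, which assumes a simplified horizontal-plane geometry and hypothetical variable names:

```python
import math

def position_from_angle_and_distance(angle_deg: float, distance_m: float, height_m: float = 0.0):
    """Convert (rotation angle about the display axis, range, height) to an (x, y, z) position."""
    a = math.radians(angle_deg)
    return (distance_m * math.cos(a), distance_m * math.sin(a), height_m)

print(position_from_angle_and_distance(90.0, 0.5))  # -> (~0.0, 0.5, 0.0)
```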
The position of the object is optionally used as an input 1345 to the computer 1335, and the computer 1335 optionally calculates control instructions and data for displaying a second image in which the input position is highlighted on the image displayed by the volumetric display 1305 (e.g., a highlight at the corresponding location on the heart).
Optionally, in order to locate a specific position on the object inserted into the display space 1310, the specific position is selected on the object, and the specific position may be further highlighted. As a non-limiting example, the tip of the finger may serve as the specific position. As another non-limiting example, the tip of the finger may be highlighted by marking it with a dye. The dye may be visible to the human eye and/or the dye may be selected to provide high contrast to a machine vision system that locates the position input.
Optionally, the interface 1300 tracks the position of the finger 1330 using a 3D camera available today, for example a camera from 3DV Systems.
Optionally, the user interface 1300 tracks the position of the finger 1330, or some other position indicating tool, and interprets the dynamic movement of the finger 1330 as a command gesture for the user interface 1300. The command gesture optionally causes manipulation of the displayed image. This use of a user interface provides the user with the experience of direct (virtual) shaping of the displayed objects and/or images and/or scenes. The above experience is especially enhanced when sensory feedback is provided.
Reference is now made to fig. 13B, which is a simplified illustration of a user 1370 whose pointer 1380 appears to touch an object 1365 being displayed by a user interface 1350 constructed and operative in accordance with an embodiment of the present invention.
The user interface 1350 includes a volumetric display 1355 that displays a first image in a three-dimensional display space 1360 that floats in the air. By way of non-limiting example, the image shows an object 1365 (a heart by way of non-limiting example).
The user 1370 views the object 1365 and extends the pointer 1380 to visibly "touch" the object 1365. The volumetric display 1355 allows a real object, such as a pointer 1380, to be inserted into the display space 1360.
The user interface 1350 also optionally includes a computer 1385 that provides control and data 1390 to the volumetric display 1355. The position of the pointer 1380 is located by a position determination unit (not shown). The position determination unit optionally determines the position of the pointer 1380 by recognizing a real object placed in the display space 1360.
The pointer 1380 optionally presents a more definite (better defined) position input than a finger. Additionally, it may be easier to locate the tip of the pointer 1380 than to locate the tip of a finger of the hand.
The tip of the pointer 1380 may be highlighted by marking with a dye or more than one dye.
In some embodiments, the tip of the pointer includes a substantially compact light source 1382. The light source 1382 may be visible to the human eye and/or the light source 1382 may be selected to provide high contrast to a machine vision system that locates the position input.
In some embodiments of the user interface, the location input causes the user interface to capture data corresponding to voxels that are substantially near the location input.
In some embodiments of the user interface, the location input causes the user interface to capture data corresponding to a sub-image located substantially near the location input.
In some embodiments, the user interface includes a display of an "actuator," i.e., a display of a button, a lever, or some such device that is typically pressed, pushed, pulled, etc. The user interface enables the user to place a hand and/or pointer into the display space and virtually "press a button", "push or pull a joystick", and the like. When the user interface senses that the position of a hand and/or pointer in the display space is next to the displayed actuator, the user interface optionally interprets the placement of the hand or pointer as an actuation of the actuator.
The user interface optionally provides sensory feedback to the user such that the user feels somewhat as if pressing/pulling/pushing the actuator.
The user interface optionally changes the first display to move an actuator image corresponding to actuating the actuator.
The user interface optionally changes the first display to show a change in the actuated actuator, pushed button, and/or other such indication to the user to "push the button". Note that the display is controlled by the user interface and thus optionally provides feedback upon actuation. This is in contrast to, for example, existing holographic displays that can display a hologram of a button, but cannot change the appearance of the holographic button, since their holograms are static displays projected from film.
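As a rough illustration of how a sensed position next to a displayed actuator might be interpreted as actuation, the following minimal sketch (the VirtualButton name, the capture radius and the callback are hypothetical, not taken from the patent) fires a callback once when a tracked fingertip enters the actuator's region of the display space:

    import numpy as np

    class VirtualButton:
        def __init__(self, center, radius, on_press):
            self.center = np.asarray(center, dtype=float)
            self.radius = radius          # capture tolerance around the displayed button
            self.on_press = on_press
            self._pressed = False

        def update(self, fingertip):
            inside = np.linalg.norm(np.asarray(fingertip) - self.center) <= self.radius
            if inside and not self._pressed:   # fire once on entry, not continuously
                self.on_press()
            self._pressed = inside

    button = VirtualButton(center=(0.0, 0.0, 0.1), radius=0.02,
                           on_press=lambda: print("zoom in"))
    for pos in [(0.1, 0.0, 0.1), (0.005, 0.0, 0.11), (0.1, 0.0, 0.1)]:
        button.update(pos)   # prints "zoom in" once, when the fingertip enters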
In some embodiments of the invention, the user interface displays an image of the robotic arm, and the computer optionally sends control signals and data so that the real robotic arm moves according to the user's input provided in the display space of the volumetric display of the user interface.
In some embodiments of the invention, the user interface optionally picks up more than one location input. The position input is optionally provided by a number of fingers in the display space of the volumetric display and/or by a number of pointers and/or by pointing to a number of positions in succession. The position input is optionally provided by several points on one finger and/or pointer. Several spots are marked on the finger and/or on the pointer, optionally with a contrast dye and/or a light source.
In some embodiments of the invention, the user interface optionally picks up more than one location input. The position input is optionally provided by calculating and/or estimating a position based on the shape of the object inserted into the display space. As a non-limiting example, the line is optionally calculated based on a long axis of a substantially elongated object inserted into the display space.
The user interface optionally tracks movement of one or more positional inputs over time, and optionally displays one or more paths that track movement in the display space, optionally superimposed on images displayed in the display space.
In an example application, the user interface optionally begins with an empty display, tracks movement of one or more location inputs over time, and optionally displays one or more paths that track movement in display space in real-time and/or near real-time.
In an example application, the user interface optionally accepts two positional inputs and defines a line in the three-dimensional volumetric display passing through the two positional inputs. This line is optionally used to further manipulate the image displayed by the user interface. As non-limiting examples, image manipulation using defined lines includes: rotation about the line; measuring the length of the wire; and dividing the displayed object into portions on both sides of the line.
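A minimal sketch of the line-based manipulation described above, assuming the rotation axis is the line through the two position inputs and using Rodrigues' rotation formula (the patent does not prescribe a specific formula; the names are illustrative):

    import numpy as np

    def rotate_about_line(points, p0, p1, angle):
        """Rotate an (N, 3) array of points by `angle` radians about the line p0-p1."""
        axis = np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)
        axis = axis / np.linalg.norm(axis)
        v = np.asarray(points, dtype=float) - p0          # move the axis to the origin
        cos_a, sin_a = np.cos(angle), np.sin(angle)
        rotated = (v * cos_a
                   + np.cross(axis, v) * sin_a
                   + np.outer(v @ axis, axis) * (1.0 - cos_a))
        return rotated + p0

    p0, p1 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
    print("line length:", np.linalg.norm(p1 - p0))
    print(rotate_about_line([[1.0, 0.0, 0.5]], p0, p1, np.pi / 2))  # -> ~[0, 1, 0.5]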
In an example application, the user interface optionally accepts three position inputs that are not on the same line and defines a plane in the three-dimensional volumetric display that passes through the three position inputs. The plane is optionally used to further manipulate the image displayed by the user interface. As non-limiting examples, image manipulation using the defined planes includes: a measurement of the area of intersection of the plane and the displayed object; and dividing the displayed object into portions on both sides of the plane.
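Similarly, a plane through three non-collinear position inputs can be represented by a point and a unit normal; the sketch below (illustrative only) also shows how displayed points may be divided into the two sides of that plane:

    import numpy as np

    def plane_from_points(a, b, c):
        a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
        normal = np.cross(b - a, c - a)
        return a, normal / np.linalg.norm(normal)   # point on plane, unit normal

    def split_by_plane(points, origin, normal):
        signed = (np.asarray(points) - origin) @ normal   # signed distance of each point
        return points[signed >= 0], points[signed < 0]

    origin, normal = plane_from_points([0, 0, 0], [1, 0, 0], [0, 1, 0])  # the z=0 plane
    pts = np.array([[0.2, 0.1, 0.5], [0.3, 0.3, -0.4]])
    above, below = split_by_plane(pts, origin, normal)
    print(above, below, sep="\n")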
In an example application, the user interface optionally accepts four or more positional inputs that are not on the same plane and defines a volume in the three-dimensional volumetric display based on the four or more positional inputs. The volume is optionally defined as a volume contained within the four or more position inputs, and/or a volume contained within some function of the four or more position inputs, such as a surface calculated based on the four or more position inputs.
The volume is optionally used to further manipulate the image displayed by the user interface. As a non-limiting example, image manipulation using a defined volume includes: measuring the volume; and dividing the displayed object into portions inside and outside the volume.
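One possible reading of "a volume contained within the four or more position inputs" is the convex hull of the inputs; the sketch below assumes that reading and relies on SciPy's spatial routines to measure the enclosed volume and classify points as inside or outside it:

    import numpy as np
    from scipy.spatial import ConvexHull, Delaunay

    position_inputs = np.array([[0.0, 0.0, 0.0],
                                [1.0, 0.0, 0.0],
                                [0.0, 1.0, 0.0],
                                [0.0, 0.0, 1.0],
                                [1.0, 1.0, 1.0]])
    hull = ConvexHull(position_inputs)
    print("enclosed volume:", hull.volume)        # volume bounded by the inputs

    # Dividing displayed points into inside/outside of that volume.
    tri = Delaunay(position_inputs)
    pts = np.array([[0.2, 0.2, 0.2], [2.0, 2.0, 2.0]])
    inside = tri.find_simplex(pts) >= 0           # True where a point falls in the hull
    print(inside)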
In some example applications, the user interface optionally picks up one or more location inputs. These position inputs are optionally used as inputs for initiating image processing functions such as, as some non-limiting examples: amplifying; shrinking; cutting the image; rotating the image; and segmenting the image.
Reference is now made to fig. 13C, which is a simplified illustration of a user 1370 inserting a frame 1383 into a display space 1360 of a user interface 1350 constructed and operative in accordance with an embodiment of the present invention.
The user interface 1350 includes a volumetric display 1355 that displays a first image in a three-dimensional display space 1360 that floats in the air. By way of non-limiting example, the image shows an object 1365 (a heart by way of non-limiting example).
The user 1370 views the object 1365 and extends the frame 1383 to visibly "surround" the object 1365. The volumetric display 1355 allows real objects, such as boxes 1383, to be inserted into the display space 1360.
The user interface 1350 also optionally includes a computer 1385 that provides control and data 1390 to the volumetric display 1355. The position of the frame 1383 is located by a position determination unit (not shown). The position determination unit optionally determines the position of the frame 1383 by identifying the real object placed into the display space 1360.
The frame 1383 optionally provides a position input that defines a plane, and optionally a bounded region within the plane. Optionally, the plane so defined is displayed to the viewer by means of the volumetric display of the invention. Optionally, the defined plane is presented on a 2D display. Optionally, the defined plane is displayed in real time.
In some embodiments of the invention, the frame 1383 is optionally a three-dimensional frame, such as a frame having the shape of a wire-frame cube. The shape of the frame is not limited to the example of a rectangle or a cube as depicted in fig. 13C, but includes various wire-frame-like shapes.
The frame 1383 optionally provides a position input defining a volume bounded within the frame 1383.
Reference is now made to FIG. 14, which is a simplified diagram of two users 1470, 1472 interacting with the same object 1465 being displayed by a user interface constructed and operated in accordance with an embodiment of the present invention.
The user interface (of which display space 1460 is shown in FIG. 14) displays floating-in-the-air images to the first user 1470 and the second user 1472. The floating-in-the-air image has an object 1465. The object 1465 appears to both users 1470, 1472 at the same location and substantially simultaneously, each user 1470, 1472 viewing the object 1465 from their respective locations.
The user interface optionally implements a simultaneous display of the same locations using embodiments of the volumetric display described herein. If the first user 1470 places a real object (not shown) in the display space 1460 of the volumetric display, such as, by way of non-limiting example, a hand, pointer, or frame, the second user 1472 sees the real object in the same location as the first user 1470. If, for example, the first user 1470 uses a pointer to point to a location on the displayed object 1465, the second user 1472 sees the pointer pointing to the same location.
The term "substantially simultaneously" is now explained with reference to the above description: two users see the object 1465 "in the same location and substantially simultaneously". The image of object 1465 is optionally displayed to each user by flashing two users 1470, 1472 for a short period of time, these flashes repeating at a rate of several times per second. The two users 1470, 1472 begin seeing the object 1465 several times per second during the same second, and are thus "substantially simultaneous".
In some embodiments of the user interface, the floating-in-the-air display of the user interface displays a different image to the first user 1470 than to the second user 1472, as described above with respect to embodiments of the volumetric display of the present invention. (Note that FIG. 14 does not show a first image displayed to the first user 1470 and a different second image displayed to the second user 1472.) In the presently described embodiment, if the first user 1470 points to a first object in the first image, the second user 1472 sees the first user 1470 pointing into the display space; however, since the second user 1472 does not see the first image, the pointing is generally meaningless to the second user 1472.
Reference is now made to FIG. 15, which is a simplified diagram of two users 1571, 1572 interacting with the same object being displayed by a user interface 1500 constructed and operative in accordance with an embodiment of the present invention.
The user interface 1500 of fig. 15 includes two volume displays 1556, 1557 that are optionally remote from each other. The two volume displays 1556, 1557 are optionally each connected to a respective computer 1551, 1552, and the two computers 1551, 1552 are optionally functionally connected to each other by a functional connection 1554.
In a first example use of the embodiment of fig. 15, a first volumetric display 1556 displays a first object 1565 in a display space 1561 of the first volumetric display 1556. The first user 1571 points to the first object 1565 using the pointer 1581. The location indicated by the pointer 1581 is picked up and transmitted 1596 to the first computer 1551. The first computer 1551 optionally sends signals and/or data to the first volumetric display 1556 (indicating where the first user 1571 is pointing, optionally providing feedback to the first user 1571), and optionally sends data to the second computer 1552 over the functional connection 1554.
The second computer 1552 optionally sends signals and/or data to a second volumetric display 1557, and the second volumetric display 1557 optionally displays an image of the second object 1566 in a display space 1562 of the second volumetric display 1557.
The image of the second object 1566 optionally appears the same as the image of the first object 1565. The image of the second object 1566 optionally also includes an indication of where the first user 1571 is pointing.
Note that the second user 1572 may point at the image of the second object 1566, and the location indicated by the second user 1572 may be picked up and transmitted 1597 to the second computer 1552. The second computer 1552 may optionally send signals and/or data to the second volumetric display 1557 (indicating where the second user 1572 is pointing, optionally providing feedback to the second user 1572), and optionally send data to the first computer 1551 over the functional connection 1554. The first computer 1551 optionally causes an image and the pointed-to location to be displayed.
Functional connection 1554 optionally includes a network connection between first computer 1551 and second computer 1552.
In some embodiments, functional connection 1554 includes whiteboard production software.
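The form of the functional connection 1554 is not specified beyond a network link and optional whiteboard-like software; as a hedged illustration, pointed-to locations could be exchanged between the two computers as small serialized messages, for example as follows (all names and the loopback socket standing in for the connection are purely illustrative):

    import json, socket

    def encode_pointer_event(user_id, position):
        return (json.dumps({"user": user_id, "pointed_at": list(position)}) + "\n").encode()

    def decode_pointer_event(line):
        return json.loads(line)

    # Loopback demonstration standing in for the functional connection 1554;
    # a real system would add buffering, authentication and error handling.
    a, b = socket.socketpair()
    a.sendall(encode_pointer_event("user-1571", (0.12, 0.30, 0.05)))
    event = decode_pointer_event(b.recv(1024).decode())
    print("display an indication at", event["pointed_at"], "for", event["user"])
    a.close(); b.close()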
In some embodiments of the invention, the first volumetric display 1556 and the second volumetric display 1557 do not necessarily display the same image. Some non-limiting example applications in which the first volumetric display 1556 and the second volumetric display 1557 display different images include: remote teaching, in which a teacher and a student can watch different images; and games where one user sees a different image than another, optionally one user sees more content than the other and the game uses the difference in viewing.
Fig. 15 depicts a setup using two volume displays 1556, 1557. Note that more than two volume displays may be connected with the functionality described with reference to fig. 15.
Embodiments of two or more volumetric displays connected to each other but remote from each other are particularly useful for medical and/or educational purposes. A medical case may be presented as a three-dimensional volumetric image, and a user at each volumetric display site may discuss the medical case, including by pointing out a location on the image and "touching" the image. An implant or prosthesis can be held up to the medical image and compared for size, even when the implant is in one location and the source of the medical image, i.e. the patient, is in another location.
Fig. 15 depicts the use of two computers 1551, 1552, one for each volumetric display 1556, 1557. Note that one computer may be used to support both volume displays 1556, 1557, provided there is sufficient computing power and provided there is sufficient communication bandwidth through the functional connection 1554.
Reference is now made to FIG. 16, which is a simplified diagram of a user 1620 comparing a real object 1680 to an object 1615 being displayed by a user interface constructed and operated in accordance with an embodiment of the invention.
FIG. 16 depicts a display space 1610, which is a portion of a user interface constructed and operative in accordance with an embodiment of the invention. The user interface displays a floating-in-the-air object 1615 in the display space 1610. The user 1620 places a real object 1680 in the display space 1610, and compares the real object 1680 with the displayed object 1615.
A non-limiting example of applying the scenario of fig. 16 to the real world includes using a three-dimensional medical data set to display a floating object 1615, such as a heart or a vascular structure. The cardiac or vascular structure is shown at full size. A user, such as a physician or medical student, holds a stent up to the displayed cardiac or vascular structure and compares the size of the stent to the size of the intended cardiac or vascular structure. Another example is holding an artificial percutaneous heart valve up to the displayed anatomy of the heart into which it is intended to be implanted. The real object 1680 being compared to the floating-in-the-air object may optionally be placed at and/or next to the implantation location. In this non-limiting example, the user may evaluate the position and orientation of the stent or valve so as to achieve good positioning, and may select a particular stent or valve in terms of, for example, size, a particular manufacturer, or a particular technology.
The scenario of fig. 16 enables: teaching how an implant fits in the body; research and/or development of new implants; and pre-implantation verification that an implant meets its purpose.
The user interface depicted in fig. 15 enables the medical data set from the first location to be displayed at the second remote location, and optionally a telemedicine session may be maintained, wherein the remote location provides recommendations, guidance, measures sizes, compares implant and/or tool sizes, and so forth.
The comparison of the first three-dimensional display object 1615 in fig. 16 is optionally performed with reference to a second three-dimensional display object (not shown in fig. 16). The first three-dimensional displayed object 1615 is optionally compared to one of a set of three-dimensional representations of objects (such as tools and/or implants) that are optionally saved for comparison purposes.
In some cases, a first three-dimensional object is compared to a second three-dimensional object by viewing the first and second objects in a user interface that includes a volumetric display as described herein. The first and second objects may be displaced and/or rotated in the space where the user interface of the present invention is used.
The scenario of fig. 16 enables comparison of objects that are not necessarily in a medical environment. As a non-limiting example, assuming that the object can be displayed, a go/no-go gauge can be held up to the floating-in-the-air display of the object, and the object is tested for conformance to a standard. Instead of bringing the go/no-go gauge to the object, the three-dimensional representation of the object is brought to the go/no-go gauge.
The situation in which the real object 1680 is compared to the display of the object 1615, in conjunction with the measurement of the real object 1680 within the display space 1610 as described above with reference to fig. 13A, enables the measurement of differences between the real object 1680 and the displayed object 1615. These differences include one or more of length, planar area, surface area, and volume. These differences are optionally measured for the object and/or parts of the object.
Some details of the user interface described above are now listed with reference to the following four questions: a data source for display; a display device; an interface device; as well as supporting software and communication devices.
Data source for display
Typically, for 3D representation, a cloud of XYZ points, called voxels or volumetric data, is optionally input and displayed. The input is optionally from a source that generates such information, optionally from computer-based data such as a CAD model, and/or externally acquired data such as a CT or MRI scan in medical imaging.
Alternatively, the data may be two-dimensional, such as a 2D image or stream of images from a computer, television, cable, satellite, or the like.
Optionally, the 2D/3D data source is holographic, i.e. an interference pattern or a stream of interference patterns.
Alternatively, the data may be from a user interface of the invention as described herein, including: a specific location entry point in space marked by a user; a path drawn by a user; and/or other images that a user may optionally generate in the display space of the user interface while interacting with the user interface and/or while offline.
Optionally, software interprets the user interface input and generates 2D or 3D (including holographic) data according to its tasks. For example, when the user "touches" a location using an interface tool, the user interface optionally displays a predefined indication, such as, by way of non-limiting example, a highlight and/or a particular shape such as a star or cross.
As a non-limiting example, data is optionally input from 3D medical imaging, also referred to as real-time 3D, 4D or volume rendered data, which provides volumetric and spatial rendering of the human anatomy.
Optionally, the input is a 3D data image stream.
Optionally, the input is provided "in real time", i.e. 24 frames per second or more.
The 2D/3D data is optionally extracted from a 3D imaging modality: CT; MRI; PET; 3D rotational angiography; 3D ultrasound; and future/emerging technologies.
The 2D/3D data optionally includes a combination of the above modalities, superimposed and/or fused data, which is also referred to as "combined imaging" or "image fusion". Examples include: fusion of CT and MRI results of the same patient; and MR guided ultrasound therapy.
The 2D/3D data optionally includes predefined anatomical models, such as an anatomical library of various clinical cases, and a collection of images for each patient.
The 2D/3D data optionally includes 2D/3D data from a CAD tool such as SolidWorks. The 2D/3D data may be still images and/or image streams. Example standards for such data include: IGES, 3DF, OBJ, and so on.
The 2D data optionally includes data from a computer having, for example, the VESA standard, and/or analog and digital video standards from television-related systems, such as composite video, DVI, etc.
In some applications of the invention, no data from an external source is passed to the volumetric display. In these cases, the user optionally draws lines, objects, 2D images, and volume images, and optionally performs digital volume sculpting, in the display space via the interface tool. The drawing and/or "sculpting" is optionally presented in near real time in the display space by the user interface via the volumetric display.
Display device
To interact with the floating-in-the-air image, a volumetric display device is optionally used. Typically, the device renders images generated from the data as "floating in the air" for the user to interact with the images.
The image data source may be 2D, 3D or volumetric.
Optionally, using the wide-view display of the present invention, the data is presented as an image having an axis of symmetry passing through the center of the image and through the middle of the display device. The 3D volume holographic data is optionally displayed using absolute coordinates, and may be viewed by one or more viewers, and/or by one viewer at different positions around the display.
The 2D information optionally appears "floating in the air" in various orientations, presenting a flat surface in any direction in a 360 degree circle. The 2D images optionally originate from different sources and optionally display different images to different viewers.
Optionally, the image is a re-imaged copy of a volumetric/2D image that is not itself "in the air", made to appear "in the air" by re-imaging optics.
A 3D parallax barrier image presenting two different images to both eyes of a viewer is optionally displayed.
In some embodiments of the invention, for 3D images, the "floating in the air" display uses projected 3D images with absolute coordinates in display space. High quality, wide viewing angle 3D display devices are optionally used. Non-limiting examples of such display devices include the wide view display of the present invention and possibly future wide view 3D display devices capable of displaying "floating in the air" images.
Interface device
Tools that support 2D/3D user input or manipulation in display space are considered for use as interface media. These tools include, by way of non-limiting example, hand-held tools, such as pen-like devices; a gesture recognition interface unit; an object recognition interface unit, such as for recognizing a finger and/or a finger tip; and a tracking system having the ability to track the position and/or orientation of the hand-held implement or finger.
Optionally, each discrete finger or tool may be separately detected, optionally separately identified, optionally differently marked.
An exemplary interface device is a stylus with one or more IR LEDs. One or more IR cameras are located near the display space of the user interface where interaction occurs and images are presented. The IR cameras optionally receive IR signals from the LEDs, and a position calculation unit optionally calculates the position and/or orientation in up to six degrees of freedom. The position calculation unit may be implemented in hardware and/or software. The position calculation is optionally performed using image processing techniques. The position calculation unit optionally communicates the position and/or orientation of the tool or finger to an optional computer for performing an action according to the user interface program. As a non-limiting example, exemplary actions are: marking a point in space; rendering lines or images in space; calculating a distance in space; drawing a path; calculating the absolute length of the path; saving the coordinates of the path; and so on.
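The following minimal sketch (a hypothetical structure, not the patent's implementation) shows how a stream of tracked stylus positions can support several of the example actions listed above: marking points, measuring a distance between marks, and measuring the absolute length of a drawn path:

    import numpy as np

    class StylusSession:
        def __init__(self):
            self.path = []          # every tracked sample
            self.marks = []         # positions saved on a button click

        def track(self, position):
            self.path.append(np.asarray(position, dtype=float))

        def mark(self):
            self.marks.append(self.path[-1])   # save the latest tracked position

        def distance_between_marks(self, i, j):
            return float(np.linalg.norm(self.marks[i] - self.marks[j]))

        def path_length(self):
            pts = np.array(self.path)
            return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

    session = StylusSession()
    for p in [(0, 0, 0), (0.1, 0, 0), (0.1, 0.1, 0), (0.1, 0.1, 0.1)]:
        session.track(p)
    session.mark()
    print("path length:", session.path_length())   # ~0.3 in display-space units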
Other exemplary methods for tracking tools/objects include: one or more CCD cameras and computing hardware for performing image processing, which extract the position/orientation of the tool/object in space; mechanical, magnetic, ultrasonic, optical, and hybrid inertial based tracking devices, combinations of some or all of the above sensors, and/or other methods for locating a tool/object in space.
One emerging method of tracking objects in space is based on illuminating the object with coherent light or illuminating a pattern onto the object, and processing the resulting image to interpret the position and orientation of the object. The above operations are optionally performed in real-time, that is, the image frames are calculated within the time it takes to capture the image frames. Real-time in this context also means at least a movie rate, such as 24 frames per second or more, although alternatively a rate of 2 frames per second or 10 frames per second may be used.
Example companies that are developing such methods include: PrimeSense Inc., of 28 Habarzel Street, Tel-Aviv, Israel; and 3DV Systems, of 2 Carmel Street, Industrial Park Building 1, P.O. Box 249, Yokneam 20692, Israel.
An example IR tracking company is NaturalPoint Inc., of P.O. Box 2317, Corvallis, OR 97339, USA.
An exemplary inertial/ultrasound tracking company is InterSense Inc., of 4 Federal Street, Billerica, MA 01821, USA.
Supporting software and communication device
The supporting software and communication device optionally process the display data source, the display device and the interface device, synchronize them and transfer data between them.
The supporting software and communication device are responsible for communication and data transfer between the other elements of the user interface, so that the presented information comprises raw data, input data, and/or interpreted products of actions performed with the interface device; the combined data is optionally presented to one or more users by the display device.
Optionally, the combined data is presented on a still 3D image or a dynamic image in real time.
Optionally, the image is 2D.
Optionally, the image is holographic.
Alternatively, the supporting software and communication devices may communicate with other systems, such as robots, for performing tasks in the space, for example, according to paths or other instructions received from the user via the 3D interface.
Optionally, the communication transmits data or portions of data to a remote display device.
Optionally, the communication transmits data or portions of data to other systems at a remote location, which use the data so that interaction with the "floating-in-the-air" image can be exploited by the remote system, whether that system is at a remote location or nearby.
Optionally, the data is transmitted via RF.
Alternatively, the data is sent via a wired physical layer.
Optionally, two (or more) different users interact with the same volumetric or 2D "in-air" image at the same location using the same device (system and display).
Optionally, two (or more) different users interact with the same volumetric or 2D "in-air" images at different locations using separate but communicating devices (systems and displays).
Example applications
Some example applications of the user interface of the present invention are described below.
In-air marking of specific points
For a projected volumetric image (optionally holographic), the user points to a particular location. With the aid of audible and/or button click indications, the user marks a particular point in the display space. For example, a stylus with a tracking sensor is used and a particular point is marked in the volumetric image by the user. The user interface records the spatial position and/or orientation of the stylus and saves it in the support hardware. The saved points are interpreted as one or more specific voxels, and the display device optionally renders these points on the volumetric image in real-time.
Optionally, the user interface starts with an initial image that is not projected in the display space, and only the one or more points subsequently marked by the user are presented in the display space.
As the image is projected, the user interface enables capturing a point in display space by "touching" a particular point in the image, also referred to as a voxel. Optionally, capture is enabled by proximity to the point, through a predefined "cloud" surrounding the point, so that the user does not have to touch the exact location; this provides some tolerance for inaccuracies such as those of a human hand.
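Assuming the "cloud" around each voxel is a simple distance tolerance, the capture-by-proximity behaviour might be sketched as follows (the function name and tolerance value are illustrative only):

    import numpy as np

    def capture_voxel(touch, voxels, tolerance):
        """Return the index of the nearest displayed voxel within tolerance, else None."""
        voxels = np.asarray(voxels, dtype=float)
        d = np.linalg.norm(voxels - np.asarray(touch, dtype=float), axis=1)
        nearest = int(np.argmin(d))
        return nearest if d[nearest] <= tolerance else None   # None: nothing captured

    voxels = [[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.1]]
    print(capture_voxel([0.012, 0.003, 0.0], voxels, tolerance=0.02))  # -> 0
    print(capture_voxel([0.5, 0.5, 0.5], voxels, tolerance=0.02))      # -> None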
Optionally, the display device projects certain marked points, which the user may "grab" by pointing close enough to a point and then pressing a button on the pointing tool.
Optionally, the "marker" image allows for "tagging" of the marker region, so that in the case of a dynamic image, the marker region can be isolated and optionally tracked over time for movement of the marker region. An example is tracking mitral valve leaflets, which are optionally tagged by an interface device. The untagged portions of the displayed image are optionally removed, and the dynamic leaflets are optionally tracked and specifically studied. Labeling is optionally performed before and after the medical intervention, and the images are optionally overlaid to compare and assess the efficacy of the medical intervention. Such labeling is optionally applied to other static and/or dynamic portions of the image including the myocardium before and after resynchronization, the electrophysiological pathway (electrophysiologic pathway) before and after ablation, and so forth.
Additional examples of "over the air tags" include:
tools for human-assisted edge correction, optionally following edge recognition performed by supporting software, are used for medical imaging, such as ultrasound-based diagnosis, interventional cardiac surgery, and the like.
Marking a location in an organ;
marking the location of the tumor;
marking in space in the body the position with respect to the proposed device implantation, organ and therapeutic intervention; and
marking a position outside the body or organ, referencing it to a fixed point in space or in the organ, and maintaining this relationship during movement of the body or organ or movement outside the body or organ (such as with ECG gating or breathing).
The "in-air" marking of a particular point enables the hands to be "closed" by a viewer of the aerial image in an intuitive and accurate manner; a hand-to-user interface; from the user interface to the display; and a "loop" from the display to the eye. Later, the marked points or regions may be transmitted to and used by other systems that deal with the spatial region of interest. Examples may be focused radiation on a specific region of a tumor, a robotic or tool for performing ablation on a specific point marked on an organ, etc.
Volume and length quantization
For a projected image, such as of bone tissue, the user optionally marks one end of the bone image and then marks the other end of the bone image. The user interface system indicates the marked points and optionally calculates the length of the bone via a software module.
If a path length is required, the user optionally marks more than two points and optionally calculates the path length. Optionally, the continuous path is drawn by a user and the user interface system calculates the length of the continuous path.
To quantify the volume, several points on the volume outline are optionally marked by the user, for example 10 points, and the user interface software optionally calculates the volume between the points. Optionally, the software infers the measurements and displays a continuous volume that approximates in shape the objects marked by the points, and calculates the volume. Optionally the calculated shape is presented in real time on the image, allowing the user to perform edge correction by moving the calculated shape edges to the edges of the true shape, allowing fine tuning of the volume and quantification of the volume.
Optionally, the projected image is projected at a 1:1 ratio. Optionally, the projected image is enlarged or reduced per user input. The user interface system optionally defines a scale so that measurements can be made on objects displayed at various scales, and the user interface system optionally outputs absolute measurements using the scale factor.
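A minimal sketch of applying such a scale factor, so that measurements made on an enlarged or reduced image are reported in absolute units (the exponent simply reflects whether a length, area or volume is being measured; all values are illustrative):

    def absolute_measurement(display_value, scale, dimension=1):
        """dimension: 1 for length, 2 for area, 3 for volume."""
        return display_value / (scale ** dimension)

    scale = 2.0   # image displayed at twice its real size
    print(absolute_measurement(36.0, scale, dimension=1))   # 18.0 (length units)
    print(absolute_measurement(36.0, scale, dimension=3))   # 4.5  (volume units)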
The "in-air" labeling of specific points enables in-air quantification of length, distance and volume. The user marks 2 points for the system to facilitate calculation of length or distance, or marks multiple points of the user interface system to calculate volume.
In the medical field, there are accepted normal surface areas and volumes of different organs, sometimes calculated from height, weight and/or body surface area. The areas and volumes are expressed in ml, l, square or cubic cm, and are typically expressed in a range with a standard deviation and expressed as Z values or the like. Optionally, the user interface system projects a static or dynamic normal area or volume (such as a lung volume) separately and/or together with an actual image of a static or dynamic organ obtained from CT, MRI or other such modality used to generate an actual or computed image.
Some example uses for quantization are listed below.
In the field of cardiology, ejection fraction is quantified based on the volume of the beating heart.
In the field of pneumology, volumetric analysis of lung function.
The real volume of the organ is compared to a predicted volume of the organ based on a standard reference.
In the field of obstetrics, a fetus is diagnosed based on the quantification of the area and/or volume of a fetal organ or fetal body.
Other areas where volume quantification is useful include orthopedics and oncology, e.g., for mapping and measuring tumors.
Frame-like interface device
Users of a user interface, such as physicians, often wish to view a particular plane in a volumetric image. The particular plane to be viewed may have various orientations and positions. A natural method for a person to select a specific plane is described with reference to the user interface of the present invention.
As described below, a volumetric image is projected, and the position and orientation of the interface device optionally defines a plane in the 3D volumetric image.
A planar frame is used, optionally having a diagonal length approximately equal to the length of the long axis of the image. The frame may be made as a "wire frame", i.e. an outline, or as a sheet of transparent material, optionally glass or polycarbonate. Markers are optionally placed on the frame edges, such as IR LEDs on the corners. The user physically inserts the frame into the projected image and thereby indicates a specific plane in the projected image, or a framed region within that plane. The data included in the particular plane or framed region is processed, and the particular plane or framed region may optionally be projected on a volumetric display and/or a conventional 2D display.
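Assuming the tracked corner markers give three corners of the frame in the voxel coordinate system of the data, extracting the planar image bounded by the frame can be sketched as a resampling of the volume along that plane (the geometry and the SciPy-based interpolation below are illustrative, not the patent's algorithm):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def extract_slice(volume, corner, u_corner, v_corner, shape=(128, 128)):
        """corner, u_corner, v_corner: three frame corners, in voxel coordinates."""
        corner = np.asarray(corner, dtype=float)
        u = np.asarray(u_corner, dtype=float) - corner   # one frame edge
        v = np.asarray(v_corner, dtype=float) - corner   # the adjacent frame edge
        s, t = np.meshgrid(np.linspace(0, 1, shape[0]),
                           np.linspace(0, 1, shape[1]), indexing="ij")
        coords = corner[:, None, None] + u[:, None, None] * s + v[:, None, None] * t
        return map_coordinates(volume, coords, order=1)  # trilinear interpolation

    volume = np.random.rand(64, 64, 64)                  # stand-in for CT/MRI voxels
    plane = extract_slice(volume, (10, 10, 10), (10, 50, 10), (10, 10, 50))
    print(plane.shape)                                   # (128, 128) planar image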
Optionally, the continuous movement of the frame in the image produces a continuous stream of planar images on the same display or on corresponding one or more 2D displays.
Optionally, the volume display "crops" the image plane by the limits of the frame.
Optionally, the frame may have any size and shape that defines a plane. Optionally, the frame may be a three-dimensional frame defining a three-dimensional shape in the image.
Optionally, the marking of the plane allows for "tagging" the marked plane. Optionally in the case of a dynamic projected image, the marker region is optionally isolated and movement of the marker region is tracked over time.
A non-limiting example use of the user interface: when 3D rotational angiography (3DRA) is used during a transcatheter procedure, the physician optionally selects a particular plane for viewing on a 2D display, the plane being extracted from the volumetric image "floating" in front of the physician and defined by the frame-like interface.
Aerial image steering interface
For a rendered volumetric image, the user optionally marks points on the image or on the image outline, and by gesture or some other point marking method, defines the rotation of the image in a particular direction. Optionally, the user marks two points and rotates the image such that the axis of rotation is an axis defined by a line defined by the two points. The manipulation is optionally performed based at least in part on receiving the marker points, interpreting the marker points by software and/or hardware, defining the image to be projected by the display, and optionally based on a specific "command" such as "pivot" provided by the user. The corresponding image is optionally rendered and presented via a display. Optionally the action is performed in real time.
Alternatively, the user defines a plane or box by marking three or more points in the displayed object and "slices" the object so that only the plane or box is projected. Optionally, the user selects to crop the image on both sides of the plane and/or outside the box. Alternatively, the user may repeat the clipping action, thus defining a series of "clipping" planes.
Optionally, real-time lines are projected on the image according to the orientation of the interface device, and optionally the image is cropped according to a "line" path.
Optionally, a real-time line is defined by the user, according to the orientation of the interface device, which optionally serves as a symmetry axis around which the image can be rotated.
Optionally, "in-air" buttons with indications such as zoom in and/or zoom out are displayed and the user may "touch" these buttons with the interface device.
Optionally, the user "captures" a portion of the image using a frame-like interface, optionally when pressing a button or some form of such command. In this case, the user can optionally move all of the virtual images with his hands and/or with the frame, "as if the virtual images were physically connected to the frame. The above-described capability is similar to moving an object connected to a wand, and the user optionally moves the object by moving the wand, like a moving popsicle (popsicle).
"air" navigation
The user interface system optionally receives a location of the location indicator and presents the location in the display space. The system optionally presents images in which the tool is navigated, such as images from CT data or real-time ultrasound images. The system optionally superimposes tool position indications, optionally from a variety of sources, such as a unit for tracking the tool, on the image, optionally after scale correlation. The user then optionally visually checks whether the tool is in the correct position or on the correct route. In the event that the tool position is incorrect, the user may virtually "touch" the tool position indicator in the projected volumetric image and drag the tool position indicator to the preferred location and/or route. The new coordinates of the location and/or the route are recorded by the interface tool and optionally provided to the tool navigation unit.
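A hedged sketch of the "scale correlation" and route-correction steps described above, with a simple scale-and-offset mapping standing in for whatever registration a real navigation system would use (all names and values are illustrative):

    import numpy as np

    def tool_to_image(tool_pos, scale, offset):
        """Map a tracked tool position into the displayed image's coordinate system."""
        return scale * np.asarray(tool_pos, dtype=float) + np.asarray(offset, dtype=float)

    route = [np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.1])]   # planned route
    tool_in_image = tool_to_image((1.2, 3.4, 5.6), scale=0.01, offset=(-0.01, -0.03, -0.05))
    print("superimpose tool marker at", tool_in_image)

    # The user drags the next waypoint to a preferred location in the display space;
    # the corrected coordinates are handed to the (separate) navigation system.
    dragged_to = np.array([0.0, 0.02, 0.1])
    route[1] = dragged_to
    print("corrected route:", route)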
Optionally, the system for controlling navigation of the position indicator corrects the movement of the actual position indicator according to the user indication.
Optionally, the second doctor and/or user manually moves the interface device according to the instructions of the first user. The control loop is "closed" via visual control by the user as the interface device is optionally continuously presented on the image.
Optionally, the user draws the path in the display space using an interface device. The system optionally records the coordinates of each point in the path. The path coordinates may be used by a separate machine, such as a robot, to control the machine to follow the drawn path.
Optionally, an image of the machine is projected, optionally using a volumetric display, the machine is monitored by a physician and/or an automated machine, and real-time corrections of the path may be made.
3D navigation has become an important application in electrophysiology-based cardiac procedures. The "in-air" navigation optionally allows the user to view, optionally in real time, static or dynamic images, as well as position indicators and/or paths superimposed on the images. As another example, electromagnetic 3D navigation is also implemented in pneumology/bronchoscopy to provide minimally invasive access to lesions deep in the lung and to deep mediastinal lymph nodes.
The above-mentioned tracked machine may also be a tool, an implantable device, or a treatment, such as a drug; a stent; a catheter; a valve; a combination of permanent or temporary tools; a drug-eluting stent; chemotherapy agents attached to embolic particles; devices or sensors actuated by forces or energy external to the body or organ (such as radio-frequency or acoustic energy, ultrasound or HIFU); a radio-frequency catheter for ablation; and catheters for cryoablation.
Telesurgical operation
The above-mentioned manipulation of the tool image in the display space causes telerobotic manipulation of the real tool somewhere, enabling telesurgery and/or telenavigation through the body.
Optionally, the user manipulates a real tool in the display space of a first volumetric display, which also displays the human body. The manipulation is tracked, and the real manipulation is carried out on a real tool at the remote location. The resulting changes in the body at the remote location are picked up by a three-dimensional imaging device and sent to the first volumetric display. The user thus sees an image of the result of the actual manipulation by the actual tool.
Optionally, the user manipulates an image of the tool, i.e. a virtual tool, in the display space of a first volumetric display that also displays the human body. The manipulation is tracked, and a corresponding real manipulation is carried out by a real tool at the remote location. The changes in the body at the remote location and the changes of the real tool are picked up by a three-dimensional imaging device and sent to the first volumetric display. The user thus sees an image of the result of the actual manipulation of the body by the actual tool, as well as an image of the tool.
Drawing in the air
The present invention, in some embodiments thereof, provides a tool for drawing points, lines and/or paths in a volumetric projection image and enables a user to see the drawing in real time. The "over the air" rendering optionally provides a collaborative tool between users/physicians, allowing points or regions in space to be marked so as to facilitate discussion of a particular anatomy or region of interest in an image.
Optionally, the "in-air" rendering is transformed into coordinates of the display space, optionally in real time, by computation, and optionally into coordinates of some other space, which optionally will be used by other instruments, such as robots.
A non-limiting example of medical use with "in-air" mapping is the real-time location of a particular marker for guidance applications.
Positioning of virtual objects within a displayed image
The present invention, in some embodiments thereof, provides a tool for combining images of virtual objects into a display image generated from data from an input source. One use of positioning virtual objects inside the displayed image is to simulate device selection using the displayed volumetric image.
Various tools/objects may be 3D modeled, the modeling optionally including dynamic operations.
The user optionally picks the virtual tool/object and moves the virtual tool/object in the display space using the 3D interface, positioning the tool/object at a particular location and orientation. For example, an image of a virtual heart valve is optionally generated that is similar in size and shape to the particular valve. The user optionally drags the virtual valve over the image of the patient's heart. The user optionally marks the interface points on the heart image and the corresponding points on the virtual valve. The display calculation unit optionally calculates a combined image of the heart and the virtual valve and presents the combination to the user. The user optionally assesses whether the valve is in the correct position and orientation and optionally performs another measurement/indication if desired. The user optionally also assesses whether the size of the valve fits.
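The patent does not name a fitting method for matching the interface points marked on the heart image to the corresponding points marked on the virtual valve; one plausible choice is a rigid (Kabsch/SVD) fit, sketched below with purely illustrative point sets:

    import numpy as np

    def rigid_fit(source, target):
        """Rotation R and translation t minimizing ||R @ source + t - target||."""
        src, tgt = np.asarray(source, float), np.asarray(target, float)
        src_c, tgt_c = src - src.mean(0), tgt - tgt.mean(0)
        u, _, vt = np.linalg.svd(src_c.T @ tgt_c)
        d = np.sign(np.linalg.det(vt.T @ u.T))            # avoid reflections
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        t = tgt.mean(0) - r @ src.mean(0)
        return r, t

    valve_marks = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]   # marked on the virtual valve
    heart_marks = [[5, 5, 5], [5, 6, 5], [4, 5, 5], [5, 5, 6]]   # corresponding marks on the heart
    r, t = rigid_fit(valve_marks, heart_marks)
    placed = (np.asarray(valve_marks) @ r.T) + t      # valve points in heart coordinates
    print(np.round(placed, 3))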
Alternatively, the virtual valve may be dynamic and superimposed on a dynamic or static image of the heart.
Optionally, simulations of blood flow and tissue movement are predicted, calculated and displayed.
Optionally, the user requests to display instances from the virtual image library in the display space, each instance representing a particular actual tool or implant. In the case of an expansion tool, such as a cardiac stent, the library optionally includes tools having a dynamic representation of an unexpanded form, an expanded form, and an expansion process.
Other non-limiting examples of use for positioning virtual objects inside a display image include valve positioning; assembling an orthopaedic prosthesis; assembling intracardiac and extracardiac prostheses, devices, implantable devices, stents, arterial grafts, stent grafts; and intraventricular devices such as ventricular assist devices.
Localization of real objects inside an image
The present invention, in some embodiments thereof, provides a tool for combining a real object with an image generated from data from an input source. This combination is optionally used for real device selection, such as heart valve selection.
The user optionally places a real object, for example an object that is to be inserted into the body later, into the displayed image. The real object may be inserted by hand and/or using a tool for holding the real object. The user optionally positions the object within a static or dynamic image of the organ projected in the volumetric "in-air" display. The user/physician thereby optionally assesses how he wishes to insert the real object into the body, how well the real object physically matches the body part, and so on.
Another non-limiting example of use with respect to locating real objects within an image includes valve location.
Interactive game
Inserting a real object into the display space of the volumetric display enables the use of the user interface system for gaming.
As a non-limiting example, a user optionally waves a game prop, such as a sword, a tool, or some such prop, in a game having a three-dimensional virtual reality display on a volumetric display.
A non-limiting example of the type of game supported is virtual sword fighting of two or more users at two or more different volumetric displays connected by communication as described above with reference to fig. 15.
An interesting game that is particularly supported by the volumetric display of the present invention is a virtual piñata game. A first user waves a "wand" in the display space of the volumetric display and does not see the virtual piñata in the display space. Other users see the virtual piñata in the display space and see the "wand" that the first user waves. The virtual piñata game may be played at one volumetric display by two or more users around the display, or at two or more volumetric displays.
Another interesting game is the game of "battleships", where each user sees only their own battleship on the same volumetric display.
Another interesting category of games based on the ability to insert real objects into the display space includes hand-eye coordination games such as pick-up sticks and Jenga. These games optionally use virtual game pieces that are displayed three-dimensionally in the volumetric display, and the user "grabs" a game piece by reaching into the display space. The tracking device optionally measures the gaps between the user's fingers to determine the user's grip on the game piece.
General notes
It is anticipated that during the life of a patent maturing from this application many relevant spatial light modulators, hologram generating units and volumetric displays will be developed, and the scope of the corresponding terms is intended to include all such new technologies a priori.
The terms "imaging" and "projecting" are used interchangeably herein.
The term "exemplary" is used in a sense to be an example, instance, or illustration.
The terms "comprising," "including," "having," and their conjugates mean "including, but not limited to.
The term "consisting of …" means "including and limited to".
The term "consisting essentially of …" means that the composition, method, or structure may include additional components, steps, and/or elements, but only if the additional components, steps, and/or elements do not materially alter the basic and novel characteristics of the claimed composition, method, or structure.
The word "optionally" is used herein to mean "provided in some embodiments and not provided in other embodiments". Any particular embodiment of the invention may include a plurality of "optional" features unless there is a conflict between such features.
As used herein, the singular forms "a", "an" and "the" are intended to mean "at least one" unless the context clearly indicates otherwise. For example, the term "a mirror" may include a plurality of mirrors.
The term "about" as used herein means ± 10%.
Ranges are provided herein interchangeably in two equivalent formats: "from X to Y" and "between X and Y" and in both cases X, Y and any value in between are contemplated.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments should not be considered essential features of those embodiments, unless the embodiments are inoperable without these elements.
While the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
The entire contents of all publications, patents and patent applications mentioned in this specification are incorporated herein by reference, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.
Claims (25)
1. A method for implementing a floating-in-the-air user interface, comprising:
producing a computer-generated interference pattern hologram on a spatial light modulator;
illuminating the spatial light modulator with coherent light, producing a first computer-generated three-dimensional holography image having a depth;
displaying the first computer-generated holography image in a first floating-in-the-air display space, the first computer-generated holography image providing all holography image depth cues;
inserting a real object into the first floating-in-the-air display space and causing the real object to apparently touch a first computer-generated holography image;
locating a position of the real object within a display space of the first floating-in-the-air display; and
providing the location as an input to the floating-in-the-air user interface,
wherein
Displaying the first computer-generated holography image providing all holography image depth cues enables a user to focus on both an object and a touch point in the first computer-generated holography image; and
locating the touch location of the real object within the first floating-in-the-air display space includes locating an apparent touch location.
2. The method of claim 1, wherein the real object is a finger.
3. The method of claim 1, and further comprising:
displaying an actuator in the first computer-generated holography image;
moving the position of the real object to the actuator; and
interpreting the position input as the real object actuating the actuator.
4. The method of claim 1, and further comprising:
moving the position of the real object;
tracking the position of the real object over time; and
interpreting the position input as the real object manipulating at least a portion of the first computer-generated holographic image.
5. The method of claim 1, wherein the real object further comprises a plurality of real objects, and the location of each real object is used as a location input for the floating-in-the-air user interface.
6. The method of claim 1, wherein the location comprises two locations within the display space, and the two locations determine a rotation axis in the display space.
7. The method of claim 1, wherein the location further comprises a plurality of locations based at least in part on a plurality of different locations of a point on the real object at different times.
8. The method of claim 7, wherein a path connecting the plurality of locations is displayed by the first floating-in-the-air display.
9. The method of claim 8, and further comprising implementing at least one function from the following group of functions based at least in part on the plurality of locations:
magnifying the first computer-generated holographic image;
reducing the first computer-generated holographic image;
rotating the first computer-generated holographic image;
measuring a length within the first computer-generated holographic image;
measuring an area within the first computer-generated holographic image; and
measuring a volume within the first computer-generated holographic image.
10. The method of claim 1, wherein the input for the floating-in-the-air user interface further comprises at least one additional input selected from the group consisting of:
a voice command;
a mouse click;
a keyboard input; and
a button press.
11. The method of claim 1, and further comprising marking the location in contrast to the remainder of the real object.
12. The method of any preceding claim, and further comprising:
a second floating-in-the-air display displays a second computer-generated holographic image at substantially the same time as the first floating-in-the-air display displays the first computer-generated holographic image, and
wherein the first computer-generated holographic image is displayed to a first user and the second computer-generated holographic image is displayed to a second user.
13. The method of claim 12, wherein the first floating-in-the-air display displaying the same scene as the second floating-in-the-air display comprises displaying a first location.
14. The method of claim 12, wherein the first and second floating-in-the-air displays are used to implement a telemedicine interaction between a first user of the first floating-in-the-air display and a second user of the second floating-in-the-air display.
15. The method of claim 12, wherein the first and second floating-in-the-air displays are used to implement whiteboard-like cooperative sharing between the first and second floating-in-the-air displays.
16. The method of claim 12, wherein the first and second floating-in-the-air displays are used to implement a game in which a first user of the first floating-in-the-air display and a second user of the second floating-in-the-air display participate.
17. A user interface, comprising:
a first floating-in-the-air display configured to: illuminate a computer-generated interference pattern hologram on a spatial light modulator with coherent light, thereby producing a first computer-generated three-dimensional holographic image having depth; and display the first computer-generated three-dimensional holographic image within a first display space, the first computer-generated three-dimensional holographic image providing all holographic image depth cues, the first display space being a volume into which an object can be inserted by a user; and
a first input unit adapted to accept an input of a first position of an object inserted in the first display space;
wherein:
the first display space is adapted to enable insertion of a real object into the first display space so as to cause the real object to apparently touch the first computer-generated holographic image;
the first floating-in-the-air display is configured to display the first computer-generated holographic image providing all holographic image depth cues, to enable a user to focus on both the object and a touch location in the first computer-generated holographic image; and
the input of the first position of the inserted object within the first display space includes a position of an apparent touch of a real object within the display space of the first floating-in-the-air display.
18. The user interface of claim 17, wherein the floating-in-the-air display is a volumetric display.
19. The user interface of claim 17, wherein the floating-in-the-air display is a two-dimensional floating-in-the-air display.
20. The user interface of claim 17, wherein the first floating-in-the-air display displays a first location.
21. The user interface of claim 17, and further comprising:
tracking a position of an eye of a viewer; and
projecting the first computer-generated holographic image towards the location of the viewer's eye,
and wherein the first floating-in-the-air display is adapted to display the first computer-generated holographic image at least partially within arm's reach of the viewer.
22. The user interface of claim 20, further comprising a second floating-in-the-air display, wherein the second floating-in-the-air display displays the same scene as displayed by the first floating-in-the-air display, including displaying the first location.
23. The user interface of claim 22, wherein the first floating-in-the-air display and the second floating-in-the-air display are connected by a communication channel between the first floating-in-the-air display and the second floating-in-the-air display.
24. The user interface according to any of claims 22-23, and further comprising a second input unit adapted to accept an input of a second location within a second display space, the second display space being a volume within which an object displayed by the second floating-in-the-air display appears, and wherein the first floating-in-the-air display is adapted to display the same scene as the second floating-in-the-air display, including displaying the second location.
25. The user interface of claim 17, wherein the first floating-in-the-air display is adapted to provide an artificial touch perception based at least in part on the location and on the content being displayed at the location.
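The claims above define the floating-in-the-air user interface functionally rather than prescribing an implementation. As a minimal, non-normative sketch of the touch-input loop recited in claims 1-4, assuming a hypothetical 3D tracker and invented names (`locate_real_object`, `Vec3`, `Actuator`, `handle_input`) that do not appear in the disclosure, one possible organization is:

```python
# Illustrative sketch only: hypothetical structures for the touch-input
# loop of claims 1-4. The patent does not prescribe these names or APIs.
import math
from dataclasses import dataclass
from typing import List


@dataclass
class Vec3:
    """A point in the coordinate frame of the floating-in-the-air display space."""
    x: float
    y: float
    z: float

    def dist(self, other: "Vec3") -> float:
        return math.sqrt((self.x - other.x) ** 2 +
                         (self.y - other.y) ** 2 +
                         (self.z - other.z) ** 2)


@dataclass
class Actuator:
    """A virtual button rendered inside the holographic image (claim 3)."""
    name: str
    center: Vec3
    radius: float  # apparent-touch tolerance, in display-space units

    def is_touched(self, p: Vec3) -> bool:
        return p.dist(self.center) <= self.radius


def locate_real_object(sensor_frame) -> Vec3:
    """Placeholder for the locating step of claim 1: any 3D tracker that
    reports the position of the inserted real object (e.g. a fingertip)
    in display-space coordinates could stand in here."""
    raise NotImplementedError("tracker-specific")


def handle_input(position: Vec3, actuators: List[Actuator]) -> None:
    """Provide the located position as input to the user interface and
    interpret it as actuating any actuator it apparently touches."""
    for actuator in actuators:
        if actuator.is_touched(position):
            print(f"actuator '{actuator.name}' actuated at {position}")


if __name__ == "__main__":
    ok_button = Actuator("ok", Vec3(0.0, 0.0, 0.10), radius=0.01)
    # A simulated apparent-touch position inside the display space:
    handle_input(Vec3(0.002, -0.003, 0.105), [ok_button])
```

Because the displayed image and the input occupy the same display-space volume, a single coordinate frame suffices; the `radius` tolerance here is merely one way to model the apparent touch location of claim 1.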
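A second sketch, under the same caveats, illustrates the shared-scene arrangement of claims 12 and 22-24, in which two floating-in-the-air displays exchange touch locations over a communication channel so that both show the same scene. The in-process queues used here merely stand in for whatever transport such a channel might use:

```python
# Illustrative sketch only: two displays sharing touch locations
# (claims 12 and 22-24). Names and transport are hypothetical.
from dataclasses import dataclass
from queue import Empty, Queue
from typing import Optional, Tuple

Position = Tuple[float, float, float]


@dataclass
class DisplayState:
    """Scene state rendered by one floating-in-the-air display."""
    name: str
    local_touch: Optional[Position] = None
    remote_touch: Optional[Position] = None


class SharedDisplay:
    def __init__(self, name: str, tx: Queue, rx: Queue):
        self.state = DisplayState(name)
        self.tx = tx  # channel towards the other display
        self.rx = rx  # channel from the other display

    def report_touch(self, position: Position) -> None:
        """Record a local apparent-touch location and share it (claim 24)."""
        self.state.local_touch = position
        self.tx.put(position)

    def refresh(self) -> None:
        """Pull any remote touch location so both displays show the same
        scene, including the shared location (claim 22)."""
        try:
            self.state.remote_touch = self.rx.get_nowait()
        except Empty:
            pass
        print(f"{self.state.name}: local={self.state.local_touch} "
              f"remote={self.state.remote_touch}")


if __name__ == "__main__":
    a_to_b, b_to_a = Queue(), Queue()
    display_a = SharedDisplay("display A", tx=a_to_b, rx=b_to_a)
    display_b = SharedDisplay("display B", tx=b_to_a, rx=a_to_b)

    display_a.report_touch((0.01, 0.02, 0.10))  # user A touches the scene
    display_b.refresh()  # display B now also shows A's touch location
    display_a.refresh()
```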
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US61/129,665 | 2008-07-10 | 2008-07-10 | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1194480A true HK1194480A (en) | 2014-10-17 |
| HK1194480B HK1194480B (en) | 2018-07-27 |