WO2018161113A1 - Systems, methods and devices for controlling a view of a 3d object on a display - Google Patents
Systems, methods and devices for controlling a view of a 3D object on a display
- Publication number
- WO2018161113A1 (PCT/AU2018/050200; AU2018050200W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- control device
- display
- determining
- computing device
- orientation
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1626—Constructional details or arrangements for portable computers with a single-body enclosure integrating a flat display, e.g. Personal Digital Assistants [PDAs]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1684—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
- G06F1/1694—Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2200/00—Indexing scheme relating to G06F1/04 - G06F1/32
- G06F2200/16—Indexing scheme relating to G06F1/16 - G06F1/18
- G06F2200/163—Indexing scheme relating to constructional details of the computer
- G06F2200/1637—Sensing arrangement for detection of housing movement or orientation, e.g. for controlling scrolling or cursor movement on the display of an handheld computer
Definitions
- the disclosure relates to systems, methods and devices for controlling a view of a representation of an object, in particular a three dimensional (3D) object, on a display.
- these 3D controls are enabled via mouse or track pad movements in combination with keyboard controls.
- lack of standardized 3D controls can be confusing, requiring users to learn a new interface each time they encounter a new application. Unless these controls can be quickly and easily understood, many potential users of an application may give up.
- Specialized input devices have been developed including depth-sensing cameras, input gloves, multi-touch interaction screens, and 3D 'wands', also known as 3D mice, such as the Nintendo Wii Remote controller.
- these devices are only used by a relatively small fraction of 'expert' users, compared with all users of 3D graphics. This is especially a concern when creating 3D web applications. While it can be advantageous to support specialized devices, it is often avoided for web applications due to the limited availability of such devices to users.
- a second major challenge for 3D graphics applications is to provide sufficient depth cues to communicate the 3D nature of an object or scene on a two dimensional (2D) display. Potentially, this can be solved using 3D display systems, such as 3D glasses and head-mounted displays like the Oculus Rift.
- 3D displays are currently uncommon, hence most 3D applications rely on depth cue methods that work with 2D displays.
- the method comprising: receiving data based on a sensor output of one or more inertial measurement sensors of a control device; determining a perspective to the 3D object based on the received data; and sending, to the display, display data representing a view of the 3D object from the determined perspective.
- the received data may comprise one or more of the following: information relating to orientation of the control device; and information relating to movement of the control device.
- Determining the perspective to the 3D object may comprise: in a first mode, determining an unrestricted change in orientation of the 3D object based on the received data; and in a second mode, determining a restricted change in orientation of the 3D object around a rotation axis based on the received data.
- the method comprising: receiving data based on a sensor output of one or more inertial measurement sensors of a control device; determining a perspective to the 3D object based on the received data, wherein determining a perspective to the 3D object comprises: in a first mode, determining an unrestricted change in orientation of the 3D object based on the received data, and in a second mode, determining a restricted change in orientation of the 3D object around a rotation axis based on the received data; and sending, to the display, display data representing a view of the 3D object from the determined perspective.
- the control device may be in the form of a mobile computing device.
- the methods may further comprise determining a change in orientation of the control device based on the received data, wherein determining the perspective to the 3D object comprises determining a corresponding change in orientation of the 3D object based on the change in orientation of the control device.
- the methods may further comprise receiving a lock command from the control device, and in response to the lock command changing to the second mode from the first mode.
- the method may further comprise determining a lock command based on receiving one or more specified sensor outputs of one or more inertial measurement sensors of the control device, wherein in response to the lock command, changing to the second mode from the first mode.
- the lock command may be further based on receiving one or more specified sensor outputs over a specified time.
- the methods may further comprise determining a change in position of the control device based on the received data, wherein determining the perspective to the 3D object comprises determining a corresponding translation of the 3D object based on the change in position of the control device.
- the methods may further comprise determining a change in orientation and/or position of the control device based on the received data, wherein determining the perspective to the 3D object comprises determining a corresponding change in magnification of the 3D object based on the change in orientation and/or position of the control device.
- the method may further comprise determining a toggle command based on receiving one or more specified sensor outputs of one or more inertial measurement sensors of the control device, wherein in response to the toggle command, the method comprises changing from one mode to another mode in a plurality of modes, wherein the plurality of modes include determining the perspective to the 3D object in at least one or more of: - an orientation of the 3D object;
- a server may comprise the processor, and sending the display data to the display may comprise sending the display data to a slave computing device associated with the display.
- a slave computing device associated with the display may comprise the processor.
- the slave computing device may communicate with the control device via peer-to-peer (P2P), wireless network or Bluetooth.
- the method may further comprise: sending, to the display, display data showing a pairing key; receiving the pairing key via an input device of the control device; and when the pairing key is received, pairing the control device with the slave computing device for controlling the view of the representation of the 3D object on the display.
- the input device of the control device may include a camera, wherein the pairing key includes at least part of the view of the representation of the 3D object on the display. In some examples, the pairing key includes at least part of the view of the representation of the 3D object in motion.
- pairing the control device with the slave computing device may include exclusive pairing for a specified exclusive time, wherein upon expiration of the specified exclusive time, the control device or another control device can pair with the slave computing device.
- the specified exclusive time period is based on time after a last change in sensor output of the one or more inertial measurement sensors of the control device.
- the method may further include controlling the view of the representation of the 3D object on the display with the further control device paired to the slave computing device. This may include passing off control to the further control device such that the control device that originally controlled the 3D object temporarily, or permanently, ceases to have control.
- the methods may further comprise transmitting feedback data to the control device to cause haptic feedback on the control device.
- the 3D object may be a molecule.
- a system for controlling a view of a representation of a three dimensional (3D) object on a display comprises: a processor, and a memory comprising a computer program that when executed by the processor performs the following: receiving data based on a sensor output of one or more inertial measurement sensors of a control device; determining a perspective to the 3D object based on the received data; and sending, to the display, display data representing a view of the 3D object from the determined perspective.
- a system for controlling a view of a representation of a three dimensional (3D) object on a display comprises: a processor, and a memory comprising a computer program that when executed by the processor performs the following: receiving data based on a sensor output of one or more inertial measurement sensors of a control device; determining a perspective to the 3D object based on the received data, wherein determining a perspective to the 3D object comprises: in a first mode, determining an unrestricted change in orientation of the 3D object based on the received data, and in a second mode, determining a restricted change in orientation of the 3D object around a rotation axis based on the received data; and sending, to the display, display data representing a view of the 3D object from the determined perspective.
- a system for controlling a view of a representation of a three dimensional (3D) object on a display comprising: a processor; and a memory
- control device is a mobile computing device.
- system includes one or more of the control devices.
- Fig. 1 illustrates a system for controlling a view of a representation of a three dimensional (3D) object on a display.
- Fig. 2 illustrates a method for controlling a view of a representation of a three dimensional (3D) object on a display.
- Fig. 3A illustrates an example mobile computing device that may be used as a control device.
- Fig. 3B illustrates an example display showing a representation of a 3D object.
- Fig. 4 illustrates a second system for controlling a view of a representation of a three dimensional (3D) object on a display.
- Fig. 5 illustrates a method of pairing a mobile computing device with a slave computing device associated with a display.
- Fig. 6 illustrates a mobile computing device showing a second user interface.
- Fig. 7A illustrates a mobile computing device showing a first user interface.
- Fig. 7B illustrates a mobile computing device placed on a surface.
- Fig. 7C illustrates a display showing rotation of the 3D object after locking rotation to be about a single axis.
- Fig. 8 illustrates a display showing a 3D object and a control device capturing an image of the 3D object for pairing.
- Fig. 9 illustrates a display and a control device passing control to another control device in close proximity.
- Fig. 10 illustrates a display showing two 3D objects and two control devices, wherein each control device captures a respective image of one of the 3D objects for pairing.
- Fig. 11 illustrates a pair of displays, wherein a 3D object is moved from one display to another display using the control device.
- Fig. 12 illustrates a method for controlling a view of a representation of a 3D object, wherein the method includes a first mode with unrestricted change in orientation and a second mode with restricted change in orientation around a rotation axis.
- Fig. 13 illustrates a computing device that may be, for example, a server or the slave computing device used in the disclosure.
Description of Embodiments
- Systems, methods and devices are described for controlling a view of a representation of a three dimensional (3D) object on a display.
- the devices may be mobile computing devices or other control devices that include sensors which provide information about orientation and/or movement of the device.
- the systems, methods and devices may be used, for example, for molecular graphics applications, which generally require a complex set of 3D controls, including full 3-axis rotation, 3-axis translation, plus many more specialized controls, such as the ability to re-set rotation centres to an arbitrary set of atoms.
- the 3D object may be a 3D dataset.
- Fig. 1 illustrates a system 100.
- the system 100 comprises a control device 110 in the form of a mobile computing device, such as a smart phone, tablet, or a similar computing device that can be handheld.
- One or more sensors 115 of the control device 110 measure information about orientation and/or movement of the control device 110.
- the system comprises a display 130 which shows a representation of a 3D object 135.
- the 3D object 135 may be, for example, a molecule, a model generated by medical imaging such as computed tomography (CT) or magnetic resonance imaging (MRI), or another object which a user wishes to view in three dimensions.
- the system 100 comprises a processor 120 that receives data based on a sensor output of the one or more sensors 115 and controls a perspective to the 3D object 135 on the display based on the received data.
- the representation of the 3D object 135 may be rotated, translated, enlarged or reduced in size on the display based on the orientation and/or movement of the control device 110.
- the processor 120 may, for example, form part of the mobile computing device 110, a computing device associated with the display 130, or a separate server that communicates with the mobile computing device 110 and a computing device associated with the display 130.
- Fig. 2 illustrates a method 200.
- the method may be implemented by a processor for controlling a view of a representation of a 3D object on a display.
- the method 200 may be implemented in the system 100 by the processor 120.
- the method comprises receiving data based on a sensor output of one or more inertial measurement sensors of a device, such as a mobile computing device or other control device.
- the sensors may include, for example, one or more 3-axis gyroscopes and/or 3-axis accelerometers which may track changes in rotation and translation, and acceleration, of the device.
- the method comprises determining a perspective to the 3D object based on the received data.
- the received data may comprise information relating to orientation and/or movement of the mobile computing device.
- the method comprises sending, to the display, display data representing a view of the 3D object from the determined perspective.
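- As a minimal, non-authoritative sketch of steps 210-230 (assuming a JavaScript/web implementation such as the one described later in connection with Socket.IO; determinePerspective, renderView and display.send are hypothetical helpers, not names from the patent), the processor-side flow might look like:
```javascript
// Illustrative sketch of method 200 only; not code from the patent.
socket.on('sensorData', (data) => {                  // step 210: receive data based on the
                                                     // inertial measurement sensor output
  const perspective = determinePerspective(data);    // step 220: determine a perspective to the 3D object
  const displayData = renderView(object3d, perspective);
  display.send(displayData);                         // step 230: send display data for the determined view
});
```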
- Fig. 3A illustrates an example mobile computing device 110.
- a 3D coordinate system 112 is shown on the mobile computing device 110.
- the coordinate system 112 comprises an x-axis, a y-axis and a z-axis. Rotation about the x-axis is indicated by β, rotation about the y-axis is indicated by γ, and rotation about the z-axis is indicated by α.
- the mobile computing device 110 may be held by a user and rotated about one or more of the three axes and/or translated in one or more of the three dimensions.
- a processor may receive data based on a sensor output of one or more inertial measurement sensors of the mobile computing device 110.
- Fig. 3B illustrates an example display 130 showing a representation of a 3D object 135.
- the 3D object is represented within a coordinate system comprising an x1-axis, a y1-axis and a z1-axis.
- the representation of a view of the 3D object 135 on the display 130 from a determined perspective is based on the perspective to the 3D object determined from the received data.
- the processor may determine a change in orientation of the mobile computing device 110 based on the received data.
- rotations about the x1-axis, y1-axis and/or z1-axis may correspond to a determined rotation of the mobile computing device 110 about the x-axis, y-axis and/or z-axis, respectively.
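- As an illustration of this direct mapping (a sketch only; the patent does not name a rendering library, so three.js is assumed here purely for the example, and the scene's world axes are assumed to be aligned with the device's reference frame — in practice an additional fixed axis remapping may be needed), the device's orientation angles could be applied to the rendered object as follows:
```javascript
// Illustrative first-mode mapping, assuming a three.js scene where 'object3d' is the rendered 3D object.
// alpha/beta/gamma follow the W3C deviceorientation convention (rotations about z, x' and y'', in degrees).
function applyUnrestrictedOrientation(object3d, alpha, beta, gamma) {
  const toRad = Math.PI / 180;
  // Z-X'-Y'' intrinsic Tait-Bryan angles, matching the deviceorientation event described later.
  const euler = new THREE.Euler(beta * toRad, gamma * toRad, alpha * toRad, 'ZXY');
  object3d.quaternion.setFromEuler(euler);
}
```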
- determining the perspective to the 3D object comprises determining an unrestricted change in orientation of the 3D object based on the received data.
- the orientation of the 3D object may be changed in three dimensions. This is shown in steps 220A and 230A in method 200A in Fig. 12.
- determining the perspective to the 3D object comprises determining a restricted change in orientation of the 3D object around a rotation axis y1′ based on the received data, as shown in Figs. 7B and 7C and steps 220B and 230B in Fig. 12.
- the processor may receive a lock command from the mobile computing device and in response to the lock command change to the second mode from the first mode.
- the change to the second mode from the first mode may also be triggered by other means, such as the processor detecting movement of the mobile computing device 110 of less than a threshold for a predefined period of time or the processor detecting a movement of the mobile computing device 110 that is greater than a threshold acceleration or velocity.
- the rotation of the 3D object may be restricted to rotation about a single axis based on rotation of the mobile computing device 110.
- the single axis may be the x1-axis, y1-axis, z1-axis or another axis determined based on orientation of the representation of the 3D object when the second mode is invoked.
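- A minimal sketch of this second mode (assuming, for illustration only, that the object's orientation and the device heading are captured at the moment of locking, and that three.js is the renderer) is:
```javascript
// Illustrative second-mode sketch: rotation restricted to a single locked axis.
// 'lockedAxis', 'lockedQuaternion' and 'alphaAtLock' are captured when the lock command is received.
const lockedAxis = new THREE.Vector3(0, 1, 0);   // e.g. the y1'-axis
let lockedQuaternion = null;                     // object orientation at the moment of locking
let alphaAtLock = 0;                             // device heading (degrees) at the moment of locking

function applyRestrictedOrientation(object3d, alpha) {
  const deltaRad = (alpha - alphaAtLock) * Math.PI / 180;
  const spin = new THREE.Quaternion().setFromAxisAngle(lockedAxis, deltaRad);
  // Rotate only about the locked axis, relative to the orientation held when locking occurred.
  object3d.quaternion.copy(spin).multiply(lockedQuaternion);
}
```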
- the processor may determine a change in position of the mobile computing device 110 based on the received data. For example, if the mobile computing device 110 is translated along the x-axis, the y-axis and/or the z-axis, the processor may determine this change in position of the mobile computing device based on the received data. Determining the perspective to the 3D object may then comprise determining a corresponding translation of the 3D object along the x1-axis, y1-axis and/or z1-axis based on the change in position of the mobile computing device.
- determining the perspective to the 3D object may comprise determining a corresponding change in magnification of the 3D object based on the change in orientation and/or position of the mobile computing device 110. For example, a translation along the x-axis, the y-axis or the z-axis, or a rotation about the x-axis, the y-axis or the z-axis, may cause a zooming in or zooming out of the representation of the 3D object 135 on the display 130.
- the third and/or fourth mode may operate simultaneously with the first mode.
- Fig. 4 illustrates a system 400.
- the system 400 comprises a master computing device 410 such as the control device 110, a slave computing device 430 which drives the display 130, and optionally a server 420 which may comprise the processor 120.
- the master computing device 410 comprises one or more sensors 115 that measure information about orientation and/or movement of the master computing device 410.
- the slave computing device 430 drives the display 130 to display a representation of a 3D object 135.
- the slave computing device 430 may be a laptop or desktop computer having an external or inbuilt display 130.
- the master computing device 410 and the slave computing device 430 may separately connect to the server 420 and establish a pairing to enable the master computing device 410 to control the 3D object 135 on the display 130.
- Fig. 5 illustrates a method 500 of pairing the master computing device 410, such as a mobile computing device 110, with the slave computing device 430.
- the method 500 comprises sending, to the display, display data showing a pairing key.
- the processor 120 at the server 420 may generate a unique key.
- the processor 120 may send the unique key to the slave computing device 430 to show on the display 130.
- the slave computing device 430 may connect to the server 420 via a web browser by entering a uniform resource locator (URL) of the server 420.
- a web page at the URL may also provide the 3D object to the slave computing device to display on the display.
- the method 500 comprises receiving the pairing key via an input device of the mobile computing device 110.
- a user of the slave computing device 430 may enter the unique key into a user interface on the master computing device 410 after reading the unique key from the display 130.
- alternatively, a camera of the master computing device 410 may read the unique key from the display 130.
- the user interface may, for example, be received by accessing the same URL via a web browser on the master computing device 410 or may form part of an app on the master computing device 410.
- the method 500 comprises, when the pairing key is received, pairing the mobile computing device 110 with the slave computing device 430 for controlling the view of the representation of the 3D object 135 on the display 130.
- the processor 120 receives the unique key from the master computing device 410 and pairs the master computing device 410 with the slave computing device 430. Once paired, the 3D object 135 may be controlled using the master computing device 410.
- Fig. 6 illustrates a mobile computing device 110 showing a second user interface 650.
- the second user interface 650 comprises an input field 660, such as a text box, to receive the pairing key from the user.
- a deviceorientation JavaScript event may be available in the webpage. This event is defined by the World Wide Web Consortium.
- the deviceorientation JavaScript event is fired upon a change in the master computing device's 410 orientation, and the web browser can then read three Tait-Bryan angles, which define the smartphone's orientation with respect to the world coordinate frame, with Z-X′-Y″ intrinsic rotations, i.e. the first rotation is described around the z-axis, the second rotation is described around the new x-axis after the previous rotation, and the third rotation is described around the new y-axis after the previous two rotations.
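- The deviceorientation event is a standard W3C browser API; a minimal listener that reads the three angles might look like the following (sendOrientation is a hypothetical helper for forwarding the values):
```javascript
// Fired by the browser whenever the device's physical orientation changes.
window.addEventListener('deviceorientation', (event) => {
  const alpha = event.alpha;   // rotation about the z-axis, 0 to 360 degrees
  const beta  = event.beta;    // rotation about the new x'-axis, -180 to 180 degrees
  const gamma = event.gamma;   // rotation about the new y''-axis, -90 to 90 degrees
  sendOrientation({ alpha, beta, gamma });   // hypothetical: forward to the server or slave computing device
});
```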
- the master computing device 410 and the slave computing device 430 may use the Socket.IO JavaScript library for bi-directional communication between their web browser and the server 420.
- Socket.IO uses the WebSocket communications protocol with polling as a fallback option. WebSocket allows data transfer from a web browser to the server 420 and reverse, thus enabling real-time communication between multiple web browsers.
- the JavaScript library Socket.IO provides a server-side Node.js library and a client-side JavaScript library.
- a key Socket.IO functionality that may be utilised is the pairing of two web browsers by creating a room via the socket.join(ROOM_KEY) function, with ROOM_KEY being a unique key.
- two clients i.e. the master computing device 410 and the slave computing device 430 may be paired, such that the control communications are only required to be between the paired clients.
- the slave computing device 430 creates a unique room key and sends the unique room key to the server 420 to create a room.
- the unique room key may be manually input into the slave computing device 430 or may be generated, for example randomly, by the slave computing device 430, such as by executing client side code received from the server 420.
- the master computing device 410 then receives the unique room key via a user interface, for example from a user of the slave computing device 430 who knows the unique room key, and sends the unique room key to the server 420 for verification. Once the server verifies the unique room key, the master computing device 410 sends data based on one or more sensor outputs of the one or more sensors 115 to the server 420.
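- A sketch of this master-side flow is shown below; socket.io-client is the real library, but the event names ('joinRoom', 'joined', 'orientation') and the server URL are assumptions made for this example:
```javascript
// Illustrative master computing device (410) client using socket.io-client.
const socket = io('https://example-server.invalid');    // connect to the server 420 (placeholder URL)

function pairWithRoom(roomKey) {
  socket.emit('joinRoom', roomKey);                      // send the unique room key entered by the user
  socket.on('joined', () => {
    // Once the server verifies the key, start streaming orientation data to the paired room.
    window.addEventListener('deviceorientation', (event) => {
      socket.emit('orientation', {
        alpha: event.alpha, beta: event.beta, gamma: event.gamma,
      });
    });
  });
}
```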
- the server 420 then generates a view of the 3D object depending on the received data.
- the server 420 may create rooms on request by receiving a room key from a web browser, store room keys, forward room keys to a web browser, verify room keys, pair clients by connecting them in the same room, check the number of clients per room (e.g. only two clients may be permitted per room), deny access to a 'full' room, and allow communication between clients in the same room.
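- On the server side, this room behaviour could be sketched roughly as follows using the Socket.IO Node.js library (a sketch assuming Socket.IO v3+ where rooms are exposed as a Map; the event names and the two-client limit check are illustrative):
```javascript
// Illustrative server (420) sketch using Socket.IO's Node.js library.
const io = require('socket.io')(3000);

io.on('connection', (socket) => {
  // Slave computing device (430) creates a room identified by a unique room key.
  socket.on('createRoom', (roomKey) => socket.join(roomKey));

  // Master computing device (410) asks to join an existing room.
  socket.on('joinRoom', (roomKey) => {
    const room = io.sockets.adapter.rooms.get(roomKey);
    if (!room || room.size >= 2) {
      socket.emit('joinDenied');                 // unknown key, or room already 'full'
      return;
    }
    socket.join(roomKey);
    socket.emit('joined');

    // Forward sensor data only to the paired client(s) in the same room.
    socket.on('orientation', (data) => socket.to(roomKey).emit('orientation', data));
  });
});
```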
- Socket.IO's functionality for pairing clients by creating rooms identified by a unique room key may be used. This requires, for example, the use of a 5 digit pairing key which is entered by the user, adding an extra layer of complexity for creating a connection.
- the master computing device 410 and the slave computing device 430 may be paired by an alternative means such as a quick response (QR) code.
- the JavaScript code that is executed on the master computing device 410 or the slave computing device 430 may be packaged in an API. This allows easy integration with any web page which renders and/or allows the control of 3D content.
- Web Real-Time Communication (WebRTC) could alternatively provide a peer-to-peer connection, i.e. browser-to-browser communication without a server. However, WebRTC currently is not supported in some browsers (e.g. Safari), and also lacks support for mobile browsers, making it less suitable for interaction involving mobile computing devices. Therefore, other types of peer-to-peer connection may be used, or future versions of WebRTC may be used if it starts supporting mobile browsers (and preferably more major desktop browsers).
- the slave computing device 430 and the master computing device 410 communicate directly, i.e. without involvement of the server 420.
- the slave computing device 430 and the master computing device 410 may communicate via peer-to-peer (P2P), wireless network, Bluetooth or another suitable network protocol and/or technology.
- the slave computing device 430 and/or the master computing device 410 may comprise the processor 120.
- the processor 120 may receive data based on the sensor output of the one or more sensors 115 and determine the perspective of the 3D object for display on the display 130 based on the received data.
- a user may use the mobile computing device 110 to directly control the 3D object 135 on the display 130, i.e. direct rotation mapping of the movement of the mobile computing device to the object.
- the mobile computing device 110 effectively becomes a proxy for the 3D object 135 on the display screen 130, so when a user rotates the mobile computing device 110 then 3D object 135 rotates as if the user were holding the 3D object in their hand.
- This may enable a user to intuitively control the 3D orientation of virtual objects on the display, thus improving the onboarding process for new users of the system.
- This unrestricted change in orientation in a first mode is shown in step 220A and 230A in Fig. 12.
- the user can lock the 3D object 135 in this orientation, for example by pressing a 'lock' button on a touchscreen of the mobile computing device 110.
- Alternative methods of locking that do not require user input to the screen of the mobile computing device 110 are described further in this description.
- the mobile computing device 110 can be placed on a surface 710 as shown in Fig. 7B. This may enable a user to more intuitively rotate the 3D object in a single dimension. Rotation of the 3D object in a single dimension may allow a user to gain a better perception of object depth.
- Fig. 7A illustrates a mobile computing device 110 showing a first user interface 600.
- the first user interface 600 shows a reference coordinate system 610 to indicate to a user how movement of the mobile computing device 110 will affect the 3D object.
- the first user interface 600 comprises a lock button 620 that may be actuated by the user to change from the first mode to the second mode.
- the lock button 620 may also be used as an unlock button to change from the second mode to the first mode. It is to be appreciated that in some examples the user interface may be used to select other modes, such as the third and fourth mode.
- Fig. 7B illustrates a mobile computing device 110 placed on a surface 710.
- the surface 710 restricts the mobile computing device to rotate about a single axis, labelled as "y" that is perpendicular to the surface.
- Fig. 7C illustrates a display 130 showing the 3D object 135 after locking.
- the 3D object is fixed to rotate about a single axis, i.e. the y1′-axis.
- Rotating the mobile computing device 110 about the y-axis as shown by the arrow in Fig. 7B results in rotation of the 3D object around the y1′-axis as shown by the arrow in Fig. 7C. Locking an axis in this way and rotating the object back and forth provides a rocking motion that assists depth perception, using the kinetic depth effect.
- a mobile computing device can be used to provide intuitive 3D controls (almost as if the user was holding the 3D object in their hand) without needing to purchase a specialized device.
- By simply resting the mobile computing device on a bench, it can also become a dedicated controller for rotating about single axes, thus also addressing the challenge of depth cueing.
- the processor may transmit feedback data to the mobile computing device 110 to cause haptic feedback on the mobile computing device 110.
- haptic feedback may include, for example, activating the vibration functionality of the mobile computing device 110, such as when a movement is not possible, space for movement is limited (e.g. reaching a wall limit in a game), or a key point in the 3D object is reached.
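- On a web-based control device, such feedback could use the standard Vibration API, as in the short sketch below (the 'hapticFeedback' event name is an assumption; navigator.vibrate is a real web API whose support varies by browser):
```javascript
// Illustrative sketch: vibrate the control device when feedback data is received.
socket.on('hapticFeedback', (pattern) => {
  if (navigator.vibrate) {
    navigator.vibrate(pattern || 200);   // e.g. a single 200 ms pulse
  }
});
```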
- Users may be well adapted to using both a mouse or a track pad in combination with a keyboard. Input from the mouse, track pad and/or keyboard may therefore be integrated with the input from the sensors to further enhance the user experience.
- key presses on the keyboard of the master computing device 410 may be used to disable/enable specific functionalities or the mouse may be used to select menu items. This may include selection between the modes.
- the mobile computing device 110 is a smartphone and may comprise functionalities that may be utilized for controlling the 3D object.
- Many modern smartphones (e.g., the iPhone and many other smartphones) include a 3-axis gyroscope and a 3-axis accelerometer. Using these sensors, it is possible to track changes in rotation, translation, and acceleration.
- one or more gyroscopes of the smartphone may be read to control rotation of the 3D object around three axes, and one or more accelerometers may be read to control translation of the 3D object in three axes.
- the sensors may include magnetometers that may be used to complement or substitute other sensors described herein.
- the control device described in the examples is a mobile computing device. It is to be appreciated that a dedicated control device with inertial sensors may be used in some examples of the method described herein.
- Fig. 13 illustrates an example computing device 800.
- the computing device 800 may be used for the server 420 or the slave computing device 430.
- the computing device 800 includes a processor 810, a memory 820 and an interface device 840 that communicate with each other via a bus 830.
- the memory 820 may store instructions and data for implementing aspects of the disclosure, such as the method 200 and the method 500 described above, and the processor 810 performs the instructions (such as a computer program) from the memory 820 to implement the methods 200 and 500.
- the interface device 840 may include a communications module that facilitates communication with a communications network and, in some examples, with user interfaces and other peripherals, such as a keyboard, a mouse and the display 130.
- some functions performed by the computing device 800 may be distributed between multiple network elements.
- the server 420 may be associated with multiple processing devices and steps of the methods may be performed, and distributed, across more than one of these devices.
- Advantages of the embodiments described include providing an intuitive onboarding process for users by having a 3D object mirror the movements of a mobile computing device, and aiding in accurate depth perception of virtual 3D structures viewed on a 2D screen by enabling easy movement or rocking of the 3D object about a single axis.
- Embodiments may also allow intuitive exploration of 3D objects on a 2D screen, using a mobile device that a vast majority of users already possess. This can remove the need for specialized devices to be purchased for users.
- Embodiments may be web-based, so they do not necessarily require the installation of additional software.
- Embodiments may also be content independent, such that they are applicable to any 3D content. For example, a simple and readily available framework is provided for users to interact with 3D content, via standard web-based technologies that can be interpreted cross-browser. This framework can be used by experts, such as scientists (e.g. structural biologists to explore protein structures), artists, graphic designers, as well as naive users who may struggle to work with and understand the control of 3D content.
- a lock command is determined based on the outputs of the one or more inertial sensors 115. This may include receiving outputs from the inertial sensors 115 that are indicative of the user intending to initiate a lock command so that the system is changed between the modes (or to toggle between modes).
- a specified sensor output corresponding to a user shaking the control device 110 may be used to determine a lock command. This may include sensor outputs indicative of acceleration back and forth in opposite directions.
- the specified sensor output may include determining that the control device 110 is placed in a particular configuration. For example, placing the control device 110 flat on a horizontal table surface. Once locked (and in, for example, the second mode), the user may simply rotate the phone on the table surface to rotate the 3D object around the rotation axis.
- the lock command may be based on receiving one or more specified sensor outputs (including an absence of an output) over a specified time. This can allow locking based on moving a 3D object to a desired perspective, pausing and holding that position for a time period (e.g. more than one second), after which an axis (such as a vertical axis through the desired perspective) is locked.
- Further movement of the control device 110 will then cause movement around that axis.
- determining the lock command based on sensor outputs over a specified time may be used to detect when the control device is placed onto a horizontal surface.
- the user may orientate the view to the desired perspective and (with the intention of transitioning to the second mode) place the control device onto the horizontal surface.
- the method may then include determining the lock command by determining that, over a short period of time, the control device moved from a (relatively) higher position (e.g. above a table) towards a location downwards and substantially horizontal (e.g. on a table surface). Based on receiving such an input, the method may determine the corresponding desired perspective before such sensor inputs so that the 3D object is locked in a second mode at that desired perspective.
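- One way this heuristic could be approximated (an illustrative sketch only, not the patent's specific algorithm; the thresholds and one-second window are assumptions) is to watch the standard devicemotion event and issue the lock command once the device has been roughly flat and still for more than a set period:
```javascript
// Illustrative lock-command heuristic: device roughly horizontal and still for over one second.
let stillSince = null;

window.addEventListener('devicemotion', (event) => {
  const a = event.accelerationIncludingGravity;          // m/s^2, includes gravity
  if (!a) return;
  const flatAndStill = Math.abs(a.z) > 9.0 &&            // gravity mostly along z => lying flat
                       Math.abs(a.x) < 1.0 && Math.abs(a.y) < 1.0;

  if (flatAndStill) {
    stillSince = stillSince || Date.now();
    if (Date.now() - stillSince > 1000) {
      issueLockCommand();                                 // hypothetical helper: change to the second mode
      stillSince = null;
    }
  } else {
    stillSince = null;
  }
});
```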
- determination of the lock command may be by, or in conjunction with, other sensors.
- a camera of the control device may be used to assist determination that the control device is on a flat surface.
- a camera is located to face in an opposite direction to a touch screen. Therefore when the mobile computing device is placed on a flat surface with the touch screen facing up, the camera is facing downwards towards the flat surface (and will therefore have an obscured or black image).
- An advantage of such an input where the lock command is based on sensor inputs and/or time is that it may not require additional manipulation of user input controls. For example, the user may be able to switch between modes without interacting with a touchscreen of the control device 110. This may allow easier operation.
- the touchscreen of a control device 110 is typically on one side of the control device 110, and therefore the control device could, during use, be orientated such that the touchscreen faces away from the user. Such a scenario would make it difficult for the operator to see and interact with the touchscreen.
- the method and system may operate in hybrid modes that combine two or more modes.
- the method may include a hybrid mode that includes both the second and fourth mode.
- the control device 110 may be on a flat surface (e.g. a table top), where rotating the control device 110 changes the orientation around a rotation axis and translating the control device 110 (across the table) changes the magnification of the 3D object on the display.
- the modes may be toggled between modes, or cycled through a plurality of modes, by sensor inputs. For example, lifting the control device 110 off the flat surface may trigger a change in the modes.
- the method may include determining tapping on the control device 110 with the sensor outputs, whereby determination of tapping initiates a change in the mode. In some examples, this may include the form or type of tapping such as a single tap, double tap, or multiple taps. The taps may be sensed by the inertial measurement sensors and/or via the touchscreen display.
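- A tap produces a short spike in acceleration, so a simple detector over the devicemotion stream might look like the sketch below (the 15 m/s² threshold and 300 ms debounce are assumptions; toggleMode is a hypothetical helper):
```javascript
// Illustrative tap detector: a brief acceleration spike toggles between modes.
let lastTap = 0;

window.addEventListener('devicemotion', (event) => {
  const a = event.acceleration;                      // linear acceleration with gravity removed
  if (!a) return;
  const magnitude = Math.hypot(a.x || 0, a.y || 0, a.z || 0);
  const now = Date.now();
  if (magnitude > 15 && now - lastTap > 300) {
    lastTap = now;
    toggleMode();                                    // hypothetical helper: cycle to the next mode
  }
});
```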
- additional modes may be controlled with movement of the control device 110.
- the method includes additional modes to control one or more of the following:
- the pairing key may be at least part of the representation of the 3D object on the display.
- the 3D object 135 on the display 130 may have identifiable features to allow pairing of the slave computing device and the control device 110.
- the identifiable features may include the current orientation and/or shape of the 3D object.
- the pairing key may include at least part of the view of the representation of the 3D object in motion.
- the 3D object may be rotating, oscillating, or moving otherwise whereby the movement may be one of the identifiable features used, at least in part, for pairing.
- a camera of the control device 110 may be used to capture an image 735 (or multiple images such as video) of the 3D object 135. From the image 735 (or multiple images), the pairing key may be used by the method to pair the control device 110 to the slave computing device.
- the method may include selecting a component or portion of the 3D object for the control device 110 to control. For example, this may include pointing the camera of the control device 110 to a specific part of the 3D object 135 shown in the display 130. The method may include identifying that specific part so that the system 100 is configured for the control device 110 to control the perspective of, or move, that selected component or portion.
Multi-user environment
- the system 100 and method 200 may be used in a multiuser environment where multiple control devices 110 are used to control the 3D object 135.
- the different control devices 110 may control different aspects of the 3D object. For example, one control device may control the orientation whilst another control device controls the magnification. In another example, different parts of the 3D object 135 may be rotated by respective different control devices 110.
- the system may include features to prevent multiple control devices 110 from controlling the same 3D object 135 at one time.
- the control device 110 may be exclusively paired to the slave computing device.
- the exclusive pairing may be maintained until the control device 110 relinquishes the exclusive pairing.
- there may be a hierarchy for the control devices 110 (and/or the users of the control devices) whereby the highest rank has the right to pair with the slave computing device.
- the slave computing device may have exclusive pairing for a specified exclusive time. For example, this may be 10 seconds, 30 seconds, 1 minute, 5 minutes, etc. After the expiration of the specified exclusive time, the pairing may cease. Alternatively, the pairing may continue but be non-exclusive such that it will be relinquished when another control device attempts to pair with the slave computing device. In yet another example, exclusive pairing may be renewed by the control device 110 after expiration of the specified exclusive time.
- the specified exclusive time may commence, and transpire, based on pairing or the request to pair. In other examples, the specified exclusive time may be based on the last change of sensor outputs (which in turn are indicative of last use by a user). Thus the exclusive pairing may cease by "timing out" if the user does not use the control device 110 to manipulate the 3D object within the specified time.
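- A server-side sketch of this 'timing out' behaviour is given below (the 30-second window, variable names and updatePerspective helper are assumptions for illustration):
```javascript
// Illustrative exclusivity timeout: the pairing lapses if no sensor data arrives for 30 seconds.
const EXCLUSIVE_MS = 30 * 1000;
let exclusiveOwner = null;     // identifier of the currently paired control device
let lastActivity = 0;

function onSensorData(deviceId, data) {
  if (exclusiveOwner && exclusiveOwner !== deviceId &&
      Date.now() - lastActivity < EXCLUSIVE_MS) {
    return;                    // another control device still holds exclusive control
  }
  exclusiveOwner = deviceId;   // claim (or retain) exclusive control
  lastActivity = Date.now();
  updatePerspective(data);     // hypothetical helper: apply the received sensor data to the view
}
```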
- control may be passed from one control device 110 to another control device 110' as shown in Fig. 9. In one example, control device 110 may initially be paired with the slave computing device. To pass on control, the other control device 110' is brought into proximity 111 of the control device 110.
- control may be passed to the other control device 110' such that it is configured to control the view of the 3D object 135 on the display 130. In some examples, this may be automatic. In other examples, a prompt may show on one or both of the control devices 110, 110' to confirm the intention to pass control.
- Determining proximity may be achieved using time of flight and/or time of arrival of wireless signals such as those used for Bluetooth or Wi-Fi.
- the method may further include determining contact (or close contact) between the control devices 110, 110'. This may include using the inertial measurement sensors to detect the contact between the control devices.
- passing control may be achieved by showing or passing the code to the next control device.
- the control device 110 that is paired to the slave computing device may show a pairing code (which may include a QR code).
- the next control device may then receive that pairing code (such as capturing the code with a camera) and that pairing code may then be used to pair the next control device to the slave computing device.
- the display 130 may show multiple 3D objects 135, 135' .
- Each object 135, 135' may be controlled by different respective control devices 110, 110' .
- the method may include using the camera of the control devices 110, 110' to identify the object to be controlled.
- a first user may point the camera of control device 110 towards 3D object 135. This selection is shown on an image 735 of the 3D object 135 on the display of the control device.
- the slave computing device can pair with control device 110 for the purposes of controlling object 135.
- a second user may point the camera of another control device 110' towards another 3D object 135'.
- the slave computing device can then pair with the other control device 110' to allow control of the other 3D object 135' .
- a native application may be installed on the control devices 110, 110' to facilitate communication and pairing with the slave computing device.
- An application of the system may include an interactive art installation where the display 130 is a large display whereby a visitor may use their control device (such as a personal mobile computing device, or a supplied mobile computing device) to interact with the 3D object 135, 135' . In other examples, this may be used as a scientific, engineering, medical, or educational tool.
- the display 130 may include traditional displays such as a monitor, a television display, or a projector.
- the display 130 may include an augmented reality display or a virtual reality display. These may include head mounted displays.
- the system may include multiple displays.
- the multiple displays may include a combination of traditional displays as well as augmented reality and virtual reality displays.
- the control device 110 may be used by a user to move the 3D object between two different displays.
- two displays 130A, 130B are provided in the system.
- a control device 710 is controlling the 3D object 135 that is shown in display 130A.
- Display 130B is empty and does not show the 3D object 135.
- a user may then pick up and move the control device 710 (as shown by arrow A) towards a second location to specify moving the 3D object from first display 130A to the second display 130B.
- when the control device is placed in the second position (shown as control device 710'), the 3D object is then displayed in the second display 130B (shown as 3D object 135').
- the first display 130A may then stop showing 3D object 135.
- displays 130A or 130B may be a virtual reality or augmented reality display, whilst the remaining display 130A or 130B may be a traditional display.
- movement of the control device 110 and the 3D object 135 may be at a 1:1 ratio. That is, an angular rotation of the control device 110 will cause an identical angular rotation of the 3D object 135. This may be useful when manipulating the views for the larger overall 3D object.
- scaling, and selective change of scaling, can be used for other modes. This may include scaling the change in position of the control device 110 compared to the corresponding translation of the 3D object 135 in the display.
Control for other objects
- control device 110 is used to control a representation of a 3D object that is shown on a display 130.
- features of the present disclosure may be used in methods and systems to control other objects and representations.
- this may include using the control device to control actuators to move a real 3D object.
- this may include a drone vehicle, such as a drone aircraft (e.g. unmanned aerial vehicle).
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Computer Hardware Design (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
A method (200) for controlling a view of a representation of a three dimensional (3D) object on a display (130), the method comprising: receiving (210) data based on a sensor output of one or more inertial measurement sensors (115) of a control device (110); determining (220) a perspective to the 3D object (135) based on the received data; and sending (230), to the display (130), display data representing a view of the 3D object from the determined perspective. There is also disclosed a system (100) including a processor (120) to perform the method (200).
Description
"Systems, methods and devices for controlling a view of a 3D object on a display" Technical Field
[0001] The disclosure relates to systems, methods and devices for controlling a view of a representation of an object, in particular a three dimensional (3D) object, on a display.
Background
[0002] Navigation through three dimensional (3D) datasets is required for a broad range of applications, including virtual landscapes in games, computer-aided design, as well as a host of more specialized use scenarios. As the power of computer graphics continues to advance, it seems likely that usage of 3D navigation will continue to become more mainstream. Recent advances in web-based 3D graphics have now brought functionality previously only available in stand-alone applications to a much larger community of potential users.
[0003] A major challenge for 3D graphics applications is onboarding, which is the process of teaching new users the controls needed to facilitate 3D navigation.
Typically, these 3D controls are enabled via mouse or track pad movements in combination with keyboard controls. However, lack of standardized 3D controls can be confusing, requiring users to learn a new interface each time they encounter a new application. Unless these controls can be quickly and easily understood, many potential users of an application may give up.
[0004] Specialized input devices have been developed including depth-sensing cameras, input gloves, multi-touch interaction screens, and 3D 'wands', also known as 3D mice, such as the Nintendo Wii Remote controller. However, currently these devices are only used by a relatively small fraction of 'expert' users, compared with all users of 3D graphics. This is especially a concern when creating 3D web applications.
While it can be advantageous to support specialized devices, it is often avoided for web applications due to the limited availability of such devices to users.
[0005] A second major challenge for 3D graphics applications is to provide sufficient depth cues to communicate the 3D nature of an object or scene on a two dimensional (2D) display. Potentially, this can be solved using 3D display systems, such as 3D glasses and head-mounted displays like the Oculus Rift. However, as with specialized 3D control devices, 3D displays are currently uncommon, hence most 3D applications rely on depth cue methods that work with 2D displays.
Summary
[0006] A method implemented by a processor for controlling a view of a
representation of a three dimensional (3D) object on a display, the method comprising: receiving data based on a sensor output of one or more inertial measurement sensors of a control device; determining a perspective to the 3D object based on the received data; and sending, to the display, display data representing a view of the 3D object from the determined perspective.
[0007] The received data may comprise one or more of the following: information relating to orientation of the control device; and information relating to movement of the control device.
[0008] Determining the perspective to the 3D object may comprise: in a first mode, determining an unrestricted change in orientation of the 3D object based on the received data; and
in a second mode, determining a restricted change in orientation of the 3D object around a rotation axis based on the received data.
[0009] A method implemented by a processor for controlling a view of a
representation of a three dimensional (3D) object on a display, the method comprising: receiving data based on a sensor output of one or more inertial measurement sensors of a control device; determining a perspective to the 3D object based on the received data, wherein determining a perspective to the 3D object comprises: in a first mode, determining an unrestricted change in orientation of the 3D object based on the received data, and in a second mode, determining a restricted change in orientation of the 3D object around a rotation axis based on the received data; and sending, to the display, display data representing a view of the 3D object from the determined perspective.
[0010] The control device may be in the form of a mobile computing device.
[0011] A wide range of depth cueing methods have been developed. However, one of the most powerful methods is a simple back-and-forward rocking motion, typically around the y-axis of the viewed object's local coordinate frame. This method for depth perception can be significantly enhanced by engaging hand-eye-coordination via the use of a dedicated, direct manual control for this rotation, typically for use by the left hand. Dedicated physical dials that enable such rotations are routinely used by specialist users of 3D applications (including molecular graphics). However, for web-based applications, as mentioned above, most end users will not have access to such specialized devices. In the present disclosure, the second mode may allow such simple back and forward motion for depth cueing.
[0012] The methods may further comprise determining a change in orientation of the control device based on the received data, wherein determining the perspective to the 3D object comprises determining a corresponding change in orientation of the 3D object based on the change in orientation of the control device.
[0013] The methods may further comprise receiving a lock command from the control device, and in response to the lock command changing to the second mode from the first mode.
[0014] The method may further comprise determining a lock command based on receiving one or more specified sensor outputs of one or more inertial measurement sensors of the control device, wherein in response to the lock command, changing to the second mode from the first mode. The lock command may be further based on receiving one or more specified sensor outputs over a specified time.
[0015] The methods may further comprise determining a change in position of the control device based on the received data, wherein determining the perspective to the 3D object comprises determining a corresponding translation of the 3D object based on the change in position of the control device.
[0016] The methods may further comprise determining a change in orientation and/or position of the control device based on the received data, wherein determining the perspective to the 3D object comprises determining a corresponding change in magnification of the 3D object based on the change in orientation and/or position of the control device.
[0017] The method may further comprise determining a toggle command based on receiving one or more specified sensor outputs of one or more inertial measurement sensors of the control device, wherein in response to the toggle command, the method comprises changing from one mode to another mode in a plurality of modes, wherein the plurality of modes include determining the perspective to the 3D object in at least one or more of:
- an orientation of the 3D object;
- a position of the 3D object;
- a magnification of the 3D object;
- brightness of the 3D object;
- contrast of rendering of the 3D object;
- colour(s) of rendering of the 3D object;
- selection of layers to be displayed for the 3D object;
- transparency of rendering of the 3D object; and
- movement of a component part of the 3D object relative to other parts.
[0018] A server may comprise the processor, and sending the display data to the display may comprise sending the display data to a slave computing device associated with the display.
[0019] A slave computing device associated with the display may comprise the processor.
[0020] The slave computing device may communicate with the control device via peer-to-peer (P2P), wireless network or Bluetooth.
[0021] The method may further comprise: sending, to the display, display data showing a pairing key; receiving the pairing key via an input device of the control device; and
when the pairing key is received, pairing the control device with the slave computing device for controlling the view of the representation of the 3D object on the display.
[0022] The input device of the control device may include a camera, wherein the pairing key includes at least part of the view of the representation of the 3D object on the display. In some examples, the pairing key includes at least part of the view of the representation of the 3D object in motion.
[0023] In the method, pairing the control device with the slave computing device may include exclusive pairing for a specified exclusive time, wherein upon expiration of the specified exclusive time, the control device or another control device can pair with the slave computing device.
[0024] In some examples, the specified exclusive time period is based on time after a last change in sensor output of the one or more inertial measurement sensors of the control device.
[0025] In the method, upon determining the control device is within a specified proximity of a further control device, the method may further include controlling the view of the representation of the 3D object on the display with the further control device paired to the slave computing device. This may include passing off control to the further control device such that the control device that originally controlled the 3D object temporarily, or permanently, ceases to have control.
[0026] The methods may further comprise transmitting feedback data to the control device to cause haptic feedback on the control device.
[0027] The 3D object may be a molecule.
[0028] The method may include controlling the view of the 3D object on a plurality of displays, wherein in one mode the method comprises:
- determining, based on the received data, a specified display from the plurality of displays; and
- sending, to the specified display, the display data representing a view of the 3D object.
[0029] A system for controlling a view of a representation of a three dimensional (3D) object on a display. The system comprises: a processor, and a memory comprising a computer program that when executed by the processor performs the following: receiving data based on a sensor output of one or more inertial measurement sensors of a control device; determining a perspective to the 3D object based on the received data; and sending, to the display, display data representing a view of the 3D object from the determined perspective.
[0030] A system for controlling a view of a representation of a three dimensional (3D) object on a display. The system comprises: a processor, and a memory comprising a computer program that when executed by the processor performs the following: receiving data based on a sensor output of one or more inertial measurement sensors of a control device; determining a perspective to the 3D object based on the received data, wherein determining a perspective to the 3D object comprises:
in a first mode, determining an unrestricted change in orientation of the 3D object based on the received data, and in a second mode, determining a restricted change in orientation of the 3D object around a rotation axis based on the received data; and sending, to the display, display data representing a view of the 3D object from the determined perspective.
[0031] A system for controlling a view of a representation of a three dimensional (3D) object on a display, the system comprising: a processor; and a memory comprising a computer program that, when executed by the processor, causes the processor to perform the method described above.
[0032] In some examples of the system, the control device is a mobile computing device. In another example of the system, the system includes one or more of the control devices.
Brief Description of Drawings
[0033] Fig. 1 illustrates a system for controlling a view of a representation of a three dimensional (3D) object on a display.
[0034] Fig. 2 illustrates a method for controlling a view of a representation of a three dimensional (3D) object on a display.
[0035] Fig. 3A illustrates an example mobile computing device that may be used as a control device.
[0036] Fig. 3B illustrates an example display showing a representation of a 3D object.
[0037] Fig. 4 illustrates a second system for controlling a view of a representation of a three dimensional (3D) object on a display.
[0038] Fig. 5 illustrates a method of pairing a mobile computing device with a slave computing device associated with a display.
[0039] Fig. 6 illustrates a mobile computing device showing a second user interface.
[0040] Fig. 7A illustrates a mobile computing device showing a first user interface.
[0041] Fig. 7B illustrates a mobile computing device placed on a surface.
[0042] Fig. 7C illustrates a display showing rotation of the 3D object after locking rotation to be about a single axis.
[0043] Fig. 8 illustrates a display showing a 3D object and a control device capturing an image of the 3D object for pairing.
[0044] Fig. 9 illustrates a display and a control device passing control to another control device in close proximity.
[0045] Fig. 10 illustrates a display showing two 3D objects and two control devices, wherein each control device captures a respective image of one of the 3D objects for pairing.
[0046] Fig. 11 illustrates a pair of displays, wherein a 3D object is moved from one display to another display using the control device.
[0047] Fig. 12 illustrates a method for controlling a view of a representation of a 3D object, wherein the method includes a first mode with unrestricted change in orientation and a second mode with restricted change in orientation around a rotation axis.
[0048] Fig. 13 illustrates a computing device that may be, for example, a server or the slave computing device used in the disclosure.
Description of Embodiments
[0049] Systems, methods and devices are described for controlling a view of a representation of a three dimensional (3D) object on a display. The devices may be mobile computing devices or other control devices that include sensors which provide information about orientation and/or movement of the device.
[0050] The systems, methods and devices may be used, for example, for molecular graphics applications, which generally require a complex set of 3D controls, including full 3-axis rotation, 3-axis translation, plus many more specialized controls, such as the ability to re-set rotation centres to an arbitrary set of atoms. In such applications, the 3D object may be a 3D dataset.
[0051] Fig. 1 illustrates a system 100. The system 100 comprises a control device 110 in the form of a mobile computing device, such as a smart phone, tablet, or a similar computing device that can be handheld. One or more sensors 115 of the control device 110 measure information about orientation and/or movement of the control device 110.
[0052] The system comprises a display 130 which shows a representation of a 3D object 135. The 3D object 135 may be, for example, a molecule, a model generated by medical imaging such as computed tomography (CT) or magnetic resonance imaging (MRI), or another object which a user wishes to view in three dimensions.
[0053] The system 100 comprises a processor 120 that receives data based on a sensor output of the one or more sensors 115 and controls a perspective to the 3D object 135 on the display based on the received data. For example, the representation of the 3D object 135 may be rotated, translated, enlarged or reduced in size on the display based on the orientation and/or movement of the control device 110. The processor 120 may, for example, form part of the mobile computing device 110, a computing device associated with the display 130, or a separate server that communicates with the mobile computing device 110 and a computing device associated with the display 130.
[0054] Fig. 2 illustrates a method 200. The method may be implemented by a processor for controlling a view of a representation of a 3D object on a display. For example, the method 200 may be implemented in the system 100 by the processor 120.
[0055] At 210, the method comprises receiving data based on a sensor output of one or more inertial measurement sensors of a device, such as a mobile computing device or other control device. The sensors may include, for example, one or more 3-axis gyroscopes and/or 3-axis accelerometers which may track changes in rotation and translation, and acceleration, of the device.
[0056] At 220, the method comprises determining a perspective to the 3D object based on the received data. The received data may comprise information relating to orientation and/or movement of the mobile computing device.
[0057] At 230, the method comprises sending, to the display, display data
representing a view of the 3D object from the determined perspective.
[0058] Fig. 3A illustrates an example mobile computing device 110. A 3D coordinate system 112 is shown on the mobile computing device 110. The coordinate system 112 comprises an x-axis, a y-axis and a z-axis. Rotation about the x-axis is indicated by β, rotation about the y-axis is indicated by γ, and rotation about the z-axis is indicated by α. The mobile computing device 110 may be held by a user and rotated about one or more of the three axes and/or translated in one or more of the three dimensions. A processor may receive data based on a sensor output of one or more inertial
measurement sensors of the mobile computing device and determine a perspective to the 3D object based on the received data.
[0059] Fig. 3B illustrates an example display 130 showing a representation of a 3D object 135. The 3D object is represented within a coordinate system comprising an xl- axis, a yl-axis and a zl-axis. The representation of a view of the 3D object 135 on the display 130 from a determined perspective is based on the perspective to the 3D object determined from the received data.
[0060] The processor may determine a change in orientation of the mobile computing device 110 based on the received data. For example, if the mobile computing device 110 is rotated about the x-axis as indicated by β, the y-axis as indicated by γ, and/or the z-axis as indicated by α, the processor may determine this change in orientation of the mobile computing device based on the received data. Determining the perspective to the 3D object may then comprise determining a corresponding change in orientation of the 3D object about the xl-axis, yl-axis and/or zl-axis based on the change in orientation of the mobile computing device 110. Here rotations about the xl-axis, yl-axis and/or zl-axis may correspond to a determined rotation of the mobile computing device 110 about the x-axis, y-axis and/or z-axis, respectively.
[0061] In a first mode, determining the perspective to the 3D object comprises determining an unrestricted change in orientation of the 3D object based on the received data. For example, the orientation of the 3D object may be changed in three dimensions. This is shown in steps 220A and 230A in method 200A in Fig. 12.
[0062] In a second mode, determining the perspective to the 3D object comprises determining a restricted change in orientation of the 3D object around a rotation axis yl' based on the received data, as shown in Figs. 7B and 7C and steps 220B and 230B in Fig. 12.
[0063] The processor may receive a lock command from the mobile computing device and in response to the lock command change to the second mode from the first mode. The change to the second mode from the first mode may also be triggered by other means, such as the processor detecting movement of the mobile computing device 110 of less than a threshold for a predefined period of time or the processor detecting a movement of the mobile computing device 110 that is greater than a threshold acceleration or velocity.
[0064] In the second mode, the rotation of the 3D object may be restricted to rotation about a single axis based on rotation of the mobile computing device 110. The single
axis may be the xl-axis, yl-axis, zl-axis or another axis determined based on orientation of the representation of the 3D object when the second mode is invoked.
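By way of a non-limiting illustration, the following JavaScript sketch shows one possible way of applying the first and second modes, assuming the received data has already been converted into changes of the control device's rotation angles. The function and variable names, the use of radians, and the choice of the object's yl-axis (written y1 in the code) as the locked axis are illustrative assumptions only.

```javascript
// Illustrative sketch only: in the first mode every component of the control
// device's rotation is applied to the 3D object; in the second mode only the
// component about the locked rotation axis is applied.
let mode = 'first';
let lockedAxis = null;                      // e.g. 'y1' after a lock command
const objectOrientation = { x1: 0, y1: 0, z1: 0 };

function lock(axis) {                       // change from the first to the second mode
  mode = 'second';
  lockedAxis = axis;
}

function applyRotation(delta) {             // delta = { alpha, beta, gamma } in radians
  if (mode === 'first') {
    objectOrientation.x1 += delta.beta;     // device x-axis -> object x-axis
    objectOrientation.y1 += delta.gamma;    // device y-axis -> object y-axis
    objectOrientation.z1 += delta.alpha;    // device z-axis -> object z-axis
  } else if (lockedAxis === 'y1') {
    objectOrientation.y1 += delta.gamma;    // restricted back-and-forward rocking
  }
  return objectOrientation;
}

applyRotation({ alpha: 0.02, beta: 0.01, gamma: 0.05 }); // unrestricted change
lock('y1');
applyRotation({ alpha: 0.02, beta: 0.01, gamma: 0.05 }); // single-axis change only
```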
[0065] In a third mode, the processor may determine a change in position of the mobile computing device 110 based on the received data. For example, if the mobile computing device 110 is translated along the x-axis, the y-axis and/or the z-axis, the processor may determine this change in position of the mobile computing device based on the received data. Determining the perspective to the 3D object may then comprise determining a corresponding translation of the 3D object along the xl-axis, yl-axis and/or zl-axis based on the change in position of the mobile computing device.
[0066] In a fourth mode, determining the perspective to the 3D object may comprise determining a corresponding change in magnification of the 3D object based on the change in orientation and/or position of the mobile computing device 110. For example, a translation along the x-axis, the y-axis or the z-axis, or a rotation about the x-axis, the y-axis or the z-axis, may cause a zooming in or zooming out of the representation of the 3D object 135 on the display 130.
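By way of a non-limiting illustration, the change in magnification in the fourth mode could be derived from a change in device position as sketched below; the exponential mapping and the sensitivity constant are illustrative assumptions rather than requirements of the method.

```javascript
// Illustrative sketch: map a change in the control device's position along one
// axis to a change in magnification of the displayed 3D object.
const ZOOM_SENSITIVITY = 2.0;                    // assumed sensitivity constant

function applyZoom(currentMagnification, deltaZ) {
  // deltaZ: assumed change in device position along its z-axis, in metres.
  // Moving the device one way zooms in; moving it the other way zooms out.
  return currentMagnification * Math.exp(ZOOM_SENSITIVITY * deltaZ);
}

let magnification = 1.0;
magnification = applyZoom(magnification, 0.1);   // zoom in
magnification = applyZoom(magnification, -0.1);  // zoom back out
```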
[0067] In some examples, the third and/or fourth mode may operate simultaneously with the first mode.
[0068] Fig. 4 illustrates a system 400. The system 400 comprises a master computing device 410 such as the control device 110, a slave computing device 430 which drives the display 130, and optionally a server 420 which may comprise the processor 120. The master computing device 410 comprises one or more sensors 115 that measure information about orientation and/or movement of the master computing device 410. The slave computing device 430 drives the display 130 to display a representation of a 3D object 135. For example, the slave computing device 430 may be a laptop or desktop computer having an external or inbuilt display 130.
[0069] In one embodiment, the master computing device 410 and the slave computing device 430 may separately connect to the server 420 and establish a pairing to enable the master computing device 410 to control the 3D object 135 on the display 130.
Pairing and communication between the master computing device and slave computing device
[0070] Fig. 5 illustrates a method 500 of pairing the master computing device 410, such as a mobile computing device 110, with the slave computing device 430.
[0071] At 510, the method 500 comprises sending, to the display, display data showing a pairing key. For example, the processor 120 at the server 420 may generate a unique key. When the slave computing device 430 connects to the server 420, the processor 120 may send the unique key to the slave computing device 430 to show on the display 130. The slave computing device 430 may connect to the server 420 via a web browser by entering a uniform resource locator (URL) of the server 420. A web page at the URL may also provide the 3D object to the slave computing device to display on the display.
[0072] At 520, the method 500 comprises receiving the pairing key via an input device of the mobile computing device 110. For example, a user of the slave computing device 430 may enter the unique key into a user interface on the master computing device 410 after reading the unique key from the display 130. Alternatively, a camera of the master computing device 410 may read the unique key from the display 130. The user interface may, for example, be received by accessing the same URL via a web browser on the master computing device 410 or may form part of an app on the master computing device 410.
[0073] At 530, the method 500 comprises, when the pairing key is received, pairing the mobile computing device 110 with the slave computing device 430 for controlling the view of the representation of the 3D object 135 on the display 130. For example, the processor 120 receives the unique key from the master computing device 410 and
pairs the master computing device 410 with the slave computing device 430. Once paired, the 3D object 135 may be controlled using the master computing device 410.
[0074] Fig. 6 illustrates a mobile computing device 110 showing a second user interface 650. The second user interface 650 comprises an input field 660, such as a text box, to receive the pairing key from the user.
[0075] When the master computing device 410 is viewing a web page on the server 420, a deviceorientation JavaScript event may be available in the webpage. This event is defined by the World Wide Web Consortium. The deviceorientation JavaScript event is fired upon changing the master computing device's 410 orientation, and the web browser then can read three Tait-Bryan angles, which define the smartphone's orientation with respect to the world coordinate frame, with Z-X'-Y" intrinsic rotations, i.e. the first rotation is described around the z-axis, the second rotation is described around the new x-axis after the previous rotation, and the third rotation is described around the new y-axis after the previous two rotations.
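For example, a minimal browser-side sketch of reading these angles from the standard deviceorientation event is shown below; what is done with the angles once read (here simply logged) is an illustrative assumption.

```javascript
// Illustrative sketch: read the three Tait-Bryan angles reported by the
// deviceorientation event on the master computing device.
// Note: some browsers require a user-granted permission before this event fires.
window.addEventListener('deviceorientation', (event) => {
  const orientation = {
    alpha: event.alpha, // rotation about the z-axis, 0 to 360 degrees
    beta: event.beta,   // rotation about the new x-axis, -180 to 180 degrees
    gamma: event.gamma  // rotation about the new y-axis, -90 to 90 degrees
  };
  // The angles (or changes in them) would then be forwarded to the processor
  // that determines the perspective to the 3D object.
  console.log(orientation);
});
```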
[0076] The master computing device 410 and the slave computing device 430 may use the Socket.IO JavaScript library for bi-directional communication between their web browser and the server 420. Socket.IO uses the WebSocket communications protocol with polling as a fallback option. WebSocket allows data transfer from a web browser to the server 420 and reverse, thus enabling real-time communication between multiple web browsers.
[0077] One disadvantage of using the WebSocket protocol is the necessity of using a server as a messenger between browsers. This may induce an observable latency, particularly if the server has to communicate with browsers over long geographical distances via the Internet. In 2011, Chen and Xu [1] developed a web-based game for multiple players using WebGL and WebSocket, and measured the performance of WebSocket. Using an Ethernet LAN network with 3 clients and 1 game server, they found that the WebSocket protocol could handle a server load of an average of 50,000 bytes per second. Furthermore, Pimentel and Nickerson [2] measured the latency of the
WebSocket protocol with a server stationed in Canada and clients located in Canada, Sweden, Japan and Venezuela. They found an average latency of 40.3ms within Canada and latency up to 163.3ms in Japan. As an instant response is desirable for the systems and methods described herein, it is important to keep the latency low.
[0078] The JavaScript library Socket.IO provides a server-side Node.js library and a client-side JavaScript library. A key Socket.IO functionality that may be utilised is the pairing of two web browsers by creating a room via the socket.join(ROOM_KEY) function, with ROOM_KEY being a unique key. For example, two clients, i.e. the master computing device 410 and the slave computing device 430, may be paired, such that the control communications are only required to be between the paired clients.
[0079] In one example, the slave computing device 430 creates a unique room key and sends the unique room key to the server 420 to create a room. The unique room key may be manually input into the slave computing device 430 or may be generated, for example randomly, by the slave computing device 430, such as by executing client side code received from the server 420. The master computing device 410 then receives the unique room key via a user interface, for example from a user of the slave computing device 430 who knows the unique room key, and sends the unique room key to the server 420 for verification. Once the server verifies the unique room key, the master computing device 410 sends data based on one or more sensor outputs of the one or more sensors 115 to the server 420. The server 420 then generates a view of the 3D object depending on the received data.
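A non-limiting client-side sketch of this flow on the master computing device 410, using the Socket.IO client library, is shown below; the event names ('join-room', 'orientation') and the server address are illustrative assumptions.

```javascript
// Illustrative sketch: join the server-created room and forward orientation
// readings to the server 420.
import { io } from 'socket.io-client';

const socket = io('https://example-server.invalid'); // hypothetical address of server 420

// The unique room key shown on the display is entered by the user (or read by a camera).
function pair(roomKey) {
  socket.emit('join-room', roomKey);
}
pair('12345'); // e.g. a 5 digit pairing key

// Once paired, forward orientation readings so the view of the 3D object can be updated.
window.addEventListener('deviceorientation', (event) => {
  socket.emit('orientation', {
    alpha: event.alpha,
    beta: event.beta,
    gamma: event.gamma
  });
});
```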
[0080] The server 420 may create rooms on request by receiving a room key from a web browser, store room keys, forward room keys to a web browser, verify room keys, pair clients by connecting them in the same room, check the number of clients per room (e.g. only two clients may be permitted per room), deny access to a 'full' room, and allow communication between clients in the same room.
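A corresponding non-limiting server-side sketch (Node.js with Socket.IO) of this room management is shown below; the event names, the two-client limit and the choice to relay orientation data between the paired clients are illustrative assumptions.

```javascript
// Illustrative sketch: create rooms, limit each room to two clients (one
// master, one slave) and relay sensor-derived data within a room.
const { Server } = require('socket.io');
const io = new Server(3000);

io.on('connection', (socket) => {
  socket.on('join-room', (roomKey) => {
    const room = io.sockets.adapter.rooms.get(roomKey);
    if (room && room.size >= 2) {
      socket.emit('room-full', roomKey); // deny access to a 'full' room
      return;
    }
    socket.join(roomKey);                // pair clients by placing them in the same room
    socket.data.roomKey = roomKey;
  });

  // Forward orientation data from the master to the other client in the same room.
  socket.on('orientation', (angles) => {
    if (socket.data.roomKey) {
      socket.to(socket.data.roomKey).emit('orientation', angles);
    }
  });
});
```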
[0081] To pair the master computing device 410 and the slave computing device 430, Socket.IO's functionality for pairing clients by creating rooms identified by a unique
room key (pairing key) may be used. This requires, for example, the use of a 5 digit pairing key which is entered by the user, adding an extra layer of complexity for creating a connection. In some embodiments, the master computing device 410 and the slave computing device 430 may be paired by an alternative means such as a quick response (QR) code.
[0082] In some embodiments, the JavaScript code that is executed on the master computing device 410 or the slave computing device 430 may be packaged in an API. This allows easy integration with any web page which renders and/or allows the control of 3D content.
[0083] In some embodiments, a different technology could be used to replace the WebSocket protocol. For example, WebRTC (Web Real-Time Communication) enables real-time communication using a peer-to-peer connection, i.e. browser to browser communication without a server, which would significantly reduce latency. However, WebRTC is currently not supported in some browsers (e.g. Safari), and also lacks support for mobile browsers, making it less suitable for interaction involving mobile computing devices. Therefore, other types of peer-to-peer connection may be used, or future versions of WebRTC may be used if it starts supporting mobile browsers (and preferably more major desktop browsers).
[0084] In one example, the slave computing device 430 and the master computing device 410 communicate directly, i.e. without involvement of the server 420. For example, the slave computing device 430 and the master computing device 410 may communicate via peer-to-peer (P2P), wireless network, Bluetooth or another suitable network protocol and/or technology. The slave computing device 430 and/or the master computing device 410 may comprise the processor 120. The processor 120 may receive data based on the sensor output of the one or more sensors 115 and determine the perspective of the 3D object for display on the display 130 based on the received data.
Example of first and second modes of operation
[0085] Initially a user may use the mobile computing device 110 to directly control the 3D object 135 on the display 130, i.e. direct rotation mapping of the movement of the mobile computing device to the object. In this first mode, the mobile computing device 110 effectively becomes a proxy for the 3D object 135 on the display screen 130, so when a user rotates the mobile computing device 110 the 3D object 135 rotates as if the user were holding the 3D object in their hand. This may enable a user to intuitively control the 3D orientation of virtual objects on the display, thus improving the onboarding process for new users of the system. This unrestricted change in orientation in a first mode is shown in steps 220A and 230A in Fig. 12. Once a preferred orientation of the 3D object has been found, the user can lock the 3D object 135 in this orientation, for example by pressing a 'lock' button on a touchscreen of the mobile computing device 110. Alternative methods of locking that do not require user input to the screen of the mobile computing device 110 are described further in this description. The mobile computing device 110 can be placed on a surface 710 as shown in Fig. 7B. This may enable a user to more intuitively rotate the 3D object in a single dimension. Rotation of the 3D object in a single dimension may allow a user to gain a better perception of object depth.
[0086] Fig. 7A illustrates a mobile computing device 110 showing a first user interface 600. The first user interface 600 shows a reference coordinate system 610 to indicate to a user how movement of the mobile computing device 110 will affect the 3D object. The first user interface 600 comprises a lock button 620 that may be actuated by the user to change from the first mode to the second mode. In some embodiments, the lock button 620 may also be used as an unlock button to change from the second mode to the first mode. It is to be appreciated that in some examples the user interface may be used to select other modes, such as the third and fourth modes.
[0087] Fig. 7B illustrates a mobile computing device 110 placed on a surface 710. The surface 710 restricts the mobile computing device to rotate about a single axis, labelled as "y" that is perpendicular to the surface.
[0088] Fig. 7C illustrates a display 130 showing the 3D object 135 after locking. The 3D object is fixed to rotate about a single axis, i.e. the yl'-axis. Rotating the mobile computing device 110 about the y-axis as shown by the arrow in Fig. 7B results in rotation of the 3D object around the yl'-axis as shown by the arrow in Fig. 7C. Locking an axis in this way and rotating the object back and forth provides a rocking motion that assists depth perception, using the kinetic depth effect.
[0089] The use of these two modes of operation can address two major challenges in navigating 3D content, namely onboarding and depth cueing. For example, a mobile computing device can be used to provide intuitive 3D controls (almost as if the user was holding the 3D object in their hand) without needing to purchase a specialized device. By simply resting the mobile computing device on a bench, it can also become a dedicated controller for rotating about single axes, thus also addressing the challenge of depth cueing.
[0090] In some embodiments, the processor may transmit feedback data to the mobile computing device 110 to cause haptic feedback on the mobile computing device 110. For example, vibration functionality of the mobile computing device 110 may be activated when a movement is not possible, space for movement is limited (e.g. reaching a wall limit in a game) or a key point in the 3D object is reached.
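By way of a non-limiting illustration, such feedback could be triggered in a web-based implementation with the standard Vibration API; the trigger conditions shown are illustrative assumptions.

```javascript
// Illustrative sketch: vibrate the control device when a notable event occurs.
function hapticFeedback(reason) {
  if (!('vibrate' in navigator)) return;   // Vibration API not supported on this device
  if (reason === 'movement-limit' || reason === 'key-point') {
    navigator.vibrate(200);                // a single 200 ms vibration
  }
}

hapticFeedback('key-point');               // e.g. a key point in the 3D object is reached
```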
[0091] Users may be well adapted to using a mouse or a track pad in
combination with a keyboard to navigate in virtual space. In some embodiments, input from other devices, such as the keyboard, mouse and/or trackpad, may be integrated with the input from the sensors to further enhance the user experience. For example, key presses on the keyboard of the master computing device 410 may be used to disable/enable specific functionalities or the mouse may be used to select menu items. This may include selection between the modes.
[0092] In some embodiments, the mobile computing device 110 is a smartphone and may comprise functionalities that may be utilized for controlling the 3D object. Many
modern smartphones (e.g., iPhone and many other smartphones) include a 3-axis gyroscope and 3-axis accelerometer. Using these sensors, it is possible to track changes in rotation, translation, and acceleration. For example, one or more gyroscopes of the smartphone may be read to control rotation of the 3D object around three axes and one or more accelerometers may be read to control translation of the 3D object in three axes. In some examples, the sensors may include magnetometers that may be used to complement or substitute other sensors described herein.
[0093] While embodiments have been described in relation to 3D objects or 3D datasets, many aspects of the disclosure can also be applied to two dimensional (2D) objects, graphics and datasets, for example, by using two dimensions rather than three dimensions of movement of the control device (e.g. during onboarding).
[0094] Although the control device described in the examples is a mobile computing device, it is to be appreciated that a dedicated control device with inertial sensors may be used in some examples of the method described herein.
[0095] Fig. 13 illustrates an example computing device 800. For example, the computing device 800 may be used for the server 420 or the slave computing device 430. The computing device 800 includes a processor 810, a memory 820 and an interface device 840 that communicate with each other via a bus 830. The memory 820 may store instructions and data for implementing aspects of the disclosure, such as the method 200 and the method 500 described above, and the processor 810 performs the instructions (such as a computer program) from the memory 820 to implement the methods 200 and 500. The interface device 840 may include a communications module that facilitates communication with a communications network and, in some examples, with user interfaces and other peripherals, such as a keyboard, a mouse and the display 130. In some embodiments, some functions performed by the computing device 800 may be distributed between multiple network elements. For example, the server 420 may be associated with multiple processing devices and steps of the methods may be performed, and distributed, across more than one of these devices.
[0096] Advantages of the embodiments described include providing an intuitive onboarding process for users by having a 3D object mirror the movements of a mobile computing device, and aiding in accurate depth perception of virtual 3D structures viewed on a 2D screen by enabling easy movement or rocking of the 3D object about a single axis.
[0097] Embodiments may also allow intuitive exploration of 3D objects on a 2D screen, using a mobile device that a vast majority of users already possess. This can remove the need for specialized devices to be purchased for users. As embodiments may be web-based, they do not necessarily require the installation of additional software. Embodiments may also be content independent, such that they are applicable to any 3D content. For example, a simple and readily available framework is provided for users to interact with 3D content, via standard web-based technologies that can be interpreted cross-browser. This framework can be used by experts, such as scientists (e.g. structural biologists to explore protein structures), artists, graphic designers, as well as naive users who may struggle to work with and understand the control of 3D content.
Example alternatives to determine lock command
[0098] In some examples, a lock command is determined based on the outputs of the one or more inertial sensors 115. This may include receiving outputs from the inertial sensors 115 that are indicative of the user intending to initiate a lock command so that the system changes between the modes (or toggles between modes).
[0099] In one example, a specified sensor output corresponding to a user shaking the control device 110 may be used to determine a lock command. This may include sensor outputs indicative of acceleration back and forth in opposite directions.
[0100] In another example, the specified sensor output may include determining that the control device 110 is placed in a particular configuration. For example, placing the control device 110 flat on a horizontal table surface. Once locked (and in, for example,
the second mode), the user may simply rotate the phone on the table surface to rotate the 3D object around the rotation axis.
[0101] In a further variation, the lock command may be based on receiving one or more specified sensor outputs (including an absence of an output) over a specified time. This can allow locking based on moving a 3D object to a desired perspective, pausing and holding that position for a time period (e.g. more than one second) after which an axis (such as a vertical axis through the desired perspective) is locked.
Further movement of the control device 110 will then cause movement around that axis.
[0102] In another example, determining the lock command based on sensor outputs over a specified time may be used to detect when the control device is placed onto a horizontal surface. In one example, the user may orientate the view to the desired perspective and (with the intention of transitioning to the second mode) place the control device onto the horizontal surface. The method may then include determining the lock command by determining that, over a short period of time, the control device moved from a (relatively) higher position (e.g. above a table) towards a location downwards and substantially horizontal (e.g. on a table surface). Based on receiving such an input, the method may determine the corresponding desired perspective before such sensor inputs so that the 3D object is locked in a second mode at that desired perspective.
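A non-limiting sketch of such a determination, using the standard devicemotion event, is shown below; the thresholds and the one-second window are illustrative assumptions.

```javascript
// Illustrative sketch: determine a lock command when the control device has
// been lying flat and stationary (gravity almost entirely along its z-axis)
// for more than one second.
const GRAVITY = 9.81;        // m/s^2
const FLAT_TOLERANCE = 1.0;  // m/s^2, assumed tolerance
const HOLD_TIME_MS = 1000;   // assumed "specified time"

let flatSince = null;
let locked = false;

window.addEventListener('devicemotion', (event) => {
  const g = event.accelerationIncludingGravity;
  if (!g) return;
  const isFlat = Math.abs(g.x) < FLAT_TOLERANCE &&
                 Math.abs(g.y) < FLAT_TOLERANCE &&
                 Math.abs(Math.abs(g.z) - GRAVITY) < FLAT_TOLERANCE;
  if (!isFlat) {
    flatSince = null;        // the device moved or is not lying flat
    return;
  }
  if (flatSince === null) flatSince = Date.now();
  if (!locked && Date.now() - flatSince > HOLD_TIME_MS) {
    locked = true;           // determine the lock command: change to the second mode
  }
});
```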
[0103] In yet another example, determination of the lock command may be by, or in conjunction with, other sensors. In one example, a camera of the control device may be used to assist determination that the control device is on a flat surface. In some mobile computing devices, a camera is located to face in an opposite direction to a touch screen. Therefore when the mobile computing device is placed on a flat surface with the touch screen facing up, the camera is facing downwards towards the flat surface (and will therefore have an obscured or black image).
[0104] An advantage of such an input where the lock command is based on sensor inputs and/or time is that it may not require additional manipulation of user input controls. For example, the user may be able to switch between modes without interacting with a touchscreen of the control device 110. This may allow easier operation. In particular, the touchscreen of a control device 110 is typically on one side of the control device 110 and therefore the control device could, during use, be orientated such that the touchscreen faces away from the user. Such a scenario would make it difficult for the operator to see and interact with the touchscreen.
Hybrid modes and additional modes
[0105] In some examples, the method and system may operate in hybrid modes that combine two or more modes. For example, the method may include a hybrid mode that includes both the second and fourth mode. For example, the control device 110 may be on a flat surface (e.g. a table top), where rotating the control device 110 changes the orientation around a rotation axis and translating the control device 110 (across the table) changes the magnification of the 3D object on the display.
[0106] In a further example, the method may toggle between modes, or cycle through a plurality of modes, based on sensor inputs. For example, lifting the control device 110 off the flat surface may trigger a change in the modes. In another example, the method may include determining tapping on the control device 110 with the sensor outputs, whereby determination of tapping initiates a change in the mode. In some examples, this may include the form or type of tapping such as a single tap, double tap, or multiple taps. The taps may be sensed by the inertial measurement sensors and/or via the touchscreen display.
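By way of a non-limiting illustration, a tap may be detected as a brief acceleration spike, as sketched below; the threshold, the cooldown period and the mode names are illustrative assumptions.

```javascript
// Illustrative sketch: cycle through a plurality of modes whenever a tap
// (a short acceleration spike) is sensed by the inertial measurement sensors.
const TAP_THRESHOLD = 15;    // m/s^2, assumed spike threshold
const TAP_COOLDOWN_MS = 300; // assumed minimum gap between taps

const modes = ['orientation', 'translation', 'magnification']; // illustrative modes
let modeIndex = 0;
let lastTap = 0;

window.addEventListener('devicemotion', (event) => {
  const a = event.acceleration; // acceleration excluding gravity
  if (!a) return;
  const magnitude = Math.sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
  const now = Date.now();
  if (magnitude > TAP_THRESHOLD && now - lastTap > TAP_COOLDOWN_MS) {
    lastTap = now;
    modeIndex = (modeIndex + 1) % modes.length; // change to the next mode
  }
});
```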
[0107] It is to be appreciated that additional modes may be controlled with movement of the control device 110. In some examples, the method includes additional modes to control one or more of the following:
- brightness of the 3D object;
- contrast of rendering of the 3D object;
- colour(s) of rendering of the 3D object;
- selection of layers to be displayed for the 3D object;
- transparency of rendering of the 3D object;
- movement of a component part of the 3D object relative to other parts.
Variations of the pairing key
[0108] In some examples, the pairing key may be at least part of the representation of the 3D object on the display. Referring to Fig. 8, the 3D object 135 on the display 130 may have identifiable features to allow pairing of the slave computing device and the control device 110. The identifiable features may include the current orientation and/or shape of the 3D object. In other examples, the pairing key may include at least part of the view of the representation of the 3D object in motion. Thus the 3D object may be rotating, oscillating, or moving otherwise whereby the movement may be one of the identifiable features used, at least in part, for pairing.
[0109] To capture such a pairing key, a camera of the control device 110 may be used to capture an image 735 (or multiple images such as video) of the 3D object 135. From the image 735 (or multiple images), the pairing key may be used by the method to pair the control device 110 to the slave computing device.
[0110] In some examples, the method may include selecting a component or portion of the 3D object for the control device 110 to control. For example, this may include pointing the camera of the control device 110 to a specific part of the 3D object 135 shown in the display 130. The method may include identifying that specific part so that the system 100 is configured for the control device 110 to control the perspective, or move, that selected component or portion.
Multi-user environment
[0111] In some examples, the system 100 and method 200 may be used in a multiuser environment where multiple control devices 110 are used to control the 3D object 135. In some examples, the different control devices 110 may control different aspects of the 3D object. For example, one control device may control the orientation whilst another control device controls the magnification. In another example, different parts of the 3D object 135 may be rotated by respective different control devices 110.
[0112] To manage multiple control devices 110, the system may include features to prevent multiple control devices 110 from controlling the same 3D object 135 at one time. In one example, the control device 110 may be exclusively paired to the slave computing device. In some examples, the exclusive pairing may be maintained until the control device 110 relinquishes the exclusive pairing. In another example, there may be a hierarchy for the control devices 110 (and/or the users of the control devices) whereby the highest rank has the right to pair with the slave computing device.
[0113] In yet another example, the slave computing device may have exclusive pairing for a specified exclusive time. For example, this may be 10 seconds, 30 seconds, 1 minute, 5 minutes, etc. After the expiration of the specified exclusive time, the pairing may cease. Alternatively, the pairing may continue but be non-exclusive such that it will be relinquished when another control device attempts to pair with the slave computing device. In yet another example, exclusive pairing may be renewed by the computing device 110 after expiration of the specified exclusive time.
[0114] In some examples, the specified exclusive time may commence, and transpire, based on pairing or the request to pair. In other examples, the specified exclusive time may be based on the last change of sensor outputs (which in turn are indicative of last use by a user). Thus the exclusive pairing may cease by "timing out" if the user does not use the control device 110 to manipulate the 3D object for the specified time.
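A non-limiting server-side sketch of such a time-out is shown below; the 30 second period and the function names are illustrative assumptions.

```javascript
// Illustrative sketch: exclusive pairing "times out" after a specified period
// without any change in the control device's sensor output.
const EXCLUSIVE_TIME_MS = 30 * 1000;       // assumed 30 second exclusive period
const lastActivity = new Map();            // room key -> time of last sensor change

function recordSensorChange(roomKey) {
  lastActivity.set(roomKey, Date.now());   // called whenever orientation data arrives
}

function mayTakeControl(roomKey) {
  const last = lastActivity.get(roomKey);
  // A further control device may pair once the exclusive time has expired.
  return last === undefined || Date.now() - last > EXCLUSIVE_TIME_MS;
}

recordSensorChange('12345');
console.log(mayTakeControl('12345'));      // false until 30 s without sensor changes
```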
[0115] In some examples, control may be passed from one control device 110 to another control device 110' as shown in Fig. 9. In one example, control device 110 may initially be paired with the slave computing device. To pass on control, the other control device 110' is brought into proximity 111 of the control device 110. Upon determining the control devices 110, 110' are in proximity 111, control may be passed to the other control device 110' such that it is configured to control the view of the 3D object 135 on the display 130. In some examples, this may be automatic. In other examples, a prompt may show on one or both of the control devices 110, 110' to confirm the intention to pass control.
[0116] Determining proximity may be achieved using time of flight and/or time of arrival of wireless signals such as those used for Bluetooth or Wi-Fi. In another example, the method may further include determining contact (or close contact) between the control devices 110, 110' . This may include using the inertial
measurement sensors 115 to determine contact or a "bump" between the control devices 110, 110' .
[0117] In other examples, passing control may be achieved by showing or passing the code to the next control device. For example, the control device 110 that is paired to the slave computing device may show a pairing code (which may include a QR code). The next control device may then receive that pairing code (such as capturing the code with a camera) and that pairing code may then be used to pair the next control device to the slave computing device.
Multiple 3D objects
[0118] As illustrated in Fig. 10, the display 130 may show multiple 3D objects 135, 135' . Each object 135, 135' may be controlled by different respective control devices 110, 110' . To associate the 3D objects 135, 135' with control devices, the method may include using the camera of the control devices 110, 110' to identify the object to be controlled.
[0119] Thus a first user may point the camera of control device 110 towards 3D object 135. This selection is shown on an image 735 of the 3D object 135 on the display of the control device. Thus the slave computing device can pair with control device 110 for the purposes of controlling object 135. Similarly, a second user may point the camera of another control device 110' towards another 3D object 135'. The slave computing device can then pair with the other control device 110' to allow control of the other 3D object 135' .
[0120] In some examples, a native application may be installed on the control devices 110, 110' to facilitate communication and pairing with the slave computing device. An application of the system may include an interactive art installation where the display 130 is a large display whereby a visitor may use their control device (such as a personal mobile computing device, or a supplied mobile computing device) to interact with the 3D object 135, 135' . In other examples, this may be used as a scientific, engineering, medical, or educational tool.
Display 130 and multiple displays
[0121] The display 130 may include traditional displays such as a monitor, a television display, or a projector. In further examples, the display 130 may include an augmented reality display or a virtual reality display. These may include head mounted displays.
[0122] In some examples, the system may include multiple displays. In further examples, the multiple displays may include a combination of traditional displays as well as augmented reality and virtual reality displays. In some applications, the control device 110 may be used by a user to move the 3D object between two different displays.
[0123] Referring to Fig. 11, two displays 130A, 130B are provided in the system. Initially, a control device 710 is controlling the 3D object 135 that is shown in display 130A. Display 130B is empty and does not show the 3D object 135. A user may then
pick up and move the control device 710 (as shown by arrow A) towards a second location to specify moving the 3D object from first display 130A to the second display 130B. After the control device is placed in the second position (as shown as control device 710') the 3D object is then displayed in the second display 130B (as shown as 3D object 135'). The first display 130A may then stop showing 3D object 135.
[0124] This may be useful for users to switch between virtual reality, augmented reality environments and traditional displays. For example, one of displays 130A or 130B may be a virtual reality or augmented reality display, whilst the remaining display 130A or 130B may be a traditional display.
Scaling of movement
[0125] In some examples, movement of the control device 110 and the 3D object 135 may be at a 1:1 ratio. That is, an angular rotation of the control device 110 will cause an identical angular rotation of the 3D object 135. This may be useful when manipulating views of the larger overall 3D object.
[0126] However, it is to be appreciated that other scales could be used. For example, if the user wishes to have more precise viewing and manipulation of the 3D object, a different scale could be used so that a larger angular rotation of the control device 110 is required to cause a correspondingly smaller angular rotation of the 3D object displayed on the display. In other examples, the ratio may be reversed such that a smaller angular rotation of the control device 110 causes a correspondingly larger rotation of the 3D object.
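By way of a non-limiting illustration, such scaling may be implemented as a simple multiplicative ratio applied to each rotation delta; the ratio values below are illustrative assumptions.

```javascript
// Illustrative sketch: a ratio of 1 maps control device rotation directly onto
// the 3D object; a ratio below 1 gives finer control; a ratio above 1
// amplifies small movements of the control device.
function scaledRotation(deviceDeltaRadians, ratio = 1.0) {
  return deviceDeltaRadians * ratio;
}

scaledRotation(0.10);        // 1:1 mapping  -> 0.10 rad applied to the 3D object
scaledRotation(0.10, 0.25);  // fine control -> 0.025 rad for precise manipulation
scaledRotation(0.10, 4.0);   // coarse       -> 0.40 rad for large reorientations
```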
[0127] It is to be appreciated that scaling, and selective change of scaling, can be used for other modes. This may include scaling change in position of the control device 110 compared to the corresponding translation of the 3D object 135 in the display.
Control for other objects
[0128] In the above mentioned description, the control device 110 is used to control a representation of a 3D object that is shown on a display 130. In some alternatives, features of the present disclosure may be used in methods and systems to control other objects and representations. In some examples, this may include using the control device to control actuators to move a real 3D object. In one variation, this may include a drone vehicle, such as a drone aircraft (e.g. unmanned aerial vehicle).
[0129] Throughout this specification the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
[0130] Any discussion of documents, acts, materials, devices, articles or the like which has been included in the present specification is not to be taken as an admission that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present disclosure as it existed before the priority date of each claim of this application.
[0131] It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the above-described embodiments, without departing from the broad general scope of the present disclosure. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.
References
[1] Bijin Chen and Zhiqi Xu. 2011. A framework for browser-based Multiplayer Online Games using WebGL and WebSocket. In Multimedia Technology (ICMT), 2011 International Conference on. IEEE, 471-474.
[2] Victoria Pimentel and Bradford G Nickerson. 2012. Communicating and displaying real-time data with WebSocket. IEEE Internet Computing 16, 4 (2012), 45-53.
Claims
1. A method implemented by a processor for controlling a view of a
representation of a three dimensional (3D) object on a display, the method comprising: receiving data based on a sensor output of one or more inertial measurement sensors of a control device; determining a perspective to the 3D object based on the received data; and sending, to the display, display data representing a view of the 3D object from the determined perspective.
2. The method of claim 1, wherein the received data comprises one or more of the following: information relating to orientation of the control device; and information relating to movement of the control device.
3. The method of claim 1 or 2, further comprising: determining a change in orientation of the control device based on the received data, wherein determining the perspective to the 3D object comprises determining a corresponding change in orientation of the 3D object based on the change in orientation of the control device.
4. The method of any one of the preceding claims, wherein determining the perspective to the 3D object comprises: in a first mode, determining an unrestricted change in orientation of the 3D object based on the received data; and in a second mode, determining a restricted change in orientation of the 3D object around a rotation axis based on the received data.
5. The method of claim 4, further comprising receiving a lock command from the control device, and in response to the lock command changing to the second mode from the first mode.
6. The method of claim 4, further comprising determining a lock command based on receiving one or more specified sensor outputs of one or more inertial measurement sensors of the control device, wherein in response to the lock command, changing to the second mode from the first mode.
7. The method of claim 6, wherein the lock command is further based on receiving one or more specified sensor outputs over a specified time.
8. The method of any one of the preceding claims, further comprising: determining a change in position of the control device based on the received data, wherein determining the perspective to the 3D object comprises determining a corresponding translation of the 3D object based on the change in position of the control device.
9. The method of any one of the preceding claims, further comprising: determining a change in orientation and/or position of the control device based on the received data, wherein determining the perspective to the 3D object comprises determining a corresponding change in magnification of the 3D object based on the change in orientation and/or position of the control device.
10. The method of any one of the preceding claims further comprising a toggle command based on receiving one or more specified sensor outputs of one or more inertial measurement sensors of the control device, wherein in response to the toggle command, the method comprises changing from one mode to another mode in a plurality of modes, wherein the plurality of modes include determining the perspective to the 3D object in at least one or more of:
- an orientation of the 3D object;
- a position of the 3D object;
- a magnification of the 3D object;
- brightness of the 3D object;
- contrast of rendering of the 3D object;
- colour(s) of rendering of the 3D object;
- selection of layers to be displayed for the 3D object;
- transparency of rendering of the 3D object; and
- movement of a component part of the 3D object relative to other parts.
11. The method of any preceding claim, wherein a server comprises the processor, and sending the display data to the display comprises sending the display data to a slave computing device associated with the display.
12. The method of claim 11, wherein the slave computing device associated with the display comprises the processor.
13. The method of either claim 11 or 12, wherein the slave computing device communicates with the control device via peer-to-peer (P2P), wireless network or Bluetooth.
14. The method of any one of claims 11 to 13 further comprising: sending, to the display, display data showing a pairing key; receiving the pairing key via an input device of the control device; and when the pairing key is received, pairing the control device with the slave computing device for controlling the view of the representation of the 3D object on the display.
15. The method of claim 14 wherein the input device of the control device includes a camera, wherein the pairing key includes at least part of the view of the representation of the 3D object on the display.
16. The method of claim 15 wherein the pairing key includes at least part of the view of the representation of the 3D object in motion.
17. The method of any one of claims 14 to 16 wherein pairing the control device with the slave computing device includes exclusive pairing for a specified exclusive time, wherein upon expiration of the specified exclusive time, the control device or another control device can pair with the slave computing device.
18. The method of claim 17 wherein the specified exclusive time period is based on time after a last change in sensor output of the one or more inertial measurement sensors of the control device.
19. The method according to any one of claims 14 to 18, wherein upon determining the control device is within a specified proximity of a further control device, the method further includes controlling the view of representation of the 3D object on the display with the further control device paired to the slave computing device.
20. The method of any preceding claim, further comprising transmitting feedback data to the control device to cause haptic feedback on the control device.
21. The method of any preceding claim, wherein the 3D object is a molecule.
22. The method of any preceding claim wherein the control device is a mobile computing device.
23. The method according to any one of the preceding claims, wherein the method includes controlling the view of the 3D object on a plurality of displays, wherein in one mode the method comprises:
- determining, based on the received data, a specified display from the plurality of displays; and
- sending, to the specified display, the display data representing a view of the 3D object.
24. A system for controlling a view of a representation of a three dimensional (3D) object on a display, the system comprising: a processor; and a memory comprising a computer program that when executed by the processor performs the following: receiving data based on a sensor output of one or more inertial measurement sensors of a control device; determining a perspective to the 3D object based on the received data; and sending, to the display, display data representing a view of the 3D object from the determined perspective.
25. A system according to claim 24, wherein determining a perspective to the 3D object comprises: in a first mode, determining an unrestricted change in orientation of the 3D object based on the received data, and
in a second mode, determining a restricted change in orientation of the 3D object around a rotation axis based on the received data.
26. A system for controlling a view of a representation of a three dimensional (3D) object on a display, the system comprising: a processor; and a memory comprising a computer program that, when executed by the processor, causes the processor to perform the method according to any one of claims 1 to 23.
27. The system according to any one of claims 24 to 26 wherein the control device is a mobile computing device.
28. The system according to any one of claims 24 to 27 wherein the system further comprises the control device.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2017900812 | 2017-03-08 | ||
AU2017900812A AU2017900812A0 (en) | 2017-03-08 | Systems, methods and devices for controlling a view of a 3D object on a display |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018161113A1 true WO2018161113A1 (en) | 2018-09-13 |
Family
ID=63447074
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/AU2018/050200 WO2018161113A1 (en) | 2017-03-08 | 2018-03-05 | Systems, methods and devices for controlling a view of a 3d object on a display |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018161113A1 (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060125917A1 (en) * | 2004-12-13 | 2006-06-15 | Samsung Electronics Co., Ltd. | Three dimensional image display apparatus |
US20140188638A1 (en) * | 2008-01-16 | 2014-07-03 | Martin Kelly Jones | Systems and Methods for Determining Mobile Thing (MT) Identification and/or MT Motion Activity Using Sensor Data of Wireless Communication Device (WCD) |
US20120026166A1 (en) * | 2010-02-03 | 2012-02-02 | Genyo Takeda | Spatially-correlated multi-display human-machine interface |
US8581905B2 (en) * | 2010-04-08 | 2013-11-12 | Disney Enterprises, Inc. | Interactive three dimensional displays on handheld devices |
US20120007850A1 (en) * | 2010-07-07 | 2012-01-12 | Apple Inc. | Sensor Based Display Environment |
US20130227492A1 (en) * | 2010-08-25 | 2013-08-29 | At&T Intellectual Property I, Lp | Apparatus for controlling three-dimensional images |
US20130002548A1 (en) * | 2011-06-28 | 2013-01-03 | Kyocera Corporation | Display device |
US20140125557A1 (en) * | 2012-11-02 | 2014-05-08 | Atheer, Inc. | Method and apparatus for a three dimensional interface |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11275481B2 (en) | Collaborative augmented reality system | |
US10854169B2 (en) | Systems and methods for virtual displays in virtual, mixed, and augmented reality | |
US10657716B2 (en) | Collaborative augmented reality system | |
EP2814000B1 (en) | Image processing apparatus, image processing method, and program | |
KR100963238B1 (en) | Tabletop-Mobile Augmented Reality System for Personalization and Collaboration | |
US20180075644A1 (en) | Caching in map systems for displaying panoramic images | |
Henrysson | Bringing augmented reality to mobile phones | |
US10114543B2 (en) | Gestures for sharing data between devices in close physical proximity | |
US10591986B2 (en) | Remote work supporting system, remote work supporting method, and program | |
Papaefthymiou et al. | Mobile Virtual Reality featuring a six degrees of freedom interaction paradigm in a virtual museum application | |
US20210005014A1 (en) | Non-transitory computer-readable medium, image processing method, and image processing system | |
CN111913645B (en) | Three-dimensional image display method, device, electronic device and storage medium | |
US9292165B2 (en) | Multiple-mode interface for spatial input devices | |
Benini et al. | Palmtop computers for managing interaction with immersive virtual heritage | |
WO2018161113A1 (en) | Systems, methods and devices for controlling a view of a 3d object on a display | |
US10664105B2 (en) | Projected, interactive environment | |
US8972864B2 (en) | Website list navigation | |
JP5475163B2 (en) | Data acquisition device, data acquisition system, data acquisition device control method, and program | |
JP5247907B1 (en) | Data acquisition device, data acquisition system, data acquisition device control method, and program | |
JP6363350B2 (en) | Information processing program, information processing apparatus, information processing system, and information processing method | |
JP2004054828A (en) | Image processing device | |
CN116235136A (en) | VR control based on mobile device | |
TW201112105A (en) | Method and system of dynamic operation of interactive objects | |
Umenhoffer et al. | Using the Kinect body tracking in virtual reality applications | |
KR20140039395A (en) | Virtual device control smartphone |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18763154; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 18763154; Country of ref document: EP; Kind code of ref document: A1