
WO2018146493A1 - A graphical user interface device and method - Google Patents

A graphical user interface device and method

Info

Publication number
WO2018146493A1
Authority
WO
WIPO (PCT)
Prior art keywords
visual object
vertices
state
user interface
virtual movement
Application number
PCT/GB2018/050382
Other languages
French (fr)
Inventor
Nicolas COMER-CALDER
Aron Alexander SCHLEIDER
Original Assignee
The Voucher Market Limited
Application filed by The Voucher Market Limited
Publication of WO2018146493A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0483: Interaction with page-structured environments, e.g. book metaphor
    • G06F3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures


Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to a computer-implemented method of controlling an electronic device with a display and a user input. The method includes the steps of: displaying a visual object in a first state within a graphical user interface on the display; detecting a user input event comprising virtual movement within the graphical user interface; and in response to the user input event: during the virtual movement, modifying the visual object in correspondence to the virtual movement such that the visual object is converted from the first state to a second state; and providing access to content or functionality when the visual object is in the second state. The virtual movement does not follow a predefined path.

Description

A Graphical User Interface Device and Method
Field of Invention
The present invention is in the field of graphical user interfaces. More particularly, but not exclusively, the present invention relates to controlling access within an electronic device using a graphical user interface.
Background
Within electronic devices there is often a need to provide access to a user to content or functionality. Access is often mediated via a graphical user interface where a graphical element is displayed and where user input is received to enable interaction with the graphical element. In the simplest form this may involve, for example, receiving a pointer-click to an icon.
In one application, virtually wrapped objects are displayed to the user and the user may click on the wrapped object. At this point, a predefined animation is displayed where the virtually wrapped object is unwrapped through a series of animation or video frames. The unwrapped object may represent content or functionality, such as a greeting card, a virtual gift, an image corresponding to an actual gift, or a gift card redeemable for a physical or virtual product.
There is a desire to provide enhanced graphical user interfaces to improve this application and others.
It is an object of the present invention to provide a graphical user interface device and method which overcomes the disadvantages of the prior art, or at least provides a useful alternative.
Summary of Invention
According to a first aspect of the invention there is provided a computer-implemented method of controlling an electronic device with a display and a user input, including:
Displaying a visual object in a first state within a graphical user interface on the display;
Detecting a user input event comprising virtual movement within the graphical user interface; and
In response to the user input event:
During the virtual movement, modifying the visual object in correspondence with the virtual movement such that the visual object is converted from the first state to a second state; and
Providing access to content or functionality when the visual object is in the second state;
wherein the virtual movement does not follow a predefined path.
Other aspects of the invention are described within the claims.
Brief Description of the Drawings
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:
Figure 1: shows a block diagram illustrating an electronic device in accordance with an embodiment of the invention;
Figure 2: shows a block diagram illustrating a software architecture for the electronic device in accordance with an embodiment of the invention;
Figure 3: shows a flow diagram illustrating a method in accordance with an embodiment of the invention;
Figures 4a to 4d: show a series of screenshots illustrating a graphical user interface in accordance with an embodiment of the invention;
Figure 5a: shows a diagram illustrating the forces acting upon vertices within a grid in accordance with an embodiment of the invention;
Figures 5b and 5c: each show a series of diagrams illustrating a graphical user interface in accordance with an embodiment of the invention;
Figures 6a and 6b: each show block diagrams illustrating a software architecture for the electronic device in accordance with an embodiment of the invention; and
Figure 6c: shows a flow diagram illustrating a method in accordance with an embodiment of the invention.
Detailed Description of Preferred Embodiments
The present invention provides a graphical user interface device and method.
In Figure 1, an electronic device 100 in accordance with an embodiment of the invention is shown.
The device 100 includes a processor 101, a memory 102, a user input apparatus 103, and a display apparatus 104. The device 100 may also include a communications apparatus 105. The input apparatus 103 may include one or more of a touch/near-touch input, an audio input, a keyboard, a pointer device (such as a mouse), or any other type of input. The display apparatus 104 may include one or more of a digital screen (such as an LED or OLED screen), an e-ink screen, or any other type of display.
The input and display apparatuses 103 and 104 may form an integrated user interface 106 such as a touch or near-touch screen.
The device 100, or at least parts of the device, may constitute a personal computing device such as a desktop or laptop computer, or a mobile device 100, such as a smart-phone or a tablet. The device 100 may include a common operating system 107 such as Apple iOS, Google Android, or Microsoft Windows Phone for mobile devices or Microsoft Windows or Apple OSX for personal computing devices.
The processor 101 may be configured to display a visual object within a graphical user interface in a first and second state on the display apparatus 104. The processor 101 may be configured to convert the state of the visual object from the first state to the second in response to a user input via the input apparatus 103 at the graphical user interface. The user input may comprise a virtual movement, for example, a touch movement detected moving from one location to another or a pointer-based movement detected moving from one location to another. The states may be reflected as visual differences on the display.
The processor 101 may be further configured to provide content or functionality to the user. Access to the content or functionality may be provided by the processor 101 when the visual object is converted into the second state. The memory 102 may be configured to store software applications 108, libraries 109, the operating system 107, device drivers 110, and data 111.
The processor 101 is configured to execute the software applications 108, libraries 109, operating system 107, and device drivers 110, and to retrieve data 111.
The communications apparatus 105 may be configured to communicate with one or more other devices or servers via a communications interface such as wifi, Bluetooth, and/or cellular (e.g. 2G, 3G, or 4G) and/or across a network (such as a cellular network, a private LAN/WLAN and/or the Internet).
Referring to Figure 2, the various layers of the architecture 200 of the device 100 will be described.
Software applications 201 are provided at a top layer. Below this layer are user interface APIs (Application Programming Interfaces) 202 which provide access for the application software 201 to user interface libraries. Below this layer are operating system APIs 203 which provide access for the application software 201 and user interface libraries to the core operating system 204. Below the core operating system 204 are the device drivers 205 which provide access to the input 103, output 104, and communication 105 apparatuses.
In one embodiment, the user interface APIs 202 include exposed calls to instruct the processor 101 to display a visual object within a graphical user interface in a first state on the display apparatus 104 and to instruct the processor 101 to convert the state of the visual object from the first displayed state to a second displayed state in response to a user input via the input apparatus 103 at the graphical user interface. The calls may be exposed to the application layer 201.
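The description does not fix a concrete API surface, so the following is only a minimal sketch of what such exposed calls might look like; every name in it (VirtualMovement, UnwrappableObject, and so on) is an assumption, not taken from the patent.

```typescript
// Hypothetical shape of the user-interface API exposed to the application layer.
// A virtual movement is reported as a stream of points forming a user-defined path.
interface VirtualMovement {
  points: { x: number; y: number }[]; // path sampled from touch or pointer input
}

interface UnwrappableObject {
  displayFirstState(): void;                      // draw the object in its initial (wrapped) state
  applyMovement(movement: VirtualMovement): void; // deform the object during the movement
  isSecondState(): boolean;                       // true once the object counts as converted
  onSecondState(callback: () => void): void;      // application hook to reveal content or functionality
}
```

With reference to Figure 3, a method 300 in accordance with an embodiment of the invention will be described.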
In step 301, a visual object is displayed in a first state to a user on an electronic device (e.g. on display apparatus 104) within a graphical user interface. The visual object may be a visual representation of a sheet of material such as paper. The visual object may be comprised of a visible or non-visible grid. The grid may be comprised of interconnected vertices. In step 302, a user input event may be detected at the graphical user interface. The user input event may comprise virtual movement. Virtual movement may be a continuous series of touch events detected by a touch-screen across an area, or a pointer-based movement from one location to another. It will be appreciated that virtual movement may be captured in a variety of ways. The virtual movement may comprise a path.
The path may not be predefined. That is, the path may be only defined by the user when the user interacts with the user input apparatus to provide the user input event.
In one embodiment, the user input event comprises multiple virtual movements defined by touch events detected by a touch-screen. In this embodiment, each of the multiple virtual movements may correspond to a path such that multiple paths are defined.
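As an illustration only (the patent does not prescribe an event model), such user-defined paths could be captured from standard browser pointer events roughly as follows; the use of pointer events and the Point type are assumptions.

```typescript
// Illustrative capture of user-defined (not predefined) paths, one per pointer.
type Point = { x: number; y: number };

const paths = new Map<number, Point[]>(); // one path per active pointer (finger or mouse)

window.addEventListener("pointerdown", (e: PointerEvent) => {
  paths.set(e.pointerId, [{ x: e.clientX, y: e.clientY }]); // start a new path
});

window.addEventListener("pointermove", (e: PointerEvent) => {
  const path = paths.get(e.pointerId);
  if (path) path.push({ x: e.clientX, y: e.clientY }); // extend the path as the user moves
});

window.addEventListener("pointerup", (e: PointerEvent) => {
  paths.delete(e.pointerId); // the movement, and hence the path, ends here
});
```

Two simultaneous touches produce two entries in the map, matching the multiple-paths embodiment above.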
In step 303, in response to the user input event and during the virtual movement, the visual object is converted in correspondence to the virtual movement from the first state to a second state. Modification of the visual object is reflected within the graphical user interface. For example, where the visual object is a visual representation of a sheet of material, the modification may be tears, folds, or crumpling of the sheet, such that the sheet of material is simulated within the graphical user interface. Where the visual object is comprised of a grid of vertices, virtual movement proximate to the vertices may result in "disconnections" between those vertices. In this way, where the visual object represents a sheet of material, a tearing in the sheet may be simulated. The effect of the virtual movement on a localised aspect of the grid may iterate throughout the grid of vertices to provide a crumpling effect. A 2D image may be mapped to the grid of vertices such that modifications to the grid of vertices result in a visual change to the 2D image, which may, in the first state, be displayed in a flat form and, after modification to the grid of vertices, may be displayed in a three-dimensional form illustrating crumpling.
In one embodiment, a force is calculated on each vertex of the grid of vertices, and virtual movement changes the forces on one or more vertices within the grid of vertices, such that, beyond a specific force threshold, a vertex may split into two representing a "disconnection" within the grid.
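A minimal sketch of this thresholding rule follows, assuming per-vertex force vectors; the threshold value and the splitVertex helper are illustrative, not from the patent.

```typescript
// Illustrative tearing rule: a vertex whose accumulated force exceeds a threshold
// is split into two copies, creating a "disconnection" in the grid.
type Vec2 = { x: number; y: number };
interface Vertex { pos: Vec2; force: Vec2 }

const TEAR_THRESHOLD = 5.0; // assumed tuning constant

function tearWhereOverstressed(vertices: Vertex[], splitVertex: (v: Vertex) => void): void {
  for (const v of vertices) {
    // Compare the magnitude of the accumulated force against the threshold.
    if (Math.hypot(v.force.x, v.force.y) > TEAR_THRESHOLD) {
      splitVertex(v); // duplicate the vertex so the grid can come apart here
    }
  }
}
```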
During modification of the visual object, associated sound effects may be played such as tearing, folding or crumpling sound effects.
Where the user input event comprises multiple virtual movements, vertices may experience force increases in two opposing directions, resulting in more rapid "disconnections".
In one embodiment, vertices are not split into two to represent disconnections; instead, the visual object may comprise two or more grids of vertices, each grid of vertices mapped to one part of the image or to one image of a plurality of images. Modification may then affect one or more of the grids of vertices depending on the type of user input event, resulting in one grid or both moving in relation to the other, simulating, for example, a tearing of the overall visual object.
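A minimal sketch of this variant, under the assumption that each image piece owns its own grid and that input displaces only the grids under the touch points (all names illustrative):

```typescript
// Illustrative multi-grid variant: no vertex is split; each image piece owns a
// grid, and a movement displaces only the grids it touches.
type Vec2 = { x: number; y: number };
interface Grid { contains(p: Vec2): boolean; translate(d: Vec2): void }
interface Piece { imageUrl: string; grid: Grid }

function applyMovement(pieces: Piece[], touchPoints: Vec2[], delta: Vec2): void {
  for (const piece of pieces) {
    // Only grids under at least one touch point respond, so a two-finger drag
    // can pull two pieces apart, simulating a tear between them.
    if (touchPoints.some(p => piece.grid.contains(p))) {
      piece.grid.translate(delta);
    }
  }
}
```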
In step 304, access is provided to content or functionality once the visual object is converted into the second state. Access may be provided to the user within the graphical user interface. The content or functionality may include display of a greeting card or message card, display of a gift card, and/or providing the ability to purchase products or services. It will be appreciated that access to various types of content or functionality may be provided.
Figures 4a to 4d show diagrams illustrating a sequence of user interactions with screenshots generated by a software application in accordance with a method of the invention. In Figure 4a, a visual object representing wrapping paper is displayed to the user within a graphical user interface.
An animated icon may be displayed to the user to indicate the type of action that is possible in relation to this visual object. In the example in Figure 4a, a dragging user input is indicated.
A dragging user input is detected within the graphical user interface in relation to the visual object. In Figure 4b, as the user drags within the GUI (for example, using a pointer or using touch), the visual representation of the wrapping paper tears and folds organically corresponding to where and how the user drags.
Once this visual object has been converted from a first state (un-torn) to a second state (torn), content or functionality is accessible.
In one embodiment, the content or functionality is a further wrapping paper to be interacted with and a new visual object representing this further wrapping paper is displayed within the graphical user interface as shown in Figure 4c. When a further dragging action is detected, the display of the wrapping paper shows tearing and folding of the paper to display a message card behind in Figure 4d. Referring to Figures 5a to 5c, a method in accordance with an embodiment of the invention will be described.
An image (such as a 2D bitmap image) is mapped into a grid of vertices. Each connected pair of vertices within the grid is connected by a spring.
In an initial state, before user events are detected, the grid is in a rest state and the forces upon the vertices sum to null as shown in Figure 5a. When a user event is first detected (such as one or more touch events), an anchor point is created at each user event initiation point corresponding to the grid (e.g. a touch event on a touch-screen). One or more anchor springs may be created between each of the one or more anchor points and one or more proximate vertices.
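As a concrete illustration of the grid, its rest state, and anchor creation (all sizes, names, and the immovable-anchor convention are assumptions for the sketch):

```typescript
// Illustrative construction of a w x h grid whose springs start at rest, plus an
// anchor spring attached where a touch lands.
type Vec2 = { x: number; y: number };
interface Pt { pos: Vec2; massInv: number }
interface Spr { p1: Pt; p2: Pt; restLength: number; stiffness: number }

function buildGrid(w: number, h: number, spacing: number, stiffness: number) {
  const pts: Pt[][] = [];
  for (let y = 0; y < h; y++) {
    pts.push([]);
    for (let x = 0; x < w; x++) {
      pts[y].push({ pos: { x: x * spacing, y: y * spacing }, massInv: 1 });
    }
  }
  const springs: Spr[] = [];
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      // Rest length equals the initial spacing, so all forces sum to null at start.
      if (x + 1 < w) springs.push({ p1: pts[y][x], p2: pts[y][x + 1], restLength: spacing, stiffness });
      if (y + 1 < h) springs.push({ p1: pts[y][x], p2: pts[y + 1][x], restLength: spacing, stiffness });
    }
  }
  return { pts, springs };
}

// On a touch, create an immovable anchor point (massInv = 0) at the touch location
// and connect it to the nearest vertex with a zero-rest-length anchor spring.
function addAnchor(at: Vec2, nearest: Pt, springs: Spr[]): Pt {
  const anchor: Pt = { pos: { ...at }, massInv: 0 }; // infinite mass: driven by input only
  springs.push({ p1: anchor, p2: nearest, restLength: 0, stiffness: 1 });
  return anchor;
}
```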
After user event initiation, when the user event includes virtual movement (such as dragging across a touch-screen), the anchor points are moved to correspond to the virtual movement. Tension then increases on the one or more anchor springs and additional force is applied to the associated vertices.
For example, each point (i.e. vertex or anchor point) may be defined with the following properties:
- pos: Position
- mass: Mass
- acc: Acceleration
- vel: Velocity
Springs may be defined with the parameters:
- restLength: Initial distance where force = 0
- stiffness: Controls the amount of force
For each point the following may be calculated:
massInv = 1 / mass;
Forces between two points (p1 and p2) may be updated as follows:
delta = p2.pos - p1.pos;
force = (delta.length - restLength) / (delta.length * (p1.massInv + p2.massInv)) * stiffness;
p1.pos += delta * force * p1.massInv;
p2.pos -= delta * force * p2.massInv;
Point integration using velocity and acceleration over a given deltaTime may be defined as:
p.acc *= p.massInv; // acceleration = force / mass
p.vel *= 0.8; // damping to slow the point down
p.vel += p.acc * deltaTime;
p.pos += p.vel;
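The pseudocode above can be consolidated into a small runnable routine. The sketch below assumes a 2D vector type and keeps the damping factor of 0.8 from the pseudocode; the class and helper names are illustrative rather than taken from the patent.

```typescript
// Runnable consolidation of the spring/point pseudocode above (names illustrative).
type Vec2 = { x: number; y: number };
const add = (a: Vec2, b: Vec2): Vec2 => ({ x: a.x + b.x, y: a.y + b.y });
const sub = (a: Vec2, b: Vec2): Vec2 => ({ x: a.x - b.x, y: a.y - b.y });
const scale = (a: Vec2, s: number): Vec2 => ({ x: a.x * s, y: a.y * s });
const length = (a: Vec2): number => Math.hypot(a.x, a.y);

class Point {
  massInv: number;
  acc: Vec2 = { x: 0, y: 0 }; // accumulates force; becomes acceleration during integration
  vel: Vec2 = { x: 0, y: 0 };
  constructor(public pos: Vec2, mass: number) { this.massInv = 1 / mass; }
}

class Spring {
  constructor(public p1: Point, public p2: Point,
              public restLength: number, public stiffness: number) {}

  // Force between the two end points, following the pseudocode above.
  update(): void {
    const delta = sub(this.p2.pos, this.p1.pos);
    const len = length(delta);
    if (len === 0) return; // coincident points: no defined direction
    const force = ((len - this.restLength) /
      (len * (this.p1.massInv + this.p2.massInv))) * this.stiffness;
    this.p1.pos = add(this.p1.pos, scale(delta, force * this.p1.massInv));
    this.p2.pos = sub(this.p2.pos, scale(delta, force * this.p2.massInv));
  }
}

// Point integration using velocity and acceleration over a given deltaTime.
function integrate(p: Point, deltaTime: number): void {
  p.acc = scale(p.acc, p.massInv); // acceleration = force / mass
  p.vel = scale(p.vel, 0.8);       // damping to slow the point down
  p.vel = add(p.vel, scale(p.acc, deltaTime));
  p.pos = add(p.pos, p.vel);
  p.acc = { x: 0, y: 0 };          // clear accumulated force for the next frame
}
```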
In one embodiment, multiple images forming a larger visual object (such as a virtual sheet or layer, e.g. paper or fabric) may each be mapped onto their own grid. These image-grid pairs exist within the user interface as wrapping-paper objects. A user input event comprising one or more touch events may connect the anchor points for each touch event to vertices at one of the wrapping-paper objects via the anchor springs. In this way, a user input event comprising a single touch event will affect vertices of the grid for one wrapping-paper object, and a user input event comprising multiple touch events may affect multiple wrapping-paper objects when the vertices proximate to the anchor points exist on different wrapping-paper objects. An example of a user interacting with one wrapping-paper object in accordance with the invention will be described with reference to Figure 5b.
In this example, a user provides a single input point, for example using a mouse or one finger on a touch-screen. There are two wrapping-paper objects A and B within the user interface, but only the vertices within the grid for object B change in response to the virtual movement created by continuing user input (e.g. movement of the mouse or finger on the touch-screen). An example of a user interacting with two wrapping-paper objects in accordance with the invention will be described with reference to Figure 5c.
In this example, a user provides two input points, for example using two fingers on a touch-screen. There are two wrapping-paper objects A and B within the user interface, and vertices within the grids for both objects A and B change in response to the virtual movement created by continuing user input (e.g. movement of the fingers on the touch-screen).
In addition, where the larger visual object is a layer, in some embodiments there may exist multiple layers within the user interface, such that the layers are stacked in front of one another. Each layer may partially or completely obscure the layer behind it. These layers exist within the interface as wrapping objects. An embodiment of the invention will now be described with reference to Figures 6a and 6b.
This embodiment may include five system modules. The system modules may be implemented in software or within firmware. The modules may execute at the user interface layer 202 and expose user interface APIs to application software executing at the application layer 201. The system modules may include a Wrapping Manager module, a Wrapping module, a Wrapping Geometry module, a Wrapping Paper module, and a Wrapping Spring module. The Wrapping Manager module may be configured for managing the entire "unwrapping" user interface process. It may be configured for:
- Preloading images and sound effects.
- Creating materials for each Wrapping Object.
- Creating and updating Wrapping Objects.
- Handling user input to calculate the path.
- Playback of sound effects with dynamic volume.
The Wrapping module may be configured for managing an "unwrapping" layer. It may be configured for:
- Creating Wrapping Geometry.
- Creating and updating Wrapping Paper objects.
The Wrapping Geometry module may be configured for creating the grid plane geometry. It may be configured for:
- Defining vertices, faces and grid shortcuts.
- Supporting randomised vertex positions.
The Wrapping Paper module may be configured for managing each separate "paper piece" or Wrapping Paper object. It may be configured for:
- Creating a 3D mesh using the Wrapping Geometry module.
- Creating Wrapping Spring objects for vertices on every face of the mesh.
- Integrating vertex physics (acceleration and velocity).
- Updating the spring force model for each vertex.
- Supporting one input point.
The Wrapping Spring module may be configured for calculating the forces applied between two vertices. It may be configured for:
- Containing references to two points.
The Wrapping Spring module may require rest length and stiffness parameters.
The hierarchy for the five modules is illustrated in Figure 6b. A method for updating the Wrapping object in response to user input (e.g. virtual movement) will be described in relation to Figure 6c.
The Wrapping Manager module receives user input and executes the main update function calling the active Wrapping object.
The Wrapping object manages the Wrapping Paper objects that represent each separate piece of paper.
Each Wrapping Paper object updates its vertices using its Wrapping Spring objects for the physics calculations.
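A minimal sketch of this update chain follows, with all interfaces assumed for illustration (the patent does not specify signatures):

```typescript
// Illustrative update chain: Wrapping Manager -> Wrapping -> Wrapping Paper -> Wrapping Spring.
type Vec2 = { x: number; y: number };
interface WrappingSpring { update(): void }  // calculates the force between two vertices
interface WrappingPaper {                    // one separate "paper piece"
  springs: WrappingSpring[];
  integrateVertices(deltaTime: number): void;
}
interface Wrapping {                         // one "unwrapping" layer
  papers: WrappingPaper[];
  moveAnchors(inputPath: Vec2[]): void;      // anchor points follow the user's path
}

class WrappingManager {
  constructor(private active: Wrapping) {}

  // Main update function: receives user input and drives the active Wrapping object.
  update(inputPath: Vec2[], deltaTime: number): void {
    this.active.moveAnchors(inputPath);
    for (const paper of this.active.papers) {
      for (const spring of paper.springs) spring.update(); // physics per spring
      paper.integrateVertices(deltaTime);                  // then integrate the vertices
    }
  }
}
```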
Potential advantages of some embodiments of the present invention are that immediate visual feedback is provided to the user, the visual feedback correlates more directly to the user action, and the skeuomorphic attributes enhance user ease-of-use particularly for users with less exposure to technology or technical ability. At least some of these advantages may thus provide an improved graphical user interface which in turn provides an improved device. Furthermore, a potential advantage of some embodiments of the present invention is that a user interface API is provided to application software developers to facilitate ease of creation of this user interface element. While the present invention has been illustrated by the description of the embodiments thereof, and while the embodiments have been described in considerable detail, it is not the intention of the applicant to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details, representative apparatus and method, and illustrative examples shown and described. Accordingly, departures may be made from such details without departure from the spirit or scope of applicant's general inventive concept.

Claims

1. A computer-implemented method of controlling an electronic device with a display and a user input, including:
Displaying a visual object in a first state within a graphical user interface on the display;
Detecting a user input event comprising virtual movement within the graphical user interface; and
In response to the user input event:
During the virtual movement, modifying the visual object in correspondence with the virtual movement such that the visual object is converted from the first state to a second state; and
Providing access to content or functionality when the visual object is in the second state;
wherein the virtual movement does not follow a predefined path.
2. A method as claimed in claim 1, wherein the visual object comprises a grid of connected vertices.
3. A method as claimed in claim 2, wherein modification of the visual object includes modification to the grid.
4. A method as claimed in any one of claims 2 to 3, wherein connections between the vertices proximate to the virtual movement are broken.
5. A method as claimed in any one of the preceding claims, wherein the visual object simulates a sheet of material.
6. A method as claimed in claim 5, wherein modifications to the visual object include one or more selected from the set of tearing of the sheet, crumpling of the sheet, and folding of the sheet.
7. A method as claimed in any one of the preceding claims, wherein the virtual movement includes one or more selected from the set of touch movement from one location within the graphical user interface to another, and pointer movement from one location within the graphical user interface to another.
8. A method as claimed in claim 3, wherein the grid of vertices is mapped to a 2D image, such that the visual object corresponds to the 2D image and such that modifications to the grid result in visual modifications to the 2D image.
9. A method as claimed in any one of the preceding claims, further including:
Displaying one or more visual objects within layers within the user interface, such that each preceding visual object within the layer visually obscures succeeding visual objects when in a first state and does not visually obscure succeeding visual objects when in a second state;
Detecting a second user input event comprising a second virtual movement within the graphical user interface; and
In response to the second user input event:
During the second virtual movement, modifying a visual object in a layer in correspondence with the second virtual movement such that the visual object in the layer is converted from the first state to the second state.
10. A method as claimed in any one of the preceding claims when dependent on claim 2, wherein the visual object comprises a plurality of grids of vertices.
11. A method as claimed in claim 10, wherein each grid of the plurality of grids of vertices is mapped to an associated 2D image, such that the visual object corresponds to the combination of associated 2D images.
12. A method as claimed in claim 11, wherein the user input event comprises a plurality of virtual movements.
13. A method as claimed in claim 12 when dependent on claim 3, wherein each of the virtual movements is associated with a different grid of vertices of the plurality of grids of vertices, such that modification of each different grid affects each associated 2D image.
14. A method as claimed in any one of the preceding claims, wherein the visual object is displayed in response to an application programming interface call to a user interface by application software.
15. A method as claimed in claim 2, wherein each pair of connected vertices is connected by a spring applying a force on each of the vertices.
16. A method as claimed in claim 15, wherein a user input event creates one or more anchor points connected to one or more proximate vertices within the grid.
17. A method as claimed in claim 16, wherein the anchor points move in correspondence with the virtual movement.
18. A method as claimed in any one of claims 16 and 17, wherein each anchor point is associated with one or more anchor springs connecting the anchor point to the one or more proximate vertices.
19. An electronic device, including:
A display;
One or more processors;
Memory;
A user input; and
One or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including instructions for:
Displaying a visual object in a first state within a graphical user interface on the display;
Detecting a user input event comprising virtual movement within the graphical user interface; and
In response to the user input event:
During the virtual movement, modifying the visual object in correspondence to the virtual movement such that the visual object is converted from the first state to a second state; and
Providing access to content or functionality when the visual object is in the second state;
wherein the virtual movement does not follow a predefined path.
20. Computer software configured to perform the method of any one of claims 1 to 18.
21. A computer readable non-tangible electronic storage medium configured to store the computer software of claim 20.
PCT/GB2018/050382 2017-02-10 2018-02-12 A graphical user interface device and method WO2018146493A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1702286.4 2017-02-10
GBGB1702286.4A GB201702286D0 (en) 2017-02-10 2017-02-10 A graphical user interface device and method

Publications (1)

Publication Number Publication Date
WO2018146493A1 (en) 2018-08-16

Family

ID=58462068

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2018/050382 WO2018146493A1 (en) 2017-02-10 2018-02-12 A graphical user interface device and method

Country Status (2)

Country Link
GB (1) GB201702286D0 (en)
WO (1) WO2018146493A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060284852A1 (en) * 2005-06-15 2006-12-21 Microsoft Corporation Peel back user interface to show hidden functions
US20120098639A1 (en) * 2010-10-26 2012-04-26 Nokia Corporation Method and apparatus for providing a device unlock mechanism
EP2587361A2 (en) * 2011-10-25 2013-05-01 Samsung Electronics Co., Ltd Method and apparatus for displaying e-book in terminal having function of e-book reader
US20130159914A1 (en) * 2011-12-19 2013-06-20 Samsung Electronics Co., Ltd. Method for displaying page shape and display apparatus thereof
US20130205255A1 (en) * 2012-02-06 2013-08-08 Hothead Games, Inc. Virtual Opening of Boxes and Packs of Cards


Also Published As

Publication number Publication date
GB201702286D0 (en) 2017-03-29


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18714008

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18714008

Country of ref document: EP

Kind code of ref document: A1
