US20130191711A1 - Systems and Methods to Facilitate Active Reading
- Publication number
- US20130191711A1 (application US13/876,463)
- Authority
- US
- United States
- Prior art keywords
- document
- virtual workspace
- gesture
- response
- view region
- Prior art date
- Legal status: Granted
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/211
- G06F3/0483—Interaction with page-structured environments, e.g. book metaphor
- G06F3/0485—Scrolling or panning
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
- G06F40/103—Formatting, i.e. changing of presentation of documents
- G06F40/166—Editing, e.g. inserting or deleting
- G06F40/169—Annotation, e.g. comment data or footnotes
- G06F2203/04808—Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously, e.g. using several fingers or a combination of fingers and pen
Definitions
- Various embodiments of the present invention relate to digital documents and, more particularly, to systems and methods to facilitate active reading of digital documents.
- paper supports bimanual interaction and freeform annotation within the boundaries of a single page
- paper lacks the flexibility to provide, for example, content rearrangement, document overviews, and annotation outside of page boundaries.
- the tangibility of paper supports some rapid forms of navigation, such as dog-earing and bookmarking with a finger
- paper provides little flexibility to create a customized navigational structure.
- Modern pen-based computerized tablets do a fine job of imitating paper, which benefits users by providing a familiar medium, but as a result, these pen-based tablets suffer from the same limitations as paper. Thus, neither paper nor modern computer systems adequately facilitate active reading.
- a document review system can provide a novel approach to representing and interacting with documents.
- the document review system can provide a highly flexible, malleable document representation.
- the document review system can provide high degree-of-freedom ways to navigate through and manipulate the document representation, control what document content is displayed and where, and create annotations and other structures related to the document.
- the document review system can include a multi-touch, gesture-based user interface.
- Embodiments of the document review system can provide improvements, as compared to paper and conventional word processing, to each of these processes.
- Annotation can be generally defined as text embellishment, including highlighting and marginalia.
- the review system can provide efficient annotation by enabling convenient switching between annotation tools, by supporting idiosyncratic markings, and by providing a convenient means for retrieving annotations made previously.
- Content extraction generally includes copying or moving content from a document to a secondary location, such as when outlining or note-taking.
- the review system can closely integrate extraction with the reading process, so that the user can organize and view extracted content, as well as link extracted content back to the original document.
- Navigation generally entails moving throughout a document and between multiple documents, such as when searching for text, turning a page, or flipping between selected locations for comparison.
- the review system can support bookmarks and parallelism to facilitate these or other navigational tasks.
- Layout generally refers to the visual or spatial arrangement of the document and related objects.
- the review system can optimize layout according to the user's preferences by enabling distinct portions of the document to be viewed in parallel, while maintaining the document's linearity.
- a document review system can comprise a virtual workspace, a document view region, a preview region, and optional document objects.
- the system can be embodied in one or more computer-readable media and can be executable by one or more computer processors on a computing device.
- the computing device can comprise a multi-touch interface by which a user can interact with the virtual workspace and the overall document review system.
- the virtual workspace can be a working environment in which the user can review a document.
- the virtual workspace can be, for example, a graphical user interface displayed in a virtual window or frame viewable through the multi-touch interface.
- the virtual workspace can be designed to look and feel like a physical desktop or other physical workspace to which the user may be accustomed.
- the virtual workspace can be a relatively unstructured environment, enabling users to place the document objects as desired throughout the virtual workspace.
- the document view region can be contained fully or partially within the virtual workspace.
- the view region can be configured to display a viewable portion of at least one document at a size that enables a user to easily read the text of the document.
- the size of the document in the view region can, however, be increased or decreased as the user desires. If the document is too long to be contained fully within the view region at a given magnification state of the document, then only a portion of the document can be viewable in the view region.
- the document can be displayed in a continuous layout, and in an exemplary embodiment, page breaks in the document can be hidden, so that the document appears to be seamless and unbounded by pagination.
- the preview region can be contained fully or partially within the virtual workspace.
- the preview region can display a larger portion of the document, at a smaller size, than the view region.
- the magnification of the preview region can be such that the entire document can be displayed continuously in the preview region.
- the magnification can be such that the general layout of the document can be determined by the preview region, although the text of the document need not be readable within the region.
- the preview region can be linked to the document view region and can serve various navigational or other purposes.
- the portion of the document displayed in the document view region can change, so as to center in the document view region the portion of the document touched in the preview region.
- the preview region can be utilized to select a portion of the document that is displayed in the document view region.
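As a sketch of this preview-to-view linkage, the mapping from a touch in the preview region to a centered scroll position in the document view region might be computed as follows; the function and parameter names are illustrative assumptions, not part of the disclosed system:

```python
def center_view_on_preview_tap(tap_y, preview_height, doc_length, view_height):
    """Map a tap at tap_y pixels from the top of the preview region to a
    scroll offset that centers the touched document position in the
    document view region. All names here are illustrative."""
    # Fraction of the document represented by the tap position.
    fraction = tap_y / preview_height
    doc_position = fraction * doc_length
    # Center that position: the offset is the top edge of the view window,
    # clamped so the view never scrolls past either end of the document.
    offset = doc_position - view_height / 2
    return max(0.0, min(offset, doc_length - view_height))
```

A tap halfway down the preview of a 2000-pixel document, for example, would center the 1000-pixel mark in a 400-pixel view window, yielding an offset of 800.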
- the document objects can be moveable objects positioned throughout the virtual workspace as desired by the user. In some embodiments, however, such movement can be restricted to areas outside of one or both of the document view region and the preview region, so as not to obstruct these regions.
- a document object can be created by the user to assist the user in actively reading the document. For example, and not limitation, the user can create an excerpt of the document or an annotation, either of which can be encapsulated in a document object, which may be smaller and more easily manipulable than the document as a whole. Once created, the document object can be freely moved about the virtual workspace, so as to enable the user to arrange the virtual workspace in a manner that creates a customized active reading experience.
- the document object can be linked to the portion or portions of the document to which the document object relates.
- the document object can include a visual link, such as an arrow, that the user can touch to cause the one or more documents in the document view region to shift position, thus bringing the related portions into view.
- the document review system can thus enable users to manipulate documents in a way that improves upon paper and other document manipulation systems.
- FIG. 1 illustrates a review system, according to an exemplary embodiment of the present invention.
- FIG. 2 illustrates an architecture of a computing device for providing the review system, according to an exemplary embodiment of the present invention.
- FIG. 3 illustrates a transient bookmark of the review system, according to an exemplary embodiment of the present invention.
- FIG. 4 illustrates a flow diagram of a method of creating a transient bookmark, according to an exemplary embodiment of the present invention.
- FIGS. 5A-5B illustrate collapsing of a document, according to an exemplary embodiment of the present invention.
- FIGS. 6A-6B illustrate an excerpt of the review system 100 , according to an exemplary embodiment of the present invention.
- FIGS. 7A-7B illustrate an annotation of the review system 100 , according to an exemplary embodiment of the present invention.
- FIG. 1 illustrates a review system 100 , or document review system, according to an exemplary embodiment of the present invention.
- the review system 100 can comprise, for example, a touchscreen input device 110 of a computing device 200, a virtual workspace 120, a preview region 130, a document view region 140, an optional one or more document objects 150, and a toolbar 160.
- the touchscreen input device 110 can be a multi-touch input device for interfacing with the virtual workspace 120 and other aspects of the review system 100 .
- the touchscreen input device 110 is a multi-touch device capable of receiving multiple simultaneous touches, thus enabling a user to interact with the review system 100 in a natural manner, using multiple hands and fingers simultaneously.
- a detection system 115 can be integrated with or in communication with the touchscreen input device 110 , to detect user interactions with the touchscreen input device 110 . These user interactions, or gestures, can be interpreted as commands to the review system 100 .
- the review system 100 can alternatively comprise some other multi-point, bimanual, spatial input device capable of receiving a wide array of gestures interpretable as commands.
- the review system 100 can be designed to improve four major processes that occur in active reading: annotation, content extraction, navigation, and layout. Conventional paper-like approaches fall short in facilitating these processes because of their fixed structure and lack of flexibility. Utilizing a multi-touch input device 110 can provide parallel and bimanual input, which are important parts of paper-based reading, and which also enable a flexible environment.
- a mouse as used in most computer-based reading systems, is an inefficient control device because it provides only a single indicator or selector.
- a keyboard also used in most computer-based reading systems, lacks a natural spatial mapping. The flexible interactions made possible by embodiments of the review system 100 inherently offer more degrees of freedom than traditionally offered by computer-based reading systems.
- the multi-touch input device 110 can support multi-point spatial input and is thus capable of efficiently receiving a wide array of gestures for interacting with the review system 100 .
- the terms “touch,” “hold,” and the like need not refer only to physical contact between the user and the touchscreen input device 110 . Such terms can refer to various interactions simulating a physical contact, such as pointing from a distance or bringing a finger, hand, or implement in close proximity to the touchscreen input device 110 , so as to indicate a virtual touching, holding, or the like.
- the definition of a “touch” can be implementation-dependent, wherein the type of touchscreen input device 110 used can determine how interactions are detected and thus how a “touch” or “hold” is defined.
- the touchscreen input device 110 can utilize resistive, capacitive, or camera technologies.
- a “touch” can be defined based on camera sensitivity, or on an instrument's being within a predetermined distance from the touchscreen input device 110 . Additionally, “touch,” “hold,” and like terms need not refer only to interactions between the user's hands or fingers and the touchscreen input device 110 , but can also refer to interactions with an instrument held by the user, such as a stylus, marker, or pen.
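A minimal sketch of this implementation-dependent definition of a "touch" might look like the following; the device types, event fields, and proximity threshold are assumptions for illustration:

```python
def is_touch(event, device_type, proximity_threshold_mm=5.0):
    """Decide whether an input event counts as a 'touch'. The device
    types and event fields here are illustrative: resistive and
    capacitive devices require physical contact, while a camera-based
    device may accept a finger or instrument hovering within a
    predetermined distance of the screen."""
    if device_type in ("resistive", "capacitive"):
        return event.get("contact", False)
    if device_type == "camera":
        # A hover within the threshold counts as a virtual touch.
        return event.get("distance_mm", float("inf")) <= proximity_threshold_mm
    return False
```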
- a display system 118 can be in communication with the detection system 115 , the touchscreen input device 110 , or both.
- the display system 118 can react to user gestures by displaying and refreshing a graphical user interface presented to the user, preferably through the touchscreen input device 110 , which can perform as both an input and an output device.
- This graphical user interface can include the virtual workspace 120, the preview region 130, the document view region 140, and the document objects 150, all of which will be described in more detail below.
- the virtual workspace 120 can be accessible and manipulable through the touchscreen input device 110 .
- the virtual workspace 120 can simulate a physical desktop, in that the user can freely move document objects 150 throughout the workspace 120 without being bound by a fixed structure common in computer-based reading systems.
- the virtual workspace 120 can contain the preview region 130 and the document view region 140 .
- the virtual workspace 120 can comprise the useable space of the review system 100 outside of the preview region 130 and the document view region 140 .
- the review system 100 can present the user with the virtual workspace 120 containing a document 50 , or configured to display a yet-to-be-opened document 50 .
- the user can control the document 50 and other document objects 150 in the virtual workspace 120 with a vocabulary of multi-touch gestures. Through these gestures, the user can navigate, annotate, and manipulate the virtual workspace 120 , rarely having to explicitly select tools or otherwise shift attention away from the document 50 at hand.
- Some basic interactions can be performed in the virtual workspace 120 as one might expect based on conventional touch applications. For example, objects can be repositioned by dragging the objects about the virtual workspace 120 . Rescaling can be performed by a pinching or stretching gesture with two fingers, preferably in a horizontal orientation.
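The two-finger rescaling gesture reduces to comparing finger separation before and after the gesture. A minimal sketch, with illustrative names:

```python
import math

def rescale_factor(start_a, start_b, end_a, end_b):
    """Compute a scale factor from a two-finger pinch/stretch gesture.
    Each argument is an (x, y) touch point. Spreading the fingers
    apart yields a factor > 1 (stretch, i.e. enlarge the object);
    bringing them together yields a factor < 1 (pinch, i.e. shrink)."""
    d_start = math.dist(start_a, start_b)
    d_end = math.dist(end_a, end_b)
    return d_end / d_start
```

The resulting factor would then be applied to the touched object's current size.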
- Other performable gestures and operations are new to the review system 100 , as will be described below in detail.
- the user can open a document 50 in the virtual workspace 120, and the open document 50 can be displayed in one or both of the preview region 130 and the document view region 140.
- various actions are described as being performed or performable on the “text” of the open document 50. It will be understood, however, that all or most of such actions can similarly be performed on embedded objects in the document 50 that are not text, such as images or multimedia.
- text throughout this disclosure is used for illustrative purposes only and is not restrictive.
- the preview region 130 can be configured to display the document 50 at a magnification or size that enables the user to view the general layout of the document 50 .
- the entire document 50 can be viewable in the preview region 130 , so as to present the general layout of the entire document 50 to the user.
- the magnification of the preview region 130 can be adjustable, so that the user can select a magnification size that is best suited to the user's needs.
- the document view region 140 can display at least a portion of the open document 50 .
- the document view region 140 can display the document 50 at a magnification or size enabling the user to easily read the text of the document 50 .
- the magnification of the document 50 in the document view region 140 can be modified by the user to enable the user to select a text size best suiting the user's needs.
- the text of the document 50 can, in either or both of the preview region 130 and the document view region 140 , be presented to the user in a continuous format, with or without pagination. If pagination is provided, then this provision can be for the user's reference only and need not restrict operations of the review system 100 to page boundaries. Some embodiments of the review system 100 can enable the user to select whether pagination is shown, to further customize the user's active reading experience. In the document view region 140 and in the preview region 130 , if the entire document 50 is not visible, then the user can scroll vertically in the respective region 140 or 130 to adjust the visible portion of the document 50 .
- Scrolling can occur when the user performs a predetermined gesture, such as touching the representation of the document 50 and, while maintaining contact with the touchscreen input device 110 , sliding the fingers upward or downward. Sliding downward can cause the document 50 to move downward, thus displaying a previously invisible portion above the previously displayed portion of the document 50 . Analogously, sliding upward can cause the document 50 to move upward, thus displaying a previously invisible portion below the previously displayed portion of the document 50 .
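The scrolling behaviour described above can be sketched as a simple offset update, where the visible window's offset moves opposite to the finger and is clamped to the document bounds; all names are illustrative assumptions:

```python
def scroll_offset(current_offset, drag_dy, doc_length, view_height):
    """Update the visible offset of the document after a drag of
    drag_dy pixels (positive = finger slid downward). Sliding downward
    moves the document downward and reveals content above it, so the
    offset of the visible window decreases; the result is clamped so
    the window stays within the document."""
    new_offset = current_offset - drag_dy
    return max(0.0, min(new_offset, doc_length - view_height))
```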
- the review system 100 can also support “fast scrolling” in the preview region 130 , the document view region 140 , or both. Scrolling at normal speed can occur as described above, in which case the displayed portion of the document 50 can be adjusted up or down corresponding to the distance the user's finger slides while in contact with the touchscreen input device 110 .
- the document 50 can be moved by a distance equivalent to the distance moved by the user's finger while the user's finger is holding the touchscreen input device 110. While normal scrolling is thus an intuitive means to navigate a document, normal scrolling can be inefficient for long documents, when the user seeks to navigate between portions of the document 50 separated by a great distance.
- the review system 100 can also support fast scrolling, which can take advantage of modern touch sensors.
- the review system 100 can detect an amount of pressure, a number of fingers used, or an area of contact for a touch performed in a scrolling gesture.
- the review system 100 can provide fast scrolling in response to, for example, increased pressure, increased number of fingers, or increased contact area of a touch. For example, if the user drags the document 50 with a light touch, the movement of the document 50 in response can simply follow the finger, resulting in normal-speed scrolling. In contrast, if a firmer touch is used, then the movement of the document 50 can correspond to the pressure of the user's touch.
- the document 50 can move in the same direction as the finger, but at a speed corresponding to the pressure applied by the user, where increased pressure corresponds to increased speed and distance, and where decreased pressure corresponds to decreased speed and distance. For example, if the user drags his or her finger over a distance of one inch, the document 50 can move by one, two, three, or six inches, depending on how hard the user presses the touchscreen input device 110 .
- the review system 100 can decrease scrolling speed in response to, for example, decreased pressure, decreased number of fingers, or decreased contact area of a touch in a scrolling gesture.
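The normal-versus-fast scrolling distinction can be sketched as a multiplier applied to the drag distance. The pressure-to-multiplier mapping and thresholds below are illustrative assumptions, not values from the disclosure:

```python
def scroll_distance(drag_distance, pressure=None, finger_count=1):
    """Scale the scroll distance by touch pressure or finger count.
    A light single-finger drag scrolls 1:1 (normal scrolling); firmer
    pressure or more fingers multiply the distance, approximating the
    fast-scrolling behaviour. The mapping here is illustrative."""
    multiplier = max(1, finger_count)
    if pressure is not None:
        # Map normalized pressure in [0, 1] to a multiplier in [1, 6].
        multiplier = max(multiplier, 1 + round(pressure * 5))
    return drag_distance * multiplier
```

Decreased pressure or fewer fingers simply yield a smaller multiplier, reducing the scrolling speed as described.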
- the document objects 150 can be objects created by the user to facilitate the user's active reading process.
- a particular document object 150 can be created by the user, with tools of the review system 100 , to represent and include an excerpt or annotation of the document 50 .
- the document object 150 can contain text, an image, or another annotation or portion of the document 50 .
- the document object 150 can also comprise a link to the portion of the document 50 to which the document object 150 refers.
- an excerpt can contain a link back to the portion of the document 50 from which the excerpt was extracted.
- the link 155 can have a visible representation, such as an arrow, which can point from the document object 150 to the document view region 140 to indicate that the linked portion of the document 50 can be displayed in the document view region 140 .
- the document 50 in the document view region 140 can automatically scroll to display the portion of the document 50 referred to by the document object 150 .
- selecting the link can cause the referred-to portion to be centered within the document view region 140 .
- Selection of the link 155 can occur when the user touches the visible representation of the link 155 .
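Following a document object's link 155 then amounts to centering the linked span of the document in the document view region. A minimal sketch, with illustrative names (a real system would likely animate the scroll rather than jump):

```python
def follow_link(span_start, span_end, doc_length, view_height):
    """Return the scroll offset that centers the document span a
    document object (e.g. an excerpt) was extracted from within the
    document view region. Positions are in document pixels; the
    names are illustrative."""
    midpoint = (span_start + span_end) / 2
    offset = midpoint - view_height / 2
    # Clamp so the view window stays within the document.
    return max(0.0, min(offset, doc_length - view_height))
```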
- Various types and uses of the document objects 150 will be described in more detail later in this disclosure.
- the review system 100 can be embodied in a computer-readable medium and executed by a computer processor to provide one, some, or all aspects of the invention.
- the review system 100 can be integrated into a computing device 200 , such as by being embodied in a software application installed on the computing device.
- FIG. 2 illustrates an architecture of an exemplary computing device into which the review system 100 can be integrated.
- FIG. 2 is for example only, and can be modified to accommodate various embodiments of the review system 100 and particular operational environments.
- the review system 100 can be built on a custom, general-purpose, “query-based,” touch processing system.
- An implementation of the review system 100 can be based on the recognition that touch input relevant to an operation might not be directed at the object of that operation. For example, holding a finger on a document 50 might mean the user wishes to drag the document 50 , or it might mean the user wishes to keep the region under the user's finger from moving. More generally, with arbitrary numbers of fingers on the touchscreen input device 110 , the review system 100 should be able to determine which gesture is indicated by the current number and arrangement of fingers.
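A minimal sketch of such query-based disambiguation, keyed on the number and motion of the fingers currently on the surface; the specific gesture rules here are illustrative assumptions rather than the system's actual gesture grammar:

```python
def classify_gesture(touches):
    """Disambiguate a gesture from the current set of touches.
    Each touch is a dict with a 'moving' flag; the rules are
    illustrative: one moving finger drags the document, one
    stationary finger holds (pins) the region under it, and two
    fingers rescale."""
    if len(touches) == 1:
        return "drag" if touches[0]["moving"] else "hold"
    if len(touches) == 2:
        return "rescale"
    return "unrecognized"
```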
- a computing device 200 embodying the review system 100 can comprise a central processing unit 205 and one or more system memories 207 , such as a random access memory 209 (“RAM”) and a non-volatile memory, such as a read-only memory (“ROM”) 211 .
- the computing device 200 can further comprise a system bus 212 coupling together the memory 207 , the processing unit 205 , and various other components.
- a basic input/output system containing routines to assist in transferring information between components of the computing device 200 can be stored in the ROM 211 .
- the computing device 200 can include a mass storage device 214 for storing an operating system 216 , application programs, and other program modules.
- the mass storage device 214 can be connected to the processing unit 205 through a mass storage controller (not shown) connected to the bus 212 .
- the mass storage device 214 and other computer-readable media can comprise computer storage media, which can include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data.
- Computer storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory, other solid state memory technology, CD-ROM, digital versatile disks (“DVD”), other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, other magnetic storage devices, or various other media used to store data accessible by the computing device 200 .
- a number of program modules and data files can be stored in the computer storage media and RAM 209 of the computing device 200 .
- Such program modules and data files can include an operating system 216 suitable for controlling operations of a networked personal computer.
- a web browser application program, or web client 224 can also be stored on the computer storage media and RAM 209 .
- the web client 224 may comprise an application program for requesting and rendering web pages 226 created in Hypertext Markup Language (“HTML”) or other types of markup languages.
- the web client 224 can be capable of executing scripts through the use of a scripting host.
- the scripting host executes program code expressed as scripts within the browser environment.
- Computer-readable instructions on the storage media of the computing device 200 can include, for example, instructions for implementing processes of the review system 100 or for implementing a web client 224 for receiving instructions from the review system 100 when operated remotely. These instructions can be executed by the computer processor 205 to enable use of the review system 100 .
- the computing device 200 can operate in a networked environment using logical connections to remote computers over a network 250 , such as the Internet.
- the computing device 200 can connect to the network 250 and remote computers through a network interface unit 220 connected to the bus 212 .
- the computing device 200 can also include an input/output controller 222 for receiving and processing input from a number of input devices, including a keyboard, mouse, or electronic stylus. Interactions between the input devices and the review system 100 can be detected by the input/output controller 222 to provide meaningful input to the computing device 200 .
- the input/output controller 222 can additionally provide output to a display screen, a printer, or other type of input/output device, such as the multi-touch input device 110 or other appropriate input device of the review system 100 .
- the review system 100 can provide various mechanisms by which the user can navigate the document 50 and modify the layout of the document 50 for the user's convenience during active reading.
- dog-earing or bookmarking can be supported in a manner that is more convenient than in conventional computer-based systems.
- bookmarking is supported by navigating to a desired page, selecting a bookmark icon or menu item, and then typing a name for the bookmark. Later, when the user wishes to return to a bookmarked location, the user can select the bookmark that was created. And when the bookmark is no longer needed, the user must explicitly delete the bookmark to remove it from the document.
- This bookmarking process is inconvenient and time-consuming in situations where a user intends to create only a temporary bookmark, to facilitate flipping between sections for comparison. When a user desires simply to compare two or more sections of a document, the user must bookmark each section and cycle through the bookmark links to flip between the bookmarked sections.
- FIG. 3 illustrates the use of transient bookmarks 300 in the review system 100 , according to an exemplary embodiment of the present invention, which are an improvement over bookmarking in conventional computer-based systems.
- the review system 100 can provide a much more convenient means of bookmarking, analogous to dog-earing and simply holding one's place in a book with a finger.
- the user can perform a gesture to create a transient bookmark 300 , which can be recalled by a later gesture.
- To create a transient bookmark 300 , the user can simply touch and hold a finger to the document 50 as the user navigates through the document 50 .
- a touch and hold can be interpreted as transient bookmarking only when occurring in a predetermined area of the document view region 140 , such as near the left edge. This need not be the case, however, and in some other embodiments, the touch and hold can occur anywhere on the document 50 to create a transient bookmark 300 .
- the touch and hold can indicate to the review system 100 that the user is holding the currently visible place in the document 50 , as the user continues to scroll through or otherwise navigate the document 50 in the document view region 140 .
- Additional fingers can touch and hold on the document 50 , next to the first finger, to indicate other transient bookmarks 300 within the document 50 , as navigation continues.
- a graphical representation 310 or link of the bookmark 300 , such as an orb, an arrow, or an icon of a bookmark 300 , can be created where the user touches.
- To recall a bookmark 300 , the user can simply lift the finger corresponding to the desired position of the document 50 and then replace the finger again within a predetermined time period.
- the document 50 in the document view region 140 can automatically scroll to display the portion of the document 50 that was visible when the finger originally touched down to create the virtual dog-ear or bookmark 300 .
- the transient bookmark can disappear and be automatically deleted after the predetermined time period, such as several seconds. Replacing the finger on the document 50 , or on the graphical representation 310 of the bookmark 300 , within the predetermined time period can cause the review system 100 to continue saving, or resave, the bookmark 300 . Accordingly, by placing and alternately lifting two or more fingers, the user can mark and switch between positions in the document 50 .
- the user need not waste time naming or deleting bookmarks 300 , but can thus create transient bookmarks 300 by simply touching and holding the document 50 .
- a transient bookmark 300 can save and restore a state of the virtual workspace 120 or of the document view region 140 , as opposed to merely a position within the document 50 .
- a transient bookmark 300 can save the current layout of the document 50 or the current layout of the entire virtual workspace 120 .
- If a portion of the document 50 includes highlighting or is collapsed, as will be described further below, these aspects of the document layout can be restored when a transient bookmark 300 is recalled, such as by the user's lifting a finger.
- a bookmark 300 can capture the placement of document objects 150 or the magnification and rotation of the document view region 140 and document objects 150 .
- FIG. 4 illustrates a flow diagram of an exemplary method 400 of providing a transient bookmark 300 , according to an exemplary embodiment of the present invention.
- the method 400 depicted in FIG. 4 is provided for illustrative purposes and is not limiting, and other methods toward a similar end can also be implemented.
- the review system 100 can receive a transient bookmark 300 gesture, such as a touch and hold in the document view region 140 .
- the review system 100 can save the current state of the virtual workspace 120 .
- the review system 100 can then receive one or more other commands resulting in a change in the state of the virtual workspace 120 .
- the user can continue to navigate the document 50 , thus changing the portion of the document 50 displayed in the document view region 140 .
- the review system 100 can receive a recall gesture for the bookmark 300 , such as the user's releasing from the document view region 140 a finger corresponding to the bookmark 300 and then quickly replacing the finger.
- the review system 100 can save the current state of the virtual workspace 120 and return the virtual workspace 120 to the previous state to which the bookmark 300 corresponds.
- the method 400 of FIG. 4 results in creation and use of a transient bookmark 300 in the review system 100 .
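The transient-bookmark lifecycle described above can be sketched as follows. This is an illustrative sketch only; the class, method names, and the timeout value are assumptions, not part of the disclosed system.

```python
import time

class TransientBookmark:
    """Sketch of the transient-bookmark lifecycle: a touch-and-hold saves
    the workspace state, lifting the finger starts an expiry countdown,
    and replacing the finger within the timeout restores the saved state
    while saving the current one (so two positions can be alternated)."""

    def __init__(self, timeout=3.0, clock=time.monotonic):
        self.timeout = timeout      # seconds before a lifted bookmark expires
        self.clock = clock          # injectable clock, for testing
        self.saved_state = None     # workspace state captured on touch-and-hold
        self.lifted_at = None       # time at which the holding finger lifted

    def touch_and_hold(self, workspace_state):
        # Receive the bookmark gesture and save the current workspace state.
        self.saved_state = workspace_state
        self.lifted_at = None

    def lift_finger(self):
        # Lifting starts the predetermined expiry period for this bookmark.
        self.lifted_at = self.clock()

    def replace_finger(self, current_state):
        # A re-touch within the timeout recalls the bookmark: the current
        # state is saved (resaved) and the bookmarked state is restored.
        if self.lifted_at is None or self.saved_state is None:
            return None
        if self.clock() - self.lifted_at > self.timeout:
            self.saved_state = None  # bookmark expired and is deleted
            return None
        restored = self.saved_state
        self.saved_state = current_state
        self.lifted_at = None
        return restored
```

Alternately lifting and replacing fingers thus switches between two saved positions, mirroring holding one's place in a book with a finger.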
- Another tool provided by the review system 100 is collapsing, which is not efficiently provided in either paper or conventional computer-based systems.
- the review system 100 seeks to treat a document 50 in a fluid manner, instead of as a rigid structure. Collapsing is a tool to that end, enabling a user to focus on important parts of the document 50 in the context of the document's original layout, without being distracted by less important portions of the document 50 .
- collapsing is a process of squishing, minimizing, or squeezing an intermediate portion of the document 50 , so as to bring together two portions of the document 50 separated by that intermediate portion.
- FIGS. 5A-5B illustrate an example of collapsing a document 50 , where FIG. 5A shows the document 50 in an uncollapsed state, and FIG. 5B shows the document 50 after being collapsed.
- an intermediate section B of the document 50 can be collapsed to bring separate sections A and C closer together.
- Although the distinct sections A and C of the document 50 were both simultaneously viewable in the document view region 140 even before collapsing, this need not be the case.
- a first section A may be far removed from a second section C within the document 50 , such that both sections would not be simultaneously viewable in the document 50 at a readable magnification, without collapsing the document 50 .
- the review system 100 can collapse the document 50 in response to a collapse gesture received from the user.
- the collapse gesture can be a pinching gesture, whereby the user places two fingers, usually a thumb and forefinger, on the touchscreen input device 110 , and then moves the fingers closer together while maintaining the touch, thus creating a pinching motion. Pinching to initiate collapsing is intuitive because it corresponds to simultaneously scrolling in two directions, where the top finger of the pinch scrolls downward, while the bottom finger scrolls upward. As a result of this opposite-direction scrolling, the document 50 is collapsed.
- magnification of the document view region 140 can also be adjusted with a pinching motion.
- the gestures indicating collapse and magnification can be distinguished based on orientation of the pinching. For example, and not limitation, magnification can be initiated by a horizontal pinching gesture, while collapsing can be initiated by a vertical pinching gesture.
- a subtlety of the pinching gesture is that the user can control many aspects of the collapse process by the manner of pinching. For example, and not limitation, if the user moves his or her top finger toward the bottom finger, then the portion of the document 50 below the fingers can remain stationary while the part above the fingers can move and collapse downward. Analogously, if the user moves his or her bottom finger while leaving the top finger stationary, the reverse can occur. If the user moves both fingers toward each other, then both the above and below portions of the document 50 can move toward each other and collapse together in the process. Further, the distance by which the user moves his or her fingers can control how much of the document is collapsed. Therefore, the user can perform a complex command, with many degrees of freedom, by way of a one-hand movement.
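The orientation-based disambiguation and the per-finger collapse dynamics described above can be sketched as follows. The function name, the coordinate convention (y increases downward), and the return structure are assumptions for illustration only.

```python
def interpret_pinch(top_start, top_end, bottom_start, bottom_end):
    """Sketch of interpreting a two-finger pinch. The orientation of the
    line between the two starting touches selects the command: a mostly
    vertical pinch collapses, a mostly horizontal pinch magnifies. For a
    collapse, each finger's travel controls how much of its side of the
    document collapses; a stationary finger leaves its side fixed."""
    sx = bottom_start[0] - top_start[0]
    sy = bottom_start[1] - top_start[1]
    command = "collapse" if abs(sy) >= abs(sx) else "magnify"
    if command == "magnify":
        return {"command": "magnify"}
    top_moved = top_end[1] - top_start[1]            # downward travel of top finger
    bottom_moved = bottom_start[1] - bottom_end[1]   # upward travel of bottom finger
    return {
        "command": "collapse",
        "collapse_above": max(top_moved, 0),   # amount collapsed above the pinch
        "collapse_below": max(bottom_moved, 0) # amount collapsed below the pinch
    }
```

The finger-travel distances give the gesture its several degrees of freedom: moving only the top finger collapses the upper portion, moving only the bottom finger collapses the lower portion, and moving both collapses both.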
- one or more other gestures can also be interpreted as a collapse command.
- a collapse gesture performed on the preview region 130 can be used to initiate collapsing.
- when the user simultaneously touches two separate sections of the document 50 in the preview region 130 , the review system 100 can interpret such touching as a collapse gesture.
- Yet another collapse gesture can comprise the user's touching and holding a first section A of the document 50 in the document view region 140 and then touching a second section C in the preview region 130 , or the user can touch and hold the first section A in the preview region 130 and then touch the second section C in the document view region 140 .
- the review system 100 can automatically collapse the document 50 and, more specifically, can collapse the intermediate section B between the separate sections A and C that were touched by the user in the preview region 130 or the document view region 140 .
- Performing a version of the collapse gesture on the preview region 130 can be particularly useful when the sections A and C that the user desires to bring closer together are separated by a large amount of space within the document 50 . In that case, when a large intermediate section B of the document 50 needs to be collapsed, pinching can become time-consuming.
- the preview region 130 can be used to initiate collapsing in an efficient manner.
- Collapsing can provide a number of benefits to the user during active reading. As shown in FIG. 5B , collapsing can enable the user to simultaneously view two distinct sections of the document 50 while retaining the linearity of the document 50 and the context of the two sections A and C. For example, although a portion of the intermediate section B between the distinct sections A and C may not be readable after collapsing, some of the intermediate section B can remain readable, so as to enable the user to see the context of the two sections A and C brought closer together by the collapsing.
- Retaining the document's linearity can be beneficial to the user because it can enable the user to maintain awareness of where he or she is within the document 50 and, thus, to maintain awareness of the general flow and organization of the document 50 . Additionally, because the collapsed portion is still visible to the user, although not necessarily readable, collapsing can provide the user with a visual cue as to the amount of text lying between the two distinct sections A and C of the document 50 .
- collapsing within a single document 50 need not be limited to bringing two sections closer together. Rather, collapsing can also be used to reduce the distraction caused by multiple unimportant sections. Further, multiple collapsed sections can be present within the document 50 simultaneously, so as to enable the user to modify the spatial arrangement of the document 50 and view only the sections of the document 50 that hold interest for the user, while collapsing less interesting sections, maintaining the linearity of the document 50 , and enabling the user to view the context of the sections that remain readable.
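A layout supporting several simultaneously collapsed sections, as described above, can be sketched as follows. The representation (a list of section heights plus a set of collapsed indices) and the squeezed height are illustrative assumptions.

```python
def layout_with_collapses(section_heights, collapsed, squeezed_height=12):
    """Sketch of laying out a document containing multiple collapsed
    sections at once. Sections keep their original order, preserving the
    document's linearity; a collapsed section is squeezed to a small
    fixed height so it remains visible as a cue to how much text it
    hides, rather than being removed entirely."""
    layout, y = [], 0
    for i, h in enumerate(section_heights):
        shown = min(h, squeezed_height) if i in collapsed else h
        layout.append({"section": i, "y": y, "height": shown,
                       "collapsed": i in collapsed})
        y += shown
    return layout, y  # y is the total rendered height after collapsing
```

Because collapsed sections are squeezed rather than hidden, the reader retains both the document's flow and a visual estimate of the amount of intervening text.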
- the review system 100 can uncollapse a portion of collapsed text upon receiving an uncollapse gesture.
- an uncollapse gesture can comprise the user's brushing or swiping a hand or finger upward or downward across the collapsed portion.
- An upward swipe can cause the review system 100 to uncollapse the document 50 upward, so as to maintain the bottom position of the collapsed portion upon uncollapsing.
- a downward swipe can initiate a downward uncollapsing.
- Another important aspect of active reading is text selection and emphasis.
- the user may wish to emphasize, extract, or otherwise manipulate portions of the document 50 . In order for such manipulation to occur, however, the user can sometimes be required first to select the portion of the document 50 to be manipulated.
- the review system 100 can provide a means for selecting text in a document 50 .
- the review system 100 can select a block of text in the document 50 , preferably displayed in the document view region 140 , in response to receiving a selection gesture from the user.
- the selection gesture can comprise the user's touching a forefinger and middle finger, or other detectable set of two fingers, to the touchscreen input device 110 over the document view region 140 , where the forefinger is positioned just below the starting point of the intended selection area in the document 50 .
- the user can remove the middle finger and, while maintaining the touch of the forefinger, slide the forefinger to the end of the text to be selected. Then the user can remove the forefinger to end the touch.
- the review system 100 can interpret the above, or some other, selection gesture as a command to select the text between the start and end points of the touch. To confirm that the indicated text was selected, the review system 100 can temporarily emphasize the selected portion, such as by coloring, highlighting, underlining, or enlarging the selected portion in the document view region 140 . Unlike some conventional touch-based systems, the review system 100 need not rely on dwell time to detect that a selection gesture is occurring, and the user need not hold his or her hand or fingers in a single position for an extended period of time in order for the selection gesture to be recognized by the review system 100 .
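The dwell-free selection gesture can be sketched as a small state machine. The event names and one-dimensional positions below are assumptions for illustration; a real implementation would receive touch events from the input controller.

```python
class SelectionGesture:
    """Sketch of the two-finger selection gesture: two simultaneous
    touches arm the gesture immediately (no dwell time), the middle
    finger lifts, the forefinger drags to the end of the intended
    selection, and lifting the forefinger completes the selection."""

    def __init__(self):
        self.armed = False
        self.start = None
        self.end = None

    def fingers_down(self, forefinger_pos, middle_pos):
        # Two simultaneous touches arm the gesture with no waiting period.
        self.armed = True
        self.start = forefinger_pos
        self.end = forefinger_pos

    def middle_lift(self):
        pass  # lifting the middle finger leaves the gesture armed

    def drag(self, pos):
        # The remaining finger slides to the end of the text to select.
        if self.armed:
            self.end = pos

    def forefinger_lift(self):
        # Lifting the forefinger completes the selection; the span would
        # then be temporarily emphasized to confirm the selection.
        if not self.armed:
            return None
        self.armed = False
        return (self.start, self.end)
```

A multiple-selection gesture can then be modeled as a sequence of such completed selections, each retained in a list.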
- the user can select multiple sections of text, thus enabling the user to perform an action on the multiple selections simultaneously.
- the review system 100 can create multiple selections in response to a multiple-selection gesture.
- the multiple-selection gesture can comprise, for example, selecting a first section of text as discussed above, and then touching and holding that selected section while creating a second selection elsewhere in the document 50 . Alternatively, however, the user need not hold a selected section to begin selecting other sections of the document 50 .
- the review system 100 can simply detect that multiple selections are being made in sequence, and can thus retain all selections. In that case, a multiple-selection gesture can simply be a sequence of selection gestures. All currently selected portions of the document 50 can be emphasized to indicate to the user that selection was successful.
- After selecting a portion of the document 50 , the user can highlight that selected portion to maintain an emphasized state of the selected text.
- the review system 100 can recognize a highlighting gesture performed by the user to highlight the selected or otherwise-indicated portion of the document 50 .
- the highlighting gesture can comprise the user's touching a highlight button 180 (see FIG. 1 ) in the virtual workspace 120 or on the toolbar 160 before or after completing the selection.
- the review system 100 can highlight the selected portion of the document 50 , such as by providing a background color for the selected portion.
- the review system 100 can provide the user with one or more colors with which to highlight text in the document 50 . If multiple colors are available, then the user can select a desired color, and that selected color can be the active highlighting color used to highlight text when the user so indicates.
- FIGS. 6A-6B illustrate creation of an excerpt 600 in the review system 100 , according to an exemplary embodiment of the present invention. More specifically, FIG. 6A illustrates a selected section of text within the document 50 , and FIG. 6B illustrates the virtual workspace 120 after the selected section has been extracted into an excerpt 600 .
- the review system 100 can create an excerpt 600 in response to an excerpt gesture, which can comprise a selection gesture in combination with an extraction gesture.
- an excerpt gesture can comprise a selection gesture in combination with an extraction gesture.
- the user can touch and hold the document 50 with one finger or hand, and then touch and drag the selected text from the document view region 140 into a portion of the virtual workspace 120 outside of the document view region 140 .
- This can be an intuitive gesture, because performing the gesture simply requires the user, after initial selection, to simulate holding the document 50 in place with one hand, while dragging a portion of the document 50 away with the other hand.
- an excerpt 600 can be encapsulated or embodied in an excerpt object 650 , a type of document object 150 moveable throughout the virtual workspace 120 .
- the excerpt object 650 can include the text extracted from the document 50 during the excerpt's creation. In an exemplary embodiment, this text is not removed from the document 50 in the document view region 140 , but is simply duplicated into the excerpt objects 650 for the user's convenience, while maintaining the linearity and content of the document 50 in the document view region 140 .
- the excerpt object 650 can comprise a link 155 back to the portion of the document 50 from which it was extracted. That link 155 can have a graphical representation, such as an arrow, visible on or near the excerpt object 650 in the virtual workspace 120 .
- when the link 155 is selected, the document view region 140 can automatically return to the portion of the document 50 referred to by the excerpt object 650 .
- If the document view region 140 no longer displays the section of the document 50 from which the excerpt 600 was extracted, that section of the document 50 can automatically become centered in the document view region 140 when the user selects the arrow or other representation of the link 155 contained by the excerpt object 650 .
- the user can retrieve the portion of the document 50 referred to by an excerpt object 650 by simply selecting the link 155 of the excerpt object 650 .
- the portion of the document 50 that was extracted to the excerpt object 650 can contain a link 55 to the excerpt object 650 .
- the link 55 in the document view region 140 can have a graphical representation, such as an arrow. This arrow can be positioned on or near the extracted portion of the document 50 in the document view region 140 .
- When the link 55 is selected, the excerpt object 650 referred to by the link 55 can be emphasized by the review system 100 , to enable the user to locate the excerpt object 650 . Emphasis can take various forms.
- the excerpt object 650 can automatically be placed in front of other document objects 150 that may appear in the virtual workspace 120 and that may block the user's view of the excerpt object 650 .
- the excerpt object 650 can flash, change colors, or be emphasized in various other manners to enable the user to locate the excerpt object 650 as a result of the user's selection of the link 55 within the document 50 .
- the review system 100 can establish a pair of bidirectional links enabling the user to maintain a connection between the excerpt 600 and the portion of the document 50 from which the excerpt 600 was extracted.
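Excerpt creation with its pair of bidirectional links can be sketched as follows. The dictionary-based representation and all field names are assumptions for illustration, not the disclosed data format.

```python
def create_excerpt(document_text, start, end, workspace):
    """Sketch of extracting an excerpt: the selected span is duplicated
    (not removed) into an excerpt object placed in the workspace, and a
    bidirectional link pair is recorded -- the excerpt links back to its
    source span (link 155), and the document span links forward to the
    excerpt object (link 55)."""
    excerpt = {
        "kind": "excerpt",
        "text": document_text[start:end],   # duplicated for convenience;
                                            # the document keeps its content
        "link_to_source": (start, end),     # excerpt -> document (link 155)
    }
    workspace["objects"].append(excerpt)
    # document -> excerpt (link 55), shown as an arrow near the source span
    workspace["document_links"].append({"span": (start, end),
                                        "target": excerpt})
    return excerpt
```

Selecting either half of the pair would then navigate to the other: scrolling the document view to the span, or bringing the excerpt object to the front of the workspace.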
- a large shortcoming of paper is the constraint that paper places on textual annotations, such as comments and notes.
- Annotations on paper must generally be fit to the space of a small margin, and are typically only able to refer to text appearing within a single page.
- Although software products like Microsoft Word® and Adobe Acrobat® avoid some of these difficulties, these software products still largely follow paper's paradigm.
- annotations created by these software products are thus limited to a single referent on a single page, and the user is provided little control over the size and scale of annotations.
- the review system 100 can overcome these difficulties by providing a flexible visual-spatial arrangement.
- FIGS. 7A-7B illustrate creation of an annotation 700 in the review system 100 , according to an exemplary embodiment of the present invention. More specifically, FIG. 7A illustrates selection of text in the document 50 to which an annotation 700 will refer, and FIG. 7B illustrates an annotation object 750 referring back to the text selected in FIG. 7A .
- Creation of an annotation 700 in the review system 100 can begin with selection of text in the document 50 , as displayed in the document view region 140 , or with selection of text in a preexisting document object 150 .
- the user can simply begin typing, or the user can select an annotation button and then begin typing.
- the review system 100 can then interpret the typed text as an annotation 700 , which can be encapsulated in an annotation object 750 , a type of document object 150 .
- the typed input received from the user can be displayed in the annotation object 750 .
- the annotation object 750 need not refer to only a single portion of text, in the document 50 or in another document object 150 .
- an annotation object 750 referring to multiple portions can be created when the user selects two or more sections of text, using the multiple selection gesture, and then types the annotation text.
- an annotation 700 can be created for multiple sections by touching and holding each intended section within the preview region 130 , the document view region 140 , document objects 150 , or some combination of these, and then typing or selecting an annotation button.
- the annotation object 750 can have many similarities to an excerpt object 650 , given that both are types of document objects 150 , which will be described in more detail below.
- the review system 100 can create a bidirectional link between each annotation object 750 and the portion or portions of text referred to by the annotation object 750 .
- the annotation object 750 can thus contain a link 155 back to the one or more text portions of the document 50 or other document objects 150 to which the annotation object 750 refers. That link 155 can have a graphical representation, such as an arrow, linking the annotation object 750 back to the portions of text to which the annotation 700 refers.
- the annotation object 750 can have a separate link 155 for each portion of text to which the annotation object 750 refers, while in other embodiments, a single link 155 can be used to refer back to all of the related portions of text in the document 50 or elsewhere.
- If a single link 155 is used, then when the user selects that single link 155 of the annotation object 750 , the document 50 can automatically collapse to simultaneously display any portions of the document 50 linked to the annotation 700 , and any document objects 150 linked to the annotation object 750 can automatically move into view in front of other document objects 150 in the virtual workspace 120 .
- the user can touch and hold multiple links 155 of an annotation object 750 to prompt the review system 100 to collapse the document 50 and recall the linked document objects 150 , as needed to display the multiple linked portions of text.
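Automatically collapsing the document so that every portion linked to an annotation becomes visible at once can be sketched as computing the intermediate stretches between the linked spans, which are the candidates for collapsing. The span representation (character offsets) is an assumption.

```python
def collapse_between(linked_spans, doc_length):
    """Sketch of finding the regions to collapse so that all spans
    linked to an annotation can be displayed simultaneously: everything
    between, before, and after the linked spans is returned as a list of
    (start, end) regions to squeeze, while the linked spans themselves
    stay at full size and in their original order."""
    collapsed, cursor = [], 0
    for start, end in sorted(linked_spans):
        if start > cursor:
            collapsed.append((cursor, start))  # intermediate stretch to squeeze
        cursor = max(cursor, end)
    if cursor < doc_length:
        collapsed.append((cursor, doc_length))
    return collapsed
```

The same computation applies whether the annotation has one link per span or a single link referring to all spans: the linked spans remain readable while the stretches around them collapse.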
- Document objects 150 can be located in the virtual workspace 120 and can be manipulated in a manner similar to physical objects in a physical workspace.
- a document object 150 can be freely moved about the virtual workspace 120 and positioned in the workspace 120 wherever the user desires.
- Document objects 150 can be placed over one another, so as to hide each other or to bring one document object 150 into view at the expense of the visibility of another document object 150 .
- the size and number of document objects 150 that can be placed on the virtual workspace 120 need not have a predetermined limit, so the user can create and manipulate as many document objects 150 as the user desires to fit into the virtual workspace 120 .
- the review system 100 can recognize a resizing gesture, such as a pinching gesture, for modifying the size of an individual document object 150 .
- the user may desire to selectively and temporarily enlarge or shrink individual or groups of document objects 150 in the virtual workspace 120 , as shown by an exemplary enlarged document object 150 e in FIG. 1 .
- the review system 100 can selectively enlarge or shrink one or more individual document objects 150 in response to the user's performance of the resizing gesture on the individual document objects 150 .
- a first document object 150 can contain a link or links 155 to one or more portions of the document 50 or other document objects 150 associated with the first document object 150 .
- the link 155 can be part of a bidirectional link, where the other part of the bidirectional link is associated with the document 50 in the document view region 140 , or with another document object 150 , and refers back to the first document object 150 . Selecting a link 155 of the first document object 150 can cause the document 50 in the document view region 140 to scroll, so as to position the related portion of the document 50 at the vertical center of the document view region 140 .
- If a link 155 connects to another document object 150 , then when the link 155 is selected, that other document object 150 can be automatically brought into view over other document objects 150 .
- the document 50 in the document view region 140 can collapse, scroll, or collapse and scroll as needed to simultaneously display all portions of the document 50 referred to by the links 155 .
- linked document objects 150 can also be brought into view as necessary to display the text referred to by the links 155 .
- If selected links 155 additionally refer to portions of a second document 50 in a second document view region 140 , that second document 50 and second document view region 140 can also be modified as needed to display the text referred to by the selected links 155 .
- a bidirectional link between two or more document 50 portions can be created in response to a linking gesture.
- a linking gesture can include, for example, selecting the desired document 50 portions and then touching the desired portions simultaneously.
- the review system 100 can create a bidirectional link between the selected portions of the document 50 .
- selection of the link at one of the linked document 50 portions can automatically cause the other linked portions to come into view.
- document objects 150 can also be attachable to one another, to enable the user to rearrange the document objects 150 and the virtual workspace 120 as needed.
- the user can touch and drag one document object 150 until it contacts another.
- the two document objects 150 can then be attached to each other, until the user touches both of them and drags them away from each other.
- moving a primary one of those attached document objects 150 can cause all of the attached document objects 150 to move together, maintaining their spatial relationships with one another.
- the primary document object 150 can be, for example, the document object 150 positioned at the highest point in the virtual workspace 120 , as compared to the other grouped document objects 150 .
- the user can group annotations 700 and excerpts 600 together to assist the user in performing the organizational aspects of active reading. Further, even after grouping document objects 150 together, the user can continue to rearrange the virtual workspace 120 to best suit the user's needs.
- document objects 150 within a group can have a parent-child hierarchy, where a primary document object 150 , such as the highest positioned or the first to become a member of the group, can be a parent of a lower positioned or later-grouped document object 150 .
- a parent document object 150 can control the movement of its child or children, such that when the user moves the parent document object 150 , the child document object 150 automatically moves, thus maintaining its spatial relationship to its parent document object 150 .
- When a child document object 150 is moved, its parent need not follow.
- the same parent-child principles can apply to manipulations of document objects 150 other than repositioning.
- Magnification, resizing, and deletion can also be inherited by a child document object 150 from a parent document object 150 , such that the child document object 150 can be resized, magnified, or deleted automatically along with its parent document object 150 .
- manipulations performed to a child document object 150 need not be inherited by a parent document object 150 .
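The one-way parent-child inheritance described above can be sketched as follows. The class and method names are illustrative assumptions; the point is that manipulations propagate downward from parent to child but never upward.

```python
class DocumentObject:
    """Sketch of grouped document objects with a parent-child hierarchy:
    moving, resizing, or deleting a parent propagates to its children
    (which keep their spatial relationship to the parent), while
    manipulating a child leaves its parent untouched."""

    def __init__(self, x, y):
        self.x, self.y = x, y
        self.scale = 1.0
        self.deleted = False
        self.children = []

    def attach(self, child):
        # e.g. attached by dragging one object into contact with another
        self.children.append(child)

    def move(self, dx, dy):
        self.x += dx
        self.y += dy
        for c in self.children:   # children follow, preserving their
            c.move(dx, dy)        # offsets relative to the parent

    def resize(self, factor):
        self.scale *= factor
        for c in self.children:
            c.resize(factor)

    def delete(self):
        self.deleted = True
        for c in self.children:
            c.delete()
```

Because only the parent's methods recurse into its children, a child can be repositioned or resized independently without disturbing the rest of the group.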
- the review system 100 can enable the user to save the current state of the virtual workspace 120 .
- the review system 100 can export the virtual workspace 120 by printing to paper, printing to Adobe PDF, or exporting to an image.
- the review system 100 can be associated with a proprietary document format. If the user saves the virtual workspace 120 in this format, then the user can return to the virtual workspace 120 to continue active reading in the same state in which the virtual workspace 120 was saved.
- Embodiments of the review system can thus be used to facilitate active reading, by providing a fluid-like, non-rigid, reading environment customizable by a user. While the review system has been disclosed in exemplary forms, many modifications, additions, and deletions may be made without departing from the spirit and scope of the system, method, and their equivalents, as set forth in the following claims.
Description
- This application claims priority to PCT Patent Application No. PCT/US2010/050911, filed 30 Sep. 2010, which claims a benefit under 35 U.S.C. §119(e) of U.S. Provisional Application Ser. No. 61/247,279, filed 30 Sep. 2009. The entire contents and substance of these two prior applications are hereby incorporated by reference as if fully set out below.
- Various embodiments of the present invention relate to digital documents and, more particularly, to systems and methods to facilitate active reading of digital documents.
- From magazines and novels to reviews of important documents, reading forms a critical part of our lives, and many reading tasks involve a rich interaction with the text. This rich interaction, known as active reading, can be conducted to answer questions, perform analysis, or obtain information. Active reading can involve highlighting, annotating, outlining, note-taking, comparing, and searching. As a result, active reading generally demands more of a reading medium than simply an ability to advance pages.
- Although paper supports bimanual interaction and freeform annotation within the boundaries of a single page, paper lacks the flexibility to provide, for example, content rearrangement, document overviews, and annotation outside of page boundaries. Additionally, although the tangibility of paper supports some rapid forms of navigation, such as dog-earing and bookmarking with a finger, paper provides little flexibility to create a customized navigational structure. Modern pen-based computerized tablets do a fine job of imitating paper, which benefits users by providing a familiar medium, but as a result, these pen-based tablets suffer from the same limitations as paper. Thus, neither paper nor modern computer systems adequately facilitate active reading.
- There is a need for a document review system to provide a fluid-like environment in which users can freely and flexibly manipulate, rearrange, and annotate documents without many of the restrictions inherent in paper. It is to such systems and related methods that various embodiments of the present invention are directed.
- Briefly described, various embodiments of the present invention are review systems and methods for facilitating active reading of documents, by providing a fluid-like interface with advantages over physical paper and conventional word processing systems. According to embodiments of the present invention, a document review system can provide a novel approach to representing and interacting with documents. In contrast to the paper model, which offers a stable but rigid representation, the document review system can provide a highly flexible, malleable document representation. The document review system can provide high degree-of-freedom ways to navigate through and manipulate the document representation, control what document content is displayed and where, and create annotations and other structures related to the document. To this end, the document review system can include a multi-touch, gesture-based user interface.
- Earlier work has shown that active reading involves four core processes: annotation, content extraction, navigation, and layout. Embodiments of the document review system can provide improvements, as compared to paper and conventional word processing, to each of these processes. Annotation can be generally defined as text embellishment, including highlighting and marginalia. The review system can provide efficient annotation by enabling convenient switching between annotation tools, by supporting idiosyncratic markings, and by providing a convenient means for retrieving annotations made previously. Content extraction generally includes copying or moving content from a document to a secondary location, such as when outlining or note-taking. In an exemplary embodiment, the review system can closely integrate extraction with the reading process, so that the user can organize and view extracted content, as well as link extracted content back to the original document. Navigation generally entails moving throughout a document and between multiple documents, such as when searching for text, turning a page, or flipping between selected locations for comparison. The review system can support bookmarks and parallelism to facilitate these or other navigational tasks. Layout generally refers to the visual or spatial arrangement of the document and related objects. The review system can optimize layout according to the user's preferences by enabling distinct portions of the document to be viewed in parallel, while maintaining the document's linearity.
- More specifically, in an exemplary embodiment, a document review system can comprise a virtual workspace, a document view region, a preview region, and optional document objects. The system can be embodied in one or more computer-readable media and can be executable by one or more computer processors on a computing device. The computing device can comprise a multi-touch interface by which a user can interact with the virtual workspace and the overall document review system.
- The virtual workspace can be a working environment in which the user can review a document. The virtual workspace can be, for example, a graphical user interface displayed in a virtual window or frame viewable through the multi-touch interface. In an exemplary embodiment, the virtual workspace can be designed to look and feel like a physical desktop or other physical workspace to which the user may be accustomed. The virtual workspace can be a relatively unstructured environment, enabling users to place the document objects as desired throughout the virtual workspace.
- The document view region can be contained fully or partially within the virtual workspace. When a user opens one or more documents in the workspace, at least part of each document can be displayed in the view region. In an exemplary embodiment, the view region can be configured to display a viewable portion of at least one document at a size that enables a user to easily read the text of the document. The size of the document in the view region can, however, be increased or decreased as the user desires. If the document is too long to be contained fully within the view region at a given magnification state of the document, then only a portion of the document can be viewable in the view region. The document can be displayed in a continuous layout, and in an exemplary embodiment, page breaks in the document can be hidden, so that the document appears to be seamless and unbounded by pagination.
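Rescaling of this kind is commonly driven by a two-finger pinch or stretch gesture, as described later in this disclosure. A minimal sketch, assuming Euclidean contact coordinates (the function name and convention are illustrative):

```python
import math

def pinch_scale_factor(start_a, start_b, end_a, end_b):
    """Return the magnification factor implied by a two-finger gesture.

    Each argument is an (x, y) contact point; the factor is the ratio
    of the current finger span to the starting span, so fingers
    spreading apart magnify and fingers pinching together shrink.
    """
    start_span = math.dist(start_a, start_b)
    end_span = math.dist(end_a, end_b)
    if start_span == 0:
        return 1.0  # degenerate gesture: leave magnification unchanged
    return end_span / start_span
```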
- Like the document view region, the preview region can be contained fully or partially within the virtual workspace. The preview region can display a larger portion of the document, at a smaller size, than the view region. In an exemplary embodiment, the magnification of the preview region can be such that the entire document can be displayed continuously in the preview region. Alternatively, however, the magnification can be such that the general layout of the document can be determined by the preview region, although the text of the document need not be readable within the region. The preview region can be linked to the document view region and can serve various navigational or other purposes. For example, and not limitation, when a user touches a point in the document within the preview region, the portion of the document displayed in the document view region can change, so as to center in the document view region the portion of the document touched in the preview region. Thus, the preview region can be utilized to select a portion of the document that is displayed in the document view region.
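The preview-to-view linking described above reduces to a coordinate mapping: the touched fraction of the preview selects the same fraction of the document, which is then centered in the view region. A one-dimensional sketch, with hypothetical names and units:

```python
def center_view_on_preview_touch(touch_y, preview_height, doc_length, view_height):
    """Map a touch in the preview region to a scroll offset for the
    document view region.

    The touched fraction of the preview selects the same fraction of
    the document; the returned offset places that point at the center
    of the view, clamped so the view stays within the document.
    """
    target = (touch_y / preview_height) * doc_length
    top = target - view_height / 2.0
    return max(0.0, min(top, doc_length - view_height))
```

Touching the middle of the preview thus centers the middle of the document, while touches near either end clamp to the document's bounds.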
- The document objects can be moveable objects positioned throughout the virtual workspace as desired by the user. In some embodiments, however, such movement can be restricted to areas outside of one or both of the document view region and the preview region, so as not to obstruct these regions. A document object can be created by the user to assist the user in actively reading the document. For example, and not limitation, the user can create an excerpt of the document or an annotation, either of which can be encapsulated in a document object, which may be smaller and more easily manipulable than the document as a whole. Once created, the document object can be freely moved about the virtual workspace, so as to enable the user to arrange the virtual workspace in a manner that creates a customized active reading experience. The document object can be linked to the portion or portions of the document to which the document object relates. For example, the document object can include a visual link, such as an arrow, that the user can touch to cause the one or more documents in the document view region to shift position, thus bringing the related portions into view.
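A document object's link-back behavior can be sketched as follows. The class and the centering callback are illustrative assumptions, not the disclosed implementation:

```python
class Excerpt:
    """A movable document object holding extracted content and a link
    back to the source passage from which it was taken."""

    def __init__(self, text, source_offset):
        self.text = text
        self.source_offset = source_offset  # position in the document
        self.x = 0.0
        self.y = 0.0

    def move_to(self, x, y):
        # Document objects can be placed freely about the workspace.
        self.x, self.y = x, y

    def follow_link(self, center_on):
        # Touching the link's visible representation (e.g., an arrow)
        # scrolls the document view to the linked passage.
        center_on(self.source_offset)
```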
- The document review system can thus enable users to manipulate documents in a way that improves upon paper and other document manipulation systems. Other objects, features, and advantages of the review system will become more apparent upon reading the following specification in conjunction with the accompanying drawing figures.
- FIG. 1 illustrates a review system, according to an exemplary embodiment of the present invention.
- FIG. 2 illustrates an architecture of a computing device for providing the review system, according to an exemplary embodiment of the present invention.
- FIG. 3 illustrates a transient bookmark of the review system, according to an exemplary embodiment of the present invention.
- FIG. 4 illustrates a flow diagram of a method of creating a transient bookmark, according to an exemplary embodiment of the present invention.
- FIGS. 5A-5B illustrate collapsing of a document, according to an exemplary embodiment of the present invention.
- FIGS. 6A-6B illustrate an excerpt of the review system 100, according to an exemplary embodiment of the present invention.
- FIGS. 7A-7B illustrate an annotation of the review system 100, according to an exemplary embodiment of the present invention.
- To facilitate an understanding of the principles and features of the invention, various illustrative embodiments are explained below. In particular, the invention is described in the context of a review system enabling a user to interact with documents in a fluid-like environment, thus facilitating active reading. Embodiments of the invention, however, are not limited to this context. Rather, embodiments of the invention can provide a freeform, fluid-like environment for performing a variety of tasks.
- The components described hereinafter as making up various elements of the invention are intended to be illustrative and not restrictive. Many suitable components that can perform the same or similar functions as components described herein are intended to be embraced within the scope of the invention. Such other components not described herein can include, but are not limited to, similar or analogous components developed after development of the invention.
- Various embodiments of the present invention are review systems to facilitate active reading. Referring now to the figures, in which like reference numerals represent like parts throughout the views, various embodiments of the review system will be described in detail.
- FIG. 1 illustrates a review system 100, or document review system, according to an exemplary embodiment of the present invention. In an exemplary embodiment, the review system 100 can comprise, for example, a touchscreen input device 110 of a computing device 200, a virtual workspace 120, a document view region 140, a preview region 130, one or more optional document objects 150, and a toolbar 160.
- The touchscreen input device 110 can be a multi-touch input device for interfacing with the virtual workspace 120 and other aspects of the review system 100. In an exemplary embodiment, the touchscreen input device 110 is a multi-touch device capable of receiving multiple simultaneous touches, thus enabling a user to interact with the review system 100 in a natural manner, using multiple hands and fingers simultaneously. A detection system 115 can be integrated with or in communication with the touchscreen input device 110, to detect user interactions with the touchscreen input device 110. These user interactions, or gestures, can be interpreted as commands to the review system 100. Instead of a touchscreen input device 110, the review system 100 can alternatively comprise some other multi-point, bimanual, spatial input device capable of receiving a wide array of gestures interpretable as commands.
- The review system 100 can be designed to improve four major processes that occur in active reading: annotation, content extraction, navigation, and layout. Conventional paper-like approaches fall short in facilitating these processes because of their fixed structure and lack of flexibility. Utilizing a multi-touch input device 110 can provide parallel and bimanual input, which are important parts of paper-based reading, and which also enable a flexible environment. A mouse, as used in most computer-based reading systems, is an inefficient control device because it provides only a single indicator or selector. A keyboard, also used in most computer-based reading systems, lacks a natural spatial mapping. The flexible interactions made possible by embodiments of the review system 100 inherently offer more degrees of freedom than traditionally offered by computer-based reading systems. Controlling these interactions with a mouse or a keyboard would be highly inefficient, requiring numerous sequential inputs to create a single command. In contrast, the multi-touch input device 110 can support multi-point spatial input and is thus capable of efficiently receiving a wide array of gestures for interacting with the review system 100.
- As used throughout this disclosure, the terms "touch," "hold," and the like need not refer only to physical contact between the user and the touchscreen input device 110. Such terms can refer to various interactions simulating a physical contact, such as pointing from a distance or bringing a finger, hand, or implement in close proximity to the touchscreen input device 110, so as to indicate a virtual touching, holding, or the like. The definition of a "touch" can be implementation-dependent, wherein the type of touchscreen input device 110 used can determine how interactions are detected and thus how a "touch" or "hold" is defined. For example, and not limitation, the touchscreen input device 110 can utilize resistive, capacitive, or camera technologies. If, for example, camera technology is used, then a "touch" can be defined based on camera sensitivity, or on an instrument's being within a predetermined distance from the touchscreen input device 110. Additionally, "touch," "hold," and like terms need not refer only to interactions between the user's hands or fingers and the touchscreen input device 110, but can also refer to interactions with an instrument held by the user, such as a stylus, marker, or pen.
- A display system 118 can be in communication with the detection system 115, the touchscreen input device 110, or both. The display system 118 can react to user gestures by displaying and refreshing a graphical user interface presented to the user, preferably through the touchscreen input device 110, which can perform as both an input and an output device. This graphical user interface can include the virtual workspace 120, the preview region 130, the document view region 140, and the document objects 150, all of which will be described in more detail below.
- The virtual workspace 120 can be accessible and manipulable through the touchscreen input device 110. The virtual workspace 120 can simulate a physical desktop, in that the user can freely move document objects 150 throughout the workspace 120 without being bound by a fixed structure common in computer-based reading systems. In some exemplary embodiments, the virtual workspace 120 can contain the preview region 130 and the document view region 140. In other embodiments, however, the virtual workspace 120 can comprise the useable space of the review system 100 outside of the preview region 130 and the document view region 140.
- When an application embodying the review system 100 is first opened, the review system 100 can present the user with the virtual workspace 120 containing a document 50, or configured to display a yet-to-be-opened document 50. Throughout the active reading process, the user can control the document 50 and other document objects 150 in the virtual workspace 120 with a vocabulary of multi-touch gestures. Through these gestures, the user can navigate, annotate, and manipulate the virtual workspace 120, rarely having to explicitly select tools or otherwise shift attention away from the document 50 at hand. Some basic interactions can be performed in the virtual workspace 120 as one might expect based on conventional touch applications. For example, objects can be repositioned by dragging the objects about the virtual workspace 120. Rescaling can be performed by a pinching or stretching gesture with two fingers, preferably in a horizontal orientation. Other performable gestures and operations, however, are new to the review system 100, as will be described below in detail. - The user can open a
document 50 in the virtual workspace 120, and the open document 50 can be displayed in one or both of the preview region 130 and the document view region 140. Throughout this disclosure, various actions are described as being performed or performable on the "text" of the open document 50. It will be understood, however, that all or most of such actions can similarly be performed on embedded objects in the document 50 that are not text, such as images or multimedia. Thus, the term "text" throughout this disclosure is used for illustrative purposes only and is not restrictive.
- The preview region 130 can be configured to display the document 50 at a magnification or size that enables the user to view the general layout of the document 50. In an exemplary embodiment, the entire document 50 can be viewable in the preview region 130, so as to present the general layout of the entire document 50 to the user. In some other embodiments, however, the magnification of the preview region 130 can be adjustable, so that the user can select a magnification size that is best suited to the user's needs.
- The document view region 140 can display at least a portion of the open document 50. In an exemplary embodiment, the document view region 140 can display the document 50 at a magnification or size enabling the user to easily read the text of the document 50. In a further exemplary embodiment, the magnification of the document 50 in the document view region 140 can be modified by the user to enable the user to select a text size best suiting the user's needs.
- The text of the document 50 can, in either or both of the preview region 130 and the document view region 140, be presented to the user in a continuous format, with or without pagination. If pagination is provided, then this provision can be for the user's reference only and need not restrict operations of the review system 100 to page boundaries. Some embodiments of the review system 100 can enable the user to select whether pagination is shown, to further customize the user's active reading experience. In the document view region 140 and in the preview region 130, if the entire document 50 is not visible, then the user can scroll vertically through the document 50 in the respective region. Scrolling can occur when the user performs a predetermined gesture, such as touching the representation of the document 50 and, while maintaining contact with the touchscreen input device 110, sliding the fingers upward or downward. Sliding downward can cause the document 50 to move downward, thus displaying a previously invisible portion above the previously displayed portion of the document 50. Analogously, sliding upward can cause the document 50 to move upward, thus displaying a previously invisible portion below the previously displayed portion of the document 50.
- The review system 100 can also support "fast scrolling" in the preview region 130, the document view region 140, or both. Scrolling at normal speed can occur as described above, in which case the displayed portion of the document 50 can be adjusted up or down corresponding to the distance the user's finger slides while in contact with the touchscreen input device 110. For example, and not limitation, with normal scrolling, the document 50 can be moved by a distance equivalent to the distance moved by the user's finger while the user's finger is holding the touchscreen input device 110. While normal scrolling is thus an intuitive means to navigate a document, normal scrolling can be inefficient for long documents, when the user seeks to navigate between portions of the document 50 separated by a great distance.
- To provide a more efficient scrolling mechanism, the review system 100 can also support fast scrolling, which can take advantage of modern touch sensors. In some embodiments, the review system 100 can detect an amount of pressure, a number of fingers used, or an area of contact for a touch performed in a scrolling gesture. The review system 100 can provide fast scrolling in response to, for example, increased pressure, increased number of fingers, or increased contact area of a touch. For example, if the user drags the document 50 with a light touch, the movement of the document 50 in response can simply follow the finger, resulting in normal-speed scrolling. In contrast, if a firmer touch is used, then the movement of the document 50 can correspond to the pressure of the user's touch. The document 50 can move in the same direction as the finger, but at a speed corresponding to the pressure applied by the user, where increased pressure corresponds to increased speed and distance, and where decreased pressure corresponds to decreased speed and distance. For example, if the user drags his or her finger over a distance of one inch, the document 50 can move by one, two, three, or six inches, depending on how hard the user presses the touchscreen input device 110. Analogously, the review system 100 can decrease scrolling speed in response to, for example, decreased pressure, decreased number of fingers, or decreased contact area of a touch in a scrolling gesture.
- The document objects 150 can be objects created by the user to facilitate the user's active reading process. For example, and not limitation, a particular document object 150 can be created by the user, with tools of the review system 100, to represent and include an excerpt or annotation of the document 50. The document object 150 can contain text, an image, or another annotation or portion of the document 50. The document object 150 can also comprise a link to the portion of the document 50 to which the document object 150 refers. For example, and not limitation, an excerpt can contain a link back to the portion of the document 50 from which the excerpt was extracted. The link 155 can have a visible representation, such as an arrow, which can point from the document object 150 to the document view region 140 to indicate that the linked portion of the document 50 can be displayed in the document view region 140. When the user selects the link 155, the document 50 in the document view region 140 can automatically scroll to display the portion of the document 50 referred to by the document object 150. In an exemplary embodiment, selecting the link can cause the referred-to portion to be centered within the document view region 140. Selection of the link 155 can occur when the user touches the visible representation of the link 155. Various types and uses of the document objects 150 will be described in more detail later in this disclosure. - The
review system 100 can be embodied in a computer-readable medium and executed by a computer processor to provide one, some, or all aspects of the invention. For example, the review system 100 can be integrated into a computing device 200, such as by being embodied in a software application installed on the computing device. FIG. 2 illustrates an architecture of an exemplary computing device into which the review system 100 can be integrated. Those skilled in the art will recognize that the general architecture described with reference to FIG. 2 is for example only, and can be modified to accommodate various embodiments of the review system 100 and particular operational environments.
- Architecturally, the review system 100 can be built on a custom, general-purpose, "query-based," touch processing system. An implementation of the review system 100 can be based on the recognition that touch input relevant to an operation might not be directed at the object of that operation. For example, holding a finger on a document 50 might mean the user wishes to drag the document 50, or it might mean the user wishes to keep the region under the user's finger from moving. More generally, with arbitrary numbers of fingers on the touchscreen input device 110, the review system 100 should be able to determine which gesture is indicated by the current number and arrangement of fingers.
- As shown in FIG. 2, a computing device 200 embodying the review system 100 can comprise a central processing unit 205 and one or more system memories 207, such as a random access memory 209 ("RAM") and a non-volatile memory, such as a read-only memory ("ROM") 211. The computing device 200 can further comprise a system bus 212 coupling together the memory 207, the processing unit 205, and various other components. A basic input/output system containing routines to assist in transferring information between components of the computing device 200 can be stored in the ROM 211. Additionally, the computing device 200 can include a mass storage device 214 for storing an operating system 216, application programs, and other program modules.
- The mass storage device 214 can be connected to the processing unit 205 through a mass storage controller (not shown) connected to the bus 212. The mass storage device 214 and other computer-readable media can comprise computer storage media, which can include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory, other solid state memory technology, CD-ROM, digital versatile disks ("DVD"), other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, other magnetic storage devices, or various other media used to store data accessible by the computing device 200.
- A number of program modules and data files can be stored in the computer storage media and RAM 209 of the computing device 200. Such program modules and data files can include an operating system 216 suitable for controlling operations of a networked personal computer. A web browser application program, or web client 224, can also be stored on the computer storage media and RAM 209. The web client 224 may comprise an application program for requesting and rendering web pages 226 created in Hypertext Markup Language ("HTML") or other types of markup languages. The web client 224 can be capable of executing scripts through the use of a scripting host. The scripting host executes program code expressed as scripts within the browser environment.
- Computer-readable instructions on the storage media of the computing device 200 can include, for example, instructions for implementing processes of the review system 100 or for implementing a web client 224 for receiving instructions from the review system 100 when operated remotely. These instructions can be executed by the computer processor 205 to enable use of the review system 100.
- The computing device 200 can operate in a networked environment using logical connections to remote computers over a network 250, such as the Internet. The computing device 200 can connect to the network 250 and remote computers through a network interface unit 220 connected to the bus 212.
- The computing device 200 can also include an input/output controller 222 for receiving and processing input from a number of input devices, including a keyboard, mouse, or electronic stylus. Interactions between the input devices and the review system 100 can be detected by the input/output controller 222 to provide meaningful input to the computing device 200. The input/output controller 222 can additionally provide output to a display screen, a printer, or other type of input/output device, such as the multi-touch input device 110 or other appropriate input device of the review system 100.
- The hardware and virtual components described above can work in combination to provide various aspects and operations of the review system 100, as will be described in detail below. - The
review system 100 can provide various mechanisms by which the user can navigate thedocument 50 and modify the layout of thedocument 50 for the user's convenience during active reading. For example, dog-earing or bookmarking can be supported in a manner that is more convenient than in conventional computer-based systems. In conventional systems, bookmarking is supported by navigating to a desired page, selecting a bookmark icon or menu item, and then typing a name for the bookmark. Later, when the user wishes to return to a bookmarked location, the user can select the bookmark that was created. And when the bookmark is no longer needed, the user must explicitly delete the bookmark to remove it from the document. This bookmarking process is inconvenient and time-consuming in situations where a user intends to create only a temporary bookmark, to facilitate flipping between sections for comparison. When a user desires simply to compare two or more sections of a document, the user must bookmark each section and cycle through the bookmark links to flip between the bookmarked sections. -
FIG. 3 illustrates the use oftransient bookmarks 300 in thereview system 100, according to an exemplary embodiment of the present invention, which are an improvement over bookmarking in conventional computer-based systems. Through transient bookmarking, thereview system 100 can provide a much more convenient means of bookmarking, analogous to dog-earing and simply holding one's place in a book with a finger. - In the
document view region 140 of thereview system 100, the user can perform a gesture to create atransient bookmark 300, which can be recalled by a later gesture. For example, the user can simply touch and hold a finger to thedocument 50 as the user navigates through thedocument 50. In some exemplary embodiments, a touch and hold can be interpreted as transient bookmarking only when occurring in a predetermined area of thedocument view region 140, such as near the left edge. This need not be the case, however, and in some other embodiments, the touch and hold can occur anywhere on thedocument 50 to create atransient bookmark 300. - The touch and hold can indicate to the
review system 100 that the user is holding the currently visible place in thedocument 50, as the user continues to scroll through or otherwise navigate thedocument 50 in thedocument view region 140. Additional fingers can touch and hold on thedocument 50, next to the first finger, to indicate othertransient bookmarks 300 within thedocument 50, as navigation continues. When a finger touches and holds to create atransient bookmark 300, agraphical representation 310 or link of thebookmark 300, such as an orb, an arrow, or an icon of abookmark 300, can be created where the user touches. - When the user desires to return to a marked document position, the user can simply lift the finger corresponding to the desired position of the
document 50 and then replace the finger again within a predetermined time period. In response to the lifted and replaced finger, the document 50 in the document view region 140 can automatically scroll to display the portion of the document 50 that was visible when the finger originally touched down to create the virtual dog-ear or bookmark 300. If the user leaves his or her finger up after lifting it, instead of replacing the finger, the transient bookmark can disappear and be automatically deleted after the predetermined time period, such as several seconds. Replacing the finger on the document 50, or on the graphical representation 310 of the bookmark 300, within the predetermined time period can cause the review system 100 to continue saving, or resave, the bookmark 300. Accordingly, by placing and alternately lifting two or more fingers, the user can mark and switch between positions in the document 50. The user need not waste time naming or deleting bookmarks 300, but can instead create transient bookmarks 300 by simply touching and holding the document 50.

Further, as a benefit over both paper and conventional computer-based systems, a
transient bookmark 300 can save and restore a state of the virtual workspace 120 or of the document view region 140, as opposed to merely a position within the document 50. In some embodiments, a transient bookmark 300 can save the current layout of the document 50 or the current layout of the entire virtual workspace 120. For example, and not limitation, if a portion of the document 50 includes highlighting or is collapsed, as will be described further below, these aspects of the document layout can be restored when a transient bookmark 300 is recalled, such as by the user's lifting a finger. For another example, a bookmark 300 can capture the placement of document objects 150 or the magnification and rotation of the document view region 140 and document objects 150. Thus, by using transient bookmarks 300 in the review system 100, the user can rapidly flip between and discard layout states by placing, lifting, and moving fingers.
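The state-saving behavior just described, in which a bookmark 300 captures layout, magnification, and object placement rather than a mere position, can be modeled as a deep-copy snapshot of the workspace state. A minimal sketch, with entirely hypothetical field names:

```python
import copy

# Illustrative sketch: a transient bookmark stores a deep copy of the
# workspace state (scroll position, magnification, collapsed ranges,
# object placement), and recalling it swaps the saved state back in.
# Every field name here is a hypothetical stand-in.

class Workspace:
    def __init__(self):
        self.state = {
            "scroll_offset": 0,
            "magnification": 1.0,
            "collapsed_ranges": [],
            "object_positions": {},
        }

    def snapshot(self):
        """Capture the current state, as at step 420 of the flow below."""
        return copy.deepcopy(self.state)

    def restore(self, saved):
        """Return to a saved state, handing back the outgoing state."""
        current = copy.deepcopy(self.state)
        self.state = copy.deepcopy(saved)
        return current
```

Because `restore` returns the outgoing state, alternately restoring two snapshots flips the workspace back and forth between two layouts, mirroring the two-finger flipping described above.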
FIG. 4 illustrates a flow diagram of an exemplary method 400 of providing a transient bookmark 300, according to an exemplary embodiment of the present invention. The method 400 depicted in FIG. 4 is provided for illustrative purposes and is not limiting, and other methods toward a similar end can also be implemented. As shown in the illustrated method 400, at 410 the review system 100 can receive a gesture for creating a transient bookmark 300, such as a touch and hold in the document view region 140. At 420, in response to the bookmarking gesture, the review system 100 can save the current state of the virtual workspace 120. At 430, the review system 100 can then receive one or more other commands resulting in a change in the state of the virtual workspace 120. For example, and not limitation, the user can continue to navigate the document 50, thus changing the portion of the document 50 displayed in the document view region 140. At 440, the review system 100 can receive a recall gesture for the bookmark 300, such as the user's releasing from the document view region 140 a finger corresponding to the bookmark 300 and then quickly replacing the finger. In response to this recall gesture, at 450, the review system 100 can save the current state of the virtual workspace 120 and return the virtual workspace 120 to the previous state to which the bookmark 300 corresponds. Thus, the method 400 of FIG. 4 results in creation and use of a transient bookmark 300 in the review system 100.

Another tool provided by the
review system 100 is collapsing, which is not efficiently provided in either paper or conventional computer-based systems. The review system 100 seeks to treat a document 50 in a fluid manner, instead of as a rigid structure. Collapsing is a tool to that end, enabling a user to focus on important parts of the document 50 in the context of the document's original layout, without being distracted by less important portions of the document 50. In essence, collapsing is a process of squishing, minimizing, or squeezing an intermediate portion of the document 50, so as to bring together two portions of the document 50 separated by that intermediate portion.
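The squeezing just described can be modeled numerically: a collapsed range of the document is rendered at a reduced scale, shrinking the space between the sections on either side. A line-based sketch, in which the scale factor and function name are purely illustrative:

```python
# Illustrative model of collapsing: each collapsed range of lines is
# rendered at a fraction of its normal height, so the content before
# and after the range draws closer together on screen.

COLLAPSED_SCALE = 0.1  # hypothetical: collapsed lines keep a tenth of their height


def displayed_height(total_lines, collapsed_ranges, line_height=1.0):
    """Total rendered height after squeezing every collapsed range.

    collapsed_ranges is a list of half-open (start, end) line ranges.
    """
    height = total_lines * line_height
    for start, end in collapsed_ranges:
        span = end - start
        # Each collapsed line gives back (1 - scale) of its height.
        height -= span * line_height * (1 - COLLAPSED_SCALE)
    return height
```

For a 1000-line document with lines 100-900 collapsed, the rendered height drops from 1000 to 280 line-heights, which is how two widely separated sections can share one screen while the collapsed middle remains visible as a compressed band.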
FIGS. 5A-5B illustrate an example of collapsing a document 50, where FIG. 5A shows the document 50 in an uncollapsed state, and FIG. 5B shows the document 50 after being collapsed. As shown by comparing FIGS. 5A and 5B, an intermediate section B of the document 50 can be collapsed to bring separate sections A and C closer together. Although in FIG. 5A, the distinct sections A and C of the document 50 were both simultaneously viewable in the document view region 140 even before collapsing, this need not be the case. In some instances, a first section A may be far removed from a second section C within the document 50, such that both sections would not be simultaneously viewable in the document 50 at a readable magnification, without collapsing the document 50.

The
review system 100 can collapse the document 50 in response to a collapse gesture received from the user. In some embodiments, the collapse gesture can be a pinching gesture, whereby the user places two fingers, usually a thumb and forefinger, on the touchscreen input device 110, and then moves the fingers closer together while maintaining the touch, thus creating a pinching motion. Pinching to initiate collapsing is intuitive because it corresponds to simultaneously scrolling in two directions, where the top finger of the pinch scrolls downward, while the bottom finger scrolls upward. As a result of this opposite-direction scrolling, the document 50 is collapsed.

As mentioned above, magnification of the
document view region 140 can also be adjusted with a pinching motion. The gestures indicating collapse and magnification can be distinguished based on the orientation of the pinching. For example, and not limitation, magnification can be initiated by a horizontal pinching gesture, while collapsing can be initiated by a vertical pinching gesture.

A subtlety of the pinching gesture, in those embodiments where it is used, is that the user can control many aspects of the collapse process by the manner of pinching. For example, and not limitation, if the user moves his or her top finger toward the bottom finger, then the portion of the
document 50 below the fingers can remain stationary while the part above the fingers can move and collapse downward. Analogously, if the user moves his or her bottom finger while leaving the top finger stationary, the reverse can occur. If the user moves both fingers toward each other, then both the above and below portions of the document 50 can move toward each other and collapse together in the process. Further, the distance by which the user moves his or her fingers can control how much of the document is collapsed. Therefore, the user can perform a complex command, with many degrees of freedom, by way of a one-handed movement.

In addition to, or as an alternative to, the vertical pinching gesture, one or more other gestures can also be interpreted as a collapse command. For example, a collapse gesture performed on the
preview region 130 can be used to initiate collapsing. When the user touches and holds a first section A of the document 50 within the preview region 130 and, while holding the first section A, also touches a separate second section C in the preview region 130, the review system 100 can interpret such touching as a collapse gesture. Yet another collapse gesture can comprise the user's touching and holding a first section A of the document 50 in the document view region 140 and then touching a second section C in the preview region 130; alternatively, the user can touch and hold the first section A in the preview region 130 and then touch the second section C in the document view region 140.

In response to one or all of the above collapse gestures, the
review system 100 can automatically collapse the document 50 and, more specifically, can collapse the intermediate section B between the separate sections A and C that were touched by the user in the preview region 130 or the document view region 140. Performing a version of the collapse gesture on the preview region 130 can be particularly useful when the sections A and C that the user desires to bring closer together are separated by a large amount of space within the document 50. In that case, when a large intermediate section B of the document 50 needs to be collapsed, pinching can become time-consuming. Thus, the preview region 130 can be used to initiate collapsing in an efficient manner.

Collapsing can provide a number of benefits to the user during active reading. As shown in
FIG. 5B, collapsing can enable the user to simultaneously view two distinct sections of the document 50 while retaining the linearity of the document 50 and the context of the two sections A and C. For example, although a portion of the intermediate section B between the distinct sections A and C may not be readable after collapsing, some of the intermediate section B can remain readable, so as to enable the user to see the context of the two sections A and C brought closer together by the collapsing. Retaining the document's linearity can be beneficial to the user because it can enable the user to maintain awareness of where he or she is within the document 50 and, thus, to maintain awareness of the general flow and organization of the document 50. Additionally, because the collapsed portion is still visible to the user, although not necessarily readable, collapsing can provide the user with a visual cue as to the amount of text lying between the two distinct sections A and C of the document 50.

It will be understood that collapsing within a
single document 50 need not be limited to bringing two sections closer together. Rather, collapsing can also be used to reduce the distraction caused by multiple unimportant sections. Further, multiple collapsed sections can be present within the document 50 simultaneously. The user can thus modify the spatial arrangement of the document 50 to view only the sections of the document 50 that hold interest, while collapsing less interesting sections, maintaining the linearity of the document 50, and retaining the context of the sections that remain readable.

The
review system 100 can uncollapse a portion of collapsed text upon receiving an uncollapse gesture. In an exemplary embodiment, for example, an uncollapse gesture can comprise the user's brushing or swiping a hand or finger upward or downward across the collapsed portion. An upward swipe can cause the review system 100 to uncollapse the document 50 upward, so as to maintain the bottom position of the collapsed portion upon uncollapsing. Analogously, a downward swipe can initiate a downward uncollapsing.

Another important aspect of active reading is text selection and emphasis. The user may wish to emphasize, extract, or otherwise manipulate portions of the
document 50. In order for such manipulation to occur, however, the user can sometimes be required first to select the portion of the document 50 to be manipulated. Thus, the review system 100 can provide a means for selecting text in a document 50.

The
review system 100 can select a block of text in the document 50, preferably displayed in the document view region 140, in response to receiving a selection gesture from the user. In an exemplary embodiment, the selection gesture can comprise the user's touching a forefinger and middle finger, or other detectable set of two fingers, to the touchscreen input device 110 over the document view region 140, where the forefinger is positioned just below the starting point of the intended selection area in the document 50. The user can remove the middle finger and, while maintaining the touch of the forefinger, slide the forefinger to the end of the text to be selected. Then the user can remove the forefinger to end the touch.

The
review system 100 can interpret the above, or some other, selection gesture as a command to select the text between the start and end points of the touch. To confirm that the indicated text was selected, the review system 100 can temporarily emphasize the selected portion, such as by coloring, highlighting, underlining, or enlarging the selected portion in the document view region 140. Unlike some conventional touch-based systems, the review system 100 need not rely on dwell time to detect that a selection gesture is occurring, and the user need not hold his or her hand or fingers in a single position for an extended period of time in order for the selection gesture to be recognized by the review system 100.

In some embodiments of the
review system 100, the user can select multiple sections of text, thus enabling the user to perform an action on the multiple selections simultaneously. The review system 100 can create multiple selections in response to a multiple-selection gesture. The multiple-selection gesture can comprise, for example, selecting a first section of text as discussed above, and then touching and holding that selected section while creating a second selection elsewhere in the document 50. Alternatively, however, the user need not hold a selected section to begin selecting other sections of the document 50. In some embodiments, for example, the review system 100 can simply detect that multiple selections are being made in sequence, and can thus retain all selections. In that case, a multiple-selection gesture can simply be a sequence of selection gestures. All currently selected portions of the document 50 can be emphasized to indicate to the user that selection was successful.

After a portion of a
document 50 is selected, the user can highlight that selected portion to maintain an emphasized state of the selected text. The review system 100 can recognize a highlighting gesture performed by the user to highlight the selected or otherwise-indicated portion of the document 50. For example, and not limitation, the highlighting gesture can comprise the user's touching a highlight button 180 (see FIG. 1) in the virtual workspace 120 or on the toolbar 160 before or after completing the selection. In response to the highlighting gesture, the review system 100 can highlight the selected portion of the document 50, such as by providing a background color for the selected portion.

The
review system 100 can provide the user with one or more colors with which to highlight text in the document 50. If multiple colors are available, then the user can select a desired color, and that selected color can be the active highlighting color used to highlight text when the user so indicates.

In addition to highlighting, various other tasks can be performed on a block of selected text. For example,
FIGS. 6A-6B illustrate creation of an excerpt 600 in the review system 100, according to an exemplary embodiment of the present invention. More specifically, FIG. 6A illustrates a selected section of text within the document 50, and FIG. 6B illustrates the virtual workspace 120 after the selected section has been extracted into an excerpt 600.

The
review system 100 can create an excerpt 600 in response to an excerpt gesture, which can comprise a selection gesture in combination with an extraction gesture. To perform the extraction portion of the gesture, the user can touch and hold the document 50 with one finger or hand, and then touch and drag the selected text from the document view region 140 into a portion of the virtual workspace 120 outside of the document view region 140. This can be an intuitive gesture, because performing the gesture simply requires the user, after initial selection, to simulate holding the document 50 in place with one hand, while dragging a portion of the document 50 away with the other hand.

Once created, an
excerpt 600 can be encapsulated or embodied in an excerpt object 650, a type of document object 150 moveable throughout the virtual workspace 120. The excerpt object 650 can include the text extracted from the document 50 during the excerpt's creation. In an exemplary embodiment, this text is not removed from the document 50 in the document view region 140, but is simply duplicated into the excerpt object 650 for the user's convenience, while maintaining the linearity and content of the document 50 in the document view region 140.

The
excerpt object 650 can comprise a link 155 back to the portion of the document 50 from which it was extracted. That link 155 can have a graphical representation, such as an arrow, visible on or near the excerpt object 650 in the virtual workspace 120. When the user selects the link 155, such as by touching the graphical representation, the document view region 140 can automatically return to the portion of the document 50 referred to by the excerpt object 650. In other words, if the document view region 140 no longer displays the section of the document 50 from which the excerpt 600 was extracted, that section of the document 50 can automatically become centered in the document view region 140 when the user selects the arrow or other representation of the link 155 contained by the excerpt object 650. Thus, the user can retrieve the portion of the document 50 referred to by an excerpt object 650 by simply selecting the link 155 of the excerpt object 650.

In the
document view region 140, the portion of the document 50 that was extracted to the excerpt object 650 can contain a link 55 to the excerpt object 650. Like the link 155 comprised in the excerpt object 650, the link 55 in the document view region 140 can have a graphical representation, such as an arrow. This arrow can be positioned on or near the extracted portion of the document 50 in the document view region 140. When the link 55 is selected, the excerpt object 650 referred to by the link 55 can be emphasized by the review system 100, to enable the user to locate the excerpt object 650. Emphasis can take various forms. For example, and not limitation, the excerpt object 650 can automatically be placed in front of other document objects 150 that may appear in the virtual workspace 120 and that may block the user's view of the excerpt object 650. Alternatively, for example, the excerpt object 650 can flash, change colors, or be emphasized in various other manners to enable the user to locate the excerpt object 650 as a result of the user's selection of the link 55 within the document 50. Thus, when an excerpt is created, the review system 100 can establish a pair of bidirectional links enabling the user to maintain a connection between the excerpt 600 and the portion of the document 50 from which the excerpt 600 was extracted.

A large shortcoming of paper is the constraint that paper places on textual annotations, such as comments and notes. Annotations on paper must generally be fit to the space of a small margin, and are typically only able to refer to text appearing within a single page. While software products like Microsoft Word® and Adobe Acrobat® avoid some of these difficulties, these software products still largely follow paper's paradigm. As a result, annotations created by these software products are limited to a single referent on a single page, and the user is provided little control over the size and scale of annotations. The
review system 100 can overcome these difficulties by providing a flexible visual-spatial arrangement.
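The pair of bidirectional links established when an excerpt is created, and likewise for the annotations described below, can be sketched as two mirrored registrations: the object records the span it refers to, and a span index records the objects referring back to it. All class and field names in this sketch are hypothetical:

```python
# Illustrative sketch of bidirectional linking between a document span
# and a document object (an excerpt or annotation). Either end can then
# recall the other: the object holds its source spans, and the span
# index holds the objects attached to each span.

class DocumentObject:
    def __init__(self, kind, text):
        self.kind = kind   # e.g. "excerpt" or "annotation"
        self.text = text
        self.links = []    # (start, end) spans this object refers to


def link_bidirectionally(obj, span, span_links):
    """Register obj <-> span in both directions.

    span_links maps a (start, end) span in the document to the list of
    objects referring to it; obj.links records the reverse direction.
    """
    obj.links.append(span)
    span_links.setdefault(span, []).append(obj)
```

Selecting either end of the pair would then be a lookup: from an object, scroll the document to each span in `obj.links`; from a span, bring each object in `span_links[span]` to the front of the workspace.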
FIGS. 7A-7B illustrate creation of an annotation 700 in the review system 100, according to an exemplary embodiment of the present invention. More specifically, FIG. 7A illustrates selection of text in the document 50 to which an annotation 700 will refer, and FIG. 7B illustrates an annotation object 750 referring back to the text selected in FIG. 7A.

Creation of an
annotation 700 in the review system 100 can begin with selection of text in the document 50, as displayed in the document view region 140, or with selection of text in a preexisting document object 150. After text is selected, the user can simply begin typing, or the user can select an annotation button and then begin typing. The review system 100 can then interpret the typed text as an annotation 700, which can be encapsulated in an annotation object 750, a type of document object 150. The typed input received from the user can be displayed in the annotation object 750.

In some embodiments, the
annotation object 750 need not refer to only a single portion of text, in the document 50 or in another document object 150. For example, an annotation object 750 referring to multiple portions can be created when the user selects two or more sections of text, using the multiple-selection gesture, and then types the annotation text. For another example, an annotation 700 can be created for multiple sections by touching and holding each intended section within the preview region 130, the document view region 140, document objects 150, or some combination of these, and then typing or selecting an annotation button.

The
annotation object 750 can have many similarities to an excerpt object 650, given that both are types of document objects 150, which will be described in more detail below. For example, as with an excerpt object 650, the review system 100 can create a bidirectional link between each annotation object 750 and the portion or portions of text referred to by the annotation object 750. The annotation object 750 can thus contain a link 155 back to the one or more text portions of the document 50 or other document objects 150 to which the annotation object 750 refers. That link 155 can have a graphical representation, such as an arrow, linking the annotation object 750 back to the portions of text to which the annotation 700 refers. In some embodiments, the annotation object 750 can have a separate link 155 for each portion of text to which the annotation object 750 refers, while in other embodiments, a single link 155 can be used to refer back to all of the related portions of text in the document 50 or elsewhere. When a single link 155 is used, and the user selects that single link 155 of the annotation object 750, the document 50 can automatically collapse to simultaneously display any portions of the document 50 linked to the annotation 700, and any document objects 150 linked to the annotation object 750 can automatically move into view in front of other document objects 150 in the virtual workspace 120. Likewise, if multiple links 155 are used, the user can touch and hold multiple links 155 of an annotation object 750 to prompt the review system 100 to collapse the document 50 and recall the linked document objects 150, as needed to display the multiple linked portions of text.

Document objects 150, such as excerpt objects 650 and annotation objects 750, can be located in the
virtual workspace 120 and manipulated in a manner similar to physical objects in a physical workspace. For example, and not limitation, a document object 150 can be freely moved about the virtual workspace 120 and positioned in the workspace 120 wherever the user desires. Document objects 150 can be placed over one another, so as to hide each other or to bring one document object 150 into view at the expense of the visibility of another document object 150. The size and number of document objects 150 that can be placed on the virtual workspace 120 need not have a predetermined limit, so the user can create and manipulate as many document objects 150 as the user desires to fit into the virtual workspace 120.

In some embodiments, the
review system 100 can recognize a resizing gesture, such as a pinching gesture, for modifying the size of an individual document object 150. The user may desire to selectively and temporarily enlarge or shrink individual or groups of document objects 150 in the virtual workspace 120, as shown by an exemplary enlarged document object 150e in FIG. 1. The review system 100 can selectively enlarge or shrink one or more individual document objects 150 in response to the user's performance of the resizing gesture on the individual document objects 150.

As discussed above with respect to excerpt objects 650 and annotation objects 750, a
first document object 150 can contain a link or links 155 to one or more portions of the document 50 or other document objects 150 associated with the first document object 150. The link 155 can be part of a bidirectional link, where the other part of the bidirectional link is associated with the document 50 in the document view region 140, or with another document object 150, and refers back to the first document object 150. Selecting a link 155 of the first document object 150 can cause the document 50 in the document view region 140 to scroll, so as to position the related portion of the document 50 at the vertical center of the document view region 140. Alternatively, if the link 155 connects to another document object 150, then when the link is selected, that other document object 150 can be automatically brought into view over other document objects 150. If multiple portions of text in the document 50 or other document objects 150 are referred to by a selected link 155, or if multiple links 155 of the first document object 150 are selected, or if multiple links 155 of multiple document objects 150 are selected, then the document 50 in the document view region 140 can collapse, scroll, or collapse and scroll as needed to simultaneously display all portions of the document 50 referred to by the links 155. Analogously, linked document objects 150 can also be brought into view as necessary to display the text referred to by the links 155. Further, if selected links 155 additionally refer to portions of a second document 50 in a second document view region 140, that second document 50 and second document view region 140 can also be modified as needed to display the text referred to by the selected links 155.

In the same or similar manner by which a
document object 150 can be linked to a portion of the document 50 or to another document object 150, two or more portions of a single document 50, or portions in different documents 50, can be linked together. A bidirectional link between two or more document 50 portions can be created in response to a linking gesture. A linking gesture can include, for example, selecting the desired document 50 portions and then touching the desired portions simultaneously. In response to this linking gesture, the review system 100 can create a bidirectional link between the selected portions of the document 50. Like the links associated with document objects 150, selection of the link at one of the linked document 50 portions can automatically cause the other linked portions to come into view.

In addition to being moveable throughout the
workspace 120, document objects 150 can also be attachable to one another, to enable the user to rearrange the document objects 150 and the virtual workspace 120 as needed. To attach two or more document objects 150 together, the user can touch and drag one document object 150 until it contacts another. The two document objects 150 can then be attached to each other, until the user touches both of them and drags them away from each other. In some exemplary embodiments, when a group of document objects 150 are attached together, moving a primary one of those attached document objects 150 can cause all of the attached document objects 150 to move together, maintaining their spatial relationships with one another. The primary document object 150 can be, for example, the document object 150 positioned at the highest point in the virtual workspace 120, as compared to the other grouped document objects 150. Thus, the user can group annotations 700 and excerpts 600 together to assist in performing the organizational aspects of active reading. Further, even after grouping document objects 150 together, the user can continue to rearrange the virtual workspace 120 to best suit the user's needs.

In some other exemplary embodiments, document objects 150 within a group can have a parent-child hierarchy, where a
primary document object 150, such as the highest positioned or the first to become a member of the group, can be a parent of a lower-positioned or later-grouped document object 150. A parent document object 150 can control the movement of its child or children, such that when the user moves the parent document object 150, the child document object 150 automatically moves, thus maintaining its spatial relationship to its parent document object 150. In contrast, when a child document object 150 is moved, its parent need not follow. The same parent-child principles can apply to manipulations of document objects 150 other than repositioning. For example, and not limitation, resizing and deletion can also be inherited by a child document object 150 from a parent document object 150, such that the child document object 150 can be resized, magnified, or deleted automatically along with its parent document object 150. In contrast, manipulations performed on a child document object 150 need not be inherited by a parent document object 150.

When the user seeks to exit the
review system 100 but would like to retain the state of the virtual workspace 120, the review system 100 can enable the user to save the current state of the virtual workspace 120. For example, the review system 100 can export the virtual workspace 120 by printing to paper, printing to Adobe PDF, or exporting to an image. For further example, the review system 100 can be associated with a proprietary document format. If the user saves the virtual workspace 120 in this format, then the user can return to the virtual workspace 120 to continue active reading in the same state in which the virtual workspace 120 was saved.

Embodiments of the review system can thus be used to facilitate active reading, by providing a fluid-like, non-rigid reading environment customizable by a user. While the review system has been disclosed in exemplary forms, many modifications, additions, and deletions may be made without departing from the spirit and scope of the system, method, and their equivalents, as set forth in the following claims.
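The save-and-return behavior described above can be sketched as simple serialization of the workspace state. JSON here is purely an illustrative stand-in for the proprietary format the patent mentions, and every field name is hypothetical:

```python
import json

# Illustrative sketch: persist the workspace state to disk so an active
# reading session can resume later in the same layout. JSON stands in
# for the patent's unspecified proprietary format.

def save_workspace(workspace_state, path):
    """Persist the workspace state (scroll, objects, etc.) to a file."""
    with open(path, "w") as f:
        json.dump(workspace_state, f)


def load_workspace(path):
    """Reload a previously saved workspace state."""
    with open(path) as f:
        return json.load(f)
```

On reopening, the loaded state would be handed back to the workspace so the user resumes active reading exactly where the session was saved.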
Claims (41)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/876,463 US10268661B2 (en) | 2010-09-30 | 2011-03-16 | Systems and methods to facilitate active reading |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2010/050911 WO2011041547A1 (en) | 2009-09-30 | 2010-09-30 | Systems and methods to facilitate active reading |
USPCT/US2010/050911 | 2010-09-30 | ||
US13/876,463 US10268661B2 (en) | 2010-09-30 | 2011-03-16 | Systems and methods to facilitate active reading |
PCT/US2011/028595 WO2012044363A1 (en) | 2010-09-30 | 2011-03-16 | Systems and methods to facilitate active reading |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2010/050911 Continuation WO2011041547A1 (en) | 2009-09-30 | 2010-09-30 | Systems and methods to facilitate active reading |
PCT/US2011/028595 A-371-Of-International WO2012044363A1 (en) | 2009-09-30 | 2011-03-16 | Systems and methods to facilitate active reading |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/391,003 Continuation US11704473B2 (en) | 2009-09-30 | 2019-04-22 | Systems and methods to facilitate active reading |
Publications (2)
Publication Number | Publication Date |
---|---|
US20130191711A1 true US20130191711A1 (en) | 2013-07-25 |
US10268661B2 US10268661B2 (en) | 2019-04-23 |
Family
ID=45895521
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/876,463 Active 2033-02-03 US10268661B2 (en) | 2010-09-30 | 2011-03-16 | Systems and methods to facilitate active reading |
Country Status (2)
Country | Link |
---|---|
US (1) | US10268661B2 (en) |
WO (1) | WO2012044363A1 (en) |
Cited By (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130007663A1 (en) * | 2011-06-30 | 2013-01-03 | Nokia Corporation | Displaying Content |
US20130019193A1 (en) * | 2011-07-11 | 2013-01-17 | Samsung Electronics Co., Ltd. | Method and apparatus for controlling content using graphical object |
US20130339885A1 (en) * | 2012-06-14 | 2013-12-19 | Samsung Electronics Co. Ltd. | Method of assigning importance to contents and electronic device therefor |
US20140075277A1 (en) * | 2012-09-11 | 2014-03-13 | Microsoft Corporation | Tap-To-Open Link Selection Areas |
US20140157118A1 (en) * | 2012-12-05 | 2014-06-05 | Fuji Xerox Co., Ltd. | Information processing apparatuses, non-transitory computer readable medium, and information processing method |
US20140282283A1 (en) * | 2013-03-15 | 2014-09-18 | Caesar Ian Glebocki | Semantic Gesture Processing Device and Method Providing Novel User Interface Experience |
US20140306897A1 (en) * | 2013-04-10 | 2014-10-16 | Barnesandnoble.Com Llc | Virtual keyboard swipe gestures for cursor movement |
US20150185833A1 (en) * | 2012-06-22 | 2015-07-02 | Ntt Docomo, Inc. | Display device, display method, and program |
US9081410B2 (en) | 2012-11-14 | 2015-07-14 | Facebook, Inc. | Loading content on electronic device |
US20150254211A1 (en) * | 2014-03-08 | 2015-09-10 | Microsoft Technology Licensing, Llc | Interactive data manipulation using examples and natural language |
US9218188B2 (en) | 2012-11-14 | 2015-12-22 | Facebook, Inc. | Animation sequence associated with feedback user-interface element |
US20150373080A1 (en) * | 2014-06-23 | 2015-12-24 | Qingdao Hisense Media Network Technology Co., Ltd. | Devices and methods for opening online documents |
US9229632B2 (en) | 2012-10-29 | 2016-01-05 | Facebook, Inc. | Animation sequence associated with image |
US9235321B2 (en) | 2012-11-14 | 2016-01-12 | Facebook, Inc. | Animation sequence associated with content item |
JP2016006644A (en) * | 2014-05-30 | 2016-01-14 | Canon Marketing Japan Inc. | Information processing device, control method, and program |
US9245312B2 (en) | 2012-11-14 | 2016-01-26 | Facebook, Inc. | Image panning and zooming effect |
US20160110316A1 (en) * | 2014-10-15 | 2016-04-21 | International Business Machines Corporation | Generating a document preview |
US20160110317A1 (en) * | 2014-10-16 | 2016-04-21 | LiquidText, Inc. | Facilitating active reading of digital documents |
US20160124618A1 (en) * | 2014-10-29 | 2016-05-05 | International Business Machines Corporation | Managing content displayed on a touch screen enabled device |
JP2016081417A (en) * | 2014-10-21 | 2016-05-16 | International Business Machines Corporation | Method, device, and program for combining and displaying a plurality of areas |
US20160313909A1 (en) * | 2015-04-24 | 2016-10-27 | Samsung Electronics Company, Ltd. | Variable Display Orientation Based on User Unlock Method |
US9507757B2 (en) | 2012-11-14 | 2016-11-29 | Facebook, Inc. | Generating multiple versions of a content item for multiple platforms |
US9507483B2 (en) | 2012-11-14 | 2016-11-29 | Facebook, Inc. | Photographs with location or time information |
US9547416B2 (en) | 2012-11-14 | 2017-01-17 | Facebook, Inc. | Image presentation |
US9547627B2 (en) * | 2012-11-14 | 2017-01-17 | Facebook, Inc. | Comment presentation |
US9606695B2 (en) | 2012-11-14 | 2017-03-28 | Facebook, Inc. | Event notification |
US9606717B2 (en) | 2012-11-14 | 2017-03-28 | Facebook, Inc. | Content composer |
US9607289B2 (en) | 2012-11-14 | 2017-03-28 | Facebook, Inc. | Content type filter |
US9684935B2 (en) | 2012-11-14 | 2017-06-20 | Facebook, Inc. | Content composer for third-party applications |
US9696898B2 (en) | 2012-11-14 | 2017-07-04 | Facebook, Inc. | Scrolling through a series of content items |
US20170285899A1 (en) * | 2016-03-30 | 2017-10-05 | Kyocera Document Solutions Inc. | Display device and computer-readable non-transitory recording medium with display control program recorded thereon |
US9851802B2 (en) * | 2013-01-28 | 2017-12-26 | Samsung Electronics Co., Ltd | Method and apparatus for controlling content playback |
US9858251B2 (en) * | 2014-08-14 | 2018-01-02 | Rakuten Kobo Inc. | Automatically generating customized annotation document from query search results and user interface thereof |
US20180260492A1 (en) * | 2017-03-07 | 2018-09-13 | Enemy Tree LLC | Digital multimedia pinpoint bookmark device, method, and system |
US10394937B2 (en) | 2016-01-13 | 2019-08-27 | Universal Analytics, Inc. | Systems and methods for rules-based tag management and application in a document review system |
US11010040B2 (en) * | 2019-02-28 | 2021-05-18 | Microsoft Technology Licensing, Llc | Scrollable annotations associated with a subset of content in an electronic document |
US11074397B1 (en) * | 2014-07-01 | 2021-07-27 | Amazon Technologies, Inc. | Adaptive annotations |
USD944216S1 (en) | 2018-01-08 | 2022-02-22 | Brilliant Home Technology, Inc. | Control panel with sensor area |
USD945973S1 (en) * | 2019-09-04 | 2022-03-15 | Brilliant Home Technology, Inc. | Touch control panel with moveable shutter |
US11347930B2 (en) * | 2018-06-29 | 2022-05-31 | Tianjin Bytedance Technology Co., Ltd. | Method and apparatus for automatically displaying directory of document |
US11443103B2 (en) * | 2020-10-07 | 2022-09-13 | Rakuten Kobo Inc. | Reflowable content with annotations |
US11544227B2 (en) * | 2020-06-18 | 2023-01-03 | T-Mobile Usa, Inc. | Embedded reference object and interaction within a visual collaboration system |
US11563595B2 (en) | 2017-01-03 | 2023-01-24 | Brilliant Home Technology, Inc. | Home device controller with touch control grooves |
US11715943B2 (en) | 2020-01-05 | 2023-08-01 | Brilliant Home Technology, Inc. | Faceplate for multi-sensor control device |
USD1038895S1 (en) | 2021-01-05 | 2024-08-13 | Brilliant Home Technology, Inc. | Wall-mountable control device with illuminable feature |
US12153726B1 (en) * | 2023-06-30 | 2024-11-26 | Adobe Inc. | Integrating text of a document into an extended reality environment |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9417754B2 (en) | 2011-08-05 | 2016-08-16 | P4tents1, LLC | User interface system, method, and computer program product |
EP2889745A4 (en) * | 2012-08-22 | 2016-07-06 | Nec Corp | Electronic apparatus, document display method, and computer-readable recording medium whereupon program is recorded |
US10691323B2 (en) | 2015-04-10 | 2020-06-23 | Apple Inc. | Column fit document traversal for reader application |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6219679B1 (en) * | 1998-03-18 | 2001-04-17 | Nortel Networks Limited | Enhanced user-interactive information content bookmarking |
US6654039B1 (en) * | 2000-10-13 | 2003-11-25 | International Business Machines Corporation | Method, system and program for scrolling index scans |
US6654036B1 (en) * | 2000-06-05 | 2003-11-25 | International Business Machines Corporation | Method, article of manufacture and apparatus for controlling relative positioning of objects in a windows environment |
US20050257400A1 (en) * | 1998-11-06 | 2005-11-24 | Microsoft Corporation | Navigating a resource browser session |
US20060129944A1 (en) * | 1994-01-27 | 2006-06-15 | Berquist David T | Software notes |
US20070192729A1 (en) * | 2006-02-10 | 2007-08-16 | Microsoft Corporation | Document overview scrollbar |
US20070266342A1 (en) * | 2006-05-10 | 2007-11-15 | Google Inc. | Web notebook tools |
US20080307308A1 (en) * | 2007-06-08 | 2008-12-11 | Apple Inc. | Creating Web Clips |
US20090033633A1 (en) * | 2007-07-31 | 2009-02-05 | Palo Alto Research Center Incorporated | User interface for a context-aware leisure-activity recommendation system |
US20090164887A1 (en) * | 2006-03-31 | 2009-06-25 | Nec Corporation | Web content read information display device, method, and program |
US20090193351A1 (en) * | 2008-01-29 | 2009-07-30 | Samsung Electronics Co., Ltd. | Method for providing graphical user interface (gui) using divided screen and multimedia device using the same |
US20090199106A1 (en) * | 2008-02-05 | 2009-08-06 | Sony Ericsson Mobile Communications Ab | Communication terminal including graphical bookmark manager |
US20090199093A1 (en) * | 2007-09-04 | 2009-08-06 | Tridib Chakravarty | Image Capture And Sharing System and Method |
US20090222717A1 (en) * | 2008-02-28 | 2009-09-03 | Theodor Holm Nelson | System for exploring connections between data pages |
US20100162160A1 (en) * | 2008-12-22 | 2010-06-24 | Verizon Data Services Llc | Stage interaction for mobile device |
US7859539B2 (en) * | 2006-05-27 | 2010-12-28 | Christopher Vance Beckman | Organizational viewing techniques |
US8332754B2 (en) * | 2009-11-04 | 2012-12-11 | International Business Machines Corporation | Rendering sections of content in a document |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2154951C (en) * | 1994-09-12 | 2004-05-25 | John E. Warnock | Method and apparatus for viewing electronic documents |
US7555705B2 (en) | 2003-09-10 | 2009-06-30 | Microsoft Corporation | Annotation management in a pen-based computing system |
US20050052427A1 (en) | 2003-09-10 | 2005-03-10 | Wu Michael Chi Hung | Hand gesture interaction with touch surface |
US7343552B2 (en) | 2004-02-12 | 2008-03-11 | Fuji Xerox Co., Ltd. | Systems and methods for freeform annotations |
WO2007089847A2 (en) * | 2006-01-30 | 2007-08-09 | Fast-Cat, Llc | A portable dataport device and method for retrieving, inter-relating, annotating and managing electronic documents at a point of need |
US7966561B1 (en) * | 2006-07-18 | 2011-06-21 | Intuit Inc. | System and method for indicating information flow among documents |
US7739622B2 (en) | 2006-10-27 | 2010-06-15 | Microsoft Corporation | Dynamic thumbnails for document navigation |
US8144129B2 (en) * | 2007-01-05 | 2012-03-27 | Apple Inc. | Flexible touch sensing circuits |
US20090249257A1 (en) * | 2008-03-31 | 2009-10-01 | Nokia Corporation | Cursor navigation assistance |
US20090249178A1 (en) * | 2008-04-01 | 2009-10-01 | Ambrosino Timothy J | Document linking |
US8924892B2 (en) * | 2008-08-22 | 2014-12-30 | Fuji Xerox Co., Ltd. | Multiple selection on devices with many gestures |
- 2011
  - 2011-03-16 WO PCT/US2011/028595 patent/WO2012044363A1/en active Application Filing
  - 2011-03-16 US US13/876,463 patent/US10268661B2/en active Active
Patent Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060129944A1 (en) * | 1994-01-27 | 2006-06-15 | Berquist David T | Software notes |
US6219679B1 (en) * | 1998-03-18 | 2001-04-17 | Nortel Networks Limited | Enhanced user-interactive information content bookmarking |
US20050257400A1 (en) * | 1998-11-06 | 2005-11-24 | Microsoft Corporation | Navigating a resource browser session |
US6654036B1 (en) * | 2000-06-05 | 2003-11-25 | International Business Machines Corporation | Method, article of manufacture and apparatus for controlling relative positioning of objects in a windows environment |
US6654039B1 (en) * | 2000-10-13 | 2003-11-25 | International Business Machines Corporation | Method, system and program for scrolling index scans |
US20070192729A1 (en) * | 2006-02-10 | 2007-08-16 | Microsoft Corporation | Document overview scrollbar |
US20090164887A1 (en) * | 2006-03-31 | 2009-06-25 | Nec Corporation | Web content read information display device, method, and program |
US20070266342A1 (en) * | 2006-05-10 | 2007-11-15 | Google Inc. | Web notebook tools |
US7859539B2 (en) * | 2006-05-27 | 2010-12-28 | Christopher Vance Beckman | Organizational viewing techniques |
US20080307308A1 (en) * | 2007-06-08 | 2008-12-11 | Apple Inc. | Creating Web Clips |
US20090033633A1 (en) * | 2007-07-31 | 2009-02-05 | Palo Alto Research Center Incorporated | User interface for a context-aware leisure-activity recommendation system |
US20090199093A1 (en) * | 2007-09-04 | 2009-08-06 | Tridib Chakravarty | Image Capture And Sharing System and Method |
US20090193351A1 (en) * | 2008-01-29 | 2009-07-30 | Samsung Electronics Co., Ltd. | Method for providing graphical user interface (gui) using divided screen and multimedia device using the same |
US20090199106A1 (en) * | 2008-02-05 | 2009-08-06 | Sony Ericsson Mobile Communications Ab | Communication terminal including graphical bookmark manager |
US20090222717A1 (en) * | 2008-02-28 | 2009-09-03 | Theodor Holm Nelson | System for exploring connections between data pages |
US20100162160A1 (en) * | 2008-12-22 | 2010-06-24 | Verizon Data Services Llc | Stage interaction for mobile device |
US8332754B2 (en) * | 2009-11-04 | 2012-12-11 | International Business Machines Corporation | Rendering sections of content in a document |
Non-Patent Citations (1)
Title |
---|
Theodor Holm Nelson, "Xanalogical Structure, Needed Now More than Ever", publisher: ACM, published: 2000, pages 1-32 * |
Cited By (70)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130007663A1 (en) * | 2011-06-30 | 2013-01-03 | Nokia Corporation | Displaying Content |
US9280273B2 (en) * | 2011-06-30 | 2016-03-08 | Nokia Technologies Oy | Method, apparatus, and computer program for displaying content items in display regions |
US20130019193A1 (en) * | 2011-07-11 | 2013-01-17 | Samsung Electronics Co., Ltd. | Method and apparatus for controlling content using graphical object |
US20170336938A1 (en) * | 2011-07-11 | 2017-11-23 | Samsung Electronics Co., Ltd. | Method and apparatus for controlling content using graphical object |
US9727225B2 (en) * | 2011-07-11 | 2017-08-08 | Samsung Electronics Co., Ltd | Method and apparatus for controlling content using graphical object |
US20130339885A1 (en) * | 2012-06-14 | 2013-12-19 | Samsung Electronics Co. Ltd. | Method of assigning importance to contents and electronic device therefor |
US20150185833A1 (en) * | 2012-06-22 | 2015-07-02 | Ntt Docomo, Inc. | Display device, display method, and program |
US9411418B2 (en) * | 2012-06-22 | 2016-08-09 | Ntt Docomo, Inc. | Display device, display method, and program |
US20140075277A1 (en) * | 2012-09-11 | 2014-03-13 | Microsoft Corporation | Tap-To-Open Link Selection Areas |
US10162492B2 (en) * | 2012-09-11 | 2018-12-25 | Microsoft Technology Licensing, Llc | Tap-to-open link selection areas |
US9229632B2 (en) | 2012-10-29 | 2016-01-05 | Facebook, Inc. | Animation sequence associated with image |
US10768788B2 (en) | 2012-11-14 | 2020-09-08 | Facebook, Inc. | Image presentation |
US9607289B2 (en) | 2012-11-14 | 2017-03-28 | Facebook, Inc. | Content type filter |
US9235321B2 (en) | 2012-11-14 | 2016-01-12 | Facebook, Inc. | Animation sequence associated with content item |
US9081410B2 (en) | 2012-11-14 | 2015-07-14 | Facebook, Inc. | Loading content on electronic device |
US9245312B2 (en) | 2012-11-14 | 2016-01-26 | Facebook, Inc. | Image panning and zooming effect |
US9218188B2 (en) | 2012-11-14 | 2015-12-22 | Facebook, Inc. | Animation sequence associated with feedback user-interface element |
US9696898B2 (en) | 2012-11-14 | 2017-07-04 | Facebook, Inc. | Scrolling through a series of content items |
US9684935B2 (en) | 2012-11-14 | 2017-06-20 | Facebook, Inc. | Content composer for third-party applications |
US10459621B2 (en) | 2012-11-14 | 2019-10-29 | Facebook, Inc. | Image panning and zooming effect |
US10762684B2 (en) | 2012-11-14 | 2020-09-01 | Facebook, Inc. | Animation sequence associated with content item |
US10762683B2 (en) | 2012-11-14 | 2020-09-01 | Facebook, Inc. | Animation sequence associated with feedback user-interface element |
US9606717B2 (en) | 2012-11-14 | 2017-03-28 | Facebook, Inc. | Content composer |
US10664148B2 (en) | 2012-11-14 | 2020-05-26 | Facebook, Inc. | Loading content on electronic device |
US9507757B2 (en) | 2012-11-14 | 2016-11-29 | Facebook, Inc. | Generating multiple versions of a content item for multiple platforms |
US9507483B2 (en) | 2012-11-14 | 2016-11-29 | Facebook, Inc. | Photographs with location or time information |
US9547416B2 (en) | 2012-11-14 | 2017-01-17 | Facebook, Inc. | Image presentation |
US9547627B2 (en) * | 2012-11-14 | 2017-01-17 | Facebook, Inc. | Comment presentation |
US9606695B2 (en) | 2012-11-14 | 2017-03-28 | Facebook, Inc. | Event notification |
US20140157118A1 (en) * | 2012-12-05 | 2014-06-05 | Fuji Xerox Co., Ltd. | Information processing apparatuses, non-transitory computer readable medium, and information processing method |
US9851802B2 (en) * | 2013-01-28 | 2017-12-26 | Samsung Electronics Co., Ltd | Method and apparatus for controlling content playback |
US20140282283A1 (en) * | 2013-03-15 | 2014-09-18 | Caesar Ian Glebocki | Semantic Gesture Processing Device and Method Providing Novel User Interface Experience |
US20140306897A1 (en) * | 2013-04-10 | 2014-10-16 | Barnesandnoble.Com Llc | Virtual keyboard swipe gestures for cursor movement |
US20150254211A1 (en) * | 2014-03-08 | 2015-09-10 | Microsoft Technology Licensing, Llc | Interactive data manipulation using examples and natural language |
JP2016006644A (en) * | 2014-05-30 | 2016-01-14 | Canon Marketing Japan Inc. | Information processing device, control method, and program |
US20150373080A1 (en) * | 2014-06-23 | 2015-12-24 | Qingdao Hisense Media Network Technology Co., Ltd. | Devices and methods for opening online documents |
US11074397B1 (en) * | 2014-07-01 | 2021-07-27 | Amazon Technologies, Inc. | Adaptive annotations |
US9858251B2 (en) * | 2014-08-14 | 2018-01-02 | Rakuten Kobo Inc. | Automatically generating customized annotation document from query search results and user interface thereof |
US11042689B2 (en) * | 2014-10-15 | 2021-06-22 | International Business Machines Corporation | Generating a document preview |
US11461533B2 (en) * | 2014-10-15 | 2022-10-04 | International Business Machines Corporation | Generating a document preview |
US20160110316A1 (en) * | 2014-10-15 | 2016-04-21 | International Business Machines Corporation | Generating a document preview |
US20160110314A1 (en) * | 2014-10-15 | 2016-04-21 | International Business Machines Corporation | Generating a document preview |
US10417309B2 (en) * | 2014-10-16 | 2019-09-17 | Liquidtext, Inc | Facilitating active reading of digital documents |
US20160110317A1 (en) * | 2014-10-16 | 2016-04-21 | LiquidText, Inc. | Facilitating active reading of digital documents |
US9632993B2 (en) | 2014-10-21 | 2017-04-25 | International Business Machines Corporation | Combining and displaying multiple document areas |
US10241977B2 (en) | 2014-10-21 | 2019-03-26 | International Business Machines Corporation | Combining and displaying multiple document areas |
US11663393B2 (en) | 2014-10-21 | 2023-05-30 | International Business Machines Corporation | Combining and displaying multiple document areas |
US10216710B2 (en) | 2014-10-21 | 2019-02-26 | International Business Machines Corporation | Combining and displaying multiple document areas |
JP2016081417A (en) * | 2014-10-21 | 2016-05-16 | International Business Machines Corporation | Method, device, and program for combining and displaying a plurality of areas |
US20160124618A1 (en) * | 2014-10-29 | 2016-05-05 | International Business Machines Corporation | Managing content displayed on a touch screen enabled device |
US10275142B2 (en) * | 2014-10-29 | 2019-04-30 | International Business Machines Corporation | Managing content displayed on a touch screen enabled device |
US11379112B2 (en) | 2014-10-29 | 2022-07-05 | Kyndryl, Inc. | Managing content displayed on a touch screen enabled device |
US20160313909A1 (en) * | 2015-04-24 | 2016-10-27 | Samsung Electronics Company, Ltd. | Variable Display Orientation Based on User Unlock Method |
US11366585B2 (en) * | 2015-04-24 | 2022-06-21 | Samsung Electronics Company, Ltd. | Variable display orientation based on user unlock method |
US10394937B2 (en) | 2016-01-13 | 2019-08-27 | Universal Analytics, Inc. | Systems and methods for rules-based tag management and application in a document review system |
US20170285899A1 (en) * | 2016-03-30 | 2017-10-05 | Kyocera Document Solutions Inc. | Display device and computer-readable non-transitory recording medium with display control program recorded thereon |
US11563595B2 (en) | 2017-01-03 | 2023-01-24 | Brilliant Home Technology, Inc. | Home device controller with touch control grooves |
US20180260492A1 (en) * | 2017-03-07 | 2018-09-13 | Enemy Tree LLC | Digital multimedia pinpoint bookmark device, method, and system |
US11182450B2 (en) * | 2017-03-07 | 2021-11-23 | Enemy Tree LLC | Digital multimedia pinpoint bookmark device, method, and system |
US10754910B2 (en) * | 2017-03-07 | 2020-08-25 | Enemy Tree LLC | Digital multimedia pinpoint bookmark device, method, and system |
USD944216S1 (en) | 2018-01-08 | 2022-02-22 | Brilliant Home Technology, Inc. | Control panel with sensor area |
US11347930B2 (en) * | 2018-06-29 | 2022-05-31 | Tianjin Bytedance Technology Co., Ltd. | Method and apparatus for automatically displaying directory of document |
US11010040B2 (en) * | 2019-02-28 | 2021-05-18 | Microsoft Technology Licensing, Llc | Scrollable annotations associated with a subset of content in an electronic document |
USD945973S1 (en) * | 2019-09-04 | 2022-03-15 | Brilliant Home Technology, Inc. | Touch control panel with moveable shutter |
US11715943B2 (en) | 2020-01-05 | 2023-08-01 | Brilliant Home Technology, Inc. | Faceplate for multi-sensor control device |
US11544227B2 (en) * | 2020-06-18 | 2023-01-03 | T-Mobile Usa, Inc. | Embedded reference object and interaction within a visual collaboration system |
US11880342B2 (en) | 2020-06-18 | 2024-01-23 | T-Mobile Usa, Inc. | Embedded reference object and interaction within a visual collaboration system |
US11443103B2 (en) * | 2020-10-07 | 2022-09-13 | Rakuten Kobo Inc. | Reflowable content with annotations |
USD1038895S1 (en) | 2021-01-05 | 2024-08-13 | Brilliant Home Technology, Inc. | Wall-mountable control device with illuminable feature |
US12153726B1 (en) * | 2023-06-30 | 2024-11-26 | Adobe Inc. | Integrating text of a document into an extended reality environment |
Also Published As
Publication number | Publication date |
---|---|
US10268661B2 (en) | 2019-04-23 |
WO2012044363A1 (en) | 2012-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10268661B2 (en) | Systems and methods to facilitate active reading | |
US10417309B2 (en) | Facilitating active reading of digital documents | |
US5347295A (en) | Control of a computer through a position-sensed stylus | |
US7818691B2 (en) | Zeroclick | |
US6891551B2 (en) | Selection handles in editing electronic documents | |
US8941600B2 (en) | Apparatus for providing touch feedback for user input to a touch sensitive surface | |
RU2557463C2 (en) | Dual screen portable touch sensitive computing system | |
US9792272B2 (en) | Deleting annotations of paginated digital content | |
US9424241B2 (en) | Annotation mode including multiple note types for paginated digital content | |
JP4063246B2 (en) | Page information display device | |
Hinckley et al. | InkSeine: In Situ search for active note taking | |
US10331777B2 (en) | Merging annotations of paginated digital content | |
WO2011041547A1 (en) | Systems and methods to facilitate active reading | |
US20110216015A1 (en) | Apparatus and method for directing operation of a software application via a touch-sensitive surface divided into regions associated with respective functions | |
US20140189593A1 (en) | Electronic device and input method | |
US20090315841A1 (en) | Touchpad Module which is Capable of Interpreting Multi-Object Gestures and Operating Method thereof | |
US20120044164A1 (en) | Interface apparatus and method for setting a control area on a touch screen | |
US10915698B2 (en) | Multi-purpose tool for interacting with paginated digital content | |
US20020059350A1 (en) | Insertion point bungee space tool | |
US9286279B2 (en) | Bookmark setting method of e-book, and apparatus thereof | |
JP2003531428A (en) | User interface and method of processing and viewing digital documents | |
JP2003303047A (en) | Image input and display system, usage of user interface as well as product including computer usable medium | |
JP2009025920A (en) | Information processing unit and control method therefor, and computer program | |
JP2007334910A (en) | Graphical user interface for help system | |
US11704473B2 (en) | Systems and methods to facilitate active reading |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: GEORGIA TECH RESEARCH CORPORATION, GEORGIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TASHMAN, CRAIG;EDWARDS, W. KEITH;SIGNING DATES FROM 20130401 TO 20130515;REEL/FRAME:030761/0390 |
| AS | Assignment | Owner name: NATIONAL SCIENCE FOUNDATION, VIRGINIA. Free format text: CONFIRMATORY LICENSE;ASSIGNOR:GEORGIA TECH RESEARCH CORPORATION;REEL/FRAME:033501/0606. Effective date: 20131017 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| AS | Assignment | Owner name: GEORGIA TECH RESEARCH CORPORATION, GEORGIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TASHMAN, CRAIG;EDWARDS, W. KEITH;SIGNING DATES FROM 20130401 TO 20130515;REEL/FRAME:054774/0068 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY. Year of fee payment: 4 |