AU2015201596A1 - Displaying augmented reality content on a document - Google Patents
- Publication number
- AU2015201596A1
- Authority
- AU
- Australia
- Prior art keywords
- salient
- regions
- document
- augmented reality
- reality content
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Abstract
DISPLAYING AUGMENTED REALITY CONTENT ON A DOCUMENT
A method, system and apparatus for displaying augmented reality content on a document are described. The method comprises determining a spatial arrangement and a rank for each of a plurality of salient regions within the augmented reality content and determining a spatial arrangement and a rank for each of a plurality of empty regions within the document. The rank of each of the salient regions is determined according to an importance of each salient region in the augmented reality content. The rank of each of the empty regions is determined according to a distance between each empty region and a display target of the augmented reality content. The method further comprises modifying the augmented reality content to allow alignment of the spatial arrangement of the plurality of salient regions with the spatial arrangement of the plurality of empty regions within the document. The plurality of salient regions are positioned in the document based on matching of the rank of each of the salient regions with the rank of each of the empty regions, and the modified augmented reality content is displayed on the document such that at least one of the salient regions is overlaid onto at least one of the empty regions based upon the matching.
Description
DISPLAYING AUGMENTED REALITY CONTENT ON A DOCUMENT
TECHNICAL FIELD
[0001] The present invention relates to a method, system and apparatus for displaying augmented reality content on a document. The present invention further relates to a computer readable medium storing instructions executable to implement a method for displaying augmented reality content on a document. The present application also relates to the positioning and display of augmented reality content in augmentable areas on and around a document.
BACKGROUND
[0002] Printed documents have been a primary source of communication for many centuries. Printed documents have been used across different domains such as news reporting, advertisements, office environments (large and small offices alike) and so on. The last decade has witnessed an explosion in the popularity of hand-held devices such as smart phones and more recently tablet and wearable devices. The ubiquitous nature of the print media and the increasing popularity of hand-held and wearable devices have led to a new genre of applications based on augmented reality.
[0003] Augmented reality content is a view of a physical world where some elements of the physical reality are augmented by computer generated inputs such as sound, graphics and so on. Using hand-held and wearable devices, users are able to retrieve additional information related to a captured image of a real world object from a camera connected to the device (e.g., a camera phone or a camera attached to a head-mounted display or augmented reality glasses) and augment the additional information onto the real world object. Such a real-world object could be a natural image in a document, a piece of textual information, a physical object such as a printer and so on. In addition to hand-held devices, projectors are also being used to show augmented reality information, especially in an office environment. For example, a projector in conjunction with a camera can be used to provide an augmented reality system. Such a system provides hands-free and glasses-free augmented reality to the end-user. As a result, projection based augmented reality systems are well suited for office environments. In such a system, typically, a camera detects objects, documents etc. in the scene and the projector projects augmented information at the appropriate location in relation to the detected object in the scene. For example, a camera and a projector could be fixed in relation to an employee's desk such that the camera can capture images of the desk in real time and the projector can project augmented information according to the captured images.
[0004] A desk environment can be a crowded space containing a variety of objects, in addition to documents, such as pens, keyboards, cups, telephones etc. This environment presents a number of problems for the presentation and display of augmented reality content. One such problem is the scarcity of suitable space, in sufficiently close proximity to the document, to provide a clear relationship between the augmented reality content and the target document. Another problem is the colour and texture of the available spaces for augmentation, which can reduce the legibility of the augmented reality content.
[0005] A need exists to address one or more of the problems associated with presenting augmented reality content on a document.
SUMMARY
[0010] It is an object of the present disclosure to substantially overcome, or at least ameliorate, at least one disadvantage of present arrangements.
[0011] A first aspect of the present disclosure provides a method for displaying augmented reality content on a document. The method determines a spatial arrangement and a rank for each of a plurality of salient regions within the augmented reality content, the rank of each of the salient regions being determined according to an importance of each salient region in the augmented reality content. A spatial arrangement and a rank for each of a plurality of empty regions within the document are then determined, the rank of each of the empty regions being determined according to a distance between each empty region and a display target of the augmented reality content. The method modifies the augmented reality content to allow alignment of the spatial arrangement of the plurality of salient regions with the spatial arrangement of the plurality of empty regions within the document, the plurality of salient regions being positioned in the document based on matching of the rank of each of the salient regions with the rank of each of the empty regions. The modified augmented reality content is then displayed on the document such that at least one of the salient regions is overlaid onto at least one of the empty regions based upon the matching.
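By way of a non-limiting illustration, a minimal sketch of how the two rankings described above might be computed is given below, assuming rectangular regions described by x, y, w and h values, an area-based importance measure for salient regions, and a centre-to-target distance measure for empty regions; none of these specific choices is prescribed by the first aspect.

```python
import math

def rank_salient_regions(salient_regions):
    """Rank salient regions by importance; here importance is approximated
    by region area, so larger regions receive a lower (better) rank number."""
    ordered = sorted(salient_regions, key=lambda r: r['w'] * r['h'], reverse=True)
    return [dict(region, rank=i) for i, region in enumerate(ordered)]

def rank_empty_regions(empty_regions, display_target):
    """Rank empty regions by the distance between the region centre and the
    display target of the augmented reality content (closer ranks better)."""
    tx, ty = display_target

    def distance(region):
        cx = region['x'] + region['w'] / 2.0
        cy = region['y'] + region['h'] / 2.0
        return math.hypot(cx - tx, cy - ty)

    ordered = sorted(empty_regions, key=distance)
    return [dict(region, rank=i) for i, region in enumerate(ordered)]
```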
[0012] Preferably, the spatial arrangement of each of the plurality of salient regions is determined based upon a size of the salient region.
[0013] In one implementation, the spatial arrangement of each of the plurality of salient regions is determined based upon at least one of a location of said salient region within the augmented reality content, and proximity of said salient region in relation to at least one other salient region of the augmented reality content.
[0014] In another implementation, the rank of each of the plurality of salient regions is determined based upon a relationship between content of said salient region with content of another of the salient regions.
[0015] Desirably, the determining of the rank of each of the plurality of empty regions comprises determining a size of said empty region in relation to a size of a selected salient region.
[0016] The method may further comprise determining suitability of one of said empty regions to display an animation or transform.
[0017] Advantageously, the spatial arrangement of each of the plurality of salient regions is determined based upon a direction between each of the salient regions.
[0018] In a specific implementation, the matching of the rank of each of the salient regions with the rank of each of the empty regions is based upon a relationship between a selected one of the plurality of salient regions and one of the plurality of salient regions previously matched with one of the empty regions.
[0019] Preferably, the rank of each of the plurality of empty regions is determined relative to a scene of the document.
[0020] In another example, the spatial arrangement of each of the salient regions is determined based upon a key relationship between each of the plurality of salient regions.
Desirably the key relationship relates to a direction in which content of each of the salient regions faces.
[0021] In another example the method further comprises applying a blend of the augmented reality content. Desirably the blend is determined by a relationship between the salient regions.
[0022] Advantageously the rank of each of the plurality of empty regions is determined based upon a relationship between each of the plurality of salient regions.
[0023] The method may further comprise displaying the modified augmented reality content as one of an animation and a transition.
[0024] Preferably the plurality of empty regions are predefined and stored in association with the target.
[0025] Other aspects are also disclosed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] At least one embodiment of the invention will now be described with reference to the following drawings, in which:
[0027] Figs. 1A, 1B and 1C form a schematic block diagram of a system on which arrangements described may be practiced.
[0028] Fig. 2 is a schematic flow diagram showing a method of displaying augmented reality content on a document.
[0029] Fig. 3 is a schematic flow diagram showing a method of matching an augmentable area to a salient region, as executed in the method of Fig. 2.
[0030] Fig. 4 shows an example photograph, a scene and an augmented scene to illustrate the functionality of a method of displaying augmented reality content on a document with regard to proximity and location.
[0031] Fig. 5 shows an example photograph, a scene, and an augmented scene to illustrate the functionality of a method of displaying augmented reality content on a document with regard to size and location.
[0032] Fig. 6 shows an example photograph, the scene of Fig. 4 and an augmented scene to illustrate the functionality of a method of displaying augmented reality content on a document with regard to relationship and blend.
[0033] Fig. 7 shows the photograph of Fig. 6, a scene and an augmented scene to illustrate the functionality of a method of displaying augmented reality content on a document with regard to animations and transforms.
DETAILED DESCRIPTION INCLUDING BEST MODE
[0034] Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
[0035] Augmented reality content may be placed on a document by determining an empty region inside the document and modifying the augmented reality content to fit that region. However, the augmentable area may be small or consist of multiple distributed spaces.
[0036] Augmented reality content may also be placed on a document by defining a document region in which augmented reality content cannot be placed (i.e., a restricted zone), and then filling the remaining document space with the augmented reality content. While defining such a document region supports the placement of augmented reality content items in multiple places, the background image may affect how the augmented reality content items are placed.
[0037] There is a need to address one or more of the problems associated with presenting augmented reality content in crowded spaces, including the limitations of known methods so that the augmented reality experience is valuable and useful, regardless of the limitations posed by the environment.
[0038] Methods of displaying augmented reality content on a document are described below.
[0039] Fig. 1A shows an example system 100 upon which the described methods of displaying augmented reality content on a document can be practiced. The system 100 includes a camera 127, a projector 169, and a computer module 101. The system 100 may be used for projecting or displaying one or more elements of augmented reality content 170, as seen in Fig. 1A, on a document 190. The document 190 is projected by the projector 169 of Fig. 1A. In other implementations, the document 190 may be a printed document, or viewed on a display of an electronic device. In some instances, the camera 127 and the projector 169 may be a single device. In further implementations, the camera 127, the projector 169 and the computer module 101 may be components of a single device.
[0040] Normally, a document such as the document 190 contains static content determined at the time of printing. However, the document 190 may also contain dynamic content that either was not available at the time of printing, or that aids in the understanding of the document 190, or improves the user experience in one way or another. Currently, such dynamic content can only be viewed by reprinting the document 190, or by viewing the document 190 using an electronic reader device, such as a computer, a tablet, a mobile phone, a smartphone, a personal device and the like.
[0041] A system, such as the system 100 of Fig. 1A, allows a user to view the dynamic content as a projected augmentation when the document 190 is placed within a field of view 185b of the camera 127. However, existing methods of displaying augmented reality content on a document fare poorly when there are cluttered items within a camera field of view 185b. Existing methods may ignore the cluttered items and project augmented reality content over the cluttered items, preventing the user from understanding the augmentation or forcing the user to reposition the document 190. Other existing methods may identify the empty regions of the camera field of view 185b and ensure that the augmented content is displayed wholly in those regions. To achieve this, such methods may resize the augmented content but, in doing so, run the risk of projecting an augmentation that is too small to be useful for the user.
[0042] The described methods consider the salient regions of the augmented reality content when projecting the augmented reality content into the camera field of view 185b, with an aim of ensuring that the most salient portions of the augmented reality content are always well understood by the user of the system 100 and are not lost in attempts to place the projected augmented reality content into the field of view 185b.
[0043] The methods of displaying augmented reality content on a document described in detail below allow augmented reality content to be displayed by a projector when augmentable areas of a document are limited in number and size.
[0044] An augmentable area is a part of a scene, normally adjacent to or on a document to be augmented. A document to be augmented is determined by the system 100 to have properties that allow an augmentation to be projected on the document. One such property is whether a background of the document is empty, whereby an empty region is deemed to be an augmentable area. Another such property is a complexity measure, whereby a region with a sufficiently low complexity measure value is deemed to be an augmentable area. An augmentable area, also referred to as an empty region, may comprise whitespace, a pre-determined region, or a low-feature region.
[0045] The augmented reality content may be at least one of image content, text content, other media, or a mix thereof. The augmented reality content is associated with the document to be augmented, so that the augmented reality content can be projected by a projector onto the scene in which the document is presented.
[0046] The methods described determine where and how the augmented reality content is projected or displayed onto the document and the scene by analysing the saliency value of salient regions of the augmented reality content, the scene, and the document.
[0047] An application of the described methods provides for augmented reality content to be displayed across more than one augmentation region by prioritising or ranking the most salient regions of the augmentation content to be displayed. The less salient regions of augmented reality content, such as of an image, may not be visible if insufficient augmentation regions are available for all the salient regions of the augmented reality content. In one example, the prioritisation or ranking may occur by displaying only the most salient regions individually; by displaying the most salient regions such that a relationship between the most salient regions is maintained (by scaling, rotating, skewing or distorting) while a blending, or similar, effect is applied to the augmented reality content between the salient regions; or by applying an animation effect to the augmented reality content to transition between the most salient regions displayed on the augmentable area.
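A minimal sketch of one way the blending effect mentioned above could be realised follows, assuming OpenCV is available and that the two salient-region crops have already been brought to the same size; the cross-fade weighting is an illustrative choice only.

```python
import cv2
import numpy as np

def blend_transition(crop_a, crop_b, alpha):
    """Cross-fade between two equally sized crops of augmented reality
    content; alpha=0.0 shows crop_a only, alpha=1.0 shows crop_b only."""
    return cv2.addWeighted(crop_a, 1.0 - alpha, crop_b, alpha, 0.0)

# Example with stand-in 100x100 crops (real crops would come from the
# salient regions of the augmented reality content).
crop_a = np.full((100, 100, 3), 255, dtype=np.uint8)
crop_b = np.zeros((100, 100, 3), dtype=np.uint8)
halfway = blend_transition(crop_a, crop_b, 0.5)
```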
[0048] As seen in Fig. 1A the system 100 comprises the computer module 101 connected to the projector 169 and the camera 127. The projector 169 may be any type of device suitable for projecting an image onto a surface, such as a portable projector, a mobile projector, a desktop projector, Google Glass™, and the like. The camera 127 retrieves an image of the camera field of view 185b and transmits the image to the computing device 101. The projector device 169 receives digital data from the computing device 101 and projects the digital data at specified locations into a field of projection 185a. The locations of projection are specified by the computing device 101. In the examples described hereafter, the field of projection 185a is identical to the camera field of view 185b. However, in other implementations, the camera field of view 185b and the field of projection 185a may differ.
[0049] Fig. 1A also shows a software architecture 133a for implementing the described methods. The software architecture 133a is implemented as one or more software code modules of a software application 133 as seen in Fig. 1B.
[0050] Figs. 1B and 1C show the computer module 101 in more detail.
[0051] As seen in Fig. 1B, the computer module 101 includes associated input devices such as a keyboard 102, a mouse pointer device 103, a scanner 126, the camera 127, and a microphone 180; and output devices including the projector 169, a printer 115, a display device 114 and loudspeakers 117. An external Modulator-Demodulator (Modem) transceiver device 116 may be used by the computer module 101 for communicating to and from a communications network 120 via a connection 121. The communications network 120 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN. Where the connection 121 is a telephone line, the modem 116 may be a traditional “dial-up” modem. Alternatively, where the connection 121 is a high capacity (e.g., cable) connection, the modem 116 may be a broadband modem. A wireless modem may also be used for wireless connection to the communications network 120.
[0052] In some implementations, the camera 127 and/or the projector 169 may be in wireless communication with the computer module 101 via a network such as the network 120. In other implementations, the camera 127 and/or the projector 169 may be in wired communication with the computer module 101.
[0053] The computer module 101 typically includes at least one processor unit 105, and a memory unit 106. For example, the memory unit 106 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 101 also includes a number of input/output (I/O) interfaces including: an audio-video interface 107 that couples to the video display 114, loudspeakers 117 and microphone 180; an I/O interface 113 that couples to the keyboard 102, mouse 103, scanner 126, camera 127 and optionally a joystick or other human interface device (not illustrated); and an interface 108 for the external modem 116 and printer 115. In some implementations, the modem 116 may be incorporated within the computer module 101, for example within the interface 108. The computer module 101 also has a local network interface 111, which permits coupling of the system 100 via a connection 123 to a local-area communications network 122, known as a Local Area Network (LAN). As illustrated in Fig. 1B, the local communications network 122 may also couple to the wide network 120 via a connection 124, which would typically include a so-called “firewall” device or device of similar functionality. The local network interface 111 may comprise an Ethernet circuit card, a Bluetooth® wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 111.
[0054] The I/O interfaces 108 and 113 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 109 are provided and typically include a hard disk drive (HDD) 110. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 112 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable, external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 100.
[0055] The components 105 to 113 of the computer module 101 typically communicate via an interconnected bus 104 and in a manner that results in a conventional mode of operation of the system 100 known to those in the relevant art. For example, the processor 105 is coupled to the system bus 104 using a connection 118. Likewise, the memory 106 and optical disk drive 112 are coupled to the system bus 104 by connections 119. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.
[0056] The described methods of displaying augmented reality content on a document may be implemented using the system 100, wherein the processes of Figs. 2 and 3, to be described, may be implemented as one or more software application programs 133 executable within the system 100. In particular, the steps of the methods of Figs. 2 and 3 are effected by instructions 131 (see Fig. 1C) in the software 133 that are carried out within the system 100. The software instructions 131 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the disclosed methods and a second part and the corresponding code modules manage a user interface between the first part and the user.
[0057] The software may be stored in a computer readable medium, including the storage devices described below, for example. The software 133 is typically stored in the HDD 110 or the memory 106. The software is loaded into the system 100 from a computer readable medium, and executed by the system 100. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the system 100 preferably effects an advantageous apparatus for implementing the described methods of displaying augmented reality content on a document.
[0058] Thus, for example, the software 133 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 125 that is read by the optical disk drive 112. A computer readable medium having such software or computer program recorded on it is a computer program product.
[0059] In some instances, the application programs 133 may be supplied to the user encoded on one or more CD-ROMs 125 and read via the corresponding drive 112, or alternatively may be read by the user from the networks 120 or 122. Still further, the software can also be loaded into the system 100 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the system 100 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 101. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 101 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
[0060] The second part of the application programs 133 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 114. Through manipulation of typically the keyboard 102 and the mouse 103, a user of the system 100 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 117 and user voice commands input via the microphone 180.
[0061] Fig. 1C is a detailed schematic block diagram of the processor 105 and a “memory” 134. The memory 134 represents a logical aggregation of all the memory modules (including the HDD 109 and semiconductor memory 106) that can be accessed by the computer module 101 in Fig. 1B.
[0062] When the computer module 101 is initially powered up, a power-on self-test (POST) program 150 executes. The POST program 150 is typically stored in a ROM 149 of the semiconductor memory 106 of Fig. 1B. A hardware device such as the ROM 149 storing software is sometimes referred to as firmware. The POST program 150 examines hardware within the computer module 101 to ensure proper functioning and typically checks the processor 105, the memory 134 (109, 106), and a basic input-output systems software (BIOS) module 151, also typically stored in the ROM 149, for correct operation. Once the POST program 150 has run successfully, the BIOS 151 activates the hard disk drive 110 of Fig. 1B. Activation of the hard disk drive 110 causes a bootstrap loader program 152 that is resident on the hard disk drive 110 to execute via the processor 105. This loads an operating system 153 into the RAM memory 106, upon which the operating system 153 commences operation. The operating system 153 is a system level application, executable by the processor 105, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.
[0063] The operating system 153 manages the memory 134 (109, 106) to ensure that each process or application running on the computer module 101 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 100 of Fig. 1B must be used properly so that each process can run effectively. Accordingly, the aggregated memory 134 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 100 and how such is used.
[0064] As shown in Fig. 1C, the processor 105 includes a number of functional modules including a control unit 139, an arithmetic logic unit (ALU) 140, and a local or internal memory 148, sometimes called a cache memory. The cache memory 148 typically includes a number of storage registers 144 - 146 in a register section. One or more internal busses 141 functionally interconnect these functional modules. The processor 105 typically also has one or more interfaces 142 for communicating with external devices via the system bus 104, using a connection 118. The memory 134 is coupled to the bus 104 using a connection 119.
[0065] The application program 133 includes a sequence of instructions 131 that may include conditional branch and loop instructions. The program 133 may also include data 132 which is used in execution of the program 133. The instructions 131 and the data 132 are stored in memory locations 128, 129, 130 and 135, 136, 137, respectively. Depending upon the relative size of the instructions 131 and the memory locations 128-130, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 130. Alternately, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 128 and 129.
[0066] In general, the processor 105 is given a set of instructions which are executed therein. The processor 105 waits for a subsequent input, to which the processor 105 reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 102, 103, data received from an external source across one of the networks 120, 122, data retrieved from one of the storage devices 106, 109 or data retrieved from a storage medium 125 inserted into the corresponding reader 112, all depicted in Fig. 1B. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 134.
[0067] The disclosed arrangements for displaying augmented reality content on a document use input variables 154, which are stored in the memory 134 in corresponding memory locations 155, 156, 157. The arrangements for displaying augmented reality content on a document produce output variables 161, which are stored in the memory 134 in corresponding memory locations 162, 163, 164. Intermediate variables 158 may be stored in memory locations 159, 160, 166 and 167.
[0068] Referring to the processor 105 of Fig. 1C, the registers 144, 145, 146, the arithmetic logic unit (ALU) 140, and the control unit 139 work together to perform sequences of microoperations needed to perform “fetch, decode, and execute” cycles for every instruction in the instruction set making up the program 133. Each fetch, decode, and execute cycle comprises: a fetch operation, which fetches or reads an instruction 131 from a memory location 128, 129, 130; a decode operation in which the control unit 139 determines which instruction has been fetched; and an execute operation in which the control unit 139 and/or the ALU 140 execute the instruction.
[0069] Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 139 stores or writes a value to a memory location 132.
[0070] Each step or sub-process in the processes of Figs. 2 and 3 is associated with one or more segments of the program 133 and is performed by the register section 144, 145, 147, the ALU 140, and the control unit 139 in the processor 105 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 133.
[0071] The described methods of displaying augmented reality content on a document may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub functions of displaying augmented reality content on a document. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
[0072] Each step or sub-process in the processes of the methods described below is associated with one or more segments of the application program 133, and is performed by repeated execution of a fetch-execute cycle in the processor 105 or similar programmatic operation of other independent processor blocks in the electronic device 101.
[0073] The software architecture 133a comprises a feature extractor module 1211. Under execution of the processor 105, the feature extractor module 1211 receives as input a raster image of the camera field of view 185b, and produces a list of feature points of interest. In producing the list of feature points of interest, the feature extractor module 1211 determines features or regions of the document 190 of the camera field of view 185b. The operation of the feature extraction module 1211 will be described in further detail below with reference to Fig. 2.
[0074] The software architecture 133a also comprises a feature matcher module 1212. By execution of the processor 105, the feature matcher module 1212 receives as input a list of feature points of interest determined by the feature extractor module 1211. Execution of the processor 105 causes the feature matcher module 1212 to produce an output of a secondary list of feature points of interest. The outputted secondary list of feature points of interest has a 1:1 correspondence to the input list of feature points of interest received from the feature extractor module 1211. The operation of the feature matcher module 1212 will also be described in further detail below with reference to Fig. 2.
[0075] The software architecture 133a also comprises a geometric feature verification module 1213. Under execution of the processor 105, the geometric verification module 1213 receives as input the two corresponding lists of feature points of interest produced by the feature extractor module 1211 and the feature matcher module 1212. Execution of the processor 105 results in the geometric verification module 1213 generating as output two lists of feature points of interest. The two lists of feature points of interest generated by the geometric verification module 1213 comprise a sub-set of the input lists of feature points of interest and a document identification number. The operation of the geometric feature verification module 1213 will be described in further detail below with reference to Fig. 2.
[0076] The software architecture 133a also comprises a calculate document pose module 1214. Under execution of the processor 105, the calculate document pose module 1214 receives as input two lists of feature points of interest and a document identification number associated with the document 190, as generated by the geometric feature verification module 1213. The calculate document pose module 1214, under execution of the processor 105, generates as output: a polygon which describes a bounding region of the printed document 190 within the raster image of the camera field of view 185b; a document identification number; a list of polygons that describe bounding regions for augmented reality content within the printed document 190, in camera field of view 185b raster image coordinates; and augmented reality content associated with the document identification number. The list of polygons generated by the calculate document pose module 1214 represents a determination of the augmentable areas of the document 190. The operation of the calculate document pose module 1214 will be described in further detail below with reference to Fig. 2.
[0077] The software architecture 133a also comprises a calculate salient region module 1215. Under execution of the processor 105, the calculate salient region module 1215 receives as input augmented reality content from an augmented reality content database 1223 associated with the document 190. The calculate salient region module also receives as input the list of polygons that describe bounding regions for the augmentable content in the camera field of view 185b coordinates (generated by the calculate document pose module 1214). Under execution of the application 133 by the processor 105, the calculate salient region module 1215 processes the augmented reality content and generates as output a list of regions with associated saliency values of the augmented reality content. The list of regions and the associated saliency values determine salient regions within the augmented reality content.
[0078] In one example, the calculate salient region module 1215 determines salient regions of augmented reality content by executing a face-detection algorithm to identify faces on augmented reality content that contains faces. In this instance, the determined salient regions output by the calculate salient region module 1215 will comprise regions for each matched face and the associated saliency value can be determined according to an area of the salient region. One example of a face-detection algorithm is a Haar-cascade classifier. Other methods of detecting salient regions of the augmented reality content can be used, and likewise other face-detection methods can be used.
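A short sketch of the face-detection based salient region calculation described above is given below, assuming OpenCV's Haar-cascade classifier; the cascade file name, the detection parameters and the dictionary-based region structure are illustrative assumptions rather than requirements of the calculate salient region module 1215.

```python
import cv2

def face_salient_regions(content_bgr,
                         cascade_path='haarcascade_frontalface_default.xml'):
    """Detect faces in the augmented reality content and return one salient
    region per detected face, with a saliency value equal to the face area."""
    gray = cv2.cvtColor(content_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(cascade_path)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [{'x': int(x), 'y': int(y), 'w': int(w), 'h': int(h),
             'saliency': int(w) * int(h)}
            for (x, y, w, h) in faces]
```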
[0079] The software architecture 133a also comprises a salient to augmentable region matcher module 1216. Under execution of the processor 105, the salient to augmentable area matcher module 1216 receives as input a list of polygons that describe the bounding regions of augmentable areas of the document 190, as generated by the calculate document pose module 1214, and a list of augmented reality content whose position and orientation can be modified such that it is suitable for placement in the raster image of the camera field of view 185b.
Under execution of the processor 105, the salient to augmentable area matcher module 1216 executes to generate an index list that matches the input augmented reality content to an input augmentable area of the document 190. The operation of the salient to augmentable area matcher module 1216 will be described in further detail below with reference to Fig. 2 and Fig. 3.
[0080] The software architecture 133a also comprises a renderer module 1217. Under execution of the processor 105, the renderer module 1217 receives as input a list of augmented reality content whose position and orientation has been modified such that the augmented reality content is suitable for placement within the raster image of the camera field of view 185b. The renderer module 1217, under execution of the processor 105, generates and outputs a list of augmented reality content whose position and orientation has been modified such that the augmented reality content is suitable for projection and display within the projection field of view 185a. The operation of the renderer module 1217 will be described in further detail below with reference to Fig. 2 and Fig. 3.
[0081] The software architecture 133a also comprises a camera calibrator module 1210.
Under execution of the application 133 by the processor 105, the camera calibrator module 1210 receives as input a raster image of camera field of view 185b and outputs a transformation matrix. The generated transformation matrix contains a translation vector, a rotation vector and a scale vector. When applied to a raster image of the camera field of view 185b, the transformation matrix converts the camera field of view 185b coordinates to projection field of view 185a coordinates. The operation of the camera calibrator module 1210 will be described in further detail below with reference to Fig. 2.
[0082] The software architecture 133a also comprises a feature database 1220. Each entry in the feature database 1220 comprises a feature point of a document, and a corresponding document identification number for the relevant document. Each feature point of each feature database entry comprises an x and y location of a point of interest within a raster image representation of the document, for example the document 190. Each feature point of each feature database entry also comprises a vector of values that uniquely identify the x and y location. A physical representation of the raster image of the document 190 can be constructed as the document 190 using the feature point and the document identification number of each appropriate entry in the feature database 1220. In one implementation, the feature points of a digital representation of a document are pre-computed by the feature extractor module 1211 and stored within the feature database 1220. The feature database 1220 may be constructed using a data structure suitable for efficiently retrieving entries that contain a feature point. When queried with a list of feature points, such a data structure will provide a list of matching feature points and an accompanying index list under execution of the processor 105. The index list provides a one-to-one correspondence between a query feature point and a matched feature point within the data structure, as well as a document identification number associated with the matched feature point. In some implementations, the data structure is a KD-tree. The feature database 1220 may be stored in the memory 106 of the computer module 101. Entries in the feature database 1220 may in some instances be generated by the feature matcher module 1212.
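The following sketch illustrates one possible KD-tree backed realisation of the feature database 1220, here using SciPy's cKDTree; the descriptor format, the distance threshold and the returned triple format are assumptions for illustration rather than the prescribed data structure.

```python
import numpy as np
from scipy.spatial import cKDTree

class FeatureDatabase:
    """KD-tree over stored feature descriptor vectors, each tagged with the
    document identification number of the document it was extracted from."""

    def __init__(self, descriptors, doc_ids):
        self.tree = cKDTree(np.asarray(descriptors, dtype=np.float32))
        self.doc_ids = list(doc_ids)

    def query(self, query_descriptors, max_distance=0.7):
        """Return (query_index, database_index, doc_id) triples for each query
        descriptor whose nearest stored descriptor lies within max_distance."""
        dists, idxs = self.tree.query(
            np.asarray(query_descriptors, dtype=np.float32), k=1)
        return [(qi, int(di), self.doc_ids[int(di)])
                for qi, (d, di) in enumerate(zip(dists, idxs))
                if d <= max_distance]
```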
[0083] The software architecture 133a also comprises an augmentable areas database 1222, which may be stored in the memory 106. Each entry in the augmentable areas database 1222 comprises a list of polygons and an associated document identification number. In some implementations, each polygon of each entry in the augmentable areas database represents a region of low complexity within a raster image representation of the printed document 190.
In other implementations, a region of low complexity within a raster image of the document 190 is defined as a quadrilateral region of a raster image having an entropy below a predetermined threshold. In yet further implementations, the entropy threshold may be predetermined by an author of the document, or may relate to an empty region within the document 190. Entries in the augmentable areas database 1222 may in some instances be generated by the calculate document pose module 1214.
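A minimal sketch of the entropy test for a low complexity (augmentable) region is given below; the histogram-based entropy estimate and the threshold of 2.0 bits are illustrative assumptions, as the threshold is left to be predetermined by the author or derived from the document.

```python
import numpy as np

def region_entropy(gray_patch):
    """Shannon entropy (in bits) of the intensity histogram of a grayscale patch."""
    hist, _ = np.histogram(gray_patch, bins=256, range=(0, 256))
    p = hist.astype(np.float64) / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def is_augmentable(gray_patch, entropy_threshold=2.0):
    """Treat a quadrilateral region as augmentable when its entropy falls
    below a predetermined threshold (the value 2.0 is illustrative only)."""
    return region_entropy(gray_patch) < entropy_threshold
```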
[0084] The software architecture 133a also comprises an augmented content database 1223. The augmented content database 1223 may be stored on the memory 106 of the computing device 101. Each entry in the augmented content database 1223 comprises a URL and an associated document identification number. The URL of each entry in the augmented content database 1223 is used to retrieve specific augmented reality content related to a raster image representation of a document associated with the document identification number, for example, the document 190. In some implementations, each URL is used to retrieve augmented reality content in form of digital information that represents a raster image of a natural image. Other types of digital information are also possible.
[0085] Fig. 2 is a schematic flow diagram showing a method 200 for displaying augmented reality content on a document. The method 200 may be implemented as one or more of the software code modules 1211, 1212, 1213, 1214, 1215, 1216, 1210 and 1217 of the application 133, which are resident on the hard disk drive 110 and are controlled by execution of the processor 105. The method 200 will be described by way of example where the document 190 of Fig. 1A, in this example a printed document, is recognised and augmented reality content 170 is projected or displayed on the document 190.
[0086] The method 200 is referred to as the ‘dynamic projection method 200’ in the description hereafter. The dynamic projection method 200 begins upon execution of the application 133 by the processor 105 to execute a calibrate camera step 210.
[0087] In execution of the calibrate camera step 210 by the processor 105, an origin raster image of a calibration pattern is transmitted from the computer module 101 to the projector device 169. The projector device 169 then projects the raster image into the projection field of view 185a. In some implementations, the origin raster image of a calibration pattern is preconstructed and stored in the memory 106, for example in a RAM portion of the memory 106. In one example implementation the calibration pattern is a checkerboard. Once the projector device 169 has projected the raster image of the calibration pattern into the projection field of view 185a, the camera 127 operates, under execution of the processor 105, to retrieve a raster image of the camera field of view 185b which contains the projected calibration pattern. The camera 127 further operates to transmit the raster image containing the calibration pattern to the computing device 101. At the computer module 101, the raster image containing the calibration pattern is input to the camera calibration module 1210.
[0088] Under execution of the processor 105, the camera calibration module 1210 executes a corner detection process on the raster image of the calibration pattern. In one implementation, the corner detection process is a FAST9 corner detector. The corner detection process executes on the processor to generate a list of x and y locations that represent identified corners of the raster image containing the calibration pattern. The identified corners are referred to as computed corners hereafter. The processor 105 executes the application 133 so that the camera calibrator module 1210 then retrieves a list of corners which correspond to the origin calibration pattern received from the memory 106.
[0089] In one implementation, the list of corners corresponding to the origin calibration pattern are pre-computed by execution of an inspection process. Such corners are referred to as pre-computed corners hereafter. The processor 105 executes the application 133 so that the camera calibrator module 1210 then retrieves a list of indices that provide a one-to-one correspondence between the computed corners and the pre-computed corners. The processor 105 executes the application 133 to cause the camera calibrator module 1210 to transfer the computed and pre-computed corners and the matching indices list to the calculate document pose module 1214. Upon execution of the processor 105, the calculate document pose module 1214 executes to generate and output an intrinsic calibration matrix. Operation of the calculate document pose module 1214 is described in further detail below. The processor 105 executes the application 133 to cause the camera calibration module 1210 to receive the intrinsic calibration matrix from the calculate document pose module 1214 and store the received intrinsic calibration matrix in the memory 106. The stored intrinsic matrix is used by the camera calibrator module 1210 to transform x and y locations from points within a raster image that originates from the camera field of view 185b, to new x and y locations for projection within the projection field of view 185a.
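Although step 210 is described in terms of an intrinsic calibration matrix, the camera-to-projector mapping it produces can be illustrated as a plane-to-plane fit over corresponding calibration pattern corners. The sketch below uses OpenCV's checkerboard corner finder in place of the FAST9 detector named above, and the pattern size is an assumed value; it is an illustration under those assumptions rather than the module's prescribed implementation.

```python
import cv2
import numpy as np

def camera_to_projector_homography(camera_gray, pattern_gray, pattern_size=(9, 6)):
    """Estimate a plane-to-plane mapping from camera field of view 185b
    coordinates to projection field of view 185a coordinates using
    corresponding checkerboard corners (pattern_size counts interior corners)."""
    ok_cam, cam_corners = cv2.findChessboardCorners(camera_gray, pattern_size)
    ok_pat, pat_corners = cv2.findChessboardCorners(pattern_gray, pattern_size)
    if not (ok_cam and ok_pat):
        return None
    # Corners are returned in a consistent order, so corresponding pairs can
    # be used directly to fit the mapping.
    H, _ = cv2.findHomography(cam_corners, pat_corners, cv2.RANSAC)
    return H

# Usage: map a camera-view point (x, y) into projector coordinates.
# projected = cv2.perspectiveTransform(np.array([[[x, y]]], dtype=np.float32), H)
```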
[0090] Following completion of the calibrate camera step 210 in the dynamic projection process 200, the processor 105 executes to progress the method 200 to a get camera scene image step 220. In the get camera scene image step 220, processor 105 executes the application 133 to cause the camera device 127 to retrieve a raster image of the camera field of view 185b which contains the printed document 190. The retrieved raster image containing the printed document 190 is then transferred to the computer module 101 by the camera 127. The processor 105 executes to store the retrieved raster image containing the document 190 in the memory 106, such as in RAM of the memory 106. The raster image containing the printed document 190 will hereafter be referred to as the raster scene image.
[0091] Following completion of the get camera scene image step 220, the processor 105 executes to progress the dynamic projection method 200 to execute an identify documents in the scene step 230. Execution of the identify documents in scene step 230 operates to identify documents within the camera field of view 185b. Upon execution of the processor 105, the scene raster image is received by the feature extractor module 1211 from the memory 106.
The application 133 is executed by the processor 105 to cause the feature extractor module 1211 to create a list of feature points that identify points of interest within the scene raster image. A single created feature point comprises (i) an x and y location representing a position within the raster scene image; and (ii) a vector of values which uniquely identifies the x and y location within the corresponding raster scene image. The creation of the feature point list is implemented through execution of a feature extraction process by the feature extractor module 1211. In some implementations, the feature extraction is performed by execution of a SIFT feature extractor. After creating the list of feature points, under execution of the processor 105, the feature extractor module 1211 stores the feature points in the memory 106. The stored feature points are hereafter referred to as scene feature points.
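A minimal sketch of the SIFT-based feature extraction performed by the feature extractor module 1211 might look as follows; the availability of SIFT_create depends on the OpenCV build, and the tuple return format is an illustrative choice.

```python
import cv2

def extract_scene_features(scene_raster_bgr):
    """Return keypoint (x, y) locations and descriptor vectors for the scene
    raster image, produced by a SIFT feature extractor."""
    gray = cv2.cvtColor(scene_raster_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()  # cv2.xfeatures2d.SIFT_create() on older builds
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    points = [(kp.pt[0], kp.pt[1]) for kp in keypoints]
    return points, descriptors
```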
[0092] Next, under execution of the processor 105, the identify documents in scene step 230 executes to cause the feature matcher module 1212 to receive the scene feature points from the memory 106. The processor 105 executes such that the feature matcher module 1212 then queries the feature database 1220 with the scene feature points and receives a list of matching feature points and an index list. The received index list is hereafter referred to as the matched scene feature index. Under execution of the processor 105, the feature matcher module 1212 stores the matched scene feature index in the memory 106. The identify documents in the scene step 230, under execution of the processor 105, next executes the geometric feature verification module 1213. The geometric feature verification module 1213 receives the matched scene feature index and scene feature points from the memory 106 for use as input.
The geometric feature verification module 1213 then, under execution of the processor 105, creates a scene feature index that is a subset of the input scene feature index. The subset scene feature index is created by the geometric feature verification module 1213, under execution of the processor 105, processing the scene feature points, the matching feature points and the scene feature index by executing a series of outlier removal processes. The series of outlier removal processes execute to remove entries from the scene feature index when a scene feature point is incorrectly matched to a feature database entry. Such an anomaly occurs when the feature vector of a scene feature point appears to resemble multiple database feature point vectors. In some implementations, one of the outlier removal processes is a RANSAC process, and a secondary outlier removal process may be an orientation-histogram process. The subset scene feature index is stored in the memory 106. The subset feature index is hereafter referred to as the refined feature index.
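One common way to realise the RANSAC outlier removal mentioned above is to fit a homography to the corresponding point lists and keep only the inlier matches, as sketched below; the reprojection threshold is an assumed value and the orientation-histogram pass is omitted.

```python
import cv2
import numpy as np

def ransac_inlier_indices(scene_points, matched_points, reproj_threshold=5.0):
    """Keep only the correspondences consistent with a single planar
    (homography) transformation; all other matches are treated as outliers."""
    src = np.asarray(matched_points, dtype=np.float32).reshape(-1, 1, 2)
    dst = np.asarray(scene_points, dtype=np.float32).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_threshold)
    if mask is None:
        return []
    return [i for i, keep in enumerate(mask.ravel()) if keep]
```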
[0093] The geometric feature verification module 1213 then, under execution of the processor 105, takes a vote of the document identification numbers associated with the scene feature index and locates the document identification number with the highest number of votes. The highest voted document identification number represents the printed document 190 within the camera field of view 185b. The geometric feature verification module 1213 operates under execution of the processor 105 to store the highest voted document identification number in the memory 106.
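The vote over document identification numbers can be expressed compactly, as in the sketch below, which assumes each geometrically verified match carries the document identification number of its database feature.

```python
from collections import Counter

def vote_document_id(matched_doc_ids):
    """Return the document identification number that receives the most votes
    among the geometrically verified feature matches."""
    if not matched_doc_ids:
        return None
    return Counter(matched_doc_ids).most_common(1)[0][0]
```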
[0094] Following the identify documents in the scene step 230, under execution of the processor 105, the dynamic projection method 200 progresses to a retrieve content to project step 260. In the retrieve content to project step 260 the dynamic projection method 200 executes to retrieve the highest voted document identification number from the memory 106. Under execution of the processor 105, the application 133 executes to query the augmented content database 1223 using the highest voted document identification number to receive the associated augmented reality content. The application 133 further executes to store the associated augmented content in the memory 106, such as in a RAM portion of the memory 106. In some implementations the query of the augmented content database 1223 is performed using SQL. Other methods for querying may be used. In some implementations, the received augmented reality content from the augmented content database 1223 is a natural image.
[0095] Following the step 260, under execution of the processor 105, the dynamic projection method 200 progresses to a calculate salient regions of content step 270. In the calculate salient regions of content step 270, the calculate salient regions module 1215 executes to receive the augmented reality content from the memory 106 (as stored in step 260) for use as input. The calculate salient regions module 1215, under execution of the processor 105, then determines salient regions of the received augmented reality content, and corresponding sizes of each calculated salient region, within the received augmented content using a salient identification process. In the examples hereafter, a plurality of salient regions are calculated for any input augmented reality content. The calculated salient regions and corresponding sizes of the corresponding salient regions are stored in the memory 106. The calculate salient regions module 1215 operates to determine salient regions of the augmented reality content in step 270. A salient region is a polygon that bounds a region of interest within the augmented reality content.
[0096] In some implementations, identification or calculation of salient regions is performed by the calculate salient regions module 1215 receiving pre-computed polygons that bound pre-determined regions of interest within the received augmented reality content, from the augmented content database 1223. Such implementations are hereafter referred to as a pre-computed salient implementation. In the pre-computed salient implementation, an author of the augmented reality content selects one or more regions within the augmented content as salient. The polygons describing the pre-selected salient regions are then stored in the augmented content database 1223 with the augmented reality content. Other implementations are also possible.
[0097] Following the identify documents in the scene step 230, under execution of the processor 105, the dynamic projection method 200 progresses to execute a find augmentable areas step 240. In the find augmentable areas step 240, the calculate document pose module 1214 executes to receive as input the scene feature points, matched feature points and the refined feature index from the memory 106. The calculate document pose module 1214 then executes to receive the x and y location of each feature point in the scene feature points and the corresponding matched feature points using the refined feature index to create a scene points list and a corresponding matched points list. The scene points list and the matched points list are then used to produce a Homography which is stored in the memory 106. The Homography comprises a rotation and translation vector as well as a scale factor: together, the rotation and translation vector and the scale factor describe a transformation from any x and y location defined for features from the feature database 1220 to the corresponding x and y location defined for features determined from the raster scene image. The Homography is hereafter referred to as the database-to-scene Homography. The described methods may make use of functions provided by the OpenCV (http://opencv.willowgarage.com) computer vision library to create the database-to-scene Homography.
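With the scene points list and the matched points list in hand, the database-to-scene Homography can be estimated with OpenCV's findHomography, as in the sketch below. The 5-pixel RANSAC reprojection threshold and the function name are assumed values chosen for illustration.

```python
import cv2
import numpy as np

def database_to_scene_homography(db_points, scene_points):
    """db_points and scene_points are corresponding (x, y) pairs; returns a 3x3
    matrix mapping database feature locations to scene raster image locations."""
    src = np.float32(db_points).reshape(-1, 1, 2)
    dst = np.float32(scene_points).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```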
[0100] Next, the calculate document pose module 1214 executes to receive the highest voted document identifier from the memory 106 and queries areas of whitespace, or pre-determined areas, or low-feature areas, etc. of the raster image. In querying areas of whitespace, or pre-determined areas, or low-feature areas, the calculate document pose module 1214 executes to determine one or more empty regions within the document 190. The augmentable areas database 1222 is queried using the highest voted document identifier to receive polygons that describe the bounded augmentable areas associated with the identifier; in step 250, described below, under execution of the processor 105, the determined salient regions of the augmented reality content are associated with those augmentable areas. The calculate document pose module 1214 executes to use the database-to-scene Homography to transform the points describing the polygons of the empty regions or augmentable areas to corresponding scene raster image points. The corresponding scene raster image points are hereafter referred to as scene augmentable areas. The scene augmentable areas, and the bounding size of each of the scene augmentable areas, are then stored in the memory 106.
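Transforming the stored empty-region polygons into scene augmentable areas then reduces to applying that homography to each polygon's vertices. A sketch is given below; the polygon format (a list of (x, y) vertices) and the use of the bounding rectangle as the "bounding size" are assumptions.

```python
import cv2
import numpy as np

def to_scene_augmentable_area(polygon, H):
    """Map a document-space polygon into scene raster image coordinates and
    return the transformed vertices with their bounding size."""
    points = np.float32(polygon).reshape(-1, 1, 2)
    scene_points = cv2.perspectiveTransform(points, H).reshape(-1, 2)
    x, y, w, h = cv2.boundingRect(scene_points.astype(np.float32))
    return scene_points, (w, h)
```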
[0101] Following the find augmentable areas step 240, under execution of the processor 105, the dynamic projection method 200 progresses to the associate salient regions to augmentable areas step 250. In the associate salient regions to augmentable areas step 250 the salient to augmentable area matcher module 1216 executes to receive as input the computed salient regions and augmentable areas determined in steps 270 and 240 respectively from the memory 106. The salient to augmentable area matcher module 1216 executes to match each determined salient region to a determined augmentable area using a salient matching process. In executing the salient matching process, the salient to augmentable area matcher module 1216 generates an index list that matches the input salient regions of the augmented reality content to an input augmentable area of the document 190. The associate salient regions to augmentable areas step 250 matches the determined augmentable areas and the salient regions for display by the projector 169 onto the field of view 185a as the augmented reality content 170. A method 300 of matching an augmentable area to a salient region, as executed at step 250, is described below with reference to Fig. 3.
[0102] Following the associate salient regions to augmentable areas step 250, under execution of the processor 105, the dynamic projection method 200 progresses to a project onto the scene step 280. In the project onto the scene step 280 the Renderer module 1217 executes to receive as input the determined salient regions of the augmented reality content and the associated intrinsic matrix, as generated in step 250, from the memory 106. The Renderer module 1217 executes to apply the intrinsic matrix to the polygons that describe the bounding region of the determined salient regions of the augmented reality content, such that the x and y positions of the points defining the bounding regions or polygons of the determined salient regions of the augmented content are transformed into coordinates for the projection space field of view 185a. The transformed salient regions of the augmented reality content are hereafter referred to as projection augmented reality content. The Renderer module 1217 executes to transmit the projection augmented reality content to the projector 169 for projection onto the projection field of view 185a. In transmitting the projection augmented reality content onto the projection field of view, the Renderer module 1217 executes to display the salient regions of the augmented reality content by overlaying the salient regions of the augmented reality content onto the augmentable areas based upon the matching of step 250.
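The projection step amounts to mapping each salient region's bounding quadrilateral from content coordinates into the projector's coordinate space and rasterising it there. A minimal sketch follows, under the assumption that the mapping can be expressed as a 3x3 planar transform between the region's four corners and a target quadrilateral in projector space; the variable names, and the reduction of the "intrinsic matrix" to such a transform, are illustrative only.

```python
import cv2
import numpy as np

def render_region_for_projection(content, region_corners, target_corners, proj_size):
    """Warp the salient-region quad of `content` (4 corner points) onto
    `target_corners` in a projector framebuffer of size proj_size = (width, height)."""
    M = cv2.getPerspectiveTransform(np.float32(region_corners),
                                    np.float32(target_corners))
    return cv2.warpPerspective(content, M, proj_size)
```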
[0103] The method 300 of matching an augmentable area to a salient region, as executed at step 250, will now be described with reference to Fig. 3. The method 300 may be implemented as one or more of the software modules 1211, 1212, 1213, 1214, 1215, 1216, 1210 and 1217 of the application 133, which are resident on the hard disk drive 110 and are controlled by execution of the processor 105. The method 300 represents an implementation of the associate salient regions to augmentable areas step 250 of Fig. 2.
[0104] As described, the method 300 is implemented by execution of the salient to augmentable area matcher module 1216 by the processor 105, and shows an example of how the augmentable areas or empty spaces determined in the find augmentable areas step 240 of Fig. 2 are matched to the salient regions of augmented reality content calculated by the calculate salient regions of content step 270 to project the salient regions into the augmentable areas at the project onto the scene step 280.
[0105] The method 300 begins at a recognise relationships between salient regions step 310.
In the step 310, under execution of the processor 105, the salient to augmentable area matcher module 1216 executes to recognise relationships between salient regions of the augmented reality content calculated by the calculate salient regions module 1215 in execution of the calculate salient regions of content step 270. In recognising relationships between salient regions, the salient to augmentable area matcher module 1216 determines a spatial arrangement of each of the salient regions of the augmented reality content determined at step 270. In some implementations, the recognise relationships in salient regions step 310 determines relationships based upon one or more of size, location, and proximity of the salient region to one or more of the other salient regions. In some implementations, the recognise relationships in salient regions step 310 recognises relationships based upon the relationships of content of the salient regions relative to one another, e.g., if content of a given salient region has a relationship with content of one or more of the other salient regions, or a direction between one salient region and another salient region. Other relationships are also possible. One particular implementation that determines a location relationship of the salient regions will be described in relation to Fig. 4. The result of execution of the recognise relationships between salient regions step 310 is generation of a list of salient regions of augmented reality content where the second salient region in the list relates to the first salient region, and the third salient region relates to the fourth, etc. Other relationship groupings are also possible.
[0106] Following the recognise relationships between salient regions step 310, under execution of the processor 105, the method 300 proceeds to execute a select most important salient region step 320. In execution of the select most important salient region step 320, the most salient region of the augmented reality content is selected to be used in a loop of the process 300 that follows the select most important salient region step 320. To identify the most important salient region, the salient to augmentable area matcher module 1216 executes to receive the list of saliency values of each salient region within the augmented reality content, as generated in step 270 of Fig. 2, from the memory 106. The salient to augmentable area matcher module 1216 executes on the processor 105 to sort the list of saliency values in descending order. The highest saliency value is then selected from the beginning of the list by the salient to augmentable area matcher module 1216, and used as the most important salient region. In sorting the list of saliency values in descending order, the salient to augmentable area matcher module 1216 determines a rank of each of the salient regions of the augmented reality content. Determination of the importance of each salient region may be based upon one or more properties of the salient region. Example properties of a salient region include a size of the salient region, content of the salient region, a relationship between the content of a given salient region with one or more of the other salient regions, or a resolution of the salient region.
[0107] Upon completion of the select most important salient region step 320, the method 300, under execution of the processor 105, progresses to execute a select most appropriate augmentable area step 330. In executing the select most appropriate augmentable area step 330, the processor 105 executes to select the most appropriate augmentable area of the document 190 (from the augmentable areas determined in the find augmentable areas step 240) for the determined most important salient region. In selecting the most appropriate augmentable area of the document 190 for the determined most important salient region, the module 1216 determines a spatial arrangement and a rank for each of the augmentable areas, and matches the rank of the determined most important salient region with the rank of the empty regions.
[0108] Depending on the implementation, the method for determining appropriateness when selecting the most appropriate augmentable area can vary. In one implementation, the appropriateness of an augmentable area is determined according to a proximity of the augmentable area to the document 190 - such is described in relation to Fig. 4. The proximity of the augmentable area to the document 190 may in some instances be determined as a distance between the augmentable area and a target of the document 190. In other implementations, the appropriateness of an augmentable area or empty space is determined according to the size of the augmentable area in relation to the selected salient region - such is described in relation to Fig. 5.
[0109] In yet another implementation, appropriateness of an augmentable area is determined according to the relationship between the selected salient region of the augmented reality content to be placed in the most appropriate augmentable area and a previously selected salient region of the augmented reality content that has already been placed in the most appropriate augmentable area - such is described in relation to Fig. 6. Another implementation determines the appropriateness of an augmentable area or empty space by determining the suitability of the augmentable area to display an animation, or transform, of the salient region of the augmented content - as will be described in relation to Fig. 7. Other methods (including combinations of the above) may be used to determine the appropriateness of an augmentable area of the augmentable areas determined in step 240. Properties such as the proximity of an augmentable area to the document, the size of an augmentable area, and the like represent a spatial arrangement of the augmentable area, and may be used in determining a rank of the augmentable area.
[0110] Upon completion of the select most appropriate augmentable area step 330, the method 300, under execution of the processor 105, progresses to execute a place selected salient region into selected area step 340. Using the selected most important salient region of the augmented reality content from the select most important salient region step 320, and the selected most appropriate augmentable area from the select most appropriate augmentable area step 330, the place selected salient region into selected area step 340 executes to place the selected most important salient region into the selected augmentable area. In placing the selected most important salient region into the selected area, the place selected salient region into selected area step 340 positions the selected salient region in the document 190 based upon the matching of step 330.
[0111] Typically, placing the selected most important (highest rank) salient region of the augmented reality content involves extracting the selected most important region from the augmented reality content (retrieved in the retrieve content to project step 260) and projecting the extracted selected most important region of the augmented reality content into a document or scene at the project onto the scene step 280 - as will be described in relation to Fig. 4. In some implementations, the augmented reality content is modified to allow alignment of the spatial arrangement of the salient regions within the document. For example, in one implementation, the selected most important region of the augmented reality content is placed in conjunction with the other salient regions of the augmented reality content but with a blend applied between the salient regions - such is described in relation to Fig. 6. In yet another implementation, the selected most important region is placed in the document 190 in conjunction with other salient regions but with an animation applied between the placed salient regions - such is described in relation to Fig. 7. Other modifications are also possible, such as resizing or rotation of the salient region or the like.
[0112] Under execution of the processor 105, the method 300 progresses from step 340 to a select the next related salient region as the selected salient region step 360. In this instance, the processor 105 executes such that the selected most salient region of the augmented reality content (as selected in select most important salient region step 320) is set to be the salient region that is most related (as per the relationships recognised or determined in the recognise relationships between salient regions step 310).
[0113] Following execution of step 360, the method 300 progresses, under execution of the processor 105, to a decision step 370. At the decision step 370, the method 300 executes to determine whether to continue. For example, if all of the salient regions have been positioned in the document 190, a determination of ‘No’ is made at step 370, and the method 300 ends. Alternatively, if all of the determined salient regions have not been positioned in the document 190, and augmentable areas are still available, the method 300 continues to the select most appropriate augmentable area step 330.
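Steps 310 to 370 amount to a greedy loop: rank the salient regions, pick the best remaining augmentable area for the current region, place it, then move to the most related region until regions or areas run out. The sketch below compresses that loop; the appropriateness scoring and the next-region selection are left as pluggable functions because, as the figures that follow show, they vary between implementations, and all names and data shapes are illustrative assumptions.

```python
def match_salient_to_areas(regions, areas, appropriateness, next_related):
    """regions: dicts with 'name' and 'saliency'; areas: augmentable areas.
    appropriateness(region, area, placements) scores a candidate area;
    next_related(region, remaining) returns the next region to place, or None."""
    placements = {}
    remaining = sorted(regions, key=lambda r: r["saliency"], reverse=True)
    available = list(areas)
    current = remaining.pop(0) if remaining else None    # most important salient region (step 320)
    while current is not None and available:
        best = max(available, key=lambda a: appropriateness(current, a, placements))
        placements[current["name"]] = best               # steps 330 and 340
        available.remove(best)
        current = next_related(current, remaining)       # step 360
        if current is not None:
            remaining.remove(current)
    return placements
```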
[0114] Fig. 4 shows an example to illustrate the functionality of an implementation of the method 200 for displaying augmented reality content on a document. In the example of Fig. 4, the plurality of salient regions of augmented reality content are positioned in the document based upon a matching determined with regard to relative proximity and location.
[0115] Fig. 4 shows a photograph 410 and a document 421. The photograph 410 and the document 421 are shown in both a document scene 420 and an augmented document scene 430. In a typical system according to the present disclosure, as shown in Fig. 1A, the photograph 410 forms augmented reality content (stored in the augmented content database 1223) to be projected or displayed on the document 421 in the document scene 420 according to a target 427, resulting in the augmented document scene 430. In the example shown in Fig. 4, the salient regions of the photograph 410 are matched for overlaying on the document 421 by prioritising placement of the most salient regions into those of the augmentable areas nearest to the target 427 such that the spatial relationship between the salient regions is maintained. The target 427 may be pre-determined and stored in the memory 106. The target 427 may in some implementations be related to features of the document 421. The augmentable regions may in other implementations be predefined and stored in association with the target 427 in the memory 106.
[0116] The photograph 410 represents a typical natural photograph containing four items: a parasol 411, a person 412, a beach ball 414 and a Sun 413, each identified with a circle. The photograph 410 represents augmented reality content stored in the augmented content database 1223 that is retrieved for projection onto the document 421 at the retrieve content to project step 260. Salient regions are determined for the photograph 410 by execution of the calculate salient region module 1215 as at the calculate salient regions of content step 270. The determined salient regions are represented as circles overlaid on the photograph 410 in Fig. 4. In the example in relation to Fig. 4, the size of each circle on the photograph represents the saliency of each encircled item in the photograph 410 such that the most salient item is encircled by the biggest circle. As such, the most salient or highest rank region in the photograph 410 is the parasol 411, followed by the person 412, then the beach ball 414 and finally the Sun 413 (the least salient or lowest rank region).
[0117] The document scene 420 forms a typical camera scene image that can be captured from the scene by camera 127 upon execution of the get camera scene image step 220. The identify documents in the scene step 230 executes to identify the document 421 in the document scene 420 under execution of the feature extractor module 1211 and feature matcher module 1212. As at the find augmentable areas step 240, the processor 105 executes so that the augmentable areas in the document scene 420 are determined by execution of the calculate salient region module 1215 on the processor 105. In other implementations, the augmentable areas are predetermined and loaded from the augmentable areas database 1222 under execution of the processor 105. The result of the find augmentable areas step 240 is shown in Fig. 4 as the augmentable areas 422, 423, 424, 425, 426, 428. The augmentable areas 422, 423, 424, 425, 426, 428 represent empty regions of the document 421.
[0118] The augmented document scene 430 shows the document 421 as displayed after the salient region associated with the parasol 411 has been associated with the augmentable area 422, the person 412 has been associated with augmentable area 425, and the beach ball 414 has been associated with augmentable area 429 as the result of execution of the associate salient regions to augmentable areas step 250, and after the augmented reality content 411, 412, 414 has been projected onto the scene 420 by the projector 169 by execution of the project onto the scene step 280.
[0119] Having described what is shown by Fig. 4, this implementation is described with reference to Fig. 3.
[0120] The recognise relationships between salient regions step 310 executes to identify a spatial relationship 415 between the parasol 411 and the person 412, a spatial relationship 416 between the person 412 and the beach ball 414, and a spatial relationship 417 between the beach ball 414 and the Sun 413. The recognise relationships between salient regions step 310 may recognise the relationships between salient regions by identifying the spatial arrangement or direction of one salient region relative to another salient region. In one implementation, beginning with the most salient region, the parasol 411, the recognise relationships between salient regions step 310 may identify that the second-most salient region, the person 412, is positioned top-right to the parasol 411; consequently, the spatial relationship 415 of the parasol 411 to the person 412 can be categorised as a “top-right” relationship, whereby the person 412 is “top-right” to the parasol 411. Similarly, the spatial relationship 416 identifies that the beach ball 414 is to the “bottom-right” of the person 412. Similarly, the spatial relationship 417 identifies that the Sun 413 is to the “top” of the beach ball 414. The relationships 415, 416 and 417 represent spatial arrangements of the salient regions of the photograph 410. Other implementations may use a different means to recognise relationships between salient regions at the recognise relationships between salient regions step 310.
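A simple way to categorise these relationships is to compare the centres of two salient regions and label the offset with the compass-style terms used above. The sketch below assumes illustrative pixel coordinates and the usual image convention that y grows downward, so a smaller y value means "top".

```python
def spatial_relationship(from_centre, to_centre):
    """Return a label such as 'top-right' describing where to_centre lies
    relative to from_centre."""
    (x1, y1), (x2, y2) = from_centre, to_centre
    vertical = "top" if y2 < y1 else "bottom" if y2 > y1 else ""
    horizontal = "right" if x2 > x1 else "left" if x2 < x1 else ""
    return "-".join(part for part in (vertical, horizontal) if part) or "same"

# e.g. a person centred at (320, 80) relative to a parasol centred at (120, 200)
print(spatial_relationship((120, 200), (320, 80)))  # -> "top-right"
```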
[0121] The select most important salient region step 320, in the implementation of Fig. 4, executes to select the parasol 411. In the example relating to Fig. 4, the salient regions of the augmented reality content, the photograph 410, are ranked by which of the salient regions is most salient first.
[0122] In the example of Fig. 4, the select most appropriate augmentable area step 330 executes as described above to select the most appropriate augmentable area to position the selected salient region of the photograph 410 (from either the select most important salient region step 320, or the select the next related salient region as the selected salient region step 360) for the target 427. Determining a distance between an augmentable area and the target 427 represents determining a spatial relationship of the augmentable area.
[0123] One implementation for determining which augmentable area is the most appropriate for a selected salient region is based upon the proximity of a tentatively selected augmentable area to the target 427, and the compatibility of the spatial relationship between the previously selected augmentable area and the tentatively selected augmentable area with the spatial relationship between the previously selected salient region and the selected salient region. In such instances, a distance between the empty region (augmentable area) and the target 427 of the augmented reality content represents compatibility or matching of the direction between positioned salient regions and a direction between the empty regions. For example, if the parasol 411 has been positioned at the augmentable area 422, then placing the person 412 will require an augmentable area that is either “top”, or “right”, or “top-right” to the augmentable area 422. Such candidates are the augmentable areas 423, 424 and 425. In the example of Fig. 4, the augmentable area 425 is determined for the person 412. Once the parasol 411 and the person 412 have been positioned, the next most important salient region is the beach ball 414. The beach ball 414 has a “bottom-right” spatial relationship 416 to the person 412; consequently, the select most appropriate augmentable area step 330 for the beach ball 414 executes to select from the augmentable areas 429 and 428. However, since the augmentable area 428 is too far away from the target 427, the augmentable area 428 is not selected. Thus, the beach ball 414 can only be positioned at the augmentable area 429.
Finally, the Sun 413 needs to be positioned, however, there is no augmentable area available above the placed beach ball 414 at augmentable area 429. Accordingly, the Sun 413 is not positioned anywhere on the scene 430.
[0124] Fig. 5 shows a photograph 500, a scene raster image 510 and a projected physical scene 520 to illustrate an example of an alternate implementation of the find augmentable areas step 240 within the dynamic projection method 200. The alternate implementation of the find augmentable areas step 240 of Fig. 5 shows how augmentable areas outside of the printed document 518 can be determined and used in the associate salient regions to augmentable areas step 250. In the example of Fig. 5, the augmentable areas are determined and ranked in relation to a scene including the document. The alternate implementation of the find augmentable areas step 240 begins after the database-to-scene Homography has been created and stored in the memory 106. Following the creation of the database-to-scene Homography, the find augmentable areas step 240 executes to receive a scene raster image 510 of the camera field of view 185b from the memory 106. Within the scene raster image 510 digital representations of various physical objects are present, such as a mouse 512, mobile device 513, paper notes 514 and the printed document 518. The printed document 518 has been identified as described in relation to operation of the identify documents in the scene step 230. Upon retrieving the scene raster image 510 the find augmentable areas step 240 executes to divide the scene raster image 510 into tiles of a pre-determined size. The scene raster image 510 that has been tiled is hereafter referred to as a tiled scene image. The find augmentable areas step 240 then executes to determine a standard deviation of each tile within the tiled scene image. Determining the standard deviation of each tile may be implemented according to the following equation: s_N = sqrt((1/N) * sum_{i=1..N} (x_i - x̄)^2), where N is the sample size (the number of pixels in the tile), x_i is a sample (a pixel value) and x̄ is the average of all the samples. The find augmentable areas step 240 executes to create a binary mask, using the determined tile standard deviations and a first pre-determined threshold. If a tile standard deviation is determined to be below the first pre-determined threshold the tile is marked as ‘0’. If the tile standard deviation is determined to be above the first pre-determined threshold the tile is marked as ‘1’. Tiles marked as ‘0’ represent non-busy tiles. Tiles marked as ‘1’ represent busy tiles. Following the creation of the binary mask the find augmentable areas step 240 executes to group adjacent tiles that are marked as ‘0’, using connected components, to form rectangular regions that represent available augmentable areas. Examples of identified available augmentable areas within the scene raster image 510 of Fig. 5 are shown as augmentable area 515, augmentable area 516 and augmentable area 517. The augmentable areas 515, 516 and 517 represent empty regions of the scene raster image 510. Other augmentable areas are also depicted in Fig. 5, shown as dashed quadrilaterals. Once the available augmentable areas have been determined, the find augmentable areas step 240 executes to determine a distance from an edge of each available augmentable area that is closest to the printed document 518, and compares each distance to a second pre-determined threshold. Augmentable areas determined to be further from the printed document 518 than the second pre-determined threshold are discarded. The available augmentable areas determined to be within the second pre-determined threshold are stored in the memory 106 and the find augmentable areas step 240 ends.
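A compact version of this tiling-and-masking procedure, assuming a greyscale scene image and illustrative tile size and thresholds, might look like the following. Connected components over the non-busy tiles yield rectangular candidate augmentable areas, reported here in pixel coordinates.

```python
import cv2
import numpy as np

def find_augmentable_areas(gray, tile=32, std_threshold=8.0):
    """Return (x, y, w, h) rectangles of low-variance regions in a greyscale image."""
    rows, cols = gray.shape[0] // tile, gray.shape[1] // tile
    busy = np.zeros((rows, cols), np.uint8)
    for r in range(rows):
        for c in range(cols):
            block = gray[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            busy[r, c] = 1 if block.std() > std_threshold else 0   # binary mask of busy tiles
    non_busy = (busy == 0).astype(np.uint8)
    count, labels, stats, _ = cv2.connectedComponentsWithStats(non_busy)
    areas = []
    for i in range(1, count):                                      # label 0 is the background
        c, r, w, h, _ = stats[i]
        areas.append((c * tile, r * tile, w * tile, h * tile))     # back to pixel coordinates
    return areas
```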
[0125] Upon completion of the alternate implementation of the find augmentable areas step 240 described in relation to Fig. 5, the dynamic projection method 200 continues as described in Fig. 2. The photograph 500 represents augmented reality content retrieved in the retrieve content to project step 260. The photograph 500 comprises three salient regions that have been determined by execution of the calculate salient regions step 270. The three identified salient regions are: a weather region 501, a ball region 502 and a person region 503. Upon execution of the calculate salient regions step 270, the dynamic projection method 200 progresses to the associate salient regions step 250, as described in relation to Fig. 2.
Following execution of the associate salient regions step 250, the project onto scene step 280 is executed. Fig. 5 shows a projected physical scene 520 output according to execution of the project onto scene step 280. In the projected physical scene 520, the salient ball region 502 has been associated with the augmentable area 515 and projected as the projected ball image 521 on the document 518; the weather region 501 has been associated with the augmentable area 517 and projected as the weather image 523; and the person region 503 has been associated with the augmentable area 516 and projected as the person image 522, as described in relation to operation of the associate salient regions step 250 in Fig. 2.
[0126] Fig. 6 shows a photograph 600, a document scene 420 and an augmented document scene 620 to illustrate the functionality of the method 200 of displaying augmented reality content on a document with regard to relationship and blend. The document scene 420 shown in Fig. 6 is the same as the scene 420 shown in Fig. 4.
[0127] Fig. 6 shows a result of operation of the method 200 to display salient regions of augmented reality content, the photograph 600, on the document 421. The document 421 is shown in both the document scene 420 and the augmented document scene 620. In a typical system for displaying augmented reality content on a document, such as the system 100 shown in Fig. 1A, the photograph 600 represents augmented reality content (stored in the augmented content database 1223) to be projected over or displayed on the document 421 in the document scene 420 for the target 427, resulting in the augmented document scene 620.
In the implementation described in relation to Fig. 6, determined salient regions of the photograph 600 are projected over the document 421 by the application 133 executing to determine a key relationship 605 between determined salient regions in the photograph 600.
In the example of Fig. 6, the rank of the salient regions is determined based upon the key relationship between the salient regions of the photograph 600. The key relationship 605 is used to determine positioning of salient regions of the photograph 600 in the document 421 such that the salient regions are placed in augmentable areas with blend transitions between the salient regions of the photograph 600.
[0128] The photograph 600 contains a natural image of three persons. The left-most person is a straight-facing person 601, the middle person is a left-facing person 602, and the right-most person is a right-facing person 603. In execution of the calculate salient regions of content step 270, the salient regions of the photograph 600 are determined in the example of Fig. 6. The salient regions of the photograph 600 are determined as the straight-facing person 601, the left-facing person 602, and the right-facing person 603. A saliency value for each determined salient region is shown as a circle encircling each determined salient region.
In Fig. 6, each salient region has the same saliency value, as demonstrated by the diameter of each circle being the same. The relationship 604 and the relationship 605 indicate relationships recognised between salient regions in execution of the step 310. The relationships 604 and 605 represent spatial arrangements of the salient regions 601, 602 and 603. The relationship 605 is indicated with a thicker line indicating a stronger relationship between the salient regions for the left-facing person 602 and the right-facing person 603, who are facing each other. In contrast, the relationship 604 has a thinner line indicating a weaker relationship because the straight-facing person 601 is not looking at the left-facing person 602. The thicker line of the relationship 605 indicates a higher determined rank than the thinner line of the relationship 604. In this instance, the rank of the salient regions is determined according to key relationships between the salient regions. Other methods of determining a relationship between two salient regions may be used.
[0129] The augmented document scene 620 shows the document scene 420 displayed with a projected augmentation. The projected augmentation consists of the part of the augmented reality content, the photograph 600, that contains the salient regions that had the strongest relationship. That is, the left-facing person 602 and the right-facing person 603 are displayed on the document 421 as a placed left-facing person 623 and a placed right-facing person 624. Between the placed left-facing person 623 and the placed right-facing person 624 there is a blend applied that fades the background of the photograph 600 progressively away from the placed left-facing person 623, reaching transparency at a blend portion 621. Between the placed right-facing person 624 and the placed left-facing person 623 there is a blend applied that fades the background of the photograph 600 progressively away from the placed right-facing person 624, reaching transparency at a blend portion 622. The blends applied between the placed left-facing person 623 and the placed right-facing person 624 represent modification of the photograph 600 to allow alignment of the spatial arrangement of the salient regions 602 and 603 with the spatial arrangement of the augmentable areas of the document 421. As such, the nature of the blend may be determined by a relationship between the salient regions.
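One way to realise such a blend is a linear alpha ramp. The sketch below assumes the content has been converted to an RGBA array and that the fade runs horizontally from a fully opaque column to a fully transparent one (with opaque_col to the left of transparent_col); both the function name and the linear profile are assumptions.

```python
import numpy as np

def fade_to_transparent(rgba, opaque_col, transparent_col):
    """Ramp the alpha channel from opaque at opaque_col down to zero at
    transparent_col and beyond, fading the background away from a placed region."""
    out = rgba.copy()
    ramp = np.linspace(1.0, 0.0, transparent_col - opaque_col)
    out[:, opaque_col:transparent_col, 3] = (
        out[:, opaque_col:transparent_col, 3] * ramp).astype(out.dtype)
    out[:, transparent_col:, 3] = 0
    return out
```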
[0130] Execution of the select most important salient region step 320 in the example provided in relation to Fig. 6 is implemented by selecting the most important salient region in terms of the most important relationship between determined salient regions. As shown in the photograph 600, the most important relationship, the key relationship 605, consists of the left-facing person 602 and the right-facing person 603. As such, at execution of the select most important salient region step 320, either one of the left-facing person 602 or the right-facing person 603 is selected to be first placed at the place selected salient region into selected area step 340. In this example, the right-facing person 603 is first selected. Consequently, in Fig. 6, on execution of the select most appropriate augmentable area step 330 the augmentable areas of the document 421 are examined for placement of the right-facing person 603. In the example of Fig. 6, the augmentable area 425 is selected for placement of the right-facing person 603.
In execution of the select the next related salient region as the selected salient region step 360, the left-facing person 602 is selected, and in this example, the augmentable area 422 is selected for positioning of the left-facing person 602 by operation of the select most appropriate augmentable area step 330. Methods for implementing the select most appropriate augmentable area step 330 are described above in relation to Fig. 4.
[0131] Fig. 7 shows the photograph 600, a document scene 710 and an augmented document scene 720 to illustrate functionality of an alternate implementation of the method 200. Fig. 7 illustrates use of animations and transitions when there are fewer augmentable areas available than salient regions.
[0132] In the example of Fig. 7, execution of the associate salient regions to augmentable areas step 250 is described in relation to the select most appropriate augmentable area step 330. The printed document 711 has been identified by execution of the feature extractor module 1211, the feature matcher module 1212 and the geometric feature verification module 1213 as described in relation to execution of the identify documents in the scene step 230 of Fig. 2. The augmentable area 713 has been received by the calculate document pose module 1214 from the augmentable areas database 1222 as described in the find augmentable areas step 240. Furthermore, the augmented reality content of the photograph 600 has been received from the augmented content database 1223 and stored in the memory 106 by the calculate salient regions module 1215, under execution of the calculate salient regions of content step 270.
The calculate salient regions of content step 270 executes to determine a person region 701, a person region 702 and a person region 703. The saliency values of each of the salient regions 701, 702 and 703 have been determined and sorted in descending order of most important to least important (by execution of the calculate salient regions module 1215 in the calculate salient regions of content step 270, the recognise relationships between salient regions step 310, and the select most important salient region step 320, respectively).
[0133] Execution of the select most appropriate augmentable area step 330 for the example of Fig. 7 begins by the salient to augmentable area matcher module 1216 executing to place the most salient region, the person region 701, within the augmentable area 713. The salient to augmentable area matcher module 1216 executes to compare a size of the person region 701 to a size of the augmentable area 713 in order to determine if the person region 701 will fit within the augmentable area 713. In the example shown in Fig. 7, appropriateness of the augmentable areas is determined based on the size of the augmentable area compared to the size of the most salient region. The most salient region, the person region 701, meets the size requirements of the augmentable area 713. The salient to augmentable area matcher module 1216 executes to place the most salient region, the person region 701, in the augmentable area 713 as described in relation to the place selected salient region into selected area step 340 of Fig. 3.
[0134] Upon the placement of the most salient region, the person region 701, the salient to augmentable area matcher module 1216 executes to select a next related salient region for placement. In the example shown in Fig. 7, the next related salient region is the person region 702. To place the next salient region, the person region 702, the salient to augmentable area matcher module 1216 requires another augmentable area. In the example shown in Fig. 7 only a single augmentable area 713 is available. In the example shown in Fig. 7, there are fewer augmentable areas than salient regions available. In the example of Fig. 7, the salient to augmentable area matcher module 1216 executes to calculate an animated transition between the most salient region and the subsequent salient regions.
[0135] In the example shown in Fig. 7, the animated transition will start at the most salient region, the person region 701, move to the second most salient region, the person region 702, and finally move to a third most salient region, the person region 703. The salient to augmentable area matcher module 1216 executes to calculate the animated transition by first determining the direction of the related salient regions within the augmented reality content 600. The salient to augmentable area matcher module 1216 begins by selecting the left most edge of the most salient region. Upon selecting the left most edge, the salient to augmentable area matcher module 1216 selects the left most edge of the next salient region within the sorted list of salient region values. The direction between the left edge of the first and second selected salient regions is determined by the salient to augmentable area matcher module 1216 and stored within a direction list in the memory 106. This process is repeated until the direction of all salient region pairs within the saliency value list have been determined and stored within the direction list. Once the salient to augmentable area matcher module 1216 has determined the direction list, the salient to augmentable area matcher module 1216 then iterates through the direction list and transitions between each salient region pair within the augmented content 600 using directional pairing within the direction list until all salient regions have been transitioned. Following the transition of all salient regions, the implementation of the place selected salient regions into selected areas step 340 associated with Fig. 7 within the associate salient regions to augmentable areas step 250 ends.
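The direction-list construction and the resulting animated transition can be sketched as follows, taking the left-edge coordinates of the salient regions in descending saliency order and producing interpolated viewport positions for the projector to step through. The step count and the linear interpolation are assumptions; the patent does not fix a particular animation policy.

```python
import numpy as np

def build_direction_list(left_edges):
    """left_edges: (x, y) of each salient region's left-most edge, ordered from
    most to least salient; returns direction vectors between consecutive regions."""
    return [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(left_edges, left_edges[1:])]

def animated_transition(start, directions, steps=30):
    """Yield viewport origins that pan from one salient region to the next."""
    position = np.array(start, dtype=float)
    for dx, dy in directions:
        for t in np.linspace(0.0, 1.0, steps):
            yield tuple(position + t * np.array([dx, dy]))
        position += (dx, dy)
```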
[0136] The arrangements described are applicable to the computer and data processing industries and particularly for projection and data augmentation industries.
[0137] The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
Claims (19)
- CLAIMS:1. A method of displaying augmented reality content on a document, the method comprising: determining a spatial arrangement and a rank for each of a plurality of salient regions within the augmented reality content, the rank of each of the salient regions being determined according to an importance of each salient region in the augmented reality content; determining a spatial arrangement and a rank for each of a plurality of empty regions within the document, the rank of each of the empty regions being determined according to a distance between each empty region and a display target of the augmented reality content; modifying the augmented reality content to allow alignment of the spatial arrangement of the plurality of salient regions with the spatial arrangement of the plurality of empty regions within the document, the plurality of salient regions being positioned in the document based on matching of the rank of each of the salient regions with the rank of each of the empty regions; and displaying the modified augmented reality content on the document such that at least one of the salient regions is overlaid onto at least one of the empty regions based upon the matching.
- 2. The method according to claim 1, wherein the spatial arrangement of each of the plurality of salient regions is determined based upon a size of the salient region.
- 3. The method according to claim 1, wherein the spatial arrangement of each of the plurality of salient regions is determined based upon at least one of a location of said salient region within the augmented reality content, and proximity of said salient region in relation to at least one other salient region of the augmented reality content.
- 4. The method according to claim 1, wherein the rank of each of the plurality of salient regions is determined based upon a relationship between content of said salient region with content of another of the salient regions.
- 5. The method according to claim 1, wherein determining the rank of each of the plurality of empty regions comprises determining a size of said empty region in relation to a size of a selected salient region.
- 6. The method according to claim 1, further comprising determining suitability of one of said empty regions to display an animation or transform.
- 7. The method according to claim 1, wherein the spatial arrangement of each of the plurality of salient regions is determined based upon a direction between each of the salient regions.
- 8. The method according to claim 1, wherein the matching of the rank of each of the salient regions with the rank of each of the empty regions is based upon a relationship between a selected one of the plurality of salient regions and one of the plurality of salient regions previously matched with one of the empty regions.
- 9. The method according to claim 1, wherein the rank of each of the plurality of empty regions is determined relative to a scene of the document.
- 10. The method according to claim 1, wherein the spatial arrangement of each of the salient regions is determined based upon a key relationship between each of the plurality of salient regions.
- 11. The method according to claim 10, wherein the key relationship relates to a direction in which content of each of the salient regions faces.
- 12. The method according to claim 1, further comprising applying a blend of the augmented reality content.
- 13. The method according to claim 12, where the blend is determined by a relationship between the salient regions.
- 14. The method according to claim 1, wherein the rank of each of the plurality of empty regions is determined based upon a relationship between each of the plurality of salient regions.
- 15. The method according to claim 1, further comprising displaying the modified augmented reality content as one of an animation and a transition.
- 16. The method according to claim 1, wherein the plurality of empty regions are predefined and stored in association with the target.
- 17. An apparatus for displaying augmented reality content on a document, the apparatus comprising: means for determining a spatial arrangement and a rank for each of a plurality of salient regions within the augmented reality content, the rank of each of the salient regions being determined according to an importance of each salient region in the augmented reality content; means for determining a spatial arrangement and a rank for each of a plurality of empty regions within the document, the rank of each of the empty regions being determined according to a distance between each empty region and a display target of the augmented reality content; means for modifying the augmented reality content to allow alignment of the spatial arrangement of the plurality of salient regions with the spatial arrangement of the plurality of empty regions within the document, the plurality of salient regions being positioned in the document based on matching of the rank of each of the salient regions with the rank of each of the empty regions; and means for displaying the modified augmented reality content on the document such that at least one of the salient regions is overlaid onto at least one of the empty regions based upon the matching.
- 18. A computer readable medium having a computer program stored thereon for displaying augmented reality content on a document, the program comprising: code for determining a spatial arrangement and a rank for each of a plurality of salient regions within the augmented reality content, the rank of each of the salient regions being determined according to an importance of each salient region in the augmented reality content; code for determining a spatial arrangement and a rank for each of a plurality of empty regions within the document, the rank of each of the empty regions being determined according to a distance between each empty region and a display target of the augmented reality content; code for modifying the augmented reality content to allow alignment of the spatial arrangement of the plurality of salient regions with the spatial arrangement of the plurality of empty regions within the document, the plurality of salient regions being positioned in the document based on matching of the rank of each of the salient regions with the rank of each of the empty regions; and code for displaying the modified augmented reality content on the document such that at least one of the salient regions is overlaid onto at least one of the empty regions based upon the matching.
- 19. A system for displaying augmented reality content on a document, the system comprising: a memory for storing data and a computer program; a processor coupled to the memory for executing said computer program, said computer program comprising instructions for: determining a spatial arrangement and a rank for each of a plurality of salient regions within the augmented reality content, the rank of each of the salient regions being determined according to an importance of each salient region in the augmented reality content; determining a spatial arrangement and a rank for each of a plurality of empty regions within the document, the rank of each of the empty regions being determined according to a distance between each empty region and a display target of the augmented reality content; modifying the augmented reality content to allow alignment of the spatial arrangement of the plurality of salient regions with the spatial arrangement of the plurality of empty regions within the document, the plurality of salient regions being positioned in the document based on matching of the rank of each of the salient regions with the rank of each of the empty regions; and displaying the modified augmented reality content on the document such that at least one of the salient regions is overlaid onto at least one of the empty regions based upon the matching.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2015201596A AU2015201596A1 (en) | 2015-03-27 | 2015-03-27 | Displaying augmented reality content on a document |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2015201596A AU2015201596A1 (en) | 2015-03-27 | 2015-03-27 | Displaying augmented reality content on a document |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| AU2015201596A1 true AU2015201596A1 (en) | 2016-10-13 |
Family
ID=57068549
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| AU2015201596A Abandoned AU2015201596A1 (en) | 2015-03-27 | 2015-03-27 | Displaying augmented reality content on a document |
Country Status (1)
| Country | Link |
|---|---|
| AU (1) | AU2015201596A1 (en) |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10832086B2 (en) | Target object presentation method and apparatus | |
| US20250022037A1 (en) | Method, system, and non-transitory computer-readable medium for analyzing facial features for augmented reality experiences of physical products in a messaging system | |
| US9721391B2 (en) | Positioning of projected augmented reality content | |
| US10049308B1 (en) | Synthesizing training data | |
| US20240161179A1 (en) | Identification of physical products for augmented reality experiences in a messaging system | |
| US10311284B2 (en) | Creation of representative content based on facial analysis | |
| US9727775B2 (en) | Method and system of curved object recognition using image matching for image processing | |
| US11704357B2 (en) | Shape-based graphics search | |
| EP4128026A1 (en) | Identification of physical products for augmented reality experiences in a messaging system | |
| US20140210857A1 (en) | Realization method and device for two-dimensional code augmented reality | |
| US10554803B2 (en) | Method and apparatus for generating unlocking interface, and electronic device | |
| US20110164815A1 (en) | Method, device and system for content based image categorization field | |
| US11523063B2 (en) | Systems and methods for placing annotations in an augmented reality environment using a center-locked interface | |
| AU2013273829A1 (en) | Time constrained augmented reality | |
| US20140321770A1 (en) | System, method, and computer program product for generating an image thumbnail | |
| WO2014114118A1 (en) | Realization method and device for two-dimensional code augmented reality | |
| EP4276754A1 (en) | Image processing method and apparatus, device, storage medium, and computer program product | |
| CN113273167B (en) | Data processing apparatus, method and storage medium | |
| CN112328088B (en) | Image presentation method and device | |
| US9918057B2 (en) | Projecting text characters onto a textured surface | |
| KR102605451B1 (en) | Electronic device and method for providing multiple services respectively corresponding to multiple external objects included in image | |
| AU2014277851A1 (en) | Detecting a gap between text columns from text line fragments | |
| AU2015201596A1 (en) | Displaying augmented reality content on a document | |
| CN104732188B (en) | Text Extraction and device | |
| TWI899841B (en) | Wearable apparatus and integrating method of virtual and real images |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| MK4 | Application lapsed section 142(2)(d) - no continuation fee paid for the application |