EP3971849A1 - Method for operating machine and system configured to carry out the method - Google Patents
- Publication number
- EP3971849A1 (application EP20197209.8A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- machine
- code
- user
- portable electronic
- electronic device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07F—COIN-FREED OR LIKE APPARATUS
- G07F9/00—Details other than those peculiar to special kinds or types of apparatus
- G07F9/001—Interfacing with vending machines using mobile or wearable devices
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07F—COIN-FREED OR LIKE APPARATUS
- G07F13/00—Coin-freed apparatus for controlling dispensing or fluids, semiliquids or granular material from reservoirs
- G07F13/06—Coin-freed apparatus for controlling dispensing or fluids, semiliquids or granular material from reservoirs with selective dispensing of different fluids or materials or mixtures thereof
- G07F13/065—Coin-freed apparatus for controlling dispensing or fluids, semiliquids or granular material from reservoirs with selective dispensing of different fluids or materials or mixtures thereof for drink preparation
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07F—COIN-FREED OR LIKE APPARATUS
- G07F19/00—Complete banking systems; Coded card-freed arrangements adapted for dispensing or receiving monies or the like and posting such transactions to existing accounts, e.g. automatic teller machines
- G07F19/20—Automatic teller machines [ATMs]
Definitions
- the present invention generally relates to machine-user interaction, and particularly to a method for operating a machine and a system configured to carry out the method.
- Machine-user interfaces are ubiquitous in modern life. They include, for example, physical features such as the push buttons of an elevator and two-dimensional graphical icons displayed on a touchscreen. Despite the great variety of machine-user interfaces, operating many of them involves contacting or pressing the interface, often with a user's finger, to actuate the desired functions of the machine. Such contact is undesirable from a hygiene perspective, since surfaces touched by many users can transmit pathogens.
- In the context of, for example, vending machines, several attempts have been made to address the aforementioned problem. They typically involve providing a vending machine in network connection with a server and having a machine-readable identification code shown on the vending machine. The user may access the server by reading the identification code with his/her smartphone and make the purchase via the server with the smartphone. The server then instructs the machine to provide the product or service to the user. In this manner, contact between the user and the machine is avoided.
- the present invention aims at providing a method for operating a machine in which contact between a user and the machine can be avoided and with which the machine does not have to be connected to the internet.
- Another object is to provide a method and machine that allows intuitive contactless operation of the machine by the user.
- the present invention is directed to a method for operating a machine.
- the machine comprises, inter alia, a scanner or a wireless communication terminal for receiving a code and a machine-user interface comprising a plurality of contact areas.
- the machine is configured to perform a plurality of functions, wherein each contact area is associated with a different one of the plurality of functions.
- the machine is instructed to perform the function associated with a respective contact area when a user of the machine contacts said contact area.
- the machine may be any machine which normally requires a user to contact the machine-user interface to operate the machine.
- the function performed by the machine comprises providing a service and/or a product.
- the service provided by the machine may be, for example, financial transactions.
- the machine is an automated teller machine (ATM).
- the product provided by the machine may be, for example, newspapers, foods, snacks, beverages, cigarettes, birth control products or tickets.
- the machine is a vending machine such as a beverage preparation machine.
- the machine is equipped with a microprocessor to control the respective components thereof in order to perform the desired function.
- the microprocessor is connected to the scanner or to the wireless communication terminal, more preferably by a wired connection.
- the microprocessor is connected to the machine user-interface, more preferably by a wired connection.
- an additional controller may be interposed between the scanner or the wireless communication terminal and the machine (e.g., between the scanner and the microprocessor).
- the controller may be configured to recognize a certain function based on the code (e.g., in case of a beverage preparation machine, which beverage is to be prepared based on information stored in the machine) and/or to recognize from the code a set of steps and/or machine actuation parameters indicating how the function is to be performed (e.g., in case of a beverage preparation machine, how much coffee is to be used, whether milk is to be provided and/or foamed, how much water is to be infused, etc.).
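The controller's decoding of a received code into a function and actuation parameters can be sketched as follows. The JSON payload format and all field names (`function`, `params`, `coffee_g`, and so on) are illustrative assumptions; the patent does not specify how the code encodes this information.

```python
import json

def decode_machine_code(payload: str):
    """Decode a (hypothetical) JSON payload carried by the code into the
    requested function and the machine actuation parameters."""
    data = json.loads(payload)
    function = data["function"]        # e.g. which beverage to prepare
    params = data.get("params", {})    # e.g. coffee dose, milk foaming, water
    return function, params

# Example payload as it might be conveyed by the code:
payload = '{"function": "cappuccino", "params": {"coffee_g": 9, "foam_milk": true, "water_ml": 40}}'
function, params = decode_machine_code(payload)
```

Because the payload itself carries the actuation parameters, the machine can perform the function without any internet connection, consistent with the offline operation described above.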
- the machine may include further components such as a product dispensing unit, a brewing unit, and/or a refrigerator.
- the machine is not connected to the internet. More preferably, the machine is offline except that it may exceptionally allow the wireless communication terminal to receive the code, as further explained below.
- the wireless communication terminal preferably is a Bluetooth or Near Field Communication (NFC) terminal.
- the machine-user interface comprises a touchscreen, wherein several or all of the contact areas are displayed on the touchscreen.
- the touchscreen may be based on, for example, capacitive or resistive technologies.
- the machine-user interface comprises push buttons and/or capacitive sensors, wherein several or all of the contact areas are formed by said push buttons and/or capacitive sensors.
- the method according to the first aspect of the invention comprises i) an image obtention step, ii) an image analysis step, iii) a function selection step, iv) a code generation step, v) a code output step, vi) a code identification step, and vii) a function performing step.
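The seven steps i)–vii) can be sketched end to end as a single data flow. All callables and the `"CMD:"` payload format below are stand-ins invented for illustration, since the patent leaves the concrete components unspecified.

```python
def operate_machine(frame, recognize, select, encode, scan, perform):
    """Sketch of steps i)-vii): the frame stands for the obtained image(s)."""
    functions = recognize(frame)    # ii) image analysis: contact area -> function
    desired = select(functions)     # iii) function selection by the user
    code = encode(desired)          # iv) code generation on the device
    payload = scan(code)            # v)+vi) code output and identification
    return perform(payload)         # vii) the machine performs the function

# Minimal stand-ins for a beverage preparation machine:
frame = {"area_1": "espresso icon", "area_2": "cappuccino icon"}
result = operate_machine(
    frame,
    recognize=lambda f: {k: v.replace(" icon", "") for k, v in f.items()},
    select=lambda funcs: funcs["area_2"],       # user taps contact area 2
    encode=lambda fn: f"CMD:{fn}",
    scan=lambda code: code[len("CMD:"):],
    perform=lambda fn: f"preparing {fn}",
)
```

Note that only steps i)–v) run on the portable electronic device; steps vi) and vii) run on the machine, so no network link between device and machine is required.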
- the user obtains one or more images of the machine-user interface with a portable electronic device, wherein the one or more images each show at least some of the plurality of contact areas of the machine-user interface.
- the one or more images are analyzed to recognize the function associated with at least one (or a plurality) of the plurality of contact areas shown in the one or more images.
- the one or more images are displayed to the user by a device-user interface of the portable electronic device and the user actuates (e.g., contacts) the device-user interface at a position where one of the plurality of contact areas is displayed to select a desired function associated with said contact area.
- the functions recognized in the image analysis step ii) are displayed to the user by a device-user interface of the portable electronic device as a menu and the user selects a desired function therefrom.
- the device-user interface is provided on a touchscreen of the portable electronic device.
- a code which conveys the desired function is generated.
- the code is output by the portable electronic device.
- in the code identification step vi), the code output by the portable electronic device is scanned with the scanner or received with the wireless communication terminal.
- in the function performing step vii), the machine performs the desired function conveyed by the code.
- the one or more images obtained in the image obtention step i) are part of a livestream, e.g. one or more stills from the livestream, wherein the livestream is displayed by the portable electronic device during the image obtention step i).
- the livestream is analyzed by a software algorithm in the image analysis step ii), and preferably the livestream displayed is frozen upon determining that a set of relevant data has been recognized by the algorithm. More preferably, the livestream is frozen upon determining that the functions associated with all contact areas have been recognized and/or that the entire machine-user interface is being shown.
- Freezing the livestream in the context of the present invention refers to displaying a still or essentially still image (e.g., a still frame of the livestream) by the device-user interface (e.g., on a touchscreen of the device). The user can then easily actuate the device-user interface at the desired position, even if the device is moved and/or pointed away from the machine-user interface.
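The freeze-on-recognition behaviour amounts to scanning the livestream until a frame satisfies the completeness criterion and then holding that frame. A minimal sketch, in which frames and the recognizer are simplified to sets of recognized contact areas:

```python
def freeze_when_complete(frames, expected_areas, recognize):
    """Return the first frame in which all expected contact areas were
    recognized; that frame is then displayed as the frozen still."""
    for frame in frames:
        recognized = recognize(frame)
        if expected_areas <= recognized:   # set inclusion: everything found
            return frame
    return None  # stream ended before the interface was fully recognized

# Simulated livestream: each "frame" is the set of areas visible in it.
frames = [{"a"}, {"a", "b"}, {"a", "b", "c"}, {"a", "b", "c"}]
frozen = freeze_when_complete(frames, {"a", "b", "c"}, recognize=lambda f: f)
```

Once frozen, the displayed image no longer changes, so the user can tap the desired contact area even after moving the device away from the machine.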
- the algorithm is configured to process the livestream (or otherwise obtained one or more images), in particular the frozen image of the livestream, so that the machine-user interface or the portion thereof that is displayed by the device-user interface is displayed as viewed from a determined viewing direction. More specifically, the algorithm may process the still or essentially still image (e.g., the still frame) of the livestream displayed to the user for actuating the device-user interface in the function selection step so that the machine-user interface (or the portion thereof) shown in said image is displayed as viewed from such determined viewing direction.
- Such determined viewing direction preferably is a front view (e.g., as the machine-user interface would be seen with a viewing axis having an angle of approximately 90° to a surface plane of said machine-user interface).
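The patent does not specify the rectification math. A full front-view rectification of a perspective image would use a homography computed from four corner correspondences; the sketch below uses the simpler three-point affine transform as an approximation, solved in closed form with Cramer's rule.

```python
def affine_from_corners(src, dst):
    """Return a mapping (x, y) -> (u, v) that sends three source points to
    three destination points. This is an affine approximation; a true
    perspective rectification would use a four-point homography."""
    (x0, y0), (x1, y1), (x2, y2) = src
    (u0, v0), (u1, v1), (u2, v2) = dst
    dx1, dy1 = x1 - x0, y1 - y0
    dx2, dy2 = x2 - x0, y2 - y0
    det = dx1 * dy2 - dx2 * dy1  # zero if the three points are collinear
    a = ((u1 - u0) * dy2 - (u2 - u0) * dy1) / det
    b = ((u2 - u0) * dx1 - (u1 - u0) * dx2) / det
    c = ((v1 - v0) * dy2 - (v2 - v0) * dy1) / det
    d = ((v2 - v0) * dx1 - (v1 - v0) * dx2) / det
    tx = u0 - a * x0 - b * y0
    ty = v0 - c * x0 - d * y0
    return lambda x, y: (a * x + b * y + tx, c * x + d * y + ty)

# Map three detected corners of a skewed interface onto a 100x100 front view:
warp = affine_from_corners([(10, 20), (90, 30), (15, 80)],
                           [(0, 0), (100, 0), (0, 100)])
```

Applying `warp` to every pixel (or, inversely, sampling the source image per output pixel) yields the front view shown to the user.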
- the determined view could also be a perspective view, such as for example an inclined front view.
- the frozen image is automatically processed by the algorithm. This facilitates function selection for the user.
- the algorithm is configured to process the livestream (or otherwise obtained one or more images), in particular the frozen image of the livestream, by recognizing characters, numbers, character strings, logograms, pictograms and/or shapes of the machine-user interface and replacing at least some of these characters, numbers, character strings, logograms, pictograms and/or shapes by characters, numbers, character strings, logograms, pictograms and/or shapes that are sharper and/or have a better image quality.
- the algorithm preferably recognizes whether certain characters, numbers, character strings, logograms, pictograms and/or shapes shown in the livestream, in particular in the frozen image of the livestream (e.g. the frame), do not have a desired sharpness and/or image quality.
- the algorithm may then replace the respective characters, numbers, character strings, logograms, pictograms, and/or shapes by sharp characters, numbers, character strings, logograms, pictograms and/or shapes in the frozen image (e.g., the frame) displayed to the user. In this manner, operation may be facilitated for the user.
- the portable electronic device with which the image obtention step i) is carried out has a camera, and the one or more images (or a livestream comprising the same) are obtained with the camera in the image obtention step i).
- the portable electronic device further comprises a touchscreen on which the device-user interface is provided. The one or more images or the menu displayed in the function selection step iii) and/or the code output in the code output step v) is shown on the touchscreen of the portable electronic device.
- the portable electronic device is a smartphone or tablet.
- the user may actively take a photo of the entirety or a part of the machine-user interface.
- the user may aim the smartphone or tablet at the machine-user interface with the camera turned on; without any further operation of the smartphone or tablet by the user, the image(s) are then obtained automatically (or the livestream is performed automatically) by the smartphone or tablet.
- the image obtention step i) may be carried out by executing a special application installed on the smartphone/tablet or by simply activating the camera.
- the image analysis step ii) comprises recognizing the type of machine based on one or more of a) a letter, a number, a character string, a logogram and/or a pictogram associated with each contact area, b) a shape of the machine and/or a shape of the machine-user interface, c) an arrangement of the contact areas on the machine-user interface, and d) a machine-readable optical code positioned on the machine.
- a plurality of pictograms is associated with the plurality of contact areas, and the image analysis step ii) comprises recognizing the type of machine based on one or more of the pictograms. Such recognition simplifies operation for the user because the user will not have to recognize and/or input the type of machine.
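Recognizing the machine type from pictograms can be as simple as matching the recognized symbols against per-type signatures. The signature dictionary and pictogram names below are hypothetical; a real implementation would derive them from the image analysis step.

```python
# Hypothetical mapping from signature pictogram sets to machine types.
MACHINE_SIGNATURES = {
    frozenset({"espresso", "cappuccino", "latte"}): "beverage preparation machine",
    frozenset({"banknote", "card", "keypad"}): "automated teller machine",
}

def recognize_machine_type(pictograms):
    """Pick the machine type whose signature overlaps most with the
    pictograms recognized in the image (a deliberately simple heuristic)."""
    best, best_overlap = None, 0
    for signature, machine_type in MACHINE_SIGNATURES.items():
        overlap = len(signature & set(pictograms))
        if overlap > best_overlap:
            best, best_overlap = machine_type, overlap
    return best

kind = recognize_machine_type(["espresso", "cappuccino", "steam"])
```

Knowing the machine type lets the software constrain the subsequent per-contact-area function recognition, so the user never has to input the type manually.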
- the contact areas of the machine-user interface are each associated with a letter, a number, a character string, a logogram and/or a pictogram
- the image analysis step ii) comprises recognizing the function associated with the at least one contact area based on a recognition of the letter, the number, the character string, the logogram and/or the pictogram associated therewith.
- the image analysis step ii) comprises recognizing the function associated with each of the contact areas shown in the one or more images.
- the function selection step iii) comprises displaying one or more analyzed images to the user before the user actuates (e.g., contacts) the device-user interface to select the desired function. That is, the function selection step iii) may be carried out after the image analysis step ii).
- the image analysis step ii) comprises recognizing the function associated with the contact area displayed at the position where the user has actuated (e.g., contacted) the device-user interface to select the desired function. That is, the function selection step iii) may also be carried out before the image analysis step ii). In this scenario, the image analysis step ii) may be carried out after the user has actuated (e.g., contacted) the device-user interface.
- the one or more images are analyzed by software provided on the portable electronic device.
- the one or more images may alternatively be analyzed by software provided on a remote server, wherein the portable electronic device is connected to the server via, inter alia, a wireless communication network.
- the one or more images (or a livestream comprising the same, as described above) obtained with the portable electronic device are communicated to the server via the wireless communication network.
- the portable electronic device is connected to the internet.
- the server is connected to the internet as well.
- the wireless communication network may be a cellular network (e.g. in accordance with the 3G, 4G or 5G standard) or a wireless local area network (e.g., based on IEEE 802.11 standards).
- the one or more images or the livestream are analyzed with an artificial intelligence algorithm, which is preferably provided on the server or on the portable electronic device.
- the code generated in the code generation step iv) contains a command for the machine to carry out the desired function.
- the code may be generated by an algorithm provided on the portable electronic device or may be determined by an algorithm provided on a server, such as the server described above, and communicated to the portable electronic device connected to the server via a wireless communication network.
- the wireless communication network may be a cellular network (e.g. in accordance with the 3G, 4G or 5G standard) or a wireless local area network (e.g., based on IEEE 802.11 standards).
- the code is a machine-readable optical code.
- the machine comprises the scanner for receiving the optical code.
- the optical code is displayed by the portable electronic device (e.g. on the device-user interface, such as on the touchscreen of the smartphone or tablet) in the code output step v) and is then scanned with the scanner in the code identification step vi).
- the optical code comprises a one-dimensional or two-dimensional pattern, such as a one-dimensional or two-dimensional barcode. More preferably, the optical code comprises a QR code.
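Generating the optical code amounts to serializing the desired function into a compact payload that the machine can verify and decode; rendering that payload as a barcode or QR code is then left to any standard QR library. The payload format, the field names, and the use of a truncated SHA-256 digest below are all illustrative assumptions.

```python
import hashlib
import json

def build_command_payload(function: str, params: dict) -> str:
    """Serialize the desired function into a payload string with a short
    integrity digest prepended; this string would be rendered as the
    optical code (e.g., a QR code)."""
    body = json.dumps({"function": function, "params": params}, sort_keys=True)
    digest = hashlib.sha256(body.encode()).hexdigest()[:8]
    return f"{digest}:{body}"

def verify_payload(payload: str) -> dict:
    """Machine-side check: reject payloads whose digest does not match."""
    digest, body = payload.split(":", 1)
    if hashlib.sha256(body.encode()).hexdigest()[:8] != digest:
        raise ValueError("corrupted code")
    return json.loads(body)

payload = build_command_payload("flat_white", {"milk_ml": 120})
command = verify_payload(payload)
```

The digest lets the offline machine detect a misread scan without contacting any server, matching the stated goal of operating without an internet connection.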
- the code may be transmitted as an electromagnetic signal from the portable electronic device to the machine.
- the machine comprises the wireless communication terminal for receiving the electromagnetic code.
- the electromagnetic code is wirelessly transmitted by the portable electronic device in the code output step v) and received with the wireless communication terminal in the code identification step vi).
- the electromagnetic code is wirelessly transmitted based on Bluetooth, Near Field Communication (NFC) or Radio Frequency Identification (RFID) technology.
- the present invention is directed to a method for operating a vending machine.
- the vending machine comprises a scanner or a wireless communication terminal for receiving a code, and a plurality of products that can be purchased via the machine, the plurality of products or representations thereof being visible to a user of the machine.
- the product provided by the vending machine may be, for example, newspapers, foods, snacks, beverages, cigarettes, birth control products or tickets.
- the plurality of products may be visible to the user through one or more transparent surfaces of the machine (e.g., a transparent front).
- the vending machine is equipped with a microprocessor to control the respective components thereof in order to perform the desired function.
- the microprocessor is connected to the scanner or to the wireless communication terminal, more preferably by a wired connection.
- the microprocessor is connected to the machine user-interface, more preferably by a wired connection.
- an additional controller may be interposed between the scanner or the wireless communication terminal and the machine (e.g., between the scanner and the microprocessor). The controller may be configured to recognize a certain function based on the code and/or to recognize from the code a set of steps and/or machine actuation parameters indicating how the function is to be performed.
- the vending machine is not connected to the internet.
- the vending machine is offline except that it may exceptionally allow the wireless communication terminal to receive the code, as further explained below.
- the skilled person will appreciate that the methods and systems disclosed herein may also be realized when such connection to the internet is present for the machine.
- the method according to the second aspect of the invention comprises i) an image obtention step, ii) an image analysis step, iii) a product selection step, iv) a code generation step, v) a code output step, vi) a code identification step, and vii) a product release step.
- the user obtains one or more images of the plurality of products or the representations thereof with a portable electronic device, wherein the one or more images each show at least some of the plurality of products or representations thereof.
- in the image analysis step ii), the one or more images are analyzed to recognize the plurality of products or the representations thereof shown in the image in order to identify the products that can be purchased via the machine.
- Representations of products indicate to the user the product corresponding to the respective representation.
- Such representations may include tradenames, logograms, pictograms, shapes, characters, numbers, and/or character strings.
- the one or more images are displayed to the user by a device-user interface of the portable electronic device and the user actuates (e.g., contacts) the device-user interface at a position where one of the plurality of products or representations thereof is displayed to select a desired product.
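Selecting a product by tapping the displayed image reduces to a hit test: the tap coordinates are checked against the product bounding boxes that the image analysis step produced. The box format `(x0, y0, x1, y1)` and the product names are assumptions for illustration.

```python
def product_at(tap, boxes):
    """Return the product whose bounding box in the displayed image
    contains the tap position, or None if the tap hit no product."""
    x, y = tap
    for product, (x0, y0, x1, y1) in boxes.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return product
    return None

# Bounding boxes as the image analysis step might report them (pixels):
boxes = {"snack": (0, 0, 50, 50), "newspaper": (60, 0, 110, 50)}
selected = product_at((75, 25), boxes)
```

If the frozen image has been rectified to a front view beforehand, the boxes can be axis-aligned rectangles as here; otherwise the hit test would use the warped quadrilaterals.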
- the products identified in the image analysis step ii) are displayed to the user by a device-user interface of the portable electronic device as a menu and the user selects a desired product from said menu.
- the device-user interface is provided on a touchscreen.
- a code which conveys the desired product is generated.
- the code is output by the portable electronic device.
- in the code identification step vi), the code output by the portable electronic device is scanned with the scanner or received with the wireless communication terminal.
- in the product release step vii), the machine releases the desired product conveyed by the code.
- the one or more images obtained in the image obtention step i) are part of a livestream, e.g. one or more stills from the livestream, wherein the livestream is displayed by the portable electronic device during the image obtention step i).
- the livestream is analyzed by a software algorithm in the image analysis step ii), and preferably the livestream displayed is frozen upon determining that a set of relevant data has been recognized by the algorithm. More preferably, the livestream is frozen upon determining that all products or representations thereof have been recognized.
- Freezing the livestream in the context of the present invention refers to displaying a still or essentially still image (e.g., a still frame of the livestream) by the device-user interface (e.g., on a touchscreen of the device). The user can then easily actuate the device-user interface at the desired position, even if the device is moved and/or pointed away from the machine-user interface.
- the algorithm is configured to process the livestream (or otherwise obtained one or more images), in particular the frozen image of the livestream, so that the products or the representations thereof that are displayed by the device-user interface are displayed as viewed from a determined viewing direction. More specifically, the algorithm may process the still or essentially still image (e.g., the still frame) of the livestream displayed to the user for actuating the device-user interface in the product selection step so that the products or the representations thereof shown in said image are displayed as viewed from such determined viewing direction.
- Such determined viewing direction preferably is a front view (e.g., as the products or representations thereof would be seen with a viewing axis having an angle of approximately 90° to a front surface plane of said machine).
- the determined view could also be a perspective view, such as for example an inclined front view.
- the frozen image is automatically processed by the algorithm. This facilitates product selection for the user.
- the algorithm is configured to process the livestream (or otherwise obtained one or more images), in particular the frozen image of the livestream, by recognizing characters, numbers, character strings, logograms, pictograms and/or shapes associated with the products or the product representations and replacing at least some of these characters, numbers, character strings, logograms, pictograms and/or shapes by characters, numbers, character strings, logograms, pictograms and/or shapes that are sharper and/or have a better image quality.
- the algorithm preferably recognizes whether certain characters, numbers, character strings, logograms, pictograms and/or shapes shown in the livestream, in particular in the frozen image of the livestream (e.g. the frame), do not have a desired sharpness and/or image quality.
- the algorithm may then replace the respective characters, numbers, character strings, logograms, pictograms, and/or shapes by sharp characters, numbers, character strings, logograms, pictograms and/or shapes in the frozen image (e.g., the frame) displayed to the user. In this manner, operation may be facilitated for the user.
- the portable electronic device with which the image obtention step i) is carried out has a camera, and the one or more images (or a livestream comprising the same) are obtained with the camera in the image obtention step i).
- the portable electronic device further comprises a touchscreen on which the device-user interface is provided. The one or more images displayed in the product selection step iii) and/or the code output in the code output step v) is shown on the touchscreen of the portable electronic device.
- the portable electronic device is a smartphone or tablet.
- the user may actively take a photo of the entirety or a part of the vending machine.
- the user may aim the smartphone or tablet at the vending machine with the camera turned on; without any further operation of the smartphone or tablet by the user, the image(s) are then obtained automatically (or the livestream is performed automatically) by the smartphone or tablet.
- the image obtention step i) may be carried out by executing a special application installed on the smartphone/tablet or by simply activating the camera.
- the image analysis step ii) comprises recognizing the type of vending machine based on one or more of a) a shape of the vending machine and/or a shape of a machine-user interface of the vending machine, b) an arrangement of the plurality of products or representations thereof, and c) a machine-readable optical code positioned on the vending machine.
- the image analysis step ii) comprises recognizing each product or representation thereof shown in the one or more images.
- the product selection step iii) comprises displaying one or more analyzed images to the user before the user actuates (e.g., contacts) the device-user interface to select the desired product. That is, the product selection step iii) may be carried out after the image analysis step ii).
- the image analysis step ii) comprises recognizing the product or representation thereof displayed at the position where the user has actuated (e.g., contacted) the device-user interface to select the desired product. That is, the product selection step iii) may also be carried out before the image analysis step ii). In this scenario, the image analysis step ii) may be carried out after the user has actuated (e.g., contacted) the device-user interface.
- the one or more images are analyzed by software provided on the portable electronic device.
- the one or more images may alternatively be analyzed by software provided on a remote server, wherein the portable electronic device is connected to the server via, inter alia, a wireless communication network.
- the one or more images (or a livestream comprising the same, as described above) obtained with the portable electronic device are communicated to the server via the wireless communication network.
- the portable electronic device is connected to the internet.
- the server is connected to the internet as well.
- the wireless communication network may be a cellular network (e.g. in accordance with the 3G, 4G or 5G standard) or a wireless local area network (e.g., based on IEEE 802.11 standards).
- the one or more images or the livestream are analyzed with an artificial intelligence algorithm, which is preferably provided on the server or on the device.
- the code generated in the code generation step iv) contains a command for the machine to release the desired product.
- the code may be generated by an algorithm provided on the portable electronic device or may be determined by an algorithm provided on a server, such as the server described above, and communicated to the portable electronic device connected to the server via a wireless communication network.
- the wireless communication network may be a cellular network (e.g. in accordance with the 3G, 4G or 5G standard) or a wireless local area network (e.g., based on IEEE 802.11 standards).
- the code is a machine-readable optical code.
- the vending machine comprises the scanner for receiving the optical code.
- the optical code is displayed by the portable electronic device (e.g. on the device-user interface, such as the touchscreen of a smartphone) in the code output step v) and is then scanned with the scanner in the code identification step vi).
- the optical code comprises a one-dimensional or two-dimensional pattern, such as a one-dimensional or two-dimensional barcode. More preferably, the optical code comprises a QR code.
- the code may be transmitted as an electromagnetic signal from the portable electronic device to the machine.
- the machine comprises the wireless communication terminal for receiving the electromagnetic code.
- the electromagnetic code is wirelessly transmitted by the portable electronic device in the code output step v) and received with the wireless communication terminal in the code identification step vi).
- the electromagnetic code is wirelessly transmitted based on Bluetooth, Near Field Communication (NFC) or Radio Frequency Identification (RFID) technology.
- the present invention is directed to a system comprising a portable electronic device and a machine.
- the machine comprises a scanner or a wireless communication terminal for receiving a code and a machine-user interface comprising a plurality of contact areas, wherein the machine is configured to perform a plurality of functions.
- Each contact area is associated with a different one of the plurality of functions, and the machine is instructed to perform the function associated with a respective contact area when a user of the machine contacts said contact area.
- the system is configured to carry out the method according to the first aspect of the invention.
- the system comprises a server in network connection with the portable electronic device.
- the present invention is directed to a system comprising a portable electronic device and a vending machine.
- the vending machine comprises a scanner or a wireless communication terminal for receiving a code and a plurality of products that can be purchased via the machine. The plurality of products or representations thereof is visible to a user of the machine.
- the system is configured to carry out the method according to the second aspect of the invention.
- the system comprises a server in network connection with the portable electronic device.
- the method in accordance with the first aspect of the invention comprises an image obtention step S1, an image analysis step S2, a function selection step S3, a code generation step S4, a code output step S5, a code identification step S6 and a function performing step S7.
- Each step may comprise the features described in the context of the first aspect of the invention. Unnecessary repetition will thus be avoided hereinafter. In the following, the method will be described in additional detail with reference to Figs. 2 to 7 .
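The sequence of steps S1 to S7 can be sketched as a device-side pipeline. The following Python stub is purely illustrative; every function and class name is hypothetical and merely mirrors the division of labor between the components described above:

```python
def run_method(camera, analyzer, user, code_generator, display, machine):
    """Device-side orchestration of steps S1-S7 (illustrative stubs).

    camera, analyzer, user, code_generator, display and machine are
    callables/objects standing in for the real components."""
    image = camera()                          # S1: image obtention
    functions = analyzer(image)               # S2: image analysis
    desired = user(image, functions)          # S3: function selection
    code = code_generator(desired)            # S4: code generation
    display(code)                             # S5: code output
    machine.scan(code)                        # S6: code identification
    return machine.perform()                  # S7: function performing

class FakeMachine:
    """Toy stand-in for the machine 100; it only records the last code."""
    def scan(self, code):
        self.code = code
    def perform(self):
        return f"performing {self.code}"

result = run_method(
    camera=lambda: "image-of-interface",
    analyzer=lambda img: {"button_1": "espresso", "button_6": "flat white"},
    user=lambda img, fns: fns["button_6"],
    code_generator=lambda fn: f"CMD:{fn}",
    display=print,
    machine=FakeMachine(),
)
```

Note that only the smartphone side needs network access; the `FakeMachine` stub, like the real machine, merely scans and executes the code.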
- Fig. 2 is a schematic diagram of a system 10 configured to carry out the respective steps of Fig. 1 .
- the system 10 comprises a machine 100 and a smartphone 140.
- the smartphone 140 has a touchscreen 142 and a camera (not shown) on e.g. the back.
- the smartphone 140 is connected to a server 160 via a network 150.
- the network 150 may be any telecommunication network that enables communication between the smartphone 140 and the server 160, including but not limited to a local area network (LAN), a wireless LAN (WLAN) and/or any mobile communication technology (such as a cellular network).
- the server 160 is configured to receive information from the smartphone 140, process or generate information therein, and/or transmit information to the smartphone 140.
- the machine 100 comprises a scanner 120 for receiving a machine-readable optical code.
- the machine 100 further comprises a machine-user interface 110.
- the machine 100 is configured to perform a plurality of functions when a user operates the machine 100 via the machine-user interface 110.
- the machine 100 may, for example, be a machine that provides a service and/or a product to a consumer.
- Fig. 3 illustrates a specific example - i.e. a coffee machine 200 - of the machine 100 in Fig. 2 .
- the coffee machine 200 is configured to prepare various types of coffee at the user's choice. While not shown, a skilled person shall understand that the coffee machine 200 may comprise a water tank, a bean container, a grinder, a heater, a brewer, a milk supply, etc., connecting means for the respective components and a microprocessor controlling and coordinating the operation of the respective components of the coffee machine 200 in order to prepare the coffee.
- the coffee machine 200 comprises a scanner 220, which is the same as the scanner 120, and a machine-user interface 210, which is shown more clearly in Fig. 4 .
- An additional controller 240 may be interposed between the scanner 220 and the machine (e.g., between the scanner 220 and the microprocessor).
- the controller 240 may be configured to recognize a certain function based on the code (e.g., which beverage is to be prepared based on information stored in the machine 200) and/or to recognize from the code a set of steps and/or machine actuation parameters indicating how the function is to be performed (e.g., how much coffee is to be used, whether milk is to be provided and/or foamed, how much water is to be infused, etc.).
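One way the controller 240 could resolve a scanned code into concrete actuation parameters is sketched below. The recipe table (the "information stored in the machine 200") and the `name;key=value` code syntax are illustrative assumptions:

```python
# Hypothetical recipe table stored in the machine; values are invented.
RECIPES = {
    "espresso":   {"coffee_g": 8,  "water_ml": 30, "milk": False},
    "flat_white": {"coffee_g": 16, "water_ml": 60, "milk": True},
}

def controller_handle(code):
    """Resolve a scanned code into actuation parameters.

    The code may simply name a known function (looked up in RECIPES) or
    additionally carry explicit parameters after a ';', corresponding to
    the two controller behaviors described above."""
    name, _, overrides = code.partition(";")
    steps = dict(RECIPES[name])  # copy so overrides do not mutate the table
    for item in filter(None, overrides.split(",")):
        key, value = item.split("=")
        steps[key] = int(value) if value.isdigit() else value
    return steps

params = controller_handle("flat_white;water_ml=70")
```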
- the machine-user interface 210 comprises a screen 214 on which a plurality of pictograms 216, each representing a specific type of coffee, is displayed.
- the machine-user interface 210 further comprises a plurality of push buttons 212, individually corresponding to the respective pictograms 216. Accordingly, the machine-user interface 210 allows a user to select the desired coffee drink, e.g. flat white, by contacting (pressing) the corresponding push button 212'. A command thereby generated will be sent to the microprocessor of the coffee machine 200 which controls the respective components of the coffee machine 200 accordingly to prepare a flat white for the user.
- the user obtains an image of the machine-user interface 210 with the smartphone 140. This may be done by executing a special application installed on the smartphone 140, which turns on the camera, and aiming the camera at the machine-user interface 210. An image of (or a livestream showing) the machine-user interface 210 is automatically obtained and communicated to the server 160 via the network 150.
- the server 160 analyzes, by an algorithm such as artificial intelligence, the image (or the livestream) to recognize the respective types of coffee associated with the individual push buttons 212 shown in the image (or the livestream). That is, the algorithm recognizes the respective types of coffee represented by the individual pictograms 216 (or by the designations displayed below) and correlates the respective types of coffee to the corresponding push buttons 212. Specifically, the algorithm recognizes that the three push buttons 212 on the left of the machine-user interface 210 are respectively associated with, in an order from the top to the bottom, espresso, americano and cappuccino, while the three push buttons 212 on the right are respectively associated with, in the same order, doppio, cafe latte and flat white. The server 160 then communicates a result of recognition to the smartphone 140 via the network 150.
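The result of such an analysis can be represented as a mapping from recognized button positions to coffee types, against which the user's tap in step S3 is later resolved. All coordinates and the layout below are invented for illustration:

```python
# Illustrative result of the image analysis in step S2: each recognized
# push button 212 with a bounding box (x, y, width, height) in image
# coordinates and the coffee type recognized from the adjacent
# pictogram 216.
analysis = [
    {"box": (20, 10, 40, 20), "drink": "espresso"},
    {"box": (20, 40, 40, 20), "drink": "americano"},
    {"box": (20, 70, 40, 20), "drink": "cappuccino"},
    {"box": (90, 10, 40, 20), "drink": "doppio"},
    {"box": (90, 40, 40, 20), "drink": "cafe latte"},
    {"box": (90, 70, 40, 20), "drink": "flat white"},
]

def drink_at(x, y):
    """Step S3: resolve the user's tap on the displayed image to the
    coffee type associated with the button shown at that position."""
    for entry in analysis:
        bx, by, bw, bh = entry["box"]
        if bx <= x <= bx + bw and by <= y <= by + bh:
            return entry["drink"]
    return None

selected = drink_at(100, 80)  # a tap on the lower-right push button
```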
- the image may further be processed by the server 160 and/or the smartphone 140 in order to facilitate input by the user.
- the image may be processed to show the machine-user interface 210 as seen in a front view and/or some of the pixels may be replaced (e.g., unclear pictograms may be replaced by pictograms having a better sharpness and/or image quality).
- an analyzed image 170 is displayed on the touchscreen 142 of the smartphone 140.
- the user is then allowed to select the desired type of coffee, e.g. flat white, by contacting the touchscreen 142 at a position where the corresponding push button 212' is displayed, thereby indicating that the user's desired type of coffee is flat white.
- the user is allowed to zoom in/out the analyzed image 170 displayed on the touchscreen 142 with a gesture, such as moving two fingers toward or away from each other, to facilitate the selection of the desired type of coffee.
- a machine-readable optical code containing a command that a flat white be made is generated, as per the code generation step S4.
- the code may be determined by the server 160 and then communicated to the smartphone 140 via the network 150.
- a machine-readable optical code 180, which for example is a QR code, is displayed on the touchscreen 142 of the smartphone 140, as per the code output step S5.
- the user may then scan the machine-readable optical code 180 with the scanner 220 of the coffee machine 200, as per the code identification step S6.
- the command that a flat white be made is given to the coffee machine 200, which accordingly prepares the flat white, as per the function performing step S7.
- the method according to the invention provides a user with a virtual reality environment in which the user is capable of operating the machine via the machine-user interface in exactly the same manner as in reality, except that the machine-user interface is displayed on the smartphone.
- the method thereby avoids contact between the user and the machine and at the same time eliminates the need to provide a network connection to the machine.
- a menu 190 listing the respective types of coffee may be displayed to the user.
- the user may then select the desired type of coffee from the menu 190.
- the steps S4 to S7 as described above will follow.
- the machine-user interface 210 may comprise no push buttons if, for example, the screen 214 of the machine-user interface 210 is itself a touchscreen that allows the user to select the desired coffee drink by contacting a region of the screen 214 that substantially overlaps with the corresponding pictogram displayed thereon.
- the user selects the desired type of coffee by contacting the touchscreen 142 of the smartphone 140 at a position where said region is displayed in the function selection step S3.
- although the image analysis step S2 is carried out before the function selection step S3 in the example above, this order may be reversed. That is, after the image obtention step S1, the image of the machine-user interface 210, not yet analyzed, may be displayed on the touchscreen 142 of the smartphone 140, and the user contacts the touchscreen 142 at a position where e.g. the push button 212' is displayed to select the desired type of coffee, the corresponding region of the image then being analyzed by the server 160 to recognize the type of coffee associated with the push button 212'. The result of the recognition is then used as the user's indication of the desired type of coffee, and steps S4 to S7 follow.
- the machine 100 may comprise, instead of or in addition to the scanner 120, a wireless communication terminal 230.
- the code in the code output step S5 is in a form of an electromagnetic signal configured to be received by the wireless communication terminal.
- although the server 160 is described, the method may be carried out with a system that does not include a remote server, i.e., the image analysis step S2 and the code generation step S4 may be carried out on the smartphone 140.
- the wireless communication terminal 230 may be connected to the controller 240.
- Fig. 8 is a flow chart of the respective method steps.
- the method comprises an image obtention step S1', an image analysis step S2', a product selection step S3', a code generation step S4', a code output step S5', a code identification step S6' and a product release step S7'.
- Each step may comprise the features described in the context of the second aspect of the invention.
- the method in accordance with the second aspect is principally similar to that of the first aspect, except that it concerns the image of products sold by a vending machine or the image of representations of the products. Unnecessary repetition will thus be avoided hereinafter.
- the steps of Fig. 8 may also be carried out with the system 10 shown in Fig. 2 .
- the machine 100 is a vending machine via which a plurality of products can be purchased.
- the machine 100 further comprises - in addition to or in place of the machine-user interface 110 - a panel 130.
- the panel 130 is transparent so that the products stored in the machine 100 are visible to a user.
- the user obtains an image of the products seen through the panel 130 with the smartphone 140 in a manner similar to the image obtention step S1.
- the server 160 analyzes the image to recognize the respective products shown in the image in order to identify the products that can be purchased via the vending machine.
- an analyzed image is displayed on the touchscreen 142 of the smartphone 140.
- the user is allowed to select the desired product by contacting the touchscreen 142 at a position where said desired product is displayed.
- a machine-readable optical code containing a command that the desired product be released is generated, as per the code generation step S4'.
- the machine-readable optical code is then displayed on the touchscreen 142 of the smartphone 140, as per the code output step S5'.
- the user may scan the machine-readable optical code with the scanner 120 of the machine 100, as per the code identification step S6'. In this manner, the command that the desired product be released is given to the machine 100, which accordingly releases the desired product, as per the product release step S7'.
- the product can then be taken out of the machine 100 by the user.
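Steps S3' to S7' can be illustrated with a minimal sketch in which the recognized products are keyed by their slot position in the image; the slot grid and the code syntax are assumptions made only for this example:

```python
# Illustrative analysis result for step S2': products recognized behind
# the transparent panel 130, keyed by their (row, column) slot position.
recognized = {
    (0, 0): "sparkling water",
    (0, 1): "cola",
    (1, 0): "chocolate bar",
    (1, 1): "crisps",
}

def release_code(row, col):
    """Steps S3'-S4': turn the slot the user tapped on the touchscreen
    into a code commanding the machine to release that slot's product."""
    if (row, col) not in recognized:
        raise KeyError("no product recognized at this position")
    return f"RELEASE:{row}:{col}"

def machine_release(code):
    """Step S7': the vending machine decodes the scanned code and
    actuates the corresponding slot (stub returning the slot)."""
    _, row, col = code.split(":")
    return int(row), int(col)

code = release_code(1, 0)  # the user taps the chocolate bar
```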
- the method in accordance with this aspect of the invention is not limited by the non-essential technical details exemplarily mentioned above and may include alternatives and/or modifications such as those described above with respect to the first aspect of the invention.
- the products offered by the vending machine may also be invisible to the user. Instead, representations of the products may be visible, and the image concerned in the respective method steps S1' to S3' is an image showing representations of the products.
- the panel 130 may be a display panel.
- although the invention has been described in the context of the detailed embodiments with reference to a beverage preparation machine, the invention is not limited with respect to the type of machine and/or the services or products provided thereby.
- the method may also be used for operating other types of machines, such as other vending machines, elevators, etc.
- the invention may, for example, be defined by the following items:
Description
- The present invention generally relates to machine-user interaction, and particularly to a method for operating a machine and a system configured to carry out the method.
- Machine-user interfaces are ubiquitous in modern life. They include, for example, physical features such as the push buttons of an elevator and two-dimensional graphical icons displayed on a touchscreen. Despite the great variety of machine-user interfaces, operating many of them involves contacting or pressing the interface, often with a user's finger, to thereby actuate certain desired functions of the machine.
- However, when these machines are intended for use by a number of unspecified users, the necessity to make physical contact with the machine in order to operate it poses a public-hygiene problem: pathogens of a contagious disease may be left on the machine-user interface after a carrier has contacted the machine, putting the next user at risk of infection. This issue has gained increasing attention, particularly in view of the SARS-CoV-2 global pandemic.
- In the context of, for example, vending machines, several attempts have been made to address the aforementioned problem. They typically involve providing a vending machine in network connection with a server and having a machine-readable identification code shown on the vending machine. The user may access the server by reading the identification code with his/her smartphone and make the purchase via the server with the smartphone. The server then instructs the machine to provide the product or service to the user. In this manner, contact between the user and the machine is avoided.
- However, these "smart" vending machines require network connection between the server and the machine, which increases complexity for machine installation and also limits the availability of the machine.
- In view of the above, the present invention aims at providing a method for operating a machine in which contact between a user and the machine can be avoided and with which the machine does not have to be connected to the internet.
- Another object is to provide a method and machine that allows intuitive contactless operation of the machine by the user.
- In a first aspect, the present invention is directed to a method for operating a machine. The machine comprises, inter alia, a scanner or a wireless communication terminal for receiving a code and a machine-user interface comprising a plurality of contact areas. The machine is configured to perform a plurality of functions, wherein each contact area is associated with a different one of the plurality of functions. The machine is instructed to perform the function associated with a respective contact area when a user of the machine contacts said contact area.
- The machine may be any machine which normally requires a user to contact the machine-user interface to operate the machine. Preferably, the function performed by the machine comprises providing a service and/or a product. The service provided by the machine may be, for example, a financial transaction. Preferably, the machine is an automated teller machine (ATM). The product provided by the machine may be, for example, newspapers, foods, snacks, beverages, cigarettes, birth control products or tickets. Preferably, the machine is a vending machine such as a beverage preparation machine.
- Preferably, the machine is equipped with a microprocessor to control the respective components thereof in order to perform the desired function. Preferably, the microprocessor is connected to the scanner or to the wireless communication terminal, more preferably by a wired connection. Preferably, the microprocessor is connected to the machine-user interface, more preferably by a wired connection. Moreover, an additional controller may be interposed between the scanner or the wireless communication terminal and the machine (e.g., between the scanner and the microprocessor). The controller may be configured to recognize a certain function based on the code (e.g., in case of a beverage preparation machine, which beverage is to be prepared based on information stored in the machine) and/or to recognize from the code a set of steps and/or machine actuation parameters indicating how the function is to be performed (e.g., in case of a beverage preparation machine, how much coffee is to be used, whether milk is to be provided and/or foamed, how much water is to be infused, etc.).
- Depending on the type of machine, the machine may include further components such as a product dispensing unit, a brewing unit, and/or a refrigerator.
- Preferably, the machine is not connected to the internet. More preferably, the machine is offline except that it may exceptionally allow the wireless communication terminal to receive the code, as further explained below. The wireless communication terminal preferably is a Bluetooth or Near Field Communication (NFC) terminal. However, the skilled person will appreciate that the methods and systems disclosed herein may also be realized when such connection to the internet is present for the machine.
- Preferably, the machine-user interface comprises a touchscreen, wherein several or all of the contact areas are displayed on the touchscreen. The touchscreen may be based on, for example, capacitive or resistive technologies. Alternatively, or additionally, the machine-user interface comprises push buttons and/or capacitive sensors, wherein several or all of the contact areas are formed by said push buttons and/or capacitive sensors.
- The method according to the first aspect of the invention comprises i) an image obtention step, ii) an image analysis step, iii) a function selection step, iv) a code generation step, v) a code output step, vi) a code identification step, and vii) a function performing step.
- In the image obtention step i), the user obtains one or more images of the machine-user interface with a portable electronic device, wherein the one or more images each show at least some of the plurality of contact areas of the machine-user interface. In the image analysis step ii), the one or more images are analyzed to recognize the function associated with at least one (or a plurality) of the plurality of contact areas shown in the one or more images.
- In the function selection step iii), the one or more images are displayed to the user by a device-user interface of the portable electronic device and the user actuates (e.g., contacts) the device-user interface at a position where one of the plurality of contact areas is displayed to select a desired function associated with said contact area. Alternatively, in the function selection step iii), the functions recognized in the image analysis step ii) are displayed to the user by a device-user interface of the portable electronic device as a menu and the user selects a desired function therefrom. Preferably, the device-user interface is provided on a touchscreen of the portable electronic device.
- In the code generation step iv), a code which conveys the desired function is generated. In the code output step v), the code is output by the portable electronic device. In the code identification step vi), the code output by the portable electronic device is scanned with the scanner or received with the wireless communication terminal. In the function performing step vii), the machine performs the desired function conveyed by the code.
- Preferably, the one or more images obtained in the image obtention step i) are part of a livestream, e.g. one or more stills from the livestream, wherein the livestream is displayed by the portable electronic device during the image obtention step i). The livestream is analyzed by a software algorithm in the image analysis step ii), and preferably the livestream displayed is frozen upon determining that a set of relevant data has been recognized by the algorithm. More preferably, the livestream is frozen upon determining that the functions associated with all contact areas have been recognized and/or that the entire machine-user interface is being shown. Freezing the livestream in the context of the present invention refers to displaying a still or essentially still image (e.g., a still frame of the livestream) by the device-user interface (e.g., on a touchscreen of the device). The user can then easily actuate the device-user interface at the desired position, even if the device is moved and/or pointed away from the machine-user interface.
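The freeze logic can be sketched as iterating over livestream frames until the analysis reports that the complete set of relevant data has been recognized; the `complete` flag and the toy analyzer below are illustrative assumptions:

```python
def freeze_frame(frames, analyze):
    """Iterate over livestream frames and return the first frame for
    which the analysis reports a complete recognition (e.g., all
    contact areas and their functions), freezing the display there."""
    for frame in frames:
        result = analyze(frame)
        if result["complete"]:
            return frame, result
    return None, None

# Toy livestream: only the last frame shows the whole interface.
frames = ["blurry", "partial", "full-view"]
frame, result = freeze_frame(
    frames,
    analyze=lambda f: {"complete": f == "full-view", "frame": f},
)
```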
- Preferably, the algorithm is configured to process the livestream (or otherwise obtained one or more images), in particular the frozen image of the livestream, so that the machine-user interface or the portion thereof that is displayed by the device-user interface is displayed as viewed from a determined viewing direction. More specifically, the algorithm may process the still or essentially still image (e.g., the still frame) of the livestream displayed to the user for actuating the device-user interface in the function selection step so that the machine-user interface (or the portion thereof) shown in said image is displayed as viewed from such determined viewing direction. Such determined viewing direction preferably is a front view (e.g., as the machine-user interface would be seen with a viewing axis having an angle of approximately 90° to a surface plane of said machine-user interface). This may involve, for example, rotating or straightening the image and/or adjusting the vertical or horizontal perspective of the image. However, the determined view could also be a perspective view, such as for example an inclined front view. Preferably, the frozen image is automatically processed by the algorithm. This facilitates function selection for the user.
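The text does not prescribe how the re-projection is computed; a standard choice would be a planar homography estimated from the outline of the machine-user interface. The following minimal sketch only shows how a point is mapped through such a 3x3 transform; the shear matrix is a toy stand-in for a real rectifying transform:

```python
def apply_homography(h, point):
    """Map an image point through a 3x3 homography h (row-major nested
    lists), as could be used to re-project the machine-user interface to
    a front view. Returns the dehomogenized (x, y)."""
    x, y = point
    xs = h[0][0] * x + h[0][1] * y + h[0][2]
    ys = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return xs / w, ys / w

# A pure horizontal shear as a toy stand-in for the real rectifying
# transform (which would be estimated from the interface's outline):
shear = [[1.0, 0.5, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0]]
corner = apply_homography(shear, (10.0, 20.0))
```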
- Preferably, the algorithm is configured to process the livestream (or otherwise obtained one or more images), in particular the frozen image of the livestream, by recognizing characters, numbers, character strings, logograms, pictograms and/or shapes of the machine-user interface and replacing at least some of these characters, numbers, character strings, logograms, pictograms and/or shapes by characters, numbers, character strings, logograms, pictograms and/or shapes that are sharper and/or have a better image quality. For example, the algorithm preferably recognizes whether certain characters, numbers, character strings, logograms, pictograms and/or shapes shown in the livestream, in particular in the frozen image of the livestream (e.g. the frame), do not have a desired sharpness and/or image quality. The algorithm may then replace the respective characters, numbers, character strings, logograms, pictograms, and/or shapes by sharp characters, numbers, character strings, logograms, pictograms and/or shapes in the frozen image (e.g., the frame) displayed to the user. In this manner, operation may be facilitated for the user.
- Preferably, the portable electronic device, with which the image obtention step i) is carried out, has a camera, and the one or more images (or a livestream comprising the same) are obtained with the camera in the image obtention step i). Preferably, the portable electronic device further comprises a touchscreen on which the device-user interface is provided. The one or more images or the menu displayed in the function selection step iii) and/or the code output in the code output step v) is shown on the touchscreen of the portable electronic device. Preferably, the portable electronic device is a smartphone or tablet.
- With the smartphone or tablet, the user may actively take a photo of the entirety or a part of the machine-user interface. Alternatively, the user may aim the smartphone or tablet at the machine-user interface with the camera turned on and without performing further operations to the smartphone or tablet, the image(s) being automatically obtained (or the livestream being automatically performed) by the smartphone or tablet. The image obtention step i) may be carried out by executing a special application installed on the smartphone/tablet or by simply activating the camera.
- Preferably, the image analysis step ii) comprises recognizing the type of machine based on one or more of a) a letter, a number, a character string, a logogram and/or a pictogram associated with each contact area, b) a shape of the machine and/or a shape of the machine-user interface, c) an arrangement of the contact areas on the machine-user interface, and d) a machine-readable optical code positioned on the machine. Preferably, a plurality of pictograms is associated with the plurality of contact areas, and the image analysis step ii) comprises recognizing the type of machine based on one or more of the pictograms. Such recognition simplifies operation for the user because the user will not have to recognize and/or input the type of machine.
- Preferably, the contact areas of the machine-user interface are each associated with a letter, a number, a character string, a logogram and/or a pictogram, and the image analysis step ii) comprises recognizing the function associated with the at least one contact area based on a recognition of the letter, the number, the character string, the logogram and/or the pictogram associated therewith.
- Preferably, the image analysis step ii) comprises recognizing the function associated with each of the contact areas shown in the one or more images. Preferably, the function selection step iii) comprises displaying one or more analyzed images to the user before the user actuates (e.g., contacts) the device-user interface to select the desired function. That is, the function selection step iii) may be carried out after the image analysis step ii).
- Alternatively, the image analysis step ii) comprises recognizing the function associated with the contact area displayed at the position where the user has actuated (e.g., contacted) the device-user interface to select the desired function. That is, the function selection step iii) may also be carried out before the image analysis step ii). In this scenario, the image analysis step ii) may be carried out after the user has actuated (e.g., contacted) the device-user interface.
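In this reversed order, only the image region around the user's tap needs to be analyzed. A minimal sketch, with an invented crop size and a toy region analyzer standing in for the real recognition:

```python
def recognize_tapped_area(image, tap, analyze_region, region_size=60):
    """Reversed order (selection before analysis): crop a region around
    the user's tap and analyze only that crop to identify the function
    of the contact area shown there. All names here are illustrative."""
    x, y = tap
    half = region_size // 2
    crop = (x - half, y - half, x + half, y + half)  # (left, top, right, bottom)
    return analyze_region(image, crop)

# Toy analyzer: pretend any crop starting on the right half of a
# 200-px-wide image shows the 'flat white' button.
fn = recognize_tapped_area(
    "raw-image", (150, 80),
    analyze_region=lambda img, box: "flat white" if box[0] > 100 else "espresso",
)
```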
- Preferably, in the image analysis step ii), the one or more images are analyzed by a software provided on the portable electronic device. Alternatively, or additionally, the one or more images may be analyzed by a software provided on a remote server, wherein the portable electronic device is connected to the server via, inter alia, a wireless communication network. In this scenario, the one or more images (or a livestream comprising the same, as described above) obtained with the portable electronic device are communicated to the server via the wireless communication network. Preferably, the portable electronic device is connected to the internet. Preferably, the server is connected to the internet as well. The wireless communication network may be a cellular network (e.g. in accordance with the 3G, 4G or 5G standard) or a wireless local area network (e.g., based on IEEE 802.11 standards).
- Preferably, the one or more images or the livestream are analyzed with an artificial intelligence algorithm, which is preferably provided on the server or on the portable electronic device.
- Preferably, the code generated in the code generation step iv) contains a command for the machine to carry out the desired function. The code may be generated by an algorithm provided on the portable electronic device or may be determined by an algorithm provided on a server, such as the server described above, and communicated to the portable electronic device connected to the server via a wireless communication network. Also, in this case, the wireless communication network may be a cellular network (e.g. in accordance with the 3G, 4G or 5G standard) or a wireless local area network (e.g., based on IEEE 802.11 standards).
- Preferably, the code is a machine-readable optical code. In this scenario, the machine comprises the scanner for receiving the optical code, and the optical code is displayed by the portable electronic device (e.g. on the device-user interface, such as on the touchscreen of the smartphone or tablet) in the code output step v) and is then scanned with the scanner in the code identification step vi). Preferably, the optical code comprises a one-dimensional or two-dimensional pattern, such as a one-dimensional or two-dimensional barcode. More preferably, the optical code comprises a QR code.
- Alternatively, the code may be transmitted as an electromagnetic signal from the portable electronic device to the machine. In this scenario, the machine comprises the wireless communication terminal for receiving the electromagnetic code. The electromagnetic code is wirelessly transmitted by the portable electronic device in the code output step v) and received with the wireless communication terminal in the code identification step vi). Preferably, the electromagnetic code is wirelessly transmitted based on Bluetooth, Near Field Communication (NFC) or Radio Frequency Identification (RFID) technology.
- In a second aspect, the present invention is directed to a method for operating a vending machine. The vending machine comprises a scanner or a wireless communication terminal for receiving a code, and a plurality of products that can be purchased via the machine, the plurality of products or representations thereof being visible to a user of the machine. The product provided by the vending machine may be, for example, newspapers, foods, snacks, beverages, cigarettes, birth control products or tickets. The plurality of products may be visible to the user through one or more transparent surfaces of the machine (e.g., a transparent front).
- Preferably, the vending machine is equipped with a microprocessor to control the respective components thereof in order to perform the desired function. Preferably, the microprocessor is connected to the scanner or to the wireless communication terminal, more preferably by a wired connection. Preferably, the microprocessor is connected to the machine user-interface, more preferably by a wired connection. Moreover, an additional controller may be interposed between the scanner or the wireless communication terminal and the machine (e.g., between the scanner and the microprocessor). The controller may be configured to recognize a certain function based on the code and/or to recognize from the code a set of steps and/or machine actuation parameters indicating how the function is to be performed.
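The controller's role — recognizing a function from the code and deriving the actuation parameters, possibly from information stored in the machine — could be sketched as follows. The payload format, function identifiers and parameter names are invented for illustration:

```python
import json

# Hypothetical table mapping function identifiers to stored actuation
# parameters; per the description, such defaults may reside in the machine.
FUNCTION_TABLE = {
    "flat_white": {"grind_g": 18, "water_ml": 30, "milk": "steamed"},
    "espresso":   {"grind_g": 18, "water_ml": 30, "milk": None},
}

def interpret_code(raw: str) -> tuple:
    """Recognize the requested function from the code and derive the
    actuation parameters, falling back to the stored defaults."""
    payload = json.loads(raw)
    function = payload["function"]
    if function not in FUNCTION_TABLE:
        raise ValueError(f"unknown function: {function}")
    # Parameters embedded in the code override the stored defaults
    params = {**FUNCTION_TABLE[function], **payload.get("params", {})}
    return function, params

fn, params = interpret_code('{"function":"flat_white","params":{"water_ml":40}}')
print(fn, params)
```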
- Preferably, the vending machine is not connected to the internet. Preferably, the vending machine is offline except that it may exceptionally allow the wireless communication terminal to receive the code, as further explained below. However, the skilled person will appreciate that the methods and systems disclosed herein may also be realized when the machine does have such a connection to the internet.
- The method according to the second aspect of the invention comprises i) an image obtention step, ii) an image analysis step, iii) a product selection step, iv) a code generation step, v) a code output step, vi) a code identification step, and vii) a product release step.
- In the image obtention step i), the user obtains one or more images of the plurality of products or the representations thereof with a portable electronic device, wherein the one or more images each show at least some of the plurality of products or representations thereof. In the image analysis step ii), the one or more images are analyzed to recognize the plurality of products or the representations thereof shown in the image in order to identify the products that can be purchased via the machine.
- Representations of products indicate to the user the product corresponding to the respective representation. Such representations may include tradenames, logograms, pictograms, shapes, characters, numbers, and/or character strings.
- In the product selection step iii), the one or more images are displayed to the user by a device-user interface of the portable electronic device and the user actuates (e.g., contacts) the device-user interface at a position where one of the plurality of products or representations thereof is displayed to select a desired product. Alternatively, in the product selection step iii), the products identified in the image analysis step ii) are displayed to the user by a device-user interface of the portable electronic device as a menu and the user selects a desired product from said menu. Preferably, the device-user interface is provided on a touchscreen.
- In the code generation step iv), a code which conveys the desired product is generated. In the code output step v), the code is output by the portable electronic device. In the code identification step vi), the code output by the portable electronic device is scanned with the scanner or received with the wireless communication terminal. In the product release step vii), the machine releases the desired product conveyed by the code.
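Steps ii) to vii) above can be summarized as a pipeline. The sketch below replaces the actual image analysis and wireless transport with stubs; every function name and data structure in it is hypothetical:

```python
def analyze_image(image: dict) -> list:
    """Stub for the image analysis step ii): returns the product names
    recognized in the image (a real system would use a vision model)."""
    return image["visible_products"]

def select_product(products: list, choice_index: int) -> str:
    """Product selection step iii): the user picks one recognized product."""
    return products[choice_index]

def generate_code(product: str) -> str:
    """Code generation step iv): a string the machine can act on."""
    return f"RELEASE:{product}"

def machine_release(code: str, stock: dict) -> str:
    """Code identification step vi) and product release step vii),
    on the machine side."""
    assert code.startswith("RELEASE:")
    product = code.split(":", 1)[1]
    stock[product] -= 1
    return product

stock = {"newspaper": 3, "snack bar": 5}
image = {"visible_products": ["newspaper", "snack bar"]}
products = analyze_image(image)            # step ii)
chosen = select_product(products, 1)       # step iii): user taps "snack bar"
code = generate_code(chosen)               # step iv)
released = machine_release(code, stock)    # steps vi)-vii)
print(released, stock)
```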
- Preferably, the one or more images obtained in the image obtention step i) are part of a livestream, e.g. one or more stills from the livestream, wherein the livestream is displayed by the portable electronic device during the image obtention step i). The livestream is analyzed by a software algorithm in the image analysis step ii), and preferably the livestream displayed is frozen upon determining that a set of relevant data has been recognized by the algorithm. More preferably, the livestream is frozen upon determining that all products or representations thereof have been recognized. Freezing the livestream in the context of the present invention refers to displaying a still or essentially still image (e.g., a still frame of the livestream) by the device-user interface (e.g., on a touchscreen of the device). The user can then easily actuate the device-user interface at the desired position, even if the device is moved and/or pointed away from the machine-user interface.
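The freeze-on-recognition behaviour described above might be implemented as a loop over livestream frames that stops as soon as the recognizer reports a complete set of products. The frame representation and recognizer interface below are assumptions for illustration:

```python
def freeze_when_complete(frames, recognize, expected_count):
    """Return the first frame in which the recognizer has found all
    expected products (the frame to 'freeze' on), or None if the
    stream ends before recognition completes."""
    for frame in frames:
        found = recognize(frame)
        if len(found) >= expected_count:
            return frame  # freeze: keep displaying this still image
    return None

# Toy frames: recognition improves as the camera steadies on the machine
frames = [{"id": 1, "found": {"a"}},
          {"id": 2, "found": {"a", "b"}},
          {"id": 3, "found": {"a", "b", "c"}},
          {"id": 4, "found": {"a", "b", "c"}}]
frozen = freeze_when_complete(frames, lambda f: f["found"], expected_count=3)
print(frozen["id"])
```

Because the frozen still keeps being displayed, the user can actuate the device-user interface even after pointing the device away from the machine, exactly as the description requires.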
- Preferably, the algorithm is configured to process the livestream (or otherwise obtained one or more images), in particular the frozen image of the livestream, so that the products or the representations thereof that are displayed by the device-user interface are displayed as viewed from a determined viewing direction. More specifically, the algorithm may process the still or essentially still image (e.g., the still frame) of the livestream displayed to the user for actuating the device-user interface in the function selection step so that the products or the representations thereof shown in said image are displayed as viewed from such determined viewing direction. Such determined viewing direction preferably is a front view (e.g., as the products or representations thereof would be seen with a viewing axis having an angle of approximately 90° to a front surface plane of said machine). This may involve, for example, rotating or straightening the image and/or adjusting the vertical or horizontal perspective of the image. However, the determined view could also be a perspective view, such as for example an inclined front view. Preferably, the frozen image is automatically processed by the algorithm. This facilitates product selection for the user.
- Preferably, the algorithm is configured to process the livestream (or otherwise obtained one or more images), in particular the frozen image of the livestream, by recognizing characters, numbers, character strings, logograms, pictograms and/or shapes associated with the products or the product representations and replacing at least some of these characters, numbers, character strings, logograms, pictograms and/or shapes by characters, numbers, character strings, logograms, pictograms and/or shapes that are sharper and/or have a better image quality. For example, the algorithm preferably recognizes whether certain characters, numbers, character strings, logograms, pictograms and/or shapes shown in the livestream, in particular in the frozen image of the livestream (e.g. the frame), do not have a desired sharpness and/or image quality. The algorithm may then replace the respective characters, numbers, character strings, logograms, pictograms, and/or shapes by sharp characters, numbers, character strings, logograms, pictograms and/or shapes in the frozen image (e.g., the frame) displayed to the user. In this manner, operation may be facilitated for the user.
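Deciding whether a recognized pictogram or character is sharp enough to keep, or should be replaced by a stored clean version, could rest on a simple sharpness score such as the variance of a discrete Laplacian over the patch. The metric and the threshold below are illustrative assumptions, not the method claimed:

```python
def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian over a grayscale patch
    (list of rows of pixel values): low values indicate a blurry region."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y-1][x] + gray[y+1][x] + gray[y][x-1]
                   + gray[y][x+1] - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def needs_replacement(patch, threshold=50.0):
    """True if the patch is too blurry and should be swapped for a
    stored sharp pictogram (threshold chosen arbitrarily here)."""
    return laplacian_variance(patch) < threshold

sharp = [[0, 255, 0, 255],        # checkerboard: strong local contrast
         [255, 0, 255, 0],
         [0, 255, 0, 255],
         [255, 0, 255, 0]]
blurry = [[100, 100, 100, 100],   # nearly flat patch
          [100, 110, 110, 100],
          [100, 110, 110, 100],
          [100, 100, 100, 100]]
print(needs_replacement(sharp), needs_replacement(blurry))
```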
- Preferably, the portable electronic device, with which the image obtention step i) is carried out, has a camera, and the one or more images (or a livestream comprising the same) is obtained with the camera in the image obtention step i). Preferably, the portable electronic device further comprises a touchscreen on which the device-user interface is provided. The one or more images displayed in the product selection step iii) and/or the code output in the code output step v) is shown on the touchscreen of the portable electronic device. Preferably, the portable electronic device is a smartphone or tablet.
- With the smartphone or tablet, the user may actively take a photo of the entirety or a part of the vending machine. Alternatively, the user may aim the smartphone or tablet at the vending machine with the camera turned on and without performing further operations on the smartphone or tablet, the image(s) being automatically obtained (or the livestream being automatically captured) by the smartphone or tablet. The image obtention step i) may be carried out by executing a special application installed on the smartphone/tablet or by simply activating the camera.
- Preferably, the image analysis step ii) comprises recognizing the type of vending machine based on one or more of a) a shape of the vending machine and/or a shape of a machine-user interface of the vending machine, b) an arrangement of the plurality of products or representations thereof, and c) a machine-readable optical code positioned on the vending machine.
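Recognizing the machine type from the coarse cues listed in a) to c) could be as simple as a prioritized rule lookup against a registry of known machines. The registry entries and cue names below are invented for illustration:

```python
# Hypothetical registry of known vending machine types, keyed by cues the
# image analysis might extract. None marks a cue that is not applicable.
MACHINE_REGISTRY = [
    {"type": "snack-8x6",    "optical_code": "VM-1234",
     "shape": "tall-cabinet", "grid": (8, 6)},
    {"type": "ticket-kiosk", "optical_code": None,
     "shape": "pillar",       "grid": None},
]

def recognize_machine_type(optical_code=None, shape=None, grid=None):
    """Match detected cues against the registry. An optical code on the
    machine (cue c) is the most reliable, so it is checked first; shape
    and product arrangement (cues a and b) serve as a fallback."""
    for entry in MACHINE_REGISTRY:
        if optical_code and entry["optical_code"] == optical_code:
            return entry["type"]
    for entry in MACHINE_REGISTRY:
        if shape == entry["shape"] and (grid is None or grid == entry["grid"]):
            return entry["type"]
    return None

print(recognize_machine_type(optical_code="VM-1234"))
print(recognize_machine_type(shape="pillar"))
```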
- Preferably, the image analysis step ii) comprises recognizing each product or representation thereof shown in the one or more images. Preferably, the product selection step iii) comprises displaying one or more analyzed images to the user before the user actuates (e.g., contacts) the device-user interface to select the desired product. That is, the product selection step iii) may be carried out after the image analysis step ii).
- Alternatively, the image analysis step ii) comprises recognizing the product or representation thereof displayed at the position where the user has actuated (e.g., contacted) the device-user interface to select the desired product. That is, the product selection step iii) may also be carried out before the image analysis step ii). In this scenario, the image analysis step ii) may be carried out after the user has actuated (e.g., contacted) the device-user interface.
- Preferably, in the image analysis step ii), the one or more images are analyzed by a software provided on the portable electronic device. Alternatively, or additionally, the one or more images may be analyzed by a software provided on a remote server, wherein the portable electronic device is connected to the server via, inter alia, a wireless communication network. In this scenario, the one or more images (or a livestream comprising the same, as described above) obtained with the portable electronic device are communicated to the server via the wireless communication network. Preferably, the portable electronic device is connected to the internet. Preferably, the server is connected to the internet as well. The wireless communication network may be a cellular network (e.g. in accordance with the 3G, 4G or 5G standard) or a wireless local area network (e.g., based on IEEE 802.11 standards).
- Preferably, the one or more images or the livestream are analyzed with an artificial intelligence algorithm, which is preferably provided on the server or on the device.
- Preferably, the code generated in the code generation step iv) contains a command for the machine to release the desired product. The code may be generated by an algorithm provided on the portable electronic device or may be determined by an algorithm provided on a server, such as the server described above, and communicated to the portable electronic device connected to the server via a wireless communication network. Also, in this case, the wireless communication network may be a cellular network (e.g. in accordance with the 3G, 4G or 5G standard) or a wireless local area network (e.g., based on IEEE 802.11 standards).
- Preferably, the code is a machine-readable optical code. In this scenario, the vending machine comprises the scanner for receiving the optical code, and the optical code is displayed by the portable electronic device (e.g. on the device-user interface, such as the touchscreen of a smartphone) in the code output step v) and is then scanned with the scanner in the code identification step vi). Preferably, the optical code comprises a one-dimensional or two-dimensional pattern, such as a one-dimensional or two-dimensional barcode. More preferably, the optical code comprises a QR code.
- Alternatively, the code may be transmitted as an electromagnetic signal from the portable electronic device to the machine. In this scenario, the machine comprises the wireless communication terminal for receiving the electromagnetic code. The electromagnetic code is wirelessly transmitted by the portable electronic device in the code output step v) and received with the wireless communication terminal in the code identification step vi). Preferably, the electromagnetic code is wirelessly transmitted based on Bluetooth, Near Field Communication (NFC) or Radio Frequency Identification (RFID) technology.
- In a third aspect, the present invention is directed to a system comprising a portable electronic device and a machine. The machine comprises a scanner or a wireless communication terminal for receiving a code and a machine-user interface comprising a plurality of contact areas, wherein the machine is configured to perform a plurality of functions. Each contact area is associated with a different one of the plurality of functions, and the machine is instructed to perform the function associated with a respective contact area when a user of the machine contacts said contact area. The system is configured to carry out the method according to the first aspect of the invention. Preferably, the system comprises a server in network connection with the portable electronic device.
- In a fourth aspect, the present invention is directed to a system comprising a portable electronic device and a vending machine. The vending machine comprises a scanner or a wireless communication terminal for receiving a code and a plurality of products that can be purchased via the machine. The plurality of products or representations thereof is visible to a user of the machine. The system is configured to carry out the method according to the second aspect of the invention. Preferably, the system comprises a server in network connection with the portable electronic device.
- The general aspects of the invention being outlined above, specific, non-limiting embodiments of the invention will be described in detail with reference to the following schematic figures, in which
- Fig. 1 is a flow chart summarizing a method in accordance with the first aspect of the invention;
- Fig. 2 shows a schematic diagram of a system with which the method of Fig. 1 may be carried out;
- Fig. 3 shows a coffee machine as an example of the machine in the system of Fig. 2;
- Fig. 4 shows an enlarged view of a machine-user interface of the coffee machine of Fig. 3;
- Fig. 5 shows the function selection step in which a user selects a function of the coffee machine of Fig. 3 using a smartphone;
- Fig. 6 shows the code output step in which a machine-readable optical code is displayed on the smartphone;
- Fig. 7 shows an alternative embodiment wherein, instead of the image of the machine-user interface, a menu is displayed on the smartphone; and
- Fig. 8 is a flow chart summarizing a method in accordance with the second aspect of the invention.
- Referring to Fig. 1, the method in accordance with the first aspect of the invention comprises an image obtention step S1, an image analysis step S2, a function selection step S3, a code generation step S4, a code output step S5, a code identification step S6 and a function performing step S7. Each step may comprise the features described in the context of the first aspect of the invention. Unnecessary repetition will thus be avoided hereinafter. In the following, the method will be described in additional detail with reference to Figs. 2 to 7. -
Fig. 2 is a schematic diagram of a system 10 configured to carry out the respective steps of Fig. 1. The system 10 comprises a machine 100 and a smartphone 140. The smartphone 140 has a touchscreen 142 and a camera (not shown) on e.g. the back. The smartphone 140 is connected to a server 160 via a network 150. The network 150 may be any telecommunication network that enables communication between the smartphone 140 and the server 160, including but not limited to a local area network (LAN), a wireless LAN (WLAN) and/or any of the mobile communication technologies (such as cellular networks). The server 160 is configured to receive information from the smartphone 140, process or generate information therein, and/or transmit information to the smartphone 140. The machine 100 comprises a scanner 120 for receiving a machine-readable optical code. The machine 100 further comprises a machine-user interface 110. The machine 100 is configured to perform a plurality of functions when a user operates the machine 100 via the machine-user interface 110. The machine 100 may, for example, be a machine that provides a service and/or a product to a consumer. -
Fig. 3 illustrates a specific example - i.e. a coffee machine 200 - of the machine 100 in Fig. 2. The coffee machine 200 is configured to prepare various types of coffee at the user's choice. While not shown, a skilled person will understand that the coffee machine 200 may comprise a water tank, a bean container, a grinder, a heater, a brewer, a milk supply, etc., connecting means for the respective components and a microprocessor controlling and coordinating the operation of the respective components of the coffee machine 200 in order to prepare the coffee. The coffee machine 200 comprises a scanner 220, which is the same as the scanner 120, and a machine-user interface 210, which is shown more clearly in Fig. 4. - An additional controller 240 may be interposed between the scanner 220 and the machine (e.g., between the scanner 220 and the microprocessor). The controller 240 may be configured to recognize a certain function based on the code (e.g., which beverage is to be prepared based on information stored in the machine 200) and/or to recognize from the code a set of steps and/or machine actuation parameters indicating how the function is to be performed (e.g., how much coffee is to be used, whether milk is to be provided and/or foamed, how much water is to be infused, etc.). - Referring to
Fig. 4, the machine-user interface 210 comprises a screen 214 on which a plurality of pictograms 216, each representing a specific type of coffee, is displayed. The machine-user interface 210 further comprises a plurality of push buttons 212, individually corresponding to the respective pictograms 216. Accordingly, the machine-user interface 210 allows a user to select the desired coffee drink, e.g. flat white, by contacting (pressing) the corresponding push button 212'. A command thereby generated will be sent to the microprocessor of the coffee machine 200, which controls the respective components of the coffee machine 200 accordingly to prepare a flat white for the user. - Referring now to
Figs. 1, 2 and 4, according to the method for operating the machine of the invention, in the image obtention step S1, the user obtains an image of the machine-user interface 210 with the smartphone 140. This may be done by executing a special application installed on the smartphone 140, which turns on the camera, and aiming the camera at the machine-user interface 210. An image of (or a livestream showing) the machine-user interface 210 is automatically obtained and communicated to the server 160 via the network 150. - Then, in the image analysis step S2, the
server 160 analyzes, by an algorithm such as artificial intelligence, the image (or the livestream) to recognize the respective types of coffee associated with the individual push buttons 212 shown in the image (or the livestream). That is, the algorithm recognizes the respective types of coffee represented by the individual pictograms 216 (or by the designations displayed below) and correlates the respective types of coffee to the corresponding push buttons 212. Specifically, the algorithm recognizes that the three push buttons 212 on the left of the machine-user interface 210 are respectively associated with, in an order from the top to the bottom, espresso, americano and cappuccino, while the three push buttons 212 on the right are respectively associated with, in the same order, doppio, cafe latte and flat white. The server 160 then communicates a result of the recognition to the smartphone 140 via the network 150. - The image may further be processed by the
server 160 and/or the smartphone 140 in order to facilitate input by the user. For example, the image may be processed to show the machine-user interface 210 as seen in a front view and/or some of the pixels may be replaced (e.g., unclear pictograms may be replaced by pictograms having a better sharpness and/or image quality). - Referring further to
Fig. 5, in the function selection step S3, an analyzed image 170 is displayed on the touchscreen 142 of the smartphone 140. The user is then allowed to select the desired type of coffee, e.g. flat white, by contacting the touchscreen 142 at a position where the corresponding push button 212' is displayed, thereby indicating that the user's desired type of coffee is flat white. Preferably, the user is allowed to zoom in/out of the analyzed image 170 displayed on the touchscreen 142 with a gesture, such as moving two fingers toward or away from each other, to facilitate the selection of the desired type of coffee. - In response to the user's indication of the desired type of coffee, a machine-readable optical code containing a command that a flat white be made is generated, as per the code generation step S4. The code may be determined by the
server 160 and then communicated to the smartphone 140 via the network 150. - Referring further to
Fig. 6, a machine-readable optical code 180, which for example is a QR code, is displayed on the touchscreen 142 of the smartphone 140, as per the code output step S5. The user may then scan the machine-readable optical code 180 with the scanner 220 of the coffee machine 200, as per the code identification step S6. In this manner, the command that a flat white be made is given to the coffee machine 200, which accordingly prepares the flat white, as per the function performing step S7. - In this manner, the method according to the invention provides the user with a virtual reality environment in which the user is capable of operating the machine via the machine-user interface in exactly the same manner as he/she does in reality, except that the machine-user interface is displayed on the smartphone. The method thereby avoids contact between the user and the machine and meanwhile eliminates the need to provide a network connection to the machine.
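The core of the function selection step S3 - mapping the position at which the user contacts the touchscreen to the push button displayed there - amounts to a hit test against the bounding boxes produced by the image analysis. The coordinates and labels below are invented for illustration:

```python
# Hypothetical output of the image analysis step: bounding boxes (in image
# pixel coordinates) of the recognized push buttons and their functions.
RECOGNIZED_BUTTONS = [
    {"function": "espresso",   "box": (10, 10, 60, 40)},    # (x0, y0, x1, y1)
    {"function": "flat_white", "box": (70, 90, 120, 120)},
]

def button_at(tap_x, tap_y):
    """Return the function of the button displayed at the tapped position,
    or None if the tap misses every recognized button."""
    for btn in RECOGNIZED_BUTTONS:
        x0, y0, x1, y1 = btn["box"]
        if x0 <= tap_x <= x1 and y0 <= tap_y <= y1:
            return btn["function"]
    return None

print(button_at(95, 100))  # lands on the flat-white button
print(button_at(0, 0))     # misses every button
```

If the user has zoomed or panned the analyzed image, the tap coordinates would first have to be mapped back through that display transform before the hit test.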
- A skilled person will appreciate that the method in accordance with this aspect of the invention is not limited by the non-essential technical details exemplarily mentioned above and may include alternatives and/or modifications such as those described above with respect to the first aspect of the invention. For example, referring to Fig. 7, in the function selection step S3, instead of displaying the analyzed image 170 on the touchscreen 142 of the smartphone 140, a menu 190 listing the respective types of coffee may be displayed to the user. The user may then select the desired type of coffee from the menu 190. In response to the user's selection, the steps S4 to S7 as described above will follow. - Also, while the
push buttons 212 as shown in Figs. 3 to 5 are described, the machine-user interface 210 may comprise no push buttons in the case that, for example, the screen 214 of the machine-user interface 210 is itself a touchscreen which allows the user to select the desired coffee drink by contacting a region of the screen 214 that substantially overlaps with the corresponding pictogram displayed thereon. In this scenario, in accordance with the invention, the user selects the desired type of coffee by contacting the touchscreen 142 of the smartphone 140 at a position where said region is displayed in the function selection step S3. - Furthermore, while in the above description the image analysis step S2 is carried out before the function selection step S3, this order may be reversed. That is, the method may be carried out such that, after the image obtention step S1, the image of the machine-user interface 210, not yet analyzed, is displayed on the touchscreen 142 of the smartphone 140, and the user contacts the touchscreen 142 at a position where e.g. the push button 212' is displayed to select the desired type of coffee, a corresponding region of the image then being analyzed by the server 160 to recognize the type of coffee associated with the push button 212'. The result of the recognition is then used as the user's indication of the desired type of coffee, and steps S4 to S7 then follow. - Moreover, the
machine 100 may comprise, instead of or in addition to the scanner 120, a wireless communication terminal 230. In this scenario, the code in the code output step S5 is in the form of an electromagnetic signal configured to be received by the wireless communication terminal. Also, while the server 160 is described, the method may be carried out with a system which does not include a remote server, i.e., the image analysis step S2 and the code generation step S4 may be carried out on the smartphone 140. The terminal 230 may be connected to the controller 240. - The method in accordance with the second aspect of the invention will now be described with reference to
Fig. 8, which is a flow chart of the respective method steps. The method comprises an image obtention step S1', an image analysis step S2', a product selection step S3', a code generation step S4', a code output step S5', a code identification step S6' and a product release step S7'. Each step may comprise the features described in the context of the second aspect of the invention. Furthermore, the method in accordance with the second aspect is principally similar to that of the first aspect, except that it concerns the image of products sold by a vending machine or the image of representations of the products. Unnecessary repetition will thus be avoided hereinafter. - Specifically, the steps of
Fig. 8 may also be carried out with the system 10 shown in Fig. 2. In this regard, the machine 100 is a vending machine via which a plurality of products can be purchased. The machine 100 further comprises - in addition to or in replacement of the machine-user interface 110 - a panel 130. The panel 130 is transparent so that the products stored in the machine 100 are visible to a user. - According to the method for operating the vending machine of the invention, in the image obtention step S1', the user obtains an image of the products seen through the
panel 130 with the smartphone 140 in a manner similar to the image obtention step S1. Then, in the image analysis step S2', the server 160 analyzes the image to recognize the respective products shown in the image in order to identify the products that can be purchased via the vending machine. - Next, in the product selection step S3', an analyzed image is displayed on the
touchscreen 142 of the smartphone 140. The user is allowed to select the desired product by contacting the touchscreen 142 at a position where said desired product is displayed. In response to the user's selection of the desired product, a machine-readable optical code containing a command that the desired product be released is generated, as per the code generation step S4'. The machine-readable optical code is then displayed on the touchscreen 142 of the smartphone 140, as per the code output step S5'. The user may scan the machine-readable optical code with the scanner 120 of the machine 100, as per the code identification step S6'. In this manner, the command that the desired product be released is given to the machine 100, which accordingly releases the desired product, as per the product release step S7'. The product can then be taken out of the machine 100 by the user. - A skilled person will appreciate that the method in accordance with this aspect of the invention is not limited by the non-essential technical details exemplarily mentioned above and may include alternatives and/or modifications such as those described above with respect to the second aspect of the invention. For example, while a
transparent panel 130 allowing the user to see the products is described, the products offered by the vending machine may also be invisible to the user. Instead, representations of the products may be visible, and the image concerned in the respective method steps S1' to S3' is an image showing representations of the products. In this scenario, the panel 130 may be a display panel. - While the invention has been described in the context of the detailed embodiments with reference to a beverage preparation machine, the invention is not limited with respect to the type of machine and/or the services or products provided thereby. For example, as noted above, the method may also be used for operating other types of machines, such as other vending machines, elevators, etc.
- Various aspects and embodiments of the invention have been described for purposes of illustration. The present invention, however, should not be unduly limited by any of the details of the above disclosure, as a skilled person in the art will appreciate that changes and modifications that do not contradict the principle of the invention may be made and still covered by the present invention. In particular, the present invention shall not be construed to be limited to the embodiments described above with reference to the drawings. Rather, the scope of protection of the present invention is determined solely by the appended claims.
- The invention may, for example, be defined by the following items:
- 1. A method for operating a machine, the machine comprising
a scanner or a wireless communication terminal for receiving a code, and
a machine-user interface comprising a plurality of contact areas, wherein the machine is configured to perform a plurality of functions, wherein each contact area is associated with a different one of the plurality of functions, and wherein the machine is instructed to perform the function associated with a respective contact area when a user of the machine contacts said contact area,
the method comprising
- an image obtention step, wherein the user obtains one or more images of the machine-user interface with a portable electronic device, the one or more images showing at least some of the plurality of contact areas of the machine-user interface;
- an image analysis step, wherein the one or more images are analyzed to recognize the function associated with at least one of the plurality of contact areas shown in the one or more images;
- a function selection step,
- wherein the one or more images are displayed to the user by a device-user interface of the portable electronic device and the user actuates the device-user interface at a position where one of the plurality of contact areas is displayed to select a desired function associated with said contact area, or
- wherein the functions recognized in the image analysis step are displayed to the user by a device-user interface of the portable electronic device as a menu and the user selects a desired function therefrom;
- a code generation step, wherein a code is generated, the code conveying the desired function;
- a code output step, wherein the code is output by the portable electronic device;
- a code identification step, wherein the code output by the portable electronic device is scanned with the scanner or received with the wireless communication terminal; and
- a function performing step, wherein the machine performs the desired function conveyed by the code.
- 2. The method of item 1,
wherein the code is a machine-readable optical code, wherein the code is displayed by the portable electronic device in the code output step, wherein the code is scanned with the scanner in the code identification step;
preferably wherein the code comprises a one-dimensional or two-dimensional barcode, more preferably a QR code. - 3. The method of item 1,
wherein the code is wirelessly transmitted by the portable electronic device in the code output step and received with the wireless communication terminal;
preferably wherein the code is wirelessly transmitted by Bluetooth or Near Field Communication. - 4. The method of item 1, 2 or 3,
wherein the image analysis step comprises recognizing the type of machine based on one or more of:- a letter, a number, a character string, a logogram, and/or a pictogram associated with each contact area, and/or
- a shape of the machine and/or a shape of the machine-user interface, and/or
- an arrangement of the contact areas on the machine-user interface, and/or
- a machine-readable optical code positioned on the machine;
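Items 2 and 3 leave the payload of the code open: it may be rendered as a QR code on the device's screen or transmitted via Bluetooth/NFC. The following stdlib-only sketch shows one possible way to pack the selection into a short, integrity-checked string; the field names and the CRC framing are assumptions, not part of the source.

```python
import base64
import json
import zlib

def encode_payload(machine_type: str, function: str) -> str:
    """Pack the selection into a short string suitable for embedding in a
    QR code or sending over a wireless link (format is an assumption)."""
    body = json.dumps({"m": machine_type, "f": function},
                      separators=(",", ":")).encode()
    crc = zlib.crc32(body)                    # integrity check
    return base64.urlsafe_b64encode(body).decode() + ".%08x" % crc

def decode_payload(code: str) -> dict:
    # Machine side: verify the checksum before acting on the code.
    b64, crc_hex = code.rsplit(".", 1)
    body = base64.urlsafe_b64decode(b64)
    if zlib.crc32(body) != int(crc_hex, 16):
        raise ValueError("corrupted code")
    return json.loads(body)

code = decode_payload(encode_payload("coffee-machine-x1", "flat white"))
print(code)  # {'m': 'coffee-machine-x1', 'f': 'flat white'}
```

Because the urlsafe base64 alphabet never contains `.`, the checksum can be appended with a plain dot separator and split off unambiguously on the machine side.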
- 5. The method according to any one of the preceding items,
wherein the contact areas each are associated with a letter, a number, a character string, a logogram, and/or a pictogram, and
the image analysis step comprises recognizing the function associated with the at least one contact area based on a recognition of the letter, the number, the character string, the logogram, and/or the pictogram associated therewith.
- 6. The method according to any one of the preceding items, wherein the image analysis step comprises:
- recognizing the function associated with each of the contact areas shown in the one or more images, wherein preferably in the function selection step one or more analyzed images are displayed to the user before the user actuates the device-user interface to select the desired function; and/or
- recognizing the function associated with the contact area displayed at the position where the user has actuated the device-user interface to select the desired function.
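The second alternative of item 6 (resolving the function only at the position the user actually tapped) amounts to a hit test against the bounding boxes produced by the image analysis. A minimal sketch, with hypothetical box coordinates:

```python
# Map a tap position on the displayed image to a contact area.
# The bounding boxes below are a hypothetical analysis output:
# (x, y, width, height) in image pixels.
AREAS = {
    "espresso":   (10, 40, 80, 80),
    "flat white": (110, 40, 80, 80),
}

def function_at(tap_x: int, tap_y: int):
    """Return the function whose contact area contains the tap, if any."""
    for function, (x, y, w, h) in AREAS.items():
        if x <= tap_x < x + w and y <= tap_y < y + h:
            return function
    return None   # tap landed outside every recognized contact area

print(function_at(150, 60))  # flat white
print(function_at(0, 0))     # None
```

This variant only needs the one tapped area to be recognized, whereas the first alternative of item 6 recognizes every contact area up front so the analyzed image can be shown before selection.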
- 7. The method according to any one of the preceding items,
wherein the one or more images are analyzed by software provided on the portable electronic device; or
wherein the portable electronic device is connected to a server via a wireless communication network and the one or more images are analyzed by the server.
- 8. The method according to any one of the preceding items, wherein the one or more images are analyzed with an artificial intelligence algorithm.
- 9. The method according to any one of the preceding items, wherein the one or more images are part of a livestream, preferably wherein the portable electronic device is connected to a server via a wireless communication network and the livestream is communicated to the server via the wireless communication network.
- 10. The method of item 9, wherein the livestream is displayed by the portable electronic device during the image obtention step, wherein the livestream is analyzed by an algorithm, preferably an artificial intelligence algorithm, and wherein the livestream displayed is frozen upon determining that a set of relevant data has been recognized by the algorithm.
- 11. The method of item 10, wherein the livestream is frozen upon determining that the functions associated with all contact areas have been recognized and/or that the entire machine-user interface is being shown.
- 12. The method according to any one of the preceding items,
wherein the machine-user interface comprises a touchscreen and several of the contact areas are displayed on the touchscreen; and/or
wherein the machine-user interface comprises push buttons and/or capacitive sensors and several of the contact areas are formed by said push buttons and/or capacitive sensors.
- 13. The method according to any one of the preceding items,
wherein the portable electronic device is connected to a server via a wireless communication network and the code is determined by the server and communicated to the portable electronic device.
- 14. The method according to any one of the preceding items,
wherein the device-user interface is a touchscreen;
preferably wherein the portable electronic device is a smartphone having a camera, the image being obtained with the camera in the image obtention step, and the image being displayed on the touchscreen in the function selection step;
more preferably wherein the code is displayed on the touchscreen in the code output step.
- 15. The method according to any one of the preceding items, wherein the machine is not connected to the internet, preferably wherein the portable electronic device is connected to the internet.
- 16. The method according to any one of the preceding items, wherein the function performed by the machine comprises providing a service and/or a product.
- 17. The method according to any one of the preceding items, wherein the product comprises snacks, beverages, cigarettes, or tickets.
- 18. The method according to any one of the preceding items, wherein the machine is a vending machine, such as a beverage preparation machine.
- 19. A method for operating a vending machine, the vending machine comprising
a scanner or a wireless communication terminal for receiving a code, and
a plurality of products that can be purchased via the vending machine, the plurality of products or representations thereof being visible to a user of the vending machine;
the method comprising
- an image obtention step, wherein the user obtains one or more images of the plurality of products or the representations thereof with a portable electronic device, the one or more images showing at least some of the plurality of products or representations thereof;
- an image analysis step, wherein the one or more images are analyzed to recognize the plurality of products or the representations thereof shown in the image in order to identify the products that can be purchased via the vending machine;
- a product selection step,
- wherein the one or more images are displayed to the user by a device-user interface of the portable electronic device and the user actuates the device-user interface at a position where one of the plurality of products or representations thereof is displayed to select a desired product, or
- wherein the products identified in the image analysis step are displayed to the user by a device-user interface of the portable electronic device as a menu and the user selects a desired product therefrom;
- a code generation step, wherein a code is generated, the code conveying the desired product;
- a code output step, wherein the code is output by the portable electronic device;
- a code identification step, wherein the code output by the portable electronic device is scanned with the scanner or received with the wireless communication terminal; and
- a product release step, wherein the vending machine releases the desired product conveyed by the code.
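The machine side of items 19's last two steps (code identification and product release) reduces to decoding the received code and actuating the matching slot. A sketch under assumed names; the `PROD:` framing and the slot layout are illustrative only:

```python
# Hypothetical mapping from product name to dispensing slot/motor number.
SLOTS = {"water": 3, "cola": 7, "crisps": 12}

def release_product(code: str) -> int:
    """Code identification + product release steps, sketched.

    Returns the slot number to actuate for the conveyed product."""
    if not code.startswith("PROD:"):
        raise ValueError("unknown code format")
    product = code[len("PROD:"):]
    try:
        return SLOTS[product]
    except KeyError:
        raise ValueError("product not stocked: " + product) from None

print(release_product("PROD:cola"))  # 7
```

Note that the vending machine needs no knowledge of how the selection was made on the portable device; it only validates and decodes the code it scans or receives.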
- 20. The method of item 19,
wherein the code is a machine-readable optical code, wherein the code is displayed by the portable electronic device in the code output step, wherein the code is scanned with the scanner in the code identification step;
preferably wherein the code comprises a one-dimensional or two-dimensional barcode, more preferably a QR code.
- 21. The method of item 19,
wherein the code is wirelessly transmitted by the portable electronic device in the code output step and received with the wireless communication terminal;
preferably wherein the code is wirelessly transmitted by Bluetooth or Near Field Communication.
- 22. The method of item 19, 20, or 21,
wherein the image analysis step comprises recognizing the type of vending machine based on one or more of:
- a shape of the vending machine and/or a shape of a machine-user interface of the vending machine, and/or
- an arrangement of the plurality of products or representations thereof, and/or
- a machine-readable optical code positioned on the vending machine.
- 23. The method according to any one of items 19 to 22, wherein the image analysis step comprises:
- recognizing each product or representation thereof shown in the one or more images, wherein preferably in the product selection step one or more analyzed images are displayed to the user before the user actuates the device-user interface to select the desired product; and/or
- recognizing the product or representation thereof displayed at the position where the user has actuated the device-user interface to select the desired product.
- 24. The method according to any one of items 19 to 23,
wherein the one or more images are analyzed by software provided on the portable electronic device; or
wherein the portable electronic device is connected to a server via a wireless communication network and the one or more images are analyzed by the server.
- 25. The method according to any one of items 18 to 24, wherein the one or more images are analyzed with an artificial intelligence algorithm.
- 26. The method according to any one of the preceding items, wherein the one or more images are part of a livestream, preferably wherein the portable electronic device is connected to a server via a wireless communication network and the livestream is communicated to the server via the wireless communication network.
- 27. The method of item 26, wherein the livestream is displayed by the portable electronic device during the image obtention step, wherein the livestream is analyzed by an algorithm, preferably an artificial intelligence algorithm, and wherein the livestream displayed is frozen upon determining that a set of relevant data has been recognized by the algorithm.
- 28. The method of item 27, wherein the livestream is frozen upon determining that all products or representations thereof have been recognized.
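The freeze condition of items 10-11 and 26-28 (stop updating the displayed livestream once the algorithm has recognized everything it needs) can be sketched as a simple loop over frames. The per-frame analysis is stubbed out here; only the accumulation-and-freeze logic is from the source.

```python
# Functions (or products) the algorithm expects to recognize; in practice
# this set would come from the recognized machine type, not be hard-coded.
EXPECTED = {"espresso", "flat white", "hot water"}

def analyze_frame(frame: str) -> set:
    # Stub recognizer: pretend each frame reveals the labels named in it.
    return {f for f in EXPECTED if f in frame}

def freeze_index(frames: list):
    """Return the index of the frame at which the livestream would freeze,
    or None if the relevant data is never fully recognized."""
    recognized = set()
    for i, frame in enumerate(frames):
        recognized |= analyze_frame(frame)
        if recognized == EXPECTED:   # all contact areas / products found
            return i
    return None                      # keep streaming

frames = ["espresso", "espresso flat white", "hot water blur", "..."]
print(freeze_index(frames))  # 2
```

Accumulating recognitions across frames, rather than requiring a single perfect frame, is what lets the stream freeze on the earliest frame at which the full set of relevant data has been seen.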
- 29. The method according to any one of items 19 to 28,
wherein the portable electronic device is connected to a server via a wireless communication network and the code is determined by the server and communicated to the portable electronic device.
- 30. The method according to any one of items 19 to 29,
wherein the device-user interface is a touchscreen;
wherein the portable electronic device has a camera, the image being obtained with the camera in the image obtention step, and the image being displayed on the touchscreen in the product selection step.
- 31. The method according to any one of items 19 to 30,
wherein the device-user interface is a touchscreen;
wherein the code is displayed on the touchscreen in the code output step.
- 32. The method according to any one of the preceding items, wherein the vending machine is not connected to the internet, preferably wherein the portable electronic device is connected to the internet.
- 33. The method according to any one of the preceding items, wherein the product comprises snacks, beverages, cigarettes, or tickets.
- 34. A system comprising a portable electronic device and a machine, the machine comprising
a scanner or a wireless communication terminal for receiving a code, and
a machine-user interface comprising a plurality of contact areas, wherein the machine is configured to perform a plurality of functions, wherein each contact area is associated with a different one of the plurality of functions, and wherein the machine is instructed to perform the function associated with a respective contact area when a user of the machine contacts said contact area,
wherein the system is configured to carry out the method according to any one of items 1 to 18.
- 35. A system comprising a portable electronic device and a vending machine, the vending machine comprising
a scanner or a wireless communication terminal for receiving a code, and
a plurality of products that can be purchased via the vending machine, the plurality of products or representations thereof being visible to a user of the vending machine;
wherein the system is configured to carry out the method according to any one of items 19 to 33.
Claims (15)
- A method for operating a machine, the machine comprising
a scanner or a wireless communication terminal for receiving a code, and
a machine-user interface comprising a plurality of contact areas, wherein the machine is configured to perform a plurality of functions, wherein each contact area is associated with a different one of the plurality of functions, and wherein the machine is instructed to perform the function associated with a respective contact area when a user of the machine contacts said contact area,
the method comprising
- an image obtention step, wherein the user obtains one or more images of the machine-user interface with a portable electronic device, the one or more images showing at least some of the plurality of contact areas of the machine-user interface;
- an image analysis step, wherein the one or more images are analyzed to recognize the function associated with at least one of the plurality of contact areas shown in the one or more images;
- a function selection step,
- wherein the one or more images are displayed to the user by a device-user interface of the portable electronic device and the user actuates the device-user interface at a position where one of the plurality of contact areas is displayed to select a desired function associated with said contact area, or
- wherein the functions recognized in the image analysis step are displayed to the user by a device-user interface of the portable electronic device as a menu and the user selects a desired function therefrom;
- a code generation step, wherein a code is generated, the code conveying the desired function;
- a code output step, wherein the code is output by the portable electronic device;
- a code identification step, wherein the code output by the portable electronic device is scanned with the scanner or received with the wireless communication terminal; and
- a function performing step, wherein the machine performs the desired function conveyed by the code.
- The method of claim 1,
wherein the image analysis step comprises recognizing the type of machine based on one or more of:
- a letter, a number, a character string, a logogram, and/or a pictogram associated with each contact area, and/or
- a shape of the machine and/or a shape of the machine-user interface, and/or
- an arrangement of the contact areas on the machine-user interface, and/or
- a machine-readable optical code positioned on the machine;
preferably wherein a plurality of pictograms is associated with the plurality of contact areas and the image analysis step comprises recognizing the type of machine based on one or more of the pictograms.
- The method according to claim 1 or 2,
wherein the contact areas each are associated with a letter, a number, a character string, a logogram, and/or a pictogram, and
the image analysis step comprises recognizing the function associated with the at least one contact area based on a recognition of the letter, the number, the character string, the logogram, and/or the pictogram associated therewith.
- The method according to any one of the preceding claims, wherein the image analysis step comprises:
- recognizing the function associated with each of the contact areas shown in the one or more images, wherein preferably in the function selection step one or more analyzed images are displayed to the user before the user actuates the device-user interface to select the desired function; and/or
- recognizing the function associated with the contact area displayed at the position where the user has actuated the device-user interface to select the desired function.
- A method for operating a vending machine, the vending machine comprising
a scanner or a wireless communication terminal for receiving a code, and
a plurality of products that can be purchased via the vending machine, the plurality of products or representations thereof being visible to a user of the vending machine;
the method comprising
- an image obtention step, wherein the user obtains one or more images of the plurality of products or the representations thereof with a portable electronic device, the one or more images showing at least some of the plurality of products or representations thereof;
- an image analysis step, wherein the one or more images are analyzed to recognize the plurality of products or the representations thereof shown in the image in order to identify the products that can be purchased via the vending machine;
- a product selection step,
- wherein the one or more images are displayed to the user by a device-user interface of the portable electronic device and the user actuates the device-user interface at a position where one of the plurality of products or representations thereof is displayed to select a desired product, or
- wherein the products identified in the image analysis step are displayed to the user by a device-user interface of the portable electronic device as a menu and the user selects a desired product therefrom;
- a code generation step, wherein a code is generated, the code conveying the desired product;
- a code output step, wherein the code is output by the portable electronic device;
- a code identification step, wherein the code output by the portable electronic device is scanned with the scanner or received with the wireless communication terminal; and
- a product release step, wherein the vending machine releases the desired product conveyed by the code.
- The method of claim 5,
wherein the image analysis step comprises recognizing the type of vending machine based on one or more of:
- a shape of the vending machine and/or a shape of a machine-user interface of the vending machine, and/or
- an arrangement of the plurality of products or representations thereof, and/or
- a machine-readable optical code positioned on the vending machine.
- The method according to claim 5 or 6, wherein the image analysis step comprises:
- recognizing each product or representation thereof shown in the one or more images, wherein preferably in the product selection step one or more analyzed images are displayed to the user before the user actuates the device-user interface to select the desired product; and/or
- recognizing the product or representation thereof displayed at the position where the user has actuated the device-user interface to select the desired product.
- The method according to any one of the preceding claims, wherein the one or more images are part of a livestream, preferably wherein the portable electronic device is connected to a server via a wireless communication network and the livestream is communicated to the server via the wireless communication network.
- The method of claim 8, wherein the livestream is displayed by the portable electronic device during the image obtention step, wherein the livestream is analyzed by an algorithm, preferably an artificial intelligence algorithm, and wherein the livestream displayed is frozen upon determining that a set of relevant data has been recognized by the algorithm.
- The method according to any one of the preceding claims,
wherein the portable electronic device is connected to a server via a wireless communication network and the code is determined by the server and communicated to the portable electronic device.
- The method according to any one of the preceding claims, wherein the machine is not connected to the internet, preferably wherein the portable electronic device is connected to the internet.
- The method of any one of claims 1 to 11,
wherein the code is a machine-readable optical code, wherein the code is displayed by the portable electronic device in the code output step, wherein the code is scanned with the scanner in the code identification step;
preferably wherein the code comprises a one-dimensional or two-dimensional barcode, more preferably a QR code.
- The method of any one of claims 1 to 11,
wherein the code is wirelessly transmitted by the portable electronic device in the code output step and received with the wireless communication terminal;
preferably wherein the code is wirelessly transmitted by Bluetooth or Near Field Communication.
- A system comprising a portable electronic device and a machine, the machine comprising
a scanner or a wireless communication terminal for receiving a code, and
a machine-user interface comprising a plurality of contact areas, wherein the machine is configured to perform a plurality of functions, wherein each contact area is associated with a different one of the plurality of functions, and wherein the machine is instructed to perform the function associated with a respective contact area when a user of the machine contacts said contact area,
wherein the system is configured to carry out the method according to any one of claims 1 to 4.
- A system comprising a portable electronic device and a vending machine, the vending machine comprising
a scanner or a wireless communication terminal for receiving a code, and
a plurality of products that can be purchased via the vending machine, the plurality of products or representations thereof being visible to a user of the vending machine;
wherein the system is configured to carry out the method according to any one of claims 5 to 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20197209.8A EP3971849A1 (en) | 2020-09-21 | 2020-09-21 | Method for operating machine and system configured to carry out the method |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3971849A1 true EP3971849A1 (en) | 2022-03-23 |
Family
ID=72603429
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP20197209.8A Pending EP3971849A1 (en) | 2020-09-21 | 2020-09-21 | Method for operating machine and system configured to carry out the method |
Country Status (1)
Country | Link |
---|---|
EP (1) | EP3971849A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080308628A1 (en) * | 2007-06-12 | 2008-12-18 | Gilbarco Inc. | System and method for providing receipts, advertising, promotion, loyalty programs, and contests to a consumer via an application-specific user interface on a personal communication device |
WO2014088488A1 (en) * | 2012-12-06 | 2014-06-12 | Mobile Payment Solutions Holding Nordic Ab | Method for purchasing or claiming a product using a portable communication device |
US20190130618A1 (en) * | 2017-10-31 | 2019-05-02 | Paypal, Inc. | Using augmented reality for accessing legacy transaction terminals |
US10331874B1 (en) * | 2018-06-06 | 2019-06-25 | Capital One Services, Llc | Providing an augmented reality overlay to secure input data |
US10515349B1 (en) * | 2018-03-05 | 2019-12-24 | Carolyn Bryant | Networked augmented reality and virtual vending machine |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9230386B2 (en) | Product providing apparatus, display apparatus, and method for providing GUI using the same | |
KR101500981B1 (en) | Personalized QR Code Coffee Vending Machine System And Its Coffee Making Method For Smart Device User | |
US9767446B2 (en) | Touch screen system and methods for multiple contactless payments | |
JP4659817B2 (en) | Sales support device | |
JP4928592B2 (en) | Image processing apparatus and program | |
JP6261060B2 (en) | Information processing device | |
JP2012243243A (en) | Accounting processing system | |
EP2026306A2 (en) | Article sales data processing apparatus | |
JP2017049953A (en) | Order processing system and order processing method | |
JP2011048440A (en) | Cooking assistance terminal and program | |
KR101923591B1 (en) | Method for menu ordering using character | |
US20100100844A1 (en) | Electronic menu apparatus | |
JP2015191576A (en) | Information output apparatus, information output method, information output system, terminal and program | |
EP3971849A1 (en) | Method for operating machine and system configured to carry out the method | |
CN204480387U (en) | Beverage machine is ordered automatic transmission pouring and boiling device | |
JP6554257B2 (en) | Support system for providing custom dishes | |
KR101852543B1 (en) | Smart vending machine, electronic commerce system and method using the same | |
JP2011113388A (en) | Ordering system, order terminal and controller device | |
JP6940859B2 (en) | Order entry system, mobile terminal, table-equipped terminal, and ordering method | |
JP2012094070A (en) | Automatic dispenser | |
JP5709616B2 (en) | Order processing apparatus, order processing method, and program | |
JP6355256B2 (en) | Menu screen construction device, menu processing device, menu screen production method, menu processing method, and program | |
TW201633976A (en) | Automatic transmitting and beverage brewing apparatus of food ordering machine | |
KR102391096B1 (en) | System for managing on demand complex autonomous restaurant and method thereof | |
JP2021114185A (en) | Information processing equipment, systems and programs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
AK | Designated contracting states |
Kind code of ref document: A1 |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
17P | Request for examination filed |
Effective date: 20220922 |
RBV | Designated contracting states (corrected) |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
17Q | First examination report despatched |
Effective date: 20240119 |