+

WO2019171635A1 - Operation input device, operation input method, and computer-readable recording medium - Google Patents

Operation input device, operation input method, and computer-readable recording medium

Info

Publication number
WO2019171635A1
WO2019171635A1 (PCT/JP2018/034490)
Authority
WO
WIPO (PCT)
Prior art keywords
operation input
sensor
aerial projection
projection plane
depth
Prior art date
Application number
PCT/JP2018/034490
Other languages
French (fr)
Japanese (ja)
Inventor
夏美 鈴木
Original Assignee
Necソリューションイノベータ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Necソリューションイノベータ株式会社 filed Critical Necソリューションイノベータ株式会社
Priority to JP2020504657A priority Critical patent/JP6898021B2/en
Priority to CN201880090820.0A priority patent/CN111886567B/en
Publication of WO2019171635A1 publication Critical patent/WO2019171635A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means

Definitions

  • The present invention relates to an operation input device and an operation input method that enable an input operation by touching a screen displayed in the air, and further relates to a computer-readable recording medium on which a program for realizing these is recorded.
  • In recent years, an operation device has been proposed in which an operation screen is projected into the air and a user can perform an operation input by touching the screen projected in the air (hereinafter referred to as the "aerial projection plane") (see, for example, Patent Document 1).
  • The operation device proposed in Patent Document 1 includes a display device, an image imaging plate that forms an image of the screen of the display device in the air, a camera, a distance sensor, and a control unit.
  • The image imaging plate has a function of collecting the light emitted from an image at a specific position at a position at the same distance on the opposite side as viewed from the image imaging plate, thereby forming the same image (see, for example, Patent Document 2). For this reason, when the light emitted from the screen displayed on the display device passes through the image imaging plate, the screen is projected into the air.
  • The camera captures the aerial projection plane and the user's finger, and inputs the captured image to the control unit.
  • The distance sensor measures the distance from the sensor to the user's fingertip and inputs the measured distance to the control unit.
  • The control unit first calculates the coordinates of the user's finger on the aerial projection plane by substituting the position, on the image, of the user's fingertip shown in the captured image and the distance measured by the distance sensor into a conversion formula.
  • The conversion formula used at this time is determined in advance from the position coordinates of the aerial projection plane and camera information (position, angle of view, focal length, etc.).
  • The control unit determines the overlap between the user's finger and the operation icon displayed on the aerial projection plane based on the calculated coordinates, and specifies the input operation by the user based on the determination result.
  • With this configuration, the user can perform an input operation by touching the screen displayed in the air.
  • However, the operation device disclosed in Patent Document 1 has a problem in that, when the user's fingertip is positioned behind the operation screen in the air, the coordinates of the contact position of the user's finger on the aerial projection plane can no longer be detected accurately.
  • Specifically, when the fingertip passes behind the aerial projection plane, the distance measured by the distance sensor becomes shorter than the distance at the contact position intended by the user.
  • As a result, if the distance sensor is installed so that the distance at the center position of the aerial projection plane is the shortest, the conversion formula moves the coordinates of the user's finger on the aerial projection plane toward the center of the screen, as the toy calculation below illustrates.
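The shift toward the screen center can be illustrated with a toy numerical model. The actual conversion formula of Patent Document 1 is not given here, so the sketch below assumes a simple perspective-style mapping in which the offset from the optical axis scales with the measured distance; all names and numbers are illustrative only.

```python
# Toy illustration (not the actual conversion of Patent Document 1):
# assume the offset of the touch point from the optical axis scales
# linearly with the distance measured by the distance sensor.

def to_plane_offset(image_offset_mm: float, measured_mm: float, plane_mm: float) -> float:
    """Map an offset seen by the camera to an offset on the aerial
    projection plane, assuming a linear perspective-style conversion."""
    return image_offset_mm * (measured_mm / plane_mm)

plane_distance = 400.0    # assumed distance from the sensor to the aerial projection plane (mm)
intended_offset = 120.0   # offset of the icon the user is aiming at (mm)

# Fingertip exactly on the plane: the intended offset is reproduced.
print(to_plane_offset(intended_offset, 400.0, plane_distance))  # 120.0

# Fingertip pushed 50 mm past the plane: the measured distance shrinks,
# so the computed offset collapses toward the center of the screen.
print(to_plane_offset(intended_offset, 350.0, plane_distance))  # 105.0 (closer to center)
```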
  • An example of an object of the present invention is to solve the above problem and to provide an operation input device, an operation input method, and a computer-readable recording medium that can suppress a decrease in the detection accuracy of the touch position due to an erroneous operation of the user when the user performs an input operation by touching a screen displayed in the air.
  • An operation input device according to one aspect of the present invention includes: a display device that displays an operation screen; an optical plate that projects the operation screen into the air to generate an aerial projection plane; a sensor device for detecting the position, in a three-dimensional space, of an object that contacts the aerial projection plane; and a control device that identifies an operation input performed on the operation screen. The sensor device outputs sensor data including information for identifying the two-dimensional coordinates of the object in a sensing area and the depth from the sensor device to the object.
  • The control device detects the position of the object on the aerial projection plane from the sensor data; when it detects from the sensor data that a part of the object is located on the sensor device side of the aerial projection plane, it sets, in the sensor data, a figure surrounding that part of the object and corrects the position of the object on the aerial projection plane using the depth at the outer edge of the set figure.
  • An operation input method according to one aspect of the present invention uses: a display device that displays an operation screen; an optical plate that projects the operation screen into the air to generate an aerial projection plane; a sensor device for detecting the position, in a three-dimensional space, of an object that contacts the aerial projection plane; and a computer that specifies an operation input performed on the operation screen, the sensor device outputting sensor data including information for specifying the two-dimensional coordinates of the object in a sensing area and the depth from the sensor device to the object.
  • The method includes: (a) detecting, by the computer, the position of the object on the aerial projection plane from the sensor data; (b) setting, by the computer, in the sensor data, a figure surrounding a part of the object when it is detected from the sensor data that the part of the object is located on the sensor device side of the aerial projection plane; and (c) correcting, by the computer, the position of the object on the aerial projection plane using the depth at the outer edge of the set figure.
  • FIG. 1 is a configuration diagram showing the configuration of the operation input device according to the embodiment of the present invention.
  • FIG. 2 is a diagram illustrating the function of the sensor device used in the operation input device according to the embodiment of the present invention.
  • FIG. 3 is a diagram illustrating an example of a sensing result of the sensor device provided in the operation input device according to the embodiment of the present invention.
  • FIG. 4 is a flowchart showing the operation of the operation input device according to the embodiment of the present invention.
  • FIG. 5 is a block diagram illustrating an example of a computer that implements the control device for the operation input device according to the embodiment of the present invention.
  • FIG. 1 is a configuration diagram showing the configuration of the operation input device according to the embodiment of the present invention.
  • The operation input device 100 in the present embodiment shown in FIG. 1 is a device that enables an input operation by touching a screen displayed in the air.
  • As shown in FIG. 1, the operation input device 100 includes a display device 10, an optical plate 20, a sensor device 30, and a control device 40.
  • The display device 10 displays an operation screen for input.
  • The optical plate 20 projects the operation screen into the air and generates an aerial projection plane 21.
  • The sensor device 30 is a device for detecting the position, in a three-dimensional space, of an object 50 that contacts the aerial projection plane 21.
  • In the example of FIG. 1, the object 50 is a finger of the user performing an operation input.
  • The sensor device 30 is disposed on the back side of the aerial projection plane 21.
  • The sensor device 30 outputs sensor data including information for specifying the two-dimensional coordinates of the object 50 in the sensing area and the depth from the sensor device 30 to the object 50.
  • The control device 40 includes an operation input specifying unit 41, a figure setting unit 42, and a depth correction unit 43.
  • The operation input specifying unit 41 detects the position of the object 50 on the aerial projection plane 21 from the sensor data output by the sensor device 30. The operation input specifying unit 41 then specifies the operation input performed on the operation screen according to the detected position of the object 50.
  • When the figure setting unit 42 detects from the sensor data that a part of the object 50 is located on the sensor device 30 side of the aerial projection plane 21, it sets, in the sensor data, a figure surrounding that part of the object.
  • The depth correction unit 43 corrects the position of the object 50 on the aerial projection plane 21 using the depth at the outer edge of the figure set by the figure setting unit 42.
  • Thus, in the present embodiment, for example, when the user's fingertip ends up behind the aerial projection plane, the position of the finger is corrected. Therefore, according to the present embodiment, when the user performs an input operation by touching the operation screen displayed in the air, a situation in which the detection accuracy of the touch position is degraded by an erroneous operation of the user is avoided.
  • FIG. 2 is a diagram illustrating the function of the sensor device used in the operation input device according to the embodiment of the present invention.
  • FIG. 3 is a diagram illustrating an example of a sensing result of the sensor device provided in the operation input device according to the embodiment of the present invention.
  • In the present embodiment, the display device 10 is a liquid crystal display device or the like.
  • As the optical plate 20, the image imaging plate disclosed in Patent Document 2 described above is used.
  • The optical plate 20 has a function of collecting the light emitted from the image displayed on the screen of the display device 10 at a position at the same distance on the opposite side as viewed from the image imaging plate, thereby forming the same image.
  • In the present embodiment, the sensor device 30 is a depth sensor.
  • When sensing is performed, the depth sensor outputs, as sensor data, the image data of the image obtained by the sensing and the depth added to each pixel of the image.
  • Alternatively, a device composed of a camera and a distance sensor may be used as the sensor device 30.
  • Based on the image data included in the sensor data, the two-dimensional coordinates of the object 50 in the sensing area can be specified.
  • Specifically, as shown in FIG. 2, the image data makes it possible to specify the position of the object 50 in the vertical direction (V-axis direction) and the horizontal direction (H-axis direction) of the image.
  • Based on the depth included in the sensor data, the distance between the object 50 and the sensor device 30 in the Z-axis direction can be specified.
  • The Z axis is an axis along the normal of the sensing surface of the sensor device 30; a minimal sketch of such sensor data follows below.
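A minimal sketch of how one frame of such sensor data might be held in memory; the actual output format of the depth sensor is not specified in the text, so the container below (an image plus a per-pixel depth map indexed by V and H) is an assumption. Python is used for all sketches in this section.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SensorData:
    """One frame from the sensor device 30 (hypothetical container).

    image: 2D image indexed as image[v, h], where V is the vertical axis
           and H the horizontal axis of the sensing area.
    depth: per-pixel depth along the Z axis (the normal of the sensing
           surface); depth[v, h] is the distance from the sensor to
           whatever is imaged at pixel (v, h).
    """
    image: np.ndarray
    depth: np.ndarray
```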
  • In the control device 40 of the present embodiment, when the operation input specifying unit 41 receives sensor data, it specifies, from the image data and the depth added to each pixel of the image, the position in the three-dimensional space of the most distal (sensor device 30 side) portion of the object 50. The operation input specifying unit 41 then converts the specified position into a position on the aerial projection plane 21.
  • Specifically, the operation input specifying unit 41 first extracts the most distal portion of the object 50, and then specifies, from the sensor data, the position of the extracted portion in the three-dimensional space, that is, the coordinate on the X axis, the coordinate on the Y axis, and the depth.
  • As shown in FIG. 2, the positional relationship between the vertical direction (Y-axis direction) and the horizontal direction (X-axis direction) of the aerial projection plane 21 and the V-axis, H-axis, and Z-axis directions of the sensor device 30 is determined in advance.
  • Accordingly, the operation input specifying unit 41 substitutes the position of the most distal portion of the object 50 in the three-dimensional space into the conversion formula obtained from this positional relationship, thereby detecting the position of the object 50 in the vertical direction of the aerial projection plane 21 and its position in the horizontal direction of the aerial projection plane 21 (see the sketch below).
  • When a part of the object 50 is located on the sensor device 30 side of the aerial projection plane 21, the figure setting unit 42 detects this from the sensor data. Specifically, the figure setting unit 42 determines, from the sensor data, whether the Z coordinate (depth) of the most distal portion of the object 50 is equal to or greater than a threshold. If it is not (that is, it is smaller than the threshold), the figure setting unit 42 determines that a part of the object 50 is located on the sensor device 30 side of the aerial projection plane 21.
  • The threshold is set according to the position of the object 50. For example, in FIG. 2, the threshold when the object 50 is above the aerial projection plane 21 is smaller than the threshold when the object 50 is below the aerial projection plane 21. A sketch of this check follows below.
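A sketch of the check performed by the figure setting unit 42 (step A3). The text only says the threshold depends on where the object is relative to the plane, so a per-row threshold array standing in for the plane's depth at each vertical pixel position is assumed.

```python
import numpy as np

def behind_plane(tip_depth: float, tip_v: int, threshold_z: np.ndarray) -> bool:
    """True when the tip of the object lies on the sensor device side of
    the aerial projection plane (the 'smaller than the threshold' case of
    step A3).  threshold_z[v] is an assumed per-row depth threshold."""
    return tip_depth < threshold_z[tip_v]
```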
  • In the present embodiment, the figure setting unit 42 sets, in the sensor data, a rectangle surrounding the most distal portion of the object 50.
  • A rectangle is set in this way because, although the X-axis direction of the aerial projection plane 21 matches the H-axis direction of the sensor data, the Y-axis direction of the aerial projection plane 21 does not match the V-axis direction of the sensor data, so that when a part of the object 50 is positioned on the sensor device 30 side of the aerial projection plane 21, an error occurs in the position in the Y-axis direction. A sketch of the rectangle setting follows below.
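The rectangle of step A5 can be sketched as the bounding box of the pixels whose depth falls below the threshold, i.e. of the part of the object protruding past the plane. Using a binary mask's bounding box is an assumption; the patent only states that a rectangle surrounding the most distal portion is set in the sensor data.

```python
import numpy as np

def bounding_rectangle(depth: np.ndarray,
                       threshold_z: np.ndarray) -> tuple[int, int, int, int]:
    """Return (v_min, v_max, h_min, h_max) of a rectangle surrounding the
    part of the object on the sensor device side of the plane (step A5)."""
    mask = depth < threshold_z[:, None]   # pixels past the aerial projection plane
    vs, hs = np.nonzero(mask)
    return int(vs.min()), int(vs.max()), int(hs.min()), int(hs.max())
```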
  • In the present embodiment, the depth correction unit 43 uses the depth along one of the two sides of the set rectangle on the V-axis direction side to correct the position, detected by the operation input specifying unit 41, of the object 50 in the vertical direction of the aerial projection plane 21.
  • Which of the two sides on the V-axis direction side is used is determined according to the position of the object 50. For example, when the object 50 is below, in the vertical direction, the intersection of the normal passing through the center of the sensing surface of the sensor device 30 with the aerial projection plane 21, the correction is performed using the depth of the upper side in the V-axis direction. Conversely, when the object 50 is above that intersection in the vertical direction, the correction is performed using the depth of the lower side in the V-axis direction.
  • Depending on the positional relationship between the aerial projection plane 21 and the sensor device 30, a side on the H-axis direction side may be used instead, and a figure other than a rectangle may be set. Also, depending on this positional relationship, only the position of the object 50 in the horizontal direction of the aerial projection plane 21 may be corrected, or the positions in both the vertical and horizontal directions may be corrected. A sketch of the correction follows below.
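A sketch of the correction of step A6 performed by the depth correction unit 43: pick the upper or lower V-axis side of the rectangle according to the object's position and redo the plane conversion with the depth sampled along that side. The patent describes only the side selection, so re-running the same conversion with the substituted depth, and taking the median along the chosen side, are assumptions; `to_projection_plane` and `bounding_rectangle` are the hypothetical helpers sketched above.

```python
import numpy as np

def correct_position(depth: np.ndarray, rect: tuple[int, int, int, int],
                     tip_v: int, tip_h: int, center_v: int,
                     calib: np.ndarray) -> tuple[float, float]:
    """Correct the position of the object on the aerial projection plane
    using the depth along one V-axis side of the rectangle (step A6)."""
    v_min, v_max, h_min, h_max = rect
    # The embodiment uses the upper side when the object is below the
    # intersection of the sensor's optical axis with the plane, and the
    # lower side otherwise.  Which of v_min/v_max is the 'upper' row
    # depends on the sensor orientation; v_min is assumed to be upper here.
    edge_v = v_min if tip_v > center_v else v_max
    edge_depth = float(np.median(depth[edge_v, h_min:h_max + 1]))
    # Assumption: redo the sensor-to-plane conversion with the tip's pixel
    # position but the depth sampled at the rectangle edge, which still
    # lies close to the aerial projection plane.
    return to_projection_plane(tip_v, tip_h, edge_depth, calib)
```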
  • FIG. 4 is a flowchart showing the operation of the operation input device according to the embodiment of the present invention.
  • In the following description, FIGS. 1 to 3 are referred to as appropriate.
  • In the present embodiment, the operation input method is implemented by operating the operation input device 100. Therefore, the description of the operation input method in the present embodiment is replaced with the following description of the operation of the operation input device 100.
  • It is assumed that the sensor device 30 continuously outputs sensor data at set intervals; each time sensor data is output, the control device 40 receives it and executes the following processing.
  • As shown in FIG. 4, first, in the control device 40, the operation input specifying unit 41 extracts the most distal (sensor device 30 side) portion of the object 50 from the image data and the depths included in the sensor data (step A1).
  • Next, the operation input specifying unit 41 specifies the position in the three-dimensional space of the portion extracted in step A1 and, from the specified position, detects the position of the most distal portion of the object 50 on the aerial projection plane 21 (step A2).
  • Specifically, the operation input specifying unit 41 specifies, from the sensor data, the position of the extracted portion in the three-dimensional space, that is, the coordinate on the X axis, the coordinate on the Y axis, and the depth. The operation input specifying unit 41 then substitutes the specified position into the conversion formula obtained from the positional relationship, and detects the position of the most distal portion of the object 50 in the vertical direction of the aerial projection plane 21 and its position in the horizontal direction of the aerial projection plane 21.
  • Next, the figure setting unit 42 determines, from the sensor data, whether the Z coordinate (depth) of the most distal portion of the object 50 is equal to or greater than the threshold (step A3).
  • If the result of the determination in step A3 is that the Z coordinate is equal to or greater than the threshold, the figure setting unit 42 notifies the operation input specifying unit 41 of this, and the operation input specifying unit 41 specifies the user's operation input based on the position detected in step A2 (step A4).
  • If, on the other hand, the result of the determination in step A3 is that the Z coordinate is not equal to or greater than the threshold (that is, it is smaller than the threshold), the most distal portion of the object is located on the sensor device 30 side of the aerial projection plane 21. Therefore, as shown in FIG. 3, the figure setting unit 42 sets, in the sensor data, a rectangle surrounding the most distal portion of the object 50 (step A5).
  • Next, the depth correction unit 43 selects one side of the set rectangle according to the position of the object 50 and, using the depth along the selected side, corrects the position on the aerial projection plane 21 of the most distal portion of the object 50 detected in step A2 (step A6).
  • When step A6 has been executed, the depth correction unit 43 notifies the operation input specifying unit 41 that the position has been corrected.
  • The operation input specifying unit 41 then specifies the user's operation input based on the corrected position (step A4). Steps A1 to A4 above are repeated until the operation input device 100 is stopped. A sketch tying these steps together follows below.
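Tying the sketches above together, one pass over a single sensor frame (steps A1 to A6 of FIG. 4) might look like the following; all identifiers are the hypothetical ones introduced in the earlier sketches, not names from the patent.

```python
def process_frame(sensor: "SensorData", threshold_z, calib, center_v) -> tuple[float, float]:
    """Return the (x, y) position on the aerial projection plane used to
    identify the operation input for one sensor frame."""
    tip_v, tip_h, tip_depth = extract_tip(sensor)                    # step A1
    x, y = to_projection_plane(tip_v, tip_h, tip_depth, calib)       # step A2
    if behind_plane(tip_depth, tip_v, threshold_z):                  # step A3
        rect = bounding_rectangle(sensor.depth, threshold_z)         # step A5
        x, y = correct_position(sensor.depth, rect,
                                tip_v, tip_h, center_v, calib)       # step A6
    return x, y  # the operation input is then identified from this position (step A4)
```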
  • As described above, in the present embodiment, when an object 50 such as the user's fingertip ends up behind the aerial projection plane 21, the position of the object 50 on the aerial projection plane 21 is corrected. Therefore, according to the present embodiment, even if the user performs an erroneous operation when touching the operation screen on the aerial projection plane 21 to perform an input operation, a decrease in the detection accuracy of the touch position is suppressed.
  • The program in the present embodiment may be any program that causes a computer to execute steps A1 to A6 shown in FIG. 4.
  • By installing this program in a computer and executing it, the control device 40 of the operation input device 100 in the present embodiment can be realized.
  • In this case, the processor of the computer performs processing by functioning as the operation input specifying unit 41, the figure setting unit 42, and the depth correction unit 43.
  • The program in the present embodiment may also be executed by a computer system constructed from a plurality of computers; in this case, each computer may function as any one of the operation input specifying unit 41, the figure setting unit 42, and the depth correction unit 43.
  • FIG. 5 is a block diagram illustrating an example of a computer that implements the control device for the operation input device according to the embodiment of the present invention.
  • The computer 110 includes a CPU (Central Processing Unit) 111, a main memory 112, a storage device 113, an input interface 114, a display controller 115, a data reader/writer 116, and a communication interface 117. These units are connected to each other via a bus 121 so that data communication is possible.
  • The computer 110 may include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array) in addition to, or instead of, the CPU 111.
  • The CPU 111 performs various operations by loading the program (code) of the present embodiment stored in the storage device 113 into the main memory 112 and executing it in a predetermined order.
  • The main memory 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory).
  • The program in the present embodiment is provided in a state of being stored in a computer-readable recording medium 120. Note that the program in the present embodiment may also be distributed over the Internet connected via the communication interface 117.
  • Specific examples of the storage device 113 include a hard disk drive and a semiconductor storage device such as a flash memory.
  • The input interface 114 mediates data transmission between the CPU 111 and an input device 118 such as a keyboard and a mouse.
  • The display controller 115 is connected to the display device 119 and controls display on the display device 119.
  • The data reader/writer 116 mediates data transmission between the CPU 111 and the recording medium 120, reads the program from the recording medium 120, and writes processing results of the computer 110 to the recording medium 120.
  • The communication interface 117 mediates data transmission between the CPU 111 and other computers.
  • Specific examples of the recording medium 120 include general-purpose semiconductor storage devices such as CF (Compact Flash (registered trademark)) and SD (Secure Digital), magnetic recording media such as a flexible disk, and optical recording media such as a CD-ROM (Compact Disk Read Only Memory).
  • The control device 40 in the present embodiment can be realized not only by using a computer in which the program is installed but also by using hardware corresponding to each unit. Furthermore, a part of the control device 40 may be realized by the program and the remaining part by hardware.
  • An operation input device including: a display device that displays an operation screen; an optical plate that projects the operation screen into the air to generate an aerial projection plane; a sensor device for detecting the position, in a three-dimensional space, of an object that contacts the aerial projection plane; and a control device that identifies an operation input performed on the operation screen, wherein the sensor device outputs sensor data including information for identifying the two-dimensional coordinates of the object in a sensing area and the depth from the sensor device to the object, and the control device detects the position of the object on the aerial projection plane from the sensor data and, when it detects from the sensor data that a part of the object is located on the sensor device side of the aerial projection plane, sets, in the sensor data, a figure surrounding that part of the object and corrects the position of the object on the aerial projection plane using the depth at the outer edge of the set figure.
  • (Appendix 4) The operation input device according to any one of Appendices 1 to 3, wherein it is determined that a part of the object is located on the sensor device side of the aerial projection plane when, in the sensor data, the depth of the portion of the object closest to the sensor device is smaller than a threshold.
  • The operation input device according to any one of Appendices 1 to 4, wherein the sensor device is a depth sensor.
  • An operation input method using: a display device that displays an operation screen; an optical plate that projects the operation screen into the air to generate an aerial projection plane; a sensor device for detecting the position, in a three-dimensional space, of an object that contacts the aerial projection plane; and a computer that specifies an operation input performed on the operation screen, the sensor device outputting sensor data including information for specifying the two-dimensional coordinates of the object in a sensing area and the depth from the sensor device to the object, the method including: (a) detecting, by the computer, the position of the object on the aerial projection plane from the sensor data; (b) setting, by the computer, in the sensor data, a figure surrounding a part of the object when it is detected from the sensor data that the part of the object is located on the sensor device side of the aerial projection plane; and (c) correcting, by the computer, the position of the object on the aerial projection plane using the depth at the outer edge of the set figure.
  • A computer-readable recording medium recording a program for an operation input device that includes: a display device that displays an operation screen; an optical plate that projects the operation screen into the air to generate an aerial projection plane; a sensor device for detecting the position, in a three-dimensional space, of an object that contacts the aerial projection plane; and a computer that specifies an operation input performed on the operation screen, the sensor device outputting sensor data including information for specifying the two-dimensional coordinates of the object in a sensing area and the depth from the sensor device to the object, the program including instructions that cause the computer to execute the steps of detecting the position of the object on the aerial projection plane from the sensor data, setting, in the sensor data, a figure surrounding a part of the object when it is detected that the part of the object is located on the sensor device side of the aerial projection plane, and correcting the position of the object on the aerial projection plane using the depth at the outer edge of the set figure.
  • (Appendix 15) The computer-readable recording medium according to any one of Appendices 11 to 14, wherein the sensor device is a depth sensor.
  • As described above, according to the present invention, when a user performs an input operation by touching a screen displayed in the air, a decrease in the detection accuracy of the touch position due to an erroneous operation of the user can be suppressed.
  • The present invention is useful in various devices that perform input on an aerial projection plane.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)

Abstract

An operation input device 100 comprising a display device 10 for displaying an operation screen, an optical plate 20 for projecting the operation screen in the air to generate a midair projected surface 21, a sensor device 30, and a control device for identifying an operation input. The sensor device 30 outputs sensor data that includes information for identifying the two-dimensional coordinates of an object in a sensing area, and the depth to the object 50. From the sensor data the control device 40 detects the position of the object 50 on the midair projected surface 21, and when a portion of the object is positioned on the sensor-device side of the midair projected surface 21 the control device sets in the sensor data a figure surrounding the portion of the object 50, and uses the depth of the outer edge of the figure to correct the position of the object 50 on the midair projected surface 21.

Description

Operation input device, operation input method, and computer-readable recording medium

The present invention relates to an operation input device and an operation input method that enable an input operation by touching a screen displayed in the air, and further relates to a computer-readable recording medium on which a program for realizing these is recorded.

In recent years, an operation device has been proposed in which an operation screen is projected into the air and a user can perform an operation input by touching the screen projected in the air (hereinafter referred to as the "aerial projection plane") (see, for example, Patent Document 1). The operation device proposed in Patent Document 1 includes a display device, an image imaging plate that forms an image of the screen of the display device in the air, a camera, a distance sensor, and a control unit.

Of these, the image imaging plate has a function of collecting the light emitted from an image at a specific position at a position at the same distance on the opposite side as viewed from the image imaging plate, thereby forming the same image (see, for example, Patent Document 2). For this reason, when the light emitted from the screen displayed on the display device passes through the image imaging plate, the screen is projected into the air.

In the operation device disclosed in Patent Document 1, the camera captures the aerial projection plane and the user's finger and inputs the captured image to the control unit. The distance sensor measures the distance from the sensor to the user's fingertip and inputs the measured distance to the control unit.

The control unit first calculates the coordinates of the user's finger on the aerial projection plane by substituting the position, on the image, of the user's fingertip shown in the captured image and the distance measured by the distance sensor into a conversion formula. The conversion formula used at this time is determined in advance from the position coordinates of the aerial projection plane and camera information (position, angle of view, focal length, etc.).

Subsequently, the control unit determines the overlap between the user's finger and an operation icon displayed on the aerial projection plane based on the calculated coordinates, and specifies the input operation by the user based on the determination result. With this configuration, the user can perform an input operation by touching the screen displayed in the air.
JP 2017-62709 A
JP 2012-14194 A
However, the operation device disclosed in Patent Document 1 has a problem in that, if the user's fingertip ends up positioned behind the operation screen in the air, the coordinates of the contact position of the user's finger on the aerial projection plane can no longer be detected accurately.

Specifically, when the user's fingertip is located behind the aerial projection plane, the distance measured by the distance sensor becomes shorter than the distance at the contact position intended by the user. As a result, if, for example, the distance sensor is installed so that the distance at the center position of the aerial projection plane is the shortest, the above conversion formula moves the coordinates of the user's finger on the aerial projection plane closer to the center of the screen than they actually are.

An example of an object of the present invention is to solve the above problem and to provide an operation input device, an operation input method, and a computer-readable recording medium that can suppress a decrease in the detection accuracy of the touch position due to an erroneous operation of the user when the user performs an input operation by touching a screen displayed in the air.
To achieve the above object, an operation input device according to one aspect of the present invention includes:
a display device that displays an operation screen; an optical plate that projects the operation screen into the air to generate an aerial projection plane; a sensor device for detecting the position, in a three-dimensional space, of an object that contacts the aerial projection plane; and a control device that identifies an operation input performed on the operation screen,
wherein the sensor device outputs sensor data including information for identifying the two-dimensional coordinates of the object in a sensing area and the depth from the sensor device to the object, and
the control device detects the position of the object on the aerial projection plane from the sensor data and, when it detects from the sensor data that a part of the object is located on the sensor device side of the aerial projection plane, sets, in the sensor data, a figure surrounding that part of the object and corrects the position of the object on the aerial projection plane using the depth at the outer edge of the set figure.
To achieve the above object, an operation input method according to one aspect of the present invention uses:
a display device that displays an operation screen; an optical plate that projects the operation screen into the air to generate an aerial projection plane; a sensor device for detecting the position, in a three-dimensional space, of an object that contacts the aerial projection plane; and a computer that specifies an operation input performed on the operation screen, the sensor device outputting sensor data including information for specifying the two-dimensional coordinates of the object in a sensing area and the depth from the sensor device to the object, the method including:
(a) detecting, by the computer, the position of the object on the aerial projection plane from the sensor data;
(b) setting, by the computer, in the sensor data, a figure surrounding a part of the object when it is detected from the sensor data that the part of the object is located on the sensor device side of the aerial projection plane; and
(c) correcting, by the computer, the position of the object on the aerial projection plane using the depth at the outer edge of the set figure.
Furthermore, to achieve the above object, a computer-readable recording medium according to one aspect of the present invention records a program for an operation input device that includes: a display device that displays an operation screen; an optical plate that projects the operation screen into the air to generate an aerial projection plane; a sensor device for detecting the position, in a three-dimensional space, of an object that contacts the aerial projection plane; and a computer that specifies an operation input performed on the operation screen, the sensor device outputting sensor data including information for specifying the two-dimensional coordinates of the object in a sensing area and the depth from the sensor device to the object, the program including instructions that cause the computer to execute:
(a) detecting the position of the object on the aerial projection plane from the sensor data;
(b) setting, in the sensor data, a figure surrounding a part of the object when it is detected from the sensor data that the part of the object is located on the sensor device side of the aerial projection plane; and
(c) correcting the position of the object on the aerial projection plane using the depth at the outer edge of the set figure.
As described above, according to the present invention, when a user performs an input operation by touching a screen displayed in the air, a decrease in the detection accuracy of the touch position due to an erroneous operation by the user can be suppressed.
FIG. 1 is a configuration diagram showing the configuration of the operation input device according to the embodiment of the present invention.
FIG. 2 is a diagram illustrating the function of the sensor device used in the operation input device according to the embodiment of the present invention.
FIG. 3 is a diagram illustrating an example of a sensing result of the sensor device provided in the operation input device according to the embodiment of the present invention.
FIG. 4 is a flowchart showing the operation of the operation input device according to the embodiment of the present invention.
FIG. 5 is a block diagram illustrating an example of a computer that implements the control device of the operation input device according to the embodiment of the present invention.
(Embodiment)
Hereinafter, an operation input device, an operation input method, and a program according to an embodiment of the present invention will be described with reference to FIGS. 1 to 5.
[Device configuration]
First, the configuration of the operation input device in the present embodiment will be described. FIG. 1 is a configuration diagram showing the configuration of the operation input device according to the embodiment of the present invention.
The operation input device 100 in the present embodiment shown in FIG. 1 is a device that enables an input operation by touching a screen displayed in the air. As shown in FIG. 1, the operation input device 100 includes a display device 10, an optical plate 20, a sensor device 30, and a control device 40.

The display device 10 displays an operation screen for input. The optical plate 20 projects the operation screen into the air and generates an aerial projection plane 21. The sensor device 30 is a device for detecting the position, in a three-dimensional space, of an object 50 that contacts the aerial projection plane 21. In the example of FIG. 1, the object 50 is a finger of the user performing an operation input.

The sensor device 30 is disposed on the back side of the aerial projection plane 21. The sensor device 30 outputs sensor data including information for specifying the two-dimensional coordinates of the object 50 in the sensing area and the depth from the sensor device 30 to the object 50.

The control device 40 includes an operation input specifying unit 41, a figure setting unit 42, and a depth correction unit 43. The operation input specifying unit 41 detects the position of the object 50 on the aerial projection plane 21 from the sensor data output by the sensor device 30. The operation input specifying unit 41 then specifies the operation input performed on the operation screen according to the detected position of the object 50.

When the figure setting unit 42 detects from the sensor data that a part of the object 50 is located on the sensor device 30 side of the aerial projection plane 21, it sets, in the sensor data, a figure surrounding that part of the object. The depth correction unit 43 corrects the position of the object 50 on the aerial projection plane 21 using the depth at the outer edge of the figure set by the figure setting unit 42.

Thus, in the present embodiment, for example, when the user's fingertip ends up behind the aerial projection plane, the position of the finger is corrected. Therefore, according to the present embodiment, when the user performs an input operation by touching the operation screen displayed in the air, a situation in which the detection accuracy of the touch position is degraded by an erroneous operation of the user is avoided.
Next, the configuration and functions of the operation input device 100 in the present embodiment will be described in more detail with reference to FIGS. 2 and 3. FIG. 2 is a diagram illustrating the function of the sensor device used in the operation input device according to the embodiment of the present invention. FIG. 3 is a diagram illustrating an example of a sensing result of the sensor device provided in the operation input device according to the embodiment of the present invention.

First, in the present embodiment, the display device 10 is a liquid crystal display device or the like. As the optical plate 20, the image imaging plate disclosed in Patent Document 2 described above is used. The optical plate 20 has a function of collecting the light emitted from the image displayed on the screen of the display device 10 at a position at the same distance on the opposite side as viewed from the image imaging plate, thereby forming the same image.

In the present embodiment, the sensor device 30 is a depth sensor. When sensing is performed, the depth sensor outputs, as sensor data, the image data of the image obtained by the sensing and the depth added to each pixel of the image. In the present embodiment, a device composed of a camera and a distance sensor may also be used as the sensor device 30.

Based on the image data included in the sensor data, the two-dimensional coordinates of the object 50 in the sensing area can be specified. Specifically, as shown in FIG. 2, the image data makes it possible to specify the position of the object 50 in the vertical direction (V-axis direction) and the horizontal direction (H-axis direction) of the image. Based on the depth included in the sensor data, the distance between the object 50 and the sensor device 30 in the Z-axis direction can be specified. The Z axis is an axis along the normal of the sensing surface of the sensor device 30.

In the control device 40, in the present embodiment, when the operation input specifying unit 41 receives sensor data, it specifies, from the image data and the depth added to each pixel of the image, the position in the three-dimensional space of the most distal (sensor device 30 side) portion of the object 50. The operation input specifying unit 41 then converts the specified position into a position on the aerial projection plane 21.

Specifically, the operation input specifying unit 41 first extracts the most distal portion of the object 50. The operation input specifying unit 41 then specifies, from the sensor data, the position of the extracted portion in the three-dimensional space, that is, the coordinate on the X axis, the coordinate on the Y axis, and the depth.

As shown in FIG. 2, the positional relationship between the vertical direction (Y-axis direction) and the horizontal direction (X-axis direction) of the aerial projection plane 21 and the V-axis, H-axis, and Z-axis directions of the sensor device 30 is determined in advance. Accordingly, the operation input specifying unit 41 substitutes the position of the most distal portion of the object 50 in the three-dimensional space into the conversion formula obtained from this positional relationship, thereby detecting the position of the object 50 in the vertical direction of the aerial projection plane 21 and its position in the horizontal direction of the aerial projection plane 21.

When a part of the object 50 is located on the sensor device 30 side of the aerial projection plane 21 (the state indicated by the broken line in FIG. 2), the figure setting unit 42 detects this from the sensor data. Specifically, the figure setting unit 42 determines, from the sensor data, whether the Z coordinate (depth) of the most distal portion of the object 50 is equal to or greater than a threshold. If it is not (that is, it is smaller than the threshold), the figure setting unit 42 determines that a part of the object 50 is located on the sensor device 30 side of the aerial projection plane 21.

The threshold is set according to the position of the object 50. For example, in FIG. 2, the threshold when the object 50 is above the aerial projection plane 21 is smaller than the threshold when the object 50 is below the aerial projection plane 21.

As shown in FIG. 3, in the present embodiment, the figure setting unit 42 sets, in the sensor data, a rectangle surrounding the most distal portion of the object 50. A rectangle is set in this way because, although the X-axis direction of the aerial projection plane 21 matches the H-axis direction of the sensor data, the Y-axis direction of the aerial projection plane 21 does not match the V-axis direction of the sensor data, so that when a part of the object 50 is positioned on the sensor device 30 side of the aerial projection plane 21, an error occurs in the position in the Y-axis direction.

In the present embodiment, the depth correction unit 43 uses the depth along one of the two sides of the set rectangle on the V-axis direction side to correct the position, detected by the operation input specifying unit 41, of the object 50 in the vertical direction of the aerial projection plane 21.

Which of the two sides on the V-axis direction side is used is determined according to the position of the object 50. For example, when the object 50 is below, in the vertical direction, the intersection of the normal passing through the center of the sensing surface of the sensor device 30 with the aerial projection plane 21, the correction is performed using the depth of the upper side in the V-axis direction. Conversely, when the object 50 is above that intersection in the vertical direction, the correction is performed using the depth of the lower side in the V-axis direction.

Furthermore, depending on the positional relationship between the aerial projection plane 21 and the sensor device 30, a side on the H-axis direction side may be used, and a figure other than a rectangle may be set. Also, depending on this positional relationship, only the position of the object 50 in the horizontal direction of the aerial projection plane 21 may be corrected, or the positions in both the vertical and horizontal directions may be corrected.
[Device operation]
Next, the operation of the operation input device 100 in the present embodiment will be described with reference to FIG. 4. FIG. 4 is a flowchart showing the operation of the operation input device according to the embodiment of the present invention. In the following description, FIGS. 1 to 3 are referred to as appropriate. In the present embodiment, the operation input method is implemented by operating the operation input device 100; therefore, the description of the operation input method in the present embodiment is replaced with the following description of the operation of the operation input device 100.
 まず、センサ装置30は、設定間隔をおいて、センサデータを連続して出力しているとする。制御装置40は、センサデータが出力されてくると、その度に、これを受け取り、以下の処理を実行する。 First, it is assumed that the sensor device 30 continuously outputs sensor data at set intervals. Whenever sensor data is output, the control device 40 receives the sensor data and executes the following processing.
 図4に示すように、最初に、制御装置40において、操作入力特定部41は、センサデータに含まれる画像データと各深度とから、物体50の最も先端側(センサ装置30側)の部分を抽出する(ステップA1)。 As shown in FIG. 4, first, in the control device 40, the operation input specifying unit 41 determines the most distal side (sensor device 30 side) portion of the object 50 from the image data included in the sensor data and each depth. Extract (step A1).
 次に、操作入力特定部41は、ステップA1で特定した部分の3次元空間での位置を特定し、特定した位置から、物体50の最も先端側の部分の空中投影面21上の位置を検出する(ステップA2)。 Next, the operation input specifying unit 41 specifies the position of the part specified in step A1 in the three-dimensional space, and detects the position on the aerial projection plane 21 of the most distal part of the object 50 from the specified position. (Step A2).
 具体的には、操作入力特定部41は、センサデータから、抽出した部分の3次元空間での位置、即ち、X軸上の座標、Y軸上の座標、及び深度を特定する。そして、操作入力特定部41は、位置関係から求められる換算式に、特定した3次元空間での位置を代入し、物体50の最も先端側の部分の、空中投影面21の垂直方向における位置と、空中投影面21の水平方向における位置とを検出する。 Specifically, the operation input specifying unit 41 specifies the position of the extracted portion in the three-dimensional space, that is, the coordinates on the X axis, the coordinates on the Y axis, and the depth from the sensor data. Then, the operation input specifying unit 41 substitutes the specified position in the three-dimensional space for the conversion formula obtained from the positional relationship, and the position of the most distal portion of the object 50 in the vertical direction of the aerial projection plane 21 The position of the aerial projection plane 21 in the horizontal direction is detected.
 次に、図形設定部42は、センサデータから、物体50の最も先端側の部分のZ座標(深度)が閾値以上であるかどうかを判定する(ステップA3)。 Next, the graphic setting unit 42 determines whether or not the Z coordinate (depth) of the most distal portion of the object 50 is greater than or equal to a threshold value from the sensor data (step A3).
 ステップA3の判定の結果、Z座標が閾値以上である場合は、図形設定部42は、そのことを操作入力特定部41に通知する。これにより、操作入力特定部41は、ステップA2で検出した位置に基づいて、ユーザの操作入力を特定する(ステップA4)。 If the result of determination in step A3 is that the Z coordinate is greater than or equal to the threshold value, the graphic setting unit 42 notifies the operation input specifying unit 41 of this fact. Thereby, the operation input specification part 41 specifies a user's operation input based on the position detected by step A2 (step A4).
 一方、ステップA3の判定の結果、Z座標が閾値以上でない(閾値より小さい)場合は、物体の最も先端側の部分は、空中投影面21のセンサ装置30側に位置している。従って、図3に示すように、図形設定部42は、センサデータにおいて物体50の最も先端側の部分を囲む矩形を設定する(ステップA5)。 On the other hand, if the result of determination in step A3 is that the Z coordinate is not greater than or equal to the threshold value (smaller than the threshold value), the most distal portion of the object is located on the sensor device 30 side of the aerial projection plane 21. Therefore, as shown in FIG. 3, the graphic setting unit 42 sets a rectangle that surrounds the most distal portion of the object 50 in the sensor data (step A5).
 次に、深度補正部43は、物体50の位置に応じて、設定された矩形の一辺を選択し、選択した一辺における深度を用いて、ステップA2で検出された、物体50の最も先端側の部分の空中投影面21上の位置を補正する(ステップA6)。 Next, the depth correction unit 43 selects one side of the set rectangle according to the position of the object 50, and uses the depth along the selected side to correct the position, detected in step A2, of the most distal portion of the object 50 on the aerial projection plane 21 (step A6).
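One possible reading of steps A3, A5, and A6 in code, reusing the helpers from the previous sketch, is shown below. The way the portion of the object lying past the plane is segmented, the choice of the top edge of the rectangle, and the use of the median depth along that edge are assumptions made for illustration; the embodiment only states that one side of the rectangle is selected according to the position of the object 50 and that the depth along that side is used for the correction.

```python
import numpy as np

# Reuses extract_tip() and to_projection_plane() from the earlier sketch.


def locate_on_plane(depth_map, object_mask, threshold, conversion):
    u, v, z = extract_tip(depth_map, object_mask)        # step A1
    if z >= threshold:                                   # step A3: tip has not passed the plane
        return to_projection_plane(u, v, z, conversion)  # use the step A2 position as-is

    # Step A5: bounding rectangle around the part of the object 50 that lies on
    # the sensor device 30 side of the aerial projection plane 21.
    past_plane = object_mask & (depth_map < threshold)
    vs, us = np.nonzero(past_plane)
    top, left, right = vs.min(), us.min(), us.max()

    # Step A6: choose one side of the rectangle according to the position of the
    # object (the top edge is used here purely as an example) and substitute the
    # depth along that side for the tip depth before converting to plane coordinates.
    edge_depth = float(np.median(depth_map[top, left:right + 1]))
    return to_projection_plane(u, v, edge_depth, conversion)
```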
 ステップA6が実行されると、深度補正部43は、位置が補正されたことを操作入力特定部41に通知する。これにより、操作入力特定部41は、補正後の位置に基づいて、ユーザの操作入力を特定する(ステップA4)。 When step A6 is executed, the depth correction unit 43 notifies the operation input specifying unit 41 that the position has been corrected. Thereby, the operation input specifying unit 41 specifies the user's operation input based on the corrected position (step A4).
 上述のステップA1からA4は、操作入力装置100の起動が停止されるまで、繰り返し実行される。 The above steps A1 to A4 are repeatedly executed until the operation input device 100 is stopped.
 以上のように本実施の形態では、ユーザの指先といった物体50が、空中投影面21よりも奥に位置してしまった場合は、物体50の空中投影面21上の位置が補正される。従って、本実施の形態によれば、ユーザが空中投影面21上の操作画面をタッチして入力操作を行う場合において、ユーザが誤操作を行っても、タッチ位置の検出精度の低下が抑制される。 As described above, in the present embodiment, when an object 50 such as the user's fingertip ends up positioned beyond the aerial projection plane 21, the position of the object 50 on the aerial projection plane 21 is corrected. Therefore, according to the present embodiment, when the user performs an input operation by touching the operation screen on the aerial projection plane 21, a decrease in the detection accuracy of the touch position is suppressed even if the user performs an erroneous operation.
[プログラム]
 本実施の形態におけるプログラムは、コンピュータに、図4に示すステップA1~A6を実行させるプログラムであれば良い。このプログラムをコンピュータにインストールし、実行することによって、本実施の形態における操作入力装置100の制御装置40を実現することができる。この場合、コンピュータのプロセッサは、操作入力特定部41、図形設定部42、及び深度補正部43として機能し、処理を行なう。
[Program]
The program in the present embodiment may be any program that causes a computer to execute steps A1 to A6 shown in FIG. 4. By installing this program on a computer and executing it, the control device 40 of the operation input device 100 in the present embodiment can be realized. In this case, the processor of the computer functions as the operation input specifying unit 41, the graphic setting unit 42, and the depth correction unit 43, and performs the processing.
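As a rough sketch of how such a program might be organized, the class below ties together the helpers from the earlier sketches. The class name, the constructor parameters, and the specify_input() placeholder are invented for this illustration and are not taken from the embodiment.

```python
# Illustrative only: the embodiment states that the processor functions as the
# operation input specifying unit 41, the graphic setting unit 42, and the depth
# correction unit 43; the names below are invented for this sketch and reuse
# extract_tip(), to_projection_plane(), and locate_on_plane() from above.


class ControlDevice:
    def __init__(self, threshold, conversion, object_depth_limit=1.5):
        self.threshold = threshold                    # plane-crossing threshold (step A3)
        self.conversion = conversion                  # sensor-space to plane-space map (step A2)
        self.object_depth_limit = object_depth_limit  # crude segmentation cut-off (assumption)

    def process(self, frame):
        # Steps A1-A6 for one sensor frame, then step A4: turn the (possibly
        # corrected) plane position into an operation input.
        mask = frame.depth < self.object_depth_limit
        position = locate_on_plane(frame.depth, mask, self.threshold, self.conversion)
        return self.specify_input(position)

    def specify_input(self, position):
        # Placeholder for the operation input specifying unit 41: map the vertical /
        # horizontal position on the aerial projection plane 21 to a UI element.
        return {"plane_position": position}
```

Under the same assumptions, calling run_control_loop(sensor, ControlDevice(threshold, conversion), keep_running) from the first sketch would reproduce the repeated execution of steps A1 to A4 described above.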
 また、本実施の形態におけるプログラムは、複数のコンピュータによって構築されたコンピュータシステムによって実行されても良い。この場合は、例えば、各コンピュータが、それぞれ、操作入力特定部41、図形設定部42、及び深度補正部43のいずれかとして機能しても良い。 Further, the program in the present embodiment may be executed by a computer system constructed by a plurality of computers. In this case, for example, each computer may function as any one of the operation input specifying unit 41, the figure setting unit 42, and the depth correction unit 43, respectively.
 ここで、本実施の形態におけるプログラムを実行することによって、操作入力装置100の制御装置40を実現するコンピュータについて図5を用いて説明する。図5は、本発明の実施の形態における操作入力装置の制御装置を実現するコンピュータの一例を示すブロック図である。 Here, a computer that realizes the control device 40 of the operation input device 100 by executing the program according to the present embodiment will be described with reference to FIG. FIG. 5 is a block diagram illustrating an example of a computer that implements the control device for the operation input device according to the embodiment of the present invention.
 図5に示すように、コンピュータ110は、CPU(Central Processing Unit)111と、メインメモリ112と、記憶装置113と、入力インターフェイス114と、表示コントローラ115と、データリーダ/ライタ116と、通信インターフェイス117とを備える。これらの各部は、バス121を介して、互いにデータ通信可能に接続される。なお、コンピュータ110は、CPU111に加えて、又はCPU111に代えて、GPU(Graphics Processing Unit)、又はFPGA(Field-Programmable Gate Array)を備えていても良い。 As shown in FIG. 5, the computer 110 includes a CPU (Central Processing Unit) 111, a main memory 112, a storage device 113, an input interface 114, a display controller 115, a data reader/writer 116, and a communication interface 117. These units are connected to one another via a bus 121 so that they can exchange data. In addition to or instead of the CPU 111, the computer 110 may include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array).
 CPU111は、記憶装置113に格納された、本実施の形態におけるプログラム(コード)をメインメモリ112に展開し、これらを所定順序で実行することにより、各種の演算を実施する。メインメモリ112は、典型的には、DRAM(Dynamic Random Access Memory)等の揮発性の記憶装置である。また、本実施の形態におけるプログラムは、コンピュータ読み取り可能な記録媒体120に格納された状態で提供される。なお、本実施の形態におけるプログラムは、通信インターフェイス117を介して接続されたインターネット上で流通するものであっても良い。 The CPU 111 performs various operations by developing the program (code) in the present embodiment stored in the storage device 113 in the main memory 112 and executing them in a predetermined order. The main memory 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory). Further, the program in the present embodiment is provided in a state of being stored in a computer-readable recording medium 120. Note that the program in the present embodiment may be distributed on the Internet connected via the communication interface 117.
 また、記憶装置113の具体例としては、ハードディスクドライブの他、フラッシュメモリ等の半導体記憶装置が挙げられる。入力インターフェイス114は、CPU111と、キーボード及びマウスといった入力機器118との間のデータ伝送を仲介する。表示コントローラ115は、ディスプレイ装置119と接続され、ディスプレイ装置119での表示を制御する。 Further, specific examples of the storage device 113 include a hard disk drive and a semiconductor storage device such as a flash memory. The input interface 114 mediates data transmission between the CPU 111 and an input device 118 such as a keyboard and a mouse. The display controller 115 is connected to the display device 119 and controls display on the display device 119.
 データリーダ/ライタ116は、CPU111と記録媒体120との間のデータ伝送を仲介し、記録媒体120からのプログラムの読み出し、及びコンピュータ110における処理結果の記録媒体120への書き込みを実行する。通信インターフェイス117は、CPU111と、他のコンピュータとの間のデータ伝送を仲介する。 The data reader / writer 116 mediates data transmission between the CPU 111 and the recording medium 120, and reads a program from the recording medium 120 and writes a processing result in the computer 110 to the recording medium 120. The communication interface 117 mediates data transmission between the CPU 111 and another computer.
 また、記録媒体120の具体例としては、CF(Compact Flash(登録商標))及びSD(Secure Digital)等の汎用的な半導体記憶デバイス、フレキシブルディスク(Flexible Disk)等の磁気記録媒体、又はCD-ROM(Compact Disk Read Only Memory)などの光学記録媒体が挙げられる。 Specific examples of the recording medium 120 include general-purpose semiconductor storage devices such as CF (Compact Flash (registered trademark)) and SD (Secure Digital), magnetic recording media such as a flexible disk, and optical recording media such as a CD-ROM (Compact Disk Read Only Memory).
 なお、本実施の形態における制御装置40は、プログラムがインストールされたコンピュータではなく、各部に対応したハードウェアを用いることによっても実現可能である。更に、制御装置40は、一部がプログラムで実現され、残りの部分がハードウェアで実現されていてもよい。 Note that the control device 40 in the present embodiment can also be realized by using hardware corresponding to each unit, instead of a computer in which the program is installed. Furthermore, a part of the control device 40 may be realized by the program and the remaining part by hardware.
 上述した実施の形態の一部又は全部は、以下に記載する(付記1)~(付記15)によって表現することができるが、以下の記載に限定されるものではない。 Some or all of the above-described embodiments can be expressed by the following (Appendix 1) to (Appendix 15), but are not limited to the following description.
(付記1)
 操作画面を表示する表示装置と、前記操作画面を空中に投影して空中投影面を生成する光学プレートと、前記空中投影面に接触する物体の3次元空間での位置を検出するためのセンサ装置と、前記操作画面に対して行われた操作入力を特定する、制御装置と、を備え
 前記センサ装置は、センシングエリアにおける前記物体の2次元座標を特定するための情報と、当該センサ装置から前記物体までの深度とを含む、センサデータを出力し、
 前記制御装置は、
前記センサデータから、前記物体の前記空中投影面における位置を検出し、
更に、前記センサデータから、前記物体の一部分が、前記空中投影面の前記センサ装置側に位置していることを検出すると、前記センサデータにおいて、前記物体の一部分を囲む図形を設定し、そして、設定した前記図形の外縁における前記深度を用いて、前記物体の前記空中投影面における位置を補正する、
ことを特徴とする操作入力装置。
(Appendix 1)
An operation input device comprising: a display device that displays an operation screen; an optical plate that projects the operation screen into the air to generate an aerial projection plane; a sensor device for detecting the position, in a three-dimensional space, of an object that contacts the aerial projection plane; and a control device that specifies an operation input performed on the operation screen, wherein
the sensor device outputs sensor data including information for specifying two-dimensional coordinates of the object in a sensing area and a depth from the sensor device to the object, and
the control device
detects, from the sensor data, a position of the object on the aerial projection plane, and
further, upon detecting from the sensor data that a part of the object is located on the sensor device side of the aerial projection plane, sets, in the sensor data, a figure surrounding the part of the object, and corrects the position of the object on the aerial projection plane by using the depth at an outer edge of the set figure.
(付記2)
付記1に記載の操作入力装置であって、
 前記制御装置は、前記センサデータにおいて、前記物体の一部分を囲む矩形を設定し、前記矩形のいずれか一辺における深度を用いて、前記物体の、前記空中投影面の垂直方向における位置、及び前記空中投影面の水平方向における位置のうちの一方又は両方を補正する、
ことを特徴とする操作入力装置。
(Appendix 2)
The operation input device according to Appendix 1, wherein
the control device sets, in the sensor data, a rectangle surrounding the part of the object, and corrects, by using the depth along any one side of the rectangle, one or both of the position of the object in a vertical direction of the aerial projection plane and the position of the object in a horizontal direction of the aerial projection plane,
An operation input device characterized by that.
(付記3)
付記2に記載の操作入力装置であって、
 前記一辺は、前記空中投影面と前記センサ装置との位置関係に基づいて設定されている、
ことを特徴とする操作入力装置。
(Appendix 3)
The operation input device according to attachment 2, wherein
The one side is set based on a positional relationship between the aerial projection plane and the sensor device.
An operation input device characterized by that.
(付記4)
付記1~3のいずれかに記載の操作入力装置であって、
 前記制御装置は、前記センサデータにおける、前記物体の最も前記センサ装置側の部分の深度が閾値よりも小さい場合に、前記物体の一部分が、前記空中投影面の前記センサ装置側に位置していると判定する、
ことを特徴とする操作入力装置。
(Appendix 4)
The operation input device according to any one of appendices 1 to 3,
wherein the control device determines that a part of the object is located on the sensor device side of the aerial projection plane when, in the sensor data, the depth of the portion of the object closest to the sensor device is smaller than a threshold value,
An operation input device characterized by that.
(付記5)
付記1~4のいずれかに記載の操作入力装置であって、
前記センサ装置が、デプスセンサである、
ことを特徴とする操作入力装置。
(Appendix 5)
The operation input device according to any one of appendices 1 to 4,
The sensor device is a depth sensor;
An operation input device characterized by that.
(付記6)
 操作画面を表示する表示装置と、前記操作画面を空中に投影して空中投影面を生成する光学プレートと、前記空中投影面に接触する物体の3次元空間での位置を検出するためのセンサ装置と、前記操作画面に対して行われた操作入力を特定する、コンピュータと、を用い、前記センサ装置が、センシングエリアにおける前記物体の2次元座標を特定するための情報と、当該センサ装置から前記物体までの深度とを含む、センサデータを出力する、操作入力方法であって、
(a)前記コンピュータによって、前記センサデータから、前記物体の前記空中投影面における位置を検出するステップと、
(b)前記コンピュータによって、前記センサデータから、前記物体の一部分が、前記空中投影面の前記センサ装置側に位置していることを検出すると、前記センサデータにおいて、前記物体の一部分を囲む図形を設定する、ステップと、
(c)前記コンピュータによって、設定した前記図形の外縁における前記深度を用いて、前記物体の前記空中投影面における位置を補正する、ステップと、を有する、
ことを特徴とする操作入力方法。
(Appendix 6)
An operation input method that uses: a display device that displays an operation screen; an optical plate that projects the operation screen into the air to generate an aerial projection plane; a sensor device for detecting the position, in a three-dimensional space, of an object that contacts the aerial projection plane; and a computer that specifies an operation input performed on the operation screen, the sensor device outputting sensor data including information for specifying two-dimensional coordinates of the object in a sensing area and a depth from the sensor device to the object, the method comprising:
(a) a step of detecting, by the computer, a position of the object on the aerial projection plane from the sensor data;
(b) a step of setting, by the computer, in the sensor data, a figure surrounding a part of the object upon detecting, from the sensor data, that the part of the object is located on the sensor device side of the aerial projection plane; and
(c) a step of correcting, by the computer, the position of the object on the aerial projection plane by using the depth at an outer edge of the set figure.
(付記7)
付記6に記載の操作入力方法であって、
 前記(c)のステップにおいて、前記センサデータにおいて、前記物体の一部分を囲む矩形を設定し、前記矩形のいずれか一辺における深度を用いて、前記物体の、前記空中投影面の垂直方向における位置、及び前記空中投影面の水平方向における位置のうちの一方又は両方を補正する、
ことを特徴とする操作入力方法。
(Appendix 7)
The operation input method according to Appendix 6, wherein
in the step (c), a rectangle surrounding the part of the object is set in the sensor data, and one or both of the position of the object in a vertical direction of the aerial projection plane and the position of the object in a horizontal direction of the aerial projection plane are corrected by using the depth along any one side of the rectangle,
An operation input method characterized by that.
(付記8)
付記7に記載の操作入力方法であって、
 前記(c)のステップにおいて、前記一辺は、前記空中投影面と前記センサ装置との位置関係に基づいて設定されている、
ことを特徴とする操作入力方法。
(Appendix 8)
The operation input method according to appendix 7,
In the step (c), the one side is set based on a positional relationship between the aerial projection plane and the sensor device.
An operation input method characterized by that.
(付記9)
付記6~8のいずれかに記載の操作入力方法であって、
 前記(b)のステップにおいて、前記センサデータにおける、前記物体の最も前記センサ装置側の部分の深度が閾値よりも小さい場合に、前記物体の一部分が、前記空中投影面の前記センサ装置側に位置していると判定する、
ことを特徴とする操作入力方法。
(Appendix 9)
The operation input method according to any one of appendices 6 to 8,
wherein, in the step (b), it is determined that a part of the object is located on the sensor device side of the aerial projection plane when, in the sensor data, the depth of the portion of the object closest to the sensor device is smaller than a threshold value,
An operation input method characterized by that.
(付記10)
付記6~9のいずれかに記載の操作入力方法であって、
前記センサ装置が、デプスセンサである、
ことを特徴とする操作入力方法。
(Appendix 10)
The operation input method according to any one of appendices 6 to 9, wherein
The sensor device is a depth sensor;
An operation input method characterized by that.
(付記11)
 操作画面を表示する表示装置と、前記操作画面を空中に投影して空中投影面を生成する光学プレートと、前記空中投影面に接触する物体の3次元空間での位置を検出するためのセンサ装置と、前記操作画面に対して行われた操作入力を特定する、コンピュータと、を備え、前記センサ装置が、センシングエリアにおける前記物体の2次元座標を特定するための情報と、当該センサ装置から前記物体までの深度とを含む、センサデータを出力する、操作入力装置において、
前記コンピュータに、
(a)前記センサデータから、前記物体の前記空中投影面における位置を検出するステップと、
(b)前記センサデータから、前記物体の一部分が、前記空中投影面の前記センサ装置側に位置していることを検出すると、前記センサデータにおいて、前記物体の一部分を囲む図形を設定する、ステップと、
(c)設定した前記図形の外縁における前記深度を用いて、前記物体の前記空中投影面における位置を補正する、ステップと、
を実行させる命令を含む、プログラムを記録している、ことを特徴とするコンピュータ読み取り可能な記録媒体。
(Appendix 11)
A computer-readable recording medium for an operation input device that comprises: a display device that displays an operation screen; an optical plate that projects the operation screen into the air to generate an aerial projection plane; a sensor device for detecting the position, in a three-dimensional space, of an object that contacts the aerial projection plane; and a computer that specifies an operation input performed on the operation screen, the sensor device outputting sensor data including information for specifying two-dimensional coordinates of the object in a sensing area and a depth from the sensor device to the object,
the recording medium recording a program including instructions that cause the computer to execute:
(a) a step of detecting a position of the object on the aerial projection plane from the sensor data;
(b) a step of setting, in the sensor data, a figure surrounding a part of the object upon detecting, from the sensor data, that the part of the object is located on the sensor device side of the aerial projection plane; and
(c) a step of correcting the position of the object on the aerial projection plane by using the depth at an outer edge of the set figure.
(付記12)
付記11に記載のコンピュータ読み取り可能な記録媒体であって、
 前記(c)のステップにおいて、前記センサデータにおいて、前記物体の一部分を囲む矩形を設定し、前記矩形のいずれか一辺における深度を用いて、前記物体の、前記空中投影面の垂直方向における位置、及び前記空中投影面の水平方向における位置のうちの一方又は両方を補正する、
ことを特徴とするコンピュータ読み取り可能な記録媒体。
(Appendix 12)
A computer-readable recording medium according to appendix 11,
wherein, in the step (c), a rectangle surrounding the part of the object is set in the sensor data, and one or both of the position of the object in a vertical direction of the aerial projection plane and the position of the object in a horizontal direction of the aerial projection plane are corrected by using the depth along any one side of the rectangle,
A computer-readable recording medium.
(付記13)
付記12に記載のコンピュータ読み取り可能な記録媒体であって、
 前記(c)のステップにおいて、前記一辺は、前記空中投影面と前記センサ装置との位置関係に基づいて設定されている、
ことを特徴とするコンピュータ読み取り可能な記録媒体。
(Appendix 13)
A computer-readable recording medium according to appendix 12,
In the step (c), the one side is set based on a positional relationship between the aerial projection plane and the sensor device.
A computer-readable recording medium.
(付記14)
付記11~13のいずれかに記載のコンピュータ読み取り可能な記録媒体であって、
 前記(b)のステップにおいて、前記センサデータにおける、前記物体の最も前記センサ装置側の部分の深度が閾値よりも小さい場合に、前記物体の一部分が、前記空中投影面の前記センサ装置側に位置していると判定する、
ことを特徴とするコンピュータ読み取り可能な記録媒体。
(Appendix 14)
A computer-readable recording medium according to any one of appendices 11 to 13,
wherein, in the step (b), it is determined that a part of the object is located on the sensor device side of the aerial projection plane when, in the sensor data, the depth of the portion of the object closest to the sensor device is smaller than a threshold value,
A computer-readable recording medium.
(付記15)
付記11~14のいずれかに記載のコンピュータ読み取り可能な記録媒体であって、
前記センサ装置が、デプスセンサである、
ことを特徴とするコンピュータ読み取り可能な記録媒体。
(Appendix 15)
A computer-readable recording medium according to any one of appendices 11 to 14,
The sensor device is a depth sensor;
A computer-readable recording medium.
 以上、実施の形態を参照して本願発明を説明したが、本願発明は上記実施の形態に限定されるものではない。本願発明の構成や詳細には、本願発明のスコープ内で当業者が理解し得る様々な変更をすることができる。 The present invention has been described above with reference to the embodiments, but the present invention is not limited to the above embodiments. Various changes that can be understood by those skilled in the art can be made to the configuration and details of the present invention within the scope of the present invention.
 この出願は、2018年3月7日に出願された日本出願特願2018-41162を基礎とする優先権を主張し、その開示の全てをここに取り込む。 This application claims priority based on Japanese Patent Application No. 2018-41162 filed on Mar. 7, 2018, the entire disclosure of which is incorporated herein.
 以上のように、本発明によれば、ユーザが、空中に表示された画面をタッチして、入力操作を行う場合において、ユーザの誤操作によるタッチ位置の検出精度の低下を抑制することができる。本発明は、空中投影面での入力が行われる各種装置において有用である。 As described above, according to the present invention, when a user performs an input operation by touching a screen displayed in the air, it is possible to suppress a decrease in detection accuracy of a touch position due to a user's erroneous operation. The present invention is useful in various devices that perform input on an aerial projection plane.
 10 表示装置
 20 光学プレート
 21 空中投影面
 30 センサ装置
 40 制御装置
 41 操作入力特定部
 42 図形設定部
 43 深度補正部
 50 物体
 110 コンピュータ
 111 CPU
 112 メインメモリ
 113 記憶装置
 114 入力インターフェイス
 115 表示コントローラ
 116 データリーダ/ライタ
 117 通信インターフェイス
 118 入力機器
 119 ディスプレイ装置
 120 記録媒体
 121 バス
DESCRIPTION OF SYMBOLS
10 Display device
20 Optical plate
21 Aerial projection plane
30 Sensor device
40 Control device
41 Operation input specifying unit
42 Graphic setting unit
43 Depth correction unit
50 Object
110 Computer
111 CPU
112 Main memory
113 Storage device
114 Input interface
115 Display controller
116 Data reader/writer
117 Communication interface
118 Input device
119 Display device
120 Recording medium
121 Bus

Claims (15)

  1.  操作画面を表示する表示装置と、前記操作画面を空中に投影して空中投影面を生成する光学プレートと、前記空中投影面に接触する物体の3次元空間での位置を検出するためのセンサ装置と、前記操作画面に対して行われた操作入力を特定する、制御装置と、を備え
     前記センサ装置は、センシングエリアにおける前記物体の2次元座標を特定するための情報と、当該センサ装置から前記物体までの深度とを含む、センサデータを出力し、
     前記制御装置は、
    前記センサデータから、前記物体の前記空中投影面における位置を検出し、
    更に、前記センサデータから、前記物体の一部分が、前記空中投影面の前記センサ装置側に位置していることを検出すると、前記センサデータにおいて、前記物体の一部分を囲む図形を設定し、そして、設定した前記図形の外縁における前記深度を用いて、前記物体の前記空中投影面における位置を補正する、
    ことを特徴とする操作入力装置。
    An operation input device comprising: a display device that displays an operation screen; an optical plate that projects the operation screen into the air to generate an aerial projection plane; a sensor device for detecting the position, in a three-dimensional space, of an object that contacts the aerial projection plane; and a control device that specifies an operation input performed on the operation screen, wherein
    the sensor device outputs sensor data including information for specifying two-dimensional coordinates of the object in a sensing area and a depth from the sensor device to the object, and
    the control device
    detects, from the sensor data, a position of the object on the aerial projection plane, and
    further, upon detecting from the sensor data that a part of the object is located on the sensor device side of the aerial projection plane, sets, in the sensor data, a figure surrounding the part of the object, and corrects the position of the object on the aerial projection plane by using the depth at an outer edge of the set figure.
  2. 請求項1に記載の操作入力装置であって、
     前記制御装置は、前記センサデータにおいて、前記物体の一部分を囲む矩形を設定し、前記矩形のいずれか一辺における深度を用いて、前記物体の、前記空中投影面の垂直方向における位置、及び前記空中投影面の水平方向における位置のうちの一方又は両方を補正する、
    ことを特徴とする操作入力装置。
    The operation input device according to claim 1,
    wherein the control device sets, in the sensor data, a rectangle surrounding the part of the object, and corrects, by using the depth along any one side of the rectangle, one or both of the position of the object in a vertical direction of the aerial projection plane and the position of the object in a horizontal direction of the aerial projection plane,
    An operation input device characterized by that.
  3. 請求項2に記載の操作入力装置であって、
     前記一辺は、前記空中投影面と前記センサ装置との位置関係に基づいて設定されている、
    ことを特徴とする操作入力装置。
    The operation input device according to claim 2,
    The one side is set based on a positional relationship between the aerial projection plane and the sensor device.
    An operation input device characterized by that.
  4. 請求項1~3のいずれかに記載の操作入力装置であって、
     前記制御装置は、前記センサデータにおける、前記物体の最も前記センサ装置側の部分の深度が閾値よりも小さい場合に、前記物体の一部分が、前記空中投影面の前記センサ装置側に位置していると判定する、
    ことを特徴とする操作入力装置。
    The operation input device according to any one of claims 1 to 3,
    wherein the control device determines that a part of the object is located on the sensor device side of the aerial projection plane when, in the sensor data, the depth of the portion of the object closest to the sensor device is smaller than a threshold value,
    An operation input device characterized by that.
  5. 請求項1~4のいずれかに記載の操作入力装置であって、
    前記センサ装置が、デプスセンサである、
    ことを特徴とする操作入力装置。
    The operation input device according to any one of claims 1 to 4,
    The sensor device is a depth sensor;
    An operation input device characterized by that.
  6.  操作画面を表示する表示装置と、前記操作画面を空中に投影して空中投影面を生成する光学プレートと、前記空中投影面に接触する物体の3次元空間での位置を検出するためのセンサ装置と、前記操作画面に対して行われた操作入力を特定する、コンピュータと、を用い、前記センサ装置が、センシングエリアにおける前記物体の2次元座標を特定するための情報と、当該センサ装置から前記物体までの深度とを含む、センサデータを出力する、操作入力方法であって、
    (a)前記コンピュータによって、前記センサデータから、前記物体の前記空中投影面における位置を検出するステップと、
    (b)前記コンピュータによって、前記センサデータから、前記物体の一部分が、前記空中投影面の前記センサ装置側に位置していることを検出すると、前記センサデータにおいて、前記物体の一部分を囲む図形を設定する、ステップと、
    (c)前記コンピュータによって、設定した前記図形の外縁における前記深度を用いて、前記物体の前記空中投影面における位置を補正する、ステップと、を有する、
    ことを特徴とする操作入力方法。
    An operation input method that uses: a display device that displays an operation screen; an optical plate that projects the operation screen into the air to generate an aerial projection plane; a sensor device for detecting the position, in a three-dimensional space, of an object that contacts the aerial projection plane; and a computer that specifies an operation input performed on the operation screen, the sensor device outputting sensor data including information for specifying two-dimensional coordinates of the object in a sensing area and a depth from the sensor device to the object, the method comprising:
    (a) a step of detecting, by the computer, a position of the object on the aerial projection plane from the sensor data;
    (b) a step of setting, by the computer, in the sensor data, a figure surrounding a part of the object upon detecting, from the sensor data, that the part of the object is located on the sensor device side of the aerial projection plane; and
    (c) a step of correcting, by the computer, the position of the object on the aerial projection plane by using the depth at an outer edge of the set figure.
  7. 請求項6に記載の操作入力方法であって、
     前記(c)のステップにおいて、前記センサデータにおいて、前記物体の一部分を囲む矩形を設定し、前記矩形のいずれか一辺における深度を用いて、前記物体の、前記空中投影面の垂直方向における位置、及び前記空中投影面の水平方向における位置のうちの一方又は両方を補正する、
    ことを特徴とする操作入力方法。
    The operation input method according to claim 6,
    wherein, in the step (c), a rectangle surrounding the part of the object is set in the sensor data, and one or both of the position of the object in a vertical direction of the aerial projection plane and the position of the object in a horizontal direction of the aerial projection plane are corrected by using the depth along any one side of the rectangle,
    An operation input method characterized by that.
  8. 請求項7に記載の操作入力方法であって、
     前記(c)のステップにおいて、前記一辺は、前記空中投影面と前記センサ装置との位置関係に基づいて設定されている、
    ことを特徴とする操作入力方法。
    The operation input method according to claim 7,
    In the step (c), the one side is set based on a positional relationship between the aerial projection plane and the sensor device.
    An operation input method characterized by that.
  9. 請求項6~8のいずれかに記載の操作入力方法であって、
     前記(b)のステップにおいて、前記センサデータにおける、前記物体の最も前記センサ装置側の部分の深度が閾値よりも小さい場合に、前記物体の一部分が、前記空中投影面の前記センサ装置側に位置していると判定する、
    ことを特徴とする操作入力方法。
    The operation input method according to any one of claims 6 to 8,
    wherein, in the step (b), it is determined that a part of the object is located on the sensor device side of the aerial projection plane when, in the sensor data, the depth of the portion of the object closest to the sensor device is smaller than a threshold value,
    An operation input method characterized by that.
  10. 請求項6~9のいずれかに記載の操作入力方法であって、
    前記センサ装置が、デプスセンサである、
    ことを特徴とする操作入力方法。
    The operation input method according to any one of claims 6 to 9,
    The sensor device is a depth sensor;
    An operation input method characterized by that.
  11.  操作画面を表示する表示装置と、前記操作画面を空中に投影して空中投影面を生成する光学プレートと、前記空中投影面に接触する物体の3次元空間での位置を検出するためのセンサ装置と、前記操作画面に対して行われた操作入力を特定する、コンピュータと、を備え、前記センサ装置が、センシングエリアにおける前記物体の2次元座標を特定するための情報と、当該センサ装置から前記物体までの深度とを含む、センサデータを出力する、操作入力装置において、
    前記コンピュータに、
    (a)前記センサデータから、前記物体の前記空中投影面における位置を検出するステップと、
    (b)前記センサデータから、前記物体の一部分が、前記空中投影面の前記センサ装置側に位置していることを検出すると、前記センサデータにおいて、前記物体の一部分を囲む図形を設定する、ステップと、
    (c)設定した前記図形の外縁における前記深度を用いて、前記物体の前記空中投影面における位置を補正する、ステップと、
    を実行させる命令を含む、プログラムを記録している、
    ことを特徴とするコンピュータ読み取り可能な記録媒体。
    A computer-readable recording medium for an operation input device that comprises: a display device that displays an operation screen; an optical plate that projects the operation screen into the air to generate an aerial projection plane; a sensor device for detecting the position, in a three-dimensional space, of an object that contacts the aerial projection plane; and a computer that specifies an operation input performed on the operation screen, the sensor device outputting sensor data including information for specifying two-dimensional coordinates of the object in a sensing area and a depth from the sensor device to the object,
    the recording medium recording a program including instructions that cause the computer to execute:
    (a) a step of detecting a position of the object on the aerial projection plane from the sensor data;
    (b) a step of setting, in the sensor data, a figure surrounding a part of the object upon detecting, from the sensor data, that the part of the object is located on the sensor device side of the aerial projection plane; and
    (c) a step of correcting the position of the object on the aerial projection plane by using the depth at an outer edge of the set figure.
  12. 請求項11に記載のコンピュータ読み取り可能な記録媒体であって、
     前記(c)のステップにおいて、前記センサデータにおいて、前記物体の一部分を囲む矩形を設定し、前記矩形のいずれか一辺における深度を用いて、前記物体の、前記空中投影面の垂直方向における位置、及び前記空中投影面の水平方向における位置のうちの一方又は両方を補正する、
    ことを特徴とするコンピュータ読み取り可能な記録媒体。
    A computer-readable recording medium according to claim 11,
    wherein, in the step (c), a rectangle surrounding the part of the object is set in the sensor data, and one or both of the position of the object in a vertical direction of the aerial projection plane and the position of the object in a horizontal direction of the aerial projection plane are corrected by using the depth along any one side of the rectangle,
    A computer-readable recording medium.
  13. 請求項12に記載のコンピュータ読み取り可能な記録媒体であって、
     前記(c)のステップにおいて、前記一辺は、前記空中投影面と前記センサ装置との位置関係に基づいて設定されている、
    ことを特徴とするコンピュータ読み取り可能な記録媒体。
    A computer-readable recording medium according to claim 12,
    In the step (c), the one side is set based on a positional relationship between the aerial projection plane and the sensor device.
    A computer-readable recording medium.
  14. 請求項11~13のいずれかに記載のコンピュータ読み取り可能な記録媒体であって、
     前記(b)のステップにおいて、前記センサデータにおける、前記物体の最も前記センサ装置側の部分の深度が閾値よりも小さい場合に、前記物体の一部分が、前記空中投影面の前記センサ装置側に位置していると判定する、
    ことを特徴とするコンピュータ読み取り可能な記録媒体。
    A computer-readable recording medium according to any one of claims 11 to 13,
    wherein, in the step (b), it is determined that a part of the object is located on the sensor device side of the aerial projection plane when, in the sensor data, the depth of the portion of the object closest to the sensor device is smaller than a threshold value,
    A computer-readable recording medium.
  15. 請求項11~14のいずれかに記載のコンピュータ読み取り可能な記録媒体であって、
    前記センサ装置が、デプスセンサである、
    ことを特徴とするコンピュータ読み取り可能な記録媒体。
    A computer-readable recording medium according to any one of claims 11 to 14,
    The sensor device is a depth sensor;
    A computer-readable recording medium.
PCT/JP2018/034490 2018-03-07 2018-09-18 Operation input device, operation input method, and computer-readable recording medium WO2019171635A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2020504657A JP6898021B2 (en) 2018-03-07 2018-09-18 Operation input device, operation input method, and program
CN201880090820.0A CN111886567B (en) 2018-03-07 2018-09-18 Operation input device, operation input method, and computer-readable recording medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018041162 2018-03-07
JP2018-041162 2018-03-07

Publications (1)

Publication Number Publication Date
WO2019171635A1 true WO2019171635A1 (en) 2019-09-12

Family

ID=67846002

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/034490 WO2019171635A1 (en) 2018-03-07 2018-09-18 Operation input device, operation input method, and computer-readable recording medium

Country Status (3)

Country Link
JP (1) JP6898021B2 (en)
CN (1) CN111886567B (en)
WO (1) WO2019171635A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4386522A1 (en) * 2021-08-13 2024-06-19 Anhui Easpeed Technology Co., Ltd. Positioning sensing method, positioning sensing apparatus, and input terminal device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003131785A (en) * 2001-10-22 2003-05-09 Toshiba Corp Interface device, operation control method and program product
JP2014179072A (en) * 2013-03-14 2014-09-25 Honda Motor Co Ltd Three-dimensional fingertip tracking
JP2015060296A (en) * 2013-09-17 2015-03-30 船井電機株式会社 Spatial coordinate specification device

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4865088B2 (en) * 2008-04-22 2012-02-01 株式会社アスカネット Optical imaging method
JP5353596B2 (en) * 2009-09-18 2013-11-27 セイコーエプソン株式会社 Projection display device and keystone correction method
JP5560771B2 (en) * 2010-02-26 2014-07-30 セイコーエプソン株式会社 Image correction apparatus, image display system, and image correction method
JP2012073659A (en) * 2010-09-01 2012-04-12 Shinsedai Kk Operation determination device, fingertip detection device, operation determination method, fingertip detection method, operation determination program, and fingertip detection program
JP5197777B2 (en) * 2011-02-01 2013-05-15 株式会社東芝 Interface device, method, and program
CN103620535B (en) * 2011-06-13 2017-06-27 西铁城时计株式会社 Message input device
WO2013124901A1 (en) * 2012-02-24 2013-08-29 日立コンシューマエレクトロニクス株式会社 Optical-projection-type display apparatus, portable terminal, and program
JP6480434B2 (en) * 2013-06-27 2019-03-13 アイサイト モバイル テクノロジーズ リミテッド System and method for direct pointing detection for interaction with digital devices
CN105765494B (en) * 2013-12-19 2019-01-22 麦克赛尔株式会社 Projection type image display device and projection type image display method
CN103677274B (en) * 2013-12-24 2016-08-24 广东威创视讯科技股份有限公司 A kind of interaction method and system based on active vision
JP6510213B2 (en) * 2014-02-18 2019-05-08 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Projection system, semiconductor integrated circuit, and image correction method
CN104375638A (en) * 2014-07-17 2015-02-25 深圳市钛客科技有限公司 Sensing equipment, mobile terminal and air sensing system
CN104375639A (en) * 2014-07-17 2015-02-25 深圳市钛客科技有限公司 Aerial sensing device
US10359883B2 (en) * 2014-12-26 2019-07-23 Nikon Corporation Detection device, electronic apparatus, detection method and program
JPWO2016121708A1 (en) * 2015-01-26 2017-11-24 Necソリューションイノベータ株式会社 INPUT SYSTEM, INPUT DEVICE, INPUT METHOD, AND PROGRAM
JP2017062709A (en) * 2015-09-25 2017-03-30 新光商事株式会社 Gesture operation device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003131785A (en) * 2001-10-22 2003-05-09 Toshiba Corp Interface device, operation control method and program product
JP2014179072A (en) * 2013-03-14 2014-09-25 Honda Motor Co Ltd Three-dimensional fingertip tracking
JP2015060296A (en) * 2013-09-17 2015-03-30 船井電機株式会社 Spatial coordinate specification device

Also Published As

Publication number Publication date
CN111886567B (en) 2023-10-20
JPWO2019171635A1 (en) 2021-02-12
JP6898021B2 (en) 2021-07-07
CN111886567A (en) 2020-11-03

Similar Documents

Publication Publication Date Title
JP4820285B2 (en) Automatic alignment touch system and method
US10254893B2 (en) Operating apparatus, control method therefor, and storage medium storing program
CN108200416B (en) Coordinate mapping method, device and the projection device of projected image in projection device
JP2019215811A (en) Projection system, image processing apparatus, and projection method
US10042426B2 (en) Information processing apparatus for detecting object from image, method for controlling the apparatus, and storage medium
US20190325593A1 (en) Image processing apparatus, system, method of manufacturing article, image processing method, and non-transitory computer-readable storage medium
TW201621454A (en) Projection alignment
US10146331B2 (en) Information processing system for transforming coordinates of a position designated by a pointer in a virtual image to world coordinates, information processing apparatus, and method of transforming coordinates
WO2019171635A1 (en) Operation input device, operation input method, anc computer-readable recording medium
EP3032380B1 (en) Image projection apparatus, and system employing interactive input-output capability
JP6555958B2 (en) Information processing apparatus, control method therefor, program, and storage medium
WO2021258506A1 (en) Sub-area touch method and apparatus, electronic device, and storage medium
US9483125B2 (en) Position information obtaining device and method, and image display system
JP6065433B2 (en) Projection apparatus, projection system, and program
JP6417939B2 (en) Handwriting system and program
US20140201687A1 (en) Information processing apparatus and method of controlling information processing apparatus
JP7452917B2 (en) Operation input device, operation input method and program
JP2018063555A (en) Information processing device, information processing method, and program
JP2016153996A (en) Coordinate acquisition system, display device, coordinate acquisition method, and program
TWI522871B (en) Processing method of object image for optical touch system
CN103488348B (en) Positioning method of image sensor group
US20240129443A1 (en) Display method, projector, and non-transitory computer-readable storage medium storing program
EP3059664A1 (en) A method for controlling a device by gestures and a system for controlling a device by gestures
CN117952928A (en) Image processing method and device
JP2024032042A (en) Detection method, detection device and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18908818

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020504657

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18908818

Country of ref document: EP

Kind code of ref document: A1
