
WO2017104525A1 - Input device, electronic device, and head-mounted display - Google Patents

Input device, electronic device, and head-mounted display

Info

Publication number
WO2017104525A1
WO2017104525A1 (PCT/JP2016/086504)
Authority
WO
WIPO (PCT)
Prior art keywords
gesture operation
input
user
gesture
hand
Prior art date
Application number
PCT/JP2016/086504
Other languages
English (en)
Japanese (ja)
Inventor
榊原 悟
Original Assignee
コニカミノルタ株式会社
Priority date: 2015-12-17 (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2016-12-08
Publication date: 2017-06-22
Application filed by コニカミノルタ株式会社
Publication of WO2017104525A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0346: Pointing devices displaced or positioned by the user, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors

Definitions

  • The present invention relates to an input device, an electronic device, and a head-mounted display.
  • Patent Document 1 discloses a computing device including an infrared LED and an infrared proximity sensor. According to the technique of Patent Document 1, infrared light emitted from the infrared LED is reflected by a hand approaching the computing device, and the reflected light is detected by the infrared proximity sensor to detect the movement of the hand. The user can thus perform a desired input operation, such as a swipe, without contact, simply by a movement (gesture) of the hand. Therefore, even if the user's hand is dirty, there is no risk of contaminating the mobile computing device.
  • Patent Document 2 discloses a technique for promoting or supporting one or more operations or techniques for gesture detection based on signals from a plurality of ambient environment sensors.
  • However, providing a plurality of ambient environment sensors in this manner increases cost and may increase the size of the mobile terminal.
  • The present invention has been made in view of the above circumstances, and aims to provide an input device, an electronic device, and a head-mounted display that enable complex input with a simple configuration by combining gesture operations.
  • An input device reflecting one aspect of the present invention comprises: a detection device that detects, in a detection region, a movement or shape change of an indicator moved by the user as a gesture operation and generates an output corresponding to the gesture operation; and a control device that determines input contents according to the output of the detection device.
  • The control device can selectively set a single mode, in which it determines input contents corresponding to a single gesture operation when the detection device detects a single gesture operation, and a combination mode, in which it determines input contents different from those for a single gesture operation when the detection device continuously detects a plurality of gesture operations.
  • According to the present invention, it is possible to provide an input device, an electronic device, and a head-mounted display that enable complex input with a simple configuration by combining gesture operations.
  • HMD: head-mounted display
  • A view of the image display unit 104B of the display unit 104 as seen from the user US side, and a diagram showing an example of an event list and a combination list.
  • (a) is a diagram showing an input screen on which the user makes entries directly using an external PC or the like,
  • (b) is a schematic diagram showing conversion of the data into a QR Code (trademark).
  • (a) is a diagram showing an example of events whose contents are input by a combination of gesture operations for each application,
  • (b) is a diagram showing an example of a combination list.
  • FIG. 1 is a perspective view of an HMD 100 that is an electronic apparatus according to the present embodiment.
  • FIG. 2 is a front view of the head mounted display (hereinafter referred to as HMD) 100 according to the present embodiment.
  • FIG. 3 is a view of the HMD 100 according to the present embodiment as viewed from above.
  • the right side and the left side of the HMD 100 refer to the right side and the left side for the user wearing the HMD 100.
  • the HMD 100 of this embodiment has a frame 101 as a support member.
  • a frame 101 that is U-shaped when viewed from above has a front part 101a to which two spectacle lenses 102 are attached, and side parts 101b and 101c extending rearward from both ends of the front part 101a.
  • the two spectacle lenses 102 attached to the frame 101 may or may not have refractive power.
  • a cylindrical main body 103 as a support member is fixed to the front portion 101a of the frame 101 on the upper side of the spectacle lens 102 on the right side (which may be on the left side depending on the user's dominant eye).
  • the main body 103 is provided with a display unit 104.
  • A display control unit 104DR (see FIG. 6 described later) that controls display on the display unit 104 based on instructions from a processor 121 described later is disposed in the main body 103. If necessary, a display unit may be arranged in front of both eyes.
  • FIG. 4 is a schematic cross-sectional view showing the configuration of the display unit 104.
  • the display unit 104 as a display device that displays the detection result of the gesture operation includes an image forming unit 104A and an image display unit 104B.
  • the image forming unit 104A is incorporated in the main body unit 103, and includes a light source 104a, a unidirectional diffuser 104b, a condenser lens 104c, and a display element 104d.
  • The image display unit 104B, which is a so-called see-through type display member, is a generally flat plate that extends downward from the main body 103 so as to run parallel to one spectacle lens 102 (see FIG. 1), and includes an eyepiece prism 104f, a deflecting prism 104g, and a hologram optical element 104h.
  • The light source 104a has a function of illuminating the display element 104d.
  • Since the light source 104a emits light having a predetermined wavelength width, the image light obtained by illuminating the display element 104d also has a predetermined wavelength width, and when the hologram optical element 104h diffracts the image light, the user can observe the image over the entire observation angle of view at the position of the pupil B. Further, the peak wavelength of each color of the light source 104a is set in the vicinity of the peak wavelength of the diffraction efficiency of the hologram optical element 104h, which improves light use efficiency.
  • Since the light source 104a is composed of LEDs that emit RGB light, the cost of the light source 104a can be reduced, and when the display element 104d is illuminated, a color image is displayed on the display element 104d and can be visually recognized by the user.
  • Since each of the RGB LED elements has a narrow emission wavelength width, using a plurality of such elements enables high color reproducibility and bright image display.
  • The display element 104d displays an image by modulating the light emitted from the light source 104a in accordance with image data, and is configured as a transmissive liquid crystal display element having a matrix of pixels that serve as light-transmitting regions. Note that the display element 104d may be of a reflective type.
  • The eyepiece prism 104f totally reflects the image light from the display element 104d, which enters through the base end face PL1, between the opposed parallel inner side face PL2 and outer side face PL3, and guides it to the user's pupil via the hologram optical element 104h, while also transmitting external light and guiding it to the user's pupil; together with the deflecting prism 104g, it is made of, for example, an acrylic resin.
  • the eyepiece prism 104f and the deflection prism 104g are joined by an adhesive with the hologram optical element 104h sandwiched between inclined surfaces PL4 and PL5 inclined with respect to the inner surface PL2 and the outer surface PL3.
  • the deflection prism 104g is joined to the eyepiece prism 104f, and becomes a substantially parallel flat plate integrated with the eyepiece prism 104f. By joining the deflecting prism 104g to the eyepiece prism 104f, it is possible to prevent distortion in the external image observed by the user through the display unit 104.
  • the hologram optical element 104h diffracts and reflects the image light (light having a wavelength corresponding to the three primary colors) emitted from the display element 104d, guides it to the pupil B, enlarges the image displayed on the display element 104d, and enlarges the user's pupil. It is a volume phase type reflection hologram guided as a virtual image.
  • The hologram optical element 104h diffracts (reflects) light in three wavelength ranges, for example 465 ± 5 nm (B light), 521 ± 5 nm (G light), and 634 ± 5 nm (R light), expressed in terms of the peak wavelength of diffraction efficiency and the wavelength width at half maximum of diffraction efficiency.
  • Here, the peak wavelength of diffraction efficiency is the wavelength at which the diffraction efficiency reaches its peak, and the wavelength width at half maximum of diffraction efficiency is the wavelength width over which the diffraction efficiency is at half of that peak.
  • The reflection-type hologram optical element 104h has high wavelength selectivity and diffracts and reflects only light having wavelengths in the above-mentioned ranges (near the exposure wavelength); external light of other wavelengths passes through the hologram optical element 104h, so a high external light transmittance can be realized.
  • the light emitted from the light source 104a is diffused by the unidirectional diffusion plate 104b, condensed by the condenser lens 104c, and enters the display element 104d.
  • the light incident on the display element 104d is modulated for each pixel based on the image data input from the display control unit 104DR, and is emitted as image light. Thereby, a color image is displayed on the display element 104d.
  • Image light from the display element 104d enters the eyepiece prism 104f from its base end face PL1, is totally reflected a plurality of times by the inner side face PL2 and the outer side face PL3, and enters the hologram optical element 104h.
  • the light incident on the hologram optical element 104h is reflected there, passes through the inner side surface PL2, and reaches the pupil B.
  • the user can observe an enlarged virtual image of the image displayed on the display element 104d, and can visually recognize it as a screen formed on the image display unit 104B.
  • the hologram optical element 104h constitutes a screen, or it can be considered that a screen is formed on the inner surface PL2.
  • “screen” may refer to an image to be displayed.
  • Since the eyepiece prism 104f, the deflecting prism 104g, and the hologram optical element 104h transmit almost all external light, the user can observe an external field image (real image) through them. Therefore, the virtual image of the image displayed on the display element 104d is observed overlapping a part of the external image. In this manner, the user of the HMD 100 can simultaneously observe the image provided by the display element 104d and the external image via the hologram optical element 104h. Note that when the display unit 104 is in the non-display state, the image display unit 104B is transparent and only the external image can be observed.
  • In the present embodiment, the display unit is configured by combining a light source, a liquid crystal display element, and an optical system; however, instead of the combination of a light source and a liquid crystal display element, a self-luminous display element (for example, an organic EL display element) may be used, and a transmissive organic EL display panel that is transparent in the non-light-emitting state may also be used.
  • A proximity sensor 105 disposed near the center of the frame 101 and a lens 106a of a camera 106 disposed near the side are provided on the front of the main body 103 so as to face forward.
  • Here, a "proximity sensor", which is an example of a detection device, refers to a sensor that outputs a signal by detecting whether an object, for example a part of the human body (such as a hand or a finger), is present in a detection region within a proximity range in front of the detection surface, in order to detect that the object is in proximity to the user's eyes.
  • the proximity range may be set as appropriate according to the operator's characteristics and preferences. For example, the proximity range from the detection surface of the proximity sensor may be within a range of 200 mm.
  • the control circuit determines that the object exists in the proximity range based on a signal output from the proximity sensor when the object enters the detection area in the proximity range in front of the proximity sensor. When an object enters the detection area, a valid signal may be output from the proximity sensor to the control circuit.
  • a passive proximity sensor has a detection unit that detects invisible light and electromagnetic waves emitted from an object when the object approaches.
  • Examples of passive proximity sensors include a pyroelectric sensor that detects invisible light such as infrared rays emitted from an approaching human body, and a capacitance sensor that detects a change in electrostatic capacitance between the sensor and the approaching human body.
  • The active proximity sensor includes an invisible light or sound wave projection unit, and a detection unit that receives the invisible light or sound wave reflected and returned from the object.
  • Active proximity sensors include infrared sensors that project infrared rays and receive the infrared rays reflected by an object, laser sensors that project a laser beam and receive the laser beam reflected by an object, and ultrasonic sensors that project ultrasonic waves and receive the ultrasonic waves reflected by an object. Note that a passive proximity sensor does not need to project energy toward an object and is therefore excellent in low power consumption. An active proximity sensor can improve detection reliability; for example, even if the user wears gloves that block detection light such as infrared light emitted from the human body, the movement of the user's hand can be detected. A plurality of types of proximity sensors may be used in combination.
  • Proximity sensors are generally smaller and cheaper than cameras and consume less power, so the HMD can be operated for a long time, and the complicated image processing required for gesture recognition by analyzing images captured by a camera is also unnecessary.
  • FIG. 5 is an enlarged view of the proximity sensor 105 used in the present embodiment as viewed from the front.
  • the proximity sensor 105 includes a light receiving unit 105 a that receives invisible light such as infrared light emitted from a human body as detection light.
  • The light receiving unit 105a forms light receiving areas RA to RD arranged in 2 rows and 2 columns; when invisible light is received, a signal corresponding to each of the light receiving areas RA to RD is output individually from that area.
  • The signal from each of the light receiving areas RA to RD changes in intensity according to the distance from the light receiving unit 105a to the object, the intensity increasing as the distance decreases. If a pyroelectric sensor that detects infrared light emitted from the human body is used, objects other than the exposed human body are unlikely to be falsely detected; for example, when working in a confined space, there is an advantage that false detection is effectively prevented.
  • the right sub-body portion 107 is attached to the right side portion 101b of the frame 101
  • the left sub-body portion 108 is attached to the left side portion 101c of the frame 101.
  • The right sub-main body 107 and the left sub-main body 108 have an elongated plate shape, and have elongated protrusions 107a and 108a on their inner sides, respectively.
  • By engaging the elongated protrusion 107a with the long hole 101d of the side portion 101b of the frame 101, the right sub-main body 107 is attached to the frame 101 in a positioned state, and by engaging the elongated protrusion 108a with the long hole 101e of the side portion 101c, the left sub-main body 108 is attached to the frame 101 in a positioned state.
  • In the right sub-main body 107, there are provided a geomagnetic sensor 109 (see FIG. 6 described later) for detecting geomagnetism, and an angular velocity sensor 110B and an acceleration sensor 110A (see FIG. 6 described later) that generate outputs corresponding to the posture.
  • The left sub-main body 108 is provided with a speaker 111A, an earphone 111C, and a microphone 111B (see FIG. 6 described later).
  • The main body 103 and the right sub-main body 107 are connected so as to be able to transmit signals through a wiring HS, and the main body 103 and the left sub-main body 108 are connected so as to be able to transmit signals through a wiring (not shown).
  • As schematically illustrated in FIG. 6, the right sub-main body 107 is connected to the control unit CTU via a cord CD extending from its rear end.
  • a 6-axis sensor in which an angular velocity sensor and an acceleration sensor are integrated may be used.
  • the HMD can be operated by sound based on an output signal generated from the microphone 111B according to the input sound.
  • the main main body 103 and the left sub main body 108 may be configured to be wirelessly connected.
  • Further, a power supply circuit 113 (see FIG. 6 described later) for converting the voltage supplied from the battery control unit 128 of the control unit CTU into a voltage appropriate for each unit is provided in the right sub-main body 107.
  • FIG. 6 is a block diagram of main circuits of the HMD 100.
  • The control unit CTU includes: the processor 121, serving as a control device that controls each functional device by generating control signals for the display unit 104 and the other functional devices; an operation unit 122; a GPS receiving unit 123 that receives radio waves from GPS satellites; a communication unit 124 serving as an interface unit for exchanging data with the outside; a ROM 125 that stores programs; a RAM 126 serving as a storage unit that stores image data, information on the single list and the combination list (described in detail later), telephone directory data, and a voice dictionary; a battery 127 and a battery control unit 128; a power supply circuit 130 that converts the voltage supplied from the battery control unit 128 into a voltage appropriate for each unit; and a storage device 129 such as an SSD or a flash memory.
  • An application processor used in smartphones or the like can be used as the processor 121, but the type of the processor 121 is not limited. For example, if an application processor includes, as standard, hardware necessary for image processing such as a GPU or a codec, it can be said to be a processor suitable for a small HMD.
  • the processor 121 receives a signal from the proximity sensor 105. Further, the processor 121 controls image display of the display unit 104 via the display control unit 104DR.
  • the processor 121 and the proximity sensor 105 constitute an input device.
  • The processor 121 receives power from the power supply circuit 130, operates according to a program stored in at least one of the ROM 125 and the storage device 129, can take in image data from the camera 106 in response to an operation input such as power-on from the operation unit 122 and store it in the RAM 126, and can communicate with the outside via the communication unit 124 as necessary.
  • Further, the processor 121 executes image control according to the output from the proximity sensor 105, so that the user can perform screen control of the HMD 100, including calling and video playback, by gesture operations using the hand and fingers.
  • FIG. 7 is a front view when the user US wears the HMD 100 of the present embodiment.
  • FIG. 8 is a side view when the user US wears the HMD 100
  • FIG. 9 is a top view thereof, which is shown together with the user's hand.
  • the gesture operation is an operation in which at least the hand HD or finger of the user US passes through the detection area of the proximity sensor 105 and can be detected by the processor 121 of the HMD 100 via the proximity sensor 105.
  • When the hand HD is outside the detection area, the light receiving unit 105a does not receive invisible light, so the processor 121 determines that no gesture operation is being performed.
  • When the user US moves the hand HD into the detection area, the light receiving unit 105a detects the invisible light emitted from the hand HD, and based on the output of the proximity sensor 105 the processor 121 determines that a gesture operation has been performed. In the following description, it is assumed that the gesture operation is performed by the user's hand HD.
  • However, the gesture operation may instead be performed by the user using a pointing device made of a material that can emit invisible light.
  • A finger or another part of the human body, or such a pointing device, that is moved by the user's intention is referred to as an indicator.
  • An imaging device can be used as the detection device instead of the proximity sensor.
  • the moving indicator is imaged by the imaging device, and the moving direction is determined by the image processing device (part of the control device) based on the image data output from the imaging device.
  • the light receiving unit 105a has the light receiving regions RA to RD arranged in 2 rows and 2 columns (see FIG. 5). Therefore, when the user US moves the hand HD closer to the front of the HMD 100 from either the left, right, up, or down directions, the output timings of signals detected in the light receiving areas RA to RD are different.
  • FIGS. 10 and 11 are examples of signal waveforms of the light receiving areas RA to RD, where the vertical axis represents the signal intensity of the light receiving areas RA to RD and the horizontal axis represents time.
  • For example, when the user US moves the hand HD from right to left, the light receiving areas RA and RC receive the invisible light first. Therefore, as shown in FIG. 10, the signals of the light receiving areas RA and RC rise first, the signals of the light receiving areas RB and RD rise later, then the signals of RA and RC fall, and finally the signals of RB and RD fall.
  • The processor 121 detects this signal timing and determines that the user US has performed a gesture operation of moving the hand HD from right to left.
  • The light receiving state of the proximity sensor 105 when the hand HD is moved from right to left as viewed from the user side, as shown in FIG. 11, is shown in FIG. 12 (the light receiving areas of the proximity sensor 105 are shown as circles, and the areas that detect light are hatched).
  • In FIG. 12(a), none of the light receiving areas detects the hand HD; in FIG. 12(b), only the light receiving areas RA and RC detect the hand HD, so it can be seen that the hand HD has entered the detection area from the right side; and in FIG. 12(c), all of the light receiving areas detect the hand HD, so it can be seen that the hand HD has come directly in front of the proximity sensor 105.
  • Similarly, when the hand HD is moved from top to bottom, the signals of the light receiving areas RA and RB rise first, the signals of the light receiving areas RC and RD rise later, then the signals of RA and RB fall, and finally the signals of RC and RD fall.
  • The processor 121 detects this signal timing and determines that the user US has performed a gesture operation of moving the hand HD from top to bottom.
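The direction decision described above reduces to comparing the order in which the four quadrant signals rise. A minimal sketch of that logic, assuming each light receiving area reports the time at which its signal first crosses a detection threshold (the function, the threshold handling, and the area-to-side mapping are illustrative assumptions, not taken from the patent):

```python
# Sketch: infer the swipe direction from the rise order of the 2x2 light receiving areas.
# Assumed layout (illustrative): RA/RC form one column and RA/RB form the top row,
# so a right-to-left swipe as in FIG. 10 covers RA and RC first.

def infer_direction(rise_times):
    """rise_times maps 'RA'..'RD' to the time (s) each signal first crossed a threshold."""
    t_ac = min(rise_times['RA'], rise_times['RC'])  # column covered first on a right-to-left swipe
    t_bd = min(rise_times['RB'], rise_times['RD'])  # opposite column
    t_ab = min(rise_times['RA'], rise_times['RB'])  # row covered first on a top-to-bottom swipe
    t_cd = min(rise_times['RC'], rise_times['RD'])  # opposite row

    dx = t_bd - t_ac   # positive when the RA/RC column clearly led
    dy = t_cd - t_ab   # positive when the RA/RB row clearly led

    if abs(dx) >= abs(dy):
        return 'right to left' if dx > 0 else 'left to right'
    return 'top to bottom' if dy > 0 else 'bottom to top'

# RA and RC rise about 40 ms before RB and RD, as in FIG. 10 -> 'right to left'
print(infer_direction({'RA': 0.00, 'RC': 0.00, 'RB': 0.04, 'RD': 0.04}))
```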
  • FIG. 14 shows an example of a signal waveform in which the vertical axis represents the average signal intensity of the light receiving regions RA to RD and the horizontal axis represents time.
  • the waveform indicated by the solid line is high in intensity immediately after output and decreases with time. Therefore, when such a waveform is output from the proximity sensor 105, the processor 121 determines that the gesture operation is performed by moving the hand HD away from the vicinity of the face.
  • the waveform indicated by the dotted line is low in intensity immediately after output and increases with time. Therefore, when such a waveform is output from the proximity sensor 105, the processor 121 determines that the gesture operation is performed by moving the hand HD so as to approach the face from a distance.
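The approach/withdraw decision of FIG. 14 can likewise be reduced to the trend of the averaged intensity. A small sketch under that assumption (the window handling and the slope threshold are illustrative values, not from the patent):

```python
# Sketch: classify approach vs. withdrawal from the averaged RA-RD intensity trend,
# mirroring FIG. 14 (rising mean intensity = hand approaching, falling = moving away).

def classify_depth_gesture(samples, min_slope=0.05):
    """samples: averaged light-receiving intensities sampled over the gesture duration."""
    if len(samples) < 2:
        return 'unknown'
    slope = (samples[-1] - samples[0]) / (len(samples) - 1)
    if slope > min_slope:
        return 'approach'   # dotted-line case: intensity grows as the hand nears the sensor
    if slope < -min_slope:
        return 'withdraw'   # solid-line case: intensity decays as the hand moves away
    return 'unknown'

print(classify_depth_gesture([0.9, 0.7, 0.5, 0.3]))  # -> 'withdraw'
print(classify_depth_gesture([0.2, 0.4, 0.6, 0.8]))  # -> 'approach'
```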
  • As described above, by using the proximity sensor and positioning the detection region within the visual field of the user's eye facing the image display unit, the presence and movement of the hand HD can be reliably detected. Accordingly, the user US can see the hand movement and the resulting operation in conjunction with each other, which realizes intuitive operation. Furthermore, since the proximity sensor is small and consumes less power than a camera, the continuous operation time of the HMD 100 can be extended.
  • In the present embodiment, a single mode, in which the input content desired by the user is input by a single gesture operation, and a combination mode, in which a plurality of gesture operations are combined to input the input content desired by the user, can be selectively set, and the modes are switched by a specific gesture operation.
  • The processor 121 recognizes the following patterns in advance as valid gesture operations: moving the hand from top to bottom, from bottom to top, from left to right, from right to left, from lower left to upper right, from lower right to upper left, from upper left to lower right, and from upper right to lower left; rotating the hand; a click operation (moving the hand quickly back and forth); holding the hand still (short); holding the hand still (long); and moving the hand from side to side at high speed.
  • In the single mode, the input content is determined individually for each of the above gesture operations.
  • In the combination mode, combinations of the above gesture operations are used, so it is necessary to switch between the modes so that gesture operations are not confused.
  • a gesture operation for switching modes is set in advance, and by detecting the gesture operation, the processor 121 can switch between the single mode and the combination mode.
  • the switching from the single mode to the combination mode and the switching from the combination mode to the single mode may be performed by either the same gesture operation or different gesture operations.
  • In the present embodiment, the gesture operation for switching modes (referred to as the mode switching gesture operation) is common to both directions of switching and, as shown in FIGS. 8 and 9, consists of holding the hand HD still for 2 seconds or more at the center of the detection area.
  • Alternatively, the mode switching may be performed by detecting a voice such as "mode switching" uttered by the user US, using the voice recognition function of the processor 121 via the microphone 111B.
  • A gesture operation for canceling the content of the previous gesture operation (referred to as input information), called the cancel gesture operation, consists of moving the hand HD left and right at high speed within the detection area.
  • If the cancel gesture operation is performed while no input information is stored, the mode may be switched to the single mode.
  • Further, a gesture operation of moving the hand HD back and forth at the center of the detection area of the proximity sensor 105 may be set in advance as a decision gesture operation.
  • a single list in which a single gesture operation in the single mode is associated with the input content and a combination list in which a combination of a plurality of gesture operations in the combination mode is associated with the input content are stored in the RAM 126 in advance. It is assumed that the processor 121 can read out as necessary. At the time of setting the combination mode, the processor 121 sequentially stores input information by successive gesture operations and collates with the combination list.
  • the combination list includes a mode switching gesture operation, a cancel gesture operation, and a determination gesture operation in addition to an effective gesture operation.
  • In the present embodiment, the single mode is set by default, and the mode returns to the single mode when the mode switching gesture operation is detected after the transition to the combination mode, or when input by a plurality of gesture operations is completed.
  • Alternatively, the combination mode may be continued unless the user performs the mode switching gesture operation.
  • FIG. 15 is a flowchart showing a single mode sequence.
  • In step S102, the processor 121 determines from the gesture operation pattern detected via the proximity sensor 105 whether or not the gesture operation is valid. If the gesture operation is not valid, the sequence returns to step S101 to wait for the next gesture operation.
  • In step S103, the processor 121 determines whether the gesture operation is the mode switching gesture operation. If the processor 121 determines that it is not, then in step S104 it determines the input content corresponding to the gesture operation detected via the proximity sensor 105 based on the single list and executes it. On the other hand, if the processor 121 determines that it is the mode switching gesture operation, the single mode is interrupted in step S105 and the mode shifts to the combination mode. Thereafter, the processor 121 determines input content on the assumption that a plurality of gesture operations will be performed in succession.
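The single-mode loop of FIG. 15 can be summarized as a small dispatch step: invalid gestures are ignored, the mode switching gesture leaves the mode, and anything else is looked up in the single list and executed. A minimal sketch under those assumptions (the gesture names, the SINGLE_LIST contents, and the execute callback are illustrative, not from the patent):

```python
# Sketch of the single-mode sequence of FIG. 15. Gesture names and list contents are
# placeholders; only the control flow mirrors steps S101 to S105 of the flowchart.

MODE_SWITCH = 'hold_still_2s'                      # mode switching gesture operation
SINGLE_LIST = {'top_to_bottom': 'scroll_down',     # single gesture -> input content
               'bottom_to_top': 'scroll_up'}

def single_mode_step(gesture, execute):
    """Handle one detected gesture; return the next mode, 'single' or 'combination'."""
    if gesture == MODE_SWITCH:
        return 'combination'                       # S105: interrupt the single mode
    if gesture not in SINGLE_LIST:
        return 'single'                            # S102: not a valid gesture, keep waiting
    execute(SINGLE_LIST[gesture])                  # S104: determine and execute the content
    return 'single'
```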
  • FIG. 16 is a flowchart showing the sequence of the combination mode. After the transition to the combination mode, when the proximity sensor 105 outputs a signal in response to a gesture operation in step S201, the processor 121 determines in step S202, from the gesture operation pattern detected via the proximity sensor 105, whether or not it is the mode switching gesture operation.
  • When determining that the detected gesture operation is the mode switching gesture operation, the processor 121 deletes the stored input information in step S210, interrupts the combination mode in step S211, and shifts to the single mode.
  • Otherwise, in step S203, the processor 121 determines whether or not the detected gesture operation exists in the combination list. If it does not exist, then in step S208 the processor 121 displays the gesture operation contents that can be input as options on the display unit 104, and then returns the sequence to step S201.
  • If the processor 121 determines that the detected gesture operation exists in the combination list, it shifts the flow to step S204 and further determines whether the detected gesture operation is the cancel gesture operation. If it determines that it is the cancel gesture operation, then in step S205 the processor 121 deletes the input information corresponding to the previous gesture operation, and in step S208 it lists, as options on the display unit 104, the input contents that can be combined with the gesture operations performed up to this point, and then returns the sequence to step S201. Thereby, an invalid gesture operation is ignored, and the gesture operations that can be input can be notified to the user US.
  • If it is not the cancel gesture operation, the processor 121 stores the input information in step S206 and determines in step S207 whether there are other candidates. If it determines that there are other candidates, then in step S208 it displays, as options on the display unit 104, the input contents that can be combined with the gesture operations performed up to this point, and then returns the sequence to step S201.
  • If there are no other candidates, the processor 121 determines the input content in step S209 and executes the event corresponding to that content. Alternatively, even when it is determined that there are no other combination candidates in the combination list, the input may be confirmed only when, for example, a gesture operation of repeatedly moving the hand HD in the detection area is detected as an input confirmation gesture operation. Thereafter, the processor 121 deletes the stored input information in step S210, interrupts the combination mode in step S211, and shifts to the single mode.
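In the combination mode of FIG. 16, the gestures detected so far can be treated as a growing prefix matched against the combination list: a cancel gesture drops the last entry, the mode switching gesture clears everything, and a unique full match fires the corresponding event. A sketch of that bookkeeping, with illustrative gesture and event names (these identifiers are assumptions, not from the patent):

```python
# Sketch of the combination-mode handling of FIG. 16 as prefix matching against the
# combination list. Only the control flow follows the flowchart; names are placeholders.

MODE_SWITCH = 'hold_still_2s'
CANCEL = 'shake_left_right'
COMBO_LIST = {('left_to_right', 'bottom_to_top'): 'stop_video_recording',
              ('left_to_right', 'upper_left_to_lower_right'): 'play_video'}

def combination_mode_step(gesture, pending, execute):
    """pending: gestures stored so far; returns (next_mode, new_pending)."""
    if gesture == MODE_SWITCH:
        return 'single', []                        # S210/S211: clear input, leave the mode
    if gesture == CANCEL:
        return 'combination', pending[:-1]         # S205: delete the previous input information
    candidate = tuple(pending) + (gesture,)
    matches = [c for c in COMBO_LIST if c[:len(candidate)] == candidate]
    if not matches:
        return 'combination', pending              # S208: ignore and show selectable options
    if matches == [candidate]:
        execute(COMBO_LIST[candidate])             # S209: content confirmed, run the event
        return 'single', []                        # S210/S211: back to the single mode
    return 'combination', list(candidate)          # S206/S207: other candidates remain
```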
  • In the above description, the combination list includes the mode switching gesture operation, the cancel gesture operation, and the decision gesture operation; however, the processor 121 may instead refer to one or more of these in the event list or in another list.
  • FIG. 17 is a flowchart showing a part of a modification of the sequence of FIG. 16, which can be executed in place of step S203 of FIG. 16. Here, only the parts that differ from FIG. 16 are described.
  • First, the processor 121 collates the gesture operation detected via the proximity sensor 105 with the combination list and checks whether it is a valid gesture operation. If the gesture operation is not valid (that is, an invalid gesture operation is detected), the processor 121 increments the value of a built-in counter in step S302 (1 for the first invalid determination, 2 if the next determination is also invalid, and so on) and advances the sequence to step S304. On the other hand, if a valid gesture operation is detected, the processor 121 resets the value of the built-in counter in step S303 (the run of invalid determinations is interrupted), advances the sequence to step S204 of FIG. 16, and executes the sequence described above.
  • In step S304, the processor 121 determines whether the value of the built-in counter is greater than or equal to a threshold value N. If it is, the number of invalid gesture operations performed by the user has exceeded the specified number, so it is assumed that the user most likely does not know which gestures can be operated, and a flag for displaying a list of combinations of gesture operations that can be input is set. On the other hand, if the value of the built-in counter is less than the threshold value N, the processor 121 sets, in step S306, a flag for not displaying the list of combinations of gesture operations that can be input, so as not to confuse the user. After that, the sequence proceeds to step S206 of FIG. 16.
  • Then, the processor 121 displays the list of combinations of gesture operations that can be input on the display unit 104 to assist the user's input, but does not display the list when the flag for hiding it is set.
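The FIG. 17 modification amounts to a counter over consecutive invalid gestures that gates whether the hint list is shown. A compact sketch of that gate (the threshold value and the class name are illustrative assumptions):

```python
# Sketch of the FIG. 17 modification: count consecutive invalid gestures and show the
# list of valid gesture combinations only after the count reaches the threshold N.

class HintGate:
    def __init__(self, threshold_n=3):        # N is illustrative; the patent only names it N
        self.invalid_count = 0
        self.threshold_n = threshold_n

    def observe(self, gesture_is_valid):
        """Return True when the hint list should be displayed on the display unit 104."""
        if gesture_is_valid:
            self.invalid_count = 0             # S303: a valid gesture interrupts the run
            return False
        self.invalid_count += 1                # S302: one more consecutive invalid gesture
        return self.invalid_count >= self.threshold_n   # S304: compare against N
```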
  • As described above, by setting the combination mode, it is possible to determine, in response to a plurality of gesture operations being detected in succession, input content different from that when a single gesture operation is detected, so the degree of freedom in setting the input content increases.
  • FIG. 18 shows the phone book data stored in the RAM 126
  • FIG. 19 shows an example of an event list and a combination list corresponding to the phone book data and used in the call application.
  • In the telephone directory data, names are classified into a group belonging to the "a" row, a group belonging to the "ka" row, a group belonging to the "sa" row, and so on of the Japanese syllabary, and each name is associated with a corresponding telephone number.
  • The event list and combination list in FIG. 19(a) record in advance call destinations that can be input by a combination of two gesture operations,
  • while the event list and combination list in FIG. 19(b) record in advance call destinations that can be input by a combination of two or three gesture operations.
  • An arrow in the figure means a gesture operation of moving the hand in the direction of the arrow within the detection region. Call destinations that can be input by a combination of four or more gesture operations may also be set.
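Conceptually, the lists of FIG. 19 map a fixed sequence of gestures to a call destination whose number is then taken from the phone book data. A minimal sketch of that lookup (the gesture names and the telephone number are placeholders, not data from the patent):

```python
# Sketch: resolve a completed gesture combination to a call destination and number.
# The combination and the number below are placeholders for the FIG. 18/19 data.

PHONE_BOOK = {'Technology Department Toshiyuki Komine': '000-0000-0000'}
COMBINATION_LIST = {('left_to_right', 'top_to_bottom'):
                    'Technology Department Toshiyuki Komine'}

def resolve_call(gestures):
    destination = COMBINATION_LIST.get(tuple(gestures))
    if destination is None:
        return None                                  # not a registered combination
    return destination, PHONE_BOOK[destination]      # party to call and number to dial

print(resolve_call(['left_to_right', 'top_to_bottom']))
```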
  • Call destinations are stored as an event list. If a call to a specific destination is to be made in the single mode, a phone book as shown in FIG. 18 is displayed on the display unit 104 based on the phone book data; then, for example, the display is switched to the destination group belonging to the "ka" row by a single gesture operation of moving the hand from left to right, and the screen is further scrolled by a single gesture operation of moving the hand from top to bottom, so that "Toshiyuki Komine, Engineering Department" can be selected.
  • Next, the procedure according to the present embodiment will be described.
  • It is assumed that the user US has stored, as an event list, frequently called parties (including "Technology Department Toshiyuki Komine") and has stored a combination list that associates them with a plurality of gesture operations.
  • When the mode switching gesture operation is performed, the processor 121 enters the combination mode according to the output signal from the proximity sensor 105.
  • Next, in accordance with the gesture operation combination for "Technology Department Toshiyuki Komine" shown in FIG. 19, the user US moves the hand HD from left to right within the detection area of the proximity sensor 105.
  • Based on the detected combination, the processor 121 determines that the user US has input the content (event) of calling "Technology Department Toshiyuki Komine", identifies the stored telephone number of "Technology Department Toshiyuki Komine", and immediately executes the call operation.
  • Thereby, the user US can talk to "Technology Department Toshiyuki Komine" with a simple operation.
  • In the present embodiment, the sequence returns to the single mode at the same time as the call is executed; however, when a special application such as a call is being executed, the combination mode may be maintained even after the call is executed.
  • Suppose that, after the transition to the combination mode, the user US who wants to call "Technology Department Toshiyuki Komine" mistakenly moves the hand HD from right to left in the detection area of the proximity sensor 105 as the first gesture operation.
  • In this case, by further moving the hand HD left and right at high speed, the processor 121 recognizes the cancel gesture operation, so that the previous gesture operation can be invalidated and the first gesture operation can be performed again.
  • As described above, according to the present embodiment, it is possible to realize the input desired by the user with a small number of simple gesture operations.
  • FIG. 19B shows an example of a combination list and event list in which the number of combinations of gesture operations is two and three.
  • For example, suppose the hand HD is moved from top to bottom within the detection area of the proximity sensor 105 and is then moved from top to bottom again; three candidates then remain in the combination list. In this case, the processor 121 does not immediately place a call but waits for the next instruction from the user US.
  • The user US who intended to call "Planning Department Kyoko Yokoi" recognizes, from the fact that no call is placed, that other candidates remain, and therefore performs the decision gesture operation that has been set in advance.
  • Thereby, the processor 121 determines that the user US has input an instruction to call "Planning Department Kyoko Yokoi", identifies the stored telephone number of "Planning Department Kyoko Yokoi", and can immediately execute the call operation.
  • Alternatively, the processor 121 assumes that the user US intends to select another candidate and waits for a gesture operation of moving the hand HD from bottom to top (call to "Planning Department Yumi Wakai") or a gesture operation of moving the hand HD from right to left (call to "Planning Department Hisashi Wakui"). If either gesture operation is performed, the input is confirmed, so the call operation to the confirmed party can be executed immediately. If any other gesture operation is performed, the processor 121 can sound a warning indicating that the gesture operation is invalid.
  • In this way, the degree of freedom of the input content can be expanded, and the user can easily realize the desired operation.
  • Further, a combination list that additionally combines the tilt of the user US's neck or a specific voice can be stored in the RAM 126.
  • In that case, the processor 121 detects the tilt of the user US's neck with the acceleration sensor 110A, or collects the voice of the user US with the microphone 111B and performs speech recognition based on its analysis or the voice dictionary, and can then confirm the input content desired by the user according to the combination list.
  • FIG. 20 is a diagram of the image display unit 104B of the display unit 104 as viewed from the user US side.
  • the icon IC1 displayed at the upper left indicates that the processor 121 is executing the camera application by a predetermined operation.
  • Images including the arrow icons and the like that present the input history and the gesture operation combination candidates described below are stored in advance in the RAM 126 or the like, are called up by the processor 121 as necessary, and the corresponding image is displayed after collation with the output signal of the proximity sensor 105.
  • Since the default setting for input is the single mode, the icon IC1 is displayed as shown in FIG. 20(a). A user who sees such a screen display knows that the single mode is set.
  • When the mode switching gesture operation is detected, the processor 121 interrupts the single mode and starts the combination mode based on the output signal from the proximity sensor 105. Then, as shown in FIG. 20(b), three frames FR representing the history of gesture operations are displayed, here at the lower right of the screen of the image display unit 104B, and information CAN indicating candidates for combinations of gesture operations is displayed near the lower left of the screen. In the frames FR, the recognized gesture operations (here, arrows) are displayed from the left in the order in which they were recognized.
  • The information CAN shows combinations of arrows indicating the directions of the gesture operations together with the corresponding input contents. Because the user observes the outside world through the screen of the image display unit 104B, it is difficult to display a large amount of information on it. Therefore, in the present embodiment, only two combinations of gesture operations are displayed in the information CAN, and the remaining combinations can be displayed two at a time by, for example, performing a click operation (moving the hand HD back and forth within the detection area of the proximity sensor 105) on the triangles TR displayed above or below the displayed information CAN. By viewing such a display, the user can perform operations accurately even after forgetting the combination of gesture operations for a desired input.
  • the frame FR and the information CAN are referred to as navigation display.
  • When the user moves the hand HD from left to right, the processor 121 determines this based on the output signal from the proximity sensor 105, and, as shown in FIG. 20(c), an arrow pointing from left to right is displayed in the leftmost frame FR on the screen of the image display unit 104B. The user who sees this can tell that the processor 121 has correctly recognized the gesture operation performed. If the display in the frames FR shows that the gesture operation was not recognized as intended, the user can invalidate the previous gesture operation by the cancel gesture operation as described above.
  • Further, the processor 121 narrows the candidates down to the combinations whose first gesture operation is moving the hand HD from left to right, and displays the updated information CAN as shown in FIG. 20.
  • In the updated information CAN, only gesture operations that can still be performed are displayed, so the user who sees this can tell that, for example, "stop video recording" can be input by moving the hand HD from bottom to top.
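The navigation display described above can be thought of as prefix filtering plus paging: the candidates shown in the information CAN are the list entries that start with the gestures recognized so far, displayed two at a time. A small sketch under those assumptions (the function names and the page-size handling are illustrative):

```python
# Sketch of the navigation display logic: keep only combinations consistent with the
# gesture history and page them two at a time, as in the information CAN of FIG. 20.

def remaining_candidates(combo_list, history):
    """combo_list maps gesture tuples to input contents; history is the gestures so far."""
    prefix = tuple(history)
    return {combo: content for combo, content in combo_list.items()
            if combo[:len(prefix)] == prefix and len(combo) > len(prefix)}

def page_candidates(candidates, page, per_page=2):
    """Return the entries shown on one page of the information CAN."""
    items = sorted(candidates.items())
    start = page * per_page
    return items[start:start + per_page]   # paged forward by the click on the triangle TR
```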
  • When the candidates are narrowed down further, information CAN indicating combinations consisting of one remaining gesture operation is displayed.
  • The user then performs, for example, the gesture operation of moving the hand HD from the upper left to the lower right, or confirms the input by performing the above-described input confirmation gesture operation, so that the moving image reproduction desired by the user can be performed.
  • If the mode switching gesture operation is performed at this point, the mode shifts to the single mode and the screen changes to the one shown in FIG. 20(a); if a gesture operation other than the mode switching gesture operation is performed, the screen of the image display unit 104B changes, and information CAN indicating the combination candidates that follow that gesture operation is displayed.
  • FIG. 21 is a diagram illustrating an example of an event list and a combination list.
  • an event list and a combination list are used for the call application.
  • each event corresponds to the input contents.
  • “event1” is the content for inputting an instruction to call “Sales Department Ai Aoki”,
  • “event2” is the content for inputting an instruction to call “Sales Department Ito Kaoru”,
  • “event3” is the content for inputting an instruction to call “Planning Department Takumi Aizawa”,
  • and “eventX” is the content for inputting an instruction to end the call.
  • the execution program corresponding to the event is determined with reference to the event list.
  • Execution of “event1” can be instructed by inputting the gesture operation “action3”,
  • execution of “event2” by inputting the gesture operations “action2 + action2”,
  • execution of “event3” by inputting the gesture operations “action1 + action4 + action5”,
  • and execution of “eventX” by inputting the gesture operations “action5 + action5”.
  • If the actions are set in advance such that “action1” is a hand movement from bottom to top, “action2” is a hand movement from top to bottom, “action3” is a hand movement from right to left, “action4” is a hand movement from left to right, and “action5” is a back-and-forth (click) movement of the hand, then, when the combination mode is set, the processor 121 collates the detected gesture operations with the combination list and the event list and can thereby call the intended party.
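Under the action definitions above, the combination list can be kept as strings such as "action2 + action2" and resolved to events at run time. A sketch of that lookup with the handlers left as stubs (the handler bodies and function names are assumptions; the action/event pairings follow the text):

```python
# Sketch: resolve "actionN + actionN" keys from the combination list of FIG. 21 to the
# events of the event list. Handlers are stubs; the pairings mirror the description.

ACTIONS = {'action1': 'bottom to top', 'action2': 'top to bottom',
           'action3': 'right to left', 'action4': 'left to right',
           'action5': 'click (back and forth)'}

COMBINATION_LIST = {'action3': 'event1',                      # call Sales Department Ai Aoki
                    'action2 + action2': 'event2',            # call Sales Department Ito Kaoru
                    'action1 + action4 + action5': 'event3',  # call Planning Department Takumi Aizawa
                    'action5 + action5': 'eventX'}            # end the call

EVENT_LIST = {'event1': lambda: print('calling Sales Department Ai Aoki'),
              'event2': lambda: print('calling Sales Department Ito Kaoru'),
              'event3': lambda: print('calling Planning Department Takumi Aizawa'),
              'eventX': lambda: print('ending the call')}

def dispatch(detected_actions):
    """detected_actions: e.g. ['action2', 'action2'], built from recognized gestures."""
    event = COMBINATION_LIST.get(' + '.join(detected_actions))
    if event is not None:
        EVENT_LIST[event]()            # run the program registered for this event

dispatch(['action1', 'action4', 'action5'])   # -> calling Planning Department Takumi Aizawa
```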
  • FIG. 22A shows an input screen directly input by a user using an external PC or the like.
  • On this screen, “action” defines a gesture operation pattern (action setting): specifically, “action1” indicates movement of the hand from bottom to top, “action2” indicates movement of the hand from top to bottom, “action3” indicates movement of the hand from right to left, “action4” indicates movement of the hand from left to right, and “action5” indicates a click operation of moving the hand back and forth.
  • “Mode” indicates an application name,
  • and “event” indicates input contents.
  • The above event list and combination list can be created on a PC or the like and then converted into a QR code (registered trademark) (see FIG. 22(b); the QR code (registered trademark) shown differs from the actual one).
  • By imaging the QR code (registered trademark) with the camera 106 mounted on the HMD 100, which serves as the interface unit, and having the processor 121 read it, the lists can be loaded and stored in the RAM 126 or the like.
  • Alternatively, the event list and the combination list can be created on a WEB screen and stored on a specific server, and the communication unit 124 can access that server based on its URL and download the data to the HMD 100.
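The patent leaves the transfer format open; one plausible sketch is to serialize the lists to JSON on the PC and encode the string as a QR code image that the camera 106 can later capture (the JSON layout and the use of the third-party qrcode package are assumptions, not part of the patent):

```python
# Sketch: serialize an event list and combination list on a PC and encode them as a
# QR code. The JSON layout and the 'qrcode' package (pip install qrcode[pil]) are
# assumptions for illustration only.
import json
import qrcode

lists = {
    'mode': 'Call',
    'combination_list': {'action2 + action2': 'event2'},
    'event_list': {'event2': 'call Sales Department Ito Kaoru'},
}

payload = json.dumps(lists, ensure_ascii=False)
img = qrcode.make(payload)            # the HMD camera 106 would image and decode this
img.save('gesture_lists_qr.png')
```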
  • FIG. 23A is a diagram illustrating an example of an event in which contents are input by a combination of gesture operations for each application.
  • FIG. 23B shows an example of a combination list.
  • all applications refer to a combination list having the same setting.
  • For example, the processor 121 determines that an instruction to start the QR code (registered trademark) reading application, corresponding to “event1”, has been input.
  • In another application, the processor 121 determines that an instruction such as “call Mr. XX” has been input.
  • These assignments can be easily set by means of an event list and a combination list. These lists are stored in the RAM 126, and the processor 121 can read the list corresponding to each application being executed. Both the combination list and the event list can be input directly or externally via a QR code (registered trademark) or the network.
  • In the above description, gesture operations based mainly on the moving direction of the indicator have been described as examples.
  • However, a change in the shape of the hand, such as opening and closing the hand, may be added to the gesture operations to be detected.
  • When such a gesture operation is displayed on the display unit 104 (see FIG. 20), it can be represented by an animation or the like that repeats the motion of opening and closing the hand.
  • Reference signs: 100 HMD; 101 frame; 101a front part; 101b side part; 101c side part; 101d long hole; 101e long hole; 102 eyeglass lens; 103 main body part; 104 display unit; 104A image forming unit; 104B image display unit; 104DR display control unit; 104a light source; 104b unidirectional diffuser; 104c condensing lens; 104d display element; 104f eyepiece prism; 104g deflection prism; 104h hologram optical element; 104i screen; 105 proximity sensor; 105a light receiving unit; 106 camera; 106a lens; 107 right sub-main body; 107a protrusion; 108 left sub-main body; 108a protrusion; 109 geomagnetic sensor; 110A acceleration sensor; 110B angular velocity sensor; 111A speaker; 111B microphone; 111C earphone; 113 power supply circuit; 121 processor; 122 operation unit; 123 GPS reception unit; 124 communication unit; 125 ROM; 126 RAM; 127 battery; 128 battery control unit; 129 storage device; 130 power supply circuit; CD cord; CTU control unit; HD hand; HS wiring; PL1 base end surface; PL

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to an input device, an electronic device, and a head-mounted display with which complex input can be enabled by means of a combination of gesture operations while keeping a simple configuration. A control device can selectively set: a single mode in which, when a single gesture operation is detected by a detection device, input content corresponding to the single gesture operation is determined; or a combination mode in which, when a plurality of gesture operations are detected in succession by the detection device, input content different from the input content for a single detected gesture operation is determined.
PCT/JP2016/086504 2015-12-17 2016-12-08 Input device, electronic device and head-mounted display WO2017104525A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015245836 2015-12-17
JP2015-245836 2015-12-17

Publications (1)

Publication Number Publication Date
WO2017104525A1 true WO2017104525A1 (fr) 2017-06-22

Family

ID=59056420

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/086504 WO2017104525A1 (fr) 2015-12-17 2016-12-08 Input device, electronic device and head-mounted display

Country Status (1)

Country Link
WO (1) WO2017104525A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014501413A (ja) * 2010-12-30 2014-01-20 トムソン ライセンシング ジェスチャ認識のためのユーザ・インタフェース、装置および方法
JP2014099184A (ja) * 2008-03-04 2014-05-29 Qualcomm Inc 改良されたジェスチャに基づく画像操作
JP2015197724A (ja) * 2014-03-31 2015-11-09 株式会社メガチップス ジェスチャー検出装置、ジェスチャー検出装置の動作方法および制御プログラム

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014099184A (ja) * 2008-03-04 2014-05-29 Qualcomm Inc 改良されたジェスチャに基づく画像操作
JP2014501413A (ja) * 2010-12-30 2014-01-20 トムソン ライセンシング ジェスチャ認識のためのユーザ・インタフェース、装置および方法
JP2015197724A (ja) * 2014-03-31 2015-11-09 株式会社メガチップス ジェスチャー検出装置、ジェスチャー検出装置の動作方法および制御プログラム


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16875505

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16875505

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP

点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载