
WO2013136498A1 - Image recognition device, image recognition method, image recognition program, and storage medium - Google Patents

Image recognition device, image recognition method, image recognition program, and storage medium

Info

Publication number
WO2013136498A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
light amount
imaging
image recognition
light
Prior art date
Application number
PCT/JP2012/056767
Other languages
English (en)
Japanese (ja)
Inventor
坂 剛
Original Assignee
パイオニア株式会社 (Pioneer Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パイオニア株式会社 (Pioneer Corporation)
Priority to PCT/JP2012/056767
Publication of WO2013136498A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3664 Details of the user input interface, e.g. buttons, knobs or sliders, including those provided on a touch screen; remote controllers; input using gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation

Definitions

  • The present invention relates to an image recognition device, an image recognition method, an image recognition program, and a recording medium for recognizing a recognition object in an image captured by a camera.
  • In one known technique, a specific light source image is detected in a captured image, and the camera then captures with an exposure time optimized for the image excluding that light source image. Even if light from the specific light source falls on the captured image, the whole image remains in a state suitable for image recognition, unaffected by the backlight.
  • In such prior art, however, the image of a specific light source must be detected in every captured image by complex algorithmic processing, and the exposure must be readjusted so that the image as a whole does not fail after masking. When there are multiple light sources, the processing becomes complicated and the processing time grows. Consequently, when the change in position and shape of a recognition object must be recognized in real time and clearly across a series of continuously captured image frames, as with the gesture operation input in a moving body described above, applying the prior art would require hardware with very high processing power, which is not realistic. There has therefore been a demand for a technique that can recognize a recognition object in real time and clearly even when the device is mounted on a moving body and captures moving images.
  • The problems to be solved by the present invention include the above problem as one example.
  • The invention according to claim 1 is an image recognition device comprising: imaging means, mounted on or carried in a moving body, for capturing a moving-image-format image composed of a plurality of image frames taken in time series, based on a mask setting and exposure parameters; light amount detection means for detecting the incident light amount in an image frame previously captured by the imaging means; light amount determination means for determining whether the incident light amount detected by the light amount detection means is larger than a predetermined threshold; small-light-amount setting means for, when the light amount determination means finds the incident light amount to be equal to or less than the predetermined threshold, setting the exposure parameters for the capture of the current image frame according to the incident light amount, without making a mask setting; large-light-amount setting means for, when the light amount determination means finds the incident light amount to be larger than the predetermined threshold, making a mask setting that masks the light source area, that is, the range in which a light amount of a predetermined level or more was detected in the previously captured image frame, and setting the current exposure parameters based on the history of exposure parameters set by the small-light-amount setting means before the previous capture; and recognition means for recognizing a predetermined recognition object in an image frame captured by the imaging means. A minimal code sketch of this decomposition follows.
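  • As an illustrative aid only (the patent defines no source code), the following Python sketch shows one possible decomposition of the claimed means. All identifiers, the Protocol stubs, and the threshold semantics are our assumptions, not part of the disclosure.

    from dataclasses import dataclass
    from typing import Optional, Protocol

    import numpy as np

    @dataclass
    class ExposureParams:
        aperture_value: float   # f-number used for exposure control
        shutter_time_s: float   # shutter opening time in seconds

    # A mask is a boolean array over the imaging region; None means "no mask".
    Mask = Optional[np.ndarray]

    class ImagingMeans(Protocol):
        def capture(self, mask: Mask, exposure: ExposureParams) -> np.ndarray: ...

    class RecognitionMeans(Protocol):
        def recognize(self, frame: np.ndarray) -> object: ...

    def light_amount_detection(prev_frame: np.ndarray) -> float:
        # Light amount detection means: incident light of the previous frame.
        return float(prev_frame.sum())

    def light_amount_determination(incident: float, threshold: float) -> bool:
        # Light amount determination means: True when the amount is "large".
        return incident > threshold

  • The small- and large-light-amount setting means then branch on light_amount_determination; a fuller sketch of that branch appears with the flowchart description below.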
  • The invention according to claim 7 is an image recognition method used in an image recognition device mounted on a moving body, comprising: an imaging step of capturing a moving-image-format image composed of a plurality of image frames taken in time series, based on a mask setting and exposure parameters; a light amount detection step of detecting the incident light amount in an image frame previously captured in the imaging step; a light amount determination step of determining whether the incident light amount detected in the light amount detection step is larger than a predetermined threshold; a small-light-amount setting step of, when the incident light amount is found to be equal to or less than the predetermined threshold, setting the exposure parameters for the capture of the current image frame according to the incident light amount, without making a mask setting; and a large-light-amount setting step of, when the incident light amount is found to be larger than the predetermined threshold, making a mask setting that masks the light source area, that is, the range in which a light amount of a predetermined level or more was detected in the previously captured image frame, and setting the current exposure parameters based on the history of exposure parameters set in the small-light-amount setting step before the previous capture.
  • The invention according to claim 8 is an image recognition program that causes an image recognition device to execute the image recognition method according to claim 7.
  • The invention according to claim 9 is a recording medium storing the image recognition program according to claim 8 so as to be readable by an image recognition device.
  • The figures include, for example, a diagram illustrating a countermeasure for the case where the mask range overlaps the user's hand, a perspective view showing a configuration example of a vehicle carrying a navigation device that includes a modification of the image recognition device of the present invention, and an example of a front image captured by the front camera.
  • FIG. 1 is a perspective view showing an example of gesture operation input using a navigation device including an embodiment of the image recognition device of the present invention.
  • The navigation device S is provided with an interface device 1 beside the steering wheel 101 and instruments 102 of a vehicle V, which is a moving body.
  • The interface device 1 of this example is formed as a rectangular flat plate as a whole, with a display 2 and a device camera 3 provided on its front surface.
  • A plurality of operation icons P corresponding to various operations of the navigation device S are displayed on the display 2.
  • The user in the driver's seat (or passenger's seat) does not directly touch an operation icon P on the display 2; instead, pointing an index finger at the space just in front of the icon's display position is recognized as selecting and operating that icon.
  • As gesture operation inputs handled by the navigation device S of the present embodiment, in addition to this selection instruction operation, operation inputs such as scrolling the display screen of the display 2 in accordance with the overall movement or pointing direction of the user's hand H, and enlarging or reducing the display screen in accordance with the position or movement of a fingertip, are also possible (not shown).
  • FIG. 2 is a block diagram illustrating a hardware configuration example of the navigation device S.
  • The navigation device S includes an interface device 1 and a navigation device body 4.
  • The interface device 1 includes the display 2 and the device camera 3 as described above.
  • The display 2 is composed of, for example, an LCD panel, and has a function of displaying various information screens based on the image signal input from a graphic controller (described later) of the navigation device body 4.
  • The device camera 3 uses, for example, a CCD image sensor, captures images mainly in the direction between the driver's seat side and the passenger's seat side of the vehicle V described above (or can be rotated to image the surroundings), and has a function of outputting the corresponding image signal to a CPU (described later) of the navigation device body 4.
  • The device camera 3 captures the room image in moving image format by continuously capturing a plurality of image frames in time series at sufficiently short intervals.
  • The navigation device body 4 includes a CPU 11, a storage device 12, a GPS 13, a graphic controller 14, and a camera controller 15.
  • The CPU 11 has a function of controlling the navigation device S as a whole by performing various calculations according to predetermined programs, exchanging information with the other units, and outputting various control instructions.
  • The storage device 12 includes a ROM 12a, a RAM 12b, and a storage medium 12c.
  • The ROM 12a is an information storage medium in which various processing programs and other necessary information are written in advance.
  • The RAM 12b is an information storage medium to which information necessary for executing the various programs is written and from which it is read.
  • The storage medium 12c is a non-volatile information storage medium such as a flash memory or a hard disk.
  • The GPS 13 has a function of measuring the current location of the vehicle V to acquire current position information, and of retrieving a predetermined map image, facility information, and the like based on map information stored in advance.
  • The graphic controller 14 has a function of acquiring image data from a video RAM (not shown) and the GPS 13 under the control of the CPU 11, and of outputting an image signal based on that image data for display on the display 2.
  • The camera controller 15 has a function of performing imaging control of the device camera 3 in accordance with the mask setting and exposure parameters specified by the CPU 11.
  • The mask setting is a setting that masks an arbitrary range of the imaging area of the device camera 3 so that that range is not imaged, and the exposure parameters are the aperture value and the shutter speed (shutter opening time) with which exposure control is performed over the entire imaging area of the device camera 3 (these settings are described in detail later).
  • FIG. 3 is a block diagram illustrating a software configuration example related to the gesture operation input of the navigation device S.
  • The software blocks related to gesture operation input include an imaging control unit 21, an imaging adjustment unit 22, an image processing unit 23, a hand gesture interface 24, and a graphic user interface 25.
  • The imaging control unit 21 is a software block executed by the device camera 3 on its own.
  • The imaging adjustment unit 22 is a software block executed jointly by the CPU 11 and the camera controller 15 of the navigation device body 4.
  • The image processing unit 23, hand gesture interface 24, and graphic user interface 25 are software blocks executed by the CPU 11 alone. This division is only an example, and various other allocations of processing are possible.
  • The imaging control unit 21 includes an imaging unit 31, an aperture control unit 32, and a shutter speed control unit 33.
  • The imaging unit 31 performs imaging in units of image frames in hardware, using the device camera 3 with the aperture state controlled by the aperture control unit 32 and the shutter speed controlled by the shutter speed control unit 33, over the imaging region excluding the mask range set by the mask setting unit described later.
  • The aperture control unit 32 controls the aperture state of the device camera 3 in hardware, based on the aperture value specified by the light amount detection/comparison unit described later.
  • The shutter speed control unit 33 controls the shutter speed of the device camera 3 in hardware, based on the shutter speed specified by the light amount detection/comparison unit described later.
  • The imaging adjustment unit 22 includes a light amount detection/comparison unit 34 and a mask setting unit 35.
  • The light amount detection/comparison unit 34 detects the incident light amount over the entire image frame previously captured by the imaging unit 31, compares it with a predetermined threshold, and accordingly specifies the aperture value to the aperture control unit 32 and the shutter speed to the shutter speed control unit 33.
  • The mask setting unit 35 detects the range in which a particularly high light amount was observed in the imaging region of the image frame previously captured by the imaging unit 31, and sets the mask range for the image frame to be captured this time so as to overlap that range, as sketched below.
  • The set mask range is output to the imaging unit 31 and to the light amount detection/comparison unit 34, and the light amount detection/comparison unit 34 specifies the aperture value and shutter speed with the size of the mask range taken into account.
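  • As a rough, non-normative illustration of the mask setting unit 35, the sketch below thresholds the previous frame's luminances to find bright pixels and grows each one by a margin so that the mask range overlaps the detected range; the values bright_level and margin are invented for the example.

    import numpy as np

    def detect_mask_range(prev_frame: np.ndarray,
                          bright_level: float = 240.0,
                          margin: int = 8) -> np.ndarray:
        """Boolean mask covering the high-light-amount range plus a margin."""
        bright = prev_frame >= bright_level        # particularly high light amount
        mask = np.zeros_like(bright)
        ys, xs = np.nonzero(bright)
        for y, x in zip(ys, xs):                   # dilate each bright pixel
            y0, y1 = max(0, y - margin), min(bright.shape[0], y + margin + 1)
            x0, x1 = max(0, x - margin), min(bright.shape[1], x + margin + 1)
            mask[y0:y1, x0:x1] = True
        return mask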
  • The image processing unit 23 includes an object detection unit 36, a hand detection unit 37, a frame buffer 38, and a comparison unit 39.
  • The object detection unit 36 performs various kinds of image processing on each image frame captured by the imaging unit 31 and detects the individual display objects in it. Specifically, image processing such as filtering, edge detection, and contour detection is performed so that each display object is easier to recognize.
  • The hand detection unit 37 detects the display portion corresponding to the user's hand from among the display objects detected by the object detection unit 36.
  • The frame buffer 38 stores the display portion of the hand detected in each image frame.
  • The comparison unit 39 compares the position and shape of the hand display portions stored in the frame buffer 38 for the successive image frames.
  • The hand gesture interface 24 recognizes the hand movement based on the comparison result of the comparison unit 39 and estimates the operation content intended by the user.
  • The graphic user interface 25 determines the user's operation input from the content currently displayed on the display 2 and the operation content estimated by the hand gesture interface 24.
  • In this configuration, an indoor image of the vehicle V is captured in moving image format by the device camera 3, and the mask setting and exposure parameters of the device camera 3 are variably adjusted for each image frame.
  • The imaging of the room image by the device camera 3 is performed to recognize the user's hand as the premise of gesture operation input, and there is no need to display the indoor image on the display 2 and show it to the user.
  • For this reason, various adjustments for performing image recognition of the hand appropriately are possible in capturing the indoor image. This point is described in detail below.
  • FIG. 4 is a diagram illustrating an example of an indoor image captured by the device camera 3 when the amount of incident light is normal.
  • The vehicle V is a general passenger car, and light from the surroundings can enter through the window glass on the left and right sides and at the rear of the vehicle V.
  • Here, the driver seated in the driver's seat on the left in the figure is the user, and gesture operation input is performed by the movement of the left hand H.
  • When the total amount of light over the entire imaging region XY of the image frame captured by the device camera 3 is within the normal range, the position and shape of the user's left hand H raised in front of the device camera 3, as shown in the figure, are clearly recognized by the processing of the image processing unit 23 (the thick-line shaded portion in the figure).
  • The light amount detection described above is calculated, for example, from the sum of the luminances detected at the individual pixels of the CCD image sensor in the device camera 3; in the present embodiment, the total light amount detected over the entire imaging region XY is taken as the incident light amount.
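  • In code terms this detection is a single reduction. A minimal sketch, assuming 2-D arrays of per-pixel luminances; the function name and the optional mask argument are ours:

    from typing import Optional

    import numpy as np

    def incident_light_amount(frame: np.ndarray,
                              mask: Optional[np.ndarray] = None) -> float:
        """Total luminance over the imaging region XY; pixels inside the
        mask range receive no light and contribute nothing."""
        if mask is not None:
            frame = np.where(mask, 0.0, frame)
        return float(frame.sum())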
  • When a large amount of light enters from a specific light source, the imaging region XY is scanned to detect the range corresponding to the light source L as the light source area A, and a mask range covering the light source area A and its surroundings is set.
  • The exposure adjustment for the current imaging is then performed with the average values of the aperture value and shutter speed (that is, the exposure parameters) applied up to the previous time.
  • An image frame captured under such imaging control has uniform and appropriate luminance throughout, as shown in the figure, so that each display object, including the user's hand H, becomes clear and image recognition of it becomes possible.
  • The image frame captured as the indoor image merely lacks the light source area A, and when viewed as a moving-image-format image it blinks because of the instantaneous increase and decrease in exposure.
  • However, the indoor image is an image for recognizing the user's hand H, and since it does not need to be displayed on the display 2 and shown to the user, this poses no problem.
  • When the set mask range is sufficiently small, the resulting change in the incident light amount is small, and the usual automatic adjustment of the aperture and shutter speed according to the incident light amount over the entire imaging region XY is performed.
  • FIG. 7 is an example of a flowchart showing the control executed by the CPU 11 of the navigation device body 4 to realize the operation described above. This flow is called and executed when, for example, the graphic user interface 25 requests gesture operation input while the device camera 3 is capturing the room image in moving image format.
  • In step S5, the device camera 3 is configured in hardware with the specified aperture value and shutter speed, without setting any mask range.
  • In step S10, the device camera 3 captures the indoor image for just one image frame.
  • The procedures of steps S5 and S10 correspond to the imaging means and the imaging step described in each claim.
  • In step S15, the incident light amount (abbreviated as total light amount in the figure) over the entire image frame just captured is detected.
  • The procedure of step S15 corresponds to the light amount detection means and the light amount detection step described in each claim.
  • In step S20, it is determined whether the incident light amount is larger than the normal amount (corresponding to the predetermined threshold). If the incident light amount is within the normal range, the determination is not satisfied and the routine goes to step S25.
  • The procedure of step S20 corresponds to the light amount determination means and the light amount determination step described in each claim.
  • In step S25, the mask setting is made such that no mask range is provided in the next image frame capture.
  • In step S30, the applicable aperture value is calculated based on the incident light amount detected in step S15.
  • In step S35, the average of the applied aperture values calculated up to the previous time is computed as the average aperture value, and this is stored in the storage medium 12c or the like together with the applied aperture value just calculated in step S30.
  • In step S40, the applied shutter speed is calculated based on the incident light amount detected in step S15.
  • In step S45, the average of the applied shutter speeds calculated so far is computed as the average shutter speed, and this is stored in the storage medium 12c or the like together with the applied shutter speed just calculated in step S40.
  • In step S50, the device camera 3 is configured in hardware with the mask setting, applied aperture value, and applied shutter speed set at this point.
  • In step S55, the device camera 3 captures the indoor image for just one image frame. Note that steps S50 and S55 also correspond to the imaging means and the imaging step described in each claim.
  • In step S100, an image recognition object detection process (whose details are not shown) is performed to recognize and detect the user's hand H, the recognition object in this case, from the image frame captured in step S55.
  • The procedure of step S100 corresponds to the recognition means and the recognition step described in each claim.
  • In step S60, it is determined whether the graphic user interface 25 has completed the gesture operation input through the detection of the user's hand H in step S100. If the gesture operation input has not been completed, the determination is not satisfied, and the process returns to step S15 to repeat the same procedure.
  • On the other hand, if it is determined in step S20 that the incident light amount exceeds the normal range, the determination is satisfied and the process proceeds to step S65.
  • In step S65, the imaging region XY of the image frame just captured is scanned, and pixel ranges with a light amount higher than a predetermined level are detected as the light source area A.
  • Since the light source L corresponding to the light source area A may involve reflected light and oblique light in addition to direct backlight, a plurality of light source areas A may be detected.
  • In step S70, the mask setting is made corresponding to the light source area A detected in step S65.
  • Specifically, the pixels of the CCD image sensor in the device camera 3 that correspond to the light source area A within the imaging region XY are set not to receive light.
  • In step S75, it is determined whether the total area of the light source areas A detected in step S65 (abbreviated as total light source area in the figure) is larger than a predetermined threshold.
  • As described above, when a plurality of light source areas A are detected, the comparison is made against the sum of their areas. If the total area of the light source areas A is equal to or smaller than the predetermined threshold, the determination is not satisfied and the routine goes to step S30. In other words, since the mask range set corresponding to the light source area A is sufficiently small, the aperture value and shutter speed can be obtained from the incident light amount of the entire image frame detected in step S15 without affecting image recognition, and normal exposure adjustment is performed.
  • If, on the other hand, the total area of the light source areas A is larger than the predetermined threshold in step S75, exposure adjustment suited to image recognition of the user's hand H is performed by the following procedure.
  • In step S80, the average aperture value stored in step S35 is read out and set as the applied aperture value for the current imaging.
  • In step S85, the average shutter speed stored in step S45 is read out and set as the applied shutter speed for the current imaging. The process then proceeds to step S50.
  • In summary, when the incident light amount detected over the entire previously captured image frame is within the normal range, the exposure parameters are set by feedback based on the incident light amount, without setting a mask. When the detected incident light amount exceeds the normal range but the light source area A is sufficiently small, only the mask setting corresponding to the light source area A is made, and the exposure parameters are still set by feedback based on the incident light amount.
  • The procedure from step S25 to step S45 in this case corresponds to the small-light-amount setting means and the small-light-amount setting step described in each claim.
  • When the incident light amount detected over the entire image frame exceeds the normal range and the light source area A is large, the corresponding mask setting is made and the exposure parameters are set based on the previous history.
  • The procedure from step S65 to step S85 in this case corresponds to the large-light-amount setting means and the large-light-amount setting step described in each claim.
  • The camera imaging in step S10 and the repeated camera imaging in step S55 are synchronized so as to be executed at fixed time intervals corresponding to a predetermined frame rate. The overall branch structure is summarized in the sketch below.
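  • The branch structure of steps S15 to S85 can be condensed into one function. The sketch below is a minimal reading of the flowchart, not the patented implementation: the metering curve exposure_from_light is a stand-in (the patent specifies no formula), and all thresholds are caller-supplied.

    from typing import List, Optional, Tuple

    import numpy as np

    def exposure_from_light(total_light: float) -> Tuple[float, float]:
        # Stand-in metering curve mapping incident light to (aperture, shutter);
        # any monotone mapping that darkens exposure as light rises would do.
        aperture = 2.0 + total_light / 1e7
        shutter = max(1e-4, 1.0 / (30.0 + total_light / 1e5))
        return aperture, shutter

    def control_step(prev_frame: np.ndarray,
                     history: List[Tuple[float, float]],
                     normal_amount: float,
                     area_threshold: int,
                     bright_level: float = 240.0
                     ) -> Tuple[Optional[np.ndarray], float, float]:
        """One pass of steps S15-S85: camera settings for the next frame."""
        total_light = float(prev_frame.sum())                     # S15
        if total_light <= normal_amount:                          # S20: normal
            aperture, shutter = exposure_from_light(total_light)  # S30, S40
            history.append((aperture, shutter))                   # S35, S45
            return None, aperture, shutter                        # S25: no mask
        mask = prev_frame >= bright_level                         # S65: area A
        if mask.sum() <= area_threshold:                          # S75: small area
            aperture, shutter = exposure_from_light(total_light)  # normal exposure
            history.append((aperture, shutter))
            return mask, aperture, shutter                        # S70 + S30-S45
        avg = np.mean(history, axis=0) if history else (2.8, 1 / 60)
        return mask, float(avg[0]), float(avg[1])                 # S70, S80, S85

  • Each returned triple would then be applied in hardware in steps S50 and S55 before the next frame is captured.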
  • As described above, the image recognition device of the present embodiment comprises: steps S5, S10, S50, and S55 (corresponding to the imaging means), which capture a moving-image-format image composed of a plurality of image frames taken in time series on the vehicle V (corresponding to a moving body), based on the mask setting and exposure parameters; step S15 (corresponding to the light amount detection means), which detects the incident light amount in the previously captured image frame; step S20 (corresponding to the light amount determination means), which determines whether that incident light amount is larger than the normal amount; steps S25 to S45 (corresponding to the small-light-amount setting means), which, when the incident light amount is equal to or less than the normal amount, set the exposure parameters according to the incident light amount without setting a mask for the current image frame capture; steps S65 to S85 (corresponding to the large-light-amount setting means), which, when the incident light amount is larger than the normal amount in the procedure of step S20, make a mask setting for the current image frame capture so as to mask the light source area A, that is, the range in which a light amount of a predetermined level or more was detected in the previously captured image frame, and set the current exposure parameters based on the history of exposure parameters set in steps S25 to S45 before the previous time; and step S100 (corresponding to the recognition means), which recognizes the user's hand H, the predetermined recognition object, from the image frame captured in steps S50 and S55.
  • Likewise, the image recognition method of the present embodiment comprises the corresponding imaging step (steps S5, S10, S50, and S55), light amount detection step (step S15), light amount determination step (step S20), small-light-amount setting step (steps S25 to S45), large-light-amount setting step (steps S65 to S85), and recognition step (step S100).
  • With this configuration, when the incident light amount in the previously captured image frame is within the normal range, the exposure parameters are set by feedback based on the incident light amount.
  • When the incident light amount in the previously captured image frame is larger than the normal amount, the mask setting corresponding to the light source area A is made, and exposure adjustment appropriate for enabling image recognition of the user's hand H is performed.
  • In an image frame captured with the mask set in this way, the incidence of light is blocked over the mask range, so the incident light amount over the entire imaging region XY is likely to change greatly.
  • For this reason, together with the mask setting, the exposure parameters for the current capture are set based on the history of the exposure parameters set before the previous time.
  • The image frame obtained in this way merely lacks the mask range; viewed as a moving image it blinks because of the instantaneous increase and decrease in exposure, but it is suitable for image recognition, and since it does not need to be shown to the user there is no problem. As a result, the recognition object can be recognized in real time and clearly even when the device is mounted on a moving body and captures moving images.
  • In the present embodiment, in particular, steps S65 to S75 and S30 to S45 make the mask setting for masking the light source area A even when the area of the light source area A is equal to or smaller than a predetermined area, and in that case set the current exposure parameters according to the incident light amount.
  • In this case, since the mask range is sufficiently small, the aperture value and shutter speed can be obtained based on the incident light amount of the entire image frame without affecting image recognition, and normal exposure adjustment can be performed. As a result, even when a few light source areas A with high light intensity concentrated in a narrow range are detected, the normal exposure adjustment that is optimal for image recognition can still be performed.
  • Also, in the present embodiment, in particular, the exposure parameters for the current capture are set to the average values of the exposure parameters set in the procedure of steps S25 to S45 before the previous time.
  • In the embodiment described above, the light amount detection means detects the incident light amount as the total light amount over the entire range of the image frame.
  • The present invention, however, is not limited to this; other detection modes may be adopted, such as detecting the incident light amount as the sum of the light amounts detected at detection points distributed at equal intervals over the imaging region XY.
  • In the embodiment described above, the exposure parameters are the aperture value and the shutter speed of the imaging means.
  • With these, the exposure adjustment in the imaging by the device camera 3 can be performed with high accuracy.
  • The present invention, however, is not limited to this.
  • The adjustment may be performed with only one of them, or other exposure parameters may be used.
  • When the mask range overlaps the user's hand, the position and shape of the current user's hand H may be estimated based on the position and shape of the user's hand H' recognized in the procedure of step S100 before the previous time (see the thick broken-line hatching in the figure); a minimal sketch of such an estimation follows.
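  • One simple reading of this estimation (ours, not the patent's) is a linear extrapolation of the hand centroid from the two most recent detections:

    from typing import Optional, Tuple

    Point = Tuple[float, float]

    def estimate_hand_position(prev2: Optional[Point],
                               prev1: Optional[Point]) -> Optional[Point]:
        """Extrapolate the current hand position from the last two frames."""
        if prev1 is None:
            return None
        if prev2 is None:
            return prev1                     # no velocity info: hold position
        vx, vy = prev1[0] - prev2[0], prev1[1] - prev2[1]
        return (prev1[0] + vx, prev1[1] + vy)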
  • The imaging method according to the present invention is not limited to gesture operation input using the indoor image; it may also be used, for example, for image recognition in a front image of the vehicle V.
  • In this modification, a single front camera 5 is provided at the front of the rearview mirror 103 installed on the ceiling inside the vehicle V, and the front camera 5 captures a front image of the vehicle V in the traveling direction.
  • With the same imaging control as in the embodiment described above, the vehicle Vf ahead can be clearly recognized in real time.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)

Abstract

An object of the present invention is to perform accurate, real-time image recognition of an object even when the image recognition device is installed in a moving body and captures video. To achieve this object, the present invention relates to an image recognition method in which, when the incident light amount detected over the entire imaging region X-Y in the previously captured image frame is equal to or greater than a normal amount, it is determined that a large amount of light has entered the imaging region X-Y from a specific light source (L). The imaging region X-Y is then scanned to detect the range corresponding to the light source (L) as a light source area (A), and a mask range covering the light source area (A) and its surroundings is set. Since light entering the mask range is blocked in an image frame captured with this mask set, the incident light amount over the entire imaging region X-Y can be expected to change more sharply than normal. Exposure adjustment for the next capture is therefore performed using the average values of the exposure parameters applied up to the previous capture.
PCT/JP2012/056767 2012-03-15 2012-03-15 Image recognition device, image recognition method, image recognition program, and storage medium WO2013136498A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/056767 WO2013136498A1 (fr) 2012-03-15 2012-03-15 Image recognition device, image recognition method, image recognition program, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2012/056767 WO2013136498A1 (fr) 2012-03-15 2012-03-15 Image recognition device, image recognition method, image recognition program, and storage medium

Publications (1)

Publication Number Publication Date
WO2013136498A1 (fr) 2013-09-19

Family

Family ID: 49160464

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/056767 WO2013136498A1 (fr) 2012-03-15 2012-03-15 Image recognition device, image recognition method, image recognition program, and storage medium

Country Status (1)

Country Link
WO (1) WO2013136498A1 (fr)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007251258A (ja) * 2006-03-13 2007-09-27 Fujitsu Ten Ltd Image recognition device
JP2009060289A (ja) * 2007-08-30 2009-03-19 Honda Motor Co Ltd Camera exposure control device
JP2009077230A (ja) * 2007-09-21 2009-04-09 Seiko Epson Corp Image processing device, microcomputer, and electronic apparatus
JP2011178301A (ja) * 2010-03-02 2011-09-15 Panasonic Corp Obstacle detection device, obstacle detection system including the same, and obstacle detection method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5989251B2 (ja) * 2013-09-12 2016-09-07 三菱電機株式会社 (Mitsubishi Electric Corp.) Operation input device and method, and program and recording medium
CN108415675A (zh) * 2017-02-10 2018-08-17 富士施乐株式会社 (Fuji Xerox Co., Ltd.) Information processing device, information processing system, and information processing method
CN108415675B (zh) * 2017-02-10 2023-06-09 富士胶片商业创新有限公司 (FUJIFILM Business Innovation Corp.) Information processing device, information processing system, and information processing method


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 12871017; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 12871017; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)
