WO2018168038A1 - Driver seating determination device - Google Patents
Driver seating determination device
- Publication number
- WO2018168038A1 (PCT/JP2017/036276)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- driver
- seat
- seating determination
- captured image
- seated
- Prior art date
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R21/00—Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
- B60R21/01—Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
- B60R21/015—Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting the presence or position of passengers, passenger seats or child seats, and the related safety parameters therefor, e.g. speed or timing of airbag inflation in relation to occupant position or seat belt use
Definitions
- The present invention relates to a driver seating determination device, a driver seating determination method, and a driver seating determination program.
- Techniques for determining whether a driver is seated in the driver's seat of a car have been proposed. Such a technique is used, for example, to issue a warning when a driver is seated in the driver's seat but is not wearing a seat belt.
- a pressure-sensitive sensor is provided in the driver's seat so that it can be determined whether or not the driver is seated in the driver's seat.
- The present invention has been made to solve this problem, and an object thereof is to provide a driver seating determination device, a driver seating determination method, and a driver seating determination program capable of accurately determining whether or not a driver is seated in the driver's seat of an automobile.
- A driver seating determination apparatus according to the present invention is connected to at least one camera that captures the driver's seat of an automobile, and includes an image acquisition unit that acquires a captured image generated by the camera, and an analysis unit that determines from the captured image whether or not a driver is seated in the driver's seat.
- With this configuration, a captured image of the driver's seat is acquired from the camera, and whether or not the driver is seated in the driver's seat is determined by analyzing whether or not the driver appears in the captured image. Therefore, it can be reliably determined that the driver is seated in the driver's seat.
- The seating determination apparatus may further include an observation information acquisition unit that acquires observation information of the driver including face behavior information regarding the behavior of the driver's face. In this case, the analysis unit includes a driver state estimation unit that inputs the captured image and the observation information into a learned learning device, trained to determine seating in the driver's seat, and obtains from the learning device seating information indicating whether or not the driver is seated.
- The observation information acquisition unit can acquire, as the face behavior information, information on at least one of whether or not the driver's face can be detected, the position of the face, the orientation of the face, the movement of the face, the gaze direction, the positions of the facial organs, and the opening and closing of the eyes, by performing predetermined image analysis on the acquired captured image.
- The analysis unit may further include a resolution conversion unit that reduces the resolution of the acquired captured image, and the driver state estimation unit can input the captured image with the reduced resolution to the learning device.
- The analysis unit can determine the seating of the driver by various methods. For example, the analysis unit can detect the driver's face from the captured image and thereby determine that the driver is seated in the driver's seat.
- the driver's face can be detected by various methods.
- Alternatively, the analysis unit can detect the organs of a person's face in the captured image and thereby determine that the driver is seated in the driver's seat.
- In order to detect the driver's face, the analysis unit can, for example, be provided with a learned learning device that has been trained to detect a human face, takes a captured image including the driver's seat as input, and outputs whether or not a human face is included in the captured image.
- Each of the seating determination devices may further include a warning unit that issues a warning when it is determined that the driver is not seated in the driver seat.
- the above seating determination devices are particularly effective when the automobile has an automatic driving function.
- the analysis unit can be configured to be able to determine the seating of the driver during the operation of the automatic driving function.
- A driver seating determination method according to the present invention includes a step of photographing the driver's seat of an automobile with at least one camera, and a step of determining from the captured image whether or not the driver is seated in the driver's seat.
- In the seating determination method, it can be determined that the driver is seated in the driver's seat by detecting the driver's face from the captured image.
- In the seating determination method, it can also be determined that the driver is seated in the driver's seat by detecting a human facial organ in the captured image.
- A driver seating determination program according to the present invention causes a computer of an automobile to execute a step of photographing the driver's seat with at least one camera, and a step of determining from the captured image whether or not the driver is seated in the driver's seat.
- In the seating determination program, it can be determined that the driver is seated in the driver's seat by detecting the driver's face from the captured image.
- In the seating determination program, it can also be determined that the driver is seated in the driver's seat by detecting a human facial organ in the captured image.
- FIG. 1 is a partial schematic configuration diagram of an automobile to which the seating determination device is attached.
- FIG. 2 is a diagram illustrating a schematic configuration of a seating determination system.
- The driver's seat 900 is photographed by the camera 3 disposed in front of the driver's seat 900 to obtain a captured image, and when facial organs (eyes, nose, mouth) are detected in the captured image, the driver 800 is determined to be seated in the driver's seat 900.
- the seating determination system includes a seating determination device 1, a learning device 2, and a camera 3.
- the seating determination device 1 can acquire a learned learning device created by the learning device 2 via the network 10, for example.
- the type of the network 10 may be appropriately selected from, for example, the Internet, a wireless communication network, a mobile communication network, a telephone network, a dedicated network, and the like.
- The learned learning device can also be transferred by directly connecting the seating determination device 1 and the learning device 2.
- Alternatively, without connecting the seating determination device 1 and the learning device 2, the learning device trained by the learning device 2 can be stored in a storage medium such as a CD-ROM, and the learning device stored in the storage medium can then be loaded into the seating determination apparatus 1.
- When the camera outputs a moving image, the captured image is transmitted to the seating determination device 1 for each frame, and seating determination is performed.
- FIG. 3 is a block diagram showing a seating determination apparatus according to the present embodiment.
- The seating determination apparatus 1 according to the present embodiment is a computer in which a control unit 11, a storage unit 12, an external interface 13, an input device 14, an output device 15, a communication interface 16, and a drive 17 are electrically connected.
- the communication interface and the external interface are described as “communication I / F” and “external I / F”, respectively.
- the control unit 11 includes a CPU (Central Processing Unit), a RAM (Random Access Memory), a ROM (Read Only Memory), and the like, and controls each component according to information processing.
- The storage unit 12 is, for example, an auxiliary storage device such as a hard disk drive or a solid state drive, and stores a seating determination program 121 executed by the control unit 11, learning result data 122 indicating information related to a learned learning device, and the like.
- the seating determination program 121 is a program for causing the seating determination apparatus 1 to execute a process as to whether or not a human face organ can be detected from a captured image.
- the learning result data 122 is data for setting a learned learner. Details will be described later.
- the communication interface 16 is, for example, a wired LAN (Local Area Network) module, a wireless LAN module, or the like, and is an interface for performing wired or wireless communication via a network.
- The input device 14 is a device for performing input, such as a mouse or a keyboard.
- The output device 15 is a device for output, such as a display or a speaker.
- the external interface 13 is a USB (Universal Serial Bus) port or the like, and is an interface for connecting to an external device such as the camera 3, a speaker in a vehicle, a display, or a device for controlling the speed.
- various displays such as a display for car navigation provided on a dashboard can be used as the display in the vehicle.
- The external devices connected to the external interface 13 are not limited to the above devices and may be selected as appropriate according to the embodiment. The external interface 13 may be provided for each external device to be connected, and the number of interfaces can be selected as appropriate according to the embodiment.
- the drive 17 is, for example, a CD (Compact Disk) drive, a DVD (Digital Versatile Disk) drive, or the like, and is a device for reading a program stored in the storage medium 91.
- the type of the drive 17 may be appropriately selected according to the type of the storage medium 91.
- the seating determination program 121 and / or the learning result data 122 may be stored in the storage medium 91.
- The storage medium 91 is a medium that stores information such as a program by electrical, magnetic, optical, mechanical, or chemical action so that the recorded information can be read by a computer or other device or machine.
- the seating determination apparatus 1 may acquire the seating determination program 121 and / or the learning result data 122 from the storage medium 91.
- Here, a disk-type storage medium such as a CD or a DVD is illustrated as an example.
- the type of the storage medium 91 is not limited to the disk type and may be other than the disk type.
- Examples of the storage medium other than the disk type include a semiconductor memory such as a flash memory.
- the control unit 11 may include a plurality of processors.
- the seating determination device 1 may be composed of a plurality of information processing devices.
- FIG. 4 is a block diagram illustrating the learning device according to the present embodiment.
- The learning device 2 according to the present embodiment is for training the learning device included in the second detection unit 102, and is a computer in which a control unit 21, a storage unit 22, a communication interface 23, an input device 24, an output device 25, an external interface 26, and a drive 27 are electrically connected.
- the communication interface and the external interface are described as “communication I / F” and “external I / F”, respectively.
- the control unit 21 to the drive 27 and the storage medium 92 are the same as the control unit 11 to the drive 17 and the storage medium 91 of the seating determination device 1, respectively.
- The storage unit 22 of the learning device 2 stores a learning program 221 executed by the control unit 21, learning data 222 used for training the learning device, learning result data 122 created by executing the learning program 221, and the like.
- the learning program 221 is a program for causing the learning device 2 to execute a neural network learning process (FIG. 8) described later.
- the learning data 222 is data for performing learning of a learning device in order to detect a human facial organ from a captured image. Details will be described later.
- the learning program 221 and / or the learning data 222 may be stored in the storage medium 92 as in the seating determination apparatus 1.
- the learning device 2 may acquire the learning program 221 and / or the learning data 222 to be used from the storage medium 92.
- the learning device 2 may be a general-purpose server device, a desktop PC, or the like, in addition to an information processing device designed exclusively for the provided service.
- FIG. 5 schematically illustrates an example of a functional configuration of the seating determination apparatus 1 according to the present embodiment.
- the seating determination apparatus 1 functions as a computer including an image acquisition unit 111, an analysis unit 116, and a warning unit 117.
- the image acquisition unit 111 acquires the captured image 123 generated by the camera 3. Further, the analysis unit 116 determines whether or not the driver is seated in the driver's seat from the captured image 123. When the analysis unit 116 determines that the driver is not seated in the driver's seat, the warning unit 117 is configured to issue a warning.
- these functional configurations will be described in detail.
- The analysis unit 116 uses the captured image 123 as input to a learning device trained to detect facial organs, obtains an output value through the arithmetic processing of the learning device, and determines, based on that output value, whether or not a human facial organ is present in the captured image 123.
- The facial organs include the eyes, nose, mouth, and the like, and feature points of at least one of these can be detected. However, depending on the type of camera, the eyes may not be detectable when the driver is wearing sunglasses; in that case, feature points of the nose and mouth can be detected instead. Likewise, when the driver wears a mask, the nose and mouth cannot be detected, so the eye feature points can be detected instead.
- the seating determination apparatus 1 uses, as an example, a learning device that learns about the presence or absence of a facial organ in the captured image 123.
- The learning device 7 is composed of a neural network. Specifically, as shown in FIG. 4, it is a neural network having a multilayer structure used for so-called deep learning, and includes, in order from the input side, an input layer 71, an intermediate layer (hidden layer) 72, and an output layer 73.
- the neural network 7 includes one intermediate layer 72, the output of the input layer 71 is the input of the intermediate layer 72, and the output of the intermediate layer 72 is the input of the output layer 73.
- the number of intermediate layers 72 is not limited to one, and the neural network 7 may include two or more intermediate layers 72.
- Each layer 71 to 73 includes one or a plurality of neurons.
- the number of neurons in the input layer 71 can be set according to the number of pixels in each captured image 123.
- the number of neurons in the intermediate layer 72 can be set as appropriate according to the embodiment.
- The number of neurons in the output layer 73 can be set according to the determination of the presence or absence of a facial organ.
- Adjacent layers of neurons are appropriately connected to each other, and a weight (connection load) is set for each connection.
- In this example, each neuron is connected to all neurons in the adjacent layers, but the neuron connections are not limited to such an example and may be set as appropriate according to the embodiment.
- a threshold is set for each neuron, and basically, the output of each neuron is determined by whether or not the sum of products of each input and each weight exceeds the threshold.
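- As a concrete illustration of this firing rule, the following minimal Python sketch propagates an input through a stack of threshold neurons in order from the input side; all weights, thresholds, and layer sizes here are invented for illustration and are not the patented configuration.

```python
import numpy as np

def neuron_layer(x, weights, thresholds):
    """One layer of threshold neurons: each neuron fires (outputs 1)
    when the weighted sum of its inputs exceeds its threshold."""
    # weights: shape (n_neurons, n_inputs); thresholds: shape (n_neurons,)
    return (weights @ x > thresholds).astype(float)

def forward(x, layers):
    """Propagate an input through the layers in order from the input side,
    as described for neural network 7."""
    for weights, thresholds in layers:
        x = neuron_layer(x, weights, thresholds)
    return x

# Toy network: 4 input pixels -> 3 intermediate neurons -> 1 output neuron.
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(3, 4)), np.zeros(3)),  # input -> intermediate layer
    (rng.normal(size=(1, 3)), np.zeros(1)),  # intermediate -> output layer
]
pixels = np.array([0.2, 0.8, 0.1, 0.5])
print(forward(pixels, layers))  # e.g. [1.] -> facial organ judged present
```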
- The seating determination apparatus 1 inputs each captured image to the input layer 71 of the neural network 7 and determines whether or not the driver is seated in the driver's seat based on the output value obtained from the output layer 73.
- Information indicating the configuration of the neural network 7 (for example, the number of layers, the number of neurons in each layer, the connection relationships between neurons, and the transfer function of each neuron), the weight of each connection between neurons, and the threshold of each neuron is included in the learning result data 122.
- the seating determination device 1 refers to the learning result data 122 and sets the learned learning device 7 used for processing for determining whether or not the driver is seated in the driver's seat. This also applies to a second embodiment described later.
- <Warning unit> When the analysis unit 116 determines that the driver is not seated in the driver's seat, the warning unit 117 drives a display, a speaker, or the like in the vehicle through the external interface 13 to give a warning. That is, the display shows that the driver is not seated, or the speaker announces that the driver is not seated. A warning can also be given by reducing the speed of the running car, for example by applying the brake, or by stopping the car.
- FIG. 6 schematically illustrates an example of a functional configuration of the learning device 2 according to the present embodiment.
- the control unit 21 of the learning device 2 expands the learning program 221 stored in the storage unit 22 in the RAM. Then, the control unit 21 interprets and executes the learning program 221 expanded in the RAM, and controls each component. Accordingly, as illustrated in FIG. 6, the learning device 2 according to the present embodiment functions as a computer including the learning data acquisition unit 211 and the learning processing unit 212.
- The learning data acquisition unit 211 acquires, as learning data 222, pairs of a captured image 223 captured by the camera 3 and seating information 2241 indicating whether or not a facial organ appears in the captured image 223.
- the captured image 223 and the seating information 2241 correspond to the teacher data of the neural network 8.
- the learning processing unit 212 causes the neural network 8 to learn so as to output an output value corresponding to the seating information 2241.
- a neural network 8 as an example of a learning device includes an input layer 81, an intermediate layer (hidden layer) 82, and an output layer 83, and is configured in the same manner as the neural network 7.
- the layers 81 to 83 are the same as the layers 71 to 73 described above.
- By the learning processing of the neural network, the learning processing unit 212 constructs the neural network 8 so that it outputs an output value corresponding to the seating information 2241 when the captured image 223 is input.
- the learning processing unit 212 stores information indicating the configuration of the constructed neural network 8, the weight of the connection between the neurons, and the threshold value of each neuron as the learning result data 122 in the storage unit 22.
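- As a rough picture of this kind of supervised learning processing, here is a hedged PyTorch sketch; the network shape, image size, and file name are assumptions, and dummy tensors stand in for the captured images 223 and seating information 2241.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for neural network 8: a small multilayer
# perceptron taking a flattened captured image and outputting a logit
# for whether a facial organ (i.e. a seated driver) is present.
net8 = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 128),  # assumes 64x64 grayscale captured images
    nn.ReLU(),
    nn.Linear(128, 1),
)

optimizer = torch.optim.Adam(net8.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Dummy teacher data: pairs of captured image 223 and seating info 2241.
images = torch.rand(32, 1, 64, 64)             # captured images
labels = torch.randint(0, 2, (32, 1)).float()  # 1 = organ present / seated

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(net8(images), labels)  # push outputs toward the labels
    loss.backward()
    optimizer.step()

# The learning result data 122 (structure, weights, thresholds) is then
# saved and transferred to the seating determination apparatus.
torch.save(net8.state_dict(), "learning_result_data_122.pt")
```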
- the learning result data 122 is transmitted to the seating determination apparatus 1 by the various methods described above. Further, such learning result data 122 may be periodically updated.
- The control unit 21 may update the learning result data 122 held by the seating determination apparatus 1 by transmitting newly created learning result data 122 to it.
- each function of the seating determination device 1 and the learning device 2 will be described in detail in an operation example described later.
- an example in which each function of the seating determination device 1 and the learning device 2 is realized by a general-purpose CPU is described.
- part or all of the above functions may be realized by one or a plurality of dedicated processors.
- functions may be omitted, replaced, and added as appropriate according to the embodiment.
- FIG. 7 is a flowchart illustrating an example of a processing procedure of the seating determination apparatus 1. Note that the processing procedure described below is merely an example, and each processing may be changed as much as possible. Further, in the processing procedure described below, steps can be omitted, replaced, and added as appropriate according to the embodiment.
- the user activates the seating determination apparatus 1 and causes the activated seating determination apparatus 1 to execute the seating determination program 121.
- The control unit 11 of the seating determination apparatus 1 refers to the learning result data 122 and sets the structure of the neural network 7, the weight of each connection between neurons, and the threshold of each neuron. Then, the control unit 11 determines whether or not the driver is seated in the driver's seat according to the following processing procedure.
- The control unit 11 functions as the image acquisition unit 111 and acquires the captured image 123 of the driver's seat photographed from the front by the camera 3 connected via the external interface 13 (step S102).
- the captured image 123 may be a still image, or in the case of a moving image, a captured image is acquired for each frame.
- Next, the control unit 11 functions as the analysis unit 116 and determines whether or not a facial organ is included in each captured image 123 acquired in step S102 (step S103). If a facial organ is detected in the captured image 123, it is determined that the driver is seated in the driver's seat (YES in step S103). Thereafter, if driving is continuing (YES in step S101), the captured image 123 continues to be acquired (step S102) and the driver's seating is determined (step S103). On the other hand, if no facial organ can be detected in the captured image 123, it is determined that the driver is not seated in the driver's seat (NO in step S103), and a warning is issued (step S104).
- control unit 11 functions as the warning unit 117 and notifies the vehicle that the driver is not seated in the driver's seat using the display or speaker in the vehicle.
- Alternatively, the automobile can be decelerated or stopped. Thereafter, if driving is continuing (YES in step S101), the captured image 123 continues to be acquired (step S102) and the driver's seating is determined (step S103). When driving is not being performed, the process ends.
- A warning may be issued immediately as described above, or alternatively, a warning can be issued only when a facial organ cannot be detected for a predetermined time (or a predetermined number of frames).
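- A sketch of such frame-based debouncing logic might look like the following; the 30-frame threshold and the detect_organ callback are assumptions for illustration, not part of the embodiment.

```python
def monitor(frames, detect_organ, max_missed=30):
    """Issue a warning only after facial organs have been undetectable
    for a predetermined number of consecutive frames (30 here, roughly
    one second at 30 fps -- an assumed threshold)."""
    missed = 0
    for frame in frames:
        if detect_organ(frame):  # e.g. the learned learning device 7
            missed = 0           # driver seated; reset the counter
        else:
            missed += 1
            if missed >= max_missed:
                yield "warning: driver not seated in the driver's seat"
                missed = 0
```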
- The above processing may be performed immediately after the ignition power of the automobile is turned on. Alternatively, when the automobile can be switched between a manual operation mode and an automatic operation mode, the processing may be performed only after the automobile has shifted to the automatic operation mode.
- As described above, in the present embodiment, the captured image 123 obtained by photographing the driver's seat with the camera 3 is acquired, and whether or not the driver is seated in the driver's seat is determined by analyzing whether or not the captured image 123 includes a human facial organ. Therefore, it can be reliably determined that the driver is seated in the driver's seat.
- the determination is performed by the learning device 7 configured by a neural network. That is, since the learning device 7 has learned to detect a facial organ from many captured images 123, it can make a highly accurate determination.
- the seating determination system according to the present embodiment includes the seating determination device 1 and the learning device 2 as in the first embodiment.
- The seating determination device 1 acquires a captured image from the camera 3, which is arranged to photograph the driver 800 seated in the driver's seat of the vehicle.
- The seating determination apparatus 1 also acquires driver observation information including face behavior information related to the behavior of the face of the driver 800.
- The seating determination device 1 determines whether or not the driver 800 is seated in the driver's seat 900 by inputting the acquired captured image and observation information into a learned learning device (a neural network described later) that has been trained to determine whether or not the driver 800 is seated in the driver's seat 900.
- The learning device 2 constructs the learning device to be used in the seating determination device 1. That is, it is a computer that performs machine learning of the learning device so that, given the captured image and the observation information as input, it outputs seating information indicating whether or not the driver 800 is seated in the driver's seat. Specifically, the learning device 2 acquires sets of the above-described captured image, observation information, and seating information as learning data. Then, the learning device 2 trains the learning device (a neural network 6 described later) to output an output value corresponding to the seating information when the captured image and the observation information are input. As a result, the learned learning device used in the seating determination apparatus 1 is created. The connection between the seating determination device 1 and the learning device 2 is the same as in the first embodiment.
- In the present embodiment, a learned learning device that has been trained to estimate the seating of the driver is used. Further, a captured image obtained from the camera 3, which is arranged so as to photograph the driver seated in the driver's seat, is used. Therefore, not only the behavior of the face of the driver 800 but also the state of the body of the driver 800 (for example, body orientation, posture, etc.) can be analyzed from the captured image. Accordingly, in the present embodiment, it is possible to determine whether or not the driver 800 is seated in the driver's seat 900 while reflecting the various states that the driver 800 can take. Details will be described below.
- FIG. 8 is a block diagram of the seating determination apparatus according to the present embodiment.
- the hardware configuration of the seating determination device according to the present embodiment is substantially the same as that of the first embodiment, and the devices connected to the external I / F are different. Therefore, hereinafter, only differences from the first embodiment will be described, and the same components will be denoted by the same reference numerals and description thereof will be omitted.
- The external interface 13 is connected to the navigation device 30, the biosensor 32, and the speaker 33, in addition to the above-described camera 3, via, for example, a CAN (Controller Area Network).
- the navigation device 30 is a computer that provides route guidance when the vehicle is traveling.
- a known car navigation device may be used as the navigation device 30.
- the navigation device 30 is configured to measure the position of the vehicle based on a GPS (Global Positioning System) signal, and to perform route guidance using map information and surrounding information on surrounding buildings and the like.
- GPS information is information indicating the vehicle position measured based on the GPS signal.
- the biological sensor 32 is configured to measure the biological information of the driver 800.
- the biological information to be measured is not particularly limited, and may be, for example, an electroencephalogram, a heart rate, or the like.
- the biological sensor 32 is not particularly limited as long as biological information to be measured can be measured.
- a known brain wave sensor, pulse sensor, or the like may be used.
- the biosensor 32 is attached to the body part of the driver 800 corresponding to the biometric information to be measured.
- the speaker 33 is configured to output sound.
- The speaker 33 is used to warn the driver 800 to assume a state suitable for driving when, while the vehicle is running, the driver 800 is not in a state suitable for driving the vehicle. Details will be described later.
- FIG. 9 schematically illustrates an example of a functional configuration of the seating determination apparatus 1 according to the present embodiment.
- the control unit 11 of the seating determination apparatus 1 expands the program 121 stored in the storage unit 12 in the RAM.
- the control unit 11 interprets and executes the program 121 expanded in the RAM by the CPU and controls each component.
- The seating determination apparatus 1 functions as a computer including an image acquisition unit 111, an observation information acquisition unit 112, a resolution conversion unit 113, a driving state estimation unit 114, and a warning unit 115. Among these, the resolution conversion unit 113 and the driving state estimation unit 114 correspond to the analysis unit of the present invention.
- the image acquisition unit 111 acquires the captured image 123 from the camera 31 that is arranged so as to capture the driver 800 seated in the driver's seat of the vehicle.
- the observation information acquisition unit 112 acquires observation information 124 including face behavior information 1241 related to the behavior of the face of the driver 800 and biological information 1242 measured by the biological sensor 32.
- the face behavior information 1241 is obtained by image analysis of the captured image 123.
- the observation information 124 may not be limited to such an example, and the biological information 1242 may be omitted. In this case, the biosensor 32 may be omitted. That is, only the face behavior information 1241 acquired from the captured image 123 can be used as the observation information 124.
- the resolution conversion unit 113 reduces the resolution of the captured image 123 acquired by the image acquisition unit 111. Thereby, the resolution conversion unit 113 forms a low-resolution captured image 1231.
- The driving state estimation unit 114 inputs the low-resolution captured image 1231, obtained by reducing the resolution of the captured image 123, and the observation information 124 into a learned learning device (neural network 5) that has been trained to determine whether or not the driver is seated in the driver's seat. The driving state estimation unit 114 thereby acquires from the learning device the seating information 125 related to the seating of the driver 800. Note that the resolution reduction process may be omitted; in this case, the driving state estimation unit 114 may input the captured image 123 to the learning device.
- In determining whether or not the driver 800 is seated in the driver's seat 900, there is a possibility of erroneous determination. For example, when the passenger in the front passenger seat or a passenger in the rear seat leans their body and face over the driver's seat 900, it may be erroneously determined that the driver 800 is seated in the driver's seat 900. Conversely, it may be determined that the driver is not seated even though the driver 800 is seated in the driver's seat 900: for example, when the driver 800 is facing down or facing backward, the facial organs cannot be detected, as will be described later, and it may be determined that the driver 800 is not seated.
- Therefore, in the seating determination device 1, in addition to using the information on the facial organs included in the observation information 124 as a basis for determination, the state of the body of the person appearing in the driver's seat (for example, body orientation, posture, etc.), obtained from the low-resolution captured image 1231, is used as a basis for determination. From such a body state, it can be determined whether the person appearing in the driver's seat is a person sitting in the driver's seat, a passenger leaning over from the front passenger seat or a rear seat, or a child.
- The warning unit 115 is the same as that of the first embodiment; when it is determined that the driver 800 is not seated in the driver's seat 900, the display and the speaker in the vehicle are driven through the external interface 13 to give a warning.
- The seating determination device 1 uses a neural network 5 as a learned learning device that has been trained to determine whether or not the driver 800 is seated in the driver's seat 900.
- the neural network 5 according to the present embodiment is configured by combining a plurality of types of neural networks.
- the neural network 5 is divided into four parts: a fully connected neural network 51, a convolutional neural network 52, a connected layer 53, and an LSTM network 54.
- the fully connected neural network 51 and the convolutional neural network 52 are arranged in parallel on the input side.
- Observation information 124 is input to the fully connected neural network 51, and a low-resolution captured image 1231 is input to the convolutional neural network 52.
- the connection layer 53 combines the outputs of the fully connected neural network 51 and the convolutional neural network 52.
- The LSTM network 54 receives the output from the connection layer 53 and outputs the seating information 125.
- the fully connected neural network 51 is a so-called multilayered neural network, and includes an input layer 511, an intermediate layer (hidden layer) 512, and an output layer 513 in order from the input side.
- the number of layers of the fully connected neural network 51 may not be limited to such an example, and may be appropriately selected according to the embodiment.
- Each layer 511 to 513 includes one or a plurality of neurons (nodes).
- the number of neurons included in each of the layers 511 to 513 may be set as appropriate according to the embodiment.
- The fully connected neural network 51 is configured by connecting each neuron included in each of the layers 511 to 513 to all the neurons included in the adjacent layers.
- a weight (coupling load) is appropriately set for each coupling.
- the convolutional neural network 52 is a forward propagation neural network having a structure in which convolutional layers 521 and pooling layers 522 are alternately connected.
- On the input side, a plurality of convolutional layers 521 and pooling layers 522 are alternately arranged. The output of the pooling layer 522 arranged closest to the output side is input to the fully connected layer 523, and the output of the fully connected layer 523 is input to the output layer 524.
- the convolution layer 521 is a layer that performs an image convolution operation.
- Image convolution corresponds to processing for calculating the correlation between an image and a predetermined filter. Therefore, by performing image convolution, for example, a shading pattern similar to the shading pattern of the filter can be detected from the input image.
- the pooling layer 522 is a layer that performs a pooling process.
- The pooling process discards part of the positional information about where the response to the image filter is strong, thereby making the response invariant to small positional changes of the features appearing in the image.
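- The following NumPy sketch illustrates the two operations just described: a convolution computed as the correlation between the image and a filter, followed by max pooling that keeps only the strongest response in each block. The filter and image sizes are illustrative assumptions.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2D convolution: slide the filter over the image and compute
    the correlation at each position (cf. convolutional layer 521)."""
    h, w = kernel.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def max_pool(feature_map, size=2):
    """2x2 max pooling: keep only the strongest response in each block,
    giving invariance to small positional shifts (cf. pooling layer 522)."""
    h, w = feature_map.shape[:2]
    h, w = h - h % size, w - w % size
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

image = np.random.rand(8, 8)
edge_filter = np.array([[1., -1.], [1., -1.]])  # responds to vertical edges
print(max_pool(convolve2d(image, edge_filter)).shape)  # (3, 3)
```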
- The fully connected layer 523 is a layer in which all neurons between adjacent layers are connected. That is, each neuron included in the fully connected layer 523 is connected to all neurons included in the adjacent layers.
- The convolutional neural network 52 may include two or more fully connected layers 523. The number of neurons included in the fully connected layer 523 may be set as appropriate according to the embodiment.
- the output layer 524 is a layer arranged on the most output side of the convolutional neural network 52.
- the number of neurons included in the output layer 524 may be appropriately set according to the embodiment.
- the configuration of the convolutional neural network 52 is not limited to such an example, and may be appropriately set according to the embodiment.
- The connection layer 53 is disposed between the fully connected neural network 51 and the convolutional neural network 52 on the input side and the LSTM network 54 on the output side.
- the connection layer 53 combines the output from the output layer 513 of the fully connected neural network 51 and the output from the output layer 524 of the convolutional neural network 52.
- the number of neurons included in the connection layer 53 may be appropriately set according to the number of outputs of the fully connected neural network 51 and the convolutional neural network 52.
- the LSTM network 54 is a recurrent neural network that includes an LSTM block 542.
- A recurrent neural network is a neural network having an internal loop, such as a path from an intermediate layer back to an input layer.
- the LSTM network 54 has a structure in which an intermediate layer of a general recurrent neural network is replaced with an LSTM block 542.
- the LSTM network 54 includes an input layer 541, an LSTM block 542, and an output layer 543 in order from the input side.
- In addition, a path returning from the LSTM block 542 to the input layer 541 is provided.
- the number of neurons included in the input layer 541 and the output layer 543 may be set as appropriate according to the embodiment.
- The LSTM block 542 includes an input gate and an output gate, and is configured to be able to learn the timing of storing and outputting information (S. Hochreiter and J. Schmidhuber, "Long Short-Term Memory", Neural Computation, 9(8):1735-1780, November 15, 1997).
- The LSTM block 542 may also include a forgetting gate that adjusts the timing of forgetting information (Felix A. Gers, Jürgen Schmidhuber and Fred Cummins, "Learning to Forget: Continual Prediction with LSTM", Neural Computation, pages 2451-2471, October 2000).
- the configuration of the LSTM network 54 can be set as appropriate according to the embodiment.
- (E) Summary: A threshold is set for each neuron, and basically, the output of each neuron is determined by whether or not the sum of the products of each input and each weight exceeds the threshold.
- The seating determination apparatus 1 inputs the observation information 124 to the fully connected neural network 51 and inputs the low-resolution captured image 1231 to the convolutional neural network 52. Then, the seating determination apparatus 1 performs the firing determination of each neuron included in each layer in order from the input side. Thereby, the seating determination apparatus 1 acquires an output value corresponding to the seating information 125 from the output layer 543 of the neural network 5.
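- As one way to picture how the four parts fit together, here is a hedged PyTorch sketch of an architecture of this shape; all layer sizes, channel counts, and the class name Network5 are illustrative assumptions rather than the patented configuration.

```python
import torch
import torch.nn as nn

class Network5(nn.Module):
    """Sketch of the combined architecture described above: a fully
    connected network (51) for the observation information, a
    convolutional network (52) for the low-resolution captured image,
    a connection layer (53) joining their outputs, and an LSTM network
    (54) that emits the seating information."""

    def __init__(self, obs_dim=16, img_ch=1):
        super().__init__()
        self.fc_net = nn.Sequential(                # fully connected net 51
            nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, 32))
        self.cnn = nn.Sequential(                   # convolutional net 52
            nn.Conv2d(img_ch, 8, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.LazyLinear(32))        # fully connected layer 523
        self.lstm = nn.LSTM(64, 32, batch_first=True)  # LSTM network 54
        self.out = nn.Linear(32, 1)                 # seating information 125

    def forward(self, obs_seq, img_seq):
        # obs_seq: (batch, time, obs_dim); img_seq: (batch, time, ch, H, W)
        b, t = obs_seq.shape[:2]
        obs_feat = self.fc_net(obs_seq)
        img_feat = self.cnn(img_seq.flatten(0, 1)).view(b, t, -1)
        joined = torch.cat([obs_feat, img_feat], dim=-1)  # connection layer 53
        hidden, _ = self.lstm(joined)
        return torch.sigmoid(self.out(hidden[:, -1]))  # P(driver seated)

net5 = Network5()
seated = net5(torch.rand(2, 5, 16), torch.rand(2, 5, 1, 32, 32))
print(seated.shape)  # torch.Size([2, 1])
```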
- FIG. 10 schematically illustrates an example of a functional configuration of the learning device 2 according to the present embodiment.
- the control unit 21 of the learning device 2 develops the learning program 221 stored in the storage unit 22 in the RAM. Then, the control unit 21 interprets and executes the learning program 221 expanded in the RAM, and controls each component. Accordingly, as illustrated in FIG. 10, the learning device 2 according to the present embodiment functions as a computer including the learning data acquisition unit 211 and the learning processing unit 212.
- The learning data acquisition unit 211 acquires, as learning data, sets of a captured image acquired from an imaging device arranged to photograph the driver seated in the driver's seat of the vehicle, observation information including face behavior information regarding the behavior of the driver's face, and seating information related to the driver's seating.
- the learning data acquisition unit 211 acquires a set of the low-resolution captured image 223, the observation information 224, and the seating information 225 as learning data 222.
- the low-resolution captured image 223 and the observation information 224 correspond to the low-resolution captured image 1231 and the observation information 124, respectively.
- the seating information 225 corresponds to the seating information 125.
- When the low-resolution captured image 223 and the observation information 224 are input, the learning processing unit 212 trains the learning device to output an output value corresponding to the seating information 225. In this way, the learning apparatus 2 performs learning for avoiding the erroneous determinations described above.
- the learning device to be learned is a neural network 6.
- the neural network 6 includes a fully connected neural network 61, a convolutional neural network 62, a connected layer 63, and an LSTM network 64, and is configured in the same manner as the neural network 5.
- The fully connected neural network 61, the convolutional neural network 62, the connection layer 63, and the LSTM network 64 are the same as the fully connected neural network 51, the convolutional neural network 52, the connection layer 53, and the LSTM network 54 described above, respectively.
- By the learning processing of the neural network, the learning processing unit 212 constructs the neural network 6 so that, when the observation information 224 is input to the fully connected neural network 61 and the low-resolution captured image 223 is input to the convolutional neural network 62, an output value corresponding to the seating information 225 is output from the LSTM network 64.
- the learning processing unit 212 stores information indicating the configuration of the constructed neural network 6, the weight of the connection between the neurons, and the threshold value of each neuron as the learning result data 122 in the storage unit 22.
- FIG. 11 is a flowchart illustrating an example of a processing procedure of the seating determination apparatus 1.
- the processing procedure described below is merely an example, and each processing may be changed as much as possible. Further, in the processing procedure described below, steps can be omitted, replaced, and added as appropriate according to the embodiment.
- the driver 800 activates the seating determination device 1 by turning on the ignition power of the vehicle, and causes the activated seating determination device 1 to execute the program 121.
- However, the timing at which the seating determination apparatus 1 is activated is not limited to this.
- the timing at which the seating determination device 1 is activated may be the timing at which the automatic operation mode is activated.
- the control unit 11 of the seating determination device 1 acquires map information, peripheral information, and GPS information from the navigation device 30 and starts automatic driving of the vehicle based on the acquired map information, peripheral information, and GPS information.
- A known control method can be used as the control method for automatic driving.
- the control unit 11 monitors the state of the driver 800 according to the following processing procedure.
- Step S202: In step S202, the control unit 11 functions as the image acquisition unit 111, and acquires the captured image 123 from the camera 31 arranged to photograph the driver 800 seated in the driver's seat of the vehicle.
- the captured image 123 to be acquired may be a moving image or a still image.
- the control unit 11 advances the processing to the next step S203.
- Step S203: In step S203, the control unit 11 functions as the observation information acquisition unit 112, and acquires the observation information 124 including the face behavior information 1241 regarding the behavior of the face of the driver 800 and the biological information 1242.
- the control unit 11 advances the processing to the next step S204.
- the face behavior information 1241 may be acquired as appropriate.
- For example, the control unit 11 can acquire, as the face behavior information 1241, information on at least one of whether or not the face of the driver 800 can be detected, the position of the face, the orientation of the face, the movement of the face, the gaze direction, the positions of the facial organs, and the opening and closing of the eyes, by performing predetermined image analysis on the captured image 123 acquired in step S202.
- Specifically, the control unit 11 detects the face of the driver 800 from the captured image 123 and specifies the position of the detected face. Thereby, the control unit 11 can acquire information on the detectability and position of the face. Moreover, by detecting the face continuously, the control unit 11 can acquire information on the movement of the face. Next, the control unit 11 detects, within the detected face image, each organ (eyes, mouth, nose, ears, etc.) included in the face of the driver 800. Thereby, the control unit 11 can acquire information on the positions of the facial organs.
- Then, by analyzing the detected positions of the facial organs, the control unit 11 can acquire information on the orientation of the face, the gaze direction, and the opening and closing of the eyes.
- a known image analysis method may be used for face detection, organ detection, and organ state analysis.
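- As one example of such a known method (an assumption for illustration; the embodiment does not prescribe a specific library), a face-detectability and face-position entry of the face behavior information 1241 could be produced per frame with OpenCV's Haar cascade detector:

```python
import cv2

# A well-known off-the-shelf face detector: OpenCV's Haar cascade.
# The cascade file ships with the OpenCV distribution.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_behavior_info(frame_bgr):
    """Return detectability and position of the face for one frame,
    i.e. a small subset of the face behavior information 1241."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return {"face_detected": False}
    x, y, w, h = faces[0]
    return {"face_detected": True, "position": (x + w // 2, y + h // 2)}
```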
- When the captured image 123 to be acquired is a moving image, or a plurality of still images arranged in time series, the control unit 11 can acquire the above various information in time series by performing these image analyses on each frame of the captured image 123. Thereby, the control unit 11 can obtain the various information as time-series data, represented by a histogram or statistics (average value, variance value, etc.).
- In addition, the control unit 11 acquires the biological information 1242 (for example, brain waves, heart rate, etc.) from the biological sensor 32.
- biological information 1242 may be represented by a histogram or a statistic (average value, variance value, etc.). Similar to the face behavior information 1241, the control unit 11 can obtain the biological information 1242 as time-series data by continuously accessing the biological sensor 32. As described above, the biological information 1242 is not necessarily required, and the control unit 11 can generate the observation information 124 using only the face behavior information 1241.
- Step S204: In step S204, the control unit 11 functions as the resolution conversion unit 113 and reduces the resolution of the captured image 123 acquired in step S202. Thereby, the control unit 11 forms the low-resolution captured image 1231.
- the processing method for reducing the resolution is not particularly limited, and may be appropriately selected according to the embodiment.
- the control unit 11 can form the low-resolution captured image 1231 by the nearest neighbor method, the bilinear interpolation method, the bicubic method, or the like.
- Then, the control unit 11 advances the processing to the next step S205. Note that this step S204 may be omitted; that is, the control unit 11 can input the captured image 123 to the learning device 5 without reducing its resolution.
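- For illustration, each of the named interpolation methods maps onto a standard flag of OpenCV's resize function (an assumed implementation, not prescribed by the embodiment):

```python
import cv2

def to_low_resolution(image, scale=0.25, method=cv2.INTER_NEAREST):
    """Form the low-resolution captured image 1231. The interpolation
    flag selects among the methods mentioned above:
    cv2.INTER_NEAREST (nearest neighbor), cv2.INTER_LINEAR (bilinear),
    cv2.INTER_CUBIC (bicubic)."""
    h, w = image.shape[:2]
    return cv2.resize(image, (int(w * scale), int(h * scale)),
                      interpolation=method)
```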
- Step S205: In step S205, the control unit 11 functions as the driving state estimation unit 114 and executes the arithmetic processing of the neural network 5 using the acquired observation information 124 and the low-resolution captured image 1231 as inputs of the neural network 5. Thereby, in step S206, the control unit 11 obtains an output value corresponding to the seating information 125 from the neural network 5.
- Specifically, the control unit 11 inputs the observation information 124 acquired in step S203 to the input layer 511 of the fully connected neural network 51, and inputs the low-resolution captured image 1231 acquired in step S204 to the convolutional layer 521 arranged closest to the input side of the convolutional neural network 52.
- Step S207: In step S207, the control unit 11 functions as the warning unit 115, and determines, based on the seating information 125 acquired in step S206, whether or not the driver 800 is seated in the driver's seat. When it is determined that the driver 800 is not seated in the driver's seat, the control unit 11 issues a warning as described above.
- As described above, through the processing from step S202 to step S204, the seating determination apparatus 1 acquires the observation information 124 including the face behavior information 1241 of the driver 800, and the captured image (low-resolution captured image 1231) obtained from the camera 3 arranged so as to photograph the driver seated in the driver's seat of the vehicle.
- Then, in steps S205 and S206, the seating determination device 1 uses the acquired observation information 124 and the low-resolution captured image 1231 as inputs of the learned neural network (the neural network 5), thereby determining whether or not the driver 800 is seated in the driver's seat 900.
- The learned neural network is created by the learning device 2 using learning data including the low-resolution captured image 223, the observation information 224, and the seating information 225. Therefore, in the present embodiment, the process of determining the driver's seating can take into account not only the behavior of the face of the driver 800 but also the state of the body of the driver 800 (for example, body orientation and posture).
- That is, from the state of the body of the person appearing in the captured image 123 at the driver's seat 900, it can be determined whether this person is the driver 800 or a passenger from the front passenger seat or rear seat leaning over the driver's seat 900. Moreover, even when the driver 800 is seated, the facial organs cannot be accurately detected when the driver 800 is facing down or turned backward; even in such a case, by detecting the state of the body of the driver 800, it can be determined that the driver 800 is seated in the driver's seat 900. Therefore, it can be accurately determined whether or not the driver 800 is seated in the driver's seat 900.
- observation information (124, 224) including driver's face behavior information is used as an input to the neural network (5, 6). Therefore, the captured image to be input to the neural network (5, 6) does not have to have a high resolution so that the behavior of the driver's face can be determined. Therefore, in this embodiment, low-resolution captured images (1231, 223) obtained by reducing the resolution of the captured image obtained by the camera 31 may be used as the input of the neural network (5, 6). Thereby, the calculation amount of the arithmetic processing of the neural network (5, 6) can be reduced, and the load on the processor can be reduced.
- the resolution of the low-resolution captured image (1231, 223) is preferably such that the behavior of the driver's face cannot be discriminated, but the feature relating to the driver's posture can be extracted. Note that, as described above, it is not always necessary to reduce the resolution of the captured image 123.
- If the processing load is not a concern, the captured image 123 can be used as-is as the input of the learning device 5.
- In addition, the neural network 5 includes the fully connected neural network 51 and the convolutional neural network 52 on the input side; the observation information 124 is input to the fully connected neural network 51, and the low-resolution captured image 1231 is input to the convolutional neural network 52, so that analysis suitable for each input can be performed.
- the neural network 5 according to this embodiment includes an LSTM network 54.
- Thereby, time-series data can be used for the observation information 124 and the low-resolution captured image 1231, and the seating of the driver 800 can be determined in consideration of not only short-term but also long-term dependencies. Therefore, according to the present embodiment, the seating determination accuracy for the driver 800 can be increased.
- In the above embodiment, a general forward propagation type neural network having a multilayer structure is used as each neural network (7, 8).
- the type of each neural network (7, 8) may not be limited to such an example, and may be appropriately selected according to the embodiment.
- each neural network (7, 8) may be a convolutional neural network that uses the input layer 71 and the intermediate layer 72 as a convolution layer and a pooling layer.
- Each neural network (7, 8) may also be a recurrent neural network having connections that recur from the output side to the input side, for example from the intermediate layer 72 to the input layer 71.
- the number of layers in each neural network (7, 8), the number of neurons in each layer, the connection relationships between neurons, and the transfer function of each neuron may be determined as appropriate according to the embodiment (a configurable sketch is given below).
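To make this configurability concrete, here is a small builder for such a forward-propagation network; the layer sizes and the tanh transfer function in the usage line are arbitrary examples, not values taken from the disclosure.

```python
import torch.nn as nn

def build_mlp(layer_sizes, activation=nn.ReLU):
    """Build a multilayer forward-propagation network whose depth,
    layer widths, and transfer function are free design parameters."""
    layers = []
    for i in range(len(layer_sizes) - 1):
        layers.append(nn.Linear(layer_sizes[i], layer_sizes[i + 1]))
        if i < len(layer_sizes) - 2:   # no activation after the output
            layers.append(activation())
    return nn.Sequential(*layers)

# e.g. a three-layer network with tanh transfer functions:
net = build_mlp([4128, 64, 1], activation=nn.Tanh)
```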
- the seating determination device 1 and the learning device 2 that trains the learner (neural network) 7 are configured as separate computers.
- however, the configuration of the seating determination device 1 and the learning device 2 is not limited to this example; a system having the functions of both the seating determination device 1 and the learning device 2 may be realized by one or more computers.
- the learning device 2 can also be used by incorporating it into the seating determination device 1.
- the learning device is configured by a neural network.
- the type of the learning device is not limited to a neural network, as long as the captured image 123 captured by the camera 3 can be used as an input, and may be selected as appropriate according to the embodiment.
- examples of learning devices that can take a plurality of captured images 123 as input include, in addition to the neural network, learning devices that learn by support vector machine, self-organizing map, or reinforcement learning.
- the seating determination device 1 is mounted on a vehicle as a single device.
- the seating determination program can be installed in a computer of the vehicle to perform seating determination.
- methods other than the learning device described above can be used to detect the facial organs.
- various such methods exist; for example, known pattern matching can be used.
- there is also a method of extracting feature points using a three-dimensional model; specifically, for example, the methods described in International Publication No. 2006/051607, Japanese Patent Application Laid-Open No. 2007-249280, and the like can be adopted (a sketch of a simple pattern-matching path follows).
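As one hedged example of such a non-learning path, the sketch below uses OpenCV's bundled Haar cascades, a well-known pattern-matching approach; it is not the specific method of the publications cited above.

```python
import cv2

# Cascade classifiers shipped with OpenCV (paths via cv2.data).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_facial_organs(frame):
    """Return a list of (face_box, eye_boxes) found by cascade
    matching; an empty list suggests no detectable face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        results.append(((x, y, w, h), eyes))
    return results
```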
- seating can also be determined by acquiring the three-dimensional shape of an object located in the driver's seat by the volume intersection (visual hull) method and determining whether that three-dimensional shape is a person.
- in this case, a plurality of cameras are provided in the vehicle, the driver's seat is photographed from a plurality of angles with the plurality of cameras, and a plurality of captured images are acquired.
- the three-dimensional shape of the object seated in the driver's seat is then acquired from the plurality of captured images by the volume intersection method, and it is determined whether or not that three-dimensional shape is a person. If it is a person, it can be determined that the driver is seated in the driver's seat. On the other hand, when no three-dimensional shape can be obtained, or when it is determined that the three-dimensional shape is not a person, a warning can be issued as described above (a sketch of the volume intersection step follows).
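A minimal NumPy sketch of the volume intersection (visual hull) idea follows; the silhouette images, the 3x4 projection matrices, and the voxel grid are all assumed inputs, and points are assumed to lie in front of every camera.

```python
import numpy as np

def visual_hull(silhouettes, projections, voxels):
    """Keep only the voxels whose projection falls inside every
    camera's silhouette. 'projections' holds one 3x4 camera matrix
    per view; 'voxels' is an (N, 3) array of candidate points."""
    occupied = np.ones(len(voxels), dtype=bool)
    homo = np.hstack([voxels, np.ones((len(voxels), 1))])  # homogeneous
    for sil, P in zip(silhouettes, projections):
        uvw = homo @ P.T                 # project all voxels at once
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        inside = (0 <= u) & (u < sil.shape[1]) & \
                 (0 <= v) & (v < sil.shape[0])
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]] > 0
        occupied &= hit                  # intersection over all views
    return voxels[occupied]
```

The surviving voxels approximate the three-dimensional shape in the driver's seat; a downstream check on, for example, the shape's size and proportions can then decide whether it is a person.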
- the observation information 124 includes biological information 1242 in addition to the face behavior information 1241.
- the configuration of the observation information 124 is not limited to such an example, and may be appropriately selected according to the embodiment.
- the biological information 1242 may be omitted.
- the observation information 124 may include information other than the biological information 1242.
- a driver seating determination device connected to at least one camera for photographing the driver's seat of an automobile, the device comprising at least one hardware processor;
- wherein the hardware processor acquires a captured image captured by the camera,
- and determines, from the captured image, whether a driver is seated in the driver's seat (a minimal wiring of the sketches above is given below).
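Tying the pieces together, a minimal acquisition-and-determination loop might look like the following. It reuses the illustrative SeatingNet and to_low_resolution sketches above (with untrained weights and a placeholder observation vector), so it shows the wiring only, not the patented implementation.

```python
import cv2
import torch

model = SeatingNet()            # illustrative, untrained network
model.eval()
cap = cv2.VideoCapture(0)       # camera index is an assumption
ok, frame = cap.read()
if ok:
    img = to_low_resolution(frame)                      # (64, 64, 3)
    x = torch.from_numpy(img).float().permute(2, 0, 1)  # to (3, 64, 64)
    x = x.unsqueeze(0) / 255.0                          # batch + scale
    obs = torch.zeros(1, 16)    # placeholder observation information
    with torch.no_grad():
        p = model(obs, x).item()
    # 0.5 is an arbitrary decision threshold for this sketch.
    print("driver seated" if p > 0.5 else "driver seat not occupied by driver")
cap.release()
```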
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Abstract
The driver seating determination device according to the present invention is connected to at least one camera for photographing the driver's seat of a motor vehicle, and comprises an acquisition unit that acquires a captured image photographed by the camera and an analysis unit that determines, from the captured image, whether or not a driver is seated in the driver's seat.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017049009A JP2018149990A (ja) | 2017-03-14 | 2017-03-14 | 運転者の着座判定装置 |
JP2017-049009 | 2017-03-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018168038A1 (fr) | 2018-09-20 |
Family
ID=63523251
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/036276 WO2018168038A1 (fr) | 2017-03-14 | 2017-10-05 | Dispositif de détermination de siège de conducteur |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2018149990A (fr) |
WO (1) | WO2018168038A1 (fr) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10635917B1 (en) * | 2019-01-30 | 2020-04-28 | StradVision, Inc. | Method and device for detecting vehicle occupancy using passenger's keypoint detected through image analysis for humans' status recognition |
- WO2021001942A1 (fr) * | 2019-07-02 | 2021-01-07 | 三菱電機株式会社 | Driver monitoring device and driver monitoring method |
- JP7145830B2 (ja) * | 2019-09-12 | 2022-10-03 | Kddi株式会社 | Object identification method, device, and program using coding parameter features |
- JP7509157B2 (ja) | 2022-01-18 | 2024-07-02 | トヨタ自動車株式会社 | Driver monitoring device, driver monitoring computer program, and driver monitoring method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2002264826A (ja) * | 2001-03-09 | 2002-09-18 | Toyota Motor Corp | Automatic driving device for a mobile body |
- JP2013252863A (ja) * | 2013-09-27 | 2013-12-19 | Takata Corp | Occupant restraint control device and occupant restraint control method |
- WO2016092773A1 (fr) * | 2014-12-09 | 2016-06-16 | 株式会社デンソー | Autonomous driving control device, driving information output device, footrest, autonomous driving control method, and driving information output method |
- WO2016190253A1 (fr) * | 2015-05-22 | 2016-12-01 | 株式会社デンソー | Steering control device and steering control method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7508979B2 (en) * | 2003-11-21 | 2009-03-24 | Siemens Corporate Research, Inc. | System and method for detecting an occupant and head pose using stereo detectors |
- JP4438753B2 (ja) * | 2006-01-27 | 2010-03-24 | 株式会社日立製作所 | In-vehicle state detection system, in-vehicle state detection device, and method |
- JP6381353B2 (ja) * | 2014-08-08 | 2018-08-29 | キヤノン株式会社 | Image processing apparatus, imaging apparatus, image processing method, and program |
- JP6573193B2 (ja) * | 2015-07-03 | 2019-09-11 | パナソニックIpマネジメント株式会社 | Determination device, determination method, and determination program |
- 2017
- 2017-03-14 JP JP2017049009A patent/JP2018149990A/ja active Pending
- 2017-10-05 WO PCT/JP2017/036276 patent/WO2018168038A1/fr active Application Filing
Also Published As
Publication number | Publication date |
---|---|
JP2018149990A (ja) | 2018-09-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6264492B1 (ja) | Driver monitoring device, driver monitoring method, learning device, and learning method | |
CN111602137B (zh) | Evaluation device, motion control device, evaluation method, and storage medium | |
JP6245398B2 (ja) | State estimation device, state estimation method, and state estimation program | |
JP7032387B2 (ja) | Vehicle behavior estimation system and method based on monocular video data | |
CN112673378B (zh) | Estimator generation device, monitoring device, estimator generation method, and estimator generation program | |
JP7011578B2 (ja) | Method and system for monitoring driving behavior | |
WO2019149061A1 (fr) | Gesture- and gaze-based visual data acquisition system | |
EP3588372B1 (fr) | Controlling an autonomous vehicle based on passenger behavior | |
WO2018168038A1 (fr) | Driver seating determination device | |
US20240096116A1 (en) | Devices and methods for detecting drowsiness of drivers of vehicles | |
WO2017209225A1 (fr) | State estimation device, state estimation method, and state estimation program | |
KR20230137221A (ko) | Image verification method, diagnostic system performing the same, and computer-readable recording medium on which the method is recorded | |
JP2018173757A (ja) | Detection device, learning device, detection method, learning method, and program | |
JP7219787B2 (ja) | Information processing device, information processing method, learning method, and program | |
KR20240084221A (ko) | Apparatus and method for detecting prodromal symptoms of stroke in a driver | |
JP2022135676A (ja) | Mobile body control system and mobile body control method | |
EP4332886A1 (fr) | Electronic device, electronic device control method, and program | |
JP7535827B2 (ja) | Image verification method, diagnostic system executing the same, and computer-readable recording medium storing the method | |
EP4332885A1 (fr) | Electronic device, electronic device control method, and program | |
JP7544019B2 (ja) | Hand region detection device, hand region detection method, and computer program for hand region detection | |
JP7466433B2 (ja) | Driving data recording device, driving support system, driving data recording method, and program | |
JP7035912B2 (ja) | Detector generation device, monitoring device, detector generation method, and detector generation program | |
JP2025062846A (ja) | Information processing device, information processing method, and program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17901052; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17901052; Country of ref document: EP; Kind code of ref document: A1 |