
WO2018159666A1 - Learning apparatus, learning result using apparatus, learning method and learning program - Google Patents

Learning apparatus, learning result using apparatus, learning method and learning program

Info

Publication number
WO2018159666A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
learning
output
training
training data
Prior art date
Application number
PCT/JP2018/007476
Other languages
French (fr)
Inventor
Tanichi Ando
Original Assignee
Omron Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2018026134A
Application filed by Omron Corporation
Publication of WO2018159666A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning

Definitions

  • the present invention relates to a learning apparatus, a learning result using apparatus, a learning method, and a learning program.
  • JP 2016-99165A describes a calculation apparatus that uses a pressure sensor that directly acquires a body weight and an image capturing apparatus that indirectly acquires a body weight and improves the accuracy of measurement by machine learning that uses measurement values of the pressure sensor and measurement values of the image capturing apparatus.
  • JP 2016-99165A is an example of background art.
  • the apparatus described in JP 2016-99165A aims to acquire an accurate measurement result even if the measurement target moves by complementing the measurement value of the pressure sensor with the measurement value of the image capturing apparatus.
  • it is necessary to install multiple types of measurement devices for machine learning in the environment in which the apparatus is used and there are disadvantages such as the burden of installing multiple types of measurement devices, an increase in cost, and an increase in the size of the apparatus.
  • a learning apparatus has a first learning control unit that trains a first learning module based on first training data and second training data associated with the first training data so as to output first output data corresponding to features of the first training data and the second training data, and a second learning control unit that trains a second learning module by supervised learning in which supervisor data is the first output data that is output from the first learning module in a case where the first training data is input to the first learning module, based on the first training data, so as to output second output data.
  • the first output data corresponding to the features of the first training data and the second training data is output by the first learning module that accepts the first training data and the second training data as input data
  • the second output data is output by the second learning module that accepts the first training data as input data.
  • the second learning module is trained by supervised learning in which supervisor data is the first output data, and thus the feature of the second training data is indirectly included in the second learning module. Therefore, the first learning module having a desired performance is generated by using both the first training data and the second training data, whereas the second learning module having the same performance as the first learning module is generated by using the first training data and the first output data from the first learning module (i.e., without using the second training data).
  • training of the first learning module requires a device for obtaining the first training data (the first training data obtaining device) and a device for obtaining the second training data (the second training data obtaining device)
  • training of the second learning module does not require the second training data obtaining device.
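The two-stage scheme described above resembles knowledge distillation: the first module is trained on both kinds of data, and the second module is then trained on the first kind alone, with the first module's outputs as supervisor data. The following is a minimal sketch, not taken from the patent, in which simple least-squares linear models stand in for the learning modules; all names, sizes, and the zero-filling of the missing second input are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: "image"-like features (first training data)
# and associated "sensor"-like features (second training data).
first_train = rng.normal(size=(200, 8))
second_train = rng.normal(size=(200, 4))

# --- Stage 1: train the first learning module on both kinds of data. ---
# A linear map fitted by least squares stands in for a neural network.
both = np.hstack([first_train, second_train])
true_w = rng.normal(size=(12, 1))
labels = both @ true_w                      # placeholder learning target
w1, *_ = np.linalg.lstsq(both, labels, rcond=None)

def first_module(x_first, x_second):
    return np.hstack([x_first, x_second]) @ w1

# --- Stage 2: input the first training data to the trained first module
# and use its outputs as supervisor data for the second learning module.
# How the first module behaves when only the first data is available is
# not specified in the source; zero-filling is one plausible choice.
zeros = np.zeros_like(second_train)
supervisor = first_module(first_train, zeros)
w2, *_ = np.linalg.lstsq(first_train, supervisor, rcond=None)

def second_module(x_first):
    return x_first @ w2
```

After stage 2, the second module maps the first training data alone to outputs matching the supervisor data, so the second training data obtaining device is no longer needed at this stage.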
  • the second learning control unit may train the second learning module after training of the first learning module.
  • the second learning module can be trained using the first output data of the first learning module as supervisor data, after the first learning module learns the features of the first training data and the second training data, and thus the feature of the second training data is more accurately incorporated in the learning of the second learning module.
  • the first training data may be data in the same form as input data that is input to a trained second learning module, which is acquired as a result of training performed by the second learning control unit, or a copy of the trained second learning module
  • the second training data may be data temporally related to the first training data, and may be data in a form different from input data that is input to the trained second learning module or the copy of the trained second learning module.
  • the first learning module can perform multilateral learning based on the first training data in the same form as the input data that is input to the trained second learning module and the second training data that complements or reinforces the first training data.
  • the second learning module can perform supervised learning that extracts a feature that is sometimes not extracted by training based only on the first training data.
  • the first learning control unit may train the first learning module by unsupervised learning based on the first training data and the second training data so as to output the first output data.
  • the first output data corresponding to the features of the first training data and the second training data can be autonomously generated by the first learning module, making it possible to perform more objective feature extraction.
  • the first learning control unit may train the first learning module by supervised learning that uses supervisor data including attribute information of the first training data and the second training data, based on the first training data and the second training data, so as to output the first output data.
  • the first output data corresponding to the features of the first training data and the second training data can be generated in consideration of existing attribute information.
  • it is not necessary to assign meaning to the first output data and thus it is not necessary to perform calculation or communication in order to interpret the first output data, whereby the processing load and the communication load are suppressed.
  • the first learning module and the second learning module may each include a neural network, and a scale of the neural network included in the second learning module may be smaller than a scale of the neural network included in the first learning module.
  • high-load processing is performed in the learning apparatus that is relatively rich in calculation resources, and it is possible to suppress the scale of a neural network that is set in a learning result using apparatus to a small scale, and to suppress the processing load and the communication load of the learning result using apparatus.
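As an illustration of the scale difference between the two modules, one can compare the parameter counts of two fully connected networks; the layer widths below are arbitrary assumptions, not values from the patent.

```python
# Hypothetical layer widths (number of nodes per layer).
first_net_layers = [1024, 512, 256, 64, 8]   # first learning module
second_net_layers = [1024, 64, 8]            # smaller second learning module

def param_count(layers):
    """Total weights plus biases of a fully connected network."""
    return sum(i * o + o for i, o in zip(layers, layers[1:]))

first_params = param_count(first_net_layers)    # 673,096 parameters
second_params = param_count(second_net_layers)  #  66,120 parameters
```

With these assumed widths the second network carries roughly a tenth of the parameters, which is the kind of reduction that suppresses the processing and communication load of the learning result using apparatus.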
  • the first training data may include image data of a target
  • the second training data may include sensing data acquired by measuring the target using a sensor when the image data is shot
  • the first output data and the second output data may include data related to the target
  • the second learning module that outputs the second output data corresponding to a feature of the image data can indirectly learn a feature that is included in the sensing data and cannot be extracted from the image data, and the second learning module that outputs more accurate second output data is acquired.
  • the first training data may include image data acquired by shooting a person
  • the second training data may include vital data of the person when the image data is shot
  • the first output data and the second output data may be data corresponding to a human emotion
  • the second learning module that outputs the second output data corresponding to a feature of the person that was shot can indirectly learn a feature that is included in vital data and cannot be extracted from the image data, and the second learning module that outputs more accurate second output data is acquired.
  • the first training data may include image data acquired by shooting a vehicle
  • the second training data may include sensing data acquired by performing measurement using a sensor provided in the vehicle when the image data is shot
  • the first output data and the second output data may be data corresponding to an operation of the vehicle.
  • the second learning module that outputs the second output data corresponding to a feature of the shot vehicle can indirectly learn a feature that cannot be extracted from the image data and is included in the sensing data, and the second learning module is acquired, which outputs the second output data that is more accurate.
  • the learning result using apparatus has a learning module setting unit that acquires the trained second learning module acquired as a result of training performed by the second learning control unit provided in the learning apparatus of the above aspect or a copy of the trained second learning module, and sets the trained second learning module or the copy of the trained second learning module as a third learning module, an input unit for inputting data having the same form as the first training data to the third learning module, and an output unit for outputting output data from the third learning module.
  • output data corresponding to a feature of input data is output by the third learning module that accepts the data having the same form as the first training data as the input data.
  • the third learning module is set by the trained second learning module or the copy of the trained second learning module, and thus the third learning module indirectly includes a feature of the second training data. Therefore, it is possible to acquire a desired learning result in which multiple types of training data are incorporated without increasing types of devices for obtaining training data.
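A minimal sketch of the learning result using apparatus side, under the assumption that a trained module can be represented by its parameter arrays; the class and field names are illustrative, not from the patent.

```python
import numpy as np

# Hypothetical trained second learning module, represented by its parameters.
trained_second = {"w": np.array([[0.5, -1.0], [2.0, 0.25]]),
                  "b": np.array([0.1, -0.2])}

class LearningResultUsingApparatus:
    def set_learning_module(self, module_params):
        # A copy of the trained second module is set as the third module.
        self.third = {k: v.copy() for k, v in module_params.items()}

    def infer(self, x):
        # Input data must have the same form as the first training data.
        return x @ self.third["w"] + self.third["b"]

apparatus = LearningResultUsingApparatus()
apparatus.set_learning_module(trained_second)
out = apparatus.infer(np.array([1.0, 2.0]))  # output data of the third module
```

The apparatus never sees the second training data; the third module carries its features only indirectly, through the parameters copied from the trained second module.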
  • a learning method includes, training, by a control unit configured to control machine learning, a first learning module based on first training data and second training data associated with the first training data so as to output first output data corresponding to features of the first training data and the second training data, and training, by the control unit, a second learning module by supervised learning in which supervisor data is the first output data that is output from the first learning module in a case where the first training data is input to the first learning module, based on the first training data, so as to output second output data.
  • the first output data corresponding to the features of the first training data and the second training data is output by the first learning module that accepts the first training data and the second training data as input data
  • the second output data is output by the second learning module that accepts the first training data as input data.
  • the second learning module performs supervised learning in which supervisor data is the first output data, and thus the second output data indirectly includes the feature of the second training data. Therefore, the second learning module in which the second training data is incorporated is acquired without using the second training data obtaining device.
  • a method for producing a trained learning module or a copy of the trained learning module according to one aspect of the present invention includes outputting a trained second learning module acquired as a result of training the second learning module by the learning method of the above aspect or a copy of the trained second learning module.
  • the second learning module or a copy of the second learning module in which the second training data is incorporated is acquired without using the second training data obtaining device.
  • a trained learning module or a copy of the trained learning module according to one aspect of the present invention is acquired as a result of training the second learning module by the learning method of the above aspect.
  • the trained learning module or the copy of the trained learning module in which the second training data is incorporated is acquired without using the second training data obtaining device.
  • a learning program includes instructions which, when the program is executed by a computer, cause the computer to perform a method including training a first learning module based on first training data and second training data associated with the first training data so as to output first output data corresponding to features of the first training data and the second training data, and training a second learning module by supervised learning in which supervisor data is the first output data that is output from the first learning module in a case where the first training data is input to the first learning module, based on the first training data, so as to output second output data.
  • the first output data corresponding to the features of the first training data and the second training data is output by the first learning module that accepts the first training data and the second training data as input data
  • the second output data is output by the second learning module that accepts the first training data as input data.
  • the second learning module performs supervised learning in which supervisor data is the first output data, and thus the second output data indirectly includes the feature of the second training data. Therefore, the second learning module in which the second training data is incorporated is acquired without using the second training data obtaining device.
  • a technique is obtained that makes it possible to acquire a desired learning result in which multiple types of training data are incorporated without increasing types of devices for obtaining training data
  • Fig. 1 is a diagram showing the network configuration of a learning apparatus and a learning result using apparatus according to an embodiment of the present invention.
  • Fig. 2 is a diagram showing the physical configuration of the learning apparatus according to the embodiment of the present invention.
  • Fig. 3 is a functional block diagram of the learning apparatus according to the embodiment of the present invention.
  • Fig. 4 is a functional block diagram of the learning result using apparatus according to the embodiment of the present invention.
  • Fig. 5 is a conceptual diagram showing the input/output relationship of a first neural network of the learning apparatus according to the embodiment of the present invention.
  • Fig. 6 is a conceptual diagram showing the input/output relationship of a second neural network of the learning apparatus according to the embodiment of the present invention.
  • FIG. 7 is a conceptual diagram showing the input/output relationship of a third neural network of the learning result using apparatus according to the embodiment of the present invention.
  • Fig. 8 is a flowchart of processing executed by the learning apparatus according to the embodiment of the present invention.
  • Fig. 9 is a flowchart of processing executed by the learning result using apparatus according to the embodiment of the present invention.
  • Fig. 1 is a diagram showing the network configuration of a learning apparatus 10 and a learning result using apparatus 20 according to an embodiment of the present invention.
  • the learning apparatus 10 according to this embodiment is connected to the learning result using apparatus 20, one or more sensors 30 and a sensing data storage DB via a communication network N.
  • the communication network N may be either a wired communication network or a wireless communication network constituted by a wired or wireless line, or may be the Internet or a LAN (Local Area Network).
  • the sensing data storage DB, the learning apparatus 10 and the learning result using apparatus 20 are configured separately, but may be configured integrally.
  • the sensing data storage DB, the learning apparatus 10 and the learning result using apparatus 20 may all be configured to be integrated, or two out of the sensing data storage DB, the learning apparatus 10 and the learning result using apparatus 20 may be selectively configured to be integrated.
  • in a case where the sensing data storage DB, the learning apparatus 10 and the learning result using apparatus 20 are configured to be integrated, the elements thereof are connected to each other via an internal bus.
  • the learning apparatus 10 trains a first learning module and a second learning module based on training data including at least one of sensing data acquired from the sensor 30 and sensing data stored in the sensing data storage DB.
  • the learning apparatus 10 according to this embodiment is provided with the first learning module and the second learning module, but the first learning module and the second learning module may be provided in an apparatus separated from the learning apparatus 10.
  • a learning module includes a unit of dedicated or general-purpose hardware or software having a learning capability, or a combination of units of such hardware and software.
  • the learning capability refers to the ability to improve a capability of processing a certain task based on experience acquired from training data.
  • the learning result using apparatus 20 outputs output data corresponding to the feature of input data using a learning result of the learning apparatus 10.
  • the learning result using apparatus 20 acquires, from the learning apparatus 10, the trained second learning module or a copy of the trained second learning module, and sets the trained second learning module or the copy of the trained second learning module as a third learning module.
  • a copy of a trained learning module includes a unit of dedicated or general-purpose hardware or software that can reproduce a function of the trained learning module, or a combination of units of such hardware or software.
  • a copy of a trained learning module does not necessarily need to have a learning capability.
  • the configuration of a trained learning module and the configuration of a copy of the trained learning module do not necessarily need to match each other.
  • a copy of a trained learning module includes a trained learning module or a copy of the trained learning module that has completed training and also performed additional training.
  • a copy of the trained second learning module includes a learning module acquired as a result of causing the trained second learning module or a copy of the trained second learning module to perform additional training based on additional training data in the same form as first training data so as to output additional output data.
  • a copy of the trained second learning module also includes a learning module acquired as a result of causing the trained second learning module or a copy of the trained second learning module to perform additional training based on first training data so as to output additional output data.
  • a copy of a trained learning module further includes a learning module acquired by so-called distillation.
  • a copy of a trained learning module includes another trained learning module that has a structure different from that of the trained learning module and has been trained so as to have a function of the trained learning module.
  • the structure of the other learning module may be simpler than the structure of the trained learning module, may be more suitable for being deployed, and output data of the trained learning module may be used for the training of the other learning module.
  • a copy of a trained learning module includes a trained learning module that is acquired by changing a method for normalization for preventing overfitting, changing a learning rate of back propagation, or changing an updating algorithm of a weight coefficient, in the training process of the learning module.
  • acquiring the trained second learning module or a copy of the trained second learning module refers to acquiring information required to reproduce, in the learning result using apparatus 20, a function of the trained second learning module.
  • the second learning module includes a neural network
  • acquiring the trained second learning module or a copy of the trained second learning module refers to acquiring at least information regarding the number of layers of the neural network, the number of nodes in each of the layers, the weight parameters of the links connecting the nodes, the bias parameters for the nodes, and the functional types of the activation functions of the nodes.
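In other words, transferring the trained second learning module can amount to serializing exactly the items listed above. A hypothetical sketch follows; the field names and sizes are assumptions, not part of the patent.

```python
import json

# Hypothetical description of a trained second learning module, holding the
# information listed above: layer count, nodes per layer, link weights,
# node biases, and activation function types.
module_description = {
    "num_layers": 3,
    "nodes_per_layer": [4, 8, 2],
    "weights": [[[0.1] * 4] * 8, [[0.2] * 8] * 2],  # link weight parameters
    "biases": [[0.0] * 8, [0.0] * 2],               # node bias parameters
    "activations": ["relu", "softmax"],             # activation types
}

# Transferring the module to the learning result using apparatus can then
# be a matter of serializing and deserializing this description.
payload = json.dumps(module_description)
restored = json.loads(payload)
```

Any encoding that round-trips these fields losslessly would serve; JSON is used here purely for illustration.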
  • the sensor 30 may be either a physical quantity sensor that detects a physical quantity or an information sensor that detects information.
  • examples of the physical quantity sensor include cameras that detect light and output image data or moving image data, and vital sensors such as heartbeat sensors that detect the heartbeat of a person and output heartbeat data, blood pressure sensors that detect the blood pressure of a person and output blood pressure data, and body temperature sensors that detect human body temperature and output body temperature data, as well as any other sensors that detect a physical quantity and output an electric signal.
  • examples of the information sensor include sensors that detect a specific pattern from statistical data, as well as any other sensors that detect information.
  • the sensing data storage DB stores sensing data that has been output by the sensor 30.
  • the sensing data storage DB is shown as a single storage, but the sensing data storage DB may be constituted by one or more file servers.
  • Fig. 2 is a diagram showing the physical configuration of the learning apparatus 10 according to the embodiment of the present invention.
  • the learning apparatus 10 has a CPU (Central Processing Unit) 10a equivalent to a hardware processor, a RAM (Random Access Memory) 10b equivalent to a memory, a ROM (Read only Memory) 10c equivalent to a memory, a communication interface 10d, an input unit 10e and a display unit 10f. These constituent elements are connected via a bus so as to be able to exchange data with each other.
  • the type of the hardware processor is not limited to a CPU.
  • a CPU, a GPU (Graphics Processing Unit), an FPGA (Field-programmable Gate Array), a DSP (Digital Signal Processor), and an ASIC (Application Specific Integrated Circuit) can be used independently or in combination as the hardware processor.
  • the CPU 10a performs execution of a program stored in the RAM 10b or the ROM 10c and calculation and processing of data.
  • the CPU 10a is a calculation apparatus that executes an application for generating metadata.
  • the CPU 10a receives various types of input data from the input unit 10e or the communication interface 10d, and displays calculation results of the input data on the display unit 10f, and stores the calculation results in the RAM 10b or the ROM 10c.
  • the RAM 10b is a data-rewritable storage, and is constituted by a semiconductor storage element, for example.
  • the RAM 10b stores programs such as applications executed by the CPU 10a and data.
  • the ROM 10c is a data-read-only storage, and is constituted by a semiconductor storage element, for example.
  • the ROM 10c stores programs such as firmware and data, for example.
  • the communication interface 10d is a hardware interface that connects the learning apparatus 10 to the communication network N.
  • the input unit 10e accepts input of data from the user, and is constituted by a keyboard, a mouse, or a touch panel, for example.
  • the display unit 10f visually displays a result of calculation performed by the CPU 10a, and is constituted by an LCD (Liquid Crystal Display), for example.
  • the learning apparatus 10 may be configured by a learning program according to this embodiment being executed by the CPU 10a of a general personal computer.
  • the learning program may be stored in a computer-readable storage medium such as the RAM 10b or the ROM 10c and provided, or may be provided via the communication network N connected by the communication interface 10d.
  • the learning apparatus 10 may have an LSI (Large-Scale Integration) in which the CPU 10a and the RAM 10b or the ROM 10c are integrated.
  • the learning result using apparatus 20 also has a physical configuration similar to that of the learning apparatus 10.
  • the learning result using apparatus 20 may be configured by a learning result using program being executed by a CPU of a general personal computer.
  • the learning result using program may be stored in a computer-readable storage medium such as a RAM or a ROM and provided, or may be provided via the communication network N connected by a communication interface.
  • Fig. 3 is a functional block diagram of the learning apparatus 10 according to the embodiment of the present invention.
  • the learning apparatus 10 has a communication unit 11, a first learning control unit 12, a first learning result extraction unit 13, a first neural network 100, a first learning result output unit 14, a second learning control unit 15, a second learning result extraction unit 16, a second neural network 200 and a second learning result output unit 17.
  • the first learning control unit 12 and the second learning control unit 15 are control units that control machine learning.
  • the first neural network 100 is an example of the first learning module
  • the second neural network 200 is an example of the second learning module.
  • the learning apparatus 10 may have a learning module other than a neural network.
  • the first learning control unit 12 trains the first neural network 100 based on first training data and second training data associated with the first training data so as to output first output data corresponding to the features of the first training data and the second training data.
  • the first training data may be image data of a target, for example, and the second training data may be sensing data acquired by a sensor measuring the target or performing measurement with regard to the target when the image data was shot.
  • the first output data is data corresponding to the features of the image data and the sensing data, and is data regarding the target that is shot.
  • the first neural network 100 may be a CNN (Convolutional Neural Network) that is sometimes used for learning of image data, or an RNN (Recurrent Neural Network) that is sometimes used for learning of time series data.
  • a learning result of the first neural network 100 is extracted by the first learning result extraction unit 13, and is output to the second learning control unit 15 by the first learning result output unit 14.
  • the first learning control unit 12 may train the first neural network 100 by unsupervised learning based on first training data and second training data so as to output first output data.
  • the first output data that is based on the features of the first training data and the second training data can be autonomously generated by the first neural network 100, and feature extraction with higher objectivity can be performed.
  • it is not necessary to prepare supervisor data and thus there is no processing load or communication load for generating and collecting supervisor data, and it is not necessary to secure storage capacity for storing supervisor data.
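As a concrete stand-in for such unsupervised learning, principal component analysis can autonomously extract features from the combined training data without any supervisor data; the sketch below uses it in place of, say, an autoencoder, and all data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Combined first and second training data (e.g. image and sensing features).
data = rng.normal(size=(300, 6))

# Unsupervised feature extraction: principal component analysis serves here
# as a minimal linear stand-in for an autoencoder-style first module.
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
components = vt[:2]                   # two autonomously found feature axes

def first_output(x):
    # First output data: projection of the input onto the learned features.
    return (x - mean) @ components.T

features = first_output(data)
```

No labels were prepared at any point, which mirrors the stated advantage: there is no load for generating, collecting, or storing supervisor data in this stage.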
  • the first learning control unit 12 may train the first neural network 100 by supervised learning that uses supervisor data including attribute information of first training data and second training data, so as to output first output data based on the first training data and the second training data.
  • attribute information of training data is information indicating a feature of the training data, and may include information regarding the type of a physical amount measured by a sensor, the type of the sensor, the type of sensing data and a target measured by the sensor.
  • the second learning control unit 15 trains the second neural network 200 by supervised learning in which the supervisor data is the first output data that is output from the first neural network 100 in the case where the first training data is input to the first neural network 100, based on the first training data, so as to output second output data.
  • the second neural network 200 shares the learning objective with the first neural network 100 and acquires the same type of capability as the first neural network 100.
  • both of the second output data output from the second neural network 200 and the first output data output from the first neural network 100 are data relating to the same subject and expressed in the same form.
  • the same type of capability may include the capability for performing at least one of analysis, estimation, control with respect to the same (or substantially the same) target, state or operation, and the capability for performing determination, identification, recognition with respect to the same (or substantially the same) requirement.
  • the data relating to the same subject and expressed in the same form includes, for example, data indicating control values for the same variables in the same unit, and data indicating scores for the same determination (the quality of an item, presence of an object, or the like) according to the same rule.
  • the supervisor data is the first output data that is output from the trained first neural network 100 in the case where the image data is input to the trained first neural network 100
  • the second output data that is output from the second neural network 200 in the case where the image data is input to the second neural network 200 is data relating to the same subject and expressed in the same form as the first output data, that is, data corresponding to the feature of the image data, and is data regarding the target that is shot.
  • a learning result of the second neural network 200 is extracted by the second learning result extraction unit 16, and is output to the outside via the communication unit 11 by the second learning result output unit 17.
  • the first training data used for learning performed by the first neural network 100 and the first training data used for learning performed by the second neural network 200 are the same data, but the present invention is not limited to this example. As long as the first training data used for learning performed by the first neural network 100 and the first training data used for learning performed by the second neural network 200 have the same form (or the same type), both data may differ in content. Specifically, the first training data used for learning performed by the first neural network 100 and the first training data used for learning performed by the second neural network 200 are data in the same form, but may be data in which part or all of the content is different.
  • a configuration may be adopted in which, in the case where image data of a first group as the first training data and sensing data as the second training data were used in learning performed by the first neural network 100, when the second neural network 200 performs learning, image data of a second group is input to the trained first neural network 100 as the first training data, and the second neural network 200 performs learning based on the image data of the second group with the first output data that is output from the trained first neural network 100 serving as supervisor data.
  • a form of data indicates, for example, the form of images (e.g., colour images, infrared images, and range images) or the form of numerical values (e.g., binary, and continuous values).
  • Data in the same form may include data obtained by the same type of data obtaining device, such as a camera, a sensor, or a measurement device, and data in different forms may include data obtained by different types of data obtaining devices.
  • data in the same form may include data obtained for the same target, such as a subject of images or a sensing target object, by the same type of data obtaining device, and data in different forms may include data obtained for different targets.
  • the image data of the first group and the image data of the second group are both image data (i.e., the data in the same form), and the image data of the second group may or may not include the same pieces of image data as the image data of the first group.
  • the first output data corresponding to the features of the first training data and the second training data is output by the first neural network 100 that accepts the first training data and the second training data as input data
  • the second output data corresponding to the feature of the first training data is output by the second neural network 200 that accepts the first training data as input data.
  • the second neural network 200 performs supervised learning in which supervisor data is the first output data, and thus the second output data indirectly includes the feature of the second training data. Therefore, a neural network having a desired performance is acquired without increasing the types of measurement devices used for obtaining training data.
  • a neural network is acquired which provides the same performance as that in a case where a plurality of types of measurement devices that obtain the first training data and the second training data are used without using a measurement device for the second training data.
  • a neural network in which a desired measurement result is incorporated without using a measurement device for the second training data is acquired, and thus it is possible to reduce the number of items of hardware of the learning result using apparatus 20 that uses the trained neural network, and to further reduce the processing load of the hardware processor due to a reduction in data amount.
  • the second learning control unit 15 trains the second neural network 200. Accordingly, after the first neural network 100 learned the features of the first training data and the second training data, the second neural network 200 can be trained using, as supervisor data, the first output data that is output from the first neural network 100, and thus the feature of the second training data is more accurately reflected on the training of the second neural network 200.
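The two-stage procedure described above can be illustrated in miniature. The sketch below is illustrative only and is not part of the claimed apparatus: it stands in linear least-squares models for the first and second neural networks, and random feature vectors for the first training data (e.g., image data) and second training data (e.g., vital/sensing data); all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: x1 = first training data ("image" features),
# x2 = second training data ("vital" features). The supervisor signal
# depends on both, so x2 complements x1.
n, d1, d2 = 200, 8, 4
x1 = rng.normal(size=(n, d1))
x2 = rng.normal(size=(n, d2))
w_true = rng.normal(size=(d1 + d2, 3))
y = np.hstack([x1, x2]) @ w_true          # supervisor data for the first network

# Stage 1: the "first network" is fit on both kinds of training data.
x12 = np.hstack([x1, x2])
w1, *_ = np.linalg.lstsq(x12, y, rcond=None)

# First output data: what the trained first network emits for the first training data.
first_output = x12 @ w1

# Stage 2: the "second network" is fit on x1 only, with first_output
# serving as supervisor data, so the feature of x2 is reflected indirectly.
w2, *_ = np.linalg.lstsq(x1, first_output, rcond=None)
second_output = x1 @ w2
```

At use time, the second model needs only `x1` (the data obtainable by the common device), which is the point of the arrangement: the second measurement device is no longer required once training is complete.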
  • Fig. 4 is a functional block diagram of the learning result using apparatus 20 according to the embodiment of the present invention.
  • the learning result using apparatus 20 has a learning result input unit 231, a neural network setting unit 232, a third neural network 233, a control unit 234, an input unit 235, a communication unit 236, a data acquiring unit 237 that acquires data to be input to the third neural network 233, and an output unit 238.
  • the third neural network 233 is an example of a learning module
  • the learning result using apparatus 20 may have a learning module other than a neural network, and in that case, the neural network setting unit 232 will be replaced by a constituent element that sets a learning module other than a neural network.
  • the data acquiring unit 237 may acquire data via the communication unit 236, or may acquire data via communication other than communication using the communication unit 236.
  • the learning result input unit 231 accepts input of a learning result.
  • the learning result input unit 231 accepts, via the communication unit 236, a learning result that is output by the second learning result output unit 17 of the learning apparatus 10.
  • the neural network setting unit 232 acquires the trained second neural network 200 acquired as a result of training by the second learning control unit 15 provided in the learning apparatus 10 or a copy of the trained second neural network 200, and sets the trained second neural network 200 or the copy of the trained second neural network 200 as the third neural network 233.
  • the control unit 234 controls the data acquiring unit 237 and the input unit 235 so as to input designated input data to the third neural network 233 and to output output data.
  • the input unit 235 inputs data having the same form as the first training data to the third neural network 233.
  • the output unit 238 outputs the output data from the third neural network 233.
  • the output data from the third neural network 233 is output by the output unit 238 via the communication unit 236.
  • output data corresponding to the feature of input data is output by the third neural network 233 that accepts, as input data, data having the same form as the first training data.
  • the third neural network 233 is set by the trained second neural network 200 or a copy of the trained second neural network 200, and thus the third neural network 233 indirectly includes the feature of the second training data. Therefore, a learning module having a desired performance is acquired without increasing types of measurement devices.
  • a desired learning result can be acquired even without using a measurement device used for obtaining sensing data (second training data), and it is possible to reduce the number of items of hardware that constitute the learning result using apparatus 20, and to further reduce the processing load of the hardware processor due to a reduction in the data amount.
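The handover from the learning apparatus 10 to the learning result using apparatus 20 amounts to installing the trained second neural network 200, or a copy of it, as the third neural network 233. A minimal sketch, assuming a `TinyNet` stand-in for a trained network (the class and all names are hypothetical, not part of the described apparatus):

```python
import copy
import numpy as np

class TinyNet:
    """Stand-in for a trained network: holds one weight matrix (assumption)."""
    def __init__(self, w):
        self.w = np.asarray(w, dtype=float)

    def __call__(self, x):
        # Forward pass: a single linear map, as a placeholder for inference.
        return np.asarray(x, dtype=float) @ self.w

# Learning apparatus side: the trained second network.
second_net = TinyNet(np.eye(3))

# Learning-result-using apparatus side: set a *copy* as the third network,
# so later retraining of second_net does not change the deployed module.
third_net = copy.deepcopy(second_net)

# Input data in the same form as the first training data.
out = third_net([0.1, 0.2, 0.7])
```

Using a deep copy mirrors the description's "trained second neural network 200 or a copy of the trained second neural network 200": the deployed third network is decoupled from the learning side.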
  • the first training data may be data in the same form as input data that is input to the trained second neural network 200 acquired as a result of training by the second learning control unit 15 of the learning apparatus 10 or a copy of the trained second neural network 200.
  • the second training data may be data temporally related to the first training data.
  • the second training data may be data in a form different from that of the input data that is input to the trained second neural network 200 or the copy of the trained second neural network 200.
  • the second training data is data that complements or reinforces the first training data, and is data for extracting a feature that cannot be extracted through training that is based only on the first training data.
  • Each piece of the second training data may be obtained at the same time as, or in temporal proximity to, when the corresponding piece of the first training data is obtained.
  • the second training data temporally related to the first training data includes the second training data obtained within the predetermined period of time before or after the corresponding first training data is obtained.
  • the first neural network 100 can perform multilateral learning based on the first training data in the same form as the input data that is input to the trained second neural network 200, and the second training data that complements or reinforces the first training data.
  • the second neural network 200 can perform supervised learning that extracts a feature that is sometimes not extracted through learning that is based only on the first training data.
  • the scale of the second neural network 200 is smaller than the scale of the first neural network 100.
  • the scale of a neural network is a scale measured based on the number of nodes, the number of links, the number of layers and the like included in the neural network. Due to the scale of the second neural network 200 being smaller than the scale of the first neural network 100, the learning apparatus 10 that is relatively rich in calculation resources performs high-load processing, and thus the scale of the third neural network 233 that is set in the learning result using apparatus 20 can be suppressed to a small scale, and the processing load and communication load of the learning result using apparatus 20 can be suppressed.
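The scale comparison can be made concrete by counting parameters of two fully connected networks. The layer widths below are hypothetical, chosen only to illustrate that a network accepting image plus sensing features (the first neural network) is naturally larger than one accepting image features alone (the second neural network):

```python
def n_params(layers):
    """Total weights + biases of a fully connected net with the given layer widths."""
    return sum(a * b + b for a, b in zip(layers, layers[1:]))

first_scale  = n_params([12, 64, 64, 3])   # accepts image + sensing features (assumed widths)
second_scale = n_params([8, 16, 3])        # accepts image features only (assumed widths)

assert second_scale < first_scale
```

The high-load training of the larger network stays on the learning apparatus 10, while only the smaller network's scale determines the processing and communication load of the learning result using apparatus 20.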
  • first training data is image data acquired by shooting a person
  • second training data is vital data of the person at the time when the image data was shot.
  • the time when the image data was shot is a concept that includes the same time as the shooting of the image data and the temporal vicinity before and after.
  • the first training data includes first image data 301, second image data 302 and third image data 303.
  • the second training data includes first vital data 401, second vital data 402 and third vital data 403.
  • the first vital data 401 is vital data of a subject person at the time when the first image data 301 was shot.
  • the first vital data 401 is data that is the same as the first image data 301 in time series.
  • the second vital data 402 is vital data of the subject person at the time when the second image data 302 was shot
  • the third vital data 403 is vital data of the subject person at the time when the third image data 303 was shot.
  • vital data is any biological data such as a heart rate, a blood pressure, a body temperature, a blood component amount, a urine component amount, or a brain wave.
  • the learning apparatus 10 trains the first neural network 100 based on first training data and second training data so as to output first output data corresponding to the features of the first training data and the second training data.
  • the first output data includes first data 501, second data 502 and third data 503, each of which is numeric data.
  • the first data 501 is output data that is output in the case where the first image data 301 and the first vital data 401 are input as input data to the first neural network 100, and is a three-dimensional numeric vector "(0.9, 0.05, 0.05)" in the case of this example.
  • the second data 502 is output data that is output in the case where the second image data 302 and the second vital data 402 are input as input data to the first neural network 100, and is a three-dimensional numeric vector "(0.05, 0.9, 0.05)".
  • the third data 503 is output data that is output in the case where the third image data 303 and the third vital data 403 are input as input data to the first neural network 100, and is a three-dimensional numeric vector "(0.05, 0.05, 0.9)".
  • the first output data is data corresponding to a human emotion, and each component indicates a degree of correspondence to a predetermined emotion. The larger the numeric value of a component, the higher the reliability with which the input is determined to indicate the emotion corresponding to that component.
  • the user of the learning apparatus 10 compares input data and output data of the first neural network 100, and assigns meanings to the output data.
  • a meaning “anger” is assigned to the first data 501
  • a meaning “relaxation” is assigned to the second data 502
  • a meaning "smile/laughter” is assigned to the third data 503.
  • in the case where the first learning control unit 12 trains the first neural network 100 by supervised learning that uses supervisor data including attribute information of the first training data and the second training data, the user of the learning apparatus 10 does not need to assign meanings to the output data.
  • the first neural network 100 autonomously learns that a first component included in the three dimensional vector that is output as the output data is an amount indicating a degree of anger, a second component is an amount indicating a degree of relaxation, and a third component is an amount indicating a degree of smile/laughter.
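Reading off such an output vector is a matter of taking the largest component and the meaning the user assigned to it. A small sketch using the labels and vectors from the example above (the function name is hypothetical):

```python
labels = ["anger", "relaxation", "smile/laughter"]  # meanings assigned by the user

def read_emotion(output):
    """Return the label of the largest component and its score (reliability)."""
    i = max(range(len(output)), key=lambda k: output[k])
    return labels[i], output[i]

# The first, second and third data from the example resolve as expected.
assert read_emotion((0.9, 0.05, 0.05)) == ("anger", 0.9)
assert read_emotion((0.05, 0.9, 0.05)) == ("relaxation", 0.9)
assert read_emotion((0.05, 0.05, 0.9)) == ("smile/laughter", 0.9)
```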
  • the learning apparatus 10 can acquire a learning result that makes it possible to estimate a human emotion more accurately than in a case of using only image data as training data, by training the first neural network 100 using both image data and vital data as training data.
  • the image data is data that can be acquired by a camera, which is a common sensor, and is data that can be acquired without mounting a sensor to a person to be shot.
  • the vital data is data that cannot be acquired unless a dedicated sensor is used, and is data that cannot be acquired unless a sensor is mounted to the person to be shot.
  • the learning apparatus 10 may train the first neural network 100 by combining first training data that is relatively easy to acquire and second training data that is relatively difficult to acquire, but complements or reinforces the first training data.
  • Fig. 6 is a conceptual diagram showing the input/output relationship of the second neural network 200 of the learning apparatus 10 according to the embodiment of the present invention.
  • First training data shown in this figure is the same as the first training data shown in Fig. 5, and includes first image data 301, second image data 302 and third image data 303.
  • the learning apparatus 10 trains the second neural network 200 by supervised learning in which the supervisor data is the first output data that is output from the first neural network 100 in the case where first training data is input to the first neural network 100, based on the first training data, so as to output second output data.
  • the second output data includes fourth data 601, fifth data 602 and sixth data 603, each of which is numeric data.
  • the fourth data 601 is output data that is output in the case where the first image data 301 is input as input data to the second neural network 200, and is a three-dimensional numeric vector "(0.96, 0.02, 0.02)" in the case of this example.
  • the fifth data 602 is output data that is output in the case where the second image data 302 is input as input data to the second neural network 200, and is a three-dimensional numeric vector "(0.02, 0.96, 0.02)".
  • the sixth data 603 is output data that is output in the case where the third image data 303 is input as input data to the second neural network 200, and is a three-dimensional numeric vector "(0.02, 0.02, 0.96)".
  • the second output data is data corresponding to a human emotion.
  • the second learning control unit 15 trains the second neural network 200 by supervised learning in which the supervisor data is the first output data that is output from the trained first neural network 100 in the case where the first training data is input to the trained first neural network 100, and thus the user of the learning apparatus 10 does not have to assign meanings to the second output data.
  • the second neural network 200 autonomously learns that the first component included in the three dimensional vector that is output as the second output data is an amount indicating the degree of anger, the second component is an amount indicating the degree of relaxation, and the third component is an amount indicating the degree of smile/laughter.
  • the learning apparatus 10 trains the second neural network 200 using, as supervisor data, output data that is output from the trained first neural network 100 in the case where the first training data is input to the trained first neural network 100, and thereby can acquire a learning result that includes vital data, using only image data as training data, and can acquire a learning result that makes it possible to estimate a human emotion more accurately.
  • the image data is data that can be acquired by a camera, which is a common sensor, and thus the trained second neural network 200 can exhibit, using only image data that is relatively easy to acquire as input data, identification performance similar to that in the case where sensing data that is relatively difficult to acquire is used for complementation.
  • the second neural network 200 is trained using, as supervisor data, the first output data of the first neural network 100 that was trained based on image data and sensing data, so as to output second output data, and thereby the second neural network 200 can indirectly learn a feature that cannot be extracted from only the image data, and the second neural network 200 in which the sensing data is incorporated is acquired.
  • a desired learning result can be acquired even without using a measurement device used for obtaining the sensing data (the second training data), and it is possible to reduce the number of items of hardware that is used, and to further reduce the processing load of the hardware processor due to a reduction in the data amount.
  • the second neural network 200 is trained using, as supervisor data, the first output data of the first neural network 100 that was trained based on image data and vital data of a human, so as to output second output data, and thereby the second neural network 200 can indirectly learn a feature that cannot be extracted from only the image data, and the second neural network 200 that can estimate a human emotion more accurately is acquired.
  • a desired learning result can be acquired without using a measurement device used for obtaining the vital data (second training data), and it is possible to reduce the number of items of hardware that is used, and to further reduce the processing load of the hardware processor due to a reduction in the data amount.
  • the number of types of the features of the first training data is three in the above example, but generally a larger number of features, namely four or more, are included in first training data.
  • the first neural network 100 and the second neural network 200 are trained so as to classify the thousands of types of the features of the first training data, determine which of the thousands of types of classifications input data is close to, and output output data corresponding to the features of the input data.
  • the learning apparatus 10 has been described which has the first neural network 100 and the second neural network 200, and performs training using first training data and second training data, but the configuration of the learning apparatus 10 is not limited to this example. Accordingly, the learning apparatus 10 may have three or more neural networks, and may be configured to perform training using training data of three types or more.
  • the learning apparatus 10 may have a first neural network that is trained based on first training data, second training data and third training data so as to output first output data corresponding to the features of the first training data, the second training data and the third training data, and a second neural network that performs supervised learning in which supervisor data is the first output data, based on the first training data, so as to output second output data.
  • the learning apparatus 10 may have a first neural network that is trained based on first training data, second training data and third training data so as to output first output data corresponding to the features of the first training data, the second training data and the third training data, a second neural network that performs supervised learning in which supervisor data is the first output data, based on the first training data and the second training data, so as to output second output data, and a third neural network that performs supervised learning in which supervisor data is the second output data, based on the first training data, so as to output third output data.
  • the learning apparatus 10 may have a first neural network that is trained based on first training data and second training data so as to output first output data corresponding to the features of the first training data and the second training data, and a plurality of second neural networks that perform supervised learning in which supervisor data is the first output data, based on the first training data, so as to output second output data.
  • the plurality of second neural networks may each have a different neural network structure regarding the number of layers, the number of units and the number of links, and may each output different second output data.
  • Fig. 7 is a conceptual diagram showing the input/output relationship of the third neural network 233 of the learning result using apparatus 20 according to the embodiment of the present invention.
  • Input data shown in the figure includes fourth image data 310.
  • the learning result using apparatus 20 acquires the trained second neural network 200 acquired as a result of training by the second learning control unit 15 provided in the learning apparatus 10 or a copy of the trained second neural network 200, and sets, as the third neural network 233, the trained second neural network 200 or the copy of the trained second neural network 200.
  • the third neural network 233 accepts, as input data, data having the same form as first training data.
  • the data having the same form as the first training data is image data.
  • the third neural network 233 outputs output data corresponding to the feature of the input data.
  • the output data is seventh data 701, and the seventh data 701 is numeric data.
  • the seventh data 701 is output data that is output in the case where the fourth image data 310 is input as input data to the third neural network 233, and is a three-dimensional numeric vector "(0.02, 0.02, 0.96)" in the case of this example.
  • the output data of the third neural network 233 is data corresponding to a human emotion, and the output data in this example is data corresponding to "smile/laughter".
  • the learning result using apparatus 20 acquires the trained second neural network 200 or a copy of the trained second neural network 200, and sets, as the third neural network 233, the trained second neural network 200 or the copy of the trained second neural network 200, and thereby a learning result including vital data can be used even if input data is image data only, and a human emotion can be estimated more accurately.
  • the image data is data that can be acquired by a camera, which is a common sensor, and thus the third neural network 233 of the learning result using apparatus 20 can exhibit an identification performance similar to that in the case where only image data that is relatively easy to acquire is used as input data, and sensing data that is relatively hard to acquire is used for complementation.
  • First training data and second training data are not limited to image data and vital data of a person.
  • vital data of a person may be used as first training data
  • image data of the person may be used as second training data.
  • the image data of a person may be used as data for complementing or reinforcing the vital data.
  • first training data may include image data acquired by shooting a vehicle
  • second training data may include sensing data measured by a sensor provided in the vehicle at the time when the image data was shot. More specifically, image data of a second vehicle that was shot by a camera provided in a first vehicle in the state where the first vehicle was following the second vehicle may be used as the first training data, and sensing data measured by a sensor provided in the second vehicle may be used as the second training data.
  • the sensor provided in the second vehicle may be a sensor that measures an operation of the accelerator pedal of the second vehicle, an operation of the brake pedal, a steering operation, a turn signal (winker) operation, and the state of the driver.
  • the first neural network 100 is trained based on the image data of the second vehicle that has been shot from the first vehicle and the sensing data related to a measured operation of the second vehicle, and first output data of the first neural network 100 will be data corresponding to the operation of the vehicle.
  • the data corresponding to the operation of the vehicle includes a speed, acceleration, a traveling direction vector, probability of course change, and the like.
  • the second neural network 200 performs supervised learning in which the supervisor data is the first output data that is output from the trained first neural network 100 in the case where the image data of the second vehicle shot from the first vehicle is input to the trained first neural network 100, based on the image data of the second vehicle shot from the first vehicle, and second output data of the second neural network 200 is data corresponding to the operation of the vehicle, similar to the first output data.
  • the second training data may include information regarding the relative distance between the first vehicle and the second vehicle. The operation of a vehicle changes greatly according to the distance between a leading vehicle and a following vehicle. Therefore, if the second training data includes information regarding the relative distance, it is possible to improve the accuracy of operation estimation of the vehicle, which will be described later.
  • the relative distance can be acquired by the following method. For example, on a test course on which a measurement apparatus that identifies the position of a vehicle is provided, the relative distance between the first vehicle and the second vehicle can be measured while shooting the second vehicle using a camera provided in the first vehicle.
  • the distance between the first vehicle and the second vehicle can be acquired by attaching a focus detection apparatus (e.g., a laser radar) at the front of the first vehicle or the rear of the second vehicle.
  • the information regarding the relative distance may be estimated based on an image from a camera provided on a general road.
  • a configuration may be adopted in which the first vehicle and the second vehicle built as physical models run in a virtual space, and image data as the first training data, sensor data as the second training data and the relative distance are acquired from the virtual space.
  • the second neural network 200 is trained using, as supervisor data, the first output data of the first neural network 100 that was trained based on the image data and the sensing data of the vehicle, so as to output the second output data, in this manner, and thereby the second neural network 200 can indirectly learn a feature that cannot be extracted from only the image data of the vehicle, and the second neural network 200 that can perform operation estimation of the vehicle more accurately is acquired.
  • First training data and second training data may be data other than the above.
  • the first neural network 100 and the second neural network 200 may perform learning based on image data acquired by shooting a person and sensing data that has been output from a sensor that detects action of the person, the image data serving as first training data and the sensing data serving as second training data, so as to output data corresponding to the action of the person as first output data and second output data.
  • the sensor that detects action of a person may be a momentum sensor or an acceleration sensor that is mounted to a person, or a sensor that is provided on a target that is operated by a person and detects an operation performed by the person. Accordingly, it is possible to output the second output data for predicting the next action of the person in the case where the image data acquired by shooting the person is input to the second neural network 200.
  • the first neural network 100 and the second neural network 200 may be trained based on image data acquired by shooting a fruit and sensing data that has been output from a sensor that measures the degree of maturation of the fruit, the image data serving as first training data and the sensing data serving as second training data, so as to output data corresponding to the degree of the maturation of the fruit as first output data and second output data.
  • the sensor that measures the degree of maturation of a fruit may be a weight sensor, a hardness sensor, a sugar content sensor or the like. Accordingly, it is possible to output the second output data that estimates the degree of maturation of the fruit in the case where the image data acquired by shooting the fruit is input to the second neural network 200.
  • the first neural network 100 and the second neural network 200 may perform learning based on image data acquired by shooting the appearance of a substrate onto which electric parts are fixed by soldering and sensing data that has been output from a sensor that measures the state of the soldering (e.g., an air content of the soldering, denaturation due to overheat, and an unjoined state due to heating shortage), the image data serving as first training data and the sensing data serving as second training data, so as to output, as first output data and second output data, data corresponding to whether or not a soldering inspection criterion is met.
  • if the second neural network 200 that was trained in this manner is used in a substrate inspection apparatus for checking the state of soldering between a substrate and electric parts placed on the substrate, data corresponding to whether or not the soldering inspection criterion is met can be acquired without using the sensor that measures the state of soldering, and thus it is possible to reduce the number of items of hardware of the substrate inspection apparatus, and to further reduce the processing load of the hardware processor due to a reduction in data amount.
  • Fig. 8 is a flowchart of processing executed by the learning apparatus 10 according to the embodiment of the present invention.
  • the learning apparatus 10 designates first training data and second training data based on an instruction accepted from the user (step S10). After that, the learning apparatus 10 determines whether or not supervised learning is to be performed (step S11). Here, whether or not supervised learning is to be performed may be determined based on the instruction accepted from the user.
  • If the learning apparatus 10 determines that supervised learning is to be performed (step S11: Yes), the learning apparatus 10 designates supervisor data based on an instruction accepted from the user (step S12). The learning apparatus 10 trains the first neural network 100 by supervised learning based on the designated first training data, second training data and supervisor data (step S13).
  • If the learning apparatus 10 determines that supervised learning is not to be performed (step S11: No), the learning apparatus 10 trains the first neural network 100 by unsupervised learning based on the designated first training data and second training data (step S14).
  • the learning apparatus 10 trains the second neural network 200 by supervised learning in which supervisor data is first output data that has been output from the first neural network 100, based on the designated first training data (step S15). The processing performed by the learning apparatus 10 then ends.
  • the trained second neural network 200 or a copy of the trained second neural network 200 can be generated by using the learning apparatus 10 according to this embodiment.
  • the trained second neural network 200 or a copy of the trained second neural network 200 can be generated by the first learning control unit 12 training the first neural network 100 based on first training data and second training data, so as to output first output data corresponding to the features of the first training data and the second training data, the second learning control unit 15 training the second neural network 200 by supervised learning in which the supervisor data is the first output data that is output from the first neural network 100 in the case where the first training data is input to the first neural network 100, based on the first training data, so as to output the second output data, and the second learning result output unit 17 outputting the trained second neural network 200 or the copy of the trained second neural network 200.
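The two-stage flow of steps S10 to S15 can be sketched in code. The following is a minimal, toy sketch, assuming dict-based stand-ins for the two neural networks; the names and arithmetic are purely illustrative and are not from the specification.

```python
# Toy sketch of the Fig. 8 flow (steps S10-S15). The "modules" are plain
# dicts rather than neural networks; the point is the data flow: the
# second module is trained only on first training data, with the first
# module's output serving as supervisor data.

def train_first_module(pairs):
    # Steps S11-S14: train the first module on associated (first, second)
    # training data. The supervised/unsupervised branch is abstracted
    # away; here the "first output data" simply reflects both inputs.
    return {x1: x1 + x2 for (x1, x2) in pairs}

def train_second_module(first_data, first_module):
    # Step S15: supervised learning in which the supervisor data is the
    # first output data produced when the first training data is input
    # to the first module. The second training data is never used here.
    return {x1: first_module[x1] for x1 in first_data}

# Step S10: designate first and second training data (toy values).
pairs = [(1, 10), (2, 20), (3, 30)]
first_module = train_first_module(pairs)
second_module = train_second_module([x1 for x1, _ in pairs], first_module)
```

Note how the feature of the second training data (the second element of each pair) reaches the second module only indirectly, through the first module's output.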
  • Fig. 9 is a flowchart of processing executed by the learning result using apparatus 20 according to the embodiment of the present invention.
  • the learning result using apparatus 20 acquires the trained second neural network 200 or a copy of the trained second neural network 200 that was generated using the learning apparatus 10, and sets it as the third neural network 233 (step S20).
  • the learning result using apparatus 20 then designates input data that is to be input to the third neural network 233 based on an instruction accepted from the user (step S21).
  • the input data is data having the same form as the first training data.
  • the learning result using apparatus 20 inputs the designated input data to the third neural network 233, and outputs output data corresponding to the feature of the input data (step S22). The processing performed by the learning result using apparatus 20 then ends.
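The Fig. 9 flow of steps S20 to S22 can be sketched similarly. This is a toy sketch, assuming the trained second module can be represented as a plain input-to-output mapping; this stand-in is hypothetical, not the actual network representation.

```python
# Toy sketch of the Fig. 9 flow (steps S20-S22): the third module is set
# from (a copy of) the trained second module, then used for inference on
# data having the same form as the first training data.

import copy

trained_second_module = {1: 11, 2: 22, 3: 33}        # toy trained module
third_module = copy.deepcopy(trained_second_module)  # step S20

def infer(module, input_data):
    # Steps S21-S22: input data having the same form as the first
    # training data; output data corresponding to its feature.
    return module[input_data]

result = infer(third_module, 2)
```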
  • Additional Remark 1 A learning apparatus including at least one memory and at least one hardware processor connected to the memory, wherein the hardware processor trains a first learning module based on first training data and second training data associated with the first training data, so as to output first output data corresponding to the features of the first training data and the second training data, and the hardware processor trains a second learning module by supervised learning in which supervisor data is the first output data that is output from the first learning module in the case where the first training data is input to the first learning module, based on the first training data, so as to output second output data.
  • Additional Remark 2 A learning method wherein at least one hardware processor trains a first learning module based on first training data and second training data associated with the first training data, so as to output first output data corresponding to the features of the first training data and the second training data, and the hardware processor trains a second learning module by supervised learning in which the supervisor data is the first output data that is output from the first learning module in the case where the first training data is input to the first learning module, based on the first training data, so as to output second output data.

Abstract

A technique for acquiring a desired learning result without increasing the types of devices for obtaining training data is provided. A learning apparatus has a first learning control unit that trains a first learning module based on first training data and second training data associated with the first training data so as to output first output data corresponding to the features of the first training data and the second training data, and a second learning control unit that trains a second learning module by supervised learning in which supervisor data is the first output data that is output from the first learning module in a case where the first training data is input to the first learning module, based on the first training data, so as to output second output data.

Description

LEARNING APPARATUS, LEARNING RESULT USING APPARATUS, LEARNING METHOD AND LEARNING PROGRAM
The present invention relates to a learning apparatus, a learning result using apparatus, a learning method, and a learning program.
CROSS-REFERENCES TO RELATED APPLICATIONS
This application claims priority to Japanese Patent Application No. 2017-038492, filed March 1, 2017, the entire contents of which are incorporated herein by reference.
In recent years, studies related to machine learning have been widely performed. In particular, due to the development of techniques called deep learning, learning modules that exhibit performance equivalent to or higher than the recognizing capability of humans have become available.
As an application example of machine learning, JP 2016-99165A describes a calculation apparatus that uses a pressure sensor that directly acquires a body weight and an image capturing apparatus that indirectly acquires a body weight and improves the accuracy of measurement by machine learning that uses measurement values of the pressure sensor and measurement values of the image capturing apparatus. JP 2016-99165A is an example of background art.
The apparatus described in JP 2016-99165A aims to acquire an accurate measurement result, even if the measurement target moves, by complementing the measurement value of the pressure sensor with the measurement value of the image capturing apparatus. However, with the apparatus described in JP 2016-99165A, multiple types of measurement devices for machine learning must be installed in the environment in which the apparatus is used, which brings disadvantages such as the burden of installing multiple types of measurement devices, an increase in cost, and an increase in the size of the apparatus.
In view of this, it is an object of the present invention to provide a technique for acquiring a desired learning result in which multiple types of training data are incorporated without increasing the types of devices for obtaining training data.
A learning apparatus according to one aspect of the present invention has a first learning control unit that trains a first learning module based on first training data and second training data associated with the first training data so as to output first output data corresponding to features of the first training data and the second training data, and a second learning control unit that trains a second learning module by supervised learning in which supervisor data is the first output data that is output from the first learning module in a case where the first training data is input to the first learning module, based on the first training data, so as to output second output data.
According to this aspect, the first output data corresponding to the features of the first training data and the second training data is output by the first learning module that accepts the first training data and the second training data as input data, and the second output data is output by the second learning module that accepts the first training data as input data. The second learning module is trained by supervised learning in which supervisor data is the first output data, and thus the feature of the second training data is indirectly included in the second learning module. Therefore, the first learning module having a desired performance is generated by using both the first training data and the second training data, whereas the second learning module having the same performance as the first learning module is generated by using the first training data and the first output data from the first learning module (i.e., without using the second training data). That is, although training of the first learning module requires a device for obtaining the first training data (the first training data obtaining device) and a device for obtaining the second training data (the second training data obtaining device), training of the second learning module does not require the second training data obtaining device. As a result, it is possible to acquire a desired learning result in which multiple types of training data are incorporated without increasing the types of devices for obtaining training data, and thus it is possible to reduce the number of items of hardware that are used and the processing load of the hardware processor due to a reduction in the data amount.
In the above aspect, the second learning control unit may train the second learning module after training of the first learning module.
According to this aspect, the second learning module can be trained using the first output data of the first learning module as supervisor data, after the first learning module learns the features of the first training data and the second training data, and thus the feature of the second training data is more accurately incorporated in the learning of the second learning module.
In the above aspect, the first training data may be data in the same form as input data that is input to a trained second learning module, which is acquired as a result of training performed by the second learning control unit, or a copy of the trained second learning module, and the second training data may be data temporally related to the first training data, and may be data in a form different from input data that is input to the trained second learning module or the copy of the trained second learning module.
According to this aspect, the first learning module can perform multilateral learning based on the first training data in the same form as the input data that is input to the trained second learning module and the second training data that complements or reinforces the first training data. In addition, if the first output data of the first learning module that performs multilateral learning serves as supervisor data, the second learning module can perform supervised learning that extracts a feature that is sometimes not extracted by training based only on the first training data.
In the above aspect, the first learning control unit may train the first learning module by unsupervised learning based on the first training data and the second training data so as to output the first output data.
According to this aspect, the first output data corresponding to the features of the first training data and the second training data can be autonomously generated by the first learning module, making it possible to perform more objective feature extraction. In addition, it is not necessary to prepare supervisor data, and thus there is no processing load or communication load for generating and collecting supervisor data, and it is not necessary to secure the storage capacity for storing supervisor data.
In the above aspect, the first learning control unit may train the first learning module by supervised learning that uses supervisor data including attribute information of the first training data and the second training data, based on the first training data and the second training data, so as to output the first output data.
According to this aspect, it is possible to generate the first output data corresponding to the features of the first training data and the second training data in consideration of existing attribute information. In addition, it is not necessary to assign meaning to the first output data, and thus it is not necessary to perform calculation or communication in order to interpret the first output data, whereby the processing load and the communication load are suppressed.
In the above aspect, the first learning module and the second learning module may each include a neural network, and a scale of the neural network included in the second learning module may be smaller than a scale of the neural network included in the first learning module.
According to this aspect, high-load processing is performed in the learning apparatus that is relatively rich in calculation resources, and it is possible to suppress the scale of a neural network that is set in a learning result using apparatus to a small scale, and to suppress the processing load and the communication load of the learning result using apparatus.
In the above aspect, the first training data may include image data of a target, the second training data may include sensing data acquired by measuring the target using a sensor when the image data is shot, and the first output data and the second output data may include data related to the target.
According to this aspect, the second learning module that outputs the second output data corresponding to a feature of the image data can indirectly learn a feature that is included in the sensing data and cannot be extracted from the image data, and the second learning module that outputs more accurate second output data is acquired.
In the above aspect, the first training data may include image data acquired by shooting a person, the second training data may include vital data of the person when the image data is shot, and the first output data and the second output data may be data corresponding to a human emotion.
According to this aspect, the second learning module that outputs the second output data corresponding to a feature of the person that was shot can indirectly learn a feature that is included in vital data and cannot be extracted from the image data, and the second learning module that outputs more accurate second output data is acquired.
In the above aspect, the first training data may include image data acquired by shooting a vehicle, the second training data may include sensing data acquired by performing measurement using a sensor provided in the vehicle when the image data is shot, and the first output data and the second output data may be data corresponding to an operation of the vehicle.
According to this aspect, the second learning module that outputs the second output data corresponding to a feature of the shot vehicle can indirectly learn a feature that is included in the sensing data and cannot be extracted from the image data, and the second learning module that outputs more accurate second output data is acquired.
The learning result using apparatus according to one aspect of the present invention has a learning module setting unit that acquires the trained second learning module acquired as a result of training performed by the second learning control unit provided in the learning apparatus of the above aspect or a copy of the trained second learning module, and sets the trained second learning module or the copy of the trained second learning module as a third learning module, an input unit for inputting data having the same form as the first training data to the third learning module, and an output unit for outputting output data from the third learning module.
According to this aspect, output data corresponding to a feature of input data is output by the third learning module that accepts the data having the same form as the first training data as the input data. The third learning module is set by the trained second learning module or the copy of the trained second learning module, and thus the third learning module indirectly includes a feature of the second training data. Therefore, it is possible to acquire a desired learning result in which multiple types of training data are incorporated without increasing the types of devices for obtaining training data.
A learning method according to one aspect of the present invention includes, training, by a control unit configured to control machine learning, a first learning module based on first training data and second training data associated with the first training data so as to output first output data corresponding to features of the first training data and the second training data, and training, by the control unit, a second learning module by supervised learning in which supervisor data is the first output data that is output from the first learning module in a case where the first training data is input to the first learning module, based on the first training data, so as to output second output data.
According to this aspect, the first output data corresponding to the features of the first training data and the second training data is output by the first learning module that accepts the first training data and the second training data as input data, and the second output data is output by the second learning module that accepts the first training data as input data. The second learning module performs supervised learning in which supervisor data is the first output data, and thus the second output data indirectly includes the feature of the second training data. Therefore, the second learning module in which the second training data is incorporated is acquired without using the second training data obtaining device.
A method for producing a trained learning module or a copy of the trained learning module according to one aspect of the present invention includes outputting a trained second learning module acquired as a result of training the second learning module by the learning method of the above aspect or a copy of the trained second learning module.
According to this aspect, the second learning module or a copy of the second learning module in which the second training data is incorporated is acquired without using the second training data obtaining device.
A trained learning module or a copy of the trained learning module according to one aspect of the present invention is acquired as a result of training the second learning module by the learning method of the above aspect.
According to this aspect, the trained learning module or the copy of the trained learning module in which the second training data is incorporated is acquired without using the second training data obtaining device.
A learning program according to one aspect of the present invention includes instructions which, when the program is executed by a computer, cause the computer to perform a method including training a first learning module based on first training data and second training data associated with the first training data so as to output first output data corresponding to features of the first training data and the second training data, and training a second learning module by supervised learning in which supervisor data is the first output data that is output from the first learning module in a case where the first training data is input to the first learning module, based on the first training data, so as to output second output data.
According to this aspect, the first output data corresponding to the features of the first training data and the second training data is output by the first learning module that accepts the first training data and the second training data as input data, and the second output data is output by the second learning module that accepts the first training data as input data. The second learning module performs supervised learning in which supervisor data is the first output data, and thus the second output data indirectly includes the feature of the second training data. Therefore, the second learning module in which the second training data is incorporated is acquired without using the second training data obtaining device.
According to the present invention, a technique is obtained that makes it possible to acquire a desired learning result in which multiple types of training data are incorporated without increasing the types of devices for obtaining training data.
Fig. 1 is a diagram showing the network configuration of a learning apparatus and a learning result using apparatus according to an embodiment of the present invention. Fig. 2 is a diagram showing the physical configuration of the learning apparatus according to the embodiment of the present invention. Fig. 3 is a functional block diagram of the learning apparatus according to the embodiment of the present invention. Fig. 4 is a functional block diagram of the learning result using apparatus according to the embodiment of the present invention. Fig. 5 is a conceptual diagram showing the input/output relationship of a first neural network of the learning apparatus according to the embodiment of the present invention. Fig. 6 is a conceptual diagram showing the input/output relationship of a second neural network of the learning apparatus according to the embodiment of the present invention. Fig. 7 is a conceptual diagram showing the input/output relationship of a third neural network of the learning result using apparatus according to the embodiment of the present invention. Fig. 8 is a flowchart of processing executed by the learning apparatus according to the embodiment of the present invention. Fig. 9 is a flowchart of processing executed by the learning result using apparatus according to the embodiment of the present invention.
Embodiments of the present invention will be described below with reference to the attached drawings. Note that in the figures, the same or similar constituent elements are denoted by the same reference numerals.
Fig. 1 is a diagram showing the network configuration of a learning apparatus 10 and a learning result using apparatus 20 according to an embodiment of the present invention. The learning apparatus 10 according to this embodiment is connected to the learning result using apparatus 20, one or more sensors 30 and a sensing data storage DB via a communication network N. The communication network N may be either a wired communication network or a wireless communication network constituted by a wired or wireless line, or may be the Internet or a LAN (Local Area Network). Note that in Fig. 1, the sensing data storage DB, the learning apparatus 10 and the learning result using apparatus 20 are configured separately, but may be configured integrally. Specifically, the sensing data storage DB, the learning apparatus 10 and the learning result using apparatus 20 may all be configured to be integrated, or two out of the sensing data storage DB, the learning apparatus 10 and the learning result using apparatus 20 may be selectively configured to be integrated. Here, in the case where the sensing data storage DB, the learning apparatus 10 and the learning result using apparatus 20 are configured to be integrated, the elements thereof are connected to each other via an internal bus.
The learning apparatus 10 trains a first learning module and a second learning module based on training data including at least one of sensing data acquired from the sensor 30 and sensing data stored in the sensing data storage DB. The learning apparatus 10 according to this embodiment is provided with the first learning module and the second learning module, but the first learning module and the second learning module may be provided in an apparatus separated from the learning apparatus 10. Note that a learning module includes a unit of dedicated or general-purpose hardware or software having a learning capability, or a combination of units of such hardware and software. Here, the learning capability refers to the ability to improve a capability of processing a certain task based on experience acquired from training data.
The learning result using apparatus 20 outputs output data corresponding to the feature of input data using a learning result of the learning apparatus 10. The learning result using apparatus 20 according to this embodiment acquires, from the learning apparatus 10, the trained second learning module or a copy of the trained second learning module, and sets the trained second learning module or the copy of the trained second learning module as a third learning module. Note that a copy of a trained learning module includes a unit of dedicated or general-purpose hardware or software that can reproduce a function of the trained learning module, or a combination of units of such hardware or software. A copy of a trained learning module does not necessarily need to have a learning capability. In addition, the configuration of a trained learning module and the configuration of a copy of the trained learning module do not necessarily need to match each other. In addition, a copy of a trained learning module includes a trained learning module or a copy of the trained learning module that has completed training and also performed additional training. In the case of the second learning module according to this embodiment, a copy of the trained second learning module includes a learning module acquired as a result of causing the trained second learning module or a copy of the trained second learning module to perform additional training based on additional training data in the same form as first training data so as to output additional output data. A copy of the trained second learning module also includes a learning module acquired as a result of causing the trained second learning module or a copy of the trained second learning module to perform additional training based on first training data so as to output additional output data. A copy of a trained learning module further includes a learning module acquired by so-called distillation.
Specifically, a copy of a trained learning module includes another trained learning module that has a structure different from that of the trained learning module and has been trained so as to have a function of the trained learning module. Here, the structure of the other learning module may be simpler than the structure of the trained learning module and may be more suitable for deployment, and output data of the trained learning module may be used for the training of the other learning module. Note that a copy of a trained learning module includes a trained learning module that is acquired by changing a method for normalization for preventing overfitting, changing a learning rate of back propagation, or changing an updating algorithm of a weight coefficient, in the training process of the learning module. In addition, acquiring the trained second learning module or a copy of the trained second learning module refers to acquiring information required to reproduce, in the learning result using apparatus 20, a function of the trained second learning module. For example, if the second learning module includes a neural network, acquiring the trained second learning module or a copy of the trained second learning module refers to acquiring at least information regarding the number of layers of the neural network, the number of nodes for each of the layers, weight parameters of links connecting nodes, bias parameters for the nodes, and the functional types of the activation functions of the nodes.
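The information listed above (layer count, node counts, weights, biases, and activation-function types) can be gathered into a serializable snapshot. The following is a hedged sketch under the assumption that each layer is described by per-node weight vectors, biases, and an activation name; the field names are illustrative, not from the specification.

```python
# Sketch of "acquiring" a trained neural network: collect, per layer,
# the node count, weight and bias parameters, and activation-function
# type, so the network's function can be reproduced elsewhere.

def export_network(layers):
    """layers: list of dicts with 'weights' (per-node weight vectors),
    'biases' (one per node) and 'activation' (function type name)."""
    return {
        "num_layers": len(layers),
        "layers": [
            {
                "num_nodes": len(layer["biases"]),
                "weights": layer["weights"],
                "biases": layer["biases"],
                "activation": layer["activation"],
            }
            for layer in layers
        ],
    }

# A toy two-layer network: 2 hidden nodes with ReLU, 1 output node.
toy_net = [
    {"weights": [[0.5, -0.2], [0.1, 0.3]], "biases": [0.1, -0.1],
     "activation": "relu"},
    {"weights": [[1.0, -1.0]], "biases": [0.0], "activation": "sigmoid"},
]
snapshot = export_network(toy_net)
```

A snapshot of this form could be transferred to the learning result using apparatus 20 and used to reconstruct an equivalent third module.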
The sensor 30 may be either a physical quantity sensor that detects a physical quantity or an information sensor that detects information. Examples of the physical quantity sensor include cameras that detect light and output image data or moving image data, and vital sensors such as heartbeat sensors that detect heartbeat of a person and output heartbeat data, blood pressure sensors that detect blood pressure of a person and output blood pressure data, and body temperature sensors that detect human body temperature and output body temperature data, and also include any other sensors that detect a physical quantity and output an electric signal. Examples of the information sensor include sensors that detect a specific pattern from statistical data, and also include any other sensors that detect information.
The sensing data storage DB stores sensing data that has been output by the sensor 30. In the figure, the sensing data storage DB is shown as a single storage, but the sensing data storage DB may be constituted by one or more file servers.
Fig. 2 is a diagram showing the physical configuration of the learning apparatus 10 according to the embodiment of the present invention. The learning apparatus 10 has a CPU (Central Processing Unit) 10a equivalent to a hardware processor, a RAM (Random Access Memory) 10b equivalent to a memory, a ROM (Read only Memory) 10c equivalent to a memory, a communication interface 10d, an input unit 10e and a display unit 10f. These constituent elements are connected via a bus so as to be able to exchange data with each other. Note that the type of the hardware processor is not limited to a CPU. For example, a CPU, a GPU (Graphics Processing Unit), an FPGA (Field-programmable Gate Array), a DSP (Digital Signal Processor), and an ASIC (Application Specific Integrated Circuit) can be used independently or in combination as a hardware processor.
The CPU 10a executes programs stored in the RAM 10b or the ROM 10c, and performs calculation and processing of data. The CPU 10a is a calculation apparatus that executes an application for generating metadata. The CPU 10a receives various types of input data from the input unit 10e or the communication interface 10d, displays calculation results of the input data on the display unit 10f, and stores the calculation results in the RAM 10b or the ROM 10c.
The RAM 10b is a data-rewritable storage, and is constituted by a semiconductor storage element, for example. The RAM 10b stores programs such as applications executed by the CPU 10a and data.
The ROM 10c is a data-read-only storage, and is constituted by a semiconductor storage element, for example. The ROM 10c stores programs such as firmware and data, for example.
The communication interface 10d is a hardware interface that connects the learning apparatus 10 to the communication network N.
The input unit 10e accepts input of data from the user, and is constituted by a keyboard, a mouse, or a touch panel, for example.
The display unit 10f visually displays a result of calculation performed by the CPU 10a, and is constituted by an LCD (Liquid Crystal Display), for example.
The learning apparatus 10 may be configured by a learning program according to this embodiment being executed by the CPU 10a of a general personal computer. The learning program may be stored in a computer-readable storage medium such as the RAM 10b or the ROM 10c and provided, or may be provided via the communication network N connected by the communication interface 10d.
Note that these physical configurations are examples, and do not necessarily need to be independent configurations. For example, the learning apparatus 10 may have an LSI (Large-Scale Integration) in which the CPU 10a and the RAM 10b or the ROM 10c are integrated.
Note that the learning result using apparatus 20 also has a physical configuration similar to that of the learning apparatus 10. The learning result using apparatus 20 may be configured by a learning result using program being executed by a CPU of a general personal computer. The learning result using program may be stored in a computer-readable storage medium such as a RAM or a ROM and provided, or may be provided via the communication network N connected by a communication interface.
Fig. 3 is a functional block diagram of the learning apparatus 10 according to the embodiment of the present invention. The learning apparatus 10 has a communication unit 11, a first learning control unit 12, a first learning result extraction unit 13, a first neural network 100, a first learning result output unit 14, a second learning control unit 15, a second learning result extraction unit 16, a second neural network 200 and a second learning result output unit 17. Here, the first learning control unit 12 and the second learning control unit 15 are control units that control machine learning. In addition, the first neural network 100 is an example of the first learning module, and the second neural network 200 is an example of the second learning module. The learning apparatus 10 may have a learning module other than a neural network.
The first learning control unit 12 trains the first neural network 100 based on first training data and second training data associated with the first training data so as to output first output data corresponding to the features of the first training data and the second training data. The first training data may be image data of a target, for example, and the second training data may be sensing data acquired by a sensor measuring the target or performing measurement with regard to the target when the image data was shot. In this case, the first output data is data corresponding to the features of the image data and the sensing data, and is data regarding the target that is shot. The first neural network 100 may be a CNN (Convolutional Neural Network) that is sometimes used for learning of image data, or an RNN (Recurrent Neural Network) that is sometimes used for learning of time series data. A learning result of the first neural network 100 is extracted by the first learning result extraction unit 13, and is output to the second learning control unit 15 by the first learning result output unit 14.
The first learning control unit 12 may train the first neural network 100 by unsupervised learning based on first training data and second training data so as to output first output data. By training the first neural network 100 by unsupervised learning, the first output data that is based on the features of the first training data and the second training data can be autonomously generated by the first neural network 100, and feature extraction with higher objectivity can be performed. In addition, it is not necessary to prepare supervisor data, and thus there is no processing load or communication load for generating and collecting supervisor data, and it is not necessary to secure storage capacity for storing supervisor data.
The first learning control unit 12 may train the first neural network 100 by supervised learning that uses supervisor data including attribute information of first training data and second training data, so as to output first output data based on the first training data and the second training data. Here, attribute information of training data is information indicating a feature of the training data, and may include information regarding the type of a physical quantity measured by a sensor, the type of the sensor, the type of the sensing data and the target measured by the sensor. By training the first neural network 100 by supervised learning, it is possible to generate the first output data corresponding to the features of the first training data and the second training data in consideration of the existing attribute information. In addition, it is not necessary to assign a meaning such as a label or an annotation to the first output data, and thus it is not necessary to perform calculation or communication in order to interpret the first output data, and the processing load and the communication load are suppressed.
The second learning control unit 15 trains the second neural network 200, based on the first training data, by supervised learning in which the supervisor data is the first output data that is output from the first neural network 100 in the case where the first training data is input to the first neural network 100, so as to output second output data. By performing the supervised learning using the first output data as the supervisor data, the second neural network 200 shares the learning objective with the first neural network 100 and acquires the same type of capability as the first neural network 100. Specifically, both the second output data output from the second neural network 200 and the first output data output from the first neural network 100 are data relating to the same subject and expressed in the same form. The same type of capability may include the capability for performing at least one of analysis, estimation, and control with respect to the same (or substantially the same) target, state or operation, and the capability for performing determination, identification, or recognition with respect to the same (or substantially the same) requirement. The data relating to the same subject and expressed in the same form includes, for example, data indicating control values for the same variables in the same unit, and data indicating scores for the same determination (the quality of an item, the presence of an object, or the like) according to the same rule.
If the first training data is image data of a target and the second training data is sensing data in the same time series as the image data, the supervisor data is the first output data that is output from the trained first neural network 100 in the case where the image data is input to the trained first neural network 100. The second output data that is output from the second neural network 200 in the case where the image data is input to the second neural network 200 is then data relating to the same subject and expressed in the same form as the first output data, that is, data corresponding to the feature of the image data, and is data regarding the target that is shot. A learning result of the second neural network 200 is extracted by the second learning result extraction unit 16, and is output to the outside via the communication unit 11 by the second learning result output unit 17.
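The two-stage training described above, in which the first learning module is trained on both types of training data and the second learning module is then trained on the first training data alone with the first output data serving as supervisor data, can be sketched as follows. This is a minimal illustration only, not part of the embodiment: linear least-squares models stand in for the neural networks, and all data shapes and variable names are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the training data: "image" features (first
# training data) and "sensing" features (second training data) for 200 samples.
n = 200
image = rng.normal(size=(n, 8))
sensing = rng.normal(size=(n, 4))

# First learning module (teacher): here a simple linear map fitted by least
# squares on the concatenated inputs; the first neural network 100 of the
# embodiment would play this role.
targets = rng.normal(size=(n, 3))   # stand-in learning signal
both = np.hstack([image, sensing])
w_teacher, *_ = np.linalg.lstsq(both, targets, rcond=None)

# First output data: what the trained first module emits when given the
# first training data together with the second training data.
first_output = both @ w_teacher

# Second learning module (student): trained on the image data ONLY, with the
# first output data serving as the supervisor data.
w_student, *_ = np.linalg.lstsq(image, first_output, rcond=None)
second_output = image @ w_student

# The second module now approximates the first module's outputs from images
# alone, indirectly incorporating the feature of the sensing data.
err = np.abs(second_output - first_output).mean()
print(err)
```

The residual error reflects the part of the first output data that genuinely depends on the sensing input; with real neural networks the second module would likewise approximate, not exactly reproduce, the first module's outputs.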
Note that in this embodiment, the first training data used for learning performed by the first neural network 100 and the first training data used for learning performed by the second neural network 200 are the same data, but the present invention is not limited to this example.
As long as the first training data used for learning performed by the first neural network 100 and the first training data used for learning performed by the second neural network 200 have the same form (or the same type), the two sets of data may differ in content. Specifically, the first training data used for learning performed by the first neural network 100 and the first training data used for learning performed by the second neural network 200 are data in the same form, but may be data in which part or all of the content is different. For example, a configuration may be adopted in which, in the case where image data of a first group as the first training data and sensing data as the second training data were used in learning performed by the first neural network 100, when the second neural network 200 performs learning, image data of a second group is input to the trained first neural network 100 as the first training data, and the second neural network 200 performs learning based on the image data of the second group with the first output data that is output from the trained first neural network 100 serving as supervisor data. A form of data indicates, for example, the form of images (e.g., colour images, infrared images, and range images) or the form of numerical values (e.g., binary values and continuous values). Data in the same form may include data obtained by the same type of data obtaining devices such as cameras, sensors, and measurement devices, and data in different forms may include data obtained by different types of data obtaining devices. In addition, data in the same form may include data obtained for the same target, such as a subject of images or a sensing target object, by the same type of data obtaining device, and data in different forms may include data obtained for different targets.
In this embodiment, the image data of the first group and the image data of the second group are both image data (i.e., the data in the same form), and the image data of the second group may or may not include the same pieces of image data as the image data of the first group.
With the learning apparatus 10 according to this embodiment, the first output data corresponding to the features of the first training data and the second training data is output by the first neural network 100 that accepts the first training data and the second training data as input data, and the second output data corresponding to the feature of the first training data is output by the second neural network 200 that accepts the first training data as input data. The second neural network 200 performs supervised learning in which the supervisor data is the first output data, and thus the second output data indirectly includes the feature of the second training data. Therefore, a neural network having a desired performance is acquired without increasing the types of measurement devices for obtaining training data. Specifically, with the learning apparatus 10 according to this embodiment, a neural network is acquired which provides the same performance as in the case where a plurality of types of measurement devices that obtain the first training data and the second training data are used, without using a measurement device for the second training data. A neural network in which a desired measurement result is incorporated is acquired without using a measurement device for the second training data, and thus it is possible to reduce the number of items of hardware of the learning result using apparatus 20 that uses the trained neural network, and to further reduce the processing load of the hardware processor due to a reduction in the data amount.
After the first neural network 100 has been trained, the second learning control unit 15 trains the second neural network 200. Accordingly, after the first neural network 100 has learned the features of the first training data and the second training data, the second neural network 200 can be trained using, as supervisor data, the first output data that is output from the first neural network 100, and thus the feature of the second training data is more accurately reflected in the training of the second neural network 200.
Fig. 4 is a functional block diagram of the learning result using apparatus 20 according to the embodiment of the present invention. The learning result using apparatus 20 has a learning result input unit 231, a neural network setting unit 232, a third neural network 233, a control unit 234, an input unit 235, a communication unit 236, a data acquiring unit 237 that acquires data to be input to the third neural network 233, and an output unit 238. Here, the third neural network 233 is an example of a learning module, and the learning result using apparatus 20 may have a learning module other than a neural network, and in that case, the neural network setting unit 232 will be replaced by a constituent element that sets a learning module other than a neural network. Note that the data acquiring unit 237 may acquire data via the communication unit 236, or may acquire data via communication other than communication using the communication unit 236.
The learning result input unit 231 accepts input of a learning result. The learning result input unit 231 accepts, via the communication unit 236, a learning result that is output by the second learning result output unit 17 of the learning apparatus 10. The neural network setting unit 232 acquires the trained second neural network 200 acquired as a result of training by the second learning control unit 15 provided in the learning apparatus 10 or a copy of the trained second neural network 200, and sets the trained second neural network 200 or the copy of the trained second neural network 200 as the third neural network 233. The control unit 234 controls the data acquiring unit 237 and the input unit 235 so as to input designated input data to the third neural network 233 and to output output data. The input unit 235 inputs data having the same form as the first training data to the third neural network 233. The output unit 238 outputs the output data from the third neural network 233 via the communication unit 236.
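The setting of the third neural network 233 from the trained second neural network 200 or a copy thereof can be illustrated by the following sketch. The `TrainedNetwork` class and its toy `infer` method are hypothetical stand-ins for the trained learning module, not structures of the embodiment.

```python
import copy

class TrainedNetwork:
    """Hypothetical container for a trained learning module's parameters."""
    def __init__(self, weights):
        self.weights = weights

    def infer(self, x):
        # Toy inference: weighted sum of the input features.
        return sum(w * v for w, v in zip(self.weights, x))

# Learning apparatus side: the trained second neural network
# (illustrative weights, exact in binary floating point).
second_network = TrainedNetwork([0.25, 0.5, 0.25])

# Learning result using apparatus side: set a copy of the trained second
# network as the third network, then feed it data in the same form as the
# first training data.
third_network = copy.deepcopy(second_network)
print(third_network.infer([1.0, 1.0, 1.0]))   # 1.0
```

Deep-copying keeps the deployed third network independent of the original, so later retraining on the learning apparatus side does not silently change the deployed module.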
With the learning result using apparatus 20 according to this embodiment, output data corresponding to the feature of input data is output by the third neural network 233 that accepts, as input data, data having the same form as the first training data. The third neural network 233 is set by the trained second neural network 200 or a copy of the trained second neural network 200, and thus the third neural network 233 indirectly includes the feature of the second training data. Therefore, a learning module having a desired performance is acquired without increasing types of measurement devices. As a result, in the environment in which the third neural network 233 is used, a desired learning result can be acquired even without using a measurement device used for obtaining sensing data (second training data), and it is possible to reduce the number of items of hardware that constitute the learning result using apparatus 20, and to further reduce the processing load of the hardware processor due to a reduction in the data amount.
In this embodiment, the first training data may be data in the same form as input data that is input to the trained second neural network 200 acquired as a result of training by the second learning control unit 15 of the learning apparatus 10 or a copy of the trained second neural network 200. In addition, the second training data may be data temporally related to the first training data. Further, the second training data may be data in a form different from that of the input data that is input to the trained second neural network 200 or the copy of the trained second neural network 200. The second training data is data that complements or reinforces the first training data, and is data for extracting a feature that cannot be extracted through training that is based only on the first training data. Each piece of the second training data may be obtained at the same time as, or in temporal proximity to, when the corresponding piece of the first training data is obtained. The second training data temporally related to the first training data includes second training data obtained within a predetermined period of time before or after the corresponding first training data is obtained. Accordingly, the first neural network 100 can perform multilateral learning based on the first training data in the same form as the input data that is input to the trained second neural network 200, and the second training data that complements or reinforces the first training data. In addition, if the first output data of the first neural network 100 that performed multilateral learning serves as supervisor data, the second neural network 200 can perform supervised learning that extracts a feature that is sometimes not extracted through learning that is based only on the first training data.
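The temporal association described above, in which each piece of second training data is obtained within a predetermined period before or after the corresponding first training data, might be implemented along the following lines. The timestamps, the window length, and the record format are illustrative assumptions only.

```python
# Hypothetical timestamps (seconds): images (first training data) and
# sensing records such as heart-rate readings (second training data).
image_times = [0.0, 1.0, 2.0, 3.5]
sensing_records = [(0.1, "hr=72"), (0.9, "hr=75"), (2.6, "hr=80"), (3.4, "hr=78")]

WINDOW = 0.5  # predetermined period of time before or after each shot

def pair_training_data(image_times, sensing_records, window):
    """Associate each image with the sensing records obtained within
    +/- `window` seconds of the time the image was shot."""
    pairs = {}
    for t_img in image_times:
        pairs[t_img] = [value for t_sense, value in sensing_records
                        if abs(t_sense - t_img) <= window]
    return pairs

pairs = pair_training_data(image_times, sensing_records, WINDOW)
print(pairs)
```

Note that an image with no sensing record inside the window (here the image at t = 2.0) ends up with an empty association and would have to be dropped or widened in a real pipeline.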
In addition, in the learning apparatus 10 according to this embodiment, the scale of the second neural network 200 is smaller than the scale of the first neural network 100. Here, the scale of a neural network is a scale measured based on the number of nodes, the number of links, the number of layers and the like included in the neural network. Due to the scale of the second neural network 200 being smaller than the scale of the first neural network 100, the learning apparatus 10 that is relatively rich in calculation resources performs high-load processing, and thus the scale of the third neural network 233 that is set in the learning result using apparatus 20 can be suppressed to a small scale, and the processing load and communication load of the learning result using apparatus 20 can be suppressed.
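One simple way to compare the scale of two fully connected networks, measured by the number of weights and biases implied by their layer widths, is sketched below. The layer sizes are illustrative assumptions, not values taken from the embodiment.

```python
def mlp_parameter_count(layer_sizes):
    """Number of weights and biases in a fully connected network whose
    layer widths are given by `layer_sizes` (the +1 counts the bias)."""
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Illustrative sizes: a larger first network that sees image + sensing
# inputs, and a smaller second network that sees image inputs only.
first_net = mlp_parameter_count([12, 64, 64, 3])
second_net = mlp_parameter_count([8, 16, 3])
print(first_net, second_net)   # 5187 195
```

The same kind of count (or node and link counts) gives a concrete basis for the claim that the second network, and hence the deployed third network, can be kept smaller than the first.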
Fig. 5 is a conceptual diagram showing the input/output relationship of the first neural network 100 of the learning apparatus 10 according to the embodiment of the present invention. In the example shown in the figure, the first training data is image data acquired by shooting a person, and the second training data is vital data of the person at the time when the image data was shot. Note that the time when the image data was shot is a concept that includes the same time as the shooting of the image data and the temporal vicinity before and after the shooting. In addition, the first training data includes first image data 301, second image data 302 and third image data 303. Also, the second training data includes first vital data 401, second vital data 402 and third vital data 403. Here, the first vital data 401 is vital data of a subject person at the time when the first image data 301 was shot. Accordingly, the first vital data 401 is data in the same time series as the first image data 301. Similarly, the second vital data 402 is vital data of the subject person at the time when the second image data 302 was shot, and the third vital data 403 is vital data of the subject person at the time when the third image data 303 was shot. Note that vital data is any biological data such as a heart rate, a blood pressure, a body temperature, a blood component amount, a urine component amount, or a brain wave.
The learning apparatus 10 trains the first neural network 100 based on the first training data and the second training data so as to output first output data corresponding to the features of the first training data and the second training data. In the case of this example, the first output data includes first data 501, second data 502 and third data 503, each of which is numeric data. The first data 501 is output data that is output in the case where the first image data 301 and the first vital data 401 are input as input data to the first neural network 100, and is the three-dimensional numeric vector "(0.9, 0.05, 0.05)" in the case of this example. Similarly, the second data 502 is output data that is output in the case where the second image data 302 and the second vital data 402 are input as input data to the first neural network 100, and is the three-dimensional numeric vector "(0.05, 0.9, 0.05)". In addition, the third data 503 is output data that is output in the case where the third image data 303 and the third vital data 403 are input as input data to the first neural network 100, and is the three-dimensional numeric vector "(0.05, 0.05, 0.9)". The first output data is data corresponding to a human emotion, and each component indicates a degree of correspondence to a predetermined emotion. The larger the numeric value of a component is, the higher the reliability with which the data is determined to indicate the emotion corresponding to that component.
If the first learning control unit 12 trains the first neural network 100 by unsupervised learning, the user of the learning apparatus 10 compares input data and output data of the first neural network 100, and assigns meanings to the output data. In this example, a meaning "anger" is assigned to the first data 501, a meaning "relaxation" is assigned to the second data 502, and a meaning "smile/laughter" is assigned to the third data 503.
If the first learning control unit 12 trains the first neural network 100 by supervised learning that uses supervisor data including attribute information of first training data and second training data, the user of the learning apparatus 10 does not need to assign meanings to the output data. The first neural network 100 autonomously learns that a first component included in the three-dimensional vector that is output as the output data is an amount indicating a degree of anger, a second component is an amount indicating a degree of relaxation, and a third component is an amount indicating a degree of smile/laughter.
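The interpretation of the three-dimensional output vector, in which the component with the largest value indicates the emotion determined with the highest reliability, can be sketched as follows; the emotion labels follow the meanings assigned in the example of Fig. 5.

```python
EMOTIONS = ["anger", "relaxation", "smile/laughter"]  # meanings from Fig. 5

def interpret(output_vector):
    """Pick the emotion whose component has the largest value; the larger
    the component, the higher the reliability of that emotion."""
    i = max(range(len(output_vector)), key=lambda k: output_vector[k])
    return EMOTIONS[i], output_vector[i]

label, score = interpret((0.9, 0.05, 0.05))   # first data 501
print(label, score)                           # anger 0.9
```

Applied to the second data 502 and the third data 503 of the example, the same function yields "relaxation" and "smile/laughter" respectively.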
The learning apparatus 10 can acquire a learning result that makes it possible to estimate a human emotion more accurately than in a case of using only image data as training data, by training the first neural network 100 using both image data and vital data as training data. Here, the image data is data that can be acquired by a camera, which is a common sensor, and is data that can be acquired without mounting a sensor to a person to be shot. On the other hand, the vital data is data that cannot be acquired unless a dedicated sensor is used, and is data that cannot be acquired unless a sensor is mounted to the person to be shot. Generally, the learning apparatus 10 may train the first neural network 100 by combining first training data that is relatively easy to acquire and second training data that is relatively difficult to acquire, but complements or reinforces the first training data.
Fig. 6 is a conceptual diagram showing the input/output relationship of the second neural network 200 of the learning apparatus 10 according to the embodiment of the present invention. First training data shown in this figure is the same as the first training data shown in Fig. 5, and includes first image data 301, second image data 302 and third image data 303.
The learning apparatus 10 trains the second neural network 200 by supervised learning in which the supervisor data is the first output data that is output from the first neural network 100 in the case where first training data is input to the first neural network 100, based on the first training data, so as to output second output data. In the case of this example, the second output data includes fourth data 601, fifth data 602 and sixth data 603, each of which is numeric data. The fourth data 601 is output data that is output in the case where the first image data 301 is input as input data to the second neural network 200, and is a three-dimensional numeric vector "(0.96, 0.02, 0.02)" in the case of this example. Similarly, the fifth data 602 is output data that is output in the case where the second image data 302 is input as input data to the second neural network 200, and is a three-dimensional numeric vector "(0.02, 0.96, 0.02)". In addition, the sixth data 603 is output data that is output in the case where the third image data 303 is input as input data to the second neural network 200, and is a three-dimensional numeric vector "(0.02, 0.02, 0.96)". Similarly to the first output data, the second output data is data corresponding to a human emotion.
The second learning control unit 15 trains the second neural network 200 by supervised learning in which the supervisor data is the first output data that is output from the trained first neural network 100 in the case where the first training data is input to the trained first neural network 100, and thus the user of the learning apparatus 10 does not have to assign meanings to the second output data. The second neural network 200 autonomously learns that the first component included in the three-dimensional vector that is output as the second output data is an amount indicating the degree of anger, the second component is an amount indicating the degree of relaxation, and the third component is an amount indicating the degree of smile/laughter.
The learning apparatus 10 trains the second neural network 200 using, as supervisor data, output data that is output from the trained first neural network 100 in the case where the first training data is input to the trained first neural network 100, and thereby can acquire a learning result that includes vital data, using only image data as training data, and can acquire a learning result that makes it possible to estimate a human emotion more accurately. Here, the image data is data that can be acquired by a camera, which is a common sensor, and thus the trained second neural network 200 can exhibit, using only image data that is relatively easy to acquire as input data, identification performance similar to that in the case where sensing data that is relatively difficult to acquire is used for complementation.
The second neural network 200 is trained using, as supervisor data, the first output data of the first neural network 100 that was trained based on image data and sensing data, so as to output second output data, and thereby the second neural network 200 can indirectly learn a feature that cannot be extracted from only the image data, and the second neural network 200 in which the sensing data is incorporated is acquired. As a result, in an environment using the second neural network as the second learning module, a desired learning result can be acquired even without using a measurement device used for obtaining the sensing data (the second training data), and it is possible to reduce the number of items of hardware that are used, and to further reduce the processing load of the hardware processor due to a reduction in the data amount.
In addition, the second neural network 200 is trained using, as supervisor data, the first output data of the first neural network 100 that was trained based on image data and vital data of a human, so as to output second output data, and thereby the second neural network 200 can indirectly learn a feature that cannot be extracted from only the image data, and the second neural network 200 that can estimate a human emotion more accurately is acquired. As a result, in an environment using the second neural network as the second learning module, a desired learning result can be acquired without using a measurement device used for obtaining the vital data (the second training data), and it is possible to reduce the number of items of hardware that are used, and to further reduce the processing load of the hardware processor due to a reduction in the data amount.
Note that in this example, in order to simplify the description, a case has been described in which the number of types of the features of the first training data is three, but generally, a larger number of features, namely four or more, are included in first training data. For example, if thousands of types of features are included in first training data, the first neural network 100 and the second neural network 200 are trained so as to classify the thousands of types of the features of the first training data, determine which of the thousands of types of classifications input data is close to, and output output data corresponding to the features of the input data.
Note that in this example, the learning apparatus 10 has been described which has the first neural network 100 and the second neural network 200, and performs training using first training data and second training data, but the configuration of the learning apparatus 10 is not limited to this example. Accordingly, the learning apparatus 10 may have three or more neural networks, and may be configured to perform training using training data of three types or more. For example, the learning apparatus 10 may have a first neural network that is trained based on first training data, second training data and third training data so as to output first output data corresponding to the features of the first training data, the second training data and the third training data, and a second neural network that performs supervised learning in which supervisor data is the first output data, based on the first training data, so as to output second output data. In addition, for example, the learning apparatus 10 may have a first neural network that is trained based on first training data, second training data and third training data so as to output first output data corresponding to the features of the first training data, the second training data and the third training data, a second neural network that performs supervised learning in which supervisor data is the first output data, based on the first training data and the second training data, so as to output second output data, and a third neural network that performs supervised learning in which supervisor data is the second output data, based on the first training data, so as to output third output data. 
In addition, for example, the learning apparatus 10 may have a first neural network that is trained based on first training data and second training data so as to output first output data corresponding to the features of the first training data and the second training data, and a plurality of second neural networks that perform supervised learning in which supervisor data is the first output data, based on the first training data, so as to output second output data. Here, the plurality of second neural networks may each have a different neural network structure regarding the number of layers, the number of units and the number of links, and may each output different second output data.
Fig. 7 is a conceptual diagram showing the input/output relationship of the third neural network 233 of the learning result using apparatus 20 according to the embodiment of the present invention. Input data shown in the figure includes fourth image data 310.
The learning result using apparatus 20 acquires the trained second neural network 200 acquired as a result of training by the second learning control unit 15 provided in the learning apparatus 10 or a copy of the trained second neural network 200, and sets, as the third neural network 233, the trained second neural network 200 or the copy of the trained second neural network 200. The third neural network 233 accepts, as input data, data having the same form as first training data. In the case of this example, the data having the same form as the first training data is image data. In addition, the third neural network 233 outputs output data corresponding to the feature of the input data. In the case of this example, the output data is seventh data 701, and the seventh data 701 is numeric data. The seventh data 701 is output data that is output in the case where the fourth image data 310 is input as input data to the third neural network 233, and is a three-dimensional numeric vector "(0.02, 0.02, 0.96)" in the case of this example. The output data of the third neural network 233 is data corresponding to a human emotion, and the output data in this example is data corresponding to "smile/laughter".
The learning result using apparatus 20 acquires the trained second neural network 200 or a copy of the trained second neural network 200, and sets, as the third neural network 233, the trained second neural network 200 or the copy of the trained second neural network 200, and thereby a learning result including vital data can be used even if input data is image data only, and a human emotion can be estimated more accurately. Here, the image data is data that can be acquired by a camera, which is a common sensor, and thus the third neural network 233 of the learning result using apparatus 20 can exhibit, using only image data that is relatively easy to acquire as input data, identification performance similar to that in the case where sensing data that is relatively hard to acquire is used for complementation.
First training data and second training data are not limited to image data and vital data of a person. For example, vital data of a person may be used as first training data, and image data of the person may be used as second training data. Accordingly, the image data of a person may be used as data for complementing or reinforcing the vital data. By setting the vital data of a person as the first training data and setting the image data of the person as the second training data, a neural network is acquired which can estimate an emotion and a mental state of the person more accurately in consideration of the image data even in the case where input data is the vital data only.
In addition, for example, first training data may include image data acquired by shooting a vehicle, and second training data may include sensing data measured by a sensor provided in the vehicle at the time when the image data was shot. More specifically, image data of a second vehicle that was shot by a camera provided in a first vehicle in the state where the first vehicle was following the second vehicle may be used as the first training data, and sensing data measured by a sensor provided in the second vehicle may be used as the second training data. Here, the sensor provided in the second vehicle may be a sensor that measures an operation of the accelerator pedal of the second vehicle, an operation of the brake pedal, a steering operation, a turn-signal (winker) operation, and the state of the driver.
In this case, the first neural network 100 is trained based on the image data of the second vehicle that has been shot from the first vehicle and the sensing data related to a measured operation of the second vehicle, and the first output data of the first neural network 100 will be data corresponding to the operation of the vehicle. Note that the data corresponding to the operation of the vehicle includes a speed, acceleration, a traveling direction vector, a probability of course change, and the like. In addition, the second neural network 200 is trained by supervised learning based on the image data of the second vehicle shot from the first vehicle, in which the supervisor data is the first output data that the trained first neural network 100 outputs when that image data is input to it, and the second output data of the second neural network 200 is, like the first output data, data corresponding to the operation of the vehicle.
Note that the second training data may include information regarding the relative distance between the first vehicle and the second vehicle. The operation of a vehicle changes greatly according to the distance between a leading vehicle and a following vehicle. Therefore, if the second training data includes information regarding the relative distance, it is possible to improve the accuracy of the operation estimation of the vehicle, which will be described later. The relative distance can be acquired by the following methods. For example, on a test course provided with a measurement apparatus that identifies the position of a vehicle, the relative distance between the first vehicle and the second vehicle can be measured while shooting the second vehicle using a camera provided in the first vehicle. In addition, the distance between the first vehicle and the second vehicle can be acquired by attaching a focus detection apparatus (e.g., a laser radar) to the front of the first vehicle or the rear of the second vehicle. The information regarding the relative distance may also be estimated based on an image from a camera provided on a general road. In addition, a configuration may be adopted in which the first vehicle and the second vehicle, built as physical models, run in a virtual space, and the image data serving as the first training data, the sensor data serving as the second training data and the relative distance are acquired from the virtual space.
In this manner, the second neural network 200 is trained, using as supervisor data the first output data of the first neural network 100 that was trained based on the image data and the sensing data of the vehicle, so as to output the second output data. Thereby, the second neural network 200 can indirectly learn a feature that cannot be extracted from the image data of the vehicle alone, and a second neural network 200 that can estimate the operation of the vehicle more accurately is acquired. In addition, it is relatively difficult for a following vehicle to acquire sensing data measured for an operation of a leading vehicle; however, with the learning result using apparatus 20 according to this embodiment, the operation of the leading vehicle can be estimated by acquiring the trained second neural network 200 or a copy of the trained second neural network 200, setting it as the third neural network 233, and inputting image data of the leading vehicle to the third neural network 233.
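The two-stage procedure described above (a first module trained on both modalities, a second module trained on one modality against the first module's outputs) can be sketched with plain linear models standing in for the neural networks. Everything below is an illustrative assumption, not the implementation of the embodiment: the synthetic data, the linear "networks", and the correlation between the image and sensing stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: "image" features x_img and "sensing" features x_sen.
# x_sen is partly predictable from x_img, which is what lets the second
# module learn the sensing-related feature indirectly.
n = 200
x_img = rng.normal(size=(n, 3))
x_sen = 0.5 * x_img[:, :1] + rng.normal(scale=0.1, size=(n, 1))
y = 2.0 * x_img[:, :1] + 1.5 * x_sen  # target, e.g. vehicle acceleration

def train_linear(x, t, lr=0.1, epochs=1000):
    """Gradient-descent linear regression (a stand-in for training a network)."""
    w = np.zeros((x.shape[1], t.shape[1]))
    for _ in range(epochs):
        grad = x.T @ (x @ w - t) / len(x)
        w -= lr * grad
    return w

# Stage 1: the "first learning module" (teacher) is trained on both modalities.
x_both = np.hstack([x_img, x_sen])
w_teacher = train_linear(x_both, y)

# Stage 2: the "second learning module" (student) sees image features only;
# its supervisor data is the teacher's output for the same samples.
teacher_out = x_both @ w_teacher
w_student = train_linear(x_img, teacher_out)

# Given image features alone, the student closely tracks the teacher,
# because the sensing signal is largely inferable from the image stand-in.
student_out = x_img @ w_student
print(float(np.mean((student_out - teacher_out) ** 2)))
```

Under these assumptions, the printed mean-squared gap is small (on the order of the noise that the image features cannot explain). With real image and sensing data the same two-stage structure applies, with the linear models replaced by the first and second neural networks.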
First training data and second training data may be data other than the above. For example, the first neural network 100 and the second neural network 200 may perform learning based on image data acquired by shooting a person and sensing data that has been output from a sensor that detects an action of the person, the image data serving as first training data and the sensing data serving as second training data, so as to output data corresponding to the action of the person as first output data and second output data. In this case, the sensor that detects an action of a person may be a momentum sensor or an acceleration sensor worn by the person, or a sensor that is provided on an object operated by the person and detects the operation performed by the person. Accordingly, it is possible to output the second output data for predicting the next action of the person in the case where image data acquired by shooting the person is input to the second neural network 200.
In addition, for example, the first neural network 100 and the second neural network 200 may be trained based on image data acquired by shooting a fruit and sensing data that has been output from a sensor that measures the degree of maturation of the fruit, the image data serving as first training data and the sensing data serving as second training data, so as to output data corresponding to the degree of maturation of the fruit as first output data and second output data. In this case, the sensor that measures the degree of maturation of a fruit may be a weight sensor, a hardness sensor, a sugar content sensor or the like. Accordingly, it is possible to output the second output data that estimates the degree of maturation of the fruit in the case where the image data acquired by shooting the fruit is input to the second neural network 200.
In addition, for example, the first neural network 100 and the second neural network 200 may perform learning based on image data acquired by shooting the appearance of a substrate onto which electric parts are fixed by soldering and sensing data that has been output from a sensor that measures the state of the soldering (e.g., the air content of the solder, denaturation due to overheating, and an unjoined state due to insufficient heating), the image data serving as first training data and the sensing data serving as second training data, so as to output, as first output data and second output data, data corresponding to whether or not a soldering inspection criterion is met. Accordingly, it is possible to output the second output data that estimates the state of the soldering in the case where the image data acquired by shooting the appearance of the substrate is input to the second neural network 200. If the second neural network 200 trained in this manner is used in a substrate inspection apparatus for checking the state of soldering between a substrate and the electric parts placed on the substrate, data corresponding to whether or not the soldering inspection criterion is met can be acquired without using the sensor that measures the state of soldering. It is thus possible to reduce the amount of hardware of the substrate inspection apparatus, and to further reduce the processing load of the hardware processor due to a reduction in data amount.
Fig. 8 is a flowchart of processing executed by the learning apparatus 10 according to the embodiment of the present invention. The learning apparatus 10 designates first training data and second training data based on an instruction accepted from the user (step S10). After that, the learning apparatus 10 determines whether or not supervised learning is to be performed (step S11). Here, whether or not supervised learning is to be performed may be determined based on the instruction accepted from the user.
If the learning apparatus 10 determines that supervised learning is to be performed (step S11: Yes), the learning apparatus 10 designates supervisor data based on an instruction accepted from the user (step S12). The learning apparatus 10 trains the first neural network 100 by supervised learning based on the designated first training data, second training data and supervisor data (step S13).
On the other hand, if the learning apparatus 10 determines that supervised learning is not to be performed (step S11: No), the learning apparatus 10 trains the first neural network 100 by unsupervised learning based on the designated first training data and second training data (step S14).
In both cases, the learning apparatus 10 trains the second neural network 200 by supervised learning in which supervisor data is first output data that has been output from the first neural network 100, based on the designated first training data (step S15). The processing performed by the learning apparatus 10 then ends.
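The branch structure of steps S10 to S15 can be sketched as a control-flow outline. The `train_*` stand-ins below are placeholders invented for illustration; the flowchart leaves the concrete training routines to the neural-network implementation.

```python
# Toy stand-ins so the flow can execute; in the embodiment these are the
# routines that train the first and second neural networks (names invented here).
def train_first_supervised(first_data, second_data, supervisor_data):
    return lambda _inp: list(supervisor_data)            # echoes supervisor data

def train_first_unsupervised(first_data, second_data):
    return lambda _inp: [(a + b) / 2 for a, b in zip(first_data, second_data)]

def train_second(first_data, supervisor_targets):
    return lambda _inp: supervisor_targets               # memorizes teacher output

def learning_procedure(first_data, second_data, supervisor_data=None):
    """Mirror Fig. 8: S10 designates the data; S11 branches on supervised learning."""
    if supervisor_data is not None:                      # S11: Yes -> S12, S13
        first_module = train_first_supervised(first_data, second_data, supervisor_data)
    else:                                                # S11: No -> S14
        first_module = train_first_unsupervised(first_data, second_data)
    first_output = first_module(first_data)              # first output data
    return train_second(first_data, first_output)        # S15: train second module

second_module = learning_procedure([1.0, 2.0], [3.0, 4.0])
print(second_module(None))  # [2.0, 3.0]
```

Only the branching and data flow are meaningful here; the toy "modules" simply make the outline runnable.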
The trained second neural network 200 or a copy of the trained second neural network 200 can be generated by using the learning apparatus 10 according to this embodiment. Specifically, the trained second neural network 200 or a copy of the trained second neural network 200 can be generated by the first learning control unit 12 training the first neural network 100 based on first training data and second training data, so as to output first output data corresponding to the features of the first training data and the second training data, the second learning control unit 15 training the second neural network 200 by supervised learning in which the supervisor data is the first output data that is output from the first neural network 100 in the case where the first training data is input to the first neural network 100, based on the first training data, so as to output the second output data, and the second learning result output unit 17 outputting the trained second neural network 200 or the copy of the trained second neural network 200.
Fig. 9 is a flowchart of processing executed by the learning result using apparatus 20 according to the embodiment of the present invention. The learning result using apparatus 20 acquires, using the learning apparatus 10, the trained second neural network 200 or a copy of the trained second neural network 200, and sets it as the third neural network 233 (step S20). The learning result using apparatus 20 then designates input data that is to be input to the third neural network 233 based on an instruction accepted from the user (step S21). Here, the input data is data having the same form as the first training data.
The learning result using apparatus 20 inputs the designated input data to the third neural network 233, and outputs output data corresponding to the feature of the input data (step S22). The processing performed by the learning result using apparatus 20 then ends.
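Steps S20 to S22 can be sketched as follows. The `emotion_scores` function is a toy stand-in for the trained second neural network; its brightness-based rule and the concrete score vectors are assumptions for illustration only.

```python
# A toy stand-in for the trained second neural network: image-form input in,
# three-dimensional emotion scores out. The brightness rule and the concrete
# score vectors are assumptions for illustration.
def emotion_scores(image_vector):
    brightness = sum(image_vector) / len(image_vector)
    return (0.02, 0.02, 0.96) if brightness > 0.5 else (0.90, 0.05, 0.05)

third_network = emotion_scores              # S20: set the acquired trained second
                                            #      module (or its copy) as the third
input_data = [0.8, 0.7, 0.9]                # S21: designate image-form input data
output_data = third_network(input_data)     # S22: output data for the input's feature
print(output_data)  # (0.02, 0.02, 0.96)
```

In the embodiment, the third neural network is a real network acquired from the learning apparatus 10; only the set-input-output sequence is represented here.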
The foregoing embodiment is for the purpose of facilitating understanding of the present invention, and is not to be interpreted as limiting the present invention. Constituent elements of the embodiment and arrangement, materials, conditions, shapes and sizes thereof are not limited to those exemplified, and can be changed as appropriate. In addition, configurations described in different embodiments can be partially substituted or combined.
In addition, a portion or the entirety of the above-described embodiment can be described as Additional Remark below, but is not limited thereto.
Additional Remark 1
A learning apparatus including at least one memory and at least one hardware processor connected to the memory,
wherein the hardware processor trains a first learning module based on first training data and second training data associated with the first training data, so as to output first output data corresponding to the features of the first training data and the second training data, and
the hardware processor trains a second learning module by supervised learning in which supervisor data is the first output data that is output from the first learning module in the case where the first training data is input to the first learning module, based on the first training data, so as to output second output data.
Additional Remark 2
A learning method:
wherein at least one hardware processor trains a first learning module based on first training data and second training data associated with the first training data, so as to output first output data corresponding to the features of the first training data and the second training data, and
the hardware processor trains a second learning module by supervised learning in which the supervisor data is the first output data that is output from the first learning module in the case where the first training data is input to the first learning module, based on the first training data, so as to output second output data.

Claims (14)

  1. A learning apparatus comprising:
    a first learning control unit configured to train a first learning module based on first training data and second training data associated with the first training data so as to output first output data corresponding to features of the first training data and the second training data; and
    a second learning control unit configured to train a second learning module by supervised learning in which supervisor data is the first output data that is output from the first learning module in a case where the first training data is input to the first learning module, based on the first training data, so as to output second output data.
  2. The learning apparatus according to claim 1,
    wherein the second learning control unit trains the second learning module after training of the first learning module.
  3. The learning apparatus according to claim 1 or 2,
    wherein the first training data is data in the same form as input data that is input to a trained second learning module, which is acquired as a result of training performed by the second learning control unit, or a copy of the trained second learning module, and
    the second training data is data temporally related to the first training data, and is data in a form different from input data that is input to the trained second learning module or the copy of the trained second learning module.
  4. The learning apparatus according to any one of claims 1 to 3,
    wherein the first learning control unit trains the first learning module by unsupervised learning based on the first training data and the second training data so as to output the first output data.
  5. The learning apparatus according to any one of claims 1 to 3,
    wherein the first learning control unit trains the first learning module by supervised learning that uses supervisor data including attribute information of the first training data and the second training data, based on the first training data and the second training data, so as to output the first output data.
  6. The learning apparatus according to any one of claims 1 to 5,
    wherein the first learning module and the second learning module each include a neural network, and
    a scale of the neural network included in the second learning module is smaller than a scale of the neural network included in the first learning module.
  7. The learning apparatus according to any one of claims 1 to 6,
    wherein the first training data includes image data of a target,
    the second training data includes sensing data acquired by measuring the target using a sensor when the image data is shot, and
    the first output data and the second output data include data related to the target.
  8. The learning apparatus according to claim 7,
    wherein the first training data includes image data acquired by shooting a person,
    the second training data includes vital data of the person when the image data is shot, and
    the first output data and the second output data are data corresponding to a human emotion.
  9. The learning apparatus according to claim 7,
    wherein the first training data includes image data acquired by shooting a vehicle,
    the second training data includes sensing data acquired by performing measurement using a sensor provided in the vehicle when the image data is shot, and
    the first output data and the second output data are data corresponding to an operation of the vehicle.
  10. A learning result using apparatus, comprising:
    a learning module setting unit configured to acquire a trained second learning module acquired as a result of training performed by the second learning control unit provided in the learning apparatus according to any one of claims 1 to 9, or a copy of the trained second learning module, and to set the trained second learning module or the copy of the trained second learning module as a third learning module;
    an input unit configured to input data having the same form as the first training data to the third learning module; and
    an output unit configured to output output data from the third learning module.
  11. A learning method comprising:
    training, by a control unit configured to control machine learning, a first learning module based on first training data and second training data associated with the first training data so as to output first output data corresponding to features of the first training data and the second training data; and
    training, by the control unit, a second learning module by supervised learning in which supervisor data is the first output data that is output from the first learning module in a case where the first training data is input to the first learning module, based on the first training data, so as to output second output data.
  12. A method for producing a trained learning module or a copy of the trained learning module, comprising:
    outputting a trained second learning module acquired as a result of training the second learning module by the learning method according to claim 11 or a copy of the trained second learning module.
  13. A trained learning module or a copy of the trained learning module that is acquired as a result of training the second learning module by the learning method according to claim 11.
  14. A learning program comprising instructions which, when the program is executed by a computer, cause the computer to perform a method including:
    training a first learning module based on first training data and second training data associated with the first training data so as to output first output data corresponding to features of the first training data and the second training data; and
    training a second learning module by supervised learning in which supervisor data is the first output data that is output from the first learning module in a case where the first training data is input to the first learning module, based on the first training data, so as to output second output data.
PCT/JP2018/007476 2017-03-01 2018-02-28 Learning apparatus, learning result using apparatus, learning method and learning program WO2018159666A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2017038492 2017-03-01
JP2017-038492 2017-03-01
JP2018-026134 2018-02-16
JP2018026134A JP6889841B2 (en) 2017-03-01 2018-02-16 Learning device, learning result utilization device, learning method and learning program

Publications (1)

Publication Number Publication Date
WO2018159666A1 true WO2018159666A1 (en) 2018-09-07

Family

ID=62046998

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/007476 WO2018159666A1 (en) 2017-03-01 2018-02-28 Learning apparatus, learning result using apparatus, learning method and learning program

Country Status (1)

Country Link
WO (1) WO2018159666A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11965667B2 (en) 2020-09-04 2024-04-23 Daikin Industries, Ltd. Generation method, program, information processing apparatus, information processing method, and trained model
US12130037B2 (en) 2020-09-04 2024-10-29 Daikin Industries, Ltd. Generation method, program, information processing apparatus, information processing method, and trained model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1939796A2 (en) * 2006-12-19 2008-07-02 Fuji Xerox Co., Ltd. Data processing apparatus, data processing method data processing program and computer readable medium
JP2016099165A (en) 2014-11-19 2016-05-30 ソニー株式会社 Calculation device, measurement system, body weight calculation method and program
EP3065090A2 (en) * 2015-03-06 2016-09-07 Panasonic Intellectual Property Management Co., Ltd. Learning method and recording medium background
US20170024641A1 (en) * 2015-07-22 2017-01-26 Qualcomm Incorporated Transfer learning in neural networks


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CLIFFORD NASS ET AL: "Improving automotive safety by pairing driver emotion and car voice emotion", CONFERENCE PROCEEDINGS OF CHI 2005, APRIL 2-7, 2005, PORTLAND, OREGON, USA, ACM, 2 PENN PLAZA, SUITE 701 NEW YORK NY 10121-0701 USA, 2 April 2005 (2005-04-02), pages 1973 - 1976, XP058168602, ISBN: 978-1-59593-002-6, DOI: 10.1145/1056808.1057070 *
HONG-WEI NG ET AL: "Deep Learning for Emotion Recognition on Small Datasets using Transfer Learning", INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, ACM, 2 PENN PLAZA, SUITE 701 NEW YORK NY 10121-0701 USA, 9 November 2015 (2015-11-09), pages 443 - 449, XP058076144, ISBN: 978-1-4503-3912-4, DOI: 10.1145/2818346.2830593 *


Similar Documents

Publication Publication Date Title
JP6889841B2 (en) Learning device, learning result utilization device, learning method and learning program
Madani et al. Fast and accurate view classification of echocardiograms using deep learning
CN111741884B (en) Traffic distress and road rage detection method
EP3488387B1 (en) Method for detecting object in image and objection detection system
CN110998604B (en) Recognition and reconstruction of objects with local appearance
US10318848B2 (en) Methods for object localization and image classification
EP3033999B1 (en) Apparatus and method for determining the state of a driver
JP4970408B2 (en) An adaptive driver assistance system using robust estimation of object properties
Alkım et al. A fast and adaptive automated disease diagnosis method with an innovative neural network model
KR102046706B1 (en) Techniques of performing neural network-based gesture recognition using wearable device
KR102046707B1 (en) Techniques of performing convolutional neural network-based gesture recognition using inertial measurement unit
US11328211B2 (en) Delimitation in unsupervised classification of gestures
Ivani et al. A gesture recognition algorithm in a robot therapy for ASD children
US12279882B2 (en) Movement disorder diagnostics from video data using body landmark tracking
CN113534189B (en) Weight detection method, human body characteristic parameter detection method and device
WO2018159666A1 (en) Learning apparatus, learning result using apparatus, learning method and learning program
US20210056402A1 (en) Methods and systems for predicting a trajectory of a road agent based on an intermediate space
Ozaltin et al. Artificial intelligence-based brain hemorrhage detection
CN112836549B (en) User information detection method and system and electronic equipment
US11983242B2 (en) Learning data generation device, learning data generation method, and learning data generation program
CN116805049A (en) Zero sample classification of measurement data
JP2023107752A (en) Method and control device for training object detector
Chinnasamy et al. An outlier based bi-level neural network classification system for improved classification of cardiotocogram data
EP4091095A1 (en) Systems and methods for eye tracking using machine learning techniques
JP2021152804A (en) Information processing device and computer program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18720030

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18720030

Country of ref document: EP

Kind code of ref document: A1
