WO2018131789A1 - Home social robot system for recognizing and sharing daily activity information by analyzing various sensor data, including living noise, using a synthetic sensor and a situation recognizer - Google Patents

Home social robot system for recognizing and sharing daily activity information by analyzing various sensor data, including living noise, using a synthetic sensor and a situation recognizer
- Publication number: WO2018131789A1 (PCT/KR2017/013401)
- Authority: WO (WIPO (PCT))
- Prior art keywords: sound, noise, robot, social robot, home social
Classifications

- G16H40/67: ICT specially adapted for the management or administration of healthcare resources or facilities, or for the management or operation of medical equipment or devices, for remote operation
- B25J11/00: Manipulators not otherwise provided for
- B25J19/02: Sensing devices (accessories fitted to manipulators)
- B25J19/06: Safety devices (accessories fitted to manipulators)
- G06Q50/10: Services (ICT specially adapted for implementation of business processes of specific business sectors)
- G06Q50/22: Social work or social welfare, e.g. community support activities or counselling services
- G10L21/0208: Noise filtering (speech enhancement, e.g. noise reduction or echo cancellation)
- G10L21/038: Speech enhancement using band spreading techniques
- G10L25/51: Speech or voice analysis techniques specially adapted for comparison or discrimination
- G16H20/70: ICT specially adapted for therapies or health-improving plans relating to mental therapies, e.g. psychological therapy or autogenous training
Definitions
- the present disclosure relates to a home social robot management system for sharing living activity information.
- more specifically, it relates to a home social robot management system in which a home social robot installed by a user acquires and interprets the living noise of a single-person household and shares it with other users.
- home social robots are attracting attention as a tool for addressing the complex social problems of modern society, such as population aging, the increase in single-person households, and deepening individualism.
- unlike traditional robots that simply take over physical tasks that are difficult for people, home social robots are emotion-oriented robots that interact with people and help them not feel isolated.
- home social robots have emerged as one way to address such problems in single-person households.
- in particular, there is a growing need for a home social robot that can practically guide the daily life of the elderly by monitoring their living patterns and providing appropriate feedback.
- the present disclosure aims to promote companionship to relieve the social isolation of single-person households, and devises a method of sharing activity information.
- activity at home was considered the most appropriate reflection of the living pattern of a single-person household; however, since the home is the most private space, sharing information about home activity may raise privacy concerns.
- a core recognition module of the IoT system is a "synthetic sensor" that combines existing sensors such as a distance measuring sensor, a temperature and humidity measuring sensor, an illuminance measuring sensor, an acoustic measuring sensor, a grid-eye sensor, and a gyro-acceleration sensor. An artificial intelligence module recognizes and analyzes the user's activity through this combination of sensors, and can infer the situation from the user's activity pattern and sounds such as the opening and closing of home appliances.
- synthetic sensors, one of the core research areas of Human Computer Interaction (HCI) and Human Robot Interaction (HRI), go beyond the limits of individual sensors and combine various types of sensors to pursue effective and economical sensing.
- according to one aspect of the present disclosure, there is provided a home social robot management system for living activity sharing, comprising a plurality of home social robots and a management server communicatively connected to the home social robots. Each home social robot comprises a synthesis sensor unit for receiving living noise and living environment information as signals, a speaker for outputting the state of the home social robot as sound, a display for outputting the state of the home social robot as images, a communication unit for communicating with the management server through a network, and a control unit connected to the synthesis sensor unit, the speaker, the display, and the communication unit. The control unit transmits the living noise and living environment information obtained from the synthesis sensor unit to the management server, and the management server determines situation information based on the living noise and living environment information and transmits it to the home social robot.
- the synthesis sensor unit may include two or more of a distance measuring unit, a gyro acceleration unit, a temperature and humidity measuring unit, an illuminance measuring unit, a grid-eye unit, and a sound measuring unit.
- the home social robot may include a hub robot and one or more edge robots communicatively connected to the hub robot.
- the distance measuring unit may include at least one of an infrared measuring device and an ultrasonic measuring device.
- the sound measurement unit includes a sound sensor and a sound recognizer, and the sound sensor may be configured to notify the sound recognizer so that the sound recognizer operates only when the energy level of the sound is equal to or greater than a predetermined threshold.
- the acoustic measurement unit may be configured to divide the noise-canceled input signal into segments of different time lengths.
- the acoustic measurement unit may be configured to first remove noise by applying a wavelet transform to the input signal related to the living noise and living environment information, and to secondly remove noise by applying a median filter after the inverse wavelet transform.
- likewise, the management server may be configured to first remove noise by wavelet-transforming the input signal, and to secondly remove noise by applying a median filter after the inverse wavelet transform.
- the management server may be configured to extract a feature vector by applying a wavelet transform to the input signal from which the noise is removed, and to classify a sound type based on the extracted feature vector.
- the situation information may include at least one of opening and closing the front door, opening and closing a window, running the tap, turning on the stove, operating the microwave, opening the refrigerator, operating the vacuum cleaner, turning the lights in the house on and off, turning the TV on and off, and the movement of people.
- the management server may be configured to transmit the situation information to the terminal of a second user, or to a second home social robot, that shares living activity information with the user of the home social robot.
- the management server may, based on the situation information, instruct at least one of the home social robot and the second home social robot to produce at least one of a facial expression and a voice.
- when the management server determines, based on the situation information, that a dangerous situation has occurred, it may send an emergency notification message to the terminal of a second user or to a second home social robot that shares living activity information with the user of the home social robot.
- the living noise may include at least one of a washing machine sound, a vacuum cleaner sound, a microwave oven sound, a gas stove sound, a keyboard sound, a window opening sound, a running water sound, a front door sound, a refrigerator door sound, a knocking sound, and a footstep sound.
- according to another aspect of the present disclosure, there is provided a method of determining a sound type, comprising: receiving living noise and living environment information as an input signal; removing noise from the input signal; obtaining a feature vector by performing a wavelet transform on the signal from which the noise has been removed; and determining the type of sound by applying the feature vector to a machine learning tool.
- the removing of the noise may include: performing a wavelet transform on the input signal to remove noise first; performing an inverse wavelet transform on the signal from which the noise was first removed; and applying a median filter to the inverse-wavelet-transformed signal to remove noise secondly.
- the step of obtaining the feature vector may include: making the length of the noise-canceled signal dyadic; performing a discrete wavelet transform (DWT) on the dyadic signal using a multi-resolution analysis (MRA) method; obtaining the first half of the feature vector, representing the magnitude of energy for each bandwidth in the frequency domain; obtaining the second half of the feature vector, representing the magnitude of energy for each location section in the time domain; and concatenating the first half and the second half to obtain the final feature vector.
- according to yet another aspect of the present disclosure, there is provided a living noise sharing method, comprising: receiving, by a home social robot, living noise and living environment information; transmitting the living noise and living environment information to the management server via a network; determining situation information based on the living noise and living environment information; and transmitting the situation information to at least one of the home social robot, a terminal of a second user who shares living activity information with the user of the home social robot, and a second home social robot.
- the step of receiving the living noise and living environment information may comprise: determining whether an input value for the living noise and living environment information is greater than a predetermined threshold value; if so, recording the left and right channels of the sound card simultaneously; selecting the channel with the larger energy among the recorded left and right sounds as the analysis target; and dividing the sound of the analysis target data into a plurality of segments of different sizes.
- the method may further include instructing, by the management server, at least one of the home social robot and the second home social robot to produce at least one of a facial expression and a voice based on the situation information.
- the method may further include, when the management server determines based on the situation information that a dangerous situation has occurred, transmitting an emergency notification message to the terminal of a second user who has decided to share living activity information with the user of the home social robot, or to a second home social robot.
- according to the present disclosure, there is provided a home social robot management system for sharing life activities that recognizes people's daily living activities and takes appropriate measures without violating privacy.
- when applied to an environment where an elderly person lives alone, it is possible to prevent mistakes such as going out without turning off the gas stove, thereby increasing the safety of the elderly and minimizing economic losses.
- the social isolation problem may be alleviated by connecting the user with other users through interaction with a social robot capable of social interaction with people, thereby expanding the social connections of the single-person household.
- since the present disclosure allows the degree of information sharing to be adjusted easily and selectively according to the user's situation, privacy can be balanced against sharing.
- FIG. 1 is a diagram illustrating a system environment for controlling a home social robot performed by a management server of a home social robot according to one embodiment of the present disclosure.
- FIG. 2 is a block diagram illustrating a home social robot according to one embodiment of the present disclosure.
- FIG. 3 is a diagram illustrating the internal components of the synthesis sensor unit according to an embodiment of the present disclosure.
- FIG. 4 is a conceptual diagram of removing noise and block phenomenon using spatial correlation according to an embodiment of the present disclosure.
- FIG. 5 is a diagram illustrating a two-stage downsampling of an input signal according to an embodiment of the present disclosure.
- FIG. 6 is a diagram illustrating a decomposition of the input signal in the frequency domain according to an embodiment of the present disclosure.
- FIG. 7 is a flowchart illustrating an operation process of an acoustic measuring unit according to an exemplary embodiment of the present disclosure.
- FIG. 8 is a flowchart describing a procedure by which the controller performs situation analysis of the robot according to an embodiment of the present disclosure.
- FIG. 1 is a diagram illustrating a system environment for controlling a home social robot performed by a management server of a home social robot according to one embodiment of the present disclosure.
- the system environment 100 for controlling a home social robot may include a management server 110.
- the management server 110 may be communicatively connected to the user's home social robot 120 and to a plurality of second user terminals 130-1, 130-2, ..., 130-n through the network 140.
- the network 140 may include wired/wireless networks, including but not limited to an internet network, a mobile communication network (for example, a WCDMA network, a GSM network, a CDMA network, etc.), and a wireless internet network (for example, a Wibro network, a Wimax network, etc.).
- the home social robot 120 may be configured to act as a hub, and the system may further include one or more edge robots 150-1, 150-2, ..., 150-n communicatively connected to the home social robot 120.
- for example, when the home social robot 120 serving as a hub is installed in the kitchen or living room and the edge robots 150-1, 150-2, ..., 150-n are installed in the bathroom, the bedroom, the entrance, and so on, it is possible to monitor almost anywhere in the living space.
- the management server 110 may be a dedicated server, such as a colocated server, a hosting server, or a cloud server.
- the management server 110 receives sound (for example, a washing machine sound) from the user's home social robot 120 via the network 140 and stores it in a database (not shown) of the management server 110.
- the sound may then be transmitted to a second user's home social robot (not shown) or to the second user terminals 130-1, 130-2, ..., 130-n. The categories of living noise to be shared can be adjusted according to the intimacy between the user and the second user.
- the living environment information may include information such as temperature, humidity, illuminance of the indoor space.
- living noises that may occur in the home may be categorized by machine learning and stored, per category, in a database in the management server 110.
- the management server 110 may continuously update the frequency of the living noise through the machine learning.
- the management server 110 may use the machine learning to match the sound from the user's home social robot 120 with a certain category of living noise.
- the terminal (not shown) of the user of the home social robot 120 may communicate with the robot through WiFi, Bluetooth, infrared communication, WiMax, and the like. After the user installs an app on the terminal and completes user registration (for example, pairing the user's home social robot 120 with the terminal), the user may control the home social robot 120 through the app.
- the user's terminal and the plurality of second user terminals 130-1, 130-2, ..., 130-n may include wireless communication devices based on various handheld form factors, such as laptops, portable terminals such as note pads, and smartphones, as well as computers, servers, and other devices capable of communicating through the network 140; however, the types of these terminals are not limited thereto.
- in one embodiment, the management server 110 may transmit the living noise detected by the user's home social robot 120 to the second user terminals 130-1, 130-2, ..., 130-n, or to the second user's home social robot. In another embodiment, it is also possible to transmit the living noise from the user's home social robot 120 both to the second user terminals 130-1, 130-2, ..., 130-n and to the second user's home social robot.
- the acoustic measurement unit 244 or the management server 110 may be configured to first remove noise by performing a wavelet transform on the input signal, and to secondly remove noise by applying a median filter after the inverse wavelet transform.
- the management server 110 may be configured to extract a feature vector by applying a wavelet transform to the input signal from which the noise is removed, and to classify a sound type based on the extracted feature vector.
- the home social robot 120 includes a synthetic sensor unit 240 for receiving living noise and living environment information such as sound, a storage unit 230 for storing the received living noise and living environment information, a speaker 250 for outputting auditory information about the living noise and living environment information, a display 260 for outputting visual information about the living noise and living environment information, a communication unit 220 for communicating with a subscriber terminal (not shown), other home social robots, and the management server 110 through the network 140, and a control unit 210 connected to the synthesis sensor unit 240, the storage unit 230, the speaker 250, the display 260, and the communication unit 220.
- the controller 210 may transmit the living noise and the living environment information to the management server 110 through the network 140.
- the management server 110 matches the received living noise and living environment information against the categories stored in its database (for example, a door opening noise) and then sends the result to the home social robots or terminals of one or more second users who share living noise with the user.
- the operation of matching the living noise and living environment information with a specific category can be performed using machine learning models refined through training on a large amount of data.
- the control unit 210 of the home social robot 120 may be configured to receive, from the management server 110, the sound information of a second user who shares living noise.
- the control unit 210 may output the received sound information of the second user as it is, or convert it into a robot sound or a human voice before output.
- the synthetic sensor unit 240 includes a distance measuring unit 241, a temperature and humidity measuring unit 242, an illuminance measuring unit 243, a sound measuring unit 244, a grid-eye unit 245, a gyro acceleration unit 246, and the like (see FIG. 3).
- the communication unit 220 may be configured to implement a communication protocol that supports transmission and reception of various information under the control of the control unit 210.
- the communication protocol may be implemented with appropriate hardware and / or firmware.
- the communication protocol can include a Transmission Control Protocol / Internet Protocol (TCP / IP) protocol and / or a User Datagram Protocol (UDP) protocol.
- the communication unit 220 may be implemented with hardware and / or firmware that implements various Radio Access Technologies (RATs) including LTE / LTE-A.
- the communication unit 220 may be implemented to comply with a wireless communication interface standard such as LTE-Ue.
- under the control of the controller 210, the communication unit 220 communicates with the management server 110, the plurality of second user terminals 130-1, 130-2, ..., 130-n, and a home social robot (not shown).
- the storage unit 230 may store frequencies of living noise for each category.
- the storage unit 230 may also store software / firmware and / or data for the operation of the controller 210, and store data input / output.
- the storage unit 230 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, SD or XD memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), magnetic memory, a magnetic disk, and an optical disk.
- FIG. 3 is a diagram illustrating the internal components of the synthesis sensor unit according to an embodiment of the present disclosure.
- Synthesis sensor unit 240 for example, distance measuring unit 241, temperature and humidity measuring unit 242, illuminance measuring unit 243, acoustic measuring unit 244, grid-eye unit 245, gyro acceleration unit And a plurality of sensors, such as 246.
- the distance measuring unit 241 may include, for example, an infrared sensor or an ultrasonic sensor.
- the distance measuring unit 241 periodically measures the distance of a specific object in front of the robot and stores the measured value. This is necessary for the robot to perceive and react to objects appearing in front of it.
- either the infrared sensor or the ultrasonic sensor may be used alone, or both may be used together.
- the management server 110 may store the time, IP (Internet Protocol) address, robot universally unique identifier (UUID), owner's ID (identification), and distance value.
- the robot 120 stores the measured distance values in a global variable array of size N.
- N is a natural number representing the size of the buffer and can be set to 100, 200, or the like; the oldest stored values are removed in order as new values are stored.
- the distance measuring unit 241 waits for a predetermined time (for example, 5 seconds), and obtains the distance measurement value from the sensor again.
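The buffer behavior described above (newest value in, oldest value out once N values are stored) can be sketched with a bounded deque. A minimal sketch in Python, following the 5-second polling interval from the text; read_distance_sensor is a hypothetical stand-in for the actual sensor driver:

```python
import time
from collections import deque

N = 100  # buffer size; the text suggests values such as 100 or 200

# deque(maxlen=N) drops the oldest value automatically once N values are stored
distance_history = deque(maxlen=N)

def read_distance_sensor() -> float:
    # Hypothetical stand-in for the infrared/ultrasonic distance driver.
    return 0.0

def distance_thread(poll_interval_s: float = 5.0) -> None:
    """Measure, store, wait the predetermined time, and measure again."""
    while True:
        distance_history.append(read_distance_sensor())
        time.sleep(poll_interval_s)
```

The same pattern applies to the temperature, humidity, and illuminance histories described below.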
- Temperature and humidity measurement unit 242 periodically measures the temperature and humidity of the environment in which the robot is located, and stores it. This is necessary for the robot to perceive and react to the comfort of its environment.
- when the temperature and humidity thread of the temperature and humidity measurement unit 242 measures temperature and humidity, it transmits the measured values to the management server 110, which stores the time, robot UUID, owner's ID, and temperature value, and the time, robot UUID, owner's ID, and humidity value.
- the robot 120 stores the temperature values in a global variable array of size N, and likewise stores the humidity values in a global variable array of size N.
- as before, N is a natural number representing the size of the buffer (for example, 100 or 200), and the oldest stored values are removed in order as new values are stored.
- the illuminance measuring unit 243 periodically measures the ambient brightness of the environment where the robot is located and stores it. This is necessary for the robot 120 to perceive and react to whether the lighting of the environment in which the robot 120 is located is on / off.
- the management server 110 stores the time, robot UUID, owner's ID, and illuminance value.
- the robot 120 stores the illuminance values in a global variable array of size N.
- as before, N is a natural number representing the size of the buffer (for example, 100 or 200), and the oldest stored values are removed in order as new values are stored.
- the acoustic measurement unit 244 periodically measures the ambient noise of the environment in which the robot 120 is located and stores it.
- the acoustic measurement unit 244 may include a sound sensor 2441 and a sound recognizer 2442.
- the sound recognizer 2442 operates only when notified by the sound sensor 2441. That is, the sound recognizer 2442 does not run continuously; the sound sensor first detects that a sound has occurred, and the sound recognizer 2442 attempts recognition only when the sound energy is greater than or equal to a predetermined threshold. This reduces the system load, since the sound recognizer requires very high computational complexity and should not run at all times.
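As an illustration of this gating, a minimal sketch that computes short-frame energy and invokes the (computationally heavy, here hypothetical) recognizer only above a threshold; the threshold value is illustrative, not from the source:

```python
import numpy as np

ENERGY_THRESHOLD = 1e-3  # illustrative value; tuned per microphone in practice

def frame_energy(frame: np.ndarray) -> float:
    """Mean squared amplitude of one audio frame."""
    return float(np.mean(frame.astype(np.float64) ** 2))

def sound_sensor_gate(frame: np.ndarray, recognizer) -> None:
    # The cheap energy check runs on every frame; the costly recognizer
    # runs only when the energy meets or exceeds the threshold.
    if frame_energy(frame) >= ENERGY_THRESHOLD:
        recognizer(frame)
```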
- the gyro acceleration unit 246 may detect vibrations when a person moves, and can thereby detect a person's movement or a fall.
- the grid-eye unit 245 may detect the presence and movement of a person through a grid-eye composed of infrared array sensors.
- the grid-eye unit 245 only detects the presence or movement of a person; unlike a camera, it does not produce an image that could identify an individual, and thus raises no privacy concerns.
- the home social robot 120 serving as a hub has all the functions of the synthesis sensor unit 240, but each edge robot 150-1, 150-2, ..., 150-n may include only some of the functions of the synthesis sensor unit 240 depending on where it is installed.
- for example, an edge robot installed in the bathroom may include only the acoustic measurement unit 244 and the gyro acceleration unit 246.
- alternatively, only part of the functions of an edge robot's synthesis sensor unit 240 may be activated, or the synthesis sensor unit 240 may be manufactured to include only, for example, the distance measuring unit 241, the acoustic measurement unit 244, and the gyro acceleration unit 246.
- in the bathroom, for example, the synthesis sensor unit 240 can detect the movement of a person (for example, a fall).
- the synthesis sensor unit 240 may also detect the movement of a person (for example, the person moving about or leaving).
- the function of the acoustic measurement unit 244 will now be described in more detail, in particular the denoising technique and the feature vector extraction methodology employed in one embodiment of the present disclosure.
- the key to noise cancellation is to distinguish between the boundary components of noise and the boundary components of a signal in the wavelet transform domain.
- the algorithm proposed in the present disclosure aims to outperform existing noise reduction algorithms while avoiding high computational complexity.
- the noise to be removed in the present disclosure is primarily additive white Gaussian noise, but is not limited thereto.
- mathematical models of a boundary line include a step edge, a roof edge, and a ridge edge; in the present disclosure, it is assumed that the boundary lines present in a sound are simply step edges, and [Definition 1] is introduced on this basis.
- a(x) and a′(x) are the wavelet transform results of the pure original signal without noise. It is known that the wavelet transform of a signal is equal to the result of smoothing and then differentiating the signal; that is, a(x) and a′(x) represent the result of smoothing and differentiating the given signal. Thus, all boundary components present in the given signal appear as local modulus maxima in a(x) and a′(x).
- [Definition 1] is used to derive the set of x values that do not belong to any boundary component: an x belongs to this set when the conditions of [Definition 1] hold for it.
- O1(x) and O2(x) can be expressed as in [Theorem 3] below.
- [Theorem 4] and [Theorem 2] described above provide useful information for distinguishing boundary components from noise and block-boundary components in the wavelet transform domain. A method for removing noise components and block-boundary components in the wavelet transform domain, obtained by modifying [Theorem 3], can be expressed as [Corollary 1] below.
- otherwise, O1(x) and O2(x) are maintained as they are.
- the circled portions of FIG. 4 represent boundary-line components whose values have fallen to zero.
- O1(x) and O2(x) are the derivatives of the given signal. Therefore, in the parts where the values of O1(x) and O2(x) were changed to 0, the corresponding part of the original signal, obtained by integrating O1(x) and O2(x), shows no change in slope. The circled parts of FIG. 4 therefore act as a kind of discontinuity, i.e., a step edge, when the signal is restored through the inverse wavelet transform, and when such discontinuities are modeled as noise, impulse noise is the closest model.
- boundary component values smaller than D(x) must also be restored by some amount; see the portions indicated by circles.
- the algorithm implemented in the present disclosure starts at the point where a boundary component has been truncated and restores the values of O1(x) and O2(x) until the slope of the D(x) profile no longer changes.
- when the algorithm is applied, the truncated boundary component corresponding to the right circle of FIG. 4 is almost completely restored, while the truncated boundary component corresponding to the left circle is not; in other words, this restoration method has limits.
- next, the median filter is applied to the first-stage result signal from which the noise and block phenomena have been removed by the foregoing method.
- the region to which the median filter is applied may be limited to the portions in which the values of O1(x) and O2(x) were modified to 0, or may be extended to the entire sound.
- experiments confirmed that when the amount of noise or block phenomena in the sound is small, better results are obtained by applying the median filter only to the areas where O1(x) and O2(x) were modified to 0; conversely, when the amount of noise or the degree of block phenomena is large, better results are obtained by applying the median filter to the whole sound.
- the noise to be removed in this disclosure is additive white Gaussian noise, and this assumption can be exploited because of its fundamental nature: the "white" property means that the statistical distribution of the wavelet-transformed noise component values is approximately the same over the whole domain of x. In other words, if even a small pure-noise region without boundary lines can be reliably identified in the wavelet-transformed sound, the distribution of wavelet transform values in the remaining pure-noise regions will resemble the distribution in that region. Moreover, when an orthogonal wavelet filter is used, the noise distribution remains white even after the wavelet transform.
- to find such a pure-noise region, the variance difference between regions of the wavelet-transformed sound at the second stage, i.e., O2(x), is used. This relies on the assumption that the variance of regions containing both boundary and noise components is much larger than that of regions containing purely noise components; Table 1 (for convenience, measured on image data instead of sound) shows that this assumption is very valid.
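The boundary-preserving details above ([Definition 1] through [Corollary 1]) are specific to this disclosure, but the overall two-stage pipeline, wavelet-domain noise suppression followed by an inverse transform and a median filter, can be sketched with standard tools. The sketch below substitutes conventional soft thresholding with a MAD noise estimate for the disclosure's boundary-component analysis (a substitution, not the patented method), using PyWavelets and SciPy:

```python
import numpy as np
import pywt
from scipy.signal import medfilt

def denoise_two_stage(x: np.ndarray, wavelet: str = "db4",
                      level: int = 4, median_kernel: int = 5) -> np.ndarray:
    """Stage 1: wavelet-domain suppression of additive white Gaussian noise.
    Stage 2: median filter on the inverse-transformed signal."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Estimate the noise level from the finest detail band (standard MAD estimate).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(x)))
    # Soft-threshold the detail coefficients; keep the approximation band intact.
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    y = pywt.waverec(coeffs, wavelet)[: len(x)]  # inverse wavelet transform
    return medfilt(y, kernel_size=median_kernel)  # second-stage median filter
```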
- based on the signal from which the noise has been removed, the acoustic measurement unit 244 then performs a machine learning function that automatically recognizes and determines what kind of signal it is.
- neural networks can be divided into shallow neural networks (SNN) and deep neural networks (DNN).
- nominally, the difference between an SNN and a DNN is the number of hidden layers. In practical terms, however, the difference is that in an SNN the construction of the feature vector fed into the network's input nodes is separated from the network that performs the classification, whereas in a DNN even the feature construction is part of the network itself.
- the input signal is subjected to a wavelet transform.
- the wavelet transform can be thought of as a generalized version of the Fourier transform, transforming given time (or spatial) information into frequency information. Whereas the Fourier transform converts time information entirely into frequency information, the wavelet transform maps the given time information into time-frequency space simultaneously. The wavelet transform therefore provides localization information that is difficult to extract with the conventional Fourier transform, yielding much richer data for interpreting the input signal.
- the method of extracting a feature vector from a received signal (e.g., a sound) in the present disclosure is as follows.
- "dyadic" means that a number is a power of two. For example, 3, 5, and 9 are not powers of two, so they are not dyadic, while 2, 4, 8, 16, and 32 are powers of two, so they are dyadic. 6 and 10, although even, are not powers of two, so they are not dyadic either.
- the wavelet transform is performed as a discrete wavelet transform (DWT) using a multi-resolution analysis (MRA) method.
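Before the DWT, the signal length must be dyadic. A small helper for the dyadic check and for padding up to the next power of two; zero-padding is an assumption here, since the disclosure does not specify how the signal is made dyadic:

```python
import numpy as np

def is_dyadic(n: int) -> bool:
    """True when n is a positive power of two (2, 4, 8, 16, ...)."""
    return n > 0 and (n & (n - 1)) == 0

def pad_to_dyadic(x: np.ndarray) -> np.ndarray:
    """Zero-pad x so its length becomes the next power of two."""
    if is_dyadic(len(x)):
        return x
    target = 1 << (len(x) - 1).bit_length()  # next power of two >= len(x)
    return np.pad(x, (0, target - len(x)))
```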
- FIG. 5 is a diagram illustrating a two-stage downsampling process of an input signal according to an embodiment of the present disclosure
- FIG. 6 is a diagram illustrating a decomposition aspect of the input signal in the frequency domain according to an embodiment of the present disclosure.
- in FIG. 5, c_{j+1} represents the input signal, g(−n) represents the high-pass filter, h(−n) represents the low-pass filter, and a downward arrow represents downsampling.
- c_{j+1} is decomposed by downsampling into d_j and c_j, and c_j is decomposed again into d_{j−1} and c_{j−1}.
- c_{j−1} in FIG. 5 corresponds to the ω0 band, the lowest frequency band in FIG. 6; d_{j−1} in FIG. 5 corresponds to the ω1 band in FIG. 6; and d_j in FIG. 5 corresponds to the ω2 band in FIG. 6.
- the feature vector of the present disclosure may be divided into a first half and a second half, and the first half expresses the magnitude of energy for each bandwidth in the frequency domain, and the second half expresses the magnitude of energy for each location section in the time domain.
- the number of elements constituting the first half of the feature vector is log2(N) + 1, where N is the length (number of samples) of the input signal.
- for example, if the input signal length is 8, the first half has 4 elements; if it is 16, 5 elements; and if it is 1024, 11 elements. This coincides with the number of frequency bands generated when the input signal is decomposed by MRA one level at a time.
- the number of elements constituting the second half of the feature vector is the dyadic number closest to the number of elements in the first half. For example, if the input signal length is 8, the second half has 4 elements; if it is 16, 4 elements; and if it is 1024, 8 elements.
- the second half of the feature vector represents the magnitude of energy in each section of the time domain, and it can be derived during the MRA.
- the MRA continues until only one element corresponding to the lowest frequency remains; during this process there is a point where the size of the low-frequency region coincides with the number of second-half elements, because both the length of the input signal and the number of second-half elements are dyadic.
- when the size of the low-frequency region equals the number of elements in the second half of the feature vector, the smallest value among the low-frequency data is found; if this value is less than 0, all the data are shifted upward by its absolute value so that they become non-negative.
- next, the second-half elements are summed and each element is divided by this sum, which is a kind of normalization; without normalization, the probability of statistical bias increases.
- the number of detail bands plus one (for the final approximation) equals the number of first-half elements in the feature vector.
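Putting the feature-vector description together, the sketch below builds the vector in the shape just described: the first half holds the energy of each MRA band (log2(N)+1 values), and the second half is the normalized low-frequency approximation whose length is the dyadic number closest to the first half's length. This is a minimal reading of the text, assuming a Haar wavelet (the disclosure does not name one):

```python
import numpy as np
import pywt

def extract_feature_vector(x: np.ndarray) -> np.ndarray:
    x = np.asarray(x, dtype=np.float64)
    n = len(x)
    if n & (n - 1):  # make the length dyadic (see pad_to_dyadic above)
        x = np.pad(x, (0, (1 << (n - 1).bit_length()) - n))
    n_levels = int(np.log2(len(x)))

    # First half: energy per band from a full MRA (n_levels details + 1 approximation).
    coeffs = pywt.wavedec(x, "haar", level=n_levels)
    first_half = np.array([float(np.sum(c ** 2)) for c in coeffs])  # log2(N)+1 values

    # Second half length m: the dyadic number closest to len(first_half).
    m = 2 ** int(round(np.log2(len(first_half))))
    # Decompose just far enough that the approximation band has m samples.
    level = n_levels - int(np.log2(m))
    approx = x.copy() if level <= 0 else pywt.wavedec(x, "haar", level=level)[0]

    # Shift so all values are non-negative, then normalize by the sum.
    approx = approx - min(float(approx.min()), 0.0)
    total = approx.sum()
    second_half = approx / total if total > 0 else approx

    return np.concatenate([first_half, second_half])
```

For a 1024-sample input this yields an 11-element first half and an 8-element second half, matching the examples in the text.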
- FIG. 7 is a flowchart illustrating an operation process of an acoustic measuring unit according to an exemplary embodiment of the present disclosure.
- the acoustic measurement unit 244 includes a sound sensor 2441 and a sound recognizer 2442, and the sound recognizer may operate only when the sound sensor 2441 is "on".
- the sound sensor 2441 measures an input value (S710) and determines whether the input value is larger than a predetermined threshold value (S720). If the input value is less than or equal to the threshold, the sensor waits for a predetermined time (for example, 3, 5, or 10 seconds) (S730) and measures the input value again (S710).
- if the input value exceeds the threshold, the left and right channels of the sound card are recorded simultaneously (S740), and the channel with the larger energy among the recorded left and right sounds is selected as the analysis target (S750).
- voice filtering may be performed as necessary (S760).
- although the present disclosure mainly deals with the processing of living noise, voice signals may be processed in parallel in order to facilitate communication between the user and friends.
- the input value is divided into data segments of sizes t1, t2, t3, and so on (S770).
- the present inventors found that about 5 seconds of sound is needed to determine, for example, whether a sound is a "washing machine" sound; about 1 second to determine whether it is a "vacuum cleaner", "microwave oven", "gas range", "keyboard", "window closing", or "running water" sound; and about 0.5 seconds to determine whether it is a "front door", "refrigerator door", "knocking", or similar sound.
- the type of sound is then determined based on the sound data divided into segments of sizes t1, t2, t3, and so on (S780).
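A sketch of this multi-window segmentation, assuming a 16 kHz sampling rate (not specified in the source) and the window durations reported above; classify_sound is a hypothetical stand-in for the feature extraction and machine learning stage:

```python
import numpy as np

SAMPLE_RATE = 16000  # assumed sampling rate, not given in the source
WINDOW_SECONDS = (5.0, 1.0, 0.5)  # t1, t2, t3 durations from the text

def classify_sound(segment: np.ndarray) -> str:
    """Hypothetical stand-in: feature extraction + machine learning classifier."""
    return "unknown"

def classify_windows(buffer: np.ndarray) -> dict:
    """Classify the head of the recorded buffer at each window size."""
    results = {}
    for seconds in WINDOW_SECONDS:
        n = int(seconds * SAMPLE_RATE)
        if len(buffer) >= n:
            results[seconds] = classify_sound(buffer[:n])
    return results
```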
- FIG. 8 is a flowchart describing a procedure by which the controller performs situation analysis of the robot according to an embodiment of the present disclosure.
- the controller 210 periodically loops and analyzes the current situation of the robot 120 itself.
- the information used at this time is the history of the synthesis sensor unit 240 readings and the types of recognized sounds. Based on this, the robot 120 selects and performs an appropriate motion, facial expression, and utterance.
- the controller thread obtains the latest and past values from the history arrays stored in global variables for each sensor of the synthesis sensor unit 240, and constructs a plurality of input vectors as global variables (S801).
- for example, the following 38-dimensional input vector can be constructed.
- Element 2 Current illuminance: 3 levels (dark, appropriate, bright)
- Element 4 Current distance (infrared): 3 levels (far, appropriate, near)
- Element 6 Current distance (ultrasonic): 3 levels (far, appropriate, near)
- Element 8 Current temperature: 3 levels (cold, appropriate, hot)
- Element 9 Humidity variation: 3 levels (getting wetter, steady, getting drier)
- Element 10 Current humidity: 3 levels (humid, appropriate, dry)
- Element 12 Current washing machine sound: 2 levels (washing, not washing)
- Element 16 Current keyboard sound: 2 levels (operating, not operating)
- Element 18 Current microwave sound: 2 levels (operating, not operating)
- Element 20 Current stove sound: 2 levels (operating, not operating)
- Element 22 Current window sound: 2 levels (opening/closing, no sound)
- Element 24 Current front door sound: 2 levels (opening/closing, no sound)
- Element 26 Current refrigerator door sound: 2 levels (opening/closing, no sound)
- Element 30 Current "like" sound: 2 levels (detected, not detected)
- Element 32 Current "no" sound: 2 levels (detected, not detected)
- Element 34 Current "sad" sound: 2 levels (detected, not detected)
- Element 35 Time of the last voice prompt about the washing machine sound
- Element 36 Time of the last voice prompt about the cleaner sound
- Element 37 Time of the last voice prompt about the keyboard sound
- Element 38 Time of the last voice prompt about the microwave sound
- next, an output matrix corresponding to the plurality of input vectors is constructed (S802).
- for example, a 17 x 4 output matrix corresponding to the 38-dimensional input vector is constructed.
- each of the 17 rows corresponds to specific contextual information about a change in conditions: {sound volume change, illuminance change, user distance change, temperature change, humidity change, washing machine sound change, cleaner sound change, keyboard sound change, microwave sound change, gas range sound change, window sound change, front door sound change, refrigerator door sound change, "wonder" change, "like" change, "dislike" change, "sad" change}, and the 4 columns correspond to the output actions {own robot's facial expression, own robot's voice, friend robot's facial expression, friend robot's voice}. This is shown in the table below.
- next, the i-th row is selected (S804), and the time, IP address, robot UUID, owner ID, energy magnitude, input vector, output vector, and the UUIDs of the friend robots are stored in the management server 110 (S805).
- it is then determined whether the output vector corresponds to an activity set to be shared (S809). For example, when the "front door" sound has been recognized and it is determined that this sound is to be shared with the friend robot, the output vector is transmitted to the friend robot's UUID channel (S810).
- i is then incremented (i <- i + 1), and it is determined whether i is greater than N, where N is a natural number (for example, 17) (S812). If i is not greater than N, the i-th row is selected again (S804).
- if i is greater than N, the process waits for a predetermined time (for example, 1 second, 3 seconds, or 5 seconds) (S813) and resumes from step S801.
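The loop of FIG. 8 (steps S801 through S813) can be mirrored in a short control-flow sketch; every helper below is a hypothetical stand-in that only traces the steps named in the text:

```python
import time

N_ROWS = 17        # rows of the output matrix, one per condition change
LOOP_WAIT_S = 1.0  # predetermined wait, e.g. 1, 3, or 5 seconds

def build_input_vector():            # S801: latest + past sensor history values
    return [0] * 38                  # hypothetical 38-dimensional vector

def build_output_matrix(vec):        # S802: 17 x 4 output matrix
    return [[None] * 4 for _ in range(N_ROWS)]

def store_on_server(row): pass             # S805: time, IP, UUIDs, vectors, ...
def is_shared_activity(row): return False  # S809: is this activity set to share?
def send_to_friend_robot(row): pass        # S810: friend robot's UUID channel

def situation_loop():
    while True:
        vec = build_input_vector()           # S801
        matrix = build_output_matrix(vec)    # S802
        for i in range(N_ROWS):              # S804/S812: iterate over the rows
            row = matrix[i]
            store_on_server(row)             # S805
            if is_shared_activity(row):      # S809
                send_to_friend_robot(row)    # S810
        time.sleep(LOOP_WAIT_S)              # S813, then resume from S801
```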
- when the management server 110 determines, based on the situation information, that a dangerous situation has occurred, it sends an emergency notification message to the terminal of a second user who has decided to share living activity information with the user of the home social robot 120, or to a second home social robot.
- as described above, the situation information may include at least one of opening and closing the front door, opening and closing a window, running the tap, turning on the stove, operating the microwave, opening the refrigerator, operating the vacuum cleaner, turning the lights in the house on and off, turning the TV on and off, and the movement of people.
- for example, based on status information such as "gas range turned on" and "front door opened/closed", if the management server 110 observes the sequence front door opened/closed, then gas range turned on, then front door opened/closed with no further changes in the home of an elderly person living alone, it may judge that the person has gone out leaving the gas range on, and may send an emergency notification message (e.g., a mobile phone text message) to the user's own terminal and/or to the terminals of second users (for example, family members or a social worker) who have decided to share living activity information with the user.
- turning on the gas range may be detected by, for example, the acoustic measurement unit 244 and the temperature and humidity measurement unit 242 of the synthesis sensor unit 240, and opening and closing the front door may be detected by the acoustic measurement unit 244, the grid-eye unit 245, and the gyro acceleration unit 246 of the synthesis sensor unit 240.
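This gas-range scenario amounts to a pattern match over the recent event history. A minimal rule sketch, assuming recognized events arrive as (timestamp, label) pairs and notify() is a hypothetical messaging hook:

```python
from typing import List, Tuple

DANGER_PATTERN = ["front_door", "gas_range_on", "front_door"]

def notify(message: str) -> None:
    """Hypothetical hook: SMS / app push to the user and sharing second users."""
    print(message)

def check_gas_range_rule(events: List[Tuple[float, str]]) -> None:
    """Fire an emergency notification when the most recent events match
    front door -> gas range on -> front door (went out, stove still on)."""
    labels = [label for _, label in events[-len(DANGER_PATTERN):]]
    if labels == DANGER_PATTERN:
        notify("Emergency: the front door opened after the gas range was "
               "turned on, with no sign it was turned off.")
```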
- the arrangement of the components shown may vary depending on the environment or requirements in which the invention is implemented; for example, some components may be omitted, or several components may be integrated into one, and the arrangement order and connections of some components may be changed.
- it should be understood that components of the embodiments of the present disclosure described above may be implemented in hardware, software, firmware, middleware, or a combination thereof, and may be utilized as systems, subsystems, components, or subcomponents thereof. If implemented in software, the elements of the present disclosure are the instructions/code segments that perform the necessary tasks. The program or code segments may be stored in a machine-readable medium or a computer program product, such as a processor-readable medium. A machine-readable or processor-readable medium may include any medium that can store or transmit information in a form readable and executable by a machine (e.g., a processor or a computer).
Abstract
The present invention relates to a synthetic-sensor-based home social robot management system for sharing daily activity information, the system recognizing a person's daily living activities and taking appropriate measures without violating privacy. The present invention provides an acoustic preprocessing technique for segmenting and processing sounds generated in everyday life into a state optimal for acoustic recognition, a technique for effectively removing the additive white noise that is omnipresent in living activities, a technique for extracting feature vectors to effectively recognize living-activity sounds in the preprocessed signals, a technique for identifying types of living noise by applying machine learning to the extracted feature vectors, and the like. According to the present invention, particularly when applied to an environment where an elderly person lives alone, it is possible to prevent mistakes such as going out without turning off a gas stove while cooking food, thereby improving the safety of the elderly and minimizing economic loss.
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20170005638 | 2017-01-12 | ||
KR10-2017-0005638 | 2017-01-12 | ||
KR1020170017709A KR101982260B1 (ko) | 2017-01-12 | 2017-02-08 | 홈 소셜 로봇 |
KR10-2017-0017709 | 2017-02-08 | ||
KR1020170112838A KR102064365B1 (ko) | 2017-09-04 | 2017-09-04 | 생활 소음으로부터의 음향 종류 판별 방법 및 이를 이용한 일상 활동 정보 공유용 홈 소셜 로봇 관리 시스템 |
KR10-2017-0112838 | 2017-09-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018131789A1 true WO2018131789A1 (fr) | 2018-07-19 |
Family
ID=62840219
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2017/013401 WO2018131789A1 (fr) | 2017-01-12 | 2017-11-23 | Système de robot social domestique pour reconnaître et partager des informations d'activité quotidienne par analyse de diverses données de capteur comprenant un bruit de vie à l'aide d'un capteur synthétique et d'un reconnaisseur de situation |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018131789A1 (fr) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100795475B1 (ko) * | 2001-01-18 | 2008-01-16 | 엘아이지넥스원 주식회사 | 잡음제거기 및 웨이블릿 변환 필터 설계 방법 |
KR20100081587A (ko) * | 2009-01-06 | 2010-07-15 | 삼성전자주식회사 | 로봇의 소리 인식 장치 및 그 제어 방법 |
JP4595436B2 (ja) * | 2004-03-25 | 2010-12-08 | 日本電気株式会社 | ロボット、その制御方法及び制御用プログラム |
KR20110026212A (ko) * | 2009-09-07 | 2011-03-15 | 삼성전자주식회사 | 로봇 및 그 제어방법 |
KR101457881B1 (ko) * | 2014-06-11 | 2014-11-04 | 지투파워 (주) | 초음파 신호에 의한 수배전반의 지능형 아크 및 코로나 방전 진단 시스템 |
Legal Events

Code | Description
---|---
121 | Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 17891894; Country of ref document: EP; Kind code of ref document: A1)
NENP | Non-entry into the national phase (Ref country code: DE)
32PN | Ep: public notification in the EP bulletin as the address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC, EPO FORM 1205A DATED 30.09.19)
122 | Ep: PCT application non-entry in European phase (Ref document number: 17891894; Country of ref document: EP; Kind code of ref document: A1)