
WO2016116992A1 - Speech learning system and speech learning method - Google Patents

Speech learning system and speech learning method

Info

Publication number
WO2016116992A1
WO2016116992A1 (PCT application PCT/JP2015/006369)
Authority
WO
WIPO (PCT)
Prior art keywords
learning
vehicle
time
program
speech
Prior art date
Application number
PCT/JP2015/006369
Other languages
English (en)
Japanese (ja)
Inventor
典子 加藤
Original Assignee
株式会社デンソー
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2015152126A external-priority patent/JP6443257B2/ja
Application filed by 株式会社デンソー filed Critical 株式会社デンソー
Priority to US15/542,810 priority Critical patent/US11164472B2/en
Publication of WO2016116992A1 publication Critical patent/WO2016116992A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/04Electrically-operated educational appliances with audible presentation of the material to be studied

Definitions

  • The present disclosure relates to a speech learning system and a speech learning method by which a user in a vehicle learns through speech.
  • Voice learning technology is known that provides learning content to users by voice, enabling learning even in situations where reading text is difficult. With this technology, time spent driving a vehicle can be used for learning.
  • Such audio learning content is generally provided as a program with a predetermined learning time (for example, one hour).
  • A technique has also been proposed that allows a learning user to set his or her own learning time arbitrarily (Patent Document 1).
  • The present disclosure has been made in view of the above problems of the related art, and its object is to provide a speech learning system and a speech learning method that can promote continuous learning in a vehicle.
  • According to one aspect, a speech learning system is applied to a vehicle and provides learning content by speech to a user in the vehicle. The system includes a storage unit that stores a plurality of learning elements constituting the learning content; a boarding time estimation unit that estimates the boarding time during which the user will be in the vehicle; a learning program generation unit that generates, by combining elements from among the plurality of learning elements, a learning program for one session that ends within the boarding time estimated by the boarding time estimation unit; and an execution unit that executes the learning program.
  • Because the learning program ends within the boarding time, the user can start learning without being concerned about the time it will require, and can thus be encouraged to learn.
  • Since the learning time is set according to the boarding time, the driver does not need to make a decision about it.
  • Because reliably completing one session within the boarding time gives a sense of accomplishment and increases motivation for the next session, continuous learning in the vehicle can be promoted.
  • According to another aspect, a speech learning method is applied to a vehicle and provides learning content by voice to a user in the vehicle. The method includes estimating the boarding time during which the user will be in the vehicle, generating a learning program for one session that ends within the boarding time by combining elements from a plurality of pre-stored learning elements constituting the learning content, and executing the learning program.
  • FIG. 1 is an explanatory diagram illustrating a configuration of a speech learning system according to an embodiment of the present disclosure.
  • FIG. 2 is a flowchart illustrating a speech learning control process executed by the speech learning system according to the embodiment of the present disclosure.
  • FIG. 3 is a flowchart illustrating a learning program generation process according to an embodiment of the present disclosure.
  • FIG. 4 is a flowchart illustrating a learning program execution process according to an embodiment of the present disclosure.
  • FIG. 5A is an explanatory diagram schematically illustrating an example in which a driver in a vehicle learns a language according to the speech learning control process of an embodiment of the present disclosure.
  • FIG. 5B is a diagram showing learning objectives and learning contents of each step in the example of language learning shown in FIG. 5A.
  • FIG. 6 is an explanatory diagram illustrating a configuration of the speech learning system according to the first modified example of the present disclosure.
  • FIG. 7 is a flowchart illustrating a learning program generation process executed by the speech learning system according to the first modified example of the present disclosure.
  • FIG. 8 is an explanatory diagram schematically illustrating an example in which a driver in a vehicle learns using the speech learning system according to the first modification of the present disclosure.
  • FIG. 9 is a flowchart illustrating a speech learning control process executed by the speech learning system according to the second modified example of the present disclosure.
  • FIG. 10 is an explanatory diagram schematically illustrating an example in which a driver in a vehicle learns using the speech learning system according to the second modified example of the present disclosure.
  • FIG. 1 shows the configuration of a speech learning system 10 of the present embodiment.
  • The speech learning system 10 according to the present embodiment is mounted on a vehicle and provides learning content by voice to a user (for example, a driver) in the vehicle.
  • The speech learning system 10 includes a movement schedule acquisition unit 11, a boarding time estimation unit 12, a movement history storage unit 13, a learning element storage unit 14, a learning program generation unit 15, and a notification unit 16.
  • The movement schedule acquisition unit 11 acquires the departure point and destination of the vehicle that is about to start moving. For example, when a destination is set in a navigation system (not shown) mounted on the vehicle, the movement schedule acquisition unit 11 acquires the set destination and the current location of the vehicle as the departure point.
  • The boarding time estimation unit 12 estimates the time required to move from the departure place to the destination as the boarding time (that is, the time the user is confined in the vehicle), based on the travel schedule acquired by the movement schedule acquisition unit 11.
  • The movement history storage unit 13 stores the vehicle's past movements together with the date, day of the week, time, and the like.
  • The boarding time estimation unit 12 can also estimate, based on the driver's habitual behavior stored as a history in the movement history storage unit 13, the time during which the driver waits in the vehicle after moving to a specific location as the boarding time.
  • The learning element storage unit 14 stores a plurality of learning elements constituting the speech learning content.
  • The speech learning system 10 of the present embodiment provides content for learning a language (for example, for a user whose native language is Japanese to learn English), and the learning element storage unit 14 stores in advance a large number of words and short phrases as the learning elements constituting the language learning content.
  • The learning program generation unit 15 sets a learning time shorter than the boarding time estimated by the boarding time estimation unit 12, and generates a learning program for one session that ends within the learning time by combining stored learning elements.
  • This learning program includes a plurality of learning elements whose total learning time is shorter than the boarding time, and is also referred to as a learning target set or a learning target course.
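The element-combination step just described can be sketched as a simple time-budgeted selection. This is a minimal illustration, not the patent's actual algorithm; the function name, element list, and durations are assumptions:

```python
def generate_program(elements, learning_time_min):
    """Greedily pick stored learning elements (words/phrases) until the
    next one would push the total past the learning-time budget."""
    program, total = [], 0.0
    for name, minutes in elements:  # each element: (label, duration in minutes)
        if total + minutes <= learning_time_min:
            program.append(name)
            total += minutes
    return program, total

# Hypothetical stored elements with per-element playback durations.
elements = [("word: journey", 3.0), ("phrase: on my way", 5.0),
            ("phrase: see you soon", 4.0), ("word: horizon", 3.0)]
program, total = generate_program(elements, learning_time_min=10.0)
print(program, total)  # the chosen program ends within the 10-minute budget
```

A real implementation would also weigh the learning history and the number of reproduction repeats, as the text goes on to describe.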
  • The notification unit 16 notifies the user in the vehicle of the boarding time estimated by the boarding time estimation unit 12 and the learning time of the learning program generated by the learning program generation unit 15.
  • The execution unit 17 is connected to an operation switch 21 that the user can operate. When a start request operation is performed via the operation switch 21, the execution unit 17 executes the learning program generated by the learning program generation unit 15 and outputs its audio from the speaker 22.
  • The learning history storage unit 18 stores the learning program executed by the execution unit 17 (that is, the executed learning elements).
  • The load information acquisition unit 19 acquires information for estimating the driving load of the vehicle's driver (hereinafter referred to as load information).
  • The driving load estimation unit 20 estimates the driver's driving load based on the acquired load information.
  • As load information, the movement history in the movement history storage unit 13 is acquired; if the current trip is one familiar to the driver, such as commuting, the driving load is estimated to be lower than a predetermined load, and if the trip is unfamiliar, it is estimated to be higher than the predetermined load.
  • As load information, map information of the travel route from the departure point to the destination is also acquired; in sections requiring attention, such as sections with heavy traffic or many curves, the driving load is estimated to be higher than the predetermined load.
  • The learning program generation unit 15 generates the learning program in accordance with the estimated driving load.
  • The driving load estimation unit 20 also estimates the driving load in real time while the vehicle is moving.
  • For example, the driving load is estimated to be higher than the predetermined load when information accompanied by an alarm, such as the approach of an obstacle, is acquired as load information during travel from a camera or sensor that monitors the surroundings of the vehicle.
  • The driving load is likewise estimated to be higher than the predetermined load when information indicating sudden braking or sudden steering is acquired from an operation unit such as the accelerator, brake, or steering wheel.
  • The driving load is also estimated to be higher than the predetermined load when the movement of the driver's line of sight decreases, on the assumption that the driver has no spare capacity.
  • The execution unit 17 interrupts execution of the learning program when the driver's driving load is estimated to be high.
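The two-stage load estimate from the signals just listed (surroundings alarms, sudden braking or steering, reduced gaze movement) can be sketched as follows. The signal names and the gaze threshold are illustrative assumptions, not values from the patent:

```python
def estimate_load(alarm_active, sudden_input, gaze_movement):
    """Return 'high' when any real-time signal suggests the driver has no
    spare capacity; otherwise 'low' (the embodiment's two-stage estimate)."""
    if alarm_active or sudden_input or gaze_movement < 0.3:
        return "high"
    return "low"

# An obstacle alarm alone is enough to pause learning-program playback.
print(estimate_load(alarm_active=True, sudden_input=False, gaze_movement=0.8))
```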
  • FIG. 2 shows a flowchart of the speech learning control process executed by the speech learning system 10 of the present embodiment.
  • The speech learning control process (S100) is started when the user activates the speech learning system 10. It may also be started in synchronization with the start of the vehicle's engine.
  • When the voice learning control process (S100) is started, it is first determined whether a departure place and a destination have been acquired as the travel schedule of the vehicle (S101).
  • If the travel schedule has been acquired, the time required to travel from the departure place (for example, the current location) to the destination is estimated as the boarding time (S102).
  • If the travel schedule has not been acquired (S101: NO), the movement history of the vehicle is referred to instead.
  • Because the movement history is stored together with the date, day of the week, and time, the habitual behavior of the driver can be predicted. For example, if it is customary for a mother who drives the vehicle to take a child to a cram school and wait in the cram school parking lot until the lesson finishes, the cram school parking lot is registered as a specific place.
  • In this case, the waiting time in the cram school parking lot is estimated as the boarding time based on the behavior prediction from the movement history (S104).
  • Next, a learning time shorter than the estimated boarding time is set (S105).
  • Specifically, a learning time shorter than the estimated boarding time by a predetermined margin time (for example, 10 minutes) is set. For example, when the boarding time is estimated to be 50 minutes, the 10-minute margin is subtracted and the learning time is set to 40 minutes. The reason for providing the margin time will be described later.
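The margin arithmetic above is straightforward; a hypothetical helper (the names and defaults are assumptions of this sketch):

```python
def set_learning_time(boarding_min, margin_min=10):
    # Subtract the fixed margin from the estimated boarding time; the margin
    # left after the program ends is later used for the review program.
    return max(boarding_min - margin_min, 0)

print(set_learning_time(50))  # 40 -- the example given in the text
```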
  • Next, a process for generating a learning program in accordance with the learning time (hereinafter referred to as the learning program generation process) is started (S106).
  • As described above, a large number of words and short phrases are stored in advance as learning elements constituting the language learning content, and a learning program is generated by combining them.
  • The total time of the program can be adjusted through the number of learning elements selected, their individual reproduction times, the number of reproduction repeats, and the like.
  • FIG. 3 shows a flowchart of the learning program generation process of this embodiment.
  • When the learning program generation process (S106) is started, the program is generated in accordance with the estimated driving load.
  • If the driving load is estimated to be high, a learning program mainly composed of already-executed learning elements (here, words and phrases) is generated (S124).
  • Composing the learning program mainly of already-executed learning elements lowers the difficulty of learning, so the driver can continue learning, in the form of review, while paying attention to driving.
  • Alternatively, only the portion reproduced in sections where the driving load is estimated to be high may be composed mainly of already-executed learning elements.
  • The manner of lowering the learning difficulty is not limited to featuring already-executed learning elements; the number of reproduction repeats may be increased instead.
  • Conversely, the portion reproduced in sections where the driving load is estimated to be low may be composed mainly of unexecuted learning elements.
  • Subsequently, a review program is generated (S126).
  • As described above, the learning time is set shorter than the estimated boarding time by a predetermined margin time (for example, 10 minutes), so the margin time remains after the learning program matching the learning time ends. A review program that reviews the session can then be executed using this spare time.
  • The review program is generated by combining learning elements included in the learning program generated in S124 or S125 so that it ends within the spare time.
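One plausible sketch of that review-program step: reuse elements from the just-finished program, shortest first, until the spare time is filled. The shortest-first ordering is an assumption of this sketch, not stated in the text:

```python
def generate_review(program_elements, spare_min):
    """Combine elements already in the learning program so the review
    ends within the spare (margin) time."""
    review, total = [], 0.0
    for name, minutes in sorted(program_elements, key=lambda e: e[1]):
        if total + minutes <= spare_min:
            review.append(name)
            total += minutes
    return review

# Hypothetical elements from the finished program; 5 minutes of spare time.
print(generate_review([("phrase A", 6.0), ("word B", 2.0), ("word C", 3.0)], 5.0))
```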
  • Returning to the voice learning control process: on return from the learning program generation process (S106), the boarding time estimated in S102 or S104 and the learning time set in S105 are notified to the user in the vehicle (S107).
  • This notification may be performed by voice or by displaying on a display unit (not shown).
  • Next, a process for executing the learning program (hereinafter referred to as the learning program execution process) is started (S109).
  • The learning program execution process is started in response to the user's start request operation.
  • Alternatively, the learning program execution process may be started automatically. This eliminates the need for the user to make a decision at the start of learning, and can encourage a user who is confined in the vehicle to learn while moving.
  • FIG. 4 shows a flowchart of the learning program execution process of the present embodiment.
  • When the learning program execution process (S109) is started, the learning program generated in S124 or S125 of FIG. 3 is started (S131).
  • Sound is then output from the speaker 22 according to the learning program.
  • Next, it is determined whether the vehicle is moving (S132). If the vehicle is moving (S132: YES), real-time load information is acquired (S133), and it is determined whether the driving load estimated from that information is high (S134). For example, when information accompanied by an alarm, such as the approach of an obstacle, is acquired and the driving load is estimated to be high (S134: YES), execution of the learning program is interrupted (S135). In situations where the driving load is temporarily high, the driver is focusing on driving and has no spare capacity for learning, so priority is given to safety by temporarily stopping the learning.
  • After an interruption, the contents of the learning program can be modified according to the interruption time, for example by reducing the number of learning elements included in the learning program or the number of playback repeats.
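One assumption-laden way to realize that shortening is to drop trailing elements until the interrupted time is absorbed; the per-element duration below is a placeholder, not a value from the patent:

```python
import math

def shrink_program(program, interrupted_min, per_element_min=3.0):
    # Drop enough trailing elements to absorb the interruption time -- one
    # simple reading of "reducing the number of learning elements".
    drop = math.ceil(interrupted_min / per_element_min)
    return program[:max(len(program) - drop, 0)]

print(shrink_program(["w1", "w2", "w3", "w4"], interrupted_min=5.0))
```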
  • FIG. 5A schematically shows an example in which the driver in the vehicle learns language according to the above-described voice learning control process (S100) of the present embodiment.
  • The horizontal axis in the figure indicates the flow of time, proceeding to the right.
  • The required time for the vehicle's travel is set as the boarding time (T11), and a learning time (T12) shorter than the boarding time by a predetermined margin time (T13) is set.
  • A learning program matching the learning time is then generated.
  • The learning program consists of four stages, steps 1 to 4, with the learning goal of mastering the chorus of a Western song's lyrics.
  • The learning program is executed when the vehicle starts to move, and the driver learns in order from step 1.
  • In step 1, some unlearned words are extracted from the chorus based on the driver's learning history, and the pronunciation and meaning of these words are taught. The driver then repeats the pronunciation of the words following the voice guidance.
  • Compared to a train or a house, the inside of the vehicle is a private space isolated from the surroundings; if the driver is alone in the vehicle, he or she can speak loudly without worrying about others, making it an excellent place to practice pronunciation.
  • The driver's pronunciation of each word is recorded and played back right afterward; listening to and checking one's own pronunciation enhances the learning effect.
  • In step 2, short phrases including the words learned in step 1 are pronounced and their translations are taught.
  • The driver repeatedly practices pronouncing the short phrases and listens to the recorded pronunciation to check it.
  • In step 3, the short phrases learned in step 2 are joined together into gradually longer phrases.
  • The driver practices pronouncing these gradually lengthening phrases and listens to the recordings to check.
  • In step 4, the driver practices singing the entire chorus along with the accompaniment and listens to the recording to check.
  • After step 4 of the learning program, the chorus is sung and practiced along with the accompaniment again in the review program, allowing the driver to review the session and raise the level of mastery.
  • As described above, the speech learning system 10 of the present embodiment estimates the boarding time during which a user (for example, the driver) is confined in the vehicle, and generates a learning program whose learning time is shorter than the boarding time.
  • Since the learning time is set according to the boarding time, the driver does not need to make a decision about it.
  • Because reliably completing one session within the boarding time gives a sense of accomplishment and increases motivation for the next session, continuous learning in the vehicle can be promoted.
  • In addition, the learning time is set shorter than the boarding time by a predetermined margin time, so the margin time remains after the learning program ends and a review program can be executed within it. Executing the review program for a user whose willingness to learn has been raised by the learning program further enhances the sense of achievement, which improves motivation for the next session and helps learning in the vehicle continue.
  • In recent years, vehicles equipped with automatic driving functions have become available. Examples include technology that monitors the area ahead of the vehicle with a radar or the like, drives at a set speed when there is no preceding vehicle, and otherwise maintains the distance to the preceding vehicle (so-called adaptive cruise control, ACC), and technology that recognizes the lane from images taken by a front camera and controls the steering so that the vehicle travels along the lane (so-called lane keep assist).
  • Below, a modified speech learning system 10 mounted on a vehicle having such an automatic driving function is described, focusing on differences from the above-described embodiment.
  • The same components as those in the above-described embodiment are denoted by the same reference numerals, and their description is omitted.
  • FIG. 6 shows the configuration of the speech learning system 10 of the first modification.
  • The speech learning system 10 of the first modification includes an automatic driving possible section estimation unit 23 in place of the load information acquisition unit 19 and the driving load estimation unit 20 of the speech learning system 10 of the embodiment described above.
  • The automatic driving possible section estimation unit 23, like the other units, is a conceptual division of the speech learning system 10 by function and does not necessarily exist as a physically independent component.
  • It can be implemented by various devices, electronic components, integrated circuits, computers, computer programs, or combinations thereof.
  • The automatic driving possible section estimation unit 23 estimates, based on the travel schedule acquired by the movement schedule acquisition unit 11, sections between the departure point and the destination that satisfy a predetermined condition for automatic driving (hereinafter referred to as automatic driving possible sections).
  • In this modification, specific road types, such as expressways and car-only roads, are defined as the predetermined condition enabling automatic driving. For example, if there is a section between the planned departure place and the destination in which the vehicle travels on an expressway, that section is estimated as an automatic driving possible section.
  • Expressways are suited to automatic driving because intersections are eliminated and curves are designed gently to allow high-speed travel, resulting in fewer speed fluctuations and less sharp steering than on ordinary roads.
  • Like that of the embodiment, the learning program generation unit 15 of the first modification sets a learning time shorter than the boarding time. When the learning user is the driver, it additionally takes the automatic driving possible sections into account when generating the learning program.
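Under the stated condition, the section estimate reduces to filtering route segments by road type. The route representation below is an assumption of this sketch:

```python
def automatable_sections(route):
    """route: list of (road_type, minutes). Segments on expressways or
    car-only roads satisfy the predetermined condition in this sketch."""
    return [(road, mins) for road, mins in route
            if road in ("expressway", "car_only_road")]

route = [("city", 10), ("expressway", 30), ("city", 5)]
print(automatable_sections(route))  # only the expressway segment qualifies
```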
  • FIG. 7 shows a flowchart of a learning program generation process executed by the speech learning system 10 of the first modification.
  • When the learning program generation process (S106) of the first modification is started, it is first determined whether learning will be performed while the vehicle is moving (S141). If learning will not be performed while moving, that is, if it will be performed while waiting at a specific place (S141: NO), the user can concentrate on learning, so a learning program mainly composed of unexecuted learning elements is generated based on the stored learning history (S142).
  • If learning will be performed while the vehicle is moving, the automatic driving possible sections are estimated based on the movement schedule (S143).
  • As described above, expressways are defined as a specific road type on which automatic driving is possible; if the vehicle travels on an expressway between the departure point and the destination, that expressway section is estimated as an automatic driving possible section.
  • Then, the portion of the learning program corresponding to manual driving sections between the departure point and the destination is composed mainly of already-executed learning elements, while the portion corresponding to automatic driving possible sections is composed mainly of unexecuted learning elements (S147).
  • In an automatic driving possible section, the accelerator, brakes, and steering wheel are operated automatically, greatly reducing the driver's burden, so the driver can devote more attention to learning. The composition of the learning elements is therefore changed between manual driving sections and automatic driving possible sections, and the difficulty of learning in automatic driving possible sections is made higher than in manual driving sections.
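The difficulty split described above can be sketched as assigning element pools per route section. All names are illustrative, and "expressway" stands in for any automatic driving possible section:

```python
def plan_by_section(route, executed, unexecuted):
    """route: list of (road_type, minutes). Manual sections get easier,
    already-executed elements; automatable sections get unexecuted ones."""
    plan = []
    for road_type, minutes in route:
        pool = unexecuted if road_type == "expressway" else executed
        plan.append((road_type, minutes, pool))
    return plan

plan = plan_by_section([("city", 10), ("expressway", 30)],
                       executed=["review word"], unexecuted=["new word"])
print(plan)
```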
  • In the first modification, the driver's driving load is not estimated; once the learning program is started in the subsequent learning program execution process (S109), the process simply waits until the learning program finishes, regardless of whether the vehicle is moving, and ends the learning program execution process (S109) when the entire program has finished.
  • FIG. 8 schematically shows an example in which the driver in the vehicle learns by the speech learning system 10 of the first modified example.
  • The horizontal axis in the figure indicates the flow of time, proceeding to the right.
  • The required time for the vehicle's travel is set as the boarding time (T21), and a learning time (T22) shorter than the boarding time by a predetermined margin time (T23) is set. The learning program generated to match the learning time is executed when the vehicle starts moving, and after it ends, a review program generated to end within the spare time is executed in response to the driver's review request.
  • In the learning program, the portion corresponding to manual driving sections (that is, sections that are not automatic driving possible sections) is given a low learning difficulty by being composed mainly of already-executed learning elements, while the portion corresponding to automatic driving possible sections is given a high learning difficulty by being composed mainly of unexecuted learning elements.
  • As described above, in the first modification, the automatic driving possible sections are estimated based on the movement schedule, the composition of the learning elements constituting the learning program is changed between manual driving sections and automatic driving possible sections, and the difficulty of learning in automatic driving possible sections is made higher than in manual driving sections.
  • This allows the driver to learn lightly while keeping attention on driving in manual driving sections, and to advance learning efficiently by actively taking in new content in automatic driving possible sections, where the driving burden is reduced.
  • D-2. Second modification
  • In the first modification described above, the difficulty of voice learning differs between manual driving sections and automatic driving possible sections. Instead, speech learning may be executed intensively in the automatic driving possible sections.
  • The speech learning system 10 of the second modification includes an automatic driving possible section estimation unit 23, as in the first modification described above (see FIG. 6).
  • The automatic driving possible section estimation unit 23 estimates automatic driving possible sections satisfying a predetermined condition between the departure point and the destination, based on the movement schedule acquired by the movement schedule acquisition unit 11.
  • The predetermined condition enabling automatic driving is the same as in the first modification.
  • The boarding time estimation unit 12 of the second modification estimates the time required to travel through the automatic driving possible sections as the boarding time.
  • The learning program generation unit 15 sets a learning time shorter than the boarding time estimated by the boarding time estimation unit 12 and generates a learning program matching the learning time.
  • FIG. 9 shows a flowchart of the speech learning control process executed by the speech learning system 10 of the second modification.
  • When the voice learning control process (S200) of the second modification is started, it is first determined whether the travel schedule of the vehicle has been acquired (S201). If the travel schedule has not been acquired (S201: NO), it is then determined, by referring to the travel history of the vehicle, whether the vehicle is waiting at a specific location (S202).
  • If the vehicle is waiting at a specific location, the waiting time predicted from the movement history is estimated as the boarding time (S203).
  • If the travel schedule has been acquired, the automatic driving possible sections are estimated, and the time required to travel through them (for example, the driving time on the expressway) is estimated as the boarding time (S206).
  • Next, a learning time shorter than the boarding time by a predetermined margin time is set (S207), and a learning program for one session that ends within the learning time is generated by combining elements from the plurality of stored learning elements (S208).
  • In addition, a review program that ends within the spare time is generated by combining learning elements included in the generated learning program (S209).
  • Here, learning programs are generated in the same way for learning performed while waiting at a specific place and for learning performed in an automatic driving possible section, but learning programs of different difficulty may also be generated for the two cases.
  • When a review request is made, the review program is started (S215). Subsequently, it is determined whether the review program has ended (S216); if it has not yet ended (S216: NO), the process stands by, and when the entire review program has ended, the process ends.
  • FIG. 10 schematically shows an example in which the driver in the vehicle learns by the speech learning system 10 of the second modified example.
  • The horizontal axis in the figure indicates the flow of time, proceeding to the right.
  • The time required to travel through the automatic driving possible section is set as the boarding time (T31), and a learning time (T32) shorter than the boarding time by a predetermined margin time (T33) is set.
  • The learning program generated in accordance with the learning time is executed when the moving vehicle enters the automatic driving possible section.
  • After the learning program ends, the review program generated to end within the spare time is executed in response to the driver's review request.
  • As described above, in the second modification, the automatic driving possible sections are estimated based on the movement schedule, and voice learning is executed intensively in those sections while the vehicle is moving.
  • In an automatic driving possible section, various vehicle operations are performed automatically, which greatly reduces the burden on the driver and gives the driver spare capacity, making such sections especially suitable for speech learning. By executing speech learning intensively in automatic driving possible sections, the driver can therefore advance voice learning safely and effectively.
  • in the above example, the driving load is estimated in two levels, high and low, but it may instead be estimated in multiple levels (for example, four levels, load 1 to load 4).
  • the higher the estimated driving load is, the lower the difficulty level of the generated learning program may be (for example, a program containing a smaller number of unexecuted learning elements).
  • in the above example, the time during which the user is in the vehicle (the boarding time) is estimated, but the time during which the user is away from the vehicle between trips may instead be estimated by referring to the movement history of the vehicle. For example, if a mother habitually drives her child to a cram school, returns home, and later drives back to the cram school for pickup, the time until she leaves home again (the idle time) can be estimated by behavior prediction based on the movement history.
  • if a learning program with a learning time shorter than the idle time is generated and can be studied on the mobile terminal owned by the mother, continuous learning that makes use of the idle time can be promoted. Further, the present disclosure is not limited to the above-described speech learning system, and may also be provided as a speech learning method.
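The program-generation steps described in the bullets above (S207–S209) can be pictured with a short sketch. This is only an illustration: the greedy element selection and the mapping from driving load to a maximum allowed difficulty are assumptions made here for concreteness, not methods prescribed by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class LearningElement:
    name: str
    duration_min: int   # playback time of this element in minutes
    difficulty: int     # assumed scale: 1 (easy) .. 4 (hard)

def generate_program(elements, boarding_min, margin_min, driving_load=1):
    """Combine stored elements into one session that ends within
    (boarding time - margin time), as in S207/S208; elements harder
    than the assumed load-dependent cap are skipped."""
    learning_min = boarding_min - margin_min     # S207: learning time
    max_difficulty = 5 - driving_load            # assumption: higher load -> easier program
    program, used = [], 0
    for e in elements:                           # S208: greedy combination
        if e.difficulty <= max_difficulty and used + e.duration_min <= learning_min:
            program.append(e)
            used += e.duration_min
    return program

def generate_review(program, spare_min):
    """Build a review program from the session's own elements that
    finishes within the spare (margin) time, as in S209."""
    review, used = [], 0
    for e in program:
        if used + e.duration_min <= spare_min:
            review.append(e)
            used += e.duration_min
    return review
```

For example, with a 30-minute boarding time and a 5-minute margin, the learning program is filled up to 25 minutes and the review program up to the remaining 5 minutes.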

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

The invention concerns a speech learning system (10) for use in a vehicle that provides a user in the vehicle with learning content through speech. The speech learning system (10) comprises: a learning element storage unit (14) that stores a plurality of learning elements constituting the learning content; a travel time estimation unit (12) that estimates a travel time representing the duration for which the user travels in the vehicle; a learning program generation unit (15) that combines multiple learning elements from the plurality of learning elements to generate a single learning program that ends within the travel time estimated by the travel time estimation unit; and an execution unit (17) that executes the learning program. This speech learning system makes it possible to encourage continuous study in a vehicle.
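The claimed units of the abstract can be pictured with a minimal sketch. The unit numbers (12), (14), (15), (17) follow the abstract, but the distance/speed travel-time model, the greedy element combination, and the playback stub are illustrative assumptions, not the claimed implementation.

```python
class SpeechLearningSystem:
    """Minimal sketch of speech learning system (10)."""

    def __init__(self, elements):
        # learning element storage unit (14): (name, duration in minutes)
        self.elements = elements

    def estimate_travel_time(self, distance_km, avg_speed_kmh):
        # travel time estimation unit (12): assumed distance/speed model
        return 60.0 * distance_km / avg_speed_kmh

    def generate_program(self, travel_min):
        # learning program generation unit (15): combine elements so the
        # single program ends within the estimated travel time
        program, used = [], 0.0
        for name, dur in self.elements:
            if used + dur <= travel_min:
                program.append(name)
                used += dur
        return program

    def execute(self, program):
        # execution unit (17): in the vehicle this would play speech;
        # here it just returns the planned utterance order
        return [f"play:{name}" for name in program]
```

For a 30 km trip at 60 km/h, the estimated travel time is 30 minutes, so only elements fitting within 30 minutes are combined into the session.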
PCT/JP2015/006369 2015-01-19 2015-12-22 Speech learning system and speech learning method WO2016116992A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/542,810 US11164472B2 (en) 2015-01-19 2015-12-22 Audio learning system and audio learning method

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2015008175 2015-01-19
JP2015-008175 2015-01-19
JP2015152126A JP6443257B2 (ja) Speech learning system and speech learning method
JP2015-152126 2015-07-31

Publications (1)

Publication Number Publication Date
WO2016116992A1 true WO2016116992A1 (fr) 2016-07-28

Family

ID=56416561

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/006369 WO2016116992A1 (fr) Speech learning system and speech learning method

Country Status (1)

Country Link
WO (1) WO2016116992A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002365061A (ja) * 2001-06-11 2002-12-18 Pioneer Electronic Corp 移動体用電子システムの制御装置及び制御方法、移動体用電子システム並びにコンピュータプログラム
JP2011085641A (ja) * 2009-10-13 2011-04-28 Power Shift Inc 語学学習支援システム及び語学学習支援方法
JP2015017944A (ja) * 2013-07-12 2015-01-29 株式会社デンソー 自動運転支援装置


Similar Documents

Publication Publication Date Title
US12269491B2 (en) Methods and systems for increasing autonomous vehicle safety and flexibility using voice interaction
JP6469635B2 (ja) Vehicle control device
JP6443257B2 (ja) Speech learning system and speech learning method
CN110182210B (zh) Vehicle control device and vehicle control method
WO2017130482A1 (fr) Alert control apparatus and alert control method
JP5115354B2 (ja) Driving support device
WO2017006651A1 (fr) Automatic driving control device
US20140365228A1 Interpretation of ambiguous vehicle instructions
JP2019043495A (ja) Automatic driving adjustment device, automatic driving adjustment system, and automatic driving adjustment method
JP2018041328A (ja) Vehicle information presentation device
JP2020052658A (ja) Automatic driving system
WO2018008488A1 (fr) Driving assistance method and driving assistance system, driving assistance device using said method, and automatic driving control device, vehicle, and program
JP2023091170A (ja) Driving state determination method and automatic driving system
JP2018092412A (ja) Vehicle control device
US11462103B2 Driver-assistance device, driver-assistance system, and driver-assistance program
JP2019148850A (ja) Vehicle control device
JP2023146333A (ja) Vehicle control device, vehicle control method, and program
WO2016116992A1 (fr) Speech learning system and speech learning method
JP7597066B2 (ja) Presentation control device, presentation control program, automatic driving control device, and automatic driving control program
JP2021160708A (ja) Presentation control device, presentation control program, automatic travel control system, and automatic travel control program
JP7044295B2 (ja) Automatic driving control device, automatic driving control method, and program
JP2019125135A (ja) Automatic driving control system for vehicle
WO2022030317A1 (fr) Vehicle display device and vehicle display method
JP2024180564A (ja) Presentation control device and presentation control program
JP2024139838A (ja) Vehicle risk avoidance assist method and vehicle risk avoidance assist device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15878688

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15542810

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15878688

Country of ref document: EP

Kind code of ref document: A1
