
CN110123266B - A Maneuver Decision Modeling Method Based on Multimodal Physiological Information - Google Patents


Info

Publication number: CN110123266B
Application number: CN201910365772.8A
Authority: CN (China)
Prior art keywords: decision, signal, physiological information, feature, eye movement
Legal status: Active (granted)
Priority and filing date: 2019-05-05
Publication date: 2021-06-15
Other versions: CN110123266A (application publication, 2019-08-16)
Other languages: Chinese (zh)
Inventors: 龚光红, 王夏爽, 李妮
Original and current assignee: Beihang University
Application filed by Beihang University

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/11Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring interpupillary distance or diameter of pupils
    • A61B3/112Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring interpupillary distance or diameter of pupils for measuring diameter of pupils
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/113Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/318Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • G06F2218/04Denoising
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Cardiology (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a maneuver decision modeling method based on multi-modal physiological information, which extracts multi-modal physiological information directly from the process in which a person performs maneuver actions and builds the model from it, relying neither on the experience summaries of domain experts nor on computer knowledge discovery. This not only reduces the workload and saves labor cost, but also shifts the modeling approach from purely rational to partly perceptual, so that the established model has higher fidelity and is closer to the human behavior decision process. Moreover, using multi-modal physiological information for maneuver decision modeling overcomes the one-sidedness of maneuver decision modeling based on a single physiological signal. In addition, an important advantage of multi-modal physiological information is the objectivity of its features: compared with the traditional modeling approach that relies on the experience summaries of domain experts, the collected data are more authentic and reliable and reflect the real human maneuver decision process more objectively.

Description

Maneuvering decision modeling method based on multi-modal physiological information
Technical Field
The invention relates to the technical field at the intersection of human-machine interaction, mathematical modeling and human factors engineering, and in particular to a maneuver decision modeling method based on multi-modal physiological information.
Background
The research content of human behavior modeling covers perception of the environment, decision-making, planning, memory, learning and related aspects. Because the surrounding environment is diverse in form and highly complex, the objects to be modeled are also numerous, for example behavior maneuvers and maneuver decision modeling.
In recent years, two main approaches have been used for modeling human behavior: in the traditional approach, people discover knowledge in the data by summarizing or analyzing it manually; the other approach is based on knowledge discovery technology, in which knowledge in the data is acquired mechanically through computer self-learning.
The traditional method of manually acquiring behavior-data knowledge from expert experience faces real difficulty in knowledge acquisition: to improve the fidelity of a model, knowledge from the relevant fields must be fully exploited and organized into rules, yet this manual approach cannot keep pace with the explosive growth in data volume. It also struggles to overcome the difficulties of knowledge acquisition and representation, and it makes little use of the time-series attributes in the behavior data.
Although generating behavior-data knowledge through computer self-learning relieves the human workload to a certain extent, the behavior data must be acquired entirely by the computer, the computational cost is high, and the approach is overly rational and lacks the characteristics of real people; in addition, the acquired behavior data still needs manual processing, and the approach misses the whole process of human thinking during combat maneuver decisions.
Disclosure of Invention
In view of the above, the invention provides a maneuver decision modeling method based on multi-modal physiological information, which is used for solving the problems of large workload, excessive rationality and low fidelity of the conventional maneuver decision modeling method.
Therefore, the invention provides a maneuvering decision modeling method based on multi-modal physiological information, which comprises the following steps:
s1: building a real person immersive combat simulation scene;
s2: carrying out experimental design on the collection of the multi-modal physiological information; the multi-modal physiological information comprises an electroencephalogram signal, an eye movement signal and an electrocardiosignal;
s3: collecting the multi-modal physiological information;
s4: preprocessing the acquired electroencephalogram signals;
s5: extracting and screening the features of the collected eye movement signals, the electrocardiosignals and the preprocessed electroencephalogram signals;
s6: and constructing a behavior maneuver decision model by adopting a support vector machine mode.
In a possible implementation manner, in the maneuver decision modeling method provided by the present invention, in step S2, the experimental design of the collection of the multi-modal physiological information specifically includes:
s21: recruiting subjects and screening them according to their physiological conditions and task experience;
s22: carrying out task training on the screened subjects, and judging whether each subject has learned the basic operation of the flight simulator within the experimental period and can independently complete the preset experimental task; if yes, going to step S23; if not, returning to step S21 and continuing to recruit an equal number of new subjects until the required number is reached;
s23: performing a pre-experiment on the subjects, checking the training results and verifying the feasibility of the experimental design;
s24: carrying out the formal experiment on the subjects, completing each experimental task in sequence according to the preset experimental order, and collecting multi-modal physiological experimental data while the person executes maneuver decisions.
In a possible implementation manner, in the maneuver decision modeling method provided by the present invention, in step S4, the preprocessing is performed on the acquired electroencephalogram signal, which specifically includes:
s41: preprocessing the acquired electroencephalogram signal by using an open-source MATLAB toolbox to obtain a noise-free electroencephalogram signal;
s42: and storing the noiseless electroencephalogram signals.
In a possible implementation manner, in the above maneuvering decision modeling method provided by the present invention, step S41, the acquired electroencephalogram signal is preprocessed by using an open source toolbox of MATLAB, so as to obtain an electroencephalogram signal without noise, which specifically includes:
and carrying out electrode positioning, band-pass filtering, superposition averaging, baseline correction, re-referencing and independent component analysis on the acquired electroencephalogram signals by utilizing an open source tool box of MATLAB to obtain noiseless electroencephalogram signals.
In a possible implementation manner, in the above maneuvering decision modeling method provided by the present invention, step S5, the extracting and screening features of the collected eye movement signal, the collected electrocardiographic signal, and the preprocessed electroencephalogram signal specifically includes:
s51: for electroencephalogram signals of different maneuver decisions, extracting and screening the features of the electroencephalogram signals by a time-frequency feature extraction method, an adaptive regression method, a common spatial pattern method and a power spectrum analysis method;
s52: extracting and screening the blink rate characteristic, the fixation rate characteristic, the average fixation duration characteristic and the average pupil diameter characteristic of the eye movement signals for different maneuver decisions;
s53: extracting and screening the characteristics of the electrocardiosignals with different maneuvering decisions by respectively adopting a time domain analysis method, a frequency domain analysis method and a nonlinear analysis method;
s54: and summarizing the characteristics of the screened electroencephalogram signals, the characteristics of the eye movement signals and the characteristics of the electrocardiosignals to form multi-mode mixed physiological characteristics.
In a possible implementation manner, in the maneuver decision modeling method provided by the present invention, in step S52, for eye movement signals of different maneuvers, extracting the blink rate feature, the fixation rate feature, the average fixation duration feature and the average pupil diameter feature of the eye movement signals specifically includes:
calculating the blink rate feature f_b of eye movement as:
f_b = n / T
where n is the total number of blinks and T is the total task time;
calculating the fixation rate feature f_g of eye movement as:
f_g = m / T
where m is the total number of fixations;
calculating the average fixation duration feature d̄_f of eye movement as:
d̄_f = (1/m) · Σ_{i=1}^{m} d_{f,i}
where d_{f,i} is the duration of the i-th fixation;
calculating the average pupil diameter feature l̄_d of eye movement as:
l̄_d = (1/m) · Σ_{i=1}^{m} l_{d,i}
where l_{d,i} is the pupil diameter measured during the i-th fixation.
In a possible implementation manner, in the maneuver decision modeling method provided by the present invention, after step S6 is executed and the behavior maneuver decision model has been constructed by means of a support vector machine, the method further includes the following steps:
s7: performing model training on the behavior maneuver decision model by adopting a cross validation mode;
s8: and optimizing the parameters of the behavior maneuver decision model by adopting an optimization algorithm of grid search.
The maneuver decision modeling method provided by the invention extracts multi-modal physiological information directly from the process in which a person executes maneuver actions and builds the model from it, relying neither on the experience summaries of domain experts nor on computer knowledge discovery. Compared with the traditional modeling approach that depends on expert experience, it reduces the human workload and saves labor cost; compared with modeling that depends on computer knowledge discovery, it shifts the modeling from purely rational to partly perceptual, so the resulting model has higher fidelity and is closer to the human behavior decision process. In addition, maneuver decision modeling with multi-modal physiological information overcomes the one-sidedness of modeling with a single physiological signal. A further important advantage of multi-modal physiological information is the objectivity of its features: compared with the traditional expert-experience-based modeling approach, the collected data are more authentic and reliable and reflect the real human maneuver decision process more objectively.
Drawings
FIG. 1 is a schematic flow chart of a maneuver decision modeling method based on multi-modal physiological information according to an embodiment of the present invention;
FIG. 2 is a flowchart of a maneuver decision modeling method based on multi-modal physiological information according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of experimental design for collecting multi-modal physiological information in the maneuvering decision modeling method based on multi-modal physiological information according to the embodiment of the invention;
FIG. 4 is a second flowchart of a maneuver decision modeling method based on multi-modal physiological information according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a preprocessing flow based on electroencephalogram signals in the method for modeling a maneuver decision based on multi-modal physiological information according to the embodiment of the present invention;
fig. 6 is a technical route diagram of feature extraction based on electroencephalogram signals in the maneuver decision modeling method based on multi-modal physiological information according to the embodiment of the present invention;
fig. 7 is a technical route diagram of feature extraction based on eye movement signals in a maneuver decision modeling method based on multi-modal physiological information according to an embodiment of the present invention;
fig. 8 is a technical route diagram of feature extraction based on electrocardiographic signals in the maneuver decision modeling method based on multi-modal physiological information according to the embodiment of the present invention;
fig. 9 is a composition diagram based on multi-modal mixed physiological features in the maneuver decision modeling method based on multi-modal physiological information according to the embodiment of the present invention;
fig. 10 is an input/output diagram of a maneuver decision model constructed by the maneuver decision modeling method based on multi-modal physiological information according to the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only illustrative and are not intended to limit the present application.
The schematic flow diagram and the flow chart of the maneuver decision modeling method based on multi-modal physiological information provided by the embodiment of the invention are shown in fig. 1 and fig. 2 respectively; the method comprises the following steps:
s1: building a real person immersive combat simulation scene;
s2: carrying out experimental design on the collection of multi-modal physiological information; the multi-modal physiological information comprises an electroencephalogram signal, an eye movement signal and an electrocardiosignal;
s3: collecting multi-modal physiological information;
s4: preprocessing the acquired electroencephalogram signals;
s5: extracting and screening the characteristics of the collected eye movement signals, the collected electrocardiosignals and the preprocessed electroencephalogram signals;
s6: and constructing a behavior maneuver decision model by adopting a support vector machine mode.
The maneuver decision modeling method provided by the embodiment of the invention extracts multi-modal physiological information directly from the process in which a person executes maneuver actions and builds the model from it, relying neither on the experience summaries of domain experts nor on computer knowledge discovery. Compared with the traditional modeling approach that depends on expert experience, it reduces the human workload and saves labor cost; compared with modeling that depends on computer knowledge discovery, it shifts the modeling from purely rational to partly perceptual, so the resulting model has higher fidelity and is closer to the human behavior decision process. In addition, maneuver decision modeling with multi-modal physiological information overcomes the one-sidedness of modeling with a single physiological signal. A further important advantage of multi-modal physiological information is the objectivity of its features: compared with the traditional expert-experience-based modeling approach, the collected data are more authentic and reliable and reflect the real human maneuver decision process more objectively. The maneuver decision modeling method provided by the embodiment of the invention also benefits from the rapid development of neural engineering, in particular the improvement of acquisition equipment for electroencephalogram, eye movement and electrocardio signals and the great progress in signal processing methods, which together advance the study of detecting behavioral physiological information.
In specific implementation, step S1 of the maneuver decision modeling method provided in the embodiment of the present invention builds an experimental simulation scene that provides the subject with a real-person immersive simulation environment to assist the subject in making effective maneuver decisions. Specifically, the experimental simulation scene can be built around the experimental purpose of modeling human maneuvers. Taking an aircraft simulator as an example, the simulator comprises a visual system, an air combat simulator cockpit and a computer network system. The visual system uses a computer imaging system to generate the visual scene outside the fighter cockpit, mainly including landforms such as airport runways, buildings, fields and roads, and can simulate combat scenes under complex conditions such as rain, snow and thunderstorms, as well as daytime and night modes. The air combat simulator cockpit is a fully enclosed cockpit equipped with instrument panels, control devices and an unmanned aerial vehicle (UAV) seat; the instrument panel is divided, by functional structure and functional module, into an instrument module, a central console module and a control panel; the control panel is a multifunction display, and the main flight display panels are arranged symmetrically with respect to the pilot position; the control devices include a throttle lever, a stick and pedals, also arranged symmetrically with respect to the pilot position. The computer network system is the core of the whole aircraft simulator: its hardware comprises hosts, interfaces and a bus; its software comprises management software, application software and support software; and it consists of a visual computer, a server computer and a central control computer, which cooperate over Ethernet to exchange data in real time and jointly complete the flight simulation task.
In specific implementation, when step S2 of the maneuver decision modeling method provided by the embodiment of the present invention is performed to design an experiment for collecting multi-modal physiological information, a specific experimental process may include four stages of a subject recruitment stage, a subject training stage, a pre-experiment stage, and a formal experiment stage (as shown in fig. 3), and step S2, as shown in fig. 4, may be specifically implemented in the following manner:
s21: recruiting subjects and screening them according to their physiological conditions and task experience;
specifically, the experiment places high demands on the subjects and is limited to a specific population, so the subjects need to be screened strictly according to their physiological conditions and task experience when they are recruited;
s22: carrying out task training on the screened subjects, and judging whether each subject has learned the basic operation of the flight simulator within the experimental period and can independently complete the preset experimental task; if yes, going to step S23; if not, returning to step S21 and continuing to recruit an equal number of new subjects until the required number is reached;
specifically, each subject needs to learn the basic operation of the flight simulator within the experimental period and complete the preset experimental task independently; if a subject cannot complete the training successfully, an equal number of new subjects are recruited until the required number is reached;
s23: performing a pre-experiment on the subjects, checking the training results and verifying the feasibility of the experimental design;
specifically, the experimenter uses the pre-experiment to control the experiment duration and procedure, and fine-tunes the subsequent procedure according to the pre-experiment results;
s24: carrying out the formal experiment on the subjects, completing each experimental task in sequence according to the preset experimental order, and collecting multi-modal physiological experimental data while the person executes maneuver decisions.
In specific implementation, when step S4 in the maneuver decision modeling method provided by the embodiment of the present invention is executed to perform preprocessing on the acquired electroencephalogram signal, as shown in fig. 4, the preprocessing may be specifically implemented in the following manner:
s41: preprocessing the acquired electroencephalogram signals by using an open-source MATLAB toolbox to obtain noise-free electroencephalogram signals;
specifically, the open source toolbox for MATLAB may be Letswave;
s42: storing the noise-free electroencephalogram signals;
in particular, it can be stored in txt format.
In specific implementation, when step S41 of the above maneuver decision modeling method provided in the embodiment of the present invention is executed and the acquired electroencephalogram signal is preprocessed with the open-source MATLAB toolbox, the acquired electroencephalogram signal may specifically be subjected to electrode positioning, band-pass filtering, superposition averaging, baseline correction, re-referencing and independent component analysis using the open-source MATLAB toolbox, so as to obtain an electroencephalogram signal that is as free of noise as possible; a schematic diagram of the preprocessing flow based on the electroencephalogram signal is shown in fig. 5.
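For illustration only, a minimal sketch of the band-pass filtering stage of such a pipeline is shown below in Python (the patent itself works with a MATLAB toolbox such as Letswave); the sampling rate, pass band and filter order are assumed values, not parameters taken from the patent.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(eeg, fs=500.0, low=0.5, high=45.0, order=4):
    """Zero-phase band-pass filter applied channel-wise to an EEG array.

    eeg: array of shape (n_channels, n_samples); fs: sampling rate in Hz (assumed).
    """
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, eeg, axis=-1)  # filtfilt gives zero phase distortion

# Example with synthetic data: 8 channels, 10 s at 500 Hz
eeg = np.random.randn(8, 5000)
print(bandpass_eeg(eeg).shape)  # (8, 5000)
```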
In specific implementation, when step S5 in the maneuver decision modeling method provided by the embodiment of the present invention is executed to extract and screen features of the acquired eye movement signal, the acquired electrocardiogram signal, and the preprocessed electroencephalogram signal, as shown in fig. 4, the following methods may be specifically implemented:
s51: for electroencephalogram signals of different maneuver decisions, extracting and screening the features of the electroencephalogram signals by a time-frequency feature extraction method, an adaptive regression method, a common spatial pattern method and a power spectrum analysis method;
specifically, when the time-frequency analysis method is adopted, the features of the five frequency bands of the electroencephalogram signal can be extracted with the classical fast Fourier transform (FFT), short-time Fourier transform (STFT) and wavelet transform methods; the extracted frequency-domain features comprise the frequency sub-bands of the alpha, beta, delta, theta and gamma bands. The strength of the frequency features is compared across the different time-frequency methods and the strongest frequency feature is selected as a candidate; the candidate is then compared, in terms of significant differences, with the features obtained from the adaptive regression model, the common spatial pattern method, power spectrum analysis, and energy mean and variance analysis; the features showing significant differences between different maneuver decisions are selected as standby features for model construction, and the screened features can effectively decode the person's maneuver decision intention. The technical route of feature extraction based on electroencephalogram signals is shown in fig. 6.
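As a hedged sketch of the band-power part of this step (not the patent's actual implementation), the five classical EEG bands can be summarized from a Welch power spectrum as follows; the band boundaries and sampling rate are common conventions assumed here, since the patent names the bands but does not give their limits.

```python
import numpy as np
from scipy.signal import welch

# Assumed band boundaries in Hz; the patent names the bands but not the limits.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg_channel, fs=500.0):
    """Mean spectral power of one EEG channel in each of the five classical bands."""
    freqs, psd = welch(eeg_channel, fs=fs, nperseg=int(2 * fs))
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = float(np.trapz(psd[mask], freqs[mask]))
    return powers

print(band_powers(np.random.randn(5000)))
```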
s52: extracting and screening the blink rate feature, the fixation rate feature, the average fixation duration feature and the average pupil diameter feature of the eye movement signals for different maneuver decisions;
specifically, eye movement signals under different maneuvers are another good indicator for decoding human maneuver behavior. The main features of the eye movement signal include the blink rate, pupil size and average fixation duration of the fixation points. While performing maneuvers under stress, a person's blink rate shows a decreasing trend. When completing the same task, the pupils first dilate under time pressure; as the combat task progresses, the person gradually becomes fatigued and the pupils contract. The processing of the eye movement signals therefore finally selects the blink rate, fixation rate, average fixation duration and average pupil diameter as the main analysis features. The eye movement features are screened, and the features with significant differences between different maneuver decisions are selected; changes in the eye movement signals help to comprehensively decode how attention is maintained, switched and allocated while a person performs a maneuver. The technical route of feature extraction based on eye movement signals is shown in fig. 7.
s53: extracting and screening the characteristics of the electrocardiosignals with different maneuvering decisions by respectively adopting a time domain analysis method, a frequency domain analysis method and a nonlinear analysis method;
specifically, the electrocardiosignals under different maneuver decisions serve as another index for decoding a person's maneuvers. The heart rate variability indices are analyzed with three methods, namely time-domain analysis, frequency-domain analysis and nonlinear analysis, in order to identify the maneuver decision information contained in the behavioral decision. The technical route of feature extraction based on electrocardiosignals is shown in fig. 8.
The variation of the R-R interval of the electrocardiosignal, i.e. the interval between two successive R peaks of the heartbeat, is calculated by a statistical dispersion analysis method. Decomposing the electrocardiosignal into a series of components with different energies and in different frequency bands by time-domain and frequency-domain analysis, and analyzing these components, effectively compensates for the heart rate variability dynamics missing from a pure time-series method, allows the balance between sympathetic and parasympathetic activity to be judged quantitatively, and gives better index sensitivity and specificity.
The time-domain and frequency-domain features of heart rate variability to be used are described below in table form; as shown in Tables 1 and 2, the features of the electrocardiosignals are screened according to these results. The heart rate variability features provide a theoretical basis for building the maneuver decision model from physiological information.
TABLE 1: commonly used heart rate variability time-domain features (the table body is provided only as an image in the original document)
TABLE 2: commonly used heart rate variability frequency-domain features (the table body is provided only as an image in the original document)
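Because Tables 1 and 2 survive only as images, the exact indices cannot be reproduced here; as a hedged illustration of the R-R interval and time-domain analysis described above, the sketch below detects R peaks with a simple threshold rule and computes two widely used time-domain indices (SDNN and RMSSD), which are assumptions about which indices the tables contain.

```python
import numpy as np
from scipy.signal import find_peaks

def hrv_time_domain(ecg, fs=250.0):
    """Detect R peaks, form R-R intervals and compute basic time-domain HRV indices."""
    # Simple R-peak detection; a production pipeline would use a dedicated QRS detector.
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                          height=np.mean(ecg) + 2 * np.std(ecg))
    rr = np.diff(peaks) / fs * 1000.0               # R-R intervals in milliseconds
    return {
        "mean_rr_ms": float(np.mean(rr)),
        "sdnn_ms": float(np.std(rr, ddof=1)),       # overall R-R variability
        "rmssd_ms": float(np.sqrt(np.mean(np.diff(rr) ** 2))),  # beat-to-beat variability
    }
```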
S54: summarizing the screened features of the electroencephalogram signals, the features of the eye movement signals and the features of the electrocardiosignals to form multi-modal mixed physiological features, as shown in fig. 9; this provides the basis for the construction of the maneuver decision model in step S6.
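To make the fusion step concrete, a minimal sketch (an assumption about the data layout, not the patent's implementation) simply concatenates the screened per-trial EEG, eye-movement and ECG feature values into one mixed feature vector:

```python
import numpy as np

def fuse_features(eeg_feats, eye_feats, ecg_feats):
    """Concatenate the screened EEG, eye-movement and ECG features of one trial
    into a single multimodal mixed feature vector.

    Each argument is a dict mapping feature name to value; sorting the keys keeps
    the column order identical across trials."""
    vector = []
    for feats in (eeg_feats, eye_feats, ecg_feats):
        vector.extend(feats[name] for name in sorted(feats))
    return np.asarray(vector, dtype=float)
```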
It should be noted that the key to modeling is similarity to the real situation; modeling that does not match reality loses its value. Given the rapid development of current physiological measurement technology, the maneuver decision modeling method provided by the embodiment of the invention can comprehensively measure various physiological indices of a person during a maneuver decision; besides electroencephalogram, eye movement and electrocardio signals, physiological signals directly or indirectly related to central nervous system activity, such as the galvanic skin response, respiratory waves and functional near-infrared spectroscopy, can also be measured. Multi-modal physiological information is processed with a variety of physiological signal processing methods: the features extracted from the electroencephalogram signal decode the decision process of the human brain during information processing, the features extracted from the eye movement signal analyze the person's attention during task execution, and the features of the electrocardiosignal capture the psychological state of the person's maneuver decision. Combining the features of the multi-modal physiological information into fusion features reflects the diversity of multi-modal behavioral characteristics and plays an important role in maneuver decision modeling. The invention constructs the behavior model by fusing the multi-modal features of the person's electroencephalogram, eye movement and electrocardio signals; the constructed maneuver decision model obtains maneuver decision data directly from the human body, does not depend on the summaries of domain experts or computer knowledge discovery, overcomes the shortcomings of low fidelity and excessive rationality in human behavior models, and solves the one-sidedness of maneuver decision modeling with a single physiological signal.
In specific implementation, in step S52 of the maneuver decision modeling method provided in the embodiment of the present invention, for eye movement signals of different maneuvers, the blink rate feature, the fixation rate feature, the average fixation duration feature and the average pupil diameter feature of the eye movement signals are extracted as follows.
The blink rate is the number of blinks per unit time; generally, an eye-closure lasting 70 to 500 ms is counted as a single blink. The blink rate feature f_b of eye movement is calculated as:
f_b = n / T
where n is the total number of blinks and T is the total task time.
The fixation rate is the number of fixations per unit time; an eye position held for no less than 100 ms is counted as one fixation. The fixation rate feature f_g of eye movement is calculated as:
f_g = m / T
where m is the total number of fixations.
The average fixation duration is the mean duration of the individual fixation behaviors. The average fixation duration feature d̄_f of eye movement is calculated as:
d̄_f = (1/m) · Σ_{i=1}^{m} d_{f,i}
where d_{f,i} is the duration of the i-th fixation.
The average pupil diameter is the mean of all pupil diameter measurements; the eye tracking device takes one measurement during each individual fixation. The average pupil diameter feature l̄_d of eye movement is calculated as:
l̄_d = (1/m) · Σ_{i=1}^{m} l_{d,i}
where l_{d,i} is the pupil diameter measured during the i-th fixation.
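The four formulas above translate directly into code. The sketch below assumes the eye tracker already provides the blink count, per-fixation durations and per-fixation pupil diameters; that event segmentation is an assumption about the acquisition device, not something the patent specifies.

```python
import numpy as np

def eye_movement_features(blink_count, fixation_durations_s, pupil_diameters_mm, total_time_s):
    """Blink rate, fixation rate, average fixation duration and average pupil diameter."""
    durations = np.asarray(fixation_durations_s, dtype=float)
    pupils = np.asarray(pupil_diameters_mm, dtype=float)
    m = len(durations)                                  # total number of fixations
    return {
        "blink_rate": blink_count / total_time_s,       # f_b = n / T
        "fixation_rate": m / total_time_s,              # f_g = m / T
        "mean_fixation_duration_s": float(durations.mean()),
        "mean_pupil_diameter_mm": float(pupils.mean()),
    }

print(eye_movement_features(12, [0.25, 0.31, 0.40], [3.1, 3.3, 3.0], 60.0))
```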
In specific implementation, step S6 of the maneuver decision modeling method provided in the embodiment of the present invention constructs the behavior maneuver decision model by means of a support vector machine, which offers good classification performance and excellent generalization capability; pseudocode for the construction process is shown in Table 3.
Table 3: support vector machine classifier pseudocode (the pseudocode is provided only as an image in the original document)
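Since the pseudocode of Table 3 is available only as an image, it cannot be reproduced here; as a hedged substitute, the sketch below shows one conventional way to build such a support vector machine classifier with scikit-learn, with feature standardization added as a common (assumed) preprocessing step.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: rows are trials, columns are the multimodal mixed physiological features (EEG + eye + ECG);
# y: the maneuver decision label of each trial. Placeholder data is used for illustration.
X = np.random.randn(120, 30)
y = np.random.randint(0, 3, size=120)   # e.g. three maneuver decision classes (assumed)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X, y)
print(model.predict(X[:5]))
```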
In specific implementation, after step S6 of the maneuver decision modeling method provided by the embodiment of the present invention has been executed and the behavior maneuver decision model has been constructed by means of a support vector machine, the method may further include the following steps, as shown in fig. 4:
s7: performing model training on the behavior maneuver decision model by adopting a cross validation mode;
specifically, in order to prevent overfitting of the maneuver model, cross-validation is adopted during model training;
s8: optimizing parameters of the behavior maneuver decision model by adopting an optimization algorithm of grid search; therefore, the identification accuracy can be further improved, and the behavior maneuver model can be more accurately established for the multi-modal physiological information.
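Steps S7 and S8 can be sketched together with scikit-learn's GridSearchCV, which performs the cross-validated grid search in a single object; the parameter grid and the five-fold split are illustrative choices, not values given in the patent.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.randn(120, 30)             # placeholder multimodal feature matrix
y = np.random.randint(0, 3, size=120)    # placeholder maneuver decision labels

pipeline = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [1e-3, 1e-2, 1e-1, "scale"]}
search = GridSearchCV(pipeline, param_grid, cv=5)   # 5-fold cross-validation (S7) + grid search (S8)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```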
From the viewpoint of model construction, the maneuver decision model based on multi-modal physiological information takes the multi-modal mixed physiological features of the maneuver decision as model input, which is richer than the input of a maneuver decision model built from a traditional single physiological signal. The model outputs corresponding to different maneuver decisions are shown in FIG. 10.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (6)

1. A maneuver decision modeling method based on multimodal physiological information, characterized in that it comprises the following steps:
S1: building a real-person immersive combat simulation scene;
S2: carrying out experimental design for the collection of the multimodal physiological information, wherein the multimodal physiological information comprises electroencephalogram signals, eye movement signals and electrocardiosignals;
S3: collecting the multimodal physiological information;
S4: preprocessing the collected electroencephalogram signals;
S5: extracting and screening features of the collected eye movement signals, the electrocardiosignals and the preprocessed electroencephalogram signals;
S6: constructing a behavior maneuver decision model by means of a support vector machine;
wherein step S1, building a real-person immersive combat simulation scene, specifically comprises:
the aircraft simulator comprises a visual system, an air combat simulator cockpit and a computer network system; wherein
the visual system includes airport runways, buildings, fields and roads, and uses a computer imaging system to generate the visual scene outside the fighter cockpit and to simulate combat scenes;
the air combat simulator cockpit adopts a fully enclosed cockpit, and the cabin contains an instrument panel, control devices and a UAV seat; the instrument panel is divided, by functional structure and functional module, into an instrument module, a central console module and a control panel; the control panel is a multifunction display, and the main flight display panels are arranged symmetrically with respect to the pilot position; the control devices include a throttle lever, a stick and pedals, arranged symmetrically with respect to the pilot position;
the computer network system comprises three computers, namely a visual computer, a server computer and a central control computer; the hardware comprises a host, interfaces and a bus, and the software comprises software management, application software and support software; the three computers cooperate with each other over Ethernet, exchange data in real time and jointly complete the flight simulation task;
and step S5, extracting and screening features of the collected eye movement signals, the electrocardiosignals and the preprocessed electroencephalogram signals, specifically comprises:
S51: for electroencephalogram signals of different maneuver decisions, extracting the frequency sub-bands of the five bands of alpha, beta, delta, theta and gamma waves by the fast Fourier transform method, the short-time Fourier transform method and the wavelet transform method respectively; comparing the strength of the frequency features across the different time-frequency methods and selecting the strongest frequency feature as a feature candidate; comparing the feature candidate, in terms of significant differences, with the features of the adaptive regression method, the common spatial pattern method, the power spectrum analysis method, the energy mean method and the variance analysis method; and selecting the features that show significant differences between different maneuver decisions as the screened features, the screened features being used to decode the person's maneuver decision intention;
S52: for eye movement signals of different maneuver decisions, extracting and screening the blink rate feature, the fixation rate feature, the average fixation duration feature and the average pupil diameter feature of the eye movement signals;
S53: for electrocardiosignals of different maneuver decisions, calculating the variation of the R-R interval of the electrocardiosignal by a statistical dispersion analysis method, decomposing the electrocardiosignal into a series of components with different energies by a time-domain analysis method and into a series of components in different frequency bands by a frequency-domain analysis method, and analyzing the different energy components and the different frequency-band components, wherein the R-R interval is the interval between two peaks of one heartbeat;
S54: summarizing the screened features of the electroencephalogram signals, the features of the eye movement signals and the features of the electrocardiosignals to form multimodal mixed physiological features.
2. The maneuver decision modeling method according to claim 1, characterized in that step S2, carrying out experimental design for the collection of the multimodal physiological information, specifically comprises:
S21: recruiting subjects and screening them according to their physiological conditions and task experience;
S22: carrying out task training on the screened subjects, and judging whether each subject has learned the basic operation of the flight simulator within the experimental period and can independently complete the preset experimental tasks; if yes, proceeding to step S23; if not, returning to step S21 and continuing to recruit an equal number of new subjects until the required number is reached;
S23: carrying out a pre-experiment on the subjects, checking the training results of the subjects and verifying the feasibility of the experimental design;
S24: carrying out the formal experiment on the subjects, completing each experimental task in sequence according to the pre-established experimental order, and collecting multimodal physiological experimental data while the person executes maneuver decisions.
3. The maneuver decision modeling method according to claim 1, characterized in that step S4, preprocessing the collected electroencephalogram signals, specifically comprises:
S41: preprocessing the collected electroencephalogram signals by using an open-source MATLAB toolbox to obtain noise-free electroencephalogram signals;
S42: storing the noise-free electroencephalogram signals.
4. The maneuver decision modeling method according to claim 3, characterized in that step S41, preprocessing the collected electroencephalogram signals by using an open-source MATLAB toolbox to obtain noise-free electroencephalogram signals, specifically comprises:
performing electrode positioning, band-pass filtering, superposition averaging, baseline correction, re-referencing and independent component analysis on the collected electroencephalogram signals by using the open-source MATLAB toolbox to obtain noise-free electroencephalogram signals.
5. The maneuver decision modeling method according to claim 1, characterized in that step S52, extracting the blink rate feature, the fixation rate feature, the average fixation duration feature and the average pupil diameter feature of the eye movement signals for different maneuver decisions, specifically comprises:
calculating the blink rate feature f_b of eye movement as:
f_b = n / T
where n is the total number of blinks and T is the total task time;
calculating the fixation rate feature f_g of eye movement as:
f_g = m / T
where m is the total number of fixations;
calculating the average fixation duration feature d̄_f of eye movement as:
d̄_f = (1/m) · Σ_{i=1}^{m} d_{f,i}
where d_{f,i} is the duration of the i-th fixation;
calculating the average pupil diameter feature l̄_d of eye movement as:
l̄_d = (1/m) · Σ_{i=1}^{m} l_{d,i}
where l_{d,i} is the pupil diameter measured during the i-th fixation.
6. The maneuver decision modeling method according to claim 1, characterized in that, after step S6 of constructing the behavior maneuver decision model by means of a support vector machine, the method further comprises the following steps:
S7: training the behavior maneuver decision model by means of cross-validation;
S8: optimizing the parameters of the behavior maneuver decision model by means of a grid-search optimization algorithm.
CN201910365772.8A 2019-05-05 2019-05-05 A Maneuver Decision Modeling Method Based on Multimodal Physiological Information Active CN110123266B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910365772.8A CN110123266B (en) 2019-05-05 2019-05-05 A Maneuver Decision Modeling Method Based on Multimodal Physiological Information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910365772.8A CN110123266B (en) 2019-05-05 2019-05-05 A Maneuver Decision Modeling Method Based on Multimodal Physiological Information

Publications (2)

Publication Number Publication Date
CN110123266A CN110123266A (en) 2019-08-16
CN110123266B true CN110123266B (en) 2021-06-15

Family

ID=67576128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910365772.8A Active CN110123266B (en) 2019-05-05 2019-05-05 A Maneuver Decision Modeling Method Based on Multimodal Physiological Information

Country Status (1)

Country Link
CN (1) CN110123266B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111067553A (en) * 2019-12-30 2020-04-28 中国船舶工业综合技术经济研究院 An experimental system of human body efficacy under the action of multi-environmental elements
CN111067552B (en) * 2019-12-30 2022-07-01 中国船舶工业综合技术经济研究院 A measurement system for the influence of light factors on the performance of special shift workers
CN112043252B (en) * 2020-10-10 2021-09-28 山东大学 Emotion recognition system and method based on respiratory component in pulse signal
CN115120240B (en) * 2022-08-30 2022-12-02 山东心法科技有限公司 Sensitivity evaluation method, equipment and medium for special industry target perception skills

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218929A (en) * 2013-03-19 2013-07-24 哈尔滨工业大学 In-spaceport-bin navigation analogy method and system based on head-down bed resting
CN104575155A (en) * 2015-02-03 2015-04-29 扬州大学 Driving ergonomics designing platform for all-windshield head-up display

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104665849B (en) * 2014-12-11 2017-03-01 西南交通大学 A kind of high ferro dispatcher based on the interaction of many physiological signals multi-model stress detection method
CN107007291A (en) * 2017-04-05 2017-08-04 天津大学 Recognition system and information processing method of stress intensity based on multiple physiological parameters
CN107799165A (en) * 2017-09-18 2018-03-13 华南理工大学 A kind of psychological assessment method based on virtual reality technology
CN108509040A (en) * 2018-03-28 2018-09-07 哈尔滨工业大学深圳研究生院 Mixing brain machine interface system based on multidimensional processiug and adaptive learning
CN108904163A (en) * 2018-06-22 2018-11-30 北京信息科技大学 wheelchair control method and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218929A (en) * 2013-03-19 2013-07-24 哈尔滨工业大学 In-spaceport-bin navigation analogy method and system based on head-down bed resting
CN104575155A (en) * 2015-02-03 2015-04-29 扬州大学 Driving ergonomics designing platform for all-windshield head-up display

Also Published As

Publication number Publication date
CN110123266A (en) 2019-08-16

Similar Documents

Publication Publication Date Title
CN110123266B (en) A Maneuver Decision Modeling Method Based on Multimodal Physiological Information
CN107224291B (en) Dispatcher Ability Test System
CN114533086B (en) Motor imagery brain electrolysis code method based on airspace characteristic time-frequency transformation
Peng et al. 3D-STCNN: Spatiotemporal Convolutional Neural Network based on EEG 3D features for detecting driving fatigue
CN111553618B (en) Control ergonomics analysis method, equipment and system
CN111598453B (en) Control ergonomics analysis method, equipment and system based on executive force in virtual scene
CN111544015B (en) Cognitive-based control ergonomics analysis method, equipment and system
CN113974589B (en) Multimodal behavioral paradigm evaluation optimization system and cognitive ability evaluation method
CN111553617B (en) Control ergonomics analysis method, equipment and system based on cognitive ability in virtual scene
CN114999237B (en) Intelligent education interactive teaching method
CN111265212A (en) Motor imagery electroencephalogram signal classification method and closed-loop training test interaction system
CN117547270A (en) Pilot cognitive load feedback system with multi-source data fusion
CN113729710A (en) Real-time attention assessment method and system integrating multiple physiological modes
CN111598451B (en) Control ergonomics analysis method, equipment and system based on task execution ability
CN102779229A (en) Self-adapting automation method based on brain function state
Jiang et al. Mental workload artificial intelligence assessment of pilots’ EEG based on multi-dimensional data fusion and LSTM with attention mechanism model
CN116269380A (en) Intelligent ship driver psychological load assessment system and method
Wang et al. Decoding pilot behavior consciousness of EEG, ECG, eye movements via an SVM machine learning model
CN116700495A (en) Brain-computer interaction method and equipment based on steady-state visual evoked potential and motor imagery
WO2024032728A1 (en) Method and apparatus for evaluating intelligent human-computer coordination system, and storage medium
Zhang et al. Assessing Pilot Workload during Takeoff and Climb under Different Weather Conditions: A fNIRS-based Modelling using Deep Learning Algorithms
Masters et al. Investigating the utility of fnirs to assess mental workload in a simulated helicopter environment
Liu et al. Identification of pilots’ mental workload under different flight phases based on a portable EEG device
CN110569968A (en) Evaluation method and evaluation system for entrepreneurial failure resilience based on electrophysiological signals
Chen et al. A pilot workload evaluation method based on EEG data and physiological data

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant