Disclosure of Invention
The present application mainly aims to provide a gesture processing method and an intelligent wearing system, so as to solve the technical problem that existing gesture recognition schemes recognize user gestures with low accuracy.
In order to achieve the above object, the present application provides a gesture processing method applied to an intelligent garment, wherein the intelligent garment comprises a plurality of fiber-based stretching sensors woven into the intelligent garment in the areas corresponding to the respective human joints;
The gesture processing method comprises the following steps:
obtaining a stretching detection value of the fiber-based stretching sensor;
determining a corresponding angle value according to the stretching detection value;
and mapping the angle value to coordinate axes of the three-dimensional coordinate system of each human joint to obtain a three-dimensional gesture.
In one embodiment, the step of determining the corresponding angle value according to the stretching detection value includes:
obtaining an initial stretching value of the fiber-based stretching sensor, wherein the initial stretching value is a stretching value of the fiber-based stretching sensor in an initial standard state;
calculating a stretching change value between the stretching detection value and the initial stretching value;
and obtaining an angle value corresponding to the stretching detection value according to the stretching change value and the angle mapping relationship corresponding to the fiber-based stretching sensor, wherein the angle mapping relationship describes the mapping relationship between stretching change values and angle values of the fiber-based stretching sensor.
In an embodiment, the step of mapping the angle value to coordinate axes of a three-dimensional coordinate system of each human joint to obtain a three-dimensional gesture includes:
according to the setting position of the fiber-based stretching sensor, determining a coordinate axis of a three-dimensional coordinate system of a human joint corresponding to the fiber-based stretching sensor;
mapping the angle value of the fiber-based stretching sensor to a coordinate axis of the corresponding three-dimensional coordinate system to obtain a local joint angle;
and connecting the local joint angles in series, according to the transformation relationships between the three-dimensional coordinate systems of the human joints, to obtain a global human body posture as the three-dimensional gesture.
In an embodiment, the step of concatenating the local joint angles according to the transformation relationship between the three-dimensional coordinate systems of the human joints to obtain the global human body posture includes:
acquiring a designated human body region, and determining local joint angles of the human joints in the designated human body region;
and connecting the local joint angles of the human joints in the designated human body region in series, according to the transformation relationships between the three-dimensional coordinate systems of the human joints, to obtain the global human body posture.
In one embodiment, the human joints in the designated human body region include an ankle joint, a knee joint and a hip joint;
after the step of acquiring the designated human body region and determining the local joint angles of each human joint in the designated human body region, the method further comprises the following steps:
responding to a jump event, acquiring a landing acceleration, and calculating a ground reaction force based on the landing acceleration, wherein the landing acceleration is determined according to the vertical acceleration during the jump event;
according to the local joint angles of the ankle joint, the knee joint and the hip joint, determining stress thresholds of the ankle joint, the knee joint and the hip joint;
performing, according to the ground reaction force, a reverse recursion calculation with the ankle joint as a starting point to obtain an ankle joint force, a knee joint force and a hip joint force;
and comparing the ankle joint force, the knee joint force and the hip joint force with the corresponding stress thresholds respectively, and outputting injury early-warning information when any one of them exceeds its corresponding stress threshold.
In an embodiment, the gesture processing method further includes:
responding to a motion analysis instruction, and acquiring a joint chain and a standard action angle sequence related to a motion to be analyzed;
generating a current action angle sequence from each local joint angle in the joint chain;
aligning the current action angle sequence with the standard action angle sequence to obtain a motion coordination deviation;
and generating posture correction prompt information according to the motion coordination deviation.
In an embodiment, the step of aligning the current action angle sequence and the standard action angle sequence to obtain a motion coordination deviation includes:
mapping the current action angle sequence and the standard action angle sequence onto the same time axis, and calculating the angle difference and the timing difference between key phase points in the standard action angle sequence and the corresponding mapping phase points in the current action angle sequence;
and taking the angle difference and the timing difference as the motion coordination deviation.
In an embodiment, after the step of mapping the angle value to coordinate axes of the three-dimensional coordinate system of each human joint to obtain a three-dimensional gesture, the gesture processing method further includes:
importing the three-dimensional gesture into a standard human body model, and determining the real-time gravity center position under the three-dimensional gesture;
and outputting fall warning information when the horizontal movement speed of the real-time gravity center position is greater than a preset speed threshold, or the relative distance between the real-time gravity center position and a preset human body supporting point is greater than a preset distance threshold.
In addition, in order to achieve the above purpose, the application also provides an intelligent wearing system, which comprises an intelligent garment and a control terminal in communication connection with the intelligent garment;
The intelligent clothing comprises a base fabric layer and a plurality of fiber-based stretching sensors, wherein the fiber-based stretching sensors are woven into the base fabric layer in areas corresponding to all human joints;
the control terminal is configured to implement the steps of the gesture processing method as described above.
In an embodiment, the smart garment further comprises a data acquisition module;
The data acquisition module is electrically connected with each fiber-based stretching sensor through a wire and is used for acquiring sensor signals of each fiber-based stretching sensor and sending the sensor signals to the control terminal.
In addition, in order to achieve the above object, the present application also proposes a storage medium, which is a computer-readable storage medium on which a computer program is stored which, when executed by a processor, implements the steps of the gesture processing method described above.
Furthermore, to achieve the above object, the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the gesture processing method as described above.
One or more technical solutions provided by the present application have at least the following technical effects:
The gesture processing method is applied to an intelligent garment, the intelligent garment comprises a plurality of fiber-based stretching sensors, and the fiber-based stretching sensors are woven into the intelligent garment in the areas corresponding to the respective human joints. When a human joint is at different angles, the degree of fiber stretching in the corresponding area of the intelligent garment differs, so the corresponding angle value can be determined according to the stretching detection value obtained from the fiber-based stretching sensor. The present application can therefore map the angle values onto the coordinate axes of the three-dimensional coordinate system of each human joint to obtain a three-dimensional gesture. In this way, the application accurately identifies the angle value of each human joint in three-dimensional space at the level of mechanical stretching by means of the fiber-based stretching sensors, and maps those angle values into the three-dimensional coordinate systems of the human joints to form the three-dimensional gesture. Compared with existing inertial motion capture and optical motion capture, this gesture processing approach adapts better to different scenes, and since the angles are identified from mechanical stretching rather than by integration, there is no integration-drift problem, so the accuracy of user gesture recognition is effectively improved. In addition, inertial motion capture and optical motion capture need to fit many degrees of freedom (positions and postures) of the whole body from inertial information or image information, which involves a large amount of computation and high processing difficulty, a burden that the direct stretch-to-angle mapping of the present application avoids.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the technical solution of the present application and are not intended to limit the present application.
For a better understanding of the technical solution of the present application, the following detailed description will be given with reference to the drawings and the specific embodiments.
In the gesture processing method of the present application, the intelligent garment comprises a plurality of fiber-based stretching sensors woven into the areas of the intelligent garment corresponding to the respective human joints; stretching detection values of the fiber-based stretching sensors are obtained, corresponding angle values are determined according to the stretching detection values, and the angle values are mapped onto the coordinate axes of the three-dimensional coordinate system of each human joint to obtain a three-dimensional gesture.
Existing gesture recognition approaches usually adopt inertial motion capture or optical motion capture. In inertial motion capture, inertial measurement units are embedded at key positions of a motion-capture suit (such as the joints and the torso), and the attitude angles of the various parts of the human body are calculated by integrating the sensor data of the inertial measurement units; after long-time use, however, the attitude angles drift because errors accumulate in the integration. Optical motion capture relies on computer vision applied to acquired images, and is easily affected by occlusion or lighting interference. Both therefore result in low accuracy of the recognized user gesture.
The present application provides a solution: by means of fiber-based stretching sensors, the angle value of each human joint in three-dimensional space is accurately identified at the level of mechanical stretching, and the angle values are mapped into the three-dimensional coordinate systems of the human joints to form a three-dimensional gesture. In contrast, inertial motion capture and optical motion capture need to fit many degrees of freedom (positions and postures) of the whole body from inertial information or image information, which involves a large amount of computation and high processing difficulty.
Based on this, an embodiment of the present application provides a gesture processing method, and referring to fig. 1, fig. 1 is a schematic flow chart of a first embodiment of the gesture processing method of the present application.
In this embodiment, the gesture processing method is applied to an intelligent garment, where the intelligent garment includes a plurality of fiber-based stretching sensors, and the fiber-based stretching sensors are woven into a region of the intelligent garment corresponding to each human joint;
The gesture processing method comprises the following steps S10-S30:
step S10, obtaining a stretching detection value of the fiber-based stretching sensor;
As shown in fig. 2, the intelligent garment may take the form of a jacket, trousers, gloves, sleeves, etc., and the base fabric layer of the intelligent garment (the black area in fig. 2) is made of elastic fibers (spandex, polyester fiber, nylon, etc., or blended elastic fibers) so that it fits closely against the body surface of the user. The elastic fibers may additionally be given functional treatments (such as an antibacterial treatment, a wash-resistant coating or a fatigue-resistant coating). The intelligent garment comprises a plurality of fiber-based stretching sensors woven into the areas of the base fabric layer corresponding to the respective human joints (the green areas in fig. 2), so as to detect the stretching amplitude of each such area and thereby characterize the corresponding angle value. It will be appreciated that a fiber-based stretching sensor is provided at least in a region where stretching is produced when the corresponding human joint moves. Illustratively, the fiber-based stretching sensors may be attached to the base fabric layer by weaving, embroidering or adhering.
Additionally, the fiber-based stretching sensor is a fiber-form sensor whose output voltage changes as the fiber is stretched, so this embodiment can determine the fiber stretching amplitude of the region where the sensor is located from the voltage change.
This embodiment can establish a communication connection with each fiber-based stretching sensor by wireless or wired communication, so as to receive the sensor signal of each fiber-based stretching sensor and obtain its stretching detection value.
Step S20, determining a corresponding angle value according to the stretching detection value;
The stretching detection value characterizes the length of the fiber; it may be the voltage value directly output by the fiber-based stretching sensor, or a length value obtained by converting that voltage value.
The degree of fiber stretching differs when a human joint is at different angles. Taking the elbow joint as an example, when the elbow flexes, the fibers in the area corresponding to the dorsal side of the elbow are stretched, and the deeper the flexion (i.e., the smaller the angle between the humerus and the ulna or radius), the greater the degree of stretching. A fiber-based stretching sensor is therefore arranged in the area of the intelligent garment corresponding to the dorsal side of the elbow, so that the fiber stretching amplitude it detects has a mapping relationship with the angle of the elbow in the flexion direction. Accordingly, the fiber stretching amplitude of each joint area of the intelligent garment can be determined from the stretching detection value, and the angle value of the corresponding human joint can then be determined from the mapping relationship between fiber stretching amplitude and angle value.
In a possible embodiment, step S20 may include steps S21 to S23:
Step S21, obtaining an initial stretching value of the fiber-based stretching sensor, wherein the initial stretching value is a stretching value of the fiber-based stretching sensor in an initial standard state;
Step S22, calculating a stretching change value between the stretching detection value and the initial stretching value;
Step S23, obtaining an angle value corresponding to the stretching detection value according to the stretching change value and the angle mapping relationship corresponding to the fiber-based stretching sensor, wherein the angle mapping relationship describes the mapping relationship between stretching change values and angle values of the fiber-based stretching sensor.
The initial stretching value is the stretching value of the fiber-based stretching sensor in an initial standard state, i.e., the state of the sensor when a specified standard posture (such as standing naturally, or standing with the arms spread) is assumed. For example, in this embodiment the intelligent garment may be worn on a standard mannequin posed in the specified standard posture, and the stretching detection value of the fiber-based stretching sensor is then taken as the initial stretching value. This embodiment may also guide the user to assume the specified standard posture after putting on the intelligent garment, and then take the stretching detection value of the fiber-based stretching sensor as the initial stretching value.
This embodiment obtains the initial stretching value of the fiber-based stretching sensor, i.e., its stretching value in the initial standard state, and calculates the stretching change value, which is the difference between the stretching detection value and the initial stretching value. The angle value corresponding to the stretching detection value is then obtained from the stretching change value and the angle mapping relationship corresponding to the fiber-based stretching sensor, where the angle mapping relationship describes the mapping between stretching change values and angle values, and may be expressed as a mapping table, a fitting function, or the like. The divided voltage value is the output voltage of the fiber-based stretching sensor after it is stretched; since the sensor directly outputs an analog signal, the divided voltage value can be converted into a digital value to obtain the current stretching value. The difference between the current stretching value and the initial stretching value gives the stretching change value, from which the angle value is calculated with the fitting function. Illustratively, the fiber-based stretching sensor on the X-axis of the left shoulder has the fitting function f(x) = 11.924x; with a current stretching value of 464 and an initial stretching value of 460, the stretching change value is 464 − 460 = 4, and the angle value is f(4) = 11.924 × 4 = 47.70°. The sensor on the Y-axis of the left shoulder has the fitting function f(x) = −1.8407x + 90; with a current stretching value of 985 and an initial stretching value of 953, the stretching change value is 985 − 953 = 32, and the angle value is f(32) = −1.8407 × 32 + 90 = 31.10°. The sensor on the Z-axis of the left shoulder has the fitting function f(x) = −1.8058x; with a current stretching value of 340 and an initial stretching value of 359, the stretching change value is 340 − 359 = −19, and the angle value is f(−19) = −1.8058 × (−19) = 34.31°. Further, to reduce errors, several fiber-based stretching sensors may be arranged along each coordinate-axis direction of the three-dimensional coordinate system of a human joint, and the stretching detection value for that axis is obtained by fusing the detection values of those sensors.
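Purely for illustration, the following Python sketch reproduces the left-shoulder example above, assuming each sensor axis has a linear fitting function; the class and variable names are illustrative and not part of the application.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StretchSensor:
    """One fiber-based stretching sensor mapped to a single joint axis."""
    joint: str            # e.g. "left_shoulder"
    axis: str             # "X", "Y" or "Z"
    initial_value: float  # stretching value in the initial standard posture
    fit: Callable[[float], float]  # angle mapping: stretching change -> degrees

    def angle(self, current_value: float) -> float:
        # Stretching change = current detection value - initial (calibration) value.
        return self.fit(current_value - self.initial_value)

# Fitting functions and readings taken from the example in the text.
left_shoulder = [
    StretchSensor("left_shoulder", "X", 460, lambda d: 11.924 * d),
    StretchSensor("left_shoulder", "Y", 953, lambda d: -1.8407 * d + 90),
    StretchSensor("left_shoulder", "Z", 359, lambda d: -1.8058 * d),
]

readings = {"X": 464, "Y": 985, "Z": 340}
for s in left_shoulder:
    print(s.axis, round(s.angle(readings[s.axis]), 2))
# -> X 47.7, Y 31.1, Z 34.31
```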
Step S30, mapping the angle value to coordinate axes of a three-dimensional coordinate system of each human joint to obtain a three-dimensional gesture.
It should be noted that the three-dimensional coordinate system of a human joint is a preset coordinate system whose origin is the joint itself. As shown in fig. 3, the positive X-axis points along the bone connected to one end of the joint, the positive Y-axis points in a specified direction perpendicular to the X-axis (e.g., toward the front of the body), and the positive Z-axis is perpendicular to the plane formed by the X-axis and the Y-axis. In addition, as shown in fig. 4, because of the constraints of the human anatomy, the value range of each coordinate axis in the three-dimensional coordinate system of a given joint is determined by that joint's range of motion.
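As an illustration of the per-axis value ranges just mentioned, the sketch below clamps a mapped angle to a joint's range of motion; the numeric limits are assumptions for illustration and are not taken from the application.

```python
from dataclasses import dataclass

@dataclass
class JointAxisRange:
    """Value range of one coordinate axis of a joint's local frame (cf. fig. 4)."""
    axis: str
    min_deg: float
    max_deg: float

    def clamp(self, angle: float) -> float:
        # Constrain a mapped angle to the joint's anatomical range of motion.
        return max(self.min_deg, min(self.max_deg, angle))

# Example: the knee flexes on a single axis, roughly 0..140 degrees (assumed).
knee_z = JointAxisRange("Z", 0.0, 140.0)
print(knee_z.clamp(150.0))  # -> 140.0
```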
It should be noted that the three-dimensional gesture may be the whole-body posture of the human body, or the limb posture of part of the joints of the human body, for example the limb posture of the hand region or of the upper-body region.
Because the angle value of one fiber-based stretching sensor describes an angle on a single degree of freedom (i.e., one coordinate axis), this embodiment maps the angle value of each fiber-based stretching sensor onto the corresponding coordinate axis of the three-dimensional coordinate system of each human joint to obtain the three-dimensional gesture. Illustratively, this embodiment determines, from the placement position of a fiber-based stretching sensor, the coordinate axis of the joint coordinate system to which that sensor corresponds, and then maps its angle value onto that axis to obtain a local joint angle, where the local joint angle is the three-dimensional vector of the human joint in its coordinate system. Because each joint coordinate system is established from the joint's position and the direction of one connected bone, and the joints have fixed relative positional relationships, the local joint angles can be connected in series according to the transformation relationships between the joint coordinate systems to obtain the global human body posture as the three-dimensional gesture. After the three-dimensional gesture is imported into a human body model, this embodiment achieves real-time mapping and visualization of the user's movements. The resulting three-dimensional gesture can further be used in scenes such as motion analysis, rehabilitation, human-computer interaction and virtual reality.
In a possible implementation, step S30 may include steps S31 to S33:
Step S31, determining coordinate axes of a three-dimensional coordinate system of a human joint corresponding to the fiber-based stretching sensor according to the setting position of the fiber-based stretching sensor;
Step S32, mapping the angle value of the fiber-based stretching sensor to a coordinate axis of the corresponding three-dimensional coordinate system to obtain a local joint angle;
Step S33, according to the transformation relation between three-dimensional coordinate systems among all the human joints, connecting all the local joint angles in series to obtain a global human body posture as the three-dimensional posture.
Each fiber-based stretching sensor is placed within the range of a human joint, so as to detect the fiber stretching amplitude of that joint in the direction of each coordinate axis of its three-dimensional coordinate system.
As shown in fig. 5, the red line segments in the figure are fiber-based stretching sensors. For the neck, stretching sensors 7 and 13 detect the fiber stretching amplitude in the X-axis direction of the right trapezius, and sensors 8 and 14 detect it for the left trapezius; sensor 1 detects the Y-axis stretching amplitude of the right trapezius and sensor 2 that of the left; sensors 5 and 11 detect the Z-axis stretching amplitude of the right trapezius, and sensors 6 and 12 that of the left. For the shoulders, sensor 21 detects the X-axis stretching amplitude of the right shoulder and sensor 22 that of the left; sensor 3 detects the Y-axis amplitude of the right shoulder and sensor 4 that of the left; sensor 9 detects the Z-axis amplitude of the right shoulder and sensor 10 that of the left. For the chest, sensor 15 detects the Z-axis stretching amplitude. For the waist, sensors 16 and 17 detect the X-axis amplitude, sensors 18 and 19 the Y-axis amplitude, and sensor 20 the Z-axis amplitude.
For the elbow joints, sensor 23 detects the X-axis stretching amplitude of the right elbow and sensor 24 that of the left, while sensor 25 detects the Z-axis amplitude of the right elbow and sensor 26 that of the left; because the human anatomy gives the elbow mobility in only two degrees of freedom, no sensor need be provided for the elbow's Y-axis direction. For the hip joints, sensors 31 and 32 detect the X-axis stretching amplitude, sensors 29 and 30 the Y-axis amplitude, and sensors 27 and 28 the Z-axis amplitude. For the knee joints, sensor 33 detects the Z-axis amplitude of the right knee and sensor 34 that of the left; since the knee has mobility in only one degree of freedom, no sensors need be provided for its X-axis and Y-axis directions. Therefore, the coordinate axis of the joint coordinate system to which each fiber-based stretching sensor corresponds can be determined from the sensor's placement position. The angle value of each sensor is then mapped onto that coordinate axis to obtain the three-dimensional vector (local joint angle) of the joint in its coordinate system, i.e., the joint's angle in three-dimensional space. Because each three-dimensional coordinate system takes its joint's position as the origin, the positions of and connections between the human joints serve as the transformation relationships between the joint coordinate systems. This embodiment can therefore connect all local joint angles in series according to these transformation relationships to obtain the global human body posture of the whole body as the three-dimensional gesture.
Regarding the manner of connecting the local joint angles in series, this embodiment can determine the parent-child relationships of the human joints, for example: root joint → waist; trunk chain: waist → chest → neck; left lower-limb chain: waist → left hip → left knee → left ankle; and so on. Starting from the root joint of the parent-child hierarchy, each local joint angle is multiplied step by step by the transformation matrices that describe these transformation relationships, yielding the global joint angle of each human joint and thus the global human body posture. Of course, this embodiment may also connect only the local joint angles of part of the joints in series, according to the transformation relationships between their coordinate systems, to obtain the global human body posture of a partial human body region as the three-dimensional gesture.
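For illustration, here is a minimal Python sketch of this serial connection, assuming the local angles are X-Y-Z Euler angles and using a hypothetical parent-child table; the application does not prescribe a specific rotation representation.

```python
import numpy as np

def rot(angles_deg):
    """Rotation matrix from local joint angles (X, Y, Z Euler, degrees).
    The decomposition order is an assumption; the text only states that
    local angles are chained through coordinate-frame transforms."""
    ax, ay, az = np.radians(angles_deg)
    rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    return rz @ ry @ rx

# Parent-child relationships; the root joint has no parent.
parents = {"waist": None, "chest": "waist", "neck": "chest",
           "left_hip": "waist", "left_knee": "left_hip", "left_ankle": "left_knee"}

def global_pose(local_angles):
    """Chain local joint angles outward from the root joint:
    each global rotation is the parent's global rotation times the local one."""
    global_rot = {}
    def resolve(joint):
        if joint in global_rot:
            return global_rot[joint]
        local = rot(local_angles[joint])
        parent = parents[joint]
        global_rot[joint] = local if parent is None else resolve(parent) @ local
        return global_rot[joint]
    for j in parents:
        resolve(j)
    return global_rot

# Usage: a neutral pose with 45 degrees of knee flexion.
angles = {j: (0.0, 0.0, 0.0) for j in parents}
angles["left_knee"] = (0.0, 0.0, 45.0)
pose = global_pose(angles)
```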
In one possible embodiment, step S33 may include steps a10 to a20:
Step A10, acquiring a designated human body area, and determining the local joint angles of all human body joints in the designated human body area;
And step A20, connecting the local joint angles of the human joints in the designated human body region in series, according to the transformation relationships between the three-dimensional coordinate systems of the human joints, to obtain the global human body posture.
The designated human body region is the human body region for which a global human body posture is to be constructed, such as the torso region, the upper-body region or the lower-body region.
This embodiment acquires the designated human body region for which posture construction is desired, determines the human joints contained in that region and their local joint angles, and then connects those local joint angles in series according to the transformation relationships between the joints' three-dimensional coordinate systems to obtain the global human body posture.
This differs from inertial and optical motion-capture schemes, which identify the positions of key points in a global coordinate system (such as a ground coordinate system) from inertial or optical information. Even when only the global human body posture of a designated region is required, such schemes suffer excessive recognition error in the local region, because changes of the overall posture interfere in the global coordinate system. For example, when a global posture is built only for the hand and elbow region, the points of that region in the global coordinate system can hardly avoid interference from overall posture changes: when the upper body rotates, the inertial information of the hand and elbow changes even though the hand and elbow themselves have not moved. The constructed posture therefore cannot really be decoupled from the overall posture, which degrades the accuracy of the posture description for the designated region. The present application, which builds the posture from joint-local angles, avoids this interference.
In a possible implementation, step S30 may be followed by steps S40 to S50:
Step S40, importing the three-dimensional gesture into a standard human body model, and determining the real-time gravity center position under the three-dimensional gesture;
Step S50, outputting fall warning information when the horizontal movement speed of the real-time gravity center position is greater than a preset speed threshold, or the relative distance between the real-time gravity center position and a preset human body supporting point is greater than a preset distance threshold.
As shown in fig. 6, the standard human body model is a three-dimensional model of a human body under a specified standard body shape.
Existing inertial and optical motion capture require a large amount of data processing of inertial or optical information, so the real-time output of a three-dimensional gesture is difficult to guarantee. In this embodiment, the three-dimensional gesture is imported into the standard human body model so that each global joint angle in the gesture is bound to the corresponding model joint, and the model then presents the three-dimensional gesture. On this basis, the real-time gravity center position under the three-dimensional gesture can be calculated. Illustratively, this embodiment may use the mass ratios of the body parts of the standard model, for example 50% for the torso, 8% for the head and neck, 5% for each arm and 16% for each leg. The center-of-gravity position of each body part is then determined on the model into which the three-dimensional gesture has been imported, for example near the geometric center of the part or at a corresponding proportional position: the thigh's center of gravity lies between hip and knee, closer to the hip, and the upper arm's lies at a proximal proportional position near the shoulder, such as 43%. The real-time gravity center position under the three-dimensional gesture is then the mass-weighted average of the part centers of gravity, i.e., the sum of each part's mass proportion multiplied by its center-of-gravity position, divided by the total mass. The horizontal movement speed of the real-time gravity center position is obtained from the horizontal distance between the previous and current gravity center positions and the time difference between them. This embodiment judges whether that speed exceeds a preset speed threshold, and whether the relative distance between the real-time gravity center position and a preset human body supporting point exceeds a preset distance threshold, the supporting point being the gravity center position of the human body in a preset stable posture. When either threshold is exceeded, the gravity center position is changing rapidly or the body is no longer in a stable posture; the user is then at risk of falling, and fall warning information is output. The fall warning information warns of the fall risk and may be output as text, images, speech, etc.
If the horizontal movement speed of the real-time gravity center position does not exceed the preset speed threshold, and the relative distance to the preset human body supporting point does not exceed the preset distance threshold, it can be judged that there is no fall risk. By virtue of its high-accuracy joint angles and low processing latency, this embodiment provides timely early warning of fall risk.
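A minimal sketch of the center-of-gravity and fall-risk logic above, using the segment mass ratios from the text; the segment positions, the vertical-axis convention and the threshold values are illustrative assumptions.

```python
import numpy as np

# Segment mass ratios from the text: torso 50%, head+neck 8%,
# each arm 5%, each leg 16% (sums to 100%).
MASS_RATIO = {"torso": 0.50, "head_neck": 0.08,
              "left_arm": 0.05, "right_arm": 0.05,
              "left_leg": 0.16, "right_leg": 0.16}

def center_of_gravity(segment_cog):
    """Mass-weighted average of the segment centers of gravity.
    segment_cog maps segment name -> 3D position on the standard mannequin.
    The ratios already sum to 1, so no extra division by total mass is needed."""
    total = np.zeros(3)
    for name, ratio in MASS_RATIO.items():
        total += ratio * np.asarray(segment_cog[name])
    return total

def fall_risk(cog_prev, cog_now, dt, support_point,
              speed_threshold=0.8, distance_threshold=0.3):
    """Thresholds (m/s and m) are illustrative, not specified in the text."""
    cog_prev, cog_now = np.asarray(cog_prev), np.asarray(cog_now)
    # Horizontal speed from the displacement of the first two components,
    # assuming the third axis is vertical.
    horiz_speed = np.linalg.norm((cog_now - cog_prev)[:2]) / dt
    offset = np.linalg.norm(cog_now - np.asarray(support_point))
    return horiz_speed > speed_threshold or offset > distance_threshold
```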
The first embodiment of the present application provides a gesture processing method applied to an intelligent garment, where the intelligent garment comprises a plurality of fiber-based stretching sensors woven into the areas of the garment corresponding to the respective human joints. Because the degree of fiber stretching in the area corresponding to a human joint differs when the joint is at different angles, this embodiment can determine the corresponding angle value from the stretching detection value obtained from the fiber-based stretching sensor. In this embodiment, each human joint is provided with an independent three-dimensional coordinate system, and a single fiber-based stretching sensor identifies the stretching amplitude of a joint in a single direction at different angles, so the angle values can be mapped onto the coordinate axes of the joints' three-dimensional coordinate systems to obtain a three-dimensional gesture. In this way, the embodiment accurately identifies the angle value of each human joint in three-dimensional space at the level of mechanical stretching and maps it into the joints' coordinate systems to form the three-dimensional gesture. Compared with existing inertial and optical motion capture, this approach adapts better to different scenes, and because the angles are identified from mechanical stretching there is no integration-drift problem, so the accuracy of user gesture recognition is effectively improved. Moreover, inertial and optical motion capture must fit many whole-body degrees of freedom (positions and postures) from inertial or image information, incurring a large amount of computation and high processing difficulty, which this embodiment avoids.
In the second embodiment of the present application, content that is the same as or similar to the first embodiment may be understood with reference to the above description and is not repeated. On this basis, referring to fig. 7, the human joints in the designated human body region include the ankle joint, the knee joint and the hip joint;
the step A10 includes steps B10 to B40:
Step B10, responding to a jump event, acquiring a landing acceleration, and calculating a ground reaction force based on the landing acceleration, wherein the landing acceleration is determined according to the vertical acceleration during the jump event;
step B20, determining stress thresholds of the ankle joint, the knee joint and the hip joint according to the local joint angles of the ankle joint, the knee joint and the hip joint;
Step B30, performing reverse recursion calculation by taking an ankle joint as a starting point according to the ground reaction force to obtain ankle joint force, knee joint force and hip joint force;
And step B40, comparing the ankle joint force, the knee joint force and the hip joint force with the corresponding stress thresholds respectively, and outputting injury early-warning information when any one of them exceeds its corresponding stress threshold.
For jump scenes (such as shooting, long jump, etc.), the supporting force that the ankle, knee and hip joints can provide differs with their angles at landing. For example, the ankle provides the greatest supporting force when its local joint angle is 0°, i.e., when the lower leg is perpendicular to the ground, and the supporting force decreases as the local joint angle deviates further from 0°. This embodiment therefore sets a corresponding stress threshold for the local joint angle of each human joint, the stress threshold being the supporting force the joint can bear at that local joint angle. A jump event is an event in which the user performs a jumping action.
This embodiment determines that a jump event exists by acquiring the user's vertical acceleration (perpendicular to the ground): a jump is detected when the vertical acceleration first dips (the pre-squat phase) and then rises sharply (the moment of leaving the ground). In response to the jump event, the landing acceleration is acquired; it is determined from the vertical acceleration during the jump event and may be taken as the sum of the absolute values of the gravitational acceleration and the vertical acceleration. To account for air resistance during the jump and descent, the landing acceleration may also be a corrected value, i.e., that sum multiplied by a predetermined correction coefficient. The ground reaction force is then calculated from the landing acceleration by Newton's second law: the ground reaction force is the product of the landing acceleration and the body mass. From the local joint angles of the ankle, knee and hip, the mapping between joint angle and stress threshold is queried to obtain the stress threshold of each of the three joints at its current local joint angle. Further, a reverse recursion calculation starting from the ankle yields the ankle joint force, knee joint force and hip joint force at the landing moment. Exemplarily, the ankle force is F_ankle = m_foot × g − F_GRF, where m_foot is the mass of the foot, g is the gravitational acceleration and F_GRF is the ground reaction force. After the ankle force F_ankle is transferred to the knee, the knee force is F_knee = F_ankle + m_shank × g, where m_shank is the mass of the calf; after the knee force F_knee is transferred to the hip, the hip force is F_hip = F_knee + m_thigh × g, where m_thigh is the mass of the thigh. The ankle, knee and hip joint forces are then compared with their corresponding stress thresholds, and when any one of them exceeds its threshold, injury early-warning information is output. The injury early-warning information warns of injury risk and may be output as text, images, speech, etc.
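A minimal sketch of the ground-reaction-force estimate and the reverse recursion, following the formulas exactly as stated above; the sign conventions and the use of magnitudes in the threshold comparison are assumptions.

```python
G = 9.81  # gravitational acceleration, m/s^2

def ground_reaction_force(vertical_acc, body_mass, correction=1.0):
    """Landing acceleration = |g| + |vertical acceleration|, optionally
    scaled by a correction coefficient for air resistance (per the text);
    GRF = landing acceleration * body mass (Newton's second law)."""
    landing_acc = (G + abs(vertical_acc)) * correction
    return landing_acc * body_mass

def joint_forces(f_grf, m_foot, m_shank, m_thigh):
    """Reverse recursion from the ankle upward, using the formulas as
    stated in the text (not re-derived here):
    F_ankle = m_foot*g - F_GRF; F_knee = F_ankle + m_shank*g;
    F_hip = F_knee + m_thigh*g."""
    f_ankle = m_foot * G - f_grf
    f_knee = f_ankle + m_shank * G
    f_hip = f_knee + m_thigh * G
    return f_ankle, f_knee, f_hip

def injury_warning(forces, thresholds):
    # Compare each joint force magnitude with its posture-dependent threshold.
    return any(abs(f) > t for f, t in zip(forces, thresholds))
```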
In the second embodiment of the present application, a landing acceleration is acquired in response to a jump event and a ground reaction force is calculated from it, the landing acceleration being determined from the vertical acceleration during the jump event; stress thresholds of the ankle, knee and hip joints are determined from their local joint angles; the ankle, knee and hip joint forces are obtained by reverse recursion starting from the ankle according to the ground reaction force; and each joint force is compared with its corresponding stress threshold, injury early-warning information being output when any one of them exceeds its threshold. This embodiment estimates the landing acceleration from the vertical acceleration to calculate the ground reaction force at the moment of landing, computes the joint forces by reverse recursion from the ankle, thereby accurately decomposing the transmission path of the impact force along the joint chain (ankle-knee-hip), and determines the stress thresholds matched to the current posture from the real-time local joint angles. Since risks such as joint sprain or dislocation may arise once any joint force exceeds its limit, the injury early-warning information is output immediately, achieving monitoring and early warning of injuries that may occur after a jump event.
In the third embodiment of the present application, content that is the same as or similar to the first embodiment may be understood with reference to the above description and is not repeated. On this basis, referring to fig. 8, the gesture processing method further includes steps C10 to C40:
Step C10, responding to a motion analysis instruction, and acquiring a joint chain and a standard action angle sequence related to a motion to be analyzed;
Step C20, generating a current action angle sequence from each local joint angle in the joint chain;
step C30, aligning the current action angle sequence with the standard action angle sequence to obtain a motion coordination deviation;
and step C40, generating posture correction prompt information according to the motion coordination deviation.
The motion analysis instruction is an instruction to analyze a motion activity, for example push-ups, yoga poses or sit-ups. The joint chain related to the motion to be analyzed is the chain formed by the human joints involved in that motion. The standard action angle sequence is a time series of the local joint angles of each joint in the joint chain under the standard form of the motion to be analyzed.
In this embodiment, in response to the motion analysis instruction, the joint chain and the standard action angle sequence related to the motion to be analyzed are obtained, and the local joint angles of the joints in the chain are then arranged in time order to generate the current action angle sequence. The current and standard action angle sequences are aligned on a time axis, after which the angle difference and timing difference between each key phase point in the standard sequence and the corresponding mapping phase point in the current sequence are calculated. A key phase point is a phase point of a marker node of the motion to be analyzed in the standard action angle sequence; for a sit-up, for example, the marker nodes are the four nodes of lying back, lying down, sitting up and sitting down. A mapping phase point is the phase point in the current sequence that corresponds to a key phase point in the standard sequence. This embodiment then takes the difference information between the key and mapping phase points (the angle difference and the timing difference) as the motion coordination deviation, and generates posture correction prompt information from it: when the angle difference is large, it prompts which local joint angles need correction and by what amount; when the timing difference is large, it prompts that the coordination between joints needs correction and how the timing of the key phase points should be adjusted (e.g., advancing or delaying the execution time of a marker node in the motion to be analyzed).
In some embodiments, step C30 may include steps D10 to D20:
Step D10, mapping the current action angle sequence and the standard action angle sequence onto the same time axis, and calculating the angle difference and the timing difference between key phase points in the standard action angle sequence and the corresponding mapping phase points in the current action angle sequence;
and step D20, taking the angle difference and the timing difference as the motion coordination deviation.
In this embodiment, the current action angle sequence and the standard action angle sequence may be mapped onto the same time axis, and each key phase point in the standard sequence is then matched against the current sequence to obtain its corresponding mapping phase point. The angle difference and timing difference between each key phase point and its mapping phase point are calculated and taken together as the motion coordination deviation, so that the degree to which the motion to be analyzed is standard is determined from the angle difference, and its degree of coordination from the timing difference.
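For illustration, a sketch of this phase-point comparison; nearest-neighbour matching stands in for the alignment described above, and the function name and matching rule are assumptions.

```python
import numpy as np

def motion_coordination_deviation(current, standard, key_points):
    """current, standard: (T, J) arrays of local joint angles over time.
    key_points: frame indices of the marker nodes in the standard sequence.
    Both sequences are mapped onto the same normalized time axis, and each
    key phase point is matched to its mapping phase point in the current
    sequence by nearest neighbour in angle space."""
    t_cur = np.linspace(0.0, 1.0, len(current))
    t_std = np.linspace(0.0, 1.0, len(standard))
    deviations = []
    for k in key_points:
        dists = np.linalg.norm(current - standard[k], axis=1)
        m = int(np.argmin(dists))               # mapping phase point
        angle_diff = current[m] - standard[k]   # per-joint angle difference
        timing_diff = t_cur[m] - t_std[k]       # timing difference
        deviations.append((angle_diff, timing_diff))
    return deviations
```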
In the third embodiment of the present application, the joint chain and the standard action angle sequence related to the motion to be analyzed are acquired in response to a motion analysis instruction, and a current action angle sequence is generated from each local joint angle in the joint chain. The current and standard action angle sequences are aligned to obtain the motion coordination deviation, and posture correction prompt information is generated from that deviation. By non-linearly aligning the two sequences, this embodiment achieves multi-joint collaborative analysis on the time axis, locates the motion coordination deviation, identifies deviations in both the angles and the coordination of the motion to be analyzed, and gives posture correction prompts accordingly.
The present application further provides an intelligent wearing system which, as shown in fig. 9, comprises a smart garment 201 and a control terminal 202 in communication connection with the smart garment 201;
The smart garment 201 comprises a base fabric layer and a plurality of fiber-based stretch sensors woven into the base fabric layer in areas corresponding to respective human joints;
the control terminal 202 is configured to implement the steps of the gesture processing method of the above-described embodiment.
The smart garment 201 may take the form of a coat, trousers, gloves, sleeves, etc., and its base fabric layer (the black area in fig. 9) is made of elastic fibers (spandex, polyester fiber, nylon, etc., or blended elastic fibers) so that it fits closely against the body surface of the user. The elastic fibers may additionally be given functional treatments (such as an antibacterial treatment, a wash-resistant coating or a fatigue-resistant coating). The smart garment 201 comprises a plurality of fiber-based stretching sensors woven into the areas of its base fabric layer corresponding to the respective human joints (the green areas in fig. 9), so as to detect the stretching amplitude of each such area and characterize the corresponding angle value. It will be appreciated that a fiber-based stretching sensor is provided at least in a region where stretching is produced when the corresponding joint moves. Illustratively, the fiber-based stretching sensors may be attached to the base fabric layer by weaving, embroidering or adhering.
The control terminal 202 may be a terminal device independent of the smart garment, or a control unit integrated into the smart garment. The control terminal 202 may include at least one processor and a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor so that the at least one processor can perform the gesture processing method of the first embodiment. The control terminal of the intelligent wearing system in the embodiments of the present application may include, but is not limited to, terminal devices such as a smartphone, a smart watch, a head-mounted display device, a notebook computer, a PDA (Personal Digital Assistant), a tablet computer, and a desktop computer.
In some embodiments, smart garment 201 further includes a data acquisition module;
The data acquisition module is electrically connected with each fiber-based stretching sensor through a wire, and is used for acquiring sensor signals of each fiber-based stretching sensor and sending the sensor signals to the control terminal 202.
As shown in fig. 10, taking a smart garment in clothing form as an example, the thick black line segments in fig. 10 are fiber-based stretching sensors and the thin light-red lines are wires. As shown in fig. 11, taking a glove-form smart garment as an example, the thick black line segments in fig. 11 are fiber-based stretching sensors and the thin light-red lines are wires. Accordingly, the smart garment 201 further includes a data acquisition module electrically connected to each fiber-based stretching sensor through wires, for acquiring the sensor signals of the fiber-based stretching sensors and transmitting them to the control terminal 202.
The smart wearable system illustrated in fig. 9 is merely an example, and should not be construed as limiting the functionality and scope of use of embodiments of the present application.
By adopting the gesture processing method of the above embodiment, the intelligent wearing system provided by the application solves the technical problem that the user gesture recognized by existing gesture recognition schemes has low accuracy. Compared with the prior art, the beneficial effects of the intelligent wearing system provided by the application are the same as those of the gesture processing method provided by the above embodiment, and the other technical features of the intelligent wearing system are the same as those disclosed in the method of the previous embodiment, and are not repeated here.
It is to be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The present application provides a computer-readable storage medium having computer-readable program instructions (i.e., a computer program) stored thereon for performing the gesture processing method in the above-described embodiments.
The computer-readable storage medium provided by the present application may be, for example, a USB flash disk, or may be, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system or device, or any suitable combination of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, the computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to electrical wiring, optical fiber cable, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable storage medium may be included in the control terminal or may exist alone without being incorporated in the control terminal.
The computer readable storage medium carries one or more programs, which when executed by a control terminal, cause the control terminal to acquire a stretch detection value of the fiber-based stretch sensor, determine a corresponding angle value according to the stretch detection value, and map the angle value onto coordinate axes of a three-dimensional coordinate system of each human joint to obtain a three-dimensional gesture.
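To make the mapping step concrete, the Python sketch below assigns each sensor's angle value to an assumed coordinate axis of its joint's local three-dimensional coordinate system; the sensor-to-axis table and all values are invented for illustration and do not reflect a particular assignment prescribed by the application.

```python
# Illustrative sketch only. The sensor-to-(joint, axis) table is an
# invented example of how sensor placement could determine the axis
# onto which each angle value is mapped.

# Hypothetical placement table: sensor id -> (joint, local axis).
SENSOR_AXIS_MAP = {
    "knee_front": ("left_knee", "x"),  # e.g. flexion/extension
    "knee_side":  ("left_knee", "z"),  # e.g. rotation
    "hip_front":  ("left_hip",  "x"),
}

def angles_to_pose(angle_values: dict) -> dict:
    """Collect per-sensor angle values into per-joint 3D angles."""
    pose = {}
    for sensor_id, angle in angle_values.items():
        joint, axis = SENSOR_AXIS_MAP[sensor_id]
        pose.setdefault(joint, {"x": 0.0, "y": 0.0, "z": 0.0})[axis] = angle
    return pose

pose = angles_to_pose({"knee_front": 28.8, "knee_side": 4.0,
                       "hip_front": 12.0})
print(pose)
# {'left_knee': {'x': 28.8, 'y': 0.0, 'z': 4.0},
#  'left_hip': {'x': 12.0, 'y': 0.0, 'z': 0.0}}
```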
Computer program code for carrying out the operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the C language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present application may be implemented in software or in hardware. In some cases, the name of a module does not constitute a limitation of the module itself.
The readable storage medium provided by the application is a computer-readable storage medium storing computer-readable program instructions (i.e., a computer program) for performing the gesture processing method, and can therefore solve the technical problem that the user gesture recognized by existing gesture recognition schemes has low accuracy. Compared with the prior art, the beneficial effects of the computer-readable storage medium provided by the application are the same as those of the gesture processing method provided by the above embodiment, and are not repeated here.
The application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the gesture processing method as described above.
The computer program product provided by the application can solve the technical problem that the user gesture recognized by existing gesture recognition schemes has low accuracy. Compared with the prior art, the beneficial effects of the computer program product provided by the application are the same as those of the gesture processing method provided by the above embodiment, and are not repeated here.
The foregoing description is only a partial embodiment of the present application, and is not intended to limit the scope of the present application, and all the equivalent structural changes made by the description and the accompanying drawings under the technical concept of the present application, or the direct/indirect application in other related technical fields are included in the scope of the present application.