CN115814361A - Intelligent fitness action detection and guidance method and system
- Publication number: CN115814361A (Application CN202211694276.5A)
- Authority: CN (China)
- Prior art keywords: user, action, motion, data, standard
- Prior art date: 2022-12-28
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses an intelligent fitness action detection and guidance method comprising the following steps: receiving personal information input by a user and training video information selected by the user; acquiring the depth values and amplitude values of pixel points at different positions to form user data; performing data cleaning, data screening and standardization to form a preprocessed data set, and performing three-dimensional reconstruction on the data set with a computer-vision-based processing method to construct a three-dimensional space of the user's motion area; identifying and segmenting according to the key joint points of the human body, and labeling the user's motion actions; and selecting the corresponding preset user action model, comparing the user's motion actions with it, and finally outputting a judgment result. The invention also discloses an intelligent fitness action detection and guidance system. The method and system provided by the invention require no guidance from a professional coach, are low in cost, and guarantee that users' fitness actions are standard and safe.
Description
Technical Field
The invention relates to an intelligent fitness action detection and guidance method and system, and belongs to the technical field of intelligent action identification and detection.
Background
In recent years, people have become increasingly aware of the importance of exercise; gyms and fitness apps have proliferated, and many people seeking good fitness results hire personal trainers to guide their movements. Fitness apps, however, only let the user imitate the actions in a video and provide no feedback or guidance on the user's own actions. A personal trainer can improve the standardization of exercise, but the cost remains too high for ordinary people to bear. Exercising at home has therefore also become popular, yet although home exercise is simple, ordinary people cannot master good training methods on their own: the desired effect may not be achieved, and physical injury may even result.
To address these problems, the existing technical means for guiding the standardization of personal fitness actions are as follows: 1) using smart wearable devices to collect the exerciser's motion or physiological information, reconstruct the body posture, and give fitness guidance; this approach requires the user to wear a device, which is costly and hinders the user's movement to some extent (patent CN202111386083); 2) capturing the user's movements by video or image processing and recognizing them through modeling, deep learning and the like; this approach is slow in recognition and requires a separate camera device (patents CN201911143087, CN202110269986); 3) smart fitness mirrors, which have recently come to market, let exercisers observe in the mirror whether their actions are standard, and some researchers study how to provide accurate fitness guidance in real time (patents CN202010895377, CN202111460765); this approach still requires users to buy a new appliance, and fitness mirrors are expensive and cannot be popularized quickly.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an intelligent fitness action detection and guidance method and system that require no guidance from a professional coach, are low in cost, and ensure that users' fitness actions are standard and safe.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
an intelligent fitness action detection and guidance method comprises the following steps:
receiving personal related information input by a user and training video information selected by the user;
acquiring depth values and amplitude values of pixel points at different positions, forming user data and storing the user data;
acquiring the user data, performing data cleaning, data screening and standardization on the user data to form a preprocessed data set, performing three-dimensional reconstruction of the user's actions on the data set by adopting a computer-vision-based processing method, and constructing a three-dimensional space of the user's motion area;
fitting the three-dimensional space of the user motion area with a human body standard 3D model, identifying and dividing according to human body key joint points, and marking the motion action of the user;
selecting a corresponding preset user action model according to the personal related information and the training video information, comparing the motion action of the user with the preset user action model, judging that the motion action is standard if the difference range is within a threshold range, otherwise judging that the motion action is not standard, and finally outputting a judgment result.
The acquisition of the depth values and the amplitude values of the pixel points at different positions comprises the following steps: when the user exercises along with a video on the television, the visible light signal emitted by the display screen of the smart television reaches the user, is reflected by the user's body, and returns to the display screen, where it is received by the depth sensor. The depth sensor calculates the distance between the sensor and the measured object, i.e. the depth value, from the time difference between emission and reflection of the LED's visible light, and obtains the amplitude value of the object from the amplitude and phase difference between the transmitted and received light.
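As a minimal sketch of this step: the patent does not specify the sensor model, so the following assumes a common four-phase continuous-wave time-of-flight formulation, in which both the per-pixel depth (via the phase shift, equivalent to the emission/reflection time difference) and the per-pixel amplitude fall out of four phase-shifted intensity samples. All names are illustrative, not taken from the patent.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def tof_depth_and_amplitude(a0, a1, a2, a3, mod_freq):
    """Per-pixel depth and amplitude from four intensity samples taken at
    0, 90, 180 and 270 degree phase offsets of the modulated light source.

    a0..a3:   (H, W) float arrays of received intensities
    mod_freq: modulation frequency of the emitted light, Hz
    """
    # The phase shift of the received light encodes the round-trip time.
    phase = np.mod(np.arctan2(a3 - a1, a0 - a2), 2.0 * np.pi)
    # Halve the round trip: the 4*pi in the denominator does the halving.
    depth = C * phase / (4.0 * np.pi * mod_freq)
    # Amplitude of the reflected signal at each pixel.
    amplitude = 0.5 * np.sqrt((a3 - a1) ** 2 + (a0 - a2) ** 2)
    return depth, amplitude
```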
The establishment of the preset user action model comprises the following steps: collecting, offline, multiple motion video samples of professional fitness coaches of different ages, sexes, heights and weights performing the same actions and postures for training and learning, and constructing, with a computer vision processing method and according to the standard actions in the motion videos, a standard depth three-dimensional graph and the coordinate ranges of the angles and directions of the body joint points under the standard actions.
When a motion action is judged to be non-standard, the coordinate ranges of the angles and directions of adjacent body joint points are calculated and compared with those of the standard body joint points, and the user is reminded to adjust the action to reach the standard.
The step of comparing the user's motion action with the preset user action model comprises the following: calculating Euclidean distances over the image to match feature points; according to the Euclidean-distance decision rule, for a point of the image, solving its Euclidean distances to the other feature points to obtain the minimum and the second-minimum distance; if the minimum value is less than or equal to 0.8 times the second-minimum value, the feature points are a pair of matching points, and otherwise they are unmatched points.
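A minimal sketch of this matching rule, assuming feature descriptors have already been extracted upstream from the user frame and the model frame (all names are hypothetical):

```python
import numpy as np

def match_feature_points(desc_user, desc_model, ratio=0.8):
    """Match descriptors with the minimum / second-minimum distance rule.

    desc_user:  (N, D) descriptors from the user's action image
    desc_model: (M, D) descriptors from the preset action model, M >= 2
    Returns a list of (user_index, model_index) matched pairs.
    """
    matches = []
    for i, d in enumerate(desc_user):
        # Euclidean distance from this point to every model feature point.
        dists = np.linalg.norm(desc_model - d, axis=1)
        nearest, second = np.partition(dists, 1)[:2]
        if nearest <= ratio * second:      # a pair of matching points
            matches.append((i, int(np.argmin(dists))))
        # otherwise: an unmatched point
    return matches
```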
An intelligent fitness action detection and guidance system comprises a user data input module, a user action acquisition module, a centralized information analysis and processing module, and an information execution module, wherein:
the user data input module is used for receiving personal related information input by a user and training video information selected by the user;
the user action acquisition module is used for acquiring depth values and amplitude values of pixel points at different positions, forming user data and storing the user data;
the centralized information analysis and processing module comprises an information pulling module, an image preprocessing module, a model calculation module and a data judgment module; the information pulling module acquires the user data; the image preprocessing module performs data cleaning, data screening and standardization on the user data to form a preprocessed data set; the model calculation module performs three-dimensional reconstruction of the user's actions on the data set with a computer-vision-based processing method, constructs the three-dimensional space of the user's motion area, fits that space to the standard human 3D model, identifies and segments according to the key joint points of the human body, and labels the user's motion actions; the data judgment module compares the user's motion actions with the preset user action model, judges an action standard if the difference is within the threshold range and non-standard otherwise, and finally outputs the judgment result;
the information execution module is used for displaying the judgment result and reminding the user to adjust the action to reach the standard.
The invention has the following beneficial effects: in the provided intelligent fitness action detection and guidance method and system, a depth sensor mounted on the LED screen of a smart television acquires user data, the centralized information analysis and processing module computes the action model data and evaluates the standardization of the actions, and the information execution module finally displays and sends a prompt message.
Drawings
FIG. 1 is a block diagram of the intelligent fitness action detection and guidance system of the present invention;
FIG. 2 is a flow chart of the intelligent fitness action detection and guidance method of the present invention.
Detailed Description
The present invention is further described below with reference to the accompanying drawings. The following examples are intended only to illustrate the technical solutions of the invention clearly and should not be taken as limiting its scope.
The television is a household necessity, and smart televisions are now increasingly popular, ever cheaper, and ever more complete in function. The LED display screen adopted by smart television equipment can be used for illumination and display, and is increasingly used for communication as well. The invention therefore combines the LED visible light of smart television equipment with a low-cost depth sensor to collect the user's actions. Visible light has a frequency on the order of 4 x 10^14 Hz; this high frequency allows real-time acquisition and analysis of information, model comparison against the preset standard actions, intelligent recognition of whether the user's actions are standard, and real-time voice or graphical-interface prompts guiding the user to adjust the movement.
As shown in FIG. 2, the intelligent fitness action detection and guidance method disclosed by the invention comprises the following steps:
Step one: before starting to exercise, the user inputs personal information, such as sex, age and weight, through the user data input module, which supports this input. Then, after the user selects a training video, the type of the training video is recorded so that the motion model library data can be obtained later.
Step two: when the user exercises along with the video on the television, the visible light signal emitted by the LED screen of the smart television reaches the user, is reflected by the user's body, and returns to the display screen, where it is received by the depth sensor. The depth sensor calculates the distance between the sensor and the measured object, i.e. the depth value, from the time difference between the emission and reflection of the LED's visible light, and obtains the amplitude value of the object from the amplitude and phase difference between the transmitted and received light. The depth values and amplitude values of the pixel points at different positions form the user data, which is stored.
Step three: the centralized information analysis and processing module acquires the depth and amplitude values of the pixel points and preprocesses the user's action information. The acquired user data undergoes data cleaning, data screening and standardization to form a preprocessed data set, on which a computer-vision-based processing method performs three-dimensional reconstruction of the user's actions and constructs the three-dimensional space of the user's motion area.
Step four: the standard human 3D model is fitted to the constructed three-dimensional space, identification and segmentation are performed according to the key joint points of the human body, and the user's motion actions are labeled. The key joints marked on the three-dimensional structure of the human body comprise the head, the left and right shoulders, the chest, the left and right elbow joints, the left and right wrists, the left and right hands, the left and right hips, the left and right knees, the left and right ankles, and the left and right feet. Meanwhile, the angles and directions of adjacent body joint points are calculated and their coordinate ranges recorded, as sketched below.
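The angle computation of step four might be sketched as follows; the joint coordinates are assumed to come from the fitted 3D model above, and the names and example coordinates are illustrative:

```python
import numpy as np

def joint_angle(parent, joint, child):
    """Angle (degrees) at `joint` between the segments joint->parent and
    joint->child, each point being a 3D (x, y, z) coordinate."""
    u = np.asarray(parent, float) - np.asarray(joint, float)
    v = np.asarray(child, float) - np.asarray(joint, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Example: left-elbow angle from the shoulder, elbow and wrist key points.
elbow_angle = joint_angle(parent=(0.00, 1.50, 0.20),   # left shoulder
                          joint=(0.10, 1.20, 0.25),    # left elbow
                          child=(0.15, 0.90, 0.30))    # left wrist
```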
Step five: the user's motion actions collected in real time are compared with the preset user action model; if the difference is within the threshold range the action is judged standard, otherwise non-standard. Specifically, a user action judgment module compares the user's actual action input with the preset user action model, calculating Euclidean distances and matching feature points. According to the Euclidean-distance decision rule, for a point in the image, its Euclidean distances to the other feature points are computed to obtain the minimum and second-minimum distances; if the minimum is less than or equal to 0.8 times the second minimum, the feature points are a pair of matching points; otherwise they are unmatched points. The action duration is also marked: frame data of the standard data are judged recursively, and when the data of the next frame differ from those of the current frame, the current time is recorded as the end time of the action, as sketched below.
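The frame-recursion rule for the action end time might look like the following sketch, where `frames_match` stands in for whatever per-frame comparison the implementation uses (an assumption, since the patent does not define it):

```python
def action_end_time(frames, timestamps, frames_match):
    """Scan frames in order; when the next frame's data differ from the
    current frame's, record the current time as the end of the action."""
    for k in range(len(frames) - 1):
        if not frames_match(frames[k], frames[k + 1]):
            return timestamps[k]
    return timestamps[-1]  # the action continues through the last frame
```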
The preset user action model comprises a three-dimensional model of the user action and the action duration. According to the standard actions in the motion video, a computer vision processing method constructs a standard depth three-dimensional image and the coordinate ranges of the angles and directions of the body joint points under the standard actions. To meet the fitness needs of different people, multiple samples of the same actions and postures performed by professional fitness coaches of different ages, sexes, heights and weights are collected offline for training and learning, and the coordinate ranges of the standard actions are calibrated, as sketched below.
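Calibrating the standard coordinate range from multiple coach samples could be sketched as follows (illustrative names; each coach sample is assumed to reduce to one vector of joint angles):

```python
import numpy as np

def calibrate_standard_ranges(coach_angle_samples, margin_deg=0.0):
    """Per-joint [min, max] angle range over offline coach samples.

    coach_angle_samples: (S, J) array, S coach samples x J joint angles (deg)
    margin_deg:          optional tolerance added on both sides
    Returns a (J, 2) array of [lower, upper] bounds per joint.
    """
    samples = np.asarray(coach_angle_samples, dtype=float)
    lower = samples.min(axis=0) - margin_deg
    upper = samples.max(axis=0) + margin_deg
    return np.stack([lower, upper], axis=1)
```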
Step six: for the identified non-standard action points, the coordinate ranges of the angles and directions of adjacent body joint points are calculated and compared with those of the standard body joint points to judge how the amplitude and direction differ from the normal action. For example, by calculating and comparing the connection angle of each joint in the non-standard action, the user is reminded to adjust the action to reach the standard.
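Building on the calibrated ranges above, a sketch of flagging non-standard joints and generating adjustment reminders might be (hypothetical names; the hint wording is illustrative):

```python
def adjustment_hints(user_angles, standard_ranges, joint_names):
    """Compare each user joint angle with its standard [min, max] range and
    return a textual hint for every joint falling outside that range."""
    hints = []
    for angle, (lo, hi), name in zip(user_angles, standard_ranges, joint_names):
        if angle < lo:
            hints.append(f"{name}: raise the joint angle by {lo - angle:.1f} deg")
        elif angle > hi:
            hints.append(f"{name}: lower the joint angle by {angle - hi:.1f} deg")
    return hints
```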
Step seven: the user's motion actions are displayed in real time on the LED screen of the smart television together with their differences from the standard actions; a voice module reminds the user whether the actions are standard and, if an action is not standard, gives an adjustment scheme.
As shown in FIG. 1, the invention also discloses an intelligent fitness action detection and guidance system, which comprises a user data input module, a user action acquisition module, a centralized information analysis and processing module, and an information execution module, wherein:
the user data input module is used for receiving personal related information input by a user and training video information selected by the user;
the user action acquisition module comprises an LED display screen, a depth sensor module and a data storage module and is used for acquiring depth values and amplitude values of pixel points at different positions to form and store user data;
the centralized information analysis and processing module comprises an information pulling module, an image preprocessing module, a model calculation module and a data judgment module; the information pulling module acquires the user data; the image preprocessing module performs data cleaning, data screening and standardization on the user data to form a preprocessed data set; the model calculation module performs three-dimensional reconstruction of the user's actions on the data set with a computer-vision-based processing method, constructs the three-dimensional space of the user's motion area, fits that space to the standard human 3D model, identifies and segments according to the key joint points of the human body, and labels the user's motion actions; the data judgment module compares the user's motion actions with the preset user action model, judges an action standard if the difference is within the threshold range and non-standard otherwise, and finally outputs the judgment result;
the information execution module displays the user's actions on the smart display screen together with a schematic of the standard action, and gives the user an action adjustment scheme by voice in real time according to the standardization of the actions.
The above description covers only the preferred embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and adaptations without departing from the principles of the invention, and such modifications are intended to fall within the scope of the invention.
Claims (6)
1. An intelligent fitness action detection and guidance method, characterized by comprising the following steps:
receiving personal related information input by a user and training video information selected by the user;
acquiring depth values and amplitude values of pixel points at different positions, forming user data and storing the user data;
acquiring the user data, performing data cleaning, data screening and standardization on the user data to form a preprocessed data set, performing three-dimensional reconstruction of the user's actions on the data set by adopting a processing method based on computer vision, and constructing a three-dimensional space of the user's motion area;
fitting the three-dimensional space of the user motion area with a human body standard 3D model, identifying and dividing according to human body key joint points, and marking the motion action of the user;
selecting a corresponding preset user action model according to the personal related information and the training video information, comparing the motion action of the user with the preset user action model, judging that the motion action is standard if the difference range is within a threshold range, otherwise judging that the motion action is not standard, and finally outputting a judgment result.
2. The intelligent fitness action detection and guidance method of claim 1, wherein the acquisition of the depth values and the amplitude values of the pixel points at different positions comprises: when the user exercises along with a video on the television, the visible light signal emitted by the display screen of the smart television reaches the user, is reflected by the user's body, and returns to the display screen, where it is received by the depth sensor; the depth sensor calculates the distance between the sensor and the measured object, i.e. the depth value, from the time difference between emission and reflection of the LED's visible light, and obtains the amplitude value of the object from the amplitude and phase difference between the transmitted and received light.
3. The intelligent fitness action detection and guidance method of claim 1, wherein the establishment of the preset user action model comprises: collecting, offline, multiple motion video samples of professional fitness coaches of different ages, sexes, heights and weights performing the same actions and postures for training and learning, and constructing, with a computer vision processing method and according to the standard actions in the motion videos, a standard depth three-dimensional graph and the coordinate ranges of the angles and directions of the body joint points under the standard actions.
4. The intelligent fitness action detection and guidance method of claim 3, wherein, when a motion action is judged to be non-standard, the coordinate ranges of the angles and directions of adjacent body joint points are calculated and compared with those of the standard body joint points, and the user is reminded to adjust the action to reach the standard.
5. The intelligent fitness action detection and guidance method of claim 1, wherein the comparison of the user's motion action with the preset user action model comprises: calculating Euclidean distances over the image to match feature points; according to the Euclidean-distance decision rule, for a point of the image, solving its Euclidean distances to the other feature points to obtain the minimum and the second-minimum distance; if the minimum value is less than or equal to 0.8 times the second-minimum value, the feature points are a pair of matching points, and otherwise they are unmatched points.
6. An intelligent fitness action detection and guidance system, characterized by comprising a user data input module, a user action acquisition module, a centralized information analysis and processing module, and an information execution module, wherein:
the user data input module is used for receiving personal related information input by a user and training video information selected by the user;
the user action acquisition module is used for acquiring depth values and amplitude values of pixel points at different positions, forming user data and storing the user data;
the centralized information analysis and processing module comprises an information pulling module, an image preprocessing module, a model calculation module and a data judgment module; the information pulling module acquires the user data; the image preprocessing module performs data cleaning, data screening and standardization on the user data to form a preprocessed data set; the model calculation module performs three-dimensional reconstruction of the user's actions on the data set with a computer-vision-based processing method, constructs the three-dimensional space of the user's motion area, fits that space to the standard human 3D model, identifies and segments according to the key joint points of the human body, and labels the user's motion actions; the data judgment module compares the user's motion actions with the preset user action model, judges an action standard if the difference is within the threshold range and non-standard otherwise, and finally outputs the judgment result;
the information execution module is used for displaying the judgment result and reminding the user to adjust the action to reach the standard.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211694276.5A | 2022-12-28 | 2022-12-28 | Intelligent fitness action detection and guidance method and system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN115814361A | 2023-03-21 |
Family ID: 85518862
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211694276.5A (Withdrawn) | Intelligent fitness action detection and guidance method and system | 2022-12-28 | 2022-12-28 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN115814361A |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118172833A | 2024-03-13 | 2024-06-11 | Chengdu Sport University | Method, device and equipment for screening sports injury in badminton |
| CN118412123A | 2024-04-03 | 2024-07-30 | Ping An Technology (Shenzhen) Co., Ltd. | User portrait method based on human body gesture shape and related equipment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| 2023-03-21 | WW01 | Invention patent application withdrawn after publication | Application publication date: 20230321 |