
CN109164915A - A gesture recognition method, device, system and equipment - Google Patents

A gesture recognition method, device, system and equipment

Info

Publication number
CN109164915A
CN109164915A
Authority
CN
China
Prior art keywords
information
preset
gesture
input
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810941071.XA
Other languages
Chinese (zh)
Other versions
CN109164915B (en)
Inventor
Xu Qiang
Fang Yougang
Liu Yaozhong
Liu Gengye
Li Yuexing
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Time Change Communication Technology Co Ltd
Original Assignee
Hunan Time Change Communication Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Time Change Communication Technology Co Ltd filed Critical Hunan Time Change Communication Technology Co Ltd
Priority to CN201810941071.XA priority Critical patent/CN109164915B/en
Publication of CN109164915A publication Critical patent/CN109164915A/en
Application granted granted Critical
Publication of CN109164915B publication Critical patent/CN109164915B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application discloses a gesture recognition method, device, system and equipment. The method includes: 101. constructing a virtual hand model corresponding to the human hand at the current moment according to the coordinate information corresponding to each of multiple scattering centers of the human hand sent by a gesture radar and received in real time; 102. executing step 101 in a loop until no coordinate information corresponding to the multiple scattering centers of the human hand sent by the gesture radar is received within a preset time interval; 103. determining the coordinate information of a preset scattering center on the virtual hand models at different moments, and connecting the coordinate information of the preset scattering center at different moments in chronological order to obtain the coordinate change trajectory of the preset scattering center, the preset scattering center being one of the multiple scattering centers; 104. obtaining the information to be input according to the coordinate change trajectory, and displaying the content corresponding to the information to be input according to the information to be input.

Description

A gesture recognition method, device, system and equipment
Technical field
This application relates to the technical field of gesture recognition, and in particular to a gesture recognition method, device, system and equipment.
Background art
With the continuous development of mobile terminal devices and the emergence of virtual reality devices, human-computer interaction is becoming increasingly important. Gesture recognition, as an important branch of human-computer interaction, has many advantages, such as being well adapted to people's living habits and offering a high degree of freedom.
Traditional gesture recognition technologies are mostly implemented with cameras, but these technologies have various disadvantages. Interaction based on optical cameras requires acquiring a large amount of image data at different depths of field and needs powerful data-processing capability to extract the required information, which heavily occupies hardware resources; moreover, an optical camera cannot be occluded, and the user's privacy is at risk of leaking. The drawbacks of infrared cameras are similar to those of optical cameras, while their accuracy is lower and they are easily disturbed by heat sources and strong light sources. Radar-based human-computer interaction has therefore begun to emerge, but current radar-based human-computer interaction only supports the control of simple gestures; its stability is poor, its gesture recognition success rate is low, and it cannot handle complex interaction, so the user experience is poor.
Summary of the invention
The embodiments of the present application provide a gesture recognition method, device, system and equipment for gesture recognition, which solve the technical problem that human-computer interaction devices currently based on radar technology only support the control of simple gestures, have poor stability and a low gesture recognition success rate, and cannot handle complex interaction, resulting in a poor user experience.
In view of this, a first aspect of the present application provides a gesture recognition method, comprising:
101. constructing a virtual hand model corresponding to the human hand at the current moment according to the coordinate information corresponding to each of multiple scattering centers of the human hand sent by a gesture radar and received in real time;
102. executing step 101 in a loop until no coordinate information corresponding to the multiple scattering centers of the human hand sent by the gesture radar is received within a preset time interval;
103. determining the coordinate information of a preset scattering center on the virtual hand models at different moments, and connecting the coordinate information of the preset scattering center at different moments in chronological order to obtain the coordinate change trajectory of the preset scattering center, the preset scattering center being one of the multiple scattering centers;
104. obtaining the information to be input according to the coordinate change trajectory, and displaying the content corresponding to the information to be input according to the information to be input.
Preferably, the method further comprises:
105. while receiving the coordinate information corresponding to each of the multiple scattering centers sent by the gesture radar, receiving the motion information corresponding to each of the multiple scattering centers sent by the gesture radar, the motion information including velocity information, acceleration information and direction information;
step 102 is specifically:
102. executing steps 101 and 105 in a loop until no coordinate information corresponding to the multiple scattering centers of the human hand sent by the gesture radar is received within a preset time interval;
if the virtual hand model at a first moment has not been constructed, determining the coordinate information of the preset scattering center on the virtual hand model at the first moment specifically includes:
determining the motion information of the preset scattering center on the virtual hand model at a second moment, and determining, according to the motion information at the second moment, the coordinate information of the preset scattering center on the virtual hand model at the first moment, the second moment being the moment immediately before the first moment, and the first moment being a moment within the period from the moment step 101 is executed for the first time to the moment step 101 is executed for the last time.
Preferably, the information to be input is an image to be input;
step 104 specifically includes:
104. obtaining the image to be input according to the coordinate change trajectory, and displaying the image to be input.
Preferably, the information to be input is text to be input;
step 104 specifically includes:
104. obtaining the text to be input according to the coordinate change trajectory, comparing the text to be input with a preset text library to obtain the character corresponding to the text to be input, and displaying the character.
A second aspect of the present application provides a gesture recognition device, comprising:
a model construction unit, configured to construct a virtual hand model corresponding to the human hand at the current moment according to the coordinate information corresponding to each of multiple scattering centers of the human hand sent by a gesture radar and received in real time;
a first loop unit, configured to trigger the model construction unit repeatedly until no coordinate information corresponding to the multiple scattering centers of the human hand sent by the gesture radar is received within a preset time interval;
a trajectory determination unit, configured to determine the coordinate information of a preset scattering center on the virtual hand models at different moments, and to connect the coordinate information of the preset scattering center at different moments in chronological order to obtain the coordinate change trajectory of the preset scattering center, the preset scattering center being one of the multiple scattering centers;
a display unit, configured to obtain the information to be input according to the coordinate change trajectory, and to display the content corresponding to the information to be input according to the information to be input.
Preferably, the device further comprises:
a motion information acquisition unit, configured to receive, while receiving the coordinate information corresponding to each of the multiple scattering centers sent by the gesture radar, the motion information corresponding to each of the multiple scattering centers sent by the gesture radar, the motion information including velocity information, acceleration information and direction information;
the first loop unit is specifically configured to trigger the motion information acquisition unit and the model construction unit repeatedly until no coordinate information corresponding to the multiple scattering centers of the human hand sent by the gesture radar is received within a preset time interval;
if the virtual hand model at a first moment has not been constructed, determining the coordinate information of the preset scattering center on the virtual hand model at the first moment specifically includes:
determining the motion information of the preset scattering center on the virtual hand model at a second moment, and determining, according to the motion information at the second moment, the coordinate information of the preset scattering center on the virtual hand model at the first moment, the second moment being the moment immediately before the first moment, and the first moment being a moment within the period from the moment the model construction unit is triggered for the first time to the moment the model construction unit is triggered for the last time.
Preferably, the information to be input is an image to be input;
the display unit is specifically configured to obtain the image to be input according to the coordinate change trajectory, and to display the image to be input.
Preferably, the information to be input is text to be input;
the display unit is specifically configured to obtain the text to be input according to the coordinate change trajectory, compare the text to be input with a preset text library to obtain the character corresponding to the text to be input, and display the character.
A third aspect of the present application provides a gesture recognition system, comprising: a gesture radar and the above gesture recognition device;
the gesture radar is communicatively connected to the gesture recognition device;
the gesture radar is configured to transmit radar signals to the human hand in real time;
the gesture radar is further configured to receive the echo signals reflected by the multiple scattering centers of the human hand, calculate the coordinate information corresponding to each scattering center from each echo signal, and send the coordinate information corresponding to each of the multiple scattering centers to the gesture recognition device.
A fourth aspect of the present application provides gesture recognition equipment, the equipment comprising a processor and a memory;
the memory is configured to store program code and to transfer the program code to the processor;
the processor is configured to execute the aforementioned gesture recognition method according to the instructions in the program code.
As can be seen from the above technical solutions, the embodiments of the present application have the following advantages:
The embodiments of the present application provide a gesture recognition method comprising: because the gesture radar sends the coordinate information corresponding to each of the multiple scattering centers of the human hand in real time, a virtual hand model can be constructed from the received coordinate information of the multiple scattering centers until no coordinate information of the multiple scattering centers of the human hand is received from the gesture radar within a preset time interval, i.e., until the user's gesture control operation ends; throughout the user's operation, each moment corresponds to one virtual hand model. The coordinate change trajectory of the preset scattering center is then determined from the coordinate information of the preset scattering center on the virtual hand models at different moments, the information to be input can be determined from this trajectory, and finally the content corresponding to the information to be input is displayed. Whatever the user's hand inputs during the recognition process, the input content can be obtained accurately from the coordinate change trajectory of the preset scattering center of the virtual hand model. This solves the technical problem that current radar-based human-computer interaction only supports the control of simple gestures, has poor stability and a low gesture recognition success rate, and cannot handle complex interaction, resulting in a poor user experience.
Brief description of the drawings
Fig. 1 is a flow diagram of a first embodiment of a gesture recognition method in an embodiment of the present application;
Fig. 2 is a flow diagram of a second embodiment of a gesture recognition method in an embodiment of the present application;
Fig. 3 is a structural diagram of an embodiment of a gesture recognition device in an embodiment of the present application;
Fig. 4 is a structural diagram of an embodiment of a gesture recognition system in an embodiment of the present application.
Detailed description of the embodiments
The embodiments of the present application provide a gesture recognition method, device, system and equipment for gesture recognition, which solve the technical problem that human-computer interaction devices currently based on radar technology only support the control of simple gestures, have poor stability and a low gesture recognition success rate, and cannot handle complex interaction, resulting in a poor user experience.
In order to enable those skilled in the art to better understand the solution of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below in conjunction with the accompanying drawings. It is obvious that the described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present application.
Referring to Fig. 1, a flow diagram of a first embodiment of a gesture recognition method in an embodiment of the present application includes:
Step 101: constructing the virtual hand model corresponding to the human hand at the current moment according to the coordinate information corresponding to each of the multiple scattering centers of the human hand sent by the gesture radar and received in real time.
It should be noted that different positions of the human hand (such as a finger or the back of the hand) return different echo signals to the gesture radar, i.e., one echo signal corresponds to one position of the hand. The virtual hand model corresponding to the human hand at the current moment is constructed according to the coordinate information corresponding to each of the multiple scattering centers of the human hand sent by the gesture radar and received in real time.
Step 102: executing step 101 in a loop until no coordinate information corresponding to the multiple scattering centers of the human hand sent by the gesture radar is received within a preset time interval.
It should be noted that not receiving the coordinate information of the multiple scattering centers of the human hand from the gesture radar within the preset time interval means that the user's gesture control operation has ended.
It can be understood that the preset time interval can be set as needed and is not specifically limited here.
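The acquisition loop of steps 101 and 102 can be illustrated with a short sketch. The Python below is only an illustration of the looping behaviour described above, not the patented implementation; the `read_radar_frame` callable, the 0.5-second interval and the scattering-center identifiers are hypothetical placeholders.

```python
import time
from typing import Dict, List, Optional, Tuple

# Hypothetical frame format: scattering-center id -> (x, y, z) coordinates in metres.
Coordinates = Dict[str, Tuple[float, float, float]]

def acquire_hand_models(read_radar_frame,             # callable returning Coordinates or None
                        preset_interval: float = 0.5  # seconds without data that ends the gesture
                        ) -> List[Tuple[float, Coordinates]]:
    """Steps 101-102 (sketch): keep building one hand model per moment from the
    radar's scattering-center coordinates until no frame arrives within the
    preset time interval."""
    models: List[Tuple[float, Coordinates]] = []
    last_frame_time = time.monotonic()
    while True:
        frame: Optional[Coordinates] = read_radar_frame()
        now = time.monotonic()
        if frame:
            models.append((now, dict(frame)))   # one virtual hand model per moment
            last_frame_time = now
        elif now - last_frame_time > preset_interval:
            break                               # gesture control operation has ended
        else:
            time.sleep(0.01)                    # avoid busy-waiting between frames
    return models
```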
Step 103: determining the coordinate information of the preset scattering center on the virtual hand models at different moments, and connecting the coordinate information of the preset scattering center at different moments in chronological order to obtain the coordinate change trajectory of the preset scattering center, the preset scattering center being one of the multiple scattering centers.
It should be noted that because the virtual hand model is built in real time, once the coordinate information of the preset scattering center on the virtual hand models at different moments has been determined, connecting that coordinate information in chronological order yields the coordinate change trajectory of the preset scattering center, i.e., the motion trajectory of the virtual hand model, which is the coordinate change trajectory input by the user through gestures, as sketched below. It can be understood that the preset scattering center may be the point corresponding to a finger; the point corresponding to another position may also be set as the preset scattering center, which is determined according to the user's input habits and the like.
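As a minimal sketch of step 103, assuming each stored model maps scattering-center identifiers to coordinates as in the previous sketch, the coordinate change trajectory is simply the preset center's coordinates read out in chronological order (the `index_fingertip` identifier is an assumed placeholder):

```python
from typing import Dict, List, Tuple

HandModel = Tuple[float, Dict[str, Tuple[float, float, float]]]  # (timestamp, centers)

def trajectory_of(models: List[HandModel],
                  preset_center: str = "index_fingertip") -> List[Tuple[float, float, float]]:
    """Step 103 (sketch): read the preset scattering center from each per-moment
    hand model and connect the coordinates in chronological order."""
    ordered = sorted(models, key=lambda m: m[0])           # chronological order
    return [centers[preset_center] for _, centers in ordered
            if preset_center in centers]                   # skip moments where it was lost
```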
Step 104: obtaining the information to be input according to the coordinate change trajectory, and displaying the content corresponding to the information to be input according to the information to be input.
In this embodiment, because the gesture radar sends the coordinate information corresponding to each of the multiple scattering centers of the human hand in real time, a virtual hand model can be constructed from the received coordinate information of the multiple scattering centers until no coordinate information of the multiple scattering centers of the human hand is received from the gesture radar within the preset time interval, i.e., until the user's gesture control operation ends; throughout the user's operation, each moment corresponds to one virtual hand model. The coordinate change trajectory of the preset scattering center is then determined from the coordinate information of the preset scattering center on the virtual hand models at different moments, the information to be input can be determined from this trajectory, and finally the content corresponding to the information to be input is displayed. Whatever the user's hand inputs during the recognition process, the input content can be obtained accurately from the coordinate change trajectory of the preset scattering center of the virtual hand model. This solves the technical problem that current radar-based human-computer interaction only supports the control of simple gestures, has poor stability and a low gesture recognition success rate, and cannot handle complex interaction, resulting in a poor user experience.
The above is a first embodiment of the gesture recognition method provided by the embodiments of the present application; the following is a second embodiment of the gesture recognition method provided by the embodiments of the present application.
Referring to Fig. 2, a flow diagram of a second embodiment of a gesture recognition method in an embodiment of the present application includes:
Step 201: constructing the virtual hand model corresponding to the human hand at the current moment according to the coordinate information corresponding to each of the multiple scattering centers of the human hand sent by the gesture radar and received in real time.
It should be noted that step 201 is identical to step 101 of the first embodiment of the present application; for a detailed description, refer to step 101 of the first embodiment, which is not repeated here.
Step 202: while receiving the coordinate information corresponding to each of the multiple scattering centers sent by the gesture radar, receiving the motion information corresponding to each of the multiple scattering centers sent by the gesture radar, the motion information including velocity information, acceleration information and direction information.
It should be noted that because the reflections from each part of the hand are weak, and because the human body is an organism, the reflected intensity of each part is very unstable and the target is easily lost, which can cause construction of the hand model to fail. Therefore, when the target is lost at a certain moment, i.e., when the virtual hand model at a first moment cannot be constructed, the coordinate information of the preset scattering center at that moment can be determined from the motion information of the previous moment.
Step 203: executing steps 201 and 202 in a loop until no coordinate information corresponding to the multiple scattering centers of the human hand sent by the gesture radar is received within a preset time interval.
Step 204: if the virtual hand model at the first moment has not been constructed, determining the motion information of the preset scattering center on the virtual hand model at a second moment, and determining, according to the motion information at the second moment, the coordinate information of the preset scattering center on the virtual hand model at the first moment, the second moment being the moment immediately before the first moment, and the first moment being a moment within the period from the moment step 201 is executed for the first time to the moment step 201 is executed for the last time.
It should be noted that if the virtual hand model at the first moment has not been constructed, because the position of the preset scattering center does not change abruptly, it will be near the position of the previous moment, its direction of motion will with high probability continue along the previous direction of motion, and its velocity will change gradually or remain constant. Therefore, if the velocity information, acceleration information and direction information of the moment immediately before the first moment are known, the coordinate information of the preset scattering center at the first moment can be determined, as sketched below. The preset scattering center is one of the multiple scattering centers.
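A minimal sketch of the extrapolation described above, assuming the direction information is folded into a velocity vector and that the acceleration stays roughly constant over one frame (both are assumptions of this illustration, not the patent's exact formula):

```python
from typing import Tuple

Vector3 = Tuple[float, float, float]

def predict_lost_center(prev_pos: Vector3,
                        prev_velocity: Vector3,      # previous moment's velocity (direction included)
                        prev_acceleration: Vector3,  # previous moment's acceleration
                        dt: float) -> Vector3:
    """Step 204 (sketch): when the hand model at the first moment could not be
    built, extrapolate the preset scattering center from the second (previous)
    moment's motion information, assuming constant acceleration over dt."""
    x, y, z = (p + v * dt + 0.5 * a * dt * dt
               for p, v, a in zip(prev_pos, prev_velocity, prev_acceleration))
    return (x, y, z)
```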
It can be understood that this embodiment only describes determining the coordinate information of the first moment from the motion information of the moment before the first moment; the coordinate information of the first moment can also be determined from the motion information of the moment after the first moment. The specific implementation process is similar to the foregoing process and is not repeated here.
Step 205: determining the coordinate information of the preset scattering center on the virtual hand models at different moments, and connecting the coordinate information of the preset scattering center at different moments in chronological order to obtain the coordinate change trajectory of the preset scattering center.
It should be noted that the chronological order here runs from earlier to later, but it can be understood that the chronological order could also run from later to earlier.
Step 206: obtaining the information to be input according to the coordinate change trajectory, and displaying the content corresponding to the information to be input according to the information to be input.
It should be noted that when the information to be input is an image to be input, to keep the images varied, the image to be input can be obtained directly from the coordinate change trajectory and then displayed. It can be understood that, because the virtual hand model is 3D, the corresponding image to be input can be 2D or 3D, depending on the actual coordinate change trajectory. When the information to be input is text to be input, the text to be input is obtained from the coordinate change trajectory and compared with a preset text library to obtain the character corresponding to the text to be input, and the character is displayed, for example as sketched below. It can be understood that the preset text library includes, but is not limited to, Chinese characters, English characters and numeric characters.
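One possible way to compare an input trajectory with a preset text library is nearest-template matching on resampled strokes; the sketch below works on a 2D projection of the trajectory, uses a hypothetical library format, and is only one comparison scheme chosen for illustration:

```python
import math
from typing import Dict, List, Tuple

Point2D = Tuple[float, float]

def resample(stroke: List[Point2D], n: int = 32) -> List[Point2D]:
    """Resample a stroke to n evenly spaced points so trajectories of different
    lengths can be compared point by point."""
    if len(stroke) < 2:
        return stroke * n
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1] or 1.0
    out, j = [], 0
    for i in range(n):
        target = total * i / (n - 1)
        while j < len(dists) - 2 and dists[j + 1] < target:
            j += 1
        span = (dists[j + 1] - dists[j]) or 1.0
        t = (target - dists[j]) / span
        out.append((stroke[j][0] + t * (stroke[j + 1][0] - stroke[j][0]),
                    stroke[j][1] + t * (stroke[j + 1][1] - stroke[j][1])))
    return out

def match_character(trajectory: List[Point2D],
                    preset_library: Dict[str, List[Point2D]]) -> str:
    """Return the library character whose template stroke is closest to the
    input trajectory (nearest-template comparison)."""
    probe = resample(trajectory)
    def distance(template: List[Point2D]) -> float:
        return sum(math.hypot(px - rx, py - ry)
                   for (px, py), (rx, ry) in zip(probe, resample(template)))
    return min(preset_library, key=lambda ch: distance(preset_library[ch]))
```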
It can be understood that, once the velocity information of each moment has been determined, for example when inputting text or figures, a faster speed can correspond to a thinner line and a slower speed to a thicker line, so that the input content is restored as faithfully as possible. A specific implementation can be to set a preset diameter of the coordinate change trajectory at a preset speed, and then set the diameter of the coordinate change trajectory at a given speed according to the ratio between that speed and the preset speed.
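A sketch of the speed-to-line-width scaling just described; the reference speed, reference diameter and the clamping bounds are assumed example values added for this illustration:

```python
def stroke_diameter(speed: float,
                    preset_speed: float = 0.2,     # m/s, assumed reference speed
                    preset_diameter: float = 4.0,  # px, line width at the reference speed
                    min_d: float = 1.0,
                    max_d: float = 12.0) -> float:
    """Faster motion -> thinner line, slower motion -> thicker line, scaled by
    the ratio between the preset speed and the current speed."""
    d = preset_diameter * preset_speed / max(speed, 1e-6)  # guard against zero speed
    return max(min_d, min(max_d, d))
```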
In this embodiment, because the gesture radar sends the coordinate information corresponding to each of the multiple scattering centers of the human hand in real time, a virtual hand model can be constructed from the received coordinate information of the multiple scattering centers until no coordinate information of the multiple scattering centers of the human hand is received from the gesture radar within the preset time interval, i.e., until the user's gesture control operation ends; throughout the user's operation, each moment corresponds to one virtual hand model. The coordinate change trajectory of the preset scattering center is then determined from the coordinate information of the preset scattering center on the virtual hand models at different moments, the information to be input can be determined from this trajectory, and finally the content corresponding to the information to be input is displayed. Whatever the user's hand inputs during the recognition process, the input content can be obtained accurately from the coordinate change trajectory of the preset scattering center of the virtual hand model. This solves the technical problem that current radar-based human-computer interaction only supports the control of simple gestures, has poor stability and a low gesture recognition success rate, and cannot handle complex interaction, resulting in a poor user experience.
The above is a second embodiment of the gesture recognition method provided by the embodiments of the present application; the following is an embodiment of the gesture recognition device provided by the embodiments of the present application.
Referring to Fig. 3, a structural diagram of an embodiment of a gesture recognition device in an embodiment of the present application includes:
a model construction unit 301, configured to construct the virtual hand model corresponding to the human hand at the current moment according to the coordinate information corresponding to each of the multiple scattering centers of the human hand sent by the gesture radar and received in real time;
a first loop unit 302, configured to trigger the model construction unit 301 repeatedly until no coordinate information corresponding to the multiple scattering centers of the human hand sent by the gesture radar is received within a preset time interval;
a trajectory determination unit 303, configured to determine the coordinate information of the preset scattering center on the virtual hand models at different moments, and to connect the coordinate information of the preset scattering center at different moments in chronological order to obtain the coordinate change trajectory of the preset scattering center, the preset scattering center being one of the multiple scattering centers;
a display unit 304, configured to obtain the information to be input according to the coordinate change trajectory, and to display the content corresponding to the information to be input according to the information to be input.
Further, the device further comprises:
a motion information acquisition unit 305, configured to receive, while receiving the coordinate information corresponding to each of the multiple scattering centers sent by the gesture radar, the motion information corresponding to each of the multiple scattering centers sent by the gesture radar, the motion information including velocity information, acceleration information and direction information;
the first loop unit 302 is specifically configured to trigger the motion information acquisition unit 305 and the model construction unit 301 repeatedly until no coordinate information corresponding to the multiple scattering centers of the human hand sent by the gesture radar is received within a preset time interval;
if the virtual hand model at the first moment has not been constructed, determining the coordinate information of the preset scattering center on the virtual hand model at the first moment specifically includes:
determining the motion information of the preset scattering center on the virtual hand model at the second moment, and determining, according to the motion information at the second moment, the coordinate information of the preset scattering center on the virtual hand model at the first moment, the second moment being the moment immediately before the first moment, and the first moment being a moment within the period from the moment the model construction unit is triggered for the first time to the moment the model construction unit is triggered for the last time.
Further, the information to be input is an image to be input;
the display unit 304 is specifically configured to obtain the image to be input according to the coordinate change trajectory, and to display the image to be input.
Further, the information to be input is text to be input;
the display unit 304 is specifically configured to obtain the text to be input according to the coordinate change trajectory, compare the text to be input with the preset text library to obtain the character corresponding to the text to be input, and display the character. It can be understood that the preset text library includes, but is not limited to, Chinese characters, English characters and numeric characters.
The above is an embodiment of the gesture recognition device provided by the embodiments of the present application; the following is an embodiment of the gesture recognition system provided by the embodiments of the present application.
Referring to Fig. 4, a structural diagram of a gesture recognition system in an embodiment of the present application includes: a gesture radar 401 and the gesture recognition device 402 of the above embodiment;
the gesture radar 401 is communicatively connected to the gesture recognition device 402;
the gesture radar 401 is configured to transmit radar signals to the human hand in real time;
the gesture radar 401 is further configured to receive the echo signals reflected by the multiple scattering centers of the human hand, calculate the coordinate information corresponding to each scattering center from each echo signal, and send the coordinate information corresponding to each of the multiple scattering centers to the gesture recognition device 402.
It should be noted that the gesture radar 401 uses a linear frequency-modulated continuous-wave (LFMCW) radar signal and operates in the V-band (55 GHz-65 GHz). The basic principle of a frequency-modulated continuous-wave radar is to mix the transmitted signal with the echo signal to obtain a beat signal; processing and analyzing this beat signal yields the relative distance, relative angle and relative velocity, and the coordinate information can be obtained from the relative distance and relative angle. It can be understood that the gesture radar 401 is a module that can be installed directly in the gesture recognition device, for example in a television set or a mobile phone; it can also be made as an external module connected to the gesture recognition device via USB. Moreover, it can be understood that the user does not need to wear any article and operates with the bare hand, only needing to be within the detection range of the gesture radar 401. It can also be understood that, in order to obtain the angle information of the target, the gesture radar can be provided with multiple receiving channels.
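For illustration, the beat-frequency-to-range relation of an LFMCW radar and the conversion from relative distance and angles to Cartesian coordinates can be sketched as follows; the sweep bandwidth and chirp duration are assumed example values, not parameters disclosed by the patent:

```python
import math

C = 3.0e8  # speed of light, m/s

def fmcw_range(beat_freq_hz: float,
               bandwidth_hz: float = 4.0e9,   # assumed sweep bandwidth within the 55-65 GHz band
               chirp_time_s: float = 1.0e-3   # assumed chirp duration
               ) -> float:
    """LFMCW principle: mixing the transmitted chirp with the echo yields a beat
    frequency proportional to the round-trip delay, hence to the relative
    distance: R = c * f_beat * T_chirp / (2 * B)."""
    return C * beat_freq_hz * chirp_time_s / (2.0 * bandwidth_hz)

def polar_to_cartesian(range_m: float, azimuth_rad: float, elevation_rad: float):
    """The relative distance plus the angles measured across multiple receiving
    channels give the scattering center's coordinate information."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)
```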
The embodiments of the present application also provide gesture recognition equipment, which includes a processor and a memory: the memory is configured to store program code and to transfer the program code to the processor, and the processor is configured to execute the gesture recognition method of the foregoing embodiments according to the instructions in the program code, thereby performing various functional applications and data processing.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, device and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
The terms "first", "second", "third", "fourth" and the like (if any) in the description of the present application and in the above drawings are used to distinguish similar objects, and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the application described herein can be implemented in an order other than the one illustrated or described here. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to the process, method, product or device.
It should be understood that in this application "at least one (item)" means one or more and "multiple" means two or more. "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" can indicate that only A exists, that only B exists, or that both A and B exist, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the objects before and after it. "At least one of the following (items)" or a similar expression means any combination of these items, including a single item or any combination of plural items. For example, at least one of a, b or c can indicate: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c may each be single or multiple.
In the several embodiments provided in this application, it should be understood that the disclosed system, device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division of the units is only a logical functional division, and there may be other division manners in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product which is stored in a storage medium and includes several instructions for causing a device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A gesture recognition method, characterized by comprising:
101. constructing a virtual hand model corresponding to the human hand at the current moment according to the coordinate information corresponding to each of multiple scattering centers of the human hand sent by a gesture radar and received in real time;
102. executing step 101 in a loop until no coordinate information corresponding to the multiple scattering centers of the human hand sent by the gesture radar is received within a preset time interval;
103. determining the coordinate information of a preset scattering center on the virtual hand models at different moments, and connecting the coordinate information of the preset scattering center at different moments in chronological order to obtain the coordinate change trajectory of the preset scattering center, the preset scattering center being one of the multiple scattering centers;
104. obtaining the information to be input according to the coordinate change trajectory, and displaying the content corresponding to the information to be input according to the information to be input.
2. The method according to claim 1, characterized in that the method further comprises step 105:
105. while receiving the coordinate information corresponding to each of the multiple scattering centers sent by the gesture radar, receiving the motion information corresponding to each of the multiple scattering centers sent by the gesture radar, the motion information including velocity information, acceleration information and direction information;
step 102 is specifically:
102. executing steps 101 and 105 in a loop until no coordinate information corresponding to the multiple scattering centers of the human hand sent by the gesture radar is received within a preset time interval;
if the virtual hand model at a first moment has not been constructed, determining the coordinate information of the preset scattering center on the virtual hand model at the first moment specifically includes:
determining the motion information of the preset scattering center on the virtual hand model at a second moment, and determining, according to the motion information at the second moment, the coordinate information of the preset scattering center on the virtual hand model at the first moment, the second moment being the moment immediately before the first moment, and the first moment being a moment within the period from the moment step 101 is executed for the first time to the moment step 101 is executed for the last time.
3. The method according to claim 1, characterized in that the information to be input is an image to be input;
step 104 specifically includes:
104. obtaining the image to be input according to the coordinate change trajectory, and displaying the image to be input.
4. The method according to claim 1, characterized in that the information to be input is text to be input;
step 104 specifically includes:
104. obtaining the text to be input according to the coordinate change trajectory, comparing the text to be input with a preset text library to obtain the character corresponding to the text to be input, and displaying the character.
5. A gesture recognition device, characterized by comprising:
a model construction unit, configured to construct a virtual hand model corresponding to the human hand at the current moment according to the coordinate information corresponding to each of multiple scattering centers of the human hand sent by a gesture radar and received in real time;
a first loop unit, configured to trigger the model construction unit repeatedly until no coordinate information corresponding to the multiple scattering centers of the human hand sent by the gesture radar is received within a preset time interval;
a trajectory determination unit, configured to determine the coordinate information of a preset scattering center on the virtual hand models at different moments, and to connect the coordinate information of the preset scattering center at different moments in chronological order to obtain the coordinate change trajectory of the preset scattering center, the preset scattering center being one of the multiple scattering centers;
a display unit, configured to obtain the information to be input according to the coordinate change trajectory, and to display the content corresponding to the information to be input according to the information to be input.
6. The device according to claim 5, characterized in that the device further comprises:
a motion information acquisition unit, configured to receive, while receiving the coordinate information corresponding to each of the multiple scattering centers sent by the gesture radar, the motion information corresponding to each of the multiple scattering centers sent by the gesture radar, the motion information including velocity information, acceleration information and direction information;
the first loop unit is specifically configured to trigger the motion information acquisition unit and the model construction unit repeatedly until no coordinate information corresponding to the multiple scattering centers of the human hand sent by the gesture radar is received within a preset time interval;
if the virtual hand model at a first moment has not been constructed, determining the coordinate information of the preset scattering center on the virtual hand model at the first moment specifically includes:
determining the motion information of the preset scattering center on the virtual hand model at a second moment, and determining, according to the motion information at the second moment, the coordinate information of the preset scattering center on the virtual hand model at the first moment, the second moment being the moment immediately before the first moment, and the first moment being a moment within the period from the moment the model construction unit is triggered for the first time to the moment the model construction unit is triggered for the last time.
7. The device according to claim 5, characterized in that the information to be input is an image to be input;
the display unit is specifically configured to obtain the image to be input according to the coordinate change trajectory, and to display the image to be input.
8. The device according to claim 5, characterized in that the information to be input is text to be input;
the display unit is specifically configured to obtain the text to be input according to the coordinate change trajectory, compare the text to be input with a preset text library to obtain the character corresponding to the text to be input, and display the character.
9. A gesture recognition system, characterized by comprising: a gesture radar and the gesture recognition device according to any one of claims 4 to 8;
the gesture radar is communicatively connected to the gesture recognition device;
the gesture radar is configured to transmit radar signals to the human hand in real time;
the gesture radar is further configured to receive the echo signals reflected by the multiple scattering centers of the human hand, calculate the coordinate information corresponding to each scattering center from each echo signal, and send the coordinate information corresponding to each of the multiple scattering centers to the gesture recognition device.
10. Gesture recognition equipment, characterized in that the equipment comprises a processor and a memory;
the memory is configured to store program code and transmit the program code to the processor;
the processor is configured to execute the gesture recognition method according to any one of claims 1 to 4 according to the instructions in the program code.
CN201810941071.XA 2018-08-17 2018-08-17 Gesture recognition method, device, system and equipment Active CN109164915B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810941071.XA CN109164915B (en) 2018-08-17 2018-08-17 Gesture recognition method, device, system and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810941071.XA CN109164915B (en) 2018-08-17 2018-08-17 Gesture recognition method, device, system and equipment

Publications (2)

Publication Number Publication Date
CN109164915A true CN109164915A (en) 2019-01-08
CN109164915B CN109164915B (en) 2020-03-17

Family

ID=64895863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810941071.XA Active CN109164915B (en) 2018-08-17 2018-08-17 Gesture recognition method, device, system and equipment

Country Status (1)

Country Link
CN (1) CN109164915B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111562843A (en) * 2020-04-29 2020-08-21 广州美术学院 Positioning method, device, equipment and storage medium for gesture capture
CN112885431A (en) * 2021-01-13 2021-06-01 佛山市顺德区美的洗涤电器制造有限公司 Diet recommendation method and device, range hood, processor and storage medium
CN113253832A (en) * 2020-02-13 2021-08-13 Oppo广东移动通信有限公司 Gesture recognition method, device, terminal and computer readable storage medium
WO2021238710A1 (en) * 2020-05-26 2021-12-02 京东方科技集团股份有限公司 Method and apparatus for identifying human hand and gestures, and display device
CN113918004A (en) * 2020-07-10 2022-01-11 华为技术有限公司 Gesture recognition method and its device, medium and system
CN114245542A (en) * 2021-12-17 2022-03-25 深圳市恒佳盛电子有限公司 A radar sensor light and its control method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140046922A1 (en) * 2012-08-08 2014-02-13 Microsoft Corporation Search user interface using outward physical expressions
CN104094194A (en) * 2011-12-09 2014-10-08 诺基亚公司 Method and device for gesture recognition based on fusion of multiple sensor signals
CN105677019A (en) * 2015-12-29 2016-06-15 大连楼兰科技股份有限公司 A gesture recognition sensor and its working method
CN105786185A (en) * 2016-03-12 2016-07-20 浙江大学 Non-contact type gesture recognition system and method based on continuous-wave micro-Doppler radar
CN106527670A (en) * 2015-09-09 2017-03-22 广州杰赛科技股份有限公司 Hand gesture interaction device
CN106980362A (en) * 2016-10-09 2017-07-25 阿里巴巴集团控股有限公司 Input method and device based on virtual reality scenario
CN107024685A (en) * 2017-04-10 2017-08-08 北京航空航天大学 A kind of gesture identification method based on apart from velocity characteristic
CN108344995A (en) * 2018-01-25 2018-07-31 宁波隔空智能科技有限公司 A kind of gesture identifying device and gesture identification method based on microwave radar technology

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104094194A (en) * 2011-12-09 2014-10-08 诺基亚公司 Method and device for gesture recognition based on fusion of multiple sensor signals
US20140324888A1 (en) * 2011-12-09 2014-10-30 Nokia Corporation Method and Apparatus for Identifying a Gesture Based Upon Fusion of Multiple Sensor Signals
US20140046922A1 (en) * 2012-08-08 2014-02-13 Microsoft Corporation Search user interface using outward physical expressions
CN106527670A (en) * 2015-09-09 2017-03-22 广州杰赛科技股份有限公司 Hand gesture interaction device
CN105677019A (en) * 2015-12-29 2016-06-15 大连楼兰科技股份有限公司 A gesture recognition sensor and its working method
CN105786185A (en) * 2016-03-12 2016-07-20 浙江大学 Non-contact type gesture recognition system and method based on continuous-wave micro-Doppler radar
CN106980362A (en) * 2016-10-09 2017-07-25 阿里巴巴集团控股有限公司 Input method and device based on virtual reality scenario
CN107024685A (en) * 2017-04-10 2017-08-08 北京航空航天大学 A kind of gesture identification method based on apart from velocity characteristic
CN108344995A (en) * 2018-01-25 2018-07-31 宁波隔空智能科技有限公司 A kind of gesture identifying device and gesture identification method based on microwave radar technology

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113253832A (en) * 2020-02-13 2021-08-13 Oppo广东移动通信有限公司 Gesture recognition method, device, terminal and computer readable storage medium
CN113253832B (en) * 2020-02-13 2023-10-13 Oppo广东移动通信有限公司 Gesture recognition method, gesture recognition device, terminal and computer readable storage medium
CN111562843A (en) * 2020-04-29 2020-08-21 广州美术学院 Positioning method, device, equipment and storage medium for gesture capture
WO2021238710A1 (en) * 2020-05-26 2021-12-02 京东方科技集团股份有限公司 Method and apparatus for identifying human hand and gestures, and display device
US11797098B2 (en) 2020-05-26 2023-10-24 Boe Technology Group Co., Ltd. Methods for recognizing human hand and hand gesture from human, and display apparatus
CN113918004A (en) * 2020-07-10 2022-01-11 华为技术有限公司 Gesture recognition method and its device, medium and system
CN112885431A (en) * 2021-01-13 2021-06-01 佛山市顺德区美的洗涤电器制造有限公司 Diet recommendation method and device, range hood, processor and storage medium
CN112885431B (en) * 2021-01-13 2023-09-05 佛山市顺德区美的洗涤电器制造有限公司 Diet recommendation method and device, range hood, processor and storage medium
CN114245542A (en) * 2021-12-17 2022-03-25 深圳市恒佳盛电子有限公司 A radar sensor light and its control method
CN114245542B (en) * 2021-12-17 2024-03-22 深圳市恒佳盛电子有限公司 Radar sensor light and control method thereof

Also Published As

Publication number Publication date
CN109164915B (en) 2020-03-17

Similar Documents

Publication Publication Date Title
CN109164915A (en) A kind of gesture identification method, device, system and equipment
US20240168602A1 (en) Throwable interface for augmented reality and virtual reality environments
US20230274511A1 (en) Displaying virtual content in augmented reality using a map of the world
Carter et al. Pathsync: Multi-user gestural interaction with touchless rhythmic path mimicry
CN110585731B (en) Method, device, terminal and medium for throwing virtual article in virtual environment
LaViola et al. 3D spatial interaction: applications for art, design, and science
IL308490A (en) Virtual user input controls in a mixed reality environment
US20110151955A1 (en) Multi-player augmented reality combat
US20130010071A1 (en) Methods and systems for mapping pointing device on depth map
CN109740283A (en) Autonomous multi-agent confrontation simulation method and system
WO2015161307A1 (en) Systems and methods for augmented and virtual reality
CN106796789A (en) Interacted with the speech that cooperates with of speech reference point
CN109529340A (en) Virtual object control method, device, electronic equipment and storage medium
Chen et al. A command and control system for air defense forces with augmented reality and multimodal interaction
CN109865281A (en) A kind of method and relevant apparatus of object control
Daniels et al. Robotic Game Playing Internet of Things Device
WO2013176574A1 (en) Methods and systems for mapping pointing device on depth map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Xu Qiang

Inventor after: Fang Yougang

Inventor after: Liu Yaozhong

Inventor after: Liu Gengye

Inventor after: Li Yuexing

Inventor after: Wang Anqi

Inventor before: Xu Qiang

Inventor before: Fang Yougang

Inventor before: Liu Yaozhong

Inventor before: Liu Gengye

Inventor before: Li Yuexing

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A gesture recognition method, device, system and equipment

Effective date of registration: 20210908

Granted publication date: 20200317

Pledgee: China Everbright Bank Co.,Ltd. Xiangtan sub branch

Pledgor: TIME VARYING TRANSMISSION Co.,Ltd.

Registration number: Y2021430000044

PC01 Cancellation of the registration of the contract for pledge of patent right

Granted publication date: 20200317

Pledgee: China Everbright Bank Co.,Ltd. Xiangtan sub branch

Pledgor: TIME VARYING TRANSMISSION Co.,Ltd.

Registration number: Y2021430000044
