CN115100742B - Meta universe exhibition and demonstration experience system based on space-apart gesture operation - Google Patents
Meta universe exhibition and demonstration experience system based on space-apart gesture operation
- Publication number
- CN115100742B CN115100742B CN202210720889.5A CN202210720889A CN115100742B CN 115100742 B CN115100742 B CN 115100742B CN 202210720889 A CN202210720889 A CN 202210720889A CN 115100742 B CN115100742 B CN 115100742B
- Authority
- CN
- China
- Prior art keywords
- gesture
- space
- exhibition
- meta
- system based
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a metaverse exhibition experience system based on spaced (touch-free) gesture operation, belonging to the technical field of online virtual exhibition. Kinect hardware intervenes first: the early intervention of the Kinect hardware perceives the actions of exhibition personnel and their arms, the Kinect hardware comprising a camera capable of capturing a panoramic viewing angle. A somatosensory controller then performs a secondary intervention: the Leap Motion emits detection rays from an infrared transmitter on the somatosensory controller device, and a three-dimensional solid surface is generated after the returned signals are collected. The online exhibition hall of this embodiment improves the realism of the online exhibition hall, uses gesture and skeleton recognition sensing equipment to strengthen the interactivity of the online exhibition hall, adds a guided-tour mode to online exhibition visits, and makes the advantages of the online and offline exhibition halls complementary.
Description
Technical Field
The invention relates to the technical field of online virtual exhibition, in particular to a meta-universe exhibition experience system based on space-apart gesture operation.
Background
Exhibition halls are developing rapidly and being accepted by more and more people, and different exhibition halls around the world are expanding quickly. With the development of artificial intelligence, the metaverse has also advanced. Compared with the shortcomings of a traditional offline exhibition hall (passive publicity and introduction, time and space limitations, simple picture-style display and the like), an online virtual exhibition hall that is open 24 hours a day shows greater advantages: interconnected data, intelligent operation, a content-centred immersive user experience, and a viewing mode free of time and space limitations.
More importantly, it can solve the problem of passive marketing. At present, an online exhibition is mainly entered through various app entrances as a virtual exhibition hall, which is then visited with a keyboard and mouse or a game-pad device. However, the button-press mode of operation imposed by such devices deprives visitors of the natural interaction of the human hand, and the sense of substitution and immersion is lost. The invention provides a contactless, spaced gesture operation mode for a virtual exhibition hall system to realize the exhibition hall guiding function.
Disclosure of Invention
The invention aims to provide a metaverse exhibition experience system based on spaced gesture operation, which gives an online virtual exhibition hall the natural interaction of the human hand and effects such as a sense of substitution and immersion, so as to solve the problems raised in the background art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
The metaverse exhibition experience system based on spaced gesture operation comprises Kinect hardware, a somatosensory controller, an operation unit and an imaging unit, and operates through the following steps:
Kinect hardware intervention: the actions of exhibition personnel and their arms are perceived through the early intervention of the Kinect hardware, the Kinect hardware comprising a camera capable of capturing a panoramic viewing angle;
Somatosensory controller secondary intervention: the Leap Motion emits detection rays from an infrared transmitter on the somatosensory controller device, and a three-dimensional solid surface is generated after the returned signals are collected;
The operation unit takes the coordinates of the same joint point according to the gesture recognition result;
The imaging unit is used for providing attributes for the hand object to reflect the physical characteristics of a detected hand.
As a still further aspect of the invention, the Kinect hardware analyzes the head turning of the viewer from the position of the viewer's head and the integrity of the face, binds the motion of the head to the focused portion of the display, and binds the two hands to the size of the focused view, which changes as the viewer's telescope gesture opens.
As a still further aspect of the invention, the somatosensory controller acquires the real-space coordinates (x, y, z) of the whole hand in real time; if part of the hand moves out of the effective interaction space, the Kinect Azure device immediately detects the invalid action and prompts the interactor on the large display screen to return to the effective area.
As a still further aspect of the invention, the subtle differences between the joints of different gestures are captured in the somatosensory controller, and AI recognition analysis is used to split the hand into the joints and bones of the human skeletal structure.
As a still further aspect of the invention, after the somatosensory controller performs its secondary intervention, it matches the recognized motion against pre-established motion models, obtains a similarity percentage for each prefabricated model, and judges from this percentage whether the gesture has been triggered successfully.
As a still further scheme of the invention, the operation unit judges whether the gesture moves left or right according to the change of the same coordinate point within the gesture, the data of the first frame being the initial point coordinates and the data of the second frame being the end point coordinates.
As a still further scheme of the invention, the somatosensory controller assigns an "ID" indicator to the gesture data in the operation unit; the indicator remains unchanged as long as the gesture stays within the visible range of the device, and Leap Motion derives frame motion factors from the displacement, rotation and scale change relative to the previous frame's data.
As a still further aspect of the present invention, in the imaging unit the direction and the palm normal are vectors describing the orientation of the hand in the Leap Motion coordinate system.
A gesture production method for the metaverse exhibition experience system based on spaced gesture operation comprises the following steps:
1) Expanded gesture interaction: scene interaction is performed based on a gesture recognition mode; the visitor moves a hand to the scanning button and waits 3 seconds to trigger a mechanical arm to start equipment maintenance;
After the scanning is completed, the part of the equipment to be maintained is indicated in the scene, and the maintenance operation is selected and executed through gestures;
2) Gesture drawing of a tour guide line: before the visit, exhibition staff use a special gesture to enter the drawing mode for the tour guide line of the exhibition area and define the tour guide line themselves, giving the staff a reference for the exhibition sequence;
The special gesture uses two hand poses alternated and held for 5 seconds to maintain the drawing of the visiting line, which reduces the false-trigger rate of entering the drawing mode;
After the virtual navigation route function is activated by the preset special spaced gesture, the large screen prompts the user to move the palm within the effective detection area to control the navigation route indication cursor;
If, while the tour route is being set, it is carelessly mishandled or the main tour guide point changes, another cancel gesture can be made immediately;
3) Gesture virtual navigation previews the live scene: first, while the tour route is being recorded, hand movement controls the indication cursor to approach a key tour point, and a preset trigger gesture then activates the real-time camera component of the key tour point currently being approached;
The large screen then switches directly to the real-time camera picture of the currently selected key tour point, so that the interactor can observe the actual situation inside the exhibition hall in real time without being on site and experience the related exhibition item as if there in person.
As a still further aspect of the invention, to erase, the hand only needs to be moved back; the cursor then wipes out the unwanted route, and recording does not need to restart from scratch.
Compared with the prior art, the invention has the beneficial effects that:
1. The online exhibition hall of this embodiment improves the realism of the online exhibition hall, uses gesture and skeleton recognition sensing equipment to strengthen the interactivity of the online exhibition hall, adds a guided-tour mode to online exhibition visits, and makes the advantages of the online and offline exhibition halls complementary.
2. The online exhibition hall of this embodiment uses several special operation means, namely gesture recognition and skeleton behaviour recognition algorithms, in the online virtual exhibition hall system to replace the conventional mouse-and-keyboard mode of visiting the virtual exhibition hall.
3. The online exhibition hall of this embodiment defines special gestures so that the exhibition hall visiting line can be set by real-time operation.
Drawings
FIG. 1 is a block diagram of a B/S architecture connection in accordance with the present invention;
FIG. 2 is a block diagram of a man-machine interconnect in the present invention;
FIG. 3 is a schematic view of a camera capturing view angle according to the present invention;
FIG. 4 is a block diagram showing the connection of a motion sensing controller according to the present invention;
FIG. 5 is a schematic illustration of a gesture joint marking in accordance with the present invention;
FIG. 6 is a diagram of finger data information according to the present invention;
FIG. 7 is a schematic diagram of a gesture recognition structure according to the present invention;
FIG. 8 is a block diagram of a navigation interactive system connection in accordance with the present invention.
Detailed Description
Referring to fig. 1, an online virtual exhibition hall generally adopts a B/S architecture; mainstream clients are currently Chrome, Edge and WebKit-based browsers. The server side is placed on a cloud virtual host, and the resource request service is provided through a reverse proxy program such as nginx or apache. After the client browser kernel renders the multimedia resources, the visitor interacts with a keyboard and mouse or a touch screen. The interactive interface generally has to identify a number of functional buttons clearly, in particular for frame-level operations such as returning to the upper menu or entering the currently selected scene, which affects the beauty of the interface and the consistency of the overall style and completely breaks the visual sense of immersive navigation.
When an important group visits, staff are often assigned to provide manual guiding. However, apart from a small number of trained professional guides, commentary during the explanation of scenes is often cursory, leaving visitors with a poor impression of perfunctory service; the number of guides is limited, and foreign-language guiding in particular can hardly be provided for every visitor.
When visitors use self-service guiding, as shown in fig. 2, part of the exhibition hall is equipped with electronic self-service guide devices in order to serve visitors better. These intelligent guide machines were already popular in museums and tourist attractions in developed countries years ago, and in recent years they have spread to scenic spots and culture-and-museum exhibition halls in China. With the gradual increase of independent visitors at domestic attractions and museums, voice guiding has become a necessary new service facility and a highlight of the service offering of scenic spots and museums; the latest national standard for quality grading and assessment of tourist attractions already counts the availability of portable electronic voice explanation as a bonus item for promotion to grade A, it is a bonus item for 4A scenic spots in China, and for 5A scenic spots the explanation service is mandatory. However, this guiding mode only passively plays the electronically synthesized or pre-recorded sound stored in the guide machine; the participant's experience is limited, every visitor hears the same content, pertinence is lacking, much of the fun is lost, and the guiding efficiency is therefore not high.
In summary, the above exhibition modes have the following disadvantages: the traditional offline exhibition hall suffers from passive publicity and introduction, time and space limitations, simple picture-style display and the like; the interaction mode of online exhibition is single; and recognition technology based on two-dimensional colour images obtains a two-dimensional static image after a scene is photographed by an ordinary camera, after which a computer graphics algorithm recognizes the content in the image. Two-dimensional hand-shape recognition can only recognize a few static gesture actions, and these actions need to be preset in advance, so the workload is high.
Based on the above defects in the prior art, as shown in fig. 3, a metaverse exhibition experience system based on spaced gesture operation is now provided, which comprises the following steps: hardware intervention, in which early intervention through Kinect hardware perceives the actions of exhibition personnel and their arms, the hardware equipment comprising a camera capable of capturing a panoramic viewing angle, and three-dimensional modelling is carried out on the perceived actions after the Kinect intervention.
In this embodiment the Kinect is provided with a 12-megapixel camera and a depth sensor and can display in 4K. When a visitor walks in front of the screen, the Kinect intervenes in advance, builds a 3D model from the input data, accurately captures the human skeleton of the approaching viewer, transmits a signal to the screen and wakes up the whole system. The Kinect can also capture some arm movements of the viewer; for example, when the viewer wants to look at the whole picture, the viewer only needs to make a telescope gesture with the hands. In this process the viewer's head turning is analyzed from the head position and the integrity of the face, the movement of the head is bound to the focused portion of the display, the two hands are bound to the size of the focused view, and the view size changes as the viewer's telescope gesture opens.
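As a rough illustration of how the "telescope" binding could work, the following Python sketch maps the separation between the two tracked hands to the field of view of the focused display; the joint format, separation range and field-of-view limits are placeholder assumptions for the example, not values taken from the specification.

```python
import math

def telescope_zoom(left_hand, right_hand, min_sep=0.10, max_sep=0.60,
                   min_fov=20.0, max_fov=60.0):
    """Map the separation between the two tracked hands (metres) to the
    field of view of the focused display: hands close together give a
    narrow, zoomed-in view, hands far apart give a wide view.

    left_hand / right_hand are (x, y, z) hand-joint positions from the
    Kinect skeleton stream; all limits here are placeholder values.
    """
    sep = math.dist(left_hand, right_hand)
    # Clamp to the calibrated separation range, then interpolate linearly.
    t = (min(max(sep, min_sep), max_sep) - min_sep) / (max_sep - min_sep)
    return min_fov + t * (max_fov - min_fov)

# Example: hands 30 cm apart give a mid-range field of view.
print(telescope_zoom((-0.15, 1.2, 1.8), (0.15, 1.2, 1.8)))
```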
As shown in fig. 4, the somatosensory controller intervenes a second time and the Leap Motion perceives the specific gesture of the hand. After the Kinect wakes up the whole system and captures the human body walking toward the platform, the Leap Motion acts as the second level of intervention: it emits detection rays from an infrared transmitter on the device and generates a three-dimensional solid surface after the returned signals are collected. Once the hand is placed in the effective detection space, the Leap Motion sensor acquires the real-space coordinates (x, y, z) of the whole hand in real time; if the hand moves out of the effective interaction space, the Kinect Azure device immediately detects the invalid action and prompts the interactor on the large screen to return to the effective area, thereby realizing the seamless fusion of motion perception and accurate gesture recognition.
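The effective-space check described above can be sketched as a simple bounds test; the numeric limits below are placeholder calibration values, not figures from the specification.

```python
# Effective interaction volume in real-space millimetres (assumed bounds;
# real values would come from device calibration).
X_RANGE = (-250.0, 250.0)
Y_RANGE = (100.0, 500.0)
Z_RANGE = (-200.0, 200.0)

def hand_in_effective_space(palm_xyz):
    """Return True if the tracked palm position lies inside the effective
    detection volume; otherwise the large screen should prompt the visitor
    to move the hand back into range."""
    x, y, z = palm_xyz
    return (X_RANGE[0] <= x <= X_RANGE[1] and
            Y_RANGE[0] <= y <= Y_RANGE[1] and
            Z_RANGE[0] <= z <= Z_RANGE[1])

if not hand_in_effective_space((310.0, 240.0, 0.0)):
    print("Please return your hand to the effective area")
```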
As shown in fig. 5, detection and tracking of the complete gesture is realized as follows: each joint of the two hands is tracked completely and converted into digital signals that are transmitted back to the background program for judgment, so the complete gesture can actually be detected. In order to distinguish as many gestures as possible and capture the subtle differences between gestures, AI recognition analysis is used to split the hand into the joints and bones of the human skeletal structure; the palm portion is recognized directly as a sphere in the system.
In fig. 5, each numeral marks one of the tracked joints of the hand, as shown by the gesture joint markings in the figure.
This facilitates tracking the position of each hand in space, as shown in fig. 6. Each finger also creates a separate data object, named after the finger, containing the joint angles, phalanx lengths, rotation angle and spatial coordinates of that finger. This information is stored in its respective buffer and is analysed against other data on the timeline to obtain motion information such as acceleration and relative displacement.
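A minimal sketch of the per-finger data object and its time buffer is given below; the field names and the finite-difference velocity estimate are illustrative assumptions rather than the Leap Motion SDK's actual types.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class FingerSample:
    """One frame of data for a single finger (field names are illustrative)."""
    joint_angle: float       # degrees
    phalanx_length: float    # millimetres
    rotation_angle: float    # degrees
    position: tuple          # (x, y, z) in millimetres
    timestamp: float         # seconds

class FingerTrack:
    """Keeps a short per-finger buffer so velocity (and, from it,
    acceleration or relative displacement) can be estimated by comparing
    samples along the timeline."""

    def __init__(self, maxlen=30):
        self.buffer = deque(maxlen=maxlen)

    def push(self, sample: FingerSample):
        self.buffer.append(sample)

    def velocity(self):
        """Finite-difference velocity (mm/s) from the last two samples."""
        if len(self.buffer) < 2:
            return (0.0, 0.0, 0.0)
        prev, curr = self.buffer[-2], self.buffer[-1]
        dt = max(curr.timestamp - prev.timestamp, 1e-6)
        return tuple((c - p) / dt for p, c in zip(prev.position, curr.position))
```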
Based on this information, the gesture is matched against the pre-established motion models, the similarity percentage for each preset model is obtained, and whether the gesture has triggered successfully only needs to be judged from the value of this percentage.
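The matching step can be pictured as comparing the observed joint trajectory with each prefabricated template and converting the mean error into a similarity percentage; the error-to-percentage mapping and the 85 % threshold in this sketch are assumptions, not values from the specification.

```python
import math

def gesture_similarity(observed, template):
    """Percentage similarity between an observed joint-position sequence and
    a pre-built motion template of the same length. Both are lists of
    (x, y, z) joint coordinates; the exact feature set is an assumption."""
    if len(observed) != len(template):
        return 0.0
    total = sum(math.dist(o, t) for o, t in zip(observed, template))
    mean_err = total / len(observed)
    # Map mean joint error (mm) to a 0-100 score; 50 mm mean error scores 0 %.
    return max(0.0, 100.0 * (1.0 - mean_err / 50.0))

def gesture_triggered(observed, templates, threshold=85.0):
    """A gesture fires when its best template match exceeds the threshold."""
    best = max((gesture_similarity(observed, t) for t in templates), default=0.0)
    return best >= threshold
```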
Three-dimensional gesture recognition technology is used: it adds information on the Z axis and can recognize various hand shapes, gestures and actions. Three-dimensional gesture recognition is currently the main direction of development of gesture recognition, but gesture recognition containing depth information requires special hardware; here it is realized with custom industrial sensors and professional optical cameras, together with sample feature statistics and deep-learning neural network technology.
As shown in fig. 7, the operation unit takes the coordinates of the same joint point according to the gesture recognition result. In this embodiment, the data of the first frame to the left of the "0" point in the figure (i.e. the centre point of the wrist) is taken as the initial point coordinates and the data of the second frame as the end point coordinates, and whether the gesture moves left or right is then determined from the change of the same coordinate within the gesture.
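A hedged sketch of this left/right judgment: compare the x coordinate of the same joint (for instance the wrist centre, point "0") in the first and second frames; the dead-zone value is an assumed tuning parameter.

```python
def swipe_direction(first_frame_x, second_frame_x, dead_zone=15.0):
    """Compare the x coordinate of the same joint across two frames.
    Movement beyond the dead zone (millimetres, assumed) is classified
    as a left or right swipe."""
    dx = second_frame_x - first_frame_x
    if dx > dead_zone:
        return "right"
    if dx < -dead_zone:
        return "left"
    return "none"

print(swipe_direction(-12.0, 40.0))   # -> "right"
```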
The Leap Motion software assigns the hand a unique ID indicator. The ID indicator remains unchanged as long as the entity stays within the visual range of the device, the software analyses the overall motion, and the Leap Motion program derives frame motion factors from the displacement, rotation and scaling of that hand relative to the previous frame's data; a minimal sketch of these factors follows the list below.
Wherein:
1) Rotation Axis: a direction vector describing the axis of rotation.
2) Rotation Angle: the rotation angle in the clockwise direction relative to the rotation axis (Cartesian coordinate system).
3) Rotation Matrix: the matrix transformation of the rotation.
4) Scale Factor: a factor describing expansion and contraction.
5) Translation: a vector describing linear motion.
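A minimal sketch of how such frame motion factors could be computed between two frames, with the rotation matrix built from an axis and angle via Rodrigues' formula; the inputs (palm positions and a per-frame size measure such as the fitted sphere radius) are assumptions, not the SDK's interface.

```python
import numpy as np

def rotation_matrix(axis, angle_rad):
    """Rodrigues' formula: rotation matrix from a unit rotation axis and
    a rotation angle in radians."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    k = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle_rad) * k + (1.0 - np.cos(angle_rad)) * (k @ k)

def frame_motion(prev_palm, curr_palm, prev_size, curr_size):
    """Translation vector and scale factor between two consecutive frames.
    Palm positions are (x, y, z) in millimetres; "size" is any per-frame
    measure of hand spread, e.g. the fitted sphere radius (assumed input)."""
    translation = np.subtract(curr_palm, prev_palm)
    scale_factor = curr_size / max(prev_size, 1e-6)
    return translation, scale_factor
```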
The imaging unit provides attributes for the hand object to reflect the physical characteristics of a detected hand, wherein:
1. Palm Position: the coordinates of the palm centre in the Leap Motion coordinate system, measured in millimetres.
2. Palm Velocity: the velocity of the palm's motion in millimetres per second.
3. Palm Normal: a vector perpendicular to the plane formed by the palm, pointing toward the inner side of the palm.
4. Direction: a vector pointing from the palm centre toward the fingers.
5. Sphere Center: the centre of a sphere fitted to the inner curvature of the palm (as if a ball were being held).
6. Sphere Radius: the radius of that sphere; the radius changes as the hand shape changes.
The direction and the palm normal are vectors describing the orientation of the hand in the Leap Motion coordinate system.
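For illustration, the listed hand attributes could be collected into a single data structure as sketched below; the field names are descriptive stand-ins, not the Leap Motion SDK's identifiers.

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class HandAttributes:
    """Physical attributes of one detected hand, mirroring the fields the
    description lists for the hand object (names are illustrative)."""
    palm_position: Vec3   # palm centre, millimetres, Leap Motion coordinates
    palm_velocity: Vec3   # millimetres per second
    palm_normal: Vec3     # unit vector perpendicular to the palm plane, toward its inner side
    direction: Vec3       # unit vector from palm centre toward the fingers
    sphere_center: Vec3   # centre of a sphere fitted to the palm's inner curvature
    sphere_radius: float  # radius of that sphere; changes with hand shape
```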
Content production matched to the gestures is carried out as follows.
Step one, expanded gesture interaction: taking a virtual machine-maintenance content scene as an example, scene interaction is operated based on gesture recognition. The visitor moves a hand to the scanning button and waits 3 seconds to trigger a mechanical arm to start equipment maintenance; after the scanning is completed, the part of the equipment to be maintained is indicated in the scene, and the maintenance operation is selected and executed through gestures.
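The 3-second hover over the scanning button is essentially a dwell trigger; the sketch below shows one way such a trigger could be implemented (the reset-on-exit behaviour and the class itself are assumptions, only the 3-second value comes from the description).

```python
import time

class DwellTrigger:
    """Fires once the cursor has stayed over a target (the scanning button)
    for the required dwell time; leaving the target resets the timer."""

    def __init__(self, dwell_seconds=3.0):
        self.dwell_seconds = dwell_seconds
        self.enter_time = None

    def update(self, cursor_over_target, now=None):
        """Call once per frame; returns True when the dwell time is reached."""
        now = time.monotonic() if now is None else now
        if not cursor_over_target:
            self.enter_time = None
            return False
        if self.enter_time is None:
            self.enter_time = now
        return (now - self.enter_time) >= self.dwell_seconds
```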
Step two, gesture drawing of the tour guide line: before the visit, exhibition staff use a special gesture to enter the drawing mode for the tour guide line of the exhibition area and define the tour guide line themselves, giving the staff a reference for the exhibition sequence.
The special gesture uses the following two hand poses, alternated and held for 5 seconds to maintain the drawing of the visiting line, which reduces the false-trigger rate of entering the drawing mode.
After the virtual navigation route function is activated by the preset special spaced gesture, the large screen prompts the user to move the palm within the effective detection area to control the navigation route indication cursor. The sensor continuously converts the real-space movement path of the hand into a navigation route in the virtual navigation three-dimensional program; during this process the program automatically snaps and attaches the route according to the distance between the hand's indicator and each key navigation point, and then generates regular straight lines displayed on the large screen.
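The automatic snapping of the hand path to key navigation points could be sketched as follows; the snap radius, the 2D scene coordinates and the point names are assumptions for the example.

```python
import math

def snap_route(raw_path, key_points, snap_radius=120.0):
    """Convert the hand's raw cursor path into a tour route: whenever the
    cursor passes within snap_radius (scene units, assumed) of a key
    navigation point, that point is appended to the route; consecutive
    points can then be joined by straight segments for display.

    raw_path   -- list of (x, y) cursor positions in scene coordinates
    key_points -- dict of name -> (x, y) for the key navigation points
    """
    route = []
    for cursor in raw_path:
        for name, point in key_points.items():
            if math.dist(cursor, point) <= snap_radius and (not route or route[-1] != name):
                route.append(name)
    return route

# Example: the cursor path passes near two hypothetical key points.
print(snap_route([(0, 0), (95, 10), (400, 380)],
                 {"entrance": (100, 0), "hall A": (420, 400)}))
```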
If, while the tour route is being set, it is carelessly mishandled or a main navigation point changes unexpectedly, an additional cancel gesture can be made immediately. This gesture is also a model preset in the program, and when the program detects the cancel gesture, the cursor recording the tour route becomes an erase cursor.
The hand only needs to be moved back; the cursor then wipes off the unwanted route, and recording does not need to restart from the beginning.
After this set of procedures is completed, the program records the final tour route and generates relevant tour suggestions and notes.
Step three, gesture virtual navigation previews the live scene: while setting a tour route, some visitors may be strongly interested in certain exhibits in the venue, or may hesitate over which tour route to take; they want to see the actual appearance inside the venue in advance and then choose the route that suits them.
At this point, the real-time monitoring of key points inside the venue is connected into the spaced-gesture interactive navigation software, meeting the need to preview the relevant points while the tour route is being set.
As shown in fig. 8, while the tour route is being recorded, hand movement controls the indication cursor to approach a key tour point, and a preset trigger gesture then activates the real-time camera component of the key tour point currently being approached.
The large screen then switches directly to the real-time camera picture of the currently selected key tour point, so that the interactor can observe the actual situation inside the venue in real time without being on site and experience the related exhibition item as if there in person.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto; any equivalent substitution or modification made by a person skilled in the art within the technical scope disclosed by the present invention, according to the technical solution of the present invention and its inventive concept, shall be covered by the protection scope of the present invention.
Claims (9)
1. A gesture production method of a metaverse exhibition experience system based on spaced gesture operation, comprising the following steps:
(1) Expanded gesture interaction: scene interaction is performed based on a gesture recognition mode; the visitor moves a hand to the scanning button and waits 3 seconds to trigger a mechanical arm to start equipment maintenance;
After the scanning is completed, the part of the equipment to be maintained is indicated in the scene, and the maintenance operation is selected and executed through gestures;
(2) Gesture drawing of a tour guide line: before the visit, exhibition staff use a special gesture to enter the drawing mode for the tour guide line of the exhibition area and define the tour guide line themselves, giving the staff a reference for the exhibition sequence;
The special gesture uses two hand poses alternated and held for 5 seconds to maintain the drawing of the visiting line, reducing the false-trigger rate of entering the drawing mode;
After the virtual navigation route function is activated by the preset special spaced gesture, the large screen prompts the user to move the palm within the effective detection area to control the navigation route indication cursor;
If the tour route is carelessly mishandled or the main navigation point changes while the tour route is being set, a further cancel gesture can be made immediately; the cancel gesture is a model preset in the program, and when the program detects the cancel gesture, the cursor recording the tour route becomes an erase cursor;
(3) Gesture virtual navigation previews the live scene: first, while the tour route is being recorded, hand movement controls the indication cursor to approach a key tour point, and a preset trigger gesture then activates the real-time camera component of the key tour point currently being approached;
The large screen then switches directly to the real-time camera picture of the currently selected key tour point, so that the interactor can observe the actual situation inside the venue in real time without being on site and experience the related exhibition item in an immersive manner;
The metaverse exhibition experience system based on spaced gesture operation comprises Kinect hardware, a somatosensory controller, an operation unit and an imaging unit, and is characterized by comprising the following steps:
Kinect hardware intervention: the actions of exhibition personnel and their arms are perceived through the early intervention of the Kinect hardware, the Kinect hardware comprising a camera capable of capturing a panoramic viewing angle;
Somatosensory controller secondary intervention: the Leap Motion emits detection rays from an infrared transmitter on the somatosensory controller device, and a three-dimensional solid surface is generated after the returned signals are collected;
The operation unit takes the coordinates of the same joint point according to the gesture recognition result;
The imaging unit is used for providing attributes for the hand object to reflect the physical characteristics of a detected hand.
2. The gesture production method of the metaverse exhibition experience system based on spaced gesture operation according to claim 1, wherein the Kinect hardware analyzes the head turning of the viewer from the position of the viewer's head and the integrity of the face, binds the motion of the head to the focused portion of the display, and binds the two hands to the size of the focused view, which changes as the viewer's telescope gesture opens.
3. The gesture production method of the metaverse exhibition experience system based on spaced gesture operation according to claim 1, wherein the somatosensory controller acquires the real-space coordinates (x, y, z) of the whole hand in real time; if part of the hand moves out of the effective interaction space, the Kinect Azure device immediately detects the invalid action and prompts the interactor on the display screen to return to the effective area.
4. The gesture production method of the metaverse exhibition experience system based on spaced gesture operation according to claim 1 or 3, wherein the subtle differences between the joints of different gestures are captured in the somatosensory controller, and AI recognition analysis is used to split the hand into the joints and bones of the human skeletal structure.
5. The gesture production method of the metaverse exhibition experience system based on spaced gesture operation according to claim 4, wherein after the somatosensory controller performs its secondary intervention, it matches the motion against pre-established motion models, obtains a similarity percentage for each prefabricated model, and judges from this percentage whether the gesture has been triggered successfully.
6. The gesture production method of the metaverse exhibition experience system based on spaced gesture operation according to claim 1, wherein the operation unit judges whether the gesture moves left or right according to the change of the same coordinate point within the gesture, the data of the first frame being the initial point coordinates and the data of the second frame being the end point coordinates.
7. The gesture production method of the metaverse exhibition experience system based on spaced gesture operation according to claim 1 or 6, wherein the somatosensory controller assigns an "ID" indicator to the gesture data in the operation unit, the indicator remains unchanged as long as the gesture stays within the visible range of the device, and Leap Motion derives frame motion factors from the displacement, rotation and scale change relative to the previous frame's data.
8. The gesture production method of the metaverse exhibition experience system based on spaced gesture operation according to claim 1, wherein in the imaging unit the direction and the palm normal are vectors describing the orientation of the hand in the Leap Motion coordinate system.
9. The gesture production method of the metaverse exhibition experience system based on spaced gesture operation according to claim 1, wherein, to erase, the hand only needs to be moved back; the cursor then erases the unwanted route, and recording does not need to restart from the beginning.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210720889.5A CN115100742B (en) | 2022-06-23 | 2022-06-23 | Meta universe exhibition and demonstration experience system based on space-apart gesture operation |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210720889.5A CN115100742B (en) | 2022-06-23 | 2022-06-23 | Meta universe exhibition and demonstration experience system based on space-apart gesture operation |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115100742A CN115100742A (en) | 2022-09-23 |
| CN115100742B true CN115100742B (en) | 2025-05-23 |
Family
ID=83293255
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210720889.5A Active CN115100742B (en) | 2022-06-23 | 2022-06-23 | Meta universe exhibition and demonstration experience system based on space-apart gesture operation |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN115100742B (en) |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116071531A (en) * | 2023-04-03 | 2023-05-05 | 山东捷瑞数字科技股份有限公司 | Meta universe display method, device, equipment and medium based on digital twin |
| CN116627260A (en) * | 2023-07-24 | 2023-08-22 | 成都赛力斯科技有限公司 | Method and device for idle operation, computer equipment and storage medium |
| CN117340931B (en) * | 2023-09-21 | 2024-05-24 | 北京三月雨文化传播有限责任公司 | All-angle autonomous adjustable multimedia real object exhibition device |
| CN119004636B (en) * | 2024-10-23 | 2025-04-04 | 永麒科技集团有限公司 | Display design method and system |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104808788A (en) * | 2015-03-18 | 2015-07-29 | 北京工业大学 | Method for controlling user interfaces through non-contact gestures |
| CN108182728A (en) * | 2018-01-19 | 2018-06-19 | 武汉理工大学 | A kind of online body-sensing three-dimensional modeling method and system based on Leap Motion |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10437460B2 (en) * | 2012-06-05 | 2019-10-08 | Apple Inc. | Methods and apparatus for cartographically aware gestures |
| CN107783645A (en) * | 2016-08-30 | 2018-03-09 | 威海兴达信息科技有限公司 | A kind of virtual museum visit system based on Kinect |
| CN106652043A (en) * | 2016-12-29 | 2017-05-10 | 深圳前海弘稼科技有限公司 | Method and device for virtual touring of scenic region |
| CN111785194B (en) * | 2020-07-13 | 2022-03-08 | 西安新航展览有限公司 | Artificial intelligence display system based on 3D holographic projection |
| CN114647315A (en) * | 2022-03-25 | 2022-06-21 | 青岛虚拟现实研究院有限公司 | Man-machine interaction method based on museum navigation AR glasses |
- 2022-06-23: CN202210720889.5A patent/CN115100742B/en, active
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104808788A (en) * | 2015-03-18 | 2015-07-29 | 北京工业大学 | Method for controlling user interfaces through non-contact gestures |
| CN108182728A (en) * | 2018-01-19 | 2018-06-19 | 武汉理工大学 | A kind of online body-sensing three-dimensional modeling method and system based on Leap Motion |
Also Published As
| Publication number | Publication date |
|---|---|
| CN115100742A (en) | 2022-09-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN115100742B (en) | Meta universe exhibition and demonstration experience system based on space-apart gesture operation | |
| US12182944B2 (en) | Authoring and presenting 3D presentations in augmented reality | |
| JP4768196B2 (en) | Apparatus and method for pointing a target by image processing without performing three-dimensional modeling | |
| US20050206610A1 (en) | Computer-"reflected" (avatar) mirror | |
| CN102622774B (en) | Living room film creates | |
| US20190369742A1 (en) | System and method for simulating an interactive immersive reality on an electronic device | |
| JP5256269B2 (en) | Data generation apparatus, data generation apparatus control method, and program | |
| CN101231752B (en) | Mark-free true three-dimensional panoramic display and interactive apparatus | |
| Leibe et al. | Toward spontaneous interaction with the perceptive workbench | |
| CN106125921A (en) | Gaze detection in 3D map environment | |
| WO2019147392A1 (en) | Puppeteering in augmented reality | |
| Wren et al. | Perceptive spaces for performance and entertainment untethered interaction using computer vision and audition | |
| CN106662926A (en) | Systems and methods of gestural interaction in a pervasive computing environment | |
| JP7559758B2 (en) | Image processing device, image processing method, and program | |
| CN102184020A (en) | Method for manipulating posture of user interface and posture correction | |
| KR20010081193A (en) | 3D virtual reality motion capture dance game machine by applying to motion capture method | |
| CN107861629A (en) | A kind of practice teaching method based on VR | |
| KR20210105484A (en) | Apparatus for feeling to remodeling historic cites | |
| CN113419634A (en) | Display screen-based tourism interaction method | |
| CN117369233A (en) | Holographic display method, device, equipment and storage medium | |
| Gokl et al. | Towards urban environment familiarity prediction | |
| CN114185431B (en) | Intelligent media interaction method based on MR technology | |
| CN117435055A (en) | Gesture-enhanced eye tracking human-computer interaction method based on spatial stereoscopic display | |
| Tollmar et al. | Navigating in virtual environments using a vision-based interface | |
| Gross et al. | Gesture Modelling: Using Video to Capture Freehand Modeling Commands |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| CB02 | Change of applicant information | Country or region after: China; Address after: 18th Floor, Building 1, No. 1750 Zhongke Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai, 201210 (15th Floor, Certificate of Production); Applicant after: Shanghai Miaowen Creative Technology Co.,Ltd.; Address before: 5th Floor, Building 4, No. 498, Guoshoujing Road, Pudong New Area, Shanghai, 201203; Applicant before: SHANGHAI MIAOWEN EXHIBITION SERVICE Co.,Ltd.; Country or region before: China |
| CB02 | Change of applicant information | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |