CN119105383A - Robot control method, device, equipment, storage medium and product - Google Patents
- Publication number
- CN119105383A (application number CN202411528544.5A)
- Authority
- CN
- China
- Prior art keywords
- target
- grabbing
- information
- robot
- control instruction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/04—Programme control other than numerical control, i.e. in sequence controllers or logic controllers
- G05B19/042—Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
- G05B19/0423—Input/output
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/25—Pc structure of the system
- G05B2219/25257—Microcontroller
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The application discloses a robot control method, device, equipment, storage medium and product, and relates to the technical field of artificial intelligence. The method comprises the steps of: when a control instruction is received, determining grabbing target information according to the control instruction; moving to a target position according to the grabbing target information and collecting target environment information; determining a grabbing path based on the target environment information and the grabbing target information; and grabbing the grabbing target corresponding to the control instruction according to the grabbing path. Because the grabbing path is planned from both the collected target environment information and the grabbing target information, the application achieves more general and autonomous machine intelligence and improves the grabbing efficiency of the robot.
Description
Technical Field
The application relates to the technical field of artificial intelligence, and in particular to a robot control method, device, equipment, storage medium and product.
Background
An embodied-intelligence-based intelligent robot system for mobile manipulation (hereinafter, an embodied intelligent mobile manipulation robot) aims at constructing a robot system with autonomous environment perception, full understanding and cognition, smooth human-machine interaction, reliable intelligent decision-making, and natural motion and manipulation planning. Relying on a multi-domain, multi-scene and multi-functional autonomous intelligence platform, it upgrades and empowers the traditional mobile manipulation robot and leads the future industrial development of mobile manipulation robots. Equipped with a brain-like architecture capable of perceiving, understanding and deciding, an embodied intelligent mobile manipulation robot can autonomously understand and complete high-level instructions issued by human beings, thereby realizing truly general intelligence. Compared with the traditional mobile robot, the embodied intelligent mobile manipulation robot can complete complex work that usually requires human intelligence, and with the continuous development and maturation of the technology it will bring revolutionary transformation to human society. It has broad application prospects in civil fields such as service, catering, medical treatment, smart home and unmanned delivery, in industrial fields such as smart factories and intelligent manufacturing, and in military fields such as individual combat. At present, the research and development of embodied intelligent mobile manipulation robots at home and abroad is still at the stage of laboratory testing; there are certain deficiencies in understanding and completing instructions issued by human beings, and the technology as a whole is not yet mature. Therefore, how to improve the control efficiency of an embodied intelligent mobile manipulation robot, so that it can efficiently understand human instructions and execute the corresponding grabbing tasks, has become a technical problem to be solved urgently.
Disclosure of Invention
The main purpose of the application is to provide a robot control method, device, equipment, storage medium and product, aiming to solve the technical problem of low execution efficiency when an existing embodied intelligent mobile manipulation robot performs a grabbing task.
In order to achieve the above object, the present application provides a robot control method comprising:
When a control instruction is received, determining grabbing target information according to the control instruction;
moving to a target position according to the grabbing target information, and collecting target environment information;
Determining a grabbing path based on the target environment information and the grabbing target information;
and grabbing the grabbing target corresponding to the control instruction according to the grabbing path.
Optionally, when a control instruction is received, the step of determining grabbing target information according to the control instruction includes:
when a control instruction is received, determining user demand information according to the control instruction;
Collecting surrounding environment images;
Inputting the user demand information and the surrounding environment image into a preset multi-mode large model to obtain grabbing target information output by the preset multi-mode large model, wherein the grabbing target information comprises the relative map position and robot pose coordinates of the grabbing target corresponding to the control instruction.
Optionally, the step of determining the user requirement information according to the control instruction when the control instruction is received includes:
when a control instruction is received, performing text conversion on the control instruction to obtain text information;
Judging whether a target wake-up word is detected according to the text information;
and when the target wake-up word is detected, determining user demand information according to the text information.
Optionally, the step of determining a grabbing path based on the target environment information and the grabbing target information includes:
Determining the position information of a grabbing target to be grabbed in the target environment information according to the target environment information and the grabbing target information;
the position information and the point cloud image corresponding to the position information are sent to a preset GraspNet model, and the grabbing pose information output by the preset GraspNet model is obtained;
And determining a grabbing path according to the grabbing pose information.
Optionally, the step of capturing the captured target corresponding to the control instruction according to the capturing path includes:
Constructing a Lagrange dynamics model, wherein the Lagrange dynamics model is used for predicting the stability of the robot when grabbing through the grabbing path;
Inputting the grabbing path into the Lagrange dynamics model, simulating the stability of the robot during grabbing, and outputting a prediction result;
and grabbing the grabbing target corresponding to the control instruction according to the prediction result.
Optionally, after the step of capturing the captured target corresponding to the control instruction according to the capturing path, the method further includes:
Determining a target placement position according to the grabbing target information;
and placing the grabbing target at the target placing position.
In addition, in order to achieve the above object, the present application also proposes a robot control device including:
the receiving module is used for determining grabbing target information according to the control instruction when the control instruction is received;
The moving module is used for moving to a target position according to the grabbing target information and collecting target environment information;
the grabbing path determining module is used for determining grabbing paths based on the target environment information and the grabbing target information;
and the grabbing module is used for grabbing the grabbing target corresponding to the control instruction according to the grabbing path.
In addition, in order to achieve the above object, the application also proposes a robot control device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program being configured to implement the steps of the robot control method as described above.
In addition, in order to achieve the above object, the present application also proposes a storage medium, which is a computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps of the robot control method as described above.
Furthermore, to achieve the above object, the present application provides a computer program product comprising a computer program which, when being executed by a processor, implements the steps of the robot control method as described above.
When a control instruction is received, grabbing target information is determined according to the control instruction; the robot moves to a target position according to the grabbing target information and collects target environment information; a grabbing path is determined based on the target environment information and the grabbing target information; and the grabbing target corresponding to the control instruction is grabbed according to the grabbing path. Because the grabbing path is planned from both the collected target environment information and the grabbing target information, more general and autonomous machine intelligence is achieved, and the grabbing efficiency of the robot is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a first embodiment of a robot control method according to the present application;
FIG. 2 is a schematic structural diagram of an embodied intelligent mobile manipulation robot according to a first embodiment of the present application;
FIG. 3 is a schematic diagram of the vehicle body chassis structure according to a first embodiment of the robot control method of the present application;
FIG. 4 is a schematic view showing details of a chassis structure of a vehicle body according to a first embodiment of the robot control method of the present application;
Fig. 5 is a schematic flow chart of a second embodiment of a robot control method according to the present application;
FIG. 6 is a schematic overall flow chart of a robot control method according to a second embodiment of the present application;
fig. 7 is a schematic block diagram of a robot control device according to an embodiment of the present application;
fig. 8 is a schematic device structure diagram of a hardware operating environment related to a robot control method in an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the technical solution of the present application and are not intended to limit the present application.
For a better understanding of the technical solution of the present application, the following detailed description will be given with reference to the drawings and the specific embodiments.
The main solution of the embodiment of the application is: when a control instruction is received, grabbing target information is determined according to the control instruction; the robot moves to a target position according to the grabbing target information and collects target environment information; a grabbing path is determined based on the target environment information and the grabbing target information; and the grabbing target corresponding to the control instruction is grabbed according to the grabbing path. Because the grabbing path is planned from both the collected target environment information and the grabbing target information, more general and autonomous machine intelligence is achieved, and the grabbing efficiency of the robot is improved.
It should be noted that the execution subject of this embodiment may be a computing service device with functions of data processing, network communication and program running, such as a tablet computer, a personal computer or a mobile phone, or an electronic device or an embodied intelligent mobile manipulation robot capable of implementing the above functions. Hereinafter, this embodiment and the following embodiments are described by taking the embodied intelligent mobile manipulation robot (hereinafter simply referred to as the "robot") as an example.
Based on this, an embodiment of the present application provides a robot control method, and referring to fig. 1, fig. 1 is a schematic flow chart provided by an embodiment of the robot control method of the present application.
In this embodiment, the robot control method includes steps S10 to S40:
Step S10, when a control instruction is received, determining grabbing target information according to the control instruction;
It should be noted that the control instruction may be an instruction sent by the user, for example to let the robot grasp a certain object. Determining the grabbing target information according to the control instruction may be determining, according to the control instruction, the position of the article to be grabbed, the article name or an image of the article, and the target placement point where the article is to be placed after grabbing.
It should be noted that, referring to fig. 2, fig. 2 is a schematic structural diagram of an embodied intelligent mobile manipulation robot according to a first embodiment of the present application. The robot includes a mechanical arm and a vehicle body chassis; the mechanical arm is used for grabbing, and the vehicle body chassis is used for moving the robot. Referring to fig. 3, fig. 3 is a schematic diagram of the vehicle body chassis according to the first embodiment of the robot control method of the present application; the vehicle body chassis includes a Gemini Pro camera, 2 TOF lidars, and 6 uniformly distributed ultrasonic sensors. Other cameras, lidars and ultrasonic sensors may be used instead, and this embodiment is not limited herein. Referring to fig. 4, fig. 4 is a detailed schematic diagram of the vehicle body chassis provided in an embodiment of the robot control method of the present application; the vehicle body chassis includes a charging port, a hard emergency stop button, a power switch, a router WAN interface, an external power supply interface, a soft emergency stop button, a USB interface, a Type-C interface, automatic recharging, and a suspension chassis.
Further, in order to enable the robot to accurately understand the intention of the user, the step S10 may include determining user demand information according to a control instruction when the control instruction is received;
Collecting surrounding environment images;
Inputting the user demand information and the surrounding environment image into a preset multi-mode large model to obtain grabbing target information output by the preset multi-mode large model, wherein the grabbing target information comprises the relative map position and robot pose coordinates of the grabbing target corresponding to the control instruction.
It should be noted that, when the control instruction is received, determining the user demand information according to the control instruction may be performing semantic analysis on the control instruction to obtain a semantic analysis result, and determining, according to the semantic analysis result, the article information the user wants grabbed, which may include the name and rough position of the article, for example "take the cola on the refrigerator". Collecting the surrounding environment image may be the robot taking pictures of its surroundings with a camera on its body. The preset multi-mode large model may be a large model such as ChatGPT or Tongyi Qianwen (Qwen). The preset multi-mode large model predicts the relative map position and robot pose coordinates of the grabbing target according to the user demand information, the surrounding environment image, and pre-acquired map information of the robot's movement range.
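In a specific implementation, the call to the preset multi-mode large model may be organized, for example, as follows. This is a minimal sketch in which the endpoint URL, model identifier and response fields are illustrative placeholders rather than the interface of any particular large model:

```python
import base64
import requests  # generic HTTP client

API_URL = "https://example.com/v1/multimodal/chat"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                            # placeholder credential

def query_grab_target(user_demand: str, image_path: str) -> dict:
    """Send the user demand text plus a surrounding-environment image to the
    multimodal large model and return its predicted grabbing-target info."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    payload = {
        "model": "multimodal-large-model",  # placeholder model id
        "prompt": ("User demand: " + user_demand +
                   "\nReturn JSON with fields map_position and robot_pose."),
        "image": image_b64,
    }
    resp = requests.post(API_URL, json=payload,
                         headers={"Authorization": "Bearer " + API_KEY},
                         timeout=30)
    resp.raise_for_status()
    return resp.json()  # e.g. {"map_position": [...], "robot_pose": [...]}
```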
Before step S10, the method further includes mapping the movement range of the robot to obtain the map information. Specifically, the mapping is performed by the gmapping technique. Gmapping is a simultaneous localization and mapping (SLAM) technique based on particle filtering; it mainly uses an improved Rao-Blackwellised particle filter to solve the localization and mapping problems at the same time. Its core idea is to use a plurality of particles, each representing a possible robot pose and each associated with its own map. Based on lidar information, the gmapping technique is used while remotely controlling the robot to move within its movement range, simultaneously building a 2D planar map for the robot's autonomous navigation and positioning.
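The idea behind the Rao-Blackwellised particle filter can be sketched as follows; the motion model, scan likelihood and map update are stand-in callables, not gmapping's actual implementation:

```python
import copy
import numpy as np

class Particle:
    """One hypothesis: a robot pose together with its own occupancy map."""
    def __init__(self, pose, grid_map):
        self.pose = pose        # (x, y, theta)
        self.map = grid_map     # occupancy grid tied to this pose history
        self.weight = 1.0

def pf_step(particles, odometry, scan, motion_model, scan_likelihood, update_map):
    for p in particles:
        # Propagate the pose hypothesis through the (noisy) motion model.
        p.pose = motion_model(p.pose, odometry)
        # Weight by how well the laser scan matches this particle's own map.
        p.weight *= scan_likelihood(scan, p.pose, p.map)
        # Mapping step: fold the new scan into this particle's map.
        p.map = update_map(p.map, p.pose, scan)
    # Normalize weights and resample to keep likely pose/map pairs.
    w = np.array([p.weight for p in particles])
    w /= w.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=w)
    return [copy.deepcopy(particles[i]) for i in idx]
```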
Further, in order to improve the service efficiency of the robot, the step of determining the user requirement information according to the control instruction when the control instruction is received includes:
when a control instruction is received, performing text conversion on the control instruction to obtain text information;
Judging whether a target wake-up word is detected according to the text information;
and when the target wake-up word is detected, determining user demand information according to the text information.
It should be noted that the control instruction may be a voice command issued by the user to the robot. When the robot receives the voice command, it performs text conversion on the control instruction to obtain text information, and then judges, according to the text information, whether a target wake-up word is present. The target wake-up word may be a keyword instructing the robot to grasp an object, such as "take", "place" or "deliver". When the target wake-up word is detected, the user demand information is determined according to the text information. In order to improve the analysis accuracy of the preset multi-mode large model, a prompt word may be set in this embodiment; the user's text information is optimized through the prompt word before being input into the preset multi-mode large model, so that the model outputs more accurate user demand information.
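For example, the text conversion and wake-word gating may be sketched as follows, assuming a generic speech-recognition library (here the `speech_recognition` package) and an illustrative wake-word list:

```python
import speech_recognition as sr  # assumed speech-to-text library

WAKE_WORDS = ("take", "place", "deliver", "fetch")  # illustrative keywords

def listen_for_command(recognizer: sr.Recognizer, mic: sr.Microphone):
    """Convert one voice command to text, then gate it on a target wake word."""
    with mic as source:
        audio = recognizer.listen(source)
    try:
        text = recognizer.recognize_google(audio)  # speech -> text
    except sr.UnknownValueError:
        return None                                # unintelligible audio
    # Treat the utterance as a control instruction only if a wake word appears.
    if any(word in text.lower() for word in WAKE_WORDS):
        return text  # user demand information is then derived from this text
    return None
```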
Step S20, moving to a target position according to the grabbing target information, and collecting target environment information;
It should be noted that the grabbing target information includes the target position of the grabbing target to be grabbed. Moving to the target position according to the grabbing target information may be invoking the navigation module to move to the target position corresponding to the grabbing target information, and collecting the surrounding environment information at the target position to obtain the target environment information.
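In a ROS-based implementation, the navigation call may look like the following sketch, which sends a goal on the pre-built 2D map to the standard `move_base` action server (the node is assumed to have been initialized with `rospy.init_node` elsewhere):

```python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def move_to_target(x, y, ori_z=0.0, ori_w=1.0):
    """Drive the chassis to (x, y) with the given orientation in the map frame."""
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"  # pose expressed on the 2D map
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.z = ori_z
    goal.target_pose.pose.orientation.w = ori_w
    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()  # e.g. GoalStatus.SUCCEEDED
```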
Step S30, a grabbing path is determined based on the target environment information and the grabbing target information;
It should be noted that determining the grabbing path based on the target environment information and the grabbing target information may be: determining the detailed position and pose information of the grabbing target within the target environment information according to the target environment information and the grabbing target information, and then determining, from that detailed position and pose information, the optimal grasping pose of the robot and a path from the robot's initial state to the optimal grasping pose, that is, the grabbing path.
And step S40, grabbing the grabbing target corresponding to the control instruction according to the grabbing path.
It should be noted that, the capturing target corresponding to the control instruction according to the capturing path may be controlling the mechanical arm to capture the capturing target corresponding to the control instruction according to the capturing path.
Further, after the grabbing target is grabbed, the grabbing target is required to be placed at a designated position according to a control instruction, and after the step S40, the method further comprises the steps of determining a target placement position according to the grabbing target information;
and placing the grabbing target at the target placing position.
It should be noted that the grabbing target information may further include a target placement position. Placing the grabbing target at the target placement position may be determining a movement path of the robot according to the constructed map of the robot's movement range and the target placement position, moving to the target placement position along that path, and then placing the grabbed target at the target placement position, for example as sketched below.
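A minimal sketch of the placement step, reusing the `move_to_target` navigation helper from above; the gripper interface is a hypothetical stand-in:

```python
def place_target(place_x, place_y, gripper):
    """Navigate to the target placement position on the map, then release."""
    move_to_target(place_x, place_y)  # navigation sketch defined earlier
    gripper.open()                    # hypothetical gripper release call
```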
In this embodiment, when a control instruction is received, grabbing target information is determined according to the control instruction; the robot moves to a target position according to the grabbing target information and collects target environment information; a grabbing path is determined based on the target environment information and the grabbing target information; and the grabbing target corresponding to the control instruction is grabbed according to the grabbing path. Because the grabbing path is planned from both the collected target environment information and the grabbing target information, more general and autonomous machine intelligence is achieved, and the grabbing efficiency of the robot is improved.
In the second embodiment of the present application, the same or similar content as in the first embodiment of the present application may be referred to the above description, and will not be repeated. On this basis, please refer to fig. 5, fig. 5 is a flow chart of a second embodiment of the robot control method according to the present application, wherein the step S30 further includes the following steps:
step S301, determining the position information of a grabbing target to be grabbed in the target environment information according to the target environment information and the grabbing target information;
It should be noted that, the determining, according to the target environment information and the grabbing target information, the position information of the grabbing target to be grabbed in the target environment information may be sending the target environment information and the grabbing target information to the preset multi-mode large model, and the preset multi-mode large model determines, according to the characteristics of the grabbing target in the grabbing target information, the position information of the grabbing target in the target environment information, and may specifically be a coordinate range in the target environment information.
Step S302, the position information and the point cloud image corresponding to the position information are sent to a preset GraspNet model, and the grabbing pose information output by the preset GraspNet model is obtained;
It should be noted that GraspNet is a deep learning network for robotic grasping, which aims to enable a robot to grasp objects effectively in various environments. The network predicts the optimal grasping position and pose by analyzing the three-dimensional shape of the object and its surroundings. GraspNet is not only concerned with grasping a single object, but can also handle grasping tasks for multiple objects in a complex scene. GraspNet is a small local neural network model; it cooperates with the above-mentioned preset multi-mode large model deployed in the cloud to complete the grabbing of objects. Specifically, the depth camera on the robot's mechanical arm collects data from the target environment, i.e., the environment where the grabbing target is located; the collected data form a point cloud image that provides visual and depth information of the scene and thus the basis for grasp analysis. The preset GraspNet model may be such a deep learning network for robotic grasping, and the grabbing pose information may be the optimal grasping pose output by the preset GraspNet model.
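The data flow into the grasp network can be sketched as follows. The back-projection of the depth image is standard pinhole-camera geometry; `grasp_net.predict` stands in for a pre-trained GraspNet-style model, whose real interface (e.g., in the open-source graspnet-baseline) differs in detail:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, mask):
    """Back-project the masked depth pixels (the target's region, taken from
    the position information) into 3-D points in the camera frame."""
    v, u = np.nonzero(mask & (depth > 0))  # pixels inside the target region
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)     # (N, 3) point cloud

def best_grasp(points, grasp_net):
    """Return the highest-scoring 6-DoF grasp candidate for the point cloud."""
    candidates = grasp_net.predict(points)         # hypothetical model call
    return max(candidates, key=lambda g: g.score)  # optimal grasping pose
```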
And step S303, determining a grabbing path according to the grabbing pose information.
It should be noted that determining the grabbing path according to the grabbing pose information may be generating, with the MoveIt mechanical arm path planning library in the robot's ROS, a path from the robot's initial state to the optimal grasping pose. MoveIt is a powerful robotic motion planning framework; this embodiment uses it to generate a reference trajectory from the initial state to the optimal grasping pose.
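Using the Python interface of MoveIt, the planning step may be sketched as follows; the planning group name depends on the robot's MoveIt configuration, and the return signature of `plan()` shown here is that of recent (Noetic-era) releases:

```python
import sys
import moveit_commander
from geometry_msgs.msg import Pose

def plan_grasp_path(grasp_pose: Pose, group_name: str = "manipulator"):
    """Plan a reference trajectory from the arm's current (initial) state
    to the optimal grasping pose."""
    moveit_commander.roscpp_initialize(sys.argv)
    group = moveit_commander.MoveGroupCommander(group_name)
    group.set_pose_target(grasp_pose)          # target = optimal grasp pose
    ok, trajectory, planning_time, error_code = group.plan()
    return trajectory if ok else None          # the grabbing path
```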
Further, in the process of robot grabbing, random vibration exists, in order to improve grabbing efficiency, the step of grabbing the grabbing target corresponding to the control instruction according to the grabbing path includes:
Constructing a Lagrange dynamics model, wherein the Lagrange dynamics model is used for predicting the stability of the robot when grabbing through the grabbing path;
Inputting the grabbing path into the Lagrange dynamics model, simulating the stability of the robot during grabbing, and outputting a prediction result;
and grabbing the grabbing target corresponding to the control instruction according to the prediction result.
It should be noted that, in this embodiment, a tracking control algorithm is provided for a mechanical arm system operating in a random vibration environment; it achieves finite-time stability and is applicable when the system has unknown dynamics. First, a stochastic Lagrangian dynamics model of the mechanical arm (i.e., the Lagrange dynamics model) under the random vibration environment is constructed. Then, a command-filtered adaptive backstepping controller is proposed, which not only approximates the unknown dynamics of the mechanical arm system but also avoids the singularity problem of the traditional finite-time backstepping method. Furthermore, an error compensation mechanism is introduced to compensate for the filtering error, and an auxiliary system is introduced to deal with the input saturation that is common in practice. The analysis demonstrates practical mean-square finite-time stability of the tracking error. Finally, a stochastic mechanical arm model is used to verify the effectiveness of the proposed control algorithm. Specifically, consider the following joint-space stochastic Lagrangian control system of the mechanical arm:

$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = \operatorname{sat}(u) + d(\xi(t))$$

where $q \in \mathbb{R}^{n}$ is the state variable and $n$ characterizes the dimension, $\dot{q}$ is the first derivative of $q$, $M(q)$ is the generalized mass (inertia) matrix, $C(q,\dot{q})$ is the Coriolis/centrifugal matrix, $G(q)$ is the gravity vector, and $d(\xi(t))$ is the random excitation force caused by the white noise $\xi(t)$. $u$ is the control force acting on the system, and $\operatorname{sat}(u)$ is an input saturation function representing the input saturation of the controller, satisfying, component-wise for $i = 1,\dots,n$:

$$\operatorname{sat}(u_{i}) = \begin{cases} u_{M,i}\,\operatorname{sgn}(u_{i}), & |u_{i}| > u_{M,i} \\ u_{i}, & |u_{i}| \le u_{M,i} \end{cases}$$

where $u_{M,i} > 0$ is a known constant. Thus $\operatorname{sat}(u) = u + \Delta u$, where $\Delta u$ is the bounded saturation error.

Assumption 1: $M(q)$ is symmetric and positive definite, and $C(q,\dot{q})$ and $G(q)$ can each be divided into a nominal part and an unknown uncertain part, $C = C_{0} + \Delta C$ and $G = G_{0} + \Delta G$. The nominal parts $C_{0}$ and $G_{0}$ are bounded by known constants, while the unknown parts satisfy $\|\Delta C(q,\dot{q})\dot{q} + \Delta G(q)\| \le \theta$, where $\theta$ is an unknown constant.

Defining $x_{1} = q$ and $x_{2} = \dot{q}$, the Itô stochastic integral equation of the mechanical arm can be obtained as:

$$\mathrm{d}x_{1} = x_{2}\,\mathrm{d}t, \qquad \mathrm{d}x_{2} = M^{-1}\big(\operatorname{sat}(u) - C x_{2} - G\big)\,\mathrm{d}t + M^{-1}\sigma\,\mathrm{d}w$$

where $\sigma\sigma^{\mathsf{T}} = 2\pi S$, $S$ is the power spectral density of the white noise $\xi(t)$ and is a positive definite matrix, and $w$ is a standard Wiener process of appropriate dimension.

Control algorithm: the tracking error signals, representing the differences between the reference/virtual control signals and the system states of the robot, are defined as

$$z_{1} = x_{1} - y_{d}, \qquad z_{2} = x_{2} - x_{2,c}$$

where $y_{d}$ is a reference signal whose first derivative is assumed to exist, and $x_{2,c}$ is the output of the finite-time command filter whose input is the virtual control signal $\alpha_{1}$.

The virtual signal $\alpha_{1}$ and the controller $u$ are designed by command-filtered adaptive backstepping with designed gain parameters $k_{1}$ and $k_{2}$: $\alpha_{1}$ stabilizes $z_{1}$, while $u$ combines a feedback term in $z_{2}$, a compensation of the nominal dynamics $C_{0}x_{2} + G_{0}$, an adaptive term based on $\hat{\theta}$ — the estimate of $\theta$, with $\theta$ bounding the unknown dynamics in the two-norm $\|\cdot\|$ of the vector — and the state of the auxiliary system. The update process of $\hat{\theta}$ is designed as an adaptive law driven by the compensated errors.

The compensated error signals are defined as

$$v_{i} = z_{i} - \eta_{i}, \quad i = 1, 2$$

where $\eta_{i}$ is the error compensation mechanism; in particular, $\eta_{1}$ is driven by the filtering error, for example of the form

$$\dot{\eta}_{1} = -k_{1}\eta_{1} + \eta_{2} + (x_{2,c} - \alpha_{1})$$

with zero initial values $\eta_{i}(0) = 0$. An auxiliary system, driven by the saturation error $\Delta u = \operatorname{sat}(u) - u$, is further introduced so that the actual control input can counteract the effect of input saturation on control performance.

The finite-time command filter takes the virtual controller $\alpha_{1}$ as input and outputs $x_{2,c}$ and its first derivative. In fact, the first-order Levant differentiator used in the finite-time command filter not only realizes fast filtering of the virtual control signal but also guarantees stability in finite time. The finite-time command filter is here applied for the first time to the finite-time control of a stochastic mechanical arm system, solving the singularity problem encountered by the traditional finite-time backstepping method. In addition, the auxiliary system is added to the controller design to ensure that the actual control input can be designed to counteract the effect of input saturation on control performance. The finite-time stability of this control system has been demonstrated.
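As a numerical illustration of the model above, the following sketch integrates a single-joint version of the saturated stochastic dynamics with the Euler–Maruyama method. The constants and the simple PD-type tracking law are illustrative stand-ins, not the command-filtered adaptive backstepping controller itself:

```python
import numpy as np

# 1-DoF instance of M*ddq + C*dq + G(q) = sat(u) + sigma*xi(t).
M, C, U_MAX, SIGMA, DT, STEPS = 1.0, 0.5, 20.0, 0.2, 1e-3, 20000

def G(q):                         # gravity torque of a single link
    return 9.81 * 0.5 * np.sin(q)

def sat(u):                       # controller input saturation
    return np.clip(u, -U_MAX, U_MAX)

y_d = np.sin                      # reference signal y_d(t)
q, dq = 0.0, 0.0
rng = np.random.default_rng(0)
for k in range(STEPS):
    t = k * DT
    u = -40.0 * (q - y_d(t)) - 12.0 * (dq - np.cos(t))  # PD-type stand-in law
    dw = rng.normal(0.0, np.sqrt(DT))                   # Wiener increment
    q += dq * DT                                        # dx1 = x2 dt
    dq += (sat(u) - C * dq - G(q)) / M * DT + SIGMA / M * dw
print("final tracking error:", q - y_d(STEPS * DT))
```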
In a specific implementation, reference may be made to fig. 6; fig. 6 is a schematic overall flow chart provided by a second embodiment of the robot control method of the present application, in which LVM denotes the preset multi-mode large model. First, access rights to the preset multi-mode large model and a preset prompt are obtained, where the prompt is used to optimize the control instruction input to the model. The audio information may be a control voice issued by the user. After the audio information is converted into text language information, it is judged whether a wake-up word is contained. If so, pictures of the room are taken, and the pre-established 2D map together with the pictures is sent to the LVM. After the LVM predicts a path, the robot moves along it to the target point corresponding to the grabbing target. The mechanical arm then takes a picture of the environment around the grabbing target and inputs it to the LVM, which predicts the coordinates of the grabbing target. The mechanical arm captures the depth map corresponding to those coordinates, namely the point cloud map; GraspNet predicts the optimal grasping pose, and MoveIt generates a grabbing path from the initial state to the optimal grasping pose. Finally, the mechanical arm executes the grabbing action according to the grabbing path and places the grabbing target at the designated position.
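Tying the sketches above together, an end-to-end flow corresponding to fig. 6 might look as follows; `lvm`, `capture_room_image`, `capture_depth_and_mask`, `grasp_to_pose`, `execute` and `gripper` are hypothetical stand-ins for the cloud model and the robot's own interfaces:

```python
def handle_voice_command(recognizer, mic, lvm, grasp_net, gripper, cam):
    text = listen_for_command(recognizer, mic)   # wake-word gated STT sketch
    if text is None:
        return
    # LVM: user demand + room image + 2D map -> target point on the map.
    target = lvm.predict_target(text, capture_room_image())   # hypothetical
    move_to_target(*target["map_position"])      # navigate on the 2D map
    # Arm camera: depth map of the target region -> point cloud -> grasp.
    depth, mask = capture_depth_and_mask(target) # hypothetical arm-camera step
    points = depth_to_points(depth, cam.fx, cam.fy, cam.cx, cam.cy, mask)
    grasp = best_grasp(points, grasp_net)        # GraspNet sketch from above
    plan = plan_grasp_path(grasp_to_pose(grasp)) # MoveIt reference trajectory
    execute(plan)                                # hypothetical arm execution
    gripper.close()
    move_to_target(*target["place_position"])    # go to the placement point
    gripper.open()
```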
In this embodiment, the position information of the grabbing target to be grabbed within the target environment information is determined according to the target environment information and the grabbing target information; the position information and the point cloud image corresponding to it are sent to the preset GraspNet model to obtain the grabbing pose information output by the model; and the grabbing path is then determined according to the grabbing pose information. Determining the grabbing pose information through the preset GraspNet model and then deriving the grabbing path from it can improve the grabbing success rate.
It should be noted that the foregoing examples are only for understanding the present application, and are not intended to limit the control method of the robot of the present application, and that many simple variations based on the technical concept are within the scope of the present application.
The present application also provides a robot control device, referring to fig. 7, the robot control device includes:
The receiving module 10 is used for determining grabbing target information according to the control instruction when the control instruction is received;
A moving module 20, configured to move to a target position according to the capturing target information, and collect target environment information;
a capture path determination module 30 for determining a capture path based on the target environment information and the capture target information;
and the grabbing module 40 is configured to grab the grabbing target corresponding to the control instruction according to the grabbing path.
In this embodiment, when a control instruction is received, grabbing target information is determined according to the control instruction; the robot moves to a target position according to the grabbing target information and collects target environment information; a grabbing path is determined based on the target environment information and the grabbing target information; and the grabbing target corresponding to the control instruction is grabbed according to the grabbing path. Because the grabbing path is planned from both the collected target environment information and the grabbing target information, more general and autonomous machine intelligence is achieved, and the grabbing efficiency of the robot is improved.
The robot control device provided by the application adopts the robot control method of the above embodiment, and can solve the technical problem of low execution efficiency when an existing embodied intelligent mobile manipulation robot performs a grabbing task. Compared with the prior art, the beneficial effects of the robot control device provided by the application are the same as those of the robot control method provided by the above embodiment, and the other technical features of the device are the same as those disclosed by the method of the embodiment, so they are not repeated herein.
The application provides a robot control device which comprises at least one processor and a memory in communication connection with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the robot control method in the first embodiment.
Referring now to fig. 8, a schematic diagram of a robot control device suitable for use in implementing embodiments of the present application is shown. The robot control device in the embodiment of the present application may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and vehicle-mounted terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The robot control device shown in fig. 8 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in fig. 8, the robot control device may include a processing device 1001 (e.g., a central processing unit or a graphics processor), which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage device 1003 into a random access memory (RAM) 1004. The RAM 1004 also stores various programs and data necessary for the operation of the robot control device. The processing device 1001, the ROM 1002 and the RAM 1004 are connected to each other by a bus 1005, and an input/output (I/O) interface 1006 is also connected to the bus. In general, the following may be connected to the I/O interface 1006: an input device 1007 such as a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer or gyroscope; an output device 1008 including a liquid crystal display (LCD), speaker, vibrator, etc.; the storage device 1003 including a magnetic tape, hard disk, etc.; and a communication device 1009. The communication device 1009 may allow the robot control device to communicate wirelessly or by wire with other devices to exchange data. Although a robot control device having various systems is shown in the figure, it should be understood that not all of the illustrated systems are required to be implemented or provided; more or fewer systems may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through a communication device, or installed from the storage device 1003, or installed from the ROM 1002. The above-described functions defined in the method of the disclosed embodiment of the application are performed when the computer program is executed by the processing device 1001.
The robot control device provided by the application adopts the robot control method of the above embodiment, and can solve the technical problem of low execution efficiency when an existing embodied intelligent mobile manipulation robot performs a grabbing task. Compared with the prior art, the beneficial effects of the robot control device provided by the application are the same as those of the robot control method provided by the above embodiment, and the other technical features of the device are the same as those disclosed by the method of the previous embodiment, so they are not described in detail herein.
It is to be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
The present application provides a computer-readable storage medium having computer-readable program instructions (i.e., a computer program) stored thereon for performing the robot control method in the above-described embodiments.
The computer-readable storage medium provided by the present application may be, for example, a USB flash drive, but is not limited thereto; it may be any electronic, magnetic, optical, electromagnetic, infrared or semiconductor system or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to electrical wiring, optical-fiber cable, RF (radio frequency) and the like, or any suitable combination of the foregoing.
The above-mentioned computer-readable storage medium may be contained in the robot control apparatus or may exist alone without being incorporated in the robot control apparatus.
The computer-readable storage medium carries one or more programs that, when executed by the robot control device, cause the robot control device to perform the robot control method described above.
Computer program code for carrying out the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present application may be implemented in software or in hardware. Wherein the name of the module does not constitute a limitation of the unit itself in some cases.
The readable storage medium provided by the application is a computer-readable storage medium storing computer-readable program instructions (i.e., a computer program) for executing the above robot control method, and can solve the technical problem of low execution efficiency when an existing embodied intelligent mobile manipulation robot performs a grabbing task. Compared with the prior art, the beneficial effects of the computer-readable storage medium provided by the application are the same as those of the robot control method provided by the above embodiment, and are not described in detail herein.
The application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of a robot control method as described above.
The computer program product provided by the application can solve the technical problem of low execution efficiency when an existing embodied intelligent mobile manipulation robot performs a grabbing task. Compared with the prior art, the beneficial effects of the computer program product provided by the application are the same as those of the robot control method provided by the above embodiment, and are not described herein.
The foregoing description is only a partial embodiment of the present application, and is not intended to limit the scope of the present application, and all the equivalent structural changes made by the description and the accompanying drawings under the technical concept of the present application, or the direct/indirect application in other related technical fields are included in the scope of the present application.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411528544.5A CN119105383B (en) | 2024-10-30 | 2024-10-30 | Robot control method, device, equipment, storage medium and product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411528544.5A CN119105383B (en) | 2024-10-30 | 2024-10-30 | Robot control method, device, equipment, storage medium and product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN119105383A true CN119105383A (en) | 2024-12-10 |
CN119105383B CN119105383B (en) | 2025-03-07 |
Family
ID=93711870
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411528544.5A Active CN119105383B (en) | 2024-10-30 | 2024-10-30 | Robot control method, device, equipment, storage medium and product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN119105383B (en) |
- 2024-10-30: CN application CN202411528544.5A granted as patent CN119105383B (status: active)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110193833A (en) * | 2019-06-27 | 2019-09-03 | 青岛大学 | The adaptive finite time command filtering backstepping control method of Multi-arm robots |
CN113715024A (en) * | 2021-09-03 | 2021-11-30 | 上海电机学院 | Position tracking control method of multi-degree-of-freedom upper limb rehabilitation robot |
CN114102585A (en) * | 2021-11-16 | 2022-03-01 | 北京洛必德科技有限公司 | Article grabbing planning method and system |
CN115578460A (en) * | 2022-11-10 | 2023-01-06 | 湖南大学 | Robot Grasping Method and System Based on Multimodal Feature Extraction and Dense Prediction |
CN118527372A (en) * | 2024-06-12 | 2024-08-23 | 苏州元脑智能科技有限公司 | Material sorting system |
CN118721192A (en) * | 2024-06-27 | 2024-10-01 | 华中师范大学 | Robot grasping method and device combined with vision and language instruction guidance |
CN118789548A (en) * | 2024-08-08 | 2024-10-18 | 北京深谋科技有限公司 | An intelligent grasping method and system for a home service robot based on RGB-D vision guidance |
Non-Patent Citations (1)
Title |
---|
ZHANG Lei et al., "Adaptive backstepping sliding-mode control of a manipulator with input and output constraints" (输入输出受限的机械臂自适应反步滑模控制), vol. 38, no. 6, 17 June 2024 (2024-06-17), pages 1-9 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN120054833A (en) * | 2025-04-28 | 2025-05-30 | 上海建科检验有限公司 | Coating film forming system and method based on robot, electronic equipment and storage medium |
CN120054833B (en) * | 2025-04-28 | 2025-07-18 | 上海建科检验有限公司 | Coating film forming system and method based on robot, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN119105383B (en) | 2025-03-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114502335B (en) | Method and system for trajectory optimization for non-linear robotic systems with geometric constraints | |
CN109829947B (en) | Pose determination method, tray loading method, device, medium, and electronic apparatus | |
WO2022105395A1 (en) | Data processing method, apparatus, and system, computer device, and non-transitory storage medium | |
CN119105383B (en) | Robot control method, device, equipment, storage medium and product | |
CN107479368A (en) | A kind of method and system of the training unmanned aerial vehicle (UAV) control model based on artificial intelligence | |
CN110795523A (en) | Vehicle positioning method and device and intelligent vehicle | |
CN114488848A (en) | Autonomous UAV flight system and simulation experiment platform for indoor architectural space | |
CN119319568B (en) | Robotic arm control method, device, equipment and storage medium | |
CN115648232B (en) | Mechanical arm control method and device, electronic equipment and readable storage medium | |
CN114740854A (en) | Robot obstacle avoidance control method and device | |
CN119025850A (en) | Multimodal environmental perception and control methods, systems, media and products | |
US11262887B2 (en) | Methods and systems for assigning force vectors to robotic tasks | |
EP3115926A1 (en) | Method for control using recognition of two-hand gestures | |
WO2022091787A1 (en) | Communication system, robot, and storage medium | |
CN113778078B (en) | Positioning information generation method, device, electronic device and computer readable medium | |
Tang et al. | Real-time robot localization, vision, and speech recognition on Nvidia Jetson TX1 | |
KR102685532B1 (en) | Method of managing muti tasks and electronic device therefor | |
WO2024082558A1 (en) | Electromagnetic-positioning-based following method and apparatus for mobile robot, and readable medium | |
WO2022222532A1 (en) | Method and apparatus for establishing three-dimensional map, and electronic device and computer-readable storage medium | |
Adiprawita et al. | Service oriented architecture in robotic as a platform for cloud robotic (Case study: human gesture based teleoperation for upper part of humanoid robot) | |
CN113093716B (en) | Motion trail planning method, device, equipment and storage medium | |
CN114661064A (en) | Unmanned aerial vehicle flight test method, system, equipment and readable storage medium | |
CN114155437A (en) | Elevator taking control method and device, electronic equipment and storage medium | |
CN117826641B (en) | Simulation evaluation system and method of aerial working robot and electronic equipment | |
CN118238137A (en) | A method and device for planning the grasping posture of a mobile robot arm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |