
CN119105383A - Robot control method, device, equipment, storage medium and product - Google Patents


Info

Publication number
CN119105383A
CN119105383A (application number CN202411528544.5A)
Authority
CN
China
Prior art keywords
target
grabbing
information
robot
control instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411528544.5A
Other languages
Chinese (zh)
Other versions
CN119105383B (en)
Inventor
郭效禹
张钊
刘恋
彭博文
王敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peng Cheng Laboratory
Original Assignee
Peng Cheng Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peng Cheng Laboratory
Priority to CN202411528544.5A
Publication of CN119105383A
Application granted
Publication of CN119105383B
Legal status: Active

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 - Programme-control systems
    • G05B19/02 - Programme-control systems electric
    • G05B19/04 - Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042 - Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B19/0423 - Input/output
    • G05B2219/00 - Program-control systems
    • G05B2219/20 - Pc systems
    • G05B2219/25 - Pc structure of the system
    • G05B2219/25257 - Microcontroller
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]


Abstract

The application discloses a robot control method, device, equipment, storage medium and product, and relates to the technical field of artificial intelligence. The method comprises the steps of: when a control instruction is received, determining grabbing target information according to the control instruction; moving to a target position according to the grabbing target information and collecting target environment information; determining a grabbing path based on the target environment information and the grabbing target information; and grabbing the grabbing target corresponding to the control instruction according to the grabbing path. In this way, more general and autonomous machine intelligence is achieved, and the grabbing efficiency of the robot is improved.

Description

Robot control method, device, equipment, storage medium and product
Technical Field
The application relates to the technical field of artificial intelligence, and in particular to a robot control method, device, equipment, storage medium and product.
Background
The intelligent mobile operation robot system based on embodied intelligence (hereinafter the "embodied intelligent mobile operation robot") aims to construct a robot system with autonomous environment perception, full understanding and cognition, smooth human-machine interaction, reliable intelligent decision-making, and natural motion and operation planning. Relying on a multi-field, multi-scene and multifunctional autonomous intelligent platform, it upgrades and empowers the traditional mobile operation robot, leading the future industrial development of mobile operation robots. Equipped with a brain-like structure capable of sensing, understanding and deciding, the embodied intelligent mobile operation robot can autonomously understand and complete high-level instructions issued by humans, thus realizing truly general intelligence. Compared with a traditional mobile robot, the embodied intelligent mobile operation robot can complete complex work that usually requires human intelligence, and as the technology continues to develop and mature it will bring revolutionary transformation to human society. The embodied intelligent mobile operation robot has broad application prospects in civil fields such as service, catering, medical treatment, smart home and unmanned delivery, in industrial fields such as smart factories and intelligent manufacturing, and in military fields such as individual combat.
At present, research and development of embodied intelligent mobile operation robots is still at the laboratory testing stage at home and abroad; such robots show certain deficiencies in understanding and completing instructions issued by humans, and the technology as a whole is not yet mature. How to improve the control efficiency of the embodied intelligent mobile operation robot, so that it can efficiently understand human instructions and execute the corresponding grabbing tasks, has therefore become a technical problem to be solved urgently.
Disclosure of Invention
The application mainly aims to provide a robot control method, device, equipment, storage medium and product, so as to solve the technical problem of low execution efficiency when the existing embodied intelligent mobile operation robot executes a grabbing task.
In order to achieve the above object, the present application provides a robot control method comprising:
when a control instruction is received, determining grabbing target information according to the control instruction;
moving to a target position according to the grabbing target information, and collecting target environment information;
determining a grabbing path based on the target environment information and the grabbing target information;
and grabbing the grabbing target corresponding to the control instruction according to the grabbing path.
Optionally, when receiving a control instruction, the step of determining to grasp target information according to the control instruction includes:
when a control instruction is received, determining user demand information according to the control instruction;
Collecting surrounding environment images;
Inputting the user demand information and the surrounding environment image into a preset multi-mode large model to obtain grabbing target information output by the preset multi-mode large model, wherein the grabbing target information comprises relative map positions and robot gesture coordinates of grabbing targets corresponding to the control instructions.
Optionally, the step of determining the user requirement information according to the control instruction when the control instruction is received includes:
when a control instruction is received, performing text conversion on the control instruction to obtain text information;
Judging whether a target wake-up word is detected according to the text information;
and when the target wake-up word is detected, determining user demand information according to the text information.
Optionally, the step of determining a grabbing path based on the target environment information and the grabbing target information includes:
Determining the position information of a grabbing target to be grabbed in the target environment information according to the target environment information and the grabbing target information;
the position information and the point cloud image corresponding to the position information are sent to a preset GraspNet model, and the grabbing pose information output by the preset GraspNet model is obtained;
And determining a grabbing path according to the grabbing pose information.
Optionally, the step of capturing the captured target corresponding to the control instruction according to the capturing path includes:
Constructing a Lagrange dynamics model, wherein the Lagrange dynamics model is used for predicting the stability of the robot when grabbing through the grabbing path;
Inputting the grabbing path into the Lagrange dynamics model, simulating the stability of the robot during grabbing, and outputting a prediction result;
and grabbing the grabbing target corresponding to the control instruction according to the prediction result.
Optionally, after the step of capturing the captured target corresponding to the control instruction according to the capturing path, the method further includes:
Determining a target placement position according to the grabbing target information;
and placing the grabbing target at the target placing position.
In addition, in order to achieve the above object, the present application also proposes a robot control device including:
the receiving module is used for determining grabbing target information according to the control instruction when the control instruction is received;
The moving module is used for moving to a target position according to the grabbing target information and collecting target environment information;
the grabbing path determining module is used for determining grabbing paths based on the target environment information and the grabbing target information;
and the grabbing module is used for grabbing the grabbing target corresponding to the control instruction according to the grabbing path.
In addition, in order to achieve the above object, the application also proposes a robot control device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program being configured to implement the steps of the robot control method as described above.
In addition, in order to achieve the above object, the present application also proposes a storage medium, which is a computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps of the robot control method as described above.
Furthermore, to achieve the above object, the present application provides a computer program product comprising a computer program which, when being executed by a processor, implements the steps of the robot control method as described above.
When a control instruction is received, grabbing target information is determined according to the control instruction; the robot moves to a target position according to the grabbing target information and collects target environment information; a grabbing path is determined based on the target environment information and the grabbing target information; and the grabbing target corresponding to the control instruction is grabbed according to the grabbing path. In this way, more general and autonomous machine intelligence is achieved, and the grabbing efficiency of the robot is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a first embodiment of a robot control method according to the present application;
FIG. 2 is a schematic diagram of a robot with intelligent mobile operation according to a first embodiment of the present application;
FIG. 3 is a schematic view of a chassis structure of a vehicle body according to a first embodiment of the present application;
FIG. 4 is a schematic view showing details of a chassis structure of a vehicle body according to a first embodiment of the robot control method of the present application;
Fig. 5 is a schematic flow chart of a second embodiment of a robot control method according to the present application;
FIG. 6 is a schematic overall flow chart of a robot control method according to a second embodiment of the present application;
fig. 7 is a schematic block diagram of a robot control device according to an embodiment of the present application;
fig. 8 is a schematic device structure diagram of a hardware operating environment related to a robot control method in an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the technical solution of the present application and are not intended to limit the present application.
For a better understanding of the technical solution of the present application, the following detailed description will be given with reference to the drawings and the specific embodiments.
The main solution of the embodiment of the application is: when a control instruction is received, grabbing target information is determined according to the control instruction; the robot moves to a target position according to the grabbing target information and acquires target environment information; a grabbing path is determined based on the target environment information and the grabbing target information; and the grabbing target corresponding to the control instruction is grabbed according to the grabbing path. In this way, more general and autonomous machine intelligence is achieved, and the grabbing efficiency of the robot is improved.
It should be noted that the execution subject of this embodiment may be a computing or service device with data processing, network communication and program running functions, such as a tablet computer, a personal computer or a mobile phone, or an electronic device capable of implementing the above functions, such as an embodied intelligent mobile operation robot. This embodiment and the following embodiments are described below taking the embodied intelligent mobile operation robot (hereinafter simply referred to as "robot") as an example.
Based on this, an embodiment of the present application provides a robot control method, and referring to fig. 1, fig. 1 is a schematic flow chart provided by an embodiment of the robot control method of the present application.
In this embodiment, the robot control method includes steps S10 to S40:
Step S10, when a control instruction is received, determining grabbing target information according to the control instruction;
it should be noted that the control command may be a control command sent by the user, for example, to let the robot grasp a certain object. The determining of the grabbing target information according to the control instruction may be determining a position of an article to be grabbed, an article name or an image of the article and a target placement point to be placed after grabbing according to the control instruction.
It should be noted that, referring to fig. 2, fig. 2 is a schematic structural diagram of the embodied intelligent mobile operation robot according to the first embodiment of the present application. The robot includes a mechanical arm and a vehicle body chassis; the mechanical arm is used for grabbing, and the vehicle body chassis is used for moving the robot. Referring to fig. 3, fig. 3 is a schematic diagram of the vehicle body chassis according to the first embodiment of the robot control method of the present application; the vehicle body chassis includes a Gemini Pro camera, 2 TOF lidars, and 6 uniformly distributed ultrasonic sensors. Other cameras, laser radars and ultrasonic sensors may be used instead, and the embodiment is not limited herein. Referring to fig. 4, fig. 4 is a detailed schematic diagram of the vehicle body chassis structure provided in an embodiment of the robot control method of the present application; the vehicle body chassis includes a power charging port, a hard emergency stop button, a power switch, a router WAN interface, an external power supply interface, a soft emergency stop button, a USB port, a Type-C port, an automatic recharging contact, and a suspension chassis.
Further, in order to enable the robot to accurately understand the intention of the user, the step S10 may include determining user demand information according to a control instruction when the control instruction is received;
Collecting surrounding environment images;
Inputting the user demand information and the surrounding environment image into a preset multi-mode large model to obtain grabbing target information output by the preset multi-mode large model, wherein the grabbing target information comprises relative map positions and robot gesture coordinates of grabbing targets corresponding to the control instructions.
It should be noted that, when the control instruction is received, determining the user demand information according to the control instruction may be performing semantic analysis on the control instruction to obtain a semantic analysis result, and determining, according to the semantic analysis result, the article information to be grabbed by the user, which may include the name and rough position of the article, for example, "take the cola on the refrigerator". Collecting the surrounding environment image may be the robot taking a picture of the surroundings with a camera on its body. The preset multi-mode large model may be a large model such as ChatGPT or Tongyi Qianwen (Qwen). The preset multi-mode large model predicts the relative map position and the robot gesture coordinates of the grabbing target according to the user demand information, the surrounding environment image, and pre-acquired map information of the robot's moving range.
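As a hedged illustration only, the following sketch shows how the user demand text and the surrounding environment image might be packaged for a multi-mode large model and how its reply might be parsed; the JSON field names (`map_position`, `robot_pose`) and the instruction text are hypothetical placeholders, not part of the disclosed embodiment or any real API.

```python
import base64
import json

def build_grasp_query(user_demand: str, image_bytes: bytes) -> str:
    """Package the user demand text and an environment image into one JSON
    request for a multimodal large model (schema is illustrative only)."""
    payload = {
        "prompt": (
            "Given the user request and the camera image, return the grasp "
            "target's relative map position and robot pose coordinates.\n"
            "User request: " + user_demand
        ),
        "image_base64": base64.b64encode(image_bytes).decode("ascii"),
    }
    return json.dumps(payload)

def parse_grasp_target(reply: str) -> dict:
    """Parse the model reply into grabbing target information: relative map
    position and robot gesture coordinates (reply schema assumed)."""
    data = json.loads(reply)
    return {"map_position": data["map_position"],
            "robot_pose": data["robot_pose"]}
```

In practice the serialized query would be sent to whatever endpoint hosts the cloud model; only the packaging and parsing steps are shown here.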
Before the step S10, the method further includes mapping the moving range of the robot to obtain the map information. Specifically, the mapping is performed by gmapping technology, gmapping is a synchronous positioning and mapping (Simultaneous Localization AND MAPPING, SLAM) technology based on particle filtering. It mainly uses an improved Rao-Blackwellised particle filter to solve both positioning and mapping problems. The core idea is to use a plurality of particles, each representing a possible robot pose and associated with it a map. The gmapping technology based on laser radar information is used for remotely controlling the robot to move in the moving range and simultaneously establishing a 2D plane map for autonomous navigation and positioning of the robot.
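The Rao-Blackwellised particle filter idea described above can be sketched as follows. This is a deliberately simplified illustration: gmapping's per-particle occupancy grid and scan-matching weight update are omitted, and only the pose hypotheses, the noisy motion update, and low-variance resampling are shown.

```python
import math
import random

class Particle:
    """One hypothesis: a robot pose (x, y, theta) with an importance weight.
    A full RBPF additionally attaches an occupancy-grid map to each particle."""
    def __init__(self, x, y, theta, weight):
        self.x, self.y, self.theta, self.weight = x, y, theta, weight

def motion_update(particles, dx, dtheta, noise=0.05):
    """Propagate every particle through a noisy odometry motion model."""
    for p in particles:
        p.theta += dtheta + random.gauss(0.0, noise)
        p.x += (dx + random.gauss(0.0, noise)) * math.cos(p.theta)
        p.y += (dx + random.gauss(0.0, noise)) * math.sin(p.theta)

def resample(particles):
    """Low-variance resampling by weight; returns a fresh, equally
    weighted particle set of the same size."""
    total = sum(p.weight for p in particles)
    n = len(particles)
    out = []
    r = random.uniform(0.0, total / n)
    c, i = particles[0].weight, 0
    for m in range(n):
        u = r + m * total / n
        while u > c:
            i += 1
            c += particles[i].weight
        q = particles[i]
        out.append(Particle(q.x, q.y, q.theta, 1.0 / n))
    return out
```

In gmapping the weight of each particle would be updated from how well its map explains the latest lidar scan before resampling.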
Further, in order to improve the service efficiency of the robot, the step of determining the user requirement information according to the control instruction when the control instruction is received includes:
when a control instruction is received, performing text conversion on the control instruction to obtain text information;
Judging whether a target wake-up word is detected according to the text information;
and when the target wake-up word is detected, determining user demand information according to the text information.
The control instruction may be a voice command sent by the user to the robot, and when the robot receives the voice command sent by the user, the robot performs text conversion on the control instruction to obtain text information. And judging whether a target wake-up word exists currently according to the text information, wherein the target wake-up word can be a keyword indicating the robot to grasp the object, such as 'taking, placing, delivering'. And when the target wake-up word is detected, determining user demand information according to the text information. In order to improve the analysis accuracy of the preset multi-mode large model, a prompt word can be set in the embodiment, text information of a user is optimized through the prompt word and then input into the preset multi-mode large model, so that the preset multi-mode large model outputs more accurate user demand information.
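A minimal sketch of the wake-word gate described above, assuming the voice command has already been converted to text. The wake-word list is illustrative (the source gives "taking, placing, delivering" as examples); a real system would match on the original-language keywords.

```python
WAKE_WORDS = ("take", "place", "deliver")  # illustrative keywords

def detect_wake_word(text: str, wake_words=WAKE_WORDS):
    """Return the first wake word found in the transcribed text, else None."""
    lowered = text.lower()
    for w in wake_words:
        if w in lowered:
            return w
    return None

def extract_demand(text: str):
    """If a wake word is present, pass the utterance on as user demand text;
    otherwise the instruction is ignored."""
    return text if detect_wake_word(text) else None
```

The returned demand text would then be (optionally prompt-optimized and) fed to the preset multi-mode large model.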
Step S20, moving to a target position according to the grabbing target information, and collecting target environment information;
The grabbing target information includes the target position of the grabbing target to be grabbed. Moving to the target position according to the grabbing target information may be invoking navigation to move to the target position corresponding to the grabbing target information, and collecting surrounding environment information at the target position to obtain the target environment information.
Step S30, a grabbing path is determined based on the target environment information and the grabbing target information;
The determining the grabbing path based on the target environment information and the grabbing target information may be determining a detailed position and pose information of the grabbing target in the target environment information according to the target environment information and the grabbing target information, and determining an optimal grabbing pose of the robot during grabbing and a path from an initial state to the optimal grabbing pose of the robot according to the detailed position and the pose information, that is, the grabbing path.
And step S40, grabbing the grabbing target corresponding to the control instruction according to the grabbing path.
It should be noted that, the capturing target corresponding to the control instruction according to the capturing path may be controlling the mechanical arm to capture the capturing target corresponding to the control instruction according to the capturing path.
Further, after the grabbing target is grabbed, the grabbing target is required to be placed at a designated position according to a control instruction, and after the step S40, the method further comprises the steps of determining a target placement position according to the grabbing target information;
and placing the grabbing target at the target placing position.
It should be noted that the capturing target information may further include a target placement position. The step of placing the grabbing target at the target placing position may be to determine a moving path of the robot according to the constructed robot moving range map and the target placing position, and move to the target placing position according to the moving path. And then placing the grabbed grabbing target at the target placing position.
When a control instruction is received, the embodiment determines grabbing target information according to the control instruction, moves to a target position according to the grabbing target information, collects target environment information, determines grabbing paths based on the target environment information and the grabbing target information, and grabs grabbing targets corresponding to the control instruction according to the grabbing paths. The method comprises the steps of moving to a target position according to grabbing target information, collecting target environment information, determining grabbing paths based on the target environment information and the grabbing target information, and grabbing the grabbing targets corresponding to control instructions according to the grabbing paths. The robot grabbing device has the advantages that more general and autonomous machine intelligence is achieved, and the robot grabbing efficiency is improved.
In the second embodiment of the present application, the same or similar content as in the first embodiment of the present application may be referred to the above description, and will not be repeated. On this basis, please refer to fig. 5, fig. 5 is a flow chart of a second embodiment of the robot control method according to the present application, wherein the step S30 further includes the following steps:
step S301, determining the position information of a grabbing target to be grabbed in the target environment information according to the target environment information and the grabbing target information;
It should be noted that, the determining, according to the target environment information and the grabbing target information, the position information of the grabbing target to be grabbed in the target environment information may be sending the target environment information and the grabbing target information to the preset multi-mode large model, and the preset multi-mode large model determines, according to the characteristics of the grabbing target in the grabbing target information, the position information of the grabbing target in the target environment information, and may specifically be a coordinate range in the target environment information.
Step S302, the position information and the point cloud image corresponding to the position information are sent to a preset GraspNet model, and the grabbing pose information output by the preset GraspNet model is obtained;
It should be noted that GraspNet is a deep learning network for robot grasping, which aims to enable the robot to grip objects effectively in various environments. The network predicts the optimal gripping position and pose by analyzing the three-dimensional shape of the object and its surroundings. GraspNet is not only concerned with grabbing a single object, but can also handle grabbing tasks for multiple objects in complex scenes. GraspNet is a small local neural network model that cooperates with the above-mentioned preset multi-mode large model deployed in the cloud to complete object grabbing. Specifically, the depth camera on the robot's mechanical arm collects data from the target environment, i.e. the environment where the grabbing target is located; the collected data form a point cloud image that provides the visual and depth information of the scene, forming the basis for grabbing analysis. The grabbing pose information is the optimal grabbing pose output by the preset GraspNet model.
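Two processing steps around the grasp-prediction model can be illustrated as follows: cropping the point cloud to the coordinate range reported for the grabbing target, and picking the highest-scoring grasp from the model's candidates. The model call itself is omitted, and the function names are illustrative, not the GraspNet API.

```python
import numpy as np

def crop_point_cloud(points: np.ndarray, xy_range) -> np.ndarray:
    """Keep only points whose (x, y) coordinates fall inside the range
    reported for the grabbing target.
    points: (N, 3) array; xy_range: (xmin, xmax, ymin, ymax)."""
    xmin, xmax, ymin, ymax = xy_range
    mask = ((points[:, 0] >= xmin) & (points[:, 0] <= xmax) &
            (points[:, 1] >= ymin) & (points[:, 1] <= ymax))
    return points[mask]

def best_grasp(grasp_candidates, scores):
    """Select the highest-scoring candidate grasp (a stand-in for the
    ranking of grasp poses returned by the model)."""
    return grasp_candidates[int(np.argmax(scores))]
```

The cropped cloud would be fed to the grasp network, and the selected pose passed on to path planning.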
And step S303, determining a grabbing path according to the grabbing pose information.
It should be noted that determining the grabbing path according to the grabbing pose information may be generating, with the mechanical arm path planning library MoveIt in ROS, a reference trajectory from the robot's initial state to the optimal grabbing pose. MoveIt is a powerful robotic motion planning framework.
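MoveIt performs collision-aware motion planning; as a stand-in illustration only, the sketch below generates a naive linear joint-space reference trajectory from the initial state to the optimal grabbing pose, which conveys the shape of the planner's output without any of its collision checking.

```python
import numpy as np

def interpolate_trajectory(q_start, q_goal, steps=50):
    """Return a list of joint configurations linearly interpolated from the
    initial state q_start to the grasp configuration q_goal.
    (A real planner such as MoveIt would avoid obstacles and joint limits.)"""
    q_start = np.asarray(q_start, dtype=float)
    q_goal = np.asarray(q_goal, dtype=float)
    ts = np.linspace(0.0, 1.0, steps)
    return [q_start + t * (q_goal - q_start) for t in ts]
```

Each waypoint would then be streamed to the arm controller, or replaced wholesale by the trajectory MoveIt returns.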
Further, in the process of robot grabbing, random vibration exists, in order to improve grabbing efficiency, the step of grabbing the grabbing target corresponding to the control instruction according to the grabbing path includes:
Constructing a Lagrange dynamics model, wherein the Lagrange dynamics model is used for predicting the stability of the robot when grabbing through the grabbing path;
Inputting the grabbing path into the Lagrange dynamics model, simulating the stability of the robot during grabbing, and outputting a prediction result;
and grabbing the grabbing target corresponding to the control instruction according to the prediction result.
It should be noted that, in this embodiment, for the mechanical arm system operating in the random vibration environment, a tracking control algorithm is provided, and finite time stability is realized, and the method is applicable to the case that the system has unknown dynamics. First, a random lagrangian kinetic model of the mechanical arm (i.e., the lagrangian kinetic model) under a random vibration environment is constructed. Then, a command filtering self-adaptive backstepping controller is provided, not only the unknown dynamics of the mechanical arm system is approximately obtained, but also the problem of singularity of the traditional finite time backstepping method can be avoided. Furthermore, an error compensation mechanism is introduced for the error of the filter to compensate, and an auxiliary system is further introduced to deal with the input saturation problem which is common in practice. The results demonstrate the practical mean square limited time stability of tracking errors. Finally, a random mechanical arm model is applied to verify the effectiveness of the proposed control algorithm. Specifically, consider the robotic arm joint space random Lagrange control system as follows:
wherein, Is a state variable, n is used to characterize the dimension,Is thatIs used as a first derivative of (a),Is a generalized mass (inertia) matrix,Is a coriolis/centrifugal matrix,Is the gravity vector of the gravity vector,Is made of white noiseA random excitation force caused by the magnetic field, wherein;Is a control force acting on the system, andIs an input saturation function representing the input saturation of the controller, and satisfies:
sat(uᵢ) = uᵢ if |uᵢ| ≤ u_{M,i}, and sat(uᵢ) = sign(uᵢ)·u_{M,i} otherwise,

wherein u_{M,i} > 0 is a known constant denoting the saturation bound of the i-th input. Thus, sat(u) can be written as sat(u) = u + Δu, wherein Δu = sat(u) − u is the bounded saturation error.
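The input saturation described above amounts to an elementwise clamp of each control force to its known bound. A minimal sketch (the helper name and the bounds are illustrative):

```python
def sat(u, u_max):
    """Elementwise input saturation: clamp each control force u_i to
    the known bound u_max_i, as in the saturation model above."""
    return [max(-um, min(um, ui)) for ui, um in zip(u, u_max)]

print(sat([0.5, -3.0, 7.0], [1.0, 1.0, 5.0]))  # [0.5, -1.0, 5.0]
```

The difference sat(u) − u is the bounded saturation error Δu that the auxiliary system introduced later is designed to compensate.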
Assumption 1: M(q) is symmetric and positive definite, and C(q, q̇) and G(q) can each be divided into a nominal part and an unknown uncertain part, i.e., C = C₀ + ΔC and G = G₀ + ΔG. The nominal parts satisfy ‖C₀(q, q̇)‖ ≤ c₀‖q̇‖ and ‖G₀(q)‖ ≤ g₀, wherein c₀ and g₀ are known constants. The unknown parts satisfy ‖ΔC(q, q̇)q̇ + ΔG(q)‖ ≤ θ, wherein θ is an unknown constant.
From Assumption 1, the bound on the uncertain part of the dynamics can be derived accordingly. Defining x₁ = q and x₂ = q̇, the robotic arm system may be expressed as:

ẋ₁ = x₂, ẋ₂ = M⁻¹(x₁)[sat(u) − C(x₁, x₂)x₂ − G(x₁) + d(t)]
Defining the state vector x = (x₁ᵀ, x₂ᵀ)ᵀ, the Itô stochastic integral equation of the mechanical arm can be obtained as:

dx₁ = x₂ dt, dx₂ = M⁻¹(x₁)[sat(u) − C(x₁, x₂)x₂ − G(x₁)] dt + M⁻¹(x₁)σ dW

wherein σσᵀ = 2πS, S is the power spectral density of the white noise ξ(t), σ is a positive definite matrix, and W is an n-dimensional standard Wiener process.
Control algorithm:
Defining the tracking error signals z₁ = x₁ − y_r and z₂ = x₂ − α_f, representing the difference between the virtual control signal and the system state of the robot, wherein y_r is a reference signal whose first derivative is assumed to exist, and α_f is the output of the finite-time command filter, whose input is the virtual control signal α.
Virtual signalController and control methodThe design is as follows:
wherein, AndIs a designed gain parameter.Is thatTo estimate(s) of (a)Is defined asOf (2), whereinRepresenting the two norms of the vectorThe updating process of (1) is designed as follows:
The compensated error signals v₁ and v₂ are defined by subtracting the outputs η₁ and η₂ of an error compensation mechanism from z₁ and z₂, wherein the error compensation mechanism compensates for the filtering error α_f − α, and an auxiliary system is defined to handle the difference Δu between the saturated control input and the commanded control input.
The finite-time command filter takes the virtual control signal α as input and outputs α_f and its first derivative. In fact, the first-order Levant differentiator adopted in the finite-time command filter can not only realize fast filtering of the virtual control signal but also guarantee stability in finite time. The finite-time command filter is here applied for the first time to the finite-time control of a random mechanical arm system, which solves the singularity problem encountered by the traditional finite-time backstepping method. In addition, an auxiliary system is added to the controller design to ensure that the actual control input can be designed to counter the effect of input saturation on control performance. The finite-time stability of this control system has been demonstrated.
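The first-order Levant differentiator mentioned above can be sketched numerically as follows. The gains and the test signal are common illustrative choices from the sliding-mode literature, not values from this application:

```python
import math

def levant_filter(alpha, t_end=5.0, dt=1e-3, lam0=1.5, lam1=1.1, L=2.0):
    """First-order Levant differentiator used as a finite-time command
    filter: z0 tracks the virtual control signal alpha(t) and z1
    estimates its first derivative. L bounds |alpha''(t)|."""
    z0, z1, t = 0.0, 0.0, 0.0
    while t < t_end:
        e = z0 - alpha(t)
        # Continuous correction term plus the derivative estimate.
        v0 = z1 - lam0 * math.sqrt(L) * math.sqrt(abs(e)) * math.copysign(1.0, e)
        z0 += v0 * dt                                 # z0' = v0
        z1 += -lam1 * L * math.copysign(1.0, e) * dt  # z1' = -lam1*L*sign(e)
        t += dt
    return z0, z1

# Filter a sinusoidal virtual control signal: z0 converges to sin(t)
# and z1 to its derivative cos(t) in finite time.
z0, z1 = levant_filter(math.sin)
```

Euler discretization introduces small chattering around the sliding surface, which is the usual trade-off of such filters; in the controller, the filter output α_f replaces the analytic derivative of α and thereby avoids the singularity of classical finite-time backstepping.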
In a specific implementation, reference may be made to fig. 6; fig. 6 is a schematic overall flow chart provided by the second embodiment of the robot control method of the present application, in which LVM denotes the preset multi-modal large model. Access rights to the preset multi-modal large model and a preset prompt are first obtained, wherein the prompt is used to optimize the control instruction input to the preset multi-modal large model. The audio information may be a control voice uttered by a user. After the audio information is converted into text language information, whether a wake-up word is contained is judged. If the text language information contains the wake-up word, pictures of the room are shot, and a pre-established 2D map and the shot pictures are sent to the LVM. After the LVM predicts a path, the robot moves along the path to a target point corresponding to the grabbing target. The mechanical arm then shoots an environment picture around the grabbing target and inputs it to the LVM, which predicts the coordinates of the grabbing target. The mechanical arm shoots a depth map corresponding to the coordinates, namely the point cloud map; GraspNet predicts the optimal grabbing pose, and MoveIt generates a grabbing path from the initial state to the optimal grabbing pose. Finally, the mechanical arm executes the grabbing action according to the grabbing path and places the grabbing target at a designated position.
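The flow of fig. 6 can be sketched as a plain orchestration function. Every component below (speech recognition, the LVM, GraspNet, the MoveIt-style planner) is replaced by a hypothetical stub, and the wake word and return values are illustrative only:

```python
# Hypothetical end-to-end flow: speech -> wake word check -> LVM path
# prediction -> GraspNet pose -> planner path. All stubs are illustrative.

WAKE_WORD = "robot"  # illustrative wake word

def speech_to_text(audio):
    return audio.lower()  # stand-in for a real ASR module

def lvm_predict_path(map_2d, photos, goal):
    return ["hall", "table"]  # stand-in for the multi-modal large model

def graspnet_predict_pose(point_cloud):
    return {"xyz": (0.4, 0.1, 0.2), "rpy": (0.0, 1.57, 0.0)}  # stand-in

def plan_grasp_path(start_pose, grasp_pose):
    return [start_pose, grasp_pose]  # stand-in for a MoveIt-style planner

def handle_command(audio, map_2d, photos, point_cloud):
    text = speech_to_text(audio)
    if WAKE_WORD not in text:
        return None  # no wake word: ignore the utterance
    route = lvm_predict_path(map_2d, photos, goal=text)
    pose = graspnet_predict_pose(point_cloud)
    return {"route": route, "grasp_path": plan_grasp_path("home", pose["xyz"])}

result = handle_command("Robot, fetch the cup", "map", [], "cloud")
```

The point of the sketch is the control flow, not the components: the wake-word gate short-circuits the pipeline, and the grasp path is only planned after both the navigation route and the grasp pose are available.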
In this embodiment, the position information of the grabbing target to be grabbed in the target environment information is determined according to the target environment information and the grabbing target information; the position information and a point cloud image corresponding to the position information are sent to the preset GraspNet model to obtain the grabbing pose information output by the preset GraspNet model; and the grabbing path is determined according to the grabbing pose information. Because the grabbing pose information is determined by the preset GraspNet model and the grabbing path is then derived from that pose information, the grabbing success rate can be improved.
It should be noted that the foregoing examples are only for understanding the present application, and are not intended to limit the control method of the robot of the present application, and that many simple variations based on the technical concept are within the scope of the present application.
The present application also provides a robot control device, referring to fig. 7, the robot control device includes:
The receiving module 10 is used for determining grabbing target information according to the control instruction when the control instruction is received;
A moving module 20, configured to move to a target position according to the capturing target information, and collect target environment information;
a capture path determination module 30 for determining a capture path based on the target environment information and the capture target information;
and the grabbing module 40 is configured to grab the grabbing target corresponding to the control instruction according to the grabbing path.
When a control instruction is received, this embodiment determines grabbing target information according to the control instruction, moves to a target position according to the grabbing target information, collects target environment information, determines a grabbing path based on the target environment information and the grabbing target information, and grabs the grabbing target corresponding to the control instruction according to the grabbing path. This realizes more general and autonomous machine intelligence and improves the grabbing efficiency of the robot.
The robot control device provided by the application adopts the robot control method of the above embodiment and can therefore solve the technical problem that an existing embodied intelligent mobile manipulation robot has low execution efficiency when executing a grabbing task. Compared with the prior art, the beneficial effects of the robot control device provided by the application are the same as those of the robot control method provided by the above embodiment, and the other technical features of the robot control device are the same as those disclosed by the method of the above embodiment, which are not described in detail herein.
The application provides a robot control device which comprises at least one processor and a memory in communication connection with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the robot control method in the first embodiment.
Referring now to fig. 8, a schematic diagram of a robot control device suitable for implementing embodiments of the present application is shown. The robot control device in the embodiments of the present application may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PADs (tablet computers), PMPs (Portable Media Players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The robot control device shown in fig. 8 is only an example and should not impose any limitation on the functions and the scope of use of the embodiments of the present application.
As shown in fig. 8, the robot control device may include a processing device 1001 (e.g., a central processing unit, a graphics processor, etc.), which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage device 1003 into a random access memory (RAM) 1004. The RAM 1004 also stores various programs and data necessary for the operation of the robot control device. The processing device 1001, the ROM 1002, and the RAM 1004 are connected to each other by a bus 1005. An input/output (I/O) interface 1006 is also connected to the bus. In general, the following may be connected to the I/O interface 1006: an input device 1007 such as a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, or gyroscope; an output device 1008 including a liquid crystal display (LCD), speaker, vibrator, etc.; a storage device 1003 including a magnetic tape, hard disk, etc.; and a communication device 1009. The communication device 1009 may allow the robot control device to communicate wirelessly or by wire with other devices to exchange data. While the figure shows a robot control device having various systems, it should be understood that not all of the illustrated systems are required to be implemented or provided; more or fewer systems may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through a communication device, or installed from the storage device 1003, or installed from the ROM 1002. The above-described functions defined in the method of the disclosed embodiment of the application are performed when the computer program is executed by the processing device 1001.
The robot control device provided by the application adopts the robot control method of the above embodiment and can therefore solve the technical problem that an existing embodied intelligent mobile manipulation robot has low execution efficiency when executing a grabbing task. Compared with the prior art, the beneficial effects of the robot control device provided by the application are the same as those of the robot control method provided by the above embodiment, and the other technical features of the robot control device are the same as those disclosed by the method of the previous embodiment, which are not described in detail herein.
It is to be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
The present application provides a computer-readable storage medium having computer-readable program instructions (i.e., a computer program) stored thereon for performing the robot control method in the above-described embodiments.
The computer-readable storage medium provided by the present application may be, for example, a USB flash drive, and more generally may be, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system or device, or any combination of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to electrical wiring, fiber-optic cable, radio frequency (RF), or any suitable combination of the foregoing.
The above-mentioned computer-readable storage medium may be contained in the robot control apparatus or may exist alone without being incorporated in the robot control apparatus.
The computer-readable storage medium carries one or more programs that, when executed by the robot control device, cause the robot control device to perform the robot control method described above.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present application may be implemented in software or in hardware. In some cases, the name of a module does not constitute a limitation of the module itself.
The readable storage medium provided by the application is a computer-readable storage medium storing computer-readable program instructions (i.e., a computer program) for executing the robot control method described above, and can solve the technical problem that an existing embodied intelligent mobile manipulation robot has low execution efficiency when executing a grabbing task. Compared with the prior art, the beneficial effects of the computer-readable storage medium provided by the application are the same as those of the robot control method provided by the above embodiment, which are not described in detail herein.
The application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of a robot control method as described above.
The computer program product provided by the application can solve the technical problem that an existing embodied intelligent mobile manipulation robot has low execution efficiency when executing a grabbing task. Compared with the prior art, the beneficial effects of the computer program product provided by the application are the same as those of the robot control method provided by the above embodiment, which are not described herein.
The foregoing description is only a partial embodiment of the present application, and is not intended to limit the scope of the present application, and all the equivalent structural changes made by the description and the accompanying drawings under the technical concept of the present application, or the direct/indirect application in other related technical fields are included in the scope of the present application.

Claims (10)

1. A robot control method, characterized in that the robot control method comprises the following steps:
when a control instruction is received, determining grabbing target information according to the control instruction;
moving to a target position according to the grabbing target information, and collecting target environment information;
determining a grabbing path based on the target environment information and the grabbing target information; and
grabbing a grabbing target corresponding to the control instruction according to the grabbing path.
2. The robot control method according to claim 1, characterized in that the step of determining grabbing target information according to the control instruction when a control instruction is received comprises:
when a control instruction is received, determining user demand information according to the control instruction;
collecting a surrounding environment image; and
inputting the user demand information and the surrounding environment image into a preset multi-modal large model to obtain grabbing target information output by the preset multi-modal large model, wherein the grabbing target information comprises a relative map position and robot posture coordinates of the grabbing target corresponding to the control instruction.
3. The robot control method according to claim 2, characterized in that the step of determining user demand information according to the control instruction when a control instruction is received comprises:
when a control instruction is received, performing text conversion on the control instruction to obtain text information;
judging whether a target wake-up word is detected according to the text information; and
when the target wake-up word is detected, determining user demand information according to the text information.
4. The robot control method according to claim 1, characterized in that the step of determining a grabbing path based on the target environment information and the grabbing target information comprises:
determining position information of the grabbing target to be grabbed in the target environment information according to the target environment information and the grabbing target information;
sending the position information and a point cloud image corresponding to the position information to a preset GraspNet model to obtain grabbing pose information output by the preset GraspNet model; and
determining a grabbing path according to the grabbing pose information.
5. The robot control method according to claim 4, characterized in that the step of grabbing the grabbing target corresponding to the control instruction according to the grabbing path comprises:
constructing a Lagrangian dynamics model, wherein the Lagrangian dynamics model is used for predicting the stability of the robot when grabbing through the grabbing path;
inputting the grabbing path into the Lagrangian dynamics model, the Lagrangian dynamics model simulating the stability of the robot when grabbing and outputting a prediction result; and
grabbing the grabbing target corresponding to the control instruction according to the prediction result.
6. The robot control method according to any one of claims 1 to 5, characterized in that after the step of grabbing the grabbing target corresponding to the control instruction according to the grabbing path, the method further comprises:
determining a target placement position according to the grabbing target information; and
placing the grabbing target at the target placement position.
7. A robot control device, characterized in that the robot control device comprises:
a receiving module, configured to determine grabbing target information according to a control instruction when the control instruction is received;
a moving module, configured to move to a target position according to the grabbing target information and collect target environment information;
a grabbing path determination module, configured to determine a grabbing path based on the target environment information and the grabbing target information; and
a grabbing module, configured to grab a grabbing target corresponding to the control instruction according to the grabbing path.
8. A robot control apparatus, characterized in that the apparatus comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program is configured to implement the steps of the robot control method according to any one of claims 1 to 6.
9. A storage medium, characterized in that the storage medium is a computer-readable storage medium storing a computer program, and when the computer program is executed by a processor, the steps of the robot control method according to any one of claims 1 to 6 are implemented.
10. A computer program product, characterized in that the computer program product comprises a computer program, and when the computer program is executed by a processor, the steps of the robot control method according to any one of claims 1 to 6 are implemented.
CN202411528544.5A 2024-10-30 2024-10-30 Robot control method, device, equipment, storage medium and product Active CN119105383B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411528544.5A CN119105383B (en) 2024-10-30 2024-10-30 Robot control method, device, equipment, storage medium and product


Publications (2)

Publication Number Publication Date
CN119105383A true CN119105383A (en) 2024-12-10
CN119105383B CN119105383B (en) 2025-03-07

Family

ID=93711870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411528544.5A Active CN119105383B (en) 2024-10-30 2024-10-30 Robot control method, device, equipment, storage medium and product

Country Status (1)

Country Link
CN (1) CN119105383B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110193833A (en) * 2019-06-27 2019-09-03 青岛大学 The adaptive finite time command filtering backstepping control method of Multi-arm robots
CN113715024A (en) * 2021-09-03 2021-11-30 上海电机学院 Position tracking control method of multi-degree-of-freedom upper limb rehabilitation robot
CN114102585A (en) * 2021-11-16 2022-03-01 北京洛必德科技有限公司 Article grabbing planning method and system
CN115578460A (en) * 2022-11-10 2023-01-06 湖南大学 Robot Grasping Method and System Based on Multimodal Feature Extraction and Dense Prediction
CN118527372A (en) * 2024-06-12 2024-08-23 苏州元脑智能科技有限公司 Material sorting system
CN118721192A (en) * 2024-06-27 2024-10-01 华中师范大学 Robot grasping method and device combined with vision and language instruction guidance
CN118789548A (en) * 2024-08-08 2024-10-18 北京深谋科技有限公司 An intelligent grasping method and system for a home service robot based on RGB-D vision guidance


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG, Lei et al., "Adaptive backstepping sliding-mode control of a manipulator with input and output constraints", vol. 38, no. 6, 17 June 2024 (2024-06-17), pages 1-9 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120054833A (en) * 2025-04-28 2025-05-30 上海建科检验有限公司 Coating film forming system and method based on robot, electronic equipment and storage medium
CN120054833B (en) * 2025-04-28 2025-07-18 上海建科检验有限公司 Coating film forming system and method based on robot, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN119105383B (en) 2025-03-07

Similar Documents

Publication Publication Date Title
CN114502335B (en) Method and system for trajectory optimization for non-linear robotic systems with geometric constraints
CN109829947B (en) Pose determination method, tray loading method, device, medium, and electronic apparatus
WO2022105395A1 (en) Data processing method, apparatus, and system, computer device, and non-transitory storage medium
CN119105383B (en) Robot control method, device, equipment, storage medium and product
CN107479368A (en) A kind of method and system of the training unmanned aerial vehicle (UAV) control model based on artificial intelligence
CN110795523A (en) Vehicle positioning method and device and intelligent vehicle
CN114488848A (en) Autonomous UAV flight system and simulation experiment platform for indoor architectural space
CN119319568B (en) Robotic arm control method, device, equipment and storage medium
CN115648232B (en) Mechanical arm control method and device, electronic equipment and readable storage medium
CN114740854A (en) Robot obstacle avoidance control method and device
CN119025850A (en) Multimodal environmental perception and control methods, systems, media and products
US11262887B2 (en) Methods and systems for assigning force vectors to robotic tasks
EP3115926A1 (en) Method for control using recognition of two-hand gestures
WO2022091787A1 (en) Communication system, robot, and storage medium
CN113778078B (en) Positioning information generation method, device, electronic device and computer readable medium
Tang et al. Real-time robot localization, vision, and speech recognition on Nvidia Jetson TX1
KR102685532B1 (en) Method of managing muti tasks and electronic device therefor
WO2024082558A1 (en) Electromagnetic-positioning-based following method and apparatus for mobile robot, and readable medium
WO2022222532A1 (en) Method and apparatus for establishing three-dimensional map, and electronic device and computer-readable storage medium
Adiprawita et al. Service oriented architecture in robotic as a platform for cloud robotic (Case study: human gesture based teleoperation for upper part of humanoid robot)
CN113093716B (en) Motion trail planning method, device, equipment and storage medium
CN114661064A (en) Unmanned aerial vehicle flight test method, system, equipment and readable storage medium
CN114155437A (en) Elevator taking control method and device, electronic equipment and storage medium
CN117826641B (en) Simulation evaluation system and method of aerial working robot and electronic equipment
CN118238137A (en) A method and device for planning the grasping posture of a mobile robot arm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant