
WO2018137445A1 - ROS-based mechanical arm grabbing method and system - Google Patents

ROS-based mechanical arm grabbing method and system

Info

Publication number
WO2018137445A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
robot arm
pose information
positioning mark
spatial pose
Application number
PCT/CN2017/117168
Other languages
French (fr)
Chinese (zh)
Inventor
张光肖
Original Assignee
南京阿凡达机器人科技有限公司
Application filed by 南京阿凡达机器人科技有限公司
Publication of WO2018137445A1


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697: Vision controlled systems
    • B25J 9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J 9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J 9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1664: Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Definitions

  • The invention belongs to the field of robotic arm control and motion planning, and in particular relates to a robot arm grasping method and system based on the ROS system.
  • The robotic arm is one of the most widely used automation devices in the field of robotics.
  • Multi-degree-of-freedom manipulators play an increasingly important role in many fields such as machine building, automotive, semiconductor, medical, and home services.
  • Motion control of the robot arm has always been a hot research topic.
  • The main application scenarios of the robot arm are described below.
  • Machine vision is a branch of artificial intelligence. In short, machine vision uses a camera instead of the human eye to judge and analyze the surrounding environment and, combined with certain algorithms, to achieve intelligent decision making. It is a comprehensive technology that includes image processing, mechanical engineering, control, electric light source illumination, optical imaging, sensors, analog and digital video technology, and computer hardware and software. Machine vision is divided into several types, such as monocular, binocular, and 3D vision. Its introduction has the following advantages:
  • Machine vision is more reliable than the human eye. Machine vision continuously captures images and works continuously without visual fatigue.
  • Machine vision has higher precision. With certain processing algorithms, machine vision can achieve accurate measurement and error checking, and is conducive to data recording and integration.
  • Machine vision can adapt to complex environments. In situations that are not suitable for manual work, machine vision can take over.
  • ROS (Robot Operating System) is an open source robot operating system released by Willow Garage in 2010. It adopts a distributed organizational structure, which can greatly improve code reusability and the adaptability of complex robot systems.
  • the ROS system has the following main features:
  • Point-to-point distributed design: the peer-to-peer design of ROS, together with mechanisms such as services and node managers, can distribute the real-time computational load of functions such as computer vision and speech recognition, and can adapt to the challenges posed by multiple robots.
  • Multi-language support: the ROS system supports programming languages such as C++, Python, and LISP, as well as interfaces to other programming languages.
  • Rich software packages: the ROS system integrates a large number of software packages, which can quickly realize the environment configuration for various robot applications, such as robot arm motion planning, mobile robot navigation, and robot SLAM.
  • The present invention aims to propose a method and system for grasping with a robotic arm based on the ROS system, which can effectively solve the problems of poor environmental adaptability of robotic arms and the high difficulty of their development and use.
  • The present invention provides a complete solution for visual access, target detection, image processing, and robot arm motion planning.
  • A robotic arm grabbing method based on the ROS system, comprising:
  • Step 2: The host computer obtains, through the camera, an image of the object to be grasped containing the positioning mark;
  • Step 3: The host computer processes the acquired image under the ROS system and obtains the spatial pose information of the object to be grasped in the robot arm coordinate system;
  • Step 4: The host computer performs motion planning on the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, and obtains the corresponding motion information queue;
  • Step 5: The host computer transmits the obtained motion information queue of the robot arm to the lower computer;
  • Step 6: The lower computer drives the robot arm according to the motion information queue to perform a grab operation along the corresponding path.
  • Before step 1, in which the host computer configures the camera's use environment under the ROS system, the method further includes: Step 0: The host computer performs positioning mark training on the camera.
  • The process of performing the positioning mark training on the camera by the host computer includes either of the following steps: Step 01: The host computer trains the positioning mark on the camera based on the ARToolKit positioning mark recognition algorithm; or Step 02: The host computer trains the positioning mark on the camera based on the OpenCV_ArUco positioning mark recognition algorithm.
  • The method further includes: Step 11: The host computer configures the camera node to drive the camera under the ROS system; Step 12: The host computer calibrates the camera under the ROS system and saves the correction data; Step 13: The host computer selects a positioning mark recognition algorithm.
  • The host computer processes the acquired image under the ROS system; the specific process of obtaining the spatial pose information, in the robot arm coordinate system, of the object to be grasped bearing the positioning mark is: Step 31: Search the image acquired by the camera for the positioning mark with the highest matching degree with the preset positioning mark; Step 32: Locate the found positioning mark; Step 33: Obtain, according to the located positioning mark, the spatial pose information of the object to be grasped bearing the positioning mark in the camera coordinate system; Step 34: Convert the spatial pose information of the object to be grasped in the camera coordinate system according to the preset conversion matrix between the camera coordinate system and the robot arm coordinate system, obtaining the spatial pose information of the object to be grasped in the robot arm coordinate system.
  • In step 4, the host computer performs motion planning on the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm.
  • The specific process of obtaining the corresponding motion information queue is: Step 41: The host computer writes a robot arm model description file of the robot arm under the ROS system; Step 42: The host computer models the robot arm according to the robot arm model description file; Step 43: After modeling the robot arm, the host computer performs motion planning on the robot arm according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue.
  • The robot arm grabbing method based on the ROS system further includes: Step 7: While the robot arm is driven to perform the grabbing operation along the corresponding path, the lower computer returns the real-time spatial pose information of the robot arm to the host computer; Step 8: The host computer updates the spatial pose information of the robot arm with the returned real-time spatial pose information.
  • The present disclosure also provides a robotic arm grabbing system based on the ROS system, comprising a host computer, a lower computer and a camera, wherein the host computer is communicably connected to the lower computer and the camera. The camera is used to acquire, under the control of the host computer, an image of the object to be grasped containing the positioning mark. The host computer includes: an image processing module, configured to process the acquired image under the ROS system to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system; a motion planning module, configured to perform motion planning on the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue; and a message transmission module, configured to transmit the obtained motion information queue of the robot arm to the lower computer. The lower computer includes a motion execution module, configured to drive the robot arm according to the motion information queue to perform a grab operation along the corresponding path.
  • the host computer further includes: a camera training module, configured to perform positioning mark training on the camera.
  • The camera training module is configured to perform positioning mark training on the camera, including: training the positioning mark on the camera based on the ARToolKit positioning mark recognition algorithm; or training the positioning mark on the camera based on the OpenCV_ArUco positioning mark recognition algorithm.
  • The host computer further includes a camera configuration module, configured to configure the camera node to drive the camera under the ROS system; to calibrate the camera under the ROS system and save the correction data; and to select the positioning mark recognition algorithm.
  • The image processing module is configured to process the acquired image under the ROS system and obtain the spatial pose information of the object to be grasped in the robot arm coordinate system, which specifically includes: searching the image acquired by the camera for the positioning mark with the highest matching degree with the preset positioning mark; locating the found positioning mark; obtaining, according to the located positioning mark, the spatial pose information of the object to be grasped bearing the positioning mark in the camera coordinate system; and converting, according to the preset conversion matrix between the camera coordinate system and the robot arm coordinate system, the spatial pose information of the object to be grasped in the camera coordinate system, obtaining the spatial pose information of the object to be grasped in the robot arm coordinate system.
  • The motion planning module is configured to perform motion planning on the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm.
  • Obtaining the corresponding motion information queue specifically includes: writing a robot arm model description file of the robot arm under the ROS system; modeling the robot arm according to the robot arm model description file; and, after the robot arm is modeled, performing motion planning on the robot arm according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue.
  • The lower computer further includes an information return module, configured to return the real-time spatial pose information of the robot arm to the host computer while the robot arm is driven to perform the grasping operation along the corresponding path.
  • the motion planning module is further configured to update the spatial pose information of the robot arm with the returned real-time spatial pose information.
  • The ROS system-based robotic arm grasping method and system of the present invention have significant advantages. First, by introducing machine vision as the core detection component of the robot arm, the technical solution of the present disclosure can greatly improve the robot arm's environmental perception and intelligent decision-making capability.
  • Second, the solution adopts a distributed system framework in which the host computer and the lower computer are separated, which can effectively utilize the host computer's high computing power and image processing capability.
  • Third, the solution proposed by the present disclosure is based on the ROS operating system and makes full use of the rich software packages of the ROS system, realizing rapid configuration of robot arm motion planning and greatly lowering the threshold of robot arm control.
  • Finally, the overall solution proposed by the present disclosure makes the deployment of robot arms easy, can be conveniently extended to a single host computer working with multiple robot arms, reduces the cost of using robot arms, and has wide application prospects.
  • FIG. 1 is a schematic diagram of an implementation environment of a robot arm grabbing method based on a ROS system according to the present disclosure
  • FIG. 2 is a working flow chart of a robot arm grabbing method based on a ROS system according to the present disclosure
  • FIG. 3 is a flow chart showing the operation of the host computer in performing image processing under the ROS system according to the present disclosure;
  • FIG. 4 is a flow chart showing the operation of the host computer in performing robot arm motion planning under the ROS system according to the present disclosure;
  • FIG. 5 is a flow chart of the communication between the host computer and the lower computer according to the present disclosure;
  • FIG. 6 is a schematic structural view of an embodiment of a robot arm grabbing system based on a ROS system according to the present disclosure
  • FIG. 7 is a schematic structural view of another embodiment of a robot arm grabbing system based on a ROS system according to the present disclosure.
  • For a robot arm grabbing method based on the ROS system, the implementation environment includes a host computer, a lower computer, a camera, and a communication environment.
  • The host computer acts as the main control body, the lower computer accepts commands and performs grasping, and together they complete the robot arm grabbing task.
  • the implementation environment of this embodiment has the following components:
  • Camera: for example, a USB camera.
  • The camera is placed above or obliquely above the object to be grasped, preferably with a clear and unobstructed shooting angle, and the coordinate system in which the camera is located must be made explicit (coordinate system 1 shown in Fig. 1, the camera coordinate system).
  • the host computer needs to be equipped with the ROS operating system (based on Linux), which is the "brain" of the whole system. Its main functions are: driving the camera to complete image acquisition and transmission, image processing, motion planning, and motion information queue transmission.
  • The lower computer refers to the drive-and-control part of the robot arm device. Its main functions are: receiving the motion information queue, driving the robot arm, sensing the real-time spatial pose information of the robot arm, and transmitting that real-time spatial pose information.
  • The robot arm is the execution part of the robot arm device. The robot arm is required to have more than five degrees of freedom (to ensure that its working space is large enough for grasping operations based on visual positioning) and an end effector (for example, a suction cup or end clamp); different robot arm configurations can be accommodated when the robot arm is modeled.
  • The position of the robot arm must be determined, and modeling must be based on that position (coordinate system 2 shown in Figure 1); that is, the robot arm coordinate system and the spatial pose information of the robot arm in that coordinate system are required.
  • Socket communication requires a wireless network in the implementation environment, with the host computer and the lower computer communicating on the same network segment.
  • Positioning marks refer to identification patterns with specific shape requirements for visual recognition and positioning. For different algorithms, the identification patterns are different. It should be noted that when placing an object to be grasped containing a positioning mark, the positioning mark needs to be located within the visual range of the camera to ensure that the camera can capture an image of the object to be grasped containing the positioning mark.
  • A robot arm grabbing method based on the ROS system includes:
  • Step 2: The host computer acquires, by using the camera, an image of the object to be grasped including the positioning mark;
  • Step 3: The host computer processes the acquired image under the ROS system and obtains the spatial pose information of the object to be grasped in the robot arm coordinate system;
  • Step 4: The host computer performs motion planning on the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, and obtains the corresponding motion information queue;
  • Step 5: The host computer transmits the obtained motion information queue of the robot arm to the lower computer;
  • Step 6: The lower computer drives the robot arm according to the motion information queue to perform a grab operation along the corresponding path.
  • The host computer and the camera are communicatively connected.
  • A USB camera can be connected directly to the host computer's USB interface.
  • Since the host computer runs the ROS system, the camera's environment must be configured under the ROS system to ensure the camera's normal use.
  • The method further includes: Step 1: The host computer configures the use environment of the camera under the ROS system.
  • The host computer can control the camera to capture an image, where "image" refers to a picture or a video.
  • The host computer controls the camera to capture an image of the object to be grasped bearing the positioning mark within its visual range.
  • The image captured by the camera is transmitted to the host computer, which processes it to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system.
  • The spatial pose information refers to the spatial position and the spatial attitude.
  • Taking a cup as an example, the spatial position refers to the coordinates of the cup in the robot arm coordinate system,
  • and the spatial attitude refers to whether the cup is placed vertically or horizontally.
  • The spatial position is used, during motion planning, to plan the movement of the robot arm to the cup.
  • The spatial attitude is used to plan whether the end effector of the robot arm, upon reaching the position of the cup, should be oriented horizontally or vertically.
  • the spatial pose information of the object to be grasped in the robot arm coordinate system is transmitted to the MoveIt initialization program module for motion planning.
  • The initial value of the spatial pose information of the robot arm is set in the ROS system from the beginning, so motion planning can be performed directly from the robot arm's spatial pose set in the ROS system (equivalent to the starting point) to the processed spatial pose of the object to be grasped in the robot arm coordinate system (equivalent to the end point), obtaining a motion information queue.
  • The Socket communication protocol is used for communication between the host computer and the lower computer, and the host computer sends the motion information queue to the lower computer through the Socket communication protocol.
  • After receiving the motion information queue, the lower computer parses it and, according to the parsed motion information queue, moves the robot arm along the corresponding path to perform the grab operation.
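As an illustration of this Socket exchange, the following sketch serializes a motion information queue as JSON, sends it over a loopback TCP connection, and parses it on the receiving side. The queue format, port handling, and JSON encoding are illustrative assumptions, not the patent's actual protocol:

```python
import json
import socket
import threading

# Hypothetical motion information queue: a list of joint-angle waypoints
# (field names and values are illustrative, not from the patent).
motion_queue = [
    {"joints": [0.0, 0.3, -0.5, 0.8, 0.1]},
    {"joints": [0.1, 0.4, -0.4, 0.9, 0.0]},
]

def lower_computer(server_sock, result):
    """Receive the motion queue bytes, then parse them as JSON."""
    conn, _ = server_sock.accept()
    with conn:
        data = b""
        while True:
            chunk = conn.recv(4096)
            if not chunk:          # sender closed: end of stream
                break
            data += chunk
    result.extend(json.loads(data.decode("utf-8")))

# "Lower computer" listens on a loopback port (port 0 = OS-assigned).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
received = []
t = threading.Thread(target=lower_computer, args=(server, received))
t.start()

# "Host computer" connects, transmits the queue, and closes the socket
# so the receiver sees end-of-stream.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
client.sendall(json.dumps(motion_queue).encode("utf-8"))
client.close()
t.join()
server.close()

print(received == motion_queue)  # the lower computer parsed the same queue
```

In a real deployment the two endpoints would be separate machines on the same network segment, as the implementation environment above requires.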
  • The camera captures an image of the object to be grasped in order to locate the object's spatial pose information (which needs to be in the same coordinate system as the robot arm's spatial pose information); the motion is then planned, thereby driving the robot arm to perform the grasping operation on the object.
  • Combining machine vision (equivalent to the camera) with the robotic arm is equivalent to adding an intelligent "eye" to the robotic arm, which can greatly increase the environmental sensing capability and intelligent decision-making ability of the mechanical arm, thereby further expanding the application field of the mechanical arm.
  • the present disclosure is based on the development of the ROS system, and utilizes many characteristics of the ROS system to reduce the difficulty in realizing the motion planning of the robot arm and to lower the application threshold of the mechanical arm.
  • FIG. 2 provides a system operation flowchart of the ROS-system-based visual positioning and robot arm grasping implementation method.
  • the whole implementation process is divided into two parts: upper computer configuration and lower computer configuration.
  • the host computer configures the camera node under the ROS system to drive the camera.
  • That is, the camera driver is run in the ROS system of the host computer.
  • the driver node program used in this embodiment is usb_cam. This node will drive the camera and publish the image captured by the camera on the usb_cam/image_raw topic.
  • the host computer calibrates the camera under the ROS system and saves the correction data.
  • the camera is calibrated using the camera_calibration procedure of the ROS system and the correction data is saved.
  • This program is used to obtain the calibration data of the camera, that is, the intrinsic parameters, extrinsic parameters, and distortion coefficients, and the above data is saved as correction data.
  • The correction data obtained differs from camera to camera; it is used to rectify the pictures taken by the camera, yielding images with less distortion.
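The saved correction data can be understood through the standard pinhole-plus-distortion camera model. The sketch below applies radial/tangential (Brown model) distortion coefficients to a normalized image point and projects it with intrinsic parameters; all coefficient values are illustrative assumptions, not calibration output from the patent:

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply the Brown radial/tangential distortion model to a
    normalized image point (x, y); k1, k2 are radial and p1, p2
    tangential distortion coefficients."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

def project(x, y, fx, fy, cx, cy):
    """Project a (distorted) normalized point to pixel coordinates
    using the intrinsic parameters fx, fy, cx, cy."""
    return fx * x + cx, fy * y + cy

# With all distortion coefficients zero, the point is unchanged.
assert distort(0.1, -0.2, 0, 0, 0, 0) == (0.1, -0.2)

# Illustrative coefficients: a point away from the center shifts outward.
x_d, y_d = distort(0.5, 0.5, 0.1, 0.01, 0.0, 0.0)
u, v = project(x_d, y_d, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

Rectification inverts this model: using the saved coefficients, each pixel of the captured picture is mapped back toward its undistorted position.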
  • The host computer selects the (trained) positioning mark recognition algorithm.
  • There may be a variety of positioning mark recognition algorithms available. When the camera is to be used to control the robot arm for grasping, the positioning mark recognition algorithm to be used in the subsequent positioning process must be selected for the camera.
  • The positioning mark recognition algorithm may be an ARToolKit-based positioning mark recognition algorithm, an OpenCV_ArUco-based positioning mark recognition algorithm, or any other algorithm that can realize visual positioning and let the host computer control the robot arm to perform the grasping operation.
  • Before step 1, in which the host computer configures the camera's use environment under the ROS system, the method further includes: Step 0: The host computer performs positioning mark training on the camera.
  • The process of performing the positioning mark training on the camera by the host computer in step 0 is either of the following steps:
  • Step 01: The host computer trains the camera on the positioning mark based on an ARToolKit positioning mark recognition algorithm.
  • Step 02: The host computer trains the camera on the positioning mark based on an OpenCV_ArUco positioning mark recognition algorithm.
  • Training either one of the positioning mark recognition algorithms is enough to complete the subsequent identification work, in which case only that algorithm can be selected later; if both algorithms are trained, either one may be chosen.
  • the identification training methods of the two recognition algorithms proposed by the present disclosure are as follows:
  • For OpenCV_ArUco, the drawMarker() function is used to create the identification pattern (i.e., the positioning mark) used for training.
  • The host computer processes the acquired image under the ROS system to obtain the pose of the object to be grasped bearing the positioning mark.
  • The specific process of obtaining its spatial pose information in the robot arm coordinate system is:
  • Step 31: Search the image acquired by the camera for the positioning mark with the highest matching degree with the preset positioning mark;
  • Step 32: Locate the found positioning mark;
  • Step 33: Obtain, according to the located positioning mark, the spatial pose information of the object to be grasped bearing the positioning mark in the camera coordinate system;
  • Step 34: Convert the spatial pose information of the object to be grasped in the camera coordinate system according to the preset conversion matrix between the camera coordinate system and the robot arm coordinate system, obtaining the spatial pose information of the object to be grasped in the robot arm coordinate system.
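Step 34 can be sketched as a homogeneous-matrix multiplication. The transform below (camera mounted 0.5 m above the arm base, rotated 180 degrees about the x axis) is an illustrative assumption, not calibration data from the patent:

```python
def mat_vec(T, p):
    """Multiply a 4x4 homogeneous matrix by a point (x, y, z),
    returning the transformed (x, y, z)."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

# Illustrative camera-to-arm transform: the camera looks straight down
# from 0.5 m above the arm base (diag(1, -1, -1) is a 180-degree
# rotation about the x axis).
T_cam_to_arm = [
    [1.0,  0.0,  0.0, 0.0],
    [0.0, -1.0,  0.0, 0.0],
    [0.0,  0.0, -1.0, 0.5],
    [0.0,  0.0,  0.0, 1.0],
]

# Position of the object in the camera coordinate system (metres).
p_cam = (0.10, 0.05, 0.25)
p_arm = mat_vec(T_cam_to_arm, p_cam)
print(p_arm)  # (0.1, -0.05, 0.25) in the robot arm coordinate system
```

A full pose conversion would transform the orientation as well, by composing the rotation parts of the two frames.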
  • The program processing flow is: read the pre-imported identification map information (i.e., read the preset positioning mark); the (USB) camera acquires the real-time image; find the best-matching positioning mark and locate the positioning mark observed by the camera; and thereby obtain, from the located mark, the spatial pose information of the object to be grasped bearing the positioning mark in the camera coordinate system.
  • The above process refers to the positioning method for a still picture.
  • In another case, the object to be grasped bearing the positioning mark is captured by the camera as a slowly moving object on a conveyor line.
  • In that case, an ARToolKit frame-rate counter needs to be added, and the pose of the positioning mark in the camera coordinate system is estimated for each frame;
  • this is equivalent to splitting the multi-frame video into individual frames,
  • so that the spatial pose information of the object to be grasped in the camera coordinate system is obtained frame by frame.
  • Once the spatial pose information of the object to be grasped in a frame is obtained in the camera coordinate system, it is published in real time for subsequent processing.
  • In addition, with OpenGL the positioning mark is taken as the origin and a coordinate system is drawn on the mark's icon, so that the actual size, shape, and movement of the object to be grasped can be displayed on the monitor;
  • of course, the movement of the arm is also displayed, which gives a more intuitive and dynamic view of the arm's grasping.
  • the cv_bridge node is configured in the system to convert the sensor_msgs/Image type image data obtained by the camera under the ROS system into cv::Mat type image data recognizable by the OpenCV library.
  • The positioning mark with the highest matching degree with the preset positioning mark is found, and its spatial pose information in the camera coordinate system is obtained.
  • For video, which is itself a sequence of images,
  • the positioning mark with the highest matching degree with the preset positioning mark is searched frame by frame, and the spatial pose information of the located positioning mark in the camera coordinate system is extracted in turn.
  • The extracted spatial pose information of the object to be grasped in the camera coordinate system then serves as a parameter for motion planning.
  • When the OpenCV_ArUco positioning mark recognition algorithm is selected to identify the positioning mark, the image of the object to be grasped bearing the positioning mark acquired by the camera (after conversion to cv::Mat type image data) is binarized using the OTSU algorithm.
  • The algorithm then reads the value of the positioning mark in the perspective-transformed image and compares it with the value of the pre-trained positioning mark (i.e., the value of the preset positioning mark), thereby identifying the positioning mark.
  • The spatial position and attitude of the identified positioning mark are then estimated, giving the spatial pose information of the object to be grasped in the camera coordinate system.
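As a hedged illustration of the OTSU binarization step (in practice OpenCV's threshold function with the OTSU flag would be applied to the cv::Mat image), the following pure-Python sketch computes the threshold that maximizes between-class variance on a toy set of pixel values:

```python
def otsu_threshold(pixels):
    """Return the 0-255 threshold that maximizes between-class
    variance (Otsu's method) over a list of grayscale pixel values."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_bg = 0.0
    w_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]            # background class weight (<= t)
        if w_bg == 0:
            continue
        w_fg = total - w_bg        # foreground class weight (> t)
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A toy "image": dark marker pixels around 30, light background around 220.
pixels = [30, 32, 28, 31, 220, 218, 223, 221, 219, 222]
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]
```

The binarized image is what the marker reader inspects cell by cell to recover the mark's value.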
  • The spatial pose information of the object to be grasped in the camera coordinate system is then converted to obtain the spatial pose of the object to be grasped in the robot arm coordinate system.
  • For video, the spatial pose information of the object to be grasped in each camera frame needs to be converted.
  • A MoveIt interface program is written: using the API provided by the MoveIt module, the spatial pose information of the object to be grasped in the robot arm coordinate system is formatted as a quaternion and passed to the MoveIt initialization program module for motion planning.
  • That is, after the upper computer processes the images acquired by the camera, it must obtain, for each frame, the spatial pose information of the object to be grasped in the robot arm coordinate system and convert it into a form that the MoveIt initialization program module supports.
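ROS pose messages carry orientation as a quaternion (x, y, z, w). As an illustrative sketch of the "format into quaternion form" step, the standard roll-pitch-yaw to quaternion conversion is:

```python
import math

def rpy_to_quaternion(roll, pitch, yaw):
    """Convert roll/pitch/yaw (radians) to a quaternion (x, y, z, w),
    the orientation format used by pose messages in ROS."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (
        sr * cp * cy - cr * sp * sy,  # x
        cr * sp * cy + sr * cp * sy,  # y
        cr * cp * sy - sr * sp * cy,  # z
        cr * cp * cy + sr * sp * sy,  # w
    )

# No rotation maps to the unit quaternion.
print(rpy_to_quaternion(0.0, 0.0, 0.0))  # (0.0, 0.0, 0.0, 1.0)

# A 90-degree yaw about the z axis.
qx, qy, qz, qw = rpy_to_quaternion(0.0, 0.0, math.pi / 2)
```

The resulting four numbers, together with the x/y/z position, form the pose that is handed to the planner.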
  • FIG. 4 is a flow chart of the upper computer controlling the movement of the robot arm under the ROS system according to the present disclosure.
  • Step 4, in which the upper computer plans the motion of the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm and obtains the corresponding motion information queue, is further described as follows:
  • The upper computer writes a model description file of the robot arm under the ROS system using URDF (Unified Robot Description Format); this file is used for the subsequent modeling of the robot arm.
  • The host computer models the robot arm according to the robot arm model description file: the created robot arm description model is loaded with the MoveIt Setup Assistant tool under the ROS system.
  • The modeling steps are: collision detection setting; virtual joint setting (for example, the base of the robot arm, which anchors the robot arm coordinate system); arm planning-group setting (the kinematics solver is the KDL Kinematics Plugin); arm initial position (i.e., the initial value of the arm's spatial pose information); arm end-effector setting (for example, defining it as a suction cup, gripper, etc.); and passive joint setting (joints that have no drive and only move along with other joints). If the motion planning algorithm is not changed, the default motion planning library is OMPL (Open Motion Planning Library).
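For reference, the Setup Assistant records the chosen kinematics solver in a configuration file; a minimal illustrative fragment (the planning-group name `arm` and the numeric values are placeholders, not taken from the disclosure) might look like:

```yaml
# config/kinematics.yaml written by the MoveIt Setup Assistant (illustrative)
arm:
  kinematics_solver: kdl_kinematics_plugin/KDLKinematicsPlugin
  kinematics_solver_search_resolution: 0.005
  kinematics_solver_timeout: 0.05
```

The plugin name selects the KDL solver mentioned above; the resolution and timeout tune the inverse-kinematics search.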
  • After the robot arm is modeled, the host computer performs motion planning on it with the MoveIt initialization program module, according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtains the corresponding motion information queue, and publishes it according to the communication rules of the ROS system.
  • FIG. 5 is a flow chart showing the working of the host computer and the lower computer according to the present disclosure; this process is completed on a ROS node and is further explained as follows:
  • The message server program of the ROS system is initialized to read the motion information queue published by the MoveIt initialization program module;
  • the Socket communication node (TCP) is initialized; the motion planning information queue that has been read is placed in the send buffer and is sent to the lower computer when the upper and lower computers communicate;
  • the lower computer receives the motion planning information, parses it, and drives the robot arm to perform the grasping according to the planned action.
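The disclosure does not specify a wire format for the motion information queue. As one plausible sketch, the queue could be serialized to JSON on the upper computer and parsed back on the lower computer; the field names (`queue`, `joints`, `gripper`) are invented for illustration.

```python
import json

def encode_queue(waypoints):
    """Upper computer: serialize the planned waypoints for the send buffer."""
    return json.dumps({"queue": waypoints}).encode("utf-8")

def decode_queue(payload):
    """Lower computer: parse the received bytes back into waypoints."""
    return json.loads(payload.decode("utf-8"))["queue"]

# A hypothetical three-waypoint plan: approach, reach, close the gripper.
plan = [
    {"joints": [0.00, 0.50, 1.20, 0.00], "gripper": "open"},
    {"joints": [0.30, 0.65, 1.05, 0.10], "gripper": "open"},
    {"joints": [0.30, 0.65, 1.05, 0.10], "gripper": "closed"},
]
payload = encode_queue(plan)          # bytes placed in the Socket send buffer
restored = decode_queue(payload)      # lower computer recovers the same queue
print(restored == plan)               # True
```

In the real system these bytes would be written to and read from the TCP socket connecting the two machines.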
  • The method further includes, Step 7: while the robot arm is driven to perform the grasping operation along the corresponding path, the real-time spatial pose information of the robot arm is returned to the upper computer;
  • In practice, the robot arm may need to grasp several objects in succession on the pipeline, and its spatial pose information necessarily changes whenever a grasping operation is executed. Therefore, while the planned action is performed, the position sensors of the lower computer on the robot arm (such as angle sensors and encoders) transmit the actual spatial pose information of the robot arm to the upper computer through the Socket communication, so that the upper computer updates the robot arm's spatial pose information, which is then used in the motion planning for the next object to be grasped in the robot arm coordinate system.
  • For example, a number of objects to be grasped move slowly on the pipeline, and the camera has already been trained on the positioning marks (using the two positioning marker recognition algorithms described above).
  • The host computer configures the camera's use environment (that is, it drives the camera, reads the correction data saved in advance, and loads the positioning mark recognition algorithm the engineer has chosen); the host computer then controls the camera to start shooting the assembly line and publishes the captured images of the objects to be grasped as they are taken.
  • When the image processing unit inside the host computer reads the real-time image stream, it splits it frame by frame to identify the positioning marks.
  • For each frame, the spatial pose information of the frame's positioning mark in the robot arm coordinate system is published. The image processing by the MoveIt initialization program module and the configuration of the camera can be executed in parallel, so that, just like the positioning mark training of the camera, the robot arm can be modeled from the beginning; the system then monitors whether spatial pose information of the object to be grasped in the robot arm coordinate system has been published. Assuming the spatial pose information of the first frame of the object to be grasped in the robot arm coordinate system is available, motion planning is performed according to that first-frame spatial pose information and the spatial pose information of the robot arm (its initial value), and the corresponding motion information queue is obtained; the lower computer drives the robot arm to perform the grasping operation according to the motion information queue and uploads the robot arm's real-time spatial pose information to the upper computer;
  • the MoveIt initialization program module updates the spatial pose information of the robot arm according to the uploaded real-time spatial pose information, and then retrieves the spatial pose information of the next object to be grasped for the next round of planning.
  • Compared with the prior art, the present disclosure has significant advantages: its technical solution adopts a distributed design, which makes full use of the processing capability of the upper computer and facilitates a topology in which multiple robot arms cooperate;
  • the proposed vision-based object localization method is suitable for grasping different objects and places low requirements on the initial position of the object;
  • the robot arm motion planning method proposed by the present disclosure fully utilizes the characteristics of the ROS system, and its configuration is simple, convenient and practical;
  • the overall solution uses wireless communication and a flexible layout, and can be applied to different application scenarios.
  • A robotic arm grabbing system based on the ROS system includes: a host computer 10, a lower computer 30, and a camera 20, wherein the host computer 10 is communicatively coupled to the lower computer 30 and the camera 20.
  • The host computer can be a computer running ROS, and the lower computer refers to the drive control part of the robot arm device.
  • The lower computer and the host computer use Socket communication, and the camera needs to communicate with the host computer; for example, the camera can be connected to the host computer through a USB interface to realize communication between the two.
  • the camera 20 is configured to acquire an image of an object to be grasped including a positioning mark under the control of the upper computer;
  • the host computer 10 further includes:
  • the image processing module 12 is configured to process the acquired image under the ROS system to obtain spatial pose information of the object to be grasped in a robot arm coordinate system;
  • the motion planning module 13 is configured to perform motion planning on the mechanical arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the mechanical arm, and obtain corresponding Motion information queue;
  • a message passing module 14 is configured to transmit the obtained motion information queue of the robot arm to the lower computer;
  • the lower machine 30 includes:
  • the motion execution module 31 is configured to drive the robot arm, according to the motion information queue, to perform the grab operation along the corresponding path.
  • The host computer runs the ROS system.
  • the host computer 10 further includes: a camera configuration module 11 configured to configure a use environment of the camera under the ROS system.
  • the upper computer can control the camera to take an image, where an image refers to a picture or video image.
  • The upper computer controls the camera to capture an image of the object to be grasped, bearing the positioning mark, within its visual range.
  • The image captured by the camera is transmitted to the host computer, which processes it to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system.
  • The spatial pose information refers to the spatial position and the spatial attitude.
  • Taking a cup as an example, the spatial position refers to the coordinates of the cup in the robot arm coordinate system,
  • while the spatial attitude indicates whether the cup is placed vertically or horizontally.
  • The spatial position determines where the robot arm must move to reach the cup during motion planning,
  • while the spatial attitude determines whether the end effector of the robot arm should be horizontal or vertical when it reaches the position of the cup.
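The cup example can be captured in a small pose record combining position and attitude. The values below are made up for illustration; the quaternion (0, 0, 0, 1) denotes the upright (identity) orientation.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # spatial position in the robot arm coordinate system (metres)
    x: float
    y: float
    z: float
    # spatial attitude as a quaternion (x, y, z, w)
    qx: float
    qy: float
    qz: float
    qw: float

# A cup standing upright 0.4 m in front of the arm base.
upright_cup = Pose(x=0.4, y=0.0, z=0.1, qx=0.0, qy=0.0, qz=0.0, qw=1.0)
print(upright_cup.qw)  # 1.0
```

A cup lying on its side would differ only in the attitude fields, which is what tells the planner to approach with the end effector horizontal instead of vertical.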
  • the spatial pose information of the object to be grasped in the robot arm coordinate system is transmitted to the MoveIt initialization program module for motion planning.
  • The initial value of the spatial pose information of the robot arm is set in the ROS system from the start, so motion planning can be performed directly from the processed spatial pose information of the object to be grasped in the robot arm coordinate system (equivalent to the end point) and the spatial pose information of the robot arm set in the ROS system (equivalent to the starting point), yielding a motion information queue.
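The actual planning is delegated to the MoveIt/OMPL stack; purely to illustrate how a starting point and an end point yield a motion information queue, the toy sketch below interpolates linearly in joint space (the joint values are hypothetical, and a real planner would also avoid collisions).

```python
# Toy stand-in for the planner: linear joint-space interpolation between the
# arm's starting configuration and a configuration that reaches the object.

def plan_queue(start, goal, steps):
    """Return steps + 1 waypoints from start to goal, inclusive."""
    queue = []
    for i in range(steps + 1):
        t = i / steps
        queue.append([s + t * (g - s) for s, g in zip(start, goal)])
    return queue

start = [0.0, 0.0, 0.0]   # initial arm pose set in the ROS system
goal = [0.6, -0.4, 1.0]   # configuration reaching the object to be grasped
queue = plan_queue(start, goal, 4)
print(len(queue))         # 5
```

The first waypoint equals the starting point and the last equals the end point, exactly the two inputs named above.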
  • The Socket communication protocol is used for communication between the upper computer and the lower computer; the upper computer sends the motion information queue to the lower computer through the Socket communication protocol.
  • After receiving the motion information queue, the lower computer parses it and, according to the parsed motion information, drives the robot arm to move along the corresponding path and perform the grab operation.
  • An image of the object to be grasped is captured by the camera to locate the spatial pose information of the object (this information must be expressed in the same coordinate system as the spatial pose information of the robot arm), which then serves as input to the motion planning, thereby driving the robot arm to perform the grasping operation on the object.
  • Combining machine vision (i.e., the camera) with the robotic arm is equivalent to adding an intelligent "eye" to the arm, which can greatly increase the environmental sensing capability and intelligent decision-making ability of the robot arm, thereby further expanding its application field.
  • The present disclosure is developed on the basis of the ROS system and utilizes many of its characteristics, reducing the difficulty of implementing robot arm motion planning and lowering the application threshold of the robot arm.
  • the camera configuration module 11 is configured to configure the use environment of the camera under the ROS system, specifically: configuring a camera node to drive the camera under the ROS system; calibrating the camera under the ROS system and saving the correction data; and selecting a (trained) positioning marker recognition algorithm.
  • Driving the camera is performed in the ROS system of the host computer.
  • The driver node program used in this embodiment is usb_cam; this node drives the camera and publishes the images captured by the camera on the usb_cam/image_raw topic.
  • The camera is calibrated using the camera_calibration program of the ROS system, and the correction data is saved.
  • This program obtains the calibration data of the camera, i.e. the intrinsic parameters, extrinsic parameters, and distortion coefficients, which are saved as correction data.
  • The correction data obtained for different cameras will differ; the data is used to correct the pictures taken by the camera, yielding images with less distortion.
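On a typical ROS installation these two steps correspond to commands like the following; the device path and the checkerboard parameters are examples to be adapted, not values from the disclosure.

```shell
# Drive the camera: publishes images on the usb_cam/image_raw topic
rosrun usb_cam usb_cam_node _video_device:=/dev/video0

# Calibrate with a printed checkerboard (here 8x6 inner corners, 24 mm squares)
rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.024 \
    image:=/usb_cam/image_raw camera:=/usb_cam
```

The calibrator's "save" action writes the intrinsic, extrinsic and distortion data that the text above refers to as correction data.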
  • the positioning mark recognition algorithm may be an ARToolKit-based positioning mark recognition algorithm, an OpenCV_ArUco-based positioning mark recognition algorithm, etc., as long as it can realize visual positioning and allow the upper computer to control the robot arm to perform the grasping operation.
  • the host computer further includes: a camera training module 15 for performing positioning mark training on the camera.
  • the camera training module 15 is configured to perform positioning mark training on the camera, comprising: training the camera on the positioning mark based on an ARToolKit positioning mark recognition algorithm; or training the camera on the positioning mark based on an OpenCV_ArUco positioning mark recognition algorithm.
  • any type of positioning mark recognition algorithm may be used for training to facilitate subsequent recognition.
  • for the training processes of the two, please refer to the corresponding method embodiment; they are not repeated here.
  • the image processing module 12, as above, is configured to process the acquired image under the ROS system; obtaining the spatial pose information of the object to be grasped in the robot arm coordinate system specifically includes:
  • the processing is: read the pre-imported identification map information (i.e. the preset positioning mark), search the real-time image acquired by the (USB) camera for the positioning mark with the highest matching degree, and locate the positioning mark observed by the camera, thereby obtaining the spatial pose information of the object to be grasped, bearing the positioning marker, in the camera coordinate system.
  • The above process is the positioning method for a still picture.
  • In this embodiment, the object to be grasped bearing the positioning mark captured by the camera is moving slowly on the pipeline,
  • so an ARToolKit frame-frequency counter needs to be added to estimate the pose, in the camera coordinate system, of the positioning mark in each different frame;
  • this is equivalent to splitting the multi-frame image into individual frames, so that
  • the spatial pose information of the object to be grasped in the camera coordinate system is obtained for each frame.
  • Once the spatial pose information of the object to be grasped in a frame is obtained in the camera coordinate system, it is published in real time for subsequent processing.
  • In addition, with OpenGL the positioning mark can be taken as the origin and a coordinate system drawn on the icon of the positioning mark, so that the actual size, shape and movement of the object to be grasped can be displayed on the monitor screen;
  • the movement of the arm is of course also displayed, giving a more intuitive and dynamic view of the grasping by the arm.
  • the cv_bridge node is configured in the system to convert the sensor_msgs/Image type image data obtained by the camera under the ROS system into cv::Mat type image data recognizable by the OpenCV library.
  • The positioning mark that best matches the preset positioning mark is found in the image, and its spatial pose information in the camera coordinate system is obtained.
  • Because the acquired image is likewise a sequence of frames, the positioning mark with the highest matching degree with the preset positioning mark is searched for frame by frame, and the spatial pose information of each located positioning mark in the camera coordinate system is extracted in turn.
  • The extracted spatial pose information of the object to be grasped in the camera coordinate system serves as a parameter for the subsequent motion planning.
  • When the OpenCV_ArUco positioning mark recognition algorithm is selected to identify the positioning mark, the image of the object to be grasped with the positioning mark acquired by the camera (that is, the converted cv::Mat type image data) is first binarized with the OTSU algorithm; the algorithm then reads the value of the positioning mark in the perspective-transformed image and compares it with the value of the pre-trained positioning mark (i.e., the value of the preset positioning mark), thereby identifying the positioning mark.
  • The spatial position and posture of the identified positioning mark are then estimated, giving the spatial pose information of the object to be grasped in the camera coordinate system.
  • Using the preset camera-to-robot-arm coordinate transformation matrix, the spatial pose information of the object to be grasped in the camera coordinate system is converted to obtain its spatial pose in the robot arm coordinate system.
  • This conversion must be applied to the pose of the object to be grasped extracted from every camera frame.
  • A MoveIt interface program is written: using the API provided by the MoveIt module, the spatial pose information of the object to be grasped in the robot arm coordinate system is formatted as a quaternion and passed to the MoveIt initialization program module for motion planning.
  • That is, after the upper computer processes the images acquired by the camera, it must obtain, for each frame, the spatial pose information of the object to be grasped in the robot arm coordinate system and convert it into a form that the MoveIt initialization program module supports.
  • the motion planning module 13 is configured to perform motion planning on the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the arm; obtaining the corresponding motion information queue includes:
  • the robot arm description model created with the MoveIt initialization toolkit of the ROS system is used to model the robot arm.
  • After the robot arm is modeled, the host computer performs motion planning on it with the MoveIt initialization program module, according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtains the corresponding motion information queue, and publishes it according to the communication rules of the ROS system.
  • Socket communication is used to enable the upper computer and the lower computer to communicate.
  • For the specific communication process, refer to the corresponding method embodiment; it is not repeated here.
  • the lower computer further includes:
  • the information returning module 32 is configured to return the real-time spatial pose information of the robot arm to the upper computer while the robot arm is driven to perform the grasping operation along the corresponding path;
  • the motion planning module is further configured to update the spatial pose information of the robot arm with the returned real-time spatial pose information.
  • In practice, the robot arm may need to grasp several objects in succession on the pipeline, and its spatial pose information necessarily changes whenever a grasping operation is executed. Therefore, while the planned action is performed, the position sensors of the lower computer on the robot arm (such as angle sensors and encoders) transmit the actual spatial pose information of the robot arm to the upper computer through the Socket communication, so that the upper computer updates the robot arm's spatial pose information, which is then used in the motion planning for the next object to be grasped in the robot arm coordinate system.


Abstract

A ROS-based mechanical arm grabbing method and system. An implementation process comprises: configuring a use environment of a camera (20) on an upper computer (10); disposing the camera (20) above or laterally above an object to be grabbed and obtaining an image of the object to be grabbed that contains a positioning mark; inputting the image to the upper computer (10); reading the image from the camera (20) and performing data processing with a particular algorithm to obtain spatial pose information of the object to be grabbed in mechanical arm coordinates; and producing a motion information queue and sending it to a lower computer (30). The lower computer (30) receives and parses the motion information queue sent by the upper computer (10), and drives a mechanical arm to perform a grabbing operation according to a preset action. The ROS-based mechanical arm grabbing method and system can effectively utilize the powerful processing capability of the upper computer (10), readily implement the layout of the mechanical arm and the upper computer (10) and the collaborative operation of multiple mechanical arms, and easily implement motion planning of the mechanical arms by using ROS, and thus have wide application prospects.

Description

Robot arm grabbing method and system based on the ROS system

This application claims priority to Chinese patent application No. 201710056272.7, filed on January 25, 2017 and entitled "A ROS-based Visual Positioning and Robotic Arm Grasping Implementation Method", the entire contents of which are incorporated herein by reference.

Technical Field

The present invention belongs to the field of robot arm control and motion planning, and in particular relates to a robot arm grasping method and system based on the ROS system.

Background Art
The robotic arm is the most widely used automation device in the field of robotics. In particular, multi-degree-of-freedom manipulators play an ever greater role in fields such as machine building, automotive, semiconductor, medical, and home services, so the motion control of robot arms has always been a research hotspot. At present, the main application scenarios of the robot arm are the following:

1) Welding. Used to perform welding tasks in place of humans in harmful welding environments.

2) Automated production lines. Mainly used to perform actions such as grabbing, flipping and sorting items, improving production efficiency.

3) Medicine. Mainly used to perform precise medical operations such as minimally invasive surgery.

4) Services. In conjunction with mobile robots, robotic arms have entered daily life to perform tasks such as fetching items and tidying up.

Machine vision is a branch of artificial intelligence. In short, machine vision uses a camera instead of the human eye to judge and analyze the surrounding environment, combined with certain algorithms to achieve intelligent decision making. It is a comprehensive technology, covering image processing, mechanical engineering, control, electric light source illumination, optical imaging, sensors, analog and digital video technology, and computer hardware and software. In principle, machine vision is divided into several types, such as monocular, binocular and 3D vision. Its introduction has the following advantages:

1) Machine vision is more reliable than the human eye. It can continuously capture images and work continuously without visual fatigue.

2) Machine vision has higher precision. With suitable processing algorithms, it can achieve accurate measurement and error checking, and is conducive to data recording and integration.

3) Machine vision can adapt to complex environments. In situations unsuitable for manual work, machine vision can excel.
The ROS system (Robot Operating System) is an open-source robot operating system released by Willow Garage in 2010. It adopts a distributed architecture, which can greatly improve code reusability and the adaptability of complex robot systems. The ROS system has the following main features:

1) Peer-to-peer distributed design. The peer-to-peer design of ROS, together with mechanisms such as services and the node manager, can distribute the real-time computational load brought by functions such as computer vision and speech recognition, and can adapt to the challenges of multi-robot systems.

2) Multi-language support. The ROS system supports programming languages such as C++, Python, Octave and LISP, and provides interfaces for other programming languages.

3) Rich software packages. The ROS system integrates a large number of packages that can quickly implement the environment configuration of various robot applications, such as robot arm motion planning, mobile robot navigation, and robot SLAM.

4) Open source and free. The open-source nature of the ROS system encourages more people to contribute their own work.

Due to practical constraints, the intelligence of robot arms is still not high enough; most arms only execute mechanically taught actions, and many problems remain for applications in changing environments.
发明内容Summary of the invention
针对现有技术存在的缺陷或不足,本发明旨在于提出一种基于ROS系统机械臂抓取方法及系统,可有效解决机械臂环境适应性差、开发使用难度高等不足。对于机械臂的视觉接入、目标检测、图像处理、机械臂运动规划等本发明给出了一整套的解决方案。In view of the defects or deficiencies of the prior art, the present invention aims to propose a method and system for grasping a mechanical arm based on the ROS system, which can effectively solve the problems of poor adaptability of the mechanical arm environment and high difficulty in development and use. The present invention provides a complete solution for visual access, target detection, image processing, robotic arm motion planning, etc. of the robot arm.
实现本发明目的的技术解决方案为:The technical solution to achieve the object of the present invention is:
一种基于ROS系统的机械臂抓取方法,包括:A mechanical arm grabbing method based on ROS system, comprising:
步骤2:上位机通过相机获取包含有定位标记的待抓取物体的图像;Step 2: The upper computer obtains an image of the object to be grasped containing the positioning mark through the camera;
步骤3:所述上位机在ROS系统下对获取的所述图像进行处理,得到 所述待抓取物体在机械臂坐标系下的空间位姿信息;Step 3: The host computer processes the acquired image under the ROS system, and obtains Spatial pose information of the object to be grasped in a robot arm coordinate system;
步骤4:所述上位机根据所述待抓取物体在机械臂坐标系下的所述空间位姿信息和机械臂的空间位姿信息在ROS系统下对所述机械臂进行运动规划,得到相应的运动信息队列;Step 4: The upper computer performs motion planning on the mechanical arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the mechanical arm, and obtains corresponding Motion information queue;
步骤5:所述上位机将获得的所述机械臂的运动信息队列传递至下位机;Step 5: The host computer transmits the obtained motion information queue of the robot arm to the lower position machine;
步骤6:所述下位机根据所述运动信息队列驱动机械臂按照相应的路径执行抓取操作。Step 6: The lower computer drives the robot arm according to the motion information queue to perform a grab operation according to a corresponding path.
进一步,所述步骤1上位机配置相机在ROS系统下的使用环境之前还包括:步骤0:上位机对所述相机执行定位标记训练。Further, the step 1 before the host computer configures the camera to use the environment under the ROS system further includes: Step 0: The host computer performs positioning mark training on the camera.
Further, in step 0, the process in which the host computer performs positioning-mark training on the camera includes either of the following steps: Step 01: the host computer trains the camera on the positioning mark using an ARToolKit-based positioning-mark recognition algorithm; Step 02: the host computer trains the camera on the positioning mark using an OpenCV_ArUco-based positioning-mark recognition algorithm.
Further, before step 2 (the host computer acquires, through the camera, an image of the object to be grasped that carries a positioning mark), the method further includes: Step 11: the host computer configures a camera node under the ROS system to drive the camera; Step 12: the host computer calibrates the camera under the ROS system and saves the correction data; Step 13: the host computer selects a positioning-mark recognition algorithm.
Further, in step 3, the specific process in which the host computer processes the acquired image under the ROS system to obtain the spatial pose information, in the robot-arm coordinate system, of the object to be grasped carrying the positioning mark is: Step 31: search the image acquired by the camera for the positioning mark that best matches the preset positioning mark; Step 32: locate the found positioning mark; Step 33: from the located positioning mark, obtain the spatial pose information of the object to be grasped in the camera coordinate system; Step 34: convert that spatial pose information using a preset camera-to-robot-arm coordinate transformation matrix, obtaining the spatial pose information of the object to be grasped in the robot-arm coordinate system.
Further, in step 4, the specific process in which the host computer performs motion planning for the robot arm under the ROS system, based on the spatial pose information of the object to be grasped in the robot-arm coordinate system and the spatial pose information of the robot arm, to obtain a corresponding motion-information queue is: Step 41: the host computer writes a robot-arm model description file for the robot arm under the ROS system; Step 42: the host computer models the robot arm according to the model description file; Step 43: after the robot arm has been modeled, the host computer performs motion planning for the robot arm based on the spatial pose information of the object to be grasped in the robot-arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion-information queue.
Further, the ROS-based robot-arm grasping method also includes: Step 7: while the robot arm is driven to perform the grasping operation along the corresponding path, the lower computer returns the real-time spatial pose information of the robot arm to the host computer; Step 8: the host computer updates the spatial pose information of the robot arm with the returned real-time spatial pose information.
The present disclosure also provides a ROS-based robot-arm grasping system, comprising a host computer, a lower computer, and a camera, the host computer being communicatively connected to the lower computer and to the camera. The camera acquires, under the control of the host computer, an image of the object to be grasped that carries a positioning mark. The host computer includes: an image processing module, which processes the acquired image under the ROS system to obtain the spatial pose information of the object to be grasped in the robot-arm coordinate system; a motion planning module, which performs motion planning for the robot arm under the ROS system based on that spatial pose information and the spatial pose information of the robot arm, obtaining a corresponding motion-information queue; and a message transmission module, which transmits the obtained motion-information queue of the robot arm to the lower computer. The lower computer includes a motion execution module, which drives the robot arm according to the motion-information queue to perform the grasping operation along the corresponding path.
Further, the host computer also includes a camera training module for performing positioning-mark training on the camera.
Further, the camera training module performs positioning-mark training on the camera by training the camera on the positioning mark with an ARToolKit-based positioning-mark recognition algorithm, or with an OpenCV_ArUco-based positioning-mark recognition algorithm.
Further, the host computer also includes a camera configuration module for configuring a camera node under the ROS system to drive the camera, calibrating the camera under the ROS system and saving the correction data, and selecting a positioning-mark recognition algorithm.
Further, the image processing module processes the acquired image under the ROS system to obtain the spatial pose information of the object to be grasped in the robot-arm coordinate system by: searching the image acquired by the camera for the positioning mark that best matches the preset positioning mark; locating the found positioning mark; obtaining, from the located positioning mark, the spatial pose information of the object to be grasped in the camera coordinate system; and converting that spatial pose information using a preset camera-to-robot-arm coordinate transformation matrix, obtaining the spatial pose information of the object in the robot-arm coordinate system.
Further, the motion planning module performs motion planning for the robot arm under the ROS system, based on the spatial pose information of the object to be grasped in the robot-arm coordinate system and the spatial pose information of the robot arm, to obtain the corresponding motion-information queue by: writing a robot-arm model description file for the robot arm under the ROS system; modeling the robot arm according to the model description file; and, after the robot arm has been modeled, performing motion planning for the robot arm based on the spatial pose information of the object to be grasped in the robot-arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion-information queue.
Further, the lower computer also includes an information return module, which, while the robot arm is driven to perform the grasping operation along the corresponding path, returns the real-time spatial pose information of the robot arm to the host computer; the motion planning module is further configured to update the spatial pose information of the robot arm with the returned real-time spatial pose information.
Compared with the prior art, the ROS-based robot-arm grasping method and system of the present invention have the following significant advantages. First, by introducing machine vision as the core sensing device of the robot arm, the technical solution of the present disclosure greatly improves the adaptability of the robot arm: for different articles, as long as an article lies within the workspace of the robot arm and within its load range, pick and place actions can be performed. Second, the solution adopts a distributed system framework that separates the host computer from the lower computer, which makes effective use of the high computing and image-processing power of the host computer while guaranteeing the real-time performance of the lower computer. Third, the proposed solution is based on the ROS operating system and makes full use of its rich software packages to achieve rapid configuration of robot-arm motion planning, greatly lowering the barrier to robot-arm control. Finally, the overall solution makes the deployment of robot arms easy and can readily be extended to a single host computer coordinating multiple robot arms, reducing the cost of using robot arms; its application prospects are broad.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of the implementation environment of a ROS-based robot-arm grasping method proposed by the present disclosure;
FIG. 2 is a workflow diagram of a ROS-based robot-arm grasping method proposed by the present disclosure;
FIG. 3 is a workflow diagram of image processing performed by the host computer under the ROS system according to the present disclosure;
FIG. 4 is a workflow diagram of robot-arm motion planning performed by the host computer under the ROS system according to the present disclosure;
FIG. 5 is a workflow diagram of communication between the host computer and the lower computer according to the present disclosure;
FIG. 6 is a schematic structural diagram of one embodiment of a ROS-based robot-arm grasping system according to the present disclosure;
FIG. 7 is a schematic structural diagram of another embodiment of a ROS-based robot-arm grasping system according to the present disclosure.
DETAILED DESCRIPTION
As shown in FIG. 1, according to a preferred embodiment of the present invention, the implementation environment of the ROS-based robot-arm grasping method includes a host computer, a lower computer, a camera, and a communication environment. The host computer serves as the subject of image detection and image processing, while the lower computer serves as the subject that receives commands and executes grasping; together they cooperate to complete the robot-arm grasping task.
Referring to the implementation-environment diagram shown in FIG. 1, the implementation environment of this embodiment consists of the following parts:
1) A camera, for example a USB camera. The camera is placed above or obliquely above the object to be grasped; a clear, unobstructed shooting angle is best, and the coordinate system of the camera (coordinate system 1 in FIG. 1, i.e., the camera coordinate system) must be known.
2) The host computer. The host computer must run the ROS operating system (based on Linux). It is the "brain" of the whole system; its main functions are driving the camera to complete image acquisition and transmission, image processing, motion planning, and sending the motion-information queue.
3) The lower computer. The lower computer is the drive-control part of the robot-arm device; its main functions are receiving the motion-information queue, driving the robot arm, sensing the real-time spatial pose information of the robot arm, and sending that real-time spatial pose information.
4) The robot arm. The robot arm is the execution part of the robot-arm device. It is required to have more than five degrees of freedom (so that its workspace is large enough for a grasping operation based on visual positioning) and to carry an end effector (for example, a suction cup or an end gripper); different robot-arm configurations can be adjusted to the actual situation when the robot arm is modeled. In addition, the position of the robot arm must be fixed, and modeling is based on that position (coordinate system 2 in FIG. 1); that is, the robot-arm coordinate system and the spatial pose of the robot arm in that coordinate system must be known.
5) A local wireless network. Socket communication requires a wireless network in the implementation environment, with the host and lower computers communicating in the same network segment.
6) The object to be grasped and the positioning mark. The object to be grasped must be placed within the motion space of the robot arm and weigh less than its rated load, so that the robot arm is able to grasp it. A positioning mark is an identification pattern with specific shape requirements used for visual recognition and positioning; the pattern differs for different algorithms. Note that when the object carrying the positioning mark is placed, the mark must lie within the field of view of the camera, so that the camera can capture an image of the object containing the mark.
In one embodiment of the present disclosure, a ROS-based robot-arm grasping method includes:
Step 2: the host computer acquires, through the camera, an image of the object to be grasped that carries a positioning mark;
Step 3: the host computer processes the acquired image under the ROS system to obtain the spatial pose information of the object to be grasped in the robot-arm coordinate system;
Step 4: the host computer performs motion planning for the robot arm under the ROS system, based on the spatial pose information of the object to be grasped in the robot-arm coordinate system and the spatial pose information of the robot arm, obtaining a corresponding motion-information queue;
Step 5: the host computer transmits the obtained motion-information queue of the robot arm to the lower computer;
Step 6: the lower computer drives the robot arm according to the motion-information queue to perform the grasping operation along the corresponding path.
Specifically, the host computer and the camera are communicatively connected; for example, when the camera is a USB camera, it can be connected directly to a USB port of the host computer. Because the host computer runs the ROS system, the camera's operating environment under ROS must be configured to guarantee its normal use.
Therefore, preferably, before step 2 the method also includes: Step 1: the host computer configures the operating environment of the camera under the ROS system.
After configuration, the host computer can control the camera to capture images; here an image means a picture or a video stream. In this embodiment the host computer controls the camera to capture an image of the object to be grasped, carrying a positioning mark, within the camera's field of view.
The image captured by the camera is transmitted to the host computer, which processes it to obtain the spatial pose information of the object to be grasped in the robot-arm coordinate system. Spatial pose information means spatial position and spatial attitude. For example, suppose the object to be grasped is a cup: the spatial position is the cup's coordinates in the robot-arm coordinate system, and the spatial attitude indicates whether the cup is standing upright or lying on its side. The spatial position lets motion planning move the robot arm to where the cup is; the spatial attitude lets motion planning decide whether the end effector should grasp horizontally or vertically once it reaches the cup.
The spatial pose information of the object to be grasped in the robot-arm coordinate system is passed to the MoveIt initialization program module for motion planning. The initial value of the robot arm's spatial pose information is set in the ROS system from the start, so motion planning can proceed directly from the processed spatial pose of the object in the robot-arm coordinate system (corresponding to the goal) and the robot arm's spatial pose set in the ROS system (corresponding to the start), yielding the motion-information queue.
The host computer and the lower computer communicate using the Socket communication protocol; the host computer sends the motion-information queue to the lower computer over Socket.
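The Socket exchange between host and lower computer could be sketched as follows. This is a minimal illustration, not the patent's implementation: the JSON encoding, the port, and the "list of joint-space waypoints" payload format are all assumptions made for the example.

```python
import json
import socket
import threading

def send_motion_queue(queue_points, host="127.0.0.1", port=9090):
    """Host-computer side (illustrative): serialize the motion-information
    queue as JSON and push it to the lower computer over a TCP socket."""
    payload = json.dumps({"points": queue_points}).encode("utf-8")
    with socket.create_connection((host, port)) as conn:
        conn.sendall(payload)
        conn.shutdown(socket.SHUT_WR)  # signal end of message

def receive_motion_queue(server_sock):
    """Lower-computer side (illustrative): read one full message
    and parse the motion-information queue out of it."""
    conn, _ = server_sock.accept()
    chunks = []
    while True:
        data = conn.recv(4096)
        if not data:
            break
        chunks.append(data)
    conn.close()
    return json.loads(b"".join(chunks).decode("utf-8"))["points"]

# Loopback demonstration: one joint-space waypoint per queue entry.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # OS-assigned free port
server.listen(1)
port = server.getsockname()[1]

queue = [[0.0, 0.1, 0.2, 0.3, 0.4], [0.1, 0.2, 0.3, 0.4, 0.5]]
sender = threading.Thread(target=send_motion_queue,
                          args=(queue, "127.0.0.1", port))
sender.start()
received = receive_motion_queue(server)
sender.join()
server.close()
```

In a real deployment the two endpoints run on different machines in the same network segment, so the sender would use the lower computer's LAN address instead of the loopback address.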
After receiving the motion-information queue, the lower computer parses it, drives the robot arm to move along the corresponding path according to the parsed queue, and performs the grasping operation.
In this embodiment, the spatial pose information of the object to be grasped is located from the camera's image of the object carrying the positioning mark (this information must be in the same coordinate system as the robot arm's spatial pose information); motion planning is then performed, driving the robot arm to execute the grasping operation on the object.
Combining machine vision (i.e., the camera) with the robot arm is equivalent to giving the robot arm intelligent "eyes", which greatly increases its environmental perception and intelligent decision-making abilities, further expanding the application fields of robot arms. In addition, the present disclosure is developed on the ROS system and exploits its many features to reduce the difficulty of implementing robot-arm motion planning and to lower the application barrier of robot arms.
FIG. 2 is the system workflow diagram of the ROS-based visual positioning and robot-arm grasping implementation method proposed by the present disclosure. The whole implementation flow is divided into two parts: host-computer configuration and lower-computer configuration.
Step 1, in which the host computer configures the operating environment of the camera under the ROS system, is further explained below:
11) The host computer configures a camera node under the ROS system to drive the camera. The camera is driven inside the host computer's ROS system; the driver node program used in this embodiment is usb_cam. This node drives the camera and publishes the images it captures on the usb_cam/image_raw topic.
12) The host computer calibrates the camera under the ROS system and saves the correction data. The camera is calibrated with the ROS camera_calibration program and the correction data is saved. After the camera is driven, this program is used to obtain the camera's calibration data, i.e., the intrinsic parameters, extrinsic parameters, and distortion coefficients, and these data are saved as the correction data. Different cameras yield different correction data; the correction data is later used to rectify the pictures captured by the camera, producing pictures with less distortion.
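To make concrete what "using the correction data to rectify pictures" involves, the sketch below applies and then inverts the radial part of the standard pinhole distortion model on a single normalized image point. The coefficient values are made up for illustration; real correction data would come from the camera_calibration step, and real rectification operates on whole images rather than single points.

```python
import numpy as np

def distort(xy, k1, k2):
    """Apply the radial part of the pinhole distortion model to
    normalized image coordinates: x_d = x * (1 + k1*r^2 + k2*r^4)."""
    r2 = np.sum(xy ** 2)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort(xy_d, k1, k2, iters=20):
    """Invert the distortion by fixed-point iteration, as rectification
    tools commonly do; xy_d is the observed (distorted) point."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = np.sum(xy ** 2)
        xy = xy_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return xy

k1, k2 = -0.2, 0.05                      # made-up distortion coefficients
true_pt = np.array([0.3, -0.4])          # ideal normalized coordinates
observed = distort(true_pt, k1, k2)      # what the raw image reports
recovered = undistort(observed, k1, k2)  # rectified point
```

For moderate distortion the fixed-point iteration converges quickly, which is why saving the coefficients once at calibration time is enough to rectify every subsequent frame.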
13) The host computer selects a (trained) positioning-mark recognition algorithm. Several positioning-mark recognition algorithms may be available; when the camera is to be used to control the robot arm for grasping, one algorithm must be selected for use in the subsequent positioning process. The positioning-mark recognition algorithm may be an ARToolKit-based algorithm, an OpenCV_ArUco-based algorithm, or any other algorithm that can achieve visual positioning and let the host computer control the robot arm to perform the grasping operation.
In another embodiment of the present invention, besides the above, before step 1 (the host computer configures the operating environment of the camera under the ROS system) the method also includes: Step 0: the host computer performs positioning-mark training on the camera. Preferably, the process of step 0 is either of the following:
Step 01: the host computer trains the camera on the positioning mark using an ARToolKit-based positioning-mark recognition algorithm;
Step 02: the host computer trains the camera on the positioning mark using an OpenCV_ArUco-based positioning-mark recognition algorithm.
Specifically, a single positioning-mark recognition algorithm suffices for the subsequent recognition work; therefore only one algorithm needs to be trained, in which case only that algorithm is available for later selection. If both algorithms are trained, either one may be selected later.
The mark-training methods for the two recognition algorithms proposed by the present disclosure are as follows:
For the ARToolKit-based positioning-mark recognition algorithm, training can be done with the online tool "Tarotaro" or with the offline mk_patt tool provided by ARToolKit;
For the OpenCV_ArUco-based recognition algorithm, the drawMarker() function is used to create and train the identification pattern (i.e., the positioning mark).
In another embodiment of the present disclosure, besides the above, the specific process in step 3 by which the host computer processes the acquired image under the ROS system to obtain the spatial pose information, in the robot-arm coordinate system, of the object to be grasped carrying the positioning mark is:
Step 31: search the image acquired by the camera for the positioning mark that best matches the preset positioning mark;
Step 32: locate the found positioning mark;
Step 33: from the located positioning mark, obtain the spatial pose information, in the camera coordinate system, of the object to be grasped carrying the positioning mark;
Step 34: convert the spatial pose information of the object to be grasped in the camera coordinate system using the preset camera-to-robot-arm coordinate transformation matrix, obtaining the spatial pose information of the object to be grasped in the robot-arm coordinate system.
Specifically, FIG. 3 is the workflow diagram of image processing performed by the host computer under the ROS system; the whole flow is completed in a single ROS node and is further explained as follows:
Before the image acquired by the camera is processed, the camera's correction data must first be read and the recognition algorithm to be used must be selected (this is done when configuring the camera's operating environment in the ROS system).
If the ARToolKit-based positioning-mark recognition algorithm is selected, the program proceeds as follows: read the pre-imported identification-pattern information (i.e., read the preset positioning mark); the (USB) camera acquires a real-time image; find the best-matching positioning mark and locate the mark observed by the camera; then, from the located mark, obtain the spatial pose information, in the camera coordinate system, of the object to be grasped carrying the positioning mark.
Note that the above describes the positioning procedure for a still picture. In actual use, the object carrying the positioning mark is filmed moving slowly along an assembly line; in that case an ARToolKit frame-rate counter must be added, and the pose of the identification pattern in the camera coordinate system is estimated from the change in the mark's position across frames. This amounts to splitting the multi-frame video into individual frames and using the method above to obtain, for each frame, the spatial pose information of the object to be grasped in the camera coordinate system. As soon as one frame's spatial pose information of the object in the camera coordinate system is obtained, it is published in real time for subsequent processing.
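The patent only says that the pose is estimated from the mark's position change across frames, without naming an estimator; one simple choice consistent with that description is a constant-velocity prediction, sketched below with made-up conveyor coordinates.

```python
import numpy as np

def predict_next_position(p_prev, p_curr):
    """Constant-velocity estimate of the mark's position in the next
    frame, from its positions in the two most recent frames. This is
    one simple model; the patent does not prescribe a specific one."""
    return p_curr + (p_curr - p_prev)

# A mark moving along a conveyor at a steady 2 cm per frame in x.
p0 = np.array([0.10, 0.25, 0.00])   # position in frame k-1 (meters)
p1 = np.array([0.12, 0.25, 0.00])   # position in frame k
p2_pred = predict_next_position(p0, p1)
```

Predicting the next-frame position this way lets the planner compensate for the latency between image capture and arm motion when the object is not stationary.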
In addition, in other embodiments, when debugging is needed, OpenGL is used at the same time to draw a coordinate system on the icon of the located positioning mark, with the mark as origin, so that the size, shape, and movement of the actual object to be grasped can be displayed on the monitoring screen; the motion of the robot arm is of course also displayed, giving a more intuitive, dynamic view of the grasping process.
If the OpenCV_ArUco-based positioning-mark recognition algorithm is selected, the cv_bridge node must first be configured under the ROS system to convert the sensor_msgs/Image type image data obtained by the camera under ROS into cv::Mat type image data recognizable by the OpenCV library.
Then, in the converted camera image, the positioning mark best matching the preset positioning mark is found and its spatial pose information in the camera coordinate system is obtained. When the acquired image is a video, the best-matching positioning mark is found frame by frame, and the spatial pose information of the located mark in the camera coordinate system is extracted frame by frame. Each time one frame's spatial pose information of the object to be grasped in the camera coordinate system is extracted, it is published as the input of one round of motion planning.
When the OpenCV_ArUco positioning-mark recognition algorithm is used to identify the positioning mark, the camera's image of the object carrying the mark (after conversion to cv::Mat type image data) is used; the OTSU binarization algorithm then reads the value of the positioning mark from the perspective-transformed image and compares it with the value of the pre-trained positioning mark (i.e., the value of the preset positioning mark), thereby recognizing the mark.
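The OTSU read-out step can be illustrated without OpenCV: the sketch below implements Otsu's threshold directly over a grayscale histogram and samples the cell centers of an already-rectified marker image to recover its bit matrix. The 2x2 synthetic marker and its gray levels are invented for the example; a real ArUco marker has a larger grid plus a border.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold that maximizes the
    between-class variance of the grayscale histogram (0..255)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                    # class-0 pixel counts
    cum_m = np.cumsum(hist * np.arange(256))   # class-0 intensity sums
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum_w[t - 1], total - cum_w[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[t - 1] / w0
        m1 = (cum_m[-1] - cum_m[t - 1]) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def read_marker_bits(gray, grid):
    """Binarize a rectified marker image with Otsu, then sample the
    center of each grid cell to recover the marker's bit matrix."""
    binary = gray >= otsu_threshold(gray)
    h, w = gray.shape
    return np.array([[int(binary[int((i + 0.5) * h / grid),
                                 int((j + 0.5) * w / grid)])
                      for j in range(grid)] for i in range(grid)])

# Synthetic 2x2-bit marker rendered at 40x40 pixels, noise-free cells.
pattern = np.array([[1, 0], [0, 1]])
img = np.kron(pattern, np.full((20, 20), 200)).astype(np.uint8)
img[img == 0] = 30                      # dark cells
bits = read_marker_bits(img, grid=2)
```

Comparing `bits` against each pre-trained marker's bit matrix is then a simple equality test, which is what makes the recognition step cheap.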
Afterwards, spatial pose estimation is performed on the recognized positioning mark, giving the spatial pose information of the object to be grasped in the camera coordinate system.
Then, according to the preset camera-to-robot-arm coordinate transformation matrix, the spatial pose information of the object to be grasped in the camera coordinate system is converted, yielding its spatial pose information in the robot-arm coordinate system. The camera-frame spatial pose information of the object in every frame must be converted in this way.
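The camera-to-arm conversion is a standard homogeneous-transform multiplication. The sketch below shows it for the position part of the pose; the specific camera mounting (0.5 m above the arm base, looking straight down) is an assumption chosen for the example, not a value from the patent.

```python
import numpy as np

def to_arm_frame(T_arm_cam, p_cam):
    """Convert a point from the camera coordinate system to the robot-arm
    coordinate system with a preset 4x4 homogeneous transformation matrix."""
    p_h = np.append(p_cam, 1.0)          # homogeneous coordinates
    return (T_arm_cam @ p_h)[:3]

# Assumed fixed mounting: camera 0.5 m above the arm base, optical axis
# pointing down; x axes aligned, so the rotation is a 180-degree flip
# about x.
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
t = np.array([0.0, 0.0, 0.5])          # camera origin in the arm frame
T_arm_cam = np.eye(4)
T_arm_cam[:3, :3] = R
T_arm_cam[:3, 3] = t

p_cam = np.array([0.1, 0.2, 0.4])      # marker seen 0.4 m below camera
p_arm = to_arm_frame(T_arm_cam, p_cam)
```

The orientation part of the pose is converted the same way, by composing the mark's camera-frame rotation with `R`; once both are in the arm frame, they can be handed to the planner together with the arm's own pose.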
Finally, a MoveIt interface program is written to format the spatial pose information of the object to be grasped in the robot-arm coordinate system into quaternion form and pass it, via the API provided by the MoveIt module, to the MoveIt initialization program module for motion planning.
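The "format into quaternion form" step might look like the following. The Euler-to-quaternion conversion is the standard ZYX (roll-pitch-yaw) formula; `make_pose_target` and its dict layout are a hypothetical stand-in for the pose message the MoveIt API actually takes, since the patent does not show that code.

```python
import math

def quaternion_from_euler(roll, pitch, yaw):
    """Standard ZYX Euler-to-quaternion conversion, returning
    (x, y, z, w) as pose messages conventionally order it."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (sr * cp * cy - cr * sp * sy,
            cr * sp * cy + sr * cp * sy,
            cr * cp * sy - sr * sp * cy,
            cr * cp * cy + sr * sp * sy)

def make_pose_target(position, rpy):
    """Bundle position + quaternion the way a hypothetical MoveIt
    interface program might, before handing it to the planning API."""
    qx, qy, qz, qw = quaternion_from_euler(*rpy)
    x, y, z = position
    return {"position": {"x": x, "y": y, "z": z},
            "orientation": {"x": qx, "y": qy, "z": qz, "w": qw}}

# Grasp target 30 cm in front of the base, level orientation.
target = make_pose_target((0.30, -0.10, 0.05), (0.0, 0.0, 0.0))
```

Quaternions avoid the singularities of Euler angles during interpolation, which is why planning frameworks take poses in this form.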
In general, when processing the images acquired by the camera, the host computer must obtain, for every frame, the spatial pose information of the object to be grasped in the robot arm coordinate system and convert it into the quaternion form supported by the MoveIt initialization module for subsequent motion planning.
FIG. 4 is a flow chart of the host computer performing robot arm motion planning under the ROS system according to the present disclosure. Step 4, in which the host computer plans the motion of the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm to obtain the corresponding motion information queue, is further described as follows:
41. The host computer writes, in URDF (Unified Robot Description Format), a robot arm model description file for the robot arm under the ROS system, which is used for the subsequent modeling of the robot arm.
42. The host computer models the robot arm according to the robot arm model description file, mainly by loading the created robot arm description model into the MoveIt Setup Assistant Tool under the ROS system.
Further, the modeling steps are, in order: collision detection setup; virtual joint setup (for example, the base of the robot arm, used to anchor the robot arm coordinate system); planning-group setup for the arm joints (whose kinematics solver is the KDL Kinematics Plugin); initial position setup for the robot arm (i.e., the initial value of the robot arm's spatial pose information); end-effector setup (for example, defining whether it is a suction cup, a gripper, etc.); and passive joint setup (joints without their own actuation that can only move together with other joints). Finally, the MoveIt initialization module is generated; if its motion planning algorithm is not changed, the default planning library is OMPL (Open Motion Planning Library).
43. After the robot arm has been modeled, the host computer plans the motion of the robot arm (using the MoveIt initialization module) according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtains the corresponding motion information queue, and publishes it (following the communication rules of the ROS system).
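The URDF file mentioned in step 41 is plain XML describing the arm's links and joints. A minimal, purely illustrative fragment (the link and joint names are hypothetical, not taken from the disclosure) might look like:

```xml
<?xml version="1.0"?>
<robot name="example_arm">
  <link name="base_link"/>
  <link name="upper_arm"/>
  <!-- A single revolute joint; a real arm description repeats this
       pattern for each degree of freedom. -->
  <joint name="shoulder_joint" type="revolute">
    <parent link="base_link"/>
    <child link="upper_arm"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.57" upper="1.57" effort="10.0" velocity="1.0"/>
  </joint>
</robot>
```

The MoveIt Setup Assistant Tool consumes such a file to generate the collision, planning-group, and end-effector configuration described above.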
FIG. 5 is a flow chart of the communication between the host computer and the lower computer according to the present disclosure. This flow is completed on a single ROS node and is further described as follows:
First, the message server program of the ROS system is initialized to read the motion information queue published by the MoveIt initialization module.
Then, the Socket communication node (TCP) is initialized, and the motion planning information queue that has been read is placed in the send buffer, to be sent to the lower computer when the host and lower computers communicate.
After that, upon receiving the motion planning information, the lower computer parses the motion information and drives the robot arm to perform the grasp according to the planned motions.
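The disclosure does not specify the wire format of the motion information queue, only that it is placed in a send buffer and parsed by the lower computer. One hypothetical encoding, a length-prefixed sequence of six-joint waypoints, could look like this:

```python
import struct

def pack_motion_queue(waypoints):
    """Encode a motion information queue (a list of 6-tuples of joint
    angles) as a length-prefixed binary message. The format is a
    hypothetical example, not specified by the disclosure."""
    payload = b"".join(struct.pack("<6f", *w) for w in waypoints)
    return struct.pack("<I", len(waypoints)) + payload

def unpack_motion_queue(data):
    """Parse a message produced by pack_motion_queue (lower-computer side)."""
    (count,) = struct.unpack_from("<I", data, 0)
    return [struct.unpack_from("<6f", data, 4 + 24 * i) for i in range(count)]

queue = [(0.0, 0.5, -0.25, 0.0, 1.0, 0.0)]
msg = pack_motion_queue(queue)      # bytes ready for socket.sendall()
decoded = unpack_motion_queue(msg)
```

The resulting bytes can be handed to `socket.sendall()` on the host side and decoded symmetrically on the lower computer before the waypoints are fed to the joint drivers.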
In another embodiment of the present disclosure, in addition to the above, the method further includes: Step 7: while the robot arm is driven to perform the grasping operation along the corresponding path, transmitting the real-time spatial pose information of the robot arm back to the host computer; Step 8: the host computer updating the spatial pose information of the robot arm with the returned real-time spatial pose information.
Specifically, as noted above, the robot arm may need to grasp multiple objects on a production line, and each grasping operation necessarily changes the robot arm's spatial pose information. Therefore, while a planned motion is being executed, position sensors on the robot arm (such as angle sensors and encoders) on the lower-computer side send the robot arm's actual spatial pose information to the host computer via Socket communication, so that the host computer can update the robot arm's spatial pose information for planning the motion toward the next object to be grasped, based on that object's spatial pose information in the robot arm coordinate system.
For ease of understanding, consider a concrete usage example: several objects to be grasped move slowly along a production line, and the camera has already been trained on the positioning markers (using the two marker recognition algorithms described above). First, the host computer configures the camera's operating environment (i.e., it drives the camera, reads the correction data saved during prior calibration, and lets the engineer select, on the host, the positioning marker recognition algorithm to use). The host computer then controls the camera to shoot video of the features to be grasped on the production line, publishing it as it is captured. When the image processing unit inside the host computer reads the real-time video, it splits it into individual frames for positioning marker recognition; as soon as the spatial pose information of the first frame's positioning marker in the robot arm coordinate system is obtained, it is published. Because the image processing invoked through the MoveIt initialization module and the camera configuration can run in parallel, the robot arm can be modeled at the very beginning, just as the camera's positioning marker training is done at the beginning. The system then listens for published spatial pose information of the features to be grasped in the robot arm coordinate system. When the first frame's pose information is available, motion planning is performed from that pose information and the robot arm's spatial pose information (its initial value), yielding the corresponding motion information queue. The lower computer drives the robot arm to perform the grasping operation according to this motion information queue and uploads the robot arm's real-time spatial pose information to the host computer. The MoveIt initialization module in the host computer updates the robot arm's spatial pose information with the uploaded real-time information; the second frame's pose information of the features to be grasped in the robot arm coordinate system is then retrieved, and motion planning is performed against the updated robot arm pose to obtain the corresponding motion information queue. The lower computer executes the grasping operation according to this queue and again uploads the robot arm's real-time spatial pose information to the host computer... and so on in a loop, realizing ROS-based visual positioning and robot arm grasping.
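The plan-execute-update loop of this example can be sketched in miniature. The linear-interpolation "planner" below is a toy stand-in for MoveIt, used only to show how the fed-back arm pose seeds the next plan:

```python
def plan_motion(arm_pose, target_pose, steps=3):
    """Toy linear interpolation standing in for MoveIt planning:
    returns a queue of intermediate poses from arm_pose to target_pose."""
    return [
        tuple(a + (t - a) * (i + 1) / steps for a, t in zip(arm_pose, target_pose))
        for i in range(steps)
    ]

def run_pipeline(frame_targets, arm_pose):
    """Plan and 'execute' one grasp per frame, feeding the executed
    pose back as the start of the next plan (the step 7/8 feedback)."""
    for target in frame_targets:
        queue = plan_motion(arm_pose, target)
        arm_pose = queue[-1]   # lower computer reports the real pose back
    return arm_pose

# Two frames' worth of (hypothetical) target positions on the conveyor:
final = run_pipeline([(0.4, 0.0, 0.2), (0.5, 0.1, 0.2)], (0.0, 0.0, 0.0))
```

After both grasps, the arm's tracked pose ends at the second target, which is exactly the state the next planning pass would start from.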
In summary, compared with the prior art, the present disclosure has notable advantages: the technical solution adopts a distributed design, which both exploits the processing power of the host computer and makes it easy to extend the topology to multi-arm cooperation; the proposed vision-based object positioning method adapts to grasping different objects and places low demands on an object's initial position; the proposed robot arm motion planning method makes full use of the features of the ROS system and is simple to configure, convenient, and practical; and the overall solution uses wireless communication with a flexible layout, making it applicable to different application scenarios.
In another embodiment of the present disclosure, as shown in FIG. 6, a ROS-system-based robot arm grasping system includes a host computer 10, a lower computer 30, and a camera 20, the host computer 10 being communicatively connected to the lower computer 30 and to the camera 20.
The host computer may be a computer running ROS, while the lower computer refers to the drive control portion of the robot arm device. In this embodiment the lower computer and the host computer communicate via Socket, and the camera must be communicatively connected to the host computer; for example, a USB camera can be connected to the host computer through a USB interface to enable communication between the two.
The camera 20 is configured to acquire, under the control of the host computer, an image of the object to be grasped containing the positioning marker.
The host computer 10 further includes:
an image processing module 12, configured to process the acquired image under the ROS system to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system;
a motion planning module 13, configured to plan the motion of the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue;
a message passing module 14, configured to transmit the obtained motion information queue of the robot arm to the lower computer.
The lower computer 30 includes:
a motion execution module 31, configured to drive the robot arm, according to the motion information queue, to perform the grasping operation along the corresponding path.
Specifically, since the host computer runs the ROS system, the camera's operating environment under ROS must be configured to guarantee normal use of the camera. Preferably, the host computer 10 further includes a camera configuration module 11, configured to set up the camera's operating environment under the ROS system. Once configuration is complete, the host computer can control the camera to capture images, where an image means a picture or a video. In this embodiment, the host computer controls the camera to capture an image of the marker-bearing object to be grasped within the camera's field of view.
The image captured by the camera is transmitted to the host computer, which processes it to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system. Spatial pose information means spatial position plus spatial orientation. For example, if the object to be grasped is a cup, the spatial position is the cup's coordinates in the robot arm coordinate system, while the spatial orientation indicates whether the cup is standing upright or lying on its side. The spatial position lets motion planning move the robot arm to the cup's location; the spatial orientation lets motion planning decide whether the end effector should grasp horizontally or vertically once it reaches the cup.
The spatial pose information of the object to be grasped in the robot arm coordinate system is passed to the MoveIt initialization module for motion planning. The initial value of the robot arm's spatial pose information is set in the ROS system at the outset, so motion planning can be performed directly from the processed pose of the object to be grasped in the robot arm coordinate system (effectively the goal) and the robot arm's pose set in the ROS system (effectively the start), yielding the motion information queue.
The host computer and the lower computer communicate using the Socket protocol, over which the host computer sends the motion information queue to the lower computer.
After receiving the motion information queue, the lower computer parses it, drives the robot arm to move along the corresponding path according to the parsed queue, and performs the grasping operation.
In this embodiment, the spatial pose information of the object to be grasped is determined from the camera's image of the marker-bearing object (this information must be in the same coordinate system as the robot arm's spatial pose information), and motion planning is then performed, driving the robot arm to grasp the object.
Combining machine vision (i.e., the camera) with the robot arm effectively gives the arm intelligent "eyes", greatly increasing its environmental awareness and decision-making capability and thereby further expanding its fields of application. In addition, the present disclosure is developed on the ROS system and exploits its many features to reduce the difficulty of implementing robot arm motion planning and to lower the barrier to applying robot arms.
Preferably, the camera configuration module 11, configured to set up the camera's operating environment under the ROS system, specifically: configures a camera node under the ROS system to drive the camera; calibrates the camera under the ROS system and saves the correction data; and selects a (trained) positioning marker recognition algorithm.
Specifically, the camera is driven inside the host computer's ROS system. The driver node program used in this embodiment is usb_cam; this node drives the camera and publishes the images it captures on the usb_cam/image_raw topic.
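Driving the camera with usb_cam is typically done from a ROS launch file; a minimal example is shown below (the device path and resolution are illustrative defaults, not values from the disclosure):

```xml
<launch>
  <!-- Illustrative usb_cam launch; parameter values are examples only. -->
  <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen">
    <param name="video_device" value="/dev/video0"/>
    <param name="image_width" value="640"/>
    <param name="image_height" value="480"/>
    <param name="pixel_format" value="yuyv"/>
  </node>
</launch>
```

With this node running, subscribers such as the image processing module receive frames on the usb_cam/image_raw topic.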
The camera is calibrated with the ROS system's camera_calibration program, and the correction data is saved. After the camera is driven, this program is used to obtain the camera's calibration data, namely the intrinsic parameters, extrinsic parameters, and distortion coefficients, which are saved as correction data. The correction data differ from camera to camera; they are subsequently used to rectify the pictures taken by the camera, yielding images with less distortion.
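Rectification with the saved correction data amounts to inverting the camera's distortion model. A sketch of the common radial (Brown) model on normalized image coordinates, with illustrative coefficients rather than real calibration output, is:

```python
def undistort_normalized(xd, yd, k1, k2, iterations=10):
    """Invert the radial distortion model x_d = x * (1 + k1*r^2 + k2*r^4)
    for normalized image coordinates by fixed-point iteration.
    The coefficients are illustrative, not from a real calibration."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / factor, yd / factor
    return x, y

# With zero distortion coefficients the mapping is the identity.
xu, yu = undistort_normalized(0.2, -0.1, 0.0, 0.0)
```

A full rectification pipeline would first map pixels to normalized coordinates with the intrinsic matrix, apply this inversion, and project back.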
Multiple positioning marker recognition algorithms may be available for the camera. When the camera is to be used to control the robot arm for grasping, one positioning marker recognition algorithm must be selected for use in the subsequent positioning process. The algorithm may be the ARToolKit-based positioning marker recognition algorithm, the OpenCV_ArUco-based positioning marker recognition algorithm, or any other, as long as it enables visual positioning and lets the host computer control the robot arm to perform the grasping operation.
In another embodiment of the present disclosure, in addition to the above, as shown in FIG. 7, the host computer further includes a camera training module 15, configured to perform positioning marker training for the camera.
Preferably, the camera training module 15 performs positioning marker training for the camera by training the camera on the positioning marker based on the ARToolKit positioning marker recognition algorithm, or by training the camera on the positioning marker based on the OpenCV_ArUco positioning marker recognition algorithm.
Specifically, in this embodiment either positioning marker recognition algorithm may be used for training, facilitating subsequent recognition. For the training processes of the two, please refer to the corresponding method embodiment; they will not be repeated here.
In another embodiment of the present disclosure, in addition to the above, the image processing module 12, configured to process the acquired image under the ROS system to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system, specifically:
searches the image acquired by the camera for the positioning marker that best matches the preset positioning marker;
locates the found positioning marker;
obtains, from the located positioning marker, the spatial pose information of the marker-bearing object to be grasped in the camera coordinate system;
and converts, according to the preset camera-to-robot-arm coordinate transformation matrix, the spatial pose information of the object to be grasped from the camera coordinate system into its spatial pose information in the robot arm coordinate system.
Specifically, if the ARToolKit-based positioning marker recognition algorithm is selected, the program proceeds as follows: the pre-imported marker pattern information is read (i.e., the preset positioning marker is loaded); the (USB) camera acquires real-time images; the best-matching marker is found and the marker observed by the camera is located; and the spatial pose information of the marker-bearing object to be grasped in the camera coordinate system is obtained from the located marker.
Note that the above describes positioning on a still picture. In actual use, the marker-bearing object captured by the camera is moving slowly along a production line in a video stream, so an ARToolKit frame counter must be added, and the marker's pose in the camera coordinate system is estimated from the change in the marker's position across frames. This amounts to splitting the multi-frame video into individual frames and using the method above to obtain the spatial pose information of the object to be grasped in the camera coordinate system for each frame. Once a frame's pose information in the camera coordinate system has been obtained, it is published in real time for subsequent processing.
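Estimating the marker pose from its change across frames can be approximated, in the simplest case, by constant-velocity extrapolation. This model is an assumption for illustration, not ARToolKit's actual estimator:

```python
def predict_next_pose(prev_pose, curr_pose):
    """Constant-velocity extrapolation of a marker position from two
    consecutive frames -- a simplified stand-in for the frame-counter-based
    estimation described above (an illustrative assumption)."""
    return tuple(c + (c - p) for p, c in zip(prev_pose, curr_pose))

# Marker drifting +0.01 m per frame along x on the conveyor:
p0, p1 = (0.40, 0.00, 0.30), (0.41, 0.00, 0.30)
p2 = predict_next_pose(p0, p1)
```

Such a prediction can bridge the latency between frame capture and motion planning when the object keeps moving on the line.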
In addition, in other embodiments, when debugging is needed, OpenGL can be used at the same time to draw a coordinate system on the marker's icon, with the located marker as the origin. The monitor then displays the size, shape, and movement of the actual object to be grasped; the robot arm's motion is of course also displayed, giving a more intuitive, dynamic view of the grasping process.
If the OpenCV_ArUco-based positioning marker recognition algorithm is selected, the cv_bridge node must first be configured under the ROS system to convert the sensor_msgs/Image image data obtained by the camera under ROS into cv::Mat image data recognizable by the OpenCV library.
Then, the converted camera image is searched for the positioning marker that best matches the preset positioning marker, and its spatial pose information in the camera coordinate system is obtained. When the acquired image is a video stream, the best-matching positioning marker is located frame by frame, and the spatial pose information of each located marker in the camera coordinate system is extracted in turn. Each time the spatial pose information of the object to be grasped in the camera coordinate system is extracted from a frame, it is published as the parameters for one pass of motion planning.
When the OpenCV_ArUco positioning marker recognition algorithm is selected to identify the positioning marker, the camera image of the marker-bearing object to be grasped (by now converted to cv::Mat image data) is used: the OTSU binarization algorithm reads the value of the positioning marker from the perspective-transformed image and compares it with the value of the pre-trained positioning marker (i.e., the value of the preset positioning marker), thereby identifying the marker.
After that, spatial pose estimation is performed on the identified positioning marker, yielding the spatial pose information of the object to be grasped in the camera coordinate system.
Then, according to the preset camera-to-robot-arm coordinate transformation matrix, the spatial pose information of the object to be grasped in the camera coordinate system is converted into its spatial pose information in the robot arm coordinate system. This conversion must be applied to the pose information of every frame.
Finally, a MoveIt interface program is written, which formats the spatial pose information of the object to be grasped in the robot arm coordinate system as a quaternion and passes it, through the API provided by the MoveIt module, to the MoveIt initialization module for motion planning.
In general, when processing the images acquired by the camera, the host computer must obtain, for every frame, the spatial pose information of the object to be grasped in the robot arm coordinate system and convert it into the quaternion form supported by the MoveIt initialization module for subsequent motion planning.
In another embodiment of the present disclosure, in addition to the above, the motion planning module 13, configured to plan the motion of the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm to obtain the corresponding motion information queue, specifically:
writes (in URDF) a robot arm model description file for the robot arm under the ROS system;
and models the robot arm according to the robot arm model description file, mainly by loading the created robot arm description model into the MoveIt initialization toolkit under the ROS system (for the specific modeling steps, please refer to the corresponding method embodiment; they will not be repeated here);
and, after the robot arm has been modeled, plans the motion of the robot arm (using the MoveIt initialization module) according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtains the corresponding motion information queue, and publishes it (following the communication rules of the ROS system).
This embodiment uses Socket communication between the host computer and the lower computer; for the specific communication process, see the corresponding method embodiment, which will not be repeated here.
In another embodiment of the present disclosure, in addition to the above, as shown in FIG. 7, the lower computer further includes:
an information return module 32, configured to transmit the real-time spatial pose information of the robot arm back to the host computer while the robot arm is driven to perform the grasping operation along the corresponding path;
and the motion planning module is further configured to update the spatial pose information of the robot arm with the returned real-time spatial pose information.
Specifically, as noted above, the robot arm may need to grasp multiple objects on a production line, and each grasping operation necessarily changes the robot arm's spatial pose information. Therefore, while a planned motion is being executed, position sensors on the robot arm (such as angle sensors and encoders) on the lower-computer side send the robot arm's actual spatial pose information to the host computer via Socket communication, so that the host computer can update the robot arm's spatial pose information for planning the motion toward the next object to be grasped, based on that object's spatial pose information in the robot arm coordinate system.
Although the present invention has been disclosed above by way of preferred embodiments, they are not intended to limit the invention. Those of ordinary skill in the art to which the present invention pertains may make various changes and modifications without departing from the spirit and scope of the invention. Therefore, the scope of protection of the present invention shall be as defined by the claims.

Claims (14)

  1. A robot arm grasping method based on a ROS system, comprising:
    Step 2: a host computer acquiring, through a camera, an image of an object to be grasped containing a positioning marker;
    Step 3: the host computer processing the acquired image under the ROS system to obtain spatial pose information of the object to be grasped in a robot arm coordinate system;
    Step 4: the host computer planning motion of the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and spatial pose information of the robot arm, to obtain a corresponding motion information queue;
    Step 5: the host computer transmitting the obtained motion information queue of the robot arm to a lower computer;
    Step 6: the lower computer driving the robot arm, according to the motion information queue, to perform a grasping operation along a corresponding path.
  2. The ROS-system-based robot arm grasping method according to claim 1, characterized in that, before step 1, in which the host computer configures the camera's operating environment under the ROS system, the method further comprises:
    Step 0: the host computer performs positioning-marker training for the camera.
  3. The ROS-system-based robot arm grasping method according to claim 2, characterized in that the process of the host computer performing positioning-marker training for the camera in step 0 comprises either one of the following steps:
    Step 01: the host computer trains the camera on the positioning marker using the ARToolKit-based positioning-marker recognition algorithm;
    Step 02: the host computer trains the camera on the positioning marker using the OpenCV_ArUco-based positioning-marker recognition algorithm.
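Both alternatives in claim 3 amount to teaching the camera a dictionary of binary marker patterns. The identification step such a dictionary enables — scoring an observed bit grid against every dictionary entry over all four rotations by Hamming distance, in the style of ArUco markers — can be sketched in pure Python. The 3×3 patterns and marker IDs below are invented for the example:

```python
def rotations(bits):
    # All four 90-degree rotations of a square bit matrix.
    out = [bits]
    for _ in range(3):
        bits = [list(row) for row in zip(*bits[::-1])]  # rotate 90 deg CW
        out.append(bits)
    return out

def hamming(a, b):
    # Number of differing bits between two equal-sized bit matrices.
    return sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def identify(observed, dictionary):
    # Return (marker_id, rotation_index, distance) of the dictionary entry
    # closest to the observed bit pattern, as ArUco-style identification does.
    best = None
    for marker_id, pattern in dictionary.items():
        for rot, rotated in enumerate(rotations(observed)):
            d = hamming(rotated, pattern)
            if best is None or d < best[2]:
                best = (marker_id, rot, d)
    return best

dictionary = {
    7:  [[1, 0, 1], [0, 1, 0], [1, 1, 0]],
    12: [[0, 0, 1], [1, 1, 1], [0, 1, 0]],
}
# Observed pattern: marker 7 rotated once, with a single bit flipped by noise.
observed = [[1, 1, 1], [1, 1, 0], [0, 0, 1]]
marker_id, rot, dist = identify(observed, dictionary)
```

With this noisy, rotated observation the closest entry is marker 7 at rotation index 3 with distance 1, i.e. the single flipped bit. Real ArToolKit/ArUco pipelines add thresholding, quad detection, and perspective rectification before this matching step.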
  4. The ROS-system-based robot arm grasping method according to claim 1, characterized in that, before step 2, in which the host computer acquires through the camera an image of the object to be grasped containing the positioning marker, the method further comprises:
    Step 11: the host computer configures a camera node under the ROS system to drive the camera;
    Step 12: the host computer calibrates the camera under the ROS system and saves the correction data;
    Step 13: the host computer selects a positioning-marker recognition algorithm.
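Step 12's "correction data" are the intrinsic parameters and distortion coefficients that a calibration procedure (for example, the checkerboard-based tools available under ROS) estimates and saves. The following sketch shows the pinhole-plus-radial-distortion model that such data parameterizes; the focal lengths, principal point, and k1 below are made-up values for illustration, not results of an actual calibration:

```python
def project(point_cam, fx, fy, cx, cy, k1=0.0, k2=0.0):
    # Pinhole projection with Brown radial distortion: the kind of
    # intrinsics (fx, fy, cx, cy) and correction data (k1, k2) that a
    # calibration step estimates and saves.
    X, Y, Z = point_cam
    x, y = X / Z, Y / Z              # normalized image coordinates
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    xd, yd = x * scale, y * scale    # radially distorted coordinates
    return fx * xd + cx, fy * yd + cy

# With all distortion coefficients zero, this is the ideal pinhole model.
u0, v0 = project((0.1, -0.2, 1.0), fx=600.0, fy=600.0, cx=320.0, cy=240.0)
# A nonzero k1 moves off-center points radially away from the principal point.
u1, v1 = project((0.1, -0.2, 1.0), fx=600.0, fy=600.0, cx=320.0, cy=240.0, k1=0.1)
```

Undoing exactly this radial displacement is what the saved correction data allows the host computer to do before marker pose estimation.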
  5. The ROS-system-based robot arm grasping method according to claim 1, characterized in that the specific process of step 3, in which the host computer processes the acquired image under the ROS system to obtain the spatial pose information of the object to be grasped containing the positioning marker in the robot arm coordinate system, is:
    Step 31: searching the image acquired by the camera for the positioning marker that best matches a preset positioning marker;
    Step 32: locating the found positioning marker;
    Step 33: obtaining, from the located positioning marker, the spatial pose information of the object to be grasped containing the positioning marker in a camera coordinate system;
    Step 34: converting the spatial pose information of the object to be grasped in the camera coordinate system using a preset camera-coordinate-system-to-robot-arm-coordinate-system transformation matrix, to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system.
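Step 34's conversion is a single homogeneous-transform application. The sketch below shows it for the position part of the pose (a full pose would transform the orientation as well); the 4×4 matrix is a hypothetical hand-eye extrinsic invented for the example, not a value taught by the disclosure:

```python
def mat_vec(T, p):
    # Apply a 4x4 homogeneous transform to a 3D point.
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

# Hypothetical extrinsics: the camera frame rotated 180 degrees about the
# arm's z axis and offset by (0.30, 0.00, 0.50) metres. The real matrix
# would come from hand-eye calibration, not from this example.
T_arm_from_cam = [
    [-1.0,  0.0, 0.0, 0.30],
    [ 0.0, -1.0, 0.0, 0.00],
    [ 0.0,  0.0, 1.0, 0.50],
    [ 0.0,  0.0, 0.0, 1.00],
]

# Marker position estimated in the camera frame (step 33)...
p_cam = (0.05, -0.02, 0.40)
# ...converted into the robot arm frame (step 34) for motion planning.
p_arm = mat_vec(T_arm_from_cam, p_cam)
```

Under ROS the same conversion would typically be delegated to the tf transform framework rather than multiplied out by hand.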
  6. The ROS-system-based robot arm grasping method according to claim 1, characterized in that the specific process of step 4, in which the host computer performs motion planning for the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm to obtain the corresponding motion information queue, is:
    Step 41: the host computer writes a robot arm model description file for the robot arm under the ROS system;
    Step 42: the host computer models the robot arm according to the robot arm model description file;
    Step 43: after the robot arm has been modeled, the host computer performs motion planning for the robot arm according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue.
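The "motion information queue" of steps 41-43 can be pictured as an ordered list of joint-space waypoints from the arm's current pose to the grasp pose. The sketch below produces such a queue by plain linear interpolation; a real planner under ROS (e.g. MoveIt, working from the URDF-style model description file of step 41) would additionally respect collisions and joint limits. The joint values are invented for the example:

```python
def plan_queue(start, goal, steps):
    # Build a motion information queue: a list of joint-angle waypoints
    # linearly interpolated from the arm's current pose to the grasp pose.
    queue = []
    for k in range(steps + 1):
        t = k / steps
        queue.append([a + t * (b - a) for a, b in zip(start, goal)])
    return queue

current = [0.0, 0.0, 0.0, 0.0]    # arm's current spatial pose (joint angles, rad)
grasp   = [0.4, -0.8, 1.2, 0.0]   # pose reaching the object, in the arm frame
queue = plan_queue(current, grasp, steps=4)
# The host computer would then stream `queue` to the lower-level controller,
# which drives the joints through each waypoint in order (steps 5 and 6).
```

The queue structure is what makes the host/lower-level split workable: planning stays on the host, while the controller only needs to execute waypoints sequentially.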
  7. The ROS-system-based robot arm grasping method according to claim 1, characterized in that it further comprises:
    Step 7: while the robot arm is being driven to perform the grasping operation along the corresponding path, the lower-level controller returns real-time spatial pose information of the robot arm to the host computer;
    Step 8: the host computer updates the spatial pose information of the robot arm with the returned real-time spatial pose information.
  8. A ROS-system-based robot arm grasping system, characterized in that it comprises a host computer, a lower-level controller, and a camera, the host computer being communicatively connected to the lower-level controller and to the camera;
    the camera being configured to acquire, under the control of the host computer, an image of an object to be grasped that contains a positioning marker;
    the host computer comprising:
    an image processing module, configured to process the acquired image under the ROS system to obtain spatial pose information of the object to be grasped in a robot arm coordinate system;
    a motion planning module, configured to perform motion planning for the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and spatial pose information of the robot arm, obtaining a corresponding motion information queue;
    a message passing module, configured to transmit the obtained motion information queue of the robot arm to the lower-level controller;
    the lower-level controller comprising:
    a motion execution module, configured to drive the robot arm according to the motion information queue to perform a grasping operation along the corresponding path.
  9. The ROS-system-based robot arm grasping system according to claim 8, characterized in that the host computer further comprises:
    a camera training module, configured to perform positioning-marker training for the camera.
  10. The ROS-system-based robot arm grasping system according to claim 9, characterized in that the camera training module being configured to perform positioning-marker training for the camera comprises:
    training the camera on the positioning marker using the ARToolKit-based positioning-marker recognition algorithm; or training the camera on the positioning marker using the OpenCV_ArUco-based positioning-marker recognition algorithm.
  11. The ROS-system-based robot arm grasping system according to claim 8, characterized in that the host computer further comprises:
    a camera configuration module, configured to configure a camera node under the ROS system to drive the camera; to calibrate the camera under the ROS system and save the correction data; and to select a positioning-marker recognition algorithm.
  12. The ROS-system-based robot arm grasping system according to claim 8, characterized in that the image processing module being configured to process the acquired image under the ROS system to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system specifically comprises:
    searching the image acquired by the camera for the positioning marker that best matches a preset positioning marker;
    and locating the found positioning marker;
    and obtaining, from the located positioning marker, the spatial pose information of the object to be grasped containing the positioning marker in a camera coordinate system;
    and converting the spatial pose information of the object to be grasped in the camera coordinate system using a preset camera-coordinate-system-to-robot-arm-coordinate-system transformation matrix, to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system.
  13. The ROS-system-based robot arm grasping system according to claim 8, characterized in that the motion planning module being configured to perform motion planning for the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue, specifically comprises:
    writing a robot arm model description file for the robot arm under the ROS system;
    and modeling the robot arm according to the robot arm model description file;
    and, after the robot arm has been modeled, performing motion planning for the robot arm according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue.
  14. The ROS-system-based robot arm grasping system according to claim 8, characterized in that the lower-level controller further comprises:
    an information return module, configured to return real-time spatial pose information of the robot arm to the host computer while the robot arm is being driven to perform the grasping operation along the corresponding path;
    the motion planning module being further configured to update the spatial pose information of the robot arm with the returned real-time spatial pose information.
PCT/CN2017/117168 2017-01-25 2017-12-19 Ros-based mechanical arm grabbing method and system WO2018137445A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710056272.7 2017-01-25
CN201710056272.7A CN106826822B (en) 2017-01-25 2017-01-25 A kind of vision positioning and mechanical arm crawl implementation method based on ROS system

Publications (1)

Publication Number Publication Date
WO2018137445A1 true WO2018137445A1 (en) 2018-08-02

Family

ID=59121171

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/117168 WO2018137445A1 (en) 2017-01-25 2017-12-19 Ros-based mechanical arm grabbing method and system

Country Status (2)

Country Link
CN (1) CN106826822B (en)
WO (1) WO2018137445A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112338922A (en) * 2020-11-23 2021-02-09 北京配天技术有限公司 Five-axis mechanical arm grabbing and placing method and related device

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106826822B (en) * 2017-01-25 2019-04-16 南京阿凡达机器人科技有限公司 A kind of vision positioning and mechanical arm crawl implementation method based on ROS system
CN109483526A (en) * 2017-09-13 2019-03-19 北京猎户星空科技有限公司 The control method and system of mechanical arm under virtual environment and true environment
CN107717987A (en) * 2017-09-27 2018-02-23 西安华航唯实机器人科技有限公司 A kind of industrial robot and its control method with vision system
CN107553496B (en) * 2017-09-29 2020-09-22 南京阿凡达机器人科技有限公司 Method and device for determining and correcting errors of inverse kinematics solving method of mechanical arm
CN107450571B (en) * 2017-09-30 2021-03-23 江西洪都航空工业集团有限责任公司 AGV dolly laser navigation based on ROS
CN107571260B (en) * 2017-10-25 2021-02-26 南京阿凡达机器人科技有限公司 Method and device for controlling robot to grab object
CN107818587B (en) * 2017-10-26 2021-07-09 吴铁成 ROS-based machine vision high-precision positioning method
CN107944384B (en) * 2017-11-21 2021-08-20 天地伟业技术有限公司 Delivered object behavior detection method based on video
CN108392269B (en) * 2017-12-29 2021-08-03 广州布莱医疗科技有限公司 Operation assisting method and operation assisting robot
CN108436909A (en) * 2018-03-13 2018-08-24 南京理工大学 A kind of hand and eye calibrating method of camera and robot based on ROS
CN108460369B (en) * 2018-04-04 2020-04-14 南京阿凡达机器人科技有限公司 Drawing method and system based on machine vision
CN108655026B (en) * 2018-05-07 2020-08-14 上海交通大学 A kind of robot rapid teaching sorting system and method
CN109382828B (en) * 2018-10-30 2021-04-16 武汉大学 A robot shaft hole assembly system and method based on teaching and learning
CN109531567A (en) * 2018-11-23 2019-03-29 南京工程学院 Remote operating underactuated manipulator control system based on ROS
CN109877827B (en) * 2018-12-19 2022-03-29 东北大学 Non-fixed point material visual identification and gripping device and method of connecting rod manipulator
CN109940616B (en) * 2019-03-21 2022-06-03 佛山智能装备技术研究院 Intelligent grabbing system and method based on brain-cerebellum mode
CN110037910A (en) * 2019-03-22 2019-07-23 同济大学 A kind of multi-functional automatic physiotherapeutical instrument based on realsense
CN109773798A (en) * 2019-03-28 2019-05-21 大连理工大学 Binocular vision-based double-mechanical-arm cooperative control method
CN110355756A (en) * 2019-06-11 2019-10-22 西安电子科技大学 A kind of control system and method for a wide range of 3 D-printing of multi-robot Cooperation
CN110253588A (en) * 2019-08-05 2019-09-20 江苏科技大学 A New Dynamic Grabbing System of Robotic Arm
CN112775955B (en) * 2019-11-06 2022-02-11 深圳富泰宏精密工业有限公司 Mechanical arm coordinate determination method and computer device
CN110926852B (en) * 2019-11-18 2021-10-22 迪普派斯医疗科技(山东)有限公司 Automatic film changing system and method for digital pathological section
CN110962128B (en) * 2019-12-11 2021-06-29 南方电网电力科技股份有限公司 Substation inspection and stationing method and inspection robot control method
CN111516006B (en) * 2020-04-15 2022-02-22 昆山市工研院智能制造技术有限公司 Composite robot operation method and system based on vision
CN111483803B (en) * 2020-04-17 2022-03-04 湖南视比特机器人有限公司 Control method, capture system and storage medium
CN111482967B (en) * 2020-06-08 2023-05-16 河北工业大学 Intelligent detection and grabbing method based on ROS platform
CN112102289A (en) * 2020-09-15 2020-12-18 齐鲁工业大学 Cell sample centrifugal processing system and method based on machine vision
CN112589795B (en) * 2020-12-04 2022-03-15 中山大学 Vacuum chuck mechanical arm grabbing method based on uncertainty multi-frame fusion
CN112541946A (en) * 2020-12-08 2021-03-23 深圳龙岗智能视听研究院 Real-time pose detection method of mechanical arm based on perspective multi-point projection
CN113110513A (en) * 2021-05-19 2021-07-13 哈尔滨理工大学 ROS-based household arrangement mobile robot
CN113263501A (en) * 2021-05-28 2021-08-17 湖南三一石油科技有限公司 Method and device for controlling racking platform manipulator and storage medium
CN115840420A (en) * 2022-09-13 2023-03-24 南京理工大学泰州科技学院 Intelligent mushroom sorting system and intelligent mushroom sorting method
CN117260681A (en) * 2023-09-28 2023-12-22 广州市腾龙信息科技有限公司 Control system of mechanical arm robot
CN117841041B (en) * 2024-02-05 2024-07-05 北京新雨华祺科技有限公司 Mechanical arm combination device based on multi-arm cooperation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008008790A2 (en) * 2006-07-10 2008-01-17 Ugobe, Inc. Robots with autonomous behavior
CN103271784A (en) * 2013-06-06 2013-09-04 山东科技大学 Man-machine interactive manipulator control system and method based on binocular vision
CN104820418A (en) * 2015-04-22 2015-08-05 遨博(北京)智能科技有限公司 Embedded vision system for mechanical arm and method of use
CN106003036A (en) * 2016-06-16 2016-10-12 哈尔滨工程大学 Object grabbing and placing system based on binocular vision guidance
CN106826822A (en) * 2017-01-25 2017-06-13 南京阿凡达机器人科技有限公司 A kind of vision positioning and mechanical arm crawl implementation method based on ROS systems


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112338922A (en) * 2020-11-23 2021-02-09 北京配天技术有限公司 Five-axis mechanical arm grabbing and placing method and related device
CN112338922B (en) * 2020-11-23 2022-08-16 北京配天技术有限公司 Five-axis mechanical arm grabbing and placing method and related device

Also Published As

Publication number Publication date
CN106826822B (en) 2019-04-16
CN106826822A (en) 2017-06-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17893624

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17893624

Country of ref document: EP

Kind code of ref document: A1


32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 14.05.2020)

