WO2018137445A1 - Ros-based mechanical arm grabbing method and system - Google Patents
- Publication number
- WO2018137445A1 (PCT/CN2017/117168)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- camera
- robot arm
- pose information
- positioning mark
- spatial pose
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/161—Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
Definitions
- The invention belongs to the field of robot arm control and motion planning, and in particular relates to a robot arm grasping method and system based on the ROS system.
- The robot arm is one of the most widely used automation devices in the field of robotics.
- Multi-degree-of-freedom manipulators play an increasingly important role in many fields such as machinery manufacturing, automotive, semiconductor, medical, and home services.
- Their motion control has long been a focus of research.
- The robot arm is applied in a wide range of scenarios, from industrial tasks such as welding to medical and home services.
- Machine vision is a branch of artificial intelligence. In short, machine vision uses a camera in place of the human eye to judge and analyze the surrounding environment and, combined with suitable algorithms, to achieve intelligent decision making. It is a comprehensive technology covering image processing, mechanical engineering, control, electric light sources and illumination, optical imaging, sensors, analog and digital video, and computer hardware and software. Machine vision comes in several forms, such as monocular, binocular, and 3D vision. Its introduction has the following advantages:
- Machine vision is more reliable than the human eye: it captures images continuously and works around the clock without visual fatigue.
- Machine vision offers higher precision. With suitable processing algorithms it achieves accurate measurement and error checking, and lends itself to data recording and integration.
- Machine vision adapts to complex environments. In situations that are unsuitable for manual work, machine vision can still operate.
- ROS (Robot Operating System) is an open-source robot operating system released by Willow Garage in 2010. It adopts a distributed organizational structure, which greatly improves code reusability and the adaptability of complex robot systems.
- the ROS system has the following main features:
- Peer-to-peer distributed design: the peer-to-peer design of ROS, together with mechanisms such as services and node managers, distributes the real-time computational load of functions such as computer vision and speech recognition, and adapts to the challenges posed by multi-robot systems.
- Multi-language support: the ROS system supports programming languages such as C++, Python, scripting languages, and LISP, as well as interfaces to other programming languages.
- Rich software packages: the ROS system integrates a large number of software packages that enable rapid environment configuration for a variety of robot applications, such as robot arm motion planning, mobile robot navigation, and robot SLAM. A minimal node illustrating the publish/subscribe design is sketched below.
- The present invention aims to propose a ROS-based method and system for robot arm grasping that effectively addresses the poor environmental adaptability of robot arms and the high difficulty of their development and use.
- the present invention provides a complete solution for visual access, target detection, image processing, robotic arm motion planning, etc. of the robot arm.
- A ROS-based robot arm grasping method comprises:
- Step 2: The host computer acquires, through the camera, an image of the object to be grasped containing the positioning mark;
- Step 3: The host computer processes the acquired image under the ROS system to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system;
- Step 4: The host computer performs motion planning on the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue;
- Step 5: The host computer transmits the obtained motion information queue of the robot arm to the lower computer;
- Step 6: The lower computer drives the robot arm according to the motion information queue to perform the grasping operation along the corresponding path.
- Before step 1, in which the host computer configures the camera's operating environment under the ROS system, the method further includes: Step 0: The host computer performs positioning mark training on the camera.
- The process of the host computer performing positioning mark training on the camera includes any one of the following steps: Step 01: The host computer trains the camera for the positioning mark based on the ARToolKit positioning mark recognition algorithm; or Step 02: The host computer trains the camera for the positioning mark based on the OpenCV_ArUco positioning mark recognition algorithm.
- Before step 2, the method further includes: Step 11: The host computer configures a camera node under the ROS system to drive the camera; Step 12: The host computer calibrates the camera under the ROS system and saves the correction data; Step 13: The host computer selects a positioning mark recognition algorithm.
- The host computer processes the acquired image under the ROS system; the specific process of obtaining the spatial pose information, in the robot arm coordinate system, of the object to be grasped bearing the positioning mark is: Step 31: Search the image acquired by the camera for the positioning mark with the highest matching degree to the preset positioning mark; Step 32: Locate the found positioning mark; Step 33: Obtain, from the located positioning mark, the spatial pose information of the object to be grasped bearing the positioning mark in the camera coordinate system; Step 34: Convert the spatial pose information of the object to be grasped in the camera coordinate system according to the preset camera-to-robot-arm coordinate system conversion matrix, obtaining the spatial pose information of the object to be grasped in the robot arm coordinate system.
- In step 4, the host computer performs motion planning on the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm.
- The specific process of obtaining the corresponding motion information queue is: Step 41: The host computer writes the robot arm model description file of the robot arm under the ROS system; Step 42: The host computer models the robot arm according to the robot arm model description file; Step 43: After the robot arm is modeled, the host computer performs motion planning on the robot arm according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue.
- The ROS-based robot arm grasping method further includes: Step 7: While the robot arm is driven to perform the grasping operation along the corresponding path, the lower computer returns the real-time spatial pose information of the robot arm to the host computer; Step 8: The host computer updates the spatial pose information of the robot arm with the returned real-time spatial pose information.
- The present disclosure also provides a ROS-based robot arm grasping system, comprising a host computer, a lower computer, and a camera, wherein the host computer is communicably connected to the lower computer and the camera. The camera is used to acquire, under the control of the host computer, an image of the object to be grasped containing the positioning mark. The host computer includes: an image processing module, configured to process the acquired image under the ROS system to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system; a motion planning module, configured to perform motion planning on the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue; and a message transmission module, configured to transmit the obtained motion information queue of the robot arm to the lower computer. The lower computer includes: a motion execution module, configured to drive the robot arm according to the motion information queue to perform the grasping operation along the corresponding path.
- the host computer further includes: a camera training module, configured to perform positioning mark training on the camera.
- The camera training module is configured to perform positioning mark training on the camera, including: training the camera for the positioning mark based on an ARToolKit positioning mark recognition algorithm; or training the camera for the positioning mark based on an OpenCV_ArUco positioning mark recognition algorithm.
- The host computer further includes a camera configuration module, configured to configure a camera node under the ROS system to drive the camera; to calibrate the camera under the ROS system and save the correction data; and to select the positioning mark recognition algorithm.
- The image processing module is configured to process the acquired image under the ROS system; obtaining the spatial pose information of the object to be grasped in the robot arm coordinate system specifically includes: searching the image acquired by the camera for the positioning mark with the highest matching degree to the preset positioning mark; locating the found positioning mark; obtaining, from the located positioning mark, the spatial pose information of the object to be grasped bearing the positioning mark in the camera coordinate system; and converting the spatial pose information of the object to be grasped in the camera coordinate system according to the preset camera-to-robot-arm coordinate system conversion matrix, obtaining the spatial pose information of the object to be grasped in the robot arm coordinate system.
- The motion planning module is configured to perform motion planning on the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm.
- Obtaining the corresponding motion information queue specifically includes: writing the robot arm model description file of the robot arm under the ROS system; modeling the robot arm according to the robot arm model description file; and, after the robot arm is modeled, performing motion planning on the robot arm according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue.
- The lower computer further includes an information return module, configured to return the real-time spatial pose information of the robot arm to the host computer while the robot arm is driven to perform the grasping operation along the corresponding path.
- the motion planning module is further configured to update the spatial pose information of the robot arm with the returned real-time spatial pose information.
- The ROS-based robot arm grasping method and system of the present invention have significant advantages. First, by introducing machine vision as the core sensing component of the robot arm, the technical solution of the present disclosure can greatly improve the robot arm's environmental perception and decision-making capability.
- The solution adopts a distributed system framework in which the host computer and the lower computer are separated, which effectively exploits the host computer's high computing power and image processing capability.
- The solution proposed by the present disclosure is based on the ROS operating system and makes full use of its rich software packages, realizing rapid configuration of robot arm motion planning and greatly lowering the threshold for robot arm control.
- The overall solution proposed by the present disclosure makes the robot arm easy to deploy and convenient to extend to a single host computer cooperating with multiple robot arms, reducing the cost of using robot arms; it therefore has broad application prospects.
- FIG. 1 is a schematic diagram of an implementation environment of a robot arm grabbing method based on a ROS system according to the present disclosure
- FIG. 2 is a working flow chart of a robot arm grabbing method based on a ROS system according to the present disclosure
- FIG. 3 is a flow chart showing the operation of the upper computer in performing image processing under the ROS system according to the present disclosure
- FIG. 4 is a flow chart of the host computer performing robot arm motion planning under the ROS system according to the present disclosure;
- FIG. 5 is a flow chart of the communication between the host computer and the lower computer according to the present disclosure;
- FIG. 6 is a schematic structural view of an embodiment of a robot arm grabbing system based on a ROS system according to the present disclosure
- FIG. 7 is a schematic structural view of another embodiment of a robot arm grabbing system based on a ROS system according to the present disclosure.
- The implementation environment of the ROS-based robot arm grasping method includes a host computer, a lower computer, a camera, and a communication environment.
- The robot arm device, as the main executing body, accepts commands and performs grasping, and the components cooperate to complete the robot arm grasping task.
- the implementation environment of this embodiment has the following components:
- Camera: for example, a USB camera.
- The camera is placed above or obliquely above the object to be grasped, preferably with a clear, unobstructed view, and the coordinate system in which the camera sits must be made explicit (coordinate system 1 in Fig. 1, the camera coordinate system).
- The host computer must run the ROS operating system (based on Linux) and is the "brain" of the whole system. Its main functions are: driving the camera to complete image acquisition and transmission, image processing, motion planning, and transmission of the motion information queue.
- The lower computer refers to the drive control part of the robot arm device. Its main functions are: receiving the motion information queue, driving the robot arm, sensing the real-time spatial pose of the robot arm, and transmitting that real-time pose information back.
- The robot arm is the execution part of the robot arm device. It must have more than five degrees of freedom (ensuring a large working space so that grasping based on visual positioning is feasible) and an end effector (for example, a suction cup or end clamp); the arm's degrees of freedom can be adjusted to the actual situation when the arm is modeled.
- The position of the robot arm must be determined and the arm modeled on that basis (coordinate system 2 in Fig. 1); that is, the robot arm coordinate system and the spatial pose information of the robot arm in that coordinate system are required.
- Socket communication requires a wireless network in the implementation environment, with the host and lower computers communicating on the same network segment.
- Positioning marks refer to identification patterns with specific shape requirements for visual recognition and positioning. For different algorithms, the identification patterns are different. It should be noted that when placing an object to be grasped containing a positioning mark, the positioning mark needs to be located within the visual range of the camera to ensure that the camera can capture an image of the object to be grasped containing the positioning mark.
- The ROS-based robot arm grasping method includes:
- Step 2: The host computer acquires, through the camera, an image of the object to be grasped containing the positioning mark;
- Step 3: The host computer processes the acquired image under the ROS system to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system;
- Step 4: The host computer performs motion planning on the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue;
- Step 5: The host computer transmits the obtained motion information queue of the robot arm to the lower computer;
- Step 6: The lower computer drives the robot arm according to the motion information queue to perform the grasping operation along the corresponding path.
- the host computer and the camera are communicatively connected.
- For example, a USB camera can be connected directly to the USB interface of the host computer.
- Since the host computer runs the ROS system, the camera's operating environment must be configured under ROS to ensure its normal use.
- Accordingly, the method further includes: Step 1: The host computer configures the camera's operating environment under the ROS system.
- The host computer can then control the camera to capture images, where "image" covers both still pictures and video footage.
- The host computer controls the camera to capture an image of the object to be grasped, with the positioning mark inside the camera's visual range.
- The captured image is transmitted to the host computer, which processes it to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system.
- Spatial pose information refers to the spatial position together with the spatial attitude.
- Taking a cup as an example, the spatial position is the cup's coordinate in the robot arm coordinate system,
- and the spatial attitude is whether the cup is placed vertically or horizontally.
- The spatial position tells the motion planner where the robot arm must move to reach the cup,
- while the spatial attitude tells it in what orientation, horizontal or vertical, the end effector of the robot arm should arrive at the cup's position.
- the spatial pose information of the object to be grasped in the robot arm coordinate system is transmitted to the MoveIt initialization program module for motion planning.
- The initial value of the robot arm's spatial pose information is preset in the ROS system from the start. Motion planning can therefore be performed directly from the processed spatial pose information of the object to be grasped in the robot arm coordinate system (equivalent to the end point) and the preset spatial pose information of the robot arm in the ROS system (equivalent to the starting point), yielding the motion information queue.
- The Socket communication protocol is used for communication between the host computer and the lower computer; the host computer sends the motion information queue to the lower computer via Socket.
- After receiving the motion information queue, the lower computer parses it, and the robot arm moves along the corresponding path and performs the grasping operation.
- In short, the camera captures an image of the object to be grasped in order to locate the object's spatial pose information (which must be expressed in the same coordinate system as the robot arm's spatial pose information); the motion is then planned accordingly, and the robot arm is driven to grasp the object.
- Combining machine vision (i.e., the camera) with the robot arm is equivalent to giving the arm an intelligent "eye", which greatly increases its environmental sensing and intelligent decision-making ability and thereby further expands its application fields.
- The present disclosure builds on the ROS system and exploits many of its characteristics to reduce the difficulty of implementing robot arm motion planning and to lower the application threshold of the robot arm.
- Referring to FIG. 2, a system operation flowchart of the ROS-based visual positioning and robot arm grasping method is provided.
- the whole implementation process is divided into two parts: upper computer configuration and lower computer configuration.
- the host computer configures the camera node under the ROS system to drive the camera.
- First, the camera is driven within the host computer's ROS system.
- The driver node program used in this embodiment is usb_cam. This node drives the camera and publishes the captured images on the usb_cam/image_raw topic; a minimal subscriber to that topic is sketched below.
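The following sketch assumes the usb_cam driver is running and merely logs each frame's size; the node name "image_listener" is an assumption:

```python
#!/usr/bin/env python
# Sketch: subscribe to the images that the usb_cam node publishes
# on /usb_cam/image_raw.
import rospy
from sensor_msgs.msg import Image

def on_image(msg):
    # msg is a sensor_msgs/Image; width/height/encoding describe the frame
    rospy.loginfo('frame %dx%d (%s)', msg.width, msg.height, msg.encoding)

rospy.init_node('image_listener')
rospy.Subscriber('/usb_cam/image_raw', Image, on_image)
rospy.spin()
```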
- the host computer calibrates the camera under the ROS system and saves the correction data.
- the camera is calibrated using the camera_calibration procedure of the ROS system and the correction data is saved.
- This program obtains the calibration data of the camera, i.e., the intrinsic parameters, extrinsic parameters, and distortion coefficients, and saves them as the correction data.
- The correction data differs from camera to camera; it is used to rectify the pictures taken by the camera, yielding images with less distortion, for example as sketched below.
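A minimal sketch of applying such correction data with OpenCV; the intrinsic matrix K and distortion coefficients D below are placeholder values, not real calibration results:

```python
# Sketch: rectifying a frame with saved correction data (intrinsic matrix K
# and distortion coefficients D, as produced by camera calibration).
import cv2
import numpy as np

K = np.array([[600.0,   0.0, 320.0],       # placeholder intrinsic parameters
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
D = np.array([0.1, -0.05, 0.0, 0.0, 0.0])  # placeholder distortion coefficients

img = cv2.imread('frame.png')              # a picture taken by the camera
undistorted = cv2.undistort(img, K, D)     # image with less distortion
```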
- Finally, the host computer selects a (trained) positioning mark recognition algorithm.
- Several positioning mark recognition algorithms may be available. When the camera is to be used to control the robot arm for grasping, the algorithm to be used in the subsequent positioning process must be selected for the camera.
- The positioning mark recognition algorithm may be an ARToolKit-based algorithm, an OpenCV_ArUco-based algorithm, or any other algorithm that realizes visual positioning and lets the host computer control the robot arm to perform the grasping operation.
- Before step 1, in which the host computer configures the camera's operating environment under the ROS system, the method further includes: Step 0: The host computer performs positioning mark training on the camera.
- The process of the host computer performing positioning mark training on the camera in step 0 is any one of the following steps:
- Step 01 The host computer performs training on the positioning mark for the camera based on an ARToolKit positioning mark recognition algorithm.
- Step 02 The host computer trains the camera for the positioning mark based on an OpenCV_ArUco positioning mark recognition algorithm.
- Training either algorithm is sufficient for the subsequent recognition work. If only one positioning mark recognition algorithm is trained, only that algorithm can be selected later; if both algorithms are trained, either one may be chosen.
- The identification training methods of the two recognition algorithms proposed by the present disclosure are outlined below.
- For the OpenCV_ArUco algorithm, the drawMarker() function is used to create the identification pattern (i.e., the positioning mark) for training; a sketch follows.
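As a sketch of that step, the classic cv2.aruco API (OpenCV with the aruco contrib module, pre-4.7 naming) can generate such a mark; the dictionary, marker id, and pixel size below are arbitrary example choices:

```python
# Sketch: creating a positioning mark with drawMarker() from the classic
# cv2.aruco API; dictionary, marker id and pixel size are example values.
import cv2

aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_6X6_250)
marker_img = cv2.aruco.drawMarker(aruco_dict, 23, 200)  # id 23, 200x200 px
cv2.imwrite('marker_23.png', marker_img)  # print and attach to the object
```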
- The host computer processes the acquired image under the ROS system to obtain the spatial pose information, in the robot arm coordinate system, of the object to be grasped bearing the positioning mark.
- The specific process is:
- Step 31: Search the image acquired by the camera for the positioning mark with the highest matching degree to the preset positioning mark;
- Step 32: Locate the found positioning mark;
- Step 33: Obtain, from the located positioning mark, the spatial pose information of the object to be grasped bearing the positioning mark in the camera coordinate system;
- Step 34: Convert the spatial pose information of the object to be grasped in the camera coordinate system according to the preset camera-to-robot-arm coordinate system conversion matrix, obtaining the spatial pose information of the object to be grasped in the robot arm coordinate system, for example as sketched below.
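A minimal sketch of the Step 34 conversion, assuming the preset conversion matrix is expressed as a 4x4 homogeneous transform T_arm_cam (the values below are placeholders):

```python
# Sketch of Step 34: mapping a point from the camera coordinate system into
# the robot arm coordinate system with a preset homogeneous transform.
import numpy as np

T_arm_cam = np.eye(4)                 # placeholder conversion matrix
T_arm_cam[:3, 3] = [0.4, 0.0, 0.3]    # e.g. camera offset from the arm base

def camera_to_arm(p_cam):
    """Convert a 3D point from camera coordinates to arm coordinates."""
    p_hom = np.append(p_cam, 1.0)     # homogeneous coordinates
    return (T_arm_cam @ p_hom)[:3]

print(camera_to_arm(np.array([0.0, 0.0, 0.5])))  # 0.5 m in front of camera
```

A full pose conversion additionally composes the rotation part of T_arm_cam with the mark's orientation, but the principle is the same.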
- The program processing flow is: read the pre-imported identification map information (i.e., the preset positioning mark); acquire a real-time image from the (USB) camera; find the best-matching positioning mark and locate the mark observed by the camera; and then obtain, from the located mark, the spatial pose information of the object to be grasped bearing the positioning mark in the camera coordinate system.
- The above process describes positioning from a still picture.
- In practice, the object to be grasped bearing the positioning mark moves slowly on the conveyor line, so what the camera captures is a moving scene.
- An ARToolKit frame-rate counter is therefore added, and the pose of the positioning mark in the camera coordinate system is estimated for each frame;
- this is equivalent to splitting the multi-frame footage into individual frames,
- so that the spatial pose information of the object to be grasped in the camera coordinate system is obtained for every frame.
- Once the per-frame spatial pose information of the object to be grasped in the camera coordinate system is obtained, it is published in real time for subsequent processing.
- In addition, using OpenGL, the positioning mark is taken as the origin and a coordinate frame is drawn on the mark's icon, so that the size, shape, and movement of the object to be grasped can be shown on the monitor.
- The movement of the arm is displayed as well, giving a more intuitive, dynamic view of the grasping process.
- The cv_bridge node is configured in the system to convert the sensor_msgs/Image image data obtained by the camera under the ROS system into cv::Mat image data recognizable by the OpenCV library, for example as sketched below.
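A sketch of this conversion in Python (where the cv::Mat counterpart is a numpy array), reusing the topic name from the usb_cam node above:

```python
#!/usr/bin/env python
# Sketch: converting sensor_msgs/Image data into OpenCV-compatible image
# data with cv_bridge inside a subscriber callback.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def on_image(msg):
    # frame is a numpy array, the Python equivalent of cv::Mat
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    # ...hand the frame to the positioning mark recognition algorithm...

rospy.init_node('cv_bridge_example')
rospy.Subscriber('/usb_cam/image_raw', Image, on_image)
rospy.spin()
```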
- The positioning mark with the highest matching degree to the preset positioning mark is found, and its spatial pose information in the camera coordinate system is obtained.
- Since the acquired footage is likewise a sequence of frames,
- the positioning mark best matching the preset mark is searched for frame by frame, and the spatial pose information of each located mark in the camera coordinate system is extracted in turn.
- The spatial pose information of the object to be grasped in the camera coordinate system is thus extracted as a parameter for motion planning.
- When the OpenCV_ArUco positioning mark recognition algorithm is selected to identify the positioning mark, the image of the object to be grasped bearing the positioning mark acquired by the camera (after conversion to cv::Mat image data) is first binarized with the OTSU algorithm.
- The algorithm then reads the value of the positioning mark in the perspective-transformed image and compares it with the value of the pre-trained positioning mark (i.e., the preset positioning mark), thereby identifying the mark.
- The spatial position and attitude of the identified positioning mark are then estimated, giving the spatial pose information of the object to be grasped in the camera coordinate system; a sketch of this pipeline follows.
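A condensed sketch of that pipeline with the classic cv2.aruco API (detection, which internally performs the thresholding and perspective analysis, followed by pose estimation); K, D, and the 0.05 m mark side length are assumed values:

```python
# Sketch: identify the ArUco positioning mark and estimate its pose in the
# camera coordinate system (classic cv2.aruco API; placeholder calibration).
import cv2
import numpy as np

aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_6X6_250)
params = cv2.aruco.DetectorParameters_create()

frame = cv2.imread('frame.png')
corners, ids, _ = cv2.aruco.detectMarkers(frame, aruco_dict, parameters=params)

K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
D = np.zeros(5)
if ids is not None:
    # rvecs/tvecs: the mark's pose in the camera coordinate system;
    # 0.05 is the assumed physical side length of the mark in metres
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, 0.05, K, D)
```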
- The spatial pose information of the object to be grasped in the camera coordinate system is then converted to obtain its spatial pose in the robot arm coordinate system.
- This conversion is applied to the pose of the object to be grasped in every camera frame.
- A MoveIt interface program is written: using the API provided by the MoveIt module, the spatial pose information of the object to be grasped in the robot arm coordinate system is formatted as a quaternion and transmitted to the MoveIt initialization program module for motion planning, as sketched below.
- In other words, after the host computer processes the images acquired by the camera, it obtains the per-frame spatial pose information of the object to be grasped in the robot arm coordinate system and converts it into a form supported by the MoveIt initialization program module.
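A minimal sketch of such an interface program with moveit_commander; the planning group name "arm" and the target pose values are assumptions, not part of the patent:

```python
#!/usr/bin/env python
# Sketch: hand a quaternion-formatted pose target to MoveIt for planning.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

rospy.init_node('grasp_planner')
moveit_commander.roscpp_initialize(sys.argv)
group = moveit_commander.MoveGroupCommander('arm')  # assumed group name

target = Pose()
target.position.x, target.position.y, target.position.z = 0.4, 0.1, 0.2
target.orientation.w = 1.0  # quaternion (0, 0, 0, 1): no rotation

group.set_pose_target(target)
plan = group.plan()  # the planned trajectory, i.e. the motion queue
```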
- FIG. 4 is a flow chart of the host computer performing robot arm motion planning under the ROS system according to the present disclosure.
- Step 4, in which the host computer performs motion planning on the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm to obtain the corresponding motion information queue, is further described as follows:
- First, the host computer writes, using URDF (Unified Robot Description Format), a model description file of the robot arm under the ROS system for the subsequent modeling of the arm.
- the host computer models the robot arm according to the robot arm model description file.
- the robotic arm is modeled by calling the created robotic arm description model with the MoveIt Setup Assistant Tool under the ROS system.
- The modeling steps are: collision detection settings; virtual joint settings (for example, the base of the robot arm, used for positioning the robot arm coordinate system); planning-group joint settings for the arm (with KDL Kinematics Plugin as the kinematics solver); the initial position of the arm (i.e., the initial value of the arm's spatial pose information); end-effector settings (for example, defining it as a suction cup or clamp); and passive joint settings (joints that have no drive of their own and only move along with other joints). The configuration is then generated. If the motion planning algorithm is not changed, the default planning library is OMPL (Open Motion Planning Library).
- After the robot arm is modeled, the host computer performs motion planning on the arm (using the MoveIt initialization program module) according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtains the corresponding motion information queue, and publishes it (according to the communication rules of the ROS system).
- FIG. 5 is a flow chart of the communication between the host computer and the lower computer according to the present disclosure. The process is completed on a ROS node and proceeds as follows:
- the message server program of the ROS system is initialized to read the motion information queue published by the MoveIt initialization program module;
- the Socket communication node (TCP) is initialized, and the read motion planning queue is placed in the send buffer and sent to the lower computer when the two sides communicate (a sketch of this link follows below);
- the lower computer receives the motion planning information, parses it, and drives the robot arm to perform the grasp according to the planned motion.
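A sketch of the host-computer side of that link; the IP address, port, and JSON wire format are illustrative assumptions, not part of the patent:

```python
# Sketch: send the planned motion information queue to the lower computer
# over a TCP socket; address, port and JSON format are assumptions.
import json
import socket

motion_queue = [
    {'joints': [0.0, 0.5, 1.0, 0.0, 0.2]},  # one planned waypoint
    {'joints': [0.1, 0.6, 0.9, 0.0, 0.2]},
]

with socket.create_connection(('192.168.1.20', 9000)) as conn:
    conn.sendall(json.dumps(motion_queue).encode('utf-8'))
```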
- The method further includes: Step 7: While the robot arm is driven to perform the grasping operation along the corresponding path, the real-time spatial pose information of the robot arm is transmitted back to the host computer.
- The robot arm may need to grasp several objects on the conveyor line, and its spatial pose information necessarily changes as grasping operations are performed. Therefore, while the planned action is executed, the position sensors (such as angle sensors and encoders) that the lower computer has on the robot arm transmit the arm's actual spatial pose information to the host computer via Socket communication, so that the host computer updates the arm's spatial pose information for the motion planning of the next object to be grasped in the robot arm coordinate system. A sketch of the lower-computer side follows.
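Under the same assumed wire format, the lower-computer side might look like the following sketch; drive_to() and read_joint_sensors() are hypothetical hardware hooks:

```python
# Sketch: receive the motion queue, drive the arm waypoint by waypoint, and
# return the arm's real-time pose; hardware access is left hypothetical.
import json
import socket

def drive_to(waypoint):       # hypothetical: command the arm's drivers
    pass

def read_joint_sensors():     # hypothetical: angle sensors / encoders
    return [0.0, 0.5, 1.0, 0.0, 0.2]

srv = socket.create_server(('0.0.0.0', 9000))  # Python 3.8+
conn, _ = srv.accept()
queue = json.loads(conn.recv(65536).decode('utf-8'))
for waypoint in queue:
    drive_to(waypoint)
    # report the actual pose back while the grasp is executed
    conn.sendall(json.dumps({'joints': read_joint_sensors()}).encode('utf-8'))
```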
- As a complete example: a number of objects to be grasped move slowly on the conveyor line, and the camera has already been trained on the positioning marks (using the two positioning mark recognition algorithms above).
- First, the host computer configures the camera's operating environment (that is, it drives the camera, reads the correction data saved in advance, and selects the positioning mark recognition algorithm the engineer chooses to use); the host computer then controls the camera to start shooting the conveyor line.
- Images of the object to be grasped are published as they are captured.
- When the image processing unit in the host computer reads the real-time footage, the footage is split frame by frame to identify the positioning mark.
- The spatial pose information of the positioning mark in the robot arm coordinate system is then published frame by frame. Because the image-processing flow (which calls the MoveIt initialization program module) and the camera configuration flow can run in parallel, the robot arm can be modeled from the start, just as the positioning mark training of the camera is done in advance. The system then monitors whether spatial pose information of the object to be grasped in the robot arm coordinate system has been published. Once the first frame's spatial pose information in the robot arm coordinate system is available, motion planning is performed from that pose and from the robot arm's spatial pose
- information (its initial value), and the corresponding motion information queue is obtained; the lower computer drives the robot arm to perform the grasping operation according to the motion information queue and uploads the arm's real-time spatial pose information to the host computer;
- the MoveIt initialization program module updates the robot arm's spatial pose information according to the uploaded real-time information, and then takes up the spatial pose information of the next frame to plan the next grasp.
- Compared with the prior art, the present disclosure has significant advantages: its technical solution adopts a distributed design, which exploits the processing capability of the host computer and facilitates a topology in which one host computer cooperates with multiple robot arms;
- the proposed vision-based object localization method is suitable for grasping different objects and places few requirements on an object's initial position;
- the mechanical arm motion planning method proposed by the present disclosure fully utilizes the characteristics of the ROS system, and the configuration is simple, convenient and practical;
- the overall solution uses wireless communication and flexible layout, which can be applied to different application scenarios.
- The ROS-based robot arm grasping system includes a host computer 10, a lower computer 30, and a camera 20, wherein the host computer 10 is communicatively coupled to the lower computer 30 and the camera 20.
- The host computer may be a computer running the ROS system, and the lower computer refers to the drive control part of the robot arm device.
- The lower computer and the host computer use Socket communication, while the camera communicates with the host computer, for example, over USB:
- the camera can be connected to the host computer through the USB interface to realize the communication between the two.
- the camera 20 is configured to acquire an image of an object to be grasped including a positioning mark under the control of the upper computer;
- the host computer 10 further includes:
- the image processing module 12 is configured to process the acquired image under the ROS system to obtain spatial pose information of the object to be grasped in a robot arm coordinate system;
- The motion planning module 13 is configured to perform motion planning on the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue;
- a message passing module 14 is configured to transmit the obtained motion information queue of the robot arm to the lower computer;
- the lower machine 30 includes:
- The motion execution module 31 is configured to drive the robot arm according to the motion information queue to perform the grasping operation along the corresponding path.
- Since the host computer runs the ROS system, the host computer 10 further includes a camera configuration module 11, configured to configure the camera's operating environment under the ROS system.
- The host computer can then control the camera to capture images, where "image" covers both still pictures and video footage.
- The host computer controls the camera to capture an image of the object to be grasped, with the positioning mark inside the camera's visual range.
- The captured image is transmitted to the host computer, which processes it to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system.
- Spatial pose information refers to the spatial position together with the spatial attitude.
- Taking a cup as an example, the spatial position is the cup's coordinate in the robot arm coordinate system,
- and the spatial attitude is whether the cup is placed vertically or horizontally.
- The spatial position tells the motion planner where the robot arm must move to reach the cup,
- while the spatial attitude tells it in what orientation, horizontal or vertical, the end effector of the robot arm should arrive at the cup's position.
- the spatial pose information of the object to be grasped in the robot arm coordinate system is transmitted to the MoveIt initialization program module for motion planning.
- The initial value of the robot arm's spatial pose information is preset in the ROS system from the start. Motion planning can therefore be performed directly from the processed spatial pose information of the object to be grasped in the robot arm coordinate system (equivalent to the end point) and the preset spatial pose information of the robot arm in the ROS system (equivalent to the starting point), yielding the motion information queue.
- The Socket communication protocol is used for communication between the host computer and the lower computer; the host computer sends the motion information queue to the lower computer via Socket.
- After receiving the motion information queue, the lower computer parses it, and the robot arm moves along the corresponding path and performs the grasping operation.
- In short, the camera captures an image of the object to be grasped in order to locate the object's spatial pose information (which must be expressed in the same coordinate system as the robot arm's spatial pose information); the motion is then planned accordingly, and the robot arm is driven to grasp the object.
- Combining machine vision (i.e., the camera) with the robot arm is equivalent to giving the arm an intelligent "eye", which greatly increases its environmental sensing and intelligent decision-making ability and thereby further expands its application fields.
- The present disclosure builds on the ROS system and exploits many of its characteristics to reduce the difficulty of implementing robot arm motion planning and to lower the application threshold of the robot arm.
- The camera configuration module 11 is configured to configure the camera's operating environment under the ROS system, specifically: configuring a camera node under the ROS system to drive the camera; calibrating the camera under the ROS system and saving the correction data; and selecting a (trained) positioning mark recognition algorithm.
- The camera is driven within the host computer's ROS system.
- the driver node program used in this embodiment is usb_cam. This node will drive the camera and publish the image captured by the camera on the usb_cam/image_raw topic.
- the camera is calibrated using the camera_calibration procedure of the ROS system and the correction data is saved.
- This program obtains the calibration data of the camera, i.e., the intrinsic parameters, extrinsic parameters, and distortion coefficients, and saves them as the correction data.
- The correction data differs from camera to camera; it is used to rectify the pictures taken by the camera, yielding images with less distortion.
- The positioning mark recognition algorithm may be an ARToolKit-based algorithm, an OpenCV_ArUco-based algorithm, or any other algorithm that realizes visual positioning and lets the host computer control the robot arm to perform the grasping operation.
- the host computer further includes: a camera training module 15 for performing positioning mark training on the camera.
- The camera training module 15 is configured to perform positioning mark training on the camera, including: training the camera for the positioning mark based on an ARToolKit positioning mark recognition algorithm; or training the camera for the positioning mark based on an OpenCV_ArUco positioning mark recognition algorithm.
- Either type of positioning mark recognition algorithm may be used for training to enable the subsequent recognition.
- For the training processes of the two algorithms, refer to the corresponding method embodiment; they are not repeated here.
- The image processing module 12, as above, is configured to process the acquired image under the ROS system; obtaining the spatial pose information of the object to be grasped in the robot arm coordinate system specifically includes:
- reading the pre-imported identification map information (i.e., the preset positioning mark); acquiring a real-time image from the (USB) camera; finding the best-matching positioning mark and locating the mark observed by the camera; and then obtaining, from the located mark, the spatial pose information of the object to be grasped bearing the positioning mark in the camera coordinate system.
- The above process describes positioning from a still picture.
- In practice, the object to be grasped bearing the positioning mark moves slowly on the conveyor line, so what the camera captures is a moving scene.
- An ARToolKit frame-rate counter is therefore added, and the pose of the positioning mark in the camera coordinate system is estimated for each frame;
- this is equivalent to splitting the multi-frame footage into individual frames,
- so that the spatial pose information of the object to be grasped in the camera coordinate system is obtained for every frame.
- Once the per-frame spatial pose information of the object to be grasped in the camera coordinate system is obtained, it is published in real time for subsequent processing.
- In addition, using OpenGL, the positioning mark is taken as the origin and a coordinate frame is drawn on the mark's icon, so that the size, shape, and movement of the object to be grasped can be shown on the monitor.
- The movement of the arm is displayed as well, giving a more intuitive, dynamic view of the grasping process.
- the cv_bridge node is configured in the system to convert the sensor_msgs/Image type image data obtained by the camera under the ROS system into cv::Mat type image data recognizable by the OpenCV library.
- The positioning mark with the highest matching degree to the preset positioning mark is found, and its spatial pose information in the camera coordinate system is obtained.
- Since the acquired footage is likewise a sequence of frames,
- the positioning mark best matching the preset mark is searched for frame by frame, and the spatial pose information of each located mark in the camera coordinate system is extracted in turn.
- The spatial pose information of the object to be grasped in the camera coordinate system is thus extracted as a parameter for motion planning.
- When the OpenCV_ArUco positioning mark recognition algorithm is selected to identify the positioning mark, the image of the object to be grasped bearing the positioning mark acquired by the camera (after conversion to cv::Mat image data) is first binarized with the OTSU algorithm.
- The algorithm then reads the value of the positioning mark in the perspective-transformed image and compares it with the value of the pre-trained positioning mark (i.e., the preset positioning mark), thereby identifying the mark.
- The spatial position and attitude of the identified positioning mark are then estimated, giving the spatial pose information of the object to be grasped in the camera coordinate system.
- The spatial pose information of the object to be grasped in the camera coordinate system is then converted to obtain its spatial pose in the robot arm coordinate system.
- This conversion is applied to the pose of the object to be grasped in every camera frame.
- A MoveIt interface program is written: using the API provided by the MoveIt module, the spatial pose information of the object to be grasped in the robot arm coordinate system is formatted as a quaternion and transmitted to the MoveIt initialization program module for motion planning.
- In other words, after the host computer processes the images acquired by the camera, it obtains the per-frame spatial pose information of the object to be grasped in the robot arm coordinate system and converts it into a form supported by the MoveIt initialization program module.
- The motion planning module 13 is configured to perform motion planning on the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm; obtaining the corresponding motion information queue includes:
- modeling the robot arm by calling the created robot arm description model with the MoveIt setup tool of the ROS system; and,
- after the robot arm is modeled, performing motion planning on the arm (using the MoveIt initialization program module) according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue and publishing it (according to the communication rules of the ROS system).
- the Socket communication is used to enable the upper computer and the lower computer to communicate.
- For the specific communication process, refer to the corresponding method embodiment; it is not repeated here.
- the lower computer further includes:
- The information return module 32 is configured to return the real-time spatial pose information of the robot arm to the host computer while the robot arm is driven to perform the grasping operation along the corresponding path;
- the motion planning module is further configured to update the spatial pose information of the robot arm with the returned real-time spatial pose information.
- The robot arm may need to grasp several objects on the conveyor line, and its spatial pose information necessarily changes as grasping operations are performed. Therefore, while the planned action is executed, the position sensors (such as angle sensors and encoders) that the lower computer has on the robot arm transmit the arm's actual spatial pose information to the host computer via Socket communication, so that the host computer updates the arm's spatial pose information for the motion planning of the next object to be grasped in the robot arm coordinate system.
Claims (14)
- A robot arm grasping method based on a ROS system, characterized by comprising: Step 2: the host computer acquires, through a camera, an image of the object to be grasped containing a positioning mark; Step 3: the host computer processes the acquired image under the ROS system to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system; Step 4: the host computer performs motion planning on the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue; Step 5: the host computer transmits the obtained motion information queue of the robot arm to the lower computer; Step 6: the lower computer drives the robot arm according to the motion information queue to perform the grasping operation along the corresponding path.
- The ROS-based robot arm grasping method according to claim 1, wherein before step 1, in which the host computer configures the operating environment of the camera under the ROS system, the method further comprises: Step 0: the host computer performs positioning marker training on the camera.
- The ROS-based robot arm grasping method according to claim 2, wherein the process of the host computer performing positioning marker training on the camera in step 0 comprises either one of the following steps: Step 01: the host computer trains the camera on the positioning marker based on the ARToolKit positioning marker recognition algorithm; Step 02: the host computer trains the camera on the positioning marker based on the OpenCV_ArUco positioning marker recognition algorithm.
- The ROS-based robot arm grasping method according to claim 1, wherein before step 2, in which the host computer acquires through the camera an image of the object to be grasped containing a positioning marker, the method further comprises: Step 11: the host computer configures a camera node under the ROS system to drive the camera; Step 12: the host computer calibrates the camera under the ROS system and saves the correction data; Step 13: the host computer selects a positioning marker recognition algorithm.
- The ROS-based robot arm grasping method according to claim 1, wherein the specific process of step 3, in which the host computer processes the acquired image under the ROS system to obtain the spatial pose information of the object to be grasped containing the positioning marker in the robot arm coordinate system, is: Step 31: searching the image acquired by the camera for the positioning marker with the highest matching degree to a preset positioning marker; Step 32: locating the found positioning marker; Step 33: obtaining, according to the located positioning marker, the spatial pose information of the object to be grasped containing the positioning marker in the camera coordinate system; Step 34: converting the spatial pose information of the object to be grasped in the camera coordinate system according to a preset camera-coordinate-system to robot-arm-coordinate-system conversion matrix, to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system.
- The ROS-based robot arm grasping method according to claim 1, wherein the specific process of step 4, in which the host computer performs motion planning for the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm to obtain the corresponding motion information queue, is: Step 41: the host computer writes a robot arm model description file of the robot arm for the ROS system; Step 42: the host computer models the robot arm according to the robot arm model description file; Step 43: after the robot arm has been modeled, the host computer performs motion planning for the robot arm according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue.
- The ROS-based robot arm grasping method according to claim 1, further comprising: Step 7: while the robot arm is driven to perform the grasping operation along the corresponding path, the lower computer returns the real-time spatial pose information of the robot arm to the host computer; Step 8: the host computer updates the spatial pose information of the robot arm with the returned real-time spatial pose information.
- A ROS-based robot arm grasping system, comprising a host computer, a lower computer and a camera, the host computer being communicatively connected to the lower computer and to the camera; the camera is configured to acquire, under the control of the host computer, an image of the object to be grasped that contains a positioning marker; the host computer comprises: an image processing module, configured to process the acquired image under the ROS system to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system; a motion planning module, configured to perform motion planning for the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining a corresponding motion information queue; and a message passing module, configured to transmit the obtained motion information queue of the robot arm to the lower computer; the lower computer comprises: a motion execution module, configured to drive the robot arm according to the motion information queue to perform the grasping operation along the corresponding path.
- The ROS-based robot arm grasping system according to claim 8, wherein the host computer further comprises: a camera training module, configured to perform positioning marker training on the camera.
- The ROS-based robot arm grasping system according to claim 9, wherein the camera training module performing positioning marker training on the camera comprises: training the camera on the positioning marker based on the ARToolKit positioning marker recognition algorithm; or training the camera on the positioning marker based on the OpenCV_ArUco positioning marker recognition algorithm.
- The ROS-based robot arm grasping system according to claim 8, wherein the host computer further comprises: a camera configuration module, configured to configure a camera node under the ROS system to drive the camera, to calibrate the camera under the ROS system and save the correction data, and to select a positioning marker recognition algorithm.
- The ROS-based robot arm grasping system according to claim 8, wherein the image processing module processing the acquired image under the ROS system to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system specifically comprises: searching the image acquired by the camera for the positioning marker with the highest matching degree to a preset positioning marker; locating the found positioning marker; obtaining, according to the located positioning marker, the spatial pose information of the object to be grasped containing the positioning marker in the camera coordinate system; and converting the spatial pose information of the object to be grasped in the camera coordinate system according to a preset camera-coordinate-system to robot-arm-coordinate-system conversion matrix, to obtain the spatial pose information of the object to be grasped in the robot arm coordinate system.
- The ROS-based robot arm grasping system according to claim 8, wherein the motion planning module performing motion planning for the robot arm under the ROS system according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm to obtain the corresponding motion information queue specifically comprises: writing a robot arm model description file of the robot arm for the ROS system; modeling the robot arm according to the robot arm model description file; and, after the robot arm has been modeled, performing motion planning for the robot arm according to the spatial pose information of the object to be grasped in the robot arm coordinate system and the spatial pose information of the robot arm, obtaining the corresponding motion information queue.
- The ROS-based robot arm grasping system according to claim 8, wherein the lower computer further comprises: an information returning module, configured to return the real-time spatial pose information of the robot arm to the host computer while the robot arm is driven to perform the grasping operation along the corresponding path; and the motion planning module is further configured to update the spatial pose information of the robot arm with the returned real-time spatial pose information.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710056272.7 | 2017-01-25 | ||
CN201710056272.7A CN106826822B (en) | 2017-01-25 | 2017-01-25 | A kind of vision positioning and mechanical arm crawl implementation method based on ROS system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018137445A1 true WO2018137445A1 (en) | 2018-08-02 |
Family
ID=59121171
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/117168 WO2018137445A1 (en) | 2017-01-25 | 2017-12-19 | Ros-based mechanical arm grabbing method and system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106826822B (en) |
WO (1) | WO2018137445A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112338922A (en) * | 2020-11-23 | 2021-02-09 | 北京配天技术有限公司 | Five-axis mechanical arm grabbing and placing method and related device |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106826822B (en) * | 2017-01-25 | 2019-04-16 | 南京阿凡达机器人科技有限公司 | A kind of vision positioning and mechanical arm crawl implementation method based on ROS system |
CN109483526A (en) * | 2017-09-13 | 2019-03-19 | 北京猎户星空科技有限公司 | The control method and system of mechanical arm under virtual environment and true environment |
CN107717987A (en) * | 2017-09-27 | 2018-02-23 | 西安华航唯实机器人科技有限公司 | A kind of industrial robot and its control method with vision system |
CN107553496B (en) * | 2017-09-29 | 2020-09-22 | 南京阿凡达机器人科技有限公司 | Method and device for determining and correcting errors of inverse kinematics solving method of mechanical arm |
CN107450571B (en) * | 2017-09-30 | 2021-03-23 | 江西洪都航空工业集团有限责任公司 | AGV dolly laser navigation based on ROS |
CN107571260B (en) * | 2017-10-25 | 2021-02-26 | 南京阿凡达机器人科技有限公司 | Method and device for controlling robot to grab object |
CN107818587B (en) * | 2017-10-26 | 2021-07-09 | 吴铁成 | ROS-based machine vision high-precision positioning method |
CN107944384B (en) * | 2017-11-21 | 2021-08-20 | 天地伟业技术有限公司 | Delivered object behavior detection method based on video |
CN108392269B (en) * | 2017-12-29 | 2021-08-03 | 广州布莱医疗科技有限公司 | Operation assisting method and operation assisting robot |
CN108436909A (en) * | 2018-03-13 | 2018-08-24 | 南京理工大学 | A kind of hand and eye calibrating method of camera and robot based on ROS |
CN108460369B (en) * | 2018-04-04 | 2020-04-14 | 南京阿凡达机器人科技有限公司 | Drawing method and system based on machine vision |
CN108655026B (en) * | 2018-05-07 | 2020-08-14 | 上海交通大学 | A kind of robot rapid teaching sorting system and method |
CN109382828B (en) * | 2018-10-30 | 2021-04-16 | 武汉大学 | A robot shaft hole assembly system and method based on teaching and learning |
CN109531567A (en) * | 2018-11-23 | 2019-03-29 | 南京工程学院 | Remote operating underactuated manipulator control system based on ROS |
CN109877827B (en) * | 2018-12-19 | 2022-03-29 | 东北大学 | Non-fixed point material visual identification and gripping device and method of connecting rod manipulator |
CN109940616B (en) * | 2019-03-21 | 2022-06-03 | 佛山智能装备技术研究院 | Intelligent grabbing system and method based on brain-cerebellum mode |
CN110037910A (en) * | 2019-03-22 | 2019-07-23 | 同济大学 | A kind of multi-functional automatic physiotherapeutical instrument based on realsense |
CN109773798A (en) * | 2019-03-28 | 2019-05-21 | 大连理工大学 | Binocular vision-based double-mechanical-arm cooperative control method |
CN110355756A (en) * | 2019-06-11 | 2019-10-22 | 西安电子科技大学 | A kind of control system and method for a wide range of 3 D-printing of multi-robot Cooperation |
CN110253588A (en) * | 2019-08-05 | 2019-09-20 | 江苏科技大学 | A New Dynamic Grabbing System of Robotic Arm |
CN112775955B (en) * | 2019-11-06 | 2022-02-11 | 深圳富泰宏精密工业有限公司 | Mechanical arm coordinate determination method and computer device |
CN110926852B (en) * | 2019-11-18 | 2021-10-22 | 迪普派斯医疗科技(山东)有限公司 | Automatic film changing system and method for digital pathological section |
CN110962128B (en) * | 2019-12-11 | 2021-06-29 | 南方电网电力科技股份有限公司 | Substation inspection and stationing method and inspection robot control method |
CN111516006B (en) * | 2020-04-15 | 2022-02-22 | 昆山市工研院智能制造技术有限公司 | Composite robot operation method and system based on vision |
CN111483803B (en) * | 2020-04-17 | 2022-03-04 | 湖南视比特机器人有限公司 | Control method, capture system and storage medium |
CN111482967B (en) * | 2020-06-08 | 2023-05-16 | 河北工业大学 | Intelligent detection and grabbing method based on ROS platform |
CN112102289A (en) * | 2020-09-15 | 2020-12-18 | 齐鲁工业大学 | Cell sample centrifugal processing system and method based on machine vision |
CN112589795B (en) * | 2020-12-04 | 2022-03-15 | 中山大学 | Vacuum chuck mechanical arm grabbing method based on uncertainty multi-frame fusion |
CN112541946A (en) * | 2020-12-08 | 2021-03-23 | 深圳龙岗智能视听研究院 | Real-time pose detection method of mechanical arm based on perspective multi-point projection |
CN113110513A (en) * | 2021-05-19 | 2021-07-13 | 哈尔滨理工大学 | ROS-based household arrangement mobile robot |
CN113263501A (en) * | 2021-05-28 | 2021-08-17 | 湖南三一石油科技有限公司 | Method and device for controlling racking platform manipulator and storage medium |
CN115840420A (en) * | 2022-09-13 | 2023-03-24 | 南京理工大学泰州科技学院 | Intelligent mushroom sorting system and intelligent mushroom sorting method |
CN117260681A (en) * | 2023-09-28 | 2023-12-22 | 广州市腾龙信息科技有限公司 | Control system of mechanical arm robot |
CN117841041B (en) * | 2024-02-05 | 2024-07-05 | 北京新雨华祺科技有限公司 | Mechanical arm combination device based on multi-arm cooperation |
2017
- 2017-01-25 CN CN201710056272.7A patent/CN106826822B/en active Active
- 2017-12-19 WO PCT/CN2017/117168 patent/WO2018137445A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008008790A2 (en) * | 2006-07-10 | 2008-01-17 | Ugobe, Inc. | Robots with autonomous behavior |
CN103271784A (en) * | 2013-06-06 | 2013-09-04 | 山东科技大学 | Man-machine interactive manipulator control system and method based on binocular vision |
CN104820418A (en) * | 2015-04-22 | 2015-08-05 | 遨博(北京)智能科技有限公司 | Embedded vision system for mechanical arm and method of use |
CN106003036A (en) * | 2016-06-16 | 2016-10-12 | 哈尔滨工程大学 | Object grabbing and placing system based on binocular vision guidance |
CN106826822A (en) * | 2017-01-25 | 2017-06-13 | 南京阿凡达机器人科技有限公司 | A kind of vision positioning and mechanical arm crawl implementation method based on ROS systems |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112338922A (en) * | 2020-11-23 | 2021-02-09 | 北京配天技术有限公司 | Five-axis mechanical arm grabbing and placing method and related device |
CN112338922B (en) * | 2020-11-23 | 2022-08-16 | 北京配天技术有限公司 | Five-axis mechanical arm grabbing and placing method and related device |
Also Published As
Publication number | Publication date |
---|---|
CN106826822B (en) | 2019-04-16 |
CN106826822A (en) | 2017-06-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018137445A1 (en) | Ros-based mechanical arm grabbing method and system | |
CN112132894B (en) | A real-time tracking method of robotic arm based on binocular vision guidance | |
CN113379849B (en) | Robot autonomous recognition intelligent grabbing method and system based on depth camera | |
CN113492393A (en) | Robot teaching demonstration by human | |
CN112171661A (en) | Method for grabbing target object by mechanical arm based on visual information fusion | |
CN108908334A (en) | A kind of intelligent grabbing system and method based on deep learning | |
CN114097004A (en) | Performance on Autonomous Tasks Based on Vision Embeddings | |
CN108422435A (en) | Remote monitoring and control system based on augmented reality | |
CN109079794B (en) | A robot control and teaching method based on human posture following | |
JP2013043271A (en) | Information processing device, method for controlling the same, and program | |
CN115213896B (en) | Object grasping method, system, device and storage medium based on robotic arm | |
CN104570731A (en) | Uncalibrated human-computer interaction control system and method based on Kinect | |
CN107471218A (en) | A kind of tow-armed robot hand eye coordination method based on multi-vision visual | |
CN106514667A (en) | Human-computer cooperation system based on Kinect skeletal tracking and uncalibrated visual servo | |
CN111347411A (en) | 3D visual recognition and grasping method of dual-arm collaborative robot based on deep learning | |
Schröder et al. | Real-time hand tracking with a color glove for the actuation of anthropomorphic robot hands | |
CN106003036A (en) | Object grabbing and placing system based on binocular vision guidance | |
CN113711275B (en) | Creating training data variability for object annotation in images in machine learning | |
CN206105869U (en) | Quick teaching apparatus of robot | |
CN110405775A (en) | A robot teaching system and method based on augmented reality technology | |
CN113103230A (en) | Human-computer interaction system and method based on remote operation of treatment robot | |
CN113510718A (en) | An intelligent food-selling robot based on machine vision and method of using the same | |
Bu et al. | Vision-guided manipulator operating system based on CSRT algorithm | |
CN110142770A (en) | A robot teaching system and method based on a head-mounted display device | |
CN115810188A (en) | Method and system for identifying three-dimensional pose of fruit on tree based on single two-dimensional image |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17893624; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 17893624; Country of ref document: EP; Kind code of ref document: A1
 | 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 14.05.2020)