US20170368687A1 - Method for teaching a robotic arm to pick or place an object - Google Patents
Method for teaching a robotic arm to pick or place an object
- Publication number
- US20170368687A1 (application US15/189,292)
- Authority
- US
- United States
- Prior art keywords
- image
- robot arm
- teaching
- pick
- place
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1612—Programme controls characterised by the hand, wrist, grip control
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1687—Assembly, peg and hole, palletising, straight line, weaving pattern movement
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/42—Recording and playback systems, i.e. in which the programme is recorded from a cycle of operations, e.g. the cycle of operations being manually controlled, after which this record is played back on the same machine
- G05B19/423—Teaching successive positions by walk-through, i.e. the tool head or end effector being grasped and guided directly, with or without servo-assistance, to follow a path
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/39—Robotics, robotics to robotics hand
- G05B2219/39543—Recognize object and plan hand shapes in grasping movements
Description
- The disclosure relates in general to a teaching method for a robot arm, and more particularly to a method for teaching the robot arm a movement path for picking and placing an object.
- With the rapid development of manufacturing technologies such as factory automation, an object is positioned by a vision device and a robot arm is guided to pick the object for automatic assembly, increasing production speed. The efficiency of the robot arm's automatic movement hinges on teaching the robot arm to pick and place an object automatically, which has become a prominent issue for the robot arm industry.
- The method for teaching the robot arm to automatically pick and place an object according to the prior art includes the following steps: an object to pick is circled by a user, an image of the object is captured, the image is processed, the image characteristics of the object are analyzed, and the direction for picking the object is planned. Then, the image of the object is inputted via an operation interface of the robot arm, the robot arm is moved, the movement path for picking the object is taught to the robot arm, and the image characteristics and picking direction of the object are set.
- Similarly, the method for teaching the robot arm to place an object according to the prior art includes the following steps: a placing position for the object is circled by a user, an image of the placing position is captured, the image is processed, the image characteristics of the placing position are analyzed, and the direction for placing the object on the placing portion is planned. Then, the image of the placing portion is inputted via the operation interface of the robot arm, the robot arm is moved, the movement path for placing the object is taught to the robot arm, and the image characteristics and placing direction of the placing portion are set.
- The robot arm is then activated and controlled by a control device to automatically move towards the object along the taught movement path for picking; the image characteristics of the object are located by a vision device, and the object having those image characteristics is picked. Next, the robot arm is controlled to automatically move towards the placing position along the taught movement path for placing, and the image characteristics of the placing position are located by the vision device. Lastly, the object is placed at the placing position having those image characteristics.
- According to this prior-art teaching method, the user needs to capture images of the object and the placing position, perform image processing on the captured images, analyze the image characteristics, and set the directions for picking and placing the object. These are professional tasks beyond an ordinary user's capacity, and the complicated teaching operation further reduces the operating efficiency of the robot arm. The teaching of picking and placing an object therefore still leaves many problems to resolve.
- According to an object of the invention, a method for teaching a robot arm to pick and place an object is provided. A first image and a second image are sequentially captured at a visual point by an eye-in-hand vision device disposed on the robot arm, a differential image is formed from the captured images, and the image characteristics of the object and the placing portion are learned automatically, reducing the difficulty of use.
- According to another object of the invention, the user only needs to move the robot arm to a visual point and a pick and place point and teach it a movement path; the robot arm learns the remaining operations automatically, simplifying the teaching operation.
- To achieve the above objects of the invention, the method for teaching a robot arm to pick and place an object includes the following steps. Firstly, the robot arm is pushed until a target appears within the vision of an eye-in-hand vision device. An appearance position of the target is set as a visual point, and a first image is captured there by the eye-in-hand vision device. The robot arm is then pushed from the visual point to a target position, which is set as a pick and place point. Automatic movement control of the robot arm is then activated; the robot arm automatically picks or places the object and returns to the visual point from the pick and place point. The eye-in-hand vision device is controlled to capture a second image at the visual point. Finally, a differential image is formed by subtracting the second image from the first image, a target image is set according to the differential image, and the image characteristics of the target are learned automatically.
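- Purely as a hedged illustration, these steps can be pictured as one teaching routine. In the Python sketch below, every interface name (wait_until_target_in_view, current_pose, record_hand_guided_path, enable_automatic_control, pick_or_place, follow_path_reversed, learn_target) is a hypothetical stand-in for the control device described here, not an API the disclosure defines; learn_target denotes the differencing step detailed further below.

```python
# Hypothetical sketch of teaching steps S1-S9; all interface names are
# illustrative assumptions, not part of the disclosure.
def teach_pick_or_place(arm, camera, gripper, learn_target):
    arm.wait_until_target_in_view(camera)           # S1: hand-guide until the target is in view
    visual_point = arm.current_pose()               # S2: set the visual point V
    first_image = camera.capture()                  # S3: capture and record the first image at V
    teaching_path = arm.record_hand_guided_path()   # S4: push the arm from V to the target position
    pick_and_place_point = arm.current_pose()       # S5: set the pick and place point P
    arm.enable_automatic_control()                  # S6: activate automatic movement control
    gripper.pick_or_place()                         # S7: act at P, then return to V along the path
    arm.follow_path_reversed(teaching_path)
    second_image = camera.capture()                 # S8: capture and record the second image at V
    return learn_target(first_image, second_image)  # S9: differencing and automatic learning
```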
- During the process of teaching the robot arm to pick an object, the target is the object picked from the placing portion by the robot arm; the first image, captured before the object is picked, shows the placing portion with the object; the second image, captured after the object is picked, shows the placing portion only; and a differential image of the object is formed by subtracting the second image from the first image. During the process of teaching the robot arm to place an object, the target is the placing portion on which the robot arm places the object; the first image, captured before the object is placed, shows the placing portion only; the second image, captured after the object is placed, shows the placing portion with the object; a differential image of the object is formed by subtracting the second image from the first image; and the image of the placing portion surrounding the object is reserved.
- According to the method for teaching a robot arm to pick and place an object, the robot arm is connected to a control device; the vision of the eye-in-hand vision device is shown on a screen connected to the control device; the first and second images captured by the eye-in-hand vision device are recorded in the control device; and the teaching movement path, being the path along which the robot arm is pushed from the visual point to the pick and place point, is recorded in the control device. The robot arm automatically returns to the visual point from the pick and place point along the teaching movement path.
- The above and other aspects of the invention will become better understood with regard to the following detailed description of the preferred but non-limiting embodiment(s). The following description is made with reference to the accompanying drawings.
- FIG. 1 is a schematic diagram of moving a robot arm to a visual point according to the invention.
- FIG. 2 is a schematic diagram of moving a robot arm to a pick and place point according to the invention.
- FIG. 3 is a schematic diagram of returning a robot arm to a visual point according to the invention.
- FIG. 4 is a schematic diagram of image processing of an object according to the invention.
- FIG. 5 is a schematic diagram of image processing of a placing portion according to the invention.
- FIG. 6 is a flowchart of a method for teaching a robot arm to pick an object according to the invention.
- The technical methods adopted to achieve the above objects of the invention and the consequent effects are disclosed in a number of preferred embodiments below with reference to the accompanying drawings.
- Refer to FIG. 1, FIG. 2, FIG. 3 and FIG. 4. As indicated in FIG. 1, one end of the robot arm 10 of the invention is fixed on a main body 11, an arm reference coordinate M is set, and an eye-in-hand vision device 13 is disposed at a movable portion 12 on the other end of the robot arm 10. The robot arm 10 is connected to a control device 14. Through an operation interface 15 and a screen 16, the control device 14 moves the robot arm 10 and controls the eye-in-hand vision device 13 to capture an image. By processing and analyzing the captured image, the robot arm 10 drives a number of toggles 17 and the movable portion 12 to approach a machine platform 18 disposed in the working environment of the robot arm 10, and further uses a picking device 19 to pick the target object 21 from the placing portion 20 of the machine platform 18.
- During the process of teaching the robot arm 10 to pick the object 21, the toggles 17 of the robot arm 10 rotate as the movable portion 12 of the robot arm 10 is pushed manually. Since the rotation angle of each toggle 17 can be detected by sensors, the movement position and track of the movable portion 12 with respect to the main body 11 are recorded by the control device 14 using the arm reference coordinate M. The movable portion 12 is manually pushed to the top of the placing portion 20 of the machine platform 18, with the vision of the eye-in-hand vision device 13 shown on the screen 16 of the operation interface 15, until the object 21 on the machine platform 18 appears on the screen 16. The appearance position of the object 21 is set as a visual point V. The eye-in-hand vision device 13 captures a first image at the visual point V, which the control device 14 records. As indicated in FIG. 4, the first image, being the image captured before the object 21 is picked, shows the placing portion 20 of the machine platform 18 with the object 21.
- As indicated in FIG. 2, when the robot arm 10 is taught to pick the object 21, the movable portion 12 of the robot arm 10 is manually pushed from the visual point V towards the object 21 placed on the placing portion 20 according to a planned movement path. When the robot arm 10 reaches a picking position of the object 21, the picking position is set as a pick and place point P. The control device 14 records a teaching movement path, along which the movable portion 12 is moved from the visual point V to the pick and place point P, by detecting the rotation angles of the toggles 17 of the robot arm 10.
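- As a hedged sketch of how such a teaching movement path might be recorded from the toggle rotation angles and replayed, consider the following; the sampling period and the read_joint_angles, done, and move_to_joint_angles callbacks are illustrative assumptions, since the disclosure does not specify this interface.

```python
# Hypothetical sketch: sample joint ("toggle") angles while the arm is
# hand-guided from the visual point V to the pick and place point P,
# then replay the samples in reverse to return from P to V.
import time
from typing import Callable, List

def record_teaching_path(read_joint_angles: Callable[[], List[float]],
                         done: Callable[[], bool],
                         period_s: float = 0.02) -> List[List[float]]:
    """Record one waypoint per sample until the hand-guiding is finished."""
    path: List[List[float]] = []
    while not done():
        path.append(read_joint_angles())
        time.sleep(period_s)
    return path

def replay_path_reversed(move_to_joint_angles: Callable[[List[float]], None],
                         path: List[List[float]]) -> None:
    """Drive the arm back along the recorded path, last waypoint first."""
    for waypoint in reversed(path):
        move_to_joint_angles(waypoint)
```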
- As indicated in FIG. 3, automatic movement control of the robot arm 10 is activated, and the control device 14 controls the picking device 19 of the robot arm 10 to automatically pick the object 21 from the placing portion 20 of the machine platform 18 and move the movable portion 12 back to the visual point V from the pick and place point P along the teaching movement path. The control device 14 then controls the eye-in-hand vision device 13 to capture a second image at the visual point V and records it. As indicated in FIG. 4, the second image, being an image captured after the object 21 is picked, shows only the image of the placing portion 20 of the machine platform 18. The object 21 is not included in the second image.
- As indicated in FIG. 4, the control device 14 performs image processing on the first and second images captured at the visual point V. A differential image is formed by subtracting the second image from the first image. The first image, captured before the object 21 is picked, shows the placing portion 20 with the object 21; the second image, captured after the object 21 is picked, shows the placing portion 20 only. After the second image is subtracted from the first image, only the differential image of the object 21 is left. From the image characteristics obtained by this image processing, the control device 14 automatically learns the image characteristics of the object 21 to facilitate recognizing and picking the object 21 placed on the placing portion 20.
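- A minimal sketch of this differencing step, written with OpenCV; the file names, the binarization threshold, and the use of ORB keypoints as the learned image characteristics are illustrative assumptions, since the disclosure does not name specific image-processing primitives.

```python
import cv2

first = cv2.imread("first_at_V.png")    # before picking: placing portion with object
second = cv2.imread("second_at_V.png")  # after picking: placing portion only

# Differential image; an absolute difference stands in for "subtracting
# the second image from the first image".
diff = cv2.absdiff(first, second)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

# Crop the changed region, i.e. the object's image (assumes the object
# produced a non-empty changed region).
x, y, w, h = cv2.boundingRect(cv2.findNonZero(mask))
object_image = first[y:y + h, x:x + w]

# "Automatically learn image characteristics": ORB keypoints and
# descriptors stand in for whatever features the control device stores.
orb = cv2.ORB_create()
keypoints, descriptors = orb.detectAndCompute(
    cv2.cvtColor(object_image, cv2.COLOR_BGR2GRAY), None)
```

An absolute difference is used rather than a signed subtraction so that the changed region is recovered regardless of which of the two images is brighter at each pixel.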
- The process of teaching the robot arm 10 to place the object 21 is illustrated with reference to FIG. 1, FIG. 2, FIG. 3 and FIG. 5. FIG. 5 is a schematic diagram of image processing of a placing portion according to the invention. This process is the reverse of the process of teaching the robot arm 10 to pick the object 21; the only difference is that the robot arm 10 picks up the object 21 at the visual point and then moves to the placing portion 20 to place it. As indicated in FIG. 3, the movable portion 12 is manually pushed to the top of the placing portion 20 on which the object 21 is to be placed, such that the placing portion 20 is within the vision of the eye-in-hand vision device 13 and appears on the screen 16. The appearance position of the placing portion 20 is set as a visual point V, at which the eye-in-hand vision device 13 captures a first image, which the control device 14 records. As indicated in FIG. 5, the first image, being an image captured before the object 21 is placed, shows the placing portion 20 of the machine platform 18 only. The object 21 is not included in the first image.
- As indicated in FIG. 2, when the robot arm 10 is taught to place the object 21, the movable portion 12 of the robot arm 10 is manually pushed from the visual point V to a placing position of the object 21; the placing position is set as a pick and place point P, and a teaching movement path, along which the movable portion 12 is moved from the visual point V to the pick and place point P, is recorded. As indicated in FIG. 1, when the automatic movement control of the robot arm 10 is activated, the control device 14 controls the picking device 19 of the robot arm 10 to automatically place the object 21 on the placing portion 20 of the machine platform 18, and moves the movable portion 12 back to the visual point V from the pick and place point P along the teaching movement path. The control device 14 then controls the eye-in-hand vision device 13 to capture a second image at the visual point V and records it. As indicated in FIG. 5, the second image, being an image captured after the object 21 is placed, shows the image of the object 21 placed on the placing portion 20.
- As indicated in FIG. 5, the control device 14 subtracts the second image from the first image captured at the visual point V to form a differential image. After the differential image of the object 21 is obtained, the periphery of the differential image of the object 21 is set as a reserved image, so that after image processing only the image of the placing portion 20 is left. Based on the processed image of the placing portion 20, the control device 14 automatically learns the image characteristics of the placing portion 20 to facilitate recognizing the placing portion 20 and placing the object 21 on it.
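- A hedged sketch of this variant of the differencing step follows; as before, the file names and threshold are assumptions. Here the differential image locates the newly placed object, whose region is blanked out so that only the surrounding (reserved) image of the placing portion remains for feature learning.

```python
import cv2

first = cv2.imread("first_at_V.png")    # before placing: placing portion only
second = cv2.imread("second_at_V.png")  # after placing: placing portion with object

diff = cv2.absdiff(first, second)       # differential image locates the object
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, object_mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

# Reserve the periphery: blank out the object's region so that only the
# placing portion is left in the processed image.
placing_image = first.copy()
placing_image[object_mask > 0] = 0

# Learn the placing portion's image characteristics from the reserved image.
orb = cv2.ORB_create()
keypoints, descriptors = orb.detectAndCompute(
    cv2.cvtColor(placing_image, cv2.COLOR_BGR2GRAY), None)
```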
- As indicated in FIG. 6, a flowchart of a method for teaching a robot arm to pick an object according to the invention is shown. From the foregoing descriptions of teaching a robot arm to place or pick an object, it can be seen that the picking process is the reverse of the placing process; if both the object and the placing portion are regarded as a target during each movement process, the two processes become the same. The process of picking an object and the process of placing an object can therefore be illustrated by the same process, which includes the following steps. The method begins at step S1: a robot arm is pushed until a target appears within the vision of an eye-in-hand vision device. In step S2, an appearance position of the target is set as a visual point. In step S3, a first image is captured by the eye-in-hand vision device at the visual point and recorded. In step S4, the robot arm is pushed from the visual point to a target position. In step S5, the target position is set as a pick and place point, and a teaching movement path from the visual point to the pick and place point is recorded. In step S6, automatic movement control of the robot arm is activated. In step S7, the object is automatically picked or placed by the robot arm, which returns to the visual point from the pick and place point along the teaching movement path. In step S8, the eye-in-hand vision device is controlled to capture a second image at the visual point, and the second image is recorded. In step S9, a differential image is formed by subtracting the second image from the first image, the target image is set according to the differential image, and the image characteristics of the target are learned automatically.
- According to the method for teaching a robot arm to pick and place an object disclosed in the above embodiments of the invention, a movement path can be taught to the robot arm by moving it to a visual point and a pick and place point with only a small amount of labor. An eye-in-hand vision device on the robot arm captures a first image at the visual point and a second image when the robot arm automatically returns to the visual point from the pick and place point. The required image is obtained from the differential image formed from the first and second images, and the image characteristics of the object and the placing portion are learned automatically. The teaching method of the invention not only simplifies the teaching operation but also dispenses with professional-level image-processing work, reducing the difficulty of use.
- It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments. It is intended that the specification and examples be considered as exemplary only, with a true scope of the present disclosure being indicated by the following claims and their equivalents.
Claims (14)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/189,292 US10059005B2 (en) | 2016-06-22 | 2016-06-22 | Method for teaching a robotic arm to pick or place an object |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/189,292 US10059005B2 (en) | 2016-06-22 | 2016-06-22 | Method for teaching a robotic arm to pick or place an object |
Publications (2)
Publication Number | Publication Date |
---|---|
US20170368687A1 (en) | 2017-12-28 |
US10059005B2 (en) | 2018-08-28 |
Family
ID=60675260
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/189,292 Active 2036-07-13 US10059005B2 (en) | 2016-06-22 | 2016-06-22 | Method for teaching a robotic arm to pick or place an object |
Country Status (1)
Country | Link |
---|---|
US (1) | US10059005B2 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FI127100B (en) * | 2016-08-04 | 2017-11-15 | Zenrobotics Oy | A method and apparatus for separating at least one object from the multiplicity of objects |
- 2016-06-22: US application US15/189,292 filed; granted as US10059005B2 (status: active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7103460B1 (en) * | 1994-05-09 | 2006-09-05 | Automotive Technologies International, Inc. | System and method for vehicle diagnostics |
US8306635B2 (en) * | 2001-03-07 | 2012-11-06 | Motion Games, Llc | Motivation and enhancement of physical and mental exercise, rehabilitation, health and social interaction |
US9471142B2 (en) * | 2011-06-15 | 2016-10-18 | The University Of Washington | Methods and systems for haptic rendering and creating virtual fixtures from point clouds |
US9348488B1 (en) * | 2012-11-29 | 2016-05-24 | II Andrew Renema | Methods for blatant auxiliary activation inputs, initial and second individual real-time directions, and personally moving, interactive experiences and presentations |
US20160034305A1 (en) * | 2013-03-15 | 2016-02-04 | Advanced Elemental Technologies, Inc. | Methods and systems for purposeful computing |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180243897A1 (en) * | 2015-08-25 | 2018-08-30 | Kawasaki Jukogyo Kabushiki Kaisha | Remote control robot system |
US10980605B2 (en) * | 2015-08-25 | 2021-04-20 | Kawasaki Jukogyo Kabushiki Kaisha | Remote control robot system |
US10618166B2 (en) | 2017-06-06 | 2020-04-14 | Fanuc Corporation | Teaching position correction device and teaching position correction method |
JP2019150904A (en) * | 2018-03-01 | 2019-09-12 | 株式会社東芝 | Information processing device and sorting system |
JP7005388B2 (en) | 2018-03-01 | 2022-01-21 | 株式会社東芝 | Information processing equipment and sorting system |
JP2019171270A (en) * | 2018-03-28 | 2019-10-10 | 株式会社スギノマシン | Cleaning machine and imaging method of target position of nozzle |
CN108858202A (en) * | 2018-08-16 | 2018-11-23 | 中国科学院自动化研究所 | The control method of part grabbing device based on " to quasi- approach-crawl " |
CN109895095A (en) * | 2019-02-11 | 2019-06-18 | 赋之科技(深圳)有限公司 | A kind of acquisition methods of training sample, device and robot |
WO2021012122A1 (en) * | 2019-07-19 | 2021-01-28 | 西门子(中国)有限公司 | Robot hand-eye calibration method and apparatus, computing device, medium and product |
CN112238453A (en) * | 2019-07-19 | 2021-01-19 | 上银科技股份有限公司 | Vision-guided robot arm correction method |
US12042942B2 (en) | 2019-07-19 | 2024-07-23 | Siemens Ltd., China | Robot hand-eye calibration method and apparatus, computing device, medium and product |
CN110421565A (en) * | 2019-08-07 | 2019-11-08 | 江苏汇博机器人技术股份有限公司 | Robot global positioning and measuring system and method for practical training |
US20200016767A1 (en) * | 2019-08-21 | 2020-01-16 | Lg Electronics Inc. | Robot system and control method of the same |
US11559902B2 (en) * | 2019-08-21 | 2023-01-24 | Lg Electronics Inc. | Robot system and control method of the same |
CN111006706A (en) * | 2019-11-12 | 2020-04-14 | 长沙长泰机器人有限公司 | Rotating shaft calibration method based on line laser vision sensor |
CN111283685A (en) * | 2020-03-05 | 2020-06-16 | 广州市斯睿特智能科技有限公司 | Vision teaching method of robot based on vision system |
CN112040124A (en) * | 2020-08-28 | 2020-12-04 | 深圳市商汤科技有限公司 | Data acquisition method, device, equipment, system and computer storage medium |
Also Published As
Publication number | Publication date |
---|---|
US10059005B2 (en) | 2018-08-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10059005B2 (en) | Method for teaching a robotic arm to pick or place an object | |
WO2023056670A1 (en) | Mechanical arm autonomous mobile grabbing method under complex illumination conditions based on visual-tactile fusion | |
CN107263468B (en) | A SCARA Robot Assembly Method Using Digital Image Processing Technology | |
WO2019202900A1 (en) | Behavior estimation device, behavior estimation method, and behavior estimation program | |
JP7027299B2 (en) | Calibration and operation of vision-based operation system | |
CN108229665A (en) | A kind of the System of Sorting Components based on the convolutional neural networks by depth | |
US20130245824A1 (en) | Method and system for training a robot using human-assisted task demonstration | |
CN111604942A (en) | Object detection device, control device, and computer program for object detection | |
US20220080581A1 (en) | Dual arm robot teaching from dual hand human demonstration | |
CN110666805A (en) | A sorting method for industrial robots based on active vision | |
JP6042291B2 (en) | Robot, robot control method, and robot control program | |
CN108607819A (en) | Material sorting system and method | |
JP2002018754A (en) | Robot apparatus and control method therefor | |
US12172303B2 (en) | Robot teaching by demonstration with visual servoing | |
US11470259B2 (en) | Systems and methods for sampling images | |
CN114310954A (en) | A kind of nursing robot self-adaptive lifting control method and system | |
CN117325170A (en) | Method for grasping hard disk rack by robotic arm guided by depth vision | |
WO2023102647A1 (en) | Method for automated 3d part localization and adjustment of robot end-effectors | |
CN208092786U (en) | A kind of the System of Sorting Components based on convolutional neural networks by depth | |
JP6067547B2 (en) | Object recognition device, robot, and object recognition method | |
JP2020142323A (en) | Robot control device, robot control method and robot control program | |
JP2015003348A (en) | Robot control system, control device, robot, control method for robot control system and robot control method | |
JP2023059837A (en) | Robot program generation method from human demonstration | |
CN115063670A (en) | Automatic sorting method, device and system | |
KR102452315B1 (en) | Apparatus and method of robot control through vision recognition using deep learning and marker |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: QUANTA STORAGE INC., TAIWAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; Assignors: HUANG, CHUNG-HSIEN; HUANG, SHIH-JUNG. Reel/Frame: 038984/0106. Effective date: 20160620 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| AS | Assignment | Owner name: TECHMAN ROBOT INC., TAIWAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; Assignor: QUANTA STORAGE INC. Reel/Frame: 054345/0328. Effective date: 20201029 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY. Year of fee payment: 4 |