
WO2008047872A1 - Manipulator - Google Patents

Manipulator

Info

Publication number
WO2008047872A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
manipulator
gripping
image input
arm
Prior art date
Application number
PCT/JP2007/070360
Other languages
French (fr)
Japanese (ja)
Inventor
Saku Egawa
Original Assignee
Hitachi, Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi, Ltd. filed Critical Hitachi, Ltd.
Priority to CN2007800378723A priority Critical patent/CN101522377B/en
Priority to JP2008539872A priority patent/JPWO2008047872A1/en
Publication of WO2008047872A1 publication Critical patent/WO2008047872A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1674Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676Avoiding collision or forbidden zones
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • EFIXED CONSTRUCTIONS
    • E02HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02FDREDGING; SOIL-SHIFTING
    • E02F9/00Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F9/20Drives; Control devices
    • E02F9/2025Particular purposes of control systems not otherwise provided for
    • E02F9/2033Limiting the movement of frames or implements, e.g. to avoid collision between implements and the cabin

Definitions

  • the present invention relates to a manipulator that holds an object with an arm and performs positioning and transportation.
  • a manipulator is a device in which a joint and an arm (arm) are combined like a human arm, and is a generic name for a device that grips an object and performs positioning and transportation.
  • manipulators include a gripping mechanism that grips an object and an arm mechanism that moves the gripping mechanism.
  • the manipulator includes a mechanism that automatically controls movement of the arm and a mechanism that is operated by a person.
  • Examples of manipulators whose arms are automatically controlled include industrial robot arms used for parts transportation and assembly in factories, and the arms of service robots that perform tasks such as housework and nursing care in public spaces, offices, and homes.
  • Examples of manipulators whose arms are operated by a person include construction machines that handle large, heavy objects, master-slave manipulators used in space environments and nuclear facilities, and surgical support manipulators for medical use.
  • Based on image information acquired by a visual sensor provided on an arm-type robot, the object to be grasped is fitted to a simple shape by image recognition, its size and orientation are calculated, and a grasping method is derived from them; in this way, a method of reliably grasping an object of arbitrary shape has been proposed.
  • A robot apparatus is disclosed in which an ultrasonic sensor provided on the arm-type robot body detects surrounding moving objects and the moving speed of the robot arm is reduced when the distance from the body falls within a certain range.
  • an object region image indicating a learning target object is cut out by moving a learning target object by bringing a movable part such as an arm unit into contact with the learning target object,
  • a robot apparatus that extracts features from the object region image and registers them in the object model database and a learning method thereof are disclosed.
  • the means for extracting the image of the target object uses a method of extracting, from the captured images, the region that changed before and after the target object was moved.
  • surrounding objects other than the object to be gripped may move.
  • As a method for extracting a moving object from an image, extracting the changed portion from a plurality of images taken at different times is known; however, if surrounding objects also move, the moving background is mistaken for the target object and the object cannot be extracted correctly.
  • For this reason, with conventional techniques it was difficult to reliably recognize the object to be grasped in a general environment.
  • Patent Document 1 Japanese Patent Application Laid-Open No. 2000-202790
  • Patent Document 2 Japanese Patent Laid-Open No. 2005-128959
  • The present invention has been made in view of the above; its object is to provide a manipulator that, in situations where both the gripped object and surrounding objects can move arbitrarily, can reliably recognize the shape of an unknown gripped object, perform work based on the recognized shape, and prevent the gripped object from contacting surrounding objects and people, thereby improving safety.
  • The manipulator of the present invention includes an arm; arm driving means for driving the arm; a gripping portion provided on the arm; image input means for acquiring an image of the periphery of the gripping portion; gripping-portion relative position detection means for detecting the relative position of the gripping portion with respect to the image input means; and storage means for storing a plurality of images acquired by the image input means and the relative positions of the gripping portion detected by the gripping-portion relative position detection means.
  • Based on the plurality of images stored in the storage means and the relative positions of the gripping portion with respect to the image input means, the manipulator detects the position and shape of the gripped object.
  • FIG. 1 is a schematic diagram showing an example of the overall configuration according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing a system configuration example of a manipulator device according to an embodiment of the present invention.
  • FIG. 3 is a flowchart showing an overall processing example of contact possibility determination processing according to an embodiment of the present invention.
  • FIG. 4 is a flowchart showing a processing example of the position and shape determination process for a gripped object according to an embodiment of the present invention.
  • FIG. 5 is an explanatory diagram showing an example of a grayscale image according to an embodiment of the present invention.
  • FIG. 6 is an explanatory diagram showing an example of image processing in the position and shape determination process for a gripped object according to an embodiment of the present invention.
  • FIG. 7 is a flowchart showing an example of a contact possibility determination process according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram showing an example of the overall configuration according to another embodiment of the present invention.
  • FIG. 1 shows an example of the overall configuration of the manipulator in the present embodiment
  • FIG. 2 shows an example of the system configuration of the manipulator in the present embodiment.
  • the manipulator 101 in this embodiment includes a base 105, a plurality of arms (arms) 103 and 104 that are directly connected to the base, and a grip portion 102 attached to the tip of the arm.
  • In FIG. 1 the number of arms is two, but more may be provided. Between the grip portion 102, the arms 103 and 104, and the base 105 there are movable joints 112, 113, and 114.
  • Each joint has an actuator 132, 133, 134 that drives it (shown in FIG. 1 overlaid on joints 112, 113, and 114) and, in order to measure the amount of change of the movable part, an angle sensor 122, 123, 124 that measures the angle of joint 112, 113, 114 (also shown overlaid on the joints in FIG. 1).
  • A joint may have a plurality of rotational degrees of freedom, in which case the angle sensor acquires rotation angles in a plurality of directions.
  • a control device 106 that inputs angle information acquired by the angle sensor and controls the actuator to move the manipulator 101 is provided.
  • the control device 106 has various installation positions depending on the use and shape of the manipulator 101.
  • FIG. 1 shows the control device 106 in the vicinity of the manipulator 101.
  • In general, a manipulator controls the position of the gripping part mainly by using the arms connected in series from the base, and controls the posture of the gripping part by rotating the joint between the arm at the tip and the gripping part.
  • the arm 103 in front of the grip portion 102 is referred to as the forearm 103 for comparison with the human structure.
  • the joint 112 between the forearm 103 and the grip portion 102 is called a wrist joint 112. Even when the wrist joint 112 is rotated, the position of the tip of the grip portion 102 moves, but since the length of the grip portion is shorter than that of the arms 103 and 104, the amount of change in the position is relatively small.
  • The part corresponding to the wrist joint may consist of a plurality of joints and short arms; here these are collectively regarded as a single wrist joint.
  • The manipulator 101 of this embodiment is equipped with an image input device 2, which is image input means for acquiring a three-dimensional image of objects near the grip portion 102, and a surroundings monitoring device 3, which is surroundings monitoring means capable of detecting the positions of surrounding objects 108.
  • the image input device 2 is attached to the forearm 103, and the surroundings monitoring device 3 is provided on the base 105 of the manipulator 101 so as to acquire a wider range of surrounding images than the image input device 2.
  • The image input device 2 may be provided on the arm 104 as long as it can acquire images of the vicinity of the grip portion 102, but since the joint 113 would then lie between them, it is preferably provided on the forearm (arm) 103 if possible. An angle sensor 122 that measures the angle of the wrist joint 112 is used as gripping-portion relative position detection means for detecting changes in the position of the grip portion 102 relative to the image input device 2.
  • The image storage unit 301 is storage means that stores, in combination, the stereoscopic image data acquired by the image input device 2 and the angle information of the angle sensor 122 of the wrist joint 112 acquired from the control device 106.
  • An image extraction unit 302 extracts the image of the gripped object 109 using the plurality of images stored in the image storage unit 301 and the relative position and posture information of the grip portion, and detects the position and shape of the gripped object.
  • Based on the detected position and shape of the gripped object 109 and the position information of the surrounding objects 108 detected by the surroundings monitoring device 3, a contact possibility determination device 4, which is contact possibility determination means, determines the possibility of contact between the gripped object 109 and the surrounding objects 108.
  • An alarm device 5, which is alarm means, notifies of the possibility of contact by sound, images, or the like.
  • the contact possibility determination device 4 notifies the control device 106 of the possibility of contact, thereby restricting the operation of the manipulator and preventing contact.
  • the contact possibility determination device 4 and the alarm device 5 are illustrated separately from the manipulator 101 and the control device 6, but may be configured to be incorporated in the manipulator 101 and the control device 6.
  • the image input apparatus 2 has a function of acquiring a distance image including a grayscale image and distance information to each point of the image.
  • The image input device 2 can be a stereo camera that measures distance from the image disparity between two or more cameras, a laser radar that scans a laser beam in one or two dimensions and measures distance from the round-trip time until the beam hits an object and returns, a range-image camera whose image sensor likewise measures distance from the round-trip time of light, or a combination of such devices with an ordinary two-dimensional CCD camera.
  • A plurality of the above image input devices may also be used in combination.
  • the surroundings monitoring device 3 has a function of acquiring a distance image.
  • As the warning device 5, a buzzer, a lamp, a display, or the like is used.
  • the distance image is a term used in the field of image processing technology, and means data in which the value of the distance to the object is stored for each pixel of the image.
  • a normal image (grayscale image) is a two-dimensional representation of the brightness of light in a number of directions divided into a grid pattern, that is, the intensity of light reflected by an object in that direction.
  • the distance image stores the distance to the object in that direction instead of the brightness of the light for each point in the image.
  • a distance image is two-dimensional array data that stores the distance to an object, and is generally used to store the output of a sensor that can obtain depth information such as a stereo camera.
  • the manipulator 101 has a function of recognizing the position and shape of the gripping object 109 when the gripping unit 102 is gripping the target gripping object 109.
  • The following description assumes that the gripped object 109 is being held.
  • In the case of an automatic manipulator, for example, the target object is recognized by image recognition using shape features based on the image information acquired by the image input device 2.
  • The position of an easy-to-grip point on the object 109 is then measured, and the grip portion is positioned there to grip the object 109.
  • the operator visually determines the position and shape of the gripping object 109 and moves the manipulator by manual operation to grip the gripping object 109.
  • The manipulator 101 of this embodiment moves while gripping the object 109, predicts the motion of the gripped object in the image near the grip portion 102 from the change in the relative position between the grip portion 102 and the image input device 2, and extracts from the image of the image input device 2 the portion that moves as predicted.
  • In this way, the gripped object 109 can be extracted from the image with high reliability.
  • FIG. 3 is a flowchart showing an overall outline of a processing example of the image storage unit 301, the image extraction unit 302, and the contact possibility determination device 4.
  • This process starts when the gripping object 109 is gripped by the gripping part 102 of the manipulator 101.
  • First, stereoscopic image information around the grip portion 102, that is, a grayscale image 12 and a distance image 13, is acquired from the image input device 2, and at the same time the angle 14 of the wrist joint 112 measured by the angle sensor 122 is acquired through the control device 106 and stored in the image storage unit 301 (step S1).
  • The information acquired before the movement of the manipulator 101 is referred to in the subsequent processing as the grayscale image 12a, the distance image 13a, and the wrist joint angle 14a.
  • Next, in order to move the gripped object 109, the manipulator 101 starts moving, and the process waits until the grip portion 102 has moved slightly (step S2).
  • After the movement, the grayscale image 12 and distance image 13 around the grip portion 102 are again acquired, together with the angle 14 of the wrist joint 112, and stored in the image storage unit 301 (step S3); the information acquired after the movement is referred to in the subsequent processing as the grayscale image 12b, the distance image 13b, and the wrist joint angle 14b.
  • Next, the position and shape of the gripped object 109 are determined by the image extraction unit 302 based on the information acquired in steps S1 and S3 (step S4).
  • The contact possibility determination device 4 then acquires position information of the surrounding objects from the surroundings monitoring device 3 and compares it with the position of the gripped object 109 determined in step S4, to determine whether there is a possibility of contact between the gripped object 109 and the surrounding objects 108 (step S5).
  • The result of the contact possibility determination in step S5 is evaluated (step S6), and if there is a possibility of contact, an alarm is output by the alarm device 5 (step S7).
  • As an alarm output method, for example, in the case of an automatic manipulator the control device 106 is notified and the manipulator is stopped, and in the case of an operated manipulator the operator is informed by voice output or by lighting a warning lamp.
  • If the contact possibility determination finds no possibility of contact, the process proceeds to the next step. Finally, it is determined whether the grip portion 102 of the manipulator 101 is still gripping the object 109 (step S8).
  • If the object is still being gripped, the grayscale image 12b, the distance image 13b, and the wrist joint angle 14b are stored in place of the pre-movement information (step S9), the process returns to step S2, and the processing is repeated so that the possibility of contact between the gripped object and surrounding objects is monitored continuously. If it is determined in step S8 that the grip portion 102 is no longer gripping the object, the process ends.
  • Next, a processing example of the position and shape determination process for the gripped object (step S4) in the image extraction unit 302 will be described with reference to the flowchart of FIG. 4.
  • In this process, the position and shape of the gripped object 109 are determined based on the grayscale images, distance images, and wrist joint angle information acquired in steps S1 and S3.
  • First, the post-movement grayscale image 12b and distance image 13b are divided into grid-like blocks (step S31).
  • The block size is determined in advance; the smaller the block, the higher the positional resolution, and the larger the block, the better the accuracy of the image matching described later. Usually the block size is set to about 5 × 5 to 25 × 25 pixels.
  • Since the image input device 2 is attached to the forearm 103, the forearm 103 appears in the image as shown in the grayscale image example of FIG. 5; because the forearm 103 always appears at the same position in the image even when the grip portion 102 moves, that region (the hatched area in FIG. 5) is excluded from the subsequent processing.
  • Next, one block of the divided post-movement grayscale image 12b is taken out and the following processing is performed on it as block B(i) (step S32).
  • The subsequent image processing will be described with reference to the image processing example shown in FIG. 6. First, the spatial position Pb(i) of the point Q(i) shown in block B(i) is obtained (step S33): the point on the object at the center of block B(i) is taken as Q(i), the position (two-dimensional coordinates) of Q(i) on the image is obtained as Rb(i), the distance values of the pixels inside block B(i) of the distance image 13b are averaged to obtain the distance to Q(i), and the three-dimensional position Pb(i) of Q(i) relative to the image input device 2 is obtained by back-projection (J1 in FIG. 6).
  • Next, the spatial position Pa(i) that Q(i) occupied before the movement, assuming that the point Q(i) is fixed to the grip portion 102, is obtained (step S34). That is, assuming that Q(i) is a point on the gripped object 109 fixed to the grip portion 102, the three-dimensional position (relative to the image input device 2) that the point would occupy if the wrist joint 112 were rotated back from the post-movement angle 14b to the pre-movement angle 14a is obtained by coordinate transformation, and this is defined as Pa(i) (J2 in FIG. 6).
  • Next, the position (two-dimensional coordinates) Ra(i) at which the spatial position Pa(i) appears on the image of the image input device 2 is obtained by projection transformation (step S35) (J3 in FIG. 6). Note that the spatial position Pa(i) is, like Pb(i), a position relative to the image input device 2.
  • Next, the image of block B(i) in the post-movement grayscale image 12b and the partial image 21 at the position Ra(i) in the pre-movement grayscale image 12a are compared to determine whether they match (step S36). If they match (step S37), a gripped-object mark indicating that the point lies on the gripped object 109 is attached to block B(i) (step S38).
  • The position Ra(i) is the position at which the point Q(i) shown in block B(i) of the post-movement grayscale image 12b is presumed to have appeared in the pre-movement grayscale image 12a, under the assumption that Q(i) is part of the gripped object 109. Therefore, if the point Q(i) really is part of the gripped object 109, the image of block B(i) and the partial image 21 should match.
  • On the other hand, if the assumption does not hold and Q(i) is a background object rather than part of the gripped object 109, the images do not match, because when the manipulator 101 moves, a background point Q(i) appears in the pre-movement grayscale image 12a at a location different from Ra(i). Therefore, when the images match, it can be determined that the point Q(i) is part of the gripped object 109.
  • An image matching technique such as normalized correlation is used to determine whether the images match: the normalized correlation value between the image of block B(i) in the post-movement grayscale image 12b and the partial image 21 at the position Ra(i) in the pre-movement grayscale image 12a is computed, and if the value is larger than a predetermined threshold, the images are determined to match (a minimal sketch of this block-matching test is given after this list).
  • Finally, it is determined whether all blocks have been processed (step S39). If any block has not yet been processed, steps S32 to S38 above are repeated until all blocks are done, and then this process ends. With the above processing, the three-dimensional positions of points on the gripped object are detected, and the position and shape of the gripped object become known.
  • the manipulator is moved while the gripping object is gripped, and the gripping object motion in the image at that time is determined using the relative position of the gripping unit and the image input device measured by the sensor.
  • the relative position between the grip unit 102 and the image input device 2 is measured, and the image is captured so that the grip unit 102 shown in the image does not move before and after the movement of the manipulator 101.
  • Next, a processing example of the contact possibility determination process (step S5) of FIG. 3 will be described with reference to the flowchart of FIG. 7.
  • This process is performed by the contact possibility determination device 4: the positions of the points on the gripped object 109 extracted by the image extraction process described above are compared with the positions of surrounding objects obtained from the surroundings monitoring device 3, and if any point is close, it is determined that there is a possibility of contact.
  • First, position information of surrounding objects is obtained from the surroundings monitoring device 3 (step S41).
  • Next, one block with the gripped-object mark is extracted (step S42).
  • The spatial position Pb(i) corresponding to the extracted block B(i) is compared with the positions of the surrounding objects acquired in step S41 (step S43). If the distance is smaller than a predetermined threshold, the positions are determined to be close (step S44); if even one block is close, it is determined that there is a possibility of contact between the gripped object 109 and a surrounding object (step S45), and the process ends.
  • If it is determined in step S44 that the positions are not close, it is determined whether all blocks with the gripped-object mark have been processed (step S46). When all blocks have been processed, it is determined that there is no possibility of contact (step S47) and the processing ends; if any block has not yet been processed, steps S42 to S44 above are repeated.
  • As described above, the portion of the gripped object 109 in the image is detected by image matching, using the difference between the motion of the gripped object 109 and that of the surrounding objects 108 on the image when the manipulator 101 is moved. Therefore, the shape and position of the gripped object 109 can be reliably detected even when the background is complicated or contains moving parts. For this reason, even in a general usage environment, the possibility of the gripped object 109 approaching surrounding objects is reliably determined, and when an object is approaching the gripped object 109, the result can be used to warn the operator of the manipulator 101.
  • Because the image input device 2 is attached to the forearm 103, the gripped object 109 can always be kept near the center of the visual field. For this reason, compared with the case where the image input device 2 is attached to the base 105 or the like, the gripped object 109 moves little in the field of view (in the image), so it can be detected reliably, and because the viewing angle can be kept narrow, high-resolution information can be obtained. Also, compared with the case where the image input device 2 is attached to the grip portion 102 itself, the distance between the image input device 2 and the gripped object 109 is moderately large, so the overall shape of the gripped object 109 can easily be monitored.
  • In the embodiment above, the manipulator determines the state of approach to the surrounding objects 108 using the current position of the gripped object 109 detected from the image information acquired by the image input device 2, and the surroundings monitoring device 3 is provided separately from the image input device 2.
  • However, the surrounding objects 108 may also be detected using the image input device 2.
  • map information prepared in advance may be stored in the contact possibility determination device 4, and position information of surrounding objects may be acquired by referring to the map information.
  • When the image input device 2 also has the function of acquiring images of the surrounding objects 108, the gripped object 109 is extracted from the stereoscopic image input by the image input device 2 by the method described above, the background excluding the extracted gripped object 109 is extracted as the surrounding objects 108, and the three-dimensional coordinates of those objects are stored.
  • Since the positions of the surrounding objects 108 are basically not changed by the operation of the manipulator 101, it is sufficient for the image input device 2 and the surroundings monitoring device 3 to acquire a distance image of the surroundings.
  • In the above embodiment, a stereoscopic image sensor such as a stereo camera is used as the image input device 2, but as a simpler configuration a monocular camera such as a CCD camera may be used instead.
  • In that case, the distance may be estimated from the monocular camera image by assuming that the gripped object 109 lies on a virtual plane fixed to the grip portion 102.
  • The virtual plane is preferably set where the gripped object 109 is likely to be, for example a plane that passes through the tip of the grip portion 102 and is orthogonal to the grip portion.
  • In the above embodiment, the image input device 2 is attached to the forearm 103 of the manipulator, but it can also be attached elsewhere.
  • For example, the image input device 2 may be attached to the base 105.
  • In that case, in step S34 of the shape determination process shown in FIG. 4, the spatial position Pa(i) of the point Q(i) before the movement should be obtained taking into account the angle changes of joints other than the wrist as well.
  • With this arrangement the gripped object 109 moves more within the field of view.
  • However, the position of the image input device 2 is fixed, so the structure becomes simpler, and there is the advantage that the image input device 2 can also serve as the surroundings monitoring device 3.
  • If the image input device 2 is instead attached to the grip portion 102, the spatial position Pa(i) in step S34 simply equals the post-movement position Pb(i).
  • Although the gripped object 109 is then very close to the image input device 2, which can be a problem, there is the advantage that the calculation process is simplified.
  • In the above embodiment, an angle sensor that measures the angle of the wrist joint is used as the gripping-portion relative position detection unit that detects the relative positional relationship between the image input device and the grip portion.
  • However, the relative positional relationship may also be obtained by detecting the positions of the image input device and the grip portion by other methods.
  • For example, the position and posture may be measured by photographing the objects to be measured with an externally placed camera.
  • FIG. 8 shows a configuration example when the manipulator of this embodiment is mounted on a work machine used for forestry or demolition work.
  • The work machine 201 includes, as a manipulator, a grapple 202 serving as the gripping part, an arm 203, and an arm 204.
  • The work machine 201 is used for applications such as grabbing an object with the grapple 202 for demolition or transportation.
  • the image input device 2 constituting the manipulator of the present embodiment is attached to the bottom surface of the arm 203 corresponding to the forearm. This location is suitable for capturing the object gripped by the grapple 202 because the positional relationship with the grapple 202 does not change greatly and is appropriately separated from the grapple 202.
  • the surrounding monitoring device 3 is attached to the upper part of the cabin 209 to cover a wide field of view.
  • An angle sensor 222 is attached to the joint 212 corresponding to the wrist as a wrist angle sensor.
  • The contact possibility determination device 4 and the alarm device 5 are illustrated as installed inside the cabin 209, but they may be installed elsewhere as long as they do not interfere with the operation of the manipulator and can communicate with each device.
  • The contact possibility determination device 4 uses the stereoscopic image around the grapple 202 acquired by the image input device 2 and the angle of the joint 212 acquired from the angle sensor 222 to detect the object gripped by the grapple 202, compares it with the positions of the surrounding objects detected by the surroundings monitoring device 3, and determines whether there is a possibility of contact. If there is a possibility of contact, the alarm device 5 informs the operator, for example by voice, by an image, or by vibrating the operation lever. In addition, a series of image information related to the processing in the contact possibility determination device 4 may be displayed by display means (not shown).
  • By applying the manipulator of this embodiment, the burden on the operator can be reduced. Also, in complex work environments with many obstacles, such as forestry, demolition work, and construction work, warning in advance of the possibility of contact between the gripped object and the surrounding objects 108 allows work to proceed safely and quickly.
  • As described above, the shape of the object gripped by the manipulator can be reliably recognized, and the gripped object can be prevented from coming into contact with surrounding objects, which increases the safety of the manipulator.
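
To make the block-matching test of steps S33 to S38 (referenced above in the item on normalized correlation) concrete, the sketch below shows one way it could be coded. This is not the patent's implementation: it is a minimal Python/NumPy illustration in which the pinhole camera parameters, the rigid transform derived from the wrist angles 14a and 14b, and the correlation threshold are assumed example values.

```python
import numpy as np

def backproject(uv, depth, fx, fy, cx, cy):
    """Pixel coordinates + depth -> 3D point in the camera frame (pinhole model)."""
    u, v = uv
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def project(p, fx, fy, cx, cy):
    """3D point in the camera frame -> pixel coordinates (pinhole model)."""
    x, y, z = p
    return np.array([fx * x / z + cx, fy * y / z + cy])

def ncc(a, b):
    """Normalized cross-correlation of two equally sized grayscale patches."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def block_is_on_gripped_object(gray_a, gray_b, depth_b, block, T_a_from_b,
                               fx, fy, cx, cy, threshold=0.8):
    """Return True if block B(i) of the post-movement image behaves like a point
    rigidly fixed to the grip portion, i.e. is presumed part of the gripped object.

    gray_a / gray_b : grayscale images before / after the movement (12a / 12b)
    depth_b         : distance image after the movement (13b)
    block           : (row, col, size) of block B(i) in gray_b
    T_a_from_b      : assumed 4x4 transform describing the camera-relative pose
                      change of the grip portion, derived from angles 14a and 14b
    """
    r, c, s = block
    patch_b = gray_b[r:r + s, c:c + s]
    # Step S33: block centre Rb(i) and mean block depth give the 3D point Pb(i).
    rb = np.array([c + s / 2.0, r + s / 2.0])
    pb = backproject(rb, float(depth_b[r:r + s, c:c + s].mean()), fx, fy, cx, cy)
    # Step S34: position Pa(i) the point had before the move, if fixed to the gripper.
    pa = (T_a_from_b @ np.append(pb, 1.0))[:3]
    # Step S35: project Pa(i) into the pre-movement image to get Ra(i).
    ra = project(pa, fx, fy, cx, cy)
    r0, c0 = int(round(ra[1] - s / 2.0)), int(round(ra[0] - s / 2.0))
    if r0 < 0 or c0 < 0 or r0 + s > gray_a.shape[0] or c0 + s > gray_a.shape[1]:
        return False  # predicted position falls outside the pre-movement image
    patch_a = gray_a[r0:r0 + s, c0:c0 + s]
    # Steps S36-S38: normalized correlation against the partial image 21 at Ra(i).
    return ncc(patch_b, patch_a) > threshold
```

In this sketch the same test would be applied to every block of the divided image, and the blocks for which it returns True correspond to the blocks that receive the gripped-object mark in step S38.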

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Mining & Mineral Resources (AREA)
  • Civil Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • Structural Engineering (AREA)
  • Manipulator (AREA)

Abstract

The position and shape of an object (109) held by a holding section (102) are detected based on an image of the periphery of the holding section (102), acquired by an image input device (2) provided on an arm (103) of a manipulator (101), and on the positional change of the manipulator (101) detected by an angle sensor (122) provided at the joint (112) between the holding section (102) and the arm (103). A contact possibility determination device (4) compares the detected shape and position of the object (109) with the position of a peripheral object (108) detected by a periphery monitoring device (3) to determine the possibility of contact between the objects. When there is a possibility that the object (109) will come into contact with the peripheral object (108), movement of the manipulator (101) is stopped, or alarm means (5) warns of the approach between the held object (109) and the peripheral object (108).

Description

Specification
Manipulator
Technical Field
[0001] The present invention relates to a manipulator that grips an object with an arm and performs positioning and transportation.
Background Art
[0002] A manipulator is a device in which joints and arms are combined like a human arm, and is a generic term for devices that grip an object and perform positioning and transportation. A manipulator generally includes a gripping mechanism that grips the object and an arm mechanism that moves the gripping mechanism; there are manipulators whose arm motion is controlled automatically and manipulators operated by a person. Examples of manipulators whose arms are controlled automatically include industrial robot arms used for parts transportation and assembly in factories, and the arms of service robots that perform tasks such as housework and nursing care in public spaces, offices, and homes. Examples of manipulators whose arms are operated by a person include construction machines that handle large, heavy objects, master-slave manipulators used in space environments and nuclear facilities, and surgical support manipulators for medical use.
[0003] Conventionally, safety techniques for preventing a manipulator from coming into contact with surrounding objects and people have been proposed. For arm-type robots, it has been proposed to install intrusion detection sensors around the work area of the robot arm and to make an emergency stop when an obstacle enters. For construction machines, a warning device has been proposed in which workers wear infrared emitters, an infrared receiver is mounted on the machine, and when a worker enters an alarm area set around the machine, the receiver detects the worker and a warning is issued.
[0004] On the other hand, it is also necessary to prevent an object gripped by the manipulator from coming into contact with surrounding objects and people. Conventionally, however, no safety technique concerning contact of the gripped object has been proposed that can handle the case of gripping an unknown object whose shape is not given in advance.

[0005] In order to prevent an unknown gripped object from contacting its surroundings, a means of knowing the dimensions and shape of the object being gripped is required. A related technique obtains information on the shape of an object for the purpose of gripping it with a manipulator: based on image information acquired by a visual sensor provided on an arm-type robot, the object to be grasped is fitted to a simple shape by image recognition, its size and orientation are calculated, a grasping method is derived from them, and an object of arbitrary shape is thereby reliably grasped.
[0006] Japanese Patent Application Laid-Open No. 2000-202790 discloses a robot apparatus in which an ultrasonic sensor provided on the arm-type robot body detects surrounding moving objects and the moving speed of the robot arm is reduced when the distance from the body falls within a certain range.
[0007] Japanese Patent Application Laid-Open No. 2005-128959 discloses a robot apparatus, and a learning method therefor, that cuts out an object region image representing a learning target object by bringing a movable part such as an arm unit into contact with the object and moving it, extracts feature quantities from the object region image, and registers them in an object model database. The means for extracting the image of the target object uses a method of extracting, from the captured images, the region that changed before and after the target object was moved.
[0008] Conventionally, when a manipulator grips an unknown object, no suitable safety technique existed for preventing the gripped object from contacting surrounding objects or people. Moreover, the conventional method of obtaining shape information of the object to be grasped by image recognition can be applied to a well-ordered environment such as a factory, but in complex environments such as ordinary offices, homes, and outdoor sites, it is difficult to separate the object to be grasped from the background objects appearing in the image.
[0009] More specifically, in complex environments such as ordinary offices, homes, and outdoor sites, surrounding objects other than the object to be gripped may also move. As in Japanese Patent Application Laid-Open No. 2005-128959, a known method for extracting a moving object from an image is to extract the changed portion from a plurality of images taken at different times; however, if surrounding objects also move, the moving background is mistaken for the target object and the object cannot be extracted correctly.

[0010] Thus, in order to handle an unknown object safely it is necessary to recognize the object being gripped, but with the conventional techniques it was difficult to reliably recognize the object to be grasped in a general environment. This has made it difficult to put into practical use robots with automatic manipulators that handle unknown objects, such as office and home service robots. In the case of operated manipulators such as construction machines, safety has had to be ensured by the operator's attentiveness, placing a heavy burden on the operator.
[0011] Patent Document 1: Japanese Patent Application Laid-Open No. 2000-202790
Patent Document 2: Japanese Patent Application Laid-Open No. 2005-128959
Disclosure of the Invention
Problems to be Solved by the Invention
[0012] The present invention has been made in view of the above, and its object is to provide a manipulator that, in situations where both the gripped object and surrounding objects can move arbitrarily, can reliably recognize the shape of an unknown gripped object, perform work based on the recognized shape, and prevent the gripped object from contacting surrounding objects and people, thereby improving safety.

Means for Solving the Problems
[0013] The manipulator of the present invention includes an arm; arm driving means for driving the arm; a gripping portion provided on the arm; image input means for acquiring an image of the periphery of the gripping portion; gripping-portion relative position detection means for detecting the relative position of the gripping portion with respect to the image input means; and storage means for storing a plurality of images acquired by the image input means and the relative positions of the gripping portion detected by the gripping-portion relative position detection means. Based on the plurality of images stored in the storage means and the relative positions of the gripping portion with respect to the image input means, the manipulator detects the position and shape of the gripped object.
Effects of the Invention
[0014] According to the present invention, the shape of the object gripped by the manipulator can be recognized and the gripped object can be prevented from contacting surrounding objects, so that the safety of the manipulator can be improved.
Brief Description of Drawings
[0015] FIG. 1 is a schematic diagram showing an example of the overall configuration according to an embodiment of the present invention.
FIG. 2 is a block diagram showing a system configuration example of the manipulator according to an embodiment of the present invention.
FIG. 3 is a flowchart showing an overall processing example of the contact possibility determination processing according to an embodiment of the present invention.
FIG. 4 is a flowchart showing a processing example of the position and shape determination processing for a gripped object according to an embodiment of the present invention.
FIG. 5 is an explanatory diagram showing an example of a grayscale image according to an embodiment of the present invention.
FIG. 6 is an explanatory diagram showing an example of image processing in the position and shape determination processing for a gripped object according to an embodiment of the present invention.
FIG. 7 is a flowchart showing a processing example of the contact possibility determination processing according to an embodiment of the present invention.
FIG. 8 is a schematic diagram showing an example of the overall configuration according to another embodiment of the present invention.
Best Mode for Carrying Out the Invention
[0016] An embodiment of the present invention will now be described with reference to the accompanying drawings.
[0017] FIG. 1 shows an example of the overall configuration of the manipulator in this embodiment, and FIG. 2 shows an example of its system configuration. First, the overall configuration of this embodiment will be described with reference to FIG. 1.
[0018] The manipulator 101 in this embodiment includes a base 105, a plurality of arms 103 and 104 connected from the base, and a grip portion 102 attached to the tip of the arm. In FIG. 1 the number of arms is two, but more may be provided. Between the grip portion 102, the arms 103 and 104, and the base 105 there are movable joints 112, 113, and 114. Each joint is provided with an actuator 132, 133, 134 that drives it (shown in FIG. 1 overlaid on joints 112, 113, and 114) and, in order to measure the amount of change of the movable part, an angle sensor 122, 123, 124 that measures the angle of joint 112, 113, 114 (also shown overlaid on the joints in FIG. 1). A joint may have a plurality of rotational degrees of freedom, in which case the angle sensor acquires rotation angles in a plurality of directions. A control device 106 is also provided that receives the angle information acquired by the angle sensors and controls the actuators to move the manipulator 101. The installation position of the control device 106 varies with the use and shape of the manipulator 101; in FIG. 1 it is shown near the manipulator 101.
[0019] In general, a manipulator controls the position of the gripping part mainly by using the arms connected in series from the base, and controls the posture of the gripping part by rotating the joint between the arm at the tip and the gripping part. Here, by analogy with the human arm, the arm 103 just before the grip portion 102 is called the forearm 103, and the joint 112 between the forearm 103 and the grip portion 102 is called the wrist joint 112. When the wrist joint 112 is rotated, the position of the tip of the grip portion 102 also moves, but since the grip portion is shorter than the arms 103 and 104, the amount of positional change is relatively small. The part corresponding to the wrist joint may consist of a plurality of joints and short arms; here these are collectively regarded as a single wrist joint.
[0020] Next, a system configuration example of the manipulator 101 in this embodiment will be described with reference to FIG. 1 and FIG. 2. The manipulator 101 of this embodiment is equipped with an image input device 2, which is image input means for acquiring a three-dimensional image of objects near the grip portion 102, and a surroundings monitoring device 3, which is surroundings monitoring means capable of detecting the positions of surrounding objects 108. In this embodiment, the image input device 2 is attached to the forearm 103, and the surroundings monitoring device 3 is provided on the base 105 of the manipulator 101 so that it can acquire images of a wider area than the image input device 2. The image input device 2 may be provided on the arm 104 as long as it can acquire images of the vicinity of the grip portion 102, but since the joint 113 would then lie between them, it is preferably provided on the forearm (arm) 103 if possible. An angle sensor 122 that measures the angle of the wrist joint 112 is used as gripping-portion relative position detection means for detecting changes in the position and posture of the grip portion 102 relative to the image input device 2. An image storage unit 301 is provided as storage means for storing, in combination, the stereoscopic image data acquired by the image input device 2 and the angle information of the angle sensor 122 of the wrist joint 112 acquired from the control device 106. An image extraction unit 302 is also provided, which uses the plurality of images stored in the image storage unit 301 and the relative position and posture information of the grip portion to extract the image of the gripped object 109 and detect its position and shape. Further provided are a contact possibility determination device 4, which is contact possibility determination means for determining, based on the detected position and shape of the gripped object 109 and the position information of the surrounding objects 108 detected by the surroundings monitoring device 3, the possibility of contact between the gripped object 109 and the surrounding objects 108, and an alarm device 5, which is alarm means for notifying of the possibility of contact by sound, images, or the like. The contact possibility determination device 4 also notifies the control device 106 of the possibility of contact, thereby restricting the operation of the manipulator and preventing contact. In FIG. 1, the contact possibility determination device 4 and the alarm device 5 are shown separately from the manipulator 101 and the control device 6, but they may be incorporated into the manipulator 101 or the control device 6.
[0021] The image input device 2 of this embodiment has the function of acquiring a grayscale image and a distance image consisting of distance information to each point in the image. As the image input device 2, it is possible to use a stereo camera that measures distance from the image disparity between two or more cameras; a laser radar that scans a laser beam in one or two dimensions and measures distance from the round-trip time until the beam hits an object and returns; a range-image camera whose image sensor likewise measures distance from the round-trip time of light; or a combination of these devices with an ordinary two-dimensional CCD camera. A plurality of such image input devices may also be used in combination. The surroundings monitoring device 3, like the image input device 2, has the function of acquiring a distance image. As the alarm device 5, a buzzer, a lamp, a display, or the like is used.
[0022] Here, the term distance image is used in the field of image processing and means data in which the value of the distance to the object is stored for each pixel of the image. An ordinary image (grayscale image) stores, as two-dimensional array data, the brightness of light in a large number of directions divided into a fine grid, that is, the intensity of the light reflected by the object present in each direction. A distance image, in contrast, stores for each point of the image the distance to the object in that direction instead of the brightness of the light. In other words, a distance image is two-dimensional array data storing distances to objects, and is commonly used to store the output of sensors, such as stereo cameras, from which depth information can be obtained.
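
As a concrete illustration of this data structure (a sketch, not taken from the patent), a distance image can be held as a two-dimensional array of the same height and width as the grayscale image, with each element storing a distance instead of a brightness; the array sizes below are assumed example values.

```python
import numpy as np

# A 480x640 grayscale image (brightness per pixel) and the corresponding
# distance image (distance in metres per pixel), e.g. from a stereo camera.
gray = np.zeros((480, 640), dtype=np.uint8)
dist = np.full((480, 640), np.inf, dtype=np.float32)

# Both are indexed the same way: pixel (row, col) gives brightness and range.
row, col = 240, 320
brightness = gray[row, col]   # intensity of light reflected from that direction
range_m = dist[row, col]      # distance to the object in that direction
```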
[0023] The manipulator 101 has a function of recognizing the position and shape of the gripped object 109 while the grip portion 102 is gripping it. The following description assumes that the gripped object 109 is being held. As for the method of gripping the object 109, in the case of an automatic manipulator, for example, the target object is recognized by image recognition using shape features based on the image information acquired by the image input device 2, the position of an easy-to-grip point on the object 109 is measured, and the grip portion is positioned there to grip the object 109. In the case of an operated manipulator, the operator visually judges the position and shape of the object 109 and grips it by moving the manipulator manually.
[0024] The manipulator 101 of this embodiment moves while gripping the object 109, predicts the motion of the gripped object in the image near the grip portion 102 from the change in the relative position between the grip portion 102 and the image input device 2, and extracts from the image of the image input device 2 the portion that moves as predicted, thereby extracting the gripped object 109 from the image with high reliability.
[0025] Next, the operations of the image storage unit 301, the image extraction unit 302, and the contact possibility determination device 4 will be described with reference to FIG. 3. FIG. 3 is a flowchart showing an overall outline of a processing example of the image storage unit 301, the image extraction unit 302, and the contact possibility determination device 4.
[0026] This process starts when the gripped object 109 is gripped by the gripping portion 102 of the manipulator 101. First, stereoscopic image information around the gripping portion 102, that is, a grayscale image 12 and a distance image 13, is acquired from the image input device 2, and at the same time the angle 14 of the wrist joint 112 measured by the angle sensor 122 is acquired through the control device 106 and stored in the image storage unit 301 (step S1). The information acquired before the manipulator 101 moves is referred to in the subsequent processing as the grayscale image 12a, the distance image 13a, and the wrist joint angle 14a. Next, the manipulator 101 starts moving in order to move the gripped object 109, and the process waits until the gripping portion 102 has moved slightly (step S2). After the gripping portion 102 has moved, the grayscale image 12 and the distance image 13 around the gripping portion 102 are acquired from the image input device 2, the angle 14 of the wrist joint 112 is acquired from the control device 106, and they are stored in the image storage unit 301 (step S3). The information acquired after the movement is referred to in the subsequent processing as the grayscale image 12b, the distance image 13b, and the wrist joint angle 14b. Next, the image extraction unit 302 determines the position and shape of the gripped object 109 based on the information acquired in steps S1 and S3 (step S4). Next, the contact possibility determination device 4 acquires the position information of surrounding objects from the surroundings monitoring device 3 and compares it with the position of the gripped object 109 determined in step S4, thereby determining whether there is a possibility of contact between the gripped object 109 and a surrounding object 108 (step S5). The result of the contact possibility determination of step S5 is then evaluated (step S6), and if there is a possibility of contact, an alarm is output by the alarm device 5 (step S7). As the alarm output method, for example, an automatic manipulator can notify the control device 106 and stop the manipulator, while an operator-controlled manipulator can inform the operator by voice output, lighting of a warning lamp, or the like. If the contact possibility determination finds no possibility of contact, the process proceeds to the next step. Finally, it is determined whether the gripping portion 102 of the manipulator 101 is still gripping the gripped object 109 (step S8). If the object is still being gripped, the grayscale image 12b, the distance image 13b, and the wrist joint angle 14b acquired in step S3 are stored in place of the pre-movement information (step S9), and the process returns to step S2 and repeats, continuing to monitor the possibility of contact between the gripped object and the surrounding objects. If the gripping portion 102 is no longer gripping the object in step S8, the process ends.
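The following Python sketch illustrates one way the monitoring loop of FIG. 3 could be organized; the camera, controller, surroundings, and alarm objects, their method names, and the fixed wait time are assumed interfaces introduced here for illustration only and are not part of the disclosure.

```python
import time

def monitor_gripped_object(camera, controller, surroundings, alarm,
                           extract_object, contact_possible, wait_s=0.1):
    """Monitoring loop of FIG. 3: runs from grip (S1) until release (S8)."""
    # S1: capture pre-movement grayscale image, distance image and wrist angle
    gray_a, dist_a = camera.capture()
    angle_a = controller.wrist_angle()

    while controller.is_gripping():                      # S8: still gripping?
        time.sleep(wait_s)                               # S2: let the gripper move a little
        gray_b, dist_b = camera.capture()                # S3: post-movement images
        angle_b = controller.wrist_angle()

        # S4: extract 3-D points belonging to the gripped object (FIG. 4)
        object_points = extract_object(gray_a, dist_a, angle_a,
                                       gray_b, dist_b, angle_b)

        # S5/S6: compare with surrounding objects (FIG. 7)
        if contact_possible(object_points, surroundings.object_positions()):
            alarm.warn()                                 # S7: stop or warn the operator

        # S9: post-movement data becomes the new pre-movement data
        gray_a, dist_a, angle_a = gray_b, dist_b, angle_b
```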
[0027] Each of the above processes will now be described in more detail. First, an example of the position/shape determination process for the gripped object (step S4) performed by the image extraction unit 302 will be described with reference to the flowchart of FIG. 4.
[0028] In this process, the position and shape of the gripped object 109 are determined based on the grayscale images, distance images, and wrist joint angle information acquired in steps S1 and S3. First, as shown in FIG. 6, the post-movement grayscale image 12b and distance image 13b (not shown) are divided into grid-like blocks (step S31). The block size is determined in advance. Smaller blocks give higher positional resolution, while larger blocks improve the accuracy of the image matching described later; the size is usually set to about 5 x 5 to 25 x 25 pixels. In this embodiment, because the image input device 2 is attached to the forearm 103, the forearm 103 appears in the image, as shown in the example grayscale image of FIG. 5. Since the forearm 103 always appears at the same position in the image even when the gripping portion 102 moves, that portion (the hatched area in FIG. 5) is excluded from the subsequent processing.
[0029] Next, one block of the divided post-movement grayscale image 12b is taken out and the following processing is performed on it as block B(i) (step S32). The subsequent image processing is described with reference to the image processing example shown in FIG. 6. First, the spatial position Pb(i) of the point Q(i) appearing in block B(i) is obtained (step S33). That is, the point of the object appearing at the center of block B(i) is designated Q(i), and the position (two-dimensional coordinates) of the point Q(i) in the image is obtained and designated Rb(i). Then, based on the distance information of each pixel inside block B(i) of the post-movement distance image 13b, the distance values of the pixels are averaged to obtain the average distance to the point Q(i) appearing in block B(i), and the three-dimensional spatial position Pb(i) of the point Q(i) relative to the image input device 2 is obtained by inverse projection transformation (J1 in FIG. 6).
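A minimal sketch of steps S31 to S33, assuming a pinhole camera model for the image input device 2 and treating the distance image as a per-pixel depth map; the intrinsic parameters fx, fy, cx, cy, the block size, and the forearm mask are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def block_centers_to_3d(gray_b, dist_b, block=15,
                        fx=500.0, fy=500.0, cx=None, cy=None, mask=None):
    """Steps S31-S33: divide the post-movement images into blocks and back-project
    each block centre Q(i) to a camera-relative 3-D point Pb(i)."""
    h, w = gray_b.shape
    cx = w / 2.0 if cx is None else cx
    cy = h / 2.0 if cy is None else cy
    points = []                                   # list of (Rb(i), Pb(i)) pairs
    for v0 in range(0, h - block + 1, block):
        for u0 in range(0, w - block + 1, block):
            if mask is not None and mask[v0:v0 + block, u0:u0 + block].any():
                continue                          # skip pixels covered by the forearm (FIG. 5)
            u, v = u0 + block / 2.0, v0 + block / 2.0          # Rb(i): image coordinates
            # average the distance values inside the block (treated as depth here)
            z = float(np.mean(dist_b[v0:v0 + block, u0:u0 + block]))
            # inverse projection: pixel + depth -> 3-D position relative to the camera
            pb = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
            points.append(((u, v), pb))
    return points
```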
[0030] Next, the spatial position Pa(i) that Q(i) would have occupied before the movement, on the assumption that the point Q(i) is fixed to the gripping portion 102, is obtained (step S34). That is, assuming that the point Q(i) is a point on the gripped object 109 fixed to the gripping portion 102, the three-dimensional spatial position (relative to the image input device 2) that Q(i) would take if the angle 14 of the wrist joint 112 were rotated from the current post-movement wrist angle 14b back to the pre-movement wrist angle 14a is obtained by coordinate transformation, and this is designated Pa(i) (J2 in FIG. 6). Next, the position (two-dimensional coordinates) Ra(i) at which the spatial position Pa(i) would appear in the image of the image input device 2 is obtained by projection transformation (step S35) (J3 in FIG. 6). Like Pb(i), the spatial position Pa(i) is a position relative to the image input device 2.
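A sketch of steps S34 and S35, assuming the pose of the image input device 2 relative to the gripping portion 102 at each wrist angle is available as a 4 x 4 homogeneous transform (for example from the arm's forward kinematics); the transform names and camera intrinsics are illustrative assumptions.

```python
import numpy as np

def predict_pre_movement_pixel(pb, T_cam_grip_b, T_cam_grip_a,
                               fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Steps S34-S35: assuming Q(i) is rigidly fixed to the gripping portion,
    move Pb(i) back to its pre-movement camera-relative position Pa(i) and
    project it to the pre-movement image position Ra(i).
    T_cam_grip_* map gripping-portion coordinates to camera coordinates."""
    # express the point in gripping-portion coordinates, using the post-movement pose
    p_grip = np.linalg.inv(T_cam_grip_b) @ np.append(pb, 1.0)
    # re-express it in camera coordinates using the pre-movement pose (J2 in FIG. 6)
    pa = (T_cam_grip_a @ p_grip)[:3]
    # projection onto the image of image input device 2 (J3 in FIG. 6)
    u = fx * pa[0] / pa[2] + cx
    v = fy * pa[1] / pa[2] + cy
    return pa, (u, v)
```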
[0031] Next, the image of block B(i) in the post-movement grayscale image 12b is compared with a partial image 21 of the same size as the block, centered at the position Ra(i) in the pre-movement grayscale image 12a, to determine whether the images match (step S36). If the result of the determination is that the images match (step S37), a gripped-object mark indicating that the block lies on the gripped object 109 is attached to block B(i) (step S38).
[0032] Here, the position Ra(i) is the location where the point Q(i) appearing in block B(i) of the post-movement grayscale image 12b is estimated to appear in the pre-movement grayscale image 12a, under the assumption that Q(i) is part of the gripped object 109. Therefore, if the point Q(i) is actually part of the gripped object 109, the image of block B(i) and the partial image 21 should match. Conversely, if Q(i) is, contrary to the assumption, not part of the gripped object 109 but a background object, the images will not match. This is because, when Q(i) is a background object, the position and orientation of the image input device 2 attached to the forearm 103 change as the manipulator 101 moves, so the point Q(i) appears at a location different from the position Ra(i) in the pre-movement grayscale image 12a. For this reason, when the images match, it can be determined that the point Q(i) is part of the gripped object 109.
[0033] An image matching technique such as normalized correlation is used to determine whether the images match. For example, the normalized correlation value between the image of block B(i) in the post-movement grayscale image 12b and the partial image 21 at the position Ra(i) in the pre-movement grayscale image 12a is calculated, and when the value is larger than a predetermined threshold, the images are determined to match.
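A sketch of the normalized correlation test of step S36; the threshold value used here is an assumed example, since the disclosure only specifies that the threshold is determined in advance.

```python
import numpy as np

def normalized_correlation(block_b, patch_a):
    """Zero-mean normalized cross-correlation between block B(i) of image 12b
    and the same-sized partial image 21 around Ra(i) in image 12a."""
    a = block_b.astype(np.float64).ravel()
    b = patch_a.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0                       # flat patches carry no texture to match
    return float(np.dot(a, b) / denom)

def images_match(block_b, patch_a, threshold=0.8):
    """Steps S36/S37: the images are taken to match when the correlation
    exceeds the predetermined threshold (0.8 here is an assumed value)."""
    return normalized_correlation(block_b, patch_a) > threshold
```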
[0034] Finally, it is determined whether all blocks have been processed (step S39). If there are blocks that have not yet been processed, the above steps S32 to S38 are repeated; when all blocks have been processed, this process ends. Through the above processing, the three-dimensional spatial positions of points on the gripped object can be detected, so the position and shape of the gripped object become known.
[0035] The image extraction procedure described above moves the manipulator while the object is being gripped, uses the relative position of the gripping portion and the image input device measured by the sensor to predict how the gripped object will move in the image, and extracts from the image the portions that move as predicted, thereby extracting only the gripped object from the image. Viewed another way, this method measures the relative position of the gripping portion 102 and the image input device 2, deforms the images so that the gripping portion 102 shown in the images does not move between before and after the movement of the manipulator 101, compares the deformed images with each other, and extracts the matching portions, thereby extracting only the gripped object 109 held by the gripping portion 102 from the images.
[0036] Next, an example of the contact possibility determination process of FIG. 3 (step S5) will be described with reference to the flowchart of FIG. 7.
[0037] This process is performed by the contact possibility determination device 4. The positions of the points on the gripped object 109 extracted by the image extraction process described above are compared with the positions of surrounding objects obtained from the surroundings monitoring device 3, and if there is a point that is close, it is determined that there is a possibility of contact.
[0038] First, the position information of surrounding objects is obtained from the surroundings monitoring device 3 (step S41). Next, from among the blocks B(i) of the post-movement grayscale image 12b, a block to which the gripped-object mark has been attached is extracted (step S42). The spatial position Pb(i) corresponding to the extracted block B(i) bearing the gripped-object mark is compared with the positions of the surrounding objects acquired in step S41 (step S43). If, as a result, the distance between them is smaller than a predetermined threshold, the positions are judged to be close (step S44); if there is even one block whose position is close, it is determined that there is a possibility of contact between the gripped object 109 and a surrounding object (step S45) and the process ends. If it is determined in step S44 that the positions are not close, it is determined whether all blocks bearing the gripped-object mark have been processed (step S46); if all blocks have been processed, it is determined that there is no possibility of contact (step S47) and the process ends. If there are blocks that have not yet been processed, the above steps S42 to S44 are repeated.
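A sketch of the distance test of steps S41 to S47; the 3-D point lists, the distance threshold, and its unit are assumptions for illustration.

```python
import numpy as np

def contact_possible(marked_points, surrounding_points, threshold=0.10):
    """Steps S41-S47: report a contact possibility when any point Pb(i) marked as
    lying on the gripped object comes within `threshold` (metres here, an assumed
    unit) of any surrounding-object point."""
    if len(marked_points) == 0 or len(surrounding_points) == 0:
        return False
    p = np.asarray(marked_points, dtype=np.float64)        # N x 3 gripped-object points
    q = np.asarray(surrounding_points, dtype=np.float64)   # M x 3 surrounding-object points
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)   # N x M pairwise distances
    return bool((d < threshold).any())
```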
[0039] As described above, the manipulator 101 of this embodiment detects the portion of the image corresponding to the gripped object 109 by image matching, exploiting the difference between how the gripped object 109 and the surrounding objects 108 move in the image when the manipulator 101 is moved. The shape and position of the gripped object 109 can therefore be detected reliably even when the background is complex or contains moving parts. Consequently, even in ordinary usage environments, the possibility of the gripped object 109 approaching a surrounding object can be determined reliably, and if the determination result shows an object approaching the gripped object 109, the operator of the manipulator 101 can be warned.
[0040] In this embodiment, because the image input device 2 is attached to the forearm 103 of the manipulator, the gripped object 109 can always be captured at the center of the field of view. For this reason, compared with attaching the image input device 2 to the base 105 or the like, the gripped object 109 does not move much within the field of view (within the image), so the gripped object 109 can be detected reliably, and because the viewing angle can be limited, high-resolution information is obtained. In addition, the distance between the image input device 2 and the gripped object 109 is moderately large compared with the case in which the image input device 2 is attached to the gripping portion 102, so the overall shape of the gripped object 109 can easily be monitored. On the other hand, because the image input device 2 is attached to the forearm 103, there is the problem that the image of the gripped object 109 moves within the field of view when the wrist joint 112 moves; however, since the image matching process takes this movement of the gripped object 109 within the field of view into account using the wrist joint angle 14 obtained by the joint angle sensor 122, the gripped object 109 can be detected accurately even when the wrist joint 112 moves.
[0041] In this embodiment, the state of approach to the surrounding objects 108 is determined using the current position of the gripped object 109 detected on the basis of the image information acquired from the image input device 2; however, it is also possible to predict, from the motion of the manipulator 101, the position at which the gripped object 109 will arrive within a given future time and to examine the state of approach to the surrounding objects 108 at that position. This makes it possible to determine the possibility of contact between the gripped object 109 and a surrounding object 108 in advance and to issue a warning earlier, which increases safety during operation of the manipulator 101.
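One possible realization of the look-ahead described in paragraph [0041], assuming that the rigid motion made by the gripping portion during the last control step is available as a homogeneous transform and that the current motion simply continues; this constant-motion extrapolation model is an assumption for illustration, not part of the disclosure.

```python
import numpy as np

def predict_future_points(points_now, T_step, steps_ahead=5):
    """Apply the rigid motion the gripping portion made during the last control
    step (4x4 homogeneous transform T_step, assumed available from the arm's
    kinematics) repeatedly to the marked points, giving the positions the
    gripped object would reach `steps_ahead` steps in the future if the
    current motion continues."""
    p = np.asarray(points_now, dtype=np.float64)
    homogeneous = np.hstack([p, np.ones((len(p), 1))])          # N x 4
    T_future = np.linalg.matrix_power(T_step, steps_ahead)      # repeat the motion
    return (homogeneous @ T_future.T)[:, :3]
```

The predicted points can then be passed to the same contact test as the current points, so a warning can be issued before the approach actually occurs.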
[0042] In this embodiment, the surroundings monitoring device 3 is provided separately from the image input device 2; however, instead of providing the surroundings monitoring device 3 as an independent device, the image input device 2 may be given the function of acquiring images of the surrounding objects 108 so that it also serves as the surroundings monitoring device. Alternatively, map information prepared in advance may be stored in the contact possibility determination device 4, and the position information of surrounding objects may be acquired by referring to that map information.
[0043] Specifically, when the image input device 2 is given the function of acquiring images of the surrounding objects 108, the surrounding objects 108 are extracted from the background of the stereoscopic images input by the image input device 2, excluding the gripped object 109 extracted by the method described above, and the three-dimensional coordinates of those objects are stored. When extracting the surrounding objects 108 from the background excluding the extracted gripped object 109, the positions of the surrounding objects 108 are basically unchanged by the operation of the manipulator 101, so it is sufficient to obtain a distance image of the surroundings with the image input device 2 or the surroundings monitoring device 3.
[0044] In this embodiment, a stereoscopic image sensor such as a stereo camera is used as the image input device 2; however, as a simpler configuration, a monocular camera such as a CCD camera may be used instead. In this case, since depth information cannot be detected, the gripped object is detected and the possibility of contact with the surroundings is determined on the assumption that the gripped object 109 is at a fixed distance.
[0045] Alternatively, the distance may be estimated from the image of the monocular camera by assuming that the gripped object 109 lies on a single virtual plane fixed to the gripping portion 102. In that case, the virtual plane is preferably placed where the gripped object 109 is most likely to be, for example a plane passing through the tip of the gripping portion 102 and orthogonal to the gripping portion.
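A sketch of the virtual-plane distance estimation of paragraph [0045], intersecting the viewing ray of a monocular camera pixel with a plane fixed to the gripping portion; the plane parameters and camera intrinsics are assumed inputs expressed in camera coordinates.

```python
import numpy as np

def point_from_virtual_plane(u, v, plane_point, plane_normal,
                             fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Estimate the 3-D position of a pixel from a single camera by intersecting
    its viewing ray with a virtual plane fixed to the gripping portion
    (e.g. a plane through the gripper tip, orthogonal to the gripper).
    plane_point and plane_normal are expressed in camera coordinates."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])     # viewing ray direction
    n = np.asarray(plane_normal, dtype=np.float64)
    p0 = np.asarray(plane_point, dtype=np.float64)
    denom = ray @ n
    if abs(denom) < 1e-9:
        return None                     # ray parallel to the plane: no intersection
    t = (p0 @ n) / denom
    return ray * t if t > 0 else None   # intersection point in front of the camera
```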
[0046] In this embodiment, the image input device 2 is attached to the forearm 103 of the manipulator, but it can also be attached elsewhere. When the image input device 2 is attached to the base 105, the pre-movement spatial position Pa(i) of the point Q(i) should be obtained in step S34 of the flowchart of the gripped-object position/shape determination process shown in FIG. 4, taking into account the angle changes of the joints other than the wrist. In this case, there is the problem that the gripped object 109 moves considerably within the field of view, but because the position of the image input device 2 is fixed, the structure is simplified and the image input device 2 can also serve as the surroundings monitoring device 3. On the other hand, when the image input device 2 is attached to the gripping portion 102, the gripped object 109 does not move within the field of view, so in step S34 the pre-movement spatial position Pa(i) of the point Q(i) may simply be set equal to the post-movement position Pb(i). In this case, there is the problem that the gripped object 109 is too close to the image input device 2, but there is the advantage that the calculation is simplified.
[0047] Furthermore, in this embodiment, an angle sensor that measures the angle of the wrist joint is used as the gripping-portion relative position detection means, which detects the relative positional relationship between the image input device and the gripping portion; however, the positions and orientations of the image input device and the gripping portion may instead be detected by another method to obtain the relative positional relationship. Methods of measuring position and orientation include, in addition to measuring the joint angles of the arm, a method in which the object to be measured is photographed by an externally placed camera and its position and orientation are obtained from the images.
[0048] Next, an embodiment in which the manipulator described above is mounted on a construction machine is shown. FIG. 8 shows an example configuration in which the manipulator of this embodiment is mounted on a work machine used for forestry, demolition work, and the like.
[0049] The work machine 201 according to this embodiment includes, as a manipulator, a grapple 202 serving as the gripping portion, an arm 203, and a boom 204. The work machine 201 is used for applications such as grasping objects with the grapple 202 for demolition, transport, and the like.
[0050] The image input device 2 constituting the manipulator of this embodiment is attached to the underside of the arm 203, which corresponds to the forearm. This location is suitable for capturing the object gripped by the grapple 202, because its positional relationship with the grapple 202 does not change greatly and it is moderately distant from the grapple 202. The surroundings monitoring device 3 is attached to the top of the cabin 209 so as to cover a wide field of view. An angle sensor 222 is attached as a wrist angle sensor to the joint 212, which corresponds to the wrist. In this embodiment, the contact possibility determination device 4 and the alarm device 5 are shown installed inside the cabin 209, but they may be installed elsewhere as long as the location does not interfere with operation of the manipulator and allows communication with each of the devices.
[0051] In this embodiment, the contact possibility determination device 4 detects the object gripped by the grapple 202 using the stereoscopic images of the area around the grapple 202 acquired by the image input device 2 and the angle of the joint 212 acquired from the angle sensor 222, compares it with the positions of the surrounding objects detected by the surroundings monitoring device 3, and determines whether there is a possibility of contact. If there is a possibility of contact, the operator is informed by the alarm device 5. The alarm device 5 can convey the possibility of contact to the operator by, for example, vibrating the operation lever, in addition to notification by sound or images. A series of image information relating to the processing in the contact possibility determination device 4 may also be displayed by display means (not shown).
[0052] By applying the manipulator of this embodiment in this way, the burden on the operator can be reduced. Furthermore, as in this example, in complex working environments with many obstacles, such as forestry, demolition, and construction work, warning of the possibility of contact between the gripped object and the surrounding objects 108 in advance enables work to be performed more safely and quickly.
[0053] As described above, according to the embodiments of the present invention, the shape of the object gripped by the manipulator can be reliably recognized even when an unknown object is gripped in a complex environment, contact between the gripped object and surrounding objects can be prevented, and the safety of the manipulator can be increased.

Claims

[1] A manipulator comprising: an arm; arm driving means for driving the arm; a gripping portion provided on the arm; image input means for acquiring an image of the surroundings of the gripping portion; gripping-portion relative position detection means for detecting the relative position of the gripping portion with respect to the image input means; and storage means for storing a plurality of images acquired by the image input means and the relative position of the gripping portion with respect to the image input means detected by the gripping-portion relative position detection means, wherein the position and shape of the gripped object are detected based on the plurality of images stored in the storage means and the relative position of the gripping portion with respect to the image input means.
[2] The manipulator according to claim 1, further comprising: surroundings monitoring means for detecting the positions of surrounding objects; contact possibility determination means for determining the possibility of contact between the gripped object and the surrounding objects based on the detected position and shape of the gripped object and the positions of the surrounding objects detected by the surroundings monitoring means; and alarm means for issuing an alarm based on the determination result of the contact possibility determination means.
[3] The manipulator according to claim 1 or claim 2, wherein the image input means is provided on the arm of the manipulator, and the gripping-portion relative position detection means is means for detecting the amount of change of a movable portion provided between the arm on which the image input means is provided and the gripping portion.
[4] The manipulator according to any one of claims 1 to 3, wherein the image input means is stereoscopic image input means capable of acquiring a grayscale image and a distance image.
[5] The manipulator according to claim 2, wherein the contact possibility determination means predicts a future position of the gripped object based on the state of change of the relative position of the gripping portion detected by the gripping-portion relative position detection means and determines the possibility of contact with the surrounding objects.
PCT/JP2007/070360 2006-10-20 2007-10-18 Manipulator WO2008047872A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN2007800378723A CN101522377B (en) 2006-10-20 2007-10-18 Manipulator
JP2008539872A JPWO2008047872A1 (en) 2006-10-20 2007-10-18 manipulator

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006286419 2006-10-20
JP2006-286419 2006-10-20

Publications (1)

Publication Number Publication Date
WO2008047872A1 true WO2008047872A1 (en) 2008-04-24

Family

ID=39314090

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/070360 WO2008047872A1 (en) 2006-10-20 2007-10-18 Manipulator

Country Status (3)

Country Link
JP (1) JPWO2008047872A1 (en)
CN (1) CN101522377B (en)
WO (1) WO2008047872A1 (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010127719A (en) * 2008-11-26 2010-06-10 Canon Inc Work system and information processing method
JP2010271978A (en) * 2009-05-22 2010-12-02 Nippon Telegr & Teleph Corp <Ntt> Behavior estimating device
WO2011077693A1 (en) * 2009-12-21 2011-06-30 Canon Kabushiki Kaisha Robot system for reorienting a held workpiece
CN102189548A (en) * 2010-03-05 2011-09-21 发那科株式会社 Robot system comprising visual sensor
JP2011200331A (en) * 2010-03-24 2011-10-13 Fuji Xerox Co Ltd Position measurement system, position measurement apparatus, and position measurement program
JP2013036988A (en) * 2011-07-08 2013-02-21 Canon Inc Information processing apparatus and information processing method
JP2013036987A (en) * 2011-07-08 2013-02-21 Canon Inc Information processing device and information processing method
US20130306543A1 (en) * 2010-11-08 2013-11-21 Fresenius Medical Care Deutschland Gmbh Manually openable clamping holder with sensor
US9217636B2 (en) 2012-06-11 2015-12-22 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and a computer-readable storage medium
US20190024348A1 (en) * 2016-03-02 2019-01-24 Kabushiki Kaisha Kobe Seiko Sho (Kobe Steel, Ltd.) Interference prevention device for construction machinery
WO2021070454A1 (en) * 2019-10-10 2021-04-15 清水建設株式会社 Robot for construction work
JP2021175592A (en) * 2016-05-20 2021-11-04 グーグル エルエルシーGoogle LLC Machine learning methods and apparatus related to predicting motions of objects in robot's environment based on images capturing objects and based on parameters for future robot movement in environment
EP4088888A1 (en) * 2021-05-14 2022-11-16 Intelligrated Headquarters, LLC Object height detection for palletizing and depalletizing operations
JP2023029576A (en) * 2018-04-27 2023-03-03 新明和工業株式会社 work vehicle
US11618120B2 (en) 2018-07-12 2023-04-04 Novatron Oy Control system for controlling a tool of a machine

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101870110B (en) 2010-07-01 2012-01-04 三一重工股份有限公司 Control method and control device of mechanical articulated arm
JP5505138B2 (en) * 2010-07-05 2014-05-28 株式会社安川電機 Robot apparatus and gripping method using robot apparatus
DE102010063214A1 (en) * 2010-12-16 2012-06-21 Robert Bosch Gmbh Securing device for a handling device, in particular an industrial robot, and method for operating the securing device
KR101634463B1 (en) * 2011-06-29 2016-06-28 미쓰비시덴키 가부시키가이샤 Component supply apparatus
FR2982941B1 (en) * 2011-11-18 2020-06-12 Hexagon Metrology Sas MEASURING DEVICE COMPRISING AN INDEXED LOCKING ARM
CN103192414B (en) * 2012-01-06 2015-06-03 沈阳新松机器人自动化股份有限公司 Robot anti-collision protection device and method based on machine vision
CN103101760A (en) * 2012-12-28 2013-05-15 长春大正博凯汽车设备有限公司 Visual transportation system for workpiece transportation and transportation method thereof
CN104416581A (en) * 2013-08-27 2015-03-18 富泰华工业(深圳)有限公司 Mechanical arm with warning function
CN108081268A (en) * 2013-10-10 2018-05-29 精工爱普生株式会社 Robot control system, robot, program and robot control method
CN108602187A (en) * 2015-09-09 2018-09-28 碳机器人公司 Mechanical arm system and object hide method
CN105870814A (en) * 2016-03-31 2016-08-17 广东电网有限责任公司中山供电局 An operating device suitable for emergency opening of 10kV switch
JP6548816B2 (en) * 2016-04-22 2019-07-24 三菱電機株式会社 Object operating device and object operating method
KR102750211B1 (en) 2018-02-23 2025-01-07 구라시키 보세키 가부시키가이샤 Method for moving tip of linear object, and control device
JP7000992B2 (en) * 2018-05-25 2022-01-19 トヨタ自動車株式会社 Manipulators and mobile robots
CN108527374A (en) * 2018-06-29 2018-09-14 德淮半导体有限公司 Anti-collision system and method applied to mechanical arm
EP3885495B1 (en) * 2018-11-19 2024-06-05 Sumitomo Construction Machinery Co., Ltd. Excavator and excavator control device
CN113386135A (en) * 2021-06-16 2021-09-14 深圳谦腾科技有限公司 Manipulator with 2D camera and grabbing method thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS63245387A (en) * 1987-03-30 1988-10-12 豊田工機株式会社 Visual recognizer for robot
JP2004243454A (en) * 2003-02-13 2004-09-02 Yaskawa Electric Corp Apparatus for designating tool shape of robot and apparatus for checking interference of tool

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05261692A (en) * 1992-03-17 1993-10-12 Fujitsu Ltd Working environment monitoring device for robot
DE10319253B4 (en) * 2003-04-28 2005-05-19 Tropf, Hermann Three-dimensionally accurate feeding with robots
JP2005001022A (en) * 2003-06-10 2005-01-06 Yaskawa Electric Corp Object model creating device and robot control device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS63245387A (en) * 1987-03-30 1988-10-12 豊田工機株式会社 Visual recognizer for robot
JP2004243454A (en) * 2003-02-13 2004-09-02 Yaskawa Electric Corp Apparatus for designating tool shape of robot and apparatus for checking interference of tool

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010127719A (en) * 2008-11-26 2010-06-10 Canon Inc Work system and information processing method
JP2010271978A (en) * 2009-05-22 2010-12-02 Nippon Telegr & Teleph Corp <Ntt> Behavior estimating device
WO2011077693A1 (en) * 2009-12-21 2011-06-30 Canon Kabushiki Kaisha Robot system for reorienting a held workpiece
US9418291B2 (en) 2009-12-21 2016-08-16 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and computer-readable storage medium
US8326460B2 (en) 2010-03-05 2012-12-04 Fanuc Corporation Robot system comprising visual sensor
JP2011201007A (en) * 2010-03-05 2011-10-13 Fanuc Ltd Robot system with visual sensor
CN102189548B (en) * 2010-03-05 2014-06-18 发那科株式会社 Robot system comprising visual sensor
CN102189548A (en) * 2010-03-05 2011-09-21 发那科株式会社 Robot system comprising visual sensor
JP2011200331A (en) * 2010-03-24 2011-10-13 Fuji Xerox Co Ltd Position measurement system, position measurement apparatus, and position measurement program
US20130306543A1 (en) * 2010-11-08 2013-11-21 Fresenius Medical Care Deutschland Gmbh Manually openable clamping holder with sensor
US10835665B2 (en) * 2010-11-08 2020-11-17 Fresenius Medical Care Deutschland Gmbh Manually openable clamping holder with sensor
JP2013036988A (en) * 2011-07-08 2013-02-21 Canon Inc Information processing apparatus and information processing method
JP2013036987A (en) * 2011-07-08 2013-02-21 Canon Inc Information processing device and information processing method
US9437005B2 (en) 2011-07-08 2016-09-06 Canon Kabushiki Kaisha Information processing apparatus and information processing method
US9217636B2 (en) 2012-06-11 2015-12-22 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and a computer-readable storage medium
EP3409841A4 (en) * 2016-03-02 2019-03-20 Kabushiki Kaisha Kobe Seiko Sho (Kobe Steel, Ltd.) INTERFERENCE PREVENTION DEVICE FOR CONSTRUCTION MACHINE
US20190024348A1 (en) * 2016-03-02 2019-01-24 Kabushiki Kaisha Kobe Seiko Sho (Kobe Steel, Ltd.) Interference prevention device for construction machinery
US11111654B2 (en) 2016-03-02 2021-09-07 Kabushiki Kaisha Kobe Seiko Sho (Kobe Steel, Ltd.) Interference prevention device for construction machinery
JP2021175592A (en) * 2016-05-20 2021-11-04 グーグル エルエルシーGoogle LLC Machine learning methods and apparatus related to predicting motions of objects in robot's environment based on images capturing objects and based on parameters for future robot movement in environment
JP7399912B2 (en) 2016-05-20 2023-12-18 グーグル エルエルシー A machine learning method and apparatus for predicting the movement of an object in a robot's environment based on images capturing the object and based on parameters regarding future robot movement in the environment.
JP2023029576A (en) * 2018-04-27 2023-03-03 新明和工業株式会社 work vehicle
JP7427350B2 (en) 2018-04-27 2024-02-05 新明和工業株式会社 work vehicle
US11618120B2 (en) 2018-07-12 2023-04-04 Novatron Oy Control system for controlling a tool of a machine
WO2021070454A1 (en) * 2019-10-10 2021-04-15 清水建設株式会社 Robot for construction work
JP2021062413A (en) * 2019-10-10 2021-04-22 清水建設株式会社 Robot for construction work
JP7341837B2 (en) 2019-10-10 2023-09-11 清水建設株式会社 construction work robot
EP4088888A1 (en) * 2021-05-14 2022-11-16 Intelligrated Headquarters, LLC Object height detection for palletizing and depalletizing operations

Also Published As

Publication number Publication date
JPWO2008047872A1 (en) 2010-02-25
CN101522377B (en) 2011-09-14
CN101522377A (en) 2009-09-02

Similar Documents

Publication Publication Date Title
WO2008047872A1 (en) Manipulator
JP5216690B2 (en) Robot management system, robot management terminal, robot management method and program
JP6392972B2 (en) Method, persistent computer readable medium and system implemented by a computing system
JP7154815B2 (en) Information processing device, control method, robot system, computer program, and storage medium
JP4850984B2 (en) Action space presentation device, action space presentation method, and program
JP6567563B2 (en) Humanoid robot with collision avoidance and orbit return capability
CN109129474B (en) Manipulator active grasping device and method based on multimodal fusion
KR101751405B1 (en) Work machine peripheral monitoring device
CN111055281A (en) A ROS-based autonomous mobile grasping system and method
WO2019146201A1 (en) Information processing device, information processing method, and information processing system
JP2010120139A (en) Safety control device for industrial robot
JP5276931B2 (en) Method for recovering from moving object and position estimation error state of moving object
US12194630B2 (en) Industrial robot system and method for controlling an industrial robot
CN110394779B (en) Robot Simulator
CN110856932A (en) Interference avoidance device and robot system
JP5326794B2 (en) Remote operation system and remote operation method
CN112706158B (en) Industrial Human-Computer Interaction System and Method Based on Vision and Inertial Navigation Positioning
JP6927937B2 (en) Systems and methods for generating 3D skeletal representations
US11097414B1 (en) Monitoring of surface touch points for precision cleaning
KR20150136399A (en) Collision detection robot remote control system and method thereof
JP2019198907A (en) Robot system
US11926064B2 (en) Remote control manipulator system and remote control assistance system
JP3565763B2 (en) Master arm link position detection method
JP3376029B2 (en) Robot remote control device
CN112857314A (en) Bimodal terrain identification method, hardware system and sensor installation method thereof

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200780037872.3

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07830094

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2008539872

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07830094

Country of ref document: EP

Kind code of ref document: A1

点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载