US20180181793A1 - Image processing apparatus and image processing method - Google Patents
- Publication number
- US20180181793A1 (application US15/848,011)
- Authority
- US
- United States
- Prior art keywords
- image
- color
- commodity
- cpu
- calibration plate
- Prior art date: 2016-12-22
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06K9/00208
- G06V10/98—Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
- G06F16/583—Retrieval characterised by using metadata automatically derived from the content
- G06F17/30247
- G06T1/0007—Image acquisition
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G06T7/90—Determination of colour characteristics
- G06V10/56—Extraction of image or video features relating to colour
- G06V20/647—Three-dimensional objects by matching two-dimensional images to three-dimensional objects
- H04N1/6033—Colour correction or control controlled by characteristics of the picture signal generator or the picture reproducer using test pattern analysis
- G06T2207/10024—Color image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
Abstract
An autonomous inventory tracking apparatus includes an image acquisition unit configured to acquire an image, and a processor configured to detect a calibration plate in the image acquired from the image acquisition unit, calculate a color correction value for the image according to a color block of the calibration plate, the color block matching a reference value, correct color in the image using the calculated color correction value to provide a color-corrected image, and perform commodity recognition processing on the color-corrected image so as to identify the commodity in the image acquired from the image acquisition unit.
Description
- This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2016-249568, filed Dec. 22, 2016, the entire contents of which are incorporated herein by reference.
- Embodiments described herein relate generally to an image processing apparatus and an image processing method.
- In the related art, an image processing apparatus acquires an image of a shelf or the like on which a commodity is disposed, and recognizes the commodity in the image. The color of the commodity in the image may differ from the color pre-registered for the commodity due to various factors, such as the position and characteristics of the camera used to capture the image and the lighting around the shelf when the image was acquired.
- In some related art examples, a mobile robot must internally maintain an environment map that matches its external environment in order to keep its self-positioning function from deteriorating.
- However, even if such an environment map is created to match the surrounding environment, an error in the stopping position of the robot may still occur. Accordingly, repeated images will not necessarily be captured from exactly the same position, and commodity recognition may deteriorate because of the resulting differences in the colors of the commodity if those color differences are not compensated.
- If the color of the commodity in the image differs from the pre-registered or expected color, the accuracy with which the image processing apparatus recognizes the commodity in the captured image decreases.
- When an image processing apparatus mounted on an autonomous driving robot images the commodity shelf in which the commodity is disposed, the mobile robot can drive a predetermined route based on its estimated self-position. However, if the environment map stored by the mobile robot differs from the actual environment, the self-positioning function deteriorates and an error in the robot's positioning occurs. As such, a particular commodity may not be captured correctly in the image because of an error in the stop position of the robot.
- FIG. 1 is a block diagram of an image processing system.
- FIG. 2 is a diagram of a dictionary image storage apparatus.
- FIG. 3 is a diagram of a calibration plate.
- FIG. 4 is a diagram of a commodity table.
- FIG. 5 is a diagram of map information.
- FIG. 6 is a diagram of a route table.
- FIG. 7 is a diagram of a robot apparatus.
- FIG. 8 is a diagram of a robot apparatus.
- FIG. 9 is a diagram of a dictionary image storage apparatus.
- FIG. 10 is a flowchart of an operation of a dictionary image storage apparatus.
- FIG. 11 is a flowchart of an operation of a dictionary image storage apparatus.
- FIG. 12 is a flowchart of an operation of an image processing apparatus.
- FIG. 13 is a flowchart of an operation of an image processing apparatus.
- In general, according to embodiments, an autonomous inventory tracking apparatus includes an image acquisition unit configured to acquire an image, and a processor configured to detect a calibration plate in the image acquired from the image acquisition unit, calculate a color correction value for the image according to a color block of the calibration plate, the color block matching a reference value, color correct the image using the calculated color correction value to provide a color-corrected image, and perform commodity recognition processing on the color-corrected image so as to identify the commodity in the image acquired from the image acquisition unit.
- Hereinafter, embodiments will be described with reference to the drawings.
- An image processing system according to embodiments detects a commodity disposed in a commodity shelf, for example in a store, based on a captured image of the shelf in which the commodity is disposed.
- FIG. 1 is a block diagram of an image processing system 1.
- As illustrated in FIG. 1, the image processing system 1 includes an image processing apparatus 10, a dictionary image storage apparatus 20, a map information apparatus 30, and a missing commodity recognition apparatus 40.
- The image processing apparatus 10 detects a commodity based on a captured image of a commodity shelf or the like in which the commodity is placed.
- The dictionary image storage apparatus 20 stores an image of a commodity (also referred to as a dictionary image) which the image processing apparatus 10 needs in order to detect the commodity.
- The map information apparatus 30 provides the image processing apparatus 10 with map information indicating a map of the store in which the image processing system 1 is installed.
- The missing commodity recognition apparatus 40 detects a missing commodity based on the commodities detected by the image processing apparatus 10. Specifically, it compares information indicating the commodity pre-registered for a certain position in the commodity shelf with the commodity actually detected at that position.
FIG. 1 , theimage processing apparatus 10 includes acommodity recognition unit 101, a self-drivingrobot 102, acamera 103, anoperation unit 104, and the like. Thecommodity recognition unit 101 includes a central processing unit (CPU) 11, a random access memory (RAM) 12, a non-volatile memory (NVM) 13, and interfaces (I/F) 14 to 19. The elements of thecommodity recognition unit 101 are connected to each other through a data bus or the like. Thecommodity recognition unit 101 may have more elements in addition to the elements depicted inFIG. 1 , or some of the elements depicted inFIG. 1 may be omitted in some embodiments. - The
CPU 11 has a function of controlling the overall operation of thecommodity recognition unit 101. TheCPU 11 may include an internal cache, various interfaces, and the like. TheCPU 11 executes a program stored in an internal memory orNVM 13 in advance to provide various processing operations. TheCPU 11 is, for example, a processor. - Some of the processing operations by the
CPU 11 executing a program may be realized via a hardware circuit. In this case, theCPU 11 controls an operation of the hardware circuit. - The
RAM 12 is a volatile memory. TheRAM 12 temporarily stores data ofCPU 11 in processing or the like. TheRAM 12 stores various application programs based on an instruction from theCPU 11. TheRAM 12 may store data which is necessary for executing the application program, an execution result of the application program, and the like. - The NVM 13 is a non-volatile memory capable of writing and rewriting of data. The NVM 13 may be, for example, a (solid-state drive) SSD, an EEPROM®, a flash memory, or the like. The NVM 13 stores a control program, an application program, and various data according to an operational purpose of the
commodity recognition unit 101. - The NVM 13 includes a
storage region 13 a that stores a reference color table, astorage region 13 b that stores a route table, and the like. - The
interface 14 is an interface for communicating data with the dictionaryimage storage apparatus 20. TheCPU 11 acquires the dictionary image or the like from the dictionaryimage storage apparatus 20 through theinterface 14. - The
interface 15 is an interface for communicating data with the self-drivingrobot 102. For example, theCPU 11 sets a stop position or the like on the self-drivingrobot 102 through theinterface 15. - The
interface 16 is an interface for communicating data with themap information apparatus 30. For example, theCPU 11 acquires map information from themap information apparatus 30 through theinterface 16. - The
interface 17 is an interface for communicating data with thecamera 103. For example, theCPU 11 acquires a captured image from thecamera 103 through theinterface 17. - The
interface 18 is an interface for communicating data with theoperation unit 104. For example, theCPU 11 acquires an operation input into theoperation unit 104 through theinterface 18. - The
interface 19 is an interface for communicating data with the missingcommodity recognition apparatus 40. For example, theCPU 11 transmits information indicating a recognized commodity to the missingcommodity recognition apparatus 40 through theinterface 19. - The
interfaces 14 to 19 support wireless connection or wired connection protocols. For example, theinterfaces 14 to 19 may support a local area network (LAN) protocol or a Universal Serial Bus (USB) protocol. - The self-driving
robot 102 is a mobile object that carries thecamera 103. The self-drivingrobot 102 includes a main body to mount the camera, a motor, a tire, and the like. The self-drivingrobot 102 moves by rotating the tire using driving force of the motor. The self-drivingrobot 102 can move with thecamera 103 mounted to a predetermined position. The self-drivingrobot 102 may include a mechanism for changing a position and an angle of thecamera 103. The self-drivingrobot 102 may include a sensor for detecting its own position. - The self-driving
robot 102 moves based on a signal from thecommodity recognition unit 101. For example, the self-drivingrobot 102 receives information including a stop position from thecommodity recognition unit 101. The self-drivingrobot 102 moves to and then stops at the stop position. - The self-driving
robot 102 may determine a stop position based on a calibration plate. For example, the self-drivingrobot 102 detects the calibration plate based on an image of the calibration plate captured by thecamera 103. The self-drivingrobot 102 may adjust the stop position based on a position of the detected calibration plate or the like. - The
camera 103 captures images of a commodity. Thecamera 103 captures an image of a commodity shelf in which the commodity is disposed. Thecamera 103 is mounted on the self-drivingrobot 102 at a predetermined height. For example, thecamera 103 is installed at a height at which a predetermined commodity shelf can be imaged. - The
camera 103 may capture an image according to a signal from thecommodity recognition unit 101. Thecamera 103 may capture an image when the self-drivingrobot 102 stops at a stop position. Thecamera 103 transmits the captured image to thecommodity recognition unit 101. - The
camera 103 is, for example, a charge coupled device (CCD) camera. - The
operation unit 104 receives various operation instructions by an operator. Theoperation unit 104 transmits, to thecommodity recognition unit 101, a signal indicating the operation instruction input by the operator. Theoperation unit 104 is, for example, a keyboard, a numeric keypad, and a touch panel. - As illustrated in
FIG. 1 , the dictionaryimage storage apparatus 20 includes aCPU 21, aRAM 22, anNVM 23, interfaces (I/F) 24 and 25, and the like. The elements in the dictionaryimage storage apparatus 20 are connected to each other through a data bus or the like. The dictionaryimage storage apparatus 20 may have more elements in addition to the elements depicted inFIG. 1 , or some of the elements depicted inFIG. 1 may be omitted. The dictionaryimage storage apparatus 20 is connected to acamera 26. - The
CPU 21 has a function of controlling the overall operation of the dictionaryimage storage apparatus 20. TheCPU 21 may include an internal cache, various interfaces, and the like. TheCPU 21 executes a program stored in an internal memory orNVM 23 in advance to provide various processing. TheCPU 21 is, for example, a processor. - Some of the processing operations by the
CPU 21 executing a program may be realized via a hardware circuit. In this case, theCPU 21 controls an operation of the hardware circuit. - The
RAM 22 is a volatile memory. TheRAM 22 temporarily stores data ofCPU 21 in processing or the like. TheRAM 22 stores various application programs based on an instruction from theCPU 21. TheRAM 22 may store data which is necessary for executing the application program, an execution result of the application program, and the like. - The
NVM 23 is a non-volatile memory capable of writing and rewriting of data. TheNVM 23 may be a hard disk, an SSD, an EEPROM®, a flash memory, or the like. TheNVM 23 stores a control program, an application program, and various data according to an operational purpose of the dictionaryimage storage apparatus 20. - The
NVM 23 includes astorage region 23 a that stores a reference color table, astorage region 23 b that stores a commodity table and the like. - The
interface 25 is an interface for communicating data with thecommodity recognition unit 101. For example, theCPU 21 transmits the dictionary image or the like to thecommodity recognition unit 101 through theinterface 25. - The
interface 24 is an interface for communicating data with thecamera 26. For example, theCPU 21 acquires a captured image or the like from thecamera 26 through theinterface 24. - The
interfaces interfaces - The
camera 26 captures an image of a commodity. Thecamera 26 may capture an image according to an operator's instruction. Thecamera 26 transmits the captured image to the dictionaryimage storage apparatus 20. Thecamera 26 may transmit the captured image to the dictionaryimage storage apparatus 20, at the time of capturing the image. Thecamera 26 may transmit the captured images to the dictionaryimage storage apparatus 20, at a predetermined interval or after the camera has captured the predetermined number of images. Thecamera 26 may store the image in a detachable memory, and the dictionaryimage storage apparatus 20 may acquire the image from the detachable memory. - The
camera 26 is, for example, a CCD camera or the like. - The
CPU 21 has a function of acquiring an image of a commodity and a calibration plate captured by thecamera 26. - The
CPU 21 acquires a captured image from thecamera 26. Thecamera 26 may store a captured image to a detachable memory, and theCPU 21 may acquire the captured image from the detachable memory. -
FIG. 2 illustrates an operation example in which theCPU 21 acquires a captured image. - As illustrated in
FIG. 2 , thecamera 26 captures an image of acommodity 50 and acalibration plate 60. Thecommodity 50 and thecalibration plate 60 are disposed adjacent each other so as to be contained in one captured image. The positional relationship between thecommodity 50 and thecalibration plate 60 is not limited to a specific configuration. - The
calibration plate 60 is a plate having a plurality of blocks in different colors. -
FIG. 3 illustrates an example of thecalibration plate 60. - As illustrated in
FIG. 3 , thecalibration plate 60 includesblocks 601 to 618. Each of theblocks 601 to 618 has different colors. Each of theblocks 601 to 618 may be individually membered. Theblocks 601 to 618 may be regions in which inks of predetermined colors are painted on thecalibration plate 60. - In the example illustrated in
FIG. 3 , thecalibration plate 60 includes 18 blocks. Thecalibration plate 60 may include three or more, or nine or more blocks. The number of blocks on thecalibration plate 60 is not limited to a specific number. - The
- The CPU 21 has a function of generating a correction matrix, including correction values, based on the captured image of the calibration plate 60.
- The correction matrix is a matrix that maps a color of a block of the calibration plate 60 in the captured image to the actual color of the block. The correction matrix can eliminate color deviation due to camera characteristics, external light, or the like.
- The CPU 21 extracts a region of the captured image corresponding to the calibration plate 60, using a pattern recognition algorithm or the like.
- Upon extracting the region of the image corresponding to the calibration plate 60, the CPU 21 extracts color information of each block of the calibration plate 60 from the extracted image. The CPU 21 detects the position of each block in the extracted image and extracts the color information at the position of each block. Here, the color information indicates colors in RGB format.
- Upon extracting the color information, the CPU 21 acquires reference color information of each block of the calibration plate 60.
- The CPU 21 acquires the reference color information from the reference color table stored in the storage region 23a.
- The reference color table stores the reference color information indicating the reference color of each block of the calibration plate 60 individually. The reference color is a block color registered in advance. For example, the reference color may be the actual color specified by an ink used to print the block, or a block color imaged under a predetermined condition.
- The reference color table stores information indicating the location of a block, for example, a position or a label of the block, and reference color information indicating the reference color of the block in a correlated manner.
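- As a concrete illustration only (the patent does not specify a data format), such a reference color table can be modeled as a mapping from block labels to reference RGB triples. The labels and values below are hypothetical:

```python
# Hypothetical reference color table for the calibration plate 60:
# block label -> reference color (Rs, Gs, Bs) in RGB format.
# Labels and RGB values are illustrative, not taken from the patent.
REFERENCE_COLOR_TABLE = {
    "block_601": (115, 82, 68),
    "block_602": (194, 150, 130),
    "block_603": (98, 122, 157),
    # ... one entry for each of the 18 blocks, through "block_618"
}
```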
- The reference color information indicates colors, for example, in RGB format. The reference color information may include values Rs, Gs, and Bs.
- The storage region 23a stores the reference color table in advance.
- The reference color table stored in the storage region 13a is the same as the reference color table stored in the storage region 23a.
- Upon acquiring the reference color information of each block, the
CPU 21 generates a correction matrix based on the color information and the reference color information of each block.
- For example, the CPU 21 calculates a correction matrix A satisfying the following Equation (1):
$$\begin{pmatrix} R_s \\ G_s \\ B_s \end{pmatrix} = A \begin{pmatrix} R \\ G \\ B \end{pmatrix} \qquad (1)$$
calibration plate 60. R, G, and B are color information of the block extracted from the extracted image. A is a correction matrix. - For example, A is represented by the following Equation (2):
-
$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \qquad (2)$$
- The
CPU 21 generates the correction matrix A using the reference color information and the color information of each block, such that Equation (1) is satisfied as closely as possible for every block. The CPU 21 calculates the elements of the correction matrix using the least squares method, for example.
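- A minimal sketch of this fitting step, assuming numpy and hypothetical (N, 3) arrays of per-block observed and reference RGB values (the patent does not prescribe an implementation):

```python
import numpy as np

def fit_correction_matrix(observed, reference):
    """Fit a 3x3 matrix A such that A @ rgb_observed approximates
    rgb_reference for every calibration block, in the least-squares sense.

    observed, reference: (N, 3) arrays of per-block RGB values,
    with N >= 3 (the plate described above has 18 blocks).
    """
    observed = np.asarray(observed, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Solve observed @ A.T ~= reference for A.T over all blocks at once.
    A_transposed, *_ = np.linalg.lstsq(observed, reference, rcond=None)
    return A_transposed.T  # the 3x3 correction matrix A
```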
- The CPU 21 has a function of storing an image of the commodity that has been corrected (by using the correction matrix A) as a dictionary image.
- The CPU 21 specifies the region of the captured image corresponding to the commodity 50. For example, the CPU 21 may detect an edge of the commodity 50 in the captured image and thus specify the corresponding region. Upon specifying the region of the captured image corresponding to the commodity 50 (also referred to as a commodity image), the CPU 21 corrects the colors in that region using the correction matrix.
- For example, the CPU 21 corrects the R, G, and B values of each pixel in the commodity image using the correction matrix.
- Specifically, the CPU 21 may correct the color of the commodity image according to the following Equation (3):
$$\begin{pmatrix} R_n \\ G_n \\ B_n \end{pmatrix} = A \begin{pmatrix} R \\ G \\ B \end{pmatrix} \qquad (3)$$
- The
- The CPU 21 assigns a file name to the color-corrected commodity image and stores this corrected commodity image in the NVM 23 as a dictionary image.
- The
CPU 21 has a function of storing the commodity table, which includes information about each commodity.
- FIG. 4 illustrates a configuration example of the commodity table.
- As illustrated in FIG. 4, the commodity table stores “commodity name”, “commodity code”, and “dictionary image file name” in a correlated manner.
- The “commodity name” is a text name of a commodity.
- The “commodity code” is an ID for specifying the commodity, given as a numerical value, a character string, a symbol, a combination thereof, or the like.
- The “dictionary image file name” is the file name under which the dictionary image of the commodity (generated by correcting the commodity image with the correction matrix) is stored.
- In the example illustrated in FIG. 4, the first row of the commodity table stores “tea”, “0000”, and “img_0000” as the “commodity name”, “commodity code”, and “dictionary image file name”, respectively.
- The
CPU 21 acquires a commodity name and a commodity code of the commodity 50. For example, the CPU 21 may receive the commodity name and the commodity code from an operator through an operation unit 104 or the like.
- Upon acquiring the commodity name and the commodity code of the commodity 50, the CPU 21 adds the commodity name, the commodity code, and the file name of the dictionary image to the commodity table in a correlated manner.
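- For illustration only, the commodity table of FIG. 4 can be modeled as a list of records; the field names below are hypothetical, while the sample row is taken from FIG. 4:

```python
# Illustrative in-memory commodity table mirroring the columns of FIG. 4.
commodity_table = [
    {"commodity_name": "tea",
     "commodity_code": "0000",
     "dictionary_image_file_name": "img_0000"},
]

def add_commodity(name, code, file_name):
    """Add an entry correlating a commodity name, a commodity code,
    and the file name of the stored dictionary image."""
    commodity_table.append({
        "commodity_name": name,
        "commodity_code": code,
        "dictionary_image_file_name": file_name,
    })
```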
- The CPU 11 has a function of directing movement of the self-driving robot 102 to a predetermined stop position and stopping the self-driving robot 102 at that stop position.
- The CPU 11 acquires map information from the map information apparatus 30. The map information indicates imaging positions corresponding to the commodity shelves in a store. An imaging position is a position at which the camera 103 mounted on the self-driving robot 102 can capture an image of a commodity shelf.
- FIG. 5 illustrates a configuration example of the map information.
- As illustrated in FIG. 5, the map information indicates imaging positions on passage A (A0 to A7) and imaging positions on passage B (B0 to B7).
- The passage A is between a commodity shelf column 701 and a commodity shelf column 702. The passage B is between the commodity shelf column 702 and a commodity shelf column 703. Each of the commodity shelf columns 701 to 703 includes a plurality of commodity shelves.
- The CPU 11 determines a stop position of the self-driving robot 102 based on a route table.
- The route table indicates stop positions of the self-driving robot 102. The route table may indicate a plurality of imaging positions as stop positions of the self-driving robot 102, and may indicate that the self-driving robot 102 stops sequentially at those imaging positions.
- FIG. 6 illustrates a configuration example of the route table.
- As illustrated in FIG. 6, the route table stores “route name” and “stop position” in a correlated manner.
- The “route name” is a name identifying a particular route.
- A “stop position” is an imaging position at which the self-driving robot 102 can stop and from which an image of the commodity 50 can be captured. There may be a plurality of “stop positions”, and the self-driving robot 102 may stop at them sequentially along the route.
- In the example illustrated in FIG. 6, the route table indicates that the stop positions of “route 1” are A0 to A7, and that in “route 1” the self-driving robot 102 stops sequentially at A0 to A7.
- The CPU 11 selects a route from the route table. The CPU 11 may select a route based on an operator's instruction received via the operation unit 104, or may set a route according to a predetermined condition.
- The CPU 11 causes the self-driving robot 102 to stop at the stop positions according to the selected route. The CPU 11 causes the self-driving robot 102 to move to and stop at the first stop position indicated by the route, and then to move to and stop at the subsequent stop positions in the order indicated by the route.
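- As an illustration, the route table of FIG. 6 can be modeled as a mapping from route names to ordered stop positions; `robot.move_to_and_stop` below is a hypothetical stand-in for the robot's command interface, which the patent does not specify:

```python
# Route table per FIG. 6: route name -> ordered stop positions.
ROUTE_TABLE = {
    "route 1": ["A0", "A1", "A2", "A3", "A4", "A5", "A6", "A7"],
}

def run_route(robot, route_name):
    """Move the robot to each stop position of the route, in order."""
    for stop in ROUTE_TABLE[route_name]:
        robot.move_to_and_stop(stop)  # hypothetical robot command
        # ... capture and process an image at this stop (see below)
```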
- FIGS. 7 and 8 illustrate operation examples in which the CPU 11 causes the self-driving robot 102 to stop at a predetermined position, for example, A4. FIG. 7 is a diagram of a store having the commodity shelves, and FIG. 8 is a diagram of the commodity shelves.
- Here, the CPU 11 selects “route 1”, which runs along the commodity shelf column 701, from the route table. When the self-driving robot 102 is at A3, the CPU 11 transmits a signal to the self-driving robot 102 instructing it to move to and stop at A4.
- The self-driving robot 102 moves to and stops at A4 according to the signal.
- The
CPU 11 has a function of adjusting the imaging position based on an image of a commodity and the calibration plate 60 captured by the camera 103.
- After the self-driving robot 102 has stopped at a predetermined position, the CPU 11 transmits a signal to the camera 103 for capturing an image of the commodity shelf. The camera 103 captures an image according to the signal and transmits the captured image to the CPU 11. The CPU 11 acquires the captured image from the camera 103.
- FIG. 9 illustrates an operation example in which the CPU 11 acquires an image of a commodity and the calibration plate 60.
- As illustrated in FIG. 9, a plurality of commodities is disposed in a commodity shelf 70. The type and the number of the commodities disposed in the commodity shelf 70 are not limited to a specific type or number.
- The calibration plate 60 is installed at a predetermined position of the commodity shelf 70. In the example illustrated in FIG. 9, the calibration plate 60 is installed at the top of the commodity shelf 70. The calibration plate 60 may instead be installed near the commodity shelf 70; its location is not limited to a specific location, as long as the calibration plate 60 is disposed at a predetermined position near a commodity shelf.
- After the self-driving robot 102 has stopped at a predetermined position, for example, A4, the CPU 11 acquires an image captured by the camera 103. Upon detecting that the self-driving robot 102 has stopped at A4, the CPU 11 transmits a signal to the camera 103 for capturing an image of the commodity shelf. The camera 103 captures an image of the commodity shelf 70 and the calibration plate 60 according to the signal, and transmits the captured image to the CPU 11.
- The CPU 11 extracts an image of the calibration plate 60 from the captured image, for example, using pattern detection or the like.
- Upon extracting the image of the calibration plate 60, the CPU 11 detects the pixel position of the calibration plate 60 in the captured image. The CPU 11 adjusts the stop position of the self-driving robot 102 such that the calibration plate 60, which is installed at a predetermined position on the commodity shelf, appears at a predetermined pixel position in the image. For example, the CPU 11 controls the self-driving robot 102, by instructing it to move forward, move backward, or rotate, so that the calibration plate 60 will appear at the predetermined pixel position in the image.
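- One way to picture this adjustment is as a feedback loop that compares the detected pixel position of the plate with the expected position. Everything named below (detect_plate_center, the movement command, the pixel-to-distance scale) is a hypothetical stand-in; the patent does not specify these interfaces:

```python
PIXELS_TO_METERS = 0.001  # hypothetical scale from pixel offset to distance

def adjust_position(robot, camera, detect_plate_center,
                    target_x, tolerance=10):
    """Nudge the robot until the calibration plate appears at the
    expected horizontal pixel position, then return the last image."""
    while True:
        image = camera.capture()
        x, _ = detect_plate_center(image)  # e.g., via pattern detection
        dx = target_x - x
        if abs(dx) <= tolerance:
            return image                   # plate is where we expect it
        robot.move_sideways(dx * PIXELS_TO_METERS)  # hypothetical command
```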
- The CPU 11 has a function of acquiring an image including the commodity and the calibration plate 60 using the camera 103 or an image acquisition unit.
- Upon completing the position adjustment of the self-driving robot 102, the CPU 11 acquires another image using the camera 103.
- The CPU 11 generates a correction matrix based on the calibration plate 60 in the image captured after the position adjustment.
- For example, the CPU 11 extracts an image of the calibration plate 60 from the image captured after the repositioning of the self-driving robot 102, using a pattern recognition algorithm or the like.
- Upon extracting the image of the calibration plate 60, the CPU 11 extracts color information for each block on the calibration plate 60. The CPU 11 detects the position of each block in the extracted image and extracts the color information at the position of each block. Here, the color information corresponds to a color value in RGB format.
- Upon extracting the color information, the CPU 11 generates the correction matrix based on the reference color information included in the reference color table stored in the storage region 13a and the color information extracted from the captured image of the calibration plate 60.
- Since the method by which the CPU 11 generates the correction matrix is the same as the method by which the CPU 21 generates a correction matrix, a detailed description thereof is omitted.
- The
CPU 11 has a function of correcting the captured image using the correction matrix.
- The CPU 11 may correct the color information (R, G, and B values) of each pixel in the captured image.
- Specifically, the CPU 11 corrects the color of the captured image according to Equation (3), described above.
- The CPU 11 has a function of recognizing a commodity in the captured image using the color-corrected captured image of the commodity shelf.
- The CPU 11 acquires a dictionary image from the dictionary image storage apparatus 20 and recognizes the commodity in the corrected captured image by using the dictionary image. The CPU 11 calculates image feature data from the dictionary image, and then searches the corrected captured image for matching image feature data.
- The CPU 11 searches for an image region whose coincidence ratio (matching score) exceeds a predetermined threshold value, for example, by raster scan or the like. Upon finding a matching image region, the CPU 11 determines that the commodity matching the dictionary image is at the position of that region.
- The CPU 11 then masks or excludes the matching image region from the corrected captured image and attempts to find additional matching regions in the same way. If no further matching region can be found, the CPU 11 terminates the detection process for the commodity corresponding to the dictionary image.
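- A common way to realize such a search-and-mask loop is template matching; the sketch below uses OpenCV's cv2.matchTemplate with a normalized score and masks each hit before rescanning. This is one plausible reading of the described procedure under that assumption, not the patent's prescribed algorithm (which speaks more generally of image feature data):

```python
import cv2

def detect_commodity(shelf_image, dictionary_image, threshold=0.8):
    """Find every region of the corrected shelf image matching the
    dictionary image with a score above `threshold`, masking each hit
    so the same commodity instance is not detected twice."""
    image = shelf_image.copy()
    h, w = dictionary_image.shape[:2]
    hits = []
    while True:
        scores = cv2.matchTemplate(image, dictionary_image,
                                   cv2.TM_CCOEFF_NORMED)
        _, max_score, _, max_loc = cv2.minMaxLoc(scores)
        if max_score < threshold:
            break                        # no further matching region
        x, y = max_loc
        hits.append((x, y, w, h))
        image[y:y + h, x:x + w] = 0      # mask the found region
    return hits
```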
- If there is another dictionary image to evaluate, the CPU 11 repeats the same operation using that dictionary image stored in the dictionary image storage apparatus 20 to recognize the corresponding commodity.
- The CPU 11 may acquire the dictionary images from the dictionary image storage apparatus 20 one by one, or may acquire several dictionary images at once.
- The method by which the CPU 11 detects a commodity is not limited to the example described above.
- The CPU 11 may acquire the commodity name and the commodity code of a detected commodity by referencing the commodity table stored in the dictionary image storage apparatus 20, and may transmit the acquired commodity name and commodity code to the missing commodity recognition apparatus 40.
- Next, an operation example of the
CPU 21 of the dictionary image storage apparatus 20 will be described.
- First, an operation example in which the CPU 21 stores a dictionary image will be described.
- FIG. 10 is a flowchart of an operation example in which the CPU 21 stores a dictionary image.
- First, the CPU 21 captures or otherwise acquires an image of a commodity and the calibration plate 60 using the camera 26 (ACT 11). Upon acquiring the captured image, the CPU 21 generates a correction matrix based on the captured image (ACT 12).
- After generating the correction matrix, the CPU 21 extracts an image of the commodity from the captured image (ACT 13). Upon extracting the image, the CPU 21 corrects the image with the correction matrix (ACT 14). Upon correcting the image, the CPU 21 stores the corrected image of the commodity as a dictionary image in the NVM 23 (ACT 15).
- Upon storing the dictionary image in the NVM 23, the CPU 21 adds the commodity name, the commodity code, and the dictionary image file name of the dictionary image to the commodity table in a correlated manner (ACT 16).
- Upon adding these entries to the commodity table, the CPU 21 terminates the operation.
- If a plurality of captured images is acquired, the
CPU 21 repeats ACTs 11 to 16 for each captured image.
- In some embodiments, the CPU 21 may execute ACT 12 after ACT 13.
- Next, an operation example in which the CPU 21 generates the correction matrix (ACT 12) will be described.
- FIG. 11 is a flowchart of an operation example in which the CPU 21 generates the correction matrix (ACT 12).
- The CPU 21 extracts the region of the captured image corresponding to the calibration plate 60 (ACT 22). Upon extracting the region corresponding to the calibration plate 60, the CPU 21 acquires the color information of each block of the calibration plate 60 from the extracted image (ACT 23).
- Upon acquiring the color information of each block, the CPU 21 acquires the reference color information of each block from the reference color table (ACT 23). Upon acquiring the reference color information of each block, the CPU 21 generates the correction matrix based on the color information and the reference color information (ACT 24).
- Upon generating the correction matrix, the CPU 21 stores the generated correction matrix in the RAM 22 or the NVM 23 (ACT 25). Upon storing the correction matrix, the CPU 21 terminates the operation.
- The CPU 21 may execute ACT 22 after ACT 23.
- Next, an example of an operation of the
CPU 11 will be described.
- First, an example in which the CPU 11 recognizes a commodity will be described.
- FIG. 12 is a flowchart of an example in which the CPU 11 detects the commodity.
- The CPU 11 initializes the image processing apparatus 10 (ACT 31). Upon initializing the image processing apparatus 10, the CPU 11 selects a route from the route table (ACT 32).
- After selecting the route, the CPU 11 determines whether commodity imaging has been completed (ACT 33), that is, whether an image has been captured at each stop position in the selected route.
- Upon determining that imaging has not been completed at a stop position (ACT 33, NO), the CPU 11 instructs the self-driving robot 102 to move to and stop at that stop position (ACT 34). Upon instructing the self-driving robot 102, the CPU 11 determines whether the self-driving robot 102 has stopped at the stop position (ACT 35).
- Upon determining that the self-driving
robot 102 has not stopped at the stop position (ACT 35, NO), the CPU 11 returns to ACT 35.
- Upon determining that the self-driving robot 102 has stopped at the stop position (ACT 35, YES), the CPU 11 acquires an image using the camera 103 and, as necessary, adjusts the position of the self-driving robot 102 (ACT 36). After adjusting the position of the self-driving robot 102, the CPU 11 acquires an image using the camera 103 (ACT 37). The CPU 11 then generates a correction matrix based on the captured image (ACT 38).
- The CPU 11 then corrects the captured image using the correction matrix (ACT 39), and detects the commodity in the corrected captured image (ACT 40).
- After detecting the commodity, the CPU 11 returns to ACT 33.
- Once the imaging has been completed (ACT 33, YES), the CPU 11 causes the self-driving robot 102 to move to an initial position (ACT 41). Upon causing the self-driving robot 102 to move to the initial position, the CPU 11 terminates the operation.
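- Putting the FIG. 12 flow together, the per-stop processing can be sketched as one loop. Every helper below is a hypothetical stand-in for a step described above (run_route, adjust_position, fit_correction_matrix, correct_colors, and detect_commodity were sketched earlier; generate_correction_matrix, detect_plate_center, and TARGET_X are likewise assumptions):

```python
def inspect_route(robot, camera, route_name, dictionary_images):
    """Sketch of the FIG. 12 flow: stop, adjust, capture, color-correct,
    and detect commodities at each stop position of the route."""
    for stop in ROUTE_TABLE[route_name]:                # ACT 33
        robot.move_to_and_stop(stop)                    # ACT 34-35
        adjust_position(robot, camera, detect_plate_center,
                        target_x=TARGET_X)              # ACT 36
        image = camera.capture()                        # ACT 37
        A = generate_correction_matrix(image)           # ACT 38 (FIG. 11)
        corrected = correct_colors(image, A)            # ACT 39
        for dictionary_image in dictionary_images:      # ACT 40
            detect_commodity(corrected, dictionary_image)
    robot.move_to_initial_position()                    # ACT 41
```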
- Since the operation in which the CPU 11 generates a correction matrix (ACT 38) is the same as the operation illustrated in FIG. 11, repeated description is omitted.
- FIG. 13 is a flowchart of an operation example in which the CPU 11 recognizes or detects a commodity.
- The CPU 11 selects a dictionary image acquired from the dictionary image storage apparatus 20 (ACT 51). Upon selecting the dictionary image, the CPU 11 calculates image feature data for the dictionary image (ACT 52). The CPU 11 then searches for a region of the corrected captured image whose image features coincide with the image features of the dictionary image at a ratio (matching score) exceeding a predetermined threshold value (ACT 53).
- Upon finding a region where the coincidence ratio of the feature data exceeds the predetermined threshold value (ACT 54, YES), the CPU 11 determines that the commodity registered with the dictionary image is at the position corresponding to that region of the corrected captured image (ACT 55). Upon detecting the commodity at that position, the CPU 11 masks the found region of the corrected captured image (ACT 56) and returns to ACT 53.
- Upon finding no region of the corrected captured image whose image features coincide with the image features of the dictionary image at a coincidence ratio exceeding the predetermined threshold value (ACT 54, NO), the CPU 11 determines whether there is another dictionary image to evaluate (ACT 57). Upon determining that there is another dictionary image (ACT 57, YES), the CPU 11 selects that dictionary image (ACT 58) and returns to ACT 52.
- Upon determining that there is no other dictionary image to evaluate (ACT 57, NO), the CPU 11 terminates the operation.
- In some embodiments, the self-driving
robot 102 may incorporate the commodity recognition unit 101.
- In some embodiments, the image processing apparatus 10 need not include the self-driving robot 102. For example, the CPU 11 may detect a commodity from an image captured by the camera 103 according to an operator's instruction or manipulation.
- The image processing apparatus 10 and the dictionary image storage apparatus 20 may be integrated.
- In the above-described embodiments, the image processing system generates, from the image of a calibration plate contained in the captured image, a correction matrix that alleviates imaging variations caused by camera characteristics, external lighting, or the like. The image processing system then corrects the captured image using the correction matrix and detects a commodity in the corrected image. Accordingly, the image processing system detects the commodity more reliably from the captured image, prevents the detection accuracy from deteriorating due to the influence of camera characteristics, external light, or the like, and thus recognizes the commodity more effectively.
- Also, because the camera captures an image of a commodity shelf together with a calibration plate whose physical location is known, the self-driving robot can adjust its position based on its position relative to the calibration plate.
- While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
Claims (19)
1. An autonomous inventory tracking apparatus, comprising:
an image acquisition unit configured to acquire an image; and
a processor configured to:
detect a calibration plate in the image acquired from the image acquisition unit;
calculate a color correction value for the image according to a color block of the calibration plate, the color block matching a reference value;
correct color in the image using the calculated color correction value to provide a color-corrected image; and
perform commodity recognition processing on the color-corrected image so as to identify a commodity in the image acquired from the image acquisition unit.
2. The autonomous inventory tracking apparatus according to claim 1 , wherein
the calibration plate includes a plurality of blocks having colors that match different reference color values, and
the processor is configured to generate a correction matrix for correcting R, G, and B values of each pixel of the image.
3. The autonomous inventory tracking apparatus according to claim 1 , wherein the calibration plate includes a plurality of blocks having colors that match different reference color values.
4. The autonomous inventory tracking apparatus according to claim 1 , further comprising:
a robot configured to move along a route including a plurality of predetermined stopping positions; and
a camera on the robot, wherein
at each predetermined stopping position a calibration plate is disposed at a position viewable by the camera, and
the image acquisition unit is configured to acquire an image captured using the camera at each respective predetermined stopping position.
5. The autonomous inventory tracking apparatus according to claim 4 , further comprising:
a positional adjustment unit configured to adjust a position of the robot at each predetermined stopping position by reference to a detected position of the calibration plate in the image captured using the camera at the predetermined stopping position, wherein
the image acquisition unit is configured to acquire another image if the position of the robot has been adjusted.
6. The autonomous inventory tracking apparatus according to claim 4 , wherein the image acquisition unit is mounted on the robot.
7. The autonomous inventory tracking apparatus according to claim 1, wherein the process to identify the commodity includes comparison of the color-corrected image to a dictionary image of the commodity retrieved from an external non-volatile memory via an interface.
8. An inventorying apparatus, comprising:
a robot configured to move along a route including a plurality of predetermined stopping positions;
a camera on the robot;
an image acquisition unit configured to acquire an image captured using the camera at each predetermined stopping position; and
a processor configured to:
detect a calibration plate in the image acquired from the image acquisition unit;
calculate a color correction value for the image according to a color block of the calibration plate, the color block matching a reference value;
correct color in the image using the calculated color correction value to provide a color-corrected image; and
perform commodity recognition processing on the color-corrected image so as to identify a commodity in the image acquired from the image acquisition unit.
9. The inventorying apparatus according to claim 8 , wherein
the calibration plate includes a plurality of blocks having colors that match different reference color values, and
the processor is configured to generate a correction matrix for correcting R, G, and B values of each pixel of the image.
10. The inventorying apparatus according to claim 8 , wherein the calibration plate includes a plurality of blocks having colors that match different reference color values.
11. The inventorying apparatus according to claim 8 , further comprising:
a positional adjustment unit configured to adjust a position of the robot at each predetermined stopping position by reference to a detected position of the calibration plate in the image captured using the camera at the predetermined stopping position, wherein
the image acquisition unit is configured to acquire another image if the position of the robot has been adjusted.
12. The inventorying apparatus according to claim 8 , wherein the image acquisition unit is mounted on the robot.
13. The inventorying apparatus according to claim 8 , wherein the process to identify the commodity includes comparison of the color-corrected image to a dictionary image of the commodity retrieved from an external non-volatile memory via an interface.
14. An inventory tracking method, comprising:
moving a robot along a route including a plurality of predetermined stopping positions;
acquiring an image captured using a camera on the robot at each predetermined stopping position, the image depicting a commodity and a calibration plate positioned in proximity to the commodity, wherein the calibration plate includes a color block having a color that matches a reference color value;
calculating a color correction value for the color of the color block according to the reference color value;
correcting color in the image using the calculated color correction value to provide a color-corrected image; and
processing the color-corrected image so as to identify the commodity.
15. The inventory tracking method according to claim 14 , further comprising:
generating a correction matrix for correcting R, G, and B values of each pixel of the image, wherein
the calibration plate includes a plurality of blocks having colors that match different reference color values.
16. The inventory tracking method according to claim 14 , wherein the calibration plate includes a plurality of blocks having different colors matching different reference color values.
17. The inventory tracking method according to claim 14 , further comprising:
adjusting a position of the robot at each predetermined stopping position by reference to a position of the calibration plate in the image captured using the camera at the predetermined stopping position; and
acquiring another image including the calibration plate if the position of the robot has been adjusted.
18. The inventory tracking method according to claim 14, wherein an image acquisition unit that acquires the image is mounted on the robot.
19. The inventory tracking method according to claim 14 , wherein the process to identify the commodity includes comparison of the color-corrected image to a dictionary image of the commodity retrieved from an external non-volatile memory via an interface.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/726,776 US10970521B2 (en) | 2016-12-22 | 2019-12-24 | Image processing apparatus and image processing method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016-249568 | 2016-12-22 | ||
JP2016249568A JP6840530B2 (en) | 2016-12-22 | 2016-12-22 | Image processing device and image processing method |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/726,776 Continuation US10970521B2 (en) | 2016-12-22 | 2019-12-24 | Image processing apparatus and image processing method |
Publications (1)
Publication Number | Publication Date |
---|---|
US20180181793A1 true US20180181793A1 (en) | 2018-06-28 |
Family
ID=62630683
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/848,011 Abandoned US20180181793A1 (en) | 2016-12-22 | 2017-12-20 | Image processing apparatus and image processing method |
US16/726,776 Active US10970521B2 (en) | 2016-12-22 | 2019-12-24 | Image processing apparatus and image processing method |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/726,776 Active US10970521B2 (en) | 2016-12-22 | 2019-12-24 | Image processing apparatus and image processing method |
Country Status (2)
Country | Link |
---|---|
US (2) | US20180181793A1 (en) |
JP (1) | JP6840530B2 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180150972A1 (en) * | 2016-11-30 | 2018-05-31 | Jixiang Zhu | System for determining position of a robot |
US20190124232A1 (en) * | 2017-10-19 | 2019-04-25 | Ford Global Technologies, Llc | Video calibration |
US11520341B2 (en) * | 2019-05-07 | 2022-12-06 | Toshiba Tec Kabushiki Kaisha | Information processing apparatus and information processing method |
US11829128B2 (en) | 2019-10-23 | 2023-11-28 | GM Global Technology Operations LLC | Perception system diagnosis using predicted sensor data and perception results |
US11961260B1 (en) * | 2013-03-15 | 2024-04-16 | True-See Systems, Llc | System for producing three-dimensional medical images using a calibration slate |
US12238256B1 (en) * | 2013-03-15 | 2025-02-25 | True-See Systems, Llc | System and method for mapping colors of a color calibrated image or video |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11727476B2 (en) | 2020-01-22 | 2023-08-15 | Lydia Ann Colby | Color rendering |
CN116061187B (en) * | 2023-03-07 | 2023-06-16 | 睿尔曼智能科技(江苏)有限公司 | Method for identifying, positioning and grabbing goods on goods shelves by composite robot |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US2892737A (en) * | 1955-07-26 | 1959-06-30 | Standard Brands Inc | Refining hydrolyzates of starch by ion exchange |
US6333511B1 (en) * | 1997-05-27 | 2001-12-25 | Amos Talmi | Methods and apparatus for position determination |
US20030160877A1 (en) * | 2002-01-23 | 2003-08-28 | Naoaki Sumida | Imaging device for autonomously movable body, calibration method therefor, and calibration program therefor |
US20100017407A1 (en) * | 2008-07-16 | 2010-01-21 | Hitachi, Ltd. | Three-dimensional object recognition system and inventory system using the same |
US9193073B1 (en) * | 2014-10-15 | 2015-11-24 | Quanta Storage Inc. | Robot calibration apparatus for calibrating a robot arm |
US20160048798A1 (en) * | 2013-01-11 | 2016-02-18 | Tagnetics, Inc. | Inventory sensor |
US20160364869A1 (en) * | 2015-06-12 | 2016-12-15 | Hexagon Technology Center Gmbh | Method to control a drive mechanism of an automated machine having a camera |
US20170249491A1 (en) * | 2011-08-30 | 2017-08-31 | Digimarc Corporation | Methods and arrangements for identifying objects |
US9933515B2 (en) * | 2014-12-09 | 2018-04-03 | Toyota Motor Engineering & Manufacturing North America, Inc. | Sensor calibration for autonomous vehicles |
US20180345493A1 (en) * | 2017-06-06 | 2018-12-06 | Fanuc Corporation | Teaching position correction device and teaching position correction method |
US20180354130A1 (en) * | 2015-10-30 | 2018-12-13 | Keba Ag | Method, control system and movement setting means for controlling the movements of articulated arms of an industrial robot |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002218491A (en) * | 2001-01-23 | 2002-08-02 | Torai Tec:Kk | Digital camera color correction system |
JP4998792B2 (en) * | 2007-10-30 | 2012-08-15 | 独立行政法人情報通信研究機構 | Camera, camera array and camera array system |
JP2009187482A (en) * | 2008-02-08 | 2009-08-20 | Nippon Sogo System Kk | Shelf allocation reproducing method, shelf allocation reproduction program, shelf allocation evaluating method, shelf allocation evaluation program, and recording medium |
JP5429901B2 (en) | 2012-02-08 | 2014-02-26 | 富士ソフト株式会社 | Robot and information processing apparatus program |
JP2014127857A (en) * | 2012-12-26 | 2014-07-07 | Toshiba Corp | Display device |
JP2015210651A (en) * | 2014-04-25 | 2015-11-24 | サントリーシステムテクノロジー株式会社 | Merchandise identification system |
US9656806B2 (en) * | 2015-02-13 | 2017-05-23 | Amazon Technologies, Inc. | Modular, multi-function smart storage containers |
JP2017187988A (en) | 2016-04-07 | 2017-10-12 | 東芝テック株式会社 | Code recognition device |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11961260B1 (en) * | 2013-03-15 | 2024-04-16 | True-See Systems, Llc | System for producing three-dimensional medical images using a calibration slate |
US12238256B1 (en) * | 2013-03-15 | 2025-02-25 | True-See Systems, Llc | System and method for mapping colors of a color calibrated image or video |
US20180150972A1 (en) * | 2016-11-30 | 2018-05-31 | Jixiang Zhu | System for determining position of a robot |
US20190124232A1 (en) * | 2017-10-19 | 2019-04-25 | Ford Global Technologies, Llc | Video calibration |
US10635116B2 (en) * | 2017-10-19 | 2020-04-28 | Ford Global Technologies, Llc | Video calibration with illumination invariant image |
US11520341B2 (en) * | 2019-05-07 | 2022-12-06 | Toshiba Tec Kabushiki Kaisha | Information processing apparatus and information processing method |
US11829128B2 (en) | 2019-10-23 | 2023-11-28 | GM Global Technology Operations LLC | Perception system diagnosis using predicted sensor data and perception results |
Also Published As
Publication number | Publication date |
---|---|
US20200134292A1 (en) | 2020-04-30 |
JP6840530B2 (en) | 2021-03-10 |
US10970521B2 (en) | 2021-04-06 |
JP2018106268A (en) | 2018-07-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10970521B2 (en) | Image processing apparatus and image processing method | |
US10255471B2 (en) | Code recognition device | |
US10636391B2 (en) | Electronic label system including control device for controlling electronic labels | |
US20160379367A1 (en) | Image processing apparatus | |
EP3163252A1 (en) | Three-dimensional shape measurement device, three-dimensional shape measurement system, program, computer-readable storage medium, and three-dimensional shape measurement method | |
US10191470B2 (en) | Welding machine and control method therefor | |
US9420205B2 (en) | Image acquisition method of object on supporting surface | |
US20150023593A1 (en) | Image processing method for character recognition, character recognition apparatus using this method, and program | |
JP6669163B2 (en) | Colorimetric device and colorimetric method | |
CN109213090B (en) | Position control system, position detection device, and recording medium | |
EP3079100A1 (en) | Image processing apparatus, image processing method and computer readable storage medium | |
JPWO2015166797A1 (en) | Color measuring device and color measuring method | |
US20200301439A1 (en) | Information processing apparatus and reading system | |
US12183048B2 (en) | Image stitching method, apparatus and device based on reinforcement learning and storage medium | |
US10134138B2 (en) | Information processing apparatus, computer-readable storage medium, information processing method | |
US10430628B2 (en) | Slip processing device, slip processing method, and recording medium | |
US11386573B2 (en) | Article recognition apparatus | |
US10178245B2 (en) | Terminal device, diagnosis system and non-transitory computer readable medium | |
US12051266B2 (en) | Information processing apparatus, information processing method, and storage medium | |
US20200125883A1 (en) | Article recognition apparatus | |
KR102432430B1 (en) | Vehicle and controlling method for the same | |
JP2018054437A (en) | Inspection device | |
US20180015744A1 (en) | Conveyance detection apparatus, conveying apparatus, and recording apparatus | |
US11503200B2 (en) | Photographing device and photographing method | |
US20170355544A1 (en) | Conveyance detection apparatus, conveying apparatus, and recording apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: TOSHIBA TEC KABUSHIKI KAISHA, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ARIGA, NORIMASA;TAIRA, KAZUKI;YASUNAGA, MASAAKI;SIGNING DATES FROM 20171213 TO 20171214;REEL/FRAME:044443/0570 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |