
WO1998039739A1 - Data processing - Google Patents

Data processing

Info

Publication number
WO1998039739A1
WO1998039739A1 (PCT/SE1998/000379, SE 9800379 W)
Authority
WO
WIPO (PCT)
Prior art keywords
data
camera
unit
information content
point
Prior art date
Application number
PCT/SE1998/000379
Other languages
English (en)
Inventor
Thorleif Josefsson
Original Assignee
Qualisys Ab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualisys Ab filed Critical Qualisys Ab
Priority to AU66435/98A priority Critical patent/AU6643598A/en
Publication of WO1998039739A1 publication Critical patent/WO1998039739A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0007Image acquisition

Definitions

  • the present invention relates to a method and arrangement for data processing in a system including at least one data collecting device, comprising means for at least partly processing collected data, said device being connected to a computer unit, forming a chain of units, to process said at least partly processed data from said data collecting device.
  • the invention further refers to a motion analysis system including at least one camera unit, a computer unit and a marker, said camera unit being connected to said computer unit and/or other camera units, building a chain of units, each camera unit including means to convert an image, substantially including an image of said marker, to a data set.
  • Yet another aspect of the present invention is a virtual reality application including a motion analysis system.
  • Motion analysis is now a well-known method using camera units and computer aid to analyse, e.g., the biomechanics of humans and animals or the motions of a robot arm, etc.
  • markers are attached to the object to be analysed.
  • the object provided with the markers was first filmed and then manually analysed and digitalised to determine the correct position of the markers. This was a time-consuming procedure.
  • CCD sensor which is a light sensor
  • a CCD sensor, consisting of lines of charge-coupled sensors arranged as a matrix, i.e. arranged in an X and Y coordinate system, for one or several colours, converts the light (from the optical element) projected onto it by electronically scanning each line of X sensors in the Y direction and producing a television (video) signal. The signals may then be analysed in different ways to detect the positions of the markers attached to the object.
  • the main object of the invention is to distribute the data processing, substantially between the data collecting devices, and to provide, for instance, a host computer in the data processing chain with substantially prepared data.
  • distributing the processing in this way avoids overloading the host computer, while the amount of data that can be transmitted to and processed by the host computer increases dramatically, which improves the possibility of real-time processing in the computer.
  • Another object of the present invention is to increase the accuracy and reliability of the data processed.
  • Yet another object of the present invention is to provide a new data type for fast data processing.
  • Said objects are obtained by said data collecting device being arranged to internally process the data collected by said device into different levels having different degrees of information content, including a high degree of information content and a low degree of information content, said data collecting devices further including means to receive data from other data collecting devices at said different degrees of information content, whereby a set of data generated in said data collecting device, or received from a data collecting device, having a high degree of information content supersedes data having a lower degree of information content before the processed data is forwarded in the chain.
  • a camera unit includes means for communication with other camera units and/or said computer unit; the set of processed data includes different levels of processed data, including a set with a substantially high degree of information content and one with a substantially low degree of information content, whereby a set of data generated in said camera unit, or received from a camera unit, having a high degree of information content supersedes data having a lower degree of information content before the processed data is forwarded in said chain of units.
  • said data set having substantially a high degree of information content essentially includes information on a three-dimensional position of the marker in space.
  • said data set having substantially a low degree of information content includes information on a two-dimensional point in space, described as a line having direction coefficients K_x, K_y, K_z, the diameter δ of the marker, and a function value of δ.
  • said data set further includes information on a three-dimensional position for the marker in space, described by X, Y, Z coordinates and an index for substantially two crossing lines.
  • said camera unit comprises preprocessing units, a buffer, an optical element and means to communicate with other camera units.
  • said marker is sequentially coded.
  • each camera is arranged to generate a list containing line data in a memory unit, said list being processed and compared with data received from a foregoing camera, by: comparing the incoming point data with the internally generated point data, if any, and updating the point data; comparing the incoming cross data with internally generated line data crossing the cross point, updating the cross data and promoting the cross data to point data with respect to the line data; and comparing the incoming line data with the internally generated line data. If two crossing lines result in a point within a good tolerance, the data is accepted by updating the point data and/or generating cross data; otherwise the line data is retained.
  • Fig. 1 is a schematic diagram of a simple motion analysis system.
  • Fig. 2 schematically illustrates an image of a marker and digital slices of the same.
  • Fig. 3 is a schematic diagram illustrating a system of data collecting devices using the method according to the present invention.
  • Fig. 4 is a schematic view of the coordinate system in a preferred embodiment.
  • Fig. 5 is a schematic data flow diagram in a preferred camera used in an arrangement according to the present invention.
  • Fig. 6 is a flow chart, schematically illustrating the method according to the present invention.
  • Fig. 7 schematically shows an embodiment using the method according to the present invention.
  • an analogue system uses an ordinary video signal from a camera as input. By means of said signal, the X and Y coordinates of the marker, which is separated from its surroundings by light intensity, are calculated. The aim is to measure the movement of the marker as exactly as possible, i.e. to minimise the inaccuracy that results from the video signal consisting of a finite number of dots.
  • the video signal consists of a number of lines, which are scanned in chronological order.
  • a marker generates an image, which extends over one or several lines.
  • the marker image on a line is called a segment.
  • the time is measured partly from the beginning of the line to the beginning of the segment (X_s) and partly from the beginning of the line to the end of the segment (X_e).
  • the mean value of these two times is a measure of the position of the segment in space in the horizontal direction (if the lines are horizontal), while the serial number of the line (S) is a measure of the position of the segment in the vertical direction.
  • the length l of the segment is then X_e − X_s.
  • the centre of the marker may be obtained as a calculated common central point of all segments being part of the marker (image), where the X and Y coordinates of the marker, X_m and Y_m respectively, are obtained through formulas 1 and 2:
  • the Σ sign indicates that the summation is carried out over all segments belonging to the marker image.
  • the time points X s and X e can be measured with an electronic counting device connected to an oscillator.
  • the counter starts at the beginning of the line and is read when the start and end of a segment are reached.
  • the oscillator frequency is limited for technical and economic reasons. In the digital case the problem may be that the image elements cannot be made as small as required.
  • a comparator starts an integrator, which generates a linear potential slope from a potential V_a to V_b at time X_s.
  • the slope is then sampled and measured when the counter changes between two values.
  • the point when the slope passes a predetermined alteration value is used to define the time X_ss.
  • the difference between X_s and X_ss is a constant and is determined by the integrator and the delays in the comparators.
  • the time X_ss is easily calculated from the measured points on the potential slope at the time of the change of the counter, provided that at least two points on the slope are measured. For example, if two voltage values are measured, V_1 at t_1 and V_2 at t_2, and V_0 lies between V_1 and V_2, X_ss is interpolated by formula 3: X_ss = t_1 + (V_0 − V_1)(t_2 − t_1)/(V_2 − V_1).
  • the time X_e is measured in the same way.
  • a linear slope is used, because the calculations become simpler, however, other curves may be used.
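The linear interpolation of formula 3 can be sketched as follows; the function name and arguments are illustrative, the arithmetic is the standard two-point interpolation described in the text.

```python
# Sketch of formula 3: given two sampled voltages V1 at t1 and V2 at t2,
# with the threshold voltage V0 lying between them, the crossing time X_ss
# is found by linear interpolation between the two samples.

def crossing_time(t1, v1, t2, v2, v0):
    return t1 + (v0 - v1) * (t2 - t1) / (v2 - v1)

# Threshold 2.0 V is crossed halfway between samples 1.0 V and 3.0 V:
print(crossing_time(0.0, 1.0, 1.0, 3.0, 2.0))  # → 0.5
```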
  • the image elements may be arranged in lines by means of a low-pass filter to provide a somewhat continuous signal, which can be processed as an analogue signal. However, in the preferred embodiment each image element is measured individually, and from the measured values a value is interpolated, determining when a threshold T is passed.
  • the digitalized signal is passed over to a comparison unit, which interpolates individual sample values about a predetermined threshold value T, also called video level.
  • T a predetermined threshold value
  • the object is to determine when the amplitude of the signal passes the value T.
  • Each passage presents a start and stop coordinate of each segment with a high resolution, which can be about 30 × the number of pixels on a row.
  • In a computation unit the following calculation is executed:
  • V_1 and V_2 are the signal levels of the preceding and succeeding pixels, respectively, received from the comparator unit.
  • formula 4 may be regarded as a special case of formula 3.
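As a sketch of formula 4 viewed as this special case of formula 3: with samples taken at consecutive pixels (so t_2 − t_1 is one pixel), the sub-pixel coordinate where the signal passes the threshold T is the pixel number plus the interpolated fraction (T − V_1)/(V_2 − V_1). The function below is an illustration, not the patented circuit.

```python
# Sketch of formula 4: sub-pixel edge position from two adjacent pixel
# samples V1 and V2 straddling the threshold T. The spacing between the
# samples is one pixel, so the interpolated fraction is added directly
# to the integer pixel number from the counter.

def subpixel_edge(pixel, v1, v2, t):
    return pixel + (t - v1) / (v2 - v1)

# Threshold 15 between samples 10 (pixel 42) and 30 (pixel 43):
print(subpixel_edge(42, 10.0, 30.0, 15.0))  # → 42.25
```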
  • the pixel number may be obtained from a counter (not shown). Depending on the components used, the levels V_1 and V_2 may be measured with a resolution of 10 bits, the pixel number (MSB) with 9 bits, and (T − V_1)/(V_2 − V_1) with 7 bits. The centre point x' of the marker is then computed in a computation unit by means of previous values stored in a memory unit, using formula (5):
  • l_k is the length of segment k (i.e., X_ek − X_sk)
  • S is the serial number of the image element, and x̄_k is the centre of segment k.
  • the formulas (1) and (2) are substituted by formulas (5) and (6), respectively.
  • formula (1) and (2) alone do not contribute to obtaining an exact value as desired.
  • the nth power of l_k is calculated.
  • the l_k's may then be stored in the memory unit 23 for further calculations.
  • the area A = πr², which yields formula (7):
  • Fig. 4 schematically illustrates a principle for positioning the object 11; the X, Y and Z coordinates may be calculated as described below.
  • X_0, Y_0 is the centre of the detector plane, corresponding to the centre of the lens, which are camera constants;
  • D_m is the diameter of the marker 11.
  • X_m, Y_m and Z_m are the three-dimensional position (vector components) of the marker, and especially the distance between the camera (lens) and the object.
  • Broadcast Addressing mode used to address all units simultaneously.
  • Point data A three-dimensional position for a marking device in space, described by X, Y, Z position and Q tolerance. Point data represents the cross point of three lines in space.
  • Cross data A three-dimensional position for a marking device in space, described by X, Y, Z coordinates and an index for substantially two crossing lines. This data cannot be converted to a point data. A further crossing line contributes to a reliable point determination.
  • Line data A two-dimensional point in space, described as a line having direction coefficients K_x, K_y, K_z, the diameter δ of the marker, a function value of δ with δ_1 and δ_2 (maximum and minimum for δ), and the identity number of the detecting device (camera) which has detected the marker.
  • ABC is one end point coordinate on the line
  • DEF is another end point coordinate on the line
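The three information levels defined above can be sketched as simple record types. The field names follow the text (X, Y, Z coordinates, tolerance Q, direction coefficients K_x/K_y/K_z, marker diameter, camera identity), but the exact record layout is an assumption for illustration, not taken from the patent.

```python
# Illustrative record types for the three information levels: point data
# (highest), cross data, and line data (lowest).
from dataclasses import dataclass

@dataclass
class PointData:
    # a reliable 3D position: the cross point of three lines in space
    x: float
    y: float
    z: float
    q: float  # tolerance Q

@dataclass
class CrossData:
    # a 3D position indexed by two crossing lines; a further crossing
    # line is needed before it becomes a reliable point
    x: float
    y: float
    z: float
    lines: tuple  # indices of the two crossing lines

@dataclass
class LineData:
    # a line in space plus the marker diameter, from one camera
    kx: float
    ky: float
    kz: float
    diameter: float
    camera_id: int  # identity number of the detecting camera

print(PointData(1.0, 2.0, 3.0, 0.01))
```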
  • a simple schematic system is illustrated in fig. 1.
  • the system comprises a camera unit 10 directed to an object, in this case a human body 12, to be analysed, and one or several markers 11 attached to the object 12.
  • the camera 10 may be connected to a host computer 13, to process further and present the data received from the camera 10.
  • Fig. 3 shows an application using four cameras 14, 15, 16 and 17.
  • an optical element such as a lens 18 or other focussing means
  • a light sensor such as a CCD sensor and preprocessing unit 19
  • processing unit 20 for processing the video signals.
  • a camera unit is preferably arranged to produce frames, i.e. a data representation of the image including markers.
  • Each camera is arranged with an IN and OUT port for interconnecting cameras and other equipment, such as the computer 13. Consequently, the first camera 14 through its OUT port is connected to the IN port of the second camera 15, which in turn is connected to the third 16 camera and so on.
  • the last camera 17 in the chain can, through its OUT port or another data port of known type, be connected directly or via an I/O device 21 to the host computer.
  • each camera processes frames for determining the position of each marker (if several).
  • Each camera processes and produces a set of data, substantially including point data, line data and cross data as defined above.
  • the processing unit of each camera is connected to the processing unit of the foregoing camera (or to the computer or the I/O device, if it is the last camera in the chain). Accordingly, each processing unit processes its own frame and the frame received from the foregoing camera, if special criteria are fulfilled, which will be discussed later.
  • the computer receives the pre-processed data and avoids having to process the entire received data set further.
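The supersession rule running in each camera can be sketched as follows: when a camera merges its own data with data received from the foregoing camera, the set with the higher degree of information content (point over cross over line) wins. The ranking dictionary and record format below are assumptions for illustration.

```python
# Minimal sketch of the information-content supersession rule applied at
# each link of the camera chain: point data > cross data > line data.

RANK = {"line": 0, "cross": 1, "point": 2}

def merge(own, received):
    """own/received: dicts mapping a marker id to ('line'|'cross'|'point', payload)."""
    merged = dict(received)
    for marker, (kind, payload) in own.items():
        # own data replaces received data only when it carries more information
        if marker not in merged or RANK[kind] > RANK[merged[marker][0]]:
            merged[marker] = (kind, payload)
    return merged

out = merge({"m1": ("cross", None)},
            {"m1": ("line", None), "m2": ("point", None)})
print(sorted((k, v[0]) for k, v in out.items()))  # → [('m1', 'cross'), ('m2', 'point')]
```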
  • Fig. 5 is a schematic block diagram of an embodiment of the data processing unit 20.
  • the data processing unit 20 comprises a first IN buffer 22, a data computing unit 23, a postprocessing unit 24, a second IN buffer 25, an OUT buffer 29 and a message handling unit 30.
  • the buffer 22 is connected to a preprocessing unit 19.
  • the preprocessing unit 19 may be a video processing unit.
  • the preprocessing unit 19, IN buffer 22, data computing unit 23, post-processing unit 24, the second IN buffer 25 and OUT buffer 29 are interconnected by means of a data bus, e.g. of ISA type.
  • a signal is provided to the data processing unit 20 through the buffer 22.
  • the video signal is processed and a two-dimensional coordinate is determined.
  • the parameters x and y are the 2D coordinates (camera scale) according to formulas 5 and 6, δ_1 and δ_2 the maximum and minimum radius of the marker according to formula 7, and i is the intensity parameter for the image 26.
  • the ⁇ i and ⁇ values are obtained through a roundness control as described in the Swedish patent application no. 9700065-7, and generally depend on the light intensity received from the marker, which may depend on the flash intensity and light reflection from said marker.
  • the processing unit further comprises the computing unit 23, which processes the data according to the flow chart of fig. 6.
  • x and y represent a foot point for lens correction
  • the K coefficients are provided from a calibration process 102.
  • a diameter correction is carried out, 103, using parameters x_m, y_m, l_2, l_3 and l_4, where the effect of the light intensity distribution in space, i.e. the light flash intensity if one is used, is taken into consideration.
  • x_m, y_m are basic points for the diameter correction, and l_2, l_3 and l_4 are constants.
  • the results are δ'_1 and δ'_2, i.e. the corrected δ.
  • the diameter value may be corrected for known static error contributions, such as illumination intensity and insufficient focussing.
  • An additional calculation for transforming the coordinates is carried out at 104 to provide a common coordinate system for the camera units, yielding data in the following form:
  • K is a constant and m is the lens position coordinate.
  • the direction coefficient for the line is transformed to a global system, having the position of the point on the line as the argument.
  • the roundness control, or the value of δ, gives two end point coordinates δ'_1 and δ'_2 of the line.
  • the frame data received (106) from a foregoing camera unit, i.e. if the camera is not first in the chain, will be processed in the current camera.
  • Each camera generates a list substantially containing line data in a memory unit. The list is then processed and compared, 105, with the received data in the post-processing unit 24, by:
  • Each frame is provided with a frame header. If the incoming frame has a higher number, one frame was lost during the communication with the foregoing camera; the camera's own frame is then sent to the next camera as line data. If the incoming frame has a lower number, a frame was lost in the communication between the preprocessing unit 19 and the data processing unit 20, and the incoming frame is sent on to the next camera.
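The frame-header bookkeeping can be sketched as a small classification step: comparing the incoming frame number with the camera's own frame number reveals where a frame was lost. The return labels are illustrative, not the wire protocol.

```python
# Sketch of the frame-number comparison driving the loss handling above.

def classify_frame(own_no, incoming_no):
    if incoming_no > own_no:
        # a frame was lost on the link from the foregoing camera:
        # send the camera's own frame onward as line data
        return "lost-upstream"
    if incoming_no < own_no:
        # a frame was lost between preprocessing unit 19 and data
        # processing unit 20: forward the incoming frame unchanged
        return "lost-internal"
    # same frame number: merge the incoming and internal data sets
    return "merge"

print(classify_frame(10, 11), classify_frame(10, 9), classify_frame(10, 10))
```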
  • IN and OUT buffers, 25 and 29 respectively, are arranged for buffering the incoming and outgoing data, i.e. messages from the foregoing and following units.
  • the message handling means 30 is arranged to handle the identity and addresses in the messages.
  • the data processing unit 20 processes the frames in time order. If there is not enough time to handle the frames, they are queued in the buffers 22, 25 and 29 and processed as soon as possible. If storage space runs out, preferably the oldest data is overwritten or data fetching is paused.
  • the instruction set controlling each camera unit may be used (with some modification) in the host computer to execute the same process as in the camera units.
  • all messages (frames) between the connected units are sent from point to point, i.e. each data processing unit 20 must first receive a message and then decide to process it or send it further.
  • Message transfer timing in a preferred embodiment is according to the following.
  • the unit which sends a message waits for an acknowledgement message (ACK) within T_resend if the transmission was successful; otherwise the message is repeated a predetermined number of times before the communication is assumed to have failed, and an error message is generated and sent to the last camera 17 in the chain and further to the host computer 13. If the receiver of a message detects that the message is incorrect, a negative ACK (NACK) is sent to the transmitter, i.e. a request for a resend.
  • ACK acknowledgement message
  • NACK negative ACK
  • the timing schedule is according to the following: T_resend = T_msg + C_delay + T_cmsg, where C_delay is the delay time in each camera, T_msg is the transfer time for a message, T_cmsg is the transfer time for an ACK/NACK, and
  • T_resend is the timeout time when resending a message.
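The resend policy described above can be sketched as a small retry loop: wait up to T_resend for an ACK, retry a fixed number of times, then report failure. The transport functions are stand-ins; only the control flow follows the text.

```python
# Sketch of the ACK/NACK resend policy: retry on timeout or NACK, give up
# after a predetermined number of attempts.
import queue

def send_with_retry(send, ack_queue, t_resend=0.1, max_retries=3):
    for _ in range(max_retries):
        send()                                # (re)transmit the message
        try:
            reply = ack_queue.get(timeout=t_resend)
        except queue.Empty:
            continue                          # timeout within T_resend: resend
        if reply == "ACK":
            return True                       # transmission acknowledged
        # NACK received: the receiver requested a resend, loop again
    return False                              # communication assumed failed

q = queue.Queue()
q.put("ACK")
print(send_with_retry(lambda: None, q))  # → True
```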
  • Initiation and suitable self-checks of the camera are executed by the corresponding data processing unit 20 after power-on.
  • the instructions may be stored in a memory unit (BIOS) of the data processing unit 20.
  • BIOS memory unit
  • the self-check inspects the communication ports, memory units and parts essential for the function.
  • each camera unit checks its IN/OUT ports for connections. If an OUT connection is not present, the camera is last in the chain and obtains, for instance, address 1 (decimal). Camera 1 (C1) sends an address message to the next camera in the chain, which assumes address 2, and so on. C1 controls the communication with every other unit in the chain by scanning for correct addresses and the broadcast address, i.e. a message sent from C1 to all cameras in the chain, to which the last camera answers with an ACK.
  • the synchronisation of each camera is performed from the last unit (C_last), closest to the host computer.
  • the purpose of synchronisation is to arrange for the frames produced by each camera to be exposed and marked simultaneously.
  • a start command from the host computer starts the synchronisation.
  • the start command substantially sets all cameras in start mode.
  • the command for starting the production of frames is generated by C1.
  • the calibration is intended to compensate, e.g., for lens distortion and varying light intensities within a measuring volume, to obtain information on a metrical scale for the markers, and to oblige all cameras to use a common coordinate system.
  • When a frame is produced by the preprocessor 19, it is buffered for use by the data processing unit 20, for example by setting an interrupt signal.
  • the data processing unit 20 fetches the frames through the data bus 31 and puts them in a wait queue. Both processing and preprocessing units can queue the frames.
  • Fig. 7 schematically shows the detection of two markers 11a and 11b by three linked cameras 32, 33 and 34.
  • the vision fields of the cameras are indicated by broken lines.
  • Camera 32 "sees" markers 11a and 11b and produces line data, which is sent to camera 33.
  • Camera 33 sees said markers too, and the line data is converted to cross data for the markers.
  • Camera 34 sees only marker 11a, produces point data for 11a, and sends the cross data for marker 11b on to another camera or the host computer 13.
  • the camera units may be substituted by radar units or the like.
  • the data processing unit may be modified, whereby many of the components may be integrated.
  • the camera units may be arranged to communicate by means of electromagnetic waves, infrared signals or the like.
  • the system according to the present invention is suitable for virtual reality (VR) applications, preferably substituting the special suit needed in such an application by attaching one or several markers to a person/object being part of the VR application.
  • VR virtual reality
  • the cameras attached to a host computer will be able to trace the marker(s) and generate the data required without loading the host computer with unnecessary data processing.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention concerns data processing in a system, preferably a motion analysis system, integrating at least one data collecting device (10, 14, 15, 16, 17), such as a camera, and comprising means for at least partly processing the collected data. The collecting device is connected to a computer (13), thereby forming a chain of units, to process said at least partly processed data from the collecting device. The data collecting device (10, 14, 15, 16, 17) is arranged to internally process the collected data into different levels of information content (high level, low level). These data collecting devices also comprise means for receiving data from other data collecting devices (10, 14, 15, 16, 17) at different levels of information content. A set of data generated in said data collecting device, or sent by a data collecting device having a high level of information content, supersedes data of low information content before the processed data is forwarded in the chain.
PCT/SE1998/000379 1997-03-03 1998-03-03 Data processing WO1998039739A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU66435/98A AU6643598A (en) 1997-03-03 1998-03-03 Data processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE9700760A SE511461C2 (sv) 1997-03-03 1997-03-03 Databearbetning
SE9700760-3 1997-03-03

Publications (1)

Publication Number Publication Date
WO1998039739A1 true WO1998039739A1 (fr) 1998-09-11

Family

ID=20406005

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE1998/000379 WO1998039739A1 (fr) Data processing

Country Status (3)

Country Link
AU (1) AU6643598A (fr)
SE (1) SE511461C2 (fr)
WO (1) WO1998039739A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6943846B2 (en) 2000-12-20 2005-09-13 Robert Bosch Gmbh Multi-picture in picture system
US7222198B2 (en) * 2003-05-13 2007-05-22 Hewlett-Packard Development Company, L.P. System for transferring data between devices by making brief connection with external contacts that extend outwardly from device exterior

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1990015509A1 * 1989-06-07 1990-12-13 Loredan Biomedical, Inc. Kinematic video analysis
US5475422A * 1993-06-21 1995-12-12 Nippon Telegraph And Telephone Corporation Method and apparatus for reconstructing three-dimensional objects
EP0735512A2 * 1995-03-29 1996-10-02 SANYO ELECTRIC Co., Ltd. Methods for forming an image for three-dimensional display, for computing depth information, and for image processing using the depth information
WO1997013218A1 * 1995-10-04 1997-04-10 Visual Interface, Inc. Method for creating a three-dimensional image from two-dimensional images


Also Published As

Publication number Publication date
AU6643598A (en) 1998-09-22
SE9700760L (sv) 1998-11-04
SE9700760D0 (sv) 1997-03-03
SE511461C2 (sv) 1999-10-04

Similar Documents

Publication Publication Date Title
US6700604B1 Image capturing method and apparatus for determining a shape of an object
US7526121B2 Three-dimensional visual sensor
EP0951696B1 Method and device for determining the position of an object
JPH08233556A Captured-image processing device and captured-image processing method
CN108307113A Image acquisition method, image acquisition control method and related device
TW201817215A Image scanning system and method thereof
CN113758498B Method and device for calibrating an unmanned aerial vehicle gimbal
CN111750804A Object measurement method and device
KR101623828B1 Apparatus for correcting distorted images from a fisheye camera, with improved computation speed
CN110068308B Distance measurement method and system based on a multi-lens camera
WO1998039739A1 Data processing
JP2000134537A Image input device and method
US6898298B1 Installing posture parameter auto extracting method of imaging device, and monitoring system using the imaging device
WO2022075607A1 Lidar system capable of detecting a road surface, and data processing method
JPH0981790A Three-dimensional shape reconstruction device and method
KR20230128683A Method and device for distance distortion correction to improve lidar detection accuracy
CN115496879A Obstacle sensing method, device, equipment and storage medium
Li et al. An accurate camera calibration for the aerial image analysis
JP2775924B2 Image data creation device
JP3340599B2 Plane estimation method
JPH06221832A Method and device for calibrating the focal length of a camera
JP2000074665A Distance image generation device and method
JPH07128013A Optical position detection device
Fisher et al. Combining intensity and range images for 3d architectural modelling
JP2016146546A Image processing device, information processing method and program

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM GW HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 1998538442

Format of ref document f/p: F

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA
