
WO1998039739A1 - Data processing - Google Patents

Data processing

Info

Publication number
WO1998039739A1
WO1998039739A1 (PCT/SE1998/000379)
Authority
WO
WIPO (PCT)
Prior art keywords
data
camera
unit
information content
point
Prior art date
Application number
PCT/SE1998/000379
Other languages
French (fr)
Inventor
Thorleif Josefsson
Original Assignee
Qualisys Ab
Priority date
Filing date
Publication date
Application filed by Qualisys Ab filed Critical Qualisys Ab
Priority to AU66435/98A priority Critical patent/AU6643598A/en
Publication of WO1998039739A1 publication Critical patent/WO1998039739A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0007Image acquisition



Abstract

Data processing in a system, preferably a motion analysis system, including at least one data collecting device (10, 14, 15, 16, 17), such as a camera unit, comprising means for at least partly processing collected data, said device being connected to a computer unit (13), forming a chain of units, to process said at least partly processed data from said data collecting device. Said data collecting device (10, 14, 15, 16, 17) is arranged to internally process the data collected by said device into different levels, having different degrees of information content, including a high degree of information content and a low degree of information content, said data collecting devices further including means to receive data from other data collecting devices (10, 14, 15, 16, 17) in said different degrees of information content, whereby a set of data generated in said data collecting device or received from a data collecting device having a high degree of information content supersedes data having a lower degree of information content before forwarding the processed data in the chain.

Description

DATA PROCESSING
TECHNICAL FIELD OF THE INVENTION

The present invention relates to a method and arrangement for data processing in a system including at least one data collecting device, comprising means for at least partly processing collected data, said device being connected to a computer unit, forming a chain of units, to process said at least partly processed data from said data collecting device.
The invention further refers to a motion analysis system including at least one camera unit, a computer unit and a marker, said camera unit being connected to said computer unit and/or other camera units building a chain of units, each camera unit including means to convert an image, substantially including an image of said marker, to a data set.
Yet another aspect of the present invention is a virtual reality application including a motion analysis system.
BACKGROUND AND RELATED ART
Motion analysis is now a well-known method that uses camera units and computer aid to analyse, e.g., the biomechanics of humans or animals, the motions of a robot arm, etc.
In a simple system markers are attached to the object to be analysed. In the past the object provided with the markers was first filmed and then manually analysed and digitalised to determine the correct position of the markers. This was a time-consuming procedure.
Presently, cameras equipped with so-called CCD sensors are used. A CCD sensor, which is a light sensor, is generally arranged in communication with the necessary optical elements. A CCD sensor, consisting of lines of charge-coupled sensors arranged as a matrix, i.e. arranged in an X and Y coordinate system, for one or several colours, converts the light projected on it (from the optical element) by electronically scanning each line of X sensors in the Y direction and producing a television (video) signal. The signals may then be analysed in different ways to detect the positions of the markers attached to the object.
Presently used systems use one or several cameras, which send the video signal to a computer unit for analysis, or just partly process the video signal and then send it to a computer device for analysis. The problem is that when many camera units are used, the amount of data sent to the computer unit grows substantially, which burdens the resources of the computer. These systems also tend to slow down as the amount of data increases.
SUMMARY OF THE INVENTION
It is an object of the present invention to overcome the above problem and present a novel system for faster data processing and calculation, in general, and a motion analysis system in particular, for determining the position of a marker in real time while using the resources of each information collecting unit, i.e. a camera.
The main object of the invention is to distribute the data processing substantially between the data collecting devices, and to provide, for instance, a host computer at the end of the data processing chain with substantially prepared data. The performance increased in this way will not overload the host computer, and the amount of data that can be transmitted to and processed by the host computer will increase dramatically, which improves the possibility of real-time processing in the computer.
Another object of the present invention is to increase the accuracy and reliability of the data processed.
Yet another object of the present invention is to provide a new data type for fast data processing.
Said objects are obtained by said data collecting device being arranged to internally process the data collected by said device into different levels, having different degrees of information content, including a high degree of information content and a low degree of information content, said data collecting devices further including means to receive data from other data collecting devices in said different degrees of information content, whereby a set of data generated in said data collecting device or received from a data collecting device having a high degree of information content supersedes data having a lower degree of information content before forwarding the processed data in the chain.
In a preferred embodiment of a motion analysis system according to the invention, a camera unit includes means for communication with other camera units and/or said computer unit; the set of processed data includes different levels of processed data, including a set with a substantially high degree of information content and a substantially low degree of information content, whereby a set of data generated in said camera unit or received from a camera unit having a high degree of information content supersedes data having a lower degree of information content before forwarding the processed data in said chain of units.
In an embodiment, said data set having substantially a high degree of information content essentially includes information on a three-dimensional position of the marker in space.
In another embodiment of the invention, said data set having substantially a low degree of information content includes information on a two-dimensional point in space, described as a line having direction coefficients K_x, K_y, K_z and diameter φ of the marker and a function value of φ.
In yet another embodiment said data set further includes information on a three-dimensional position for the marker in space, described by X, Y, Z coordinates and an index for substantially two crossing lines.
In a preferred embodiment of the present invention, said camera unit comprises preprocessing units, a buffer, an optical element and means to communicate with other camera units. In an embodiment said marker is sequentially coded.
In an embodiment, substantially each camera is arranged to generate a list containing line data in a memory unit, said list being processed and compared with data received from a foregoing camera, by: comparing the incoming point data with the internally generated point data, if any, and updating the point data; comparing the incoming cross data with internally generated line data crossing the cross point, updating the cross data and transferring the cross data to point data with respect to the line data; and comparing the incoming line data with internally generated line data. If two crossing lines result in a point within a good tolerance, the data is accepted by updating the point data and/or generating a cross data; otherwise the line data is retained.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following the invention will be described with reference to enclosed drawings, in which:
Fig. 1 is a schematic diagram of a simple motion analysis system. Fig. 2 schematically illustrates an image of a marker and digital slices of the same. Fig. 3 is a schematic diagram illustrating a system of data collecting devices using the method according to the present invention.
Fig. 4 is a schematic view of the coordinate system in a preferred embodiment.
Fig. 5 is a schematic data flow diagram in a preferred camera used in an arrangement according to the present invention.
Fig. 6 is a flow chart, schematically illustrating the method according to the present invention.
Fig. 7 schematically shows an embodiment using the method according to the present invention.
BASIC THEORY

Basically, an analogue system uses an ordinary video signal from a camera as input. By means of said signal, the X and Y coordinates of the marker, which is separated from its surroundings by its intensity of light, are calculated. The aim is to measure the movement of the marker as exactly as possible, i.e. to minimise the inaccuracy that results from the video signal consisting of a finite number of dots.
The video signal consists of a number of lines, which are scanned in chronological order. A marker generates an image, which extends over one or several lines. By means of a comparator, it is possible to determine the start and the end of a marker section on a line. The marker image on a line is called a segment. The time is measured partly from the beginning of the line to the beginning of the segment (X_s) and partly from the beginning of the line to the end of the segment (X_e). The mean value of these two periods is a measure of the position of the segment in space in the horizontal direction (if the lines are horizontal), while the serial number of the line (S) is a measure of the position of the segment in the vertical direction. The length l of a segment is then X_e − X_s.
The centre of the marker may be obtained as a calculated common central point of all segments being part of the marker (image), where the X and Y coordinates of the marker, Xm and Ym, respectively are obtained through formulas 1 and 2:
$$X_m = \frac{\sum \left[(X_e - X_s)\,\tfrac{1}{2}(X_s + X_e)\right]}{\sum (X_e - X_s)} \qquad (1)$$

$$Y_m = \frac{\sum \left[(X_e - X_s)\,S\right]}{\sum (X_e - X_s)} \qquad (2)$$
The ∑ sign indicates that the summation is carried out over all segments that are members of the marker image.
The above is applicable to an analogue signal. Similar calculations may be carried out if image dots from a digital detector are delivered in an order other than linear. In that case the centre points of all image elements that are members of the same marker are calculated: first, the image elements can be translated to lines, and then the calculation may be carried out as in the analogue case.
The time points X_s and X_e can be measured with an electronic counting device connected to an oscillator. The counter starts at the beginning of the line and is read when the start and end of a segment are reached. One problem is that the oscillator frequency is limited for technical and economical reasons. In the digital case the problem may be that the image elements cannot be as small as required.
To overcome this problem in the analogue case, a comparator is provided which starts an integrator, generating a linear potential slope from a potential V_a to V_b at time X_s. The slope is then sampled and measured when the counter changes between two values. The point when the slope passes a predetermined alteration value is used to define the time X_ss. The difference between X_s and X_ss is a constant, determined by the integrator and the delays in the comparators. The time X_ss is easily calculated from the points measured on the potential slope at the time of the change of the counter, provided that at least two points on the slope are measured. For example, if two voltage values are measured, V_1 at t_1 and V_2 at t_2, and V_0 is between V_1 and V_2, X_ss is interpolated by formula 3:
$$X_{ss} = t_1 + \frac{(t_2 - t_1)(V_0 - V_1)}{V_2 - V_1} \qquad (3)$$
The time X_e is measured in the same way. In this embodiment a linear slope is used because the calculations become simpler; however, other curves may be used.
The image elements may be arranged in lines by means of a low-pass filter to provide a somewhat continuous signal, which can be processed like the analogue signal. However, in the preferred embodiment each image element is measured individually, and a value is interpolated from the measured values, determining when a threshold T is passed.
The digitalized signal is passed on to a comparison unit, which interpolates individual sample values about a predetermined threshold value T, also called the video level. As described above, the object is to determine when the amplitude of the signal passes the value T. Each passage yields a start and stop coordinate of each segment with a high resolution, which can be about 30 × the number of pixels on a row. In a computation unit the following calculation is executed:
$$X_{\text{high resolution}} = \text{Pixel No.} + \frac{T - V_1}{V_2 - V_1} \qquad (4)$$
where V_1 and V_2 are the signal levels of the preceding and succeeding pixels, respectively, received from the comparator unit.
Here, formula 4 may be regarded as a special case of formula 3.
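As a minimal sketch (the function and variable names are illustrative, not from the patent), formula 4 translates directly to code; it assumes the comparator has already identified the pixel pair whose levels straddle the video level T:

```python
def subpixel_coordinate(pixel_no: int, v1: float, v2: float, t: float) -> float:
    """Formula 4: interpolate where the signal passes the threshold t
    between the preceding pixel level v1 and the succeeding level v2."""
    if v1 == v2:
        raise ValueError("levels are equal; no threshold crossing here")
    return pixel_no + (t - v1) / (v2 - v1)

# Example: levels 80 and 120 straddle the video level 100 at pixel 42.
print(subpixel_coordinate(42, 80.0, 120.0, 100.0))  # 42.5
```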
The pixel number may be obtained from a counter (not shown). Depending on the components used, the levels V_1 and V_2 may be measured with a resolution of 10 bits, the pixel number (MSB) with 9 bits and (T − V_1)/(V_2 − V_1) with 7 bits. Then the centre point x' of the marker is computed in a computation unit by means of previous values stored in a memory unit, using formula (5):
$$x' = \frac{\sum l_k^{\,n}\,\bar{x}_k}{\sum l_k^{\,n}} \qquad (5) \qquad\qquad y' = \frac{\sum l_k^{\,n}\,S_k}{\sum l_k^{\,n}} \qquad (6)$$
where l_k is the length of the segment k (i.e., X_ek − X_sk), S_k is the serial number of the line of segment k, and x̄_k is the centre of the segment k.
In the digital case, the formulas (1) and (2) are thus substituted by formulas (5) and (6), respectively. However, formulas (1) and (2) alone do not contribute to obtaining a value as exact as desired. To obtain a more accurate and stable x' with high resolution, the n-th power of l_k is used as the weight. In the preferred embodiment the square of l_k, i.e. n = 2, is calculated.
The l_k's may then be stored in the memory unit 23 for further calculations. For example, the area A of the image may be calculated using the formula A = Σ l_k. For a circular marker, it is possible to calculate the radius using A = r²·π, which yields formula (7):
$$r = \sqrt{\frac{\sum l_k}{\pi}} \qquad (7)$$
which may be computed in a computation unit.
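A sketch of formulas 5-7 under the stated assumptions (each segment given as a sub-pixel start/end pair plus the serial number of its line; n = 2 as in the preferred embodiment); the names are illustrative only:

```python
import math

def marker_centre_and_radius(segments, n=2):
    """Formulas 5-7: length-weighted centre (x', y') of a marker and,
    for a circular marker, its radius from the total area A = sum(l_k).
    Each segment is a tuple (x_s, x_e, s): sub-pixel start/end
    coordinates and the serial number s of its line."""
    num_x = num_y = denom = area = 0.0
    for x_s, x_e, s in segments:
        l_k = x_e - x_s                  # segment length l_k
        x_bar = (x_s + x_e) / 2.0        # centre of segment k
        num_x += l_k**n * x_bar
        num_y += l_k**n * s
        denom += l_k**n
        area += l_k                      # A = sum of l_k
    x_c = num_x / denom                  # formula (5)
    y_c = num_y / denom                  # formula (6)
    r = math.sqrt(area / math.pi)        # formula (7)
    return x_c, y_c, r

print(marker_centre_and_radius([(10.2, 14.8, 7), (9.9, 15.1, 8)]))
```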
Referring now to Fig. 4, which schematically illustrates a principle for positioning the object 1 1, the X, Y and Z coordinates may be calculated as described below.
The following data are known:
x', y': the centre of the marker image 26 on the detector plane 28, i.e. on the CCD, computed using formulas 5 and 6;
c: the distance from the detector plane 28 to the lens 18;
X_0, Y_0: the centre of the detector plane, corresponding to the centre of the lens, which are camera constants;
D_m: the diameter of the marker 11.

In the calculation unit 22 the following parameters are calculated:
x_p = X_0 − x', i.e. the X-coordinate of the marker image on the detector plane relative to the centre of the detector plane;
y_p = Y_0 − y', i.e. the Y-coordinate of the marker image on the detector plane relative to the centre of the detector plane; and
d = 2r, r being the radius as above.
As the triangles A and B in Fig. 4 are similar, the following proportional relationships exist: X_m/x_p = Y_m/y_p = Z_m/c = D_m/d, which enables the following calculations in the unit 22:
$$X_m = \frac{D_m}{d}\,x_p \qquad (8)$$

$$Y_m = \frac{D_m}{d}\,y_p \qquad (9)$$

$$Z_m = \frac{D_m}{d}\,c \qquad (10)$$
where X_m, Y_m and Z_m are the three-dimensional position (vector components) of the marker, Z_m in particular being the distance between the camera (lens) and the object.
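The similar-triangle relations (8)-(10) translate directly; this sketch assumes the image centre and radius come from formulas 5-7 and that the physical marker diameter D_m is known (names are illustrative):

```python
def marker_3d_position(x_img, y_img, r, x0, y0, c, d_marker):
    """Formulas 8-10: 3D position of a marker from its image.
    (x0, y0) is the centre of the detector plane, c the plane-to-lens
    distance (camera constants), d_marker the real marker diameter."""
    x_p = x0 - x_img            # image coordinate relative to the
    y_p = y0 - y_img            # centre of the detector plane
    d = 2.0 * r                 # diameter of the marker image
    scale = d_marker / d        # D_m / d, from the similar triangles
    return scale * x_p, scale * y_p, scale * c   # (X_m, Y_m, Z_m)

print(marker_3d_position(3.1, 2.4, 0.05, 3.0, 2.0, 8.0, 40.0))
```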
DEFINITIONS
To simplify the understanding of the forthcoming description, the following terms are used with specific definitions.
Broadcast: Addressing mode used to address all units simultaneously.

Frame: Information about the position of objects, in two or three dimensions, and other parameters collected substantially at one collection occasion.

Point data: A three-dimensional position for a marking device in space, described by X, Y, Z position and a Q tolerance. Point data represents the cross point of three lines in space.

Cross data: A three-dimensional position for a marking device in space, described by X, Y, Z coordinates and an index for substantially two crossing lines. This data cannot be converted to point data; a further crossing line contributes to a reliable point determination.

Line data: A two-dimensional point in space, described as a line having direction coefficients K_x, K_y, K_z, the diameter φ of the marker and a function value of φ, with φ_1 and φ_2 (maximum and minimum for φ) and the identity number of the detecting device (camera) which has detected the marker. It may also be described by ABC and DEF, where ABC is one end point coordinate on the line and DEF is the other end point coordinate on the line, where C ≥ F.

Frame header: Frame number and a checksum.
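To make the three information levels concrete, here is a minimal sketch of them as Python dataclasses; every field name is an assumption made for illustration, not taken from the patent text:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LineData:
    """Low information content: one camera's 2D observation as a line."""
    kx: float; ky: float; kz: float      # direction coefficients
    phi1: float; phi2: float             # max/min of the diameter phi
    camera_id: int                       # identity of detecting camera

@dataclass
class CrossData:
    """Two crossing lines: a 3D position, not yet a reliable point."""
    x: float; y: float; z: float
    lines: Tuple[int, int]               # index of the two crossing lines

@dataclass
class PointData:
    """High information content: cross point of three lines in space."""
    x: float; y: float; z: float
    q: float                             # Q tolerance
```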
DETAILED DESCRIPTION OF AN EMBODIMENT
A simple schematic system is illustrated in fig. 1. The system comprises a camera unit 10 directed at an object to be analysed, in this case a human body 12, and one or several markers 11 attached to the object 12.
The camera 10 may be connected to a host computer 13, to further process and present the data received from the camera 10.
Fig. 3 shows an application using four cameras 14, 15, 16 and 17. The encircled part in fig. 3 schematically illustrates the parts enclosed in the camera: an optical element such as a lens 18 or other focussing means, a light sensor such as a CCD sensor with preprocessing unit 19, and a processing unit 20 for processing the video signals.
A camera unit is preferably arranged to produce frames, i.e. a data representation of the image including markers.
Each camera is arranged with an IN and an OUT port for interconnecting cameras and other equipment, such as the computer 13. Consequently, the first camera 14 is through its OUT port connected to the IN port of the second camera 15, which in turn is connected to the third camera 16, and so on. The last camera 17 in the chain can, through its OUT port or another data port of known type, be connected directly or via an I/O device 21 to the host computer.
Briefly, all cameras produce frames, which are pictures of the detected markers. As each camera "sees" a marker from a particular angle, the produced frames differ. The processing unit 20 of each camera processes each frame to determine the position of each marker (if several). Each camera processes and produces a set of data, substantially including point data, line data and cross data as defined above. The processing unit of each camera is connected to the processing unit of the foregoing camera (or to the computer or the I/O device, if it is the last camera in the chain). Accordingly, each processing unit processes its own frame and the frame received from the foregoing camera, if special criteria are fulfilled, which will be discussed later. Eventually, the computer receives the pre-processed data and avoids having to process the entire body of received data.
Fig. 5 is a schematic block diagram of an embodiment of the data processing unit 20. The data processing unit 20 comprises a first IN buffer 22, a data computing unit 23, a postprocessing unit 24, a second IN buffer 25, an OUT buffer 29 and a message handling unit 30. The buffer 22 is connected to a preprocessing unit 19. The preprocessing unit 19 may be a video processing unit. The preprocessing unit 19, IN buffer 22, data computing unit 23, post-processing unit 24, the second IN buffer 25 and OUT buffer 29 are interconnected by means of a data bus, e.g. of ISA type.
From the preprocessing unit 19, a signal is provided to the data processing unit 20, through the buffer 22. In the preprocessing unit 19, the video signal is processed and a two-dimensional coordinate is determined. A frame including two-dimensional data (2D), comprising x, y, φ_1, φ_2 and i, is generated and sent to the data processing unit 20. The parameters x and y are the 2D coordinates (camera scale) according to formulas 5 and 6, φ_1 and φ_2 are the maximum and minimum radius of the marker according to formula 7, and i is the intensity parameter for the image 26. The φ_1 and φ_2 values are obtained through a roundness control as described in the Swedish patent application no. 9700065-7, and generally depend on the light intensity received from the marker, which may depend on the flash intensity and the light reflection from said marker. The processing unit further comprises the computing unit 23, which processes the data according to the flow chart of fig. 6.
First, the input data 100 is processed, 101, for lens correction by executing the following compound equations:

x = (x' − x_1)(K_1·a² + K_2·a⁴)
y = (y' − y_1)(K_1·a² + K_2·a⁴)
where x_1 and y_1 represent a foot point for the lens correction, and K_1 and K_2 are coefficients provided from a calibration process 102.
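A hedged sketch of the lens-correction step 101: the patent does not define the radial term a, so this assumes a² is the squared distance from the foot point, as in common radial distortion models (all names illustrative):

```python
def lens_correction(x_raw, y_raw, x1, y1, k1, k2):
    """Step 101: radial lens correction about the foot point (x1, y1).
    k1 and k2 come from the calibration process 102. The definition of
    a**2 below is an assumption, not stated in the patent."""
    a2 = (x_raw - x1) ** 2 + (y_raw - y1) ** 2   # assumed radial term a^2
    factor = k1 * a2 + k2 * a2 ** 2              # K1*a^2 + K2*a^4
    return (x_raw - x1) * factor, (y_raw - y1) * factor

print(lens_correction(3.2, 2.1, 3.0, 2.0, 0.9, 0.01))
```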
Next, a diameter correction is carried out, 103, using parameters x_m, y_m, l_2, l_3 and l_4, where the effect of the light intensity distribution in space, i.e. the light flash intensity if one is used, is taken into consideration; x_m, y_m are basic points for the diameter correction and l_2, l_3 and l_4 are constants. The results are φ'_1 and φ'_2, i.e. the corrected φ values. The diameter value may be corrected for known static error contributions, such as illumination intensity and insufficient focussing. Then an internal calculation is carried out:

X_n = D_m · (x − x_0)/φ_n
Y_n = D_m · (y − y_0)/φ_n
Z_n = D_m · c/φ_n

where D_m = D/d = (marker diameter)/(image diameter), φ_n is either φ_1 or φ_2, and c is a camera constant, as described in the basic theory part. For each φ_n, where n = 1 or 2, an end point corresponding to the end point coordinates ABC and DEF is obtained.
An additional calculation for transforming the coordinates is carried out at 104 to provide a common coordinate system for the camera units, yielding data in the following form:

X(t) = K_x · t + m_x
Y(t) = K_y · t + m_y
Z(t) = K_z · t + m_z

where t is t(D/d') [t = 0 for the position of the camera], K is a constant and m is the lens position coordinate.
In this stage the direction coefficients for the line are transformed to a global system, having the position of the point on the line as the argument. The roundness control, i.e. the value of φ, gives the two end point coordinates φ'_1 and φ'_2 of the line.
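The transformed observation is thus a parametric line in the common coordinate system; a minimal sketch of evaluating it (names illustrative):

```python
def point_on_line(k, m, t):
    """Step 104 output: (X(t), Y(t), Z(t)) = K*t + m, with t = 0 at the
    camera position. k and m are (Kx, Ky, Kz) and (mx, my, mz)."""
    return tuple(k_i * t + m_i for k_i, m_i in zip(k, m))

print(point_on_line((0.0, 0.6, 0.8), (1.0, 2.0, 0.5), 3.0))
```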
The frame data received, 106, from a foregoing camera unit, i.e. if the camera is not first in the chain, will be processed in the current camera. Each camera generates a list substantially containing line data in a memory unit. This list is then processed and compared, 105, with the received data in the post-processing unit 24, by:
- comparing the incoming point data with the internally generated line data, if any, and updating the point data having the same coordinates;
- comparing the incoming cross data with internally generated line data, eliminating lines coinciding with known cross data or crossing the cross point, updating the cross data and, if possible, transferring the cross data to point data with respect to the line data;
- comparing the incoming line data with internally generated line data: if two crossing lines result in a point within a good tolerance, a cross data is generated in this respect; otherwise the line data is retained.
In the camera which, directly or indirectly through the I/O device, is connected to the host computer 13, an evaluation of solitary lines, their φ_n and D, is carried out to determine whether they can be transformed to point data form or not.
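A hedged sketch of the superseding rule behind step 105: for each marker, the representation with the higher information content replaces the lower one before the frame is forwarded. This sketch assumes marker correspondences are already known (the patent establishes them geometrically instead), and the record layout is invented for illustration:

```python
RANK = {"line": 0, "cross": 1, "point": 2}   # increasing information content

def supersede(own_records, incoming_records):
    """Keep, per marker, the record with the highest information content
    from this camera's own data and the foregoing camera's data.
    Records are (marker_id, kind, payload) tuples."""
    best = {}
    for marker_id, kind, payload in list(own_records) + list(incoming_records):
        current = best.get(marker_id)
        if current is None or RANK[kind] > RANK[current[0]]:
            best[marker_id] = (kind, payload)
    return [(mid, kind, payload) for mid, (kind, payload) in best.items()]

merged = supersede([("A", "line", ...)], [("A", "cross", ...), ("B", "line", ...)])
print(merged)   # marker A forwarded as cross data, B stays line data
```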
Each frame is provided with a frame header. If the incoming frame has a higher number, one frame was lost during the communication with the foregoing camera; the camera's own frame is then sent to the next camera as line data. If the incoming frame has a lower number, a frame was lost in the communication between the preprocessing unit 19 and the data processing unit 20, and the incoming frame is sent on to the next camera.
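The frame-number comparison reduces to three cases; a small sketch (return strings merely describe the actions named in the text):

```python
def reconcile_frame_numbers(own_no: int, incoming_no: int) -> str:
    """Frame-header logic: a number mismatch reveals which link lost a
    frame, and the surviving frame is forwarded unmerged."""
    if incoming_no > own_no:
        return "frame lost at foregoing camera: send own frame as line data"
    if incoming_no < own_no:
        return "frame lost between units 19 and 20: forward incoming frame"
    return "numbers match: merge own and incoming frames"

print(reconcile_frame_numbers(41, 42))
```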
In the data processing unit 20, IN and OUT buffers, 25 and 29, respectively, are arranged for buffering the incoming and outgoing data, i.e. messages from the foregoing and following units. The message handling means 30 is arranged to handle the identities and addresses in the messages.
The data processing unit 20 processes the frames in time order. If there is not enough time to handle the frames, they are queued in the buffers 22, 25 and 29 and processed as soon as possible. If there is not enough storage, preferably the oldest data is overwritten, or data fetching is paused.
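A hedged sketch of this buffering policy using a bounded queue; the capacity is an arbitrary illustration, and `deque(maxlen=...)` implements the overwrite-oldest behaviour:

```python
from collections import deque

# Frames queue up when processing falls behind; when storage runs out,
# the oldest entry is silently dropped (overwritten), as in the text.
frame_queue: deque = deque(maxlen=64)

def enqueue(frame) -> None:
    frame_queue.append(frame)       # drops the oldest frame when full

def next_frame():
    return frame_queue.popleft() if frame_queue else None
```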
In a preferred system, the instruction set controlling each camera unit may be used (with some modification) in the host computer to execute the same process as in the camera units.
In one embodiment all messages (frames) between the connected units are sent from point to point, i.e. each data processing unit 20 must first receive a message and then decide to process it or send it further.
Message transfer timing in a preferred embodiment is according to the following. The unit which sends a message waits for an acknowledgement message (ACK) within T_resend if the transmission was successful; otherwise the message is repeated a predetermined number of times before the communication is assumed failed and an error message is generated, which is sent to the last camera 17 in the chain and further to the host computer 13. If the receiver of a message detects that the message is incorrect, a negative ACK (NACK) is sent to the transmitter, i.e. a request for a resend. The timing schedule is according to the following:

T_resend = T_msg + C_delay + T_cmsg

where C_delay is the delay time in each camera, T_msg is the transfer time for a message, T_cmsg is the transfer time for an ACK/NACK, and T_resend is the timeout time when resending a message.
The maximum total time for a message is: C_no × C_delay + T_resend, where C_no is the number of units included (cameras, computer).
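A hedged sketch of this point-to-point transfer loop; `send` and `poll_ack` are stand-in callbacks (assumed to return "ACK", "NACK" or None), and the retry count is illustrative:

```python
import time

def send_with_retry(send, poll_ack, t_resend: float, max_tries: int = 3) -> bool:
    """Wait for an ACK within t_resend, resend on NACK or timeout, and
    give up after max_tries; an error message would then travel down the
    chain to the host computer."""
    for _ in range(max_tries):
        send()
        deadline = time.monotonic() + t_resend
        while time.monotonic() < deadline:
            reply = poll_ack()
            if reply == "ACK":
                return True
            if reply == "NACK":      # receiver saw a corrupt message
                break                # resend immediately
        # NACK or timeout: fall through and resend
    return False

# Example with stub callbacks: succeeds on the first try.
print(send_with_retry(lambda: None, lambda: "ACK", t_resend=0.05))
```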
When the system, including one or more cameras and preferably a host computer 13, is started, the following operations may be executed:
- initiation and self-control of the cameras,
- addressing,
- synchronisation, and
- calibration.
Initiation and suitable self-controls of the camera are executed by the corresponding data processing unit 20 after power-on. The instructions may be stored in a memory unit (BIOS) of the data processing unit 20. The self-control conducts inspections of the communication ports, the memory units and the parts essential for the function.
During the start, each camera unit checks the IN/OUT ports for connection. If an OUT connection is not present, the camera is last in the chain and obtains, for instance, address 1 (decimal). Camera 1 (C1) sends an address message to the next camera in the chain, which assumes address 2, and so on. C1 controls the communication with every other unit in the chain by scanning for correct addresses and the broadcast address, which is a message sent from C1 to all cameras in the chain; the last camera answers with an ACK.
The synchronisation of each camera is performed from the last unit (C_last), closest to the host computer. The purpose of the synchronisation is to arrange for the frames produced by each camera to be exposed and marked simultaneously. Usually, a start command from the host computer starts the synchronisation. The start command substantially sets all cameras in start mode. Preferably, the command for starting to produce frames is generated by C1.
Calibration refers to the adjustment between the "reality" and the sharpness and diaphragm settings presently in use. The calibration is intended to compensate, e.g., for lens distortion and varying light intensities within a measuring volume, to obtain information on a metric scale for the markers, and to oblige all cameras to use a common coordinate system.
When a frame is produced by the preprocessor 19, it is buffered for use by the data processing unit 20, for example by setting an interrupt signal. The data processing unit 20 fetches the frames through the data bus 31 and puts them in a wait queue. Both the processing and the preprocessing units can queue frames.
The example of fig. 7 schematically shows the detection of two markers 11a and 11b by three linked cameras 32, 33 and 34. The vision fields of the cameras are indicated by broken lines. Camera 32 "sees" markers 11a and 11b and produces line data, which is sent to camera 33. Camera 33 sees said markers too, and the line data is converted to cross data for the markers. Camera 34 sees only marker 11a; it produces point data for 11a and sends the cross data for marker 11b on to another camera or the host computer 13.
Obviously, a further advantage of the present invention is the simplicity of using a standard data form for communication. Moreover, by using the data form according to the invention, the translation to a spatial coordinate system using a metric scale is facilitated. A further feature of the invention is that it can be constructed modularly, which makes it possible to increase or decrease the number of included components in a simple way.
Although we have described and shown preferred embodiments, the invention is not limited to said embodiments; variations and modifications may be made within the scope of the attached claims.
For example, the camera units may be substituted by radar units or the like. Also, the data processing unit may be modified, where many of the components may be integrated. The camera units may be arranged to communicate by means of electromagnetic waves, infrared signals or the like.
It is also possible to use sequentially coded markers as disclosed in the Swedish patent application no. 9700067-3. Additionally, the system according to the present invention is suitable for virtual reality (VR) applications, preferably substituting the special suit needed in such applications by attaching one or several markers to a person/object taking part in the VR application. In this case the cameras attached to a host computer will be able to trace the marker(s) and generate the required data without loading the host computer with unnecessary data processing.
LIST OF DESIGNATION SIGNS

10 Camera unit
11 Marker
12 Object
13 Computer
14 Camera
15 Camera
16 Camera
17 Camera
18 Lens
19 Preprocessing unit
20 Data processing unit
21 I/O unit
22 Buffer
23 Computing unit
24 Post-processing unit
25 IN buffer
26 Marker image
27 Measurement plane
28 Sensor plane
29 OUT buffer
30 Message handling unit
31 Data bus
32 Camera
33 Camera
34 Camera

Claims

1. Data processing in a system including at least one data collecting device (10, 14, 15, 16, 17), comprising means for at least partly processing collected data, said device being connected to a computer unit (13), forming a chain of units, to process said at least partly processed data from said data collecting device, characterised in, that said data collecting device (10, 14, 15, 16, 17) is arranged to internally process the data collected by said device into different levels, having different degrees of information content, including a high degree of information content and a low degree of information content, said data collecting devices further including means to receive data from other data collecting devices (10, 14, 15, 16, 17) in said different degrees of information content, whereby a set of data generated in said data collecting device or received from a data collecting device having a high degree of information content, supersedes a data having lower degree of information content before forwarding the processed data in the chain.
2. Data processing according to claim 1 , characterised in, that said data collecting device consist of a camera unit (10, 14, 15, 16, 17), a radar unit, or the like.
3. A motion analyses system including at least one camera unit (10, 14, 15, 16, 17), a computer unit (13) and a marker (11), said camera unit (10, 14, 15, 16, 17) being connected to said computer unit (13) and/or other camera units building a chain of units, each camera unit including means to convert an image, substantially including an image (26) of said marker (1 1) to a data set, characterised in, that said camera unit (10, 14, 15, 16, 17) includes means for communication with other camera units (10, 14, 15, 16, 17) and/or said computer unit (13), said set of processed data includes different levels of processed data, including a set with substantially high degree of information content and a substantially low degree of information content, whereby a set of data generated in said camera unit or received from a camera unit having a high degree of information content supersedes a data having lower degree of information content before forwarding the processed data in said chain of units.
4. A motion analyses system according to claim 3, characterised in, that said data set having substantially a high degree of information content, essentially includes information on a three-dimensional position of the marker (1 1) in the space.
5. A motion analyses system according to claim 3, characterised in, that said data set having substantially a low degree of information content includes information on a two-dimensional point in the space, described as a line having direction coefficients K^, Ky, K,,, and diameter φ of the marker and a function value of φ.
6. A motion analyses system according to claim 3, characterised in, that said data set further includes information on a three-dimensional position for the marker in space, described by X, Y, Z coordinates and an index for substantially two crossing lines.
7. A motion analyses system according to anyone of claims 3-6, characterised in, that said camera unit comprises preprocessing units (19, 23, 24), buffer (22, 25, 29), optical element (18) and means to communicate with other camera units,.
8. A motion analysis system according to any one of claims 3-7, characterised in, that said camera unit is connected to the computer unit through an I/O-device (21).
9. A motion analysis system according to any one of claims 3-8, characterised in, that said marker (11) is sequentially coded.
10. A motion analysis system according to claims 4-6, characterised in, that substantially each camera is arranged to generate a list containing line data in a memory unit, said list being processed and compared (105) with data received from a foregoing camera by: comparing the incoming point data with the internally generated point data, if any, and updating the point data; comparing the incoming cross data with internally generated line data crossing the cross point, updating the cross data and transferring the cross data to point data in respect of the line data; and comparing the incoming line data with internally generated line data, whereby if two crossing lines result in a point within a good tolerance, the data is accepted by updating the point data and/or generating cross data, and otherwise the line data is retained (a sketch of this merge step follows the claims).
11. A motion analysis system according to claim 3, characterised in, that said computer unit (13) initiates the data collecting process.
12. A motion analysis system according to any one of claims 3-11, characterised in, that data is packaged as frames, each provided with a frame header carrying a frame number; that if an incoming frame has a higher number, it means that a frame was lost during the communication with a foregoing camera, whereby the camera's own frame is sent to the next camera as line data; and that if the incoming frame has a lower number, it means that a frame was lost in the communication between a preprocessing unit (19) and a data processing unit (20), and the incoming frame is sent on to the next camera (see the frame-handling sketch after the claims).
13. A motion analysis system according to claim 12, characterised in, that the data processing unit (20) processes the frames in time order.
14. A motion analysis system according to any one of claims 3-11, characterised in, that an instruction set controlling each camera unit is provided in the host computer (13) to execute the same process as in a camera unit.
15. A motion analysis system according to any one of claims 3-14, characterised in, that all data communication between the connected units is sent from point to point.
16. A motion analysis system according to any one of claims 3-15, characterised in, that message transfer substantially includes the following steps (see the retransmission sketch after the claims):
- each unit which sends a message waits for an acknowledgement message (ACK), which arrives within a substantially predetermined time period (Tresend) if the transmission was successful; otherwise the message is repeated a predetermined number of times before the communication is assumed to have failed and an error message is generated, which is sent to the last camera (17) in the chain and further to the host computer (13);
- if the receiver of a message detects that the message is incorrect, a negative acknowledgement (NACK), i.e. a request for a resend, is sent to the transmitter.
17. A motion analysis system according to claim 16, characterised in, that said predetermined time period (Tresend) is: Tresend = Tmsg + Cdelay + Tcmsg, where Cdelay is the delay time in each camera, Tmsg is the transfer time for a message, Tcmsg is the transfer time for an ACK/NACK, and Tresend is the timeout time when resending a message.
18. A virtual reality application including a motion analysis system according to any one of claims 3 to 17.
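
The merge step of claims 1, 3 and 10 can be illustrated with a short sketch. The following Python fragment is a hypothetical illustration, not code from the patent: the names (Line, intersect, merge, TOL), the per-camera line origin, and the tolerance value are all assumptions. It shows how two line data sets crossing within a good tolerance are promoted to point data, which then supersedes the lower-information line data before forwarding down the chain.

    from dataclasses import dataclass

    TOL = 0.005  # assumed "good tolerance" for two lines meeting in a point

    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def dot(a, b):
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    @dataclass
    class Line:                # low information content (claim 5)
        origin: tuple          # assumed: the camera position the line starts from
        k: tuple               # direction coefficients Kx, Ky, Kz

    def intersect(l1, l2):
        """Mid-point of the closest approach of two lines, or None if
        they miss each other by more than TOL."""
        n = cross(l1.k, l2.k)
        n2 = dot(n, n)
        if n2 == 0.0:                        # parallel lines never cross
            return None
        w = sub(l2.origin, l1.origin)
        if abs(dot(w, n)) / n2 ** 0.5 > TOL:  # shortest distance between lines
            return None
        t = dot(cross(w, l2.k), n) / n2
        s = dot(cross(w, l1.k), n) / n2
        p1 = tuple(o + t * d for o, d in zip(l1.origin, l1.k))
        p2 = tuple(o + s * d for o, d in zip(l2.origin, l2.k))
        return tuple((a + b) / 2.0 for a, b in zip(p1, p2))

    def merge(incoming_lines, internal_lines, points):
        """Promote crossing lines to point data, retain the rest (claim 10)."""
        for li in incoming_lines:
            hits = [p for p in (intersect(li, lj) for lj in internal_lines) if p]
            if hits:
                points.extend(hits)          # high information content supersedes
            else:
                internal_lines.append(li)    # keep low-information line data
        return internal_lines, points

The mid-point of the closest approach of two nearly crossing lines stands in here for the three-dimensional marker position of claim 4.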
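The frame-number check of claim 12 can likewise be sketched. Frame, handle_incoming and send_next are hypothetical names, and the merge of two matching frames is left abstract:

    from dataclasses import dataclass

    @dataclass
    class Frame:
        number: int        # sequence number from the frame header
        payload: object    # line/cross/point data sets

    def handle_incoming(incoming, own, send_next):
        if incoming.number > own.number:
            # A frame was lost on the link from the foregoing camera:
            # the camera's own frame is forwarded as line data (claim 12).
            send_next(own)
        elif incoming.number < own.number:
            # A frame was lost between the preprocessing unit (19) and the
            # data processing unit (20); the incoming frame is passed through.
            send_next(incoming)
        else:
            # Numbers match: merge incoming and own data before forwarding,
            # e.g. with the merge() sketch above.
            send_next(Frame(own.number, (incoming.payload, own.payload)))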
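Claims 16 and 17 describe a point-to-point stop-and-wait retransmission scheme. Below is a minimal sketch, assuming a transport object with send/receive/send_error methods and a retry count of three; all of these are assumptions for illustration, not part of the patent:

    MAX_RETRIES = 3   # the "predetermined number of times" -- an assumed value

    def t_resend(t_msg, c_delay, t_cmsg):
        # Claim 17: Tresend = Tmsg + Cdelay + Tcmsg
        return t_msg + c_delay + t_cmsg

    def send_reliably(transport, message, t_msg, c_delay, t_cmsg):
        timeout = t_resend(t_msg, c_delay, t_cmsg)
        for _ in range(MAX_RETRIES):
            transport.send(message)
            reply = transport.receive(timeout)   # ACK, NACK or None on timeout
            if reply == "ACK":
                return True
            # On NACK (receiver saw a corrupt message) or on timeout, resend.
        # Communication is assumed failed: an error message goes to the last
        # camera (17) in the chain and further to the host computer (13).
        transport.send_error("transmission failed")
        return False
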
PCT/SE1998/000379 1997-03-03 1998-03-03 Data processing WO1998039739A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU66435/98A AU6643598A (en) 1997-03-03 1998-03-03 Data processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE9700760-3 1997-03-03
SE9700760A SE511461C2 (en) 1997-03-03 1997-03-03 Data Processing

Publications (1)

Publication Number Publication Date
WO1998039739A1 true WO1998039739A1 (en) 1998-09-11

Family

ID=20406005

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE1998/000379 WO1998039739A1 (en) 1997-03-03 1998-03-03 Data processing

Country Status (3)

Country Link
AU (1) AU6643598A (en)
SE (1) SE511461C2 (en)
WO (1) WO1998039739A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1990015509A1 (en) * 1989-06-07 1990-12-13 Loredan Biomedical, Inc. Video analysis of motion
US5475422A (en) * 1993-06-21 1995-12-12 Nippon Telegraph And Telephone Corporation Method and apparatus for reconstructing three-dimensional objects
EP0735512A2 (en) * 1995-03-29 1996-10-02 SANYO ELECTRIC Co., Ltd. Methods for creating an image for a three-dimensional display, for calculating depth information, and for image processing using the depth information
WO1997013218A1 (en) * 1995-10-04 1997-04-10 Visual Interface, Inc. Method of producing a three-dimensional image from two-dimensional images

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6943846B2 (en) 2000-12-20 2005-09-13 Robert Bosch Gmbh Multi-picture in picture system
US7222198B2 (en) * 2003-05-13 2007-05-22 Hewlett-Packard Development Company, L.P. System for transferring data between devices by making brief connection with external contacts that extend outwardly from device exterior

Also Published As

Publication number Publication date
AU6643598A (en) 1998-09-22
SE9700760L (en) 1998-11-04
SE9700760D0 (en) 1997-03-03
SE511461C2 (en) 1999-10-04

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM GW HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 1998538442

Format of ref document f/p: F

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA
