DATA PROCESSING
TECHNICAL FIELD OF THE INVENTION The present invention relates to a method and arrangement for data processing in a system including at least one data collecting device, comprising means for at least partly processing collected data, said device being connected to a computer unit, forming a chain of units, to process said at least partly processed data from said data collecting device.
The invention further refers to a motion analysis system including at least one camera unit, a computer unit and a marker, said camera unit being connected to said computer unit and/or other camera units building a chain of units, each camera unit including means to convert an image, substantially including an image of said marker, to a data set.
Yet another aspect of the present invention is a virtual reality application including a motion analysis system.
BACKGROUND AND RELATED ART
Motion analysis is now a well-known method using camera units and computer aid to analyse, e.g., the biomechanics of humans and animals, the motions of a robot arm, etc.
In a simple system markers are attached to the object to be analysed. In the past the object provided with the markers was first filmed and then manually analysed and digitalised to determine the correct position of the markers. This was a time-consuming procedure.
Presently, cameras equipped with so-called CCD sensors are used. A CCD sensor, which is a light sensor, is generally arranged in communication with the necessary optical elements. A CCD sensor, consisting of lines of charge-coupled sensors arranged as a matrix, i.e. arranged in an X and Y coordinate system, for one or several colours, converts the light (from the optical element) projected onto it by electronically scanning each line of X sensors in the Y direction and producing a television (video) signal. Then, the signals may be analysed in different ways to detect the position of the markers attached to the object.
Presently used systems use one or several cameras which send the video signal to a computer unit to analyse, or just partly process the video signal and then send it to a computer device for analysis. The problem is that when many camera units are used, the amount of data sent to the computer unit increases substantially, which burdens the resources of the computer. These systems also tend to slow down as the amount of data increases.
SUMMARY OF THE INVENTION
It is an object of the present invention to overcome the above problem and present a novel system for faster data processing and calculation in general, and a motion analysis system in particular, for determination of the position of a marker in real time, using the resources of each information collecting unit, i.e. a camera.
The main object of the invention is to distribute the data processing, substantially between the data collecting devices, and to provide, for instance, a host computer in the data processing chain with substantially prepared data. The performance increased in this way will not overload the host computer, and the amount of data transmitted to and processed by the host computer will decrease dramatically, which increases the possibility of real-time processing by the computer.
Another object of the present invention is to increase the accuracy and reliability of the data processed.
Yet another object of the present invention is to provide a new data type for fast data processing.
Said objects are obtained by said data collecting device being arranged to internally process the data collected by said device into different levels, having different degrees of information content, including a high degree of information content and a low degree of information content, said data collecting devices further including means to receive data from other data collecting devices in said different degrees of information content, whereby a set of data generated in said data collecting device or received from a data collecting device having a high degree of information content supersedes data having a lower degree of information content before the processed data is forwarded in the chain.
In a preferred embodiment of a motion analysis system according to the invention, a camera unit includes means for communication with other camera units and/or said computer unit, and the set of processed data includes different levels of processed data, including a set with a substantially high degree of information content and one with a substantially low degree of information content, whereby a set of data generated in said camera unit or received from a camera unit having a high degree of information content supersedes data having a lower degree of information content before the processed data is forwarded in said chain of units.
In an embodiment, said data set having a substantially high degree of information content essentially includes information on a three-dimensional position of the marker in space.
In another embodiment of the invention, said data set having a substantially low degree of information content includes information on a two-dimensional point in the space, described as a line having direction coefficients Kx, Ky, Kz, the diameter φ of the marker and a function value of φ.
In yet another embodiment said data set further includes information on a three-dimensional position for the marker in space, described by X, Y, Z coordinates and an index for substantially two crossing lines.
In a preferred embodiment of the present invention said camera unit comprises preprocessing units, a buffer, an optical element and means to communicate with other camera units.
In an embodiment said marker is sequentially coded.
In an embodiment, substantially each camera is arranged to generate a list containing line data in a memory unit, said list being processed and compared with data received from a foregoing camera, by comparing the incoming point data with the internally generated point data, if any, and updating the point data; comparing the incoming cross data with internally generated line data crossing the cross point, updating the cross data and transferring the cross data to point data in respect of the line data; and comparing the incoming line data in respect of internally generated line data, whereby, if two crossing lines result in a point having a good tolerance, the data is accepted by updating the point data and/or generating cross data, and otherwise the line data is retained.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following the invention will be described with reference to enclosed drawings, in which:
Fig. 1 is a schematic diagram of a simple motion analysis system.
Fig. 2 schematically illustrates an image of a marker and digital slices of the same.
Fig. 3 is a schematic diagram illustrating a system of data collecting devices using the method according to the present invention.
Fig. 4 is a schematic view of the coordinate system in a preferred embodiment.
Fig. 5 is a schematic data flow diagram in a preferred camera used in an arrangement according to the present invention.
Fig. 6 is a flow chart, schematically illustrating the method according to the present invention.
Fig. 7 schematically shows an embodiment using the method according to the present invention.
BASIC THEORY
Basically, an analogue system uses an ordinary video signal from a camera as an input. By means of said signal, the X and Y coordinates of the marker, which is separated from the surroundings by its intensity of light, are calculated. The aim is to measure the movement of the marker as exactly as possible, i.e. to minimise the inaccuracy which results from the video signal consisting of a finite number of dots.
The video signal consists of a number of lines, which are scanned in chronological order. A marker generates an image which extends over one or several lines. By means of a comparator, it is possible to determine the start and the end of a marker section on a line. The marker image on a line is called a segment. The time is measured partly from the beginning of the line to the beginning of the segment (Xs) and partly from the beginning of the line to the end of the segment (Xe). The mean value of these two periods is a measure of the position of the segment in the space in the horizontal direction (if the lines are horizontal), while the serial number of the line (S) is a measure of the position of the segment in the vertical direction. The length l of a segment is then Xe − Xs.
The centre of the marker may be obtained as the calculated common central point of all segments being part of the marker (image), where the X and Y coordinates of the marker, Xm and Ym respectively, are obtained through formulas 1 and 2:
Xm = Σ [ (Xe − Xs) · (Xe + Xs) / 2 ] / Σ (Xe − Xs)    (1)

Ym = Σ [ (Xe − Xs) · S ] / Σ (Xe − Xs)    (2)
The ∑ sign indicates that the summation is carried out over all segments being a member of the marker image.
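By way of illustration, the centre computation of formulas (1) and (2) can be sketched in Python as below; the segment representation (S, Xs, Xe) is an assumption made for the example, not prescribed by the description.

    def marker_centre(segments):
        # Each segment is (S, Xs, Xe): line serial number, segment start
        # time and segment end time. Formulas (1) and (2): length-weighted
        # mean of the segment centres and of the line numbers.
        total_length = sum(xe - xs for _, xs, xe in segments)
        xm = sum((xe - xs) * (xe + xs) / 2 for _, xs, xe in segments) / total_length
        ym = sum((xe - xs) * s for s, xs, xe in segments) / total_length
        return xm, ym

    # Example: a marker image covering three lines
    print(marker_centre([(10, 4.2, 7.8), (11, 3.9, 8.1), (12, 4.5, 7.5)]))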
The above is applicable for an analogue signal. Similar calculations may be carried out if image dots from a digital detector are transferred in an order other than linear. There the centre points for all image elements that are members of the same marker are calculated. First, the image elements can be translated to lines and then the calculation may be carried out as in the analogue case.
The time points Xs and Xe can be measured with an electronic counting device connected to an oscillator. The counter starts at the beginning of the line and is read when the start and end of a segment are reached. One problem is that the oscillator frequency, for technical and economical reasons, is limited. In the digital case the problem may be that the image elements cannot be as small as required.
To overcome this problem in the analogue case a comparator is provided, which starts an integrator, which generates a linear potential slope from a potential Va to Vb at time Xs. The slope is then sampled and measured when the counter changes between two values. The point when the slope passes a predetermined alteration value is used to define the time Xss. The difference between Xs and Xss is a constant and is determined by the integrator and the delays in the comparators. The time Xss is easily calculated from the measured points on the potential slope at the time for the change of the counter, provided that at least two points on the slope are measured. For example, if two voltage values are measured, V1 at t1 and V2 at t2, and V0 is between V1 and V2, Xss is interpolated by formula 3:
Xss = t1 + (t2 − t1) · (V0 − V1) / (V2 − V1)    (3)
The time Xe is measured in the same way. In this embodiment a linear slope is used because the calculations become simpler; however, other curves may be used.
The image elements may be arranged in lines by means of a low-pass filter to provide a somewhat continuous signal, which can be processed as the analogue signal. However, in the preferred embodiment each image element is measured individually and, from the measured values, a value is interpolated, determining when a threshold T is passed.
The digitalised signal is passed over to a comparison unit, which interpolates individual sample values about a predetermined threshold value T, also called the video level. As described above, the object is to determine when the amplitude of the signal passes the value T. Each passage presents a start and stop coordinate of each segment with a high resolution, which can be about 30 × the number of pixels on a row. In a computation unit the following calculation is executed:
Xhigh resolution = Pixel No. + (T − V1) / (V2 − V1)    (4)
where V1 and V2 are the signal levels of the preceding and succeeding pixels, respectively, received from the comparator unit.
Here, formula 4 may be regarded as a special case of formula 3.
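A minimal sketch of this interpolation, covering formula (4) (and, with times substituted for pixel numbers, formula (3)); the function name is illustrative:

    def subpixel_crossing(pixel_no, v1, v2, t):
        # Formula (4): the signal passes the threshold T between the
        # preceding pixel (level V1) and the succeeding pixel (level V2).
        return pixel_no + (t - v1) / (v2 - v1)

    # V1 = 80 at pixel 153, V2 = 200 at pixel 154, threshold T = 128
    print(subpixel_crossing(153, 80, 200, 128))  # 153.4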
The pixel number may be obtained from a counter (not shown). Depending on the components used, the levels V1 and V2 may be measured with a resolution of 10 bits, the pixel number (MSB) with 9 bits and (T − V1)/(V2 − V1) with 7 bits. Then the centre point x' of the marker is computed in a computation unit by means of previous values stored in a memory unit, using formula (5):
x' = Σ (lk^n · xk) / Σ lk^n    (5)

y' = Σ (lk^n · Sk) / Σ lk^n    (6)

where lk is the length of the segment k (i.e., Xek − Xsk), Sk is the serial number of the image element, and xk is the centre of the segment k.
In this case, the digital case, the formulas (1) and (2) are substituted by formulas (5) and (6), respectively. However, formulas (1) and (2) alone do not contribute to obtaining a value as exact as desired. To obtain a more accurate and stable x' with high resolution, the n-th power of lk is calculated. In the preferred embodiment the square of lk, i.e. n = 2, is calculated.
The lk's may then be stored in the memory unit 23 for further calculations. For example, the area A of the image may be calculated using the formula for area: A = Σ lk. For a circular marker, it is possible to calculate the radius using A = r²·π, which yields formula (7):

r = √(A / π)    (7)

which may be computed in a computation unit.
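The digital-case computation of formulas (5) to (7) may be sketched as below; the segment representation and the names are assumptions made for the example.

    import math

    def weighted_centre_and_radius(segments, n=2):
        # segments: list of (S, Xs, Xe); the weights are the segment
        # lengths raised to the power n (n = 2 in the preferred embodiment).
        weights = [(xe - xs) ** n for _, xs, xe in segments]
        total = sum(weights)
        x_centre = sum(w * (xs + xe) / 2
                       for w, (_, xs, xe) in zip(weights, segments)) / total
        y_centre = sum(w * s for w, (s, _, _) in zip(weights, segments)) / total
        area = sum(xe - xs for _, xs, xe in segments)  # A = sum of lengths
        radius = math.sqrt(area / math.pi)             # formula (7)
        return x_centre, y_centre, radius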
Referring now to Fig. 4, which schematically illustrates a principle for positioning the object 11, the X, Y and Z coordinates may be calculated as described below.
The following data are known:
x', y'   the centre of the marker image 26 on the detector plane 28, i.e. on the CCD, computed using formulas 5 and 6;
c        the distance from the detector plane 28 to the lens 18;
X0, Y0   the centre of the detector plane, corresponding to the centre of the lens, which are camera constants;
Dm       the diameter of the marker 11.
In the calculation unit 22 the following parameters are calculated:
xp = X0 − x', i.e. the X-coordinate of the marker image on the detector plane relative to the centre of the detector plane;
yp = Y0 − y', i.e. the Y-coordinate of the marker image on the detector plane relative to the centre of the detector plane; and
d = r × 2, r being the radius as above.
As similarity exists between the triangle B and the triangle A in Fig. 4, the following proportional relationship also exists: Xm/xp = Ym/yp = Zm/c = Dm/d, which enables the following calculations in the unit 22:
Xm = (Dm / d) · xp    (8)

Ym = (Dm / d) · yp    (9)

Zm = (Dm / d) · c    (10)
where Xm, Ym and Zm are the three-dimensional position (vector components) of the marker, Zm especially being the distance between the camera (lens) and the object.
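A sketch of formulas (8) to (10) under the definitions above (parameter names are illustrative):

    def marker_position(x_prime, y_prime, x0, y0, c, d, dm):
        # xp, yp: image coordinates relative to the detector-plane centre.
        xp = x0 - x_prime
        yp = y0 - y_prime
        scale = dm / d            # Dm/d from the similar triangles
        return scale * xp, scale * yp, scale * c   # (Xm, Ym, Zm)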
DEFINITIONS
To simplify the understanding of the forthcoming description, the following terms are used, having specific definitions.
Broadcast Addressing mode used to address all units simultaneously.
Frame Information about the position of an object, in two or three dimensions, and other parameters collected substantially at one collection occasion.
Point data A three-dimensional position for a marking device in space, described by X, Y, Z position and a tolerance Q. Point data represents the cross point of three lines in space.
Cross data A three-dimensional position for a marking device in space, described by X, Y, Z coordinates and an index for substantially two crossing lines. This data cannot be converted to a point data. A further crossing line contributes to a reliable point determination.
Line data A two-dimensional point in the space, described as a line having direction coefficients Kx, Ky, Kz, the diameter φ of the marker and function values φ1 and φ2 (maximum and minimum for φ), and the identity number of the detecting device (camera) which has detected the marker.
It may also be described by ABC and DEF, where ABC is one end point coordinate on the line, DEF is another end point coordinate on the line, where C≥F
Frame Header Frame number and a checksum.
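One possible encoding of these data types, sketched as Python dataclasses; the field names are illustrative, not taken from the description:

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class PointData:            # cross point of three lines in space
        x: float
        y: float
        z: float
        q: float                # tolerance Q

    @dataclass
    class CrossData:            # cross point of substantially two lines
        x: float
        y: float
        z: float
        line_index: Tuple[int, int]

    @dataclass
    class LineData:             # a sight line from one camera
        kx: float               # direction coefficients Kx, Ky, Kz
        ky: float
        kz: float
        phi1: float             # maximum and minimum for the diameter
        phi2: float
        camera_id: int          # identity number of the detecting camera

    @dataclass
    class FrameHeader:
        frame_number: int
        checksum: int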
DETAILED DESCRIPTION OF AN EMBODIMENT
A simple schematic system is illustrated in fig. 1. The system comprises a camera unit 10 directed to an object, in this case a human body 12, to be analysed and one or several markers 11 attached to the object 12.
The camera 10 may be connected to a host computer 13, to process further and present the data received from the camera 10.
Fig. 3 shows an application using four cameras 14, 15, 16 and 17. The encircled part in fig. 3 schematically illustrates parts enclosed in the camera: an optical element, such as a lens 18 or other focussing means, a light sensor such as a CCD sensor with a preprocessing unit 19, and a processing unit 20 for processing the video signals.
A camera unit is preferably arranged to produce frames, i.e. a data representation of the image including markers.
Each camera is arranged with an IN and OUT port for interconnecting cameras and other equipment, such as the computer 13. Consequently, the first camera 14 through its OUT port is connected to the IN port of the second camera 15, which in turn is connected to the third camera 16 and so on. The last camera 17 in the chain can, through its OUT port or another data port of known type, be connected directly or via an I/O device 21 to the host computer.
Briefly, all cameras produce frames, which are pictures of the markers detected. As each camera "sees" a marker from a special angle, the produced frames are different. The processing unit 20 of each camera processes each frame for determining the position of each marker (if several). Each camera processes and produces a set of data, substantially including point data, line data and cross data as defined above. The processing unit of each camera is connected to the processing unit of the foregoing camera (or to the computer or the I/O device, if it is the last camera in the chain). Accordingly, each processing unit processes its own frame and the frame received from the foregoing camera, if special criteria are fulfilled, which will be discussed later. Eventually, the computer receives the pre-processed data and avoids having to process the entire received data set further.
Fig. 5 is a schematic block diagram of an embodiment of the data processing unit 20. The data processing unit 20 comprises a first IN buffer 22, a data computing unit 23, a postprocessing unit 24, a second IN buffer 25, an OUT buffer 29 and a message handling unit 30. The buffer 22 is connected to a preprocessing unit 19. The preprocessing unit 19 may be a video processing unit.
The preprocessing unit 19, IN buffer 22, data computing unit 23, post-processing unit 24, the second IN buffer 25 and OUT buffer 29 are interconnected by means of a data bus, e.g. of ISA type.
A signal is provided from the preprocessing unit 19 to the data processing unit 20, through the buffer 22. In the preprocessing unit 19, the video signal is processed and a two-dimensional coordinate is determined. A frame including two-dimensional data (2D), comprising x, y, φ1, φ2 and i, is generated and sent to the data processing unit 20. The parameters x and y are the 2D coordinates (camera scale) according to formulas 5 and 6, φ1 and φ2 are the maximum and minimum radius of the marker according to formula 7, and i is the intensity parameter for the image 26. The φ1 and φ2 values are obtained through a roundness control as described in the Swedish patent application no. 9700065-7, and generally depend on the light intensity received from the marker, which may depend on the flash intensity and light reflection from said marker. The processing unit further comprises the computing unit 23, which processes the data according to the flow chart of fig. 6.
First, the input data 100 is processed, 101, for lens correction by executing the following compound equation:

x = (x' − x1) · (K1·a² + K2·a⁴)

where x1 and y1 represent a foot point for lens correction, and K1 and K2 are coefficients provided from a calibration process 102.
Next, a diameter correction is carried out, 103, using parameters xm, ym, l2, l3 and l4, where the effect of the light intensity distribution in space, i.e. the light flash intensity if one is used, is taken into consideration. Here xm, ym are basic points for diameter correction and l2, l3 and l4 are constants. The results are φ'1 and φ'2, i.e. the corrected φ1 and φ2. The diameter value may be corrected for known static error contributions, such as illumination intensity and insufficient focussing.
Then an internal calculation is carried out:

Xn = Dm · (x − x0) / φn
Yn = Dm · (y − y0) / φn
Zn = Dm · c / φn

where Dm = D/d = (marker diameter)/(image diameter), φn is either φ1 or φ2 and c is a camera constant, as described in the basic theory part. For each φn, where n = 1 or 2, an end point corresponding to the end point coordinates ABC and DEF is obtained.
An additional calculation for transforming the coordinates is carried out at 104 to provide a common coordinate system for the camera units, yielding data in the following form:

X(t) = Kx · t + mx
Y(t) = Ky · t + my
Z(t) = Kz · t + mz

where t is t(D/d') [t = 0 for the position of the camera], K is a constant and m is the coordinate of the lens position.
In this stage the direction coefficient for the line is transformed to a global system, having the position of the point on the line as the argument. The roundness control, i.e. the value of φ, gives the two end point coordinates φ'1 and φ'2 of the line.
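A minimal sketch of the internal end-point calculation preceding this transform; φ1 and φ2 give the two end points ABC and DEF of the line data (names are illustrative):

    def sight_line_end_points(x, y, x0, y0, c, phi1, phi2, marker_diameter):
        # For each phi_n the scale factor Dm = D/d is formed and an end
        # point (Xn, Yn, Zn) on the sight line is computed.
        points = []
        for phi in (phi1, phi2):
            dm = marker_diameter / phi
            points.append((dm * (x - x0), dm * (y - y0), dm * c))
        return points   # [(A, B, C), (D, E, F)]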
The frame data received, 106, from a foregoing camera unit, i.e. if the camera is not first in the chain, will be processed in the current camera. Each camera generates a list substantially containing line data in a memory unit. Then the list is processed and compared, 105, with the received data in the post-processing unit 24, by the following steps (a minimal sketch of this merging is given after the list):
- comparing the incoming point data with the internally generated line data, if any, and updating the point data having the same coordinates;
- comparing the incoming cross data with internally generated line data, eliminating lines coinciding with known cross data or crossing the cross point, updating the cross data and, if possible, transferring the cross data to point data in respect of the line data;
- comparing the incoming line data in respect of internally generated line data; if two crossing lines result in a point having a good tolerance, generating cross data in this respect, otherwise retaining the line data.
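The merging may be sketched as follows, assuming frames carrying the point, cross and line data defined above; the helpers line_hits_cross() and intersect() are hypothetical:

    def merge_frames(incoming, internal_lines, tol):
        # Data with a higher degree of information content supersedes data
        # with a lower degree before the frame is forwarded in the chain.
        points = list(incoming.points)
        crosses = list(incoming.crosses)
        lines = list(incoming.lines)
        for own in internal_lines:
            cross = next((c for c in crosses
                          if line_hits_cross(own, c, tol)), None)
            if cross is not None:
                crosses.remove(cross)
                points.append(cross.to_point())  # a further crossing line
                continue                         # gives a reliable point
            mate = next((l for l in lines
                         if intersect(own, l, tol) is not None), None)
            if mate is not None:
                lines.remove(mate)
                crosses.append(intersect(own, mate, tol))
            else:
                lines.append(own)                # retain the line data
        return points, crosses, lines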
In the camera which, directly or indirectly through the I/O device, is connected to the host computer 13, an evaluation of solitary lines, their φn and D, is carried out to determine whether they can be transformed to point data form or not.
Each frame is provided with a frame header. If the incoming frame has a higher number, it means that one frame was lost during the communication with the foregoing camera. Then the camera's own frame is sent to the next camera as line data. If the incoming frame has a lower number, it means that a frame was lost in the communication between the preprocessing unit 19 and the data processing unit 20, and the frame is sent on to the next camera.
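The frame-number check may be sketched as follows (the function name is illustrative):

    def handle_frame_numbers(incoming_no, own_no):
        if incoming_no > own_no:
            return "frame lost between the foregoing camera and this unit"
        if incoming_no < own_no:
            return "frame lost between preprocessing unit 19 and data processing unit 20"
        return "frame numbers match; the frames are merged"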
In the data processing unit 20, IN and OUT buffers, 25 and 29, respectively, are arranged for buffering the incoming and outgoing data, i.e. messages from the forgoing and forthcoming units. The message handling means 30 is arranged to handle the identity and addresses in the messages.
The data processing unit 20 processes the frames in time order. If there is not enough time to handle the frames, they are queued in the buffers 22, 25 and 29 and processed as soon as possible. If storage space is insufficient, preferably the oldest data is overwritten or data fetching is paused.
In a preferred system, the instruction set controlling each camera unit may be used (with some modification) in the host computer to execute the same process as in the camera units.
In one embodiment all messages (frames) between the connected units are sent from point
to point, i.e. each data processing unit 20 must first receive a message and then decide to process it or send it further.
Message transfer timing in a preferred embodiment is according to the following. The unit which sends a message waits for an acknowledgement message (ACK) within Tresend, indicating that the transmission was successful; otherwise the message is repeated a predetermined number of times before the communication is assumed failed and an error message is generated, which is sent to the last camera 17 in the chain and further to the host computer 13. If the receiver of a message detects that the message is incorrect, a negative ACK (NACK) is sent to the transmitter, i.e. a request for a resend. The timing schedule is according to the following:

Tresend > Tmsg + Cdelay + Tcmsg

where Cdelay is the delay time in each camera, Tmsg is the transfer time for a message, Tcmsg is the transfer time for an ACK/NACK, and Tresend is the timeout time when resending a message.

The maximum total time for a message is:

Cno × Cdelay + Tresend

where Cno is the number of units included (camera, computer).
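A hedged sketch of this resend logic; send() and recv_ack() are hypothetical transport hooks, not part of the described arrangement:

    import time

    def send_with_retry(send, recv_ack, t_resend, max_retries):
        # Wait for an ACK within Tresend; on NACK or timeout, resend up
        # to a predetermined number of times, then report failure so an
        # error message can be forwarded towards the host computer.
        for _ in range(max_retries):
            send()
            deadline = time.monotonic() + t_resend
            while time.monotonic() < deadline:
                answer = recv_ack()          # None while nothing received
                if answer == "ACK":
                    return True
                if answer == "NACK":         # receiver requests a resend
                    break
        return False                          # communication assumed failed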
When the system, including one or more cameras and preferably a host computer 13, is started, the following operations may be executed:
- initiation and self control of the cameras,
- addressing,
- synchronisation, and
- calibration.
Initiation and suitable self controls of the camera are executed by the corresponding data processing unit 20 after power on. The instructions may be stored in a memory unit (BIOS)
of the data processing unit 20. The self control conducts inspections on communication ports, memory units and parts essential for the function.
During the start, each camera unit checks the IN/OUT ports for connection. If an OUT connection is not present, the camera is last in the chain and obtains, for instance, address 1 (decimal). Camera 1 (C1) sends an address message to the next camera in the chain, which assumes address 2, and so on. C1 controls the communication with every other unit in the chain by scanning for correct addresses and the broadcast address, which is a message sent from C1 to all cameras in the chain, and the last camera answers with an ACK.
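A sketch of this start-up addressing, under the assumption that each camera object knows whether its OUT port is connected and which unit precedes it (attribute names are illustrative):

    def assign_addresses(cameras):
        # The camera with no OUT connection is last in the chain and
        # takes address 1 (C1); addresses then propagate up the chain.
        camera = next(c for c in cameras if not c.out_connected)
        address = 1
        while camera is not None:
            camera.address = address
            address += 1
            camera = camera.foregoing   # towards the first camera
        return address - 1              # number of addressed units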
The synchronisation of each camera is performed from the last unit (Clast), closest to the host computer. The purpose of synchronisation is to arrange for the frames produced by each camera to be exposed and marked simultaneously. Usually, a start command from the host computer starts the synchronisation. The start command substantially sets all cameras in start mode. Preferably, the command for starting to produce the frames is generated by C1.
Calibration refers to the adjustment between the "reality" and the present sharpness and diaphragm settings used for the moment. The calibration is intended to compensate, e.g., for lens distortion and varying light intensities within a measuring volume, to obtain information on a metrical scale for the markers, and to oblige all cameras to use a common coordinate system.
When a frame is produced by the preprocessor 19, it is buffered for use by the data processing unit 20, for example by setting an interrupt signal. The data processing unit 20 fetches the frames through the data bus 31 and puts them in a wait queue. Both processing and preprocessing units can queue the frames.
The example of fig. 7 schematically shows the detection of two markers 11a and 11b by three linked cameras 32, 33 and 34. The vision fields of the cameras are indicated by broken lines. Camera 32 "sees" markers 11a and 11b and produces line data, which is sent to camera 33. Camera 33 sees said markers too, and the line data is transferred to cross data for the markers. Camera 34 sees only marker 11a, produces point data for 11a and sends the cross data for marker 11b to another camera or the host computer 13.
Obviously, a further advantage of the present invention is the simplicity of using a standard data form for communication. Moreover, by using the data form according to the invention, the translation to a spatial coordinate system using a metric scale is facilitated. One further feature of the invention is that it can be constructed modularly, which makes it possible to increase or decrease the number of included components in a simple way.
Although we have described and shown preferred embodiments, the invention is not limited to said embodiments; variations and modifications may be made within the scope of the attached claims.
For example, the camera units may be substituted by radar units or the like. Also, the data processing unit may be modified, where many of the components may be integrated. The camera units may be arranged to communicate by means of electromagnetic waves, infrared signals or the like.
It is also possible to use sequentially coded markers as disclosed in the Swedish patent application no. 9700067-3. Additionally, the system according to the present invention is suitable for virtual reality (VR) applications, preferably substituting the special suit needed in such an application by attaching one or several markers to a person/object being part of the VR application. In this case the cameras attached to a host computer will be able to trace the marker(s) and generate the data required without loading the host computer with unnecessary data processing.
LIST OF DESIGNATION SIGNS
10 Camera unit
11 Marker
12 Object
13 Computer
14 Camera
15 Camera
16 Camera
17 Camera
18 Lens
19 Preprocessing unit
20 Data processing unit
21 I/O unit
22 Buffer
23 Computing unit
24 Post-processing unit
25 IN buffer
26 Marker image
27 Measurement plane
28 Sensor plane
29 OUT buffer
30 Message handling unit
31 Data bus
32 Camera
33 Camera
34 Camera