WO1995003526A1 - An inspection system for a conduit - Google Patents
An inspection system for a conduit
- Publication number
- WO1995003526A1 WO1995003526A1 PCT/AU1994/000409 AU9400409W WO9503526A1 WO 1995003526 A1 WO1995003526 A1 WO 1995003526A1 AU 9400409 W AU9400409 W AU 9400409W WO 9503526 A1 WO9503526 A1 WO 9503526A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- inspection system
- conduit
- sonar
- pipe
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
- H04N7/185—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F16—ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
- F16L—PIPES; JOINTS OR FITTINGS FOR PIPES; SUPPORTS FOR PIPES, CABLES OR PROTECTIVE TUBING; MEANS FOR THERMAL INSULATION IN GENERAL
- F16L55/00—Devices or appurtenances for use in, or in connection with, pipes or pipe systems
- F16L55/26—Pigs or moles, i.e. devices movable in a pipe or conduit with or without self-contained propulsion means
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C7/00—Tracing profiles
- G01C7/06—Tracing profiles of cavities, e.g. tunnels
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01M—TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
- G01M3/00—Investigating fluid-tightness of structures
- G01M3/005—Investigating fluid-tightness of structures using pigs or moles
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/04—Analysing solids
- G01N29/06—Visualisation of the interior, e.g. acoustic microscopy
- G01N29/0609—Display arrangements, e.g. colour displays
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/22—Details, e.g. general constructional or apparatus details
- G01N29/26—Arrangements for orientation or scanning by relative movement of the head and the sensor
- G01N29/265—Arrangements for orientation or scanning by relative movement of the head and the sensor by moving the sensor relative to a stationary material
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/44—Processing the detected response signal, e.g. electronic circuits specially adapted therefor
- G01N29/4481—Neural networks
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F16—ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
- F16L—PIPES; JOINTS OR FITTINGS FOR PIPES; SUPPORTS FOR PIPES, CABLES OR PROTECTIVE TUBING; MEANS FOR THERMAL INSULATION IN GENERAL
- F16L2101/00—Uses or applications of pigs or moles
- F16L2101/30—Inspecting, measuring or testing
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2291/00—Indexing codes associated with group G01N29/00
- G01N2291/04—Wave modes and trajectories
- G01N2291/044—Internal reflections (echoes), e.g. on walls or defects
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2291/00—Indexing codes associated with group G01N29/00
- G01N2291/26—Scanned objects
- G01N2291/263—Surfaces
- G01N2291/2636—Surfaces cylindrical from inside
Definitions
- The present invention relates to an inspection system for a conduit.
- The system is particularly advantageous for inspecting pipes, such as sanitary and storm water sewers, but can also be adapted for the inspection of other conduits, for example tunnels and drains, in particular those used in water supply, sewerage, irrigation and drainage systems.
- The inspection and assessment of sewers for defects is critical to maintaining an effective sewerage system.
- The task is of particular importance for the world's larger and older cities, where demand on the system is high and the main sewers have aged considerably, providing conditions conducive to the development of potentially serious defects.
- Sewers are presently inspected using a closed circuit television camera which is mounted on a trolley configured to travel within the length of a pipe.
- The images obtained by the camera are assessed by experienced personnel, who identify defects and potential defects in the pipe's structure on the basis of the images.
- Personnel can also be sent into pipes which are large enough for direct inspection; however, this is normally discouraged because, although safety precautions are stringent, there remain inherent risks to personnel.
- Both assessment techniques are highly subjective and open to different interpretations. Reliance on visual assessment is also time consuming and places considerable time constraints on an inspection program when the number of experienced personnel available is limited.
- An inspection system for a conduit comprising measurement means for travelling in said conduit and obtaining data on said conduit, and processing means for processing said data to identify regions of said conduit corresponding to defects in said conduit.
- The processing means includes means for extracting dimension data on said conduit from the data obtained by said measurement means.
- Said processing means includes segmentation means for processing said dimension data into segments which represent regions that could correspond to defects in said conduit.
- The processing means also preferably includes classification means for processing said segments to classify them as corresponding to a feature of said conduit, which may be a defect.
- Said classification means implements a neural network to classify said segments.
- The system further includes interpretation means for processing a classified segment to obtain attributes of the corresponding feature and for identifying said feature on the basis of said attributes and feature data.
- Said dimension data comprises a plurality of sets of circumferential data of the inner surface of said conduit, each corresponding to a point along the path of travel of said measurement means.
- The measurement means may include laser scanning means and/or sonar scanning means.
- The laser scanning means includes a laser, means for circulating a beam generated by the laser against the surface of said conduit, and camera means for producing image data representative of areas illuminated by said beam.
- Said sonar scanning means includes a rotating sonar transducer for emitting sonar signals to, and receiving sonar echoes from, the inner surface of said conduit.
- The processing means includes movement correction means for adjusting said circumferential data to account for movement of said measurement means within said conduit.
- Figure 1 is a schematic diagram of an inspection system for a pipe;
- Figure 2 is a perspective view of a laser scanner of the system within a pipe;
- Figure 3 is a block diagram of a feature extraction board of the system;
- Figure 4 is a block diagram of an MIS of the system;
- Figure 5 is a perspective view of a sonar scanner of the system within a pipe;
- Figure 6 is a block diagram of a sonar head of the sonar scanner;
- Figure 7 is a graphical representation of a sonar echo produced by the sonar head;
- Figure 8 is a frequency response of a digital filter used in processing the sonar echoes received by the sonar head;
- Figure 9 is a graphical representation of a curve fitted to a set of sonar echo samples;
- Figure 10 is a graphical representation of a standard sonar echo used in cross-correlation of measured sonar echoes;
- Figure 11 is a graphical representation of the results achieved by cross-correlation of the standard sonar echo with a measured sonar echo;
- Figure 12 is a flow diagram of image processing software modules and databases of a work station of the system;
- Figure 13 is a flow diagram of an image segmentation module of the system;
- Figure 14 is a flow diagram of an image classification module of the system;
- Figure 15 is a diagram of a grid representing a triangulation step of the image classification module;
- Figure 16 is a diagram of a neural network structure used in the image classification module;
- Figure 17 is a flow diagram of an image interpretation module of the system;
- Figure 18 is a diagram illustrating the coordinate axes of a vehicle in a pipe;
- Figure 19 is a diagram of coordinates for a general ellipse;
- Figure 20 is a diagram illustrating determination of vehicle yaw and pitch angles from a fitted ellipse; and
- Figure 21 is a diagram illustrating centre coordinate correction.
- A pipe inspection system 2 includes a laser scanner 4, a sonar scanner 6 and a colour video camera 8, which are all mounted on a measurement vehicle 40 that can travel within a pipe, such as a sewage pipe.
- The measurement vehicle is self-propelled for movement through the pipe, and a chainage detector 10 is used to detect incremental movement of the measurement vehicle through the pipe, which enables the position of the vehicle along the length of the pipe to be determined.
- The orientation of the measurement vehicle, and of the scanning equipment 4 to 8, is determined by an orientation detector 12, which is also mounted on the measurement vehicle.
- The orientation detector 12 includes a gyroscope and accelerometers.
- The signals generated by the measurement and detection equipment 4 to 12 of the vehicle are sent to a processing system 14 of the inspection system 2 via a high speed communications link 16.
- The signals from the laser scanner 4, sonar scanner 6 and colour video camera 8 are placed in a form suitable for transmission by a laser scan converter 18, a sonar scan converter 20 and a video conditioner 22, respectively.
- Signals from the chainage detector 10, the orientation detector 12 and auxiliary sensors are combined and placed in a form suitable for transmission by a sensor conditioner 23. All the signals are combined by a link controller 24, which is mounted on the vehicle and connected to the link 16.
- The link 16 is an optical fibre link, and the converters 18 and 20 and the conditioners 22 and 23 digitise the signals produced by the scanners 4 and 6 and the camera 8 prior to submission to the link controller 24.
- The link controller 24 then time division multiplexes the digital signals and converts them to optical signals for transmission on the link 16.
- The processing system 14 can be placed outside the pipe and connected directly to the link 16; alternatively, the signals on the link 16 can be transferred over a digital telecommunications network to a remote location of the processing system 14.
- The processing system 14 includes a link controller 26 connected to the link 16, which is used to unpack or demultiplex the signals transmitted from the measurement vehicle.
- The link controller 26 is connected to other components of the processing system 14 by a VME (Versa Module Europe) bus architecture 28.
- The signals from the scanner 4 are passed to image processing and feature extraction circuitry 30, which records the signals obtained for each inner circumferential scan of the pipe at each longitudinal position, z, of the measurement vehicle.
- The signals are processed to obtain circumferential or ring data for each position z, which comprises a set of x,y or r,θ coordinates defining the inner circumferential dimensions of the pipe.
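The ring data for a position z can be held in either coordinate form. As an illustration only (the patent does not specify an implementation), a minimal Python sketch of the x,y ↔ r,θ conversion, with all names hypothetical:

```python
import math

def ring_to_cartesian(ring):
    """Convert a ring of (r, theta) samples (theta in radians) to (x, y) pairs."""
    return [(r * math.cos(th), r * math.sin(th)) for r, th in ring]

def ring_to_polar(points):
    """Convert a ring of (x, y) pairs back to (r, theta) samples."""
    return [(math.hypot(x, y), math.atan2(y, x)) for x, y in points]

# A perfectly circular 150 mm radius pipe sampled at four angles:
ring = [(150.0, 0.0), (150.0, math.pi / 2), (150.0, math.pi), (150.0, 3 * math.pi / 2)]
pts = ring_to_cartesian(ring)
```

A defect such as a void or erosion would appear as a deviation of r from the nominal pipe radius at some θ.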
- A Sun Microsystems Sparc-2 work station 32 is connected to the VME bus 28 by a VME bus to S bus converter 34, and is used to process the ring data to form feature segments and then classify the segments to determine whether they correspond to a defect in the pipe structure.
- The processing system 14 further includes an A/D board 36 for digitising the signals from other analog sensors external to the pipe, and a Motorola MV 147 central processing unit 38, which is used to control the data flow in the processing system 14 and operate the scanners and detectors 4 to 12 in real time. Ring data for each position z is also derived from the signals provided by the sonar scanner 6.
- The laser scanner 4 is used to obtain ring data for empty pipes or the upper non-fluid sections of partly full pipes.
- The sonar scanner 6 is complementary, and is used to obtain ring data for full pipes or the lower fluid sections of partly full pipes.
- The colour video camera 8 is used to obtain visual images of the inside of the pipe, which are recorded onto video tape and can be used for subsequent visual inspection of sections of the pipe, if desired.
- The laser scanner 4, shown in Figure 2 mounted on the measurement vehicle 40 within a pipe 42, includes a HeNe laser 44 which generates a laser beam 46 in the direction of travel of the support vehicle 40, substantially parallel to the axis of the pipe 42.
- The beam 46 is emitted from a first end 48 of the laser 44, to which a mirror 50 is attached that directs the beam 46 onto a refracting prism 52.
- The prism 52 directs the beam 46 in a radial direction onto the inner surface of the pipe 42.
- The prism 52 is rotated by a stepper motor 54, mounted on top of the laser 44, so as to sweep the beam 46 in a circular path across inner circumferences of the pipe 42.
- The prism 52 completes a revolution every 20 milliseconds, and a complete trace 55 of the beam 46 is imaged at each position z of the support vehicle 40 by a CCD camera 56 mounted at the back end 58 of the laser 44.
- The position z of the support vehicle 40 for each trace 55 is measured by the chainage detector 10, which comprises an optical encoder mounted on a sprocket that transfers the drive of an electric motor to the measurement vehicle 40 as it moves longitudinally along the pipe 42.
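The chainage computation from encoder counts reduces to a scale factor. The patent gives neither the encoder resolution nor the sprocket geometry, so both constants in this hypothetical sketch are assumptions:

```python
COUNTS_PER_REV = 1000   # assumed optical encoder resolution, counts per revolution
TRAVEL_PER_REV = 0.25   # assumed vehicle travel per sprocket revolution, metres

def chainage(counts):
    """Longitudinal position z of the measurement vehicle, in metres,
    from the cumulative optical encoder count."""
    return counts * TRAVEL_PER_REV / COUNTS_PER_REV

z = chainage(4000)  # 4000 counts -> 1.0 m of travel under these assumptions
```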
- The trace 55 provides an accurate representation of the form of the inner surface of the pipe 42 at a position z, from which x,y or r,θ coordinates representative of the surface dimensions can be extracted, bearing in mind that the trace 55 is substantially perpendicular to the pipe axis.
- The image processing and feature extraction circuit 30 includes a feature extractor board 62, which receives the image signals generated by the CCD camera 56 of the laser scanner 4, as shown in Figure 3.
- The board 62 includes a timing generation circuit 64 which provides vertical and horizontal synchronising (sync) signals to the camera 56.
- The sync signals are generated by a video sync generator 66 based on a 28.375 MHz clock signal generated by a two-phase clock generator 68.
- The clock generator 68 generates two 28.375 MHz clock signals which are 90° out of phase, the first being input to the video sync generator 66.
- The clock generator 68 derives the clock signals from a 56.75 MHz master clock 70.
- The feature extraction board 62 further includes a video processing circuit 72, a feature store control and addressing circuit 74, a feature data generation circuit 76, a feature storage circuit 78, and an interface 80 to the VME bus 28.
- The video processing circuit 72 includes a camera interface 82 which buffers the video signal output by the CCD camera 56.
- The level of the video signal is compared by an analog comparator 86 with the level of a threshold signal generated by a threshold generator 84, to determine whether the video signal corresponds to a pixel of the imaged laser beam trace 55.
- The comparator 86 produces a high signal if the video signal exceeds the threshold, indicating that the incident signal corresponds to a pixel of the trace 55; otherwise a low signal is produced at the output of the comparator 86.
- The timing of the output of the comparator 86 is controlled by the first clock signal.
- The feature store control and addressing circuit 74 includes feature store control logic 88 and an address counter 90.
- The feature store control logic 88 receives the output of the comparator 86, and its timing is controlled by the first and second clock signals generated by the timing generation circuit 64.
- The feature store control logic 88 enables a write operation to the feature storage circuit 78 when the output of the comparator 86 provides a high signal.
- The write operation occurs at the address provided by the address counter 90, and the data written to that location is the contents of a pixel counter 92 and a line counter 94 of the feature data generation circuit 76.
- The line counter 94 provides the number of the horizontal line of the current pixel corresponding to the video signal output by the camera interface 82, and the pixel counter 92 provides the number of the current pixel in that line.
- The pixel counter 92 is incremented by the second clock signal and reset by the horizontal sync signal, whereas the line counter 94 is incremented by the horizontal sync signal and reset by the vertical sync signal.
- The pixel counter 92 and the line counter 94 therefore provide x and y position data, respectively, for the current pixel, which is only stored in the storage circuit 78 when the current pixel corresponds to a pixel of the laser trace 55.
- The feature store control logic 88 increments the address in the address counter 90 after performing each write operation.
- The contents of the counters 92 and 94 form a 32-bit word which is stored in the storage circuit 78.
- The feature storage circuit 78 is a dual-port RAM which can hold 32 K 32-bit words and is split into two banks for double-buffer operation, i.e. one bank is written to while the other bank is read by the VME interface 80.
- The feature store control logic 88 controls the double-buffer operation by ensuring that when one bank is full, a read signal is provided to the VME interface 80, and the write operation then continues at the other bank. An error signal to the VME interface 80 is generated when the read operation on one bank has not finished but the other bank is full of x,y data.
- The VME interface 80 controls the reading of data onto the VME bus, and the feature extraction board 62 functions as a slave on the VME bus 28.
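The board performs this in hardware, but the comparator-and-counter scheme can be mirrored in software for clarity. The sketch below is an illustrative software analogue only; `frame`, `extract_trace` and the test data are hypothetical:

```python
def extract_trace(frame, threshold):
    """Software analogue of the feature extractor: return the (x, y) pixel
    coordinates whose intensity exceeds the threshold, in raster order.
    `frame` is a list of video lines, each a list of pixel intensities."""
    feature_store = []
    for line_no, line in enumerate(frame):       # line counter: reset by vertical sync
        for pixel_no, level in enumerate(line):  # pixel counter: reset by horizontal sync
            if level > threshold:                # analog comparator decision
                feature_store.append((pixel_no, line_no))
    return feature_store

# A 4x3 toy frame with a bright diagonal laser trace:
frame = [
    [0, 0, 9, 0],
    [0, 9, 0, 0],
    [9, 0, 0, 0],
]
coords = extract_trace(frame, threshold=5)
```

Only pixels belonging to the trace are stored, which is what makes the dual-port feature store far smaller than a full frame buffer.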
- The sonar head 100, as shown in Figure 6, includes a transducer 102 for emitting sonar signals and receiving the return echoes 106, and a stepper motor 104 for rotating the transducer 102 at rates of up to 2 revolutions per second.
- The sonar signals and echoes 106 therefore provide a circumferential scan 108 of the inner surface of the pipe 42 at each position z.
- The sonar head 100 further includes a motor control circuit 105, a transmit/receive circuit 107, a signal processing circuit 110 and a general control circuit 112.
- The signal processing circuit 110 and the control circuit 112 are connected to the processing system 14 by the link 16. On receiving a scan signal from the processing system 14, the control circuit 112 generates control signals for the motor control circuit 105 and the transmit/receive circuit 107, so that the motor control circuit 105 begins controlling the motor 104 so as to rotate the transducer 102.
- The transmit/receive circuit 107 is then switched to transmit mode, causing the transducer to emit a short pulse of 2.2 MHz ultrasonic radiation, and then reverts to receive mode so as to receive the sonar echo, which is passed to the signal processing circuit 110 for transmission to the processing system 14.
- A typical return echo signal 114 is shown in Figure 7.
- The radial dimension r of the point on the scan 108 corresponding to the echo 114 is determined from the time between the sending of the sonar pulse and the instant T 116 when the return signal level reaches a threshold level L 118, as discussed hereinafter.
- The second polar coordinate θ of the point on the scan 108 is determined from index pulses generated by the motor control circuit 105 for each incremental step of the stepper motor 104, bearing in mind that the initial orientation of the sonar transducer 102 is known.
- The pulses are transmitted to the processing system 14 from the motor control circuit 105 by the control circuit 112.
- Frequency multiplexing is used to provide separate channels with the processing system 14 for control signals to the head 100 and for signals transmitted to the system 14.
- The analog signal 114, representing the intensity of the sonar echo as a function of time, is digitised into 12-bit samples at a 250 kHz sampling rate using the A/D board 36 on receipt at the processing system 14.
- An 8th-order analogue low-pass filter with a cut-off frequency of 100 kHz is used to remove high-frequency components of the echo which may cause aliasing of the sampled signal.
- The output of the optical encoder of the chainage detector 10 is read by the processing system 14 immediately after each sonar echo is received.
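The threshold-crossing range measurement described above can be sketched as follows. The sampling rate comes from the text; the sound speed is an assumed value for water, and the later corrections in the text (the trigger delay t_trig and the transducer radius) are deliberately omitted from this sketch:

```python
FS = 250_000.0    # A/D sampling rate, Hz (12-bit samples, per the text)
V_SOUND = 1480.0  # assumed speed of sound in water, m/s

def transit_time(samples, threshold):
    """Return the time (s) at which the echo first reaches the threshold L,
    linearly interpolated between the two straddling samples."""
    for k in range(1, len(samples)):
        if samples[k] >= threshold > samples[k - 1]:
            frac = (threshold - samples[k - 1]) / (samples[k] - samples[k - 1])
            return (k - 1 + frac) / FS
    return None

def radial_distance(samples, threshold):
    """The pulse travels out and back, so range is half of time x speed."""
    t = transit_time(samples, threshold)
    return None if t is None else V_SOUND * t / 2.0

echo = [0, 0, 0, 100, 400, 900, 400, 100]  # synthetic echo envelope
r = radial_distance(echo, threshold=250)
```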
- Three techniques are available for determining the radial distance from an echo: a curve-fitting technique, a cross-correlation technique, and a deconvolution technique.
- The curve-fitting technique involves fitting the echo samples to a curve for each echo and then deriving the radial distance from the curve.
- The cross-correlation technique involves comparing the return samples with an echo from a known radius and using the cross-correlation result to determine the radial distance relative to the known radius.
- The curve-fitting technique is performed as follows.
- A water-to-air interface is an almost perfect mirror to sonar signals, and from an analysis of echoes obtained from a water-air interface using the transducer 102 it has been determined that a function of the form provided in equation (1) below can be used to describe a sonar echo produced by the sonar scanner 6.
- The parameters required to define each peak, i, of an echo y are the amplitude of the peak, c_i, the width of the peak, σ_i, and the time at which the peak occurs, τ_i.
- The function y takes into account that the sonar signal generated by the transducer 102 has a trailing exponential decay at the end of the pulse.
- The derivative of the function y is discontinuous, and the number of factors for each function is determined by the number of peaks detected in the return echo. A peak of the return echo is detected using an estimate of the derivative of the echo samples.
- A peak is defined to occur when there is a positive-to-negative zero-crossing of the derivative (indicating a change in the slope of the signal) and the height of the peak is greater than 5% of the maximum range of the samples, which eliminates any noise inherent in the samples.
- Numerical differentiation is inherently a noise-increasing operation; to reduce noise components, a five-point smoothing differentiating filter, having a frequency response as shown in Figure 8, is used, where:
- u_k is the k-th sample of the echo
- y_k is the k-th sample of the filtered signal
- n is the number of samples.
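Equation (2), which defines the filter coefficients, is not reproduced above. As a stand-in, the sketch below uses the common five-point smoothed first-derivative kernel (-2, -1, 0, 1, 2)/10; this kernel is an assumption, not necessarily the patent's filter:

```python
def smoothed_derivative(u):
    """Five-point smoothing differentiating filter (assumed kernel; the
    patent's equation (2) gives the actual coefficients).
    u: samples u_k; returns derivative estimates y_k for k = 2 .. n-3,
    since the filter needs two samples on either side."""
    n = len(u)
    return [(-2 * u[k - 2] - u[k - 1] + u[k + 1] + 2 * u[k + 2]) / 10.0
            for k in range(2, n - 2)]

ramp = [0, 1, 2, 3, 4, 5, 6]       # unit-slope signal
dy = smoothed_derivative(ramp)     # every estimate should be the true slope, 1.0
```

Averaging over five samples attenuates the high-frequency noise that a simple two-point difference would amplify.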
- An estimate of the time of the peak is obtained by linear interpolation of the derivative values obtained on either side of the zero-crossing, as shown in equation (3).
- The maximum value of the peak is estimated by fitting a quadratic through the sample values around the peak.
- An estimate of the width parameter σ_i is determined by locating the times of the half-peak amplitudes on the rising and falling edges of a peak. Linear interpolation is again used between the two sample values nearest to the half amplitudes in order to obtain estimates of the times. The time difference between the peak and the half amplitude of the rising edge is used in preference to the falling edge; if the time difference between the peak amplitude and the half amplitude is given by w, then for the rising edge
- The echo return time, or transit time T, is determined to be the time at which the function y for an echo reaches 10% of its maximum amplitude.
- The radial distance r_i from the sonar transducer 102 to the internal feature of the pipe 42 that reflected the sonar signal to generate the echo is obtained by:
- v_sound is the velocity of the sonar signal
- τ_i and σ_i are the time and width parameters of the maximum peak of the echo.
- The time t_trig is the time between the start of emission of the sonar pulse and the time at which a trigger signal is received by the processing system 14 from the sonar head 100, when the asynchronous sampling clock of the A/D board 36 begins sampling the echo.
- The signal processing circuit 110 of the head 100 generates the trigger signal at the end of the generation of a sonar pulse.
- R_transducer is half the width of the sonar transducer 102, which is approximately 10 mm.
- The work station 32 can be used, with the initial parameter estimates for τ_i, σ_i and c_i fed to the fmins function of the MATLAB numerical software package, to minimise the error between the fitted function y and the samples.
- An example of the results obtained for a three-peak echo 120 is illustrated in Figure 9.
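In Python, the analogous minimisation can be done with `scipy.optimize.fmin`, the Nelder-Mead simplex routine corresponding to MATLAB's `fmins`. Since equation (1) is not reproduced above, the model below is a plain sum of Gaussian peaks; that model is an assumption, and it omits the trailing exponential decay the text describes:

```python
import numpy as np
from scipy.optimize import fmin

def model(t, params):
    """Stand-in echo model: a sum of Gaussian peaks with amplitude c_i,
    centre tau_i and width sigma_i; params = [c1, tau1, sigma1, ...]."""
    y = np.zeros_like(t)
    for c, tau, sigma in np.asarray(params).reshape(-1, 3):
        y = y + c * np.exp(-(((t - tau) / sigma) ** 2))
    return y

def fit_error(params, t, samples):
    """Sum of squared errors between the fitted function and the samples."""
    return float(np.sum((model(t, params) - samples) ** 2))

t = np.linspace(0.0, 1.0, 200)
samples = model(t, [1.0, 0.4, 0.05])   # synthetic single-peak echo
# Nelder-Mead refinement from rough initial estimates of c, tau, sigma:
est = fmin(fit_error, x0=[0.8, 0.35, 0.08], args=(t, samples), disp=False)
```

The initial estimates would come from the peak detection and width estimation steps described earlier.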
- the cross-correlation technique first requires a standard sonar echo to be selected for cross-correlation with the echoes obtained in the pipe 42.
- a series of 10 echoes from an air/water interface were used to construct a standard echo template.
- the echoes were recorded using a cathode ray oscilloscope at 8-bit resolution with a sample rate of 5 MHz. As there was a significant amount of switching noise and other interfering signals, the echoes were averaged, the offset was removed and the signal was normalised.
- the template was digitally low-pass filtered using a 30 point filter with a cut-off frequency of 660 kHz.
- the leading signal of the standard peak is truncated so that the first point in the signal represents the beginning of the rise, which is the first part of the returning echo detected by the transducer 102. Truncated in this manner, the standard signal represents the shortest distance from the transducer 102 to a reflecting surface.
- the signal was then decimated according to the actual sampling period of the measured echoes to form the standard peak.
- the standard echo signal 122 is illustrated in Figure 10.
- a standard cross-correlation algorithm applied to a measured echo and the standard echo is as follows:
- x(k) is the measured sampled sonar echo
- y(k) is the standard sonar echo
- R xy (k) is the cross-correlation of the vectors x(k) and y(k). The location of the peaks of the cross-correlation are determined using equation
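A minimal sketch of the cross-correlation step: correlate the measured echo x(k) with the standard echo y(k) and locate local maxima of R xy (k). The threshold-based peak test below is an assumption, as the patent's peak-location equation is not reproduced in this excerpt.

```python
import numpy as np

def cross_correlate(x, y):
    """R_xy(k): cross-correlation of the measured echo x(k) with the
    standard echo template y(k), via full linear correlation."""
    return np.correlate(x, y, mode='full')

def correlation_peaks(r, threshold):
    """Locate local maxima of the cross-correlation above a threshold;
    the peak locations give candidate echo arrival positions (a simple
    assumed rule, not the patent's exact equation)."""
    peaks = []
    for k in range(1, len(r) - 1):
        # ">=" on the left and ">" on the right avoids double-counting plateaus
        if r[k] > threshold and r[k] >= r[k - 1] and r[k] > r[k + 1]:
            peaks.append(k)
    return peaks
```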
- An alternative technique for obtaining the parameters τ i and β i for the equation is to apply a deconvolution algorithm, as follows, to the standard echo and the return echo.
- S xy (k) is the signal resulting from the deconvolution of x(k) with y(k), x(k) is the measured sampled sonar echo,
- y(k) is the standard sonar echo with discrete Fourier transform Y(f)
- Y*(f) is the complex conjugate of Y(f)
- s is a small constant to prevent division by zero.
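The deconvolution described above can be sketched in the frequency domain. The exact formula is not reproduced in this excerpt; a common (assumed) form consistent with the quantities named above is S xy = IFFT( X(f) Y*(f) / (|Y(f)|² + s) ), with s a small constant preventing division by zero.

```python
import numpy as np

def deconvolve(x, y, s=1e-3):
    """Frequency-domain deconvolution of the measured echo x(k) with the
    standard echo y(k).  The regularised form below is an assumption:

        S_xy = IFFT( X(f) * conj(Y(f)) / (|Y(f)|^2 + s) )

    where s is a small constant preventing division by zero."""
    n = len(x)
    X = np.fft.fft(x, n)
    Y = np.fft.fft(np.asarray(y, dtype=float), n)  # zero-pad template to length n
    S = X * np.conj(Y) / (np.abs(Y) ** 2 + s)
    return np.real(np.fft.ifft(S))
```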
- the accuracy of the raw ring data (x, y and r, θ data) for each position z obtained from the laser scanner 4 and sonar scanner 6, as discussed above, can be improved by correction for movement of the measurement vehicle 40.
- the measurement vehicle 40 cannot be constrained to remain at the centre of the pipe 42 and orientated parallel with respect to the longitudinal axis of the pipe 42. Measurements obtained from the vehicle orientation detector 12 and the measured data can be used to determine components of motion of the measurement vehicle 40, such as x and y offsets from the centre of the pipe 42 to the prism 52 and the sonar head 102, and the yaw and pitch angles of the measurement vehicle 40.
- a method for correcting the raw data is discussed in the accompanying Appendix on pages 45 to 49, and involves obtaining the effects of the vehicle motion from the measured data and processing the measured data by a transformation required to transform a fitted ellipse onto a circle centred in the middle of the pipe.
- the method assumes the pipe 42 is circular and first fits the ellipse to the ring data obtained from the scanners 4 and 6. Having defined a fitted ellipse for the ring data, a transformation matrix is obtained which can be used to transform the ring data onto a circle corresponding to the assumed cross-section of the pipe 42.
- Defining the ellipse also provides offsets x c ,y c to the centre of the pipe 42 which are used to correct the cylindrical ring data so as to be centred on the centre of the pipe 42.
- the Appendix also describes how the yaw and pitch angles of the vehicle can be determined from the measured data and this can be used for comparison with signals obtained from a gyroscope of the vehicle orientation detector 12. The comparison enables adjustment of the measured yaw and pitch angles and calibration of the gyroscopes.
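The ellipse-fitting step can be sketched as follows. The Appendix's actual method is not reproduced in this excerpt, so a standard linear least-squares conic fit is used here (an assumption), and only the centre offsets x c , y c used to re-centre the ring data are extracted.

```python
import numpy as np

def fit_ellipse(x, y):
    """Least-squares fit of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    to one ring of (x, y) data; returns the conic coefficients (a, b, c, d, e)."""
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coeffs

def ellipse_centre(coeffs):
    """Centre (x_c, y_c) of the fitted conic, from the stationarity
    conditions 2a*x + b*y + d = 0 and b*x + 2c*y + e = 0."""
    a, b, c, d, e = coeffs
    M = np.array([[2.0 * a, b], [b, 2.0 * c]])
    return np.linalg.solve(M, [-d, -e])
```

Subtracting the returned centre from each ring re-centres the data on the pipe axis, as described above.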
- the plurality of ring data obtained over a length of the pipe 42 from the laser scanner 4 and the sonar scanner 6 each comprise a set of raw range data representing an image of the inner surface of the pipe which can be subjected to image processing.
- the range data comprises a set of z values, and for each value z a plurality of x, y or r, θ dimension values, corresponding to the features of the inner surface at each position z. All the x, y and r, θ values correspond to pixels of the image of the inner surface of the pipe 42.
- the work station 32 first performs image preprocessing 200 on the range data 201 and then segments the pixels into regions of interest using an image segmentation procedure 202, as shown in Figure 12.
- the segmented regions are classified using a feed-forward neural network classifier of an image classification procedure 204.
- a classifier training procedure 206 trains the network classifier off-line using the back-propagation method.
- the classifier training procedure 206 relies on a training set prepared by a preparation procedure 208.
- the training set preparation procedure 208 generates training data based on results obtained by performing the image segmentation procedure 202 on a known pipe structure with known defects.
- the results of the image classification procedure 204 are provided to an image interpretation procedure 208 which further defines the classified features of the image on the basis of a knowledge database 210 of known pipe features.
- a graphic display of the interpreted image can be produced at step 210 and an automatic defect report 212 produced at step 214, the reports generated being stored in a structured database 216.
- the image preprocessing procedure 200 removes insignificant variations in the range data and places it in a form which is not dependent on the type of scanner 4 or 6 used.
- the image preprocessing procedure 200 involves first processing the range data to produce a calibrated constant grid map of the internal surface of pipe 42, where the map is two dimensional and the surface of the pipe is considered to be split longitudinally and opened to expose the surface.
- the x, y and r, θ values for each pixel obtained by the scanners 4 and 6 are calibrated and converted into depth values. Provision can also be made for pipe manufacturing tolerances.
- the depth value of each point on the grid map is determined by taking the depth value of the closest pixel to the point. Linear interpolation can be used at an increased computational cost, but it has been found that the improvement gained does not justify the cost.
- the image preprocessing procedure 200 involves generation of a pipe model, which represents a perfect pipe with no defects or features.
- the model provides a reference level with respect to which range or depth values can be measured.
- the first step is the identification of pixels in the image which are likely to be part of a defective region, using differential geometric techniques.
- Biquadratic surface patches are used to estimate image surface derivatives as described in "Surfaces in Range Image Understanding" by Paul J. Besl, Springer-Verlag, 1988. Pixels with small first derivatives may be called good pixels.
- the model is built ring by ring. An ellipse of best fit is determined for all the good pixels in a ring. The ellipse of best fit is obtained in the same fashion as discussed previously.
- the image is systematically covered with facets which are rectangular subimages. The facet with the largest number of good pixels is chosen. Multiple bilinear regression is carried out. If this is successful then the model is determined at that facet and the facet is "written" which means that all the pixel values in the facet are set to the values determined by the regression. If the regression is not successful then the facet is skipped and the best remaining facet is chosen.
- the regression will be successful if there are enough good pixels in the facet to give reliable results and the regression equations have a unique solution. Note that once a facet has been "written" all its pixels become good. If any facets remain unwritten after one pass through the facets in the image, the "read" size of the facets is incremented, meaning the regressions examine pixels over a larger region. When a regression for a facet succeeds it writes only to its initial region, not the extended read region. Successive passes are made in this way through the list of facets until all facets are written.
- the advantage of the local facet method is that it applies to pipes of any shape.
- z i is the range value of the pixel (x i ,y i ) which is the ith good pixel in the rectangular subimage under consideration.
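The multiple bilinear regression over a facet's good pixels can be sketched as follows. The bilinear form z = p0 + p1·x + p2·y + p3·x·y and the convention that x is the column index and y the row index are assumptions; the patent's regression equations are not reproduced in this excerpt.

```python
import numpy as np

def facet_regression(xs, ys, zs):
    """Multiple bilinear regression over the good pixels (x_i, y_i, z_i)
    of one facet: fit z = p0 + p1*x + p2*y + p3*x*y by least squares.
    Returns the coefficients, or None when the system has no unique
    solution (too few independent good pixels)."""
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    A = np.column_stack([np.ones(len(xs)), xs, ys, xs * ys])
    if np.linalg.matrix_rank(A) < 4:
        return None
    p, *_ = np.linalg.lstsq(A, np.asarray(zs, dtype=float), rcond=None)
    return p

def write_facet(model, rows, cols, p):
    """'Write' a facet: set every pixel of the facet to the regression
    value (x = column, y = row), after which all its pixels count as good."""
    for r in rows:
        for c in cols:
            model[r, c] = p[0] + p[1] * c + p[2] * r + p[3] * c * r
```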
- the data of the laser scanner 4 usually has missing values, which may be caused by a light-absorbing surface, obstruction of the return light signal to the camera 56, a surface void, or a surface which reflects light away from the camera 56.
- the missing values caused when an object obscures the line of sight between the camera 56 and the laser beam trace 55 are termed shadows.
- Symbolic values are assigned to all values missing on the grid map.
- a set of possible values are obtained for the shadow pixels and other missing values by projecting a line from the camera position through every pixel. If any of these lines intersect the line projected from the centre of the pipe through the grid position of the missing value and perpendicular to the longitudinal axis of the pipe, the point of intersection is added to the set. Since this set of possible values is assumed to be continuous only the minimum range value found need be kept for subsequent processing.
- Shadows are not a considerable problem for the data obtained by the sonar scanner 6; however, instead of a single radial value for each point on the surface, there may be several radial values caused by multiple echoes being returned to the sonar head 102.
- the sonar data is therefore further preprocessed by examination of the pixels in a window surrounding the current pixel, and a single depth value is chosen from amongst the pixel values in the window.
- the size of the window, presently 7 × 7, depends on the size of the area covered by the sonar signal 106 on the internal surface of the pipe 42.
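The window-based selection of a single sonar depth can be sketched as follows. The selection rule itself is not specified in this excerpt, so the median of the valid window values is used here as a robust assumption.

```python
import numpy as np

def select_depth(image, r, c, win=7):
    """Pick a single depth for pixel (r, c) of the sonar range image from
    the values in a win x win window around it (7 x 7 per the text).
    The median of the valid (non-NaN) window values is an assumed rule."""
    h = win // 2
    window = image[max(0, r - h):r + h + 1, max(0, c - h):c + h + 1]
    vals = window[~np.isnan(window)]
    return float(np.median(vals)) if vals.size else float('nan')
```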
- the data obtained by both the laser scanner 4 and the sonar scanner 6 also contains a considerable amount of noise which can interfere with subsequent image analysis routines.
- the effects of the noise can be reduced by an image smoothing process.
- An adaptive smoothing process is used for its properties of reducing noise whilst preserving discontinuities.
- An equally weighted local neighbourhood averaging kernel which adapts itself to the local topography of the surface being smoothed is used.
- the resulting surface is smooth within regions whilst its discontinuities (often corresponding to region boundaries) are preserved.
- the strategy consists of automatically selecting an adequate kernel size which defines the size of the neighbourhood.
- the size of the smoothing kernel is based on an evaluation of the differences between the value at the centre point of the kernel window and its neighbours.
- the size of the kernel is chosen such that the largest absolute difference between the centre pixel and its neighbours is less than or equal to twice the noise level in the image, subject to the constraints that the kernel size is greater than a minimum size (typically 3 by 3) and smaller than a maximum size (typically 19 by 19).
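The adaptive smoothing rule above can be sketched per pixel: grow an odd-sized kernel while the largest centre-to-neighbour difference stays within twice the noise level, then average. The grow-then-average structure is one plausible reading of the strategy described, not a reproduction of the patent's implementation.

```python
import numpy as np

def adaptive_smooth_pixel(image, r, c, noise_level, min_size=3, max_size=19):
    """Equally weighted neighbourhood average with an adaptively chosen
    kernel size: the kernel grows (in odd steps) while the largest absolute
    difference between the centre pixel and its neighbours stays
    <= 2 * noise_level, within the [min_size, max_size] bounds."""
    size = min_size
    while size + 2 <= max_size:
        h = (size + 2) // 2
        win = image[max(0, r - h):r + h + 1, max(0, c - h):c + h + 1]
        if np.max(np.abs(win - image[r, c])) > 2.0 * noise_level:
            break  # the larger kernel would cross a discontinuity
        size += 2
    h = size // 2
    win = image[max(0, r - h):r + h + 1, max(0, c - h):c + h + 1]
    return float(win.mean())
```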
- the image segmentation procedure 202 includes a feature extraction step 220, a pixel labelling step 222 and a region extraction step 224.
- the feature extraction step 220 computes a set of features which are combined to form a feature vector for each pixel.
- the features used describe the range value properties of the pixel and represent the texture in a local neighbourhood, and include:
- the pixel labelling step 222 uses the feature vectors of the pixels to perform a classification technique which partitions the image into regions that are uniform with respect to certain features and that contrast with their adjacent neighbours in the same features.
- a Nearest Neighbour Classifier, which has been previously trained offline, is used to perform the classification.
- the classifier is trained to classify pixels according to one of a set of surface primitives, which include background and root.
- Training is performed by extracting a set of feature vectors from suitable known examples of the desired surface types.
- the set of feature vectors becomes the exemplar set for the classifier.
- the nearest neighbour classifier performs its classification by comparing the unknown feature vector to its set of exemplars.
- the assigned classification is the same as the classification of the closest exemplar in feature space using the Euclidean distance measure.
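The nearest neighbour rule above is compact enough to sketch directly: the unknown feature vector receives the label of the closest exemplar under the Euclidean distance.

```python
import numpy as np

def nn_classify(feature_vector, exemplars, labels):
    """Nearest Neighbour Classifier: assign the label of the exemplar
    closest to the unknown feature vector (Euclidean distance)."""
    d = np.linalg.norm(np.asarray(exemplars, dtype=float)
                       - np.asarray(feature_vector, dtype=float), axis=1)
    return labels[int(np.argmin(d))]
```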
- the output of the pixel labelling step 222 is a label list in the form of a function l(r,c) which is the primitive surface classification for the pixel of row r in column c.
- a relaxation step is used to improve the performance of the pixel labelling step.
- neighbourhood is a certain neighbourhood (typically a 3 by 3 window) of the pixel under consideration.
- the output of the relaxation step is a label list in the form of a function l'(r,c) which is the refined primitive surface classification for the pixel of row r in column c.
- the region extraction step 224 groups connected pixels having the same label into connected components. Those connected components whose pixel type is not of the background type are extracted to form regions of interest R. The coordinates of a bounding rectangle for each region R are determined and a bit map, map(i,j), is generated to identify pixels of interest, i.e. those pixels within the region R belonging to the same connected component, where image : R → ℝ and bit map : R → {0,1}.
- the output of the region extraction step 224 is a list of regions of interest in the range image of the pipe surface.
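The grouping of same-labelled connected pixels into regions of interest with bounding rectangles can be sketched as a standard connected-component search; 4-connectivity is an assumption, as the patent does not state the connectivity used.

```python
from collections import deque

def extract_regions(label_image, background=0):
    """Group 4-connected pixels sharing the same (non-background) label into
    regions of interest; return each region's pixel list and bounding
    rectangle (min_row, min_col, max_row, max_col)."""
    rows, cols = len(label_image), len(label_image[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r0 in range(rows):
        for c0 in range(cols):
            if seen[r0][c0] or label_image[r0][c0] == background:
                continue
            lab, pixels, q = label_image[r0][c0], [], deque([(r0, c0)])
            seen[r0][c0] = True
            while q:                          # breadth-first flood fill
                r, c = q.popleft()
                pixels.append((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and not seen[nr][nc]
                            and label_image[nr][nc] == lab):
                        seen[nr][nc] = True
                        q.append((nr, nc))
            rs = [p[0] for p in pixels]
            cs = [p[1] for p in pixels]
            regions.append({'label': lab, 'pixels': pixels,
                            'bbox': (min(rs), min(cs), max(rs), max(cs))})
    return regions
```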
- the image classification procedure 204 includes a filter step 230, an interpolation step 232, a transform step 234 and a sampling step 236 which process the bit maps of the regions of interest so as to place them in a form which can be applied to the neural networks of a neural network classification procedure 238.
- the output of the classification procedure 238 provides a classification for each region of interest and a confidence value representative of the confidence that the classification is correct.
- classes for surface defects may include void, crack, corrosion or deposit.
- the confidence value may be 0 or 1, i.e. low or high, or a "fuzzy" value between 0 and 1.
- the neural networks used in the neural network classification procedure 238 have a fixed structure, i.e. a fixed number of inputs and hidden layer nodes, yet the number of pixels of interest in a region of interest and the size of the region is not fixed. Therefore the data of the regions of interest must be manipulated so it can be applied to the fixed inputs of the networks, regardless of region size, and this should be done without sacrificing the data and true representation of the regions.
- This is achieved by first applying a defect map filter at step 230 to the bit map of each region of interest.
- the depth values for pixels of interest in the defect map are re-scaled to values between 1 and 2 and the remaining pixels are set to 0.
- the functions f and g are given by, for example,
- Nominal radius is that of the pipe, and maximum radius and minimum radius are the maximum and minimum depth or radius which the scanners 4 and 6 can detect.
- Model(i,j) is the pipe model value at the pixel (i,j), where the pipe model can be obtained by either of the two methods described previously.
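The re-scaling above can be sketched as follows. The patent's exact functions f and g are not reproduced in this excerpt, so a linear mapping of depth into [1, 2] between the scanner's minimum and maximum detectable radius is used here as an assumption.

```python
def defect_map(image, bitmap, r_min, r_max):
    """Defect-map filter: pixels of interest (bitmap == 1) have their depth
    linearly re-scaled into [1, 2] using the scanner's minimum and maximum
    detectable radius; all other pixels are set to 0.  The linear form is
    an assumption standing in for the patent's f and g."""
    out = []
    for row_img, row_bit in zip(image, bitmap):
        out.append([1.0 + (v - r_min) / (r_max - r_min) if b else 0.0
                    for v, b in zip(row_img, row_bit)])
    return out
```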
- the interpolation step 232 is then used to re-scale or compress the region of arbitrary size m × n to an image, image2 : {0, ..., M-1} × {0, ..., N-1} → ℝ, of a set size M × N so it can be applied to the neural networks. Transformation to the set image size is done by triangulation of the domain of the region, which involves forming triangles with the vertices being the pixels of the region.
- the triangles have a number of different faces or "facets" orientated in different directions depending on the values of the pixels, and each triangle is then mapped to a function so as to form a continuous "facet" image.
- This image is then sampled onto an M × N grid to obtain the interpolated image.
- the form of the triangulation for an m × n region is illustrated in Figure 15, where each grid square is divided into an upper triangle 233 and a lower triangle 235.
- the continuous "facet" image is defined by a function f(x,y) where, for any point (x,y) (0 ≤ x ≤ m-1 and 0 ≤ y ≤ n-1) in the grid map plane, f(x,y) is obtained by first finding the triangle which bounds (x,y).
- the samples are transformed at step 234 from the spatial domain into a power domain or power spectrum using a discrete two dimensional Fourier transform.
- the power spectrum provides a good representation of the region due to the transformation process which reconstructs the surface in the Fourier domain.
- a discrete Fourier transform of image2 is performed using a recursive doubling strategy for Fast Fourier Transform (FFT).
- the power spectrum is then wedge-ring sampled at step 236, which involves integrating, or summing, the samples in the power domain over eight wedge shaped and eight ring shaped sections of the domain.
- the result of the wedge-ring sampling provides a set of sixteen values which represent a compressed discrete signature of the inner surface of the pipe 42 which can be applied to the neural networks of the neural network classification step 238.
- the wedge-ring sampling is performed by first letting WEDGES be the number of wedges to be used and RINGS be the number of rings to be used.
- the interval from 0 to π/2 in the power domain is divided into WEDGES equal subintervals defined by angles [θ i , θ i+1 ].
- the interval from 0 to [(M-1)² + (N-1)²]^½ is divided into RINGS equal subintervals defined by radii [r j , r j+1 ].
- the ith wedge sample is defined to be the sum of power(u,v) for all (u,v) such that theta(u,v) is in [θ i , θ i+1 ] where theta : R² → [0,2π] is the polar coordinate angle map given by
- the jth ring sample is defined to be the sum of power(u,v) for all (u,v) such that radius(u,v) is in [r j , r j+1 ] where radius : R² → [0,∞) is the polar coordinate radius map given by
- χ S is the characteristic function of S which indicates whether x, being theta(u,v) or radius(u,v), is in S, and is defined by
- the wedge-ring samples are put together to form a feature vector.
- the first WEDGES, in this case eight, features are the wedge samples and the last RINGS, in this case eight, features are the ring samples.
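The wedge-ring sampling of steps 234 and 236 can be sketched as follows: sum the power spectrum over eight equal angular sectors of [0, π/2] and eight equal radial bands, then concatenate wedge samples and ring samples into one 16-element feature vector. The half-open interval convention at the sector boundaries is an assumption.

```python
import numpy as np

def wedge_ring_samples(power, wedges=8, rings=8):
    """Wedge-ring sampling of a power spectrum: sum power(u, v) over
    `wedges` equal sectors of [0, pi/2] and `rings` equal radial bands of
    [0, sqrt((M-1)^2 + (N-1)^2)]; wedge samples come first in the
    returned feature vector, then ring samples."""
    M, N = power.shape
    u, v = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
    theta = np.arctan2(v, u)            # polar angle, in [0, pi/2] here
    radius = np.hypot(u, v)             # polar radius
    r_max = np.hypot(M - 1, N - 1)
    feats = []
    for i in range(wedges):             # wedge samples
        lo = i * (np.pi / 2) / wedges
        hi = (i + 1) * (np.pi / 2) / wedges
        mask = ((theta >= lo) & (theta < hi)) if i < wedges - 1 \
            else ((theta >= lo) & (theta <= hi))
        feats.append(float(power[mask].sum()))
    for j in range(rings):              # ring samples
        lo = j * r_max / rings
        hi = (j + 1) * r_max / rings
        mask = ((radius >= lo) & (radius < hi)) if j < rings - 1 \
            else ((radius >= lo) & (radius <= hi))
        feats.append(float(power[mask].sum()))
    return np.array(feats)
```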
- the Fourier transform and wedge-ring sampling steps 234 and 236 preserve the overall shape and texture of the regions of interest and enable the neural network classifier to discriminate between complex surface defects. A number of factors enable this to occur, as discussed above, and are a consequence of the fact that the Fourier transform is an invertible mapping onto the frequency domain, and the power spectrum is translation invariant, which ensures the wedge-ring samples are translation invariant. Furthermore, the wedge samples are scale invariant and the ring samples are rotation invariant. This facilitates invariant classification of defects in the pipe surface.
- the neural network classification step 238 involves applying the wedge-ring samples to a number of feed-forward neural networks which have been trained by the back-propagation method.
- a neural network 241 is provided for each classification class of surface defects and pipe features.
- the networks 241 each presently have a topology of 16 inputs 243 in the input layer 239, eight nodes 245 in the first hidden layer 247, four nodes 249 in the second hidden layer 251 and two outputs in the output layer 255.
- Weights W l (i,j) are applied to each value passed between the four layers 239, 247, 251, 255, l being the layer number, i being the number of the node or output of layer l and j being the number of the node or input of layer l+1.
- Biases are applied to the result of summing all of the weighted values received at each node 245 and 249. The outputs include one for an affirmative signal, and one for a negative signal, the signal level being between 0 and 1 on each output and indicating a degree to which the network believes the region of interest belongs to its respective class or not.
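The forward pass of one such 16-8-4-2 network can be sketched as follows. Sigmoid activations are an assumption (the text only states that the outputs lie between 0 and 1), and random small weights stand in for the trained values listed in the patent, which are not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DefectNet:
    """Feed-forward network with the topology described above: 16 inputs,
    hidden layers of 8 and 4 nodes, and 2 outputs (affirmative / negative),
    each in (0, 1).  Sigmoid units are an assumption; the trained weights
    and biases from the patent are not reproduced here."""

    def __init__(self, rng=None):
        rng = rng or np.random.default_rng(0)
        sizes = [16, 8, 4, 2]
        self.W = [rng.standard_normal((m, n)) * 0.1
                  for m, n in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(n) for n in sizes[1:]]

    def forward(self, features):
        a = np.asarray(features, dtype=float)
        for W, b in zip(self.W, self.b):
            # weighted sum plus bias at each node, then squashing
            a = sigmoid(a @ W + b)
        return a  # [affirmative, negative]
```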
- the neural networks are divided into a first class which operates on images that represent intrusions into the pipe, and the second class which operates on images that represent defects or features that protrude away from the centre of the pipe 42.
- An example of the first class is the tree root/non-tree root neural network classifier, and its weights and biases are listed below in natural order, the weights for each node being provided first, starting from the first input 243 of the first layer 239 to the outputs of the fourth node 249 of the third layer 251, and the biases of the nodes 245 and 249 being specified thereafter.
- An example of the second class of neural network is the pipe join/non pipe join neural network classifier, the weights and biases of which are listed below in natural order.
- the weights and biases for the neural networks are established by the classifier training procedure 206, which uses the back-propagation method, as discussed in "Neural Computing" by R. Beale and T. Jackson, IOP Publishing, 1990, on a data set prepared by the training set preparation procedure 208.
- the data sets together with their known classification results are fed to the classifier training procedure 206 which attempts to seek convergence of the weights and biases using the back-propagation method to produce correct neural networks for each classification class.
- the data sets are randomly interleaved, i.e. for the tree root network, the tree root data sets are interleaved with data sets that do not relate to tree roots so as to obtain accurate convergence of the weight and bias values.
- Filters are used with some of the dedicated networks. For example, with the pipe-connection network, if the bounding rectangle length of the region of interest is less than i_FILTER, or the bounding rectangle height is less than j_FILTER, or the defect map area is less than area_FILTER, then "not pipe join" is returned and use of the neural network is circumvented.
- the image interpretation procedure 208 involves the verification and rating of each classified region of interest based on a database 210 of defect and pipe feature characteristics and previously identified defects and features, which includes defect characteristics, e.g. direction/angle, gradient, area, and depth of defects, ratings, e.g. structural, service or other, and spatial relation with other defects or features.
- the image interpretation procedure 208 is separated into two parts, as shown in Figure 17, the first being an image analysis step 250 which performs further image analysis on the bit maps of the classified regions in order to determine, at step 252, additional feature attributes associated with the pixels of interest.
- the feature attributes may include direction, edge details, area, position and depth of a surface feature or defect.
- An inference engine step 254 is provided to match the regions against the models of the defects provided in the knowledge database 210, using a set of defect identification and rating rules 256 which may be coded in an "if condition then conclusion" format.
- the inference engine is a general purpose facility.
- the application dependent interface functions are provided to interface between the engine and the application program's data.
- the image interpretation procedure 208 is supported by other procedures for the efficient application of a knowledge base. These are:
- Rule base maintenance, which provides facilities for the addition, modification and deletion of rules from a knowledge base.
- Rule base compiler, which parses the knowledge base and ensures correct syntax. The knowledge base is then translated into a format suitable for the knowledge base engine.
- cross-sectional area loss is less than 5%
- cross-sectional area loss is not less than 5%
- cross-sectional area loss is less than 20%
- cross-sectional area loss is less than 5%
- cross-sectional area loss is not less than 20%
- IF region is an open joint
- 'openness' is less than the pipe wall thickness THEN
- IF region is an open joint
- 'openness' is less than 1.5 pipe wall thicknesses THEN
- IF region is a displaced joint
- cross-sectional area loss is not less than 5%
- cross-sectional area loss is less than 20% THEN
- cross-sectional area loss is not less than 20%
- cross-sectional area loss is not less than 5%
- cross-sectional area loss is not less than 5%
- cross-sectional area loss is less than 20%
- cross-sectional area loss is not less than 20% THEN
- IF region is a circumferential crack
- IF region is a circumferential fracture
- THEN score is 1.0 * length(m)
- IF region is erosion
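The rule fragments above follow the "if condition then conclusion" format and can be sketched as ordinary conditionals. The thresholds (5% and 20% cross-sectional area loss, openness relative to the wall thickness, score proportional to length) come from the fragments; the returned score values themselves are illustrative assumptions, since the full rule bodies are not reproduced in this excerpt.

```python
def rate_region(region):
    """Sketch of 'if condition then conclusion' rating rules.  Thresholds
    follow the rule fragments in the text; the numeric scores returned are
    illustrative assumptions."""
    if region['type'] == 'displaced joint':
        loss = region['area_loss']           # fraction of cross-section lost
        if loss < 0.05:
            return 1.0
        if loss < 0.20:                      # not less than 5%, less than 20%
            return 2.0
        return 3.0                           # not less than 20%
    if region['type'] == 'open joint':
        if region['openness'] < region['wall_thickness']:
            return 1.0
        if region['openness'] < 1.5 * region['wall_thickness']:
            return 2.0
        return 3.0
    if region['type'] == 'circumferential fracture':
        return 1.0 * region['length_m']      # score scales with length
    return 0.0
```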
- the image interpretation procedure 208 allows the defects and pipe features specified in the following tables to be identified.
- the tables specify the neural network classification given to the defects and features, and feature attributes which need to be determined by the image analysis step 250 in order for the inference engine step 254 to complete identification of the defects and features.
- the tables document the attributes used by the rule base to classify regions. A tick indicates that the region should strongly exhibit this attribute whilst a cross indicates that the region should not exhibit this attribute. A dash indicates a don't care situation. The tables do not indicate whether the attributes are used in the conjunctive or disjunctive manner.
- the pipe position in rings, clock reference, and the pixels of the regions of interest corresponding to each defect and feature identify the precise location of a defect or feature.
- the rules 256 can also be used to resolve location conflicts which may occur during segmentation and classification when different defects overlap. At least three different types of conflicts can be resolved, conflicts among superimposed defects, conflicts among adjacent defects, and conflicts arising due to classification ambiguities between different parts of defects.
- the report is outputted in the following form:
- MIS: Management Information System
- GUI: graphical user interface
- the MIS has been written to run in a main window of the display 211.
- the main window will list the assessment for each of the defects, and allows display of multiple detailed views of the data related to the defect. These alternate views can comprise: one showing the current ring slice of the pipe data, one showing the current range or unwrapped section of the pipe data, and another showing a three dimensional (3D) display of the pipe data.
- the MIS can also provide a graph showing variation of rating over time, based on previous pipe inspections.
- the MIS has been designed and implemented in a modular fashion, and Figure
- the GUI sub-system allows the user to specify the operation (either data interpretation, visualisation or comparison) and the data to operate on (either the entire list of regions of interest, or a subset of it).
- the GUI sub-system invokes the sub-system which performs the particular operation on the data requested by the user.
- the called sub-system loads the specified data from data base using the facilities provided by the interfacing sub-system, if it has not been previously loaded.
- the analysis sub-system in turn calls components from the knowledge base sub-system, to apply the knowledge base(s) to the data.
- a default knowledge base(s) is used except where the user explicitly overrides this default through the user interface.
- the GUI processes interactive user requests that are activated using the keyboard and mouse.
- the GUI provides menus and windows to enable easy interaction with the system. It contains the control logic to sequence and control the interaction with, and of, the sub-systems.
- the GUI provides access to, or invokes facilities for:
- the software uses a computer graphics display to provide visualisation of analysed lists of regions of interest.
- Various views can be generated depending on a viewing position and data supplied by the GUI sub-system. That is, when the viewing parameters are changed in the GUI, the views are automatically regenerated to reflect this change.
- This sub-system provides display services to the other sub-systems.
- the visualisation sub-system generates a required picture from the supplied data, scales the picture to the appropriate window size and draws the picture into a graphics window.
- Cross section view generates a cross-sectional view of the pipe taken at the nominated pipe position. The generated image is scaled appropriately to fill the viewing window.
- Unwrapped view generates an unwrapped view of the pipe starting at the nominated pipe position.
- 3D view generates a 3D perspective view of the pipe from the nominated pipe position. The orientation of the viewing camera is provided to enable the generation of the view from almost any position. The generated image is scaled appropriately to fill the viewing window.
- Graphing produces a 2D line graph from the two data vector parameters.
- a comparison sub-system relates multiple pipe condition assessment results.
- a time-based rating of assessment results can be assembled and displayed as a graph.
- An interfacing sub-system provides a level of abstraction between the system and the underlying operating system and any external systems that require interfacing to.
- This subsystem is intended to provide a library of interfacing services (which are implementation, or organisation, specific).
- Data load reads data from the supporting file system and converts it into a form suitable for the internal set of regions structure.
- Data save extracts data from the internal set of regions of interest and converts it into a form suitable for writing to the underlying file system.
- Pipe directory interrogates the file system and returns the set of all pipe identifiers.
- Analysis directory interrogates the file system and returns the set of all analysis identifiers for a given pipe.
- Analysis export extracts information from the results of the analysis and formats the data in a form suitable for import into Water Authority databases.
- Printing displays results on a printer device.
- the data managed by the MIS system is primarily made up of the list of regions of interest information for a pre-processed pipe image.
- the information stored in, or calculated for, each region of interest consists of region attributes which include: region location in the pipe; an image of the region, a region type descriptor, a region classification, area, perimeter, cross-sectional area loss, geometric descriptors, rating, etc.
- the list of regions of interest encapsulates the information used for condition assessment from the raw sensor pipe data. This list of regions of interest is a main data component within the system 2 and it is extensively used by both the analysis and comparison subsystems via an interfacing sub-system.
- the pipe image is a form of annotated grid of pixel values representing a pipe surface in 3D space.
- the list of regions of interest contains attribute and sub-image information for each segmented region.
- a knowledge base representation is another data structure, which is an internal representation of an ASCII file of structured English-like declarative rules and functions.
- Raw pipe data from the sensors is supplied to the pre-processing module in real time or as a file.
- The pipe image data is transformed into a list of regions of interest and placed into the data base.
- This data base can be archived or input directly into the MIS to perform pipe analysis. For a given pipe there may be many lists of regions of interest, each corresponding to a particular run of a given sensor through the pipe.
- The interfacing sub-system provides an abstract conduit between the data used by the MIS and the data base structure of the underlying operating system, as well as external data bases to or from which user-required information about the list of regions of interest may be sent or retrieved.
- Assessment rules are entered by the user, with a text editor, into the knowledge base representation used by the knowledge base sub-system.
- The knowledge base representation can also be compiled directly into the knowledge base sub-system.
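The per-region attributes listed above might be grouped into a record like the following. This is a hypothetical sketch; the patent lists the attributes but not a concrete layout, so every field name here is an assumption.

```python
from dataclasses import dataclass

@dataclass
class RegionOfInterest:
    """Illustrative record of the per-region attributes (names assumed)."""
    location: tuple            # position of the region in the pipe, e.g. (theta, z)
    image: list                # sub-image (grid of pixel values) for the region
    region_type: str           # region type descriptor, e.g. "crack", "corrosion"
    classification: str        # classification assigned by the analysis
    area: float
    perimeter: float
    cross_section_loss: float  # cross-sectional area loss
    rating: int = 0

# One run of one sensor through a pipe yields one list of regions:
regions = [
    RegionOfInterest((1.2, 34.5), [[0]], "crack", "severe",
                     12.0, 18.0, 0.05, rating=4),
]
```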
- GUI (graphical user interface)
- A simplified interface to gather and pre-process raw sensor data can be run on a minimally configured Sparc workstation installed in a field environment.
- The transport of data from the field to the office workstation can be via magnetic tape.
- The recommended initial configuration is:
- Data from both the laser and sonar scanners is processed into the form of cylindrical coordinates (r, θ, z).
- This data can be corrected for movement of the vehicle relative to the pipe axis. Such correction can improve measurement accuracy, since the vehicle may not remain at the center of the pipe or oriented parallel to the pipe's longitudinal axis.
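As a concrete illustration of the coordinate form used here, conversion between a scanner's cylindrical samples and Cartesian points might look like the following minimal sketch; the patent does not prescribe an implementation, so the function names are assumptions.

```python
import math

def to_cartesian(r, theta, z):
    # Cylindrical sample (r, theta, z) -> Cartesian point (x, y, z).
    return (r * math.cos(theta), r * math.sin(theta), z)

def to_cylindrical(x, y, z):
    # Cartesian point -> cylindrical sample; theta lies in (-pi, pi].
    return (math.hypot(x, y), math.atan2(y, x), z)
```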
- Figure 18 shows the orientation detector 12 at an arbitrary orientation and position in the pipe.
- The orientation of the vehicle can be described by roll, pitch and yaw angles (φ, θ, ψ), while (x, y, z) coordinates can be used to describe its position.
- An inclusion zone is estimated from the orientation and radius of the previous point; all data outside this zone is excluded from the estimator so that defects in the pipe do not bias the estimates.
- The equation for the ellipse in the (x, y) coordinate frame can be determined by suitable transformations from the equation of the ellipse in the (x", y") coordinate frame.
- Equating the coefficients of Equation 6 to those in Equation 5 results in the following equations for the w_i.
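The coefficient-matching step can be sketched numerically. A common way to recover a conic's coefficient vector (analogous to the w_i, though the patent's exact equations are not reproduced here) is a least-squares fit via SVD; the function name and method below are illustrative assumptions. Defect points are assumed to have already been removed by the inclusion-zone step.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of A x^2 + B xy + C y^2 + D x + E y + F = 0.

    Returns the coefficient vector w = (A, B, C, D, E, F), found as the
    direction minimising ||D w|| subject to ||w|| = 1, i.e. the right
    singular vector of the design matrix with smallest singular value.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)
    return vt[-1]
```

For a circle of radius 2 centred at (1, -0.5), normalising so A = 1 gives B ≈ 0, C ≈ 1, and centre (-D/2, -E/2) ≈ (1, -0.5).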
- The data can be transformed into a circle by a single rotation about a vector k from the center of the ellipse along its minor axis (b), which also forms the radius of the circle. From the ellipse-fitting procedure we know the angle that the vector k makes with the y-axis; see Figure 20.
- The desired rotation angle is given by:
- The yaw (ψ) and pitch (θ) angles of the vehicle can be determined from the fitted ellipse.
- The transformation of the measured data point (x'_i, y'_i, z'_i) to the point on the circle (x_i, y_i, z_i) can be performed by either:
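One common way to carry out such a rotation of measured points about the vector k is the Rodrigues rotation formula. The sketch below is an assumption about implementation, not a reproduction of the patent's listed options.

```python
import numpy as np

def rotate_about_axis(points, k, alpha):
    """Rodrigues rotation: rotate 3-D points by angle alpha about unit axis k.

    points: array-like of shape (N, 3); k: 3-vector (normalised here).
    p_rot = p cos(a) + (k x p) sin(a) + k (k . p)(1 - cos(a))
    """
    k = np.asarray(k, float)
    k = k / np.linalg.norm(k)
    p = np.asarray(points, float)
    cos_a, sin_a = np.cos(alpha), np.sin(alpha)
    return (p * cos_a
            + np.cross(np.broadcast_to(k, p.shape), p) * sin_a
            + np.outer(p @ k, k) * (1.0 - cos_a))
```

Rotating each measured ellipse point by the fitted angle about k maps the ellipse onto the circle of radius b.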
- Figure 21 shows the geometry of the correction that must be applied to a measurement (x_i, y_i) taken from a position not at the center of the pipe.
- The center of the pipe is at coordinates (x_c, y_c) relative to the current position.
- The known parameters are the angle θ_i and the measured radius r_i. It is desired to convert these into the corresponding parameters at the center of the pipe.
- The corrections required are:
- The angle from the nominal horizontal is given by
- r_c, the distance from the current position to the center of the pipe (x_c, y_c), is given by
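The center-of-pipe correction can be sketched by passing through Cartesian coordinates; the closed-form corrections referred to above would give the same result. Function and variable names here are illustrative assumptions.

```python
import math

def center_correct(theta_i, r_i, x_c, y_c):
    """Re-reference a measurement (theta_i, r_i), taken from an off-centre
    vehicle position, to the pipe centre at (x_c, y_c) relative to the
    vehicle. Returns the corrected (angle, radius) about the pipe centre.
    """
    # Measured point in the vehicle's coordinate frame.
    x = r_i * math.cos(theta_i)
    y = r_i * math.sin(theta_i)
    # Same point seen from the pipe centre.
    dx, dy = x - x_c, y - y_c
    return math.atan2(dy, dx), math.hypot(dx, dy)
```

For example, a vehicle displaced 0.5 units toward the wall of a pipe of radius 2 measures r_i = 1.5 at theta_i = 0; the correction recovers radius 2 about the true centre.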
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Chemical & Material Sciences (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Analytical Chemistry (AREA)
- Biochemistry (AREA)
- General Health & Medical Sciences (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Remote Sensing (AREA)
- Combustion & Propulsion (AREA)
- Acoustics & Sound (AREA)
- Mechanical Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
- Length Measuring Devices By Optical Means (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU72599/94A AU7259994A (en) | 1993-07-20 | 1994-07-20 | An inspection system for a conduit |
EP94922792A EP0710351A4 (en) | 1993-07-20 | 1994-07-20 | An inspection system for a conduit |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AUPM0024 | 1993-07-20 | ||
AUPM002493 | 1993-07-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1995003526A1 true WO1995003526A1 (en) | 1995-02-02 |
Family
ID=3777068
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/AU1994/000409 WO1995003526A1 (en) | 1993-07-20 | 1994-07-20 | An inspection system for a conduit |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP0710351A4 (en) |
WO (1) | WO1995003526A1 (en) |
ZA (1) | ZA945334B (en) |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2305796A (en) * | 1995-09-26 | 1997-04-16 | London Underground Ltd | Monitoring track condition |
FR2771184A1 (en) * | 1997-11-17 | 1999-05-21 | Jean Pierre Quatrefages | Acoustic imaging system for use under water to identify objects |
EP0831299A4 (en) * | 1995-03-27 | 1999-09-22 | Toa Grout Kogyo Co | APPARATUS FOR OBSERVING THE INTERIOR SURFACE OF A PIPELINE |
WO2000063607A1 (en) * | 1999-04-16 | 2000-10-26 | Hans Oberdorfer | Device and method for inspecting hollow spaces |
WO2002006848A3 (en) * | 2000-07-14 | 2002-04-25 | Lockheed Corp | System and method for locating and positioning an ultrasonic signal generator for testing purposes |
WO2002063607A3 (en) * | 2001-01-19 | 2003-05-01 | Lockheed Corp | Remote laser beam delivery system and method for use with a gantry positioning system for ultrasonic testing purposes |
WO2002044709A3 (en) * | 2000-11-29 | 2003-08-14 | Cooper Cameron Corp | Ultrasonic testing system for tubulars |
WO2003089833A1 (en) * | 2002-04-19 | 2003-10-30 | Norsk Elektro Optikk As | Pipeline internal inspection device and method |
EP1840505A1 (en) * | 2006-03-31 | 2007-10-03 | Coperion Werner & Pfleiderer GmbH & Co. KG | Measuring device for the determination of the wear state of wells of screw extruders |
WO2008000940A1 (en) * | 2006-06-30 | 2008-01-03 | V & M France | Non-destructive testing by ultrasound of foundry products |
NL1032345C2 (en) * | 2006-08-18 | 2008-02-19 | Martijn Van Der Valk | Inspection system and device. |
WO2008099177A1 (en) * | 2007-02-14 | 2008-08-21 | Sperry Rail (International) Limited | Photographic recording of a rail surface |
DE102010049401A1 (en) * | 2010-10-26 | 2012-04-26 | Leistritz Extrusionstechnik Gmbh | Device for acquiring measurement information from an inner surface of a hollow body, in particular a bore of a single- or twin-screw extruder cylinder |
NO334482B1 (en) * | 2002-04-19 | 2014-03-17 | Norsk Elektro Optikk As | Apparatus and method for internal inspection of pipeline |
BE1022397B1 (en) * | 2014-01-17 | 2016-03-22 | Vliegen Nv | MEASURING TECHNOLOGY |
EP2626624A4 (en) * | 2010-10-04 | 2016-12-21 | Mitsubishi Heavy Ind Ltd | Device for monitoring thickness reduction of inner surface in heat transfer pipe or inner surface in evaporation pipe |
RU169803U1 (en) * | 2016-12-21 | 2017-04-03 | Ильвина Гамировна Хуснутдинова | Device for contactless control of stress-strain state and level of damage to metal structures |
US9739411B1 (en) | 2014-08-06 | 2017-08-22 | The United States Of Americas As Represented By The Administrator Of The National Aeronautics And Space Administration | System and method for traversing pipes |
EP3084419A4 (en) * | 2013-12-17 | 2017-11-15 | Ontario Power Generation Inc. | Improved ultrasound inspection |
DE102019108743A1 (en) * | 2019-04-03 | 2020-10-08 | Jt-Elektronik Gmbh | Inspection unit that can be moved in a channel |
CN113223177A (en) * | 2021-05-12 | 2021-08-06 | 武汉中仪物联技术股份有限公司 | Pipeline three-dimensional model construction method and system based on standard attitude angle correction |
WO2022032379A1 (en) * | 2020-08-10 | 2022-02-17 | Hifi Engineering Inc. | Methods and systems for tracking a pipeline inspection gauge |
CN114459353A (en) * | 2022-02-25 | 2022-05-10 | 广东工业大学 | A device and method for measuring the position and orientation of a pipeline |
US11598728B2 (en) * | 2018-05-04 | 2023-03-07 | Hydromax USA, LLC | Multi-sensor pipe inspection system and method |
US11629807B1 (en) | 2019-02-12 | 2023-04-18 | Davaus, LLC | Drainage tile inspection system |
CN117646828A (en) * | 2024-01-29 | 2024-03-05 | 中国市政工程西南设计研究总院有限公司 | Device and method for detecting relative displacement and water leakage of pipe jacking interface |
CN118583403A (en) * | 2024-05-30 | 2024-09-03 | 点夺机电工程江苏有限公司 | A kind of air leakage detection system for pipeline |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110376286B (en) * | 2019-06-13 | 2021-08-27 | 国网浙江省电力有限公司电力科学研究院 | Intelligent automatic ultrasonic detection system and method for in-service basin-type insulator |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0051912A1 (en) * | 1980-11-11 | 1982-05-19 | British Gas Corporation | Apparatus for monitoring the topography of the internal surface of a pipe |
GB2094470A (en) * | 1980-11-18 | 1982-09-15 | Dyk Johannes Wilhelmus Van | Examining surface profile |
GB2102565A (en) * | 1981-07-11 | 1983-02-02 | Draftrule Limited | Surface inspection |
WO1986003295A1 (en) * | 1984-11-30 | 1986-06-05 | Lennart Wettervik | Method and apparatus for detecting leaks and other defects on sewers and the like channels |
EP0282687A2 (en) * | 1987-03-20 | 1988-09-21 | Nippon Kokan Kabushiki Kaisha | Intrapipe spot examination pig device |
US4868648A (en) * | 1988-08-19 | 1989-09-19 | Kabushiki Kaisha Iseki Kaihatsu Koki | Method and apparatus for inspecting a pipeline in order to determine eccentricity thereof |
EP0504591A2 (en) * | 1991-03-22 | 1992-09-23 | Denso-Chemie Wedekind KG | Device for detecting joints, fissures or the like in drain channels |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE3236947A1 (en) * | 1982-10-06 | 1984-04-12 | Rainer 6074 Rödermark Hitzel | PIPE MANIPULATOR FOR PIPING THROUGH PIPES |
- 1994
- 1994-07-20 EP EP94922792A patent/EP0710351A4/en not_active Withdrawn
- 1994-07-20 WO PCT/AU1994/000409 patent/WO1995003526A1/en not_active Application Discontinuation
- 1994-07-20 ZA ZA945334A patent/ZA945334B/en unknown
Non-Patent Citations (1)
Title |
---|
See also references of EP0710351A4 * |
Cited By (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0831299A4 (en) * | 1995-03-27 | 1999-09-22 | Toa Grout Kogyo Co | APPARATUS FOR OBSERVING THE INTERIOR SURFACE OF A PIPELINE |
GB2305796A (en) * | 1995-09-26 | 1997-04-16 | London Underground Ltd | Monitoring track condition |
FR2771184A1 (en) * | 1997-11-17 | 1999-05-21 | Jean Pierre Quatrefages | Acoustic imaging system for use under water to identify objects |
WO2000063607A1 (en) * | 1999-04-16 | 2000-10-26 | Hans Oberdorfer | Device and method for inspecting hollow spaces |
US6788334B2 (en) | 1999-04-16 | 2004-09-07 | Hans Oberdorfer | Device and method for inspecting hollow spaces |
WO2002006848A3 (en) * | 2000-07-14 | 2002-04-25 | Lockheed Corp | System and method for locating and positioning an ultrasonic signal generator for testing purposes |
US6643002B2 (en) | 2000-07-14 | 2003-11-04 | Lockheed Martin Corporation | System and method for locating and positioning an ultrasonic signal generator for testing purposes |
NO337559B1 (en) * | 2000-11-29 | 2016-05-09 | Cooper Cameron Corp | Ultrasonic testing system |
WO2002044709A3 (en) * | 2000-11-29 | 2003-08-14 | Cooper Cameron Corp | Ultrasonic testing system for tubulars |
GB2385129B (en) * | 2000-11-29 | 2004-12-01 | Cooper Cameron Corp | Ultrasonic testing system |
WO2002063607A3 (en) * | 2001-01-19 | 2003-05-01 | Lockheed Corp | Remote laser beam delivery system and method for use with a gantry positioning system for ultrasonic testing purposes |
CN100335838C (en) * | 2002-04-19 | 2007-09-05 | 诺斯克埃莱克特罗奥普体克公司 | Pipeline internal inspection device and method |
US6931149B2 (en) | 2002-04-19 | 2005-08-16 | Norsk Elektro Optikk A/S | Pipeline internal inspection device and method |
NO334482B1 (en) * | 2002-04-19 | 2014-03-17 | Norsk Elektro Optikk As | Apparatus and method for internal inspection of pipeline |
WO2003089833A1 (en) * | 2002-04-19 | 2003-10-30 | Norsk Elektro Optikk As | Pipeline internal inspection device and method |
EP1840505A1 (en) * | 2006-03-31 | 2007-10-03 | Coperion Werner & Pfleiderer GmbH & Co. KG | Measuring device for the determination of the wear state of wells of screw extruders |
US7717003B2 (en) | 2006-03-31 | 2010-05-18 | Coperion Gmbh | Measuring device for detecting the state of wear of the bore walls of two interpenetrating housing bores |
EA012925B1 (en) * | 2006-06-30 | 2010-02-26 | В Э М Франс | Non-destructive testing foundry products by ultrasound |
AU2007264866B2 (en) * | 2006-06-30 | 2012-04-26 | V & M France | Non-destructive testing by ultrasound of foundry products |
NO340510B1 (en) * | 2006-06-30 | 2017-05-02 | V & M France | Non-destructive testing, especially for pipes in production or in finished condition |
FR2903187A1 (en) * | 2006-06-30 | 2008-01-04 | Setval Sarl | NON-DESTRUCTIVE CONTROL, ESPECIALLY FOR TUBES DURING MANUFACTURING OR IN THE FINAL STATE |
US8265886B2 (en) | 2006-06-30 | 2012-09-11 | V & M France | Non-destructive testing, in particular for pipes during manufacture or in the finished state |
AU2007264866C1 (en) * | 2006-06-30 | 2014-01-16 | V & M France | Non-destructive testing by ultrasound of foundry products |
WO2008000940A1 (en) * | 2006-06-30 | 2008-01-03 | V & M France | Non-destructive testing by ultrasound of foundry products |
NL1032345C2 (en) * | 2006-08-18 | 2008-02-19 | Martijn Van Der Valk | Inspection system and device. |
WO2008099177A1 (en) * | 2007-02-14 | 2008-08-21 | Sperry Rail (International) Limited | Photographic recording of a rail surface |
EP2626624A4 (en) * | 2010-10-04 | 2016-12-21 | Mitsubishi Heavy Ind Ltd | Device for monitoring thickness reduction of inner surface in heat transfer pipe or inner surface in evaporation pipe |
EP2447664A1 (en) * | 2010-10-26 | 2012-05-02 | Leistritz Extrusionstechnik GmbH | Device for recording measurement information from an internal surface of a hollow body, in particular a borehole of a single or dual shaft extruder cylinder |
DE102010049401A1 (en) * | 2010-10-26 | 2012-04-26 | Leistritz Extrusionstechnik Gmbh | Device for acquiring measurement information from an inner surface of a hollow body, in particular a bore of a single- or twin-screw extruder cylinder |
EP3084419A4 (en) * | 2013-12-17 | 2017-11-15 | Ontario Power Generation Inc. | Improved ultrasound inspection |
EP4386777A3 (en) * | 2013-12-17 | 2024-08-07 | Ontario Power Generation Inc. | Improved ultrasound inspection |
BE1022397B1 (en) * | 2014-01-17 | 2016-03-22 | Vliegen Nv | MEASURING TECHNOLOGY |
US9739411B1 (en) | 2014-08-06 | 2017-08-22 | The United States Of Americas As Represented By The Administrator Of The National Aeronautics And Space Administration | System and method for traversing pipes |
RU169803U1 (en) * | 2016-12-21 | 2017-04-03 | Ильвина Гамировна Хуснутдинова | Device for contactless control of stress-strain state and level of damage to metal structures |
US12222298B2 (en) | 2018-05-04 | 2025-02-11 | Hydromax USA, LLC | Multi-sensor pipe inspection system and method |
US11598728B2 (en) * | 2018-05-04 | 2023-03-07 | Hydromax USA, LLC | Multi-sensor pipe inspection system and method |
US11629807B1 (en) | 2019-02-12 | 2023-04-18 | Davaus, LLC | Drainage tile inspection system |
DE102019108743A1 (en) * | 2019-04-03 | 2020-10-08 | Jt-Elektronik Gmbh | Inspection unit that can be moved in a channel |
DE102019108743B4 (en) | 2019-04-03 | 2022-01-13 | Jt-Elektronik Gmbh | Inspection unit that can be moved in a channel and a method for operating the inspection unit |
WO2022032379A1 (en) * | 2020-08-10 | 2022-02-17 | Hifi Engineering Inc. | Methods and systems for tracking a pipeline inspection gauge |
CN113223177A (en) * | 2021-05-12 | 2021-08-06 | 武汉中仪物联技术股份有限公司 | Pipeline three-dimensional model construction method and system based on standard attitude angle correction |
CN114459353A (en) * | 2022-02-25 | 2022-05-10 | 广东工业大学 | A device and method for measuring the position and orientation of a pipeline |
CN117646828A (en) * | 2024-01-29 | 2024-03-05 | 中国市政工程西南设计研究总院有限公司 | Device and method for detecting relative displacement and water leakage of pipe jacking interface |
CN117646828B (en) * | 2024-01-29 | 2024-04-05 | 中国市政工程西南设计研究总院有限公司 | Device and method for detecting relative displacement and water leakage of pipe jacking interface |
CN118583403A (en) * | 2024-05-30 | 2024-09-03 | 点夺机电工程江苏有限公司 | A kind of air leakage detection system for pipeline |
Also Published As
Publication number | Publication date |
---|---|
EP0710351A1 (en) | 1996-05-08 |
ZA945334B (en) | 1995-02-28 |
EP0710351A4 (en) | 1996-11-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO1995003526A1 (en) | An inspection system for a conduit | |
CN109118542A (en) | Scaling method, device, equipment and storage medium between laser radar and camera | |
Daniel et al. | Side-scan sonar image matching | |
CN113281780B (en) | Method and device for marking image data and electronic equipment | |
JP6756889B1 (en) | Vortex detector, vortex detection method, program and trained model | |
CN110370287A (en) | Subway column inspection robot path planning's system and method for view-based access control model guidance | |
Kawashima et al. | Finding the next-best scanner position for as-built modeling of piping systems | |
CN110161053A (en) | Defect detecting system | |
US6256035B1 (en) | Image processing method and apparatus | |
AU7259994A (en) | An inspection system for a conduit | |
JPH0854219A (en) | Image processor | |
CN118191855A (en) | Distance identification method, device and storage medium in power scene | |
CN115272248B (en) | Intelligent detection method for fan gesture and electronic equipment | |
Magin et al. | A dynamic 3D environmental model with real-time access functions for use in autonomous mobile robots | |
CN114167443B (en) | Information completion method, device, computer equipment and storage medium | |
Witzgall et al. | Recovering spheres from 3D point data | |
Mermigkas et al. | Constructing Visibility Maps of Optimal Positions for Robotic Inspection in Ultra-High Voltage Centers | |
Elgazzar et al. | 3D data acquisition for indoor environment modeling using a compact active range sensor | |
Wu et al. | Power transmission line reconstruction from sequential oblique uav images | |
CN119225238A (en) | Equipment monitoring visualization system based on digital twin technology | |
JP3048896B2 (en) | Noise removal filter device for binary image | |
Liu et al. | A Design and Experimental Method of Perception Fusion | |
CN114167441B (en) | Information collection method, device, computer equipment and storage medium | |
Westling et al. | Object recognition by fast hypothesis generation and reasoning about object interactions | |
Najjari et al. | 3D CAD-based object recognition for a flexible assembly cell |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AM AT AU BB BG BR BY CA CH CN CZ DE DK ES FI GB GE HU JP KE KG KP KR KZ LK LT LU LV MD MG MN MW NL NO NZ PL PT RO RU SD SE SI SK TJ TT UA US UZ VN |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): KE MW SD AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG |
|
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
ENP | Entry into the national phase |
Ref country code: US Ref document number: 1996 583041 Date of ref document: 19960117 Kind code of ref document: A Format of ref document f/p: F |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1994922792 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1994922792 Country of ref document: EP |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
NENP | Non-entry into the national phase |
Ref country code: CA |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 1994922792 Country of ref document: EP |