WO1995003526A1 - An inspection system for a conduit - Google Patents

An inspection system for a conduit

Info

Publication number
WO1995003526A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
inspection system
conduit
sonar
pipe
Prior art date
Application number
PCT/AU1994/000409
Other languages
French (fr)
Inventor
Ian Barry Macintyre
Patrick Dale Kearney
Jensen Lok Chueng Fong
Michael Vaughan Roberts
Kevin John Rogers
Ron Sharpe
Jacek Gibert
John Sebastian Mashford
Robert Andrew Parker
Michael Albert Rahilly
Murray John Jensen
Original Assignee
Commonwealth Scientific And Industrial Research Organisation
Melbourne Water Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Commonwealth Scientific And Industrial Research Organisation and Melbourne Water Corporation
Priority to AU72599/94A (published as AU7259994A)
Priority to EP94922792A (published as EP0710351A4)
Publication of WO1995003526A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • H04N7/185Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F16ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
    • F16LPIPES; JOINTS OR FITTINGS FOR PIPES; SUPPORTS FOR PIPES, CABLES OR PROTECTIVE TUBING; MEANS FOR THERMAL INSULATION IN GENERAL
    • F16L55/00Devices or appurtenances for use in, or in connection with, pipes or pipe systems
    • F16L55/26Pigs or moles, i.e. devices movable in a pipe or conduit with or without self-contained propulsion means
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C7/00Tracing profiles
    • G01C7/06Tracing profiles of cavities, e.g. tunnels
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01MTESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M3/00Investigating fluid-tightness of structures
    • G01M3/005Investigating fluid-tightness of structures using pigs or moles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/04Analysing solids
    • G01N29/06Visualisation of the interior, e.g. acoustic microscopy
    • G01N29/0609Display arrangements, e.g. colour displays
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/22Details, e.g. general constructional or apparatus details
    • G01N29/26Arrangements for orientation or scanning by relative movement of the head and the sensor
    • G01N29/265Arrangements for orientation or scanning by relative movement of the head and the sensor by moving the sensor relative to a stationary material
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/44Processing the detected response signal, e.g. electronic circuits specially adapted therefor
    • G01N29/4481Neural networks
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F16ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
    • F16LPIPES; JOINTS OR FITTINGS FOR PIPES; SUPPORTS FOR PIPES, CABLES OR PROTECTIVE TUBING; MEANS FOR THERMAL INSULATION IN GENERAL
    • F16L2101/00Uses or applications of pigs or moles
    • F16L2101/30Inspecting, measuring or testing
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2291/00Indexing codes associated with group G01N29/00
    • G01N2291/04Wave modes and trajectories
    • G01N2291/044Internal reflections (echoes), e.g. on walls or defects
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2291/00Indexing codes associated with group G01N29/00
    • G01N2291/26Scanned objects
    • G01N2291/263Surfaces
    • G01N2291/2636Surfaces cylindrical from inside

Definitions

  • the present invention relates to an inspection system for a conduit.
  • the system is particularly advantageous for use in inspecting pipes, such as sanitary and storm water sewers, but can also be adapted for inspection of other conduits, for example tunnels and drains, in particular those used in water supply, sewerage, irrigation and drainage systems.
  • the inspection and assessment of sewers for defects is critical to maintain an effective sewerage system.
• the task is of particular importance for the world's larger and older cities, where demand on the system is high and the main sewers have aged considerably, thereby providing conditions conducive to the development of potentially serious defects.
  • Sewers are presently inspected using a closed circuit television camera which is mounted on a trolley configured to travel within the length of a pipe.
  • the images obtained by the camera are assessed by experienced personnel who identify defects and potential defects in the pipe's structure on the basis of the images.
• Personnel can also be sent into pipes which are large enough for direct inspection; however, this is normally discouraged because, although safety precautions are stringent, there remain inherent risks to personnel.
  • Both assessment techniques are highly subjective and open to different interpretations. Reliance on visual assessment is also time consuming and places considerable time constraints on an inspection program when the number of experienced personnel available is limited.
  • an inspection system for a conduit comprising measurement means for travelling in said conduit and obtaining data on said conduit, and processing means for processing said data to identify regions of said conduit corresponding to defects in said conduit.
  • processing means includes means for extracting dimension data on said conduit from said data obtained by said measurement means.
  • said processing means includes segmentation means for processing said dimension data into segments which represent regions which could correspond to defects in said conduit.
  • the processing means also preferably includes classification means for processing said segments to classify said segments as corresponding to a feature of said conduit, which may be a defect.
  • said classification means implements a neural network to classify said segments.
  • the system further includes interpretation means for processing a classified segment to obtain attributes of the corresponding feature and for identifying said feature on the basis of said attributes and feature data.
  • said dimension data comprises a plurality of circumferential data of the inner surface of said conduit each corresponding to points along the path of travel of said measurement means.
  • the measurement means may include laser scanning means and/or sonar scanning means.
  • the laser scanning means includes a laser, means for circulating a beam generated by the laser against the surface of said conduit, and camera means for producing image data representative of areas illuminated by said beam.
  • said sonar scanning means includes a rotating sonar transducer for emitting sonar signals to and receiving sonar echoes from the inner surface of said conduit.
  • processing means includes movement correction means for adjusting said circumferential data to account for movement of said measurement means within said conduit.
  • Figure 1 is a schematic diagram of an inspection system for a pipe
  • Figure 2 is a perspective view of a laser scanner of the system within a pipe
  • Figure 3 is a block diagram of a feature extraction board of the system
  • Figure 4 is a block diagram of an MIS of the system
  • Figure 5 is a perspective view of a sonar scanner of the system within a pipe
  • Figure 6 is a block diagram of a sonar head of the sonar scanner
  • Figure 7 is a graphical representation of a sonar echo produced by the sonar head
  • Figure 8 is a frequency response of a digital filter used in processing the sonar echoes received by the sonar head
  • Figure 9 is a graphical representation of a curve-fitted to a set of sonar echo samples
  • Figure 10 is a graphical representation of a standard sonar echo used in cross-correlation of measured sonar echoes
  • Figure 11 is a graphical representation of the results achieved by cross-correlation of the standard sonar echo with a measured sonar echo
  • Figure 12 is a flow diagram of image processing software modules and databases of a work station of the system
  • Figure 13 is a flow diagram of an image segmentation module of the system
  • Figure 14 is a flow diagram of an image classification module of the system
  • Figure 15 is a diagram of a grid representing a triangulation step of the image classification module
  • Figure 16 is a diagram of a neural network structure used in the image classification module
  • Figure 17 is a flow diagram of an image interpretation module of the system
  • Figure 18 is a diagram illustrating the coordinate axis of a vehicle in a pipe
  • Figure 19 is a diagram of coordinates for a general ellipse.
  • Figure 20 is a diagram illustrating determination of vehicle yaw and pitch angles from a fitted ellipse.
  • Figure 21 is a diagram illustrating centre coordinate correction.
  • a pipe inspection system 2 includes a laser scanner 4, a sonar scanner 6, and a colour video camera 8 which are all mounted on a measurement vehicle 40 that can travel within a pipe, such as a sewage pipe.
  • the measurement vehicle is self-propelled for movement through the pipe, and a chainage detector 10 is used to detect incremental movement of the measurement vehicle through the pipe, which enables the position of the measurement vehicle along the length of the pipe to be determined.
  • the orientation of the measurement vehicle, and the scanning equipment 4 to 8, is determined by an orientation detector 12 which is also mounted on the measurement vehicle.
  • the orientation detector 12 includes a gyroscope and accelerometers.
  • the signals generated by the measurement and detection equipment 4 to 12 of the vehicle are sent to a processing system 14 of the inspection system 2 via a high speed communications link 16.
  • the signals from the laser scanner 4, sonar scanner 6 and colour video camera 8 are placed in a form suitable for transmission by a laser scan converter 18, a sonar scan converter 20 and a video conditioner 22, respectively.
  • Signals from the chainage detector 10, the orientation detector 12 and auxiliary sensors are combined and placed in a form suitable for transmission by a sensor conditioner 23. All the signals are combined by a link controller 24 mounted on the vehicle and which is connected to the link 16.
  • the link 16 is an optical fibre link, and the converters 18 and 20 and the conditioners 22 and 23 digitise the signals produced by the scanners 4 and 6 and the camera 8 prior to submission to the link controller 24.
  • the link controller 24 is then used to time division multiplex the digital signals and convert them to optical signals for transmission on the link 16.
  • the processing system 14 can be placed outside the pipe and connected directly to the link 16 or alternatively the signals on the link 16 can be transferred over a digital telecommunications network to a remote location of the processing system 14.
  • the processing system 14 includes a link controller 26 connected to the link 16 which is used to unpack or demultiplex the signals transmitted from the measurement vehicle.
  • the link controller 26 is connected to other components of the processing system 14 by a VME (Versa Module Europe) bus architecture 28.
  • the signals from the scanner 4 are passed to image processing and feature extraction circuitry 30 which records the signals obtained for each inner circumferential scan of the pipe at each longitudinal position, z, of the measurement vehicle.
• the signals are processed to obtain circumferential or ring data for each position z, which comprises a set of x,y or r,θ coordinates defining the inner circumferential dimensions of the pipe.
  • a Sun Microsystems Sparc-2 work station 32 is connected to the VME bus 28 by a VME bus to S bus converter 34, and is used to process the ring data to form feature segments and then classify the segments to determine if they correspond to a defect in the pipe structure.
  • the processing system 14 further includes an A/D board 36 for digitising the signals from other analog sensors external to the pipe, and a Motorola MV 147 central processing unit 38 which is used to control the data flow in processing system 14 and operate the scanners and detectors 4 to 12 in real-time. Ring data for each position z is also derived from the signals provided by sonar scanner 6.
• the laser scanner 4 is used to obtain ring data for empty pipes or the upper non-fluid sections of partly full pipes.
• the sonar scanner 6 is complementary, and is used to obtain ring data for full pipes or the lower fluid sections of partly full pipes.
  • the colour video camera 8 is used to obtain visual images of the inside of the pipe which are recorded onto video tape and can be used for subsequent visual inspection of sections of the pipe, if desired.
• the laser scanner 4, as shown in Figure 2 mounted on the measurement vehicle 40 within a pipe 42, includes a HeNe laser 44 which generates a laser beam 46 in the direction of travel of the support vehicle 40, substantially parallel to the axis of the pipe 42.
  • the beam 46 is emitted from a first end 48 of the laser 44 to which a mirror 50 is attached which directs the beam 46 onto a refracting prism 52.
  • the prism 52 directs the beam 46 in a radial direction onto the inner surface of the pipe 42.
  • the prism 52 is rotated by a stepper motor 54, mounted on top of the laser 44, so as to sweep the beam 46 in a circular path across inner circumferences of the pipe 42.
  • the prism 52 completes a revolution every 20 milliseconds, and a complete trace 55 of the beam 46 is imaged at each position z of a support vehicle 40 by a CCD camera 56 mounted on the laser 44 at the back end 58 of the laser 44.
• the position z of the support vehicle 40 for each trace 55 is measured by the chainage detector 10, which comprises an optical encoder mounted on a sprocket which transfers the drive of an electric motor to the measurement vehicle 40 as it moves longitudinally along the pipe 42.
• the trace 55 provides an accurate representation of the form of the inner surface of the pipe 42 at a position z, from which x,y or r,θ coordinates representative of the surface dimensions can be extracted, bearing in mind the trace 55 is substantially perpendicular to the pipe axis.
  • the image processing and feature extraction circuit 30 includes a feature extractor board 62 which receives the image signals generated by the CCD camera 56 of the laser scanner 4, as shown in Figure 3.
  • the board 62 includes a timing generation circuit 64 which provides vertical and horizontal synchronising (sync) signals to the camera 56.
  • the sync signals are generated by a video sync generator 66 based on a 28.375 MHz clock signal generated by a two phase clock generator 68.
  • the clock generator 68 generates two 28.375 MHz clock signals which are 90° out of phase, the first being inputted to the video sync generator 66.
  • the clock generator 68 generates the clock signals from a 56.75 MHz master clock 70.
  • the feature extraction board 62 further includes a video processing circuit 72, a feature store control and an addressing circuit 74, a feature data generation circuit 76, a feature storage circuit 78, and an interface 80 to the VME bus 28.
  • the video processing circuit 72 includes a camera interface 82 which buffers the video signal output by the CCD camera 56.
  • the level of the video signal is compared by an analog comparator 86 with the level of a threshold signal generated by a threshold generator 84 to determine whether the video signal corresponds to a pixel of the imaged laser beam trace 55.
  • the comparator 86 will produce a high signal if the video signal exceeds the threshold indicating that the incident signal corresponds to a pixel of the trace 55, otherwise a low signal is produced at the output of the comparator 86.
  • the timing of the output of the comparator 86 is controlled by the first clock signal.
  • the feature store control addressing circuit 74 includes feature store control logic 88 and an address counter 90.
  • the feature store control logic 88 receives the output of the comparator 86 and its timing is controlled by the first and second clock signals generated by the timing generation circuit 64.
  • the feature store control logic 88 enables a write operation to the feature storage circuit 78 when the output of the comparator 86 provides a high signal.
  • the write operation occurs at the address provided by the address counter 90 and the data written to that location is the contents of a pixel counter 92 and a line counter 94 of the feature data generation circuit 76.
  • the line counter 94 provides the number of the horizontal line of the current pixel corresponding to the video signal output by the camera interface 82, and the pixel counter 92 provides the number of the current pixel in that line.
  • the pixel counter 92 is incremented by the second clock signal, and reset by the horizontal sync signal whereas the line counter 94 is incremented by the horizontal sync signal and reset by the vertical sync signal.
  • the pixel counter 92 and the line counter 94 therefore provide x and y position data, respectively, for the current pixel, which is only stored in the storage circuit 78 when the current pixel corresponds to a pixel of the laser trace 55.
  • the feature store control logic 88 increments the address in the address counter 90 after performing each write operation.
  • the contents of the counters 92 and 94 form a 32-bit word which is stored in the storage circuit 78.
  • the feature storage circuit 78 is a dual port RAM which can hold 32 K 32-bit words and is split into two banks for double buffer operation, i.e. one bank is written to while the other bank is read by the VME interface 80.
  • the feature store control logic 88 controls the double buffer operation by ensuring that when one bank is full, a read signal is provided to the VME interface 80, and then the write operation continues at the other bank. An error signal to the VME interface 80 is generated when the read operation to one bank is not finished yet the other bank is full of x,y data.
• the VME interface 80 controls the reading of data onto the VME bus, and the feature extraction board 62 functions as a slave on the VME bus 28.
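• The board's capture pipeline can be pictured in software. The following Python sketch is a hypothetical software analogue of the comparator 86, the pixel and line counters 92 and 94, and the feature store 78; the frame size and threshold value are illustrative assumptions, not figures from the patent.

    import numpy as np

    def extract_trace_pixels(frame, threshold):
        """Store the (pixel, line) position of every pixel whose intensity
        exceeds the threshold, i.e. every pixel the comparator 86 flags as
        part of the imaged laser trace 55."""
        stored = []
        for line_no, line in enumerate(frame):          # line counter 94
            for pixel_no, value in enumerate(line):     # pixel counter 92
                if value > threshold:                   # comparator 86 goes high
                    stored.append((pixel_no, line_no))  # write to feature store 78
        return np.array(stored)

    # Illustrative use: a synthetic frame containing a bright circular trace.
    h, w = 256, 256
    yy, xx = np.mgrid[0:h, 0:w]
    ring = np.abs(np.hypot(xx - w / 2, yy - h / 2) - 80) < 1.5
    frame = np.where(ring, 200.0, 10.0)
    trace = extract_trace_pixels(frame, threshold=100.0)
    print(trace.shape)  # one (x, y) word per trace pixel, as stored on the board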
• the sonar head 100, as shown in Figure 6, includes a transducer 102 for emitting sonar signals and receiving the return echoes 106, and a stepper motor 104 for rotating the transducer 102 at rates up to 2 revolutions per second.
  • the sonar signals and echoes 106 therefore provide a circumferential scan 108 of the inner surface of the pipe 42 at each position z.
  • the sonar head 100 further includes a motor control circuit 105, a transmit/receive circuit 107, a signal processing circuit 110 and a general control circuit 112.
• the signal processing circuit 110 and the control circuit 112 are connected to the processing system 14 by the link 16. On receiving a scan signal from the processing system 14, the control circuit 112 generates control signals for the motor control circuit 105 and the transmit/receive circuit 107 so that the motor control circuit 105 begins controlling the motor 104 so as to rotate the transducer 102.
  • the transmit/receive circuit 107 is then switched to transmit mode so as to cause the transducer to emit a short pulse of 2.2 MHz ultrasonic radiation and then reverts to receive mode so as to receive the sonar echo, which is passed to signal processing circuit 110 for transmission to processing system 14.
  • a typical return echo signal 114 is shown in Figure 7.
  • the radial dimension r of the point on the scan 108 corresponding to the echo 114 is determined from the time between sending of the sonar pulse and the instant T 116 when the return signal level reaches a threshold level L 118, as discussed hereinafter.
• the second polar coordinate θ of the point on the scan 108 is determined from index pulses generated by the motor control circuit 105 for each incremental step of the stepper motor 104, bearing in mind the initial orientation of the sonar transducer 102 is known.
• the pulses are transmitted to the processing system 14 from the motor control circuit 105 by the control circuit 112.
  • Frequency multiplexing is used to provide separate channels with the processing system 14 for control signals to the head 100, and signals transmitted to the system 14.
  • the analog signal 114 representing the intensity of the sonar echo as a function of time is digitised into 12-bit samples at a 250 kHz sampling rate using the A/D board 36 on receipt at the processing system 14.
  • an 8-th order analogue low-pass filter with a cut-off frequency of 100 kHz is used to remove high-frequency components of the echo which may cause aliasing of the sampled signal.
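• As a sketch of this acquisition chain, the Python fragment below simulates a 2.2 MHz echo, applies an 8th-order 100 kHz low-pass (a Butterworth design is assumed here; the text states only the order and cut-off) and then takes 12-bit samples at 250 kHz. All waveform parameters other than the stated frequencies are illustrative.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs_sim = 20e6                                # high-rate stand-in for the analog signal
    t = np.arange(0, 200e-6, 1 / fs_sim)
    carrier = np.sin(2 * np.pi * 2.2e6 * t)      # 2.2 MHz ultrasonic pulse
    echo = np.exp(-((t - 80e-6) / 5e-6) ** 2) * carrier
    envelope = np.abs(echo)                      # intensity-like signal into the A/D chain

    # 8th-order low-pass at 100 kHz to remove components that would alias
    # at the 250 kHz sampler.
    sos = butter(8, 100e3, btype="low", fs=fs_sim, output="sos")
    filtered = sosfiltfilt(sos, envelope)

    # 12-bit sampling at 250 kHz, as on the A/D board 36.
    step = int(fs_sim / 250e3)
    samples = np.clip(np.round(filtered[::step] / filtered.max() * 4095), 0, 4095)
    print(len(samples), samples.max())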
  • the output of the optical encoder of the chainage detector 10 is obtained by the processing system 14 immediately after each sonar echo is received.
• three techniques are available for determining the echo transit time: a curve-fitting technique, a cross-correlation technique, and a deconvolution technique.
  • the curve-fitting technique involves fitting the echo samples to a curve for each echo and then deriving the radial distance from the curve.
  • the cross-correlation technique involves comparing the return samples with an echo from a known radius and using the cross-correlation result to determine the radial distance relative to the known radius.
  • the curve-fitting technique is performed as follows.
  • a water to air interface is an almost perfect mirror to sonar signals, and from an analysis of echoes obtained from a water-air interface using the transducer 102 it has been determined that a function of the form provided in equation (1) below can be used to describe a sonar echo produced by the sonar scanner 6.
• the parameters required to define each peak, i, of an echo y are the amplitude of the peak, c_i, the width of the peak, σ_i, and the time at which the peak occurs, τ_i.
  • the function y takes into account that the sonar signal generated by the transducer 102 has a trailing exponential decay at the end of the pulse.
• the derivative of the function y is discontinuous, and the number of factors for each function is determined by the number of peaks detected in the return echo. A peak of the return echo is detected using an estimate of the derivative of the samples of the echo.
• a peak is defined to occur when there is a positive to negative zero-crossing of the derivative (indicating a change in the slope of the signal) and the height of the peak is greater than 5% of the maximum range of the samples, which eliminates spurious peaks caused by noise inherent in the samples.
• Numerical differentiation is inherently a noise increasing operation, and to reduce noise components a five point smoothing differentiating filter, having a frequency response as shown in Figure 8, is used, where:
• u_k is the kth sample of the echo
• y_k is the kth sample of the filtered signal
• n is the number of samples.
  • An estimate of the time of the peak is obtained by linear interpolation of the derivative values obtained on either side of the zero-crossing as shown in equation (3).
• the maximum value of the peak is estimated by fitting a quadratic through the samples around the peak.
• An estimate of the width parameter σ_i is determined by locating the times of the half peak amplitudes on the rising and falling edges of a peak. Linear interpolation is again used between the two nearest sample values to the half amplitudes in order to obtain estimates of the half-amplitude times. The time difference between the peak and the half amplitude of the rising edge is used in preference to the falling edge; if the time difference between the peak amplitude and half amplitude is given by w, then for the rising edge
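• A compact Python sketch of this peak detection follows. The exact filter coefficients of equation (2) are not reproduced in this extract, so a standard five-point smoothed-derivative (Savitzky-Golay) kernel is assumed; the zero-crossing interpolation and rising-edge half-amplitude width estimate follow the description above.

    import numpy as np

    def five_point_derivative(u):
        """Five-point smoothing differentiating filter (an assumed
        Savitzky-Golay first-derivative kernel standing in for equation (2))."""
        k = np.array([-2.0, -1.0, 0.0, 1.0, 2.0]) / 10.0
        return np.convolve(u, k[::-1], mode="same")

    def detect_peaks(samples):
        """Peaks occur at positive-to-negative zero crossings of the
        derivative, provided the peak height exceeds 5% of the sample range."""
        d = five_point_derivative(samples)
        min_height = samples.min() + 0.05 * np.ptp(samples)
        peaks = []
        for k in range(len(d) - 1):
            if d[k] > 0 >= d[k + 1] and samples[k] > min_height:
                # Linear interpolation of the zero crossing gives the peak time.
                peaks.append(k + d[k] / (d[k] - d[k + 1]))
        return peaks

    def rising_edge_width(samples, peak_index):
        """Width estimate w from the rising-edge half-amplitude time, which
        the text prefers over the falling edge."""
        half = samples[peak_index] / 2.0
        k = peak_index
        while k > 0 and samples[k] > half:
            k -= 1
        frac = (half - samples[k]) / (samples[k + 1] - samples[k])
        return peak_index - (k + frac)   # w, in sample periods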
• the echo return time or transit time T is determined to be the time at which the function y for an echo reaches 10% of its maximum amplitude.
• the radial distance r_i from the sonar head 100 to the internal feature of the pipe 42 that reflected the sonar signal to generate the echo is obtained by:
• v_sound is the velocity of the sonar signal
  • ⁇ i and ⁇ i are the time and width parameters of the maximum peak of the echo.
• the time t_trig is the time between the start of emission of the sonar pulse and the instant when a trigger signal is received by the processing system 14 from the sonar head 100, at which point the asynchronous sampling clock of the A/D board 36 begins sampling the echo.
  • the signal processing circuit 110 of the head 100 generates the trigger signal at the end of generation of a sonar pulse.
• R_transducer is half the width of the sonar transducer 102, which is approximately 10 mm.
• the work station 32 can be used, with the initial parameter estimates for τ_i, σ_i and c_i fed to the fmins module of the MATLAB algebra software package, to minimise the error between the fitted function y and the samples.
  • An example of the results obtained for a three peak echo 120 is illustrated in Figure 9.
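• The shape of equation (1) is not reproduced in this extract, so the sketch below assumes one Gaussian rise per peak joined to an exponential tail, which is consistent with the stated trailing decay and discontinuous derivative; scipy.optimize.fmin is the Nelder-Mead routine analogous to MATLAB's fmins. The radial-distance conversion is likewise a hypothetical reading of the omitted range equation, using a nominal speed of sound in water.

    import numpy as np
    from scipy.optimize import fmin   # Nelder-Mead, analogous to MATLAB's fmins

    def echo_model(t, params):
        """Assumed form of equation (1): per peak i, a Gaussian rise to
        amplitude c_i at time tau_i with width sigma_i, then an exponential
        tail; the join at the peak makes the derivative discontinuous."""
        y = np.zeros_like(t)
        for c, sigma, tau in np.reshape(params, (-1, 3)):
            rise = c * np.exp(-((t - tau) / sigma) ** 2)
            tail = c * np.exp(-(t - tau) / sigma)
            y += np.where(t < tau, rise, tail)
        return y

    def fit_echo(t, samples, initial_params):
        """Minimise the squared error between the model and the echo samples,
        starting from the peak-detection estimates of (c_i, sigma_i, tau_i)."""
        error = lambda p: np.sum((echo_model(t, p) - samples) ** 2)
        return fmin(error, initial_params, disp=False)

    def radial_distance(transit_time, t_trig, v_sound=1482.0, r_transducer=0.010):
        """Hypothetical range conversion: half the total two-way transit time
        times the speed of sound, offset by half the transducer width."""
        return v_sound * (transit_time + t_trig) / 2.0 + r_transducer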
  • the cross-correlation technique first requires a standard sonar echo to be selected for cross-correlation with the echoes obtained in the pipe 42.
• a series of 10 echoes from an air/water interface was used to construct a standard echo template.
  • the echoes were recorded using a cathode ray oscilloscope, at 8-bit resolution with a sample rate of 5 MHz and as there was a significant amount of switching noise and other signals, the echoes were averaged, the offset removed and the signal normalised.
• the template was digitally low-pass filtered using a 30 point filter with a cut-off frequency of 660 kHz.
  • the leading signal of the standard peak is truncated so that the first point in the signal represents the beginning of the rise, which is the first part of the returning echo detected by the transducer 102. Therefore by truncating the standard signal in this manner the signal represents the shortest distance from the transducer 102 to a reflecting surface.
  • the signal was then decimated according to the actual sampling period of the measured echoes to form the standard peak.
  • the standard echo signal 122 is illustrated in Figure 10.
• a standard cross-correlation algorithm applied to a measured echo and the standard echo is as follows: R_xy(k) = Σ_n x(n + k)·y(n), where
  • x(k) is the measured sampled sonar echo
  • y(k) is the standard sonar echo
• R_xy(k) is the cross-correlation of the vectors x(k) and y(k). The locations of the peaks of the cross-correlation are determined using equation
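• In Python, the cross-correlation and the resulting time offset of an echo relative to the template can be sketched as follows; converting the offset to a radial distance relative to the known radius uses the same two-way travel reasoning as above.

    import numpy as np

    def echo_delay(measured, standard, dt):
        """Cross-correlate a measured echo with the standard echo template.
        With the template truncated to start at its rising edge, the lag at
        the correlation peak locates the nearest reflecting surface."""
        r_xy = np.correlate(measured, standard, mode="full")
        lag = np.argmax(r_xy) - (len(standard) - 1)   # lag in samples
        return lag * dt                               # lag in seconds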
• An alternative technique for obtaining the parameters τ_i and σ_i for equation (1) is to apply a deconvolution algorithm, as follows, to the standard echo and the return echo: S_xy(k) = IFFT[ X(f)·Y*(f) / (|Y(f)|² + s) ], where
• S_xy(k) is the signal resulting from the deconvolution of x(k) with y(k), x(k) is the measured sampled sonar echo with discrete Fourier transform X(f),
  • y(k) is the standard sonar echo with discrete Fourier transform Y(f)
  • Y*(f) is the complex conjugate of Y(f)
  • s is a small constant to prevent division by zero.
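• The deconvolution is a regularised spectral division and can be sketched directly from the definitions above:

    import numpy as np

    def deconvolve_echo(x, y, s=1e-3):
        """S_xy(k) = IFFT[ X(f) * conj(Y(f)) / (|Y(f)|**2 + s) ], where the
        small constant s prevents division by zero, as described above."""
        n = len(x) + len(y) - 1
        X = np.fft.fft(x, n)
        Y = np.fft.fft(y, n)
        return np.real(np.fft.ifft(X * np.conj(Y) / (np.abs(Y) ** 2 + s)))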
  • the accuracy of the raw ring data (x,y and r, ⁇ data) for each position z obtained from the laser scanner 4 and sonar scanner 6, as discussed above, can be improved by correction for movement of the measurement vehicle 40.
• the measurement vehicle 40 cannot be constrained to remain at the centre of the pipe 42 and orientated parallel with respect to the longitudinal axis of the pipe 42. Measurements obtained from the vehicle orientation detector 12 and the measured data can be used to determine components of motion of the measurement vehicle 40, such as x and y offsets from the centre of the pipe 42 to the prism 52 and the sonar head 100, and the yaw and pitch angles of the measurement vehicle 40.
  • a method for correcting the raw data is discussed in the accompanying Appendix on pages 45 to 49, and involves obtaining the effects of the vehicle motion from the measured data and processing the measured data by a transformation required to transform a fitted ellipse onto a circle centred in the middle of the pipe.
• the method assumes the pipe 42 is circular and first fits the ellipse to the ring data obtained from the scanners 4 and 6. Having defined a fitted ellipse for the ring data, a transformation matrix is obtained which can be used to transform the ring data onto a circle corresponding to the assumed cross-section of the pipe 42.
  • Defining the ellipse also provides offsets x c ,y c to the centre of the pipe 42 which are used to correct the cylindrical ring data so as to be centred on the centre of the pipe 42.
  • the Appendix also describes how the yaw and pitch angles of the vehicle can be determined from the measured data and this can be used for comparison with signals obtained from a gyroscope of the vehicle orientation detector 12. The comparison enables adjustment of the measured yaw and pitch angles and calibration of the gyroscopes.
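• The Appendix's exact fitting procedure is not reproduced in this extract; a common algebraic least-squares ellipse fit, sketched below in Python, recovers the conic coefficients and the centre offsets x_c, y_c used to re-centre the ring data.

    import numpy as np

    def fit_ellipse(x, y):
        """Least-squares fit of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y
        + f = 0 to ring data: the singular vector of the design matrix with
        the smallest singular value gives the coefficients (up to scale)."""
        D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        return np.linalg.svd(D)[2][-1]

    def ellipse_centre(coeffs):
        """Centre (x_c, y_c) of the fitted ellipse, where the conic's
        gradient vanishes."""
        a, b, c, d, e, _ = coeffs
        return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])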
  • the plurality of ring data obtained over a length of the pipe 42 from the laser scanner 4 and the sonar scanner 6 each comprise a set of raw range data representing an image of the inner surface of the pipe which can be subjected to image processing.
• the range data comprises a set of z values and, for each value z, a plurality of x,y or r,θ dimension values corresponding to the features of the inner surface at each position z. All the x,y and r,θ values correspond to pixels of the image of the inner surface of the pipe 42.
  • the work station 32 first performs image preprocessing 200 on the range data 201 and then segments the pixels into regions of interest using an image segmentation procedure 202, as shown in Figure 12.
  • the segmented regions are classified using a feed-forward neural network classifier of an image classification procedure 204.
  • a classifier training procedure 206 trains the network classifier off-line using the back-propagation method.
  • the classifier training procedure 206 relies on a training set prepared by a preparation procedure 208.
  • the training set preparation procedure 208 generates training data based on results obtained by performing the image segmentation procedure 202 on a known pipe structure with known defects.
  • the results of the image classification procedure 204 are provided to an image interpretation procedure 208 which further defines the classified features of the image on the basis of a knowledge database 210 of known pipe features.
  • a graphic display of the interpreted image can be produced at step 210 and an automatic defect report 212 produced at step 214, the reports generated being stored in a structured database 216.
  • the image preprocessing procedure 200 removes insignificant variations in the range data and places it in a form which is not dependent on the type of scanner 4 or 6 used.
  • the image preprocessing procedure 200 involves first processing the range data to produce a calibrated constant grid map of the internal surface of pipe 42, where the map is two dimensional and the surface of the pipe is considered to be split longitudinally and opened to expose the surface.
  • the x,y and r, ⁇ values for each pixel obtained by the scanners 4 and 6 are calibrated and converted into depth values. Provision can also be made for pipe manufacturing tolerances.
  • the depth value of each point on the grid map is determined by taking the depth value of the closest pixel to the point. Linear interpolation can be used at an increased computational cost, but it has been found that the improvement gained does not justify the cost.
  • the image preprocessing procedure 200 involves generation of a pipe model, which represents a perfect pipe with no defects or features.
  • the model provides a reference level with respect to which range or depth values can be measured.
• the first step is the identification of pixels in the image which are likely to be part of a defective region by differential geometric techniques.
• Biquadratic surface patches are used to estimate image surface derivatives as described in "Surfaces in Range Image Understanding" by Paul J. Besl, Springer-Verlag, 1988. Pixels with small first derivatives may be called good pixels.
  • the model is built ring by ring. An ellipse of best fit is determined for all the good pixels in a ring. The ellipse of best fit is obtained in the same fashion as discussed previously.
  • the image is systematically covered with facets which are rectangular subimages. The facet with the largest number of good pixels is chosen. Multiple bilinear regression is carried out. If this is successful then the model is determined at that facet and the facet is "written" which means that all the pixel values in the facet are set to the values determined by the regression. If the regression is not successful then the facet is skipped and the best remaining facet is chosen.
• the regression will be successful if there are enough good pixels in the facet to get reliable results and the regression equations have a unique solution. Note that once a facet has been "written" all its pixels become good. After one pass through the facets in the image, if there are any facets not written then the "read" size of the facets is incremented. This means that they examine pixels over a larger region when carrying out the regressions. When a regression for a facet is successful it writes only to its initial region and not the extended read region. Successive passes are made in this way through the list of facets until all facets are written.
  • the advantage of the local facet method is that it applies to pipes of any shape.
• z_i is the range value of the pixel (x_i, y_i), which is the ith good pixel in the rectangular subimage under consideration.
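• A minimal sketch of the facet regression, assuming the bilinear form z = a + b*x + c*y + d*x*y over the good pixels of a facet:

    import numpy as np

    def fit_facet(xs, ys, zs):
        """Multiple bilinear regression over the good pixels of a facet.
        Returns None when the equations lack a unique solution (the
        'regression not successful' case in the text)."""
        A = np.column_stack([np.ones_like(xs), xs, ys, xs * ys])
        coeffs, _, rank, _ = np.linalg.lstsq(A, zs, rcond=None)
        return coeffs if rank == 4 else None

    def write_facet(coeffs, shape):
        """'Write' the facet: set every pixel to its regression value."""
        yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
        a, b, c, d = coeffs
        return a + b * xx + c * yy + d * xx * yy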
  • the data of the laser scanner 4 usually has data values missing which may be caused by a light absorbing surface, obstruction of the return light signal to the camera 56, a surface void, or surface which reflects light away from the camera 56.
  • the missing values caused when an object obscures the line of sight between the camera 56 and the laser beam trace 55 are termed shadows.
  • Symbolic values are assigned to all values missing on the grid map.
  • a set of possible values are obtained for the shadow pixels and other missing values by projecting a line from the camera position through every pixel. If any of these lines intersect the line projected from the centre of the pipe through the grid position of the missing value and perpendicular to the longitudinal axis of the pipe, the point of intersection is added to the set. Since this set of possible values is assumed to be continuous only the minimum range value found need be kept for subsequent processing.
• Shadows are not a considerable problem for the data obtained by the sonar scanner 6; however, instead of a single radial value for each point on the surface, there may be several radial values caused by multiple echoes being returned to the sonar head 100.
  • the sonar data is therefore further preprocessed by examination of the pixels in a window surrounding the current pixel, and a single depth value is chosen from amongst the pixel values in the window.
• the size of the window, presently 7 × 7, is dependent on the size of the area covered by the sonar signal 106 on the internal surface of the pipe 42.
  • the data obtained by both the laser scanner 4 and the sonar scanner 6 also contains a considerable amount of noise which can interfere with subsequent image analysis routines.
  • the effects of the noise can be reduced by an image smoothing process.
  • An adaptive smoothing process is used for its properties of reducing noise whilst preserving discontinuities.
• An equally weighted local neighbourhood averaging kernel, which adapts itself to the local topography of the surface to be smoothed, is used.
• the resulting surface is smooth within regions whilst its discontinuities (often corresponding to region boundaries) are preserved.
• the strategy consists of automatically selecting an adequate kernel size which defines the size of the neighbourhood.
• the size of the smoothing kernel is based on an evaluation of the differences between the value at the centre point of the kernel window and its neighbours.
• the size of the kernel is chosen such that the largest absolute difference between the centre pixel and its neighbours is less than or equal to twice the noise level in the image, subject to the constraints that the kernel size is greater than a minimum size (typically 3 by 3) and smaller than a maximum size (typically 19 by 19).
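• The kernel-size selection can be sketched as follows; the Python fragment grows the window from 3 by 3 towards 19 by 19 while the centre-to-neighbour differences stay within twice the noise level, then averages over the chosen window.

    import numpy as np

    def adaptive_smooth(image, noise_level, k_min=3, k_max=19):
        """Adaptive neighbourhood averaging: discontinuities keep small
        kernels (and so are preserved) while flat regions are averaged over
        larger ones."""
        out = np.empty_like(image, dtype=float)
        h, w = image.shape
        for r in range(h):
            for c in range(w):
                size = k_min
                while size + 2 <= k_max:
                    half = (size + 2) // 2
                    win = image[max(0, r - half):r + half + 1,
                                max(0, c - half):c + half + 1]
                    if np.max(np.abs(win - image[r, c])) > 2 * noise_level:
                        break   # a larger kernel would cross a discontinuity
                    size += 2
                half = size // 2
                win = image[max(0, r - half):r + half + 1,
                            max(0, c - half):c + half + 1]
                out[r, c] = win.mean()
        return out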
  • the image segmentation procedure 202 includes a feature extraction step 220, a pixel labelling step 222 and a region extraction step 224.
  • the feature extraction step 220 computes a set of features which are combined to form a feature vector for each pixel.
• the features used consist of features describing the range value properties of the pixel and features representing the texture in a local neighbourhood, and include:
  • the pixel labelling step 222 uses the feature vectors of the pixels to perform a classification technique which partitions the image into regions that are uniform with respect to certain features and that contrast with their adjacent neighbours in the same features.
  • a Nearest Neighbour Classifier which has been previously trained offline, is used to perform the classification.
• the classifier is trained to classify pixels according to one of a set of surface primitives, including background and root.
• Training is performed by extracting a set of feature vectors from suitable known examples of the desired surface types.
  • the set of feature vectors becomes the exemplar set for the classifier.
• the nearest neighbour classifier performs its classification by comparing the unknown feature vector to its set of exemplars.
  • the assigned classification is the same as the classification of the closest exemplar in feature space using the Euclidean distance measure.
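• The classifier reduces to a few lines of Python; the feature values and primitive labels below are hypothetical placeholders.

    import numpy as np

    class NearestNeighbourClassifier:
        """Label a pixel with the class of the closest exemplar in feature
        space, using the Euclidean distance measure."""

        def __init__(self, exemplars, labels):
            self.exemplars = np.asarray(exemplars)   # one feature vector per exemplar
            self.labels = np.asarray(labels)         # surface primitive of each exemplar

        def classify(self, feature_vector):
            d = np.linalg.norm(self.exemplars - feature_vector, axis=1)
            return self.labels[np.argmin(d)]

    clf = NearestNeighbourClassifier([[0.1, 0.0], [0.9, 0.8], [0.5, 0.9]],
                                     ["background", "hole", "root"])
    print(clf.classify(np.array([0.85, 0.75])))      # -> "hole"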
  • the output of the pixel labelling step 222 is a label list in the form of a function l(r,c) which is the primitive surface classification for the pixel of row r in column c.
  • a relaxation step is used to improve the performance of the pixel labelling step.
• neighbourhood is a certain neighbourhood (typically a 3 by 3 window) of the pixel under consideration.
• the output of the relaxation step is a label list in the form of a function l'(r,c) which is the refined primitive surface classification for the pixel of row r in column c.
• the region extraction step 224 groups connected pixels having the same label into connected components. Those connected components whose pixel type is not of the background type are extracted to form regions of interest R. The coordinates of a bounding rectangle for each region R are determined and a bit map, map(i,j), is generated to identify pixels of interest, i.e. those pixels within the region R belonging to the same connected component, where image : R → ℝ and map : R → {0,1}.
  • the output of the region extraction step 224 is a list of regions of interest in the range image of the pipe surface.
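• Using scipy's connected-component labelling, the region extraction step can be sketched as:

    import numpy as np
    from scipy import ndimage

    def extract_regions(label_image, background_label=0):
        """Group connected pixels with the same (non-background) label into
        regions of interest, each with a bounding rectangle and a bit map of
        its pixels of interest."""
        regions = []
        for value in np.unique(label_image):
            if value == background_label:
                continue
            components, _ = ndimage.label(label_image == value)
            for idx, box in enumerate(ndimage.find_objects(components), start=1):
                regions.append({"label": value,
                                "bbox": box,                       # bounding rectangle
                                "map": (components[box] == idx)})  # bit map map(i,j)
        return regions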
• the image classification procedure 204 includes a filter step 230, an interpolation step 232, a transform step 234 and a sampling step 236 which process the bit maps of the regions of interest so as to place them in a form which can be applied to the neural networks of a neural network classification procedure 238.
  • the output of the classification procedure 238 provides a classification for each region of interest and a confidence value representative of the confidence that the classification is correct.
  • classes for surface defects may include void, crack, corrosion or deposit.
  • the confidence value may be 0 or 1, i.e. low or high, or a "fuzzy" value between 0 and 1.
  • the neural networks used in the neural network classification procedure 238 have a fixed structure, i.e. a fixed number of inputs and hidden layer nodes, yet the number of pixels of interest in a region of interest and the size of the region is not fixed. Therefore the data of the regions of interest must be manipulated so it can be applied to the fixed inputs of the networks, regardless of region size, and this should be done without sacrificing the data and true representation of the regions.
• This is achieved by first applying a defect map filter at step 230 to the bit map of each region of interest.
  • the depth values for pixels of interest in the defect map are re-scaled to values between 1 and 2 and the remaining pixels are set to 0.
  • the functions f and g are given by, for example,
  • Nominal radius is that of the pipe, and maximum radius and minimum radius are the maximum and minimum depth or radius which the scanners 4 and 6 can detect.
  • Model(i,j) is the pipe model value at the pixel(ij) where the pipe model can be obtained by either of the two methods described previously.
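• The functions f and g themselves are omitted from this extract; a hypothetical reading consistent with the description, linearly re-scaling detectable depths onto [1, 2] and zeroing all other pixels, is:

    import numpy as np

    def defect_map(image, bit_map, r_min, r_max):
        """Pixels of interest are re-scaled from the scanner's detectable
        range [r_min, r_max] onto [1, 2]; remaining pixels are set to 0, so
        real surface points stay distinct from background."""
        scaled = 1.0 + (image - r_min) / (r_max - r_min)
        return np.where(bit_map == 1, scaled, 0.0)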
• the interpolation step 232 is then used to re-scale or compress the region of an arbitrary size m × n to an image, image2 : {0, ..., M-1} × {0, ..., N-1} → ℝ, of a set size M × N so it can be applied to the neural networks. Transformation to the set image size is done by triangulation of the domain of the region, which involves forming triangles with the vertices being the pixels of the region.
• the triangles have a number of different faces or "facets" orientated in different directions depending on the values of the pixels, and each triangle is then mapped to a function so as to form a continuous "facet" image.
  • This image is then sampled onto a M ⁇ N grid to obtain the interpolated image.
  • the form of the triangulation for a m ⁇ n region is illustrated in Figure 15 where each grid is divided into an upper triangle 233 and a lower triangle 235.
• the continuous "facet" image is defined by a function f(x,y) where for any point (x,y) (0 ≤ x ≤ m-1 and 0 ≤ y ≤ n-1) in the grid map plane, f(x,y) is obtained by first finding the triangle which bounds (x,y).
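• A compact sketch of the facet interpolation (assuming m, n ≥ 2), splitting each grid cell into two triangles as in Figure 15 and interpolating linearly within each:

    import numpy as np

    def facet_interpolate(region, M, N):
        """Piecewise-linear interpolation of an m x n region onto a fixed
        M x N grid via the triangulated 'facet' image."""
        m, n = region.shape
        out = np.empty((M, N))
        for i in range(M):
            for j in range(N):
                # Map the output grid point back into region coordinates.
                x = i * (m - 1) / max(M - 1, 1)
                y = j * (n - 1) / max(N - 1, 1)
                x0, y0 = min(int(x), m - 2), min(int(y), n - 2)
                u, v = x - x0, y - y0
                z00, z01 = region[x0, y0], region[x0, y0 + 1]
                z10, z11 = region[x0 + 1, y0], region[x0 + 1, y0 + 1]
                if u + v <= 1.0:   # first triangle of the cell
                    out[i, j] = z00 + u * (z10 - z00) + v * (z01 - z00)
                else:              # second triangle of the cell
                    out[i, j] = z11 + (1 - u) * (z01 - z11) + (1 - v) * (z10 - z11)
        return out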
  • the samples are transformed at step 234 from the spatial domain into a power domain or power spectrum using a discrete two dimensional Fourier transform.
  • the power spectrum provides a good representation of the region due to the transformation process which reconstructs the surface in the Fourier domain.
  • a discrete Fourier transform of image2 is performed using a recursive doubling strategy for Fast Fourier Transform (FFT).
  • the power spectrum is then wedge-ring sampled at step 236, which involves integrating, or summing, the samples in the power domain over eight wedge shaped and eight ring shaped sections of the domain.
  • the result of the wedge-ring sampling provides a set of sixteen values which represent a compressed discrete signature of the inner surface of the pipe 42 which can be applied to the neural networks of the neural network classification step 238.
  • the wedge-ring sampling is performed by first letting WEDGES be the number of wedges to be used and RINGS be the number of rings to be used.
• the interval from 0 to π/2 in the power domain is divided into WEDGES equal subintervals defined by angles [θ_i, θ_i+1].
• the interval from 0 to [(M-1)² + (N-1)²]^(1/2) is divided into RINGS equal subintervals defined by radii [r_j, r_j+1].
• the ith wedge sample is defined to be the sum of power(u,v) for all (u,v) such that theta(u,v) is in [θ_i, θ_i+1], where theta : ℝ² → [0, 2π] is the polar coordinate angle map given by
• the jth ring sample is defined to be the sum of power(u,v) for all (u,v) such that radius(u,v) is in [r_j, r_j+1], where radius : ℝ² → [0, ∞) is the polar coordinate radius map given by
• χ_S is the characteristic function of S which indicates whether x, being theta(u,v) or radius(u,v), is in S, and is defined by
  • the wedge-ring samples are put together to form a feature vector.
  • the first WEDGES, in this case eight, features are the wedge samples and the last RINGS, in this case eight, features are the ring samples.
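• The 16-value signature can be computed directly from the power spectrum; the sketch below sums the spectrum over 8 wedges of the angular interval [0, π/2] and 8 radial rings, as described above.

    import numpy as np

    WEDGES, RINGS = 8, 8

    def wedge_ring_features(image2):
        """2D FFT power spectrum summed over 8 angular wedges and 8 radial
        rings, giving the 8 wedge samples followed by the 8 ring samples."""
        M, N = image2.shape
        power = np.abs(np.fft.fft2(image2)) ** 2
        u, v = np.mgrid[0:M, 0:N].astype(float)
        theta = np.arctan2(v, u)       # polar angle map, here within [0, pi/2]
        radius = np.hypot(u, v)        # polar radius map
        r_max = np.hypot(M - 1, N - 1)
        w_idx = np.minimum((theta / (np.pi / 2) * WEDGES).astype(int), WEDGES - 1)
        r_idx = np.minimum((radius / r_max * RINGS).astype(int), RINGS - 1)
        wedges = np.bincount(w_idx.ravel(), weights=power.ravel(), minlength=WEDGES)
        rings = np.bincount(r_idx.ravel(), weights=power.ravel(), minlength=RINGS)
        return np.concatenate([wedges, rings])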
• the Fourier transform and wedge-ring sampling steps 234 and 236 preserve the overall shape and texture of the regions of interest and enable the neural network classifier to discriminate between complex surface defects. A number of factors enable this to occur, as discussed above, and they are a consequence of the fact that the Fourier transform is an invertible mapping onto the frequency domain, and the power spectrum is translation invariant, which ensures the wedge-ring samples are translation invariant. Furthermore, the wedge samples are scale invariant and the ring samples are rotation invariant. This facilitates invariant classification of defects in the pipe surface.
  • the neural network classification step 238 involves applying the wedge-ring samples to a number of feed-forward neural networks which have been trained by the back-propagation method.
  • a neural network 241 is provided for each classification class of surface defects and pipe features.
• the networks 241 each presently have a topology of 16 inputs 243 in the input layer 239, eight nodes 245 in the first hidden layer 247, four nodes 249 in the second hidden layer 251 and two outputs in the output layer 255.
• Weights W_l(i,j) are applied to each value passed between the four layers 239, 247, 251, 255, l being the layer number, i being the number of the node or output of the layer l and j being the number of the node or input of the layer l+1.
  • Biases are applied to the result of summing all of the weighted values received at each node 245 and 249. The outputs include one for an affirmative signal, and one for a negative signal, the signal level being between 0 and 1 on each output and indicating a degree to which the network believes the region of interest belongs to its respective class or not.
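• A forward pass through one such network is sketched below; a sigmoid activation is assumed (the activation function is not stated in this extract), and the random weights stand in for the trained listings.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def classify_region(features, weights, biases):
        """Forward pass of a 16-8-4-2 feed-forward network: weighted sums
        plus biases through the two hidden layers to the affirmative and
        negative outputs, each between 0 and 1."""
        a = np.asarray(features)
        for W, b in zip(weights, biases):   # W_l(i, j) between layers l and l+1
            a = sigmoid(a @ W + b)
        return a                            # (affirmative, negative)

    rng = np.random.default_rng(0)
    shapes = [(16, 8), (8, 4), (4, 2)]
    weights = [rng.normal(size=s) for s in shapes]
    biases = [rng.normal(size=s[1]) for s in shapes]
    print(classify_region(rng.normal(size=16), weights, biases))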
• the neural networks are divided into a first class which operates on images that represent intrusions into the pipe, and a second class which operates on images that represent defects or features that protrude away from the centre of the pipe 42.
• An example of the first class is the tree root/non-tree root neural network classifier, and its weights and biases are listed below in natural order, the weights for each node being provided first, starting from the first input 243 of the first layer 239 to the outputs of the fourth node 249 of the third layer 251, and the biases of the nodes 245 and 249 being specified thereafter.
  • An example of the second class of neural network is the pipe join/non pipe join neural network classifier, the weights and biases of which are listed below in natural order.
• the weights and biases for the neural networks are established by the classifier training procedure 206 which uses the back-propagation method, as discussed in "Neural Computing" by R. Beale and T. Jackson, IOP Publishing, 1990, on a data set prepared by the training set preparation procedure 208.
  • the data sets together with their known classification results are fed to the classifier training procedure 206 which attempts to seek convergence of the weights and biases using the back-propagation method to produce correct neural networks for each classification class.
  • the data sets are randomly interleaved, i.e. for the tree root network, the tree root data sets are interleaved with data sets that do not relate to tree roots so as to obtain accurate convergence of the weight and bias values.
• Filters are used with some of the dedicated networks. For example, with the pipe-connection network, if the bounding rectangle length of the region of interest is less than i_FILTER, or the bounding rectangle height is less than j_FILTER, or the defect map area is less than area_FILTER, then "not pipe join" is returned and use of the neural network is circumvented.
• the image interpretation procedure 208 involves the verification and rating of each classified region of interest based on the database 210 of defect and pipe feature characteristics and previously identified defects and features, which includes defect characteristics, e.g. direction/angle, gradient, area, and depth of defects, ratings, e.g. structural, service or other, and spatial relation with other defects or features.
• the image interpretation procedure 208 is separated into two parts, as shown in Figure 17: an image analysis step 250, which performs further image analysis on the bit maps of the classified regions in order to determine additional feature attributes associated with the pixels of interest at step 252, and an inference engine step 254.
  • the feature attributes may include direction, edge details, area, position and depth of a surface feature or defect.
  • An inference engine step 254 is provided to match the model of the defects provided in the knowledge database 210 using a set of defect identification and rating rules 256 which may be coded in an "if condition then conclusion" format.
  • the inference engine is a general purpose facility.
  • the application dependent interface functions are provided to interface between the engine and the application program's data.
• the image interpretation procedure 208 is supported by other procedures for the efficient application of a knowledge base. These are:
• Rule base maintenance, which provides facilities for the addition, modification and deletion of rules from a knowledge base.
• Rule base compiler, which parses the knowledge base and ensures correct syntax. The knowledge base is then translated into a format suitable for the knowledge base engine.
• IF … cross-sectional area loss is less than 5% THEN …
• IF … cross-sectional area loss is not less than 5% AND cross-sectional area loss is less than 20% THEN …
• IF … cross-sectional area loss is not less than 20% THEN …
• IF region is an open joint AND openness is less than the pipe wall thickness THEN …
• IF region is an open joint AND openness is less than 1.5 pipe wall thicknesses THEN …
• IF region is a displaced joint AND cross-sectional area loss is not less than 5% AND cross-sectional area loss is less than 20% THEN …
• IF region is a displaced joint AND cross-sectional area loss is not less than 20% THEN …
• IF region is a circumferential crack THEN …
• IF region is a circumferential fracture THEN score is 1.0 * length(m)
• IF region is erosion THEN …
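• A minimal sketch of the "if condition then conclusion" rule format follows; only the circumferential-fracture score is taken from the excerpt above, the other rule and its score being illustrative placeholders.

    def evaluate_rules(region, rules):
        """Each rule pairs a predicate over the region's attributes with a
        scoring function; matching rules contribute to the region's rating."""
        return sum(conclusion(region) for condition, conclusion in rules
                   if condition(region))

    rules = [
        (lambda r: r["type"] == "circumferential fracture",
         lambda r: 1.0 * r["length_m"]),       # score from the excerpt above
        (lambda r: r["type"] == "displaced joint" and r["area_loss"] >= 0.05,
         lambda r: 0.5),                       # illustrative score only
    ]
    print(evaluate_rules({"type": "circumferential fracture",
                          "length_m": 2.0, "area_loss": 0.0}, rules))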
  • the image interpretation procedure 208 allows the defects and pipe features specified in the following tables to be identified.
  • the tables specify the neural network classification given to the defects and features, and feature attributes which need to be determined by the image analysis step 250 in order for the inference engine step 254 to complete identification of the defects and features.
  • the tables document the attributes used by the rule base to classify regions. A tick indicates that the region should strongly exhibit this attribute whilst a cross indicates that the region should not exhibit this attribute. A dash indicates a don't care situation. The tables do not indicate whether the attributes are used in the conjunctive or disjunctive manner.
  • the pipe position in rings, clock reference, and the pixels of the regions of interest corresponding to each defect and feature identify the precise location of a defect or feature.
  • the rules 256 can also be used to resolve location conflicts which may occur during segmentation and classification when different defects overlap. At least three different types of conflicts can be resolved, conflicts among superimposed defects, conflicts among adjacent defects, and conflicts arising due to classification ambiguities between different parts of defects.
• the report is output in the following form:
• MIS: Management Information System
• GUI: graphical user interface
  • the MIS has been written to run in a main window of the display 211.
• the main window lists the assessment for each of the defects, and allows display of multiple detailed views of the data related to the defect. These alternate views can comprise: one showing the current ring slice of the pipe data, one showing the current range or unwrapped section of the pipe data, and another showing a three dimensional (3D) display of the pipe data.
  • the MIS can also provide a graph showing variation of rating over time, based on previous pipe inspections.
• the MIS has been designed and implemented in a modular fashion, and Figure 4 is a block diagram of its sub-systems.
  • the GUI sub-system allows the user to specify the operation (either data interpretation, visualisation or comparison) and the data to operate on (either the entire list of regions of interest, or a subset of it).
  • the GUI sub-system invokes the sub-system which performs the particular operation on the data requested by the user.
• the called sub-system loads the specified data from the database using the facilities provided by the interfacing sub-system, if it has not been previously loaded.
• the analysis sub-system in turn calls components from the knowledge base sub-system, to apply the knowledge base(s) to the data.
• a default knowledge base is used except where the user explicitly overrides this default through the user interface.
  • the GUI processes interactive user requests that are activated using the keyboard and mouse.
  • the GUI provides menus and windows to enable easy interaction with the system. It contains the control logic to sequence and control the interaction with, and of, the sub-systems.
  • the GUI provides access to, or invokes facilities for:
  • the software uses a computer graphics display to provide visualisation of analysed lists of regions of interest.
  • Various views can be generated depending on a viewing position and data supplied by the GUI sub-system. That is, when the viewing parameters are changed in the GUI, the views are automatically regenerated to reflect this change.
  • This sub-system provides display services to the other sub-systems.
  • the visualisation sub-system generates a required picture from the supplied data, scales the picture to the appropriate window size and draws the picture into a graphics window.
  • Cross section view generates a cross-sectional view of the pipe taken at the nominated pipe position. The generated image is scaled appropriately to fill the viewing window.
  • Unwrapped view generates an unwrapped view of the pipe starting at the nominated pipe position.
  • 3D view generates a 3D perspective view of the pipe from the nominated pipe position. The orientation of the viewing camera is provided to enable the generation of the view from almost any position. The generated image is scaled appropriately to fill the viewing window.
  • Graphing produces a 2D line graph from the two data vector parameters.
  • a comparison sub-system relates multiple pipe condition assessment results.
  • a time-based rating of assessment results can be assembled and displayed as a graph.
  • An interfacing sub-system provides a level of abstraction between the system and the underlying operating system and any external systems that must be interfaced with.
  • This subsystem is intended to provide a library of interfacing services (which are implementation, or organisation, specific).
  • Data load reads data from the supporting file system and converts it into a form suitable for the internal set of regions structure.
  • Data save extracts data from the internal set of regions of interest and converts it into a form suitable for writing to the underlying file system.
  • Pipe directory interrogates the file system and returns the set of all pipe identifiers.
  • Analysis directory interrogates the file system and returns the set of all analysis identifiers for a given pipe.
  • Analysis export extracts information from the results of the analysis and formats the data in a form suitable for import into Water Authority databases.
  • Printing displays results on a printer device.
  • the data managed by the MIS system is primarily made up of lists of regions of interest for a pre-processed pipe image.
  • the information stored in, or calculated for, each region of interest consists of region attributes which include: region location in the pipe, an image of the region, a region type descriptor, a region classification, area, perimeter, cross-sectional area loss, geometric descriptors, rating, etc.
  • the list of regions of interest encapsulates the information used for condition assessment from the raw sensor pipe data. This list of regions of interest is a main data component within the system 2 and it is extensively used by both the analysis and comparison subsystems via an interfacing sub-system.
  • the pipe image is a form of annotated grid of pixel values representing a pipe surface in 3D space.
  • the list of regions of interest contains attribute and sub-image information for each segmented region.
  • a knowledge base representation is another data structure, which is an internal representation of an ASCII file of structured English-like declarative rules and functions.
  • Raw pipe data from the sensors is supplied to the pre-processing module in real-time or as a file.
  • the pipe image data is transformed into a list of regions of interest and put into the data base.
  • This data base can be archived or input directly into MIS to perform pipe analysis. For a given pipe there may be many lists of regions of interest, each one corresponding to a particular run of a given sensor through the pipe.
  • the interfacing sub-system provides an abstract conduit between the data used by the MIS and the data base structure within the underlying operating system, as well as external data bases to or from which user-requested information about the list of regions of interest may be sent or retrieved.
  • Assessment rules are input by the user, using a text editor, into the knowledge base representation used by the knowledge base sub-system.
  • the knowledge base representation can also be compiled directly into the knowledge base sub-system.
  • GUI graphical user interface
  • a simplified interface to gather and pre-process raw sensor data can be run on a minimally configured Sparc workstation installed in a field environment.
  • the transport of data from the field to the office workstation can be via magnetic tape.
  • the recommended initial configuration is:
  • Data from both laser and sonar scanners is processed into the form of cylindrical coordinates (r, θ, z).
  • This data can be corrected for movement of the vehicle relative to the pipe axis. This can improve measurement accuracy, as the vehicle may not remain at the centre of the pipe and oriented parallel with the pipe's longitudinal axis.
  • Figure 18 shows the orientation detector 12 at an arbitrary orientation and position in the pipe.
  • the orientation of the vehicle can be described by roll, pitch and yaw angles (φ, θ, ψ), while (x, y, z) coordinates can be used to describe its position.
  • an inclusion zone is estimated from the orientation and radius of the previous point; all data not in this region is excluded from the estimator so as not to bias the estimates with the defects in the pipe,
  • the equation for the ellipse in the (x, y) coordinate frame can be determined by suitable transformations from the equation of the ellipse in the (x", y") coordinate frame,
  • Equating the coefficients of Equation 6 to those in Equation 5 results in the following equations for the w_i,
  • the data can be transformed into a circle by a single rotation about a vector k from the centre of the ellipse along its minor axis b, which also forms the radius of the circle. From the ellipse fitting procedure we know the angle that the vector k makes with the y-axis, see Figure 20,
  • the desired rotation angle is given by:
  • the yaw (ψ) and pitch (θ) angles of the vehicle can be determined from the fitted ellipse.
  • the transformation of the measured data point (x'_i, y'_i, z'_i) to the point on the circle (x_i, y_i, z_i) can either be performed by:
  • Figure 21 shows the geometry of the correction required to be applied to a measurement (x_i, y_i) taken from a position not in the centre of the pipe.
  • the centre of the pipe is given by the coordinates (x_c, y_c) from the current position.
  • the known parameters are the angle θ_i and the measured radius r_i. It is desired to convert these parameters into those at the centre of the pipe.
  • the corrections required are as follows (see the sketch after this list):
  • one parameter is the angle between the nominal horizontal, given by an expression reproduced as an image in the original,
  • r_c is the distance from the current position (x_c, y_c) to the centre of the pipe, also given by an expression reproduced as an image in the original.
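As a rough illustration of the centre correction outlined in the last items above, the sketch below re-expresses a measurement (r_i, θ_i), taken from an off-centre vehicle position, as polar coordinates about the estimated pipe centre (x_c, y_c); the function and its NumPy formulation are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def centre_correct(r_i, theta_i, xc, yc):
    """Convert a wall measurement (r_i, theta_i), taken from the vehicle's
    current position, into polar coordinates about the pipe centre (xc, yc),
    where (xc, yc) is expressed relative to the current position."""
    # Point on the pipe wall in the vehicle-centred frame.
    x = r_i * np.cos(theta_i)
    y = r_i * np.sin(theta_i)
    # Re-express the same point relative to the pipe centre.
    dx, dy = x - xc, y - yc
    return np.hypot(dx, dy), np.arctan2(dy, dx)
```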


Abstract

An inspection system for a conduit comprising measurement means for travelling in the conduit and obtaining data on the conduit, and processing means for processing the data to identify regions of the conduit corresponding to defects in the conduit. The measurement means may be laser scanning means and sonar scanning means.

Description

AN INSPECTION SYSTEM FOR A CONDUIT
The present invention relates to an inspection system for a conduit. The system is particularly advantageous for use in inspecting pipes, such as sanitary and storm water sewers, but can also be adapted for inspection of other conduits, for example tunnels and drains, in particular those used in water supply, sewerage, irrigation and drainage systems.
The inspection and assessment of sewers for defects is critical to maintaining an effective sewerage system. The task is of particular importance for the world's larger and older cities, where demand on the system is high and the main sewers used have aged considerably, thereby providing conditions conducive to the production of potentially serious defects. Sewers are presently inspected using a closed circuit television camera which is mounted on a trolley configured to travel within the length of a pipe. The images obtained by the camera are assessed by experienced personnel who identify defects and potential defects in the pipe's structure on the basis of the images. Personnel can also be sent into pipes which are large enough for direct inspection; however, this is normally discouraged as, although safety precautions are stringent, there remain inherent risks to personnel. Both assessment techniques are highly subjective and open to different interpretations. Reliance on visual assessment is also time consuming and places considerable time constraints on an inspection program when the number of experienced personnel available is limited.
Both techniques also require inspections to be undertaken at low flow levels or fluid flow to be diverted from the pipes being inspected. This places constraints on the effectiveness of the sewage system and normally means that inspection programs have to be carried out at night, thereby increasing the labour cost of the program.
Sonar systems have been used to inspect flooded pipes, but the images produced from sonar systems still need to be manually interpreted.

In accordance with the present invention there is provided an inspection system for a conduit comprising measurement means for travelling in said conduit and obtaining data on said conduit, and processing means for processing said data to identify regions of said conduit corresponding to defects in said conduit.
Preferably said processing means includes means for extracting dimension data on said conduit from said data obtained by said measurement means.
Preferably said processing means includes segmentation means for processing said dimension data into segments which represent regions which could correspond to defects in said conduit. The processing means also preferably includes classification means for processing said segments to classify said segments as corresponding to a feature of said conduit, which may be a defect. Preferably said classification means implements a neural network to classify said segments. Advantageously the system further includes interpretation means for processing a classified segment to obtain attributes of the corresponding feature and for identifying said feature on the basis of said attributes and feature data. Preferably said dimension data comprises a plurality of circumferential data of the inner surface of said conduit each corresponding to points along the path of travel of said measurement means.
The measurement means may include laser scanning means and/or sonar scanning means.
Preferably the laser scanning means includes a laser, means for circulating a beam generated by the laser against the surface of said conduit, and camera means for producing image data representative of areas illuminated by said beam.
Preferably said sonar scanning means includes a rotating sonar transducer for emitting sonar signals to and receiving sonar echoes from the inner surface of said conduit.
Preferably said processing means includes movement correction means for adjusting said circumferential data to account for movement of said measurement means within said conduit.
A preferred embodiment of the present invention is hereinafter described, by way of example only, with reference to the accompanying drawings, wherein:
Figure 1 is a schematic diagram of an inspection system for a pipe;
Figure 2 is a perspective view of a laser scanner of the system within a pipe;
Figure 3 is a block diagram of a feature extraction board of the system;
Figure 4 is a block diagram of an MIS of the system;
Figure 5 is a perspective view of a sonar scanner of the system within a pipe;
Figure 6 is a block diagram of a sonar head of the sonar scanner;
Figure 7 is a graphical representation of a sonar echo produced by the sonar head;
Figure 8 is a frequency response of a digital filter used in processing the sonar echoes received by the sonar head;
Figure 9 is a graphical representation of a curve-fitted to a set of sonar echo samples;
Figure 10 is a graphical representation of a standard sonar echo used in cross-correlation of measured sonar echoes;
Figure 11 is a graphical representation of the results achieved by cross-correlation of the standard sonar echo with a measured sonar echo;
Figure 12 is a flow diagram of image processing software modules and databases of a work station of the system;
Figure 13 is a flow diagram of an image segmentation module of the system;
Figure 14 is a flow diagram of an image classification module of the system;
Figure 15 is a diagram of a grid representing a triangulation step of the image classification module;
Figure 16 is a diagram of a neural network structure used in the image classification module;
Figure 17 is a flow diagram of an image interpretation module of the system;
Figure 18 is a diagram illustrating the coordinate axes of a vehicle in a pipe;
Figure 19 is a diagram of coordinates for a general ellipse;
Figure 20 is a diagram illustrating determination of vehicle yaw and pitch angles from a fitted ellipse; and
Figure 21 is a diagram illustrating centre coordinate correction.
A pipe inspection system 2, as shown in Figure 1, includes a laser scanner 4, a sonar scanner 6, and a colour video camera 8 which are all mounted on a measurement vehicle 40 that can travel within a pipe, such as a sewage pipe. The measurement vehicle is self-propelled for movement through the pipe, and a chainage detector 10 is used to detect incremental movement of the measurement vehicle through the pipe, which enables the position of the measurement vehicle along the length of the pipe to be determined. The orientation of the measurement vehicle, and the scanning equipment 4 to 8, is determined by an orientation detector 12 which is also mounted on the measurement vehicle. The orientation detector 12 includes a gyroscope and accelerometers. The signals generated by the measurement and detection equipment 4 to 12 of the vehicle are sent to a processing system 14 of the inspection system 2 via a high speed communications link 16. The signals from the laser scanner 4, sonar scanner 6 and colour video camera 8 are placed in a form suitable for transmission by a laser scan converter 18, a sonar scan converter 20 and a video conditioner 22, respectively. Signals from the chainage detector 10, the orientation detector 12 and auxiliary sensors are combined and placed in a form suitable for transmission by a sensor conditioner 23. All the signals are combined by a link controller 24 mounted on the vehicle and which is connected to the link 16. The link 16 is an optical fibre link, and the converters 18 and 20 and the conditioners 22 and 23 digitise the signals produced by the scanners 4 and 6 and the camera 8 prior to submission to the link controller 24. The link controller 24 is then used to time division multiplex the digital signals and convert them to optical signals for transmission on the link 16. The processing system 14 can be placed outside the pipe and connected directly to the link 16 or alternatively the signals on the link 16 can be transferred over a digital telecommunications network to a remote location of the processing system 14. The processing system 14 includes a link controller 26 connected to the link 16 which is used to unpack or demultiplex the signals transmitted from the measurement vehicle. The link controller 26 is connected to other components of the processing system 14 by a VME (Versa Module Europe) bus architecture 28. The signals from the scanner 4 are passed to image processing and feature extraction circuitry 30 which records the signals obtained for each inner circumferential scan of the pipe at each longitudinal position, z, of the measurement vehicle. The signals are processed to obtain circumferential or ring data for each position z, which comprises a set of x,y or r,θ coordinates defining the inner circumferential dimensions of the pipe. A Sun Microsystems Sparc-2 work station 32 is connected to the VME bus 28 by a VME bus to S bus converter 34, and is used to process the ring data to form feature segments and then classify the segments to determine if they correspond to a defect in the pipe structure. The processing system 14 further includes an A/D board 36 for digitising the signals from other analog sensors external to the pipe, and a Motorola MV 147 central processing unit 38 which is used to control the data flow in processing system 14 and operate the scanners and detectors 4 to 12 in real-time. Ring data for each position z is also derived from the signals provided by sonar scanner 6.
The laser scanner 4 is used to obtain ring data 4 for empty pipes or the upper non-fluid sections of part full pipes. The sonar scanner 6 is complementary, and is used to obtain ring data for full pipes or the lower fluid sections of partly full pipes. The colour video camera 8 is used to obtain visual images of the inside of the pipe which are recorded onto video tape and can be used for subsequent visual inspection of sections of the pipe, if desired.
The laser scanner 4, as shown in Figure 2 mounted on the measurement vehicle 40 within a pipe 42, includes a HeNe laser 44 which generates a laser beam 46 in the direction of travel of the support vehicle 40 substantially parallel to the axis of the pipe 42. The beam 46 is emitted from a first end 48 of the laser 44, to which a mirror 50 is attached which directs the beam 46 onto a refracting prism 52. The prism 52 directs the beam 46 in a radial direction onto the inner surface of the pipe 42. The prism 52 is rotated by a stepper motor 54, mounted on top of the laser 44, so as to sweep the beam 46 in a circular path across inner circumferences of the pipe 42. The prism 52 completes a revolution every 20 milliseconds, and a complete trace 55 of the beam 46 is imaged at each position z of the support vehicle 40 by a CCD camera 56 mounted on the laser 44 at the back end 58 of the laser 44. The position z of the support vehicle 40 for each trace 55 is measured by the chainage detector 10, which comprises an optical encoder mounted on a sprocket which transfers the drive of an electric motor to the measurement vehicle 40 as it moves longitudinally along the pipe 42. The trace 55 provides an accurate representation of the form of the inner surface of the pipe 42 at a position z, from which x,y or r,θ coordinates representative of the surface dimensions can be extracted, bearing in mind the trace 55 is substantially perpendicular to the pipe axis.
The image processing and feature extraction circuit 30 includes a feature extractor board 62 which receives the image signals generated by the CCD camera 56 of the laser scanner 4, as shown in Figure 3. The board 62 includes a timing generation circuit 64 which provides vertical and horizontal synchronising (sync) signals to the camera 56. The sync signals are generated by a video sync generator 66 based on a 28.375 MHz clock signal generated by a two phase clock generator 68. The clock generator 68 generates two 28.375 MHz clock signals which are 90° out of phase, the first being inputted to the video sync generator 66. The clock generator 68 generates the clock signals from a 56.75 MHz master clock 70.
The feature extraction board 62 further includes a video processing circuit 72, a feature store control and an addressing circuit 74, a feature data generation circuit 76, a feature storage circuit 78, and an interface 80 to the VME bus 28. The video processing circuit 72 includes a camera interface 82 which buffers the video signal output by the CCD camera 56. The level of the video signal is compared by an analog comparator 86 with the level of a threshold signal generated by a threshold generator 84 to determine whether the video signal corresponds to a pixel of the imaged laser beam trace 55. The comparator 86 will produce a high signal if the video signal exceeds the threshold, indicating that the incident signal corresponds to a pixel of the trace 55; otherwise a low signal is produced at the output of the comparator 86. The timing of the output of the comparator 86 is controlled by the first clock signal. The feature store control addressing circuit 74 includes feature store control logic 88 and an address counter 90. The feature store control logic 88 receives the output of the comparator 86 and its timing is controlled by the first and second clock signals generated by the timing generation circuit 64. The feature store control logic 88 enables a write operation to the feature storage circuit 78 when the output of the comparator 86 provides a high signal. The write operation occurs at the address provided by the address counter 90, and the data written to that location is the contents of a pixel counter 92 and a line counter 94 of the feature data generation circuit 76. The line counter 94 provides the number of the horizontal line of the current pixel corresponding to the video signal output by the camera interface 82, and the pixel counter 92 provides the number of the current pixel in that line. The pixel counter 92 is incremented by the second clock signal and reset by the horizontal sync signal, whereas the line counter 94 is incremented by the horizontal sync signal and reset by the vertical sync signal. The pixel counter 92 and the line counter 94 therefore provide x and y position data, respectively, for the current pixel, which is only stored in the storage circuit 78 when the current pixel corresponds to a pixel of the laser trace 55. The feature store control logic 88 increments the address in the address counter 90 after performing each write operation. The contents of the counters 92 and 94 form a 32-bit word which is stored in the storage circuit 78. The feature storage circuit 78 is a dual port RAM which can hold 32 K 32-bit words and is split into two banks for double buffer operation, i.e. one bank is written to while the other bank is read by the VME interface 80. The feature store control logic 88 controls the double buffer operation by ensuring that when one bank is full, a read signal is provided to the VME interface 80, and then the write operation continues at the other bank. An error signal to the VME interface 80 is generated when the read operation on one bank is not finished but the other bank is already full of x,y data. The VME interface 80 controls the reading of data onto the VME bus, and the feature extraction board 62 functions as a slave on the VME bus 28.

The sonar scanner 6, as shown in Figure 5 mounted separately on the measurement vehicle 40 in the pipe 42, includes a sonar head 100 which is based on a Simrad Mesotech Model 990 sonar system.
The sonar head 100, as shown in Figure 6, includes a transducer 102 for emitting sonar signals and receiving the return echoes 106, and a stepper motor 104 for rotating the transducer 102 at rates up to 2 revolutions per second. The sonar signals and echoes 106 therefore provide a circumferential scan 108 of the inner surface of the pipe 42 at each position z. The sonar head 100 further includes a motor control circuit 105, a transmit/receive circuit 107, a signal processing circuit 110 and a general control circuit 112. The signal processing circuit 110 and the control circuit 112 are connected to the processing system 14 by the link 16, and the control circuit 112, on receiving a scan signal from the processing system 14, generates control signals for the motor control circuit 105 and the transmit/receive circuit 107 so that the motor control circuit 105 begins controlling the motor 104 so as to rotate the transducer 102. The transmit/receive circuit 107 is then switched to transmit mode so as to cause the transducer to emit a short pulse of 2.2 MHz ultrasonic radiation and then reverts to receive mode so as to receive the sonar echo, which is passed to the signal processing circuit 110 for transmission to the processing system 14. A typical return echo signal 114 is shown in Figure 7. The radial dimension r of the point on the scan 108 corresponding to the echo 114 is determined from the time between sending of the sonar pulse and the instant T 116 when the return signal level reaches a threshold level L 118, as discussed hereinafter. The second polar coordinate θ of the point on the scan 108 is determined from index pulses generated by the motor control circuit 105 for each incremental step of the stepper motor 104, bearing in mind the initial orientation of the sonar transducer 102 is known. The pulses are transmitted to the processing system 14 from the motor control circuit 105 by the control circuit 112. Frequency multiplexing is used to provide separate channels with the processing system 14 for control signals to the head 100, and signals transmitted to the system 14. The analog signal 114 representing the intensity of the sonar echo as a function of time is digitised into 12-bit samples at a 250 kHz sampling rate using the A/D board 36 on receipt at the processing system 14. Prior to sampling, an 8th-order analogue low-pass filter with a cut-off frequency of 100 kHz is used to remove high-frequency components of the echo which may cause aliasing of the sampled signal. The output of the optical encoder of the chainage detector 10 is obtained by the processing system 14 immediately after each sonar echo is received.

To determine the return time T of each sampled echo, and from that the radial distance r to a feature of the inner surface of the pipe 42, three techniques are available: a curve-fitting technique, a cross-correlation technique, and a deconvolution technique. The curve-fitting technique involves fitting the echo samples to a curve for each echo and then deriving the radial distance from the curve. The cross-correlation technique involves comparing the return samples with an echo from a known radius and using the cross-correlation result to determine the radial distance relative to the known radius.
The curve-fitting technique is performed as follows. A water to air interface is an almost perfect mirror to sonar signals, and from an analysis of echoes obtained from a water-air interface using the transducer 102 it has been determined that a function of the form provided in equation (1) below can be used to describe a sonar echo produced by the sonar scanner 6.
[Equation (1), reproduced as an image in the original: the echo y is modelled as a sum of peak terms, each defined by an amplitude c_i, a width λ_i and a peak time φ_i, with a trailing exponential decay.]
The parameters required to define each peak i of an echo y are the amplitude of the peak, c_i, the width of the peak, λ_i, and the time at which the peak occurs, φ_i. The function y takes into account that the sonar signal generated by the transducer 102 has a trailing exponential decay at the end of the pulse. The derivative of the function y is discontinuous, and the number of factors for each function is determined by the number of peaks detected in the return echo. A peak of the return echo is detected using an estimate of the derivative of the samples of the echo. A peak is defined to occur when there is a positive-to-negative zero-crossing of the derivative (indicating a change in the slope of the signal) and the height of the peak is greater than 5% of the maximum range of the samples, which eliminates spurious peaks caused by noise inherent in the samples. Numerical differentiation is inherently a noise-increasing operation, and to reduce noise components a five-point smoothing differentiating filter, having the frequency response shown in Figure 8, is used. The filter is implemented in software, and the algorithm for implementing the filter is provided in equation (2).

[Equation (2), reproduced as an image in the original: the five-point smoothing differentiating filter applied to the echo samples u_k to give the filtered derivative samples y_k.]

where u_k is the kth sample, y_k is the kth sample of the filtered signal and n is the number of samples.
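The actual filter coefficients of equation (2) survive only as an image, so the sketch below substitutes the standard five-point Savitzky-Golay first-derivative kernel, which has the same smoothing-differentiating structure; treat the coefficients as an assumption.

```python
import numpy as np

# Standard five-point smoothing first-derivative kernel (assumed; the
# patent's own coefficients in equation (2) are not reproduced here).
SG5 = np.array([-2.0, -1.0, 0.0, 1.0, 2.0]) / 10.0

def smoothed_derivative(u, dt):
    """Estimate the derivative of the sampled echo u with sample period dt."""
    # Correlation with the kernel, implemented as convolution with its reverse.
    return np.convolve(u, SG5[::-1], mode="same") / dt
```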
An estimate of the time of the peak, φ̂_i, is obtained by linear interpolation of the derivative values obtained on either side of the zero-crossing, as shown in equation (3). With y_k > 0 and y_{k+1} ≤ 0 the filtered derivative samples either side of the zero-crossing, and Δt the sample period,

$$\hat{\phi}_i = \Delta t \left( k + \frac{y_k}{y_k - y_{k+1}} \right) \qquad (3)$$

The maximum value of the peak at φ̂_i is estimated by fitting a quadratic through the three points prior to, and including, the point after the zero-crossing of the derivative. An estimate of the width parameter λ_i is determined by locating the times of the half-peak amplitudes on the rising and falling edges of a peak. Linear interpolation is again used between the two sample values nearest the half amplitudes in order to obtain estimates of the times. The time difference between the peak and the half amplitude of the rising edge is used in preference to the falling edge; if the time difference between the peak amplitude and the half amplitude is given by w, separate expressions (reproduced as images in the original) relate w to the width parameter λ_i for the rising edge and for the falling edge. After obtaining φ̂_i and λ_i, c_i can be solved for each peak using a matrix operation based on equation (1), which can give a least squares solution for c_i.
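A minimal sketch of the peak-time estimation just described: it scans the smoothed derivative for positive-to-negative zero-crossings, applies the linear interpolation of equation (3), and discards peaks below the 5% height threshold. The exact thresholding details are assumptions.

```python
import numpy as np

def peak_times(u, deriv, dt):
    """Locate echo peaks: positive-to-negative zero-crossings of the smoothed
    derivative whose height exceeds 5% of the sample range (equation (3))."""
    threshold = 0.05 * (u.max() - u.min())
    peaks = []
    for k in range(len(deriv) - 1):
        if deriv[k] > 0.0 and deriv[k + 1] <= 0.0 and u[k] >= threshold:
            # Linear interpolation of the zero-crossing time, equation (3).
            peaks.append(dt * (k + deriv[k] / (deriv[k] - deriv[k + 1])))
    return peaks
```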
The echo return time or transit time T is determined to be the time at which the function y for an echo reaches 10% of its maximum amplitude. The radial distance r_i from the sonar transducer 102 to the internal feature of the pipe 42 that reflected the sonar signal to generate the echo is obtained by:
[Equation (6), reproduced as an image in the original: r_i is computed from the sonar velocity v_sound, the time and width parameters φ_i and λ_i of the maximum peak, the trigger time t_trig and the transducer radius R_transducer.]

where v_sound is the velocity of the sonar signal, and φ_i and λ_i are the time and width parameters of the maximum peak of the echo. The time t_trig is the time between the start of emission of the sonar pulse and the time a trigger signal is received by the processing system 14 from the sonar head 100, when the asynchronous sampling clock of the A/D board 36 begins sampling the echo. The signal processing circuit 110 of the head 100 generates the trigger signal at the end of generation of a sonar pulse. R_transducer is half the width of the sonar transducer 102, which is approximately 10 mm.
The above solution for r_i can be performed in real-time using the microprocessor 38, or alternatively the work station 32 can be used, with the initial parameter estimates for φ_i, λ_i and c_i fed to the fmins module of the algebra software package MATLAB to minimise the error between the fitted function y and the samples. An example of the results obtained for a three-peak echo 120 is illustrated in Figure 9.
The cross-correlation technique first requires a standard sonar echo to be selected for cross-correlation with the echoes obtained in the pipe 42. A series of 10 echoes from an air/water interface was used to construct a standard echo template. The echoes were recorded using a cathode ray oscilloscope at 8-bit resolution with a sample rate of 5 MHz; as there was a significant amount of switching noise and other signals, the echoes were averaged, the offset removed and the signal normalised. The template was digitally low-pass filtered using a 30-point filter with a cut-off frequency of 660 kHz. The leading signal of the standard peak is truncated so that the first point in the signal represents the beginning of the rise, which is the first part of the returning echo detected by the transducer 102. By truncating the standard signal in this manner, the signal represents the shortest distance from the transducer 102 to a reflecting surface. The signal was then decimated according to the actual sampling period of the measured echoes to form the standard peak. The standard echo signal 122 is illustrated in Figure 10.
A standard cross-correlation algorithm applied to a measured echo and the standard echo is as follows:

$$R_{xy}(k) = \sum_{n} x(n+k)\,y(n)$$
where:
x(k) is the measured sampled sonar echo,
y(k) is the standard sonar echo, and
R_xy(k) is the cross-correlation of the vectors x(k) and y(k). The locations of the peaks of the cross-correlation are determined using equation (3), as discussed previously, and the location of the peaks gives the time lag between the standard and measured signals. An example of a typical cross-correlation 124 is shown in Figure 11. From the cross-correlation 124, the parameters φ_i and λ_i can be determined for the measured echo and the radial distance r_i determined using equation (6). The cross-correlation method is significantly faster than the curve-fitting method when the latter uses the optimisation module fmins; however, the cross-correlation method assumes that the return echo has the same profile as the standard echo.
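A minimal sketch of the cross-correlation step, returning the time lag of the strongest correlation peak (single-peak case, for brevity); the function name and lag convention are illustrative.

```python
import numpy as np

def echo_lag(measured, template, dt):
    """Cross-correlate a measured echo with the standard echo template and
    return the time lag of the strongest correlation peak."""
    r = np.correlate(measured, template, mode="full")
    # Index 0 of the full correlation corresponds to lag -(len(template)-1).
    lag = int(np.argmax(r)) - (len(template) - 1)
    return lag * dt
```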
An alternative technique for obtaining the parameters φ_i and λ_i for equation (6) is to apply a deconvolution algorithm, as follows, to the standard echo and the return echo.
$$S_{xy}(k) = \mathcal{F}^{-1}\!\left[\frac{X(f)\,Y^{*}(f)}{Y(f)\,Y^{*}(f) + s}\right]$$

where

X(f) is the discrete Fourier transform of the measured sampled sonar echo x(k),

$\mathcal{F}^{-1}[\,\cdot\,]$ denotes the inverse discrete Fourier transform,

S_xy(k) is the signal resulting from the deconvolution of x(k) with y(k),

y(k) is the standard sonar echo with discrete Fourier transform Y(f),

Y*(f) is the complex conjugate of Y(f), and

s is a small constant to prevent division by zero.
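A minimal sketch of the regularised frequency-domain deconvolution reconstructed above, using NumPy's real FFT; the value of s and the zero-padding policy are assumptions.

```python
import numpy as np

def deconvolve(x, y, s=1e-3):
    """Deconvolve the measured echo x with the standard echo y, with a small
    constant s guarding against division by zero (see the equation above)."""
    n = len(x) + len(y) - 1          # pad to avoid circular wrap-around
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    S = X * np.conj(Y) / (Y * np.conj(Y) + s)
    return np.fft.irfft(S, n)
```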
The accuracy of the raw ring data (x,y and r,θ data) for each position z obtained from the laser scanner 4 and sonar scanner 6, as discussed above, can be improved by correction for movement of the measurement vehicle 40. The measurement vehicle 40 cannot be constrained to remain at the centre of the pipe 42 and orientated parallel with respect to the longitudinal axis of the pipe 42. Measurements obtained from the vehicle orientation detector 12 and the measured data can be used to determine components of motion of the measurement vehicle 40, such as x and y offsets from the centre of the pipe 42 to the prism 52 and the sonar transducer 102, and the yaw and pitch angles of the measurement vehicle 40. A method for correcting the raw data is discussed in the accompanying Appendix on pages 45 to 49, and involves obtaining the effects of the vehicle motion from the measured data and processing the measured data by a transformation required to transform a fitted ellipse onto a circle centred in the middle of the pipe. The method assumes the pipe 42 is circular and first fits the ellipse to the ring data obtained from the scanners 4 and 6. Having defined a fitted ellipse for the ring data, a transformation matrix is obtained which can be used to transform the ring data onto a circle corresponding to the assumed cross-section of the pipe 42. Defining the ellipse also provides offsets x_c, y_c to the centre of the pipe 42 which are used to correct the cylindrical ring data so as to be centred on the centre of the pipe 42. The Appendix also describes how the yaw and pitch angles of the vehicle can be determined from the measured data, and this can be used for comparison with signals obtained from a gyroscope of the vehicle orientation detector 12. The comparison enables adjustment of the measured yaw and pitch angles and calibration of the gyroscopes.

The plurality of ring data obtained over a length of the pipe 42 from the laser scanner 4 and the sonar scanner 6 each comprise a set of raw range data representing an image of the inner surface of the pipe which can be subjected to image processing. The range data comprises a set of z values, and for each value z a plurality of x,y or r,θ dimension values corresponding to the features of the inner surface at each position z. All the x,y and r,θ values correspond to pixels of the image of the inner surface of the pipe 42.
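The Appendix's ellipse-fitting equations are not reproduced in this text, so the sketch below shows one conventional algebraic least-squares conic fit of the general form the Appendix refers to (coefficients w_i); the parametrisation and the centre extraction are assumptions consistent with that description.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of x^2 + w1*x*y + w2*y^2 + w3*x + w4*y + w5 = 0
    to ring data points (x, y); an assumed parametrisation of the w_i."""
    A = np.column_stack([x * y, y**2, x, y, np.ones_like(x)])
    w, *_ = np.linalg.lstsq(A, -(x**2), rcond=None)
    return w

def conic_centre(w):
    """Centre (xc, yc) of the fitted conic, where its gradient vanishes:
    2*xc + w1*yc + w3 = 0 and w1*xc + 2*w2*yc + w4 = 0."""
    w1, w2, w3, w4, _ = w
    M = np.array([[2.0, w1], [w1, 2.0 * w2]])
    xc, yc = np.linalg.solve(M, np.array([-w3, -w4]))
    return xc, yc
```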
The work station 32 first performs image preprocessing 200 on the range data 201 and then segments the pixels into regions of interest using an image segmentation procedure 202, as shown in Figure 12. The segmented regions are classified using a feed-forward neural network classifier of an image classification procedure 204. A classifier training procedure 206 trains the network classifier off-line using the back-propagation method. The classifier training procedure 206 relies on a training set prepared by a preparation procedure 208. The training set preparation procedure 208 generates training data based on results obtained by performing the image segmentation procedure 202 on a known pipe structure with known defects. The results of the image classification procedure 204 are provided to an image interpretation procedure 208 which further defines the classified features of the image on the basis of a knowledge database 210 of known pipe features. A graphic display of the interpreted image can be produced at step 210 and an automatic defect report 212 produced at step 214, the reports generated being stored in a structured database 216.
The image preprocessing procedure 200 removes insignificant variations in the range data and places it in a form which is not dependent on the type of scanner 4 or 6 used. The image preprocessing procedure 200 involves first processing the range data to produce a calibrated constant grid map of the internal surface of pipe 42, where the map is two dimensional and the surface of the pipe is considered to be split longitudinally and opened to expose the surface. The x,y and r,θ values for each pixel obtained by the scanners 4 and 6 are calibrated and converted into depth values. Provision can also be made for pipe manufacturing tolerances. The depth value of each point on the grid map is determined by taking the depth value of the closest pixel to the point. Linear interpolation can be used at an increased computational cost, but it has been found that the improvement gained does not justify the cost.
The image preprocessing procedure 200 involves generation of a pipe model, which represents a perfect pipe with no defects or features. The model provides a reference level with respect to which range or depth values can be measured.
Two techniques for data-driven model generation have been implemented as part of the system 2. In each, the first step is the identification of pixels in the image which are likely to be part of a defective region, by differential geometric techniques.
Biquadratic surface patches are used to estimate image surface derivatives, as described in "Surfaces in Range Image Understanding" by Paul J. Besl, Springer-Verlag, 1988. Pixels with small first derivatives may be called good pixels.
In the first model building technique the model is built ring by ring. An ellipse of best fit is determined for all the good pixels in a ring. The ellipse of best fit is obtained in the same fashion as discussed previously. In the second model building technique the image is systematically covered with facets, which are rectangular subimages. The facet with the largest number of good pixels is chosen. Multiple bilinear regression is carried out. If this is successful then the model is determined at that facet and the facet is "written", which means that all the pixel values in the facet are set to the values determined by the regression. If the regression is not successful then the facet is skipped and the best remaining facet is chosen. The regression will be successful if there are enough good pixels in the facet to get reliable results and the regression equations have a unique solution. Note that once a facet has been "written", all its pixels become good. After one pass through the facets in the image, if any facets remain unwritten then the "read" size of the facets is incremented. This means that they examine pixels over a larger region when carrying out the regressions. When a regression for a facet is successful it writes only to its initial region and not the extended read region. Successive passes are made in this way through the list of facets until all facets are written.
The advantage of the local facet method is that it applies to pipes of any shape.
The bilinear multiple regression for a facet is carried out by finding a facet or planar surface given by f(x,y) = c + m_1 x + m_2 y which minimises

$$E = \sum_{i}\big(c + m_1 x_i + m_2 y_i - z_i\big)^2 .$$

Here z_i is the range value of the pixel (x_i, y_i), which is the ith good pixel in the rectangular subimage under consideration.
This minimum occurs with the vanishing of the following derivatives:

$$\frac{\partial E}{\partial c} = \frac{\partial E}{\partial m_1} = \frac{\partial E}{\partial m_2} = 0 .$$
This occurs when the following system of equations is satisfied:

$$\begin{pmatrix} n & \sum_i x_i & \sum_i y_i \\ \sum_i x_i & \sum_i x_i^2 & \sum_i x_i y_i \\ \sum_i y_i & \sum_i x_i y_i & \sum_i y_i^2 \end{pmatrix} \begin{pmatrix} c \\ m_1 \\ m_2 \end{pmatrix} = \begin{pmatrix} \sum_i z_i \\ \sum_i x_i z_i \\ \sum_i y_i z_i \end{pmatrix}$$

where n is the number of good pixels in the facet.
This is a linear system and can be solved by standard methods.
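Equivalently, the normal equations above can be solved as an ordinary least-squares problem; a minimal sketch:

```python
import numpy as np

def fit_facet(xs, ys, zs):
    """Fit the planar facet f(x,y) = c + m1*x + m2*y to the good pixels
    (xs, ys) with range values zs; lstsq solves the same least-squares
    problem as the 3x3 normal equations above."""
    A = np.column_stack([np.ones_like(xs), xs, ys])
    (c, m1, m2), *_ = np.linalg.lstsq(A, zs, rcond=None)
    return c, m1, m2
```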
The data of the laser scanner 4 usually has data values missing, which may be caused by a light-absorbing surface, obstruction of the return light signal to the camera 56, a surface void, or a surface which reflects light away from the camera 56. The missing values caused when an object obscures the line of sight between the camera 56 and the laser beam trace 55 are termed shadows. Symbolic values are assigned to all values missing on the grid map. A set of possible values is obtained for the shadow pixels and other missing values by projecting a line from the camera position through every pixel. If any of these lines intersect the line projected from the centre of the pipe through the grid position of the missing value and perpendicular to the longitudinal axis of the pipe, the point of intersection is added to the set. Since this set of possible values is assumed to be continuous, only the minimum range value found need be kept for subsequent processing.
Shadows are not a considerable problem for the data obtained by the sonar scanner 6; however, instead of a single radial value for each point on the surface, there may be several radial values caused by multiple echoes being returned to the sonar transducer 102. The sonar data is therefore further preprocessed by examination of the pixels in a window surrounding the current pixel, and a single depth value is chosen from amongst the pixel values in the window. The size of the window, presently 7×7, is dependent on the size of the area covered by the sonar signal 106 on the internal surface of the pipe 42.
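The patent does not state which value is chosen from the window, so the sketch below uses the median as one plausible, outlier-resistant rule; treat the selection rule as an assumption.

```python
import numpy as np

def select_depth(image, r, c, size=7):
    """Pick a single depth for pixel (r, c) from a size x size window of the
    sonar image; the median is an assumed (not stated) selection rule."""
    h = size // 2
    window = image[max(r - h, 0):r + h + 1, max(c - h, 0):c + h + 1]
    return float(np.median(window))
```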
The data obtained by both the laser scanner 4 and the sonar scanner 6 also contains a considerable amount of noise which can interfere with subsequent image analysis routines. The effects of the noise can be reduced by an image smoothing process. An adaptive smoothing process is used for its properties of reducing noise whilst preserving discontinuities. An equally weighted local neighbourhood averaging kernel which adapts itself to the local topography of the surface to be smoothed is used. The resulting surface is smooth within regions whilst its discontinuities (often corresponding to region boundaries) are preserved. The strategy consists of automatically selecting an adequate kernel size which defines the size of the neighbourhood. The size of the smoothing kernel is based on an evaluation of the differences between the value at the centre point of the kernel window and its neighbours. The size of the kernel is chosen such that the largest absolute difference between the centre pixel and its neighbours is less than or equal to twice the noise level in the image, subject to the constraints that the kernel size is greater than a minimum size (typically 3 by 3) and smaller than a maximum size (typically 19 by 19).
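A minimal sketch of the adaptive kernel selection just described, for an interior pixel: the equally weighted averaging window grows from the minimum size while the largest absolute difference from the centre pixel stays within twice the noise level.

```python
import numpy as np

def adaptive_smooth_pixel(image, r, c, noise, kmin=3, kmax=19):
    """Equally weighted local averaging with an adaptively chosen kernel size
    (odd, between kmin and kmax); border handling is omitted for brevity."""
    size = kmin
    while size + 2 <= kmax:
        h = (size + 2) // 2
        window = image[r - h:r + h + 1, c - h:c + h + 1]
        if np.abs(window - image[r, c]).max() > 2.0 * noise:
            break                      # growing further would cross an edge
        size += 2
    h = size // 2
    return image[r - h:r + h + 1, c - h:c + h + 1].mean()
```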
The image segmentation procedure 202, as shown in Figure 13, includes a feature extraction step 220, a pixel labelling step 222 and a region extraction step 224. The feature extraction step 220 computes a set of features which are combined to form a feature vector for each pixel. The features used consist of features describing the range value properties of the pixel and features representing the texture in a local neighbourhood, and include:
1. The row r on the grid map for the pixel;
2. The column c on the grid map for the pixel;
3. The depth value f(r,c) of the pixel;
4. The deviation from the Model f(r,c) - Model(r,c);
5. First and second order derivatives of the function f at the position (r,c) based on estimating derivatives of digital surfaces as described in Besl; and
6. Surface characteristics including biquadratic surface fit error, and quadratic variation as described in Besl.
The pixel labelling step 222 uses the feature vectors of the pixels to perform a classification technique which partitions the image into regions that are uniform with respect to certain features and that contrast with their adjacent neighbours in the same features. A Nearest Neighbour Classifier, which has been previously trained offline, is used to perform the classification. The classifier is trained to classify pixels according to one of the following surface primitives:
background
void
corrosive
deposit
root
Training is performed by extracting a set of feature vectors from suitable known examples of the desired surface types. The set of feature vectors becomes the exemplar set for the classifier. The nearest neighbour classifier performs its classification by comparing the unknown feature vector to its set of exemplars. The assigned classification is the same as the classification of the closest exemplar in feature space using the Euclidean distance measure. The output of the pixel labelling step 222 is a label list in the form of a function l(r,c), which is the primitive surface classification for the pixel of row r in column c. A relaxation step is used to improve the performance of the pixel labelling step.
The underlying assumption of the relaxation process is that surface primitives consist of locally connected regions. This assumption is translated into the statement that classification decisions for single pixels are not independent of the decisions for neighbouring pixels. In other words, if all neighbours of a pixel are classified as pixels of a certain primitive, it is very unlikely that this pixel does not belong to the same primitive. Since we want to modify the classification (label) at a pixel (r,c), we define a refined classification l'(r,c) for the initial classification l(r,c) as:

l'(r,c) = M({ l(i,j) : (i,j) ∈ Neighbourhood(r,c) })

where Neighbourhood(r,c) is a certain neighbourhood (typically a 3 by 3 window) of (r,c) and M is a function of all the classifications in the neighbourhood of the pixel at (r,c). Typically M is defined as the function returning the mode of its arguments. The output of the relaxation step is a label list in the form of a function l'(r,c), which is the refined primitive surface classification for the pixel of row r in column c.
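A minimal sketch of the nearest neighbour labelling and the 3-by-3 mode relaxation described above; the array layout and integer label encoding are assumptions.

```python
import numpy as np
from collections import Counter

def nearest_neighbour(features, exemplars, exemplar_labels):
    """Label each feature vector with the class of its nearest exemplar
    (Euclidean distance); features is (P, F), exemplars is (E, F)."""
    d = np.linalg.norm(features[:, None, :] - exemplars[None, :, :], axis=2)
    return exemplar_labels[np.argmin(d, axis=1)]

def relax(labels):
    """Refine an (R, C) label image by taking the mode of each 3x3 window."""
    out = labels.copy()
    R, C = labels.shape
    for r in range(1, R - 1):
        for c in range(1, C - 1):
            window = labels[r - 1:r + 2, c - 1:c + 2].ravel().tolist()
            out[r, c] = Counter(window).most_common(1)[0][0]
    return out
```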
The region extraction step 224 groups connected pixels having the same label into connected components. Those connected components whose pixel type is not of the background type are extracted to form regions of interest R. The coordinates of a bounding rectangle for each region R are determined and a bit map, map(i,j), is generated to identify pixels of interest, i.e. those pixels within the region R belonging to the same connected component, where image : R → ℝ gives the range value at each pixel and map : R → {0,1}. The output of the region extraction step 224 is a list of regions of interest in the range image of the pipe surface.
The image classification procedure 204, as shown in Figure 14, includes a filter step 230, an interpolation step 232, a transform step 234 and a sampling step 236 which process the bit maps of the regions of interest so as to place them in a form which can be applied to the neural networks of a neural network classification procedure 238. The output of the classification procedure 238 provides a classification for each region of interest and a confidence value representative of the confidence that the classification is correct. For example, classes for surface defects may include void, crack, corrosion or deposit. The confidence value may be 0 or 1, i.e. low or high, or a "fuzzy" value between 0 and 1.
The neural networks used in the neural network classification procedure 238 have a fixed structure, i.e. a fixed number of inputs and hidden layer nodes, yet the number of pixels of interest in a region of interest and the size of the region are not fixed. Therefore the data of the regions of interest must be manipulated so it can be applied to the fixed inputs of the networks, regardless of region size, and this should be done without sacrificing the data and true representation of the regions. This is achieved by first applying a defect map filter at step 230 to the bit map of each region of interest. The depth values for pixels of interest in the defect map are re-scaled to values between 1 and 2, and the remaining pixels are set to 0. This enhances the contrast between the pixels of interest and the remaining pixels and highlights the boundary of a defect, while retaining an image of the region's topography. The new image, image1, is defined for all pixels (i,j) in the region R by equations (9), (10) and (11) below:

image1(i,j) = 0    (9)

for all (i,j) in R where map(i,j) = 0;

image1(i,j) = f(image(i,j) − model(i,j))    (10)

for all (i,j) for which map(i,j) = 1 and the label represents an "outrusion"; and

image1(i,j) = g(model(i,j) − image(i,j))    (11)

for all (i,j) for which map(i,j) = 1 and the label represents an "intrusion". The functions f and g are given by, for example,
[The functions f and g, reproduced as images in the original, map the deviation from the model into the range 1 to 2 using the nominal, maximum and minimum radii defined below.]
Nominal radius is that of the pipe, and maximum radius and minimum radius are the maximum and minimum depth or radius which the scanners 4 and 6 can detect. Model(i,j) is the pipe model value at the pixel (i,j), where the pipe model can be obtained by either of the two methods described previously. The interpolation step 232 is then used to re-scale or compress the region of an arbitrary size m×n to an image, image2 : {0, ..., M−1} × {0, ..., N−1} → ℝ, of a set size M×N so it can be applied to the neural networks. Transformation to the set image size is done by triangulation of the domain of the region, which involves forming triangles with the vertices being the pixels of the region. The triangles have a number of different faces or "facets" orientated in different directions depending on the values of the pixels, and each triangle is then mapped to a function so as to form a continuous "facet" image. This image is then sampled onto an M×N grid to obtain the interpolated image. The form of the triangulation for an m×n region is illustrated in Figure 15, where each grid is divided into an upper triangle 233 and a lower triangle 235. The continuous "facet" image is defined by a function f(x,y) where, for any point (x,y) (0 ≤ x ≤ m−1 and 0 ≤ y ≤ n−1) in the grid map plane, f(x,y) is obtained by first finding the triangle which bounds (x,y). The triangle presents a facet in 3D space and f(x,y) is the height of the facet above the point (x,y). If (x_i, y_i) are the vertices of a triangle (i = 1, 2 and 3) and z_i is the depth value corresponding to (x_i, y_i), then f(x,y) is evaluated as follows:
1. If (x,y) lies in an upper triangle 233 then f(x,y) = z_1 + (y − y_1 − x + x_1)(z_2 − z_1) + (x − x_1)(z_3 − z_1).
2. If (x,y) lies in a lower triangle 235 then f(x,y) = z_1 + (x − x_1 − y + y_1)(z_2 − z_1) + (y − y_1)(z_3 − z_1).
The function f(x,y) is then sampled onto an M×N grid.
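A minimal sketch of the triangulated resampling: evaluate the continuous facet image with the two formulas above and sample it onto a fixed M-by-N grid. The vertex labelling of the triangles is an assumption consistent with those formulas.

```python
import numpy as np

def facet_value(z, x, y):
    """Evaluate the continuous facet image f(x, y) over the region's range
    values z (indexed z[x, y]), using the upper/lower triangle formulas."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    if dy >= dx:   # upper triangle: vertices (x0,y0), (x0,y0+1), (x0+1,y0+1)
        z1, z2, z3 = z[x0, y0], z[x0, y0 + 1], z[x0 + 1, y0 + 1]
        return z1 + (dy - dx) * (z2 - z1) + dx * (z3 - z1)
    else:          # lower triangle: vertices (x0,y0), (x0+1,y0), (x0+1,y0+1)
        z1, z2, z3 = z[x0, y0], z[x0 + 1, y0], z[x0 + 1, y0 + 1]
        return z1 + (dx - dy) * (z2 - z1) + dy * (z3 - z1)

def resample(z, M, N):
    """Sample the facet image of an m x n region onto a fixed M x N grid."""
    m, n = z.shape
    xs = np.linspace(0.0, m - 1 - 1e-9, M)
    ys = np.linspace(0.0, n - 1 - 1e-9, N)
    return np.array([[facet_value(z, x, y) for y in ys] for x in xs])
```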
To reduce the data dimensionality further, the samples are transformed at step 234 from the spatial domain into a power domain or power spectrum using a discrete two-dimensional Fourier transform. The power spectrum provides a good representation of the region due to the transformation process, which reconstructs the surface in the Fourier domain. A discrete Fourier transform of image2 is performed using a recursive doubling strategy for the Fast Fourier Transform (FFT). The discrete Fourier transform of an image f : {0, ..., M−1} × {0, ..., N−1} → ℝ is defined by equation (12).
$$F(u,v) = \sum_{k=0}^{M-1}\sum_{l=0}^{N-1} f(k,l)\, e^{-2\pi i\left(\frac{uk}{M} + \frac{vl}{N}\right)} \qquad (12)$$
k,l being the cartesian coordinates of image f and u,v being the cartesian coordinates of the transformed samples in the power domain, 0 ≤ u ≤ M−1 and 0 ≤ v ≤ N−1. The recursive doubling strategy is described in "Algorithms for Graphics and Image Processing" by Theo Pavlidis, Springer-Verlag, 1982. The power spectrum is the absolute value of the complex Fourier transform, which gives a power domain image power : {0, ..., M−1} × {0, ..., N−1} → ℝ, with power(u,v) = |F(u,v)|. The power spectrum is then wedge-ring sampled at step 236, which involves integrating, or summing, the samples in the power domain over eight wedge-shaped and eight ring-shaped sections of the domain. The result of the wedge-ring sampling provides a set of sixteen values which represent a compressed discrete signature of the inner surface of the pipe 42 which can be applied to the neural networks of the neural network classification step 238. The wedge-ring sampling is performed by first letting WEDGES be the number of wedges to be used and RINGS be the number of rings to be used. The interval from 0 to π/2 in the power domain is divided into WEDGES equal subintervals defined by angles [θ_i, θ_{i+1}]. The interval from 0 to [(M−1)² + (N−1)²]^{1/2} is divided into RINGS equal subintervals defined by radii [r_j, r_{j+1}]. The ith wedge sample is defined to be the sum of power(u,v) for all (u,v) such that theta(u,v) is in [θ_i, θ_{i+1}], where theta : ℝ² → [0, 2π] is the polar coordinate angle map given by
$$\text{theta}(u,v) = \tan^{-1}\!\left(\frac{v}{u}\right) .$$
The jth ring sample is defined to be the sum of power(u,v) for all (u,v) such that radius(u,v) is in [r_j, r_{j+1}], where radius : ℝ² → [0, ∞) is the polar coordinate radius map given by

$$\text{radius}(u,v) = \sqrt{u^2 + v^2} .$$
Thus, the ith wedge sample wedge(i) is defined by

$$\text{wedge}(i) = \sum_{u=0}^{M-1}\sum_{v=0}^{N-1} \text{power}(u,v)\,\chi_{[\theta_i,\theta_{i+1}]}\big(\text{theta}(u,v)\big)$$
and the jth ring sample ring(j) is defined to be

$$\text{ring}(j) = \sum_{u=0}^{M-1}\sum_{v=0}^{N-1} \text{power}(u,v)\,\chi_{[r_j,r_{j+1}]}\big(\text{radius}(u,v)\big)$$
where for any wedge or ring set S, χ_S is the characteristic function of S, which indicates whether x, being theta(u,v) or radius(u,v), is in S, and is defined by

$$\chi_S(x) = \begin{cases} 1 & x \in S \\ 0 & x \notin S \end{cases}$$
The wedge-ring samples are put together to form a feature vector. The first WEDGES, in this case eight, features are the wedge samples and the last RINGS, in this case eight, features are the ring samples.
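A minimal sketch of the wedge-ring sampling: the power spectrum is summed over eight wedge and eight ring sections to produce the sixteen-element feature vector (boundary handling simplified).

```python
import numpy as np

def wedge_ring_features(power, wedges=8, rings=8):
    """Sum power(u, v) over wedge- and ring-shaped sections of the power
    domain to form the wedge-ring feature vector described above."""
    M, N = power.shape
    u, v = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    theta = np.arctan2(v, u)                 # theta(u, v) in [0, pi/2]
    radius = np.hypot(u, v)                  # radius(u, v)
    t_edges = np.linspace(0.0, np.pi / 2, wedges + 1)
    r_edges = np.linspace(0.0, np.hypot(M - 1, N - 1), rings + 1)
    wedge = [power[(theta >= t_edges[i]) & (theta < t_edges[i + 1])].sum()
             for i in range(wedges)]
    ring = [power[(radius >= r_edges[j]) & (radius < r_edges[j + 1])].sum()
            for j in range(rings)]
    return np.array(wedge + ring)
```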
The Fourier transform and wedge-ring sampling steps 234 and 236 preserve the overall shape and texture of the regions of interest and enable the neural network classifier to discriminate between complex surface defects. A number of factors enable this to occur, as discussed above, and are a consequence of the fact that the Fourier transform is an invertible mapping onto the frequency domain, and the power spectrum is translation invariant, which ensures the wedge-ring samples are translation invariant. Furthermore, the wedge samples are scale invariant and the ring samples are rotation invariant. This facilitates invariant classification of defects in the pipe surface. The neural network classification step 238 involves applying the wedge-ring samples to a number of feed-forward neural networks which have been trained by the back-propagation method. A neural network 241, as shown in Figure 16, is provided for each classification class of surface defects and pipe features. The networks 241 each presently have a topology of 16 inputs 243 in the input layer 239, eight nodes 245 in the first hidden layer 247, four nodes 249 in the second hidden layer 251 and two outputs 253 in the final fourth layer 255. Weights W_ℓ(i,j) are applied to each value passed between the four layers 239, 247, 251, 255, ℓ being the layer number, i being the number of the node or output of the layer ℓ and j being the number of the node or input of the ℓ+1 layer. Biases are applied to the result of summing all of the weighted values received at each node 245 and 249. The outputs include one for an affirmative signal and one for a negative signal, the signal level being between 0 and 1 on each output and indicating the degree to which the network believes the region of interest belongs to its respective class or not.
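A minimal sketch of the 16-8-4-2 forward pass; the sigmoid activation is an assumption (standard for back-propagation networks of this period), and the weight layout is illustrative.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x, weights, biases):
    """Forward pass of a 16-8-4-2 feed-forward network. weights is a list of
    matrices of shapes (16, 8), (8, 4) and (4, 2); biases matches each layer.
    Returns the [affirmative, negative] output pair, each in (0, 1)."""
    h = np.asarray(x, dtype=float)       # the 16 wedge-ring features
    for W, b in zip(weights, biases):
        h = sigmoid(h @ W + b)           # weighted sum plus bias, squashed
    return h
```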
The neural networks are divided into a first class which operates on images that represent intrusions into the pipe, and a second class which operates on images that represent defects or features that protrude away from the centre of the pipe 42. An example of the first class is the tree root/non-tree root neural network classifier, whose weights and biases are listed in the original in natural order, the weights for each node being provided first, starting from the first input 243 of the first layer 239 to the outputs of the fourth node 249 of the third layer 251, and the biases of the nodes 245 and 249 being specified thereafter. [The numeric list of weights and biases is given in the original as a block of values whose digit grouping was corrupted in extraction; it is not reproduced here.]
An example of the second class of neural network is the pipe join/non pipe join neural network classifier, the weights and biases of which are listed below in natural order.
15.601082 1.313890 -4.911501 2.071737 6.847770 -1.955913 0.479759 23.305962
-11.033848 0.294237 -9.093491 1.763276 13.544494 -9.255530 -0.530280 -12.524919
-6.430345 -0.506124 1.793031 0.236400 11.959572 8.719585 0.085171 -10.598364
14.464748 0.104221 18.004253 -2.007996 -7.571867 37.566628 1.563560 9.753113
3.913759 0.088835 2.713513 0.757766 23.501945 18.608824 0.587159 1.179299
-11.030748 0.242176 -5.094967 0.842696 15.003055 -2.304438 -0.025316 -14.214068
-14.926097 0.035510 -11.846221 1.631934 28.704191 -10.829489 0.083905 -16.000715
-15.083493 0.116805 -5.290942 0.712775 16.129330 -8.731529 -0.287789 -17.385916
24.353518 1.897373 3.986410 1.951551 15.928616 5.156585 1.666751 32.937801
3.532454 0.609612 -1.327396 1.226175 3.727180 1.971728 0.745650 4.040071
1.858634 0.535590 2.401938 0.487294 -0.504847 9.046494 0.441551 0.203021
-18.910927 -0.660695 -8.226106 1.381131 12.661963 -7.653401 -0.483721 -24.692900
10.056084 0.513577 6.766282 0.206353 0.355041 22.668598 1.356813 8.777742
-1.936229 -0.228150 6.818745 0.594466 30.175247 23.757566 0.693519 -7.394564
-7.025899 -0.038548 -6.210098 1.386558 13.0933873 -3.644164 -0.309244 -8.490670
-21.163414 -0.423117 -17.802017 2.518166 33.727161 -24.713118 -1.424613 -21.156178
-8.455983 0.184931 0.430795 -12.507034 -1.018161 -0.215443 -0.097470 -0.057819
9.579165 -0.587254 -0.855436 -12.936895 -1.489346 -0.738745 -0.257254 2.980933
17.244980 3.112999 -1.679418 33.106018 13.431716 -3.643328 -2.187083 -24.992414
0.823561 -0.437280 -0.970514 -1.308472 -16.635191 -2.442617 2.454849 -9.283930
2.711961 -2.711932 0.368954 -0.368977 -0.886622 0.886741 -2.976039 2.976049
4.088806 3.415415 -0.099539 2.509209 15.089481 5.665998 3.394832 5.804759
-6.528076 -1.664564 2.117919 -19.312078 1.332425 -1.332394
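A sketch of how such a network might be evaluated follows. The sigmoid activation is an assumption (the specification does not state the activation function used), and the weight matrices here are randomly initialised placeholders rather than the trained values listed above:

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def feed_forward(x, layers):
    # layers: [(W, b), ...]; each node sums its weighted inputs, adds
    # its bias, and squashes the result into (0, 1).
    for W, b in layers:
        x = sigmoid(W @ x + b)
    return x

rng = np.random.default_rng(0)
# 16 -> 8 -> 4 -> 2 topology of the networks 241 (placeholder weights).
layers = [(rng.normal(size=(8, 16)), rng.normal(size=8)),
          (rng.normal(size=(4, 8)), rng.normal(size=4)),
          (rng.normal(size=(2, 4)), rng.normal(size=2))]
affirmative, negative = feed_forward(rng.random(16), layers)
print(f"affirmative={affirmative:.3f} negative={negative:.3f}")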
The weights and biases for the neural networks are established by the classifier training procedure 206, which uses the back-propagation method, as discussed in "Neural Computing" by R. Beale and T. Jackson, IOP Publishing, 1990, on a data set prepared by the training set preparation procedure 208. This involves obtaining data on known pipe defects and features using the pipe inspection system 2, segmenting the data with the segmentation procedure 202, and providing the results to the preparation procedure 208, which performs the filtering, interpolation, Fourier transform and wedge-ring sampling steps 230 to 236 to create the data sets. The data sets, together with their known classification results, are fed to the classifier training procedure 206, which seeks convergence of the weights and biases using the back-propagation method so as to produce correct neural networks for each classification class. The data sets are randomly interleaved, i.e. for the tree root network the tree root data sets are interleaved with data sets that do not relate to tree roots, so as to obtain accurate convergence of the weight and bias values.
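A condensed sketch of back-propagation training with randomly interleaved positive and negative examples might read as follows; the learning rate, epoch count and squared-error delta rule are assumptions, not parameters taken from the specification:

import random
import numpy as np

def train_backprop(pos, neg, layers, lr=0.1, epochs=200):
    # pos/neg: 16-element feature vectors for and against the class.
    # Targets match the affirmative/negative output pair of the network.
    data = [(x, np.array([1.0, 0.0])) for x in pos] + \
           [(x, np.array([0.0, 1.0])) for x in neg]
    for _ in range(epochs):
        random.shuffle(data)                         # random interleaving
        for x, t in data:
            acts = [x]                               # forward pass
            for W, b in layers:
                acts.append(1.0 / (1.0 + np.exp(-(W @ acts[-1] + b))))
            # Backward pass: squared-error delta rule with the sigmoid
            # derivative a * (1 - a) at each layer.
            delta = (acts[-1] - t) * acts[-1] * (1.0 - acts[-1])
            for i in range(len(layers) - 1, -1, -1):
                W, b = layers[i]
                grad_W, grad_b = np.outer(delta, acts[i]), delta
                delta = (W.T @ delta) * acts[i] * (1.0 - acts[i])
                layers[i] = (W - lr * grad_W, b - lr * grad_b)
    return layers

rng = np.random.default_rng(1)
layers = [(0.5 * rng.normal(size=(8, 16)), np.zeros(8)),
          (0.5 * rng.normal(size=(4, 8)), np.zeros(4)),
          (0.5 * rng.normal(size=(2, 4)), np.zeros(2))]
train_backprop([rng.random(16) for _ in range(20)],
               [rng.random(16) + 1.0 for _ in range(20)], layers)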
Filters are used with some of the dedicated networks. For example, with the pipe-connection network, if the bounding rectangle length of the region of interest is less than i_FILTER, or the bounding rectangle height is less than j_FILTER, or the defect map area is less than area_FILTER, then "not pipe join" is returned and use of the neural network is circumvented (a sketch follows below). The image interpretation procedure 208 involves the verification and rating of each classified region of interest based on a database 210 of defect and pipe feature characteristics and previously identified defects and features, which includes defect characteristics, e.g. direction/angle, gradient, area, and depth of defects, ratings, e.g. structural, service or other, and spatial relation with other defects or features. The image interpretation procedure 208 is separated into two parts, as shown in Figure 17: an image analysis step 250, which performs further image analysis on the bit maps of the classified regions in order to determine, at step 252, additional feature attributes associated with the pixels of interest. The feature attributes may include direction, edge details, area, position and depth of a surface feature or defect. An inference engine step 254 is provided to match the model of the defects provided in the knowledge database 210 using a set of defect identification and rating rules 256, which may be coded in an "if condition then conclusion" format. The inference engine is a general purpose facility; application dependent interface functions are provided to interface between the engine and the application program's data.
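The size filter mentioned above might be sketched as follows. The threshold names echo i_FILTER, j_FILTER and area_FILTER from the text, but their values and the dictionary-based region representation are placeholders:

# Hypothetical thresholds standing in for i_FILTER, j_FILTER, area_FILTER.
I_FILTER, J_FILTER, AREA_FILTER = 12, 12, 50.0

def classify_pipe_join(region, network):
    # Cheap geometric gate applied before the neural network is consulted.
    if (region["bbox_length"] < I_FILTER
            or region["bbox_height"] < J_FILTER
            or region["map_area"] < AREA_FILTER):
        return "not pipe join"                   # network circumvented
    affirmative, negative = network(region["features"])
    return "pipe join" if affirmative > negative else "not pipe join"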
The image interpretation procedure 208 is supported by other procedures for the efficient application of a knowledge base. These are:
· Rule base maintenance, which provides facilities for the addition, modification and deletion of rules from a knowledge base.
· Rule base compiler, which parses the knowledge base and ensures correct syntax. The knowledge base is then translated into a format suitable for the knowledge base engine.
The following specifies the knowledge base for complete identification of the service and structural defects and features.
SERVICE AND STRUCTURAL DEFECT SCORING RULES
Grease
IF region is grease
AND
cross-sectional area loss is less than 5%
THEN
score is 1.0 * length(m)
IF region is grease
AND
cross-sectional area loss is not less than 5%
AND
cross-sectional area loss is less than 20%
THEN
score is 5.0 * length(m)
Roots
IF region is a root
AND
cross-sectional area loss is less than 5%
THEN
score is 2.0
IF region is a root
AND
cross-sectional area loss is not less than 20%
THEN
score is 5.0
Note: Root scoring varies slightly from the Australian Conduit Condition Manual.
Open Joints
IF region is an open joint
AND
'openness' is less than the pipe wall thickness
THEN
score is 0.1
IF region is an open joint
AND
'openness' is greater than the pipe wall thickness
AND
'openness' is less than 1.5 pipe wall thicknesses
THEN
score is 2.0
Displaced Joints
IF region is a displaced joint
AND
displacement is less than the pipe wall thickness
THEN
score is 0.1
IF region is a displaced joint
AND
displacement is greater than the pipe wall thickness
AND
displacement is less than 1.5 pipe wall thicknesses
THEN
score is 0.5
IF region is a displaced joint
AND
displacement is greater than 1.5 pipe wall thicknesses
THEN
score is 2.0
Encrustation
IF region is encrustation
AND
cross-sectional area loss is less than 5%
THEN
score is 1.0 * length(m)
IF region is encrustation
AND
cross-sectional area loss is not less than 5%
AND
cross-sectional area loss is less than 20%
THEN
score is 2.0 * length(m)
IF region is encrustation
AND
cross-sectional area loss is not less than 20%
THEN
score is 5.0 * length(m)
Scale
IF region is scale
AND
cross-sectional area loss is less than 5%
THEN
score is 1.0 * length(m)
IF region is scale
AND
cross-sectional area loss is not less than 5%
AND
cross-sectional area loss is less than 20%
THEN
score is 2.0 * length(m)
IF region is scale
AND
cross-sectional area loss is not less than 20%
THEN
score is 5.0 * length(m)
Cracked
IF region is a circumferential crack
THEN
score is 1.0 * length(m)
IF region is a longitudinal crack
THEN
score is 2.0 * length(m)
IF region is a multiple crack
THEN
score is 5.0 * length(m)
Fractured
IF region is a circumferential fracture
THEN
score is 8.0 * length(m)
IF region is a longitudinal fracture
THEN
score is 15.0 * length(m)
IF region is a multiple fracture
THEN
score is 40.0 * length(m)
Gas Attack
IF region is gas attack
AND
extent is slight
THEN
score is 2.0 * length(m)
IF region is gas attack
AND
extent is medium
THEN
score is 40.0 * length(m)
IF region is gas attack
AND
extent is extreme
THEN
score is 100.0 * length(m)
Erosion
IF region is erosion
AND
extent is slight
THEN
score is 1.0 * length(m)
IF region is erosion
AND
extent is medium
THEN
score is 25.0 * length(m)
IF region is erosion
AND
extent is large
THEN
score is 100.0 * length(m)
Deformed
IF region is deformed
AND
deformation is less than 10%
THEN
score is 10.0 * length(m)
IF region is deformed
AND
deformation is not less than 10%
AND
deformation is less than 15%
THEN
score is 30.0 * length(m)
IF region is deformed
AND
deformation is not less than 15%
AND
deformation is less than 20%
THEN
score is 90.0 * length(m)
IF region is deformed
AND
deformation is not less than 20%
AND
deformation is less than 25%
THEN
score is 125.0 * length(m)
IF region is deformed
AND
deformation is not less than 25%
THEN
score is 165.0 * length(m)
Broken
IF region is broken
THEN
score is 60.0
The image interpretation procedure 208 allows the defects and pipe features specified in the following tables to be identified. The tables specify the neural network classification given to the defects and features, and the feature attributes which need to be determined by the image analysis step 250 in order for the inference engine step 254 to complete identification of the defects and features.
The tables document the attributes used by the rule base to classify regions. A tick indicates that the region should strongly exhibit the attribute, whilst a cross indicates that the region should not exhibit it. A dash indicates a "don't care" situation. The tables do not indicate whether the attributes are used in a conjunctive or disjunctive manner.
[Defect and feature attribute tables; images imgf000038_0001 and imgf000039_0001 in the original publication.]
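By way of illustration only, rules of the "IF condition THEN score" form listed above might be encoded for the inference engine as condition/conclusion pairs. In this minimal sketch the dictionary-based region representation, the rule encoding and the first-matching-rule policy are assumptions rather than the actual engine format; only the grease and root rules are transcribed:

# Each rule pairs a condition with a conclusion. Only the grease and
# root rules from the rule base above are transcribed here.
GREASE_RULES = [
    (lambda r: r["area_loss"] < 0.05, lambda r: 1.0 * r["length_m"]),
    (lambda r: r["area_loss"] < 0.20, lambda r: 5.0 * r["length_m"]),
]
ROOT_RULES = [
    (lambda r: r["area_loss"] < 0.05, lambda r: 2.0),
    (lambda r: r["area_loss"] >= 0.20, lambda r: 5.0),
]
RULEBASE = {"grease": GREASE_RULES, "root": ROOT_RULES}

def score(region):
    # Apply the first rule whose condition holds for the region.
    for condition, conclusion in RULEBASE.get(region["class"], []):
        if condition(region):
            return conclusion(region)
    return 0.0

# 12% cross-sectional area loss over 3 m of grease: 5.0 * 3.0 = 15.0.
print(score({"class": "grease", "area_loss": 0.12, "length_m": 3.0}))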
The pipe position in rings, clock reference, and the pixels of the regions of interest corresponding to each defect and feature identify the precise location of a defect or feature. The rules 256 can also be used to resolve location conflicts which may occur during segmentation and classification when different defects overlap. At least three different types of conflicts can be resolved: conflicts among superimposed defects, conflicts among adjacent defects, and conflicts arising due to classification ambiguities between different parts of defects.
The use of the knowledge based interpretation procedure 208 after the segmentation and classification procedures 202 and 204 enhances the identification and detection of surface defects. Furthermore, the database 210 enables the severity of the defects to be evaluated and rated, from which a comprehensive report can be generated using the report generation procedure 214.
The report is output in the following form:
[Report format; images imgf000040_0001 and imgf000041_0001 in the original publication.]
In order to allow further evaluation of defect severity, and to provide information about changes in pipe condition, a Management Information System (MIS) has been developed. It uses the work station 32 to provide a graphical user interface (GUI) which allows the asset manager or engineer to call up selected previously processed data sets, to interactively display the defect regions, or to generate new reports using a modified rule base. The MIS has been written to run in a main window of the display 211. The main window lists the assessment for each of the defects, and allows display of multiple detailed views of the data related to the defect. These alternate views can comprise: one showing the current ring slice of the pipe data, one showing the current range or unwrapped section of the pipe data, and another showing a three dimensional (3D) display of the pipe data. The MIS can also provide a graph showing variation of rating over time, based on previous pipe inspections. The MIS has been designed and implemented in a modular fashion, and Figure 4 shows the main MIS software components.
Through the MIS, the user (such as an asset manager or maintenance engineer) can control access to the data base using the GUI. The GUI sub-system allows the user to specify the operation (either data interpretation, visualisation or comparison) and the data to operate on (either the entire list of regions of interest, or a subset of it). The GUI sub-system invokes the sub-system which performs the particular operation on the data requested by the user. The called sub-system loads the specified data from the data base using the facilities provided by the interfacing sub-system, if it has not been previously loaded. When the analysis sub-system is called, it in turn calls components from the knowledge base sub-system to apply the knowledge base(s) to the data. A default knowledge base is used except where the user explicitly overrides this default through the user interface. The GUI processes interactive user requests that are activated using the keyboard and mouse, provides menus and windows to enable easy interaction with the system, and contains the control logic to sequence and control the interaction with, and of, the sub-systems. The GUI provides access to, or invokes, facilities for the following (a dispatch sketch follows the list):
Data load
Data save
Knowledge base maintenance
Pipe analysis
Comparison over time
Compare two analyses
Visualise an analysis
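By way of illustration only, the control logic that sequences these facilities might be organised as follows; the sub-system interfaces and method names are assumptions, not the actual MIS components:

class MIS:
    # Illustrative dispatch from GUI requests to the MIS sub-systems.
    def __init__(self, interfacing, analysis, visualisation, comparison):
        self.interfacing = interfacing
        self.analysis = analysis
        self.visualisation = visualisation
        self.comparison = comparison
        self.loaded = {}                 # pipe id -> regions of interest

    def run(self, operation, pipe_id, regions=None, knowledge_base=None):
        # Load on demand through the interfacing sub-system.
        if pipe_id not in self.loaded:
            self.loaded[pipe_id] = self.interfacing.data_load(pipe_id)
        data = regions if regions is not None else self.loaded[pipe_id]
        if operation == "analyse":
            # Default knowledge base unless the user overrides it.
            return self.analysis.apply(data, knowledge_base or "default")
        if operation == "visualise":
            return self.visualisation.draw(data)
        if operation == "compare":
            return self.comparison.over_time(data)
        raise ValueError("unknown operation: " + operation)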
The software uses a computer graphics display to provide visualisation of analysed lists of regions of interest. Various views can be generated depending on a viewing position and data supplied by the GUI sub-system. That is, when the viewing parameters are changed in the GUI, the views are automatically regenerated to reflect this change.
This sub-system provides display services to the other sub-systems.
The visualisation sub-system generates a required picture from the supplied data, scales the picture to the appropriate window size and draws the picture into a graphics window.
The following services are provided by this sub-system:
• Cross section view generates a cross-sectional view of the pipe taken at the nominated pipe position. The generated image is scaled appropriately to fill the viewing window.
• Unwrapped view generates an unwrapped view of the pipe starting at the nominated pipe position.
• 3D view generates a 3D perspective view of the pipe from the nominated pipe position. The orientation of the viewing camera is provided to enable the generation of the view from almost any position. The generated image is scaled appropriately to fill the viewing window.
• Graphing produces a 2D line graph from the two data vector parameters.
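By way of illustration, the unwrapped view service might bin the cylindrical samples into a depth image as in the following sketch; the bin counts and the use of the mean radius per cell are assumptions:

import numpy as np

def unwrapped_view(r, theta, z, n_theta=360, n_z=200):
    # Rows follow the pipe axis (z), columns the circumference (theta);
    # each cell holds the mean measured radius, so defects show up as
    # departures from the nominal pipe radius.
    ti = np.minimum((theta % (2 * np.pi)) / (2 * np.pi) * n_theta,
                    n_theta - 1).astype(int)
    zi = np.minimum((z - z.min()) / (np.ptp(z) + 1e-9) * n_z,
                    n_z - 1).astype(int)
    image = np.zeros((n_z, n_theta))
    counts = np.zeros((n_z, n_theta))
    np.add.at(image, (zi, ti), r)
    np.add.at(counts, (zi, ti), 1)
    return np.divide(image, counts,
                     out=np.full_like(image, np.nan), where=counts > 0)

# Synthetic run: a 300 mm pipe with a shallow bump near theta = pi.
theta = np.random.uniform(0, 2 * np.pi, 50000)
z = np.random.uniform(0, 10.0, 50000)
r = 150.0 + 5.0 * np.exp(-((theta - np.pi) ** 2) / 0.1)
print(unwrapped_view(r, theta, z).shape)   # (200, 360)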
A comparison sub-system relates multiple pipe condition assessment results. A time-based rating of assessment results can be assembled and displayed as a graph.
The following is an example of the service provided by this sub-system:
• Comparison over time examines all analysed lists of regions of interest for a given pipe and, for each list, extracts and time stamps the assessment rating.
An interfacing sub-system provides a level of abstraction between the system and the underlying operating system and any external systems that require interfacing. This sub-system is intended to provide a library of interfacing services (which are implementation, or organisation, specific).
The following services are provided by this sub-system:
• Data load reads data from the supporting file system and converts it into a form suitable for the internal set of regions structure.
• Data save extracts data from the internal set of regions of interest and converts it into a form suitable for writing to the underlying file system.
• Pipe directory interrogates the file system and returns the set of all pipe identifiers.
• Analysis directory interrogates the file system and returns the set of all analysis identifiers for a given pipe.
• Analysis export extracts information from the results of the analysis and formats the data in a form suitable for import into Water Authority databases.
• Printing displays results on a printer device.
The data managed by the MIS is primarily made up of lists of regions of interest for pre-processed pipe images. The information stored in, or calculated for, each region of interest consists of region attributes which include: region location in the pipe; an image of the region; a region type descriptor; a region classification; area; perimeter; cross-sectional area loss; geometric descriptors; rating; etc. The list of regions of interest encapsulates the information used for condition assessment from the raw sensor pipe data. This list of regions of interest is a main data component within the system 2 and is extensively used by both the analysis and comparison sub-systems via the interfacing sub-system.
The pipe image is a form of annotated grid of pixel values representing a pipe surface in 3D space.
The list of regions of interest contains attribute and sub-image information for each segmented region.
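By way of illustration, a region-of-interest record of this kind might be modelled as follows; the field names are illustrative rather than the internal format of the system 2:

from dataclasses import dataclass, field
import numpy as np

@dataclass
class RegionOfInterest:
    # One entry in the list of regions of interest for a pipe image.
    ring_position: int            # location along the pipe, in rings
    clock_reference: float        # angular location, clock style
    sub_image: np.ndarray         # depth-value bit map of the region
    region_type: str              # descriptor, e.g. "intrusion"
    classification: str           # neural network class, e.g. "root"
    area: float
    perimeter: float
    area_loss: float              # cross-sectional area loss fraction
    rating: float = 0.0           # filled in by the inference engine
    attributes: dict = field(default_factory=dict)  # geometric descriptors

# A pipe inspection run is then simply a list of RegionOfInterest records.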
A knowledge base representation is another data structure: an internal representation of an ASCII file of structured English-like declarative rules and functions. Raw pipe data from the sensors is supplied to the pre-processing module in real time or as a file. After pre-processing and compression, the pipe image data is transformed into a list of regions of interest and put into the data base. This data base can be archived or input directly into the MIS to perform pipe analysis. For a given pipe there may be many lists of regions of interest, each one corresponding to a particular run of a given sensor through the pipe.
The interfacing sub-system provides an abstract conduit between the data used by the MIS and the data base structure within the underlying operating system, as well as external data bases to or from which information about the list of regions of interest required by the user may be sent or retrieved.
Assessment rules are entered by the user, with a text editor, into the knowledge base representation used by the knowledge base sub-system. The knowledge base representation can also be compiled directly into the knowledge base sub-system.
The implementation of the graphical user interface (GUI) assumes the availability of a mouse and a colour graphical display. The MIS is designed to be interactive, and a user is required at most stages of MIS processing. However, most analysis actions of the system 2 can be performed, if desired, in a batch mode (i.e. without user intervention). The MIS can be operated independently of the data collection device, in an office environment, using an appropriately configured and fast Sun Microsystems Sparc 10 workstation. The recommended initial configuration of the workstation is:
Single or dual CPU SPARC 10 Mbus module
32 MB or more in parity SIMMs
SCSI-2 controller
1.05GB fast internal SCSI-2 hard disk
19" colour display capable of 1280x1024 resolution
Industry standard keyboard and mouse
IEEE 802.3 Ethernet and serial ports
150MB 1/4" Desktop Tape Drive
pre-installed: Solaris 2.x, SPARC Compiler C++, Motif Toolkit
A simplified interface to gather and pre-process raw sensor data can be run on a minimally configured Sparc workstation installed in a field environment. The transport of data from the field to the office workstation can be via magnetic tape. The recommended initial configuration is:
Single CPU SPARC 2 Sbus module
32 MB or more in parity SIMMs
SCSI-1 controller
424 MB internal SCSI-1 hard disk
19" colour display capable of 1280x1024 resolution
Industry standard keyboard and mouse
IEEE 802.3 Ethernet and serial ports
150MB 1/4" Desktop Tape Drive
pre-installed: Solaris 1.x, XGL Graphics Software
Appendix: Vehicle motion correction
1 Introduction
Data from both laser and sonar scanners is processed into the form of cylindrical coordinates (r, θ, z). This data can be corrected for movement of the vehicle relative to the pipe axis. This can improve measurement accuracy as the vehicle may not remain at the center of the pipe and oriented parallel with the pipe's longitudinal axis.
There are two methods to determine the required corrections:
1. measure the motion using sensors, or
2. use the measured data to estimate the motion of the vehicle.
A combination of these two methods is utilised, with sensors being used to measure some of the motions and the measured data being used to obtain the others.
Figure 18 shows the orientation detector 12 at an arbitrary orientation and position in the pipe. The orientation of the vehicle can be described by roll, pitch and yaw angles (θ, φ, ψ), while (x, y, z) coordinates can be used to describe its position.
2 Ellipse fitting method
The following steps are undertaken to remove the effects of vehicle motion from measurements:
1. the data is transformed into Cartesian coordinates using:
xi = ri cos θi (1)
yi = ri sin θi (2)
2. an inclusion zone is estimated from the orientation and radius of the previous point; all data not in this region is excluded from the estimator so as not to bias the estimates with the defects in the pipe,
3. an ellipse is fitted to the included data using algorithms outlined below,
4. the data is corrected for rotational misalignment and x- and y-axis translations,
5. the data is transformed back into cylindrical coordinates using
ri = √(xi² + yi²) (3)
θi = tan⁻¹(yi/xi) (4)
The equation of a conic section can be expressed as:
x² + w1y² + w2xy + w3x + w4y + w5 = 0 (5)
A general ellipse with center (xc, yc), major and minor axes of a and b respectively, rotated through γ radians from the x-axis is shown in Figure 19.
The equation for the ellipse in the (x, y) coordinate frame can be determined by suitable transformations from the equation of the ellipse in the (x″, y″) coordinate frame,
x″²/a² + y″²/b² = 1
Using the transformations:
x′ = x − xc, y′ = y − yc
and
x″ = x′ cos γ + y′ sin γ, y″ = −x′ sin γ + y′ cos γ
yields the following equation for the ellipse in the (x, y) coordinate frame:
[Equation 6: the expanded ellipse equation in the (x, y) frame; image imgf000048_0003 in the original publication.]
Equating the coefficients of Equation 6 to those in Equation 5 results in the following equations for the wi:
[Equation 7: the wi in terms of a, b, γ, xc and yc; image imgf000048_0004 in the original publication.]
The N observations are set up as follows:
yi² w1 + xi yi w2 + xi w3 + yi w4 + w5 = −xi², i = 1, …, N
The required parameters (γ, a, b, xc, yc) can be obtained by solving the set of equations in Equation 7. First introduce the substitutions:
[Substitutions; image imgf000049_0001 in the original publication.]
The required parameters (γ, a, b, xc, yc) are given by:
[Closed-form expressions for γ, a, b, xc and yc; image imgf000049_0002 in the original publication.]
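By way of illustration, the least-squares solution of Equation 5 over the N included points might be sketched as follows; the closed-form recovery of (γ, a, b, xc, yc) from the wi, per the relations above, is omitted here:

import numpy as np

def fit_conic(x, y):
    # Least-squares fit of x^2 + w1*y^2 + w2*x*y + w3*x + w4*y + w5 = 0.
    # Each observation (xi, yi) supplies one row of the linear system.
    A = np.column_stack([y ** 2, x * y, x, y, np.ones_like(x)])
    b = -(x ** 2)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Noisy points on an ellipse centred at (3, -2), axes 100 and 95,
# rotated by 0.1 radians from the x-axis.
t = np.linspace(0, 2 * np.pi, 400)
g = 0.1
xe = 3 + 100 * np.cos(t) * np.cos(g) - 95 * np.sin(t) * np.sin(g)
ye = -2 + 100 * np.cos(t) * np.sin(g) + 95 * np.sin(t) * np.cos(g)
w = fit_conic(xe + np.random.normal(0, 0.1, t.size),
              ye + np.random.normal(0, 0.1, t.size))
print(w)   # w1..w5 of the fitted conic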
3 Transforming elliptical data
Using the estimated ellipse parameters, the data can be transformed into a circle by a single rotation (α) about a vector (k) from the center of the ellipse along its minor axis (b), which also forms the radius of the circle. From the ellipse fitting procedure we know the angle γ that the vector k makes with the y-axis; see Figure 20.
[Definition of the rotation axis k in terms of γ; image imgf000049_0003 in the original publication.]
The desired angle α is given by:
α = cos⁻¹(b/a)
Using the general result from "Robot Manipulators: Mathematics, Programming, and Control" by R. P. Paul, MIT Press, 1981, we have that the transformation matrix associated with this rotation is given by:
[Equation 16: the rotation matrix K for a rotation of α about k; image imgf000050_0001 in the original publication.]
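A sketch of that general axis-angle result (the Rodrigues form), assuming a unit axis k and a right-handed rotation convention:

import numpy as np

def rotation_about_axis(k, alpha):
    # Rotation matrix for angle alpha about unit axis k:
    # R = I + sin(alpha) [k]x + (1 - cos(alpha)) [k]x^2
    k = np.asarray(k, dtype=float)
    k /= np.linalg.norm(k)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])       # cross-product matrix [k]x
    return np.eye(3) + np.sin(alpha) * K + (1 - np.cos(alpha)) * (K @ K)

# Axis in the x-y plane at angle gamma from the y-axis, as in Figure 20.
gamma, alpha = 0.3, 0.2
R = rotation_about_axis([np.sin(gamma), np.cos(gamma), 0.0], alpha)
print(np.round(R @ R.T, 6))                  # identity: R is orthonormal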
4 Determining yaw and pitch angles
The yaw (ψ) and pitch (φ) angles of the vehicle can be determined from the fitted ellipse. The transformation of the measured data point (x′i, y′i, z′i) to the point on the circle (xi, yi, zi) can be performed either by:
1. rotation of α about the vector k, as above, or
2. rotation of φ about the x-axis and then ψ about the y-axis,
where, as previously,
[Definition of the axis k; image imgf000050_0002 in the original publication.]
For a rotation of φ about the x-axis we have:
[ 1      0        0      ]
[ 0      cos φ   −sin φ  ]
[ 0      sin φ    cos φ  ]   (17)
For a rotation of ψ about the new y-axis:
[ cos ψ   0   sin ψ ]
[ 0       1   0     ]
[ −sin ψ  0   cos ψ ]   (18)
Therefore the combined transformation matrix is:
YP = [ cos ψ    sin ψ sin φ    sin ψ cos φ ]
     [ 0        cos φ          −sin φ      ]
     [ −sin ψ   cos ψ sin φ    cos ψ cos φ ]   (19)
Thus the required angles can be obtained by equating the matrices in Equations 16 and 19. Using the trace of both matrices gives one equation as follows:
cos ψ + cos φ + cos ψ cos φ + 1 = 2 + 2 cos α (20)
and equating the differences between the elements on the respective off-diagonals of the two matrices (YP[3, 1] − YP[1, 3] = K[3, 1] − K[1, 3]) and (YP[3, 2] − YP[2, 3] = K[3, 2] − K[2, 3]) gives the following:
−sin ψ − sin ψ cos φ = −2 sin γ sin α (21)
sin φ + cos ψ sin φ = −2 sin γ sin α (22)
Solving for φ and ψ using Equations 20 and 21 yields:
[Solution from Equations 20 and 21; image imgf000051_0001 in the original publication.]
while using Equations 20 and 22 yields:
[Solution from Equations 20 and 22; image imgf000051_0002 in the original publication.]
5 Applying center correction to data in cylindrical coordinates
Figure 21 shows the geometry of the correction required to be applied to a measurement (xi, yi) taken from a position not in the center of the pipe.
The center of the pipe is given by the coordinates (xc, yc) from the current position. The known parameters are the angle θi and the measured radius (ri). It is desired to convert these parameters into those at the center of the pipe:
[Corrected parameters at the pipe center; image imgf000051_0004 in the original publication.]
The corrections required are:
[Correction equations; image imgf000051_0003 in the original publication.]
where λ is the angle between rc and the nominal horizontal, given by
λ = tan⁻¹(yc/xc)
and rc is the distance from the current position to the center of the pipe at (xc, yc), given by
rc = √(xc² + yc²)
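A sketch of this center correction, performed as a round trip through Cartesian coordinates (equivalent to the trigonometric corrections above under the stated geometry); the circular 150-unit pipe measured from a position offset 10 units from its center is a synthetic example:

import numpy as np

def center_correct(r, theta, xc, yc):
    # Re-reference measurements (r, theta) taken from the vehicle
    # position to the pipe center located at (xc, yc) relative to it.
    x = r * np.cos(theta) - xc
    y = r * np.sin(theta) - yc
    return np.hypot(x, y), np.arctan2(y, x)

theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
# Radius measured from a position offset by (xc, yc) = (10, 0) inside
# a circular pipe of radius 150 centred at (10, 0):
r_meas = 10 * np.cos(theta) + np.sqrt(150**2 - (10 * np.sin(theta))**2)
r_hat, _ = center_correct(r_meas, theta, 10.0, 0.0)
print(np.allclose(r_hat, 150.0))   # True: constant corrected radius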

Claims

CLAIMS:
1. An inspection system for a conduit comprising measurement means for travelling in said conduit and obtaining data on said conduit, and processing means for processing said data to identify regions of said conduit corresponding to defects in said conduit.
2. An inspection system as claimed in claim 1, wherein said processing means includes means for extracting dimension data on said conduit from said data obtained by said measurement means.
3. An inspection system as claimed in claim 2, wherein said dimension data comprises a plurality of circumferential data of the inner surface of said conduit, said circumferential data each corresponding to points along the path of travel of said measurement means.
4. An inspection system as claimed in claim 3, wherein said processing means includes movement correction means for adjusting said circumferential data to account for movement of said measurement means within said conduit.
5. An inspection system as claimed in claim 2, wherein said processing means includes segmentation means for processing said dimension data into segments which represent regions which could correspond to defects in said conduit.
6. An inspection system as claimed in claim 5, wherein said processing means includes image preprocessing means for generating a grid map of depth values corresponding to the inner surface of said conduit from said dimension data, said grid map providing an image of said surface with pixels of said image having a corresponding depth value.
7. An inspection system as claimed in claim 6, wherein said segmentation means includes feature extraction means for generating feature data and forming a feature vector with the feature data for each pixel.
8. An inspection system as claimed in claim 7, wherein said segmentation means includes means for determining pixels of interest from the feature vectors and assigning respective labels to predetermined types of said pixels of interest.
9. An inspection system as claimed in claim 8, wherein said segmentation means includes means for forming said segments by grouping said pixels of interest having said labels which are similar into said regions.
10. An inspection system as claimed in claim 5, wherein said processing means includes classification means for processing said segments so as to classify said segments as corresponding to predetermined features of said conduit.
11. An inspection system as claimed in claim 10, wherein said classification means includes a neural network classifier for classifying said segments.
12. An inspection system as claimed in claim 11, wherein said classification means includes means for filtering said segments, means for rescaling the filtered segments, means for transforming the rescaled segments from a spatial domain to a power domain, and means for wedge-ring sampling the transformed segments for application to the neural network classifier.
13. An inspection system as claimed in claim 11, wherein said neural network classifier includes a plurality of neural networks for determining whether said segments correspond to one of a plurality of predetermined features, respectively.
14. An inspection system as claimed in claim 10, wherein said processing means includes interpretation means for processing a classified segment to obtain attributes of the corresponding feature for identifying said feature on the basis of said attributes and feature data.
15. An inspection system as claimed in claim 1, wherein said measurement means includes laser scanning means.
16. An inspection system as claimed in claim 1 or 15, wherein said measurement means includes sonar scanning means.
17. An inspection system as claimed in claim 1 or 4, wherein said measurement means includes means for detecting the position and orientation of said measurement means in said conduit.
18. An inspection system as claimed in claim 15, wherein said laser scanning means includes a laser, means for circulating a beam generated by the laser against the surface of said conduit, and camera means for producing image data representative of areas illuminated by said beam.
19. An inspection system as claimed in claim 18, wherein said processing means includes means for extracting, from said image data, position data for pixels of a laser trace produced by said beam.
20. An inspection system as claimed in claim 16, wherein said sonar scanning means includes a rotating sonar transducer for emitting sonar signals to and receiving sonar echoes from the inner surface of said conduit.
21. An inspection system as claimed in claim 20, wherein said processing means includes means for obtaining r,θ dimension data on said inner surface from said sonar echoes by performing curve analysis on said sonar echoes and monitoring the angular position of said rotating sonar transducer.
PCT/AU1994/000409 1993-07-20 1994-07-20 An inspection system for a conduit WO1995003526A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU72599/94A AU7259994A (en) 1993-07-20 1994-07-20 An inspection system for a conduit
EP94922792A EP0710351A4 (en) 1993-07-20 1994-07-20 An inspection system for a conduit

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AUPM0024 1993-07-20
AUPM002493 1993-07-20

Publications (1)

Publication Number Publication Date
WO1995003526A1 true WO1995003526A1 (en) 1995-02-02

Family

ID=3777068

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU1994/000409 WO1995003526A1 (en) 1993-07-20 1994-07-20 An inspection system for a conduit

Country Status (3)

Country Link
EP (1) EP0710351A4 (en)
WO (1) WO1995003526A1 (en)
ZA (1) ZA945334B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110376286B (en) * 2019-06-13 2021-08-27 国网浙江省电力有限公司电力科学研究院 Intelligent automatic ultrasonic detection system and method for in-service basin-type insulator

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0051912A1 (en) * 1980-11-11 1982-05-19 British Gas Corporation Apparatus for monitoring the topography of the internal surface of a pipe
GB2094470A (en) * 1980-11-18 1982-09-15 Dyk Johannes Wilhelmus Van Examining surface profile
GB2102565A (en) * 1981-07-11 1983-02-02 Draftrule Limited Surface inspection
WO1986003295A1 (en) * 1984-11-30 1986-06-05 Lennart Wettervik Method and apparatus for detecting leaks and other defects on sewers and the like channels
EP0282687A2 (en) * 1987-03-20 1988-09-21 Nippon Kokan Kabushiki Kaisha Intrapipe spot examination pig device
US4868648A (en) * 1988-08-19 1989-09-19 Kabushiki Kaisha Iseki Kaihatsu Koki Method and apparatus for inspecting a pipeline in order to determine eccentricity thereof
EP0504591A2 (en) * 1991-03-22 1992-09-23 Denso-Chemie Wedekind KG Device for detecting joints, fissures or the like in drain channels

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3236947A1 (en) * 1982-10-06 1984-04-12 Rainer 6074 Rödermark Hitzel PIPE MANIPULATOR FOR PIPING THROUGH PIPES

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0051912A1 (en) * 1980-11-11 1982-05-19 British Gas Corporation Apparatus for monitoring the topography of the internal surface of a pipe
GB2094470A (en) * 1980-11-18 1982-09-15 Dyk Johannes Wilhelmus Van Examining surface profile
GB2102565A (en) * 1981-07-11 1983-02-02 Draftrule Limited Surface inspection
WO1986003295A1 (en) * 1984-11-30 1986-06-05 Lennart Wettervik Method and apparatus for detecting leaks and other defects on sewers and the like channels
EP0282687A2 (en) * 1987-03-20 1988-09-21 Nippon Kokan Kabushiki Kaisha Intrapipe spot examination pig device
US4868648A (en) * 1988-08-19 1989-09-19 Kabushiki Kaisha Iseki Kaihatsu Koki Method and apparatus for inspecting a pipeline in order to determine eccentricity thereof
EP0504591A2 (en) * 1991-03-22 1992-09-23 Denso-Chemie Wedekind KG Device for detecting joints, fissures or the like in drain channels

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0710351A4 *

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0831299A4 (en) * 1995-03-27 1999-09-22 Toa Grout Kogyo Co APPARATUS FOR OBSERVING THE INTERIOR SURFACE OF A PIPELINE
GB2305796A (en) * 1995-09-26 1997-04-16 London Underground Ltd Monitoring track condition
FR2771184A1 (en) * 1997-11-17 1999-05-21 Jean Pierre Quatrefages Acoustic imaging system for use under water to identify objects
WO2000063607A1 (en) * 1999-04-16 2000-10-26 Hans Oberdorfer Device and method for inspecting hollow spaces
US6788334B2 (en) 1999-04-16 2004-09-07 Hans Oberdorfer Device and method for inspecting hollow spaces
WO2002006848A3 (en) * 2000-07-14 2002-04-25 Lockheed Corp System and method for locating and positioning an ultrasonic signal generator for testing purposes
US6643002B2 (en) 2000-07-14 2003-11-04 Lockheed Martin Corporation System and method for locating and positioning an ultrasonic signal generator for testing purposes
NO337559B1 (en) * 2000-11-29 2016-05-09 Cooper Cameron Corp Ultrasonic testing system
WO2002044709A3 (en) * 2000-11-29 2003-08-14 Cooper Cameron Corp Ultrasonic testing system for tubulars
GB2385129B (en) * 2000-11-29 2004-12-01 Cooper Cameron Corp Ultrasonic testing system
WO2002063607A3 (en) * 2001-01-19 2003-05-01 Lockheed Corp Remote laser beam delivery system and method for use with a gantry positioning system for ultrasonic testing purposes
CN100335838C (en) * 2002-04-19 2007-09-05 诺斯克埃莱克特罗奥普体克公司 Pipeline internal inspection device and method
US6931149B2 (en) 2002-04-19 2005-08-16 Norsk Elektro Optikk A/S Pipeline internal inspection device and method
NO334482B1 (en) * 2002-04-19 2014-03-17 Norsk Elektro Optikk As Apparatus and method for internal inspection of pipeline
WO2003089833A1 (en) * 2002-04-19 2003-10-30 Norsk Elektro Optikk As Pipeline internal inspection device and method
EP1840505A1 (en) * 2006-03-31 2007-10-03 Coperion Werner & Pfleiderer GmbH & Co. KG Measuring device for the determination of the wear state of wells of screw extruders
US7717003B2 (en) 2006-03-31 2010-05-18 Coperion Gmbh Measuring device for detecting the state of wear of the bore walls of two interpenetrating housing bores
EA012925B1 (en) * 2006-06-30 2010-02-26 В Э М Франс Non-destructive testing foundry products by ultrasound
AU2007264866B2 (en) * 2006-06-30 2012-04-26 V & M France Non-destructive testing by ultrasound of foundry products
NO340510B1 (en) * 2006-06-30 2017-05-02 V & M France Non-destructive testing, especially for pipes in production or in finished condition
FR2903187A1 (en) * 2006-06-30 2008-01-04 Setval Sarl NON-DESTRUCTIVE CONTROL, ESPECIALLY FOR TUBES DURING MANUFACTURING OR IN THE FINAL STATE
US8265886B2 (en) 2006-06-30 2012-09-11 V & M France Non-destructive testing, in particular for pipes during manufacture or in the finished state
AU2007264866C1 (en) * 2006-06-30 2014-01-16 V & M France Non-destructive testing by ultrasound of foundry products
WO2008000940A1 (en) * 2006-06-30 2008-01-03 V & M France Non-destructive testing by ultrasound of foundry products
NL1032345C2 (en) * 2006-08-18 2008-02-19 Martijn Van Der Valk Inspection system and device.
WO2008099177A1 (en) * 2007-02-14 2008-08-21 Sperry Rail (International) Limited Photographic recording of a rail surface
EP2626624A4 (en) * 2010-10-04 2016-12-21 Mitsubishi Heavy Ind Ltd Device for monitoring thickness reduction of inner surface in heat transfer pipe or inner surface in evaporation pipe
EP2447664A1 (en) * 2010-10-26 2012-05-02 Leistritz Extrusionstechnik GmbH Device for recording measurement information from an internal surface of a hollow body, in particular a borehole of a single or dual shaft extruder cylinder
DE102010049401A1 (en) * 2010-10-26 2012-04-26 Leistritz Extrusionstechnik Gmbh Device for acquiring measurement information from an inner surface of a hollow body, in particular a bore of a single- or twin-screw extruder cylinder
EP3084419A4 (en) * 2013-12-17 2017-11-15 Ontario Power Generation Inc. Improved ultrasound inspection
EP4386777A3 (en) * 2013-12-17 2024-08-07 Ontario Power Generation Inc. Improved ultrasound inspection
BE1022397B1 (en) * 2014-01-17 2016-03-22 Vliegen Nv MEASURING TECHNOLOGY
US9739411B1 (en) 2014-08-06 2017-08-22 The United States Of Americas As Represented By The Administrator Of The National Aeronautics And Space Administration System and method for traversing pipes
RU169803U1 (en) * 2016-12-21 2017-04-03 Ильвина Гамировна Хуснутдинова Device for contactless control of stress-strain state and level of damage to metal structures
US12222298B2 (en) 2018-05-04 2025-02-11 Hydromax USA, LLC Multi-sensor pipe inspection system and method
US11598728B2 (en) * 2018-05-04 2023-03-07 Hydromax USA, LLC Multi-sensor pipe inspection system and method
US11629807B1 (en) 2019-02-12 2023-04-18 Davaus, LLC Drainage tile inspection system
DE102019108743A1 (en) * 2019-04-03 2020-10-08 Jt-Elektronik Gmbh Inspection unit that can be moved in a channel
DE102019108743B4 (en) 2019-04-03 2022-01-13 Jt-Elektronik Gmbh Inspection unit that can be moved in a channel and a method for operating the inspection unit
WO2022032379A1 (en) * 2020-08-10 2022-02-17 Hifi Engineering Inc. Methods and systems for tracking a pipeline inspection gauge
CN113223177A (en) * 2021-05-12 2021-08-06 武汉中仪物联技术股份有限公司 Pipeline three-dimensional model construction method and system based on standard attitude angle correction
CN114459353A (en) * 2022-02-25 2022-05-10 广东工业大学 A device and method for measuring the position and orientation of a pipeline
CN117646828A (en) * 2024-01-29 2024-03-05 中国市政工程西南设计研究总院有限公司 Device and method for detecting relative displacement and water leakage of pipe jacking interface
CN117646828B (en) * 2024-01-29 2024-04-05 中国市政工程西南设计研究总院有限公司 Device and method for detecting relative displacement and water leakage of pipe jacking interface
CN118583403A (en) * 2024-05-30 2024-09-03 点夺机电工程江苏有限公司 A kind of air leakage detection system for pipeline

Also Published As

Publication number Publication date
EP0710351A1 (en) 1996-05-08
ZA945334B (en) 1995-02-28
EP0710351A4 (en) 1996-11-20

Similar Documents

Publication Publication Date Title
WO1995003526A1 (en) An inspection system for a conduit
CN109118542A (en) Scaling method, device, equipment and storage medium between laser radar and camera
Daniel et al. Side-scan sonar image matching
CN113281780B (en) Method and device for marking image data and electronic equipment
JP6756889B1 (en) Vortex detector, vortex detection method, program and trained model
CN110370287A (en) Subway column inspection robot path planning's system and method for view-based access control model guidance
Kawashima et al. Finding the next-best scanner position for as-built modeling of piping systems
CN110161053A (en) Defect detecting system
US6256035B1 (en) Image processing method and apparatus
AU7259994A (en) An inspection system for a conduit
JPH0854219A (en) Image processor
CN118191855A (en) Distance identification method, device and storage medium in power scene
CN115272248B (en) Intelligent detection method for fan gesture and electronic equipment
Magin et al. A dynamic 3D environmental model with real-time access functions for use in autonomous mobile robots
CN114167443B (en) Information completion method, device, computer equipment and storage medium
Witzgall et al. Recovering spheres from 3D point data
Mermigkas et al. Constructing Visibility Maps of Optimal Positions for Robotic Inspection in Ultra-High Voltage Centers
Elgazzar et al. 3D data acquisition for indoor environment modeling using a compact active range sensor
Wu et al. Power transmission line reconstruction from sequential oblique uav images
CN119225238A (en) Equipment monitoring visualization system based on digital twin technology
JP3048896B2 (en) Noise removal filter device for binary image
Liu et al. A Design and Experimental Method of Perception Fusion
CN114167441B (en) Information collection method, device, computer equipment and storage medium
Westling et al. Object recognition by fast hypothesis generation and reasoning about object interactions
Najjari et al. 3D CAD-based object recognition for a flexible assembly cell

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AM AT AU BB BG BR BY CA CH CN CZ DE DK ES FI GB GE HU JP KE KG KP KR KZ LK LT LU LV MD MG MN MW NL NO NZ PL PT RO RU SD SE SI SK TJ TT UA US UZ VN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE MW SD AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref country code: US

Ref document number: 1996 583041

Date of ref document: 19960117

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 1994922792

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1994922792

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA

WWW Wipo information: withdrawn in national office

Ref document number: 1994922792

Country of ref document: EP

点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载