
WO2017198746A1 - Methods and systems for underwater digital image processing - Google Patents


Info

Publication number
WO2017198746A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
colour
camera
colour image
pixels
Application number
PCT/EP2017/061910
Other languages
French (fr)
Inventor
Taibali Dossaji
George Sewell
Gavin Spence
Douglas Hetherington
Ivan Micovic
Nebojsa MRMAK
Original Assignee
Tomtom International B.V.
Application filed by Tomtom International B.V. filed Critical Tomtom International B.V.
Publication of WO2017198746A1 publication Critical patent/WO2017198746A1/en

Classifications

    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B17/00 Details of cameras or camera bodies; Accessories therefor
    • G03B17/02 Bodies
    • G03B17/08 Waterproof bodies or housings
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 Cameras or camera modules for generating image signals from different wavelengths
    • H04N23/60 Control of cameras or camera modules
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/84 Camera processing pipelines for processing colour signals
    • H04N23/88 Camera processing pipelines for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control

Definitions

  • the present invention relates to the processing of image data, and in particular to methods and systems for processing colour image data collected by a digital camera capable of operating underwater, e.g. so as to modify and improve the colours associated with the colour image data such that they are more realistic, i.e. closer to those of the actual environment as it would be seen by a user when underwater.
  • the present invention is particularly beneficial in digital video cameras, but is equally applicable to digital still cameras.
  • Digital cameras such as video cameras and still cameras, are increasingly being used in outdoors and sports settings.
  • Such cameras, which are often referred to as "action cameras" in the case of video cameras, are commonly attached to a user, sports equipment or a vehicle and are operated to capture video data, and typically also audio data, during a sports session with minimal user interaction.
  • WO 2011/047790 A1 discloses a video camera comprising some or all of an integrated GPS device, speed or acceleration measuring device, time measuring device, temperature measuring device, heart rate measuring device, barometric altitude measuring device and an electronic compass.
  • These sensors can be integrated in the camera itself, or can be remote from the camera and operatively connected to the camera using a wired or wireless connection.
  • Action cameras capable of being used underwater, through the use of a watertight housing that is either integral to the camera or a separate housing into which the camera can be fitted, are becoming more commonplace. Such cameras can be used, for example, by scuba divers to take photographs and video as they explore coral reefs or shipwrecks.
  • WO 2011/060600 A1
  • WO 2011/119336 A1 describes a camera with a pressure sensor that can be used to switch the camera between normal and underwater modes of operation.
  • a characteristic of imagery captured underwater is that the colour and contrast of captured images deteriorate, with the subjects in such images appearing colourless and indistinct, since, at any given depth, the light entering the camera from an object in the underwater environment is attenuated based, for example, on the depth of the camera and the distance from the object to the camera. The attenuation becomes larger both as the depth increases and as the distance to the underwater object increases.
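The wavelength-dependent attenuation described above can be illustrated with a simple Beer-Lambert-style model. This is only a sketch: the per-channel coefficients and the function name are hypothetical values chosen to show that red light is lost much faster than blue as the water path lengthens, not figures from the patent.

```python
import math

# Illustrative Beer-Lambert-style attenuation: intensity decays
# exponentially with the total water path (depth plus camera-to-object
# distance). Coefficients are hypothetical, chosen only to show that
# red attenuates far faster than blue.
ATTENUATION_PER_METRE = {"red": 0.35, "green": 0.07, "blue": 0.03}

def attenuated_intensity(i0, channel, depth_m, distance_m):
    path = depth_m + distance_m
    return i0 * math.exp(-ATTENUATION_PER_METRE[channel] * path)

# At 5 m depth, 2 m from the subject, red light is largely lost while
# blue survives, which is why raw underwater footage looks blue-green.
for channel in ("red", "green", "blue"):
    print(channel, round(attenuated_intensity(255.0, channel, 5.0, 2.0), 1))
```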
  • Figure 1 depicts a graph taken from the article "Underwater Light Field and its Comparisons to Metal Halide Lighting” by Sanjay Joshi, PhD published in Advanced
  • a method of processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater comprising:
  • obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for at least one of a plurality of colour components;
  • the present invention extends to a system, e.g. an image or video processing system comprising one or more processors, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described.
  • the image or video processing system can be operatively connected to at least one image sensor and optionally to one or more sensor devices (as discussed in more detail below), so as to form a digital camera that is capable of operating underwater.
  • the digital camera can be a still camera that is arranged to take photographs, i.e. digital images, but in preferred embodiments is a digital video camera that is arranged to record video, i.e. a series of digital images.
  • the method is repeatedly applied to colour image data that is collected by the at least one image sensor.
  • a system for processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater comprising: means for obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for at least one of a plurality of colour components;
  • the present invention is a computer implemented invention, and any of the steps described in relation to any of the aspects or embodiments of the invention may be carried out by a set of one or more processors that execute software comprising computer readable instructions stored on a non-transitory computer readable medium.
  • the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa.
  • the present invention is concerned with methods and systems for processing colour image data representative of an underwater environment that is collected by an image sensor of a digital camera, and preferably a digital video camera.
  • the processing of the colour image data provided by the present invention, in accordance with any of its embodiments, allows the colours of the image data to be corrected during recording, i.e. before they are encoded and typically stored in a memory of the camera, so as to overcome the spectral attenuation that occurs when underwater, without the need for depth specific filters and/or other post processing techniques.
  • colour image data representative of an underwater environment is obtained from at least one image sensor of a digital camera.
  • the colour image data will typically be in an unprocessed (or raw) format, e.g. as obtained directly from the at least one image sensor, but it is envisaged that the colour image data may be in a processed format, e.g. after a demosaicing step, such as an RGB colour format.
  • the image sensor is typically a complementary metal-oxide-semiconductor (CMOS) sensor or a charge-coupled device (CCD) sensor, which comprises a colour filter array (CFA) or colour filter mosaic (CFM), so as to generate colour image data in a raw format.
  • each pixel of the raw image data corresponds to a photosensor (or pixel sensor) of the image sensor, and thus each pixel has an intensity value for one of a plurality of colour components.
  • the plurality of colour components typically comprise red, blue and green, although the particular colour components will depend, as will be appreciated, on the CFA of the image sensor.
  • a common filter is the Bayer filter, which has a filter pattern that is 50% green, 25% red and 25% blue, and hence is also called RGBG.
  • other filters could also be used such as CYYM, which is 50% yellow, 25% cyan and 25% magenta, or CYGM, which is 25% cyan, 25% yellow, 25% green and 25% magenta.
  • a pixel could be associated with an intensity value for each of the plurality of colour components.
  • the photosensors are vertically stacked, such that each pixel has an intensity value for each colour component.
  • each pixel has an intensity value for each colour component.
  • a pixel is associated with an intensity value for each of the colour components, e.g. red, green and blue (RGB).
  • the intensity value for each colour component is represented by one or more bits.
  • the intensity value for each colour component is represented by 8 bits, and so the intensity value can take a value from 0 to 255. It will be appreciated, however, that a greater or fewer number of bits can be used as desired to represent the intensity value.
  • the colour image data from the image sensor, or the plurality of image sensors in some embodiments, is divided into a plurality of regions, sometimes referred to as paxels, wherein each region comprises a plurality of pixels.
  • the division of the colour image data into a plurality of regions can be thought of as down sampling the image data, so as to create a low resolution version of the image data.
  • the down sampling can be performed using any suitable technique, but preferably the image data is divided into a grid of non-overlapping regions, such as a 4x4 grid, a 6x6 grid, an 8x8 grid, etc.
  • the number of regions can be selected as desired, although it has been found that there is little improvement in image quality if the colour image data is divided into more regions than an 8x8 grid.
  • an intensity value for at least two of the colour components is determined for each of the regions into which the colour image data is divided.
  • an intensity value is determined for each of the colour components, e.g. red, green and blue, but as will be discussed in more detail below the method, at least in some embodiments, only makes use of the intensity values of two of the colour components, e.g. blue and red (for blue water environments, such as the ocean).
  • the intensity value of a colour component for a region is based on the intensity values of at least some of the pixels of the region, such that the intensity value for a region is representative of the intensity values of at least some of the plurality of pixels in the region.
  • the intensity value of a colour component for a region can be an average (or other similar measure) of the intensity values of at least some, and typically all, of the pixels in the region associated with the particular colour component.
  • the intensity value of a colour component for a region will typically be based on the intensity values of the colour component of all the pixels in the region associated with that colour component.
  • some of the pixels may be filtered out for various reasons, such as being defective, over exposed, etc., e.g. in a thresholding operation, and so the intensity value of a colour component for a region may be based on the intensity values of the colour component of only some of the pixels in the region associated with that colour component.
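The division into paxels and the per-region intensity averaging described in the preceding bullets can be sketched as follows. The image representation (a list of rows of (R, G, B) tuples) and the name `region_means` are illustrative, not taken from the patent.

```python
# Sketch of dividing colour image data into a grid of "paxels" and
# computing a per-region mean intensity for each colour component.

def region_means(image, grid=(4, 4)):
    rows, cols = len(image), len(image[0])
    gy, gx = grid
    means = []
    for ry in range(gy):
        row_means = []
        for rx in range(gx):
            # Pixel bounds of this non-overlapping region.
            y0, y1 = ry * rows // gy, (ry + 1) * rows // gy
            x0, x1 = rx * cols // gx, (rx + 1) * cols // gx
            n = (y1 - y0) * (x1 - x0)
            sums = [0, 0, 0]
            for y in range(y0, y1):
                for x in range(x0, x1):
                    for c in range(3):
                        sums[c] += image[y][x][c]
            row_means.append(tuple(s / n for s in sums))
        means.append(row_means)
    return means

# A uniform 8x8 mid-grey test image downsamples to a 4x4 grid of means.
grey = [[(128, 128, 128)] * 8 for _ in range(8)]
print(region_means(grey)[0][0])  # (128.0, 128.0, 128.0)
```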
  • one or more ratios between determined intensity values for the region are determined.
  • the method may involve determining one or more or all of the ratios between the intensity values of the plurality of colour components, but typically, and in preferred embodiments, only one ratio is determined.
  • the determined ratio is compared to a predetermined threshold value associated with an underwater depth, so as to identify those regions of the colour image data that are primarily water.
  • each ratio is compared to a predetermined threshold value appropriate for the particular ratio, i.e. each ratio is preferably compared to a different predetermined threshold value.
  • the blue/red ratio is preferably determined (as this is appropriate for blue water environments, which are more common than green water environments). However, either of the red/green or the blue/green ratio could be determined and used if the digital camera is being used in green water environments.
  • the one or more ratios selected for use in the method can be based on a position of the digital camera, e.g. whether it is in the vicinity of a body of water having a blue water environment or a green water environment.
  • the selection of whether the camera is to be used in a blue water environment or a green water environment, and thus the selection of the ratio or ratios used in the method can be manual, i.e. based on a received user input, or automatic, e.g. based on a position obtained from a positioning determining device.
  • the position determining device can be, for example, a global navigation satellite system (GNSS) receiver, and the obtained position may be in any form as desired, but will commonly be a set of geographic coordinates, e.g. a latitude and a longitude.
  • the position determining device could be integrated in the camera itself, or can be remote from the camera and operatively connected to the camera using a wired or wireless connection. In these latter embodiments, the position of the digital camera as obtained from the position determining device is compared to digital map data comprising information about the location and type of bodies of water, with the selection of the ratio or ratios used in the method being based on whether the camera is in the vicinity, e.g. a predetermined distance, of a particular body of water.
  • the determined ratio, or ratios, is compared to a predetermined threshold value associated with an underwater depth.
  • the threshold value is chosen based on a depth at which the digital camera is commonly going to be used. In such embodiments, for example, a threshold value can be selected that is applicable when the camera is being used at a depth of 5m underwater.
  • a plurality of threshold values can be used, wherein each threshold value is associated with a different underwater depth. Any number of predetermined threshold values could be used as desired.
  • the method may use a threshold value for 5m, for 10m, for 15m, etc.
  • a look up table could be stored in a memory of the camera that stores a predetermined threshold value for each of a plurality of underwater depths or depth ranges.
  • the predetermined threshold value to be used in the method can be selected from a plurality of predetermined threshold values based on an underwater depth of the digital camera.
  • the depth of the digital camera could be determined based on a received user input, e.g. the user could select the depth at which they will primarily be using the camera.
  • the depth of the digital camera could be determined automatically based on data received from a pressure sensor.
  • the pressure sensor could be integrated in the camera itself, or can be remote from the camera and operatively connected to the camera using a wired or wireless connection. Accordingly, in such embodiments, the method automatically adjusts to the changing depth of the camera.
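The depth-dependent look-up table described above can be sketched as follows. The threshold values and depth bands here are illustrative placeholders, not figures taken from the patent.

```python
import bisect

# Hypothetical look-up table mapping depth ranges to blue/red ratio
# thresholds, stored in camera memory per the text above.
DEPTHS_M = [5, 10, 15, 20]          # upper bound of each depth range
THRESHOLDS = [1.6, 2.2, 3.0, 4.0]   # blue/red ratio threshold per range

def threshold_for_depth(depth_m):
    """Select the predetermined threshold for the camera's current depth
    (e.g. as reported by a pressure sensor); depths beyond the table
    fall back to the deepest entry."""
    i = min(bisect.bisect_left(DEPTHS_M, depth_m), len(DEPTHS_M) - 1)
    return THRESHOLDS[i]

print(threshold_for_depth(7.5))  # falls in the 5-10 m range -> 2.2
```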
  • a subset of the plurality of regions is selected based on the result of the comparison between the determined at least one ratio and the relevant predetermined threshold value.
  • the subset of regions are those that have been determined not to be primarily water, and therefore are beneficial to use in the subsequent automatic white balance operation.
  • a first subset of regions is identified where the ratio for the region is greater than the predetermined threshold value, and a second subset of regions is identified where the ratio for the region is less than the predetermined threshold value.
  • one of the first and second subsets of regions will be substantially representative of water in the underwater environment, and the other of the subset of regions will be substantially representative of objects in the underwater environment, e.g. rocks, people, fish, vegetation, the seabed, etc.
  • the method of the invention preferably makes use of the second subset of regions, and it is this second subset of regions that are selected based on the result of the comparison.
  • two subsets of regions are determined for each ratio, one substantially representative of water and the other substantially representative of objects in the underwater environment.
  • the selected subset of regions can, in one embodiment, be a combination of the subsets of regions that are substantially representative of objects in the underwater environment or, in another embodiment, be those regions that appear in each of the subsets of regions that are substantially representative of objects in the underwater environment.
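The splitting of regions into a "primarily water" subset and an "objects" subset by ratio thresholding, for the blue-water (blue/red ratio) case, can be sketched as below; the function and variable names are illustrative.

```python
# Sketch of selecting the subset of regions not dominated by water by
# comparing each region's blue/red intensity ratio to a predetermined
# threshold (blue-water case).

def select_object_regions(regions, threshold):
    """regions: list of (mean_red, mean_green, mean_blue) per paxel.
    Returns indices of regions whose blue/red ratio is below the
    threshold, i.e. regions substantially representative of objects."""
    selected = []
    for i, (r, g, b) in enumerate(regions):
        ratio = b / r if r > 0 else float("inf")
        if ratio < threshold:  # strongly blue regions are treated as water
            selected.append(i)
    return selected

regions = [(20, 60, 90),   # very blue -> water
           (80, 70, 60),   # reddish rock -> object
           (10, 40, 75)]   # very blue -> water
print(select_object_regions(regions, threshold=2.0))  # [1]
```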
  • an automatic white balancing operation is performed using the selected subset of regions to determine a set of modifications to be applied to the intensity values of at least some of the pixels of the colour image data.
  • Any suitable or desirable automatic white balancing operation could be performed in the present invention, such as a Gray World algorithm, which incorporates the gray world assumption that the average reflectance of a scene is achromatic, or a White Patch algorithm, which incorporates the white world assumption and is based on the Retinex theory of visual colour constancy, or any combination of different techniques.
  • the white balancing operation is performed using pixels in the selected subset of regions, and thus does not use pixels in other regions of the colour image data that are not selected.
  • all the pixels of the subset of regions are used for white balancing, i.e. in the case of global techniques such as Gray World, White Patch, etc. It is envisaged, however, that in other embodiments local white balancing techniques may be used, and thus the white balancing operation may be performed using only some of the pixels of the selected subset of regions.
  • the result of the automatic white balancing operation is a set of modifications to be applied to the intensity values of at least some pixels of the colour image data.
  • the set of modifications comprise a set of gains for at least some of the colour components, typically for all but one of the colour components.
  • the term "gain" is used to mean a coefficient by which the obtained intensity value of a pixel is multiplied.
  • the set of modifications therefore cause the intensity values of the pixels of the colour image data to which they are applied to be modified.
  • the set of modifications are applied to those pixels that are associated with the relevant colour component, thereby causing the intensity value for those pixels to be changed.
  • the set of modifications are applied to all pixels of the colour image data, thereby causing the intensity value for the relevant colour components of each pixel to be changed.
  • a Gray World algorithm is used for white balancing, which is based on the assumption that the intensity value for each colour component of the subset of regions of the colour image data should be the same (or at least substantially the same).
  • the intensity value for a colour component of the subset of regions is determined as the average intensity value for the colour component in question of all the pixels in the subset of regions.
  • the aim of the white balancing operation is to determine gains to be applied to two of the channels, typically the red and blue channels, that cause the average intensity values of the blue, green and red channels of the pixels in the subset of regions to be the same (or at least substantially the same).
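The Gray World gain computation just described can be sketched as follows: the red and blue gains are chosen so that the average red and blue intensities of the selected regions match the average green intensity, which is left unmodified. The function name is illustrative.

```python
# Minimal Gray World sketch over the pixels of the selected subset of
# regions: returns the (red, blue) gains that equalise the channel
# averages against the untouched green channel.

def gray_world_gains(pixels):
    """pixels: iterable of (R, G, B) from the selected subset of regions."""
    n = 0
    sums = [0.0, 0.0, 0.0]
    for r, g, b in pixels:
        sums[0] += r; sums[1] += g; sums[2] += b
        n += 1
    avg_r, avg_g, avg_b = (s / n for s in sums)
    return avg_g / avg_r, avg_g / avg_b  # (red gain, blue gain)

# Underwater pixels are blue-heavy and red-poor; the gains compensate.
red_gain, blue_gain = gray_world_gains([(40, 80, 160), (60, 120, 240)])
print(red_gain, blue_gain)  # 2.0 0.5
```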
  • the set of modifications that are determined using the automatic white balancing operation may be modified, such that the set of modifications applied to a frame do not differ from those applied to the previous frame by more than a predetermined amount.
  • the predetermined value can be 0.5%, i.e. a time constant of 200 frames, typically around 6-7 seconds, although any value could be chosen as desired.
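The frame-to-frame limiting of the modifications can be sketched as a simple clamp: each new gain may differ from the previous frame's gain by at most the predetermined fraction (0.5% here, per the text), so a sudden change in scene is smoothed over roughly 200 frames. The function name is illustrative.

```python
# Sketch of limiting frame-to-frame gain changes to a predetermined
# step, per the 0.5% figure given in the text.

MAX_STEP = 0.005  # 0.5% per frame

def smoothed_gain(previous, target):
    """Clamp the newly determined gain so it differs from the previous
    frame's gain by at most MAX_STEP."""
    lo, hi = previous * (1 - MAX_STEP), previous * (1 + MAX_STEP)
    return min(max(target, lo), hi)

# A sudden jump from 1.0 towards 2.0 is applied gradually.
gain = 1.0
for _ in range(3):
    gain = smoothed_gain(gain, 2.0)
print(round(gain, 6))  # grows by at most 0.5% per frame
```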
  • a method of processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater comprising:
  • obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for at least one of a plurality of colour components;
  • the present invention extends to a system, e.g. an image or video processing system comprising one or more processors, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described.
  • the image or video processing system can be operatively connected to at least one image sensor and a pressure sensor, and optionally one or more other sensor devices, so as to form a digital camera that is capable of operating underwater.
  • the digital camera can be a still camera that is arranged to take photographs, i.e. digital images, but in preferred embodiments is a digital video camera that is arranged to record video, i.e. a series of digital images.
  • the method is repeatedly applied to colour image data that is collected by the at least one image sensor.
  • a system for processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater comprising:
  • colour image data comprises a plurality of pixels, each pixel having an intensity value for at least one of a plurality of colour components
  • the present invention is a computer implemented invention, and any of the steps described in relation to any of the aspects or embodiments of the invention may be carried out by a set of one or more processors that execute software comprising computer readable instructions stored on a non-transitory computer readable medium.
  • the colour image data obtained from the at least one image sensor can be in an unprocessed (or raw) format, e.g. as obtained directly from the at least one image sensor, or can be in a processed format, e.g. after a demosaicing step.
  • the pressure sensor could be integrated in the camera itself, or could be remote from the camera and operatively connected to the camera using a wired or wireless connection.
  • the performance of the selected automatic white balancing operation results in determining a set of modifications, e.g. gains, to be applied to intensity values of at least some of the pixels of the colour image data, which, when applied, cause the generation of modified colour image data.
  • an automatic white balancing operation is selected based on the depth of the camera as obtained from the pressure sensor.
  • the automatic white balancing operation may be a global algorithm in which all of the pixels of the colour image data are used for colour temperature estimation, or, in other embodiments, the automatic white balancing operation is a local algorithm, e.g. as described in the above, in which only those pixels of the colour image data that satisfy certain conditions are used for colour temperature estimation.
  • the specific algorithm that is used for colour temperature estimation can be chosen as desired, and may, for example, be a Gray World algorithm, a White Patch algorithm or the like.
  • the depth of the camera obtained from the pressure sensor can be used to select one of a plurality of different algorithms to be applied to the colour image data.
  • the obtained depth of the camera can be used to select one or more parameters to be used in an algorithm to be applied to the colour image data. Additionally, or alternatively, the obtained depth of the camera can be used to select a subset of the plurality of pixels of the colour image data to which an algorithm is applied, e.g. in the manner of the above described aspects and embodiments.
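The depth-based selection of an automatic white balancing operation, and of its parameters, can be sketched as below. The depth bands, algorithm names and parameters are purely hypothetical examples of the kind of choice the text describes.

```python
# Hypothetical selection of a white balance algorithm and its
# parameters from the depth reported by the pressure sensor.

def select_white_balance(depth_m):
    if depth_m < 2.0:
        # Near the surface, highlights are still usable.
        return ("white_patch", {})
    if depth_m < 10.0:
        # Mid depths: local algorithm over a region grid.
        return ("gray_world_local", {"grid": (8, 8)})
    # Deep water: global algorithm with a cap on the red gain.
    return ("gray_world_global", {"red_gain_cap": 4.0})

print(select_white_balance(6.0))  # ('gray_world_local', {'grid': (8, 8)})
```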
  • the method comprises obtaining a position of the camera from a position determining device, such as a GNSS receiver, and the selection of the automatic white balancing operation is further based on the obtained position of the camera.
  • the position of the camera and the depth obtained from the pressure sensor can be used to select one of a plurality of different algorithms to be applied to the colour image data.
  • the obtained position and depth of the camera can be used to select one or more parameters to be used in an algorithm to be applied to the colour image data.
  • the obtained position and depth of the camera can be used to select a subset of the plurality of pixels of the colour image data to which an algorithm is applied.
  • once the set of modifications, e.g. gains, have been determined, they are applied to at least some of the pixels of the obtained colour image data, so as to generate modified colour image data.
  • the modified colour image data will be in the same format as the obtained colour image data, since the set of modifications are applied on a per pixel basis and cause the intensity value of at least some pixels to be changed.
  • the modified colour image data is subsequently subjected to a demosaicing step, so as to be converted into a processed format, such as an RGB format.
  • the modified colour image data will also be in the processed format.
  • the modified colour image data, once in a processed format such that each pixel has an intensity value for each of a plurality of colour components, e.g. red (R), green (G) and blue (B), is subjected to a colour correction operation, e.g. RGB blending.
  • the plurality of colour components of the colour image data used in the colour correction operation may be the same or different as the plurality of colour components of the colour image data as used during the automatic white balancing operation.
  • the colour correction operation is used to adjust the image data to the human colour spectrum, and typically comprises a matrix transformation, optionally with an added offset, that is applied to the intensity values of each of the plurality of colour components of the modified colour image data (after conversion if required).
  • the colour correction operation may be based on a position of the digital camera and/or a depth of the digital camera, e.g. with one or more predetermined matrices stored in a memory of the camera being selected based on the position and/or depth of the camera.
  • the selection of the matrix or matrices (if an offset matrix is also used) is preferably based on the position and/or depth of the camera.
  • the selection of the matrix or matrices used in the colour correction operation may be manual, i.e. based on a received user input that indicates the position of the camera and/or the depth at which the camera is to be operated, or automatic, e.g. based on a position obtained from a position determining device and/or a depth obtained from a pressure sensor.
  • the position determining device and/or pressure sensor could be integrated in the camera itself, or can be remote from the camera and operatively connected to the camera using a wired or wireless connection.
  • a look up table could be stored in a memory of the camera that stores at least a predetermined transformation matrix for each of a plurality of underwater depths or depth ranges and/or geographic areas. The obtained position and/or depth is then preferably used to select the appropriate matrix or matrices from the look up table.
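The matrix transformation with optional offset described above can be sketched per pixel as below. The matrix and offset values are a hypothetical example (an identity with a mild red boost), not coefficients from the patent.

```python
# Sketch of the colour correction step: a 3x3 matrix transformation,
# optionally with an added offset, applied to each pixel's (R, G, B)
# intensity values, with the result clamped to the 8-bit range.

MATRIX = [[1.2, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.0, 0.0, 0.9]]
OFFSET = [5, 0, 0]

def colour_correct(pixel):
    out = []
    for row, off in zip(MATRIX, OFFSET):
        v = sum(m * c for m, c in zip(row, pixel)) + off
        out.append(min(max(int(round(v)), 0), 255))  # clamp to 0..255
    return tuple(out)

print(colour_correct((100, 150, 200)))  # (125, 150, 180)
```

In practice the camera would hold one such matrix (and optional offset) per depth range and/or geographic area in its look-up table, selecting among them as the obtained depth or position changes.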
  • a method of processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater comprising:
  • obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for each of a plurality of colour components;
  • the present invention extends to a system, e.g. an image or video processing system comprising one or more processors, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described.
  • the image or video processing system can be operatively connected to at least one image sensor and a pressure sensor, and optionally one or more other sensor devices, so as to form a digital camera that is capable of operating underwater.
  • the digital camera can be a still camera that is arranged to take photographs, i.e. digital images, but in preferred embodiments is a digital video camera that is arranged to record video, i.e. a series of digital images.
  • the method is repeatedly applied to colour image data that is collected by the at least one image sensor.
  • a system for processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater comprising:
  • colour image data comprises a plurality of pixels, each pixel having an intensity value for each of a plurality of colour components
  • the present invention is a computer implemented invention, and any of the steps described in relation to any of the aspects or embodiments of the invention may be carried out by a set of one or more processors that execute software comprising computer readable instructions stored on a non-transitory computer readable medium.
  • the colour image data obtained from the at least one image sensor is preferably in a processed format, such that each pixel has an intensity value for each of a plurality of colour components, e.g. red (R), green (G) and blue (B).
  • the pressure sensor could be integrated in the camera itself, or could be remote from the camera and operatively connected to the camera using a wired or wireless connection.
  • the colour correction operation preferably comprises a matrix transformation, optionally with an added offset, that is applied to the intensity values of each of the plurality of colour components of the colour image data.
  • the selection of the colour correction operation based on the obtained depth of the camera preferably comprises selecting one of a plurality of predetermined transformation matrices, and optionally one of a plurality of predetermined offset matrices.
  • a look up table could be stored in a memory of the camera that stores at least a predetermined transformation matrix for each of a plurality of underwater depths or depth ranges, and the obtained depth is preferably used to select the matrix from the look up table associated with the depth or depth range matching the obtained depth.
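By way of illustration only, the depth-keyed look up table described above might be sketched as follows in Python; the depth ranges, matrix values and offsets here are hypothetical placeholders for the sake of the sketch, not calibrated data from the invention:

```python
import numpy as np

# Hypothetical look-up table: each entry maps an underwater depth range
# (in metres) to a predetermined 3x3 colour-correction matrix and an
# optional offset. All numeric values are illustrative placeholders.
CC_TABLE = [
    ((0.0, 5.0),   np.eye(3),                 np.zeros(3)),
    ((5.0, 10.0),  np.diag([1.4, 1.0, 0.8]),  np.zeros(3)),
    ((10.0, 20.0), np.diag([1.9, 1.1, 0.7]),  np.zeros(3)),
]

def select_colour_correction(depth_m):
    """Return the (matrix, offset) pair whose depth range matches the
    depth obtained from the pressure sensor; fall back to the deepest
    entry when the depth lies beyond all tabulated ranges."""
    for (lo, hi), matrix, offset in CC_TABLE:
        if lo <= depth_m < hi:
            return matrix, offset
    return CC_TABLE[-1][1], CC_TABLE[-1][2]
```

A position-dependent variant, as described below, would simply key the table on (depth range, geographic area) pairs rather than depth range alone.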
  • the method comprises obtaining a position of the camera from a position determining device, such as a GNSS receiver, and the selection of the colour correction operation is further based on the obtained position of the camera.
  • the position and depth of the camera obtained from the pressure sensor can be used to select one of a plurality of predetermined transformation matrices, and optionally one of a plurality of predetermined offset matrices.
  • a look up table could be stored in a memory of the camera that stores at least a predetermined transformation matrix for each of a plurality of underwater depths or depth ranges and for each of a plurality of geographic areas, and the obtained depth and position is preferably used to select the matrix from the look up table associated with the depth or depth range and geographic area matching the obtained depth and position.
  • the generated colour image data is preferably passed to an encoder of the digital camera that processes the image data to generate an encoded image, and, in the case where the camera is a digital video camera, an encoded video stream.
  • the processed image data can be encoded using any suitable compression technique as desired, e.g. lossless compression or lossy compression, and could be, for example, an intraframe compression technique or an interframe compression technique.
  • the encoded image or video stream is preferably then stored on a memory of the digital camera.
  • the memory preferably comprises a non-volatile memory device for storing the data collected by the camera, and may comprise a removable non-volatile memory device that is attachable to and detachable from the video camera.
  • the first memory may comprise a memory card such as, for example, an SD card or the like.
  • the method of the present invention can be used to automatically process colour image data collected by at least one image sensor of a digital camera when it is operating under water.
  • the digital camera of the present invention is, however, preferably capable of operating out of water as well as underwater, and in such environments different image processing techniques are used since it is no longer necessary to cope with attenuation of light due to water.
  • the method of the present invention is thus preferably only one of a plurality of modes of operation of the digital camera.
  • the method preferably comprises receiving an instruction to change to the underwater mode of operation.
  • the instruction can be received from a user, i.e. the instruction to change mode is preferably based on a received user input.
  • the instruction could be generated automatically based on data from one or more sensors indicating that the camera is now operating under water.
  • the one or more sensors could include a pressure sensor and/or an exposure level sensor.
  • exposure is a measure of the amount of light per unit area that reaches the at least one image sensor of the digital camera.
  • the present invention can be implemented in any suitable system, such as a suitably configured micro-processor based system.
  • the present invention is implemented in a computer and/or micro-processor based system.
  • the method of present invention is preferably performed on an image or video processing device.
  • the image or video processing device preferably comprises a system on chip (SOC) comprising cores (or blocks) arranged to process the raw image data received from the at least one image sensor and to encode the processed image data.
  • the image or video processing device is therefore preferably implemented in hardware, e.g. without using embedded processors.
  • the method aspects and embodiments of the present invention as described herein are preferably computer implemented methods, and may thus be implemented at least partially using software, e.g. computer programs. It will thus be seen that when viewed from further aspects the present invention provides computer software and such software installed on a computer software carrier for carrying out at least one of the steps of the methods set out herein.
  • the present invention may accordingly suitably be embodied as a computer program product for use with a computer system.
  • Such an implementation may comprise a series of computer readable instructions either fixed on a tangible, non-transitory medium, such as a computer readable medium, for example, diskette, CD ROM, ROM, RAM, flash memory, or hard disk.
  • the series of computer readable instructions embodies all or part of the functionality previously described herein.
  • association should not be interpreted to require any particular restriction on data storage locations. The phrase only requires that the features are identifiably related. Therefore association may for example be achieved by means of a reference to a file, potentially located in a remote server.
  • Figure 1 is a graph showing the attenuation of light with depth in the ocean;
  • Figure 2 shows a digital camera in accordance with an embodiment of the invention
  • Figure 3 shows a more detailed view of the video processing apparatus of the digital camera of Figure 2;
  • Figure 4 shows a more detailed view of the image pipe of the video processing apparatus of Figure 3.
  • Figure 5 shows a flow chart illustrating the steps of a method of processing colour image data in accordance with an embodiment of the invention.

Detailed Description of the Preferred Embodiments
  • the present invention is, in preferred embodiments at least, directed to methods and systems for processing colour image data collected by an image sensor of a digital camera when operating underwater, such that the colour image data is representative of an underwater environment.
  • the goal of the invention is to adjust the colours of the image data during recording, i.e. before it is encoded and stored, such that the image data is closer to the actual environment as would be seen by a user when underwater.
  • a digital video camera 20 is shown in Figure 2.
  • the camera 20 comprises an image sensor 22, such as a CCD or CMOS sensor, operatively connected to a video processing apparatus 24, both of which are controlled by a controller 28.
  • the controller is further operatively connected to a global navigation satellite systems (GNSS) sensor or receiver 31 for determining a geographic position of the camera, e.g. as a set of longitude and latitude coordinates, and a pressure sensor 32 for determining a depth of the camera when underwater.
  • Position and depth information from the sensors 31 and 32 can be supplied by the controller to the video processing apparatus 24 for use in processing the image data collected by the image sensor 22, as will be described in more detail below.
  • the video processing apparatus 24 is further operatively connected to a memory 26 on which encoded video image data output from the video processing apparatus 24 is stored.
  • the various components of the camera are preferably all contained within a housing of the camera 20, and the connections between the components are therefore wired connections. It will be understood, however, that some of the components, such as the GNSS sensor 31 and the pressure sensor 32 can be external of the camera housing, and be operatively connected to the controller 28 using a wireless connection.
  • the camera 20 needs to be able to operate underwater, and therefore in embodiments the housing of the camera 20 can be arranged to be waterproof. Alternatively, the camera 20 could be insertable into a waterproof container, such that the camera 20 can be used underwater.
  • Figure 3 shows a more detailed view of the video processing apparatus 24.
  • the video processing apparatus 24 comprises a video processing front end (VPFE) 40 and a video processing back end (VPBE) 50.
  • the VPFE 40 comprises a controller 41 that receives raw image data, e.g. in Bayer format, from the image sensor 22.
  • the raw image data is passed from the controller 41 to the statistics engine 43 and to the image pipe (IPIPE) 42.
  • the statistics engine 43 is operatively connected to the IPIPE 42, such that data generated by the statistics engine 43 can be used to modify the image processing techniques that are performed in the IPIPE 42.
  • the raw image data is processed in the IPIPE 42, so as to generate YCbCr video image data, which is discussed in more detail below with reference to Figure 4.
  • the YCbCr video image data output from the IPIPE 42 is passed to resizer 44, which can be used to scale the video image data prior to it being passed to the VPBE 50.
  • the video image data is then processed by an encoder 51 in the VPBE 50, which acts to encode the video image data into a compressed format. Compression reduces the size of the data stream by removing redundant information, and can be lossless compression or lossy compression; lossless compression being where the reconstructed data is identical to the original, and lossy compression being where the reconstructed data is an approximation to the original, but not identical.
  • the compression technique could be an intraframe compression technique or an interframe compression technique, e.g. H.264.
  • intraframe compression techniques function by compressing each frame of the image data individually, e.g. as a jpeg image
  • interframe compression techniques function by compressing a plurality of neighbouring frames together, based on the recognition that a frame can be expressed in terms of one or more preceding and/or succeeding frames.
  • the encoder 51 could be used to generate a plurality of encoded video streams, e.g. a first encoded stream that is encoded using an interframe compression technique and a second, typically lower quality, encoded stream that is encoded using an intraframe compression technique.
  • the one or more encoded video streams output from the encoder 51 can be written to a digital media file, such as an AVI file or an MP4 file, which is stored in the memory 26.
  • the one or more video streams could be multiplexed with other data streams, such as an encoded audio stream from a microphone (not shown) of the camera 20, such that the digital media file stored in the memory 26 includes multiple data types.
  • a number of image processing functions are performed within the IPIPE 42.
  • the raw image data from the image sensor 22, which is typically in a Bayer format, is processed in a 'white balance' block 61 in which an automatic white balancing (AWB) operation is performed.
  • the image data, after white balancing, is typically then converted in an interpolation and/or demosaicing process to an RGB format, whereupon the image data is then processed in an 'RGB blending' block 62 in which a colour correction (CC) operation is performed.
  • the image data is next processed in a 'gamma correction' block 63 before being converted into a YCbCr format in a 'YCbCr conversion' block 64.
  • the image processing enhancements of the present invention are found primarily in the 'white balance' block 61 and the 'RGB blending' block 62, and will be discussed in more detail below with reference to the method shown in Figure 5.
  • in step 1 of the method, colour image data representative of an underwater environment is obtained from an image sensor, i.e. sensor 22.
  • the colour image data is typically in a conventional Bayer pattern sensor format, such that the colour image data comprises a plurality of pixels, and wherein each pixel has an intensity value for only one of the three colours: red (R); blue (B); and green (G).
  • the intensity value could be represented by an 8-bit number, such that the intensity value can be an integer between 0 and 255.
  • the obtained colour image data is analysed in the statistics engine 43, by dividing the colour image data into a plurality of regions, which are also known as paxels.
  • Each region comprises a plurality of pixels.
  • the colour image is divided into 16 regions using a 4x4 grid, although it will be understood that different grid sizes could be used leading to a greater or lesser number of regions.
  • the colour image data is thus preferably represented by a set of 4x4 paxels for each colour component, i.e. red, blue and two green.
  • Each paxel (for a colour component) is assigned an intensity value representative of the individual pixels forming the paxel, e.g.
  • the intensity value for the paxel being the average intensity value of the pixels forming the paxel.
  • the set of paxels for a colour component thus forms a down sampled version of the colour image data for that colour component.
  • each region, when the colour image data is in the Bayer format, is thus associated with four intensity values: a red; a blue; and two green. In some embodiments, the two green values for a region could be averaged to produce a single combined green value.
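The paxel down-sampling described above can be sketched as follows; this is a minimal Python illustration assuming an RGGB Bayer layout and frame dimensions that divide evenly by twice the grid size — the function name and layout choice are assumptions for the sketch, not taken from the patent:

```python
import numpy as np

def paxel_statistics(bayer, grid=4):
    """Down-sample a Bayer-pattern (RGGB) frame into a grid x grid set
    of paxels, returning the mean R, G1, G2 and B intensity per region.
    Assumes frame dimensions divide evenly by 2*grid."""
    h, w = bayer.shape
    rh, rw = h // grid, w // grid
    stats = np.zeros((grid, grid, 4))  # per region: R, G1, G2, B
    for i in range(grid):
        for j in range(grid):
            region = bayer[i*rh:(i+1)*rh, j*rw:(j+1)*rw]
            stats[i, j, 0] = region[0::2, 0::2].mean()  # R sites
            stats[i, j, 1] = region[0::2, 1::2].mean()  # G1 sites
            stats[i, j, 2] = region[1::2, 0::2].mean()  # G2 sites
            stats[i, j, 3] = region[1::2, 1::2].mean()  # B sites
    return stats
```

Each paxel thus carries the four intensity values (red, two green, blue) referred to above; averaging the two green planes would give the single combined green value.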
  • in step 3 of the method, at least one ratio between intensity values is determined for each region. For example, since the aim of the invention is to identify areas of the colour image data that relate primarily to the ocean, i.e. blue water, the ratio between the blue and red intensity values is determined. The particular ratio can be chosen based on location: if the camera is being used in green water, for example, then a different ratio, such as that between red and green or between blue and green, can be used.
  • the method of step 3 is typically performed in the 'white balance' block 61 of the IPIPE 42 based on the intensity values for the regions output from the statistics engine 43.
  • the at least one ratio is compared to a predetermined threshold value associated with an underwater depth (step 4), so as to select a subset of the regions based on the comparison (step 5).
  • a threshold value can be predetermined based on an assumption that the camera is primarily going to be operated underwater at a depth of around 5m.
  • the threshold value for a particular depth is approximately the ratio of the spectral irradiance at around 450nm (blue) to that at around 620nm (red) as shown in the graph of Figure 1. Therefore, for example, the threshold value for the B:R ratio can be set at 3, although it will be understood that this is merely exemplary.
  • the predetermined threshold value can be selected based on a depth of the camera - the depth being determined from the pressure sensor 32 - with the threshold value being selected from a plurality of values that are each associated with a particular depth range.
  • a lookup table may be stored in a memory of the camera that associates a threshold value with a depth range.
  • the threshold value will increase with depth, as the colour image data becomes more heavily shifted toward blue.
  • the lookup table may also further associate a threshold value with a depth range and a geographic area (or particular body of water). In such instances, the predetermined threshold can be further selected based on a location of the camera as determined from the GNSS sensor 31.
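The depth-dependent threshold comparison of steps 4 and 5 might be sketched as follows; the threshold table values are invented placeholders, consistent only with the stated idea that the B:R threshold grows with depth:

```python
import numpy as np

# Illustrative depth-to-threshold table (metres -> B:R threshold).
# Values are assumptions for the sketch, not calibrated figures.
THRESHOLDS = {(0.0, 3.0): 2.0, (3.0, 8.0): 3.0, (8.0, 20.0): 5.0}

def water_mask(red_paxels, blue_paxels, depth_m):
    """Mark regions whose B:R ratio meets or exceeds the
    depth-dependent threshold as 'water'; only the remaining regions
    feed the automatic white balancing operation."""
    threshold = next(
        (t for (lo, hi), t in THRESHOLDS.items() if lo <= depth_m < hi),
        5.0)  # fall back to the deepest threshold outside the table
    ratio = blue_paxels / np.maximum(red_paxels, 1e-6)
    return ratio >= threshold  # True where a region is primarily water
```

The returned boolean mask selects the subset of regions: masked (water) regions are excluded, and the remainder are passed to the AWB step.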
  • an AWB operation is performed using the selected subset of regions, e.g. only those regions that have a B:R ratio less than the threshold value.
  • a set of gains to be applied to the intensity values of individual pixels are determined from the paxels of the selected subset of regions using a gray world normalisation.
  • the gray world normalisation makes the assumption that the scene represented by the colour image data is, on average, a neutral grey.
  • the R:G and B:G gains for the colour image data can be calculated as follows:
  • Red Gain = Σ(green paxels) / Σ(red paxels); Blue Gain = Σ(green paxels) / Σ(blue paxels)
  • the gains can be applied to the intensity values of all the pixels of the colour image data, so as to complete the AWB operation.
  • the difference in the red and blue gains between the previous and current frame should not exceed 0.5%, i.e. a time constant of 200 frames.
  • 0.5% is exemplary, and any suitable value can be used as desired. Therefore, in embodiments, the red and blue gains as determined in the AWB operation can be modified, before they are applied to the intensity values of the pixels of the colour image data, so as to maintain the time constant.
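A minimal Python sketch of the gray world gain calculation and the frame-to-frame smoothing described above follows; the function names are assumptions, and the 0.5% step limit is the exemplary figure from the text:

```python
import numpy as np

def grey_world_gains(red_sum, green_sum, blue_sum):
    """Gray world gains over the selected (non-water) paxels: scale the
    red and blue channels so they average to the green channel."""
    return green_sum / red_sum, green_sum / blue_sum

def smooth_gain(previous, current, max_step=0.005):
    """Limit the relative change in a gain between consecutive frames
    to max_step (0.5% here, giving roughly a 200-frame time constant),
    so the white balance does not visibly jump between frames."""
    limit = previous * max_step
    return previous + float(np.clip(current - previous, -limit, limit))
```

On each frame the freshly computed gains would be passed through `smooth_gain` against the previous frame's values before being applied to the pixel intensities.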
  • in step 7 of the method, a CC operation is performed on the modified colour image data.
  • This step of the method is performed in the 'RGB blending' block 62 of the IPIPE 42.
  • the CC operation involves applying a 3x3 matrix transformation to the RGB values of each pixel of the colour image data (in this case after the AWB operation), so as to further modify the colours of the image data.
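Applying the 3x3 matrix transformation (with optional offset) to every pixel might look as follows in Python; this is an illustrative sketch assuming floating-point RGB data in the 8-bit range:

```python
import numpy as np

def apply_colour_correction(rgb, matrix, offset=None):
    """Apply a 3x3 colour-correction matrix, plus an optional offset,
    to every pixel of an H x W x 3 RGB image, clipping the result to
    the 8-bit intensity range."""
    h, w, _ = rgb.shape
    out = rgb.reshape(-1, 3) @ matrix.T  # transform each RGB triplet
    if offset is not None:
        out = out + offset
    return np.clip(out, 0, 255).reshape(h, w, 3)
```

The matrix (and offset) passed in would be the one selected from the depth- and optionally location-keyed lookup table described below.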
  • the matrix can be selected based on a depth of the camera - the depth being determined from the pressure sensor 32 - from a plurality of predetermined matrices.
  • a lookup table may be stored in a memory of the camera that associates a matrix with a depth range.
  • the lookup table may also further associate a matrix with a depth range and a geographic area (or particular body of water).
  • the matrix can be further selected based on a location of the camera as determined from the GNSS sensor 31.
  • in step 8 of the method, the modified colour image data is encoded by the encoder 51, so as to generate an encoded video stream, which, as discussed above, is then stored in memory 26.
  • the underwater mode of the video camera 20, i.e. wherein the raw image data from the image sensor 22 is processed using the method of Figure 5, can be manually selected by the user as they are about to go, or are, underwater.
  • the mode could be automatically selected based on an exposure level of the image data collected by the image sensor 22 and/or a depth of the camera as determined by the pressure sensor 32.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

A method is disclosed for processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater, e.g. so as to modify and improve the colours associated with the colour image data such that they are more realistic. The method can comprise selecting an automatic white balancing operation and/or a colour correction operation based on a depth of the camera from a pressure sensor operatively connected to the camera. The method may be implemented on digital video cameras or digital still cameras, or may be provided as a computer program product.

Description

METHODS AND SYSTEMS FOR UNDERWATER DIGITAL IMAGE PROCESSING
Field of the Invention
The present invention relates to the processing of image data, and in particular to methods and systems for the processing of colour image data collected by a digital camera capable of operating underwater, e.g. so as to modify and improve the colours associated with the colour image data such that they are more realistic, i.e. closer to the actual environment as would be seen by a user when underwater. The present invention is particularly beneficial in digital video cameras, but is equally applicable to digital still cameras.
Background of the Invention
Digital cameras, such as video cameras and still cameras, are increasingly being used in outdoors and sports settings. Such cameras, which are often referred to as "action cameras" in the case of video cameras, are commonly attached to a user, sports equipment or a vehicle and are operated to capture video data, and typically also audio data, during a sports session with minimal user interaction. It is known to integrate a number of additional sensor devices into such action cameras. For example, WO 2011/047790 A1 discloses a video camera comprising some or all of an integrated GPS device, speed or acceleration measuring device, time measuring device, temperature measuring device, heart rate measuring device, barometric altitude measuring device and an electronic compass. These sensors can be integrated in the camera itself, or can be remote from the camera and operatively connected to the camera using a wired or wireless connection.
Action cameras are also commonly capable of being used underwater through the use of a watertight housing that is either integral to the camera or a separate housing into which the camera can be fitted. Such cameras can be used, for example, by scuba divers to take photographs and video as they explore coral reefs or shipwrecks. As an example, WO 2011/119336 A1 describes a camera with a pressure sensor that can be used to switch the camera between normal and underwater modes of operation.
A characteristic of imagery captured underwater is that the colour and contrast of captured images deteriorates, with the subjects in such images appearing colourless and indistinct, since, at any given depth, the light entering the camera from an object in the underwater environment is attenuated based, for example, on the depth of the camera and the distance from the object to the camera. The attenuation becomes larger, both as the depth increases and as the distance to the underwater object increases. This effect is shown in Figure 1, for example, which depicts a graph taken from the article "Underwater Light Field and its Comparisons to Metal Halide Lighting" by Sanjay Joshi, PhD, published in Advanced Aquarist, August 2005. This graph shows how the spectral irradiance varies with wavelength at a number of different underwater depths in the ocean: 1 metre (m); 5m; 10m; 15m; and 20m. As can be seen, the longer wavelengths of sunlight, such as red or orange, are absorbed quickly by the surrounding water, so that even to the naked eye everything appears blue-green in colour. One method of compensating for such underwater effects is to fit depth specific filters to the camera, and to use various post processing techniques to correct the colour of the recorded image data. Such techniques, however, require additional equipment to be carried by the user and additional manual effort to fit the filter over the lens of the camera, together with the fact that the recorded image data can't be shared or otherwise distributed immediately due to the need to correct the colour of the image data after it has been recorded.
The Applicants believe that there remains scope for improvements to techniques for processing colour image data recorded underwater.

Summary of the Invention
According to an aspect of the present invention, there is provided a method of processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater, the method comprising:
obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for at least one of a plurality of colour components;
dividing said colour image data into a plurality of regions, each region comprising a plurality of said pixels, and determining, for each region, and for at least two of the colour components, an intensity value for the region based on the intensity values of at least some of the pixels in the region;
determining, for each region, at least one ratio between the determined intensity values for the region, and comparing the determined at least one ratio to a predetermined threshold value associated with an underwater depth;
selecting a subset of the plurality of regions based on the result of the comparison;
performing an automatic white balancing operation using the selected subset of regions to determine a set of modifications to be applied to the intensity values of at least some of the pixels of the colour image data; and
generating modified colour image data representative of the underwater environment by applying the set of modifications to the at least some of the pixels of the colour image data.
The present invention extends to a system, e.g. an image or video processing system comprising one or more processors, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described. The image or video processing system can be operatively connected to at least one image sensor and optionally to one or more sensor devices (as discussed in more detail below), so as to form a digital camera that is capable of operating underwater. The digital camera can be a still camera that is arranged to take photographs, i.e. digital images, but in preferred embodiments is a digital video camera that is arranged to record video, i.e. a series of digital images. In the case of video cameras, the method is repeatedly applied to colour image data that is collected by the at least one image sensor.
Thus, in accordance with another aspect of the invention, there is provided a system for processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater, the system comprising: means for obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for at least one of a plurality of colour components;
means for dividing said colour image data into a plurality of regions, each region comprising a plurality of said pixels, and determining, for each region, and for at least two of the colour components, an intensity value for the region based on the intensity values of at least some of the pixels in the region; means for determining, for each region, at least one ratio between the determined intensity values for the region, and comparing the determined at least one ratio to a predetermined threshold value associated with an underwater depth;
means for selecting a subset of the plurality of regions based on the result of the comparison; means for performing an automatic white balancing operation using the selected subset of regions to determine a set of modifications to be applied to the intensity values of at least some of the pixels of the colour image data; and
means for generating modified colour image data representative of the underwater environment by applying the set of modifications to the at least some of the pixels of the colour image data.
The present invention is a computer implemented invention, and any of the steps described in relation to any of the aspects or embodiments of the invention may be carried out by a set of one or more processors that execute software comprising computer readable instructions stored on a non-transitory computer readable medium.
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate.
Accordingly, even if not explicitly stated, the system of the present invention may comprise means for carrying out any step described in relation to the method of the invention in any of its aspects or embodiments, and vice versa.
The present invention is concerned with methods and systems for processing colour image data representative of an underwater environment that is collected by an image sensor of a digital camera, and preferably a digital video camera. As discussed in more detail below, the processing of the colour image data provided by the present invention, in accordance with any of its embodiments, allows the colours of the image data to be corrected during recording, i.e. before they are encoded and typically stored in a memory of the camera, so as to overcome the spectral attenuation that occurs when underwater, without the need for depth specific filters and/or other post processing techniques.
In the present invention, colour image data representative of underwater environment is obtained from at least one image sensor of a digital camera. The colour image data will typically be in an unprocessed (or raw) format, e.g. as obtained directly from the at least one image sensor, but it is envisaged that the colour image data may be in a processed format, e.g. after a demosaicing step, such as an RGB colour format.
The image sensor is typically a complementary metal-oxide-semiconductor (CMOS) sensor or a charge-coupled device (CCD) sensor, and which comprises a colour filter array (CFA) or colour filter mosaic (CFM), so as to generate colour image data in a raw format. In these embodiments, each pixel of the raw image data corresponds to a photosensor (or pixel sensor) of the image sensor, and thus each pixel has an intensity value for one of a plurality of colour components. The plurality of colour components typically comprise red, blue and green, although the particular colour components will depend, as will be appreciated, on the CFA of the image sensor. For example, a common filter is the Bayer filter, which has a filter pattern that is 50% green, 25% red and 25% blue, and hence is also called RGBG. However, other filters could also be used such as CYYM, which is 50% yellow, 25% cyan and 25% magenta, or CYGM, which is 25% cyan, 25% yellow, 25% green and 25% magenta.
In other embodiments, rather than each pixel being associated with only one colour component, a pixel could be associated with an intensity value for each of the plurality of colour components. For example, in some image sensors, such as the Foveon X3 sensor, the photosensors are vertically stacked, such that each pixel has an intensity value for each colour component. Alternatively, if a plurality of image sensors are used, one for each colour component, then it can also be the case that each pixel has an intensity value for each colour component.
Similarly, in embodiments, wherein the colour image data is in a processed format, e.g. after a demosaicing step has been performed on the raw image data from the at least one sensor, then a pixel is associated with an intensity value for each of the colour components, e.g. red, green and blue (RGB).
As known in the art, the intensity value for each colour component is represented by one or more bits. For example, in embodiments, the intensity value for each colour component is represented by 8 bits, and so the intensity value can take a value from 0 to 255. It will be appreciated, however, that a greater or fewer number of bits can be used as desired to represent the intensity value.
The colour image data from the image sensor, or plurality of image sensors in some embodiments, is divided into a plurality of regions, sometimes referred to as paxels, wherein each region comprises a plurality of pixels. The division of the colour image data into a plurality of regions can be thought of as down sampling the image data, so as to create a low resolution version of the image data. The down sampling can be performed using any suitable technique, but preferably the image data is divided into a grid of non-overlapping regions, such as a 4x4 grid, a 6x6 grid, an 8x8 grid, etc. The number of regions can be selected as desired, although it has been found that there is little improvement in image quality if the colour image data is divided into more regions than an 8x8 grid.
In the present invention, an intensity value for at least two of the colour components is determined for each of the regions into which the colour image data is divided. In embodiments, an intensity value is determined for each of the colour components, e.g. red, green and blue, but as will be discussed in more detail below the method, at least in some embodiments, only makes use of the intensity values of two of the colour components, e.g. blue and red (for blue water environments, such as the ocean). The intensity value of a colour component for a region is based on the intensity values of at least some of the pixels of the region, such that the intensity value for a region is representative of the intensity values of at least some of the plurality of pixels in the region. For example, the intensity value of a colour component for a region can be an average (or other similar measure) of the intensity values of at least some, and typically all, of the pixels in the region associated with the particular colour component. As will be appreciated, the intensity value of a colour component for a region will typically be based on the intensity values of the colour component of all the pixels in the region associated with that colour component. It is envisaged, however, that some of the pixels may be filtered out for various reasons, such as being defective, over exposed, etc., e.g. in a thresholding operation, and so the intensity value of a colour component for a region may be based on the intensity values of the colour component of only some of the pixels in the region associated with that colour component.
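The per-region intensity with the optional thresholding step can be sketched as follows. The cutoff value and the fallback behaviour are illustrative assumptions; the patent only says that defective or over-exposed pixels may be filtered out.

```python
import numpy as np

def region_channel_intensity(pixel_values, max_valid=250):
    """Representative intensity of one colour channel for a region:
    the mean of the pixels, after filtering out over-exposed pixels
    in a simple thresholding operation ('max_valid' is an
    illustrative cutoff, not a value from the patent)."""
    values = np.asarray(pixel_values, dtype=float)
    valid = values[values <= max_valid]
    # Fall back to all pixels if the filter removed everything.
    return valid.mean() if valid.size else values.mean()
```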
In accordance with the invention, for each of the regions into which the colour image data is divided, one or more ratios between determined intensity values for the region are determined. The method may involve determining one or more or all of the ratios between the intensity values of the plurality of colour components, but typically, and in preferred embodiments, only one ratio is determined. As will be discussed in more detail below, the determined ratio is compared to a predetermined threshold value associated with an underwater depth, so as to identify those regions of the colour image data that are primarily water. When a plurality of ratios are determined and used in the present invention, then each ratio is compared to a predetermined threshold value appropriate for the particular ratio, i.e. each ratio is preferably compared to a different predetermined threshold value. These regions that are identified as primarily water from the comparison can then be masked, i.e. not used, when performing white balancing, thereby removing the biasing effect of the water in the colour image data from the white balancing operation.
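The ratio test that separates "primarily water" regions from the rest can be sketched as follows, using the blue/red ratio preferred for blue water environments. The threshold value of 1.8 is purely illustrative; the patent ties the actual value to an underwater depth, as discussed below.

```python
import numpy as np

def water_mask(region_means, threshold=1.8):
    """Identify regions that are primarily water by comparing each
    region's blue/red intensity ratio against a depth-dependent
    threshold (1.8 is an illustrative value). region_means is a
    (gh, gw, 3) array of per-region RGB intensities; True entries
    are masked out of the subsequent white balancing."""
    red = region_means[..., 0]
    blue = region_means[..., 2]
    ratio = blue / np.maximum(red, 1e-6)  # guard against division by zero
    return ratio > threshold
```

The complementary (False) entries form the subset of regions used for white balancing in the aspects below.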
In embodiments where the colour components are red, blue and green, and thus where three ratios could be calculated: blue/red; red/green; and blue/green, the blue/red ratio is preferably determined (as this is appropriate for blue water environments, which are more common than green water environments). However, either of the red/green or the blue/green ratio could be determined and used if the digital camera is being used in green water environments.
In embodiments, the one or more ratios selected for use in the method can be based on a position of the digital camera, e.g. whether it is in the vicinity of a body of water having a blue water environment or a green water environment. The selection of whether the camera is to be used in a blue water environment or a green water environment, and thus the selection of the ratio or ratios used in the method, can be manual, i.e. based on a received user input, or automatic, e.g. based on a position obtained from a position determining device. The position determining device can be, for example, a global navigation satellite system (GNSS) receiver, and the obtained position may be in any form as desired, but will commonly be a set of geographic coordinates, e.g. latitude and longitude. The position determining device could be integrated in the camera itself, or can be remote from the camera and operatively connected to the camera using a wired or wireless connection. In these latter embodiments, the position of the digital camera as obtained from the position determining device is compared to digital map data comprising information about the location and type of bodies of water, with the selection of the ratio or ratios used in the method being based on whether the camera is in the vicinity, e.g. within a predetermined distance, of a particular body of water.
As discussed above, the determined ratio, or ratios, is compared to a predetermined threshold value associated with an underwater depth. In an embodiment, the threshold value is chosen based on a depth at which the digital camera is commonly going to be used. In such embodiments, the improvements to the colour image data due to the processing of the present invention are most notable at the selected depth, but are still applicable, although to a lesser extent, at other depths. For example, a threshold value can be selected that is applicable when the camera is being used at a depth of 5m underwater. In other embodiments of the invention, a plurality of threshold values can be used, wherein each threshold value is associated with a different underwater depth. Any number of predetermined threshold values could be used as desired. For example, the method may use a threshold value for 5m, for 10m, for 15m, etc. In such embodiments, a look up table could be stored in a memory of the camera that stores a predetermined threshold value for each of a plurality of underwater depths or depth ranges. Accordingly, in embodiments, the predetermined threshold value to be used in the method can be selected from a plurality of predetermined threshold values based on an underwater depth of the digital camera. The depth of the digital camera could be determined based on a received user input, e.g. the user could select the depth at which they will primarily be using the camera. Alternatively, the depth of the digital camera could be determined automatically based on data received from a pressure sensor. The pressure sensor could be integrated in the camera itself, or can be remote from the camera and operatively connected to the camera using a wired or wireless connection. Accordingly, in such embodiments, the method automatically adjusts to the changing depth of the camera.
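The per-depth look-up table described above can be sketched as follows. The depth/threshold pairs are illustrative placeholders; the patent only states that a threshold is stored for each of a plurality of depths or depth ranges, not what the values are.

```python
def threshold_for_depth(depth_m, table=None):
    """Select the blue/red ratio threshold for the tabulated depth
    nearest to the camera's current depth (from a user input or a
    pressure sensor). The depth -> threshold values below are
    illustrative placeholders, not values disclosed in the patent."""
    if table is None:
        table = {5: 1.5, 10: 2.0, 15: 2.5}  # depth (m) -> threshold
    nearest = min(table, key=lambda d: abs(d - depth_m))
    return table[nearest]
```

With a pressure sensor supplying `depth_m` each frame, the threshold tracks the camera's depth automatically, as the text describes.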
In the present invention, a subset of the plurality of regions is selected based on the result of the comparison between the determined at least one ratio and the relevant predetermined threshold value. The subset of regions are those that have been determined not to be primarily water, and therefore are beneficial to use in the subsequent automatic white balance operation. In other words, in an embodiment, and where only a single ratio is determined and used, a first subset of regions is identified where the ratio for the region is greater than the predetermined threshold value, and a second subset of regions is identified where the ratio for the region is less than the predetermined threshold value.
Dependent on the definition of the ratio, one of the first and second subsets of regions will be substantially representative of water in the underwater environment, and the other of the subset of regions will be substantially representative of objects in the underwater environment, e.g. rocks, people, fish, vegetation, the seabed, etc. As will be appreciated, the method of the invention preferably makes use of the second subset of regions, and it is this second subset of regions that are selected based on the result of the comparison. In other embodiments, and where a plurality of ratios are used, it will be understood that two subsets of regions are determined for each ratio, one substantially representative of water and the other substantially representative of objects in the underwater environment. In these embodiments, the selected subset of regions can, in one embodiment, be a combination of the subsets of regions that are substantially representative of objects in the underwater environment or, in another embodiment, be those regions that appear in each of the subsets of regions that are substantially representative of objects in the underwater environment.
After the subset of regions of the colour image data have been determined, i.e. those regions of the image data that are substantially representative of objects in the underwater environment, an automatic white balancing operation is performed using the selected subset of regions to determine a set of modifications to be applied to the intensity values of at least some of the pixels of the colour image data. Any suitable or desirable automatic white balancing operation could be performed in the present invention, such as a Gray World algorithm, which incorporates the gray world assumption that the average reflectance of a scene is achromatic, or a White Patch algorithm, which incorporates the white world assumption and is based on the Retinex theory of visual colour constancy, or any combination of different techniques. As will be appreciated, the white balancing operation is performed using pixels in the selected subset of regions, and thus does not use pixels in other regions of the colour image data that are not selected. In embodiments, all the pixels of the subset of regions are used for white balancing, i.e. in the case of global techniques such as Gray World, White Patch, etc. It is envisaged, however, that in other embodiments local white balancing techniques may be used, and thus the white balancing operation may be performed using only some of the pixels of the selected subset of regions.
Regardless of the choice of white balancing technique, the result of the automatic white balancing operation is a set of modifications to be applied to the intensity values of at least some pixels of the colour image data. In embodiments, the set of modifications comprise a set of gains for at least some of the colour components, typically for all but one of the colour components. The term "gain" is used to mean a coefficient by which the obtained intensity value of a pixel is multiplied. The set of modifications therefore cause the intensity values of the pixels of the colour image data to which they are applied to be modified. Thus, in embodiments where each pixel is associated with only one colour component, the set of modifications are applied to those pixels that are associated with the relevant colour component, thereby causing the intensity value for those pixels to be changed. In other embodiments where each pixel has an intensity value for each of the colour components, the set of modifications are applied to all pixels of the colour image data, thereby causing the intensity value for the relevant colour components of each pixel to be changed.
In a preferred embodiment of the invention, a Gray World algorithm is used for white balancing, which is based on the assumption that the intensity value for each colour component of the subset of regions of the colour image data should be the same (or at least substantially the same). In these embodiments, the intensity value for a colour component of the subset of regions is determined as the average intensity value for the colour component in question of all the pixels in the subset of regions. Thus, in embodiments where the colour components are red, blue and green, the aim of the white balancing operation is to determine gains to be applied to two of the channels, typically the red and blue channels, that cause the average intensity values of the blue, green and red channels of the pixels in the subset of regions to be the same (or at least substantially the same).
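The Gray World gain computation on the selected subset can be sketched as follows, a minimal illustration assuming the subset's pixels have been gathered into an RGB array. The choice of leaving the green channel fixed and adjusting red and blue follows the text above.

```python
import numpy as np

def gray_world_gains(subset_pixels):
    """Gray World estimate over the selected (non-water) pixels:
    gains for the red and blue channels that bring their channel
    averages to the green channel average, per the gray world
    assumption. subset_pixels: (N, 3) array of RGB values."""
    means = np.asarray(subset_pixels, dtype=float).mean(axis=0)
    r_gain = means[1] / means[0]  # gain so mean(R) matches mean(G)
    b_gain = means[1] / means[2]  # gain so mean(B) matches mean(G)
    return r_gain, b_gain
```

Applying these gains to every red and blue intensity value equalises the channel averages over the subset, which is the stated aim of the operation.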
In embodiments, and when the digital camera is a digital video camera, such that the method is applied to a series of colour image data, i.e. to each frame of the video, the set of modifications that are determined using the automatic white balancing operation may be modified, such that the set of modifications applied to a frame do not differ from those applied to the previous frame by more than a predetermined amount. This prevents any abrupt changes in illumination of the image that would be noticeable to someone watching the video. The predetermined value can be 0.5%, i.e. a time constant of 200 frames, typically around 6-7 seconds; although any value could be chosen as desired. In such embodiments, it is the modified set of modifications, e.g. gains, that are applied to the intensity values of at least some of the pixels of the colour image data.
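The frame-to-frame limiting of the gains can be sketched as follows, using the 0.5% figure quoted above as the per-frame step limit (how the limit is expressed, absolute versus relative, is an assumption).

```python
def smooth_gain(new_gain, prev_gain, max_step=0.005):
    """Limit the frame-to-frame change in a white balance gain to
    +/- 0.5% of the previous frame's value, so the correction cannot
    shift abruptly between consecutive video frames."""
    limit = prev_gain * max_step
    delta = max(-limit, min(limit, new_gain - prev_gain))
    return prev_gain + delta
```

A sudden change in scene content then takes on the order of 200 frames to be fully reflected in the applied gains, matching the time constant described.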
It is believed that the concept of selecting an automatic white balancing operation for colour image data collected by an image sensor of a digital camera based on the depth of the camera as determined from a pressure sensor is new and advantageous in its own right. Thus in accordance with another aspect of the invention, there is provided a method of processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater, the method comprising:
obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for at least one of a plurality of colour components;
obtaining a depth of the camera from a pressure sensor operatively connected to the camera;
selecting an automatic white balancing operation based on the obtained depth of the camera; and
performing the selected automatic white balancing operation to adjust intensity values of at least some of the pixels of the colour image data, so as to generate modified colour image data representative of the underwater environment.
The present invention extends to a system, e.g. an image or video processing system comprising one or more processors, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described. The image or video processing system can be operatively connected to at least one image sensor and a pressure sensor, and optionally one or more other sensor devices, so as to form a digital camera that is capable of operating underwater. The digital camera can be a still camera that is arranged to take photographs, i.e. digital images, but in preferred embodiments is a digital video camera that is arranged to record video, i.e. a series of digital images. In the case of video cameras, the method is repeatedly applied to colour image data that is collected by the at least one image sensor.
Thus, in accordance with another aspect of the invention, there is provided a system for processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater, the system comprising:
means for obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for at least one of a plurality of colour components;
means for obtaining a depth of the camera from a pressure sensor operatively connected to the camera;
means for selecting an automatic white balancing operation based on the obtained depth of the camera; and
means for performing the selected automatic white balancing operation to adjust intensity values of at least some of the pixels of the colour image data, so as to generate modified colour image data representative of the underwater environment.
The present invention is a computer implemented invention, and any of the steps described in relation to any of the aspects or embodiments of the invention may be carried out by a set of one or more processors that execute software comprising computer readable instructions stored on a non-transitory computer readable medium.
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. For example, the colour image data obtained from the at least one image sensor can be in an unprocessed (or raw) format, e.g. as obtained directly from the at least one image sensor, or can be in a processed format, e.g. after a demosaicing step. Similarly, the pressure sensor could be integrated in the camera itself, or could be remote from the camera and operatively connected to the camera using a wired or wireless connection. Furthermore, and as will be appreciated, the performance of the selected automatic white balancing operation results in determining a set of modifications, e.g. gains, to be applied to intensity values of at least some of the pixels of the colour image data, which, when applied, cause the generation of modified colour image data.
In these aspects of the invention, and embodiments thereof, an automatic white balancing operation is selected based on the depth of the camera as obtained from the pressure sensor. The automatic white balancing operation may be a global algorithm in which all of the pixels of the colour image data are used for colour temperature estimation, or, in other embodiments, the automatic white balancing operation is a local algorithm, e.g. as described above, in which only those pixels of the colour image data that satisfy certain conditions are used for colour temperature estimation. The specific algorithm that is used for colour temperature estimation can be chosen as desired, and may, for example, be a Gray World algorithm, a White Patch algorithm or the like. The depth of the camera obtained from the pressure sensor can be used to select one of a plurality of different algorithms to be applied to the colour image data. Additionally, or alternatively, the obtained depth of the camera can be used to select one or more parameters to be used in an algorithm to be applied to the colour image data. Additionally, or alternatively, the obtained depth of the camera can be used to select a subset of the plurality of pixels of the colour image data to which an algorithm is applied, e.g. in the manner of the above described aspects and embodiments.
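A depth-based selection between a global and a local algorithm can be sketched as follows. Both the cut-off depth and the strategy names are purely illustrative assumptions; the patent deliberately leaves the mapping from depth to algorithm open.

```python
def select_awb_operation(depth_m):
    """Map the depth reported by the pressure sensor to an AWB
    strategy. The cut-off depth and strategy names are purely
    illustrative; the patent does not fix this mapping."""
    # Near the surface the water colour cast is weak, so a plain
    # global Gray World estimate may suffice; deeper, a local,
    # water-masked variant becomes appropriate.
    return "global_gray_world" if depth_m < 2.0 else "masked_gray_world"
```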
In embodiments, the method comprises obtaining a position of the camera from a position determining device, such as a GNSS receiver, and the selection of the automatic white balancing operation is further based on the obtained position of the camera. Thus, for example, the position and depth of the camera obtained from the pressure sensor can be used to select one of a plurality of different algorithms to be applied to the colour image data. Additionally, or alternatively, the obtained position and depth of the camera can be used to select one or more parameters to be used in an algorithm to be applied to the colour image data. Additionally, or alternatively, the obtained position and depth of the camera can be used to select a subset of the plurality of pixels of the colour image data to which an algorithm is applied.
In the present invention, after the set of modifications, e.g. gains, have been determined, they are then applied to at least some of the pixels of the obtained colour image data, so as to generate modified colour image data. The modified colour image data will be in the same format as the obtained colour image data, since the set of modifications are applied on a per pixel basis and cause the intensity value of at least some pixels to be changed. Thus, when the obtained colour image data is in an unprocessed (or raw) format, then the modified colour image data will also be in a raw format. In such embodiments, the modified colour image data is subsequently subjected to a demosaicing step, so as to be converted into a processed format, such as an RGB format. Clearly, in embodiments where the obtained colour image data is in a processed format, e.g. an RGB format, then the modified colour image data will also be in the processed format.
In embodiments of the invention, once the modified colour image data is in a processed format, such that each pixel has an intensity value for each of a plurality of colour components, e.g. red (R), green (G) and blue (B), it can be subjected to a colour correction operation, e.g. RGB blending. As will be appreciated, the plurality of colour components of the colour image data used in the colour correction operation may be the same as or different from the plurality of colour components of the colour image data as used during the automatic white balancing operation. The colour correction operation is used to adjust the image data to the human colour spectrum, and typically comprises a matrix transformation, optionally with an added offset, that is applied to the intensity values of each of the plurality of colour components of the modified colour image data (after conversion if required). The colour correction operation may be based on a position of the digital camera and/or a depth of the digital camera, e.g. with one or more predetermined matrices stored in a memory of the camera being selected based on the position and/or depth of the camera. For example, the selection of the matrix or matrices (if an offset matrix is also used) is preferably based on the position and/or depth of the camera. The selection of the matrix or matrices used in the colour correction operation may be manual, i.e. based on a received user input that indicates the position of the camera and/or the depth at which the camera is to be operated, or automatic, e.g. based on a position obtained from a position determining device and/or a depth obtained from a pressure sensor. As discussed above, the position determining device and/or pressure sensor could be integrated in the camera itself, or can be remote from the camera and operatively connected to the camera using a wired or wireless connection.
Thus, for example, a look up table could be stored in a memory of the camera that stores at least a predetermined transformation matrix for each of a plurality of underwater depths or depth ranges and/or geographic areas. The obtained position and/or depth is then preferably used to select the appropriate matrix or matrices from the look up table.
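The depth-keyed look-up of a transformation matrix (and optional offset) can be sketched as follows. The matrices below are illustrative identity-like placeholders; the patent discloses only that predetermined matrices are stored per depth range and/or geographic area, not their contents.

```python
import numpy as np

def colour_correct(rgb, depth_m, luts=None):
    """Apply the colour correction selected for the camera's depth
    band: corrected = M @ rgb + offset. The matrices below are
    illustrative placeholders, not values disclosed in the patent."""
    if luts is None:
        luts = {  # depth range (m) -> (3x3 matrix, offset vector)
            (0, 10): (np.eye(3), np.zeros(3)),
            (10, 40): (np.diag([1.2, 1.0, 0.9]), np.zeros(3)),
        }
    for (lo, hi), (matrix, offset) in luts.items():
        if lo <= depth_m < hi:
            return matrix @ np.asarray(rgb, dtype=float) + offset
    raise ValueError("no matrix tabulated for this depth")
```

Extending the key with a geographic area identifier gives the combined depth-and-position selection described in the following paragraphs.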
It is believed that the concept of selecting a colour correction operation for colour image data collected by an image sensor of a digital camera based on the depth of the camera as determined from a pressure sensor is new and advantageous in its own right.
Thus in accordance with another aspect of the invention, there is provided a method of processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater, the method comprising:
obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for each of a plurality of colour components;
obtaining a depth of the camera from a pressure sensor operatively connected to the camera;
selecting a colour correction operation based on the obtained depth of the camera; and
performing the selected colour correction operation to adjust one or more of the intensity values of at least some of the pixels of the colour image data, so as to generate modified colour image data representative of the underwater environment.
The present invention extends to a system, e.g. an image or video processing system comprising one or more processors, for carrying out a method in accordance with any of the aspects or embodiments of the invention herein described. The image or video processing system can be operatively connected to at least one image sensor and a pressure sensor, and optionally one or more other sensor devices, so as to form a digital camera that is capable of operating underwater. The digital camera can be a still camera that is arranged to take photographs, i.e. digital images, but in preferred embodiments is a digital video camera that is arranged to record video, i.e. a series of digital images. In the case of video cameras, the method is repeatedly applied to colour image data that is collected by the at least one image sensor.
Thus, in accordance with another aspect of the invention, there is provided a system for processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater, the system comprising:
means for obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for each of a plurality of colour components;
means for obtaining a depth of the camera from a pressure sensor operatively connected to the camera;
means for selecting a colour correction operation based on the obtained depth of the camera; and
means for performing the selected colour correction operation to adjust one or more of the intensity values of at least some of the pixels of the colour image data, so as to generate modified colour image data representative of the underwater environment.
The present invention is a computer implemented invention, and any of the steps described in relation to any of the aspects or embodiments of the invention may be carried out by a set of one or more processors that execute software comprising computer readable instructions stored on a non-transitory computer readable medium.
As will be appreciated by those skilled in the art, these further aspects of the present invention can, and preferably do, include any one or more or all of the preferred and optional features of the invention described herein in respect of any of the other aspects of the invention, as appropriate. For example, the colour image data obtained from the at least one image sensor is preferably in a processed format, such that each pixel has an intensity value for each of a plurality of colour components, e.g. red (R), green (G) and blue (B). Similarly, the pressure sensor could be integrated in the camera itself, or could be remote from the camera and operatively connected to the camera using a wired or wireless connection. Furthermore, the colour correction operation preferably comprises a matrix transformation, optionally with an added offset, that is applied to the intensity values of each of the plurality of colour components of the colour image data. Thus, the selection of the colour correction operation based on the obtained depth of the camera preferably comprises selecting one of a plurality of predetermined transformation matrices, and optionally one of a plurality of predetermined offset matrices. For example, in embodiments, a look up table could be stored in a memory of the camera that stores at least a predetermined transformation matrix for each of a plurality of underwater depths or depth ranges, and the obtained depth is preferably used to select the matrix from the look up table associated with the depth or depth range matching the obtained depth.

In embodiments, the method comprises obtaining a position of the camera from a position determining device, such as a GNSS receiver, and the selection of the colour correction operation is further based on the obtained position of the camera.
Thus, for example, the position and depth of the camera obtained from the pressure sensor can be used to select one of a plurality of predetermined transformation matrices, and optionally one of a plurality of predetermined offset matrices. For example, in embodiments, a look up table could be stored in a memory of the camera that stores at least a predetermined transformation matrix for each of a plurality of underwater depths or depth ranges and for each of a plurality of geographic areas, and the obtained depth and position is preferably used to select the matrix from the look up table associated with the depth or depth range and geographic area matching the obtained depth and position.
Following the modifications due to automatic white balancing, and preferably also colour correction, and optionally other image processing operations, such as gamma correction and conversion from an RGB format to a YCbCr format, the generated colour image data is preferably passed to an encoder of the digital camera that processes the image data to generate an encoded image, and, in the case where the camera is a digital video camera, an encoded video stream. The processed image data can be encoded using any suitable compression technique as desired, e.g. lossless compression or lossy compression, and could be, for example, an intraframe compression technique or an interframe compression technique. The encoded image or video stream is preferably then stored on a memory of the digital camera. The memory preferably comprises a non-volatile memory device for storing the data collected by the camera, and may comprise a removable non-volatile memory device that is attachable to and detachable from the video camera. For example, the memory may comprise a memory card such as, for example, an SD card or the like.
As will be appreciated, the method of the present invention can be used to automatically process colour image data collected by at least one image sensor of a digital camera when it is operating under water. The digital camera of the present invention is, however, preferably capable of operating out of water as well as underwater, and in such environments different image processing techniques are used since it is no longer necessary to cope with attenuation of light due to water. The method of the present invention is thus preferably only one of a plurality of modes of operation of the digital camera. In such embodiments, the method preferably comprises receiving an instruction to change to the underwater mode of operation. The instruction can be received from a user, i.e. the instruction to change mode is preferably based on a received user input. Alternatively, the instructions could be generated automatically based on data from one or more sensors indicating that the camera is now operating under water. The one or more sensors could include a pressure sensor and/or an exposure level sensor. As is known in the art, exposure is a measure of the amount of light per unit area that reaches the at least one image sensor of the digital camera.
The present invention can be implemented in any suitable system, such as a suitably configured micro-processor based system. In a preferred embodiment, the present invention is implemented in a computer and/or micro-processor based system. The method of the present invention is preferably performed on an image or video processing device. The image or video processing device preferably comprises a system on chip (SOC) comprising cores (or blocks) arranged to process the raw image data received from the at least one image sensor and to encode the processed image data. The image or video processing device is therefore preferably implemented in hardware, e.g. without using embedded processors.
The method aspects and embodiments of the present invention as described herein are preferably computer implemented methods, and may thus be implemented at least partially using software, e.g. computer programs. It will thus be seen that when viewed from further aspects the present invention provides computer software and such software installed on a computer software carrier for carrying out at least one of the steps of the methods set out herein. The present invention may accordingly suitably be embodied as a computer program product for use with a computer system. Such an implementation may comprise a series of computer readable instructions either fixed on a tangible, non-transitory medium, such as a computer readable medium, for example, diskette, CD ROM, ROM, RAM, flash memory, or hard disk. It could also comprise a series of computer readable instructions transmittable to a computer system, via a modem or other interface device, over either a tangible medium, including but not limited to optical or analogue communications lines, or intangibly using wireless techniques, including but not limited to microwave, infrared or other transmission techniques. The series of computer readable instructions embodies all or part of the functionality previously described herein.
It should be noted that the phrase "associated with" as used herein should not be interpreted to require any particular restriction on data storage locations. The phrase only requires that the features are identifiably related. Therefore association may for example be achieved by means of a reference to a file, potentially located in a remote server.
It will also be appreciated by those skilled in the art that all of the described aspects and embodiments of the present invention can, and preferably do, include, as appropriate, any one or more or all of the preferred and optional features described herein.

Brief Description of the Drawings
Various aspects of the teachings of the present invention, and arrangements embodying those teachings, will hereafter be described by way of illustrative example with reference to the accompanying drawings, in which:
Figure 1 is a graph showing the attenuation of light with depth in the ocean;
Figure 2 shows a digital camera in accordance with an embodiment of the invention;
Figure 3 shows a more detailed view of the video processing apparatus of the digital camera of Figure 2;
Figure 4 shows a more detailed view of the image pipe of the video processing apparatus of Figure 3; and
Figure 5 shows a flow chart illustrating the steps of a method of processing colour image data in accordance with an embodiment of the invention.

Detailed Description of the Preferred Embodiments
The present invention is, in preferred embodiments at least, directed to methods and systems for processing colour image data collected by an image sensor of a digital camera when operating underwater, such that the colour image data is representative of an underwater environment. The goal of the invention, at least in preferred embodiments, is to adjust the colours of the image data during recording, i.e. before it is encoded and stored, such that the image data is closer to the actual environment as would be seen by a user when underwater.
An embodiment of the present invention will now be described with reference to a digital video camera capable of operating underwater, although it should be appreciated that the image processing method is equally applicable to digital still cameras.
A digital video camera 20 is shown in Figure 2. The camera 20 comprises an image sensor 22, such as a CCD or CMOS sensor, operatively connected to a video processing apparatus 24, both of which are controlled by a controller 28. The controller is further operatively connected to a global navigation satellite system (GNSS) sensor or receiver 31 for determining a geographic position of the camera, e.g. as a set of longitude and latitude coordinates, and a pressure sensor 32 for determining a depth of the camera when underwater. Position and depth information from the sensors 31 and 32 can be supplied by the controller to the video processing apparatus 24 for use in processing the image data collected by the image sensor 22, as will be described in more detail below. The video processing apparatus 24, the operation of which is discussed in more detail below with reference to Figures 3 and 4, is further operatively connected to a memory 26 on which encoded video image data output from the video processing apparatus 24 is stored. The various components of the camera are preferably all contained within a housing of the camera 20, and the connections between the components are therefore wired connections. It will be understood, however, that some of the components, such as the GNSS sensor 31 and the pressure sensor 32, can be external to the camera housing, and be operatively connected to the controller 28 using a wireless connection. The camera 20 needs to be able to operate underwater, and therefore in embodiments the housing of the camera 20 can be arranged to be waterproof. Alternatively, the camera 20 could be insertable into a waterproof container, such that the camera 20 can be used underwater.
Figure 3 shows a more detailed view of the video processing apparatus 24. In particular, the video processing apparatus 24 comprises a video processing front end (VPFE) 40 and a video processing back end (VPBE) 50. The VPFE 40 comprises a controller 41 that receives raw image data, e.g. in Bayer format, from the image sensor 22. The raw image data is passed from the controller 41 to the statistics engine 43 and to the image pipe (IPIPE) 42. The statistics engine 43 is operatively connected to the IPIPE 42, such that data generated by the statistics engine 43 can be used to modify the image processing techniques that are performed in the IPIPE 42. The raw image data is processed in the IPIPE 42, so as to generate YCbCr video image data, as discussed in more detail below with reference to Figure 4. The YCbCr video image data output from the IPIPE 42 is passed to resizer 44, which can be used to scale the video image data prior to it being passed to the VPBE 50. The video image data is then processed by an encoder 51 in the VPBE 50, which acts to encode the video image data into a compressed format. Compression reduces the size of the data stream by removing redundant information, and can be lossless compression or lossy compression; lossless compression being where the reconstructed data is identical to the original, and lossy compression being where the reconstructed data is an approximation to the original, but not identical. The compression technique could be an intraframe compression technique or an interframe compression technique, e.g. H.264. As known in the art, intraframe compression techniques function by compressing each frame of the image data individually, e.g. as a JPEG image, whereas interframe compression techniques function by compressing a plurality of neighbouring frames together (based on the recognition that a frame can be expressed in terms of one or more preceding and/or succeeding frames).
In embodiments, the encoder 51 could be used to generate a plurality of encoded video streams, e.g. a first encoded stream that is encoded using an interframe compression technique and a second, typically lower quality, encoded stream that is encoded using an intraframe compression technique. The one or more encoded video streams output from the encoder 51 can be written to a digital media file, such as an AVI file or an MP4 file, which is stored in the memory 26. As will be appreciated, the one or more video streams could be multiplexed with other data streams, such as an encoded audio stream from a microphone (not shown) of the camera 20, such that the digital media file stored in the memory 26 includes multiple data types.
A number of image processing functions are performed within the IPIPE 42. For example, and as shown in Figure 4, the raw image data from the image sensor 22, which is typically in a Bayer format, is processed in a 'white balance' block 61 in which an automatic white balancing (AWB) operation is performed. The image data, after white balancing, is typically then converted in an interpolation and/or demosaicing process to an RGB format, whereupon the image data is then processed in an 'RGB blending' block 62 in which a colour correction (CC) operation is performed. The image data is next processed in a 'gamma correction' block 63 before being converted into a YCbCr format in a 'YCbCr conversion' block 64.
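By way of illustration, the final two stages of the pipeline described above might be sketched as follows. This is a minimal sketch only: the simple power-law gamma curve and the full-range BT.601 conversion coefficients are assumptions for the example, as the text does not specify the exact curves or coefficients used in blocks 63 and 64.

```python
def gamma_correct(value, gamma=1 / 2.2):
    """Apply an assumed power-law gamma encoding to an 8-bit channel value."""
    return round(255 * (value / 255) ** gamma)

def rgb_to_ycbcr(r, g, b):
    """Convert an RGB pixel to YCbCr using full-range BT.601 coefficients
    (an assumed convention; the actual block 64 coefficients are not given)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return round(y), round(cb), round(cr)
```

With these conventions, neutral greys map to Cb = Cr = 128, and gamma encoding brightens midtones while leaving black and white fixed.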
The image processing enhancements of the present invention are found primarily in the 'white balance' block 61 and the 'RGB blending' block 62, and will be discussed in more detail below with reference to the method shown in Figure 5.
In step 1 of the method, colour image data representative of an underwater environment is obtained from an image sensor, i.e. sensor 22. The colour image data is typically in a conventional Bayer pattern sensor format, such that the colour image data comprises a plurality of pixels, and wherein each pixel has an intensity value for only one of the three colours: red (R); blue (B); and green (G). In the Bayer format, there are two green pixels for each blue and red pixel. The intensity value could be represented by an 8-bit number, such that the intensity value can be an integer between 0 and 255.
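As an illustration of the Bayer format described above, the following sketch maps a pixel position to the single colour component it holds. The RGGB tiling is an assumed convention for the example; any Bayer variant shares the property of two green pixels per red and blue pixel.

```python
def bayer_colour(row, col):
    """Return the colour component ('R', 'G' or 'B') held by the pixel at
    (row, col) in an assumed RGGB Bayer mosaic (R G / G B repeating 2x2)."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'
```

Counting components over any 2x2 tile gives one R, one B, and two G samples, matching the description above.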
In step 2 of the method, the obtained colour image data is analysed in the statistics engine 43, by dividing the colour image data into a plurality of regions, which are also known as paxels. Each region comprises a plurality of pixels. In an embodiment the colour image data is divided into 16 regions using a 4x4 grid, although it will be understood that different grid sizes could be used, leading to a greater or lesser number of regions. The colour image data is thus preferably represented by a set of 4x4 paxels for each colour component, i.e. red, blue and two green. Each paxel (for a colour component) is assigned an intensity value representative of the individual pixels forming the paxel, e.g. with the intensity value for the paxel being the average intensity value of the pixels forming the paxel. It will therefore be understood that the set of paxels for a colour component thus forms a downsampled version of the colour image data for that colour component. Similarly it will be understood that each region, when the colour image data is in the Bayer format, is thus associated with four intensity values: a red, a blue, and two green values. In some embodiments, the two green values for a region could be averaged to produce a single combined green value.
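The paxel down-sampling of step 2 can be sketched as follows for a single colour component, using the 4x4 grid and simple averaging described in the embodiment above.

```python
def paxel_grid(channel, grid=4):
    """Downsample one colour channel (a 2D list of intensity values) into a
    grid x grid set of paxels, each paxel holding the average intensity of
    the pixels it covers."""
    height, width = len(channel), len(channel[0])
    ph, pw = height // grid, width // grid   # pixels per paxel, per axis
    paxels = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            block = [channel[y][x]
                     for y in range(gy * ph, (gy + 1) * ph)
                     for x in range(gx * pw, (gx + 1) * pw)]
            row.append(sum(block) / len(block))
        paxels.append(row)
    return paxels
```

In the Bayer case this would be run once per component plane (red, blue, and the two greens), yielding the four intensity values per region noted above.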
In step 3 of the method, at least one ratio between intensity values is determined for each region. For example, and since the aim of the invention is to identify areas of the colour image data that relate primarily to the ocean, i.e. blue water, the ratio between the blue and red intensity values is determined. The particular ratio can be chosen based on location, since, for example, if the camera is being used in green water, then a different ratio, such as that between red and green or between blue and green, can be used. The method of step 3 is typically performed in the 'white balance' block 61 of the IPIPE 42 based on the intensity values for the regions output from the statistics engine 43.
In steps 4 and 5 of the method, the at least one ratio is compared to a predetermined threshold value associated with an underwater depth (step 4), so as to select a subset of the regions based on the comparison (step 5). In some embodiments, a threshold value can be predetermined based on an assumption that the camera is primarily going to be operated underwater at a depth of around 5 m. The threshold value for a particular depth is approximately the ratio of the spectral irradiance at around 450 nm (blue) to that at around 620 nm (red), as shown in the graph of Figure 1. Therefore, for example, the threshold value for the B:R ratio can be set at 3, although it will be understood that this is merely exemplary. Those regions that have a B:R ratio above the threshold value can be said to primarily represent the ocean, and so these regions should not be used in the AWB operation. In other words, these regions are masked, and the remaining regions are those that are selected as the subset of regions for further processing. In embodiments, the predetermined threshold value can be selected based on a depth of the camera - the depth being determined from the pressure sensor 32 - with the threshold value being selected from a plurality of values that are each associated with a particular depth range. For example, a lookup table may be stored in a memory of the camera that associates a threshold value with a depth range. As will be appreciated, the threshold value will increase with depth, as the colour image data becomes more heavily shifted toward blue. The lookup table may also further associate a threshold value with a depth range and a geographic area (or particular body of water). In such instances, the predetermined threshold can be further selected based on a location of the camera as determined from the GNSS sensor 31.
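Steps 4 and 5 might be sketched as follows, assuming a hypothetical depth-to-threshold lookup table. The table values here are illustrative only; the text gives a threshold of about 3 as an example for a depth of around 5 m, and states only that the threshold increases with depth.

```python
# Hypothetical lookup table: (maximum depth in metres, B:R threshold).
# Values beyond the ~5 m example are illustrative assumptions.
DEPTH_THRESHOLDS = [(5.0, 3.0), (10.0, 4.5), (20.0, 6.0)]

def threshold_for_depth(depth_m):
    """Pick the B:R threshold for the first depth range covering depth_m;
    deeper than the table, keep the deepest entry's threshold."""
    for max_depth, threshold in DEPTH_THRESHOLDS:
        if depth_m <= max_depth:
            return threshold
    return DEPTH_THRESHOLDS[-1][1]

def select_regions(blue_paxels, red_paxels, depth_m):
    """Return the indices of regions whose B:R ratio is below the threshold,
    i.e. regions taken to contain objects rather than open water."""
    threshold = threshold_for_depth(depth_m)
    return [i for i, (b, r) in enumerate(zip(blue_paxels, red_paxels))
            if r > 0 and b / r < threshold]
```

Regions at or above the threshold are masked out, leaving only object-dominated regions for the AWB step that follows.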
In step 6 of the method, an AWB operation is performed using the selected subset of regions, e.g. only those regions that have a B:R ratio less than the threshold value. For example, a set of gains to be applied to the intensity values of individual pixels is determined from the paxels of the selected subset of regions using a gray world normalisation. The gray world normalisation makes the assumption that the scene represented by the colour image data is, on average, a neutral grey. The R:G and B:G gains for the colour image data can be calculated as follows:
Red Gain = ∑(green paxels) / ∑(red paxels)
Blue Gain = ∑(green paxels) / ∑(blue paxels)
(with the gains being left at unity if ∑(red paxels) or ∑(blue paxels) is zero)
Once the gains have been determined, they can be applied to the intensity values of all the pixels of the colour image data, so as to complete the AWB operation. In embodiments, the difference in the red and blue gains between the previous and current frame should not exceed 0.5%, i.e. a time constant of 200 frames. The value of 0.5% is merely exemplary, however, and any suitable value can be used as desired. Therefore, in embodiments, the red and blue gains as determined in the AWB operation can be modified, before they are applied to the intensity values of the pixels of the colour image data, so as to maintain the time constant.
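The gray-world gain calculation and the frame-to-frame limiting described above might be sketched as follows. The unity-gain fallback for zero sums and the reading of the 0.5% figure as a per-frame step limit on the previous gain are assumptions of this sketch.

```python
def gray_world_gains(selected_paxels):
    """Compute R and B gains from the selected paxels under the gray-world
    assumption. 'selected_paxels' is a list of (red, green, blue) paxel
    intensity triples from the selected subset of regions."""
    sum_r = sum(p[0] for p in selected_paxels)
    sum_g = sum(p[1] for p in selected_paxels)
    sum_b = sum(p[2] for p in selected_paxels)
    if sum_r == 0 or sum_b == 0:
        return 1.0, 1.0          # assumed fallback: leave gains at unity
    return sum_g / sum_r, sum_g / sum_b

def clamp_gain(new_gain, previous_gain, max_step=0.005):
    """Limit the frame-to-frame gain change to +/- 0.5% of the previous
    gain, approximating the ~200-frame time constant described above."""
    limit = previous_gain * max_step
    return max(previous_gain - limit, min(previous_gain + limit, new_gain))
```

Run per frame, the clamp makes a large white-balance shift settle gradually over many frames rather than jumping in one step.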
In step 7 of the method, a CC operation is performed on the modified colour image data. This step of the method is performed in the 'RGB blending' block 62 of the IPIPE 42. The CC operation involves applying a 3x3 matrix transformation to the RGB values of each pixel of the colour image data (in this case after the AWB operation), so as to further modify the colours of the image data. In embodiments, the matrix can be selected based on a depth of the camera - the depth being determined from the pressure sensor 32 - from a plurality of predetermined matrices. For example, a lookup table may be stored in a memory of the camera that associates a matrix with a depth range. The lookup table may also further associate a matrix with a depth range and a geographic area (or particular body of water). In such instances, the matrix can be further selected based on a location of the camera as determined from the GNSS sensor 31.
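The CC operation of step 7 can be sketched as a per-pixel 3x3 matrix multiply with clipping to the 8-bit range. The example matrix below is purely illustrative; real coefficients would come from the camera's calibrated depth (and location) lookup table.

```python
def apply_ccm(pixel, matrix):
    """Apply a 3x3 colour-correction matrix to an (R, G, B) pixel and clip
    each output channel to the 8-bit range 0..255."""
    r, g, b = pixel
    out = []
    for row in matrix:
        value = row[0] * r + row[1] * g + row[2] * b
        out.append(max(0, min(255, round(value))))
    return tuple(out)

# Illustrative matrix only: boosts red to counter the blue shift at depth.
CCM_EXAMPLE = [[1.4, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 0.9]]
```

An identity matrix leaves pixels unchanged, while the example matrix lifts the red channel and slightly suppresses blue.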
Finally, in step 8 of the method, the modified colour image data is encoded by the encoder 51, so as to generate an encoded video stream, which, as discussed above, is then stored in the memory 26.
The underwater mode of the video camera 20, i.e. wherein the raw image data from the image sensor 22 is processed using the method of Figure 5, can be manually selected by the user as they are about to go underwater, or once they are underwater. Alternatively, the mode could be automatically selected based on an exposure level of the image data collected by the image sensor 22 and/or a depth of the camera as determined by the pressure sensor 32.
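The automatic mode selection might be sketched as follows. The threshold values and the exposure scale are illustrative assumptions, since the text states only that pressure (depth) and/or exposure data can trigger the change of mode.

```python
def should_enter_underwater_mode(depth_m, exposure_level,
                                 depth_threshold=0.5,
                                 exposure_threshold=40.0):
    """Decide whether to switch to underwater mode: either the pressure
    sensor reports a depth beyond a small threshold, or the measured
    exposure level drops below a threshold (both thresholds, and the
    exposure scale, are illustrative assumptions)."""
    return depth_m > depth_threshold or exposure_level < exposure_threshold
```

In a camera this check would run periodically in the controller, switching the IPIPE into the underwater processing path when it returns true.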
Finally, it should be noted that whilst the accompanying claims set out particular combinations of features described herein, the scope of the present invention is not limited to the particular combinations hereafter claimed, but instead extends to encompass any combination of features or embodiments herein disclosed, irrespective of whether or not that particular combination has been specifically enumerated in the accompanying claims at this time.

Claims

CLAIMS:
1. A method of processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater, the method comprising:
obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for at least one of a plurality of colour components;
dividing said colour image data into a plurality of regions, each region comprising a plurality of said pixels, and determining, for each region, and for at least two of the colour components, an intensity value for the region based on the intensity values of at least some of the pixels in the region;
determining, for each region, at least one ratio between the determined intensity values for the region, and comparing the determined at least one ratio to a predetermined threshold value associated with an underwater depth;
selecting a subset of the plurality of regions based on the result of the comparison;
performing an automatic white balancing operation using the selected subset of regions to determine a set of modifications to be applied to the intensity values of at least some of the pixels of the colour image data; and
generating modified colour image data representative of the underwater environment by applying the set of modifications to the at least some of the pixels of the colour image data.
2. The method of claim 1, wherein each pixel of the obtained colour image data corresponds to a photosensor of the at least one image sensor, and each pixel has an intensity value for only one of the plurality of colour components.
3. The method of claim 1, wherein each pixel of the obtained colour image data has an intensity value for each of the plurality of colour components.
4. The method of any preceding claim, wherein the intensity value for each colour component is represented by a plurality of bits.
5. The method of any preceding claim, wherein the colour image data is divided into a grid of non-overlapping regions, such as a 4x4 grid, a 6x6 grid, or an 8x8 grid.
6. The method of any preceding claim, wherein the intensity value of a colour component for a region is the average of the intensity values of the colour component of the pixels of the region.
7. The method of any preceding claim, wherein the plurality of colour components comprise red, blue and green, and the determined at least one ratio for a region is the ratio between the blue and red intensity values of the region.
8. The method of any preceding claim, further comprising:
obtaining a geographic location of the camera from a position determining device, such as a global navigation satellite system (GNSS) receiver, operatively connected to the camera; and
selecting the at least one ratio based on the obtained geographic location.
9. The method of any preceding claim, further comprising:
obtaining a depth of the camera from a pressure sensor operatively connected to the camera; and
selecting the predetermined threshold value from a plurality of predetermined threshold values based on the obtained depth.
10. The method of any preceding claim, wherein a first subset of regions is identified where the at least one ratio is greater than the predetermined threshold value, and a second subset of regions is identified where the at least one ratio is less than the predetermined threshold value, and wherein one of the first and second subset of regions is substantially representative of water in the underwater environment, and the other of the first and second subset of regions is substantially representative of objects in the underwater environment and which are the selected subset of regions.
11. The method of any preceding claim, wherein the automatic white balancing operation is based on a Gray World algorithm, such that the average of the intensity values of the plurality of regions for each of the colour components is substantially the same.
12. The method of any preceding claim, wherein the digital camera is a digital video camera, and the method is applied to a series of frames, each frame comprising colour image data, the method further comprising modifying the set of modifications determined using the automatic white balancing operation such that the set of modifications applied to a frame do not differ from those applied to the previous frame by more than a predetermined amount.
13. A method of processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater, the method comprising:
obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for at least one of a plurality of colour components;
obtaining a depth of the camera from a pressure sensor operatively connected to the camera;
selecting an automatic white balancing operation based on the obtained depth of the camera; and
performing the selected automatic white balancing operation to adjust intensity values of at least some of the pixels of the colour image data, so as to generate modified colour image data representative of the underwater environment.
14. The method of claim 13, further comprising obtaining a geographic location of the camera from a position determining device, such as a global navigation satellite system (GNSS) receiver, operatively connected to the camera, and wherein the selection of the automatic white balancing operation is further based on the obtained geographic location.
15. The method of any preceding claim, further comprising:
selecting a colour correction operation based on an obtained depth and/or geographic location of the camera; and
performing the selected colour correction operation to adjust one or more of the intensity values of at least some of the pixels of the modified colour image data, so as to generate further modified colour image data representative of the underwater environment.
16. A method of processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater, the method comprising:
obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for each of a plurality of colour components;
obtaining a depth of the camera from a pressure sensor operatively connected to the camera;
selecting a colour correction operation based on the obtained depth of the camera; and
performing the selected colour correction operation to adjust one or more of the intensity values of at least some of the pixels of the colour image data, so as to generate modified colour image data representative of the underwater environment.
17. The method of claim 16, further comprising obtaining a geographic location of the camera from a position determining device operatively connected to the camera, and wherein the selection of the colour correction operation is further based on the obtained geographic location.
18. The method of any one of claims 15 to 17, wherein the selection of the colour correction operation comprises selecting one of a plurality of predetermined transformation matrices.
19. The method of any preceding claim, wherein the method is performed when the digital camera is operating in an underwater mode of operation, and wherein the digital camera is capable of operating in a plurality of modes of operation, the method further comprising receiving an instruction to change the camera to operate in the underwater mode of operation, the instruction being automatically generated based on data obtained from a pressure sensor and/or an exposure level sensor indicating that the camera is operating under water.
20. A computer program product comprising computer readable instructions that, when executed by at least one processor of a digital camera, cause the digital camera to perform a method according to any preceding claim, optionally stored on a non-transitory computer readable medium.
21. A system for processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater, the system comprising:
means for obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for at least one of a plurality of colour components;
means for dividing said colour image data into a plurality of regions, each region comprising a plurality of said pixels, and determining, for each region, and for at least two of the colour components, an intensity value for the region based on the intensity values of at least some of the pixels in the region;
means for determining, for each region, at least one ratio between the determined intensity values for the region, and comparing the determined at least one ratio to a predetermined threshold value associated with an underwater depth;
means for selecting a subset of the plurality of regions based on the result of the comparison;
means for performing an automatic white balancing operation using the selected subset of regions to determine a set of modifications to be applied to the intensity values of at least some of the pixels of the colour image data; and
means for generating modified colour image data representative of the underwater environment by applying the set of modifications to the at least some of the pixels of the colour image data.
22. A system for processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater, the system comprising:
means for obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for at least one of a plurality of colour components;
means for obtaining a depth of the camera from a pressure sensor operatively connected to the camera;
means for selecting an automatic white balancing operation based on the obtained depth of the camera; and
means for performing the selected automatic white balancing operation to adjust intensity values of at least some of the pixels of the colour image data, so as to generate modified colour image data representative of the underwater environment.
23. A system for processing colour image data collected by at least one image sensor of a digital camera capable of operating underwater, the system comprising:
means for obtaining colour image data from the at least one image sensor representative of an underwater environment, wherein said colour image data comprises a plurality of pixels, each pixel having an intensity value for each of a plurality of colour components;
means for obtaining a depth of the camera from a pressure sensor operatively connected to the camera;
means for selecting a colour correction operation based on the obtained depth of the camera; and
means for performing the selected colour correction operation to adjust one or more of the intensity values of at least some of the pixels of the colour image data, so as to generate modified colour image data representative of the underwater environment.
24. The system of any one of claims 21 to 23, wherein the system comprises a digital still camera or a digital video camera.
PCT/EP2017/061910 2016-05-18 2017-05-18 Methods and systems for underwater digital image processing WO2017198746A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1608775.1 2016-05-18
GBGB1608775.1A GB201608775D0 (en) 2016-05-18 2016-05-18 Methods and systems for underwater digital image processing

Publications (1)

Publication Number Publication Date
WO2017198746A1 true WO2017198746A1 (en) 2017-11-23

Family

ID=56320627

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/061910 WO2017198746A1 (en) 2016-05-18 2017-05-18 Methods and systems for underwater digital image processing

Country Status (2)

Country Link
GB (1) GB201608775D0 (en)
WO (1) WO2017198746A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10951812B2 (en) 2018-03-28 2021-03-16 Paralenz Group ApS Depth triggered auto record
CN113850747A (en) * 2021-09-29 2021-12-28 重庆理工大学 Underwater image sharpening processing method based on light attenuation and depth estimation
US11323666B1 (en) 2020-11-13 2022-05-03 Paralenz Group ApS Dynamic depth-color-correction
CN116579953A (en) * 2023-06-28 2023-08-11 陕西欧卡电子智能科技有限公司 Self-supervised water surface image enhancement method and related equipment
WO2025075609A1 (en) * 2023-10-03 2025-04-10 Google Llc Camera auto white balance: under-the-sea true color algorithm

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070104475A1 (en) * 2005-11-04 2007-05-10 Cheng Brett A Backlight compensation using threshold detection
US20110228074A1 (en) * 2010-03-22 2011-09-22 Parulski Kenneth A Underwater camera with presssure sensor
US20130176416A1 (en) * 2012-01-06 2013-07-11 Canon Kabushiki Kaisha Imaging apparatus, method for controlling imaging apparatus, and storage medium
US20140071264A1 (en) * 2012-09-11 2014-03-13 Samsung Electronics Co., Ltd. Image capture apparatus and control method thereof
US20150029356A1 (en) * 2013-07-25 2015-01-29 Olympus Corporation Imaging device, imaging method and non-transitory storage medium in which imaging program is stored


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CARLEVARIS-BIANCO N ET AL: "Initial results in underwater single image dehazing", OCEANS 2010, IEEE, PISCATAWAY, NJ, USA, 20 September 2010 (2010-09-20), pages 1 - 8, XP031832668, ISBN: 978-1-4244-4332-1 *
SCHECHNER Y Y ET AL: "Clear underwater vision", PROCEEDINGS OF THE 2004 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION 27 JUNE-2 JULY 2004 WASHINGTON, DC, USA, IEEE, PROCEEDINGS OF THE 2004 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION IEE, vol. 1, 27 June 2004 (2004-06-27), pages 536 - 543, XP010708613, ISBN: 978-0-7695-2158-9, DOI: 10.1109/CVPR.2004.1315078 *


Also Published As

Publication number Publication date
GB201608775D0 (en) 2016-06-29


Legal Events

Date Code Title Description

NENP: Non-entry into the national phase (Ref country code: DE)

121: Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 17724047; Country of ref document: EP; Kind code of ref document: A1)

122: Ep: PCT application non-entry in European phase (Ref document number: 17724047; Country of ref document: EP; Kind code of ref document: A1)

点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载