US9329272B2 - 3D camera and method of image processing 3D images - Google Patents
- Publication number
- US9329272B2 (application US13/875,313 / US201313875313A)
- Authority
- US
- United States
- Prior art keywords
- information
- image processing
- phase
- depth
- unambiguity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
- G01S17/32—Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
- G01S17/36—Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
Definitions
- 3D cameras: three-dimensional depth cameras
- ToF principle: time-of-flight principle
- 3D cameras may provide human gesture recognition in natural user interfaces or passenger recognition for automotive safety functions.
- 3D cameras provide an array of pixels in which each pixel is capable of providing information related to the distance of the object captured by that pixel. Such information may for example be based on the time of flight of light reflected from an object captured by the pixels.
- image processing is known as a digital technique which, in conventional systems, is applied after the images have been obtained.
- image processing has been used in restoring pictures by scanning them into a digital image and applying digital image processing algorithms to improve the quality, e.g. by removing blurring or increasing the contrast of the image.
- image processing techniques are also implemented in 2D cameras and video apparatuses to provide image processed digital images to users.
- FIG. 1 shows a block diagram according to an embodiment;
- FIG. 2 shows example scenery;
- FIGS. 3A and 3B show image data of the scenery of FIG. 2;
- FIGS. 4A and 4B show image processed data of the scenery of FIG. 2;
- FIG. 5 shows a modular architecture according to an embodiment;
- FIG. 6 shows a flow diagram according to an embodiment;
- FIG. 7 shows a flow diagram according to an embodiment;
- FIG. 8 shows a flow diagram according to an embodiment;
- FIG. 9 shows a flow diagram according to an embodiment.
- any direct connection or coupling between functional blocks, devices, components or other physical or functional units shown in the drawings or described herein can also be implemented by an indirect connection or coupling.
- Functional blocks may be implemented in hardware, firmware, software, or a combination thereof.
- first, second, third, etc. may be used herein to describe various elements, components, regions, layers, and/or sections, these elements, components, regions, layers, and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer, and/or section from another element, component, region, layer, and/or section. For example, a first element, component, region, layer, and/or section could be termed a second element, component, region, layer, and/or section without departing from the teachings of example embodiments.
- the embodiments described below are directed to a new concept for providing image processing for 3D image data obtained based on the time of flight (TOF) principle.
- the digital image processing for a 3D image is provided as a pipelined digital image processing which can provide reduced latency, as will be described later on in more detail.
- a phase of reflected light is determined by mixing reflected modulated light in each pixel of a pixel array with a demodulation signal of the same modulation frequency.
- FIG. 1 shows an embodiment of a TOF camera 100 based on the photonic mixing principle.
- Light generated by a light source 102 is continuously amplitude modulated based on a modulation signal 104 generated by a signal source 106 .
- the modulation signal may include a rectangular waveform, a sine waveform or other signal waveforms.
- the modulated light signal is provided to determine the distance to an object 108 .
- the modulated light reflected by the object 108 is directed to an imager device 110 which includes pixels as shown for example in the embodiments described with respect to FIGS. 1A, 2A, 3, 4 and 5 .
- a signal 104A which corresponds to the modulation signal 104 phase shifted by a predetermined phase, e.g. 0°, 90°, 180° or 270°, is provided to the control electrodes for mixing and demodulation of the reflected light within each pixel.
- Certain time intervals are assigned to each of the predetermined phases. After integrating the signals in the respective time intervals for each phase 0°, 90°, 180° and 270°, output signals I0, I1, I2 and I3 are obtained, one for each phase. Based on the output signals I0, I1, I2 and I3, the phase information corresponding to the travel time can be computed as is known to a person skilled in the art. It is to be noted that the structure of FIG. 2A, having two read-out nodes at both sides, allows the output signals I0 and I2, and I1 and I3, to be obtained simultaneously, respectively.
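The patent leaves this phase computation to the person skilled in the art. A minimal sketch of the widely used four-phase ("four-bucket") arctangent demodulation follows, assuming ideal sinusoidal modulation; the function name and sign convention are illustrative, not taken from the patent.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def phase_and_distance(i0, i1, i2, i3, f_mod):
    """Four-bucket ToF demodulation (illustrative sketch).

    i0..i3: integrated pixel outputs for the 0, 90, 180 and 270 degree
    demodulation intervals; f_mod: modulation frequency in Hz.
    """
    # The arctangent of the two differential signals gives the phase
    # shift; wrap the atan2 result from [-pi, pi) into [0, 2*pi).
    phi = math.atan2(i3 - i1, i0 - i2) % (2.0 * math.pi)
    # Within the range of unambiguity, phase maps linearly to distance:
    # d = c * phi / (4 * pi * f_mod).
    return phi, C * phi / (4.0 * math.pi * f_mod)
```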
- the signal 104 A is provided in time intervals phase shifted with respect to the modulation signal 104 . It is to be understood that only the relative phase shift of the modulation signal and the demodulation signal is required. Therefore, in other embodiments a system with interchanged signals 104 and 104 A may be provided in which the modulation signal 104 for the light modulation is phase shifted in different time intervals with respect to the signal 104 A which is provided with no phase change.
- the distance of objects which are distributed in a range greater than the range of unambiguity can be determined only with ambiguity, as the demodulated signal can only determine a phase between 0° and 360°.
- Phase shifts greater than 360° cannot be distinguished by the photonic mixing system from the corresponding phase shift in the interval [0°, 360°]. For example, a phase shift of x + n·360° (0° ≤ x < 360°, n being an integer) cannot be distinguished from the corresponding phase shift x in the interval [0°, 360°].
- the range of unambiguity is directly dependent on the modulation frequency. If modulations with different modulation frequencies are used, then the range of unambiguity can be extended. In theory, two different modulation frequencies may be sufficient to extend the range of unambiguity if each distance within the extended range of unambiguity has a unique phase combination (φ1, φ2), with φ1 being the phase shift due to the time of flight corresponding to the modulation with a first frequency f1 and φ2 being the phase shift due to the time of flight corresponding to the modulation with a second frequency f2.
- the distance can be measured within the range of unambiguity as is known to a person skilled in the art. Since, however, measurement errors exist for each phase measurement, in some embodiments two modulations with different frequencies may not be enough to determine the distance. Therefore, according to some embodiments three, four or even more different modulation frequencies may be used in order to extend the range of unambiguity. In some embodiments, the number of different frequencies used for extending the range of unambiguity may be programmed by a user.
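The patent does not spell out how a phase combination (φ1, φ2) is resolved into a distance. The following is a minimal brute-force sketch, assuming nearly error-free wrapped measurements; the function names, tolerance and matching strategy are illustrative assumptions, not the patent's algorithm.

```python
C = 299_792_458.0  # speed of light in m/s

def unambiguous_range(f_mod):
    """Range of unambiguity of a continuous-wave ToF measurement."""
    return C / (2.0 * f_mod)

def dealias(d1, d2, f1, f2, extended_range, tol=0.05):
    """Find the distance consistent with both wrapped measurements.

    d1, d2: distances measured (wrapped) at frequencies f1, f2.
    Returns the first candidate within the extended range whose
    unfoldings agree to within tol meters, or None. With real
    measurement errors this simple matching can fail, which is why the
    text mentions using three, four or more frequencies.
    """
    r1, r2 = unambiguous_range(f1), unambiguous_range(f2)
    for n in range(int(extended_range / r1) + 1):
        candidate = d1 + n * r1              # unfold the f1 measurement
        m = round((candidate - d2) / r2)     # nearest fold count for f2
        if abs(candidate - (d2 + m * r2)) < tol:
            return candidate
    return None
```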
- FIG. 2 shows example scenery including a plurality of objects 202A-202D extending in the z- and x-directions. For the sake of simplicity, only the depth direction z and one (the x-direction) of the two other directions are shown. It is however to be noted that the embodiments explained herein with respect to the z- and x-directions are easily extended by a person skilled in the art to include also the y-direction.
- in FIG. 2 the objects 202A-202D are shown to be positioned at some distances in the z-direction (depth direction) away from the 3D camera 100.
- a boundary of the range of unambiguity is indicated with reference number 204 in FIG. 2 .
- objects which are distributed in a range extending beyond the boundary 204 cannot be determined unambiguously from the phase information of the 3D TOF camera 100 when using only the first modulation frequency.
- FIG. 3A shows the 3D pixel image data (pixel information from the pixel array including the measured phase shift or a representation of the measured phase shift) as a frame obtained by the 3D TOF camera 100 during a first time interval when the light is modulated with the first modulation frequency.
- since the first modulation frequency allows distance information to be determined unambiguously only for objects within a range up to the boundary 204, pixels which detect light reflected by objects beyond the boundary 204, such as objects 202C and 202D, are folded back into the range of unambiguity although the objects are actually located beyond the unambiguity boundary.
- embodiments described herein use a pipelined approach in which uncombined 3D image data having a respective limited range of unambiguity (non-extended range, when compared to the final 3D image with extended range) are processed with an image processing algorithm.
- the image processing is applied in embodiments to image data (e.g. phase information or a representation of phase information measured by the pixel array) resulting from using only one modulation frequency, even though the absolute distance information cannot be certain.
- while the input data for the digital image processing provides a full 3D image, for measurements of objects which are distributed in a range extending beyond the unambiguity boundary such input data can be considered imperfect, since it is based on only one modulation frequency and therefore has an inherent lack of certainty in the distance information.
- the uncombined 3D image data at this stage may contain incorrect absolute distance information for some pixels.
- image processing is to be distinguished from the analog or digital processing of signals to obtain an output signal for each single pixel which may for example also include some analog or digital filtering.
- Image processing is typically provided by using digital algorithms which use information of one pixel together with information of one or more other pixels in order to provide a modified image after the image processing.
- image processing is provided to the full set of image data obtained from the pixel array.
- image processing may include an image filtering which uses pixel information from all pixels of the pixel array provided in a frame as input data.
- Image processing may typically include the use of computing algorithms which are applied to the digital input data.
- Image processing may include a wide range of algorithms to be applied to the input data.
- the application of image processing is used for example to provide to the user a filtered 3D image in which objects of sizes that are not of interest for the application are not presented to the user.
- the term user may herein not be interpreted only as a human being but can according to embodiments also include applications such as driver assistance applications, human recognition applications etc.
- the size of the object of interest may for example be limited to a maximum size, a minimum size or both.
- the digital data may be modeled in the image processing in the form of multidimensional systems, and specific image processing computing algorithms are applied to these data.
- Image processing may in some embodiments include surface reconstruction, depth image meshing using multiple steps, gradient detection, edge detection, object movement detection, object recognition, object suppression or any combination thereof.
- image processing may include one or more of classification, feature extraction, pattern recognition, edge recognition, projection, multi-scale signal analysis and other processing.
- One or more techniques which may be used in image processing include pixelation, linear filtering, non-linear filtering, principal components analysis, independent component analysis, hidden Markov models, anisotropic diffusion, partial differential equations, self-organizing maps, neural networks and wavelets.
- image processing uses morphological filter operations which are simple but efficient and applicable for several kinds of image processing.
- the methodology of such filters is set-theory based and the filter operations consist of minimum and maximum operations.
- Progressive morphological filters may be used to suppress ground objects and preserve the ground surface, but in other applications morphological filters may be used for a various range of applications.
- the algorithm typically takes a few cycles for applying different window sizes of the filter kernel.
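As one concrete reading of the morphological filtering described above, a progressive filter can be built from grayscale openings (a minimum operation followed by a maximum operation) with growing window sizes. The scipy-based sketch below, its window sizes and its test frame are illustrative assumptions, not the patent's implementation.

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_morphological_filter(depth, window_sizes=(3, 9, 27)):
    """Progressive morphological filtering of a depth frame (sketch).

    Each pass is a grayscale opening: an erosion (minimum filter)
    followed by a dilation (maximum filter). A pass with window w
    suppresses protruding features smaller than w while preserving the
    larger-scale surface; the complementary closing would remove small
    indentations instead.
    """
    filtered = depth.astype(float)
    for w in window_sizes:  # "a few cycles" with growing kernel windows
        filtered = grey_opening(filtered, size=(w, w))
    return filtered

# A flat surface at depth 5 m with a small 2x2 outlier patch at 9 m;
# already the first pass removes the small feature.
frame = np.full((32, 32), 5.0)
frame[10:12, 10:12] = 9.0
print(progressive_morphological_filter(frame)[10, 10])  # -> 5.0
```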
- the ToF measurement procedure offers an architecture that allows a utilization of the algorithm with good performance.
- correction for absolute distance information is provided according to embodiments in order to obtain a final image processed 3D image with corrected absolute distance information.
- such corrections are applied in a pipeline stage further downstream, when at least 3D pixel data obtained with the second modulation frequency is taken into account to extend the range of unambiguity and to correct a distance.
- the errors in absolute distance have no or limited relevance. For example, if edge detection in the image is performed, an abrupt change of the relative phase information detected amongst nearest pixels or a group of nearest pixels gives the indication that an edge exists at these pixels or group of pixels. For the detection of the edge, the final absolute distances are of less importance.
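To make the edge example concrete, here is a minimal nearest-neighbor difference sketch illustrating why fold-back offsets do not disturb edge detection; the function name and threshold are illustrative assumptions.

```python
import numpy as np

def edge_mask(depth_frame, threshold):
    """Detect edges from abrupt changes between neighboring pixels.

    Works on wrapped (ambiguous) depth data too: a constant fold-back
    offset within an object does not change local differences between
    neighbors; only the jump at the object boundary exceeds the
    threshold.
    """
    dz_y = np.abs(np.diff(depth_frame, axis=0, prepend=depth_frame[:1]))
    dz_x = np.abs(np.diff(depth_frame, axis=1, prepend=depth_frame[:, :1]))
    return (dz_y > threshold) | (dz_x > threshold)
```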
- an embodiment including a simplified image filtering applied to image data in order to filter out certain features having a size out of interest will now be explained in more detail with respect to FIGS. 4A and 4B.
- FIG. 4A shows a filtering of the image data shown in FIG. 3A .
- the image data of FIG. 3A is imperfect image data in view of the limited range of unambiguity and the incorrect absolute distance values for the pixels corresponding to objects 202 C and 202 D.
- the filtering shown in FIG. 4A is a coarse filtering in order to filter out objects having a size below a minimum value.
- the curve resulting from the coarse filtering is provided in FIG. 4A with reference number 402 .
- the pixels corresponding to object 202 B have been filtered out since the size of object 202 B is below the filtering criteria.
- curve 402 provides a coarse outline of the objects 202 A, 202 C and 202 D with the absolute distance information for objects 202 C and 202 D being incorrect due to the reduced range of unambiguity.
- in FIG. 4B it can be seen that a fine filtering has been performed, resulting in a finer approximation curve 404 including representations of the objects 202C, 202D and 202A.
- the distance information of objects 202 C and 202 D is corrected.
- the fine filtering may in some embodiments be based on the coarse filtering and information from the coarse filtering can be used in the fine filtering. Therefore, the pixel information corresponding to the light reflected from object 202 B is not considered for the fine-filtering since object 202 B had already been filtered out in the previous coarse filtering.
- FIG. 5 shows a schematic architecture according to an embodiment in which image processing is performed at different stages.
- Frames Z1, Z2 . . . Zi, with each frame including depth information (e.g. a measured phase or representations thereof) from each pixel of the camera's pixel array, enter the process flow at different points in time, e.g. when the respective 3D image data become available.
- Frame Z1 (reference number 502A) has been obtained by using a first modulation frequency, frame Z2 (reference number 502B) by using a second modulation frequency with a value different from the first modulation frequency, etc.
- the image processing, such as a coarse filtering with a filter parameter (window size) w1 for frame Z1, starts to generate the image processed data frame 504A (indicated in 504A by m(Z1, w1)).
- frame Z1 and frame Z2 can be combined to obtain EURZ frame 506A (Extended Unambiguity Range Z-frame 506A).
- image processing on frame Z2, such as a fine filtering with a filter parameter w2, starts. Both processes, i.e. the combining of frame Z1 and frame Z2 and the image processing on frame Z2, can in some embodiments be processed at least partially in parallel.
- frame 504B is obtained by combining the image processing of frame Z2, which is indicated by m(Z2, w2), with the coarse filtering information and the extended unambiguity range information.
- the image processing operation can be immediately applied to frame Z2 using the next image processing parameter in the sequence of image processing to be applied.
- an image processing parameter may for example be a filter parameter w2.
- once the corrected EURZ data 506A is available, the information can be used to update the filter output (the dashed line represents the update function u). Therefore, for each of the frames Z1, Z2, . . . Zi different image processing may be applied, including image processing of the same type with different parameters or image processing of a different type. This scheme can be repeated for each frame Zi until a final EURZ frame 508 and a final image processed frame 510 are obtained.
- image filters with filter function m suitable for the above architecture may include filters with more than one pass on the input image (wi denotes the filter parameter for pass i), in which the input of the subsequent filter stage m(wi) depends on the previous filter output m(wi-1), in which the filter operation is robust against changes of depth information between two filter stages, and for which an update operation u is available that corrects the output of m.
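Under these requirements, the data flow of FIG. 5 can be sketched as follows. This is one possible reading of the figure, not a reference implementation; m, u and combine are placeholders for concrete filter, update and EURZ combination algorithms.

```python
def pipelined_processing(frames, windows, m, u, combine):
    """One possible rendering of the pipelined scheme of FIG. 5.

    frames  -- iterable of depth frames Z1, Z2, ..., one per modulation
               frequency, consumed as they become available
    windows -- filter parameters w1, w2, ... for the passes of m
    m       -- filter pass, m(frame, w)
    u       -- update function correcting a filter output with EURZ data
    combine -- EURZ combination extending the range of unambiguity
    """
    frames = iter(frames)
    z = next(frames)
    out = m(z, windows[0])                # coarse pass starts on frame Z1
    for w, z_next in zip(windows[1:], frames):
        eurz = combine(z, z_next)         # extend the range of unambiguity
        out = u(m(z_next, w), out, eurz)  # next filter pass plus update u
        z = eurz                          # carry corrected depth forward
    return z, out                         # final EURZ frame, filtered frame
```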
- T_pipelined = t_Z1 + t_Z2 + . . . + t_Zi + t_EURZ(Zi,Zi-1) + t_u + t_m(wi), with t_Z1, t_Z2, . . . t_Zi corresponding to the times for providing frames Z1, Z2, . . . Zi.
- T_direct = t_Z1 + t_Z2 + . . . + t_Zi + t_EURZ(Zi,Zi-1) + t_m(w1) + t_m(w2) + . . . + t_m(wi)
- the right-hand side of this inequality can decrease if the number of filter passes exceeds the number of frames required for calculation of a non-ambiguous z-frame, but a minimum of two frames is required to ensure unambiguity; therefore at least one term t_m(w1) remains.
- an improvement in latency is achieved if the time for updating the image processing output with the corrected EURZ data is lower than the image processing operation m for a frame. This can be typically assumed to be true.
- Improvement of latency can be achieved for example for all filter algorithms which require multiple passes on the input image per se.
- the exact improvement of latency depends on the number of frames used for generating an unambiguous z-Frame and on the filter algorithm itself (number of passes).
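For a feel for the numbers, here is a small worked example with hypothetical timings (i = 2 frames, a two-pass filter; all values are made up for illustration).

```python
# Hypothetical timings in milliseconds for i = 2 frames, two passes.
t_frame = 10.0        # t_Z1 = t_Z2: time to provide each frame
t_eurz = 2.0          # t_EURZ: combining the two frames
t_m = [4.0, 4.0]      # filter passes m(w1), m(w2)
t_u = 1.0             # update with the corrected EURZ data

t_direct = 2 * t_frame + t_eurz + sum(t_m)          # 30.0 ms
t_pipelined = 2 * t_frame + t_eurz + t_u + t_m[-1]  # 27.0 ms
# T_difference = t_m(w1) - t_u = 3.0 ms, matching the formula above.
print(t_direct - t_pipelined)
```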
- To apply the filter to the first captured frame, an update function must exist that is able to update the filter result with the depth data of the next captured frame. The concrete implementation of this function depends on the filter algorithm itself and is therefore not described in further detail herein.
- the method can be extended to multiple pipelines in which EURZ continuous output can be provided.
- the number of frames the EURZ continuous mode generates is equal to the number of input frames.
- the final z-Frame is still a function of two or more previously captured frames; therefore the generated range extended frame still has the same overall latency as in the EURZ block.
- a flow diagram 600 of an example method provided by the above described modular architecture is now explained with reference to FIG. 6 .
- the flow diagram starts at 602 with obtaining first information including depth information with a first range of unambiguity.
- a first image processing on the first information is performed to generate first image processed information.
- the first image processed information may for example include frame 504 A of FIG. 5 .
- second information is obtained, the second information including depth information with a second range of unambiguity.
- the first image processing and the obtaining of the second information may in some embodiments be performed at least partially in parallel.
- the first information may be a data frame including depth information of a scene which has been determined by using a first modulation frequency (such as frame Z1 of FIG. 5).
- the second information may be a data frame including depth information of the same scene which has been determined by using a second modulation frequency with a frequency value different from the first modulation frequency (such as frame Z2 of FIG. 5).
- the depth information may be a representation of phase information, including first phase information derived from reflected light modulated by a first modulation frequency and second phase information derived from reflected light modulated by a second modulation frequency different from the first modulation frequency.
- the obtaining of the first phase information may include measuring in a pixel array a phase of light modulated by a first modulation frequency.
- the obtaining of the second phase information may include measuring in the pixel array a phase of light modulated by a second modulation frequency different from the first modulation frequency.
- the first image processed information and the second information are used in a second image processing to generate second image processed information.
- the second image processed information may be for example the image processed frame 504 B.
- a further flow diagram 700 of an example method which may be provided by the above described modular architecture is now explained with reference to FIG. 7 .
- the flow diagram starts at 702 with providing of first 3D information at a first stage of a pipelined digital image processing.
- the first 3D information includes depth information with a first range of unambiguity.
- a first image processing is performed on the first information to generate first image processed information.
- second 3D information is provided at a second stage of the pipelined digital image processing.
- the second 3D information includes depth information with a second range of unambiguity.
- the first image processed information and the second information are used in a second image processing to generate second image processed information.
- the first and second 3D information may include depth information of a same scenery generated by using different modulation frequencies.
- the second image processing may in some embodiments use at least partially information obtained from the first image filtering processing. Furthermore, the second image processing may further update a digital filtering result by using depth information of the third information. The combining of the first phase information and the second phase information at least increases the depth range unambiguity of the third information when compared to a depth range unambiguity of the first phase information or a depth range unambiguity of the second phase information.
- the first and second image processing may in some embodiments include image processing of a same type with different parameter values.
- the first and second image filtering processing may include in some embodiments image filter processing of a different type.
- a structure of an object or at least a part of a structure of an object may be filtered out based on the first filtering processing.
- the structure or part of the structure filtered out by the first filtering processing may then be masked to be not applicable to the second image filtering processing.
- the first and second image filtering processing may correspond to different stages of a modular image filtering.
- a further flow diagram 800 of an example method provided by the above described modular architecture is now explained with reference to FIG. 8 .
- the flow diagram starts at 802 with providing a first digital image processing to first information corresponding to 3D information of a scene, the first information inherently having uncertainty of depth information in a predetermined depth range.
- second information corresponding to 3D information of the scene is generated, the second information having no inherent uncertainty with respect to depth information, or at least reduced inherent uncertainty of depth information compared to the first information.
- information derived from the first digital image processing is used for generating a modified 3D image of the scene.
- a 3D camera comprising a digital image processing circuit, the digital image processing circuit being capable of providing digital image filtering based on a first image filtering processing applied to first phase information and a second image filtering processing applied to second phase information.
- the first phase information is phase information of a first reflected optical signal modulated with a first modulation frequency
- the second phase information is phase information of a second reflected optical signal modulated with a second modulation frequency different from the first modulation frequency.
- FIG. 9 shows a further embodiment.
- FIG. 9 starts at 902 with capturing in an array of pixels first phase information and second phase information of modulated optical signals reflected by an object.
- the first phase information and the second phase information are combined to obtain third information.
- digital image filtering is performed based on a first image filtering processing applied to the first phase information and a second image filtering processing applied to the second phase information.
- while embodiments have been described with reference to first and second modulation frequencies, first and second phase information with first and second ranges of unambiguity, or depth information with first and second ranges of unambiguity, the above concept can be extended to any number of different modulation frequencies or any number of different ranges of unambiguity.
- other embodiments may use three or four or more different phase modulation frequencies or three or four or more ranges of unambiguity.
- a specific image processing may be applied in which the image processing parameters, such as a filter width, may be chosen and applied based on the respective modulation frequency or range of unambiguity. For example, if the filter parameters include different window sizes, the image processing may be applied in an order such that the window size increases with increasing range of unambiguity (i.e. with decreasing modulation frequency).
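A minimal sketch of such an ordering rule follows; the modulation frequencies and window sizes are illustrative assumptions only.

```python
C = 299_792_458.0  # speed of light in m/s

# Hypothetical modulation frequencies and filter window sizes.
frequencies_hz = [80e6, 60e6, 20e6]
window_sizes = [3, 9, 27]

# The window size grows with the range of unambiguity c/(2f), i.e.
# with decreasing modulation frequency, per the ordering above.
ranges = sorted(C / (2.0 * f) for f in frequencies_hz)
for r, w in zip(ranges, sorted(window_sizes)):
    print(f"range of unambiguity {r:6.2f} m -> window {w}x{w}")
```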
- a computer program element which, when executed on a computing machine, controls a modular image processing as described herein, for example with respect to FIG. 6, FIG. 7 or FIG. 8.
- inventive subject matter may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed.
- the terms "circuit" or "circuitry" used herein are to be interpreted in a sense not only including hardware but also software, firmware or any combinations thereof.
- data may be interpreted to include any form of data representation.
- information may in addition to any form of digital information also include other forms of representing information.
- "entity" or "unit" may in embodiments include any device, apparatus, circuits, hardware, software, firmware, chips or other semiconductors as well as logical units or physical implementations of protocol layers etc.
- "coupled" or "connected" may be interpreted in a broad sense not only covering direct but also indirect coupling.
- embodiments described in combination with specific entities may, in addition to an implementation in these entities, also include one or more implementations in one or more sub-entities or sub-divisions of said described entities.
- specific embodiments described herein to be implemented in a transmitter or receiver may be implemented in sub-entities such as a chip or a circuit provided in such an entity.
- a single step may include or may be broken into multiple substeps. Such substeps may be included and part of the disclosure of this single step unless explicitly excluded.
Description
T_pipelined = t_Z1 + t_Z2 + . . . + t_Zi + t_EURZ(Zi,Zi-1) + t_u + t_m(wi)

with t_Z1, t_Z2, . . . t_Zi corresponding to the times for providing frames Z1, Z2, . . . Zi, t_m(wi) being the time for image processing with parameter wi, t_EURZ(Zi,Zi-1) corresponding to the time for calculating the extended unambiguity range frame from frames Zi and Zi-1, and t_u corresponding to the time for updating the filter output after the final EURZ image is available. This can be compared to an approach in which the image processing is applied only when the final EURZ image is available. The overall time from receiving the first input frame for the EURZ calculation to finishing the filter operation can be defined here as

T_direct = t_Z1 + t_Z2 + . . . + t_Zi + t_EURZ(Zi,Zi-1) + t_m(w1) + t_m(w2) + . . . + t_m(wi)

The latency difference is

T_difference = T_direct - T_pipelined = t_m(w1) + t_m(w2) + . . . + t_m(wi-1) - t_u

i.e. the pipelined approach is faster whenever

t_u < t_m(w1) + t_m(w2) + . . . + t_m(wi-1)
Claims (23)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/875,313 US9329272B2 (en) | 2013-05-02 | 2013-05-02 | 3D camera and method of image processing 3D images |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/875,313 US9329272B2 (en) | 2013-05-02 | 2013-05-02 | 3D camera and method of image processing 3D images |
Publications (2)
Publication Number | Publication Date |
---|---|
US20140327741A1 US20140327741A1 (en) | 2014-11-06 |
US9329272B2 true US9329272B2 (en) | 2016-05-03 |
Family
ID=51841241
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/875,313 Active 2034-02-17 US9329272B2 (en) | 2013-05-02 | 2013-05-02 | 3D camera and method of image processing 3D images |
Country Status (1)
Country | Link |
---|---|
US (1) | US9329272B2 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4016981A3 (en) * | 2013-12-24 | 2022-09-21 | Sony Depthsensing Solutions | A time-of-flight camera system |
US9897699B2 (en) * | 2014-07-09 | 2018-02-20 | Massachusetts Institute Of Technology | Methods and apparatus for virtual sensor array |
US10425628B2 (en) | 2017-02-01 | 2019-09-24 | Microsoft Technology Licensing, Llc | Alternating frequency captures for time of flight depth sensing |
US10209360B2 (en) * | 2017-02-01 | 2019-02-19 | Microsoft Technology Licensing, Llc | Reduced phase sampling for high speed depth sensing |
JP7211005B2 (en) * | 2018-10-29 | 2023-01-24 | 富士通株式会社 | Terrain estimation program, terrain estimation method, and terrain estimation device |
CN116635748A (en) * | 2020-12-15 | 2023-08-22 | 索尼半导体解决方案公司 | Time-of-flight image sensor circuit and time-of-flight image sensor circuit control method |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080069553A1 (en) * | 2006-09-20 | 2008-03-20 | Jingqiang Li | Predictive focus value calculation for image capture devices |
US20100002912A1 (en) * | 2005-01-10 | 2010-01-07 | Solinsky James C | Facial feature evaluation based on eye location |
US20100183236A1 (en) * | 2009-01-21 | 2010-07-22 | Samsung Electronics Co., Ltd. | Method, medium, and apparatus of filtering depth noise using depth information |
US7791715B1 (en) * | 2006-10-02 | 2010-09-07 | Canesta, Inc. | Method and system for lossless dealiasing in time-of-flight (TOF) systems |
US20110069155A1 (en) * | 2009-09-18 | 2011-03-24 | Samsung Electronics Co., Ltd. | Apparatus and method for detecting motion |
US20110188028A1 (en) * | 2007-10-02 | 2011-08-04 | Microsoft Corporation | Methods and systems for hierarchical de-aliasing time-of-flight (tof) systems |
US20110292370A1 (en) * | 2009-05-29 | 2011-12-01 | Canesta, Inc. | Method and system to maximize space-time resolution in a Time-of-Flight (TOF) system |
US20110304841A1 (en) * | 2008-06-30 | 2011-12-15 | Microsoft Corporation | System architecture design for time-of- flight system having reduced differential pixel size, and time-of- flight systems so designed |
US8174539B1 (en) * | 2007-08-15 | 2012-05-08 | Adobe Systems Incorporated | Imprint for visualization and manufacturing |
US20120120073A1 (en) * | 2009-05-11 | 2012-05-17 | Universitat Zu Lubeck | Method for the Real-Time-Capable, Computer-Assisted Analysis of an Image Sequence Containing a Variable Pose |
US20130093852A1 (en) * | 2011-10-12 | 2013-04-18 | Board Of Trustees Of The University Of Arkansas | Portable robotic device |
-
2013
- 2013-05-02 US US13/875,313 patent/US9329272B2/en active Active
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100002912A1 (en) * | 2005-01-10 | 2010-01-07 | Solinsky James C | Facial feature evaluation based on eye location |
US20080069553A1 (en) * | 2006-09-20 | 2008-03-20 | Jingqiang Li | Predictive focus value calculation for image capture devices |
US7791715B1 (en) * | 2006-10-02 | 2010-09-07 | Canesta, Inc. | Method and system for lossless dealiasing in time-of-flight (TOF) systems |
US8174539B1 (en) * | 2007-08-15 | 2012-05-08 | Adobe Systems Incorporated | Imprint for visualization and manufacturing |
US20110188028A1 (en) * | 2007-10-02 | 2011-08-04 | Microsoft Corporation | Methods and systems for hierarchical de-aliasing time-of-flight (tof) systems |
US20110304841A1 (en) * | 2008-06-30 | 2011-12-15 | Microsoft Corporation | System architecture design for time-of- flight system having reduced differential pixel size, and time-of- flight systems so designed |
US20100183236A1 (en) * | 2009-01-21 | 2010-07-22 | Samsung Electronics Co., Ltd. | Method, medium, and apparatus of filtering depth noise using depth information |
US20120120073A1 (en) * | 2009-05-11 | 2012-05-17 | Universitat Zu Lubeck | Method for the Real-Time-Capable, Computer-Assisted Analysis of an Image Sequence Containing a Variable Pose |
US20110292370A1 (en) * | 2009-05-29 | 2011-12-01 | Canesta, Inc. | Method and system to maximize space-time resolution in a Time-of-Flight (TOF) system |
US20110069155A1 (en) * | 2009-09-18 | 2011-03-24 | Samsung Electronics Co., Ltd. | Apparatus and method for detecting motion |
US20130093852A1 (en) * | 2011-10-12 | 2013-04-18 | Board Of Trustees Of The University Of Arkansas | Portable robotic device |
Non-Patent Citations (2)
Title |
---|
Keqi Zhang et al., "A Progressive Morphological Filter for Removing Nonground Measurements From Airborne LIDAR Data," IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 4, Apr. 2003, pp. 872-882. |
Gerwin Fleischmann, "Computer Aided Wafer Inspection: Integration of a CCD-Camera module and investigation of image processing methods for supporting human operator's work," Diploma Thesis, University of Salzburg, Jun. 12, 2007, pp. 1-118. |
Also Published As
Publication number | Publication date |
---|---|
US20140327741A1 (en) | 2014-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9329272B2 (en) | 3D camera and method of image processing 3D images | |
US11215700B2 (en) | Method and system for real-time motion artifact handling and noise removal for ToF sensor images | |
US10462447B1 (en) | Electronic system including image processing unit for reconstructing 3D surfaces and iterative triangulation method | |
JP6285958B2 (en) | Stereo support with rolling shutter | |
US8331652B2 (en) | Simultaneous localization and map building method and medium for moving robot | |
KR101758058B1 (en) | Apparatus and method for estimating camera motion using depth information, augmented reality system | |
CN108765480B (en) | Deep processing equipment | |
US8774551B2 (en) | Image processing apparatus and image processing method for reducing noise | |
JP5810314B2 (en) | Stereo image processing apparatus and stereo image processing method | |
CN102169577A (en) | Method and apparatus for determining misalignment | |
JP2015206798A (en) | distance calculation device | |
CN103905746B (en) | Method and device for localization and superposition of sub-pixel-level image offset and video device | |
EP3663799A1 (en) | Apparatuses and methods for determining depth motion relative to a time-of-flight camera in a scene sensed by the time-of-flight camera | |
JP4403477B2 (en) | Image processing apparatus and image processing method | |
CN105683707A (en) | Image photographing device and phase difference detecting method | |
JP2008309637A (en) | Obstruction measuring method, obstruction measuring apparatus, and obstruction measuring system | |
Gilliam et al. | Local all-pass filters for optical flow estimation | |
JP5501084B2 (en) | Planar area detection apparatus and stereo camera system | |
Lee et al. | Motion blur-free time-of-flight range sensor | |
JP2008216127A (en) | Distance image generation device, distance image generation method, and program | |
JP5098369B2 (en) | Distance image generating apparatus, distance image generating method and program | |
JP2009092551A (en) | Method, apparatus and system for measuring obstacle | |
CN109934768B (en) | Sub-pixel displacement image acquisition method based on registration mode | |
WO2020095549A1 (en) | Imaging device | |
Zhang et al. | High quality depth maps from stereo matching and ToF camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INFINEON TECHNOLOGIES AG, GERMANY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FLEISCHMANN, GERWIN;REEL/FRAME:037871/0115 Effective date: 20130419 |
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |