
US20150279075A1 - Recording animation of rigid objects using a single 3d scanner - Google Patents


Info

Publication number
US20150279075A1
US20150279075A1
Authority
US
United States
Prior art keywords
recording
reference model
detection
analyzing
newtonian
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/671,313
Inventor
Stephen Brooks Myers
Jacob Abraham Kuttothara
Steven Donald Paddock
John Moore Wathen
Andrew Slatton
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Knockout Concepts LLC
Original Assignee
Knockout Concepts LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Knockout Concepts LLC filed Critical Knockout Concepts LLC
Priority to US14/671,313
Assigned to KNOCKOUT CONCEPTS, LLC. Assignment of assignors interest (see document for details). Assignors: MEYERS, STEPHEN B
Assigned to KNOCKOUT CONCEPTS, LLC. Assignment of assignors interest (see document for details). Assignors: KUTTOTHARA, JACOB A, PADDOCK, STEVEN D, SLATTON, ANDREW, WATHEN, JOHN M
Publication of US20150279075A1
Status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/26 Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/15 Correlation function computation including computation of convolution operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 7/0081
    • G06T 7/2046
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/98 Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V 10/993 Evaluation of the quality of the acquired pattern
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20144
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/12 Acquisition of 3D measurements of objects



Abstract

This application teaches one or more methods for recording animation. Such a method may include determining a reference model of an object by separating a 3D image of the object from a 3D image of its environment. The method may also include analyzing the reference model using one or more feature detection and localization algorithms. The object may then be recorded in motion, and the recording may be analyzed using feature detection and localization algorithms. Features of the recording may be matched to features of the reference model, wherein a match between the reference model and a frame of the recording comprises a pose of the object. A video animation may be created by recording a time series of poses of the object.

Description

    I. BACKGROUND OF THE INVENTION
  • A. Field of Invention
  • Some embodiments may generally relate to the field of extracting elements of 3D images in motion.
  • B. Description of the Related Art
  • Various video recording methodologies are known in the art, as are various methods of computer analysis of video. However, current recording analysis technologies tend to confine users to merely recognizing features in image data. Furthermore, objects in recorded digital video cannot be manipulated in the manner of a 3D CAD drawing. What is missing is a methodology for separating an object from its background in a 3D reconstructed model of a static scene, then using video of the same object in motion to obtain further structural detail of the object, and creating a 3D model of the object that can be reoriented, manipulated, and moved independently of the image or video from which it was created.
  • Some embodiments of the present invention may provide one or more benefits or advantages over the prior art.
  • II. SUMMARY OF THE INVENTION
  • Some embodiments may relate to a method for recording animation comprising the steps of: determining a reference model of an object by separating a three-dimensional model of the object from its environment in a 3D reconstruction of a static scene; analyzing the reference model using a feature detection and localization algorithm; recording movement of the object; analyzing the recording using feature detection and localization algorithms; matching features of the recording to features of the reference model, wherein a match between the reference model and a frame of the recording comprises a pose of the object; and recording a time series of poses of the object, the time series comprising an animation.
  • Embodiments may further comprise the step of saving the reference model in association with the animation on a computer readable medium.
  • According to some embodiments, data for determining the reference model is obtained with a three-dimensional scanning device.
  • According to some embodiments, the step of separating the three-dimensional model of the object from its environment is conducted by the three-dimensional scanning device.
  • According to some embodiments, the step of analyzing the reference model is conducted by the three-dimensional scanning device.
  • According to some embodiments, the data for determining the reference model of the object, and the data from recording movement of the object, are obtained with the same three-dimensional scanning device.
  • According to some embodiments, the feature detection and localization algorithm for analyzing the reference model is selected from one or more of: RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface.
  • According to some embodiments, the feature detection and localization algorithm for analyzing the recording is selected from one or more of: RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface.
  • According to some embodiments, a quantity of digital computations of a microprocessor is reduced by applying a Kalman filter to the step of analyzing the recording using feature detection and localization algorithms.
  • Embodiments may also relate to a method for recording animation comprising the steps of: determining a reference model of an object by separating a three-dimensional reconstruction of the object from its environment in a 3D reconstruction of a static scene; analyzing the reference model using a feature detection and localization algorithm selected from one or more of: RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface; recording movement of the object; analyzing the recording using feature detection and localization algorithms selected from one or more of: RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface, wherein a quantity of digital computations of a microprocessor is reduced by applying a Kalman filter; matching features of the recording to features of the reference model, wherein a match between the reference model and a frame of the recording comprises a pose of the object; and recording a time series of poses of the object, the time series comprising an animation.
  • Embodiments may also relate to a method for recording animation comprising the steps of: determining a reference model of an object by separating a three-dimensional reconstruction of the object from its environment in a 3D reconstruction of a static scene; analyzing the reference model using a feature detection and localization algorithm selected from one or more of: RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface; recording movement of the object; analyzing the recording using feature detection and localization algorithms selected from one or more of: RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface, wherein a quantity of digital computations of a microprocessor is reduced by applying a Kalman filter; matching features of the recording to features of the reference model, wherein a match between the reference model and a frame of the recording comprises a pose of the object; and recording a time series of poses of the object, the time series comprising an animation; wherein the step of separating the three-dimensional image of the object from the three-dimensional image of the environment of the object is conducted by the three-dimensional scanning device, and wherein the data for determining the reference model of the object, and the data from recording movement of the object, are obtained with the same three-dimensional scanning device.
  • Other benefits and advantages will become apparent to those skilled in the art to which it pertains upon reading and understanding of the following detailed specification.
  • III. BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention may take physical form in certain parts and arrangement of parts, embodiments of which will be described in detail in this specification and illustrated in the accompanying drawings which form a part hereof and wherein:
  • FIG. 1 is a process according to an embodiment of the invention;
  • FIG. 2 illustrates capturing 3D reconstructed model of a static object according to one embodiment;
  • FIG. 3 illustrates separating an element of a 3D model from its background; and
  • FIG. 4 illustrates obtaining additional detail of a scanned and separated object by recording it in motion.
  • IV. DETAILED DESCRIPTION OF THE INVENTION
  • A method for recording animation of a three-dimensional real world object includes separating a 3D model of the object from a 3D model of its surroundings. Many known 3D scanners and cameras are capable of obtaining the data necessary for methods according to embodiments of this invention. This model of the 3D object, separated from the model of its environment, may be used as a reference model. The reference model may be further analyzed using a feature detection and localization algorithm to identify various features of the reference model that may be used for comparison with live feed from the 3D scanning device. Movement of the object, manually induced or otherwise, may be recorded using the 3D scanning device. Once again, the features of the recording of the object in motion may be analyzed utilizing similar feature detection and localization algorithms. The features of the recording can be compared with the features of the reference model, and when matches are found, said matches may comprise poses of the object for rendering an animation. Finally, the poses may be recombined in any order to formulate an animation of the object. The combination of a time series of poses arranged in any order and an arbitrary background allows one to create animations of the object that differ from the motion observed in the previously recorded video. As used herein, the term pose carries its generally accepted meaning in the 3D imaging arts.
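The overall flow described above can be sketched in code. The Python sketch below uses a deliberately simple stand-in "feature" (the centroid and mean radius of a point cloud) and a translation-only pose; the names detect_features, match, and record_animation are hypothetical, since the disclosure leaves the choice of detector open.

```python
import numpy as np

# Stand-in "feature": centroid plus mean radius of a point cloud. The
# disclosure leaves the detector open (RANSAC, ICP, least squares, ...),
# so detect_features / match / record_animation are hypothetical names.
def detect_features(points):
    centroid = points.mean(axis=0)
    radius = np.linalg.norm(points - centroid, axis=1).mean()
    return {"centroid": centroid, "radius": radius}

def match(ref_feat, frame_feat, tol=0.05):
    """A frame matches when its pose-invariant feature (mean radius)
    agrees with the reference; the match yields the frame's pose,
    simplified here to a translation-only pose."""
    if abs(ref_feat["radius"] - frame_feat["radius"]) > tol:
        return None
    return frame_feat["centroid"] - ref_feat["centroid"]

def record_animation(reference, frames):
    ref_feat = detect_features(reference)
    poses = []
    for points in frames:
        pose = match(ref_feat, detect_features(points))
        if pose is not None:
            poses.append(pose)  # the time series of poses is the animation
    return poses

# Synthetic recording: the object translates 0.1 units per frame along x.
rng = np.random.default_rng(3)
reference = rng.normal(size=(200, 3))
frames = [reference + np.array([0.1 * k, 0.0, 0.0]) for k in range(5)]
animation = record_animation(reference, frames)
```

With these synthetic frames every frame matches the reference, and the recovered poses reproduce the per-frame translation, which is the essence of the claimed pipeline.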
  • Referring now to the drawings, wherein the showings are for purposes of illustrating embodiments of the invention only and not for purposes of limiting the same, FIG. 1 depicts a flow diagram 100 of an illustrative embodiment for recording animation of a real world three-dimensional object. In a first step (not shown), 3D model data of an object may be captured by any arbitrary 3D digital imaging device and/or may be retrieved from storage in a database. A reference model of the object may be obtained by separating the object from its environment 110 according to known mathematical methods. In one embodiment, the act of separating the model of the object from its environment may be achieved using a 3D scanning device configured with such capabilities; however, it is contemplated that any 3D digital scanning device may be used to carry out the methods taught herein.
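One concrete way to separate an object from its environment in a static reconstruction is to remove the dominant support plane with RANSAC, which the text lists among the candidate algorithms. The sketch below illustrates this on a synthetic scene; `ransac_plane` is a hypothetical helper under assumed thresholds, not the patent's actual implementation.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.02, rng=None):
    """Fit a dominant plane (normal . p + d = 0) with RANSAC and return
    (normal, d, inlier_mask). Points off the plane are the object."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_plane, best_mask, best_count = None, None, -1
    for _ in range(n_iters):
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        normal = normal / norm
        d = -normal @ p0
        mask = np.abs(points @ normal + d) < threshold
        count = int(mask.sum())
        if count > best_count:    # keep the plane with the most inliers
            best_plane, best_mask, best_count = (normal, d), mask, count
    return best_plane[0], best_plane[1], best_mask

# Synthetic scene: a flat "floor" plus a small cluster hovering above it.
rng = np.random.default_rng(1)
floor = np.c_[rng.uniform(-1, 1, (500, 2)), np.zeros(500)]
obj = rng.uniform(0.2, 0.4, (100, 3))
scene = np.vstack([floor, obj])
normal, d, inliers = ransac_plane(scene)
object_points = scene[~inliers]   # candidate reference-model points
```

Here the 500 floor points are identified as plane inliers, leaving exactly the 100 object points as the separated reference-model candidate.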
  • The reference model may be analyzed using feature detection and localization algorithms 112 in order to enable later comparison of the features and related data with live feed from the scanning device. The feature detection and localization algorithm used for analyzing the reference model may be chosen from many processes and algorithms now known or developed in the future. Some such feature detection and localization algorithms include RANSAC (Random Sample Consensus), iterative closest point, least squares methods, Newtonian methods, quasi-Newtonian methods, expectation-maximization methods, detection of principal curvatures, and detection of distance to a medial surface. The methodology and corresponding algorithms of all of these processes are known in the art and incorporated by reference herein. In an illustrative embodiment, during the step of analyzing the recording using a feature detection and localization algorithm, the quantity of digital computations of a microprocessor may be reduced by applying a Kalman filter. In this context a Kalman filter allows embodiments to accurately predict the next position and/or orientation of the object, which enables embodiments to apply feature detection calculations to smaller regions of the 3D data. Kalman filter methodology is known in the art and is incorporated by reference herein.
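As an illustration of the Kalman-filter prediction step, the following minimal constant-velocity filter tracks one coordinate of the object's position. The matrices F, H, Q, and R are assumed values, since the text does not specify a motion model; the point is that after convergence the one-step prediction bounds the region in which feature detection must run.

```python
import numpy as np

# Constant-velocity model over one coordinate of the object's position.
# State x = [position, velocity]; measurements are positions.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # we observe position only
Q = np.eye(2) * 1e-4                    # assumed process noise
R = np.array([[1e-2]])                  # assumed measurement noise

def kalman_step(x, P, z):
    # Predict the next state, then correct it with the measurement z.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new, x_pred

x, P = np.array([0.0, 0.0]), np.eye(2)
for t in range(1, 21):                       # object moves 0.1 units/frame
    z = np.array([0.1 * t])                  # noiseless for the sketch
    x, P, x_pred = kalman_step(x, P, z)
# x_pred is the one-step prediction: feature detection can be restricted
# to a small window around it instead of the whole 3D frame.
```

Restricting the detector to a window around `x_pred` is what reduces the quantity of digital computations, as the paragraph above describes.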
  • Movement of the real world three-dimensional object may be manually induced and recorded using a 3D scanning device 114. Features of the object in the recording may be analyzed using similar feature detection and localization algorithms 116. The feature detection and localization algorithm used for analyzing the recording may be chosen from many processes and algorithms now known or developed in the future. Some such feature detection and localization algorithms include RANSAC (Random Sample Consensus), iterative closest point, least squares methods, Newtonian methods, quasi-Newtonian methods, expectation-maximization methods, detection of principal curvatures, and detection of distance to a medial surface. The methodology and corresponding algorithms of all of these processes are incorporated by reference herein. In an illustrative embodiment, during the step of analyzing the recording using a feature detection and localization algorithm, the quantity of digital computations of a microprocessor may be reduced by applying a Kalman filter.
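Once features of a frame are matched to features of the reference model, the frame's pose can be recovered as the least-squares rigid transform between the matched 3D points, one way to realize the "least squares method" listed above. The Kabsch-style sketch below assumes exact feature correspondences; `fit_rigid_pose` is a hypothetical name.

```python
import numpy as np

def fit_rigid_pose(ref_pts, frame_pts):
    """Least-squares rigid transform (R, t) with frame_pts ~ R @ ref_pts + t,
    computed by the Kabsch algorithm on matched feature points."""
    ref_c = ref_pts.mean(axis=0)
    frm_c = frame_pts.mean(axis=0)
    H = (ref_pts - ref_c).T @ (frame_pts - frm_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = frm_c - R @ ref_c
    return R, t

# Reference features and a frame where the object rotated 30 degrees
# about z and translated; correspondences are assumed known.
rng = np.random.default_rng(2)
ref = rng.normal(size=(50, 3))
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
frame = ref @ Rz.T + np.array([0.5, -0.2, 0.1])
R_est, t_est = fit_rigid_pose(ref, frame)           # the frame's pose
```

With exact correspondences the recovered (R, t) equals the applied motion; the pair is exactly what the text calls a pose of the object for that frame.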
  • Once the features of the recording are obtained, they may be compared with the features of the reference model 118. A match between the features of the recording and the features of the reference model comprises a pose of the object. The feature comparison may be made continuously until multiple matches result in multiple poses 120 being obtained. In an alternate embodiment, the matching of the features to obtain poses is done in real time while the recording is being made. A time series of the various poses may be recorded, in any order, the time series comprising an animation of the object 122. In an illustrative embodiment, the reference model initially obtained may be saved in association with the animation on any computer readable medium.
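The pose obtained from a feature match can be sketched with a least squares rigid alignment. The example below is an editorial illustration, not taken from the disclosure: it uses the Kabsch/SVD method, one instance of the least squares methods named above, to recover the rotation and translation mapping matched reference features onto the features observed in one recorded frame. A time series of such (R, t) pairs is one concrete form the recorded animation could take.

```python
import numpy as np

def fit_pose(ref_pts, obs_pts):
    """Least-squares rigid pose (Kabsch/SVD) mapping matched reference
    features onto the observed features from one recorded frame."""
    ref_c, obs_c = ref_pts.mean(axis=0), obs_pts.mean(axis=0)
    H = (ref_pts - ref_c).T @ (obs_pts - obs_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = obs_c - R @ ref_c
    return R, t                                    # pose: obs ≈ R @ ref + t

# Example: reference features rotated 90 degrees about z and shifted in x.
ref = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
obs = ref @ Rz.T + np.array([2.0, 0.0, 0.0])
R, t = fit_pose(ref, obs)                          # recovers Rz and the shift
```

Repeating this fit for each frame of the recording yields the sequence of poses whose time series constitutes the animation.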
  • FIG. 2 depicts an illustrative embodiment 200 wherein a 3D scanner 210 is used to obtain an image 216 of a real world object 212. The scanner 210 may collect images of the static object 212 from all directions and orientations 214 to ensure a complete modeling 216 of the object 212. A reconstruction of this image data may be used to obtain a reference model of the real world object 212. In another embodiment, images of the static object 212 may be collected from less than all vantage points, and missing data may be filled in by correlating areas of missing data to areas of the object in a later-collected video image showing the object in motion.
  • FIG. 3 depicts an illustrative embodiment 300 wherein the model 216 of the object is obtained on a 3D data processing device 314 for further processing. After the model is captured 216, a data processing device may be used to separate the model of the object 312 from the model of its environment 310. This separation of the object from its environment may then be used as a reference model of the object, or may be used to produce a reference model of the object through further data processing.
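Separating the object model from its environment can be sketched with a RANSAC plane fit, one of the algorithms listed in the disclosure: the dominant plane (e.g., the floor or tabletop) is treated as environment, and the remaining points as the object. This is an editorial illustration only; the threshold, iteration count, and synthetic scene are assumptions.

```python
import numpy as np

def separate_object(points, n_iter=200, thresh=0.02, seed=0):
    """Split a static-scene point cloud into a dominant plane (environment)
    and the remaining points (the object) using a RANSAC plane fit."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)             # candidate plane normal
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                               # degenerate (collinear) sample
        n = n / norm
        dist = np.abs((points - p0) @ n)           # point-to-plane distances
        inliers = dist < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers], points[best_inliers]   # object, environment

# Synthetic scene: a flat floor at z = 0 plus a small cube resting on it.
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(-1, 1, (500, 2)), np.zeros(500)])
cube = rng.uniform(0.1, 0.3, (100, 3))             # all z >= 0.1, off the plane
obj, env = separate_object(np.vstack([floor, cube]))
```

The points left after removing the dominant plane would then serve as (or be refined into) the reference model of the object.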
  • FIG. 4 depicts an illustrative embodiment 400 wherein the movement of the real world object 410 is recorded 412 using a 3D scanning device 210. The features of the recording 412 are analyzed using feature detection and localization algorithms and the features of the recording are compared with the features of the reference model. A match between the features of the recording 412 and the features of the reference model comprises a pose of the three-dimensional object. A continuous matching of the features results in multiple poses and a time series of the various poses may be recorded comprising an animation of the object. In one embodiment, the reference model may be saved in association with the animation on a computer readable medium, device storage or server (including cloud server).
  • It will be apparent to those skilled in the art that the above methods and apparatuses may be changed or modified without departing from the general scope of the invention. The invention is intended to include all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
  • Having thus described the invention, it is now claimed:

Claims (13)

I/we claim:
1. A method for recording animation comprising the steps of:
determining a reference model of an object by separating a three-dimensional model of the object from its environment in a 3D reconstruction of a static scene;
analyzing the reference model using a feature detection and localization algorithm;
recording movement of the object;
analyzing the recording using feature detection and localization algorithms;
matching features of the recording to features of the reference model, wherein a match between the reference model and a frame of the recording comprises a pose of the object; and
recording a time series of poses of the object, the time series comprising an animation.
2. The method of claim 1 further comprising the step of saving the reference model in association with the animation on a computer readable medium.
3. The method of claim 1, wherein data for determining the reference model is obtained with a three-dimensional scanning device.
4. The method of claim 3, wherein the step of separating the three-dimensional model of the object from the three-dimensional model of the environment of the object is conducted by the three-dimensional scanning device.
5. The method of claim 3, wherein the step of analyzing the reference model is conducted by the three-dimensional scanning device.
6. The method of claim 3, wherein the data for determining the reference model of the object, and for recording movement of the object, are obtained with the same three-dimensional scanning device.
7. The method of claim 1, wherein the feature detection and localization algorithm for analyzing the reference model is selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface.
8. The method of claim 1, wherein the feature detection and localization algorithm for analyzing the recording is selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface.
9. The method of claim 1, wherein a quantity of digital computations of a microprocessor is reduced by applying a Kalman filter to the step of analyzing the recording using feature detection and localization algorithms.
10. A method for recording animation comprising the steps of:
determining a reference model of an object by separating a three-dimensional model of the object from its environment in a 3D reconstruction of a static scene;
analyzing the reference model using a feature detection and localization algorithm selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface;
recording movement of the object;
analyzing the recording using feature detection and localization algorithms selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface, wherein a quantity of digital computations of a microprocessor is reduced by applying a Kalman filter;
matching features of the recording to features of the reference model, wherein a match between the reference model and a frame of the recording comprises a pose of the object; and
recording a time series of poses of the object, the time series comprising an animation.
11. A method for recording animation comprising the steps of:
determining a reference model of an object by separating a three-dimensional model of the object from its environment in a 3D reconstruction of a static scene;
analyzing the reference model using a feature detection and localization algorithm selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface;
recording movement of the object;
analyzing the recording using feature detection and localization algorithms selected from one or more of RANSAC, iterative closest point, a least squares method, a Newtonian method, a quasi-Newtonian method, an expectation-maximization method, detection of principal curvatures, or detection of distance to a medial surface, wherein a quantity of digital computations of a microprocessor is reduced by applying a Kalman filter;
matching features of the recording to features of the reference model, wherein a match between the reference model and a frame of the recording comprises a pose of the object; and
recording a time series of poses of the object, the time series comprising an animation;
wherein the step of separating the three-dimensional model of the object from the three-dimensional model of the environment of the object is conducted by a three-dimensional scanning device, and wherein the data for determining the reference model of the object, and for recording movement of the object, are obtained with the same three-dimensional scanning device.
12. The method of claim 11, wherein the step of separating the three-dimensional model of the object from the three-dimensional model of the environment of the object is conducted by the three-dimensional scanning device.
13. The method of claim 12, wherein the step of analyzing the reference model is conducted by the three-dimensional scanning device.
US14/671,313 2014-03-27 2015-03-27 Recording animation of rigid objects using a single 3d scanner Abandoned US20150279075A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/671,313 US20150279075A1 (en) 2014-03-27 2015-03-27 Recording animation of rigid objects using a single 3d scanner

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461971036P 2014-03-27 2014-03-27
US14/671,313 US20150279075A1 (en) 2014-03-27 2015-03-27 Recording animation of rigid objects using a single 3d scanner

Publications (1)

Publication Number Publication Date
US20150279075A1 true US20150279075A1 (en) 2015-10-01

Family

ID=54189850

Family Applications (5)

Application Number Title Priority Date Filing Date
US14/671,313 Abandoned US20150279075A1 (en) 2014-03-27 2015-03-27 Recording animation of rigid objects using a single 3d scanner
US14/671,749 Abandoned US20150279121A1 (en) 2014-03-27 2015-03-27 Active Point Cloud Modeling
US14/672,048 Active 2035-11-14 US9841277B2 (en) 2014-03-27 2015-03-27 Graphical feedback during 3D scanning operations for obtaining optimal scan resolution
US14/671,373 Abandoned US20150278155A1 (en) 2014-03-27 2015-03-27 Identifying objects using a 3d scanning device, images, and 3d models
US14/671,420 Abandoned US20150279087A1 (en) 2014-03-27 2015-03-27 3d data to 2d and isometric views for layout and creation of documents

Family Applications After (4)

Application Number Title Priority Date Filing Date
US14/671,749 Abandoned US20150279121A1 (en) 2014-03-27 2015-03-27 Active Point Cloud Modeling
US14/672,048 Active 2035-11-14 US9841277B2 (en) 2014-03-27 2015-03-27 Graphical feedback during 3D scanning operations for obtaining optimal scan resolution
US14/671,373 Abandoned US20150278155A1 (en) 2014-03-27 2015-03-27 Identifying objects using a 3d scanning device, images, and 3d models
US14/671,420 Abandoned US20150279087A1 (en) 2014-03-27 2015-03-27 3d data to 2d and isometric views for layout and creation of documents

Country Status (1)

Country Link
US (5) US20150279075A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160125638A1 (en) * 2014-11-04 2016-05-05 Dassault Systemes Automated Texturing Mapping and Animation from Images
US20210074052A1 (en) * 2019-09-09 2021-03-11 Samsung Electronics Co., Ltd. Three-dimensional (3d) rendering method and apparatus
US11138306B2 (en) * 2016-03-14 2021-10-05 Amazon Technologies, Inc. Physics-based CAPTCHA

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469446A (en) * 2014-09-05 2016-04-06 富泰华工业(深圳)有限公司 Point cloud mesh simplification system and method
EP3040946B1 (en) * 2014-12-30 2019-11-13 Dassault Systèmes Viewpoint selection in the rendering of a set of objects
US9866815B2 (en) * 2015-01-05 2018-01-09 Qualcomm Incorporated 3D object segmentation
JP2017041022A (en) * 2015-08-18 2017-02-23 キヤノン株式会社 Information processor, information processing method and program
CN105551078A (en) * 2015-12-02 2016-05-04 北京建筑大学 Method and system of virtual imaging of broken cultural relics
JP6869023B2 (en) * 2015-12-30 2021-05-12 ダッソー システムズDassault Systemes 3D to 2D reimaging for exploration
US10127333B2 (en) 2015-12-30 2018-11-13 Dassault Systemes Embedded frequency based search and 3D graphical data processing
US10049479B2 (en) 2015-12-30 2018-08-14 Dassault Systemes Density based graphical mapping
US10360438B2 (en) 2015-12-30 2019-07-23 Dassault Systemes 3D to 2D reimaging for search
CN106524920A (en) * 2016-10-25 2017-03-22 上海建科工程咨询有限公司 Application of field measurement in construction project based on three-dimensional laser scanning
US10999602B2 (en) 2016-12-23 2021-05-04 Apple Inc. Sphere projected motion estimation/compensation and mode decision
CN106650700B (en) * 2016-12-30 2020-12-01 上海联影医疗科技股份有限公司 Die body, method and device for measuring system matrix
KR102534170B1 (en) 2017-01-06 2023-05-17 나이키 이노베이트 씨.브이. System, platform and method for personalized shopping using an automated shopping assistant
US11259046B2 (en) 2017-02-15 2022-02-22 Apple Inc. Processing of equirectangular object data to compensate for distortion by spherical projections
US10924747B2 (en) 2017-02-27 2021-02-16 Apple Inc. Video coding techniques for multi-view video
US11093752B2 (en) 2017-06-02 2021-08-17 Apple Inc. Object tracking in multi-view video
US20190005709A1 (en) * 2017-06-30 2019-01-03 Apple Inc. Techniques for Correction of Visual Artifacts in Multi-View Images
US10754242B2 (en) 2017-06-30 2020-08-25 Apple Inc. Adaptive resolution and projection format in multi-direction video
CN107677221B (en) * 2017-10-25 2024-03-19 贵州大学 Plant leaf movement angle measuring method and device
US10762595B2 (en) * 2017-11-08 2020-09-01 Steelcase, Inc. Designated region projection printing of spatial pattern for 3D object on flat sheet in determined orientation
US10699404B1 (en) 2017-11-22 2020-06-30 State Farm Mutual Automobile Insurance Company Guided vehicle capture for virtual model generation
EP3496388A1 (en) 2017-12-05 2019-06-12 Thomson Licensing A method and apparatus for encoding a point cloud representing three-dimensional objects
CN108921045B (en) * 2018-06-11 2021-08-03 佛山科学技术学院 A method and device for spatial feature extraction and matching of three-dimensional model
US10600230B2 (en) * 2018-08-10 2020-03-24 Sheng-Yen Lin Mesh rendering system, mesh rendering method and non-transitory computer readable medium
CN112381919B (en) 2019-07-29 2022-09-27 浙江商汤科技开发有限公司 Information processing method, positioning method and device, electronic equipment and storage medium
GB2586838B (en) * 2019-09-05 2022-07-27 Sony Interactive Entertainment Inc Free-viewpoint method and system
CN110610045A (en) * 2019-09-16 2019-12-24 杭州群核信息技术有限公司 Intelligent cloud processing system and method for generating three views by selecting cabinet and wardrobe
US11074708B1 (en) * 2020-01-06 2021-07-27 Hand Held Products, Inc. Dark parcel dimensioning
WO2021161865A1 (en) * 2020-02-13 2021-08-19 三菱電機株式会社 Dimension creation device, dimension creation method, and program
CN111443091B (en) * 2020-04-08 2023-07-25 中国电力科学研究院有限公司 Defect Judgment Method for Cable Line Tunnel Engineering
CN111814691B (en) * 2020-07-10 2022-01-21 广东电网有限责任公司 Space expansion display method and device for transmission tower image
CN116817771B (en) * 2023-08-28 2023-11-17 南京航空航天大学 Aerospace part coating thickness measurement method based on cylindrical voxel characteristics

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100027840A1 (en) * 2006-07-20 2010-02-04 The Regents Of The University Of California System and method for bullet tracking and shooter localization
US20140219550A1 (en) * 2011-05-13 2014-08-07 Liberovision Ag Silhouette-based pose estimation
US8896607B1 (en) * 2009-05-29 2014-11-25 Two Pic Mc Llc Inverse kinematics for rigged deformable characters

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003021532A2 (en) 2001-09-06 2003-03-13 Koninklijke Philips Electronics N.V. Method and apparatus for segmentation of an object
US8108929B2 (en) * 2004-10-19 2012-01-31 Reflex Systems, LLC Method and system for detecting intrusive anomalous use of a software system using multiple detection algorithms
US7860301B2 (en) 2005-02-11 2010-12-28 Macdonald Dettwiler And Associates Inc. 3D imaging system
US7768656B2 (en) 2007-08-28 2010-08-03 Artec Group, Inc. System and method for three-dimensional measurement of the shape of material objects
KR20090047172A (en) * 2007-11-07 2009-05-12 삼성디지털이미징 주식회사 Digital camera control method for test shooting
US8255100B2 (en) * 2008-02-27 2012-08-28 The Boeing Company Data-driven anomaly detection to anticipate flight deck effects
DE102008021558A1 (en) * 2008-04-30 2009-11-12 Advanced Micro Devices, Inc., Sunnyvale Process and system for semiconductor process control and monitoring using PCA models of reduced size
US8199988B2 (en) * 2008-05-16 2012-06-12 Geodigm Corporation Method and apparatus for combining 3D dental scans with other 3D data sets
EP2297705B1 (en) * 2008-06-30 2012-08-15 Thomson Licensing Method for the real-time composition of a video
US8750446B2 (en) * 2008-08-01 2014-06-10 Broadcom Corporation OFDM frame synchronisation method and system
US8817019B2 (en) * 2009-07-31 2014-08-26 Analogic Corporation Two-dimensional colored projection image from three-dimensional image data
GB0913930D0 (en) * 2009-08-07 2009-09-16 Ucl Business Plc Apparatus and method for registering two medical images
US8085279B2 (en) * 2009-10-30 2011-12-27 Synopsys, Inc. Drawing an image with transparent regions on top of another image without using an alpha channel
EP2677938B1 (en) * 2011-02-22 2019-09-18 Midmark Corporation Space carving in 3d data acquisition
US8724880B2 (en) * 2011-06-29 2014-05-13 Kabushiki Kaisha Toshiba Ultrasonic diagnostic apparatus and medical image processing apparatus
EP2780826B1 (en) * 2011-11-15 2020-08-12 Trimble Inc. Browser-based collaborative development of a 3d model
US20150153476A1 (en) * 2012-01-12 2015-06-04 Schlumberger Technology Corporation Method for constrained history matching coupled with optimization
US9208550B2 (en) 2012-08-15 2015-12-08 Fuji Xerox Co., Ltd. Smart document capture based on estimated scanned-image quality
DE102013203667B4 (en) * 2013-03-04 2024-02-22 Adidas Ag Cabin for trying out one or more items of clothing
WO2015006791A1 (en) 2013-07-18 2015-01-22 A.Tron3D Gmbh Combining depth-maps from different acquisition methods
US20150070468A1 (en) 2013-09-10 2015-03-12 Faro Technologies, Inc. Use of a three-dimensional imager's point cloud data to set the scale for photogrammetry

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160125638A1 (en) * 2014-11-04 2016-05-05 Dassault Systemes Automated Texturing Mapping and Animation from Images
US11138306B2 (en) * 2016-03-14 2021-10-05 Amazon Technologies, Inc. Physics-based CAPTCHA
US20210074052A1 (en) * 2019-09-09 2021-03-11 Samsung Electronics Co., Ltd. Three-dimensional (3d) rendering method and apparatus
US12198245B2 (en) * 2019-09-09 2025-01-14 Samsung Electronics Co., Ltd. Three-dimensional (3D) rendering method and apparatus

Also Published As

Publication number Publication date
US9841277B2 (en) 2017-12-12
US20150279121A1 (en) 2015-10-01
US20150278155A1 (en) 2015-10-01
US20150276392A1 (en) 2015-10-01
US20150279087A1 (en) 2015-10-01

Similar Documents

Publication Publication Date Title
US20150279075A1 (en) Recording animation of rigid objects using a single 3d scanner
US11051000B2 (en) Method for calibrating cameras with non-overlapping views
KR101333871B1 (en) Method and arrangement for multi-camera calibration
US9710912B2 (en) Method and apparatus for obtaining 3D face model using portable camera
US20170193693A1 (en) Systems and methods for generating time discrete 3d scenes
US20160071318A1 (en) Real-Time Dynamic Three-Dimensional Adaptive Object Recognition and Model Reconstruction
CN108108748A (en) A kind of information processing method and electronic equipment
US11620730B2 (en) Method for merging multiple images and post-processing of panorama
JP2009134693A5 (en)
CN111383252B (en) Multi-camera target tracking method, system, device and storage medium
JP5936561B2 (en) Object classification based on appearance and context in images
US10346709B2 (en) Object detecting method and object detecting apparatus
CN110443228B (en) Pedestrian matching method and device, electronic equipment and storage medium
CN116012432A (en) Stereoscopic panoramic image generation method and device and computer equipment
JP6132996B1 (en) Image processing apparatus, image processing method, and image processing program
US9098746B2 (en) Building texture extracting apparatus and method thereof
JP2009301242A (en) Head candidate extraction method, head candidate extraction device, head candidate extraction program and recording medium recording the program
CN111783497B (en) Method, apparatus and computer readable storage medium for determining characteristics of objects in video
CN112396654A (en) Method and device for determining pose of tracking object in image tracking process
KR102067423B1 (en) Three-Dimensional Restoration Cloud Point Creation Method Using GPU Accelerated Computing
Halperin et al. Clear Skies Ahead: Towards Real‐Time Automatic Sky Replacement in Video
KR101718309B1 (en) The method of auto stitching and panoramic image genertation using color histogram
Yang et al. Design flow of motion based single camera 3D mapping
Chand et al. Implementation of Panoramic Image Stitching using Python
Pollok et al. Computer vision meets visual analytics: Enabling 4D crime scene investigation from image and video data

Legal Events

Date Code Title Description
AS Assignment

Owner name: KNOCKOUT CONCEPTS, LLC, OHIO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KUTTOTHARA, JACOB A;WATHEN, JOHN M;PADDOCK, STEVEN D;AND OTHERS;REEL/FRAME:035776/0299

Effective date: 20150528

Owner name: KNOCKOUT CONCEPTS, LLC, OHIO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MEYERS, STEPHEN B;REEL/FRAME:035776/0218

Effective date: 20150528

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION
