US20050232513A1 - System and method for aligning images - Google Patents
System and method for aligning images
- Publication number
- US20050232513A1 (application US11/133,544)
- Authority
- US
- United States
- Prior art keywords
- image
- reference points
- positioning
- template
- geometrical object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Definitions
- the present invention relates in general to the alignment of images. More specifically, the present invention relates to a system or method for aligning two or more images (collectively “alignment system” or simply the “system”).
- Another possible application of image alignment is for quality assurance measurements. For example, radiation oncology often requires image treatment plans to be compared to quality assurance films to determine if the treatment plan is actually being executed. There are also numerous non-medical applications for which image alignment can be very useful.
- the invention is a system or method for aligning images (the “system”).
- a definition subsystem including a first image, a second image, one or more target reference points, one or more template reference points, and a geometrical object.
- the definition subsystem identifies one or more target reference points associated with the first image and one or more template reference points associated with the second image by providing a geometrical object for positioning the first image in relation to the second image.
- a combination subsystem is configured to generate an aligned image from the first image and second image.
- An interface subsystem may be used to facilitate interactions between users and the system.
- the alignment system can be applied to images involving two, three, or more dimensions.
- an Affine transform heuristic is performed using various target and template points.
- the Affine transform can eliminate shift, rotational, and magnification differences between different images.
- different types of combination heuristics may be used.
- FIG. 1 is an environmental block diagram illustrating an example of an image alignment system accessible by a user.
- FIG. 2A is a subsystem-level block diagram illustrating an example of a definition subsystem and a combination subsystem.
- FIG. 2B is a subsystem-level block diagram illustrating an example of a definition subsystem, a combination subsystem, and an interface subsystem.
- FIG. 2C is a subsystem-level block diagram illustrating an example of a definition subsystem, a combination subsystem, an interface subsystem, and a detection subsystem.
- FIG. 3 is a flow diagram illustrating an example of how the system receives input and generates output.
- FIG. 4 is a flow diagram illustrating an example of facilitating a positioning of images and generating an aligned image according to the positioned images.
- FIG. 5 is a flow diagram illustrating an example of steps that an image alignment system or method may execute to generate an aligned image.
- FIG. 6 is a flow diagram illustrating an example of steps that a user of an image alignment system may perform to generate an aligned image.
- FIG. 7A is a diagram illustrating one example of target reference points associated with a first image.
- FIG. 7B is a diagram illustrating one example of a geometrical object connecting target reference points associated with a first image.
- FIG. 7C is a diagram illustrating an example of a geometrical object and a centroid associated with that geometrical object.
- FIG. 7D is a diagram illustrating a geometrical object and various template reference points positioned in relation to a second image.
- the present invention relates generally to methods and systems for aligning images (collectively an “image alignment system” or “the system”) by producing an aligned image from a number of images and a relationship between various reference points associated with those images.
- a geometrical object can be formed from selected reference points in one image, copied or transferred to a second image, and positioned within that second image to establish a relationship between reference points.
- FIG. 1 is a block diagram illustrating an example of some of the elements that can be incorporated into an image alignment system 20 .
- FIG. 1 shows a human being to represent a user 22 , a computer terminal to represent an access device 24 , a GUI to represent an interface 26 , and a computer tower to represent a computer 28 .
- a user 22 can access the system 20 through an access device 24 .
- the user 22 is a human being.
- the user 22 may be an automated agent, a robot, a neural network, an expert system, an artificial intelligence device, or some other form of intelligence technology (collectively “intelligence technology”).
- the system 20 can be implemented in many different ways, giving users 22 a potentially wide variety of different ways to configure the processing performed by the system 20 .
- the access device 24 can be any device that is either: (a) capable of performing the programming logic of the system 20; or (b) capable of communicating with a device that is capable of performing the programming logic of the system 20.
- Access devices 24 can include desktop computers, laptop computers, mainframe computers, mini-computers, programmable logic devices, embedded computers, hardware devices capable of performing the processing required by the system 20 , cell phones, satellite pagers, personal data assistants (“PDAs”), and a wide range of future devices that may not yet currently exist.
- the access device 24 can also include various peripherals associated with the device such as a terminal, keyboard, mouse, screen, printer, input device, output device, or any other apparatus that can relay data or commands between a user 22 and an interface 26 .
- the user 22 uses the access device 24 to interact with an interface 26 .
- the interface 26 is typically a web page that is viewable from a browser on the access device 24.
- the interface 26 is likely to be influenced by the operating system and other characteristics of the access device 24 .
- Users 22 can view system 20 outputs through the interface 26 , and users 22 can also provide system 20 inputs by interacting with the interface 26 .
- the interface 26 can be described as a combination of the various information technology layers relevant to communications between various software applications and the user 22.
- the interface 26 can be the aggregate characteristics of a graphical user interface (“GUI”), an intranet, an extranet, the Internet, a local area network (“LAN”), a wide area network (“WAN”), a software application, another type of network, and any other factor relating to the relaying of data or commands between an access device 24 and a computer 28, or between a user 22 and a computer 28.
- a computer 28 is any device or combination of devices that allows the processing of the system 20 to be performed.
- the computer 28 may be a general purpose computer capable of running a wide variety of different software applications or a specialized device limited to particular functions.
- the computer 28 is the same device as the access device 24 .
- the computer 28 is a network of computers 28 accessed by the access device 24.
- the system 20 can incorporate a wide variety of different information technology architectures.
- the computer 28 is able to receive, incorporate, store, and process information that may relate to operation of the image alignment system 20 .
- the computer 28 may include any type, number, form, or configuration of processors, system memory, computer-readable mediums, peripheral devices, and operating systems.
- the computer 28 is a server and the access device 24 is a client device accessing the server.
- Images to be aligned by the system 20 are examples of processing elements existing as representations within the computer 28 .
- An image may include various reference points, and those reference points can exist as representations within the computer 28 .
- a geometrical object 35 of reference point(s) that is used to align a first image 30 with respect to a second image 32 also exists as a representation within the computer 28.
- An image is potentially any visual representation that can potentially be aligned with one or more other visual representations.
- images are captured through the use of a light-based sensor, such as a camera.
- images can be generated from non-light-based sensors or other sources of information and data.
- An ultrasound image is an example of an image that is generated from a non-light based sensor.
- the images processed by the system 20 are preferably digital images.
- the images are initially captured in a digital format and are passed unmodified to the system 20 .
- digital images may be generated from analog images.
- Various enhancement heuristics may be applied to an image before it is aligned by the system 20 , but the system 20 does not require the performance of such pre-alignment enhancement processing.
- target reference points 34 are associated with the first image 30 (the “target image”) and template reference points 36 are associated with a second image 32 (the “template image”). Any number of target images can be aligned with respect to a single template image.
- the target reference points 34 and template reference points 36 are locations in relation to an image, and the system 20 uses the locations of the target reference points 34 and the template reference points 36 to determine a relationship so that an aligned image 38 can be generated. Locations of the template reference points 36 may be determined by positioning the geometrical object 35 within the second image 32.
- the geometrical object 35 can be used to facilitate a generation of an aligned image 38 .
- a geometrical object 35 is transmitted or copied from a first image 30 to a second image 32 . In alternative embodiments, the geometrical object 35 may be reproduced in the second image 32 in some other way.
- the geometrical object 35 is the configuration of target reference point(s) 34 within the target image 30 that are used to align the target image 30 with the template image 32 .
- the geometrical object 35 is made up of at least three points.
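- As a concrete, purely illustrative sketch of this data model, the geometrical object 35 can be represented as an array of vertices with a derived centroid. The class name, the (row, col) coordinate convention, and the use of Python with NumPy are assumptions made for these sketches, not requirements of the patent.

```python
# Purely illustrative data model for the geometrical object 35.
from dataclasses import dataclass

import numpy as np


@dataclass
class GeometricalObject:
    """A configuration of target reference points; preferably three or more."""
    vertices: np.ndarray  # shape (n, 2): one (row, col) point per vertex

    def __post_init__(self):
        if len(self.vertices) < 3:
            raise ValueError("a geometrical object needs at least three points")

    @property
    def centroid(self) -> np.ndarray:
        # Mean of the vertices; a simple way to indicate the object's center.
        return self.vertices.mean(axis=0)


# Example: build the object from three user-selected target reference points.
obj = GeometricalObject(np.array([[10.0, 12.0], [40.0, 15.0], [35.0, 50.0]]))
print(obj.centroid)
```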
- the system 20 can be implemented in the form of various subsystems. A wide variety of different subsystem configurations can be incorporated into the system 20 .
- FIGS. 2A, 2B, and 2C illustrate different subsystem-level configurations of the system 20.
- FIG. 2A shows a system 20 made up of two subsystems: a definition subsystem 40 and a combination subsystem 42 .
- FIG. 2B illustrates a system 20 made up of three subsystems: the definition subsystem 40 , the combination subsystem 42 , and an interface subsystem 44 .
- FIG. 2C displays an association of four subsystems: the definition subsystem 40, the combination subsystem 42, the interface subsystem 44, and a detection subsystem 45.
- Interaction between subsystems 40 - 44 can include an exchange of data, algorithms, instructions, commands, locations of points in relation to images, or any other communication helpful for implementation of the system 20 .
- the definition subsystem 40 allows the system 20 to define the relationship(s) between the first image 30 and the second image 32 so that the combination subsystem 42 can create the aligned image 38 from the first image 30 and the second image 32 .
- the processing elements of the definition subsystem 40 can include the first image 30 , the second image 32 , the target reference points 34 , the template reference points 36 , and the geometrical object 35 .
- the target reference points 34 are associated with the first image 30
- the template reference points 36 are associated with the second image 32 .
- the target reference points 34 may be selected through an interface subsystem 44 or by any other method readable to the definition subsystem 40 .
- the definition subsystem 40 is configured to define or create the geometrical object 35 .
- the definition subsystem 40 generates the geometrical object 35 by connecting at least a subset of the target reference points 34.
- the definition subsystem 40 may further identify a centroid of the geometrical object 35 .
- the definition subsystem 40 may impose a constraint upon one or more target reference points 34 . Constraints may be purely user defined on a case-by-case basis, or may be created by the system 20 through the implementation of user-defined processing rules. By imposing the constraint upon one or more target reference points 34 , the definition subsystem 40 can ensure that the target reference points 34 are adequate for generation of the geometrical object 35 .
- the definition subsystem 40 can impose any number, combination, or type of constraint. These constraints may include a requirement that a minimum number of target reference points 34 be identified, that a minimum number of target reference points 34 not be co-linear, or that target reference points 34 be within or outside of a specified area of an image.
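- A minimal sketch of how such constraints might be checked is shown below; the function name, the co-linearity tolerance, and the default minimum of three points are illustrative assumptions.

```python
# Hedged sketch of constraint checks on candidate target reference points 34.
import numpy as np


def check_target_points(points: np.ndarray, image_shape: tuple,
                        min_points: int = 3) -> list:
    """Return a list of constraint violations; an empty list means valid."""
    problems = []
    if len(points) < min_points:
        problems.append(f"need at least {min_points} target reference points")
    # Co-linearity: if the centered points have rank < 2, they lie on a line.
    if len(points) >= 2:
        centered = points - points.mean(axis=0)
        if np.linalg.matrix_rank(centered, tol=1e-6) < 2:
            problems.append("target reference points must not be co-linear")
    # Area constraint: every point must fall inside the image bounds.
    h, w = image_shape[:2]
    if np.any((points[:, 0] < 0) | (points[:, 0] >= h) |
              (points[:, 1] < 0) | (points[:, 1] >= w)):
        problems.append("target reference points must lie within the image")
    return problems
```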
- the definition subsystem 40 generates the geometrical object 35 and coordinates the geometrical object 35 with the second image 32 , which generation and coordination can be accomplished by any method known to a person skilled in the art, including by transferring or copying the geometrical object 35 to the second image 32 .
- the definition subsystem 40 can provide a plurality of controls for positioning the geometrical object 35 within the second image 32 .
- the controls may include any one of or any combination of a control for shifting the geometrical object 35 along a dimensional axis, a control for rotating the geometrical object 35, a control for changing a magnification of the geometrical object 35, a coarse position control, a fine position control, or any other control helpful for a positioning of the geometrical object 35 in relation to the second image 32.
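- The sketch below illustrates one way such controls could act on the object's vertices, composing shift, rotation, and magnification into a single move. Rotating and magnifying about the centroid is an assumption; the patent does not fix a pivot point.

```python
# Illustrative shift / rotate / magnify controls for the geometrical object.
import numpy as np


def position_object(vertices: np.ndarray, d_row: float = 0.0,
                    d_col: float = 0.0, angle_deg: float = 0.0,
                    magnification: float = 1.0) -> np.ndarray:
    """Rotate and magnify the vertices about their centroid, then shift."""
    centroid = vertices.mean(axis=0)
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    moved = (vertices - centroid) @ rot.T * magnification + centroid
    return moved + np.array([d_row, d_col])
```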
- the definition subsystem 40 can include a thumbnail image of the geometrical object 35 .
- the definition subsystem 40 can identify a plurality of positions of the geometrical object 35 in relation to the second image 32 . Those positions may include a gross position and a fine position.
- the thumbnail image may be used to identify gross or fine positions of the geometrical object 35 in relation to the second image 32 .
- the definition subsystem 40 can identify a plurality of positions of the geometrical object in a substantially simultaneous manner.
- the definition subsystem 40 adjusts the geometrical object 35 within the second image 32 .
- the definition subsystem 40 may adjust a positioning of the geometrical object 35 within the second image 32 .
- the geometrical object 35 can be used to define template reference points 36 .
- vertices of the geometrical object 35 correspond with template reference points 36 when the geometrical object 35 is located within or about the second image 32 .
- a positioning of the geometrical object 35 in relation to the second image 32 positions the vertices or other relevant points of the geometrical object 35 so as to define the template reference points 36 .
- the definition subsystem 40 can provide for an accuracy metric related to at least one of the template reference points 36 .
- the accuracy metric can identify a measurement of accuracy of a positioning of at least one of the template reference points 36 in relation to an estimated or predicted position of reference points within the second image 32 .
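- The patent does not prescribe a formula, but one plausible accuracy metric, sketched below, is the per-point and root-mean-square distance between the positioned vertices and the estimated positions of the corresponding template reference points 36.

```python
# Hedged sketch of an accuracy metric for template reference point placement.
import numpy as np


def accuracy_metric(positioned: np.ndarray, estimated: np.ndarray):
    """Return (per-point distances, RMS error) in pixel units."""
    distances = np.linalg.norm(positioned - estimated, axis=1)
    rms = float(np.sqrt(np.mean(distances ** 2)))
    return distances, rms
```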
- the alignment system 20 can be applied to images involving two, three, or more dimensions.
- an Affine transform heuristic is performed using various target reference points 34 and template reference points 36.
- the Affine transform can eliminate shift, rotational, and magnification differences between different images.
- different types of relationship-related heuristics may be used by the definition subsystem 40 and/or the combination subsystem 42 .
- Other examples of heuristics known in the art that relate to potential relationships between images and/or points include a linear conformal heuristic, a projective heuristic, a polynomial heuristic, a piecewise linear heuristic, and a locally weighted mean heuristic.
- the various relationship-related heuristics allow the system 20 to compare images and points that would otherwise not be in a format suitable for the establishment of a relationship between the various images and/or points.
- the relationship-related heuristics such as the Affine transform heuristic are used to “compare apples to apples and oranges to oranges.”
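- As an illustration, a common way to realize an Affine transform heuristic is a least-squares fit of a 2x3 matrix mapping each target reference point 34 onto its template reference point 36. The sketch below assumes at least three non-co-linear point pairs and is not taken from the patent itself.

```python
# Least-squares affine fit between corresponding reference points (a sketch).
import numpy as np


def fit_affine(target_pts: np.ndarray, template_pts: np.ndarray) -> np.ndarray:
    """Solve template ~= [R | t] @ [row, col, 1] for a 2x3 affine matrix."""
    design = np.hstack([target_pts, np.ones((len(target_pts), 1))])  # (n, 3)
    coeffs, *_ = np.linalg.lstsq(design, template_pts, rcond=None)   # (3, 2)
    return coeffs.T                                                  # (2, 3)
```

- Because the fitted matrix absorbs translation, rotation, and scale (as well as shear), applying it can remove the shift, rotational, and magnification differences described above.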
- the combination subsystem 42 is responsible for creating the aligned image 38 from the images and relationships maintained in the definition subsystem 40 .
- the combination subsystem 42 includes the aligned image 38 .
- the combination subsystem 42 is configured to generate the aligned image 38 from the first image 30 , the second image 32 , at least one of the target reference points 34 , and at least one of the template reference points 36 .
- the generation of the aligned image 38 by the combination subsystem 42 can be accomplished in a number of ways.
- the combination subsystem 42 may access the target reference points 34 and the template reference points 36 from the definition subsystem 40.
- the combination subsystem 42 can generate an alignment calculation or determine a relationship between at least one of the target reference points 34 and at least one of the template reference points 36 .
- the combination subsystem 42 can use an alignment calculation or relationship to align the first image 30 and the second image 32 .
- the combination subsystem 42 uses locations of the target reference points 34 and the template reference points 36 to generate the aligned image 38 .
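- A hedged sketch of that generation step follows: it resamples the first image 30 into the space of the second image 32 with the fitted matrix. SciPy's ndimage module, (row, col) point order, a single-channel 2-D image, and the fit_affine() helper from the earlier sketch are all assumptions.

```python
# Sketch of a combination step: warp the target image into template space.
import numpy as np
from scipy import ndimage


def combine(target_image: np.ndarray, affine: np.ndarray,
            output_shape: tuple) -> np.ndarray:
    """Produce an aligned image 38 occupying the template image space."""
    R, t = affine[:, :2], affine[:, 2]
    # affine_transform maps output (template) coordinates back to input
    # (target) coordinates, so the fitted mapping must be inverted.
    R_inv = np.linalg.inv(R)
    return ndimage.affine_transform(target_image, R_inv, offset=-R_inv @ t,
                                    output_shape=output_shape, order=1)
```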
- a detection subsystem 45 can be configured to detect distortions, or other indications of a problem, relating to an aligned image 38 .
- the detection subsystem 45 also allows a user 22 to check for distortions in an aligned image 38 . Once a distortion has been detected, the detection subsystem 45 identifies the extent and nature of the distortion.
- the user 22 can use data provided by the detection subsystem 45 to check for a misalignment of a device or system that generated the first image 30 or the second image 32 .
- the detection subsystem 45 can be configured by a user 22 through the use of the interface subsystem 44 .
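- The sketch below shows one plausible coding of such a detection step: map the target reference points 34 through the fitted transform, compare them against the desired template locations, and summarize the residuals. The tolerance and the report fields are illustrative assumptions.

```python
# Hedged sketch of distortion detection on an aligned image.
import numpy as np


def detect_distortion(mapped_points: np.ndarray, desired_points: np.ndarray,
                      tolerance_px: float = 2.0) -> dict:
    """Summarize the extent and nature of any residual misalignment."""
    residuals = desired_points - mapped_points
    magnitudes = np.linalg.norm(residuals, axis=1)
    return {
        "max_error_px": float(magnitudes.max()),
        "mean_error_px": float(magnitudes.mean()),
        "distorted": bool(magnitudes.max() > tolerance_px),
        # A consistent residual direction across points can hint that the
        # device that generated one of the images is itself misaligned.
        "mean_residual": residuals.mean(axis=0).tolist(),
    }
```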
- FIG. 3 is a flow diagram illustrating an example of how the system receives input and generates output.
- a computer program 50 residing on a computer-readable medium receives user input 46 through an input interface 48 and provides output 54 to the user 22 through an output interface 52 .
- the computer program 50 includes the target reference points 34, the geometrical shape 35 (the geometrical object 35 of FIGS. 1-2), the first image 30, the second image 32, the template reference points 36, a third image, and the interface 26.
- the target reference points 34 are associated with the first image 30 .
- the computer program 50 can generate a geometrical object 35 or shape in a number of ways, including by connecting at least a subset of the target reference points 34 .
- the geometrical shape 35 can be any number or combination of any shape, including but not limited to a segment, line, ellipse, arc, polygon, and triangle.
- the input 46 may include a constraint imposed upon the target reference points 34 or the geometrical shape 35 by the computer program 50 .
- the computer program 50 ensures that the target reference points 34 are adequate for generation of the geometrical shape 35 .
- the system 20 can impose any number, combination, or type of constraint, including a requirement that a minimum number of target reference points 34 be identified, that a minimum number of target reference points 34 not be co-linear, or that target reference points 34 be within or without an area.
- the computer program 50 requires more than four target reference points 34 .
- the computer program 50 may identify a centroid of the geometrical shape 35 .
- the second image 32 can be configured to include the geometrical shape 35 .
- the geometrical shape 35 is generated by the computer program 50 within the second image 32 .
- the computer program 50 can accomplish a generation of the geometrical shape 35 within the second image 32 in a number of ways. For example, the computer program 50 may transfer or copy the geometrical shape 35 from one image to another.
- the computer program 50 provides for identifying the template reference points 36 or locations of the template reference points 36 in relation to the second image 32 .
- the template reference points 36 can be identified by a positioning of the geometrical shape 35 in relation to a second image 32 , which positioning is provided for by the computer program 50 .
- the computer program 50 provides for a number of controls for positioning the geometrical shape 35 within the second image 32 .
- the manipulation of the controls is a form of input 46 .
- the controls may include any one of or any combination of a shift control, a rotation control, a magnification control, a coarse position control, a fine position control, or any other control helpful for a positioning of the geometrical shape 35 in relation to a second image 32.
- the controls can function in a number of modes, including a coarse mode and a fine mode.
- the computer program 50 provides for positioning the geometrical shape 35 by shifting the geometrical shape 35 along a dimensional axis, rotating the geometrical shape 35 , and changing a magnification of the geometrical shape 35 .
- a positioning of the geometrical shape 35 can include a coarse adjustment and a fine adjustment.
- the computer program 50 is capable of identifying a plurality of positions of the geometrical shape 35 in relation to the second image 32, including a gross position and a fine position of the geometrical shape 35 in relation to the second image 32. This identification can be performed in a substantially simultaneous manner.
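- One simple, purely illustrative way to implement coarse and fine modes is to scale the same directional control input by a mode-dependent step size; the step values below are assumptions, as the patent specifies none.

```python
# Illustrative coarse/fine positioning modes for the geometrical shape 35.
COARSE_STEP_PX = 10.0  # assumed step sizes; the patent specifies no values
FINE_STEP_PX = 0.5


def nudge(position: tuple, direction: tuple, mode: str = "coarse") -> tuple:
    """Shift a (row, col) position one step in the given unit direction."""
    step = COARSE_STEP_PX if mode == "coarse" else FINE_STEP_PX
    return (position[0] + step * direction[0],
            position[1] + step * direction[1])
```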
- a thumbnail image of an area adjacent to a vertex of the geometrical shape 35 can be provided by the computer program 50 .
- the computer program 50 can provide for an accuracy metric related to at least one of the template reference points 36 .
- the accuracy metric is a form of output 54 .
- the accuracy metric can identify a measurement of accuracy of a positioning of at least one of the template reference points 36 in relation to an estimated or predicted position of reference points within the second image 32 .
- the third image (the aligned image 38 ) is created from the first image 30 , the second image 32 , and a relationship between the target reference points 34 and the template reference points 36 .
- the creation of the third image by the computer program 50 can be accomplished in a number of ways.
- the computer program 50 can generate an alignment calculation or determine a relationship between at least one of the target reference points 34 and at least one of the template reference points 36 .
- the computer program 50 can use an alignment calculation or relationship to align the first image 30 and the second image 32 .
- the computer program 50 uses locations of the target reference points 34 and the template reference points 36 to generate the third image.
- the computer program 50 can be configured to detect distortions of the third image. Once a distortion has been detected, the computer program 50 can identify the extent and nature of the distortion. A user 22 can use data generated by the computer program 50 to check for a misalignment of a device or system that generated the first image 30 or the second image 32 .
- the output 54 of the computer program 50 can include various distortion metrics, misalignment metrics, and other forms of error metrics (collectively “accuracy metrics”).
- the interface 26 of the computer program 50 is configured to receive input.
- the interface 26 can include an input interface 48 and an output interface 52 .
- the input can include but is not limited to an instruction for defining the target reference points 34 and a command for positioning the geometrical shape 35 in relation to the second image 32 .
- the computer program 50 can be configured to execute other operations disclosed herein or known to a person skilled in the art that are relevant to the present invention.
- FIG. 4 is a flow diagram illustrating an example of facilitating a positioning of images and generating an aligned image according to the positioned images.
- a relationship is defined between the various images to be aligned.
- the system 20 facilitates the positioning of the images in accordance with the previously defined relationship. For example, the template image 32 is positioned in relation to the target image 30 and the target image 30 is positioned in relation to the template image 32.
- the system generates the aligned image 38 in accordance with the positioning performed at 57 .
- the system 20 can perform the three steps identified above in a wide number of different ways.
- the positioning of the images can be facilitated by providing controls for the positioning of the template image in relation to the object image.
- FIG. 5 is a flow diagram illustrating an example of steps that an image alignment system 20 may execute to generate the aligned image 38 .
- the system 20 receives input for defining the target reference points 34 associated with a first image 30. Once the input 46 is received, or as it is received, the system 20 can then, at 62, generate the geometrical object 35.
- the input 46 may include a command.
- the geometrical object 35 can be generated in a variety of ways, such as by connecting the target reference points 34 . In the preferred embodiment, the system 20 may be configured to require that at least four target reference points 34 be connected in generating the geometrical object 35 .
- the geometrical object 35 can take any form or shape that connects the target reference points 34 , and each target reference point 34 is a vertex or other defining feature of the geometrical object 35 . In one category of embodiments, the geometrical object 35 is a polygon.
- at 64, the system 20 imposes and checks a constraint against the target reference points 34. If, at 66, the target reference points 34 do not meet constraints imposed by the system 20, the system 20 at 68 prompts and waits for input changing or adding to definitions of the target reference points 34. Once additional reference point data is received, the system 20 again generates a geometrical object 35 at 62 and checks constraints against the target reference points 34 at 64. The system 20 may repeat these steps until the target reference points 34 satisfy the constraints. Any type of constraint can be imposed upon the target reference points 34, including requiring enough target reference points 34 to define a particular form of geometrical object 35. For example, the system 20 may require that at least four target reference points 34 are defined. If more than two target reference points 34 are co-linear, the system 20 may require that additional target reference points 34 be defined. The system 20 may use the geometrical object 35 to impose constraints upon the target reference points 34.
- the system 20 generates a geometrical object 35 within the first image 30 and regenerates the geometrical object 35 in the second image 32 space at 70 .
- the geometrical object 35 can be generated in the second image 32 space in a number of ways, including transferring or copying the geometrical object 35 from the first image 30 to the second image 32 .
- the geometrical object 35 can be represented by a set of connected points, a solid object, a semi-transparent object, a transparent object, or any other object that can be used to represent a geometrical object 35 . Any such representation can be displayed by the system 20 .
- the system 20 identifies the template reference points 36 based on a placement of the geometrical object 35 in relation to the second image 32 .
- one method of identifying the template reference points 36 is to provide controls at 71 for positioning the geometrical object 35 in relation to the second image 32.
- a variety of controls can be made available, including one of or a combination of controls for shifting the geometrical object 35 up, down, left, or right in relation to the second image 32 , rotating the geometrical object 35 in relation to the second image 32 , changing the magnification or size of the geometrical object 35 in relation to the second image 32 , moving the geometrical object 35 through multiple dimensions, switching between coarse and fine positioning of the geometrical object 35 , or any other control that can be used to adjust the geometrical object 35 in relation to the second image 32 .
- a command can be received as an input 46 allowing for the positioning of the geometrical object 35 by at least one of rotating the geometrical object 35 , adjusting a magnification of the geometrical object 35 , and shifting the geometrical object 35 along a dimensional axis.
- a command may allow for coarse and fine adjustments of the geometrical object 35 .
- the system 20 provides a thumbnail image to the interface 26 for displaying an area proximate to at least one of the template reference points 36 .
- the thumbnail image can be configured to allow for a substantially simultaneous display of fine positioning detail and coarse positioning detail, for example, by providing both a view of thumbnail images and a larger view of the geometrical object 35 in relation to the second image 32 to the user 22 for simultaneous viewing.
- the system 20 may provide for an accuracy metric or an accuracy measurement detail for either a composite of the template reference points 36 or individually for at least one or more of the individual template reference points 36 .
- the system 20 may provide accuracy metrics by calculating a number of accuracy metrics.
- Accuracy metrics facilitate an optimal positioning of the geometrical object 35 within the second image 32 .
- the system 20 receives input commands from an interface or from the user 22 for positioning the geometrical object 35 at 72 in relation to the second image 32 .
- the system 20 adjusts a positioning of the geometrical object 35 at 74 within the second image 32 . This adjustment can be based upon the accuracy metric.
- the system 20 may use a computer implemented process, such as a refinement heuristic, or any other image alignment tool for adjusting a placement of the geometrical object 35 in relation to the second image 32 .
- the locations of the template reference points 36 can be determined in other ways.
- the user 22 may define the template reference points 36 by pointing and clicking on locations within the second image 32 , or the template reference points 36 can be predefined.
- the system 20 can determine a relationship at 78 between the target reference points 34 and the template reference points 36 .
- Such a relationship can be a mathematical relationship and can be determined in any of a number of ways.
- the system 20 at 80 generates the aligned image 38 from the first image 30 and the second image 32 .
- the generation occurs by the system producing the aligned image 38 from the first image 30 , the second image 32 , and a relationship between at least one of the target reference points 34 in the first image 30 and at least one of the template reference points 36 in the second image 32 .
- the system 20 can use an alignment calculation or a computer implemented combination heuristic to generate the aligned image 38 . Some such heuristics are known in the prior art.
- the system 20 analyzes the degree and nature of any misalignment between locations of the vertices of the geometrical object 35 and defined locations of the template reference points 36 to reveal information about the degree and nature of any misalignment of an image generating device or system. Analyzing distortions allows the system 20 or the user 22 to analyze the alignment status of an image generation device.
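- Tying the FIG. 5 steps together, the following end-to-end sketch reuses the illustrative helpers from the earlier sketches (check_target_points, fit_affine, combine, detect_distortion); it is one plausible reading of the flow, not the patent's implementation.

```python
# End-to-end sketch of the FIG. 5 flow, reusing the earlier helper sketches.
import numpy as np


def align_images(target_image, template_image, target_pts, template_pts):
    # Steps 60-68: validate the target reference points or re-prompt the user.
    problems = check_target_points(target_pts, target_image.shape)
    if problems:
        raise ValueError("; ".join(problems))
    # Step 78: determine the relationship between the two point sets.
    affine = fit_affine(target_pts, template_pts)
    # Step 80: generate the aligned image in the template image space.
    aligned = combine(target_image, affine, template_image.shape[:2])
    # Distortion check: see where the object's vertices actually landed.
    mapped = target_pts @ affine[:, :2].T + affine[:, 2]
    report = detect_distortion(mapped, template_pts)
    return aligned, report
```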
- FIG. 6 is a flow diagram illustrating an example of steps that a user 22 of an image alignment system 20 can perform through an access device 24 and an interface 26 to generate an aligned image 38 .
- the user 22 selects or inputs images for alignment at 84.
- the user 22 can provide images to the system 20 in any form recognizable by the system 20 , including digital representations of images.
- the user 22 at 86 defines the target reference points 34 of a first image 30 .
- the target reference points 34 can be defined by pointing and clicking on locations within the first image 30 , by importing or selecting predefined target reference points 34 , or by any other way understandable to the system 20 .
- the user 22 of the system 20 at 88 can initiate generation of the geometrical object 35 .
- the geometrical object 35 can be initiated in a number of ways, including defining the target reference points 34, defining a set number of the target reference points 34 that satisfy constraints, submitting a specific instruction to the system 20 to generate the geometrical object 35, or any other means by which the user 22 may signal the system 20 to generate the geometrical object 35.
- the system 20 can select an appropriate type of geometrical object 35 to generate, or the user 22 may select a type of geometrical object 35 to be generated. In one embodiment, the system 20 generates a geometrical object 35 by connecting the target reference points 34 .
- a user determines a centroid 104 of a geometrical object 35 .
- the system 20 can determine and indicate the centroid 104 of the geometrical object 35 .
- a determination of the centroid 104 is helpful for eliminating or at least mitigating errors that can occur in the image alignment system 20 .
- the user 22 can verify that the centroid 104 is near the center of a critical area of the first image 30. If the system 20 or the user 22 of the system 20 determines that the centroid 104 of the geometrical object 35 is not as near to a critical area of the first image 30 as desired, the user 22 can redefine the target reference points 34.
- the user 22 positions the geometrical object 35 within the second image 32 space.
- the user 22 can use controls provided by the system 20 or that are a part of the system 20 to position the geometrical object 35 .
- the user 22 can shift the geometrical object 35 up, down, left, or right in relation to the second image 32 , rotate the geometrical object 35 in relation to a second image 32 , change the magnification or size of the geometrical object 35 in relation to the second image 32 , move the geometrical object 35 through multiple dimensions, switch between coarse and fine positioning of the geometrical object 35 , or execute any other control that can be used to adjust the geometrical object 35 in relation to the second image 32 .
- the user 22 may use a thumbnail view or an accuracy metric to position the geometrical object 35 .
- the user 22 of the system 20 initiates alignment of the first image 30 and the second image 32 .
- the user 22 may signal the system 20 to transfer the geometrical object 35 in any way recognizable by the system 20 .
- One such way is to send an instruction for alignment to the system 20 via the access device 24 or the interface 26 .
- upon receipt of an alignment signal, the system 20 generates the aligned image 38 from the first image 30 and the second image 32.
- the user 22 of the system 20 checks for distortion of the aligned image 38 .
- the user 22 determines and inputs to the system 20 desired locations of the template reference points 36 in relation to the second image 32 .
- the system 20 can analyze the desired locations and post-alignment locations of template reference points 36 to discover information about any distortions in an aligned image 38.
- the system 20 may reveal to the user 22 any information pertaining to a distortion analysis.
- FIG. 7A illustrates the target reference points 34 defined in relation to the first image 30.
- FIG. 7B illustrates the geometrical object 35 connecting the target reference points 34.
- FIG. 7C illustrates an indication of the centroid 104 of the geometrical object 35 .
- FIG. 7D illustrates a transferred geometrical object 35 and the template reference points 36 in relation to the second image 32 .
- FIG. 7A is a diagram illustrating one example of target reference points 34 defined in relation to the first image 30.
- FIG. 7B is a diagram illustrating one example of a geometrical object 35 connecting target reference points 34 associated with a first image 30 .
- FIG. 7C is a diagram illustrating an example of a geometrical object 35 and a centroid 104 associated with that geometrical object 35 .
- FIG. 7D is a diagram illustrating a transferred geometrical object 35 and various template reference points 36 positioned in relation to a second image 32 .
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Image Processing (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The invention is a system or method for aligning images (the “system”). A definition subsystem includes a first image, a second image, one or more target reference points, one or more template reference points, and a geometrical object. The definition subsystem identifies one or more target reference points associated with the first image and one or more template reference points associated with the second image by providing a geometrical object for positioning the first image in relation to the second image. A combination subsystem is configured to generate an aligned image from the first image and second image. An interface subsystem may be used to facilitate interactions between a user and the system.
Description
- This application is a continuation of U.S. application Ser. No. 10/630,015, filed on Jul. 30, 2003, which application is hereby incorporated herein by reference in its entirety.
- The present invention relates in general to the alignment of images. More specifically, the present invention relates to a system or method for aligning two or more images (collectively “alignment system” or simply the “system”).
- Image processing often requires that two or more images from the same source or from different sources be “registered,” or aligned, so that they can occupy the same image space. Once properly aligned to the same image space, images can then be compared or combined to form a multidimensional image. Image alignment can be useful in many applications. One such possible application is in medical imaging. For example, an image produced by magnetic resonance imaging (“MRI”) and an image produced by computerized axial tomography (“CAT” or “CT”) originate from different sources. When the images are overlaid, information acquired in relation to soft tissue (MRI) and hard tissue (CT) can be combined to more accurately depict an area of the body. The total value of the combined integrated image can exceed the sum of its parts.
- Another possible application of image alignment is for quality assurance measurements. For example, radiation oncology often requires image treatment plans to be compared to quality assurance films to determine if the treatment plan is actually being executed. There are also numerous non-medical applications for which image alignment can be very useful.
- Several methods are available for image alignment, including automated and manual alignment methods. However, currently available image alignment tools and techniques are inadequate. In many instances, computer automated methods are unsuccessful in aligning images because boundaries are not well defined and images can be poorly focused. Although automated alignment methods perform alignment activities more quickly than existing manual alignment methods, manual alignment methods are often more accurate than automated methods. Thus, manual image alignment methods are often used to make up for deficiencies and inaccuracies of automated alignment methods. However, existing manual alignment systems and methods can be tedious, time consuming, and error prone. It would be desirable for an alignment system to perform in an efficient, accurate, and automated manner.
- The invention is a system or method for aligning images (the “system”). A definition subsystem includes a first image, a second image, one or more target reference points, one or more template reference points, and a geometrical object. The definition subsystem identifies one or more target reference points associated with the first image and one or more template reference points associated with the second image by providing a geometrical object for positioning the first image in relation to the second image. A combination subsystem is configured to generate an aligned image from the first image and second image. An interface subsystem may be used to facilitate interactions between users and the system.
- The alignment system can be applied to images involving two, three, or more dimensions. In some embodiments, an Affine transform heuristic is performed using various target and template points. The Affine transform can eliminate shift, rotational, and magnification differences between different images. In other embodiments, different types of combination heuristics may be used.
- Certain embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
- FIG. 1 is an environmental block diagram illustrating an example of an image alignment system accessible by a user.
- FIG. 2A is a subsystem-level block diagram illustrating an example of a definition subsystem and a combination subsystem.
- FIG. 2B is a subsystem-level block diagram illustrating an example of a definition subsystem, a combination subsystem, and an interface subsystem.
- FIG. 2C is a subsystem-level block diagram illustrating an example of a definition subsystem, a combination subsystem, an interface subsystem, and a detection subsystem.
- FIG. 3 is a flow diagram illustrating an example of how the system receives input and generates output.
- FIG. 4 is a flow diagram illustrating an example of facilitating a positioning of images and generating an aligned image according to the positioned images.
- FIG. 5 is a flow diagram illustrating an example of steps that an image alignment system or method may execute to generate an aligned image.
- FIG. 6 is a flow diagram illustrating an example of steps that a user of an image alignment system may perform to generate an aligned image.
- FIG. 7A is a diagram illustrating one example of target reference points associated with a first image.
- FIG. 7B is a diagram illustrating one example of a geometrical object connecting target reference points associated with a first image.
- FIG. 7C is a diagram illustrating an example of a geometrical object and a centroid associated with that geometrical object.
- FIG. 7D is a diagram illustrating a geometrical object and various template reference points positioned in relation to a second image.
- The present invention relates generally to methods and systems for aligning images (collectively an “image alignment system” or “the system”) by producing an aligned image from a number of images and a relationship between various reference points associated with those images. A geometrical object can be formed from selected reference points in one image, copied or transferred to a second image, and positioned within that second image to establish a relationship between reference points.
- The system can be used in a wide variety of different contexts, including medical applications, photography, geology, and any other field that involves the use of images. The system can be implemented in a wide variety of different devices and hardware configurations. A wide variety of different interfaces, software applications, operating systems, computer hardware, and peripheral components may be incorporated into or interface with the system. There are numerous combinations and environments that can utilize one or more different embodiments of the system.
- Referring now to the drawings, FIG. 1 is a block diagram illustrating an example of some of the elements that can be incorporated into an image alignment system 20. For illustrative purposes only, FIG. 1 shows a human being to represent a user 22, a computer terminal to represent an access device 24, a GUI to represent an interface 26, and a computer tower to represent a computer 28.
A. User
- A user 22 can access the system 20 through an access device 24. In many embodiments of the system 20, the user 22 is a human being. In some embodiments of the system 20, the user 22 may be an automated agent, a robot, a neural network, an expert system, an artificial intelligence device, or some other form of intelligence technology (collectively “intelligence technology”). The system 20 can be implemented in many different ways, giving users 22 a potentially wide variety of different ways to configure the processing performed by the system 20.
B. Access Device
- The user 22 accesses the system 20 through the access device 24. The access device 24 can be any device that is either: (a) capable of performing the programming logic of the system 20; or (b) capable of communicating with a device that is capable of performing the programming logic of the system 20. Access devices 24 can include desktop computers, laptop computers, mainframe computers, mini-computers, programmable logic devices, embedded computers, hardware devices capable of performing the processing required by the system 20, cell phones, satellite pagers, personal data assistants (“PDAs”), and a wide range of future devices that may not yet currently exist. The access device 24 can also include various peripherals associated with the device such as a terminal, keyboard, mouse, screen, printer, input device, output device, or any other apparatus that can relay data or commands between a user 22 and an interface 26.
C. Interface
- The user 22 uses the access device 24 to interact with an interface 26. In an Internet embodiment of the system 20, the interface 26 is typically a web page that is viewable from a browser on the access device 24. In other embodiments, the interface 26 is likely to be influenced by the operating system and other characteristics of the access device 24. Users 22 can view system 20 outputs through the interface 26, and users 22 can also provide system 20 inputs by interacting with the interface 26.
- In many embodiments, the interface 26 can be described as a combination of the various information technology layers relevant to communications between various software applications and the user 22. For example, the interface 26 can be the aggregate characteristics of a graphical user interface (“GUI”), an intranet, an extranet, the Internet, a local area network (“LAN”), a wide area network (“WAN”), a software application, another type of network, and any other factor relating to the relaying of data or commands between an access device 24 and a computer 28, or between a user 22 and a computer 28.
D. Computer
- A computer 28 is any device or combination of devices that allows the processing of the system 20 to be performed. The computer 28 may be a general purpose computer capable of running a wide variety of different software applications or a specialized device limited to particular functions. In some embodiments, the computer 28 is the same device as the access device 24. In other embodiments, the computer 28 is a network of computers 28 accessed by the access device 24. The system 20 can incorporate a wide variety of different information technology architectures. The computer 28 is able to receive, incorporate, store, and process information that may relate to operation of the image alignment system 20. The computer 28 may include any type, number, form, or configuration of processors, system memory, computer-readable mediums, peripheral devices, and operating systems. In many embodiments, the computer 28 is a server and the access device 24 is a client device accessing the server.
- Many of the processing elements of the system 20 exist as representations within the computer 28. Images to be aligned by the system 20, such as a first image 30 and a second image 32, are examples of processing elements existing as representations within the computer 28. An image may include various reference points, and those reference points can exist as representations within the computer 28. A geometrical object 35 of reference point(s) that is used to align a first image 30 with respect to a second image 32 also exists as a representation within the computer 28.
E. Images
- The images can take any form recognizable by the computer 28, including graphical or data representations. The representations can involve two-dimensional, three-dimensional, or even greater than three-dimensional information. One or more of the images may be a digital image. An aligned image 38 can be formed from any number of images.
- An image is potentially any visual representation that can potentially be aligned with one or more other visual representations. In many embodiments, images are captured through the use of a light-based sensor, such as a camera. In other embodiments, images can be generated from non-light-based sensors or other sources of information and data. An ultrasound image is an example of an image that is generated from a non-light-based sensor.
- The images processed by the system 20 are preferably digital images. In some embodiments, the images are initially captured in a digital format and are passed unmodified to the system 20. In other embodiments, digital images may be generated from analog images. Various enhancement heuristics may be applied to an image before it is aligned by the system 20, but the system 20 does not require the performance of such pre-alignment enhancement processing.
- The computer 28 may act upon multiple images in myriad ways, including the execution of commands or instructions that are provided by the user 22 of the system 20. For example, the computer 28 can receive input from the user 22 through the interface 26 and, from a first image 30 (a “target image”), a second image 32 (a “template image”), target reference points 34, a geometrical object 35, and template reference points 36, generate an aligned image 38.
F. Reference Points
- A reference point is a location on an image that is used by the system 20 to align the image with another image. Reference points may be as small as an individual pixel or as large as a constellation of pixels. In a preferred embodiment, the user 22 identifies the reference points and the system 20 generates the aligned image 38 from the reference points in an automated fashion without further user 22 intervention.
- As seen in FIG. 1, target reference points 34 are associated with the first image 30 (the “target image”) and template reference points 36 are associated with a second image 32 (the “template image”). Any number of target images can be aligned with respect to a single template image. The target reference points 34 and template reference points 36 are locations in relation to an image, and the system 20 uses the locations of the target reference points 34 and the template reference points 36 to determine a relationship so that an aligned image 38 can be generated. Locations of the template reference points 36 may be determined by positioning the geometrical object 35 within the second image 32. Thus, the geometrical object 35 can be used to facilitate a generation of an aligned image 38. In the embodiment shown in FIG. 1, a geometrical object 35 is transmitted or copied from a first image 30 to a second image 32. In alternative embodiments, the geometrical object 35 may be reproduced in the second image 32 in some other way.
G. Geometrical Object
- The geometrical object 35 is the configuration of target reference point(s) 34 within the target image 30 that is used to align the target image 30 with the template image 32. In a preferred embodiment, the geometrical object 35 is made up of at least three points.
II. Subsystem-Level Views
- The system 20 can be implemented in the form of various subsystems. A wide variety of different subsystem configurations can be incorporated into the system 20.
- FIGS. 2A, 2B, and 2C illustrate different subsystem-level configurations of the system 20. FIG. 2A shows a system 20 made up of two subsystems: a definition subsystem 40 and a combination subsystem 42. FIG. 2B illustrates a system 20 made up of three subsystems: the definition subsystem 40, the combination subsystem 42, and an interface subsystem 44. FIG. 2C displays an association of four subsystems: the definition subsystem 40, the combination subsystem 42, the interface subsystem 44, and a detection subsystem 45. Interaction between subsystems 40-44 can include an exchange of data, algorithms, instructions, commands, locations of points in relation to images, or any other communication helpful for implementation of the system 20.
A. Definition Subsystem
- The
definition subsystem 40 allows the system 20 to define the relationship(s) between the first image 30 and the second image 32 so that the combination subsystem 42 can create the aligned image 38 from the first image 30 and the second image 32. - The processing elements of the
definition subsystem 40 can include the first image 30, the second image 32, the target reference points 34, the template reference points 36, and the geometrical object 35. The target reference points 34 are associated with the first image 30, and the template reference points 36 are associated with the second image 32. The target reference points 34 may be selected through an interface subsystem 44 or by any other method recognizable to the definition subsystem 40. The definition subsystem 40 is configured to define or create the geometrical object 35.
- In one embodiment, the definition subsystem 40 generates the geometrical object 35 by connecting at least a subset of the target reference points 34. The definition subsystem 40 may further identify a centroid of the geometrical object 35. In addition, the definition subsystem 40 may impose a constraint upon one or more target reference points 34. Constraints may be purely user defined on a case-by-case basis, or may be created by the system 20 through the implementation of user-defined processing rules. By imposing the constraint upon one or more target reference points 34, the definition subsystem 40 can ensure that the target reference points 34 are adequate for generation of the geometrical object 35. The definition subsystem 40 can impose any number, combination, or type of constraint. These constraints may include a requirement that a minimum number of target reference points 34 be identified, that a minimum number of target reference points 34 not be co-linear, or that target reference points 34 be within or outside of a specified area of an image.
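- For illustration only, the minimum-count and non-collinearity constraints described above can be checked numerically. The sketch below is not from the specification; the function name, minimum-point threshold, and tolerance are assumptions.

```python
# Hypothetical sketch of a constraint check on target reference points.
# The names, minimum count, and tolerance are illustrative assumptions.
import numpy as np

def constraint_violations(points, min_points=3, tol=1e-6):
    """Return a list of violated constraints for the given (x, y) points."""
    pts = np.asarray(points, dtype=float)
    problems = []
    if len(pts) < min_points:
        problems.append(f"need at least {min_points} points, got {len(pts)}")
    elif np.linalg.matrix_rank(pts - pts.mean(axis=0), tol=tol) < 2:
        # Rank < 2 after centering means every point lies on one line.
        problems.append("points are co-linear; define another point")
    return problems

print(constraint_violations([(0, 0), (1, 1), (2, 2)]))  # flags co-linearity
```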
- The definition subsystem 40 generates the geometrical object 35 and coordinates the geometrical object 35 with the second image 32, which generation and coordination can be accomplished by any method known to a person skilled in the art, including by transferring or copying the geometrical object 35 to the second image 32. The definition subsystem 40 can provide a plurality of controls for positioning the geometrical object 35 within the second image 32. The controls may include any one of or any combination of a control for shifting the geometrical object 35 along a dimensional axis, a control for rotating the geometrical object 35, a control for changing a magnification of the geometrical object 35, a coarse position control, a fine position control, or any other control helpful for a positioning of the geometrical object 35 in relation to the second image 32.
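- The positioning controls can be pictured as simple operations on the object's vertex coordinates. The following sketch, with assumed function names, models the shift, rotation, and magnification controls; a coarse control would simply apply larger steps than a fine control.

```python
# Illustrative positioning controls applied to the geometrical object's
# vertices; rotation and magnification act about the object's centroid.
import numpy as np

def shift(vertices, dx=0.0, dy=0.0):
    return np.asarray(vertices, float) + np.array([dx, dy])

def rotate(vertices, angle_rad):
    v = np.asarray(vertices, float)
    c = v.mean(axis=0)
    r = np.array([[np.cos(angle_rad), -np.sin(angle_rad)],
                  [np.sin(angle_rad),  np.cos(angle_rad)]])
    return (v - c) @ r.T + c

def magnify(vertices, scale):
    v = np.asarray(vertices, float)
    c = v.mean(axis=0)
    return (v - c) * scale + c
```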
- The definition subsystem 40 can include a thumbnail image of the geometrical object 35. In some embodiments, the definition subsystem 40 can identify a plurality of positions of the geometrical object 35 in relation to the second image 32. Those positions may include a gross position and a fine position. The thumbnail image may be used to identify gross or fine positions of the geometrical object 35 in relation to the second image 32. The definition subsystem 40 can identify a plurality of positions of the geometrical object 35 in a substantially simultaneous manner. In some embodiments, the definition subsystem 40 adjusts the geometrical object 35 within the second image 32, such as by adjusting a positioning of the geometrical object 35 within the second image 32.
- The geometrical object 35 can be used to define template reference points 36. In one embodiment, vertices of the geometrical object 35 correspond with template reference points 36 when the geometrical object 35 is located within or about the second image 32. A positioning of the geometrical object 35 in relation to the second image 32 positions the vertices or other relevant points of the geometrical object 35 so as to define the template reference points 36. The definition subsystem 40 can provide an accuracy metric related to at least one of the template reference points 36. The accuracy metric can identify a measurement of accuracy of a positioning of at least one of the template reference points 36 in relation to an estimated or predicted position of reference points within the second image 32.
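- One plausible reading of such an accuracy metric is the per-point Euclidean distance between each positioned template reference point and its estimated position, summarized as a root-mean-square value. This sketch is an assumption, not the metric defined by the specification.

```python
# Assumed accuracy metric: per-point distances plus an RMS summary (pixels).
import numpy as np

def accuracy_metric(positioned_pts, predicted_pts):
    d = np.linalg.norm(np.asarray(positioned_pts, float)
                       - np.asarray(predicted_pts, float), axis=1)
    return d, float(np.sqrt(np.mean(d ** 2)))
```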
- The alignment system 20 can be applied to images involving two, three, or more dimensions. In some embodiments, an affine transform heuristic is performed using various target reference points 34 and template reference points 36. The affine transform can eliminate shift, rotational, and magnification differences between different images. In other embodiments, different types of relationship-related heuristics may be used by the definition subsystem 40 and/or the combination subsystem 42. Other examples of heuristics known in the art that relate to potential relationships between images and/or points include a linear conformal heuristic, a projective heuristic, a polynomial heuristic, a piecewise linear heuristic, and a locally weighted mean heuristic. The various relationship-related heuristics allow the system 20 to compare images and points that would otherwise not be in a format suitable for the establishment of a relationship between the various images and/or points. In other words, relationship-related heuristics such as the affine transform heuristic are used to "compare apples to apples and oranges to oranges."
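- The affine relationship can be estimated from the point correspondences by standard least squares. The sketch below solves for the six affine parameters mapping target reference points onto template reference points; it is a textbook derivation, not code from the patent. At least three non-collinear correspondences are needed, which is consistent with the point-count constraints above.

```python
# Least-squares estimate of the affine map taking target reference points
# onto template reference points: template ~= target @ A.T + t.
import numpy as np

def estimate_affine(target_pts, template_pts):
    src = np.asarray(target_pts, float)
    dst = np.asarray(template_pts, float)
    M = np.hstack([src, np.ones((len(src), 1))])   # n x 3 design matrix
    P, *_ = np.linalg.lstsq(M, dst, rcond=None)    # solve M @ P = dst
    A, t = P[:2].T, P[2]                           # 2x2 linear part, 2-vector shift
    return A, t
```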
- B. Combination Subsystem
- The combination subsystem 42 is responsible for creating the aligned image 38 from the images and relationships maintained in the definition subsystem 40. The combination subsystem 42 includes the aligned image 38. The combination subsystem 42 is configured to generate the aligned image 38 from the first image 30, the second image 32, at least one of the target reference points 34, and at least one of the template reference points 36. The generation of the aligned image 38 by the combination subsystem 42 can be accomplished in a number of ways. The combination subsystem 42 may access the target reference points 34 and the template reference points 36 from the definition subsystem 40. The combination subsystem 42 can generate an alignment calculation or determine a relationship between at least one of the target reference points 34 and at least one of the template reference points 36. The combination subsystem 42 can use an alignment calculation or relationship to align the first image 30 and the second image 32. In another embodiment, the combination subsystem 42 uses locations of the target reference points 34 and the template reference points 36 to generate the aligned image 38.
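- As one concrete, deliberately simplified picture of the combination step, the affine map estimated above can be used to resample the first image onto the second image's pixel grid by inverse mapping. A production system would use a library resampler with proper interpolation; this nearest-neighbor sketch only shows the idea.

```python
# Simplified combination step: inverse-map each output pixel through the
# estimated affine transform and sample the target image (nearest neighbor).
import numpy as np

def align_image(target_img, A, t, out_shape):
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    out_pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    src_pts = (out_pts - t) @ np.linalg.inv(A).T   # back to target coordinates
    sx = np.clip(np.rint(src_pts[:, 0]).astype(int), 0, target_img.shape[1] - 1)
    sy = np.clip(np.rint(src_pts[:, 1]).astype(int), 0, target_img.shape[0] - 1)
    return target_img[sy, sx].reshape(h, w)
```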
- C. Interface Subsystem
- An interface subsystem 44 can be included in the system 20 and configured to allow the system 20 to interact with users 22. Inputs may be received by the system 20 from the user 22 through the interface subsystem 44, and users 22 may view the outputs of the system 20 through the interface subsystem 44. Any data, command, or other item understandable to the system 20 may be communicated to or from the interface subsystem 44. In a preferred embodiment, the user 22 can create processing rules through the interface subsystem 44 that can be applied to many different processing contexts on an ongoing basis. The interface subsystem 44 includes the interface 26 discussed above. - D. Detection Subsystem
- A
detection subsystem 45 can be configured to detect distortions, or other indications of a problem, relating to an aligned image 38. The detection subsystem 45 also allows a user 22 to check for distortions in an aligned image 38. Once a distortion has been detected, the detection subsystem 45 identifies the extent and nature of the distortion. The user 22 can use data provided by the detection subsystem 45 to check for a misalignment of a device or system that generated the first image 30 or the second image 32. The detection subsystem 45 can be configured by a user 22 through the use of the interface subsystem 44. - III. Input/Output View
-
FIG. 3 is a flow diagram illustrating an example of how the system receives input and generates output. A computer program 50 residing on a computer-readable medium receives user input 46 through an input interface 48 and provides output 54 to the user 22 through an output interface 52. The computer program 50 includes the target reference points 34, the geometrical shape 35 (FIGS. 1-2), the first image 30, the second image 32, the template reference points 36, a third image, and the interface 26. As previously discussed, the target reference points 34 are associated with the first image 30. The computer program 50 can generate a geometrical object 35 or shape in a number of ways, including by connecting at least a subset of the target reference points 34. The geometrical shape 35 can be any number or combination of any shape, including but not limited to a segment, line, ellipse, arc, polygon, and triangle.
- The input 46 may include a constraint imposed upon the target reference points 34 or the geometrical shape 35 by the computer program 50. By imposing a constraint upon target reference points 34, the computer program 50 ensures that the target reference points 34 are adequate for generation of the geometrical shape 35. The system 20 can impose any number, combination, or type of constraint, including a requirement that a minimum number of target reference points 34 be identified, that a minimum number of target reference points 34 not be co-linear, or that target reference points 34 be within or outside of a specified area. In one embodiment, the computer program 50 requires more than four target reference points 34. The computer program 50 may identify a centroid of the geometrical shape 35.
- The second image 32 can be configured to include the geometrical shape 35. The geometrical shape 35 is generated by the computer program 50 within the second image 32. The computer program 50 can accomplish a generation of the geometrical shape 35 within the second image 32 in a number of ways. For example, the computer program 50 may transfer or copy the geometrical shape 35 from one image to another.
- The computer program 50 provides for identifying the template reference points 36 or locations of the template reference points 36 in relation to the second image 32. In one embodiment, the template reference points 36 can be identified by a positioning of the geometrical shape 35 in relation to a second image 32, which positioning is provided for by the computer program 50. The computer program 50 provides a number of controls for positioning the geometrical shape 35 within the second image 32. The manipulation of the controls is a form of input 46. The controls may include any one of or any combination of a shift control, a rotation control, a magnification control, a coarse position control, a fine position control, or any other control helpful for a positioning of the geometrical shape 35 in relation to a second image 32. The controls can function in a number of modes, including a coarse mode and a fine mode. The computer program 50 provides for positioning the geometrical shape 35 by shifting the geometrical shape 35 along a dimensional axis, rotating the geometrical shape 35, and changing a magnification of the geometrical shape 35. A positioning of the geometrical shape 35 can include a coarse adjustment and a fine adjustment. The computer program 50 is capable of identifying a plurality of positions of the geometrical shape 35 in relation to the second image 32, including a gross position and a fine position of the geometrical shape 35 in relation to the second image 32. This identification can be performed in a substantially simultaneous manner. A thumbnail image of an area adjacent to a vertex of the geometrical shape 35 can be provided by the computer program 50.
- The computer program 50 can provide an accuracy metric related to at least one of the template reference points 36. The accuracy metric is a form of output 54. The accuracy metric can identify a measurement of accuracy of a positioning of at least one of the template reference points 36 in relation to an estimated or predicted position of reference points within the second image 32.
- The third image (the aligned image 38) is created from the first image 30, the second image 32, and a relationship between the target reference points 34 and the template reference points 36. The creation of the third image by the computer program 50 can be accomplished in a number of ways. The computer program 50 can generate an alignment calculation or determine a relationship between at least one of the target reference points 34 and at least one of the template reference points 36. The computer program 50 can use an alignment calculation or relationship to align the first image 30 and the second image 32. In another embodiment, the computer program 50 uses locations of the target reference points 34 and the template reference points 36 to generate the third image.
- The computer program 50 can be configured to detect distortions of the third image. Once a distortion has been detected, the computer program 50 can identify the extent and nature of the distortion. A user 22 can use data generated by the computer program 50 to check for a misalignment of a device or system that generated the first image 30 or the second image 32. The output 54 of the computer program 50 can include various distortion metrics, misalignment metrics, and other forms of error metrics (collectively "accuracy metrics").
- The interface 26 of the computer program 50 is configured to receive input. The interface 26 can include an input interface 48 and an output interface 52. The input can include but is not limited to an instruction for defining the target reference points 34 and a command for positioning the geometrical shape 35 in relation to the second image 32. The computer program 50 can be configured to execute other operations disclosed herein or known to a person skilled in the art that are relevant to the present invention. - IV. Process-Flow Views
- A. Example 1
-
FIG. 4 is a flow diagram illustrating an example of facilitating a positioning of images and generating an aligned image according to the positioned images. At 56, a relationship is defined between the various images to be aligned. At 57, the system 20 facilitates the positioning of the images in accordance with the previously defined relationship. For example, the template image 32 is positioned in relation to the target image 30 and the target image 30 is positioned in relation to the template image 32. At 58, the system generates the aligned image 38 in accordance with the positioning performed at 57. - The
system 20 can perform the three steps identified above in a number of different ways. For example, the positioning of the images can be facilitated by providing controls for the positioning of the template image in relation to the object image. - B. Example 2
-
FIG. 5 is a flow diagram illustrating an example of steps that an image alignment system 20 may execute to generate the aligned image 38.
- At 60, the system 20 receives input for defining the target reference points 34 associated with a first image 30. Once the input 46 is received, or as it is received, the system 20 can then generate the geometrical object 35 at 62. The input 46 may include a command. The geometrical object 35 can be generated in a variety of ways, such as by connecting the target reference points 34. In the preferred embodiment, the system 20 may be configured to require that at least four target reference points 34 be connected in generating the geometrical object 35. The geometrical object 35 can take any form or shape that connects the target reference points 34, and each target reference point 34 is a vertex or other defining feature of the geometrical object 35. In one category of embodiments, the geometrical object 35 is a polygon.
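- One simple way to "connect" the points into a polygon, offered here as an assumption rather than the patented method, is to order the vertices by angle about their mean point; for points in convex position this traces a simple closed boundary.

```python
# Assumed polygon construction: order target reference points by angle
# about their mean so consecutive points can be connected into a boundary.
import numpy as np

def polygon_from_points(points):
    pts = np.asarray(points, float)
    c = pts.mean(axis=0)
    order = np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))
    return pts[order]          # vertices in counterclockwise order
```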
- At 64, the system 20 imposes and checks a constraint against the target reference points 34. If the target reference points 34 at 66 do not meet constraints imposed by the system 20, the system 20 at 68 prompts and waits for input changing or adding to definitions of the target reference points 34. Once additional reference point data is received, the system 20 again generates a geometrical object 35 at 62 and checks constraints against the target reference points 34 at 64. The system 20 may repeat these steps until the target reference points 34 satisfy the constraints. Any type of constraint can be imposed upon the target reference points 34, including requiring enough target reference points 34 to define a particular form of geometrical object 35. For example, the system 20 may require that at least four target reference points 34 are defined. If more than two target reference points 34 are co-linear, the system 20 may require that additional target reference points 34 be defined. The system 20 may use the geometrical object 35 to impose constraints upon the target reference points 34.
- Once the target reference points 34 are deemed at 66 to meet the constraints imposed by the system 20, the system 20 generates a geometrical object 35 within the first image 30 and regenerates the geometrical object 35 in the second image 32 space at 70. The geometrical object 35 can be generated in the second image 32 space in a number of ways, including transferring or copying the geometrical object 35 from the first image 30 to the second image 32. The geometrical object 35 can be represented by a set of connected points, a solid object, a semi-transparent object, a transparent object, or any other object that can be used to represent a geometrical object 35. Any such representation can be displayed by the system 20.
- The system 20 identifies the template reference points 36 based on a placement of the geometrical object 35 in relation to the second image 32. In one embodiment, the template reference points 36 are identified by providing controls at 71 for positioning the geometrical object 35 in relation to the second image 32. A variety of controls can be made available, including one of or a combination of controls for shifting the geometrical object 35 up, down, left, or right in relation to the second image 32, rotating the geometrical object 35 in relation to the second image 32, changing the magnification or size of the geometrical object 35 in relation to the second image 32, moving the geometrical object 35 through multiple dimensions, switching between coarse and fine positioning of the geometrical object 35, or any other control that can be used to adjust the geometrical object 35 in relation to the second image 32. A command can be received as an input 46 allowing for the positioning of the geometrical object 35 by at least one of rotating the geometrical object 35, adjusting a magnification of the geometrical object 35, and shifting the geometrical object 35 along a dimensional axis. A command may allow for coarse and fine adjustments of the geometrical object 35.
- In one embodiment, the system 20 provides a thumbnail image to the interface 26 for displaying an area proximate to at least one of the template reference points 36. The thumbnail image can be configured to allow for a substantially simultaneous display of fine positioning detail and coarse positioning detail, for example, by providing both a view of thumbnail images and a larger view of the geometrical object 35 in relation to the second image 32 to the user 22 for simultaneous viewing.
- The system 20 may provide an accuracy metric or an accuracy measurement detail either for a composite of the template reference points 36 or individually for at least one of the individual template reference points 36. The system 20 may calculate any number of accuracy metrics. Accuracy metrics facilitate an optimal positioning of the geometrical object 35 within the second image 32. In one embodiment, the system 20 receives input commands from an interface or from the user 22 for positioning the geometrical object 35 at 72 in relation to the second image 32. In one category of embodiments, the system 20 adjusts a positioning of the geometrical object 35 at 74 within the second image 32. This adjustment can be based upon the accuracy metric. The system 20 may use a computer-implemented process, such as a refinement heuristic, or any other image alignment tool for adjusting a placement of the geometrical object 35 in relation to the second image 32.
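- A refinement heuristic of the kind mentioned at 74 might, for example, greedily nudge the object while the accuracy metric improves. The coarse-to-fine step schedule and stopping rule below are illustrative assumptions.

```python
# Toy refinement heuristic: accept shifts that lower the RMS accuracy
# metric, sweeping from coarse to fine step sizes.
import numpy as np

def refine_position(vertices, predicted_pts, steps=(4.0, 1.0, 0.25)):
    v = np.asarray(vertices, float)
    p = np.asarray(predicted_pts, float)
    rms = lambda q: float(np.sqrt(np.mean(np.sum((q - p) ** 2, axis=1))))
    for step in steps:
        improved = True
        while improved:
            improved = False
            for d in ((step, 0), (-step, 0), (0, step), (0, -step)):
                cand = v + np.asarray(d, float)
                if rms(cand) < rms(v):
                    v, improved = cand, True
    return v
```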
- In other embodiments, the locations of the template reference points 36 can be determined in other ways. For example, the user 22 may define the template reference points 36 by pointing and clicking on locations within the second image 32, or the template reference points 36 can be predefined. Once locations of the template reference points 36 have been determined, the system 20 can determine a relationship at 78 between the target reference points 34 and the template reference points 36. Such a relationship can be a mathematical relationship and can be determined in any of a number of ways.
- The system 20 at 80 generates the aligned image 38 from the first image 30 and the second image 32. In one embodiment, the generation occurs by the system producing the aligned image 38 from the first image 30, the second image 32, and a relationship between at least one of the target reference points 34 in the first image 30 and at least one of the template reference points 36 in the second image 32. The system 20 can use an alignment calculation or a computer-implemented combination heuristic to generate the aligned image 38. Some such heuristics are known in the prior art.
- The system 20 can check at 82 for distortions of the aligned image 38. By checking for a distortion in the aligned image 38, the system 20 can detect a possible misalignment of a device used to generate the first image 30 or the second image 32. In one embodiment of the present invention, the system 20 checks for distortions in the aligned image 38 by comparing the locations of the vertices of the geometrical object 35 in relation to the second image 32 with defined or desired locations of the template reference points 36, which defined or desired points may be indicated by the user 22 of the system 20. This comparison of discrepancies produces an alignment status. The system 20 analyzes the degree and nature of any misalignment between locations of the vertices of the geometrical object 35 and defined locations of the template reference points 36 to reveal information about the degree and nature of any misalignment of an image generating device or system. Analyzing distortions allows the system 20 or the user 22 to analyze the alignment status of an image generation device.
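- The distortion check at 82 can be summarized, under assumed names and an assumed tolerance, as a per-vertex comparison between post-alignment vertex locations and the user's desired locations:

```python
# Assumed distortion check: per-vertex discrepancy against desired
# template reference point locations, with a simple alignment status.
import numpy as np

def distortion_report(aligned_vertices, desired_pts, tol_px=2.0):
    d = np.linalg.norm(np.asarray(aligned_vertices, float)
                       - np.asarray(desired_pts, float), axis=1)
    status = "aligned" if d.max() <= tol_px else "possible device misalignment"
    return {"per_vertex_px": d.round(3).tolist(),
            "max_px": float(d.max()), "status": status}
```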
- C. Example 3
- FIG. 6 is a flow diagram illustrating an example of steps that a user 22 of an image alignment system 20 can perform through an access device 24 and an interface 26 to generate an aligned image 38.
- At 84, the user 22 selects or inputs images for alignment. The user 22 can provide images to the system 20 in any form recognizable by the system 20, including digital representations of images. The user 22 at 86 defines the target reference points 34 of a first image 30. The target reference points 34 can be defined by pointing and clicking on locations within the first image 30, by importing or selecting predefined target reference points 34, or by any other way understandable to the system 20.
- The user 22 of the system 20 at 88 can initiate generation of the geometrical object 35. The generation can be initiated in a number of ways, including defining the target reference points 34, defining a set number of target reference points 34 that satisfy constraints, submitting a specific instruction to the system 20 to generate the geometrical object 35, or any other means by which the user 22 may signal the system 20 to generate the geometrical object 35. The system 20 can select an appropriate type of geometrical object 35 to generate, or the user 22 may select a type of geometrical object 35 to be generated. In one embodiment, the system 20 generates a geometrical object 35 by connecting the target reference points 34.
- The user 22 determines a centroid 104 of the geometrical object 35. In an alternative embodiment, the system 20 can determine and indicate the centroid 104 of the geometrical object 35. A determination of the centroid 104 is helpful for eliminating or at least mitigating errors that can occur in the image alignment system 20. By determining the centroid 104 of the geometrical object 35, the user 22 can verify that the centroid 104 is near the center of a critical area of the first image 30. If the system 20 or the user 22 of the system 20 determines that the centroid 104 of the geometrical object 35 is not as near to a critical area of the first image 30 as is desired, the user 22 can redefine the target reference points 34.
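- The centroid 104 can be computed in at least two defensible ways: as the mean of the vertices, or as the area centroid of the polygon via the shoelace formula. The sketch below shows the latter; the choice between the two is an assumption, as the specification does not say which is used.

```python
# Area centroid of a simple polygon (shoelace formula); assumes a
# non-degenerate polygon. The vertex mean would be an alternative,
# simpler reading of "centroid".
import numpy as np

def polygon_centroid(vertices):
    v = np.asarray(vertices, float)
    x, y = v[:, 0], v[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    area = cross.sum() / 2.0
    cx = ((x + xn) * cross).sum() / (6.0 * area)
    cy = ((y + yn) * cross).sum() / (6.0 * area)
    return np.array([cx, cy])
```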
- The user 22 of the system 20 at 92 initiates a transfer or copy of the geometrical object 35 to the second image 32 space. The user 22 may signal the system 20 to transfer the geometrical object 35 in any way recognizable by the system 20. One such way is by sending an instruction to the system 20 for generation of the geometrical object 35 in the second image 32. Upon receipt of an appropriate signal, the system 20 transfers or copies the geometrical object 35 to the second image 32 space.
- Once the geometrical object 35 is transferred to the second image 32 space, the user 22 positions the geometrical object 35 within the second image 32 space. The user 22 can use controls provided by the system 20 or that are a part of the system 20 to position the geometrical object 35. In one embodiment, the user 22 can shift the geometrical object 35 up, down, left, or right in relation to the second image 32, rotate the geometrical object 35 in relation to a second image 32, change the magnification or size of the geometrical object 35 in relation to the second image 32, move the geometrical object 35 through multiple dimensions, switch between coarse and fine positioning of the geometrical object 35, or execute any other control that can be used to adjust the geometrical object 35 in relation to the second image 32. The user 22 may use a thumbnail view or an accuracy metric to position the geometrical object 35.
- The user 22 of the system 20 initiates alignment of the first image 30 and the second image 32. The user 22 may signal the system 20 to initiate the alignment in any way recognizable by the system 20. One such way is to send an instruction for alignment to the system 20 via the access device 24 or the interface 26. Upon receipt of an alignment signal, the system 20 generates the aligned image 38 from the first image 30 and the second image 32.
- At 98, the user 22 of the system 20 checks for distortion of the aligned image 38. In one embodiment, the user 22 determines and inputs to the system 20 desired locations of the template reference points 36 in relation to the second image 32. The system 20 can analyze the desired locations and the post-alignment locations of the template reference points 36 to discover information about any distortions in an aligned image 38. The system 20 may reveal to the user 22 any information pertaining to a distortion analysis. - V. Examples of Reference Points and Geometric Objects
-
FIG. 7A is a diagram illustrating one example of target reference points 34 defined in relation to the first image 30. FIG. 7B is a diagram illustrating one example of a geometrical object 35 connecting target reference points 34 associated with a first image 30. FIG. 7C is a diagram illustrating an example of a geometrical object 35 and a centroid 104 associated with that geometrical object 35. FIG. 7D is a diagram illustrating a transferred geometrical object 35 and various template reference points 36 positioned in relation to a second image 32. - VI. Alternative Embodiments
- The above description is intended to be illustrative and not restrictive. Many embodiments and applications other than the examples provided would be apparent to those of skill in the art upon reading the above description. The scope of the invention should be determined not with reference to the above description, but with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is anticipated and intended that future developments will occur in image alignment systems and methods, and that the invention will be incorporated into such future embodiments.
Claims (17)
1-49. (canceled)
50. A method for aligning radiographic images, comprising:
facilitating a positioning of a template image in relation to an object image; and
generating an aligned image from a target image and said object image according to said positioning of said template image in relation to said object image.
51. The method of claim 50, wherein said positioning is facilitated by providing controls for said positioning of said template image in relation to said object image.
52. An apparatus for aligning images, comprising:
a computer program tangibly embodied on a computer-readable medium, said computer program including instructions for:
facilitating a positioning of a template image in relation to an object image; and
generating an aligned image from a target image and said object image according to said positioning of said template image in relation to said object image.
53. The apparatus of claim 52, wherein said positioning is facilitated by providing controls for said positioning of said template image in relation to said object image.
54. An apparatus for aligning images, comprising:
a computer program tangibly embodied on a computer-readable medium, said computer program including instructions for:
receiving an input for defining target reference points associated with a first image;
generating a geometrical object by connecting at least four of said target reference points;
identifying template reference points based on a placement of said geometrical object in relation to a second image; and
producing an aligned image from said first image, said second image, and a relationship between at least one of said target reference points in said first image and at least one of said template reference points in said second image.
55. The apparatus of claim 54, said computer program further including instructions for imposing a constraint on said target reference points.
56. The apparatus of claim 54, wherein said input includes a command.
57. The apparatus of claim 56, wherein said command allows for at least one of shifting said geometrical object along a dimensional axis, rotating said geometrical object, and adjusting a magnification of said geometrical object.
58. The apparatus of claim 56, wherein said command allows for coarse adjustments and fine adjustments of said geometrical object.
59. The apparatus of claim 54, said computer program further including instructions for calculating a plurality of accuracy metrics.
60. The apparatus of claim 54, said computer program further including instructions for displaying said geometrical object as one of a solid object, a semi-transparent object, and a transparent object.
61. The apparatus of claim 54, said computer program further including instructions for displaying a thumbnail image of an area proximate to at least one of said template reference points.
62. The apparatus of claim 61, wherein said thumbnail image is configured to allow for a substantially simultaneous display of fine positioning detail and coarse positioning detail.
63. The apparatus of claim 54, said computer program further including instructions for providing an accuracy measurement detail for at least one of said template reference points or for a composite of said template reference points.
64. The apparatus of claim 63, said computer program further including instructions for analyzing an alignment status of an image generation device, wherein said alignment status is determined from a discrepancy between said accuracy measurement detail and defined locations for said template reference points.
65. The apparatus of claim 54, said computer program further including instructions for adjusting a positioning of said geometrical object within said second image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/133,544 US20050232513A1 (en) | 2003-07-30 | 2005-05-20 | System and method for aligning images |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/630,015 US6937751B2 (en) | 2003-07-30 | 2003-07-30 | System and method for aligning images |
US11/133,544 US20050232513A1 (en) | 2003-07-30 | 2005-05-20 | System and method for aligning images |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/630,015 Continuation US6937751B2 (en) | 2003-07-30 | 2003-07-30 | System and method for aligning images |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050232513A1 (en) | 2005-10-20
Family
ID=34103737
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/630,015 Expired - Lifetime US6937751B2 (en) | 2003-07-30 | 2003-07-30 | System and method for aligning images |
US11/133,544 Abandoned US20050232513A1 (en) | 2003-07-30 | 2005-05-20 | System and method for aligning images |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/630,015 Expired - Lifetime US6937751B2 (en) | 2003-07-30 | 2003-07-30 | System and method for aligning images |
Country Status (5)
Country | Link |
---|---|
US (2) | US6937751B2 (en) |
EP (1) | EP1649424A2 (en) |
JP (1) | JP2007501451A (en) |
CA (1) | CA2529760A1 (en) |
WO (1) | WO2005020150A2 (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8724865B2 (en) * | 2001-11-07 | 2014-05-13 | Medical Metrics, Inc. | Method, computer software, and system for tracking, stabilizing, and reporting motion between vertebrae |
US7298876B1 (en) * | 2002-11-04 | 2007-11-20 | R2 Technology, Inc. | Method and apparatus for quality assurance and quality control in radiological equipment using automatic analysis tools |
US20050219558A1 (en) * | 2003-12-17 | 2005-10-06 | Zhengyuan Wang | Image registration using the perspective of the image rotation |
US20050285947A1 (en) * | 2004-06-21 | 2005-12-29 | Grindstaff Gene A | Real-time stabilization |
US7849796B2 (en) | 2005-03-30 | 2010-12-14 | Goss International Americas, Inc | Web offset printing press with articulated tucker |
EP1863640B1 (en) | 2005-03-30 | 2014-07-16 | Goss International Americas, Inc. | Cantilevered blanket cylinder lifting mechanism |
CN101495313B (en) * | 2005-03-30 | 2011-11-09 | 高斯国际美洲公司 | Print unit having blanket cylinder throw-off bearer surfaces |
EP1863639B1 (en) * | 2005-03-30 | 2012-05-02 | Goss International Americas, Inc. | Web offset printing press with autoplating |
JP4829291B2 (en) * | 2005-04-11 | 2011-12-07 | ゴス インターナショナル アメリカス インコーポレイテッド | Printing unit that enables automatic plating using a single motor drive |
US8094895B2 (en) * | 2005-06-08 | 2012-01-10 | Koninklijke Philips Electronics N.V. | Point subselection for fast deformable point-based imaging |
JP2009544446A (en) * | 2006-07-28 | 2009-12-17 | トモセラピー・インコーポレーテッド | Method and apparatus for calibrating a radiation therapy system |
US8872911B1 (en) * | 2010-01-05 | 2014-10-28 | Cognex Corporation | Line scan calibration method and apparatus |
WO2014133849A2 (en) | 2013-02-26 | 2014-09-04 | Accuray Incorporated | Electromagnetically actuated multi-leaf collimator |
US11884311B2 (en) * | 2016-08-05 | 2024-01-30 | Transportation Ip Holdings, Llc | Route inspection system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010036302A1 (en) * | 1999-12-10 | 2001-11-01 | Miller Michael I. | Method and apparatus for cross modality image registration |
- 2003
- 2003-07-30 US US10/630,015 patent/US6937751B2/en not_active Expired - Lifetime
- 2004
- 2004-07-21 CA CA002529760A patent/CA2529760A1/en not_active Abandoned
- 2004-07-21 WO PCT/US2004/023290 patent/WO2005020150A2/en active Application Filing
- 2004-07-21 EP EP04778677A patent/EP1649424A2/en not_active Withdrawn
- 2004-07-21 JP JP2006521896A patent/JP2007501451A/en active Pending
- 2005
- 2005-05-20 US US11/133,544 patent/US20050232513A1/en not_active Abandoned
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5696835A (en) * | 1994-01-21 | 1997-12-09 | Texas Instruments Incorporated | Apparatus and method for aligning and measuring misregistration |
US6351573B1 (en) * | 1994-01-28 | 2002-02-26 | Schneider Medical Technologies, Inc. | Imaging device and method |
US6563942B2 (en) * | 1994-03-07 | 2003-05-13 | Fuji Photo Film Co., Ltd. | Method for adjusting positions of radiation images |
US6219462B1 (en) * | 1997-05-09 | 2001-04-17 | Sarnoff Corporation | Method and apparatus for performing global image alignment using any local match measure |
US5926568A (en) * | 1997-06-30 | 1999-07-20 | The University Of North Carolina At Chapel Hill | Image object matching using core analysis and deformable shape loci |
US6754374B1 (en) * | 1998-12-16 | 2004-06-22 | Surgical Navigation Technologies, Inc. | Method and apparatus for processing images with regions representing target objects |
US6839454B1 (en) * | 1999-09-30 | 2005-01-04 | Biodiscovery, Inc. | System and method for automatically identifying sub-grids in a microarray |
US6528803B1 (en) * | 2000-01-21 | 2003-03-04 | Radiological Imaging Technology, Inc. | Automated calibration adjustment for film dosimetry |
US6351660B1 (en) * | 2000-04-18 | 2002-02-26 | Litton Systems, Inc. | Enhanced visualization of in-vivo breast biopsy location for medical documentation |
US20020048393A1 (en) * | 2000-09-19 | 2002-04-25 | Fuji Photo Film Co., Ltd. | Method of registering images |
US6675116B1 (en) * | 2000-09-22 | 2004-01-06 | Radiological Imaging Technology, Inc. | Automated calibration for radiation dosimetry using fixed or moving beams and detectors |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7626597B2 (en) * | 2004-06-23 | 2009-12-01 | Fujifilm Corporation | Image display method, apparatus and program |
US20050285812A1 (en) * | 2004-06-23 | 2005-12-29 | Fuji Photo Film Co., Ltd. | Image display method, apparatus and program |
US8965884B2 (en) | 2005-10-12 | 2015-02-24 | Google Inc. | Entity display priority in a distributed geographic information system |
US7933897B2 (en) | 2005-10-12 | 2011-04-26 | Google Inc. | Entity display priority in a distributed geographic information system |
US8290942B2 (en) | 2005-10-12 | 2012-10-16 | Google Inc. | Entity display priority in a distributed geographic information system |
US9715530B2 (en) | 2005-10-12 | 2017-07-25 | Google Inc. | Entity display priority in a distributed geographic information system |
US9785648B2 (en) | 2005-10-12 | 2017-10-10 | Google Inc. | Entity display priority in a distributed geographic information system |
US9870409B2 (en) | 2005-10-12 | 2018-01-16 | Google Llc | Entity display priority in a distributed geographic information system |
US10592537B2 (en) | 2005-10-12 | 2020-03-17 | Google Llc | Entity display priority in a distributed geographic information system |
US11288292B2 (en) | 2005-10-12 | 2022-03-29 | Google Llc | Entity display priority in a distributed geographic information system |
US7616217B2 (en) * | 2006-04-26 | 2009-11-10 | Google Inc. | Dynamic exploration of electronic maps |
US20080059205A1 (en) * | 2006-04-26 | 2008-03-06 | Tal Dayan | Dynamic Exploration of Electronic Maps |
US20170147552A1 (en) * | 2015-11-19 | 2017-05-25 | Captricity, Inc. | Aligning a data table with a reference table |
US10417489B2 (en) * | 2015-11-19 | 2019-09-17 | Captricity, Inc. | Aligning grid lines of a table in an image of a filled-out paper form with grid lines of a reference table in an image of a template of the filled-out paper form |
Also Published As
Publication number | Publication date |
---|---|
JP2007501451A (en) | 2007-01-25 |
WO2005020150A3 (en) | 2005-12-22 |
US6937751B2 (en) | 2005-08-30 |
US20050025386A1 (en) | 2005-02-03 |
EP1649424A2 (en) | 2006-04-26 |
WO2005020150A2 (en) | 2005-03-03 |
CA2529760A1 (en) | 2005-03-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6937751B2 (en) | System and method for aligning images | |
US6954212B2 (en) | Three-dimensional computer modelling | |
US8831324B2 (en) | Surgical method and workflow | |
US6975326B2 (en) | Image processing apparatus | |
US6459821B1 (en) | Simultaneous registration of multiple image fragments | |
US8917924B2 (en) | Image processing apparatus, image processing method, and program | |
US7496222B2 (en) | Method to define the 3D oblique cross-section of anatomy at a specific angle and be able to easily modify multiple angles of display simultaneously | |
EP3893198A1 (en) | Method and system for computer aided detection of abnormalities in image data | |
Hering et al. | Unsupervised learning for large motion thoracic CT follow-up registration | |
US11020189B2 (en) | System and method for component positioning by registering a 3D patient model to an intra-operative image | |
US10743945B2 (en) | Surgical method and workflow | |
JP5461782B2 (en) | Camera image simulator program | |
JP4513365B2 (en) | Medical image processing apparatus and medical image processing program | |
JP2008510247A (en) | Display system for mammography evaluation | |
CN111582222A (en) | An accurate correction method of bill image position based on title position reference template | |
US11334997B2 (en) | Hinge detection for orthopedic fixation | |
US20240169538A1 (en) | Improved spinal hardware rendering | |
Shao et al. | Automatic 3D pelvimetry framework in CT images and its validation | |
WO2021166574A1 (en) | Image processing device, image processing method, and computer-readable recording medium | |
US11600030B2 (en) | Transforming digital design objects utilizing dynamic magnetic guides | |
US20250086894A1 (en) | Correcting topological defects on a surface mesh representing an organ | |
Shkurti et al. | Precision camera calibration using known target motions along three perpendicular axes | |
Lobonc Jr | Human supervised tools for digital photogrammetric systems | |
JP2023104399A (en) | Image processing device, image processing method, and program | |
KR20220006292A (en) | Apparatus for Generating Learning Data and Driving Method Thereof, and Computer Readable Recording Medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |