WO1997012480A2 - Method and apparatus for implanting images into a video sequence
- Publication number
- WO1997012480A2 (PCT/IL1996/000110)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
Definitions
- the present invention relates generally to orientation of an image within a video sequence of a scene, to such orientation which determines the locations of elements in a scene and to replacing certain elements with a prepared image within the video sequence.
- Sports arenas typically include a game area where the game occurs, a seating area where the spectators sit and a wall of some kind separating the two areas.
- the wall is at least partially covered with advertisements from the companies which sponsor the game.
- the advertisements on the wall are filmed as part of the sports arena.
- the advertisements cannot be presented to the public at large unless they are filmed by the television cameras.
- Systems are known which merge predefined advertisements onto surfaces in a video of a sports arena. One system has an operator define a target surface in the arena. The system then locks on the target surface and merges a predetermined advertisement with the portion of the video stream corresponding to the surface. When the camera ceases to look at the surface, the system loses the target surface and the operator has to indicate again which surface is to be utilized.
- PCT Application PCT/FR91/00296 describes a procedure and device for modifying a zone in successive images.
- the images show a non-deformable target zone which has register marks nearby.
- the system searches for the register marks and uses them to determine the location of the zone.
- a previously prepared image can then be superimposed on the zone.
- the register marks are any easily identifiable marks (such as crosses or other "graphemes") within or near the target zone.
- the system of PCT/FR91/00296 produces the captured image at many resolutions and utilizes the many resolutions in its identification process.
- PCT Application PCT/US94/01649 describes a system and method for electronically exchanging the physical images on designated targets with preselected virtual ones.
- the physical image to be substituted is detected, recognized and located automatically.
- the a priori knowledge includes knowledge of the rules regarding the playing area itself (its shape and the lines and curves thereon) and of elements of the entire arena which form part of the background; this knowledge is utilized to orient the current frame within the background arena.
- a two-dimensional picture of a three- dimensional scene will have certain characteristics which change due to the projection of the three-dimensional world onto the two-dimensional surface and certain characteristics which do not change. For example, angles of lines change as do the shapes of curves.
- the textures of neighboring fixed objects, such as signs along the edge of a professional soccer field, will not change with perspective, nor will the relationship of one neighbor to the next (e.g. a COCA-COLA sign will remain next to a SPRITE sign, no matter the perspective).
- lines may be shown differently, but they remain lines.
- the present invention first maps the background arena by listing the minimally changing or invariant characteristics (adjacency relationships, the locations of lines and well-defined curves, etc.) and the topology of the static objects in the arena.
- the present invention determines which static objects are being viewed in each frame of the video sequence of the action. Since the action is of interest, the static objects of the arena will form part of the background. Thus, the frame of the video sequence will include objects not in the empty arena which may or may not occlude some of the static objects.
- the present invention determines the minimally changing characteristics of the arena and then attempts to match the topology of the frame to that of the arena. The occluding objects are determined and then not considered during the matching operation.
- Upon matching the current frame to the map of the background arena, the frame has been oriented with respect to the background arena. Many processing actions can occur with the orientation information. For example, implantation can occur.
- the map includes indications of which objects are to be replaced with a desired image.
- the orientation system includes a background topology graph, a graph creator and a graph correlator.
- the background topology graph graphs the relationships of the background objects of the arena with each other, wherein the relationships are those which change minimally from view to view of the arena.
- the graph creator creates a frame topology graph of relationships of segments of the frame.
- the graph correlator correlates the frame topology graph with the background topology graph to determine which objects of the arena, if any, are represented by segments of the frame.
- the orientation system additionally includes a background topology graph creator which includes a standards topology graph, a second graph creator, a second graph correlator and a background topology graph creator.
- the standards topology graph graphs standard elements known to be present in arenas of a type similar to the arena.
- the second graph creator which creates a frame topology graph of relationships of segments of each frame of an initial video sequence.
- the second graph correlator correlates the frame topology graph, for each frame of the initial video sequence, with the standards topology graph to determine which standard elements, if any, are represented by which segments of the frame and which objects of the frame should be added to the background topology graph.
- the background topology graph creator creates the background topology graph from the output of the graph correlator.
- the graph correlator includes an occlusion evaluator, a graph matcher and a perspective evaluator.
- the occlusion evaluator determines which segments of the frame might represent occluding objects.
- the graph matcher matches the frame topology graph to the background topology graph and includes apparatus for removing one or more of the segments representing possibly occluding objects from the frame topology graph to create a reduced frame topology graph and apparatus for matching the reduced frame topology graph to the background topology graph and for producing, on output, the objects of the background and the segments matched to the objects.
- the perspective evaluator determines a perspective transformation between the matched objects and the matched segments.
- the relationships include at least one of: the adjacency relationships of neighboring segments, the textures of each segment, and the boundary equations of each segment.
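These relationships can be sketched as a labeled adjacency graph. The following is a minimal, hypothetical representation (class and field names are mine, not the patent's): each segment is a node carrying a texture label and its boundary equations, and edges record which segments neighbor which.

```python
# Sketch of a topology graph: nodes are segments, labeled with texture
# and boundary equations; edges record adjacency between segments.

class SegmentNode:
    def __init__(self, seg_id, texture, boundary=None):
        self.seg_id = seg_id
        self.texture = texture          # e.g. "grass", "line", "sign"
        self.boundary = boundary or []  # list of (kind, coeffs) tuples
        self.neighbors = set()          # ids of adjacent segments

class TopologyGraph:
    def __init__(self):
        self.nodes = {}

    def add_segment(self, seg_id, texture, boundary=None):
        self.nodes[seg_id] = SegmentNode(seg_id, texture, boundary)

    def add_adjacency(self, a, b):
        self.nodes[a].neighbors.add(b)
        self.nodes[b].neighbors.add(a)

# Object 4 of Fig. 3A: grass bounded by straight marking-line bands
# 1, 3, 5 and 11 (boundary coefficients here are placeholders).
g = TopologyGraph()
g.add_segment(4, "grass")
for line_id in (1, 3, 5, 11):
    g.add_segment(line_id, "line", [("straight", (0.0, 1.0, 0.0))])
    g.add_adjacency(4, line_id)

print(sorted(g.nodes[4].neighbors))  # -> [1, 3, 5, 11]
```

Because only adjacency, texture and boundary type are stored, the structure is largely unaffected by the camera's perspective, which is what makes it usable for matching.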
- a frame description unit for describing a frame viewing an arena.
- the frame description unit includes a texture segmenter which segments the frame into segments of uniform texture and an adjacency determiner which creates a graph listing which segments are neighbors of which segments.
- the frame description unit also includes a boundary analyzer which determines which pixels of each segment form its borders and which determines if the border pixels generally form one of a straight line and a quadratic curve and what their coefficients are.
- an implantation unit for implanting an image into a frame on a surface within an arena in which action occurs.
- the implantation unit includes an orientation unit, as described hereinabove, for orienting the frame within the arena and for indicating where in the frame the surface is and an implanter for implanting the image into the portion of the frame indicated by the orientation unit.
- the orientation unit additionally includes an implantation location determiner for determining which of the matched segments corresponds to said surface to be implanted upon and the implanter includes a transformer, a permission mask creator and a mixer. The transformer transforms said image in accordance with said perspective transformation thereby creating a transformed image.
- the permission mask creator creates a permission mask from said matched segments corresponding to said surface to be implanted upon.
- the mixer mixes said frame with said transformed image in accordance with said permission mask.
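The mixing step can be sketched as a per-pixel selection under the permission mask, a hedged illustration assuming a binary mask (1 = implant allowed, 0 = keep the original frame) and nested lists standing in for real image buffers:

```python
# Mixer sketch: combine the frame and the perspective-transformed
# image pixel by pixel, as dictated by the permission mask.

def mix(frame, transformed_image, permission_mask):
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for y in range(h):
        for x in range(w):
            if permission_mask[y][x]:
                out[y][x] = transformed_image[y][x]
    return out

frame = [[10, 10], [10, 10]]
ad    = [[99, 99], [99, 99]]
mask  = [[1, 0], [0, 1]]
print(mix(frame, ad, mask))  # -> [[99, 10], [10, 99]]
```

A real mixer might instead use a fractional mask for soft blending at segment edges; the binary form is the simplest case consistent with the description above.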
- the method includes the steps of a) providing an initial model, independent of the plurality of video frames, of a selected one of the fixed surfaces, the initial model comprising a graph of the relationships of the background objects of the selected fixed surface with each other, wherein the relationships are those which change minimally from view to view of the background space, b) generating a background model of objects of the background space from initial frames of the video frames which view only the background space and the initial model, the background model comprising a graph of the relationships of the background objects of the background space with each other, c) utilizing the background model for identifying the objects viewed in each video frame and d) perspectively implanting the image into the portion of the frame viewing a previously selected one of the fixed planar surfaces.
- Fig. 1 A is an isometric illustration of a soccer stadium useful in understanding the present invention
- Fig. 1B is a two-dimensional illustration of a section of the soccer stadium of Fig. 1A;
- Fig. 2A is an isometric illustration of the soccer stadium of Fig. 1A with the objects therein labeled;
- Fig. 2B is the same two-dimensional illustration of the section shown in Fig. 1B, with the objects therein labeled;
- Figs. 3A and 3B are graph illustrations of the objects of Figs. 2A and 2B, respectively;
- Fig. 4 is a block diagram illustration of an orientation and implantation system utilizing the graphs of Figs. 3A and 3B, constructed and operative in accordance with a preferred embodiment of the present invention
- Fig. 5 is a block diagram illustration of the elements of a mapper forming part of the system of Fig. 4, wherein the mapper creates the graph of Fig. 3A;
- Fig. 6 is an illustration of pixels on the boundary of two segments, useful in understanding the operations of the mapper of Fig. 5;
- Fig. 7 is a block diagram illustration of the elements of an orientation system, forming part of the system of Fig. 4, which matches the graph of Fig. 3B to that of Fig. 3A;
- Fig. 8A is an illustration of an exemplary background scene having 11 segments, useful in understanding the operation of the system of the present invention.
- Fig. 8B is a graph of the topology of the scene of Fig. 8A;
- Fig. 9A is an illustration of a portion of the scene of Fig. 8A with action occurring therein;
- Fig. 9B is a graph of the topology of the scene of Fig. 9A;
- Fig. 10 is a flow chart illustration of the process of matching graphs, useful in understanding the operations of the orientation system of Fig. 7 and the mapper of Fig. 5;
- Fig. 11 is a block diagram illustration of the implantation unit forming part of the system of Fig. 4.
- the present invention will be described in the context of a televised soccer game, it being understood that the present invention is effective for all action occurring within any relatively fixed arena, such as a sports event, an evening of entertainment, etc.
- Fig. 1A illustrates an exemplary professional soccer stadium 58.
- the stadium includes a field 60 on which are painted lines 62 and curves 63 which mark the various boundaries of interest in a soccer game.
- the lines and curves are typically in accordance with the official rules of soccer.
- Also on the field are two goals 64 and a series of signs 66 onto which various advertisers place their advertisements, indicated by the many different patterns on signs 66.
- An advertiser can utilize one or many signs; for example, Fig. 1A shows two signs with the interconnecting circles on them.
- the stadium 58 includes bleachers 70, a fence 72 marking the borders of the field 60, flagpoles 74 for supporting flags 76 and a camera viewing stand 77.
- Fence 72 often includes many posts 78.
- Fig. 1B illustrates such a frame view of the arena (frame 80), taken from camera stand 77 and viewing one of the areas near the left goal 64.
- the curve labeled 63a of Fig. 1A and the lines labeled 62A, 62B, 62C and 62D are visible as are a few of the signs 66, labeled 66A and 66B.
- Other elements which are partially visible are the posts 78 of the fence.
- the present invention maps the stadium 58 by mapping its static objects, their adjacencies and other characteristics of the objects which change minimally when viewing them at different angles.
- Each object is defined as a planar domain having a single texture and having a shape defined through a listing of the edges.
- Figs. 2A and 2B Examples of the labeling of the objects within the stadium 58 and the frame 80 are provided in Figs. 2A and 2B, respectively. For simplicity's sake, only the objects on the field of the stadium 58 are mapped. Each separate object on the field is labeled with a number from 1 to 53, where a field marking line or curve, being, in reality, a band and not a line, is considered as an object. The grass bordered by the marking lines is also considered to be an object. Each sign is shown as a single object, labeled 41 - 53, though the patterns on the signs can, alternatively, be divided into separate objects, one per portion of the pattern thereon.
- Fig. 2A shows the full set of objects, since Fig. 2A illustrates the entire stadium 58, while Fig. 2B, which illustrates only frame 80, has only a few objects.
- Figs. 3A and 3B The corresponding topological graphs are illustrated in Figs. 3A and 3B, where:
- an open circle indicates an object with the texture of a marking line;
- a dotted square indicates an object with the texture of grass;
- an open square indicates an object with a texture other than grass or marking lines;
- a thin line indicates that the object is bounded by a straight line; and
- a thick line indicates that the object is bounded by a curve.
- Each object is labeled with its number from the corresponding Figure.
- object 4 is the grass in front of the leftmost goal. It is bounded by marking lines 1, 3, 5 and 11, each of which is a straight band. Thus, object 4 is marked with a dotted square (grass texture), and connected with thin lines to each of the other objects, all of which are open circles.
- object 40 is the grass outside of the playing field. It borders each of the signs 41 - 53 with straight line borders and also borders the outer marking lines of the field, labeled 1, 20, 21 and 39.
- the map of Fig. 3A reflects this.
- object 16 is the left field grass. It borders curved marking lines 9, 7, 15 and 17 (connected with thick lines) and straight marking lines 1, 2, 6, 13, 19, 20 and 21 (connected with thin lines).
- the topology graph for frame 80 is much smaller as frame 80 has far fewer objects within it. In frame 80 we view only a portion of object 16. Thus, the graph of Fig. 3B has only some of the connections for object 16, those to objects 1, 2, 9, 13 and 15. Similarly, object 40 (the grass outside of the field) is only connected to three of the signs, those labeled 43, 44 and 45.
- the present invention can orient the view of frame 80 within the world of stadium 58. This orientation is performed by using the information in the topology graphs of Figs. 3A and 3B; it does not require pattern recognition as in the prior art.
- the topology graphs can also include information regarding which signs are to have their advertisements replaced. If so, once the topology graph of Fig. 3B is matched to that of Fig. 3A, the graph of Fig. 3A can be reviewed to determine if any of the matched objects are to be replaced. For example, in Fig. 3A sign 44 is marked with an X, indicating that it is to be replaced. Since, after matching, it is determined that frame 80 includes sign 44, the advertisement on sign 44 can be readily replaced, as will be described in more detail hereinbelow. Alternatively, if Fig. 3A indicated that only sign 50 is to be replaced (which is not present in the graph of Fig. 3B), then, for frame 80, no signs would have their advertisements replaced.
- Fig. 4 illustrates, in partial block diagram format, a system which implements the concepts outlined hereinabove for replacing advertisements seen in a video stream.
- the system comprises a video digitizer 100, such as the Targa2000 manufactured by Truevision Inc. of Indianapolis, Indiana, USA, an orientation unit 102, an implantation unit 104, and a host computer 106, all connected together via a bus 105, such as a peripheral component interconnect (PCI) bus.
- the host computer 106 typically also is associated with input devices 116, such as a keyboard and/or a mouse and/or a tablet, and a monitor 108.
- the video digitizer 100 receives incoming video frames for television broadcasting to many countries.
- the video frames can be formatted in any one of many formats, such as NTSC (National Television Standards Committee) or PAL (Phase Alternate Lines) formats, and can be either analog or digital signals.
- the video digitizer 100 processes the video frames as necessary on input and on output.
- the output signals are those altered by the present system, as will be described in more detail hereinbelow, in the same format as the incoming video frames.
- the orientation unit 102 determines, per frame, what static objects appear in the frame and where they are, if and where there are occluding objects, such as players, which static objects are to have new advertising images implanted thereon and the perspective view of the frame with which the implantation can occur.
- the implantation unit 104 utilizes the information from the orientation unit 102 to transform an advertisement, input through the host computer 106, to the correct perspective and to implant it in the proper location within the present frame. The altered frame is then provided to the video digitizer 100 for output.
- orientation unit 102 and implantation unit 104 are implemented on one or more parallel, high speed processing units, such as the HyperShark Board, manufactured by HyperSpeed Technology Inc. of San Diego, California, USA.
- the orientation unit 102 and implantation unit 104 can alternatively be implemented on standard platforms, whether with single or multiple processors, such as personal computers (PCs) or workstations running the Unix or Windows NT (of Microsoft Corporation of the USA) operating systems.
- the host computer 106 controls the operations of the units 100, 102 and 104 and, in addition, provides user commands, received from input devices 116, to the units 100 - 104.
- the orientation unit 102 is divided into two processing units which share similar operations.
- the first unit is a mapper 110 which receives the initial video sequence in which, typically, the stadium 58 is scanned. Mapper 110 determines which objects in the stadium 58 must be part of the stadium in accordance with the official rules of the game and which objects are present in this particular stadium.
- the standard elements are available in a standards database 118 and the remaining objects are placed into a stadium database 114. Both the standards database 118 and the stadium database 114 store the invariant properties of the objects of the stadium in the form of a topology graph such as is described hereinabove.
- the second unit is a frame orientation unit 112 which receives the video stream during the exemplary soccer game, determines the objects in a current frame, and utilizes the topology information in database 114 to determine which static objects of the stadium 58 occur in the current frame and what the projection transform is for the current frame.
- orientation unit 102 determines which foreground objects, such as players or the ball, are in the current frame and how and where they occlude the sign to be replaced. This information is provided as output to the implantation unit 104.
- stadium database 114 lists the texture, neighbors, boundary pixels and boundary equations. Objects to be replaced (e.g. some or all of the signs) are so marked.
- Mapper 110 comprises a segmenter 120, a boundary analyzer 124, two correlators 128 and 130, a graph builder 132 and a database updater 134.
- the segmenter 120 segments the current frame into its component segments, in accordance with any suitable segmentation operation, such as those described in the article "Application of the Gibbs Distribution to Image Segmentation", in the book Statistical Image Processing and Graphics, edited by Edward J. Wegman and Douglas J. DePriest, Marcel Dekker, Inc., New York and Basel, 1986, pp. 3 - 11. The book is incorporated herein by reference.
- the segmenter 120 divides the frame into segments by determining which connected sections of neighboring pixels have approximately the same texture, where "texture" is an externally defined quality which describes a group of neighboring pixels.
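One plausible reading of this step (the patent only requires *some* suitable segmentation scheme, such as the Gibbs-distribution method it cites) is a flood fill over connected pixels whose values agree, within a tolerance, with the value at the region's seed. The function name and tolerance below are illustrative:

```python
# Illustrative flood-fill segmentation: connected pixels whose values
# lie within `tol` of the region's seed value are grouped into one
# segment; the result is a per-pixel label map.

from collections import deque

def segment(image, tol=10):
    h, w = len(image), len(image[0])
    labels = [[-1] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            seed = image[sy][sx]
            labels[sy][sx] = next_label
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and labels[ny][nx] == -1 \
                            and abs(image[ny][nx] - seed) <= tol:
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels

img = [[0, 0, 200],
       [0, 5, 200]]
print(segment(img))  # -> [[0, 0, 1], [0, 0, 1]]
```

Here the scalar pixel values stand in for the richer "texture" quality the text goes on to describe; a real implementation would compare texture descriptors rather than raw intensities.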
- texture can be color and thus, each object is one with a single color or with colors near to a central color. Texture can also be luminance level or a complex description of the color range of an object, where the color range can be listed as a color vector.
- the color of grass is a combination of green, yellow and brown pixels. The average color of a group of pixels of grass will be relatively constant, as is the covariance of all components of the color vector.
- the texture definition must be robust with respect to lighting conditions; otherwise, as the sun changes brightness, the objects in the arena will change.
- the average color and the covariance of the colors within a group of color provides this robustness, as does the consideration that textures are equal if they differ within a prescribed tolerance.
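A texture descriptor of the kind this passage suggests, the mean and covariance of a segment's color vectors, compared within a prescribed tolerance, might be sketched as follows (the descriptor shape and tolerance values are illustrative, not taken from the patent):

```python
# Texture as mean color plus covariance of the color vectors, with
# equality declared when both agree within a tolerance. This makes the
# comparison tolerant of moderate lighting changes, as the text argues.

def texture_descriptor(pixels):
    n = len(pixels)
    dim = len(pixels[0])
    mean = [sum(p[i] for p in pixels) / n for i in range(dim)]
    cov = [[sum((p[i] - mean[i]) * (p[j] - mean[j]) for p in pixels) / n
            for j in range(dim)] for i in range(dim)]
    return mean, cov

def same_texture(desc_a, desc_b, tol_mean=15.0, tol_cov=100.0):
    mean_a, cov_a = desc_a
    mean_b, cov_b = desc_b
    mean_close = all(abs(a - b) <= tol_mean for a, b in zip(mean_a, mean_b))
    cov_close = all(abs(cov_a[i][j] - cov_b[i][j]) <= tol_cov
                    for i in range(len(cov_a)) for j in range(len(cov_a)))
    return mean_close and cov_close

# Grass sampled twice: a similar mix of green/yellow/brown (R, G, B) pixels.
grass1 = [(60, 140, 50), (70, 150, 55), (65, 145, 52)]
grass2 = [(62, 142, 51), (68, 148, 54), (66, 146, 53)]
print(same_texture(texture_descriptor(grass1), texture_descriptor(grass2)))  # -> True
```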
- the segmenter 120 searches for "parasite" segments, which do not correspond to real objects but result from noisy pixels of the frame. Criteria for identifying such segments are: the size of the segment and an extraordinary texture (i.e. one not previously seen, one previously defined as extraordinary, or a texture out of place, such as a few pixels of one texture within a segment of another, completely different texture).
- the boundary analyzer 124 reviews the segment data and produces mathematical equations describing the imaginary curve which approximates the boundaries between neighboring segments. To do so, boundary analyzer 124, for each segment, identifies the bordering pixels, namely those pixels belonging to the texture of the segment but having neighboring pixels which belong to different textures. This is illustrated in Fig. 6, to which reference is now briefly made. Pixels of two segments 136 and 138 are illustrated where each segment has a different texture. The bordering pixels of segments 136 and 138 are labeled 135 and 137, respectively.
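Identifying the bordering pixels from a segment label map can be sketched directly from this definition: a pixel of the segment is a border pixel if any 4-neighbor carries a different label (cf. pixels 135 and 137 of Fig. 6). The function name is my own:

```python
# Find the bordering pixels of one segment in a label map: pixels of
# the segment having at least one 4-neighbor with a different label.

def bordering_pixels(labels, seg_id):
    h, w = len(labels), len(labels[0])
    border = []
    for y in range(h):
        for x in range(w):
            if labels[y][x] != seg_id:
                continue
            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] != seg_id:
                    border.append((y, x))
                    break
    return border

labels = [[0, 0, 1],
          [0, 0, 1],
          [0, 0, 1]]
print(bordering_pixels(labels, 0))  # -> [(0, 1), (1, 1), (2, 1)]
```

Note that frame-edge pixels are not counted as borders here; whether to treat the image boundary as a segment border is a design choice the patent leaves open.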
- Boundary analyzer 124 typically utilizes contour following techniques to determine the locations of the bordering pixels. One such technique is described on pages 290 - 293 of the book Pattern Classification and Scene Analysis, by R. O. Duda and P. E. Hart.
- the boundary analyzer 124 attempts to fit straight lines or quadratic curves to varying-length sections of the bordering pixels, where the section length is a function of the quality of the fit of the bordering pixels to the straight or quadratic curves. Boundaries which match straight lines or quadratic curves are so marked.
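The fitting step can be sketched as a least-squares fit of a straight line and of a quadratic to the bordering pixels, keeping whichever achieves an acceptable residual. This is one standard way to realize it (the residual threshold and the y-as-function-of-x parameterization are simplifying assumptions; a real analyzer would also handle vertical boundaries and varying section lengths):

```python
# Fit y = c0 + c1*x (+ c2*x^2) to boundary points by least squares and
# classify the boundary by whichever model fits within tolerance.

def _solve(a, b):
    # Gauss-Jordan elimination with partial pivoting for small systems.
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_poly(points, degree):
    # Solve the normal equations of the least-squares polynomial fit.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    a = [[sum(x ** (i + j) for x in xs) for j in range(degree + 1)]
         for i in range(degree + 1)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(degree + 1)]
    coeffs = _solve(a, b)
    resid = sum((y - sum(c * x ** i for i, c in enumerate(coeffs))) ** 2
                for x, y in zip(xs, ys))
    return coeffs, resid

def classify_boundary(points, tol=1e-6):
    line, r1 = fit_poly(points, 1)
    if r1 <= tol:
        return "straight", line
    quad, r2 = fit_poly(points, 2)
    if r2 <= tol:
        return "quadratic", quad
    return "non-simple", None

print(classify_boundary([(0, 1), (1, 3), (2, 5), (3, 7)])[0])  # -> straight
print(classify_boundary([(0, 0), (1, 1), (2, 4), (3, 9)])[0])  # -> quadratic
```

Boundaries rejected by both models fall into the "non-simple" class, which the occlusion evaluator later uses as a cue for occluding objects.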
- boundary analyzer 124 indicates to the segmenter 120 to repeat the segmentation in an attempt to smooth the boundary.
- Boundary analyzer 124 produces the coefficients describing the boundaries on output.
- Graph builder 132 creates the topology graph for the current frame, such as is shown in Fig. 3B, from the segments and the boundary equations. Since the boundary analyzer 124 also provides the adjacency information for each segment, graph builder 132 can create the topology graph and add to it the information regarding the texture and boundary specification for each segment. The graph typically is not drawn as shown in Figs. 3A and 3B but appropriately represented with each segment as a node and, for each segment, its neighbors, texture type and boundary equations are represented by connected nodes, node labels and edge labels. It is noted that segmenter 120, boundary analyzer 124 and graph builder 132 form a graph creator 140 which produces a topology graph for a frame.
- Correlator 128 is a previous frame correlator which matches the current frame with the previous frame or frames to determine which segments are common to the two frames, thereby determining which segments are new segments in the current frame.
- the present frame may show many field markings, only one of which has not yet been seen but three of which were seen in a frame which is three frames previous to the present one.
- Correlator 128 operates by comparing the topology graph of the current frame with those of the previous frames. The comparison involves first matching the topologies of textures between the current frame and each of the previous frames. For the matched portions of the graph, the correlator 128 then matches the topologies of the boundary types. If the texture topology and the boundary type topology match, then the two portions of the graphs match. It will be appreciated that changes in perspective do not affect the graph matching operation since the graph lists the relatively perspective-invariant elements of the arena.
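One plausible reading of this matching step, a backtracking search that maps frame segments onto background objects so that texture labels agree and every adjacency between mapped frame segments also holds in the background graph, can be sketched as follows (names and data shapes are illustrative; boundary-type labels could be matched the same way as textures):

```python
# Backtracking label-preserving subgraph match: map each frame segment
# to a background object with the same texture, requiring that frame
# adjacencies are mirrored by background adjacencies.

def match_graphs(frame_nodes, frame_edges, bg_nodes, bg_edges):
    # *_nodes: {id: texture label}; *_edges: set of frozensets {a, b}.
    order = list(frame_nodes)

    def extend(mapping, i):
        if i == len(order):
            return dict(mapping)
        f = order[i]
        for b, tex in bg_nodes.items():
            if tex != frame_nodes[f] or b in mapping.values():
                continue
            consistent = all(frozenset((b, mapping[g])) in bg_edges
                             for g in mapping
                             if frozenset((f, g)) in frame_edges)
            if consistent:
                mapping[f] = b
                result = extend(mapping, i + 1)
                if result:
                    return result
                del mapping[f]
        return None

    return extend({}, 0)

# Background: line-grass-line-sign chain; frame: a grass/line pair.
bg_nodes = {1: "line", 2: "grass", 3: "line", 4: "sign"}
bg_edges = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]}
frame_nodes = {"A": "grass", "B": "line"}
frame_edges = {frozenset(("A", "B"))}
print(match_graphs(frame_nodes, frame_edges, bg_nodes, bg_edges))
```

Exhaustive backtracking is exponential in the worst case; the "pseudo-isomorphism" operation referenced below presumably prunes the search using textures and boundary types, which is what the texture check here approximates.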
- the graph matching operation can be performed in accordance with any suitable graph matching techniques.
- a pseudo-isomorphism operation can be performed as described hereinbelow with respect to Figs. 7, 8A, 8B, 9A and 9B.
- the correlator 128 determines which of the segments of the current frame have not been matched to any of the graphs of the previous frames. These segments indicate new segments which may have to be added to the stadium database 114 and are so marked on the graph for the current frame.
- Correlator 128 provides the marked graph for the current frame to the correlator 130 which determines which, if any, of the segments form part of the standard objects of the playing field. To do so, correlator 130 matches the topology graph of the standard elements of the playing field, as stored in standards database 118, with the graph of the current frame. The correlation operation is similar to that described hereinabove whereby first the texture topology is matched and then the boundary topology is matched, for the matched texture topology sections.
- the standards database 118 provides the present invention with a set of already known objects in the field from which to begin to map the stadium 58.
- the database updater 134 receives the marked graph, marked with segments found in previous frames and segments conforming to standard elements of the playing field, and determines which segments are new. Updater 134 then determines which of the new segments are likely to be "interesting" objects, such as by not including segments having non-simple boundaries, and which segments, of the segments conforming to the standard elements of the playing field, have not already been added into the stadium database 114. Updater 134 then includes only the selected segments as objects in the stadium database 114. The process involves providing the selected segments corresponding to non-standard objects with object numbers, defining the adjacency relationships of the new segments with the segments already in the stadium database 114 and listing the boundary equations and textures of the new segments.
- the updater 134 also marks the new segments for replacement if the user indicates as such.
- updater 134 creates a perspective transformation from the perspective of the standard elements (which are typically provided as a top view) to that of the corresponding segments in the current frame.
- the perspective transformation is created utilizing the boundary equations of the standard objects in the standards database 118 and the boundary equations of the corresponding segments in the current frame.
- the transformation can be produced in any suitable way. Pages 386 - 441 of the book Pattern Classification and Scene Analysis, referenced hereinabove, describes how to determine perspective transformations.
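As one concrete realization (not necessarily the one the cited book describes), a planar perspective transformation is a homography, computable from four point correspondences, e.g. corners recovered from the boundary equations of a standard object and of its matched segment, by the direct linear transform with the last matrix entry fixed to 1:

```python
# Homography from four point correspondences via the direct linear
# transform: solve the 8x8 linear system for h00..h21 with h22 = 1.

def solve(a, b):
    # Gauss-Jordan elimination with partial pivoting.
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def homography(src, dst):
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(a, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(h, x, y):
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)

# Map a unit square (top view of a sign) onto a trapezoid in the frame.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 10), (30, 12), (26, 24), (12, 22)]
h = homography(src, dst)
print([round(c, 1) for c in apply_h(h, 0.5, 0.5)])
```

The same transformation, applied to every pixel of the advertisement, is what gives the implanted image the frame's perspective.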
- updater 134 then transforms these other segments with the perspective transformation thus determined, thereby to provide these other segments with the same perspective as that of the standard objects of the field. Since the perspective transformation relates the actual field to the field as defined by the official bodies, updater 134 can optionally determine, from the transformation information, the sizes of the standard objects and, of course, of the non-standard objects.
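The patent leaves the method of producing the transformation open. One standard way to estimate such a perspective transformation from corresponding points on the matched boundaries is the direct linear transform (DLT); the sketch below is illustrative, not the patent's procedure, and its use of point correspondences (rather than boundary equations directly) is an assumption:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 perspective (homography) matrix mapping src
    points to dst points via the direct linear transform.
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A, dtype=float)
    # The homography is the null vector of A: the right singular
    # vector belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so H[2, 2] == 1

def apply_homography(H, pts):
    """Apply H to (N, 2) points, returning (N, 2) transformed points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]
```

With exactly four corresponding points in general position the homography is recovered exactly (up to scale); more points give a least-squares estimate.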
- The orientation unit 112 operates on frames of the video sequence of the game and comprises a graph creator 140, an occlusion evaluator 142, a graph matcher 144, a perspective evaluator 146 and an implantation identifier 148.
- The operations of occlusion evaluator 142, graph matcher 144 and perspective evaluator 146 iterate until a match of a desired quality is reached.
- The graph creator 140 reviews the current, game frame and creates a topology graph for it.
- The output is a topology graph indicating the segments in the current, game frame, their boundary equations, neighbors and textures.
- Fig. 8A illustrates a simple arena having 11 objects therein, labeled 1 - 11.
- Fig. 8B provides their topology graph, where the circles, x's, squares and triangles indicate different textures.
- Fig. 9A illustrates a frame view of the arena of Fig. 8A having five objects therein, labeled A - E.
- Fig. 9B is the corresponding graph to Fig. 9A. It is noted that objects A - D match objects 2 - 5 and object E is an occluding object which occludes objects 2 and 4.
- Graph creator 140 provides the occlusion evaluator 142 with the segments of the game frame, their textures and boundary types. Occlusion evaluator 142 reviews each segment and determines which of them fulfill one or both of the following occluding-object criteria: a) some of the boundaries of the object are non-simple (i.e. not straight lines or quadratic curves), as shown in Fig. 9A for object E; and b) its texture is one not seen in previous frames or one defined in previous frames as being of an occluding object.
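The two criteria above can be sketched as a simple predicate. The segment record layout (a dict carrying a per-boundary 'simple' flag and a texture label) is a hypothetical simplification, not the patent's representation:

```python
def is_possible_occluder(segment, known_textures, occluder_textures):
    """Flag a segment as a possible occluding object if either
    criterion from the text holds.
    segment: {'boundaries': [{'simple': bool}, ...], 'texture': label}
    known_textures: textures seen in previous frames.
    occluder_textures: textures previously tagged as occluders."""
    # (a) any boundary of the segment is non-simple
    non_simple = any(not b['simple'] for b in segment['boundaries'])
    # (b) texture not seen before, or previously defined as occluding
    suspect_texture = (segment['texture'] not in known_textures
                       or segment['texture'] in occluder_textures)
    return non_simple or suspect_texture
```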
- The occlusion evaluator 142 provides the list of segments which are possible occluding segments to the graph matcher 144.
- Graph matcher 144 also receives the graph for the current game frame from the graph creator 140. Graph matcher 144 attempts to match the small graph of the current, game frame to the topology of the stadium database. To do so, it operates in a number of ways, depending on the state of the video sequence.
- Graph matcher 144 attempts to match the current game frame to the topology of the stadium database 114. For this matching, graph matcher 144 operates similarly to the previous frame correlator 128 of Fig. 5.
- Graph matcher 144 initially matches the current graph, as is, to that of the stadium database 114 and produces a match quality measurement. Subsequently, since the graph of the current game frame includes occluding objects, graph matcher 144 removes the suspected occluding segments one at a time and, if desired, a group at a time, and produces a match quality measurement for the graph with the removed segment. It will be appreciated that graph matcher 144 performs a matching operation similar to that of correlator 128. Specifically, the matcher 144 first attempts to match the topologies of textures between the current frame and the stadium database 114. The number of segments matched out of the total number of segments in the graph indicates the quality of the match. For the matched portions of the graph, if any, the graph matcher 144 then matches the topologies of the boundary types. If the texture topology and the boundary type topology match, then the two portions of the graphs match.
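The texture-topology matching and the one-at-a-time removal of suspected occluders can be sketched as follows. The node "signature" (own texture plus the multiset of neighbour textures) is a deliberate simplification of the full texture-then-boundary matching, and all names and the dict-based graph layout are illustrative assumptions:

```python
def match_quality(frame_graph, db_graph):
    """Fraction of frame segments whose texture signature (own texture
    plus neighbour textures) also occurs in the database graph.
    Graphs: node -> {'texture': label, 'neighbors': set of nodes}."""
    def signature(g, n):
        return (g[n]['texture'],
                frozenset(g[m]['texture'] for m in g[n]['neighbors']))
    db_sigs = {signature(db_graph, n) for n in db_graph}
    matched = sum(signature(frame_graph, n) in db_sigs for n in frame_graph)
    return matched / len(frame_graph)

def best_match_without_occluders(frame_graph, db_graph, suspects):
    """Try the graph as is, then with each suspected occluder removed,
    keeping whichever removal yields the best match quality."""
    best_q, removed = match_quality(frame_graph, db_graph), None
    for s in suspects:
        reduced = {n: {'texture': d['texture'],
                       'neighbors': d['neighbors'] - {s}}
                   for n, d in frame_graph.items() if n != s}
        q = match_quality(reduced, db_graph)
        if q > best_q:
            best_q, removed = q, s
    return best_q, removed
```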
- The graph matcher 144 identifies which segments of the current frame are part of the background and which segments occlude the objects of the background.
- The graph matcher 144 provides this information, as well as the boundary equations of the segments, as output.
- The perspective evaluator 146 operates similarly to part of updater 134 and determines, from the boundary equations of the segments corresponding to objects of the background (received from graph creator 140), the perspective transformation for the current game frame.
- The perspective evaluator 146 can utilize all of the boundary equations or only some of them, for example, the boundary equations corresponding to objects forming part of the standard elements of the field.
- The transformation produces a transformation matrix M whose parameters are provided as output of the perspective evaluator 146.
- The transformation matrix M will be utilized to transform the image to be implanted from the perspective of the standard field elements (i.e. top view) to the perspective of the current, game frame.
- The perspective evaluator 146 typically tests the perspective transform on the objects of the arena which have been identified. The result should be a frame which closely matches the game frame. However, since the perspective transformation describes the transformation of the background elements of the frame (which occurs only due to the movement of the camera viewing them), it will not successfully describe the movements of a human being who moves of his own accord. Thus, any segment not well matched via the transformation is considered a possibly occluding object and this information is provided to the occlusion evaluator 142 for the next iteration.
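The residual test above can be sketched as follows, under the illustrative assumption that each identified background segment is reduced to a single anchor point in the top-view model and in the game frame (the patent works with whole segments and boundary equations):

```python
import numpy as np

def flag_moving_segments(M, model_pts, frame_pts, tol=2.0):
    """Project top-view anchor points through the perspective matrix M
    and flag segment ids whose projection misses the observed frame
    position by more than tol pixels; such segments are candidates for
    independently moving, occluding objects.
    model_pts, frame_pts: dicts mapping segment id -> (x, y)."""
    flagged = []
    for sid, (x, y) in model_pts.items():
        u, v, w = M @ np.array([x, y, 1.0])   # homogeneous projection
        err = np.hypot(u / w - frame_pts[sid][0],
                       v / w - frame_pts[sid][1])
        if err > tol:
            flagged.append(sid)
    return flagged
```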
- The occlusion evaluator 142, graph matcher 144 and perspective evaluator 146 iterate until the graph of the current, game frame, less the occluding elements, perfectly matches a section of the graph of the stadium database 114.
- The perfect match indicates that the current, game frame has been oriented with respect to the stadium 58.
- A lack of a perfect match indicates that the camera is showing something which is not part of the stadium 58, such as a video of an advertisement.
- The orientation information is provided to the implantation identifier 148 which reviews the matched objects, provided by graph matcher 144, and determines if any of them are marked for implantation in the stadium database 114.
- Implantation identifier 148 provides, on output, the segments which are to be implanted and their boundary equations. It will be appreciated that signs with patterns on them are formed of many connected segments, all of which are marked for implantation and all of which are marked as being part of the same sign.
- The implantation identifier 148 determines the outer boundary of the collection of segments forming the sign and provides the boundary equations of the outer boundary of the sign on output.
- Fig. 7 indicates that the output of the orientation unit 112 is the transformation matrix for transforming the image to be implanted into the current, game frame, and the areas to be implanted.
- Fig. 10 illustrates the operations of graph matcher 144; Figs. 8A and 8B are useful for understanding Fig. 10.
- In step 150, the current game frame (or other current frame) is reviewed to enumerate its interior cycles and isthmus edges.
- A cycle is interior if it is the union of segments in the frame corresponding to nodes which are a 1-connected set.
- Isthmus edges are edges (i.e. connections between nodes) whose removal increases the number of connected components of the graph.
- An isthmus edge provides an isthmus between two parts of the graph.
- Step 150 involves determining all cycles of length 3 and selecting those which are interior.
- Steps 152 - 159 form the method of searching through the graph to find matching graph sections and are performed per interior cycle and per isthmus edge of the graph.
- The current interior cycle is compared (step 152) to all interior cycles of the database. If, for one interior cycle of the database, the textures at the nodes of the two cycles match (step 154), adjacent cycles to the current interior cycle are compared (step 156) to the adjacent cycles of the interior cycle of the database. If the adjacent cycles of the current interior cycle match the adjacent cycles of the interior cycle of the database (step 158), then the set of adjacent cycles of the current frame are marked (step 159) as having matched the database. In any case, the next interior cycle of the current frame is now considered. The process is repeated until all interior cycles have been reviewed. Analogous operations are performed for all of the isthmuses of the current frame and the database. The matching of adjacent nodes to both nodes of the considered isthmus is checked.
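Step 150's enumeration of length-3 cycles and isthmus edges can be sketched with a brute-force triangle scan and the standard DFS low-link bridge algorithm; the filtering of cycles down to interior ones (and all function names) are omitted or assumed for illustration:

```python
from itertools import combinations

def triangles(adj):
    """All 3-cycles in an undirected graph given as node -> set of
    neighbours, each returned once as a sorted tuple."""
    tris = set()
    for u in adj:
        for v, w in combinations(sorted(adj[u]), 2):
            if w in adj[v]:
                tris.add(tuple(sorted((u, v, w))))
    return tris

def bridges(adj):
    """Isthmus (bridge) edges: edges whose removal increases the number
    of connected components. Standard DFS low-link computation."""
    disc, low, out = {}, {}, set()
    timer = [0]
    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge
                low[u] = min(low[u], disc[v])
            else:                              # tree edge
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:           # v's subtree can't reach above u
                    out.add(tuple(sorted((u, v))))
    for n in adj:
        if n not in disc:
            dfs(n, None)
    return out
```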
- Unit 104 comprises a segment filler 160, a transformer 164 and a mixer 166.
- Segment filler 160 receives the information of the implantation areas from the orientation unit 112 and determines the pixels of the current, game frame which are included therein. It is noted that the implantation areas include in them information of where the occluding areas are. This is illustrated in Fig. 9A. If segment A is an object to be replaced, its shape is not a triangle but a triangle less most of occluding object E. Thus, the pixels of segment A do not include any pixels of occluding object E.
- Segment filler 160 produces a permission mask which, for the current, game frame, masks out all but the areas of the frame in which the implantation will occur. This involves placing a '1' value in all pixels of the filled implantation areas and a '0' value at all other pixels. The image will be implanted onto the pixels of value '1'.
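The permission mask construction can be sketched as follows. Representing areas as polygons and rasterising with an even-odd ray test are illustrative simplifications of the boundary-equation-based filling described above:

```python
import numpy as np

def polygon_mask(h, w, polygon):
    """Binary (h, w) mask: 1 inside polygon (list of (x, y) vertices),
    0 elsewhere, via an even-odd ray-casting test per pixel."""
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=bool)
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        spans = (ys >= min(y0, y1)) & (ys < max(y0, y1))
        # x-coordinate where the edge crosses each pixel row
        xint = x0 + (ys - y0) * (x1 - x0) / (y1 - y0 + 1e-12)
        mask ^= spans & (xs < xint)   # toggle on each crossing
    return mask.astype(np.uint8)

def permission_mask(h, w, implant_polygon, occluder_polygons=()):
    """'1' over the implantation area, '0' elsewhere and over any
    occluding areas (cf. object E cut out of segment A in Fig. 9A)."""
    m = polygon_mask(h, w, implant_polygon)
    for occ in occluder_polygons:
        m[polygon_mask(h, w, occ) == 1] = 0
    return m
```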
- The transformer 164 utilizes the transformation matrix M to distort the advertising image into the plane of the video frame.
- A blending mask can be provided for the advertising image. If so, transformer 164 transforms the blending mask also.
- The mixer 166 combines the distorted advertising image with the video frame in accordance with the blending and permission masks.
- The formula which is implemented for each pixel (x,y) is typically:

output(x,y) = P(x,y) * image(x,y) + (1 - P(x,y)) * video(x,y)

where:
- output(x,y) is the value of the pixel of the output frame;
- image(x,y) and video(x,y) are the values in the transformed, advertising image and the current, game frame, respectively; and
- P(x,y) is the value of the permission mask multiplied by the blending mask.
- The output, output(x,y), is a video signal into which the advertising image has been implanted onto a desired surface.
- The present invention is an orientation system for orienting a frame of activity data within a mapped background scene.
- The orientation system operates through creation of a topology graph of the relatively invariant elements of the background scene.
- This orientation system can be utilized in many systems, one embodiment of which, shown herein, is an implantation system. Other systems which can utilize the present orientation system are systems for highlighting, changing the color of, or deleting background objects.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Closed-Circuit Television Systems (AREA)
- Display Devices Of Pinball Game Machines (AREA)
- Processing Or Creating Images (AREA)
- Studio Circuits (AREA)
- Image Analysis (AREA)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002231849A CA2231849A1 (en) | 1995-09-13 | 1996-09-12 | Method and apparatus for implanting images into a video sequence |
JP9513270A JPH11512894A (en) | 1995-09-13 | 1996-09-12 | Method and apparatus for inserting an image into a sequence of videos |
EP96930339A EP0850536A4 (en) | 1995-09-13 | 1996-09-12 | METHOD AND DEVICE FOR IMPORTING IMAGES INTO A VIDEO SEQUENCE |
BR9610721-9A BR9610721A (en) | 1995-09-13 | 1996-09-12 | guidance units for orienting a video image viewing a stadium, frame description for describing a frame viewing a stadium and implantation for deploying an image in a frame on a surface inside a stadium and processes for implanting an image in a a picture on a surface inside a stadium, and to orient and describe a picture visualizing a stadium |
AU69422/96A AU6942296A (en) | 1995-09-13 | 1996-09-12 | Method and apparatus for implanting images into a video sequence |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IL11528895A IL115288A (en) | 1995-09-13 | 1995-09-13 | Method and apparatus for implanting images into a video sequence |
IL115288 | 1995-09-13 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO1997012480A2 true WO1997012480A2 (en) | 1997-04-03 |
WO1997012480A3 WO1997012480A3 (en) | 1997-06-12 |
Family
ID=11067984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IL1996/000110 WO1997012480A2 (en) | 1995-09-13 | 1996-09-12 | Method and apparatus for implanting images into a video sequence |
Country Status (7)
Country | Link |
---|---|
EP (1) | EP0850536A4 (en) |
JP (1) | JPH11512894A (en) |
AU (1) | AU6942296A (en) |
BR (1) | BR9610721A (en) |
CA (1) | CA2231849A1 (en) |
IL (1) | IL115288A (en) |
WO (1) | WO1997012480A2 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1051843A1 (en) * | 1998-01-23 | 2000-11-15 | Princeton Video Image, Inc. | Event linked insertion of indicia into video |
US7206434B2 (en) | 2001-07-10 | 2007-04-17 | Vistas Unlimited, Inc. | Method and system for measurement of the duration an area is included in an image stream |
US7230653B1 (en) | 1999-11-08 | 2007-06-12 | Vistas Unlimited | Method and apparatus for real time insertion of images into video |
DE102016119640A1 (en) * | 2016-10-14 | 2018-04-19 | Uniqfeed Ag | System for generating enriched images |
CN109635769A (en) * | 2018-12-20 | 2019-04-16 | 天津天地伟业信息系统集成有限公司 | A kind of Activity recognition statistical method for ball-shaped camera |
US10740905B2 (en) | 2016-10-14 | 2020-08-11 | Uniqfeed Ag | System for dynamically maximizing the contrast between the foreground and background in images and/or image sequences |
US10832732B2 (en) | 2016-10-14 | 2020-11-10 | Uniqfeed Ag | Television broadcast system for generating augmented images |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3667217B2 (en) * | 2000-09-01 | 2005-07-06 | 日本電信電話株式会社 | System and method for supplying advertisement information in video, and recording medium recording this program |
JP4600793B2 (en) * | 2000-09-20 | 2010-12-15 | 株式会社セガ | Image processing device |
CN110276839B (en) * | 2019-06-20 | 2023-04-25 | 武汉大势智慧科技有限公司 | Bottom fragment removing method based on live-action three-dimensional data |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5123057A (en) * | 1989-07-28 | 1992-06-16 | Massachusetts Institute Of Technology | Model based pattern recognition |
US5255211A (en) * | 1990-02-22 | 1993-10-19 | Redmond Productions, Inc. | Methods and apparatus for generating and processing synthetic and absolute real time environments |
US5276789A (en) * | 1990-05-14 | 1994-01-04 | Hewlett-Packard Co. | Graphic display of network topology |
US5323321A (en) * | 1990-06-25 | 1994-06-21 | Motorola, Inc. | Land vehicle navigation apparatus |
WO1992000654A1 (en) * | 1990-06-25 | 1992-01-09 | Barstow David R | A method for encoding and broadcasting information about live events using computer simulation and pattern matching techniques |
ATE181631T1 (en) * | 1991-07-19 | 1999-07-15 | Princeton Video Image Inc | TELEVISION ADS WITH SELECTED CHARACTERS DISPLAYED |
GB9119964D0 (en) * | 1991-09-18 | 1991-10-30 | Sarnoff David Res Center | Pattern-key video insertion |
US5435554A (en) * | 1993-03-08 | 1995-07-25 | Atari Games Corporation | Baseball simulation system |
- 1995
- 1995-09-13 IL IL11528895A patent/IL115288A/en not_active IP Right Cessation
- 1996
- 1996-09-12 WO PCT/IL1996/000110 patent/WO1997012480A2/en not_active Application Discontinuation
- 1996-09-12 AU AU69422/96A patent/AU6942296A/en not_active Abandoned
- 1996-09-12 JP JP9513270A patent/JPH11512894A/en not_active Ceased
- 1996-09-12 CA CA002231849A patent/CA2231849A1/en not_active Abandoned
- 1996-09-12 BR BR9610721-9A patent/BR9610721A/en not_active Application Discontinuation
- 1996-09-12 EP EP96930339A patent/EP0850536A4/en not_active Withdrawn
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1051843A1 (en) * | 1998-01-23 | 2000-11-15 | Princeton Video Image, Inc. | Event linked insertion of indicia into video |
EP1051843A4 (en) * | 1998-01-23 | 2002-05-29 | Princeton Video Image Inc | Event linked insertion of indicia into video |
US7230653B1 (en) | 1999-11-08 | 2007-06-12 | Vistas Unlimited | Method and apparatus for real time insertion of images into video |
US7206434B2 (en) | 2001-07-10 | 2007-04-17 | Vistas Unlimited, Inc. | Method and system for measurement of the duration an area is included in an image stream |
DE102016119640A1 (en) * | 2016-10-14 | 2018-04-19 | Uniqfeed Ag | System for generating enriched images |
US10740905B2 (en) | 2016-10-14 | 2020-08-11 | Uniqfeed Ag | System for dynamically maximizing the contrast between the foreground and background in images and/or image sequences |
US10805558B2 (en) | 2016-10-14 | 2020-10-13 | Uniqfeed Ag | System for producing augmented images |
US10832732B2 (en) | 2016-10-14 | 2020-11-10 | Uniqfeed Ag | Television broadcast system for generating augmented images |
CN109635769A (en) * | 2018-12-20 | 2019-04-16 | 天津天地伟业信息系统集成有限公司 | A kind of Activity recognition statistical method for ball-shaped camera |
Also Published As
Publication number | Publication date |
---|---|
IL115288A0 (en) | 1995-12-31 |
AU6942296A (en) | 1997-04-17 |
JPH11512894A (en) | 1999-11-02 |
EP0850536A2 (en) | 1998-07-01 |
EP0850536A4 (en) | 1998-12-02 |
BR9610721A (en) | 1999-12-21 |
IL115288A (en) | 1999-06-20 |
CA2231849A1 (en) | 1997-04-03 |
WO1997012480A3 (en) | 1997-06-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR100260786B1 (en) | System to insert video into video stream | |
JP4370387B2 (en) | Apparatus and method for generating label object image of video sequence | |
EP0595808B1 (en) | Television displays having selected inserted indicia | |
US7894669B2 (en) | Foreground detection | |
EP0683961B1 (en) | Apparatus and method for detecting, identifying and incorporating advertisements in a video | |
JP2021511729A (en) | Extension of the detected area in the image or video data | |
CN108141547B (en) | Digitally overlay an image with another image | |
WO2012094959A1 (en) | Method and apparatus for video insertion | |
WO1997003517A1 (en) | Methods and apparatus for producing composite video images | |
CN113516696B (en) | Video advertising embedding method, device, electronic device and storage medium | |
US9154710B2 (en) | Automatic camera identification from a multi-camera video stream | |
CN110300316A (en) | Method, apparatus, electronic equipment and the storage medium of pushed information are implanted into video | |
WO1997012480A2 (en) | Method and apparatus for implanting images into a video sequence | |
CN107241610A (en) | A kind of virtual content insertion system and method based on augmented reality | |
JPH11507796A (en) | System and method for inserting still and moving images during live television broadcasting | |
KR20030002919A (en) | realtime image implanting system for a live broadcast | |
KR20010025404A (en) | System and Method for Virtual Advertisement Insertion Using Camera Motion Analysis | |
EP1418561A1 (en) | An advertisement print and a method of generating an advertisement print | |
CN111986133B (en) | Virtual advertisement implantation method applied to bullet time | |
KR20050008246A (en) | An apparatus and method for inserting graphic images using camera motion parameters in sports video | |
Shah et al. | Automated billboard insertion in video | |
Tan | Virtual imaging in sports broadcasting: an overview | |
Owen et al. | Adaptive video segmentation and summarization | |
MXPA96004084A (en) | A system for implanting an image into a video stream | |
- IL104725 (A) | System for exchanging sections of video background with virtual images
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN AM AZ BY KG KZ MD RU TJ TM |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM |
|
AK | Designated states |
Kind code of ref document: A3 Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN AM AZ BY KG KZ MD RU TJ TM |
|
AL | Designated countries for regional patents |
Kind code of ref document: A3 Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 1996930339 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2231849 Country of ref document: CA Ref country code: CA Ref document number: 2231849 Kind code of ref document: A Format of ref document f/p: F |
|
ENP | Entry into the national phase |
Ref country code: JP Ref document number: 1997 513270 Kind code of ref document: A Format of ref document f/p: F |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1019980701888 Country of ref document: KR |
|
WWR | Wipo information: refused in national office |
Ref document number: 1019980701888 Country of ref document: KR |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 1019980701888 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 1996930339 Country of ref document: EP |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 1996930339 Country of ref document: EP |