WO2016167160A1 - Data generation device and reproduction device - Google Patents
- Publication number
- WO2016167160A1 (PCT/JP2016/061159)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- image
- azimuth
- frame
- display
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/91—Television signal processing therefor
Definitions
- The present invention mainly relates to a data generation device that generates data relating to an omnidirectional image, and to a reproduction device that reproduces an image cut out from the omnidirectional image.
- An omnidirectional image covering the full field of view can be captured by, for example, a camera equipped with a wide-angle fisheye lens or an omnidirectional mirror, or by photographing the same scene with a plurality of cameras at different viewpoint positions and stitching together the images captured by each camera.
- Using a large number of omnidirectional images shot at various locations, an image provider can let a viewer browse the field of view corresponding to an arbitrary viewpoint position and an arbitrary line-of-sight direction specified by the viewer.
- image data indicating the state of the field of view is extracted from omnidirectional images at various viewpoint positions held by the supply center, and the extracted image data is transmitted to the terminal device of the viewer.
- For example, Google Inc. (California, USA) provides "Street View" (registered trademark), a service that displays such images.
- Patent Document 1: Japanese Laid-Open Patent Publication No. 2001-008232 (published January 12, 2001)
- Patent Document 2: Japanese Laid-Open Patent Publication No. 2013-090257 (published May 13, 2013)
- The technique of Patent Document 1 and "Street View" present still images: when a viewer designates an arbitrary line-of-sight direction from an arbitrary viewpoint position, a still image corresponding to that designation is presented.
- Because the image viewed is therefore not a moving image taken while moving, the technique of Patent Document 1 and that of "Street View" have the following problems.
- Patent Document 2, based on an operation of moving the current location on a map along a virtual route, uses a plurality of omnidirectional still images obtained by photographing the surroundings while moving along the real-world route corresponding to the virtual route.
- Specifically, the following processing is performed on those omnidirectional still images: for each of them, a visual field image showing the field of view in the line-of-sight direction designated by the viewer is cut out and displayed.
- As a result, a moving image showing the surroundings as seen from the shooting point at each shooting time is displayed.
- The technique of Patent Document 2 thus solves the above problems of Patent Document 1 and "Street View" to some extent.
- With Patent Document 2, when a viewer browsing the moving image wishes to see the view in a certain gaze direction, the viewer can do so by performing an operation to change the gaze direction as necessary.
- However, Patent Document 2 cannot satisfy a viewer who wants to confirm, at the same time, the field of view when looking in one direction from a certain point and the field of view when looking in another direction from the same point. This is because the invention disclosed in Patent Document 2 cannot display the two visual field images simultaneously.
- The present invention has been made in view of the above problems. Its main object is to realize a data generation device that generates data enabling viewers of a display moving image, cut out from an omnidirectional moving image generated by moving shooting, to confirm at the same time the field of view in each of a plurality of different directions as seen from the same shooting point. A further object of the present invention is to realize a playback device that allows the viewer to check those fields of view at the same time.
- To solve the above problems, a data generation device according to the present invention includes a data generation unit that, for all or some frames of an omnidirectional video generated by moving shooting, generates a plurality of data that a playback device should refer to in order to cut out a plurality of display-target images from the target frame. The data generation unit generates, for each of a plurality of different azimuths, azimuth data indicating that azimuth, thereby generating a plurality of azimuth data as the plurality of data. Each of the plurality of azimuth data is referred to by the playback device in order to cut out, as a display-target image, an image showing the field of view when the azimuth indicated by that azimuth data is viewed from the shooting point of the target frame.
- To solve the above problems, a playback device according to the present invention includes a data reference processing unit that, for all or some frames of an omnidirectional video generated by moving shooting, refers to a plurality of azimuth data in order to cut out a plurality of display-target images from the target frame, and a reproduction processing unit that reproduces the plurality of images cut out from the target frame for all or some of the frames. Each of the plurality of azimuth data is data that the data reference processing unit should refer to in order to cut out, as a display-target image, an image showing the field of view when the azimuth indicated by that azimuth data is viewed from the shooting point of the target frame.
- The data generation device according to the present invention has the effect of generating data that enables a viewer of a display moving image, cut out from an omnidirectional moving image generated by moving shooting, to confirm at the same time the field of view in each of a plurality of different directions as seen from the same shooting point.
- The playback device according to the present invention has the effect that the viewer can check, at the same time, the field of view in each of a plurality of different directions as seen from the same shooting point.
- FIG. 1 is a block diagram showing the structure of the free-viewpoint video data generation device according to Embodiment 1 of the present invention. FIG. 2 is an explanatory drawing for explaining the map data handled by the free-viewpoint video data generation device.
- The content creator, who is the user of the free-viewpoint video data generation device, can, by an appropriate operation, cause the device to generate metadata for making the free-viewpoint video playback device reproduce a recommended scene.
- FIG. 17 is a flowchart showing another step of the flowchart of FIG. 16 in detail. A further figure illustrates the display mode, set as the second display mode, in which the free-viewpoint video playback device displays the visual field image showing the field of view when the other direction is viewed from the certain shooting point. Another figure is a schematic of the system according to Embodiment 3, which includes the free-viewpoint video data generation device.
- Embodiment 1, a preferred embodiment of the data generation device according to the present invention, will be described below in detail with reference to the accompanying drawings.
- Elements given the same reference sign in different drawings are the same element, and duplicate description of them is omitted.
- the free viewpoint moving image data generation device generates free viewpoint moving image data that is referenced by a free viewpoint moving image playback device according to a second embodiment to be described later for playing back a free viewpoint moving image.
- the free viewpoint moving image data generated by the free viewpoint moving image data generating device includes omnidirectional moving image data input to the free viewpoint moving image data generating device.
- This omnidirectional video data is generated by shooting (moving shooting) while moving a shooting route including a plurality of shooting points.
- The free-viewpoint video data also includes metadata that allows the user of the free-viewpoint video playback device according to Embodiment 2 to confirm, at the same time, the field of view in each of a plurality of different directions from the same shooting point.
- Hereinafter, this metadata is also referred to as reproduction control data.
- Examples of the free-viewpoint video data generation device include a broadcasting device that generates the free-viewpoint video data, a server on the cloud, and a PC (personal computer) on which software for generating the free-viewpoint video data is installed.
- FIG. 1 is a block diagram showing the configuration of a free-viewpoint video data generation device 100 (hereinafter abbreviated as “data generation device 100”) according to an embodiment of the present invention. First, an outline of the configuration of the data generation device 100 will be described with reference to this figure.
- the data generation device 100 includes a control unit 110 and an operation reception unit 120.
- the control unit 110 is a CPU and controls the entire data generation apparatus 100 in an integrated manner.
- the operation reception unit 120 is an operation device that receives an operation by a content creator (a user of the data generation device 100).
- By executing a specific program, the control unit 110 functions as a basic azimuth data generation unit 111 (data generation unit), an extended azimuth data generation unit 112 (data generation unit), a shooting point data conversion processing unit 113, an encoding processing unit 114, a reproduction control data generation unit 115, and a multiplexing processing unit 116.
- the encoding processing unit 114 and the multiplexing processing unit 116 may be realized by hardware (LSI) instead of software.
- The basic azimuth data generation unit 111 generates data that the free-viewpoint video playback device according to Embodiment 2 should refer to in order to cut out the visual field image to be reproduced from the target frame.
- This field-of-view image is an image showing the state of the field of view when viewing a certain direction (the direction specified by the content creator or the default direction) from the shooting point of the target frame.
- the basic azimuth data generation unit 111 generates azimuth data indicating the certain azimuth (hereinafter referred to as basic azimuth data) as the data. Specific contents of the basic azimuth data will be described later.
- The certain azimuth is designated so that, among the various subjects included in the target frame, the subject that the content creator most wants to show to the user of the free-viewpoint video playback device according to Embodiment 2 is included in the field-of-view image.
- the extended azimuth data generation unit 112 generates, for each image frame of the omnidirectional video data, data to be referred to so that the free viewpoint video playback device according to the second embodiment cuts out the playback target view image from the target frame.
- This field-of-view image is an image showing the state of the field of view when another azimuth (the azimuth designated by the content creator or the default azimuth) is viewed from the shooting point of the target frame.
- the extended azimuth data generation unit 112 generates azimuth data indicating another azimuth (extended azimuth data indicating an azimuth different from the azimuth indicated by the basic azimuth data) as the data. Specific contents of the extended azimuth data will be described later.
- the extended azimuth data generation unit 112 may generate only one extended azimuth data as the extended azimuth data related to the target frame, or may generate a plurality of extended azimuth data indicating different azimuths.
- the shooting point data conversion processing unit 113 outputs map data input from the outside to the multiplexing processing unit 116.
- For each frame, the shooting point data conversion processing unit 113 converts the GPS coordinate data indicating the shooting point of the frame into coordinate data (map coordinate data) indicating the corresponding position on the map indicated by the externally input map data.
- the GPS coordinate data may be included in the omnidirectional video data as additional information of the omnidirectional video data. That is, the GPS coordinate data may be generated by this camera when shooting an omnidirectional video by a camera incorporating a GPS module.
- a type of “coordinate data indicating a shooting point” different from the GPS coordinate data may be included.
- the coordinate data may be generated by using a GPS module and a gyro sensor and / or a vehicle speed sensor in combination, or may be manually input as additional information of an omnidirectional video.
- FIG. 2 is an explanatory diagram for explaining the map data.
- the map data representing the map 10 in FIG. 2 is two-dimensional map image data indicating a map of the area where the above-described shooting route is located.
- Examples of the map data include a sightseeing map and a road map.
- Assume that the map coordinate data (shooting point data) corresponding to the shooting point of the first frame of the omnidirectional video data indicates the point 11 on the map of FIG. 2, that the map coordinate data corresponding to the shooting point of the last frame indicates the point 13, and that the map coordinate data corresponding to the shooting points of the remaining frames indicate coordinates on the line 12 connecting the points 11 and 13.
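How the GPS coordinates are converted into map coordinates is not specified here; a minimal sketch, assuming the 2-D map image is axis-aligned and covers a known latitude/longitude rectangle (the function name and the linear mapping are illustrative assumptions, not part of the disclosure):

```python
def gps_to_map(lat, lon, bounds, map_w, map_h):
    """Convert a GPS coordinate to pixel coordinates on a 2-D map image.

    Assumes the map image is axis-aligned and covers exactly the
    latitude/longitude rectangle bounds = (lat_min, lat_max, lon_min, lon_max).
    """
    lat_min, lat_max, lon_min, lon_max = bounds
    x = (lon - lon_min) / (lon_max - lon_min) * map_w
    # Image y grows downward, latitude grows upward, so flip.
    y = (lat_max - lat) / (lat_max - lat_min) * map_h
    return (x, y)
```

A real map would need the map's actual projection, but for a local sightseeing or road map a linear fit of this kind is often adequate.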
- The shooting point data conversion processing unit 113 may perform the process of converting the GPS coordinate data of the target frame into map coordinate data not for all frames but only for some frames (key frames).
- In that case, for each frame that is not a key frame, the shooting point data conversion processing unit 113 may generate map coordinate data indicating the shooting point of the target frame from the map coordinate data indicating the shooting point of the key frame immediately before the target frame and that of the key frame immediately after the target frame.
- Specifically, the shooting point data conversion processing unit 113 may generate the map coordinate data indicating the shooting point of a non-key frame on the assumption that, over the period from the shooting time of the immediately preceding key frame to that of the immediately following key frame, the camera shooting the omnidirectional video moved at a constant speed along the line segment connecting the shooting points of those two key frames.
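The constant-speed assumption above amounts to linear interpolation between the two key-frame shooting points. A sketch with hypothetical names, where `t1` and `t2` are the shooting times of the preceding and following key frames and `t` that of the non-key frame:

```python
def interpolate_shooting_point(key1, key2, t1, t2, t):
    """Linearly interpolate the map coordinates of a non-key frame shot
    at time t, assuming the camera moved at constant speed along the
    segment from key frame point key1 (time t1) to key2 (time t2)."""
    frac = (t - t1) / (t2 - t1)
    return (key1[0] + frac * (key2[0] - key1[0]),
            key1[1] + frac * (key2[1] - key1[1]))
```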
- Alternatively, for each frame that is not a key frame, the shooting point data conversion processing unit 113 may generate map coordinate data indicating the shooting point of the target frame based on velocity information (a velocity vector) input by the content creator for the target frame.
- Specifically, it may generate the map coordinate data on the assumption that the camera shooting the omnidirectional video moved from the shooting point of the frame immediately before the target frame, at the velocity indicated by the velocity information, for a fixed time.
- The fixed time may be the time corresponding to the quotient obtained by dividing the total shooting time of the omnidirectional video by its total number of frames n.
- The map coordinate data for a frame consists of the frame number of the frame and the map coordinate value indicating the frame's shooting point; the shooting time indicated by the frame number may be used instead of the frame number.
- the encoding processing unit 114 encodes omnidirectional moving image data input from the outside, and outputs the encoded omnidirectional moving image data to the multiplexing processing unit 116.
- Any video encoding method can be used by the encoding processing unit 114, for example MPEG-2, a method standardized by MPEG (Moving Picture Experts Group), H.264, or HEVC (High Efficiency Video Coding). Since these encoding methods are known techniques and do not directly characterize the present invention, detailed description of them is omitted.
- The encoding processing unit 114 is not essential to the data generation device according to the present invention. That is, a data generation device according to another embodiment may output the input omnidirectional video data uncompressed, without encoding it.
- When the reproduction control data generation unit 115 receives the basic azimuth data, one or more extended azimuth data, and the shooting point data (map coordinate data), it generates reproduction control data containing these data and outputs the generated reproduction control data to the multiplexing processing unit 116. The specific contents of the reproduction control data will be described later.
- The multiplexing processing unit 116 generates free-viewpoint video data by multiplexing the externally input map data, the per-frame reproduction control data, and the encoded omnidirectional video data, and outputs the generated free-viewpoint video data to the outside.
- The free-viewpoint video data (distribution data) generated by the multiplexing processing unit 116 is distributed to the free-viewpoint video playback device according to Embodiment 2, either automatically by transmission or manually using a removable recording medium.
- FIG. 3 is a flowchart illustrating an example of the operation of the data generation device 100.
- the data generation device 100 starts the operation according to the flowchart of FIG. 3 at the timing when the omnidirectional video data and the map data are input from the outside.
- the data generation device 100 performs the processing from step S1 to step S6 for each frame of the omnidirectional video data.
- In step S1, the shooting point data conversion processing unit 113 converts the GPS coordinate data indicating the shooting point of the target frame into map coordinate data using the externally input map data, and outputs the map coordinate data to the reproduction control data generation unit 115.
- Next, the basic azimuth data generation unit 111 generates basic azimuth data (step S2).
- Step S2 will be specifically described with reference to FIG.
- FIG. 4 (a) shows a flowchart showing step S2 in detail.
- In step S21, the basic azimuth data generation unit 111 determines whether the operation reception unit 120 has received an operation for generating basic azimuth data.
- If it determines that such an operation has been received, the process proceeds to step S22; otherwise, the process proceeds to step S23.
- In step S22, the basic azimuth data generation unit 111 generates basic azimuth data corresponding to the content of that operation (specifically, an operation of inputting the cut-out center coordinates described below).
- In step S23, the basic azimuth data generation unit 111 automatically generates basic azimuth data.
- FIG. 5 shows an example of a view image that the free-viewpoint video playback device according to Embodiment 2 cuts out from an omnidirectional image shot at a certain shooting point, showing the field of view when a certain direction is viewed from that shooting point.
- The omnidirectional video data according to the present embodiment covers the full 360 degrees, but the free-viewpoint video playback device according to Embodiment 2 (for example, a large display such as a TV or projector, a small display such as a tablet or smartphone, or a head-mounted display) cannot display the entire image at once. That is, for each frame of the omnidirectional video, the playback device needs to cut out a part of the target frame (the view image) and display that view image.
- The basic azimuth data generation unit 111 generates azimuth data that includes the coordinates (cut-out center coordinates) in the target frame (omnidirectional image) corresponding to the center position of the visual field image that the content creator wants the free-viewpoint video playback device according to Embodiment 2 to cut out.
- For example, if the free-viewpoint video playback device acquires, as the azimuth data for the target frame, azimuth data including the cut-out center coordinates corresponding to the point 15 at the center of the image 14 in FIG. 5, it can cut out the image 14 from the target frame as the visual field image.
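As an illustration of the cut-out operation, a minimal sketch assuming the omnidirectional frame is stored as an equirectangular grid of pixel rows, so the crop wraps around horizontally; the actual projection used by the playback device is not specified in this document:

```python
def cut_out_view_image(frame, center, view_w, view_h):
    """Cut a view_w x view_h view image out of an equirectangular
    omnidirectional frame (a list of pixel rows), centred on the
    cut-out center coordinates (cx, cy). Wraps around horizontally
    (360 degrees) and clamps vertically at the poles."""
    h, w = len(frame), len(frame[0])
    cx, cy = center
    top = max(0, min(h - view_h, cy - view_h // 2))  # clamp vertically
    left = cx - view_w // 2
    return [[frame[top + r][(left + c) % w] for c in range(view_w)]
            for r in range(view_h)]
```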
- the basic azimuth data generation unit 111 may proceed to step S22 when an operation of inputting a cut-out center coordinate is accepted, or when another type of operation is accepted.
- the other type of operation will be described below with reference to FIG. FIG. 6 is an explanatory diagram which is referred to for explaining the different operation.
- the basic azimuth data generation unit 111 may proceed to step S22 when an operation for designating an arbitrary point on the map 10 indicated by the map data (for example, the point 16 in FIG. 6) is received.
- the basic azimuth data generation unit 111 may specify the azimuth in which the point designated by the above operation is located as seen from the point on the map 10 corresponding to the shooting point of the target frame. Then, the basic azimuth data generation unit 111 may generate basic azimuth data including a value indicating the azimuth instead of the cut-out center coordinates.
- Alternatively, the basic azimuth data generation unit 111 may generate basic azimuth data that includes a coordinate value indicating the point on the map corresponding to the shooting point of the target frame and a coordinate value indicating the point designated by the operation.
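Determining the azimuth in which a designated map point lies, as seen from the shooting point, can be sketched as a bearing computation (the convention here — 2-D map coordinates with image y growing downward, bearing measured clockwise from map "up" — is an assumption for illustration):

```python
import math

def azimuth_to_point(shoot_pt, target_pt):
    """Bearing in degrees, clockwise from north (map 'up'), from the
    shooting point to the point designated on the map by the creator.
    Map y grows downward, so the y difference is flipped."""
    dx = target_pt[0] - shoot_pt[0]
    dy = shoot_pt[1] - target_pt[1]  # flip: image y grows downward
    return math.degrees(math.atan2(dx, dy)) % 360.0
```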
- After step S2, the extended azimuth data generation unit 112 generates extended azimuth data (step S3), and the process proceeds to step S4.
- In step S4, the data generation device 100 returns to step S3 if an operation for generating further extended azimuth data has been received; otherwise, the process proceeds to step S5.
- Step S3 will be specifically described with reference to FIG.
- FIG. 4 (b) shows a flowchart showing step S3 in detail.
- In step S31, the extended azimuth data generation unit 112 determines whether the operation reception unit 120 has received an operation for generating extended azimuth data.
- If it determines that such an operation has been received, the process proceeds to step S32; otherwise, the process proceeds to step S33.
- In step S32, the extended azimuth data generation unit 112 generates extended azimuth data according to the content of that operation (specifically, an operation of inputting cut-out center coordinates different from those included in the basic azimuth data).
- In step S33, the extended azimuth data generation unit 112 automatically generates extended azimuth data including cut-out center coordinates different from those included in the basic azimuth data.
- For example, the extended azimuth data generation unit 112 may automatically generate four extended azimuth data indicating east, west, south, and north, respectively, or may automatically generate only one extended azimuth data indicating a predetermined direction.
- the extended azimuth data generation unit 112 may automatically generate extended azimuth data including cut-out center coordinates corresponding to a combination of a predetermined azimuth angle and a predetermined elevation angle.
- Alternatively, the extended azimuth data generation unit 112 may automatically generate extended azimuth data such that the coordinates of a moving region in the omnidirectional video become the cut-out center coordinates.
- Specifically, the extended azimuth data generation unit 112 may calculate motion vectors in pixel units or block units with reference to the preceding or following frame of the omnidirectional video. Then, in either of the following cases, it may automatically generate extended azimuth data such that the coordinates of the center or centroid of each of the one or more regions concerned become the cut-out center coordinates:
- Pixel units: a region in which the direction of the calculated motion vector differs from the direction of the motion vectors of the surrounding pixels.
- Block units: a region in which the direction of the calculated motion vector differs from the direction of the motion vectors of the surrounding blocks.
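A possible realization of the block-unit criterion above, assuming a precomputed dense motion-vector field; flagging the block whose mean vector deviates most from the global mean is one simple way to find a region whose motion differs from its surroundings (the disclosure does not fix a particular test):

```python
def find_moving_region_center(flow, block_size):
    """flow: 2-D grid (list of rows) of (vx, vy) motion vectors, with
    dimensions divisible by block_size. Average the vectors over
    block_size x block_size blocks, pick the block whose mean vector
    deviates most from the global mean, and return the pixel
    coordinates of that block's centre as the cut-out centre."""
    h, w = len(flow), len(flow[0])
    bh, bw = h // block_size, w // block_size
    means = []
    for by in range(bh):
        row = []
        for bx in range(bw):
            sx = sy = 0.0
            for y in range(by * block_size, (by + 1) * block_size):
                for x in range(bx * block_size, (bx + 1) * block_size):
                    sx += flow[y][x][0]
                    sy += flow[y][x][1]
            n = block_size * block_size
            row.append((sx / n, sy / n))
        means.append(row)
    gx = sum(m[0] for r in means for m in r) / (bh * bw)
    gy = sum(m[1] for r in means for m in r) / (bh * bw)
    best, best_pos = -1.0, None
    for by in range(bh):
        for bx in range(bw):
            mx, my = means[by][bx]
            dev = ((mx - gx) ** 2 + (my - gy) ** 2) ** 0.5
            if dev > best:  # block most unlike the global motion
                best = dev
                best_pos = (bx * block_size + block_size // 2,
                            by * block_size + block_size // 2)
    return best_pos
```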
- Alternatively, the extended azimuth data generation unit 112 may compare image data input by the user with the target frame of the omnidirectional video to determine whether the subject image is included in the target frame. If it determines that the subject image is included, it may automatically generate extended azimuth data such that the coordinates of the center or centroid of the region containing the subject image, among a plurality of predetermined regions, become the cut-out center coordinates.
- the image data input by the user may be image data taken at an opportunity different from the shooting of the omnidirectional video.
- the image data may be data of an image captured by a user when a desired field-of-view image of a desired frame (omnidirectional still image) in the omnidirectional video is displayed by a playback device.
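Detecting whether the user-supplied subject image appears in the target frame could be done, for example, by template matching; the disclosure does not specify the matching method, so here is a brute-force sum-of-squared-differences sketch over 2-D lists of pixel values:

```python
def find_subject(frame, template):
    """Locate a user-supplied subject image inside the target frame by
    sum-of-squared-differences template matching. frame and template
    are 2-D lists of pixel values; returns the centre coordinates of
    the best-matching position."""
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            ssd = sum((frame[y + r][x + c] - template[r][c]) ** 2
                      for r in range(th) for c in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (x + tw // 2, y + th // 2)
    return best_pos
```

In practice a threshold on the best score would decide whether the subject is present at all, and a library routine (e.g. normalized cross-correlation) would be used instead of this O(fh·fw·th·tw) scan.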
- The extended azimuth data generation unit 112 may generate extended azimuth data in every one of the n executions of step S33 (one per frame), or only in some of them.
- the operation receiving unit 120 can receive input of a plurality of different cut-out center coordinates.
- The extended azimuth data generation unit 112 generates a plurality of extended azimuth data by generating, for each of the plurality of input cut-out center coordinates, extended azimuth data that includes that cut-out center coordinate.
- FIG. 7 is a diagram showing that there are a plurality of subjects that the content creator recommends browsing around a certain section in the shooting route.
- FIG. 8 is a diagram showing an example of a field-of-view image, cut out by the free viewpoint moving image reproducing device from a frame photographed at a certain photographing point, that shows the state of the field of view when a different direction (the direction of the subject 18) is viewed from that photographing point.
- the point on the map 10 represented by the map data corresponding to the point where the subject 18 exists is the point 20, and the point on the map 10 corresponding to the point where the subject 19 exists is the point 21.
- It is assumed that the subject 18 appears in P frames taken at the P shooting points corresponding to the part of the line 12 included in the region 22, and that the subject 19 appears in Q frames taken at the Q shooting points corresponding to the part of the line 12 included in the region 23.
- In this case, for each of the P frames, the content creator only has to input cut-out center coordinates (for example, the coordinates in the omnidirectional image corresponding to the point 25 of FIG. 18) that cause the free viewpoint video reproduction device to cut out, from the target frame (omnidirectional image), a field-of-view image in which the subject 18 appears (for example, the field-of-view image 24 in FIG. 18).
- the content creator may input, for each of the Q frames, a cut-out center coordinate that causes the free-viewpoint video playback device to cut out a field-of-view image showing the subject 19 from the frame.
- As a result, for a frame in which both subjects 18 and 19 appear, the extended azimuth data generation unit 112 generates extended azimuth data including one cut-out center coordinate and extended azimuth data including another cut-out center coordinate.
- the expanded orientation data may include identification information corresponding to the orientation represented by the X component of the cut-out center coordinates.
- the content creator can highlight a part of the frames in the omnidirectional video.
- When a certain scene includes a plurality of subjects that the viewer is interested in, or that the content creator wants to show to the viewer, it is possible to prevent the viewer from overlooking them.
- The viewer can view a field-of-view image of a desired orientation among the plurality of orientations corresponding to the plurality of extended azimuth data: for example, recommended sightseeing spots, famous landmarks on the road map, or memorable facial expressions of acquaintances and family.
- In step S5, the reproduction control data generation unit 115 generates reproduction control data including the input shooting point data, the basic azimuth data, and the extended azimuth data.
- Next, the reproduction control data generated by the reproduction control data generation unit 115 will be described in detail with reference to FIGS. 9 to 11.
- FIG. 9 is a diagram schematically showing an example of the data structure of the free-viewpoint video data generated by the data generation device 100.
- FIG. 10 is a diagram schematically illustrating an example of the data structure of the reproduction control data included in the free viewpoint moving image data.
- FIG. 11 is a diagram schematically illustrating another example of the data structure of the reproduction control data included in the free viewpoint moving image data.
- The free viewpoint moving image data 26 includes omnidirectional moving image data, map data, and, for each i from 1 to n, reproduction control data Vi relating to the frame whose frame number is i (hereinafter also referred to as frame i).
- the reproduction control data V1 includes shooting point data P1 indicating the shooting point of the frame 1 and basic azimuth data O1 related to the frame 1.
- In addition, extended azimuth data E1j relating to frame 1 are included (m, n, i, and j are positive integers).
- one or more extended azimuth data may be included in each reproduction control data of the free viewpoint moving image data generated by the data generation device 100 of the present embodiment.
- Note that extended azimuth data may not be included in some of the reproduction control data (reproduction control data Vn in the illustrated example).
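Purely as an illustrative model outside the original disclosure, the layered structure of FIGS. 9 to 11 (omnidirectional video data, map data, and per-frame reproduction control data Vi holding shooting point data, basic azimuth data, and zero or more extended azimuth data) could be sketched in Python dataclasses; all field names are assumptions, not the patent's encoding:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class AzimuthData:
    # Coordinates on the frame (omnidirectional image) of the center of the
    # field-of-view image to be cut out.
    cut_center: Tuple[int, int]
    # Identification information distinguishing basic from extended azimuth data.
    is_basic: bool
    # Optional priority display order data (1 = display first), per FIG. 11.
    priority: Optional[int] = None

@dataclass
class ReproductionControlData:            # Vi for frame i
    shooting_point: Tuple[float, float]   # Pi: point on the map data
    basic: AzimuthData                    # Oi
    extended: List[AzimuthData] = field(default_factory=list)  # Eij

@dataclass
class FreeViewpointVideoData:
    map_data: bytes
    frames: List[bytes]                        # encoded omnidirectional frames
    control: List[ReproductionControlData]     # V1..Vn, one per frame
```

This mirrors the constraint stated above that each Vi carries exactly one basic azimuth data but may carry any number of extended azimuth data, including none.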
- Next, the azimuth data included in the reproduction control data will be described more specifically, again with reference to FIGS. 10 and 11.
- the basic azimuth data includes cut-out center coordinates and identification information for identifying the basic azimuth data from the extended azimuth data.
- Each extended azimuth data includes cut-out center coordinates and identification information for identifying the extended azimuth data from basic azimuth data and other extended azimuth data.
- the basic azimuth data and the extended azimuth data may include priority display order data as shown in FIG.
- The priority display order data indicates the order in which the field-of-view image cut out using the azimuth data containing that priority display order data is to be displayed, among the plurality of field-of-view images that can be cut out from frame i.
- For example, when the value of the priority display order data included in certain azimuth data of the reproduction control data Vi is 1, this indicates that, among the plurality of field-of-view images that can be cut out from frame i, the free viewpoint moving image reproduction device should display first the field-of-view image cut out using that azimuth data.
- Likewise, when the value of the priority display order data included in certain azimuth data of the reproduction control data Vi is k (k is an integer of 2 or more), this indicates that the free viewpoint moving image reproduction device should display k-th, among the field-of-view images that can be cut out from frame i, the field-of-view image cut out using that azimuth data.
- When the content creator wants the free-viewpoint video playback device to display a plurality of field-of-view images cut out from frame i so that they can be viewed simultaneously, the same value may be included in the plurality of azimuth data used to cut out those field-of-view images.
- Furthermore, viewpoint restriction data may be included in each azimuth data.
- The viewpoint restriction data may include information indicating whether or not there is a viewpoint restriction (a restriction that allows the free-viewpoint video playback device to cut out the field-of-view image only from a partial area of the entire omnidirectional image).
- the viewpoint restriction data including information indicating that there is a viewpoint restriction may include information indicating the partial area.
- In this case, the data generation device 100 may be configured so that the content creator can perform an operation of specifying whether or not there is a viewpoint restriction and of designating the above-mentioned partial area (for example, an area of the omnidirectional image that the viewer is expected to be particularly interested in, or an area of the omnidirectional image that the viewer may browse).
- By doing so, the content creator can allow the viewer, who is the user of the free viewpoint video playback device, to browse only the field-of-view image in the region to which the creator particularly wants to draw attention.
- Alternatively, the content creator can restrict browsing of field-of-view images in other areas that the content creator does not want the viewer to browse.
- Areas that the content creator does not want the viewer to browse include, for example, areas containing scenes that should not be browsed from the viewpoint of confidentiality, portrait rights, public morals, or the like.
- The data generation apparatus 100 that performs real-time streaming distribution of omnidirectional video and the data generation apparatus 100 that distributes omnidirectional video by broadcast may each have the above-described configuration.
- In step S6, the encoding processing unit 114 encodes the input frame (omnidirectional image) using a preset encoding method (for example, HEVC) and outputs encoded data of the omnidirectional image.
- the data generation device 100 proceeds to step S7 after performing the above steps S1 to S6 for each image frame.
- In step S7, the multiplexing processing unit 7 multiplexes the encoded data of each frame, the map data, and the reproduction control data of each frame to generate the free viewpoint moving image data, and the data generation apparatus 100 then ends its operation.
- the data generation device 100 generates free-viewpoint video data using omnidirectional video data, map data, and playback control data for each frame.
- The reproduction control data for each frame includes shooting point data indicating the point on the map data corresponding to the shooting point of the target frame, basic azimuth data for cutting out a field-of-view image from the target frame, and extended azimuth data.
- For each frame of the omnidirectional video data included in the free-viewpoint video data created in this way, the free-viewpoint video playback device uses the reproduction control data relating to the target frame in order to cut out from the target frame, and display, a field-of-view image in which a highlight (a subject, scenery, or scene) is captured.
- Therefore, the data generation device 100 can reduce (or eliminate) the time and effort required for the viewer to search for a highlight subject, scenery, or scene.
- In particular, the data generation apparatus 100 can solve the problem that, while the viewer is searching for a highlight scene, the time at which the scene can be reproduced passes and the viewer misses the scene.
- In addition, for a plurality of subjects that the content creator wants to show to the viewer, the content creator can use the data generation device 100 to create azimuth data for causing the free-viewpoint video playback device to display a field-of-view image including each subject.
- Furthermore, by using the data generation device 100, the content creator can also control the display priority order of a plurality of field-of-view images in each frame, and can therefore produce and distribute content that satisfies various viewers with different preferences.
- the data generation apparatus may be configured to receive omnidirectional image data (still image data) and output free viewpoint still image data.
- The content creator can use this data generation device for the purpose of displaying advertisements using signage or the like. That is, for each of the plurality of field-of-view images that the signage cuts out from the omnidirectional image data, the content creator can sell, to an advertiser or the like, the right to determine the display priority of that field-of-view image.
- When the signage is configured so that an advertisement video or message can be superimposed on each field-of-view image, the content creator may also sell, for each field-of-view image, the right to superimpose an advertisement video or message on that field-of-view image.
- The free viewpoint moving image data generating apparatus may be configured to generate free viewpoint moving image data that is referenced, in order to reproduce a free viewpoint moving image, by a free viewpoint moving image reproduction system according to an embodiment different from the second embodiment described later.
- Such a free-viewpoint video playback system has, for example, a plurality of display devices (displays, projectors, etc.) surrounding the viewer, and a seat configured so that the viewer faces a predetermined direction when seated.
- The basic azimuth data generated by the free viewpoint moving image data generating device may include cut-out center coordinates, that is, coordinates on the image frame (omnidirectional image) corresponding to the center position of the field-of-view image displayed on the display device located in front of the viewer of the free viewpoint moving image playback system (the display device positioned in the predetermined direction as viewed from the seat).
- The extended azimuth data may include coordinates (cut-out center coordinates) on the image frame (omnidirectional image) corresponding to the center position of the field-of-view image displayed on the display device in front of the viewer of the free viewpoint video playback device.
- the basic azimuth data generation unit 111 may calculate the cut-out center coordinates included in the basic azimuth data by performing image processing on the omnidirectional moving image data.
- For example, the basic azimuth data generation unit 111 may divide two adjacent image frames (omnidirectional images) into a plurality of regions with the same division pattern, and acquire a motion vector of each region by block matching or the like.
- Then, the basic azimuth data generation unit 111 may determine, from the direction and magnitude of the motion vector of each region, which region lies in the traveling direction of the camera, and use the center coordinates of the region determined to lie in the traveling direction as the cut-out center coordinates included in the basic azimuth data.
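One plausible reading of this motion-vector heuristic, sketched in Python outside the original disclosure: when the camera moves forward, optical flow radiates away from the focus of expansion, so the region with the smallest average motion magnitude is taken as the traveling direction. Both this criterion and the data layout are assumptions:

```python
import math

def traveling_direction_center(region_vectors, region_centers):
    """Pick the region most likely to lie in the camera's direction of travel.

    region_vectors[i] is a list of (dx, dy) motion vectors for region i;
    region_centers[i] is that region's center (x, y), which becomes the
    cut-out center coordinates of the basic azimuth data."""
    def mean_mag(vecs):
        # Average motion magnitude of one region.
        return sum(math.hypot(dx, dy) for dx, dy in vecs) / len(vecs)

    best = min(range(len(region_vectors)),
               key=lambda i: mean_mag(region_vectors[i]))
    return region_centers[best]
```

Other criteria (for example, vector directions diverging away from a region) could be substituted without changing the surrounding pipeline.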
- Alternatively, the basic azimuth data generation unit 111 may divide the image frame (omnidirectional image) into a plurality of regions. When the photographer appears in a certain region of the omnidirectional image, the basic azimuth data generation unit 111 may recognize that region as the direction opposite to the traveling direction of the camera, and determine on that basis which region lies in the traveling direction.
- Then, the basic azimuth data generation unit 111 may use the center coordinates of the region determined to lie in the traveling direction as the cut-out center coordinates included in the basic azimuth data.
- Here, the region in which the photographer appears in the omnidirectional image may be designated by an operation of the content creator, or the basic azimuth data generation unit 111 may recognize a subject that appears throughout the shooting period as the photographer.
- Step S33 is not essential to the data generation device according to the present invention. That is, when it is determined that the operation reception unit 120 has not received an operation for generating extended azimuth data, the extended azimuth data generation unit 112 need not generate extended azimuth data.
- step S4 is not essential in the data generation apparatus according to the present invention. That is, the extended azimuth data generation unit 112 may generate only one extended azimuth data for each frame.
- The data generation device 100 may determine the priority display order data based on preference information of a specific viewer that is input in advance, or may generate it with reference to browsing information of the many viewers who have already viewed the target omnidirectional video (big data indicating which part of the omnidirectional image is viewed by many viewers at each reproduction time of the omnidirectional video).
- Alternatively, the data generation device 100 may generate the priority display order data based on the season and time of day at the time of reproduction. In this way, the data generation device 100 can cause the free viewpoint video playback device to preferentially display, for example, a field-of-view image including a subject or landscape that is difficult to see outside the season or time of day of playback. This is particularly useful for providing an optimal display for tourist promotion and navigation.
- The viewpoint restriction data may include level information related to the viewpoint restriction instead of information indicating the presence or absence of a viewpoint restriction.
- The level represented by the level information may be any of the following three levels, for example.
- A level at which any viewer can unconditionally display an arbitrary field-of-view image in the omnidirectional image on the free-viewpoint video playback device.
- A level at which an arbitrary field-of-view image can be displayed on the free-viewpoint video playback device only for viewers who satisfy a predetermined condition (for example, having paid the content creator).
- A level at which the viewer can display only the field-of-view image in a partial area of the omnidirectional image on the free-viewpoint video playback device.
- In this way, the content creator can allow a viewer who does not satisfy the predetermined condition to browse only some of the field-of-view images.
- For example, the content creator can loosen the viewpoint restriction according to the amount of money the viewer pays to the content creator.
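A minimal sketch (Python, outside the original disclosure) of how a playback device might enforce such levels; the enum names, the rectangular encoding of the partial area, and the fallback behavior for non-qualifying viewers are assumptions:

```python
from enum import Enum

class ViewLevel(Enum):
    UNRESTRICTED = 1   # any viewer may display any orientation
    CONDITIONAL = 2    # any orientation, but only for qualifying viewers
    PARTIAL_ONLY = 3   # every viewer is limited to a partial area

def may_cut_out(level, viewer_qualifies, cut_center, allowed_region):
    """Decide whether a field-of-view image centered on cut_center may be
    cut out.  allowed_region is (x0, y0, x1, y1) on the omnidirectional
    image."""
    if level is ViewLevel.UNRESTRICTED:
        return True
    if level is ViewLevel.CONDITIONAL and viewer_qualifies:
        # e.g. the viewer has paid the content creator
        return True
    # PARTIAL_ONLY, or a CONDITIONAL viewer who does not qualify:
    x0, y0, x1, y1 = allowed_region
    x, y = cut_center
    return x0 <= x <= x1 and y0 <= y <= y1
```

Loosening the restriction by payment amount, as mentioned above, would amount to moving a viewer between levels or enlarging the allowed region.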
- the reproduction control data may not be included in the free viewpoint moving image data.
- For example, when the data generation apparatus 100 is implemented in the form of a PC having a function of distributing free viewpoint video data, the data generation apparatus 100 may individually distribute the free viewpoint video data, including the map data and the omnidirectional video data, and the reproduction control data (reproduction control data set) of each frame of the omnidirectional video data.
- In this case, the content creator who is the user of the data generation device 100 can send two different reproduction control data sets, created according to his or her preference, to the party to which the free viewpoint video data is distributed. The content creator can then cause the other party's PC to play the omnidirectional video based on the first reproduction control data set, and to play the omnidirectional video based on the second reproduction control data set.
- For example, a content creator can make the other party see a certain landscape that the creator was watching when shooting the omnidirectional video (the scenery appearing in the field-of-view image displayed by the former playback), and can also make the other party see another landscape (the scenery appearing in the field-of-view image displayed by the latter playback).
- the playback control data may include information related to the content creator (information related to the copyright such as the name of the content creator). This mechanism is useful when the content creator desires to handle the reproduction control data itself as a secondary work.
- the data generation apparatus 100 that handles omnidirectional video data has been described.
- However, the data generation apparatus according to the present invention is not limited to such a configuration. That is, the scope of the data generation apparatus according to the present invention also includes a data generation apparatus that handles, instead of omnidirectional video data, a three-dimensional stereoscopic model (a three-dimensional stereoscopic model created using CG, or created by photographing from a plurality of directions).
- Embodiment 2 which is a preferred embodiment of a playback apparatus according to the present invention will be described below in detail with reference to the accompanying drawings.
- For convenience of description, members having the same functions as members described in Embodiment 1 are denoted by the same reference signs, even in different drawings, and description thereof is omitted.
- The free viewpoint video playback apparatus is an apparatus that plays back an omnidirectional video by referring to the free viewpoint video data generated by the data generation apparatus 100 according to Embodiment 1; it is not configured to display an entire image frame (omnidirectional image).
- the free-viewpoint video playback apparatus is configured to display a field-of-view image by cutting out the field-of-view image from the target frame for each frame of the omnidirectional video.
- the free viewpoint video playback device may be a device having a touch panel function such as a smartphone or a tablet.
- Alternatively, the free-viewpoint video playback device may be an apparatus that reads and reproduces content data from an electronic medium, such as an optical disc typified by a DVD or Blu-ray (registered trademark) Disc, and/or a semiconductor memory typified by a USB memory or an SD (registered trademark) card.
- the free-viewpoint video playback device may be a television receiver that receives broadcast waves of TV broadcasts, or a device that receives content data distributed from the Internet or other communication lines.
- Alternatively, the free-viewpoint video playback device may be configured to include a High-Definition Multimedia Interface (HDMI) (registered trademark) receiver that receives an image signal from an external device such as a Blu-ray (registered trademark) disc player.
- the free viewpoint video playback device may be any device as long as it has a function of receiving content data from outside and playing back the input content data.
- FIG. 12 is a block diagram showing a configuration of a free-viewpoint video playback apparatus 200 (hereinafter abbreviated as “playback apparatus 200”) according to the present embodiment.
- First, the outline of the configuration of the playback apparatus 200 will be described with reference to FIG. 12.
- the playback device 200 includes a control unit 210, a display unit 220, and an operation reception unit 230.
- the control unit 210 is a CPU and controls the entire playback apparatus 200 in an integrated manner.
- the display unit 220 is a display on which a visual field image is displayed.
- the operation accepting unit 230 is an operation device that accepts an operation by a viewer of the omnidirectional video (a user of the playback device 200).
- the control unit 210 functions as a demultiplexing processing unit 211, a map display processing unit 212, a decoding processing unit 213, an orientation data analysis unit 214, and an image cutout unit 215 by executing a specific program.
- decoding processing unit 213 and the demultiplexing processing unit 211 may be realized by hardware (LSI) instead of software.
- When the demultiplexing processing unit 211 receives an input of free viewpoint moving image data from the outside, it performs demultiplexing processing on the free viewpoint moving image data, thereby extracting the map data, the reproduction control data of each frame, and the encoded omnidirectional video data from the free viewpoint moving image data.
- The demultiplexing processing unit 211 outputs the map data and the shooting point data of each frame to the map display processing unit 212, outputs each encoded frame to the decoding processing unit 213, and outputs the azimuth data group of each frame to the azimuth data analysis unit 214.
- the map display processing unit 212 displays a map represented by the map data on the display unit 220, and displays a line indicating the shooting route on the map using the shooting point data of each frame.
- The decoding processing unit 213 decodes each encoded frame input thereto and outputs each decoded frame to the image cutout unit 215.
- the azimuth data analysis unit 214 selects, for each frame, all or part of one or more azimuth data related to the frame.
- The azimuth data analysis unit 214 performs this selection process based on a viewer operation or automatically.
- the azimuth data analysis unit 214 outputs the cutout coordinates included in the azimuth data to the image cutout unit 215 for each selected azimuth data.
- the image cutout unit 215 cuts out one or more visual field images from each frame with reference to one or more cutout coordinates regarding the frame.
- Specifically, for each of the one or more cut-out center coordinates referred to, the image cutout unit 215 cuts out from the frame, as a field-of-view image, a region having a predetermined height and width centered on that cut-out center coordinate.
- the image cutout unit 215 displays, for each frame, one or more visual field images cut out from the frame on the display unit 220 within the period of the frame.
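As an illustrative sketch outside the original disclosure (Python, with nested lists standing in for pixel buffers), cutting a fixed-size field-of-view image out of an equirectangular omnidirectional frame might look as follows; the horizontal wrap-around and vertical clamping are assumptions, since the patent leaves the cut-out geometry and distortion correction open:

```python
def cut_out_field_image(frame, cut_center, size):
    """Cut an (h x w) field-of-view image out of an equirectangular frame,
    centered on cut_center = (cx, cy).  The horizontal axis wraps around
    (azimuth 0 equals azimuth 360 degrees); vertically the crop is clamped
    to the frame.  frame is a list of rows, each row a list of pixels."""
    h_frame, w_frame = len(frame), len(frame[0])
    cx, cy = cut_center
    h, w = size
    top = max(0, min(h_frame - h, cy - h // 2))   # clamp vertically
    left = cx - w // 2                            # may go negative: wraps
    return [
        [frame[top + r][(left + c) % w_frame] for c in range(w)]
        for r in range(h)
    ]
```

A real implementation would typically apply lens or mirror distortion correction before or during this crop, as noted later for the image cutout unit 215.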
- the playback apparatus 200 has the following first mode and second mode, and the operation in the first mode is different from the operation in the second mode.
- First mode: the field-of-view image cut out with reference to the cut-out center coordinates of the basic azimuth data is displayed in full screen, and the field-of-view image cut out with reference to the cut-out center coordinates of the extended azimuth data is displayed in a small screen (wipe display).
- Second mode: the field-of-view image cut out with reference to the cut-out center coordinates of the basic azimuth data is displayed by default, and the field-of-view image cut out with reference to the cut-out center coordinates of the extended azimuth data is displayed based on a viewer operation for switching the displayed field-of-view image.
- FIG. 13 is a flowchart showing the operation of the playback device 200.
- FIG. 14 is a flowchart showing in detail one step of the flowchart of FIG.
- FIG. 15 is a diagram illustrating a display mode (PinP display) of the visual field image by the playback device 200 set in the first display mode.
- the playback device 200 starts the operation according to the flowchart of FIG. 13 at the timing when the free viewpoint video data is input from the outside.
- In step S41, the demultiplexing processing unit 211 performs demultiplexing processing on the free viewpoint moving image data, thereby extracting the map data, the reproduction control data of each frame, and the encoded omnidirectional moving image data from the free viewpoint moving image data.
- the playback device 200 performs the processing from step S43 to step S46 for each frame of the omnidirectional video data within the period of the frame.
- In step S43, the decoding processing unit 213 decodes frame i (omnidirectional image) using a preset decoding method (for example, HEVC) and outputs the decoded frame i to the image cutout unit 215.
- In step S44, the azimuth data analysis unit 214 selects one or more azimuth data included in the reproduction control data Vi.
- the azimuth data analysis unit 214 extracts the cutout center coordinates from the azimuth data for each of the selected one or more azimuth data, and outputs the extracted cutout center coordinates to the image cutout unit 215.
- Step S44 will be specifically described with reference to FIG. 14.
- In step S441, the azimuth data analysis unit 214 determines whether or not priority display order data is included in each of the one or more azimuth data of the reproduction control data Vi.
- If the azimuth data analysis unit 214 determines that priority display order data is included in each azimuth data of the reproduction control data Vi, the process proceeds to step S442; if it determines that priority display order data is not included, the process proceeds to step S443.
- In step S442, the azimuth data analysis unit 214 selects the azimuth data including the priority display order data indicating the highest priority display order among all the azimuth data included in the reproduction control data Vi, and proceeds to step S444.
- In step S443, the azimuth data analysis unit 214 selects the basic azimuth data by referring to the identification information of each azimuth data included in the reproduction control data Vi.
- In step S444, the azimuth data analysis unit 214 extracts the cut-out center coordinates from the azimuth data selected in step S442 or step S443 (the specific azimuth data, that is, the azimuth data that the playback device 200 should use most preferentially to cut out a field-of-view image).
- the orientation data analysis unit 214 outputs the extracted cut-out center coordinates to the image cut-out unit 215.
- the orientation data analysis unit 214 proceeds to step S445 after step S444.
- In step S445, the azimuth data analysis unit 214 determines whether or not extended azimuth data is included in the reproduction control data Vi.
- The playback device 200 proceeds to step S446 when it is determined that extended azimuth data is included in the reproduction control data Vi, and proceeds to step S45 when it is determined that extended azimuth data is not included.
- In step S446, the azimuth data analysis unit 214 determines whether or not the operation receiving unit 230 has received an operation for causing the playback device 200 to select extended azimuth data.
- When it is determined that the operation receiving unit 230 has received the operation, the playback device 200 proceeds to step S447; when it is determined that the operation receiving unit 230 has not received the operation, the playback device 200 proceeds to step S45.
- In step S447, the azimuth data analysis unit 214 selects extended azimuth data according to the operation. Then, in step S448, the azimuth data analysis unit 214 extracts the cut-out center coordinates from the extended azimuth data selected in step S447 and outputs the extracted cut-out center coordinates to the image cutout unit 215.
- the reproducing device 200 proceeds to step S45 after step S448.
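The selection in steps S441 to S448 can be condensed into a small sketch (Python, outside the original disclosure); the dictionary field names and the convention that a smaller priority value means earlier display are assumptions:

```python
def select_cut_centers(vi_azimuth_data, requested_extended_index=None):
    """vi_azimuth_data is the list of azimuth data for frame i, each a dict
    like {"cut_center": (x, y), "is_basic": bool, "priority": int or None}.
    Returns the cut-out center coordinates for the primary (full-screen)
    image, plus that of a viewer-selected extended azimuth data, if any."""
    # S441/S442: if every azimuth data carries priority display order data,
    # use the one with the highest priority (smallest value).
    if all(d["priority"] is not None for d in vi_azimuth_data):
        primary = min(vi_azimuth_data, key=lambda d: d["priority"])
    else:
        # S443: otherwise fall back to the basic azimuth data.
        primary = next(d for d in vi_azimuth_data if d["is_basic"])
    centers = [primary["cut_center"]]                               # S444
    extended = [d for d in vi_azimuth_data if not d["is_basic"]]    # S445
    if extended and requested_extended_index is not None:           # S446
        # S447/S448: the viewer asked for a particular extended azimuth data.
        centers.append(extended[requested_extended_index]["cut_center"])
    return centers
```

The returned coordinates then feed step S45, which cuts the full-screen and sub-screen field-of-view images out of frame i.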
- In step S45, the image cutout unit 215 refers to the cut-out center coordinates output in step S444 and cuts out the field-of-view image for full-screen display from frame i.
- Also in step S45, the image cutout unit 215 refers to the cut-out center coordinates output in step S448 and cuts out the field-of-view image for sub-screen display from frame i.
- After step S45, the image cutout unit 215 displays the field-of-view image for full-screen display in full screen.
- In addition, the image cutout unit 215 superimposes the field-of-view image for sub-screen display on that field-of-view image.
- As a result, as shown in FIG. 15, the display unit 220 displays the field-of-view image for full-screen display (full-screen image) 27 and the field-of-view image for sub-screen display (wipe image) 28.
- The playback apparatus 200 may be configured to accept an operation for displaying, as the field-of-view image, an image of an arbitrary desired orientation included in frame i (omnidirectional image).
- In this case, the playback device 200 may perform the following processing within the period of frame i: it may display the field-of-view image designated by the operation for a predetermined valid period, and perform the processing of step S45 after the valid period expires.
- The above operation may be an operation of inputting the value of the cut-out center coordinates in the omnidirectional image, or an operation of designating the cut-out center coordinates (a mouse click or a touch operation).
- When the playback device 200 receives a mouse operation, a flick operation, or a press of a controller button during the predetermined valid period, it may change the displayed field-of-view image of one orientation to a field-of-view image of another orientation.
- For example, the playback device 200 may change a field-of-view image of a certain orientation being displayed to a field-of-view image of another orientation according to the amount and direction of mouse movement, to a field-of-view image of another orientation according to the amount and direction of a flick, or to a field-of-view image of another orientation according to the type of button pressed.
- the image cutout unit 215 may perform a process of correcting the cutout center coordinates as necessary for each frame except the first frame.
- when the distance between the cutout center coordinate C1 of the frame immediately before the target frame and the cutout center coordinate C2 of the target frame exceeds a predetermined value, it is desirable to correct the cutout center coordinate of the target frame to a coordinate C3 on the line segment connecting C1 and C2, at a distance from C1 equal to the predetermined value. This is so that the viewer can always grasp which field-of-view image is being displayed, and to prevent the viewer from suffering motion sickness.
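A minimal sketch of this correction, assuming 2D pixel coordinates and a hypothetical `correct_center` helper in which the predetermined value is `max_dist`:

```python
import math

def correct_center(c1, c2, max_dist):
    """If the cutout center jumps more than max_dist between consecutive
    frames, replace it with a point C3 on the segment C1-C2 whose distance
    from C1 equals max_dist, limiting sudden view changes."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    dist = math.hypot(dx, dy)
    if dist <= max_dist:
        return c2  # within the allowed step, no correction needed
    scale = max_dist / dist
    return (c1[0] + dx * scale, c1[1] + dy * scale)

print(correct_center((0, 0), (100, 0), 30))  # (30.0, 0.0)
```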
- the size of the visual field image cut out from the omnidirectional image by the image cutout unit 215 may be a preset size corresponding to the size of the display area of the display unit 220.
- the reproduction control data Vi may include size information indicating the size of the visual field image cut out from the omnidirectional image by the image cutout unit 215.
- the image cutout unit 215 cuts out the visual field image with reference to the size information and the cutout center coordinates.
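A minimal sketch of cutting out a field-of-view image of a given size around the cutout center coordinates, assuming the omnidirectional frame is an equirectangular image stored as a NumPy array (the names are illustrative, not from the patent):

```python
import numpy as np

def cut_out_view(frame, center_xy, size_wh):
    """Crop a field-of-view image of the given size around the cutout
    center coordinates; an equirectangular frame is continuous in X,
    so horizontal indices wrap around the 360-degree seam."""
    h, w = frame.shape[:2]
    cx, cy = center_xy
    vw, vh = size_wh
    # Horizontal pixel indices wrap around the seam.
    xs = np.arange(cx - vw // 2, cx + vw // 2) % w
    # Vertical indices are clamped at the top and bottom edges.
    ys = np.clip(np.arange(cy - vh // 2, cy + vh // 2), 0, h - 1)
    return frame[np.ix_(ys, xs)]

frame = np.zeros((512, 1024, 3), dtype=np.uint8)  # dummy omnidirectional frame
view = cut_out_view(frame, center_xy=(1000, 256), size_wh=(320, 240))
print(view.shape)  # (240, 320, 3)
```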
- the image cutout unit 215 may perform a process of correcting distortion (distortion due to a lens or a mirror) with respect to the omnidirectional image, and may cut out the visual field image from the corrected omnidirectional image. Note that the distortion correction method does not directly characterize the present invention, and a known method can be applied, and thus detailed description thereof is omitted.
- the azimuth data analysis unit 214 may output, together with the cut-out center coordinates, control information indicating that the field-of-view image to be cut out with reference to those cut-out center coordinates should be displayed full screen.
- the orientation data analysis unit 214 may output control information indicating that the visual field image to be cut out with reference to the cut-out center coordinate should be wipe displayed together with the cut-out center coordinate.
- the image cutout unit 215 may refer to the control information to determine whether the field-of-view image cut out with reference to the cutout center coordinates acquired together with that control information should be displayed full screen or wipe-displayed.
- the azimuth data analysis unit 214 may output, to the image cutout unit 215, the priority display order data included in the azimuth data together with the cutout center coordinates.
- the image cutout unit 215 may specify priority display order data indicating the highest priority display order from the plurality of acquired priority display order data.
- the image cutout unit 215 may determine to display the visual field image to be cut out with reference to the cutout center coordinates acquired together with the priority display order data. In addition, the image cutout unit 215 may determine to wipe-display a field-of-view image to be cut out with reference to the cut-out center coordinates acquired together with other priority display order data.
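Assuming the priority display order is a small integer where a smaller value means higher priority (the patent does not fix the encoding, so the names and structure here are illustrative), the full-screen/wipe split could be sketched as:

```python
def split_by_priority(azimuth_data_list):
    """Pick the entry with the highest priority (smallest priority display
    order value, assumed) for full-screen display; the remaining entries
    become wipe (sub-screen) images."""
    ordered = sorted(azimuth_data_list, key=lambda d: d["priority"])
    return ordered[0], ordered[1:]

data = [
    {"id": "ext1", "priority": 2, "center": (700, 200)},
    {"id": "basic", "priority": 1, "center": (100, 250)},
]
full, wipes = split_by_priority(data)
print(full["id"])  # basic
```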
- the image cutout unit 215 may compare the value of the X component of the cutout center coordinate of the basic azimuth data with the value of the X component of the cutout center coordinate of the extended gaze direction data. When the value of the former X component is smaller than the value of the latter X component, the image cutout unit 215 may display the wipe image 28 on the right end of the display; when the value of the former X component is greater than the value of the latter X component, it may display the wipe image 28 on the left end of the display.
- a field-of-view image of a certain direction in which a certain subject (a wonderful scenery) appears is displayed as the full-screen image 27, and a field-of-view image of a different direction in which another subject related to that subject (the face of a person who is moved by seeing the scenery) appears is displayed as the wipe image 28. The viewer can then feel as if he or she is simultaneously viewing the wonderful scenery and the face of the person impressed by seeing it.
- the display position of the wipe screen may be the upper end and the lower end instead of the left end and the right end.
- the image cutout unit 215 may compare the value of the Y component of the cutout center coordinate of the basic orientation data with the value of the Y component of the cutout center coordinate of the extended gaze direction data.
- when the value of the former Y component is smaller than the value of the latter Y component, the image cutout unit 215 may display the wipe image 28 at the upper end of the display; when the value of the former Y component is greater than the value of the latter Y component, it may display the wipe image 28 at the lower end of the display.
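The X- and Y-component comparisons described above can be sketched together as follows (hypothetical names; per the rules above, a smaller basic-data component places the wipe image on the right or top):

```python
def wipe_position(basic_center, extended_center):
    """Choose the wipe image's screen edge by comparing the cutout center
    coordinates: basic X smaller than extended X -> right edge, else left;
    basic Y smaller than extended Y -> top edge, else bottom."""
    bx, by = basic_center
    ex, ey = extended_center
    horizontal = "right" if bx < ex else "left"
    vertical = "top" if by < ey else "bottom"
    return horizontal, vertical

print(wipe_position((100, 300), (800, 120)))  # ('right', 'bottom')
```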
- the image cutout unit 215 may create an image button using the field-of-view image 28 and display the created image button instead of wiping the field-of-view image 28.
- when the displayed image button is pressed, the image cutout unit 215 may delete the image button and the field-of-view image 27 displayed on the full screen, and then perform the following processing. That is, the image cutout unit 215 may display the field-of-view image 28 in full screen, create an image button using the field-of-view image 27, and display the created image button.
- the user can quickly display the field-of-view image cut out with reference to the cut-out center coordinates of the extended orientation data on the playback device 200 in full screen.
- the playback device 200 may play back the omnidirectional video twice.
- the playback apparatus 200 may display the field-of-view image of each frame using only the cut-out center coordinates of the basic orientation data during the first playback, and display the field-of-view image of each frame using only the cut-out center coordinates of the extended orientation data during the second and subsequent playbacks.
- the playback apparatus 200 configured as described above may be a digital signage.
- the playback device 200 as digital signage can let the viewer browse the various advertisements shown in the omnidirectional video without becoming bored.
- for each of the plurality of azimuth data, the reproducing device 200 may display, on the full-screen field-of-view image, a button corresponding to the identification information included in that azimuth data.
- the playback apparatus 200 may refer to the cut-out center coordinates of the orientation data identified by the identification information corresponding to the pressed button when any of the displayed buttons is pressed. Then, the playback apparatus 200 may switch the image to be displayed on the full screen to a reference image cut out using the cut-out center coordinates.
- the playback device 200 may display a button for setting whether or not to display a field-of-view image of the orientation indicated by the target extended orientation data for each extended orientation data.
- the playback device 200 may wipe-display only the visual field image to be displayed based on the setting by the button.
- a plurality of field-of-view images may be wipe-displayed in order during the period of the frame according to the priority display order indicated by the priority display order data, or a plurality of wipe screens may be prepared so that all of the field-of-view images are wipe-displayed.
- the playback device 200 separates the omnidirectional video data, the map data, and the playback control data for each frame from the free-viewpoint video data generated by the data generation device 100.
- the playback apparatus 200 selects all or part of the azimuth data included in the line-of-sight control data, and cuts out and displays part of the omnidirectional image (the field-of-view image) using the cut-out center coordinates included in the selected line-of-sight direction data.
- the playback device 200 (a device having a display screen of limited resolution and size, such as a tablet, a TV, a PC, a smartphone, or an HMD) can thereby display, for each frame (omnidirectional image) to be played back, a field-of-view image of a specific direction within the target omnidirectional image (the field-of-view image that the content creator wants the viewer to browse).
- even when the viewer performs an operation to display an image of a desired orientation as the field-of-view image, the playback device 200 can display, after a predetermined period has elapsed, the field-of-view image that the content creator wants the viewer to browse.
- FIG. 16 is a flowchart showing the operation of the playback device 200.
- FIG. 17 is a flowchart showing in detail one step (step S44) of the flowchart of FIG.
- FIG. 18 is a diagram illustrating a state in which the playback device 200 set in the second display mode displays a default visual field image on a map.
- FIG. 19 is a flowchart showing in detail another step (step S48) of the flowchart of FIG. 16, and FIG. 20 is a diagram illustrating a state in which the playback device 200 set in the second display mode displays the field-of-view image designated by the user on the map.
- the playback device 200 starts the operation according to the flowchart of FIG. 16 at the timing when the free viewpoint video data is input from the outside.
- the playback device 200 proceeds to step S42 after performing step S41 already described.
- step S42 the map display processing unit 212 displays the map represented by the map data on the display unit 220, and displays a line indicating the shooting route on the map 29 using the shooting point data of each frame.
- the playback apparatus 200 performs the processing from step S43 to step S48 for each frame of the omnidirectional video data during the frame period.
- the reproducing device 200 proceeds to step S44A after performing step S43 already described.
- step S44A the azimuth data analysis unit 214 automatically selects one azimuth data included in the reproduction control data Vi.
- the azimuth data analysis unit 214 extracts cutout center coordinates from the selected azimuth data, and outputs the extracted cutout center coordinates to the image cutout unit 215.
- the specific processing in step S44A is as shown in FIG. 17; since the processing in steps S441 to S444 in FIG. 17 is the same as the processing in steps S441 to S444 in FIG. 13 already described, the description of the specific processing in step S44A is omitted.
- step S45 the image cutout unit 215 cuts out the default display visual field image 31 from the frame i with reference to the cutout center coordinates output in step S444.
- the image cutout unit 215 displays the visual field image 31 for default display on the map 29 after step S45 (step S46).
- the reproducing device 200 proceeds to step S47 after step S46.
- step S47 the map display processing unit 212 displays the symbol 30 indicating the shooting point of the frame i at the position on the map indicated by the shooting point data of the frame i.
- in step S47, the map display processing unit 212 also obtains, from the azimuth data analysis unit 214, each azimuth data not selected by the azimuth data analysis unit 214 in step S44A, and performs the following processing on each azimuth data regarding the frame i.
- that is, the map display processing unit 212 estimates, from the cut-out center coordinates included in the target azimuth data and the shooting point data of the frame i, the position on the map of the subject that enters the field of view when the azimuth indicated by the target azimuth data is viewed from the shooting point of the frame i, and displays the symbol 32 at the estimated position.
- the map display processing unit 212 refers to the cut-out center coordinates included in the target orientation data, extracts the field image from the frame i, and applies a known distance estimation technique to the field image. The distance between the shooting point of the frame i and the point where the subject in the field-of-view image exists may be estimated. Then, the map display processing unit 212 may estimate the position of the subject on the map from the shooting point data of the frame i, the cut-out center coordinates indicating the orientation, and the estimated distance.
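A sketch of the map-position estimate from the shooting point, the azimuth implied by the cut-out center coordinates, and the estimated distance (assuming a map with 0° pointing up/north and clockwise azimuths, with map Y growing downward; these conventions and names are illustrative, not from the patent):

```python
import math

def subject_map_position(shoot_xy, azimuth_deg, distance):
    """Estimate the subject's position on the map from the shooting point,
    the azimuth derived from the cutout center coordinates, and the
    distance estimated from the field-of-view image."""
    theta = math.radians(azimuth_deg)
    x = shoot_xy[0] + distance * math.sin(theta)
    y = shoot_xy[1] - distance * math.cos(theta)  # map Y grows downward
    return x, y

x, y = subject_map_position((100.0, 100.0), azimuth_deg=90.0, distance=50.0)
print(round(x, 6), round(y, 6))  # 150.0 100.0
```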
- the reproducing device 200 proceeds to step S48 after step S47.
- step S48 the playback device 200 performs a visual field image switching process.
- the visual field image switching process in step S48 will be specifically described with reference to FIG.
- the azimuth data analysis unit 214 determines whether or not extended azimuth data is included in the reproduction control data Vi (step S481).
- if the playback device 200 determines that the reproduction control data Vi does not include extended orientation data, it ends step S48; if it determines that the reproduction control data Vi includes extended orientation data, the process proceeds to step S482.
- step S482 the image cutout unit 215 performs the following processing on each extended orientation data included in the reproduction control data Vi.
- that is, the image cutout unit 215 obtains the cutout center coordinates included in the target extended azimuth data from the azimuth data analysis unit 214, cuts out the field-of-view image from the frame i with reference to the obtained cutout center coordinates, and displays the cut-out field-of-view image as a thumbnail.
- in step S482, the image cutout unit 215 also displays a broken line connecting the thumbnail 33 of the field-of-view image and the symbol 32 indicating the position of the subject in that field-of-view image on the map.
- in step S483, the azimuth data analysis unit 214 determines whether the operation reception unit 230 has received an operation for selecting any thumbnail.
- if the operation receiving unit 230 has not received an operation for selecting any thumbnail, the playback device 200 ends step S48; if it determines that such an operation has been received, the process proceeds to step S484.
- step S484 the azimuth data analyzing unit 214 selects the extended azimuth data corresponding to the selected thumbnail 33 from one or more extended azimuth data related to the frame i, and the process proceeds to step S485.
- in step S485, the azimuth data analysis unit 214 extracts the cutout center coordinates from the selected extended azimuth data, and outputs the extracted cutout center coordinates to the image cutout unit 215.
- the image cutout unit 215 refers to the cutout center coordinates output in step S485, cuts out the field-of-view image from the frame i, and displays the cut-out field-of-view image (step S486).
- in step S487, the image cutout unit 215 displays a thick frame surrounding the selected thumbnail 33 (a thick frame indicating that the field-of-view image corresponding to the thumbnail 33 is displayed as a result of the user selecting the thumbnail 33).
- the map display processing unit 212 may display the symbol 30 indicating the shooting point of the frame i large while the default field-of-view image is displayed, and display the symbol 30 small while the field-of-view image designated by the user is displayed.
- step S482 the image cutout unit 215 may perform the following process instead of performing the process of displaying the thumbnail.
- the image cutout unit 215 may display a button corresponding to the orientation indicated by the extended orientation data for each extended orientation data included in the reproduction control data Vi.
- the image cutout unit 215 may display a plurality of the above buttons side by side on the edge of the display screen.
- the image cutout unit 215 may display a visual field image having an orientation corresponding to the pressed button.
- the image cutout unit 215 may display a pull-down menu from which the field-of-view image indicated by any of the one or more extended azimuth data included in the reproduction control data Vi can be selected for display.
- by viewing the map screen, the viewer can confirm the shooting position of the current frame and the playback time of the current frame.
- in addition, the viewer can display a desired field-of-view image while confirming the thumbnail of each field-of-view image, the location of the subject in the field-of-view image, and the direction in which the subject is viewed from the shooting point.
- the playback device 200 is widely applicable not only to playback devices such as the television and digital video recorder described above, but also to devices that handle moving images, such as digital cameras, digital movie cameras, portable movie players, mobile phones, car navigation systems, portable DVD players, and PCs.
- the playback device according to the present invention is not limited to a playback device including a display; a playback device that does not include a display and displays a moving image on an external display is also included in the scope of the playback device according to the present invention.
- although the playback device 200 includes the first mode and the second mode, the playback device according to the present invention is not limited to such a configuration.
- a playback apparatus that performs only one of the first mode operation and the second mode operation is also included in the scope of the playback apparatus according to the present invention.
- the playback device 200 may play back an omnidirectional video as follows.
- the playback device 200 may include a virtual dome-shaped screen onto which the movie texture of the omnidirectional video is pasted, and a virtual camera that is arranged at the center of the screen and whose orientation and position can be changed, and may display the portion of the omnidirectional video reflected in the virtual camera in an area of the display screen.
- the playback apparatus 200 may identify the direction in which the virtual camera should face from the cut-out center coordinates included in the basic orientation data, and set the orientation of the virtual camera so that it faces the identified direction.
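Assuming the basic orientation data's cut-out center coordinates are pixel coordinates in an equirectangular frame, the direction the virtual camera should face can be sketched as a yaw/pitch conversion (illustrative names and conventions, not fixed by the patent):

```python
def center_to_camera_angles(center_xy, frame_w, frame_h):
    """Convert cutout center coordinates in an equirectangular frame to
    the yaw/pitch the virtual camera should face (equirectangular mapping
    assumed: X spans 360 degrees of yaw, Y spans 180 degrees of pitch)."""
    cx, cy = center_xy
    yaw = (cx / frame_w) * 360.0 - 180.0   # -180..+180 degrees
    pitch = 90.0 - (cy / frame_h) * 180.0  # +90 (up) .. -90 (down)
    return yaw, pitch

print(center_to_camera_angles((512, 128), frame_w=1024, frame_h=512))
# (0.0, 45.0)
```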
- the basic orientation data may include data indicating the direction in which the virtual camera should face instead of the cut-out center coordinates, and the playback device 200 may set the orientation of the virtual camera with reference to that data.
- in step S2, the data generation device 100 may accept an operation for designating the orientation of the virtual camera instead of the operation for inputting the cut-out center coordinates, and generate basic orientation data that includes the data generated by the above operation instead of the cut-out center coordinates.
- FIG. 21 is a schematic diagram of the free viewpoint moving image processing system according to the present embodiment.
- a free viewpoint video processing system 1 (hereinafter referred to as “system 1”) according to the present embodiment includes the data generation device 100 according to the first embodiment and the playback device 200 according to the second embodiment.
- the data generation device 100 generates free viewpoint video data using the method described in the first embodiment, and the playback device 200 reads the free viewpoint video data and uses the method described in the second embodiment. Play an omnidirectional video.
- the method of passing free viewpoint moving image data from the data generation device 100 to the playback device 200 may be a method using broadcasting or communication, or a method using a removable recording medium as a medium.
- the system 1 may be owned by one user, or may be shared by a first user who owns the data generation device 100 and a second user who owns the playback device 200.
- that is, the first user may create free viewpoint video data using the data generation device 100 of the system 1, and the second user may browse the omnidirectional video using the playback device 200 of the system 1.
- the control blocks of the data generation device 100 and the playback device 200 may be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit).
- the data generation device 100 and the playback device 200 include a CPU that executes instructions of a program, which is software that implements each function, and a ROM or storage device (a "recording medium") in which the program and various data are recorded so as to be readable by a computer (or CPU).
- the object of the present invention is achieved by a computer (or CPU) reading the program from the recording medium and executing it.
- as the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used.
- the program may be supplied to the computer via an arbitrary transmission medium (such as a communication network or a broadcast wave) that can transmit the program.
- the present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
- a data generation device according to aspect 1 of the present invention includes a data generation unit (for example, the basic azimuth data generation unit 111 and the extended azimuth data generation unit 112) that generates, for all or some frames of an omnidirectional video generated by moving shooting, a plurality of data (for example, basic azimuth data and extended azimuth data) that a playback device refers to in order to cut out a plurality of display target images from the target frame. The data generation unit generates, as the plurality of data, a plurality of azimuth data by generating azimuth data indicating each of a plurality of mutually different azimuths, and each of the plurality of azimuth data is data that the playback device refers to in order to cut out, as the display target image, an image showing the state of the field of view when the azimuth indicated by the azimuth data is viewed from the shooting point of the target frame.
- the omnidirectional video refers to a video in which all or almost all directions from each shooting point on the moving shooting shooting route are shown. Further, it is assumed that the target frame is omnidirectional still image data shot at a certain shooting point.
- according to the above configuration, the data generation device generates a plurality of azimuth data for cutting out, from the target frame, an image showing the state of the field of view when a certain direction is viewed from the shooting point, and an image showing the state of the field of view when another direction is viewed from the shooting point.
- the playback device cuts out these images from the target frame with reference to the plurality of azimuth data, and plays back these images in a period in which these images should be played back.
- therefore, it can be said that the data generation device has the effect of generating data that enables a viewer (a user of the playback device) of the playback video cut out from the omnidirectional video data generated by moving shooting to simultaneously confirm the states of the fields of view when viewing a plurality of different directions from the same shooting point.
- in the data generation device according to aspect 2 of the present invention, in the above aspect 1, the data generation unit (for example, a portion including the basic azimuth data generation unit 111, the extended azimuth data generation unit 112, and the reproduction control data generation unit 115) generates control data (for example, reproduction control data) including the plurality of azimuth data, and may generate, as the control data, control data in which each of the plurality of azimuth data is given identification information for distinguishing that azimuth data from the other azimuth data.
- according to the above configuration, the data generation device has the further effect of enabling the playback device to specify which of the plurality of azimuth data a given azimuth data is, simply by referring to that azimuth data.
- in the data generation device according to aspect 3 of the present invention, in the above aspect 1 or 2, the data generation unit generates control data including the plurality of azimuth data, and may generate control data that designates specific azimuth data (for example, basic azimuth data) to be referred to most preferentially by the playback device.
- here, the specific image among the plurality of images that can be cut out from the target frame using the plurality of azimuth data is assumed to be the image that the user of the data generation device most wants the user of the playback device to view. According to the above configuration, by generating, as the specific azimuth data, the azimuth data that the playback device uses to cut out the specific image from the target frame, the data generation device can make the specific image more likely to catch the user's eye.
- in the data generation device according to aspect 4 of the present invention, in any one of the above aspects 1 to 3, the data generation unit may generate display order data (for example, priority display order data) indicating, for each of the plurality of azimuth data, the display order of the image that the playback device cuts out with reference to that azimuth data, so that the playback device displays the plurality of images such that an image with a relatively high display order is displayed relatively early during the period of the target frame.
- according to the above configuration, the data generation device has the additional effect of being able to cause the user of the playback device to view the plurality of images in the order in which the user of the data generation device wants them to be viewed.
- the data generation device is the data generation apparatus according to any one of the aspects 1 to 4, wherein the omnidirectional video is a video generated by shooting while moving along a predetermined route.
- the data generation device has an additional effect that the user of the playback device can check the map of the area where the omnidirectional video is captured.
- a playback device according to aspect 6 of the present invention includes: a data reference processing unit (for example, the azimuth data analysis unit 214) that refers to, for all or some frames of an omnidirectional video generated by moving shooting, a plurality of data to be referred to in order to cut out a plurality of display target images from the target frame; and a reproduction processing unit (for example, the image cutout unit 215) that reproduces, for all or some of the frames, the plurality of images cut out from the target frame. The plurality of data are a plurality of azimuth data indicating mutually different azimuths, and each azimuth data is data to be referred to in order for the data reference processing unit to cut out, as the display target image, an image showing the state of the field of view when the azimuth indicated by the azimuth data is viewed from the shooting point of the target frame.
- the omnidirectional video refers to a video in which all or almost all directions from each shooting point on the moving shooting shooting route are shown. Further, it is assumed that the target frame is omnidirectional still image data shot at a certain shooting point.
- according to the above configuration, the data generation device generates a plurality of azimuth data for cutting out, from the target frame, an image showing the state of the field of view when a certain direction is viewed from the shooting point, and an image showing the state of the field of view when another direction is viewed from the shooting point.
- the playback device cuts out these images from the target frame with reference to the plurality of azimuth data, and plays back these images in a period in which these images should be played back.
- therefore, it can be said that the playback device has the effect of enabling a viewer (a user of the playback device) of the playback video cut out from the omnidirectional video data generated by moving shooting to simultaneously confirm the states of the fields of view when viewing a plurality of different directions from the same shooting point.
- in the playback device according to aspect 7 of the present invention, in the above aspect 6, the data reference processing unit may refer to control data including the plurality of azimuth data, and the control data referred to by the data reference processing unit may include, among the plurality of azimuth data, specific azimuth data that is the azimuth data to be referred to most preferentially in order for the device to cut out the image.
- here, the specific image among the plurality of images that can be cut out from the target frame using the plurality of azimuth data is assumed to be the image that the producer of the omnidirectional video most wants the user of the playback device to view.
- the specific image is an image cut out using the specific azimuth data.
- according to the above configuration, the playback device has the additional effect of being able to reproduce the omnidirectional video in such a manner that the specific image easily catches the user's eye.
- in the playback device according to aspect 8 of the present invention, in the above aspect 6 or 7, the data reference processing unit may refer to display order data indicating, for each of the plurality of azimuth data, the display order of the image that the device cuts out with reference to that azimuth data, and the reproduction processing unit may reproduce the plurality of images such that an image with a relatively high display order is displayed relatively early during the period of the target frame.
- the reproducing apparatus has an additional effect that the user can browse the plurality of images in the order in which the creator of the omnidirectional video wants the user to browse.
- in the playback device according to aspect 9 of the present invention, in any one of the above aspects 6 to 8, the omnidirectional video is a video generated by shooting while moving along a predetermined route, the playback device further includes a map display processing unit (for example, the map display processing unit 212) that displays a map (for example, the map 29) of the region where the predetermined route is located, and the map display processing unit may display, on the map, information (for example, the symbol 30) indicating the shooting point of the target frame for each frame of the omnidirectional video.
- according to the above configuration, the playback device has the additional effect of enabling the user to grasp, on the map, the shooting point of the frame being played back.
- in the playback device according to aspect 10 of the present invention, in the above aspect 9, the map display processing unit may display on the map, for each frame of the omnidirectional video, information indicating the shooting point of the target frame and information (for example, the symbol 32) indicating the position of the subject in the image that the reproduction processing unit cuts out from the target frame during the period of the target frame.
- according to the above configuration, the playback device has the additional effect of enabling the user to grasp the approximate position of the subject (an immobile subject such as a building) shown in the field-of-view image when the field-of-view image is displayed.
- the present invention can also be configured as follows.
- an image data generation device according to a first configuration receives, as inputs, omnidirectional image data having a 360° field of view, shot using an omnidirectional camera while moving along a predetermined route, and map data that is a map of the route, and generates, from the input omnidirectional image data, free viewpoint image data that enables reproduction of an image in an arbitrary line-of-sight direction from a shooting position corresponding to coordinates on the map data. The image data generation device includes:
- a basic line-of-sight direction data generation unit that generates basic line-of-sight direction data, which is initial line-of-sight direction data, for each frame of omnidirectional image data;
- An extended gaze direction data generating unit that generates at least one extended gaze direction data that is gaze direction data different from the basic gaze direction data for each frame of omnidirectional image data;
- a map position data generating unit that generates map position data obtained by converting the shooting position of each frame of the omnidirectional image data into coordinates on the map data;
- a line-of-sight direction control data generating unit that generates line-of-sight direction control data;
- the image data generation device according to this configuration is characterized in that the basic line-of-sight direction data and the plurality of extended line-of-sight direction data each include information for identifying that data and information about the line-of-sight direction when displaying the omnidirectional image.
- The image data generation device, wherein, when the line-of-sight direction control data contains the basic line-of-sight direction data and at least one item of extended line-of-sight direction data for each frame of the omnidirectional image data, the line-of-sight direction control data includes information on a ranking indicating which line-of-sight direction data is to be displayed preferentially.
- An image data reproducing device comprising: a separation unit that separates the omnidirectional image data, the map data, and the line-of-sight direction control data from the free viewpoint image data;
- a line-of-sight direction control unit that selects at least one item of basic or extended line-of-sight direction data from the line-of-sight direction control data and outputs the cut-out center coordinates of the selected line-of-sight direction data;
- an image cutout unit that obtains the cut-out center coordinates, cuts out part of the omnidirectional image data around those coordinates, and generates a display image; and
- a display unit that obtains and displays the display image.
- The image data reproducing device according to the fourth configuration, wherein the line-of-sight direction control unit outputs the cut-out center coordinates of a plurality of line-of-sight direction data items for the same frame of the omnidirectional image data, and the display unit is notified of which of the display images generated simultaneously by the image cutout unit from those cut-out center coordinates is the main screen.
- The present invention can be suitably applied to devices that distribute omnidirectional video and devices that play back omnidirectional video.
- 100 Free viewpoint video data generation device (data generation device)
- 110 Control unit
- 111 Basic azimuth data generation unit (data generation unit)
- 112 Extended azimuth data generation unit (data generation unit)
- 115 Playback control data generation unit (data generation unit)
- 116 Multiplexing processing unit (distribution data generation processing unit)
- Free viewpoint video playback device (playback device)
- Control unit
- 212 Map display processing unit
- Azimuth data analysis unit (data reference processing unit)
- Image cutout unit (reproduction processing unit)
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
The present invention enables the viewer of a display video cut out from an omnidirectional video generated by moving shooting to check, around the same time, the fields of view seen in multiple different directions from the same shooting point. A data generation device (100) includes a basic azimuth data generation unit (111) that generates, for each frame of the omnidirectional video, basic azimuth data that a playback device references to cut out a display image from the frame, and an extended azimuth data generation unit (112) that generates extended azimuth data the playback device references for the same purpose.
Description
The present invention relates primarily to a data generation device that generates data concerning omnidirectional images, and to a playback device that plays back images cut out from omnidirectional images.
In recent years, omnidirectional images with a 360° field of view have become obtainable, for example with a camera fitted with a wide-angle fisheye lens or an omnidirectional mirror, or by shooting the same scene from different viewpoint positions with multiple cameras and stitching the captured images together. By using many omnidirectional images obtained by shooting at various locations, an image provider can let a viewer browse images showing the field of view from any viewpoint position and in any line-of-sight direction the viewer specifies.

For example, Patent Document 1 below discloses a technique in which image data showing such a field of view is extracted from omnidirectional images at various viewpoint positions held by a supply center, and the extracted image data is transmitted to the viewer's terminal device. Google Inc. (headquartered in California, USA) also offers a service called "Street View (registered trademark)" that displays still images showing the field of view from any viewpoint position on a road, in any line-of-sight direction.
The conventional image display techniques using omnidirectional images in Patent Document 1 and "Street View" present still images: when the viewer specifies an arbitrary line-of-sight direction from an arbitrary viewpoint position, a still image matching that specification is presented.

Because the images the viewer browses are not video shot while moving, the techniques of Patent Document 1 and "Street View" have the following problems.

Namely, even after browsing a number of still images obtained by shooting at various points within an area from which a given scene is visible, it is difficult for the viewer to feel as though they were actually present in that area. In addition, such images offer the viewer little enjoyment as content.
Meanwhile, the invention of Patent Document 2 performs the following processing on multiple omnidirectional still images obtained by shooting the surroundings while moving along the real-world route corresponding to a virtual route, based on an operation that moves the current location on a map along that virtual route.

Specifically, for each of the omnidirectional still images, the invention of Patent Document 2 cuts out a view image showing the field of view in the line-of-sight direction designated by the viewer and displays it, thereby displaying a video showing the surroundings as seen from the shooting point at each shooting time.

This technique of Patent Document 2 resolves, to some extent, the problems of Patent Document 1 and "Street View" described above.
Furthermore, with the invention of Patent Document 2, a viewer watching the video alone who wishes to look in a particular line-of-sight direction can do so by changing the line-of-sight direction as needed.

However, the invention of Patent Document 2 cannot satisfy a viewer who wants to check, at the same time, the field of view seen in one direction from a given point and the field of view seen in another direction from the same point: it cannot display the view image showing the former and the view image showing the latter simultaneously.

The present invention was made in view of the above problems. Its main object is to realize a data generation device that generates data enabling a viewer of display video cut out from an omnidirectional video generated by moving shooting to check, at the same time, the fields of view seen in multiple different directions from the same shooting point. A further object of the present invention is to realize a playback device that lets the viewer check those fields of view at the same time.
A data generation device according to one aspect of the present invention includes a data generation unit that generates, for all or some frames of an omnidirectional video generated by moving shooting, a plurality of data items that a playback device should reference in order to cut out a plurality of display-target images from the target frame. The data generation unit generates, for each of a plurality of mutually different azimuths, azimuth data indicating that azimuth, thereby producing the plurality of data items as a plurality of azimuth data items. Each azimuth data item is data the playback device should reference in order to cut out, as a display-target image, an image showing the field of view seen when looking in the azimuth indicated by that item from the shooting point of the target frame.

A playback device according to one aspect of the present invention includes a data reference processing unit that references, for all or some frames of an omnidirectional video generated by moving shooting, a plurality of azimuth data items needed to cut out a plurality of display-target images from the target frame. Each azimuth data item is data the data reference processing unit should reference in order to cut out, as a display-target image, an image showing the field of view seen when looking in the azimuth indicated by that item from the shooting point of the target frame. The playback device further includes a reproduction processing unit that, for all or some of the frames, reproduces the plurality of images cut out from the target frame.
The data generation device according to one aspect of the present invention has the effect of generating data that enables a viewer of display video cut out from an omnidirectional video generated by moving shooting to check, at the same time, the fields of view seen in multiple different directions from the same shooting point.

The playback device according to one aspect of the present invention has the effect of letting the viewer check, at the same time, the fields of view seen in multiple different directions from the same shooting point.
<Embodiment 1>
Embodiment 1, a preferred embodiment of the data generation device according to the present invention, is described in detail below with reference to the accompanying drawings. Components given the same reference signs in different drawings are identical, and their description is not repeated.
(Overview of the free viewpoint video data generation device)
The free viewpoint video data generation device according to this embodiment generates free viewpoint video data that the free viewpoint video playback device according to Embodiment 2, described later, references in order to play back free viewpoint video.
The free viewpoint video data generated by the free viewpoint video data generation device contains the omnidirectional video data input to the device. This omnidirectional video data was generated by shooting while moving along a shooting route that includes multiple shooting points (moving shooting).

What is noteworthy in this embodiment is that the free viewpoint video data generated by the free viewpoint video data generation device contains the following metadata: metadata that enables the user of the free viewpoint video playback device according to Embodiment 2 to check, at the same time, the fields of view seen in multiple different directions from the same shooting point. This metadata is hereinafter also referred to as playback control data.

Concrete examples of the free viewpoint video data generation device according to this embodiment include broadcasting equipment that generates the free viewpoint video data, a server in the cloud, and a PC (Personal Computer) on which software that generates the free viewpoint video data is installed.
(Configuration of the free viewpoint video data generation device)
FIG. 1 is a block diagram showing the configuration of a free viewpoint video data generation device 100 (hereinafter abbreviated to "data generation device 100") according to one embodiment of the present invention. An outline of the configuration of the data generation device 100 is given first with reference to this figure.
The data generation device 100 includes a control unit 110 and an operation reception unit 120.

The control unit 110 is a CPU and controls the data generation device 100 as a whole.

The operation reception unit 120 is an operation device that accepts operations by the content creator (the user of the data generation device 100).

By executing a specific program, the control unit 110 functions as a basic azimuth data generation unit 111 (data generation unit), an extended azimuth data generation unit 112 (data generation unit), a shooting point data conversion processing unit 113, an encoding processing unit 114, a playback control data generation unit 115, and a multiplexing processing unit 116.

The encoding processing unit 114 and the multiplexing processing unit 116 may be implemented in hardware (LSI) rather than software.
For each image frame (each omnidirectional image) of the omnidirectional video data input from outside, the basic azimuth data generation unit 111 generates data that the free viewpoint video playback device according to Embodiment 2 should reference in order to cut out the view image to be played back from the target frame.

This view image shows the field of view seen when looking in a certain azimuth (the azimuth specified by the content creator, or a default azimuth) from the shooting point of the target frame.

Specifically, the basic azimuth data generation unit 111 generates, as this data, azimuth data indicating that azimuth (hereinafter, basic azimuth data). The concrete content of the basic azimuth data is described later.

When specifying the azimuth, the content creator chooses it so that, of the various subjects contained in the target frame, the subject they most want to show to the user of the free viewpoint video playback device according to Embodiment 2 is included in the view image.
For each image frame of the omnidirectional video data, the extended azimuth data generation unit 112 likewise generates data that the free viewpoint video playback device according to Embodiment 2 should reference in order to cut out a view image to be played back from the target frame.

This view image shows the field of view seen when looking in another azimuth (the azimuth specified by the content creator, or a default azimuth) from the shooting point of the target frame.

Specifically, the extended azimuth data generation unit 112 generates, as this data, azimuth data indicating that other azimuth (extended azimuth data indicating an azimuth different from the one indicated by the basic azimuth data). The concrete content of the extended azimuth data is described later.

The extended azimuth data generation unit 112 may generate just one item of extended azimuth data for the target frame, or several items indicating mutually different azimuths.
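As an illustration, the per-frame basic and extended azimuth data described above can be modeled as simple records. This is only a minimal sketch, not the data format defined by the patent; the `AzimuthData` type and its field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AzimuthData:
    frame: int           # frame number within the omnidirectional video
    azimuth_deg: float   # direction to view from the shooting point, degrees
    is_basic: bool       # True for the basic azimuth, False for extended ones

def generate_azimuth_data(frame, basic_deg, extended_degs):
    """Produce the basic azimuth datum plus any extended ones for one frame."""
    data = [AzimuthData(frame, basic_deg % 360.0, True)]
    data += [AzimuthData(frame, d % 360.0, False) for d in extended_degs]
    return data

# One basic direction and two extended directions for frame 0.
records = generate_azimuth_data(0, 90.0, [180.0, 270.0])
```

A playback device would then cut out one view image per record for the same frame, which is what allows several directions to be checked at the same time.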
The shooting point data conversion processing unit 113 outputs map data input from outside to the multiplexing processing unit 116.

For each frame of the omnidirectional video data, the shooting point data conversion processing unit 113 also converts the GPS coordinate data indicating the frame's shooting point into coordinate data (map coordinate data) indicating the corresponding position on the map represented by the externally input map data.

This GPS coordinate data may be included in the omnidirectional video data as supplementary information; that is, it may have been generated by a camera with a built-in GPS module at the time the omnidirectional video was shot.

The supplementary information of the omnidirectional video may instead contain coordinate data indicating the shooting point of a kind other than GPS coordinate data. For example, the coordinate data may be generated by combining a GPS module with a gyro sensor and/or a vehicle speed sensor, or it may be entered manually as supplementary information of the omnidirectional video.
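The embodiment does not spell out how GPS coordinates map to positions on the 2-D map image. Assuming the map image covers a known latitude/longitude bounding box, a simple linear mapping suffices; the function below is a hypothetical sketch under that assumption.

```python
def gps_to_map(lat, lon, bounds, map_size):
    """Convert a GPS coordinate to pixel coordinates on a 2-D map image.

    bounds   : (lat_min, lat_max, lon_min, lon_max) covered by the map
    map_size : (width_px, height_px) of the map image
    """
    lat_min, lat_max, lon_min, lon_max = bounds
    w, h = map_size
    x = (lon - lon_min) / (lon_max - lon_min) * w
    y = (lat_max - lat) / (lat_max - lat_min) * h  # pixel y grows downward
    return x, y
```

A real implementation would account for the map's actual projection; this linear version is adequate only for small areas such as a sightseeing map.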
The map data is now described with reference to FIG. 2, an explanatory diagram of the map data.

The map data representing the map 10 in FIG. 2 is two-dimensional map image data showing a map of the area in which the aforementioned shooting route lies. Examples of map data include sightseeing maps and road maps.

In this embodiment, the map coordinate data (shooting point data) corresponding to the shooting point of the first frame of the omnidirectional video data is the coordinate of point 11 on the map represented by the map data in FIG. 2, and that of the last frame is the coordinate of point 13 on the same map. The map coordinate data (shooting point data) corresponding to the shooting point of each remaining frame lies on the line 12 connecting points 11 and 13 in FIG. 2.
In this embodiment, the shooting point data conversion processing unit 113 converts the GPS coordinate data indicating the shooting point of the target frame into map coordinate data for every frame, but it may perform this conversion only for some frames (key frames).

In that case, for each non-key frame, the shooting point data conversion processing unit 113 may generate the map coordinate data indicating the frame's shooting point by interpolation, using the map coordinate data indicating the shooting point of the key frame immediately before the target frame and that of the key frame immediately after it.

For example, the shooting point data conversion processing unit 113 may generate the map coordinate data of a non-key frame on the assumption that the camera shooting the omnidirectional video moved at a constant speed along the line segment connecting the shooting points of the immediately preceding and immediately following key frames, over the period between their shooting times.
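The constant-speed assumption above amounts to linear interpolation between the two key-frame positions. A minimal sketch (function and argument names are illustrative, not taken from the patent):

```python
def interpolate_position(key_a, key_b, frame):
    """Linearly interpolate the shooting point of a non-key frame.

    key_a, key_b : (frame_number, (x, y)) of the key frames immediately
                   before and after the target frame, in map coordinates.
    Assumes the camera moved at constant speed along the segment between them.
    """
    fa, (xa, ya) = key_a
    fb, (xb, yb) = key_b
    t = (frame - fa) / (fb - fa)  # fraction of the way from key_a to key_b
    return (xa + t * (xb - xa), ya + t * (yb - ya))
```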
Alternatively, for each non-key frame, the shooting point data conversion processing unit 113 may generate the map coordinate data indicating the shooting point of the target frame based on speed information (a velocity vector) entered by the content creator for that frame.

That is, the shooting point data conversion processing unit 113 may generate the map coordinate data on the assumption that the camera shooting the omnidirectional video reached the shooting point of the target frame by moving from the shooting point of the immediately preceding frame at the speed indicated by the speed information for a fixed time. This fixed time may be the total shooting time of the omnidirectional video divided by its total number of frames n.

The map coordinate data for a frame consists of the frame number and the map coordinate values of its shooting point; the shooting time of the frame may be used instead of the frame number.
The encoding processing unit 114 encodes the omnidirectional video data input from outside and outputs the encoded omnidirectional video data to the multiplexing processing unit 116.

Any video coding scheme can be used by the encoding processing unit 114, for example MPEG-2 (standardized by MPEG, the Moving Picture Experts Group), H.264, or HEVC (High Efficiency Video Coding). These schemes are well known and do not directly characterize the present invention, so their detailed description is omitted.

The encoding processing unit 114 is not essential to the data generation device according to the present invention: a data generation device according to an embodiment other than Embodiment 1 may output the input omnidirectional video data uncompressed, without encoding.
Upon receiving the basic azimuth data, one or more items of extended azimuth data, and the shooting point data (map coordinate data), the playback control data generation unit 115 generates playback control data containing these data and outputs it to the multiplexing processing unit 116. The concrete content of the playback control data is described later.

The multiplexing processing unit 116 generates free viewpoint video data by multiplexing the externally input map data, the playback control data of each frame, and the encoded omnidirectional video data, and outputs the generated free viewpoint video data.

The free viewpoint video data (distribution data) generated by the multiplexing processing unit 116 is distributed to the free viewpoint video playback device according to Embodiment 2, either automatically over a communication channel or manually using a removable recording medium.
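The patent leaves the container format of the multiplexed distribution data to the implementation. Purely as an illustrative sketch, the three parts could be packed as length-prefixed sections; a real system would more likely use a standard container such as MPEG-2 TS or MP4, and the function below is not the patent's format.

```python
import json

def multiplex(map_data: bytes, control_records: list, encoded_video: bytes) -> bytes:
    """Pack map data, playback control metadata, and the encoded omnidirectional
    video stream into one distributable blob, as big-endian length-prefixed
    sections in a fixed order."""
    meta = json.dumps(control_records).encode()
    out = bytearray()
    for section in (map_data, meta, encoded_video):
        out += len(section).to_bytes(4, "big") + section
    return bytes(out)

blob = multiplex(b"MAP", [{"frame": 0, "azimuth": 90}], b"VID")
```

A matching demultiplexer on the playback side would read each 4-byte length and slice out the corresponding section.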
(Operation of the data generation device 100)
The operation of the data generation device 100 is described next with reference to FIG. 3, a flowchart showing an example of that operation.

The data generation device 100 starts operating according to the flowchart of FIG. 3 when omnidirectional video data and map data are input from outside.
図3に示すように、データ生成装置100は、全方位動画データの各フレームについて、ステップS1~ステップS6までの処理を行う。
As shown in FIG. 3, the data generation device 100 performs the processing from step S1 to step S6 for each frame of the omnidirectional video data.
即ち、ステップS1において、撮影地点データ変換処理部113は、外部から入力されたマップデータを用いて、対象フレームの撮影地点を示すGPS座標データを地図座標データに変換して再生制御データ生成部115に出力する。
That is, in step S1, the shooting point data conversion processing unit 113 converts the GPS coordinate data indicating the shooting point of the target frame into map coordinate data using the map data input from the outside, and the reproduction control data generation unit 115. Output to.
After step S1, the basic azimuth data generation unit 111 generates basic azimuth data (step S2).
Step S2 will be described in detail with reference to FIG. 4. FIG. 4(a) is a flowchart showing step S2 in detail.
As can be seen from FIG. 4(a), in step S21 the basic azimuth data generation unit 111 determines whether the operation reception unit 120 has received an operation for generating basic azimuth data.
If the basic azimuth data generation unit 111 determines that the operation reception unit 120 has received such an operation, it proceeds to step S22; if it determines that no such operation has been received, it proceeds to step S23.
In step S22, the basic azimuth data generation unit 111 generates basic azimuth data corresponding to the content of the operation (specifically, an operation for inputting the cut-out center coordinates described below). In step S23, the basic azimuth data generation unit 111 automatically generates basic azimuth data.
Here, the specific content of the azimuth data (basic azimuth data and extended azimuth data) will be described with reference to FIG. 5. FIG. 5 shows an example of a visual field image, cut out by the free viewpoint video playback device according to the second embodiment from an omnidirectional image shot at a certain shooting point, showing the view seen in a certain direction from that shooting point.
The omnidirectional video data according to the present embodiment includes video covering all 360-degree directions, but the free viewpoint video playback device according to the second embodiment (for example, a large display such as a television or projector, a small display such as a tablet or smartphone, or a head-mounted display) cannot display the entire video at once. That is, for each frame of the omnidirectional video, the free viewpoint video playback device according to the second embodiment needs to cut out a part of the target frame (a visual field image) and display it.
To make this possible, the basic azimuth data generation unit 111 generates azimuth data that includes the coordinates on the target frame (omnidirectional image) corresponding to the center position of the visual field image that the content creator wants the free viewpoint video playback device according to the second embodiment to cut out (the cut-out center coordinates).
Consequently, when the free viewpoint video playback device according to the second embodiment acquires, as the azimuth data for the target frame, azimuth data that includes the cut-out center coordinates corresponding to the point 15 at the center of the image 14 in FIG. 5, it can cut out the image 14 from the target frame as the visual field image.
This concludes the description of the specific content of the azimuth data.
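As a concrete illustration of how a playback device might use a cut-out center coordinate, the following sketch crops a fixed-size visual field image from an equirectangular omnidirectional frame. The function name, the equirectangular assumption, and the simple horizontal wrap-around are illustrative assumptions, not part of the embodiment:

```python
def cut_out_view(frame, center_x, center_y, view_w, view_h):
    """Extract a view_w x view_h visual field image centered on the
    cut-out center coordinates (center_x, center_y) of an equirectangular
    omnidirectional frame, wrapping horizontally across the 360-degree seam."""
    frame_h = len(frame)
    frame_w = len(frame[0])
    view = []
    for dy in range(-view_h // 2, view_h - view_h // 2):
        y = min(max(center_y + dy, 0), frame_h - 1)  # clamp vertically
        row = []
        for dx in range(-view_w // 2, view_w - view_w // 2):
            x = (center_x + dx) % frame_w  # wrap around horizontally
            row.append(frame[y][x])
        view.append(row)
    return view

# Example: an 8x4 dummy frame of (x, y) tuples, with a 4x2 view
# centered near the right edge so the wrap-around is visible.
frame = [[(x, y) for x in range(8)] for y in range(4)]
view = cut_out_view(frame, center_x=7, center_y=1, view_w=4, view_h=2)
```

A real playback device would additionally reproject the cropped region to correct the equirectangular distortion; the crop above only illustrates the role of the cut-out center coordinates.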
Note that the basic azimuth data generation unit 111 may proceed to step S22 not only when it receives an operation for inputting cut-out center coordinates, but also when it receives another type of operation. This other type of operation will be described below with reference to FIG. 6, which is an explanatory diagram referred to for this purpose.
The basic azimuth data generation unit 111 may proceed to step S22 when it receives an operation designating an arbitrary point on the map 10 represented by the map data (for example, the point 16 in FIG. 6).
In this case, in step S22, the basic azimuth data generation unit 111 may identify the azimuth at which the designated point is located as seen from the point on the map 10 corresponding to the shooting point of the target frame, and may then generate basic azimuth data that includes a value indicating that azimuth instead of the cut-out center coordinates.
Alternatively, in step S22, the basic azimuth data generation unit 111 may generate basic azimuth data that includes the coordinate value of the point on the map corresponding to the shooting point of the target frame and the coordinate value of the point on the map designated by the operation.
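The azimuth from the shooting point to a designated map point could, for instance, be computed with a standard `atan2`-based formula. This is a sketch under the assumption of a flat map coordinate system with x increasing eastward and y increasing northward; the function and parameter names are hypothetical:

```python
import math

def azimuth_to_point(shoot_x, shoot_y, target_x, target_y):
    """Return the compass azimuth (degrees, 0 = north, 90 = east) from the
    shooting point to the designated map point, assuming flat map
    coordinates with x increasing eastward and y increasing northward."""
    angle = math.degrees(math.atan2(target_x - shoot_x, target_y - shoot_y))
    return angle % 360.0

# A point due east of the shooting point lies at azimuth 90.
print(azimuth_to_point(0, 0, 10, 0))  # 90.0
```

Note the argument order: passing the eastward offset first and the northward offset second yields a compass bearing rather than the mathematical angle from the x-axis.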
Returning to the flowchart of FIG. 3, the description of the operation of the data generation device 100 continues.
After step S2, the extended azimuth data generation unit 112 generates extended azimuth data (step S3), and the process proceeds to step S4.
In step S4, the data generation device 100 returns to step S3 when it receives an operation for generating further extended azimuth data, and proceeds to step S5 when it does not.
Step S3 will be described in detail with reference to FIG. 4. FIG. 4(b) is a flowchart showing step S3 in detail.
As can be seen from FIG. 4(b), in step S31 the extended azimuth data generation unit 112 determines whether the operation reception unit 120 has received an operation for generating extended azimuth data.
If the extended azimuth data generation unit 112 determines that the operation reception unit 120 has received such an operation, it proceeds to step S32; if it determines that no such operation has been received, it proceeds to step S33.
In step S32, the extended azimuth data generation unit 112 generates extended azimuth data corresponding to the content of the operation (specifically, an operation for inputting cut-out center coordinates different from those included in the basic azimuth data).
In step S33, the extended azimuth data generation unit 112 automatically generates extended azimuth data that includes cut-out center coordinates different from those included in the basic azimuth data.
For example, in step S33, the extended azimuth data generation unit 112 may automatically calculate four sets of extended azimuth data indicating the east, west, south, and north directions, or just one set of extended azimuth data indicating a predetermined direction. Alternatively, in step S33, the extended azimuth data generation unit 112 may automatically generate extended azimuth data that includes cut-out center coordinates corresponding to a combination of a predetermined azimuth angle and a predetermined elevation angle.
Alternatively, the extended azimuth data generation unit 112 may automatically generate extended azimuth data such that coordinates within a moving region of the omnidirectional video serve as the cut-out center coordinates.
For example, the extended azimuth data generation unit 112 may calculate motion vectors in pixel units or block units by referring to frames before or after the target frame in the omnidirectional video. Then, in either of the following cases, the extended azimuth data generation unit 112 may automatically generate, for each of the one or more regions concerned, extended azimuth data in which the coordinates of the center or centroid of the region serve as the cut-out center coordinates:
- one or more regions among a plurality of predetermined regions (blocks) are regions in which the value indicating the magnitude of the motion vector is equal to or greater than a predetermined value; or
- the one or more regions are regions containing at least a certain number of pixels for which the value indicating the magnitude of the motion vector is equal to or greater than the predetermined value.
Also, for example, when there are one or more regions of at least a predetermined size consisting of the following pixels or blocks, the extended azimuth data generation unit 112 may automatically generate, for each such region, extended azimuth data in which the coordinates of the center or centroid of the region serve as the cut-out center coordinates:
- pixel: a pixel for which the direction of the calculated motion vector differs from the directions of the motion vectors of the surrounding pixels;
- block: a block for which the direction of the calculated motion vector differs from the directions of the motion vectors of the surrounding blocks.
Alternatively, when the user inputs, before the start of the operation according to the flowchart of FIG. 3, image data obtained by photographing a subject included in the omnidirectional video, the data generation device 100 may perform, in step S33, a process of comparing the image data with the omnidirectional video.
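The block-based variant described above can be sketched minimally as follows, assuming per-block motion-vector magnitudes have already been computed by some motion estimation step; the function and parameter names are hypothetical:

```python
def moving_region_centers(block_motion, block_size, threshold):
    """Given a 2-D grid of per-block motion-vector magnitudes, return the
    pixel coordinates of the centers of blocks whose motion magnitude is
    at or above the threshold; these can serve as automatically generated
    cut-out center coordinates."""
    centers = []
    for by, row in enumerate(block_motion):
        for bx, magnitude in enumerate(row):
            if magnitude >= threshold:
                center_x = bx * block_size + block_size // 2
                center_y = by * block_size + block_size // 2
                centers.append((center_x, center_y))
    return centers

# One fast-moving block (magnitude 9.0) in an otherwise static 3x3 grid
# of 16x16-pixel blocks.
motion = [[0.1, 0.2, 0.1],
          [0.3, 9.0, 0.2],
          [0.1, 0.1, 0.2]]
print(moving_region_centers(motion, block_size=16, threshold=1.0))  # [(24, 24)]
```

The pixel-level and direction-difference variants differ only in the predicate applied per cell; the center-of-region computation is the same.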
Specifically, the extended azimuth data generation unit 112 may compare the image data with the target frame in the omnidirectional video to determine whether the target frame contains an image of the subject. If the extended azimuth data generation unit 112 determines that the target frame contains an image of the subject, it may automatically generate, for each region among the plurality of predetermined regions that contains the subject's image, extended azimuth data in which the coordinates of the center or centroid of that region serve as the cut-out center coordinates.
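The comparison between the user-supplied subject image and the predetermined regions of the target frame might be sketched as a nearest-feature match. The feature representation (plain lists of numbers) and the distance threshold here are illustrative stand-ins; a real implementation would use an actual image descriptor or template matching:

```python
def find_subject_blocks(frame_blocks, subject_feature, max_distance):
    """Return indices of predetermined regions (blocks) whose feature
    vector is close enough to the subject's feature vector, i.e. blocks
    judged to contain the subject's image."""
    matches = []
    for idx, feature in enumerate(frame_blocks):
        # L1 distance between the block feature and the subject feature.
        distance = sum(abs(a - b) for a, b in zip(feature, subject_feature))
        if distance <= max_distance:
            matches.append(idx)
    return matches

# Three block features; only block 1 resembles the subject.
blocks = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]
print(find_subject_blocks(blocks, subject_feature=[0.2, 0.75], max_distance=0.1))  # [1]
```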
Note that the image data input by the user may be data of an image captured on an occasion separate from the shooting of the omnidirectional video. Alternatively, the image data may be data of an image captured by the user while a desired visual field image of a desired frame (an omnidirectional still image) of the omnidirectional video was being displayed by the playback device.
The extended azimuth data generation unit 112 may generate extended azimuth data in every one of the n executions of step S33, or only in some of them.
As can be seen from the description of steps S3 and S4, in the present embodiment the operation reception unit 120 can receive input of a plurality of different cut-out center coordinates. The extended azimuth data generation unit 112 generates a plurality of sets of extended azimuth data by generating, for each of the input cut-out center coordinates, extended azimuth data that includes those coordinates.
Here, a case in which the content creator inputs a plurality of different cut-out center coordinates will be described with reference to FIG. 7 and FIG. 8.
FIG. 7 shows that, in a certain section of the shooting route, there are a plurality of subjects around the section that the content creator recommends viewing. FIG. 8 shows an example of a visual field image, cut out by the free viewpoint video playback device from a frame shot at a certain shooting point, showing the view seen in a different direction (the direction of the subject 18) from that shooting point.
Regarding FIG. 7, assume that there are two subjects 18 and 19 around the shooting route represented by the line 12, that both subjects appear in a certain frame (omnidirectional image), and that the content creator wants to show both subjects to the viewer.
In this case, the content creator need only input two cut-out center coordinates so that the free viewpoint video playback device according to the second embodiment cuts out a visual field image containing the subject 18 and a visual field image containing the subject 19.
For example, assume that the point on the map 10 represented by the map data corresponding to the location of the subject 18 is the point 20, and the point corresponding to the location of the subject 19 is the point 21. Further assume that the subject 18 appears in P frames shot at the P shooting points corresponding to the portion of the line 12 included in the region 22, and that the subject 19 appears in Q frames shot at the Q shooting points corresponding to the portion of the line 12 included in the region 23.
In this case, for each of the P frames, the content creator need only input cut-out center coordinates (for example, the coordinates in the omnidirectional image corresponding to the point 25 in FIG. 8) that cause the free viewpoint video playback device to cut out, from the target frame (omnidirectional image), a visual field image containing the subject 18 (for example, the visual field image 24 in FIG. 8). Similarly, for each of the Q frames, the content creator need only input cut-out center coordinates that cause the free viewpoint video playback device to cut out a visual field image containing the subject 19 from that frame.
As a result, for frames in which both subjects 18 and 19 appear, the extended azimuth data generation unit 112 generates extended azimuth data containing one set of cut-out center coordinates and extended azimuth data containing another set of cut-out center coordinates.
This concludes the description of the case in which the content creator inputs a plurality of different cut-out center coordinates.
Note that the extended azimuth data may include identification information corresponding to the azimuth represented by the X component of the cut-out center coordinates.
Because the data generation device 100 is configured so that a plurality of sets of extended azimuth data can be included for each frame as described above, when some frames of the omnidirectional video contain a highlight scene or several subjects that viewers are likely to be interested in or that the creator wants to show, the content creator can prevent viewers from missing them.
In addition, the viewer can view the visual field image of a desired azimuth among the plurality of azimuths corresponding to the plurality of sets of extended azimuth data, for example a recommended sightseeing spot, a famous landmark on the road map, or a memorable expression on the face of an acquaintance or family member.
Returning to the flowchart of FIG. 3, the description of the operation of the data generation device 100 continues.
In step S5, the reproduction control data generation unit 115 generates reproduction control data including the input shooting point data, basic azimuth data, and extended azimuth data.
Here, the reproduction control data generated by the reproduction control data generation unit 115 will be described in detail with reference to FIGS. 9 to 11.
FIG. 9 schematically shows an example of the data structure of the free viewpoint video data generated by the data generation device 100. FIG. 10 schematically shows an example of the data structure of the reproduction control data included in the free viewpoint video data, and FIG. 11 schematically shows another such example.
As shown in FIG. 9, the free viewpoint video data 26 includes the omnidirectional video data and the map data, and, for each i from 1 to n, includes the reproduction control data Vi for the frame whose frame number is i (hereinafter also referred to as frame i).
For example, the reproduction control data V1 includes shooting point data P1 indicating the shooting point of frame 1 and basic azimuth data O1 for frame 1, and, for each j from 1 to m, includes extended azimuth data E1j for frame 1 (m, n, i, and j are positive integers).
Note that each set of reproduction control data in the free viewpoint video data generated by the data generation device 100 of the present embodiment may include one or more sets of extended azimuth data. Alternatively, as in the free viewpoint video data of FIG. 9, some reproduction control data (reproduction control data Vn) may include no extended azimuth data.
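The layered structure of FIG. 9 and FIG. 10 might be modeled roughly as follows. This is an illustrative sketch of the logical structure, not of the actual multiplexed encoding; all type and field names are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AzimuthData:
    cut_out_center: tuple      # (x, y) coordinates in the omnidirectional frame
    identifier: str            # distinguishes basic from extended azimuth data

@dataclass
class PlaybackControlData:     # Vi for frame i
    shooting_point: tuple      # Pi: map coordinates of the shooting point
    basic_azimuth: AzimuthData # Oi
    extended_azimuths: List[AzimuthData] = field(default_factory=list)  # Ei1..Eim

@dataclass
class FreeViewpointVideoData:
    omnidirectional_video: bytes                 # encoded omnidirectional video
    map_data: bytes
    control_data: List[PlaybackControlData]      # V1..Vn, one per frame

# A frame with one basic and one extended azimuth; a frame like Vn in
# FIG. 9 would simply leave extended_azimuths empty.
v1 = PlaybackControlData((10, 20),
                         AzimuthData((100, 50), "basic"),
                         [AzimuthData((300, 50), "ext1")])
```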
The azimuth data included in the reproduction control data will now be described more concretely with reference to FIG. 10 and FIG. 11.
As shown in FIG. 10, the basic azimuth data includes cut-out center coordinates and identification information for distinguishing the basic azimuth data from the extended azimuth data. Likewise, each set of extended azimuth data includes cut-out center coordinates and identification information for distinguishing it from the basic azimuth data and from the other sets of extended azimuth data.
The data structures of the basic azimuth data and the extended azimuth data are not limited to those shown in FIG. 10.
That is, as shown in FIG. 11, the basic azimuth data and the extended azimuth data may include priority display order data. The priority display order data indicates, among the plurality of visual field images that the free viewpoint video playback device can cut out from frame i, in what order the visual field image cut out using the azimuth data containing that priority display order data should be displayed.
For example, a value of 1 in the priority display order data of a certain set of azimuth data in the reproduction control data Vi indicates that, among the plurality of visual field images that can be cut out from frame i, the free viewpoint video playback device should display first the visual field image cut out using that azimuth data. Similarly, a value of k indicates that the visual field image cut out using that azimuth data should be displayed k-th (k is an integer of 2 or more).
When the content creator wants the free viewpoint video playback device to display a plurality of visual field images cut out from frame i so that they can be viewed simultaneously, the creator need only include priority display order data with the same value in the plurality of sets of azimuth data used to cut out those images.
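The behavior of the priority display order data, including simultaneous display of entries that share the same value, can be sketched as follows (hypothetical names; each set of azimuth data is reduced to a (priority, label) pair):

```python
def display_order(azimuth_data_list):
    """Sort a frame's azimuth data by priority display order. Entries that
    share the same priority value are meant to be displayed simultaneously,
    so they are grouped together. Each entry is a (priority, label) pair."""
    grouped = {}
    for priority, label in azimuth_data_list:
        grouped.setdefault(priority, []).append(label)
    return [grouped[p] for p in sorted(grouped)]

# Two views share priority 1 (shown side by side first); one follows.
views = [(1, "basic"), (2, "east"), (1, "subject18")]
print(display_order(views))  # [['basic', 'subject18'], ['east']]
```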
As shown in FIG. 11, each set of azimuth data may also include viewpoint restriction data.
The viewpoint restriction data may include information indicating whether there is a viewpoint restriction (a restriction that allows the free viewpoint video playback device to cut out visual field images only from a partial region of the entire omnidirectional image). Viewpoint restriction data indicating that such a restriction exists may include information specifying that partial region.
The data generation device 100 may then be configured so that the content creator can perform operations specifying whether a viewpoint restriction applies and specifying the partial region (for example, a region of the omnidirectional image to which the creator particularly wants to draw the viewer's attention, or a region that the viewer is permitted to browse).
With this configuration, the content creator can let the viewer, the user of the free viewpoint video playback device, browse only the visual field images within a region of particular interest, or can restrict browsing of visual field images in other regions that the creator does not want the viewer to see. Regions a content creator may not want viewers to browse include those containing scenes that should not be shown for reasons of confidentiality, portrait rights, public decency, and the like.
It is desirable that a data generation device 100 that streams omnidirectional video in real time, or that distributes omnidirectional video by broadcast, be configured in this way.
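One simple way a playback device could honor a viewpoint restriction is to clamp the requested cut-out center into the allowed region. This is only a sketch; the rectangular region format and the names are illustrative assumptions:

```python
def clamp_to_allowed_region(center_x, center_y, allowed):
    """If a viewpoint restriction is enabled, force the requested cut-out
    center into the allowed region of the omnidirectional frame.
    `allowed` is (min_x, min_y, max_x, max_y), or None for no restriction."""
    if allowed is None:
        return center_x, center_y
    min_x, min_y, max_x, max_y = allowed
    return (min(max(center_x, min_x), max_x),
            min(max(center_y, min_y), max_y))

# A request outside the allowed region is pulled back to its edge.
print(clamp_to_allowed_region(500, 10, (100, 50, 400, 200)))  # (400, 50)
```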
Returning to the flowchart of FIG. 3, the description of the operation of the data generation device 100 continues.
In step S6, the encoding processing unit 114 encodes the input frame (omnidirectional image) using a preset encoding scheme (for example, HEVC) and outputs the encoded data of the omnidirectional image.
After performing the above steps S1 to S6 for each image frame, the data generation device 100 proceeds to step S7.
In step S7, the multiplexing processing unit 116 multiplexes the encoded data of each frame, the map data, and the reproduction control data of each frame to generate the free viewpoint video data, and the data generation device 100 ends the operation.
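Putting steps S1 to S7 together, the overall flow can be summarized at pseudocode level. Every helper below is a trivial stand-in for the corresponding processing unit; none of this reflects actual coordinate conversion, HEVC encoding, or multiplexed formats:

```python
def gps_to_map(gps, map_data):
    # S1: stand-in for the shooting point data conversion processing unit 113
    return gps

def make_basic_azimuth(frame):
    # S2: stand-in for the basic azimuth data generation unit 111 (automatic case, S23)
    return (0, 0)

def make_extended_azimuths(frame):
    # S3-S4: stand-in for the extended azimuth data generation unit 112
    return []

def encode(frame):
    # S6: stand-in for the encoding processing unit 114 (e.g. HEVC)
    return bytes(frame)

def multiplex(encoded_frames, map_data, control_data):
    # S7: stand-in for the multiplexing processing unit 116
    return {"video": encoded_frames, "map": map_data, "control": control_data}

def generate_free_viewpoint_data(frames, gps_points, map_data):
    """Run steps S1-S6 for each frame, then multiplex everything (S7)."""
    control_data, encoded_frames = [], []
    for frame, gps in zip(frames, gps_points):
        point = gps_to_map(gps, map_data)              # S1
        basic = make_basic_azimuth(frame)              # S2
        extended = make_extended_azimuths(frame)       # S3-S4
        control_data.append((point, basic, extended))  # S5: reproduction control data
        encoded_frames.append(encode(frame))           # S6
    return multiplex(encoded_frames, map_data, control_data)

data = generate_free_viewpoint_data(frames=[[1, 2], [3, 4]],
                                    gps_points=[(35.0, 139.0), (35.1, 139.1)],
                                    map_data=b"map")
```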
(Advantages of Data Generation Device 100)
As can be seen from the above description, the data generation device 100 generates free viewpoint video data using the omnidirectional video data, the map data, and the reproduction control data of each frame.
The reproduction control data of each frame includes shooting point data indicating the point on the map data corresponding to the shooting point of the target frame, as well as basic azimuth data and extended azimuth data for cutting out visual field images from the target frame.
As described in the second embodiment, for each frame of the omnidirectional video data included in the free viewpoint video data created in this way, the free viewpoint video playback device uses the reproduction control data for the target frame in order to cut out and display a part of the frame (a visual field image containing a noteworthy subject, view, or scene).
Thus, the data generation device 100 can reduce (or eliminate) the effort the viewer must spend searching for noteworthy subjects, views, and scenes. The data generation device 100 can also solve the problem of a viewer missing a highlight scene because the time at which the scene can be played back passes while the viewer is still searching for it.
Furthermore, by using the data generation device 100, the content creator can create, for each of a plurality of subjects to be shown to viewers, azimuth data for causing the free viewpoint video playback device to display a visual field image containing that subject. Since the content creator can also control the display priority of the plurality of visual field images within each frame by using the data generation device 100, the creator can produce and distribute content that satisfies a variety of viewers with differing tastes.
(Modification of Embodiment 1)
The data generation device may be configured to receive omnidirectional image data (still image data) as input and to output free-viewpoint still image data.
In this case, a content creator can use this data generation device for the purpose of displaying advertisements on signage or the like. That is, for each of the plural field-of-view images that the signage cuts out from the omnidirectional image data, the content creator can sell to advertisers or the like the right to determine the display priority of that field-of-view image.
Furthermore, when the signage is configured so that an advertisement video or message can be superimposed on each field-of-view image, the content creator may sell, for each field-of-view image, the right to have the signage superimpose an advertisement video or message on that image.
(Appendix 1 of Embodiment 1)
The free-viewpoint video data generation device according to this embodiment may be configured to generate free-viewpoint video data that a free-viewpoint video playback system according to an embodiment other than Embodiment 2 described later references in order to play back a free-viewpoint video.
Such a free-viewpoint video playback system may be, for example, a system including a plurality of display devices (displays, projectors, etc.) surrounding the viewer, and a seat structured so that the viewer faces a predetermined direction when seated.
In this case, the basic azimuth data generated by the free-viewpoint video data generation device may include cutout center coordinates, i.e., coordinates on the image frame (omnidirectional image) corresponding to the center position of the field-of-view image displayed on the display device located in front of the viewer of the free-viewpoint video playback system (the display device located in the predetermined direction as viewed from the seat).
Likewise, the extended azimuth data may include coordinates on the image frame (omnidirectional image) (cutout center coordinates) corresponding to the center position of a field-of-view image displayed on a display device in front of the viewer of the free-viewpoint video playback system.
(Appendix 2 of Embodiment 1)
Regarding step S23, the basic azimuth data generation unit 111 may calculate the cutout center coordinates to be included in the basic azimuth data by performing image processing on the omnidirectional video data.
For example, the basic azimuth data generation unit 111 may divide two adjacent image frames (omnidirectional images) into a plurality of regions using the same division pattern and obtain a motion vector for each region by block matching or the like.
The basic azimuth data generation unit 111 may then determine, from the direction and magnitude of the motion vector of each region, which region lies in the traveling direction of the camera, and use the center coordinates of the region determined to lie in the traveling direction as the cutout center coordinates to be included in the basic azimuth data.
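The two steps above (per-region motion vectors via block matching, then picking the heading region) can be sketched as follows. This is not the patent's implementation; in particular, the heuristic that the heading region is the one with the smallest apparent motion (near the focus of expansion during forward camera motion) is one plausible way to realize the determination from "direction and magnitude" described above.

```python
import numpy as np


def block_motion(prev, curr, block=8, search=4):
    """Estimate one motion vector per block by exhaustive block matching (SAD).

    prev, curr: 2-D grayscale frames of the same shape.
    Returns {(block_y, block_x): (dy, dx)}.
    """
    h, w = prev.shape
    vecs = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by + block, bx:bx + block].astype(int)
            best_sad, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and y + block <= h and 0 <= x and x + block <= w:
                        sad = np.abs(curr[y:y + block, x:x + block].astype(int) - ref).sum()
                        if best_sad is None or sad < best_sad:
                            best_sad, best_v = sad, (dy, dx)
            vecs[(by, bx)] = best_v
    return vecs


def forward_region_center(vecs, block=8):
    """Pick the block with the smallest motion magnitude as the heading region
    (a focus-of-expansion heuristic) and return its center coordinates."""
    by, bx = min(vecs, key=lambda k: np.hypot(*vecs[k]))
    return (by + block // 2, bx + block // 2)
```

For a frame pair related by a pure shift, every block's vector equals the shift, so the heading heuristic is only meaningful for real footage with an expansion pattern; the sketch illustrates the mechanics, not a production motion estimator.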
Further, regarding steps S22 and S23, the basic azimuth data generation unit 111 may divide the image frame (omnidirectional image) into a plurality of regions. When the photographer appears in a certain region of the omnidirectional image, the basic azimuth data generation unit 111 may determine which region lies in the traveling direction of the camera by recognizing that the region containing the photographer faces the direction opposite to the traveling direction.
The basic azimuth data generation unit 111 may then use the center coordinates of the region determined to lie in the traveling direction as the cutout center coordinates to be included in the basic azimuth data.
Note that the region of the omnidirectional image in which the photographer appears may be designated by an operation of the content creator, or the basic azimuth data generation unit 111 may recognize a subject that appears throughout the entire shooting period as the photographer.
(Appendix 3 of Embodiment 1)
Step S33 is not essential in the data generation device according to the present invention. That is, when the extended azimuth data generation unit 112 determines that the operation reception unit 120 has not received an operation for generating extended azimuth data, it need not generate extended azimuth data.
Step S4 is likewise not essential in the data generation device according to the present invention. That is, the extended azimuth data generation unit 112 may generate just one piece of extended azimuth data per frame.
(Appendix 4 of Embodiment 1)
The priority display order data may be input externally by the content creator, but the present invention is not limited to such a configuration.
That is, the data generation device 100 may determine the priority display order data based on preference information of a specific viewer that has been input in advance, or may generate it by referring to the viewing information of the many viewers who have already viewed the target omnidirectional video (big data indicating which part of the omnidirectional image was viewed by many viewers at each playback time of the omnidirectional video).
Alternatively, when the time at which the omnidirectional video will be played back is fixed (for example, when the free-viewpoint video data is distributed by broadcasting), the data generation device 100 may generate the priority display order data based on the season or time of day at playback. In this way, the data generation device 100 can, for example, cause the free-viewpoint video playback device to preferentially display field-of-view images containing subjects or scenery that are difficult to see outside the season or time of day of playback. This makes it possible to provide a display particularly well suited to tourism promotion and navigation.
(Appendix 5 of Embodiment 1)
Instead of information indicating the presence or absence of a viewpoint restriction, the viewpoint restriction data may contain level information related to the viewpoint restriction.
The level represented by the level information may be, for example, one of the following three levels.
・A level at which the viewer can unconditionally cause the free-viewpoint video playback device to display any field-of-view image within the omnidirectional image
・A level at which only viewers who satisfy a predetermined condition (for example, viewers who have paid money to the content creator) can cause the free-viewpoint video playback device to display any such field-of-view image
・A level at which the viewer can cause the free-viewpoint video playback device to display only field-of-view images within a partial region of the omnidirectional image
This allows the content creator to let viewers who do not satisfy the predetermined condition view only some of the field-of-view images.
The content creator can also relax the viewpoint restriction according to the amount of money the viewer has paid to the content creator.
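The three-level restriction described above could be checked on the playback side roughly as follows. This is a hypothetical sketch (the patent defines no API); the behavior for a conditional-level viewer who has not satisfied the condition is an assumption here.

```python
from enum import Enum


class ViewpointLevel(Enum):
    UNRESTRICTED = 1  # any field-of-view image, unconditionally
    CONDITIONAL = 2   # any field-of-view image, only if a predetermined condition is met
    RESTRICTED = 3    # only field-of-view images within a permitted partial region


def may_display(level, condition_met, inside_permitted_region):
    """Return True if the requested field-of-view image may be shown
    under the given viewpoint restriction level."""
    if level is ViewpointLevel.UNRESTRICTED:
        return True
    if level is ViewpointLevel.CONDITIONAL:
        # assumption: an unmet condition blocks display entirely
        return condition_met
    return inside_permitted_region
```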
(Appendix 6 of Embodiment 1)
The reproduction control data need not be included in the free-viewpoint video data.
For example, when the data generation device 100 is implemented in the form of a PC equipped with a function for distributing free-viewpoint video data, the data generation device 100 may distribute the free-viewpoint video data containing the map data and the omnidirectional video data separately from the reproduction control data of each frame of the omnidirectional video data (the reproduction control data set).
In this case, a content creator who is a user of the data generation device 100 can send two different reproduction control data sets, created according to his or her own preferences, to a recipient of the free-viewpoint video data. The content creator can then have the recipient's PC play back the omnidirectional video once based on the first reproduction control data set and once based on the second reproduction control data set.
For example, the content creator can show the recipient one landscape the creator was looking at while shooting the omnidirectional video (the landscape appearing in the field-of-view image displayed by the former playback), and another landscape the creator was looking at while shooting (the landscape appearing in the field-of-view image displayed by the latter playback).
Note that the reproduction control data may include information about the content creator (copyright-related information such as the content creator's name). This mechanism is useful when the content creator wishes to treat the reproduction control data itself as a derivative work.
(Appendix 7 of Embodiment 1)
Embodiment 1 has described the data generation device 100 that handles omnidirectional video data, but the data generation device according to the present invention is not limited to such a configuration. That is, the scope of the data generation device according to the present invention also includes a data generation device that handles, instead of omnidirectional video data, a three-dimensional model (a three-dimensional model created with CG, or a three-dimensional model created by photographing an object from multiple directions).
<Embodiment 2>
Embodiment 2, a preferred embodiment of a playback device according to the present invention, will be described below in detail with reference to the accompanying drawings. Components given the same reference signs in different drawings are the same, and their description is omitted.
(Outline of the free-viewpoint video playback device)
The free-viewpoint video playback device according to this embodiment plays back an omnidirectional video by referring to the free-viewpoint video data generated by the data generation device 100 according to Embodiment 1 described above, but it is not structured to display the whole of each image frame of the omnidirectional video.
Instead, for each frame of the omnidirectional video, the free-viewpoint video playback device according to this embodiment cuts out a field-of-view image from the target frame and displays that field-of-view image.
The free-viewpoint video playback device may be, for example, a device with a touch panel function, such as a smartphone or a tablet.
Alternatively, the free-viewpoint video playback device may be a device that reads and plays back content data from an electronic medium such as a magneto-optical disc typified by DVD or Blu-ray (registered trademark) Disc, and/or a semiconductor memory typified by a USB memory or an SD (registered trademark) card.
Alternatively, the free-viewpoint video playback device may be a television receiver that receives TV broadcast waves, or a device that receives content data distributed over the Internet or another communication line.
Alternatively, the free-viewpoint video playback device may be configured to include an HDMI (registered trademark) (High-Definition Multimedia Interface) receiver that accepts an image signal from an external device such as a Blu-ray (registered trademark) disc player.
That is, the free-viewpoint video playback device may be any device having a function of receiving content data from outside and playing back the input content data.
(Configuration of the free-viewpoint video playback device)
FIG. 12 is a block diagram showing the configuration of a free-viewpoint video playback device 200 (hereinafter abbreviated as "playback device 200") according to this embodiment. First, the outline of the configuration of the playback device 200 will be described with reference to this figure.
The playback device 200 includes a control unit 210, a display unit 220, and an operation reception unit 230.
The control unit 210 is a CPU and controls the entire playback device 200 in an integrated manner.
The display unit 220 is a display on which field-of-view images are displayed.
The operation reception unit 230 is an operation device that receives operations by the viewer of the omnidirectional video (the user of the playback device 200).
By executing a specific program, the control unit 210 functions as a demultiplexing processing unit 211, a map display processing unit 212, a decoding processing unit 213, an azimuth data analysis unit 214, and an image cutout unit 215.
Note that the decoding processing unit 213 and the demultiplexing processing unit 211 may be realized by hardware (an LSI) instead of software.
Upon receiving an input of free-viewpoint video data from outside, the demultiplexing processing unit 211 performs demultiplexing processing on the free-viewpoint video data to extract the map data, the reproduction control data of each frame, and the encoded omnidirectional video data from the free-viewpoint video data.
The demultiplexing processing unit 211 outputs the map data and the shooting point data of each frame to the map display processing unit 212, outputs each encoded frame to the decoding processing unit 213, and outputs the azimuth data group of each frame to the azimuth data analysis unit 214.
The map display processing unit 212 displays the map represented by the map data on the display unit 220, and uses the shooting point data of each frame to display a line indicating the shooting route on the map.
The decoding processing unit 213 decodes each frame input from outside and outputs each decoded frame to the image cutout unit 215.
For each frame, the azimuth data analysis unit 214 selects all or part of the one or more pieces of azimuth data related to that frame. The azimuth data analysis unit 214 performs this selection either based on an operation by the viewer or automatically.
For each selected piece of azimuth data, the azimuth data analysis unit 214 outputs the cutout coordinates contained in that azimuth data to the image cutout unit 215.
For each frame, the image cutout unit 215 refers to the one or more cutout coordinates related to the frame and cuts out one or more field-of-view images from the frame. That is, for each of the one or more referenced cutout center coordinates, the image cutout unit 215 cuts out from the frame the field-of-view image within a region of predetermined height and width centered on that cutout center coordinate.
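The cutout operation just described can be sketched as below. This is an illustrative sketch, not the patent's implementation; it assumes an equirectangular omnidirectional frame, so the window wraps around horizontally (the frame is continuous at its left and right edges) and is clamped vertically. The argument order (x, y) for the cutout center is an assumption.

```python
import numpy as np


def cut_field_of_view(frame, center_xy, out_w, out_h):
    """Cut an out_h-by-out_w window centered at the given cutout center
    coordinates from an omnidirectional frame (2-D grayscale or 3-D color),
    wrapping horizontally and clamping vertically."""
    h, w = frame.shape[:2]
    cx, cy = center_xy
    xs = np.arange(cx - out_w // 2, cx - out_w // 2 + out_w) % w   # horizontal wrap
    ys = np.clip(np.arange(cy - out_h // 2, cy - out_h // 2 + out_h), 0, h - 1)  # vertical clamp
    return frame[np.ix_(ys, xs)]
```

A production implementation would normally reproject (rectilinear rendering of the viewing direction) rather than crop pixel-for-pixel, but the indexing above captures the "region centered on the cutout center coordinates" described in the text.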
For each frame, the image cutout unit 215 displays the one or more field-of-view images cut out from the frame on the display unit 220 within the period of that frame.
(Operation of the playback device 200)
Next, the operation of the playback device 200 will be described.
The playback device 200 has the following first mode and second mode, and its operation differs between them.
First mode: the field-of-view image cut out by referring to the cutout center coordinates of the basic azimuth data is displayed full-screen, and a field-of-view image cut out by referring to the cutout center coordinates of extended azimuth data is displayed in a sub-screen (wipe display).
Second mode: the field-of-view image cut out by referring to the cutout center coordinates of the basic azimuth data is displayed by default, and a field-of-view image cut out by referring to the cutout center coordinates of extended azimuth data is displayed in response to a viewer operation that switches the displayed field-of-view image.
First, the operation of the playback device set to the first mode will be described with reference to FIGS. 13 to 15.
FIG. 13 is a flowchart showing this operation, and FIG. 14 is a flowchart showing one step of the flowchart of FIG. 13 in detail. FIG. 15 is a diagram illustrating a display mode (PinP display) of field-of-view images by the playback device 200 set to the first display mode.
The playback device 200 starts the operation according to the flowchart of FIG. 13 at the timing when free-viewpoint video data is input from outside.
In step S41, the demultiplexing processing unit 211 performs demultiplexing processing on the free-viewpoint video data to extract the map data, the reproduction control data of each frame, and the encoded omnidirectional video data from the free-viewpoint video data.
As shown in FIG. 13, the playback device 200 performs the processing of steps S43 to S46 for each frame of the omnidirectional video data within the period of that frame.
In step S43, the decoding processing unit 213 decodes frame i (an omnidirectional image) using a preset decoding method (for example, HEVC) and outputs the decoded frame i to the image cutout unit 215.
In step S44, the azimuth data analysis unit 214 selects one or more pieces of azimuth data included in the reproduction control data Vi. For each selected piece of azimuth data, the azimuth data analysis unit 214 extracts the cutout center coordinates from that azimuth data and outputs the extracted cutout center coordinates to the image cutout unit 215.
Step S44 will be described concretely with reference to FIG. 14.
First, in step S441, the azimuth data analysis unit 214 determines whether each of the one or more pieces of azimuth data of the reproduction control data Vi contains priority display order data.
If the azimuth data analysis unit 214 determines that each piece of azimuth data of the reproduction control data Vi contains priority display order data, the process proceeds to step S442; if it determines that the azimuth data does not contain priority display order data, the process proceeds to step S443.
In step S442, the azimuth data analysis unit 214 selects, from all the azimuth data included in the reproduction control data Vi, the piece of azimuth data containing the priority display order data indicating the highest display priority, and proceeds to step S444.
In step S443, the azimuth data analysis unit 214 selects the basic azimuth data by referring to the identification information of each piece of azimuth data included in the reproduction control data Vi.
In step S444, the azimuth data analysis unit 214 extracts the cutout center coordinates from the azimuth data selected in step S442 or step S443 (the specific azimuth data, i.e., the azimuth data that the playback device 200 should use most preferentially in cutting out the plural field-of-view images).
The azimuth data analysis unit 214 outputs the extracted cutout center coordinates to the image cutout unit 215, and proceeds to step S445 after step S444.
In step S445, the azimuth data analysis unit 214 determines whether the reproduction control data Vi contains extended azimuth data.
If the playback device 200 determines that the reproduction control data Vi contains extended azimuth data, the process proceeds to step S446; if it determines that the reproduction control data Vi does not contain extended azimuth data, the process proceeds to step S45.
In step S446, the azimuth data analysis unit 214 determines whether the operation reception unit 230 has received an operation for causing the playback device 200 to select extended azimuth data.
If the playback device 200 determines that the operation reception unit 230 has received this operation, the process proceeds to step S447; if it determines that the operation reception unit 230 has not received the operation, the process proceeds to step S45.
In step S447, the azimuth data analysis unit 214 selects extended azimuth data according to the operation. Further, in step S448, the azimuth data analysis unit 214 extracts cutout center coordinates from the extended azimuth data selected in step S447, and outputs the extracted cutout center coordinates to the image cutout unit 215.
The reproducing device 200 proceeds to step S45 after step S448.
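The selection flow of steps S441 through S448 can be sketched as follows. This is a minimal illustration only: the patent does not specify a concrete data layout, so the field names (`kind`, `priority`, `center`) and the convention that a smaller number means a higher priority display order are assumptions.

```python
# Hypothetical sketch of the azimuth-data selection (steps S442-S448).
# Field names and the priority convention are illustrative assumptions.

def select_center_coordinates(azimuth_data_list, user_selected_extended=None):
    """Return the cut-out center coordinates the playback device should use.

    azimuth_data_list: azimuth data of one frame's reproduction control data Vi.
    user_selected_extended: extended azimuth data chosen via the operation
    receiving unit 230, or None if no such operation was received.
    """
    with_priority = [d for d in azimuth_data_list if d.get("priority") is not None]
    if with_priority:
        # Step S442: pick the azimuth data whose priority display order data
        # indicates the highest priority (here: the smallest number).
        chosen = min(with_priority, key=lambda d: d["priority"])
    else:
        # Step S443: fall back to the basic azimuth data, found via its
        # identification information.
        chosen = next(d for d in azimuth_data_list if d["kind"] == "basic")

    centers = {"fullscreen": chosen["center"]}  # step S444

    # Steps S445-S448: extended azimuth data selected by the user supplies the
    # cut-out center for the sub-screen (wipe) image.
    if user_selected_extended is not None:
        centers["subscreen"] = user_selected_extended["center"]
    return centers
```

The returned mapping mirrors the two outputs of step S44: the full-screen cut-out center from step S444 and, optionally, the sub-screen cut-out center from step S448.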
Returning to the flowchart of FIG. 13, the description of the operation of the playback device 200 will be continued.
In step S45, the image cutout unit 215 cuts out the field-of-view image for full-screen display from frame i, referring to the cut-out center coordinates output in step S444.
When the step immediately preceding step S45 is step S448, the image cutout unit 215 instead refers in step S45 to the cut-out center coordinates output in step S448 and cuts out the field-of-view image for sub-screen display from frame i.
After step S45, the image cutout unit 215 displays the field-of-view image for full-screen display in full screen. In addition, when the step immediately preceding that step S45 was step S448, the image cutout unit 215 superimposes the field-of-view image for sub-screen display on top of that field-of-view image.
As a result, as shown for example in FIG. 15, the display unit 220 displays a field-of-view image for full-screen display (full-screen image) 27 and a field-of-view image for sub-screen display (wipe image) 28.
(Appendix 1 regarding the first mode)
In step S44, the playback device 200 may be configured to accept an operation for displaying, as the field-of-view image, an image of any desired azimuth included in frame i (the omnidirectional image).
In this case, the playback device 200 may perform the following processing within the period of frame i: it may display that field-of-view image for a predetermined valid period, and perform the processing of step S45 after the valid period has elapsed.
The above operation may be an operation of entering the values of the cut-out center coordinates in the omnidirectional image, or an operation of designating the cut-out center coordinates (a mouse click or a touch operation). In addition, when the playback device 200 receives a mouse operation, a flick operation, or an operation of pressing a controller button during the predetermined valid period, it may change the field-of-view image of the currently displayed azimuth to the field-of-view image of another azimuth.
That is, the playback device 200 may change the field-of-view image of the currently displayed azimuth to a field-of-view image of another azimuth according to the amount and direction of mouse movement, according to the amount and direction of the flick, or according to the type of button pressed.
(Appendix 2 regarding the first mode)
The image cutout unit 215 may perform a process of correcting the cut-out center coordinates, as necessary, for each frame other than the first frame.
For example, when the distance between the cut-out center coordinates C1 of the frame immediately before the target frame and the cut-out center coordinates C2 of the target frame exceeds a predetermined value, it is desirable to correct the cut-out center coordinates of the target frame.
That is, it is desirable to correct the cut-out center coordinates of the target frame from the cut-out center coordinates C2 to the following cut-out center coordinates C3.
Cut-out center coordinates C3: the coordinates on the line segment connecting coordinates C1 and C2 whose distance from coordinates C1 equals the above predetermined value.
The reason is to ensure that, during the above operation of the playback device 200, the viewer can always keep track of which azimuth's field-of-view image is being displayed, and also to prevent the viewer from suffering motion sickness.
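The correction described in this appendix amounts to clamping how far the cut-out center may move between consecutive frames. A minimal sketch, assuming coordinates are (x, y) pairs and `max_step` stands for the predetermined value:

```python
import math

def clamp_center(c1, c2, max_step):
    """Correct the cut-out center coordinates of the target frame.

    c1: cut-out center coordinates of the frame immediately before the target.
    c2: cut-out center coordinates of the target frame.
    If the distance C1-C2 exceeds max_step, return C3: the point on the
    segment C1-C2 whose distance from C1 equals max_step. Otherwise
    return c2 unchanged (no correction needed).
    """
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    dist = math.hypot(dx, dy)
    if dist <= max_step:
        return c2
    scale = max_step / dist
    return (c1[0] + dx * scale, c1[1] + dy * scale)
```

Applied once per frame, this limits the apparent camera rotation speed, which is what keeps the displayed azimuth traceable for the viewer.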
(Appendix 3 regarding the first mode)
The size of the field-of-view image that the image cutout unit 215 cuts out from the omnidirectional image may be a preset size corresponding to the size of the display area of the display unit 220.
Alternatively, the reproduction control data Vi may include size information indicating the size of the visual field image cut out from the omnidirectional image by the image cutout unit 215. In this case, the image cutout unit 215 cuts out the visual field image with reference to the size information and the cutout center coordinates.
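Given the cut-out center coordinates and the size information, the cut-out itself can be sketched as below. This is an illustration only; the wrap-around at the horizontal edges assumes an equirectangular omnidirectional image, which the source does not mandate.

```python
def cut_out_field_image(frame, center, size):
    """Cut out a field-of-view image from an omnidirectional frame.

    frame: 2-D list (rows of pixels) representing the omnidirectional image.
    center: (cx, cy) cut-out center coordinates.
    size: (width, height) from the size information, or a preset size
    matching the display area of the display unit 220.
    Horizontal wrap-around is an assumption appropriate for an
    equirectangular image; vertical coordinates are simply clamped.
    """
    cx, cy = center
    w, h = size
    n_rows, n_cols = len(frame), len(frame[0])
    out = []
    for dy in range(-(h // 2), h - h // 2):
        row_idx = min(max(cy + dy, 0), n_rows - 1)    # clamp vertically
        row = [frame[row_idx][(cx + dx) % n_cols]     # wrap horizontally
               for dx in range(-(w // 2), w - w // 2)]
        out.append(row)
    return out
```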
(Appendix 4 regarding the first mode)
The image cutout unit 215 may perform a process of correcting distortion (distortion caused by a lens or a mirror) in the omnidirectional image, and cut out the field-of-view image from the corrected omnidirectional image. Since the distortion correction method does not directly characterize the present invention and a known method can be applied, its detailed description is omitted.
(Appendix 5 regarding the first mode)
In step S444, the azimuth data analysis unit 214 may output to the image cutout unit 215, together with the cut-out center coordinates, control information indicating that the field-of-view image cut out with reference to those cut-out center coordinates should be displayed in full screen.
In step S448, the azimuth data analysis unit 214 may output to the image cutout unit 215, together with the cut-out center coordinates, control information indicating that the field-of-view image cut out with reference to those cut-out center coordinates should be wipe-displayed.
In this case, the image cutout unit 215 may determine, by referring to the control information, whether the field-of-view image cut out with reference to the cut-out center coordinates acquired together with that control information should be displayed in full screen or wipe-displayed.
In steps S444 and S448, the azimuth data analysis unit 214 may also output to the image cutout unit 215, together with the cut-out center coordinates, the priority display order data contained in the same azimuth data as those cut-out center coordinates.
In this case, the image cutout unit 215 may specify priority display order data indicating the highest priority display order from the plurality of acquired priority display order data.
Then, the image cutout unit 215 may decide to display in full screen the field-of-view image cut out with reference to the cut-out center coordinates acquired together with that priority display order data, and to wipe-display the field-of-view images cut out with reference to the cut-out center coordinates acquired together with the other priority display order data.
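The partition just described, full-screen display for the highest priority display order and wipe display for the rest, can be sketched as follows, again assuming (the source does not say) that a smaller number means a higher priority display order:

```python
def assign_display_roles(entries):
    """Partition cut-out centers between full-screen and wipe display.

    entries: list of (center_coordinates, priority_display_order) pairs, as
    acquired from the azimuth data analysis unit 214 in steps S444 and S448.
    The entry whose priority display order data indicates the highest
    priority (smallest number, by assumption) is displayed in full screen;
    all others are wipe-displayed.
    """
    ordered = sorted(entries, key=lambda e: e[1])
    fullscreen = ordered[0][0]
    wipes = [center for center, _ in ordered[1:]]
    return fullscreen, wipes
```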
(Appendix 6 regarding the first mode)
The image cutout unit 215 may compare the value of the X component of the cut-out center coordinates of the basic azimuth data with that of the cut-out center coordinates of the extended azimuth data. When the former is smaller than the latter, the image cutout unit 215 may display the wipe image 28 at the right edge of the display; when the former is larger than the latter, it may display the wipe image 28 at the left edge of the display.
By doing so, the viewer can intuitively understand whether the subject shown in the wipe image 28 is (or was) to the right or to the left of the subject shown in the full-screen image 27, as seen from the shooting point of the omnidirectional image.
In addition, when the field-of-view image of an azimuth showing a certain subject (a wonderful landscape) is displayed as the full-screen image 27, and the field-of-view image of another azimuth showing a subject related to it (the face of a person moved by that landscape) is displayed as the wipe image 28, the viewer gets the sensation of simultaneously watching, on site, the wonderful landscape and the face of the person moved by seeing it.
Note that the display position of the wipe screen may be the upper end and the lower end instead of the left end and the right end.
In this case, the image cutout unit 215 may compare the value of the Y component of the cut-out center coordinates of the basic azimuth data with that of the cut-out center coordinates of the extended azimuth data. When the former is smaller than the latter, the image cutout unit 215 may display the wipe image 28 at the top edge of the display; when the former is larger than the latter, it may display the wipe image 28 at the bottom edge of the display.
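A hypothetical helper implementing the placement rule of this appendix; the source prescribes only the strict comparisons, so the behavior on equal components (here: the left or bottom edge) is an assumption:

```python
def wipe_position(basic_center, extended_center, axis="x"):
    """Decide where to place the wipe image 28, per Appendix 6.

    Compares one component of the basic azimuth data's cut-out center with
    the same component of the extended azimuth data's cut-out center.
    Equal components fall through to the left/bottom edge (an assumption;
    the source leaves ties unspecified).
    """
    if axis == "x":
        # basic X < extended X -> wipe at the right edge, else left edge
        return "right" if basic_center[0] < extended_center[0] else "left"
    # basic Y < extended Y -> wipe at the top edge, else bottom edge
    return "top" if basic_center[1] < extended_center[1] else "bottom"
```

Placing the wipe on the side where its subject actually lies, relative to the full-screen subject, is what gives the viewer the intuitive left/right (or up/down) cue described above.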
(Appendix 7 regarding the first mode)
Instead of wipe-displaying the field-of-view image 28, the image cutout unit 215 may create an image button from the field-of-view image 28 and display the created image button.
When this image button is pressed by the user, the image cutout unit 215 may perform the following processing after deleting the image button and the field-of-view image 27 displayed on the full screen. That is, the image cutout unit 215 may display the field image 28 in full screen, create an image button using the field image 27, and display the created image button.
Thereby, the user can quickly display the field-of-view image cut out with reference to the cut-out center coordinates of the extended orientation data on the playback device 200 in full screen.
(Appendix 8 regarding the first mode)
The playback device 200 may be configured to play back the omnidirectional video twice.
That is, the playback device 200 may display the field-of-view image of each frame using only the cut-out center coordinates of the basic azimuth data during the first playback, and using only the cut-out center coordinates of the extended azimuth data during the second and subsequent playbacks. The playback device 200 may also be a digital signage device configured in this way.
In this case, the playback device 200 as digital signage can allow the viewer to browse various advertisements shown in the omnidirectional video without getting the viewer bored.
(Appendix 9 regarding the first mode)
When the reproduction control data Vi includes a plurality of azimuth data, the playback device 200 may display, for each of the plurality of azimuth data, a button corresponding to the identification information included in that azimuth data on top of the field-of-view image displayed in full screen.
When any of the displayed buttons is pressed, the playback device 200 may refer to the cut-out center coordinates of the azimuth data identified by the identification information corresponding to the pressed button, and switch the image displayed in full screen to the image cut out using those cut-out center coordinates.
This makes it possible for the viewer to display a field-of-view image including his / her favorite scene in full screen as he wishes.
(Appendix 10 regarding the first mode)
For each extended azimuth data, the playback device 200 may display a button for setting whether or not to display the field-of-view image of the azimuth indicated by that extended azimuth data.
In this case, the playback device 200 may wipe-display only the visual field image to be displayed based on the setting by the button.
When, according to the settings made with these buttons, a plurality of field-of-view images are to be displayed within a certain frame, the playback device 200 may wipe-display the field-of-view images one after another during the period of that frame, following the priority display order indicated by the priority display order data, or it may prepare a plurality of wipe screens and wipe-display all of the field-of-view images.
(Advantages of the playback device 200 set to the first mode)
As can be seen from the above description of the first mode, the playback device 200 separates the omnidirectional video data, the map data, and the reproduction control data of each frame from the free-viewpoint video data generated by the data generation device 100.
The playback device 200 selects all or part of the azimuth data included in the line-of-sight control data, and cuts out and displays a part of the omnidirectional image (a field-of-view image) using the cut-out center coordinates included in the selected azimuth data.
As a result, the playback device 200 (a device with a display screen of limited resolution and size, such as a tablet, a television, a PC, a smartphone, or an HMD) displays, for each frame (omnidirectional image) while playing back the omnidirectional video, the field-of-view image of a specific azimuth in the target omnidirectional image (the field-of-view image that the content creator wants the viewer to see).
This allows the viewer to browse the visual field image that the content creator wants the viewer to browse if no special operation is performed.
Moreover, even when the viewer performs an operation to display the image of a desired azimuth as the field-of-view image, the viewer can, after a predetermined period has elapsed, again view the field-of-view image that the content creator wants the viewer to see.
Next, the operation of the playback device set to the second mode will be described with reference to FIGS. 16 to 20.
FIG. 16 is a flowchart showing that operation, and FIG. 17 is a flowchart showing one step (step S44A) of the flowchart of FIG. 16 in detail. FIG. 18 is a diagram illustrating how the playback device 200 set to the second display mode displays the default field-of-view image on a map.
FIG. 19 is a flowchart showing another step (step S48) of the flowchart of FIG. 16 in detail, and FIG. 20 is a diagram illustrating how the playback device 200 set to the second display mode displays a field-of-view image designated by the user on the map.
The playback device 200 starts the operation according to the flowchart of FIG. 16 at the timing when the free viewpoint video data is input from the outside.
The playback device 200 proceeds to step S42 after performing step S41 already described.
In step S42, the map display processing unit 212 displays the map represented by the map data on the display unit 220, and displays a line indicating the shooting route on the map 29 using the shooting point data of each frame.
Thereafter, as shown in FIG. 16, the playback apparatus 200 performs the processing from step S43 to step S48 for each frame of the omnidirectional video data during the frame period.
That is, after performing step S43 described above, the playback device 200 proceeds to step S44A.
In step S44A, the azimuth data analysis unit 214 automatically selects one azimuth data included in the reproduction control data Vi. The azimuth data analysis unit 214 extracts cutout center coordinates from the selected azimuth data, and outputs the extracted cutout center coordinates to the image cutout unit 215.
The specific processing of step S44A is as shown in FIG. 17. Since the processing of steps S441 to S444 in FIG. 17 is the same as the processing of steps S441 to S444 in FIG. 13 described above, a detailed description of step S44A is omitted.
After step S44A, in step S45, the image cutout unit 215 cuts out the default display visual field image 31 from the frame i with reference to the cutout center coordinates output in step S444.
The image cutout unit 215 displays the visual field image 31 for default display on the map 29 after step S45 (step S46).
The reproducing device 200 proceeds to step S47 after step S46.
In step S47, the map display processing unit 212 displays the symbol 30 indicating the shooting point of the frame i at the position on the map indicated by the shooting point data of the frame i.
Also in step S47, the map display processing unit 212 obtains from the azimuth data analysis unit 214 each azimuth data that the azimuth data analysis unit 214 did not select in step S44A, and performs the following processing for each azimuth data related to frame i.
That is, from the cut-out center coordinates included in the target azimuth data and the shooting point data of frame i, the map display processing unit 212 estimates the position on the map of the subject that enters the field of view when the azimuth indicated by the target azimuth data is viewed from the shooting point of frame i, and displays the symbol 32 at the estimated position.
For example, the map display processing unit 212 may extract the field-of-view image from frame i with reference to the cut-out center coordinates included in the target azimuth data, and estimate the distance between the shooting point of frame i and the point where the subject shown in the field-of-view image exists by applying a known distance estimation technique to the field-of-view image. The map display processing unit 212 may then estimate the position of the subject on the map from the shooting point data of frame i, the cut-out center coordinates indicating the azimuth, and the estimated distance.
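The last estimation step can be sketched in map coordinates as follows. The conversion from cut-out center coordinates to a viewing azimuth depends on the omnidirectional projection and is not specified in the source, so the sketch assumes the azimuth is already available as an angle measured clockwise from map north:

```python
import math

def estimate_subject_position(shooting_point, azimuth_deg, distance):
    """Estimate the map position of the subject seen from the shooting point.

    shooting_point: (x, y) map coordinates of frame i's shooting point, with
    x increasing eastward and y increasing northward (an assumption).
    azimuth_deg: viewing azimuth in degrees, clockwise from north.
    distance: distance to the subject from the known distance estimation step.
    """
    theta = math.radians(azimuth_deg)
    x = shooting_point[0] + distance * math.sin(theta)
    y = shooting_point[1] + distance * math.cos(theta)
    return (x, y)
```

The symbol 32 would then be drawn at the returned coordinates.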
The reproducing device 200 proceeds to step S48 after step S47.
In step S48, the playback device 200 performs a visual field image switching process. The visual field image switching process in step S48 will be specifically described with reference to FIG.
First, the azimuth data analysis unit 214 determines whether or not extended azimuth data is included in the reproduction control data Vi (step S481).
The playback device 200 ends step S48 when it determines that extended azimuth data is not included in the reproduction control data Vi, and proceeds to step S482 when it determines that extended azimuth data is included.
In step S482, the image cutout unit 215 performs the following processing on each extended orientation data included in the reproduction control data Vi.
That is, the image cutout unit 215 obtains the cut-out center coordinates included in the target extended azimuth data from the azimuth data analysis unit 214, extracts a field-of-view image from frame i with reference to the obtained cut-out center coordinates, and displays the extracted field-of-view image as a thumbnail.
In the present embodiment, in step S482, the image cutout unit 215 displays, together with the thumbnail 33 of the field-of-view image, a broken line connecting the thumbnail 33 and the symbol 32 indicating the position on the map of the subject shown in that field-of-view image.
As a result, an image as shown in FIG. 18 is displayed on the display unit 220.
After step S482, the orientation data analysis unit 214 determines whether the operation reception unit 230 has received an operation for selecting any thumbnail.
The playback device 200 ends step S48 when it determines that the operation receiving unit 230 has not received an operation selecting one of the thumbnails, and proceeds to step S484 when it determines that such an operation has been received.
In step S484, the azimuth data analyzing unit 214 selects the extended azimuth data corresponding to the selected thumbnail 33 from one or more extended azimuth data related to the frame i, and the process proceeds to step S485.
In step S485, the azimuth data analysis unit 214 extracts the cut-out center coordinates from the selected extended azimuth data, and outputs the extracted cut-out center coordinates to the image cutout unit 215.
After step S485, the image cutout unit 215 cuts out a field-of-view image from frame i with reference to the cut-out center coordinates output in step S485 (step S486), and switches the displayed field-of-view image to the field-of-view image cut out in step S486 (step S487).
In the present embodiment, in step S487, the image cutout unit 215 also displays a thick frame surrounding the selected thumbnail 33 (a thick frame indicating that, as a result of the user selecting the thumbnail 33, the field-of-view image corresponding to the thumbnail 33 is being displayed).
As a result, an image as shown in FIG. 20 is displayed on the display unit 220.
(Appendix 1 regarding the second mode)
As can be seen from FIGS. 18 and 20, the map display processing unit 212 may display the symbol 30 indicating the shooting point of frame i at a large size while the default field-of-view image is displayed, and at a small size while the field-of-view image designated by the user is displayed.
(Appendix 2 regarding the second mode)
In step S482, the image cutout unit 215 may perform the following processing instead of the processing of displaying thumbnails.
That is, the image cutout unit 215 may display a button corresponding to the orientation indicated by the extended orientation data for each extended orientation data included in the reproduction control data Vi.
Specifically, the image cutout unit 215 may display the plurality of buttons side by side at the edge of the display screen.
Then, when any of the buttons is pressed, the image cutout unit 215 may display the visual field image of the azimuth corresponding to the pressed button.
Alternatively, the image cutout unit 215 may display a pull-down menu from which the visual field image of the azimuth indicated by any one of the one or more pieces of extended azimuth data included in the reproduction control data Vi can be displayed.
(Advantages of the playback device 200 set to the second mode)
According to the configuration of the playback device 200 set to the second mode, the viewer can display a desired visual field image while confirming, on the map screen, the shooting point of the current frame, the thumbnails of the visual field images that the viewer can display at the playback time of the current frame, the location of the object shown in each visual field image, and the azimuth in which that object lies as seen from the shooting point.
(Appendix 1 of Embodiment 2)
The playback device 200 is widely applicable not only to playback devices such as the televisions and digital video recorders described above, but also to devices that handle moving image data, such as digital cameras, digital movie cameras, portable movie players, mobile phones, car navigation systems, portable DVD players, and PCs.
In addition, the playback device according to the present invention is not limited to a playback device including a display; a playback device that does not itself include a display and instead displays a moving image on an external display is also included in the scope of the playback device according to the present invention.
(Appendix 2 of Embodiment 2)
Although the playback device 200 has been described as including both the first mode and the second mode, the playback device according to the present invention is not limited to such a configuration.
That is, a playback device that performs only one of the first-mode operation and the second-mode operation is also included in the scope of the playback device according to the present invention.
(Appendix 3 of Embodiment 2)
For example, the playback device 200 may play back an omnidirectional video as follows.
That is, the playback device 200 may display, in one area of the display screen, a virtual dome-shaped screen onto which the movie texture of the omnidirectional video is pasted, together with a virtual camera placed at the center of the dome whose orientation and position can be changed, and may display, in another area of the display screen, the portion of the omnidirectional video captured by the virtual camera.
In this case, the playback device 200 may identify, from the cutout center coordinates included in the basic azimuth data, the direction in which the virtual camera should face, and set the orientation of the virtual camera so that it faces that direction.
Alternatively, the basic azimuth data may include, instead of the cutout center coordinates, data indicating the direction in which the virtual camera should face, and the playback device 200 may set the orientation of the virtual camera with reference to that data.
In this case, in step S2, the data generation device 100 accepts an operation designating the orientation of the virtual camera instead of an operation inputting the cutout center coordinates, and generates basic azimuth data containing the data produced by that operation instead of the cutout center coordinates.
<Embodiment 3>
A free viewpoint video processing system according to still another embodiment of the present invention will be described with reference to FIG. 21.
FIG. 21 is a schematic diagram of the free viewpoint video processing system according to this embodiment.
As shown in FIG. 21, the free viewpoint video processing system 1 (hereinafter referred to as "system 1") according to this embodiment includes the data generation device 100 according to Embodiment 1 and the playback device 200 according to Embodiment 2.
In system 1, the data generation device 100 generates free viewpoint video data using the method described in Embodiment 1, and the playback device 200 reads out the free viewpoint video data and plays back the omnidirectional video using the method described in Embodiment 2.
The free viewpoint video data may be passed from the data generation device 100 to the playback device 200 by broadcasting or communication, or via a removable recording medium.
System 1 may be owned by a single user, or shared by a first user who owns the data generation device 100 and a second user who owns the playback device 200.
In the latter case, the first user may create free viewpoint video data using the data generation device 100 of system 1, and the second user may view the omnidirectional video using the playback device 200 of system 1.
Although the embodiments of the present invention have been described in detail with reference to the drawings, the specific configuration is not limited to these embodiments and may be changed as appropriate within a range in which the effects of the present invention are achieved. Other modifications may also be made as appropriate without departing from the scope of the object of the present invention.
<Embodiment 4>
The control blocks of the data generation device 100 and the playback device 200 may be realized by logic circuits (hardware) formed on an integrated circuit (IC chip) or the like, or by software using a CPU (Central Processing Unit).
In the latter case, the data generation device 100 and the playback device 200 each include a CPU that executes the instructions of a program, which is software realizing each function; a ROM (Read Only Memory) or a storage device (these are referred to as a "recording medium") on which the program and various data are recorded so as to be readable by a computer (or CPU); a RAM (Random Access Memory) into which the program is loaded; and the like. The object of the present invention is achieved when the computer (or CPU) reads the program from the recording medium and executes it. As the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used. The program may also be supplied to the computer via any transmission medium (such as a communication network or a broadcast wave) capable of transmitting the program. The present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
The present invention is not limited to the embodiments described above; various modifications are possible within the scope shown in the claims, and embodiments obtained by appropriately combining technical means disclosed in different embodiments are also included in the technical scope of the present invention.
[Summary]
A data generation device (for example, the data generation device 100) according to aspect 1 of the present invention includes a data generation unit (for example, the portion consisting of the basic azimuth data generation unit 111 and the extended azimuth data generation unit 112) that generates, for all or some of the frames of an omnidirectional video generated by moving shooting, a plurality of pieces of data (for example, basic azimuth data and extended azimuth data) to be referred to by a playback device (for example, the playback device 200) in order to cut out a plurality of display target images from a target frame. The data generation unit generates, as the plurality of pieces of data, a plurality of pieces of azimuth data by generating, for each of a plurality of mutually different azimuths, azimuth data indicating that azimuth. Each piece of azimuth data is data to be referred to by the playback device in order to cut out, as a display target image, an image showing the view seen when looking from the shooting point of the target frame in the azimuth indicated by that azimuth data.
Here, an omnidirectional video refers to a video in which the view in all or almost all directions from each shooting point on the route of the moving shooting is captured. It is also assumed that the target frame is omnidirectional still image data shot at a certain shooting point.
According to the above configuration, the data generation device generates a plurality of pieces of azimuth data for cutting out, from the target frame, an image showing the view when looking in one direction from the shooting point and another image showing the view when looking in a different direction from the same shooting point.
The playback device then refers to the plurality of pieces of azimuth data to cut these images out of the target frame, and plays them back in the period in which they should be played back.
Therefore, the data generation device can be said to have the effect of generating data that enables a viewer of a playback video cut out from omnidirectional video data generated by moving shooting (i.e., a user of the playback device) to confirm at the same time the views seen when looking in a plurality of different directions from the same shooting point.
In a data generation device according to aspect 2 of the present invention, in aspect 1 above, the data generation unit (for example, the portion consisting of the basic azimuth data generation unit 111, the extended azimuth data generation unit 112, and the reproduction control data generation unit 115) generates control data (for example, reproduction control data) including the plurality of pieces of azimuth data, and the data generation unit may generate, as the control data, control data in which each of the plurality of pieces of azimuth data contains identification information for distinguishing that piece of azimuth data from the other pieces.
According to the above configuration, the data generation device has the further effect of enabling the playback device to identify, merely by referring to a piece of azimuth data, which of the plurality of pieces of azimuth data it is.
In a data generation device according to aspect 3 of the present invention, in aspect 1 or 2 above, the data generation unit generates control data including the plurality of pieces of azimuth data, and the data generation unit may generate, as the control data, control data in which specific azimuth data (for example, basic azimuth data), which is the azimuth data that the playback device should use with the highest priority in order to cut out the image, is included among the plurality of pieces of azimuth data.
Here, it is assumed that a specific image among the plurality of images that can be cut out from the target frame using the plurality of pieces of azimuth data is the image that the user of the data generation device most wants the user of the playback device to view.
According to the above configuration, the data generation device has the further effect that, by generating, as the specific azimuth data, the azimuth data with which the playback device cuts out the specific image from the target frame, it can make the specific image more likely to come to the attention of the user of the playback device.
In a data generation device according to aspect 4 of the present invention, in aspect 1 or 2 above, the data generation unit generates, for each of the plurality of pieces of azimuth data, display order data (for example, priority display order data) indicating the display order of the image that the playback device cuts out with reference to that azimuth data, and the playback device may display the plurality of images in the period of the target frame such that an image with a relatively high display order is displayed relatively early.
According to the above configuration, the data generation device has the further effect that the user of the playback device can be made to view the plurality of images in the order in which the user of the data generation device wants them to be viewed.
In a data generation device according to aspect 5 of the present invention, in any of aspects 1 to 4 above, the omnidirectional video is a video generated by shooting while moving along a predetermined route, and the device may include a distribution data generation processing unit that generates distribution data including the omnidirectional video data and map data indicating a map (for example, the map 10) of the area in which the predetermined route (for example, the shooting route represented by the line 12) is located.
According to the above configuration, the data generation device has the further effect of enabling the user of the playback device to confirm the map of the area in which the omnidirectional video was shot.
A playback device (for example, the playback device 200) according to aspect 6 of the present invention includes: a data reference processing unit (for example, the azimuth data analysis unit 214) that refers to, for all or some of the frames of an omnidirectional video generated by moving shooting, a plurality of pieces of data to be referred to in order to cut out a plurality of display target images from a target frame, the plurality of pieces of data being a plurality of pieces of azimuth data indicating mutually different azimuths, each piece of azimuth data being data to be referred to by the data reference processing unit in order to cut out, as a display target image, an image showing the view seen when looking from the shooting point of the target frame in the azimuth indicated by that azimuth data; and a reproduction processing unit (for example, the image cutout unit 215) that reproduces, for all or some of the frames, the plurality of images cut out from the target frame.
Here, an omnidirectional video refers to a video in which the view in all or almost all directions from each shooting point on the route of the moving shooting is captured. It is also assumed that the target frame is omnidirectional still image data shot at a certain shooting point.
According to the above configuration, the data generation device generates a plurality of pieces of azimuth data for cutting out, from the target frame, an image showing the view when looking in one direction from the shooting point and another image showing the view when looking in a different direction from the same shooting point.
The playback device then refers to the plurality of pieces of azimuth data to cut these images out of the target frame, and plays them back in the period in which they should be played back.
Therefore, the playback device can be said to have the effect of enabling a viewer of a playback video cut out from omnidirectional video data generated by moving shooting (i.e., a user of the playback device) to confirm at the same time the views seen when looking in a plurality of different directions from the same shooting point.
In a playback device according to aspect 7 of the present invention, in aspect 6 above, the data reference processing unit refers to control data including the plurality of pieces of azimuth data, and the control data referred to by the data reference processing unit may include, among the plurality of pieces of azimuth data, specific azimuth data that is the azimuth data the device itself should refer to with the highest priority in order to cut out the image.
Here, it is assumed that a specific image among the plurality of images that can be cut out from the target frame using the plurality of pieces of azimuth data is the image that the producer of the omnidirectional video most wants the user of the playback device to view, and that this specific image is the one cut out using the specific azimuth data.
According to the above configuration, the playback device has the further effect of being able to play back the omnidirectional video in a manner that makes the specific image more likely to come to the user's attention.
In a playback device according to aspect 8 of the present invention, in aspect 6 or 7 above, the data reference processing unit refers to the plurality of pieces of azimuth data and also refers to display order data indicating, for each piece of azimuth data, the display order of the image that the device itself cuts out with reference to that azimuth data, and the reproduction processing unit may reproduce the plurality of images in the period of the target frame such that an image with a relatively high display order is displayed relatively early.
According to the above configuration, the playback device has the further effect of being able to let the user view the plurality of images in the order in which the producer of the omnidirectional video wants them to be viewed.
In a playback device according to aspect 9 of the present invention, in any of aspects 6 to 8 above, the omnidirectional video is a video generated by shooting while moving along a predetermined route, and the device includes a map display processing unit (for example, the map display processing unit 212) that displays a map (for example, the map 29) of the area in which the predetermined route is located; the map display processing unit may display, for each frame of the omnidirectional video, information indicating the shooting point of the target frame (for example, the symbol 30) on the map during the period of the target frame.
According to the above configuration, the playback device has the further effect of enabling the user to grasp, while a visual field image is displayed, the shooting point, that is, the position from which the subject shown in the visual field image (an immovable object such as a building) can be seen.
In a playback device according to aspect 10 of the present invention, in aspect 9 above, the map display processing unit may display on the map, for each frame of the omnidirectional video, during the period of the target frame, information indicating the shooting point of the target frame and information (for example, the symbol 30) indicating the position of the subject shown in the image to be cut out from the target frame by the reproduction processing unit.
According to the above configuration, the playback device has the further effect of enabling the user to grasp the approximate position of the subject shown in the visual field image (an immovable object such as a building) while the visual field image is displayed.
The present invention can also be configured as follows.
(First configuration)
An image data generation device that receives omnidirectional image data having a 360° field of view, shot with an omnidirectional camera while moving along a predetermined route, and map data that is a map of the route, and that uses the input omnidirectional data to generate free viewpoint image data enabling playback of an image in an arbitrary line-of-sight direction from the shooting position corresponding to coordinates on the map data, the device comprising:
a basic line-of-sight direction data generation unit that generates, for each frame of the omnidirectional image data, basic line-of-sight direction data that is the initial line-of-sight direction data;
an extended line-of-sight direction data generation unit that generates, for each frame of the omnidirectional image data, at least one piece of extended line-of-sight direction data that is line-of-sight direction data different from the basic line-of-sight direction data;
a map position data generation unit that generates map position data obtained by converting the shooting position of each frame of the omnidirectional image data into coordinates on the map data;
a line-of-sight direction control data generation unit that generates line-of-sight direction control data from the basic line-of-sight direction data, the extended line-of-sight direction data, and the map position data;
an encoding unit that encodes the omnidirectional image data; and
a multiplexing unit that multiplexes the omnidirectional image data, the map data, and the line-of-sight direction control data to generate the free viewpoint image data.
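Although the configuration above specifies structure rather than an implementation, its data flow can be sketched as follows. The field names, the callback for converting shooting positions to map coordinates, and the omission of the encoding step are assumptions made purely for illustration.

```python
# Illustrative data-flow sketch of the first configuration (not the claimed
# implementation): per-frame line-of-sight and map-position data are combined
# into control data, then bundled with the image data and the map data.
def generate_free_viewpoint_data(frames, map_data, to_map_coords):
    """Build a free-viewpoint bundle from per-frame records. Each record is
    assumed to hold an initial gaze, extra gazes, and a shooting position."""
    control = []
    for i, frame in enumerate(frames):
        basic = {"frame": i, "gaze_deg": frame["initial_gaze_deg"]}
        extended = [{"frame": i, "gaze_deg": g} for g in frame["extra_gaze_deg"]]
        map_pos = to_map_coords(frame["shooting_position"])
        control.append({"basic": basic, "extended": extended, "map_pos": map_pos})
    # Encoding of the image data is omitted in this sketch.
    return {"image_data": frames, "map_data": map_data, "control": control}
```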
(Second configuration)
The image data generation device according to the first configuration, wherein the basic line-of-sight direction data and each of the plurality of pieces of extended line-of-sight direction data include information for identifying that piece of data and information on the line-of-sight direction used when displaying the omnidirectional image.
(Third configuration)
The image data generation device according to the first configuration, wherein, when the basic line-of-sight direction data and at least one piece of extended line-of-sight direction data are included for a frame of the omnidirectional image data, the line-of-sight direction control data includes ranking information indicating which line-of-sight direction data is to be displayed preferentially.
(Fourth configuration)
An image data reproduction device that reproduces free-viewpoint image data enabling reproduction of an image of the field of view in an arbitrary direction seen from a viewpoint moving along a predetermined route, wherein the device obtains free-viewpoint image data in which line-of-sight direction control data is multiplexed together with omnidirectional image data having a 360° field of view captured with an omnidirectional camera while moving along the route and map data that is a map of the route, the control data including basic line-of-sight direction data that is the initial line-of-sight direction data for each frame of the omnidirectional image data, at least one piece of extended line-of-sight direction data that is line-of-sight direction data different from the basic line-of-sight direction data, and map position data obtained by converting the shooting position of each frame of the omnidirectional image data into coordinates on the map data, the device comprising:
a separation unit that separates the omnidirectional image data, the map data, and the line-of-sight direction control data from the free-viewpoint image data;
a line-of-sight direction control unit that selects at least one piece of basic line-of-sight direction data or extended line-of-sight direction data from the line-of-sight direction control data and outputs the cut-out center coordinates of the selected line-of-sight direction data;
an image cutout unit that obtains the cut-out center coordinates and generates a display image by cutting out part of the omnidirectional image data centered on those coordinates; and
a display unit that obtains and displays the display image.
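The cutout step of this configuration can be sketched as follows: given cut-out center coordinates on an equirectangular omnidirectional frame, extract a fixed-size display window, wrapping horizontally because the 360° image is continuous in that direction. The frame layout, window size, and function name are illustrative assumptions, not taken from the specification:

```python
# Minimal sketch of the image cutout unit: extract an out_h x out_w
# window centered at (center_x, center_y), with horizontal wrap-around
# across the 360-degree seam and vertical clamping at the frame edges.

def cut_out(frame, center_x, center_y, out_w, out_h):
    """frame: 2D list (rows of pixels). Returns the display window."""
    h, w = len(frame), len(frame[0])
    # Clamp the window vertically so it stays inside the frame.
    top = max(0, min(h - out_h, center_y - out_h // 2))
    out = []
    for dy in range(out_h):
        row = frame[top + dy]
        # Modulo on the column index wraps around the horizontal seam.
        out.append([row[(center_x - out_w // 2 + dx) % w]
                    for dx in range(out_w)])
    return out

# 4x8 toy "omnidirectional" frame whose pixel values encode column index
frame = [[c for c in range(8)] for _ in range(4)]
view = cut_out(frame, center_x=0, center_y=2, out_w=4, out_h=2)
print(view[0])  # [6, 7, 0, 1] -- wraps around the 360-degree seam
```

A real implementation would reproject rather than crop (the equirectangular image is distorted away from the equator), but the wrap-around behavior at the seam is the essential point.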
(Fifth configuration)
The image data reproduction device according to the fourth configuration, wherein the line-of-sight direction control unit outputs cut-out center coordinates of a plurality of pieces of line-of-sight direction data for the same frame of the omnidirectional image data and, at the same time, notifies the display unit of which of the display images generated by the image cutout unit from the plurality of cut-out center coordinates is the main screen.
The present invention is suitably applicable to devices that distribute omnidirectional video and devices that play back omnidirectional video.
100 Free-viewpoint video data generation device (data generation device)
110 Control unit
111 Basic azimuth data generation unit (data generation unit)
112 Extended azimuth data generation unit (data generation unit)
115 Reproduction control data generation unit (data generation unit)
116 Multiplexing processing unit (distribution data generation processing unit)
200 Free-viewpoint video playback device (playback device)
210 Control unit
212 Map display processing unit
214 Azimuth data analysis unit (data reference processing unit)
215 Image cutout unit (reproduction processing unit)
Claims (10)
- A data generation device comprising a data generation unit that, for all or some frames of an omnidirectional video generated by moving shooting, generates a plurality of data to be referred to by a playback device in order to cut out a plurality of display target images from a target frame, wherein the data generation unit generates, as the plurality of data, a plurality of azimuth data by generating azimuth data indicating each of a plurality of mutually different azimuths, and each of the plurality of azimuth data is data to be referred to by the playback device in order to cut out, as a display target image, an image showing the field of view seen when looking in the azimuth indicated by that azimuth data from the shooting point of the target frame.
- The data generation device according to claim 1, wherein the data generation unit generates control data including the plurality of azimuth data, and generates the control data such that each of the plurality of azimuth data includes identification information for distinguishing that azimuth data from the other azimuth data.
- The data generation device according to claim 1 or 2, wherein the data generation unit generates control data including the plurality of azimuth data, and generates the control data such that the plurality of azimuth data includes specific azimuth data, which is the azimuth data that the playback device should use most preferentially when cutting out the images.
- The data generation device according to claim 1 or 2, wherein the data generation unit generates, for each of the plurality of azimuth data, display order data indicating the display order of the image that the playback device cuts out by referring to that azimuth data, and the playback device is configured to be able to display a plurality of images such that an image with a relatively high display order is displayed relatively early during the period of the target frame.
- The data generation device according to any one of claims 1 to 4, wherein the omnidirectional video is a video generated by shooting while moving along a predetermined route, the device further comprising a distribution data generation processing unit that generates distribution data including the omnidirectional video data and map data showing a map of the area in which the predetermined route is located.
- A playback device comprising: a data reference processing unit that, for all or some frames of an omnidirectional video generated by moving shooting, refers to a plurality of data to be referred to in order to cut out a plurality of display target images from a target frame, wherein the plurality of data is a plurality of azimuth data indicating mutually different azimuths, and each of the plurality of azimuth data is data to be referred to by the data reference processing unit in order to cut out, as a display target image, an image showing the field of view seen when looking in the azimuth indicated by that azimuth data from the shooting point of the target frame; and a playback processing unit that, for all or some of the frames, plays back the plurality of images cut out from the target frame.
- The playback device according to claim 6, wherein the data reference processing unit refers to control data including the plurality of azimuth data, and the control data referred to by the data reference processing unit includes, among the plurality of azimuth data, specific azimuth data that the device should use most preferentially when cutting out the images.
- The playback device according to claim 6 or 7, wherein the data reference processing unit refers to the plurality of azimuth data and, for each of the plurality of azimuth data, to display order data indicating the display order of the image that the device cuts out by referring to that azimuth data, and the playback processing unit plays back the plurality of images such that an image with a relatively high display order is displayed relatively early during the period of the target frame.
- The playback device according to any one of claims 6 to 8, wherein the omnidirectional video is a video generated by shooting while moving along a predetermined route, the device further comprising a map display processing unit that displays a map of the area in which the predetermined route is located, wherein the map display processing unit displays, for each frame of the omnidirectional video, information indicating the shooting point of the target frame on the map during the period of the target frame.
- The playback device according to claim 9, wherein the map display processing unit displays on the map, for each frame of the omnidirectional video during the period of the target frame, information indicating the shooting point of the target frame and information indicating the position of a subject appearing in the image to be cut out from the target frame by the playback processing unit.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015085406 | 2015-04-17 | ||
JP2015-085406 | 2015-04-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016167160A1 true WO2016167160A1 (en) | 2016-10-20 |
Family
ID=57126136
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/061159 WO2016167160A1 (en) | 2015-04-17 | 2016-04-05 | Data generation device and reproduction device |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2016167160A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110892361A (en) * | 2017-07-19 | 2020-03-17 | 三星电子株式会社 | Display apparatus, control method of display apparatus, and computer program product thereof |
CN111739121A (en) * | 2020-06-08 | 2020-10-02 | 北京联想软件有限公司 | Method, device and equipment for drawing virtual line and storage medium |
JP2021114787A (en) * | 2017-07-04 | 2021-08-05 | キヤノン株式会社 | Information processing apparatus, information processing method, and program |
JP2022171739A (en) * | 2018-03-09 | 2022-11-11 | キヤノン株式会社 | Generation device, generation method and program |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013183249A (en) * | 2012-03-01 | 2013-09-12 | Dainippon Printing Co Ltd | Moving image display device |
JP2014132325A (en) * | 2012-12-04 | 2014-07-17 | Nintendo Co Ltd | Information processing system, information processor, program and display method |
JP2014228952A (en) * | 2013-05-20 | 2014-12-08 | 政人 矢川 | Information provision system and method thereof and program |
2016-04-05: PCT application PCT/JP2016/061159 filed as WO2016167160A1 (active, Application Filing)
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2021114787A (en) * | 2017-07-04 | 2021-08-05 | キヤノン株式会社 | Information processing apparatus, information processing method, and program |
JP7087158B2 (en) | 2017-07-04 | 2022-06-20 | キヤノン株式会社 | Information processing equipment, information processing methods and programs |
CN110892361A (en) * | 2017-07-19 | 2020-03-17 | 三星电子株式会社 | Display apparatus, control method of display apparatus, and computer program product thereof |
JP2022171739A (en) * | 2018-03-09 | 2022-11-11 | キヤノン株式会社 | Generation device, generation method and program |
JP7459195B2 (en) | 2018-03-09 | 2024-04-01 | キヤノン株式会社 | Generation device, generation method, and program |
US12113950B2 (en) | 2018-03-09 | 2024-10-08 | Canon Kabushiki Kaisha | Generation apparatus, generation method, and storage medium |
CN111739121A (en) * | 2020-06-08 | 2020-10-02 | 北京联想软件有限公司 | Method, device and equipment for drawing virtual line and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10778951B2 (en) | Camerawork generating method and video processing device | |
US10271082B2 (en) | Video distribution method, video reception method, server, terminal apparatus, and video distribution system | |
US10691202B2 (en) | Virtual reality system including social graph | |
US10805592B2 (en) | Apparatus and method for gaze tracking | |
CN106331732B (en) | Generate, show the method and device of panorama content | |
JP6558587B2 (en) | Information processing apparatus, display apparatus, information processing method, program, and information processing system | |
JP6309749B2 (en) | Image data reproducing apparatus and image data generating apparatus | |
US8730354B2 (en) | Overlay video content on a mobile device | |
KR101482025B1 (en) | Augmented reality presentations | |
KR101210315B1 (en) | Recommended depth value for overlaying a graphics object on three-dimensional video | |
JP2015187797A (en) | Image data generation device and image data reproduction device | |
JP2014215828A (en) | Image data reproduction device, and viewpoint information generation device | |
US10623792B1 (en) | Dynamic generation of on-demand video | |
JP2013505636A (en) | 3D video insert hyperlinked to interactive TV | |
WO2016167160A1 (en) | Data generation device and reproduction device | |
US20230018560A1 (en) | Virtual Reality Systems and Methods | |
JP7385385B2 (en) | Image distribution system and image distribution method | |
US10051342B1 (en) | Dynamic generation of on-demand video | |
KR102140077B1 (en) | Master device, slave device and control method thereof | |
JP6934052B2 (en) | Display control device, display control method and program | |
KR102084970B1 (en) | Virtual reality viewing method and virtual reality viewing system | |
WO2020206647A1 (en) | Method and apparatus for controlling, by means of following motion of user, playing of video content | |
JP2020101847A (en) | Image file generator, method for generating image file, image generator, method for generating image, image generation system, and program | |
US11287658B2 (en) | Picture processing device, picture distribution system, and picture processing method | |
JP2022545880A (en) | Codestream processing method, device, first terminal, second terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16779951 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
NENP | Non-entry into the national phase |
Ref country code: JP |
122 | Ep: pct application non-entry in european phase |
Ref document number: 16779951 Country of ref document: EP Kind code of ref document: A1 |