WO2018127280A1 - Controlling a camera arrangement - Google Patents
Controlling a camera arrangement
- Publication number: WO2018127280A1
- Application: PCT/EP2017/050154
- Authority: WIPO (PCT)
- Prior art keywords: camera unit, camera, arrangement, output, images captured
Classifications
- H04N13/25—Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
- G06F1/163—Wearable computers, e.g. on a belt
- G06F1/1686—Constructional details or arrangements related to integrated I/O peripherals, the I/O peripheral being an integrated camera
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G06F3/0304—Detection arrangements using opto-electronic means
- H04N13/117—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
- H04N13/282—Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
- H04N13/366—Image reproducers using viewer tracking
- G05B2219/40002—Camera, robot follows direction movement of operator head, helmet, headstick
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Definitions
- Embodiments presented herein relate to a method, a video controller, a computer program, and a computer program product for controlling a camera arrangement.
- Immersive videos, also known as 360° videos or spherical videos, are generally defined as video recordings where a view in every direction of a scene is recorded at the same time, shot using an omnidirectional camera or a collection of cameras. During playback the viewer has control of the viewing direction, like a panorama; this is a form of virtual reality.
- Immersive video combined with a visual user interface device can thus provide a feeling for the user of actually being in the scene.
- Rendering a stereoscopic view of a virtual scene comprises producing two separate views from slightly different angles corresponding to a left eye view and a right eye view for the user.
- A similar approach can be used when capturing a real-life scene: two cameras mounted slightly apart simulate a pair of eyes, and both cameras are rotated to capture different angles as the user turns his/her head.
- One issue with this approach is that there will always be a certain amount of latency involved in tracking the movement of the user's head, transferring this movement information from the visual user interface device at the render side to the cameras at the capture side, and then physically rotating the cameras. This latency should be kept small, since even a small delay between the user's movements and the video following these movements may cause motion sickness.
- An object of embodiments herein is to provide efficient control of a camera arrangement.
- There is presented a method for controlling a camera arrangement. The camera arrangement comprises a first camera unit configured to provide a 360° view of a scene and a second camera unit configured to provide a view of part of the scene.
- The first camera unit and the second camera unit are configured to at least together provide a stereoscopic view of the part of the scene.
- The method is performed by a video controller.
- The method comprises obtaining first input from a visual user interface device indicating beginning of user movement.
- The method comprises, in response thereto, setting an output of the camera arrangement to only include images captured by the first camera unit.
- The method comprises obtaining second input from the visual user interface device indicating stopping of the user movement.
- The method comprises, in response thereto, setting the output of the camera arrangement to include images at least captured by the second camera unit.
- This method provides efficient control of the camera arrangement, which in turn can be used to provide immersive videos for real-time applications.
- The effect of any latency introduced by the second camera unit is minimized by smoothly switching to output images only from the first camera unit when the user moves his/her head.
- This method enables a cost-effective solution for real-time immersive stereoscopic video capture with only two camera units.
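The two-input behaviour described above can be sketched as a small state machine. The following Python sketch is illustrative only; class and member names are assumptions, not taken from the document:

```python
from enum import Enum, auto

class OutputMode(Enum):
    OMNIDIRECTIONAL_ONLY = auto()  # user moving: 360° camera (first unit) only
    STEREOSCOPIC = auto()          # user still: include the second camera unit

class VideoControllerSketch:
    """Hypothetical controller mirroring the two inputs described above."""

    def __init__(self) -> None:
        # Default to stereoscopic output while the user is still.
        self.mode = OutputMode.STEREOSCOPIC

    def on_movement_started(self) -> OutputMode:
        # First input: beginning of user movement -> first camera unit only.
        self.mode = OutputMode.OMNIDIRECTIONAL_ONLY
        return self.mode

    def on_movement_stopped(self) -> OutputMode:
        # Second input: movement stopped -> include the second camera unit.
        self.mode = OutputMode.STEREOSCOPIC
        return self.mode
```

Keeping the switch logic this small is what allows the output to follow the user's head with minimal latency: the 360° stream is always available as a fallback while the second camera unit catches up.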
- There is further presented a video controller for controlling a camera arrangement.
- The camera arrangement comprises a first camera unit configured to provide a 360° view of a scene and a second camera unit configured to provide a view of part of the scene.
- The first camera unit and the second camera unit are configured to at least together provide a stereoscopic view of the part of the scene.
- The video controller comprises processing circuitry.
- The processing circuitry is configured to cause the video controller to obtain first input from a visual user interface device indicating beginning of user movement.
- The processing circuitry is configured to cause the video controller to, in response thereto, set an output of the camera arrangement to only include images captured by the first camera unit.
- The processing circuitry is configured to cause the video controller to obtain second input from the visual user interface device indicating stopping of the user movement.
- The processing circuitry is configured to cause the video controller to, in response thereto, set the output of the camera arrangement to include images at least captured by the second camera unit.
- The camera arrangement comprises a first camera unit configured to provide a 360° view of a scene and a second camera unit configured to provide a view of part of the scene.
- The first camera unit and the second camera unit are configured to at least together provide a stereoscopic view of the part of the scene.
- The video controller comprises processing circuitry and a storage medium.
- The storage medium stores instructions that, when executed by the processing circuitry, cause the video controller to perform operations, or steps.
- The operations, or steps, cause the video controller to obtain first input from a visual user interface device indicating beginning of user movement.
- The operations, or steps, cause the video controller to, in response thereto, set an output of the camera arrangement to only include images captured by the first camera unit.
- The operations, or steps, cause the video controller to obtain second input from the visual user interface device indicating stopping of the user movement.
- The operations, or steps, cause the video controller to, in response thereto, set the output of the camera arrangement to include images at least captured by the second camera unit.
- There is also presented a video controller for controlling a camera arrangement.
- The camera arrangement comprises a first camera unit configured to provide a 360° view of a scene and a second camera unit configured to provide a view of part of the scene.
- The first camera unit and the second camera unit are configured to at least together provide a stereoscopic view of the part of the scene.
- The video controller comprises an obtain module configured to obtain first input from a visual user interface device indicating beginning of user movement.
- The video controller comprises a set module configured to, in response thereto, set an output of the camera arrangement to only include images captured by the first camera unit.
- The video controller comprises an obtain module configured to obtain second input from the visual user interface device indicating stopping of the user movement.
- The video controller comprises a set module configured to, in response thereto, set the output of the camera arrangement to include images at least captured by the second camera unit.
- There is also presented a computer program product comprising a computer program according to the fifth aspect and a computer readable storage medium on which the computer program is stored.
- The computer readable storage medium could be a non-transitory computer readable storage medium.
- Any advantage of the first aspect may equally apply to the second, third, fourth, fifth and/or sixth aspect, respectively, and vice versa.
- Other objectives, features and advantages of the enclosed embodiments will be apparent from the following detailed disclosure, from the attached dependent claims as well as from the drawings.
- All terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein.
- All references to "a/an/the element, apparatus, component, means, step, etc." are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise.
- The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.
- Figs. 1, 2, and 3 are schematic diagrams illustrating a stereoscopic imaging system according to embodiments
- Figs. 4, 5, and 7 are flowcharts of methods according to embodiments
- Fig. 6 schematically illustrates views of a scene captured using the camera arrangement according to embodiments
- Fig. 8 is a schematic diagram showing functional units of a video controller 200 according to an embodiment
- Fig. 9 is a schematic diagram showing functional modules of a video controller 200 according to an embodiment
- Fig. 10 shows one example of a computer program product comprising computer readable storage medium according to an embodiment.
- The inventors of the present disclosure have realized that by combining a camera configured to provide a 360° view with a rotatable camera following the user's head movements, it is possible to provide a stereoscopic view when the user's head is stationary whilst smoothly switching over to a 2D video view as the user's head is turning.
- Fig. 1 is a schematic diagram illustrating a stereoscopic imaging system 100 where embodiments presented herein can be applied.
- The stereoscopic imaging system 100 comprises a camera arrangement 110.
- The camera arrangement 110 comprises a first camera unit 120 and a second camera unit 130.
- Each of the camera units 120, 130 is configured to capture a respective sequence of images.
- The sequences of images, either from only one of the camera units 120, 130 or from both the camera units 120, 130, are used to produce a video stream. This video stream can be rendered and displayed to a user.
- The first camera unit 120 is configured to provide a 360° view of a scene. In some aspects the first camera unit 120 is stationary relative to the scene. In some aspects the first camera unit 120 comprises a 360° lens. In some aspects the first camera unit 120 comprises a plurality of image capturing devices that collectively are configured to provide the 360° view of the scene. The first camera unit 120 thus ensures that a 2D video is always available in all directions of the view.
- The second camera unit 130 is configured to provide a view of part of the scene. In some aspects the second camera unit 130 is rotatably arranged with respect to a center axis of the first camera unit 120. In short, the second camera unit 130 is mounted with an offset relative to the first camera unit 120 that produces the correct parallax effect for the stereoscopic view. This will be further exemplified in Fig. 3.
- The stereoscopic imaging system 100 further comprises a visual user interface device 140.
- The visual user interface device 140 is configured to render and display a video stream to a user using a sequence of images as captured by at least one of the camera units 120, 130.
- The visual user interface device 140 is further configured to provide an indication of user movement as well as displaying an immersive stereoscopic view of the scene to the user.
- The camera arrangement 110 together with a visual user interface device 140 configured to provide an indication of user movement can thus produce an immersive stereoscopic view of the scene. Further aspects of the visual user interface device 140 will be disclosed below.
- The camera arrangement 110, the video controller 200, and the visual user interface device 140 are configured to communicate and exchange data over an interface 150.
- The interface 150 may be wired, wireless, or a combination thereof.
- The first camera unit 120 and the second camera unit 130 are configured to at least together provide a stereoscopic view of part of the scene.
- The second camera unit 130 thus provides the stereoscopic effect, but only in one direction at a time. Thus, as the user's head is moving, the second camera unit 130 must follow this motion, which always involves a certain amount of latency.
- The stereoscopic imaging system 100 therefore further comprises a video controller 200.
- The video controller 200 interfaces the camera arrangement 110 and the visual user interface device 140. Further aspects of the video controller 200 and how it can be used to minimize latency will be disclosed below.
- Fig. 2(a) schematically illustrates an embodiment of the video controller 200 where the video controller 200 comprises a video position matching and composition unit 160 and a coding and transport unit 170.
- The visual user interface device 140 comprises a rendering unit 180 and a display unit 190.
- Information about the position of the second camera unit 130 is not needed on the rendering side (i.e. by the visual user interface device 140) since the video composition is performed completely on the capture side (viz., by the video controller 200). This can be beneficial since only the capture side needs to know the camera setup.
- The video controller 200 can be fine-tuned to the exact properties of the camera units 120, 130 used in order to produce a distortion-free stereoscopic effect.
- Fig. 2(b) schematically illustrates an embodiment of the video controller 200 where the video controller 200 comprises a coding and transport unit 170 but no video position matching and composition unit.
- The composing of the view using images from both the camera units 120, 130 is then performed on the rendering side (i.e. by the visual user interface device 140).
- The sequences of images from the camera units 120, 130 are encoded and sent as two separate streams along with side information about the current rotational angle of the second camera unit 130. With this embodiment most processing is performed at the rendering side, which means that knowledge about the camera setup needs to be transmitted to the rendering side.
- Static information, such as parameters for the needed video matching and stitching, can be sent at session setup time, whereas the current rotational angle of the second camera unit 130 needs to be transmitted as continuous side information to the sequences of images.
- This embodiment is beneficial when the video controller 200 is not powerful enough to perform the needed video processing. Also, with this embodiment the video controller 200 does not need to know exactly how the video is to be rendered by the visual user interface device 140. This is beneficial if the visual user interface device 140 is replaced or if several different visual user interface devices 140 are used simultaneously.
- Fig. 3(a) illustrates an embodiment where the second camera unit 130 is a 2D camera.
- The second camera unit 130 could comprise one single image capturing device provided at an offset D to the center axis of the first camera unit 120.
- The first camera unit 120 and the second camera unit 130 are thus mounted at an offset D that provides the correct parallax effect.
- The first camera unit 120 is configured to provide image coverage in an omnidirectional view 310.
- The second camera unit 130 is configured to provide image coverage in a directional view 320. Having the second camera unit 130 rotatably arranged with respect to the center axis of the first camera unit 120 enables the second camera unit 130 to be rotated to provide the view of the user's right eye in different directions.
- Fig. 3(b) illustrates an embodiment where the second camera unit 130 is a stereoscopic camera.
- The second camera unit 130 could thus comprise two image capturing devices 130a, 130b, each provided at a respective offset D1, D2 to the center axis of the first camera unit 120.
- The image capturing devices 130a, 130b of the second camera unit 130 are thus mounted at an offset D1+D2 that provides the correct parallax effect.
- The first camera unit 120 is configured to provide image coverage in an omnidirectional view 310.
- The image capturing device 130a is configured to provide image coverage in a left directional view 320a.
- The image capturing device 130b is configured to provide image coverage in a right directional view 320b.
- Having the image capturing devices 130a, 130b rotatably arranged with respect to the center axis of the first camera unit 120 enables the image capturing devices 130a, 130b to be rotated to provide a stereoscopic view for the user's left eye and right eye, respectively, in different directions, whereas the first camera unit 120 provides a 360° view that is always available and can be used during periods when the second camera unit (defined by the image capturing devices 130a, 130b) is rotated.
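As a rough geometric illustration of these offsets, the positions of the two image capturing devices when the pair is rotated by a yaw angle can be computed as follows. This is a sketch under stated assumptions: the interocular distance, the coordinate convention, and the function name are illustrative, not taken from the document:

```python
import math

INTEROCULAR_M = 0.063  # assumed human interocular distance (about 63 mm)

def eye_camera_positions(yaw_rad, half_offset=INTEROCULAR_M / 2):
    """Horizontal-plane positions of devices 130a and 130b when the stereo
    pair is rotated by yaw_rad about the center axis of the first camera
    unit. Each device sits at D1 = D2 = half_offset perpendicular to the
    viewing direction, so D1 + D2 forms the parallax baseline."""
    # Unit vector perpendicular to the viewing direction (cos yaw, sin yaw).
    px, py = -math.sin(yaw_rad), math.cos(yaw_rad)
    left = (-half_offset * px, -half_offset * py)
    right = (half_offset * px, half_offset * py)
    return left, right
```

Whatever the yaw, the distance between the two devices stays equal to the baseline D1+D2, which is what preserves the parallax effect as the pair rotates.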
- The embodiments disclosed herein relate to mechanisms for controlling a camera arrangement 110.
- In order to obtain such mechanisms there is provided a video controller 200, a method performed by the video controller 200, and a computer program product comprising code, for example in the form of a computer program, that, when run on the video controller 200, causes the video controller 200 to perform the method.
- Figs. 4 and 5 are flow charts illustrating embodiments of methods for controlling a camera arrangement 110. The methods are performed by the video controller 200. The methods are advantageously provided as computer programs 1020.
- Fig. 4 illustrates a method for controlling a camera arrangement 110 as performed by the video controller 200 according to an embodiment.
- The camera arrangement comprises a first camera unit 120 configured to provide a 360° view of a scene, and a second camera unit 130 configured to provide a view of part of the scene, where the first camera unit 120 and the second camera unit 130 are configured to at least together provide a stereoscopic view of this part of the scene.
- S102: The video controller 200 obtains first input from the visual user interface device 140.
- The first input indicates beginning of user movement.
- The first input could be obtained over the interface 150.
- The video controller 200 is configured to perform step S104 in response to having obtained the first input:
- S104: The video controller 200 sets an output of the camera arrangement 110 to only include images captured by the first camera unit 120.
- The stream of images from the second camera unit 130 is not used. Instead, the stream of images from the first camera unit 120 is used for both eyes.
- The output of the camera arrangement 110 is thus set to only include images captured by the first camera unit 120 as long as the movement continues.
- The sequence of images from the second camera unit 130 is thus not used during periods when the second camera unit 130 is moving. This allows the transmission of the sequence of images from the second camera unit 130 to be stopped until the second camera unit 130 has reached its new position.
- The bitrate of the output of the camera arrangement 110 can thus be set very low during these periods in order not to send data that is not to be rendered.
- S108: The video controller 200 obtains second input from the visual user interface device 140.
- The second input indicates stopping of the user movement.
- The second input could be obtained over the interface 150.
- The video controller 200 is configured to perform step S110 in response to having obtained the second input:
- S110: The video controller 200 sets the output of the camera arrangement 110 to include images at least captured by the second camera unit 130.
- The output of the camera arrangement 110 is to be delivered to the visual user interface device 140 over the interface 150; see steps S118 and S122 below for examples of this.
- In some aspects the static view (i.e., when there is not any indication of user movement) includes images captured by both camera units. The output of the camera arrangement 110 is then set to include images captured by the first camera unit 120 and images captured by the second camera unit 130 in response to obtaining the second input in step S108.
- In other aspects the static view (i.e., when there is not any indication of user movement) includes images captured only by the second camera unit 130.
- The output of the camera arrangement 110 is then set to include only images captured by the second camera unit 130 in response to obtaining the second input.
- This requires the second camera unit 130 to be stereoscopic (such as in the embodiment of Fig. 3(b)) in order for the output of the camera arrangement 110 to represent a stereoscopic view of the part of the scene.
- The view from the first camera unit 120 is then only used during periods of user movement.
- The images captured by the first camera unit 120 are only included in the output of the camera arrangement 110 during absence of images captured by the second camera unit 130.
- Fig. 5 illustrates methods for controlling a camera arrangement 110 as performed by the video controller 200 according to further embodiments. It is assumed that steps S102, S104, S108, S110 are performed as described above with reference to Fig. 4, and a repeated description thereof is therefore omitted.
- There may be different ways for the video controller 200 to set the output of the camera arrangement 110 to only include images captured by the first camera unit 120 as in step S104.
- The video controller 200 is configured to perform step S104a as part of setting the output of the camera arrangement 110 to only include images captured by the first camera unit 120 in step S104:
- S104a: The video controller 200 gradually excludes the images captured by the second camera unit 130 from the output of the camera arrangement 110.
- For example, image morphing could be performed in order to fade out the images captured by the second camera unit 130 from the output of the camera arrangement 110.
- There may similarly be different ways for the video controller 200 to set the output of the camera arrangement 110 to include images at least captured by the second camera unit 130 as in step S110.
- The video controller 200 is configured to perform step S110a as part of setting the output of the camera arrangement 110 to include images at least captured by the second camera unit 130 in step S110:
- S110a: The video controller 200 gradually includes the images captured by the second camera unit 130 in the output of the camera arrangement 110.
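The gradual exclusion of step S104a and the gradual inclusion of step S110a can be illustrated with a per-frame blend. This is a minimal sketch assuming frames are NumPy arrays of equal shape; the image morphing mentioned above would be more elaborate:

```python
import numpy as np

def crossfade(omni_frame: np.ndarray, stereo_frame: np.ndarray,
              alpha: float) -> np.ndarray:
    """Blend between the 360° camera frame and the second camera unit's frame.
    alpha = 1.0 -> only the second (stereoscopic) camera unit contributes;
    alpha = 0.0 -> only the 360° camera contributes (user is moving).
    Ramping alpha down over a few frames sketches the gradual exclusion of
    step S104a; ramping it back up sketches the gradual inclusion of S110a."""
    alpha = float(np.clip(alpha, 0.0, 1.0))
    blended = (1.0 - alpha) * omni_frame + alpha * stereo_frame
    return blended.astype(omni_frame.dtype)
```

For example, stepping alpha from 1.0 to 0.0 over ten frames fades the second camera unit out smoothly instead of switching abruptly.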
- In some embodiments the first input further comprises movement data (such as head-tracking information) of the user movement, and the video controller 200 is then configured to perform step S106:
- S106: The video controller 200 steers the second camera unit 130 according to the movement data.
- The second camera unit 130 can thereby be rotated to match the new viewing direction.
- The bitrate of the output from the camera arrangement 110 could be kept low by using differential coding of the images from the second camera unit 130 (when present in the output).
- The video controller 200 is configured to perform step S112 (for example by the coding and transport unit 170) when the output of the camera arrangement 110 comprises images captured by the first camera unit 120 and images captured by the second camera unit 130:
- S112: The video controller 200 differentially quantizes the images captured by one of the first camera unit 120 and the second camera unit 130 based on the images captured by the other of the first camera unit 120 and the second camera unit 130.
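One possible reading of step S112 is coarse residual coding of one camera unit's frame against the other's. The sketch below is a simplified stand-in under that assumption; a real codec would use motion-compensated prediction and entropy coding rather than a fixed quantization step:

```python
import numpy as np

def encode_differential(base: np.ndarray, target: np.ndarray,
                        step: int = 4) -> np.ndarray:
    """Quantize the target frame as a coarse residual against the base frame
    from the other camera unit (illustrative stand-in for step S112)."""
    residual = target.astype(np.int16) - base.astype(np.int16)
    return (residual // step).astype(np.int16)  # quantized residual

def decode_differential(base: np.ndarray, q_residual: np.ndarray,
                        step: int = 4) -> np.ndarray:
    """Reconstruct the target frame from the base frame plus the residual."""
    recon = base.astype(np.int16) + q_residual.astype(np.int16) * step
    return np.clip(recon, 0, 255).astype(np.uint8)
```

Because the two camera units see largely overlapping content, the residual is small and compresses well, which is how the differential coding keeps the output bitrate low.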
- The video controller 200 is configured to perform step S114 in an embodiment where the output of the camera arrangement 110 comprises images captured by the first camera unit 120 and images captured by the second camera unit 130:
- S114: The video controller 200 stitches the images captured by the second camera unit 130 to the images captured by the first camera unit 120.
- The stitching in step S114 comprises image blending between the images captured by the second camera unit 130 and the images captured by the first camera unit 120.
- The stitching might be implemented as a simple crossfade.
- Position matching between images from the first camera unit 120 and the second camera unit 130 can be used in order to avoid artifacts in the mixed output comprising sequences of images from both the first camera unit 120 and the second camera unit 130. Further, depending on physical properties of the first camera unit 120 and the second camera unit 130, there could be differences in resolution as well as hue, barrel distortion, etc., that need to be compensated for before the two sequences of images can be successfully stitched together without noticeable unwanted effects. Since stitching of images from the first camera unit 120 and the second camera unit 130 is only performed in the outer regions of the user's view, the quality of stitching is less important than with at least some existing solutions where stitching might be required in the middle of the user's view.
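The stitching of step S114 can be illustrated by overlaying the directional view onto the panoramic layout with feathered (crossfaded) edges. This is a simplified sketch assuming an equirectangular panorama; the document's actual position matching, hue, and distortion compensation are not modeled:

```python
import numpy as np

def stitch_into_panorama(panorama: np.ndarray, patch: np.ndarray,
                         yaw_rad: float, feather: int = 8) -> np.ndarray:
    """Overlay the second camera unit's directional view onto the 360°
    panorama at the column matching its pointing angle, crossfading over
    `feather` columns at each edge (simple stand-in for step S114)."""
    h, w = panorama.shape[:2]
    pw = patch.shape[1]
    # Column of the panorama corresponding to the patch's pointing angle.
    start = int((yaw_rad % (2 * np.pi)) / (2 * np.pi) * w) - pw // 2
    out = panorama.astype(np.float32).copy()
    for i in range(pw):
        col = (start + i) % w            # wrap around the 360° seam
        edge = min(i, pw - 1 - i)        # distance to nearest patch edge
        a = min(1.0, (edge + 1) / feather)  # feathered blend weight
        out[:, col] = (1 - a) * out[:, col] + a * patch[:, i].astype(np.float32)
    return out.astype(panorama.dtype)
```

Feathering only the edges matches the observation above: the blend seams land in the outer regions of the user's view, where quality matters less.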
- the images from the second camera unit 130 might be used on their own (without being stitched together with the images from the first camera unit 120), and the output from the camera arrangement 110 may thus comprise images captured only by the second camera unit 130 when the second camera unit 130 is stationary.
- video composition is performed on the capture side.
- the video controller 200 is configured to perform steps S116 and S118:
- the video controller 200 combines the images captured by the first camera unit 120 and the images captured by the second camera unit 130 into a composite stream of images.
- the combining of the images results in video composition being performed (for example by the video position matching and composition unit 160).
- the video controller 200 provides the composite stream of images to the visual user interface device 140.
- the video composition is performed at the rendering side.
- the sequences of images from the first camera unit 120 and the second camera unit 130 are encoded and sent as two separate streams.
- the video controller 200 is configured to perform step S120:
- the video controller 200 provides the output of the camera arrangement 110 as two separate streams of images to the visual user interface device 140.
- this embodiment may require side information about the current rotational angle of the second camera unit 130 to be provided to the visual user interface device 140.
- the video controller 200 is configured to perform step S122:
- the video controller 200 provides side information indicating the pointing angle of the second camera unit 130 during capture of its images alongside the separate streams of images.
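One way to carry such side information alongside the separate streams is a small per-frame header. The following is a sketch under stated assumptions (JSON header, big-endian length prefix, field names invented for illustration); it is not the signalling format of the disclosure:

```python
import json
import struct

def pack_frame(stream_id, frame_index, yaw_deg, payload):
    # Prefix the encoded frame with side information: which stream it
    # belongs to, its index, and the pointing angle (yaw) of the second
    # camera unit at capture time, so the rendering side can place the view.
    header = json.dumps({"stream": stream_id,
                         "frame": frame_index,
                         "yaw_deg": yaw_deg}).encode("utf-8")
    return struct.pack(">I", len(header)) + header + payload

def unpack_frame(packet):
    # Recover the side information and the encoded frame bytes.
    (hlen,) = struct.unpack(">I", packet[:4])
    header = json.loads(packet[4:4 + hlen].decode("utf-8"))
    return header, packet[4 + hlen:]

pkt = pack_frame("cam2", 42, 31.5, b"encoded-frame-bytes")
hdr, body = unpack_frame(pkt)
```

In practice such metadata would more likely travel in the container or transport layer (e.g. as timed metadata), but the round trip shows the information the rendering side needs.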
- Figs. 6 (a) and (b) illustrate an example where the sequences of images are represented in panoramic video layout for the embodiment in Fig. 3(a).
- Fig. 6(a) illustrates the view for the user's left eye defined by a 360° panoramic view 610 provided by the first camera unit 120.
- Fig. 6(b) illustrates the view for the user's right eye defined by a 360° panoramic view 610 provided by the first camera unit 120 and a panoramic view 620 provided by the second camera unit 130.
- the composition of the sequence of images from the second camera unit 130 is performed by matching the position of the images from the second camera unit 130 and mixing the sequences of images from the second camera unit 130 with the sequences of images from the first camera unit 120 with stitching so that they are joined seamlessly. Stitching artifacts are only visible in the peripheral parts of the view defined by the images from the second camera unit 130. In this way a stereoscopic effect will be achieved within the view defined by the images from the second camera unit 130.
- Figs. 6 (c) and (d) illustrate an example of how the composition of the views for the left and right eye would be when the second camera unit 130 is a stereoscopic camera (i.e., for the embodiment in Fig. 3(b)).
- Fig. 6(c) illustrates the panoramic view for the user's left eye defined by a 360° panoramic view 610 provided by the first camera unit 120 and a panoramic view 620a provided by the image capturing device 130a.
- Fig. 6(d) illustrates the view for the user's right eye defined by a 360° panoramic view 610 provided by the first camera unit 120 and a panoramic view 620b provided by the image capturing device 130b.
- the second camera unit 130 is used for both views in the region it covers and the first camera unit 120 is used for the rest of the view.
- the panoramic views 620a and 620b are offset relative to each other by x1 - x2.
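The per-eye composition of Figs. 6(c) and (d) can be sketched as pasting the second-camera patch into a copy of the panorama at an eye-specific horizontal offset; using offsets x1 and x2 for the two image capturing devices yields views whose patches are displaced by x1 - x2. A toy illustration with grayscale arrays, where wrap-around at the 360° seam is the only geometry handled:

```python
import numpy as np

def compose_eye_view(panorama, patch, x_offset):
    # Paste the second-camera patch into a copy of the 360-degree panorama
    # at the given horizontal pixel offset, wrapping at the panorama edge.
    h, w = panorama.shape
    ph, pw = patch.shape
    view = panorama.copy()
    cols = (np.arange(pw) + x_offset) % w  # wrap around the 360-degree seam
    view[:ph, cols] = patch
    return view

pano = np.zeros((2, 8), dtype=np.uint8)        # first camera unit 120
patch = np.full((2, 3), 255, dtype=np.uint8)   # second camera unit 130
left = compose_eye_view(pano, patch, x_offset=7)   # e.g. offset x1
right = compose_eye_view(pano, patch, x_offset=5)  # e.g. offset x2
```

The horizontal displacement between the two composed views is what produces the stereoscopic effect within the region covered by the second camera unit.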
- Fig. 7 is a flowchart describing steps involved during the rendering when a user turns his/her head and how the video controller 200 causes the rendering to be performed during these steps.
- S201 The user looks in one specific direction and the second camera unit 130 is in correct position for this direction.
- the video controller 200 enables the output of the camera arrangement 110 to include images captured by the first camera unit 120 and the second camera unit 130. This causes the left eye to see a view from the first camera unit 120 and the right eye to see a composition from the first camera unit 120 and the second camera unit 130.
- the head-tracking device detects start of movement of the user's head.
- the video controller 200 obtains first input from the visual user interface device 140 indicating beginning of user movement; and in response thereto sets the output of the camera arrangement 110 to only include images captured by the first camera unit 120 as in steps S102 and S104. This causes the left eye to see a view from the first camera unit 120 and the right eye to see a composition from the first camera unit 120 and the second camera unit 130, where the contribution from the second camera unit 130 is smoothly faded out.
- S203 The user's head is moving.
- the video controller 200 keeps setting the output of the camera arrangement 110 to only include images captured by the first camera unit 120 as in step S104. This causes both eyes to see a view from only the first camera unit 120.
- the head-tracking device detects slowing down of movement of the user's head.
- the video controller 200 keeps setting the output of the camera arrangement 110 to only include images captured by the first camera unit 120 as in step S104. This causes both eyes to see a view from only the first camera unit 120.
- S205 The second camera unit 130 has reached its correct position for the new direction.
- the video controller 200 obtains second input from the visual user interface device 140 indicating stopping of the user movement; and in response thereto sets the output of the camera arrangement 110 to include images captured by at least the second camera unit 130, as in steps S108 and S110.
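The S201-S205 flow amounts to a small state machine in the video controller: drop the second camera unit from the output while the head is moving, and re-include it once the camera has reached the new direction. A minimal sketch (class and method names are invented for illustration; fading and camera steering are elided):

```python
from enum import Enum, auto

class Output(Enum):
    BOTH = auto()        # first + second camera unit: stereoscopic region
    FIRST_ONLY = auto()  # first camera unit only: monoscopic, during movement

class ViewController:
    def __init__(self):
        # S201: second camera in position for the current direction.
        self.output = Output.BOTH

    def on_movement_start(self):
        # S202: first input from the head tracker; the second camera unit's
        # contribution is removed from the output (smoothly faded in practice).
        self.output = Output.FIRST_ONLY

    def on_movement_stop(self, camera_in_position):
        # S204/S205: movement stops; only re-include the second camera unit
        # once it has been steered to the new viewing direction.
        if camera_in_position:
            self.output = Output.BOTH

vc = ViewController()
vc.on_movement_start()
during_move = vc.output
vc.on_movement_stop(camera_in_position=True)
```

Keeping the output monoscopic during movement hides the second camera unit's mechanical steering latency from the user.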
- Fig. 8 schematically illustrates, in terms of a number of functional units, the components of a video controller 200 according to an embodiment.
- Processing circuitry 210 is provided using any combination of one or more of a suitable central processing unit (CPU), multiprocessor, microcontroller, digital signal processor (DSP), etc., capable of executing software instructions stored in a computer program product 1010 (as in Fig. 10), e.g. in the form of a storage medium 230.
- the processing circuitry 210 may further be provided as at least one application specific integrated circuit (ASIC), or field programmable gate array (FPGA).
- the processing circuitry 210 is configured to cause the video controller 200 to perform a set of operations, or steps, S102-S122, as disclosed above.
- the storage medium 230 may store the set of operations
- the processing circuitry 210 may be configured to retrieve the set of operations from the storage medium 230 to cause the video controller 200 to perform the set of operations.
- the set of operations may be provided as a set of executable instructions.
- the processing circuitry 210 is thereby arranged to execute methods as herein disclosed.
- the storage medium 230 may also comprise persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory.
- the video controller 200 may further comprise a communications interface 220 at least configured for communications with the camera arrangement 110 and the visual user interface device 140.
- the communications interface 220 may comprise one or more transmitters and receivers, comprising analogue and digital components.
- the processing circuitry 210 controls the general operation of the video controller 200 e.g. by sending data and control signals to the communications interface 220 and the storage medium 230, by receiving data and reports from the communications interface 220, and by retrieving data and instructions from the storage medium 230.
- Fig. 9 schematically illustrates, in terms of a number of functional modules, the components of a video controller 200 according to an embodiment.
- the video controller 200 of Fig. 9 comprises a number of functional modules: an obtain module 210a configured to perform step S102, a set module 210b configured to perform step S104, an obtain module 210e configured to perform step S108, and a set module 210f configured to perform step S110.
- the video controller 200 of Fig. 9 may further comprise a number of optional functional modules, such as any of an include module 210c configured to perform step S104a, a steer module 210d configured to perform step S106, an exclude module 210g configured to perform step S110a, a quantize module 210h configured to perform step S112, a stitch module 210i configured to perform step S114, a combine module 210j configured to perform step S116, a provide module 210k configured to perform step S118, a provide module 210l configured to perform step S120, and a provide module 210m configured to perform step S122.
- each functional module 210a-210m may in one embodiment correspond to parts of a computer program; the functional modules then do not need to be separate modules therein, but the way in which they are implemented in software depends on the programming language used.
- one or more or all functional modules 210a-210m may be implemented by the processing circuitry 210, possibly in cooperation with the communications interface 220 and/or the storage medium 230.
- the processing circuitry 210 may thus be configured to fetch, from the storage medium 230, instructions as provided by a functional module 210a-210m and to execute these instructions, thereby performing any steps as disclosed herein.
- the video controller 200 may be provided as a standalone device or as a part of at least one further device. Thus, a first portion of the instructions performed by the video controller 200 may be executed in a first device, and a second portion of the instructions performed by the video controller 200 may be executed in a second device; the herein disclosed embodiments are not limited to any particular number of devices on which the instructions performed by the video controller 200 may be executed. Hence, the methods according to the herein disclosed embodiments are suitable to be performed by a video controller 200 residing in a cloud computational environment.
- processing circuitry 210 may be distributed among a plurality of devices, or nodes. The same applies to the functional modules 210a-210m of Fig. 9 and the computer program 1020 of Fig. 10 (see below).
- Fig. 10 shows one example of a computer program product 1010 comprising a computer readable storage medium 1030.
- on the computer readable storage medium 1030, a computer program 1020 can be stored, which computer program 1020 can cause the processing circuitry 210 and thereto operatively coupled entities and devices, such as the communications interface 220 and the storage medium 230, to execute methods according to embodiments described herein.
- the computer program 1020 and/or computer program product 1010 may thus provide means for performing any steps as herein disclosed.
- the computer program product 1010 is illustrated as an optical disc, such as a CD (compact disc) or a DVD (digital versatile disc) or a Blu-Ray disc.
- the computer program product 1010 could also be embodied as a memory, such as a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or an electrically erasable programmable read-only memory (EEPROM) and more particularly as a non-volatile storage medium of a device in an external memory such as a USB (Universal Serial Bus) memory or a Flash memory, such as a compact Flash memory.
- although the computer program 1020 is here schematically shown as a track on the depicted optical disc, the computer program 1020 can be stored in any way which is suitable for the computer program product 1010.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
The invention relates to mechanisms for controlling a camera arrangement. The camera arrangement comprises a first camera unit configured to provide a 360-degree view of a scene, and a second camera unit configured to provide a view of a portion of the scene. The first camera unit and the second camera unit are configured to at least together provide a stereoscopic view of the portion of the scene. A method is performed by a video controller. The method comprises obtaining first input from a visual user interface device indicating the beginning of a user movement. The method comprises, in response thereto, setting an output of the camera arrangement to include only images captured by the first camera unit. The method comprises obtaining second input from the visual user interface device indicating the stopping of the user movement. The method comprises, in response thereto, setting the output of the camera arrangement to include images captured by at least the second camera unit.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2017/050154 WO2018127280A1 (fr) | 2017-01-04 | 2017-01-04 | Commande d'un agencement de caméra |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2017/050154 WO2018127280A1 (fr) | 2017-01-04 | 2017-01-04 | Commande d'un agencement de caméra |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018127280A1 true WO2018127280A1 (fr) | 2018-07-12 |
Family
ID=57796337
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2017/050154 WO2018127280A1 (fr) | 2017-01-04 | 2017-01-04 | Commande d'un agencement de caméra |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018127280A1 (fr) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4818858A (en) * | 1984-10-25 | 1989-04-04 | Canon Kabushiki Kaisha | Visual sensor system for producing stereoscopic visual information |
EP0713331A1 (fr) * | 1994-11-17 | 1996-05-22 | Canon Kabushiki Kaisha | Dispositif de commande de caméra |
US20090256908A1 (en) * | 2008-04-10 | 2009-10-15 | Yong-Sheng Chen | Integrated image surveillance system and image synthesis method thereof |
EP3007038A2 (fr) * | 2014-09-22 | 2016-04-13 | Samsung Electronics Co., Ltd. | Interaction avec une image vidéo tridimensionnelle |
-
2017
- 2017-01-04 WO PCT/EP2017/050154 patent/WO2018127280A1/fr active Application Filing
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Fan et al. | A survey on 360 video streaming: Acquisition, transmission, and display | |
US10645369B2 (en) | Stereo viewing | |
- JP6410918B2 (ja) | System and method for use in playback of panoramic video content | |
US20200288113A1 (en) | System and method for creating a navigable, three-dimensional virtual reality environment having ultra-wide field of view | |
- JP6501904B2 (ja) | Spherical video streaming | |
US20190132569A1 (en) | Image processing for 360-degree camera | |
US20180342043A1 (en) | Auto Scene Adjustments For Multi Camera Virtual Reality Streaming | |
- WO2018193330A1 (fr) | Method and apparatus for streaming panoramic images | |
US10012982B2 (en) | System and method for focus and context views for telepresence and robotic teleoperation | |
US10601889B1 (en) | Broadcasting panoramic videos from one server to multiple endpoints | |
- WO2012166593A2 (fr) | System and method for creating a three-dimensional, panoramic, navigable virtual reality environment having an ultra-wide field of view | |
US20180160119A1 (en) | Method and Apparatus for Adaptive Region-Based Decoding to Enhance User Experience for 360-degree VR Video | |
- JP2013085223A (ja) | Apparatus and method for generating stereoscopic panoramic video | |
- CN111226264A (zh) | Playback apparatus and method, and generation apparatus and method | |
- CN107211081A (zh) | Video transmission based on independently encoded background updates | |
EP3629584A1 (fr) | Appareil et procédé permettant de générer et de rendre un flux vidéo | |
- CN115883882A (zh) | Image processing method, apparatus and system, network device, terminal, and storage medium | |
- EP3437319A1 (fr) | Multi-camera image coding | |
GB2557175A (en) | Method for multi-camera device | |
- WO2018127280A1 (fr) | Control of a camera arrangement | |
- JP7356579B2 (ja) | Code stream processing method and apparatus, first terminal, second terminal, and storage medium | |
- WO2019072861A1 (fr) | Selecting an animated viewing angle in an immersive virtual environment | |
- JP7395725B2 (ja) | Media resource playback and text rendering method, apparatus, device, and storage medium | |
GB2556017A (en) | Image compression method and technical equipment for the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17700315 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17700315 Country of ref document: EP Kind code of ref document: A1 |