
WO2018133996A1 - Method for combining a plurality of camera images - Google Patents


Info

Publication number
WO2018133996A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
image
areas
environment
overall
Prior art date
Application number
PCT/EP2017/082248
Other languages
German (de)
English (en)
Inventor
Raphael Cano
Jose Domingo Esparza Garcia
Original Assignee
Robert Bosch Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch Gmbh filed Critical Robert Bosch Gmbh
Publication of WO2018133996A1


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/27 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/30 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
    • B60R2300/303 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing using joined images, e.g. multiple camera images
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/60 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective
    • B60R2300/607 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective from a bird's eye viewpoint
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle

Definitions

  • The present invention relates to a method for combining a plurality of camera images. Moreover, the invention relates to a control device for a vehicle and to a vehicle comprising such a control device.
  • Combining camera images makes it possible to minimize, in the combined overall image, the number of shadow areas for which no image material is available.
  • A check is made as to whether shaded areas in the camera image of a single camera are also captured by other cameras.
  • If so, the shaded area in the camera image can be filled with image material from another camera. In this way, the information content of the combined overall image is increased.
  • The method for combining a plurality of camera images into an overall image includes the following steps: First, an environment is captured with a plurality of cameras, so that a multiplicity of camera images is generated. It is particularly advantageous for the detection areas of the cameras to overlap at least partially. After capturing, an overall perspective is defined from which the overall image is to be viewed. The overall perspective is thus a virtual perspective, which means that a distortion of the camera images into this perspective is necessary.
  • Shadow areas do not show a reproduction of the environment. The recognition of the shadow areas therefore takes place after the transformation of the camera images, the transformation comprising a conversion of the viewing perspective of the camera images.
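The capture, transformation and recombination steps above can be sketched end to end. The following is a minimal numpy sketch, not taken from the patent: all function and variable names are assumptions, the camera images are assumed to be already transformed into the common overall perspective (steps 100/200), and the per-camera shadow masks are assumed to be given (step 300).

```python
import numpy as np

def combine_camera_images(camera_images, shadow_masks):
    """Compose an overall image from per-camera images that share one pixel
    grid in the overall perspective. `shadow_masks[i]` is True where camera i
    shows only shadow and no reproduction of the environment. Pixels shadowed
    in one image are filled from any other camera that sees them (steps
    400/500); pixels no camera sees are returned as an unresolved mask."""
    h, w, c = camera_images[0].shape
    overall = np.zeros((h, w, c), dtype=camera_images[0].dtype)
    covered = np.zeros((h, w), dtype=bool)
    for img, shadow in zip(camera_images, shadow_masks):
        take = ~shadow & ~covered   # use this camera where it has real content
        overall[take] = img[take]
        covered |= take
    # step 600 result plus the mask of shadow areas no camera could replace
    return overall, ~covered
```

With two images whose shadow masks are complementary, every pixel is filled and the unresolved mask is empty.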
  • the shadow areas can thus not contribute to the information content of the overall picture and should be avoided if possible.
  • In this way, the information in the overall image is maximized and maximum use is made of the information from the individual camera images. If the overall image is presented to a user, the user receives an almost complete picture of the environment; in particular, disturbing shadow areas are largely avoided.
  • Advantageously, a step of displaying predefined graphics or colored areas on the shadow areas occurs if it is determined in the checking step that no other camera image represents the shadow area.
  • the shaded area of the environment is thus not represented by the overall picture.
  • the presentation of distortions of the obstacle is avoided.
  • the individual camera images in the overall image adjoin each other at respective seam lines.
  • The above-described replacement step is advantageously carried out such that the seam lines are displaced. In this way, areas of the individual camera images that comprise shadow areas are not used for the generation of the overall image, provided the same areas of the environment are captured by another camera image.
  • The displacement of the seam lines is advantageously carried out dynamically, so that the seams can be arranged differently according to the obstacles present in the environment. In any case, the seam lines are laid such that a minimum number of shadow areas remains in the overall image.
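Dynamic seam placement of this kind can be illustrated with a small sketch. This is a hypothetical brute-force version, not the patent's implementation: for each row of the overlap region between two adjacent camera images, the seam column is chosen to minimize the number of shadowed pixels that end up in the composite. The function name and the mask convention (True = shadowed) are assumptions.

```python
import numpy as np

def place_seam(shadow_left, shadow_right):
    """For each row of an overlap region, pick a seam column: pixels left of
    the seam come from the left image, pixels right of it from the right
    image. The seam is placed where the composite contains the fewest
    shadowed pixels."""
    h, w = shadow_left.shape
    seam = np.empty(h, dtype=int)
    for y in range(h):
        best_col, best_cost = 0, None
        for s in range(w + 1):
            # cost = shadowed pixels that would end up in the composite row
            cost = shadow_left[y, :s].sum() + shadow_right[y, s:].sum()
            if best_cost is None or cost < best_cost:
                best_col, best_cost = s, cost
        seam[y] = best_col
    return seam
```

A shadow on the right side of the left image pushes the seam leftward, so the shadowed pixels are taken from the other camera instead.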
  • Viewing the overall image is thereby simplified for a user, because he sees an almost complete picture of the environment; only a minimal number of areas of the environment cannot be displayed.
  • The determination of the overall perspective is advantageously carried out by a user input; specifically, the overall perspective is determined by the user requesting a corresponding image. This may include selecting from a variety of predefined overall perspectives. Alternatively, the selection of the overall perspective can also be done automatically, especially if an animation or the like is to be displayed in which the overall perspective changes analogously to a moving camera.
  • the recognition of shadow areas is preferably carried out such that an object in the environment is detected by distance sensors.
  • the object shadows at least a portion of the detection range of one of the cameras.
  • In the transformed camera image the obstacle is distorted, and the information content of the environment behind the obstacle cannot be displayed. Thus, a shadow area is present which, as described above, is advantageously replaced by areas of other camera images.
  • Using the distance sensors, which advantageously comprise ultrasonic sensors, in particular a size of the object can be estimated. Based on the size of the object it can be determined which part of the detection area of the camera is shadowed. Thus the extent of the shadow area can be determined, and likewise the position of the shadow area in the transformed image of the camera.
  • A transformed image is understood to mean an image that has already been converted to the overall perspective.
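Estimating the shadow extent from a measured distance and object size can be illustrated with a simple similar-triangles model on flat ground. This is an assumed geometric sketch, not the patent's method; the camera-height parameter, the units and the flat-ground assumption are all mine.

```python
import math

def shadow_extent(cam_height, obj_distance, obj_height):
    """Length of the ground strip hidden behind an object, for a camera
    mounted at `cam_height` above flat ground and an object of height
    `obj_height` standing at `obj_distance` from the camera (all in metres).
    The sight ray grazing the object's top hits the ground at distance d,
    from similar triangles: cam_height / d = obj_height / (d - obj_distance)."""
    if obj_height >= cam_height:
        return math.inf  # object taller than the camera: shadow is unbounded
    d = cam_height * obj_distance / (cam_height - obj_height)
    return d - obj_distance  # hidden strip starts at the object
```

For a camera 1 m above the ground and a 0.5 m object at 2 m distance, the grazing ray reaches the ground at 4 m, hiding a 2 m strip behind the object.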
  • the invention also relates to a control device.
  • the control unit is used to carry out the method described above.
  • the controller is advantageously connected to a plurality of cameras.
  • the control unit is part of a vehicle.
  • the invention also relates to a vehicle comprising a plurality of cameras and a control device as described above.
  • the plurality of cameras is configured to detect an environment of the vehicle.
  • An environment of the vehicle can be represented from the camera images of the plurality of cameras, with the camera images advantageously being combined into an overall image as described above.
  • The detection areas of the cameras overlap.
  • The cameras thus provide 360-degree surveillance, which means that a comprehensive representation of the environment of the vehicle can be produced. In particular, it is possible to generate a bird's-eye view that completely depicts the environment of the vehicle.
  • This bird's-eye view can be optimized in particular by minimizing the shadow areas as described above.
  • It can be advantageous to use image areas of low quality for the generation of the overall image, as long as these image areas allow the replacement of shadow areas.
  • Figure 1 is a schematic view of a sequence of the method according to an embodiment of the invention,
  • Figure 2 is a schematic view of a vehicle according to an embodiment of the invention, and
  • Figure 3 is a schematic view of an overall image during the execution of the method.
  • FIG. 1 schematically shows a flow of a method for combining a plurality of camera images according to an embodiment of the invention.
  • the method is in particular executable by a control unit 22.
  • The control unit 22 is shown in FIG. 2, which depicts a vehicle 1 comprising the control unit 22; the vehicle 1 also comprises a first camera 3, a second camera 4, a third camera 5 and a fourth camera 6.
  • the first camera 3, the second camera 4, the third camera 5 and the fourth camera 6 are connected to the control unit 22 for signal transmission. It is provided that each of the cameras 3, 4, 5, 6 detects a direction of the vehicle 1.
  • The first camera 3 faces forward, that is, it is aligned in the direction of travel of the vehicle 1, while the fourth camera 6 faces rearward, that is, opposite to the direction of travel.
  • the second camera 4 points to a right side of the vehicle 1, while the third camera 5 points to a left side of the vehicle 1.
  • The detection area of the first camera 3 partially overlaps with the detection areas of the second camera 4 and the third camera 5.
  • The detection area of the second camera 4 additionally overlaps with that of the fourth camera 6, and the detection area of the fourth camera 6 additionally overlaps with that of the third camera 5.
  • an environment 2 of the vehicle 1 is completely detectable.
  • individual subregions of the environment 2 are scanned by two cameras 3, 4, 5, 6.
  • The control unit is set up to combine the individual camera images of the cameras 3, 4, 5, 6 into an overall image. This is done according to the method, the sequence of which is shown schematically in FIG. 1. First, a detection 100 of the environment 2 with the cameras 3, 4, 5, 6 takes place.
  • each camera 3, 4, 5, 6 generates a single image, so that there is a multiplicity of individual camera images.
  • This is followed by setting 200 of an overall perspective, from which the overall image, which is to be generated from the individual camera images, is to be viewed.
  • the overall perspective is a bird's-eye view so that the images of the cameras 3, 4, 5, 6 are to be viewed from a virtual bird's eye view.
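Converting a camera image into such a virtual bird's-eye perspective is commonly done by inverse warping with a ground-plane homography. The sketch below is a generic illustration of that idea, not the patent's implementation; the homography `H_inv` (mapping output pixels back to source pixels) and the nearest-neighbour sampling are assumptions.

```python
import numpy as np

def warp_to_birds_eye(image, H_inv, out_shape):
    """Resample `image` into a top-down view. `H_inv` maps output
    (bird's-eye) pixel coordinates back to source image coordinates, the
    usual inverse-warping formulation; nearest-neighbour sampling for
    brevity. Out-of-bounds pixels stay zero."""
    out_h, out_w = out_shape
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    ones = np.ones_like(xs)
    pts = np.stack([xs, ys, ones]).reshape(3, -1).astype(float)
    src = H_inv @ pts                       # homogeneous source coordinates
    sx = np.rint(src[0] / src[2]).astype(int)
    sy = np.rint(src[1] / src[2]).astype(int)
    out = np.zeros((out_h, out_w) + image.shape[2:], dtype=image.dtype)
    ok = (sx >= 0) & (sx < image.shape[1]) & (sy >= 0) & (sy < image.shape[0])
    out[ys.ravel()[ok], xs.ravel()[ok]] = image[sy[ok], sx[ok]]
    return out
```

With the identity homography the output simply reproduces the input, which makes the mapping direction easy to verify.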
  • After the detection 100 and the setting 200, the captured camera images are in particular converted into a perspective that corresponds to the defined overall perspective.
  • FIG. 3 shows the overall image 11, wherein the overall image 11 has not yet gone through all the steps of the method shown schematically in FIG. 1.
  • a first image 7 of the first camera 3, a second image 8 of the second camera 4, a third image 9 of the third camera 5 and a fourth image 10 of the fourth camera 6 were detected.
  • These camera images 7, 8, 9, 10 partially overlap, the overlapping areas not being shown. The transformation of the camera images from the real perspective into the overall perspective entails the risk that not all areas of the environment 2 are captured by the cameras 3, 4, 5, 6.
  • The shadow areas 12, 13, 14, 15, 16, 17 are those areas of the camera images 7, 8, 9, 10 which, in the overall perspective, show no reproduction of the environment 2 due to shadowing.
  • A first object 23 is present in front of the vehicle 1.
  • the first camera 3 can thus not detect the entire environment 2 in front of the vehicle 1, since a partial area of the detection area of the first camera 3 is shielded by the first object 23.
  • A first shadow area 12 on the left side of the first object 23 and a second shadow area 13 on the right side of the first object 23 therefore contain no information about the environment 2; rather, it is not possible for the first camera 3 to capture said areas of the environment. These shadow areas appear when the first camera image 7 generated by the first camera 3 is transformed from the real perspective into the overall perspective.
  • The detection area of the second camera 4 is partially shadowed by a second obstacle 24. In this way, a third shadow area 14 and a fourth shadow area 15 are present.
  • a third object 25 partially shadows the detection area of the third camera 5, so that the fifth shadow area 16 and the sixth shadow area 17 are present. All of these shadow areas 12, 13, 14, 15, 16, 17 are determined in the step of recognizing 300.
  • Subsequently, a check 400 is carried out to determine whether the shadow areas 12, 13, 14, 15, 16, 17 in one camera image 7, 8, 9, 10 correspond to areas of the environment 2 which are represented by another camera image 7, 8, 9, 10.
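The check 400 amounts to asking, for every shadowed pixel of one camera, whether some other camera has valid content at the same position of the overall perspective. A minimal sketch under assumed conventions (masks aligned on the overall-perspective grid, True = shadowed resp. valid); the function name is hypothetical.

```python
import numpy as np

def find_replacement_sources(shadow_mask, other_valid_masks):
    """For one camera's shadow area, report which other camera (by index in
    `other_valid_masks`) has valid, non-shadowed pixels at each position;
    -1 where no other camera covers the spot or the pixel is not shadowed."""
    source = np.full(shadow_mask.shape, -1, dtype=int)
    for idx, valid in enumerate(other_valid_masks):
        takeable = shadow_mask & valid & (source == -1)
        source[takeable] = idx
    return source
```

Positions that remain -1 inside the shadow mask are exactly the areas that would later be marked with a predefined graphic or colored area instead of being replaced.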
  • In the checking step 400 it is thus determined, in particular, that the first shadow area 12 in the first camera image 7 corresponds to a region of the environment 2 that is also captured by the third camera 5.
  • a partial region (not shown in FIG. 3) of the third camera image 9 shows that region of the surroundings 2 which is not shown in the first camera image 7 due to the first shadow region 12.
  • Likewise, the second shadow area 13 comprises a region of the environment 2 which is detected by the second camera 4.
  • a region (not shown in FIG. 3) of the second camera image 8 is present, which shows the region of the environment 2 not shown in the first camera image 7 due to the second shadow region 13.
  • the region of the environment 2 which corresponds to the third shadow region 14 is detected by the first camera 3.
  • The region of the environment 2 which corresponds to the fifth shadow area 16 is, in turn, detected by the first camera 3. Finally, the region of the environment 2 that corresponds to the sixth shadow area 17 is detected by the fourth camera 6. Thus, areas can always be found in other camera images 7, 8, 9, 10 that reproduce the environment 2 in the shadow areas 12, 13, 14, 15, 16, 17.
  • The replacement 500 of the shadow areas 12, 13, 14, 15, 16, 17 is made such that each is replaced by the corresponding area of the associated other camera image; the first shadow area 12, for example, is replaced by a corresponding area of the third camera image 9.
  • the second shadow area 13 is replaced by a corresponding area from the second camera image 8.
  • The third shadow area 14 is replaced by a corresponding area of the first camera image 7.
  • The fourth shadow area 15 is likewise replaced by a corresponding area of another camera image.
  • This overall image 11 has a maximum of information about the environment 2.
  • Alternatively, the seam lines can be displaced in order to avoid the first shadow area 12 and the second shadow area 13. This could in particular be carried out such that the first seam line 18 between the first camera image 7 and the third camera image 9 and the second seam line 19 between the first camera image 7 and the second camera image 8 are shifted so that those areas of the first camera image 7 which include the shadow areas 12, 13 are no longer used for the generation of the overall image 11. This presupposes that such a displacement of the seam lines 18, 19 is possible given the dimensions of the second camera image 8 and the third camera image 9. In other words, the second camera image 8 and the third camera image 9 must be large enough to cover these areas.
  • If a shadow area cannot be replaced by areas of the other camera images 7, 8, 9, 10, a corresponding graphic and/or a colored area is displayed in its place.
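The fallback of displaying a predefined graphic or colored area on unresolvable shadow areas can be sketched as a simple mask overlay. The gray default color and the function name are assumptions, not from the patent.

```python
import numpy as np

def mark_unresolved_shadows(overall, unresolved_mask, color=(128, 128, 128)):
    """Overlay a flat colour on pixels whose shadow area could not be filled
    from any other camera image, so the viewer sees at a glance which parts
    of the overall image carry no real image information."""
    out = overall.copy()
    out[unresolved_mask] = np.asarray(color, dtype=out.dtype)
    return out
```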

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present invention relates to a method for combining a plurality of camera images into a combined image, comprising the steps of: capturing (100) an environment by means of a plurality of cameras, so that the plurality of camera images are generated; determining (200) an overall perspective from which the combined image is to be displayed; recognizing (300) shadow areas in the camera images which, when the camera images are displayed from the overall perspective, show no reproduction of the environment owing to shadowing; checking (400) whether the shadow areas in one camera image correspond to areas of the environment which are represented by another camera image; replacing (500) the shadow areas with corresponding areas of the other camera image if it was determined in the checking step (400) that such another camera image is available; and combining (600) the camera images into the combined image.
PCT/EP2017/082248 2017-01-23 2017-12-11 Method for combining a plurality of camera images WO2018133996A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102017201000.2 2017-01-23
DE102017201000.2A DE102017201000B4 (de) 2017-01-23 2017-01-23 Method for combining a plurality of camera images, control unit and vehicle

Publications (1)

Publication Number Publication Date
WO2018133996A1 (fr) 2018-07-26

Family

ID=60788573

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/082248 WO2018133996A1 (fr) 2017-01-23 2017-12-11 Method for combining a plurality of camera images

Country Status (2)

Country Link
DE (1) DE102017201000B4 (fr)
WO (1) WO2018133996A1 (fr)


Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
DE102023207634A1 * 2023-08-09 2025-02-13 Continental Autonomous Mobility Germany GmbH Method for generating an environment representation depicting a vehicle environment, control device, driver assistance system and vehicle
DE102023207633A1 * 2023-08-09 2025-02-13 Continental Autonomous Mobility Germany GmbH Method for generating an environment representation depicting a vehicle environment, control device, driver assistance system and vehicle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1775952A2 * 2005-10-17 2007-04-18 Sanyo Electric Co., Ltd. Driving assistance system for a vehicle
DE102009036200A1 * 2009-08-05 2010-05-06 Daimler Ag Method for monitoring the surroundings of a vehicle
US20100259372A1 * 2009-04-14 2010-10-14 Hyundai Motor Japan R&D Center, Inc. System for displaying views of vehicle and its surroundings
DE102010042026A1 * 2010-10-06 2012-04-12 Robert Bosch Gmbh Method and device for generating an image of at least one object in the surroundings of a vehicle
DE102014012250A1 * 2014-08-19 2015-03-26 Daimler Ag Method for image processing and display
DE102014207897A1 * 2014-04-28 2015-10-29 Robert Bosch Gmbh Method for generating a display image


Cited By (5)

Publication number Priority date Publication date Assignee Title
GB2574697A (en) * 2018-04-12 2019-12-18 Motherson Innovations Co Ltd Method, system and device of obtaining 3D-information of objects
US10733464B2 (en) 2018-04-12 2020-08-04 Motherson Innovations Company Limited Method, system and device of obtaining 3D-information of objects
GB2574697B (en) * 2018-04-12 2021-10-20 Motherson Innovations Co Ltd Method, system and device of obtaining 3D-information of objects
CN112640413A (zh) * 2018-08-29 2021-04-09 Robert Bosch GmbH Method for displaying a model of the surroundings, control device and vehicle
CN112640413B (zh) * 2018-08-29 2023-10-13 Robert Bosch GmbH Method for displaying a model of the surroundings, control device and vehicle

Also Published As

Publication number Publication date
DE102017201000B4 (de) 2025-02-27
DE102017201000A1 (de) 2018-07-26

Similar Documents

Publication Publication Date Title
DE60219730T2 (de) Display device for driving assistance
DE102012001835B4 (de) Vision system for a commercial vehicle for displaying legally prescribed fields of view of a main mirror and a wide-angle mirror
DE69425481T2 (de) Image processing method and apparatus for generating a target image from a source image with a change of perspective
DE102009046114B4 (de) Method and device for generating a calibrated projection
DE102014108684B4 (de) Vehicle with an environment monitoring device and method for operating such a monitoring device
DE102006003538B3 (de) Method for combining several image recordings into an overall image in bird's-eye view
EP2464098B1 (fr) Environment display device, vehicle with such an environment display device, and method for displaying a panoramic image
DE102006055641A1 (de) Arrangement and method for recording and reproducing images of a scene and/or an object
WO2018133996A1 (fr) Method for combining a plurality of camera images
EP3281178A1 (fr) Method for displaying an environment of a vehicle
WO2011015283A1 (fr) Method for monitoring an environment of a vehicle
DE102015210453B3 (de) Method and device for generating data for a two- or three-dimensional representation of at least a part of an object and for generating the two- or three-dimensional representation of at least the part of the object
DE102011088332A1 (de) Method for improving object detection in multi-camera systems
EP2996327B1 (fr) Surround-view system for vehicles with mounting devices
DE102014012250B4 (de) Method for image processing and display
DE102014201409A1 (de) Parking lot tracking device and method thereof
DE102010041870A1 (de) Method and system for horizon-correct stereoscopic image processing
EP3073446A1 (fr) Method for representing the surroundings of a motor vehicle
EP1912431A2 (fr) Method and device for controlling a pivotable camera
EP3833576B1 (fr) Surveillance camera system
EP3011729B1 (fr) System and method for combining images from a plurality of optical sensors
EP3106349A1 (fr) Vision system for a commercial vehicle for displaying statutory fields of view of a main mirror and a wide-angle mirror
WO2013178358A1 (fr) Method for the spatial visualization of virtual objects
DE102015212370A1 (de) Method and device for generating a representation of a vehicle environment of a vehicle
DE102013207323B4 (de) Camera system for a vehicle and vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17818495

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17818495

Country of ref document: EP

Kind code of ref document: A1
