
WO2018166170A1 - Image processing method and device, and intelligent conference terminal - Google Patents


Info

Publication number: WO2018166170A1
Authority: WO (WIPO (PCT))
Prior art keywords: image, determining, image frame, current, depth
Application number: PCT/CN2017/103282
Other languages: English (en), Chinese (zh)
Inventor: 运如靖
Original Assignees: 广州视源电子科技股份有限公司, 广州视睿电子科技有限公司
Application filed by 广州视源电子科技股份有限公司 and 广州视睿电子科技有限公司
Publication: WO2018166170A1/fr

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/141: Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/15: Conference systems
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/90: Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, and an intelligent conference terminal for image processing.
  • a smart terminal usually has a video call function; after establishing a connection with other smart terminals, it can conduct a video call based on that function.
  • the smart terminal captures the target object in real time through the camera to form an image frame, and continuously transmits the captured image frame to other intelligent terminal devices.
  • for a large intelligent terminal with a video call function, such as a smart conference tablet, the terminal itself is often fixed and generally placed opposite a window, so a user participating in a video call through the smart terminal is often backlit. As a result, the user's image information cannot be clearly displayed in the image frames captured by the camera on the smart terminal, and the closer the user is to the window, the less clearly the user's image information is displayed.
  • the image information in the image frame needs to be processed before the image frame is sent to other smart terminal devices.
  • Embodiments of the present invention provide a method, an apparatus, and an intelligent conference terminal for image processing, which increase the flexibility of image processing, thereby achieving the purpose of clearly displaying a target object in a captured image frame during a video call.
  • an embodiment of the present invention provides a method for image processing, including:
  • performing image parameter information adjustment processing on the image region corresponding to the depth of field far limit value.
  • an apparatus for image processing including:
  • a real image acquisition module configured to acquire a current live image frame captured by the camera
  • a focused image determining module configured to determine a target focused image in the current live image frame
  • a depth of field limit determining module configured to determine a depth of field limit value of the current live image frame according to the target focused image
  • the image parameter adjustment module is configured to perform image parameter information adjustment processing on the image region corresponding to the depth of field limit value.
  • an embodiment of the present invention provides an intelligent conference terminal, including: at least two cameras having optical axes parallel, and an apparatus for image processing according to the foregoing embodiment of the present invention.
  • in the image processing method, device, and intelligent conference terminal provided by the embodiments of the present invention, a current live image frame captured by a camera is first acquired, and a target focused image in the current live image frame is determined; then a depth of field far limit value of the current live image frame is determined according to the target focused image; finally, image parameter information adjustment processing is performed on the image region corresponding to the depth of field far limit value.
  • the above method, device, and intelligent conference terminal can adjust a partial image in the image frames captured during a video call, efficiently realize the determination and processing of the target area to be processed, further increase the flexibility of image processing, and effectively enhance the display effect of video participants on the smart terminal.
  • FIG. 1 is a schematic flowchart diagram of a method for image processing according to Embodiment 1 of the present invention
  • FIG. 2 is a schematic flowchart of a method for image processing according to Embodiment 2 of the present invention
  • FIG. 3 is a structural block diagram of an apparatus for image processing according to Embodiment 3 of the present invention.
  • FIG. 1 is a schematic flowchart of a method for image processing according to a first embodiment of the present invention.
  • the method is applicable to image processing of captured image frames during a video call, and may be performed by an image processing device, where the device can be implemented by software and/or hardware and is generally integrated on a smart terminal having a video call function.
  • the smart terminal may be a smart mobile terminal such as a mobile phone, a tablet computer, or a notebook, or a fixed electronic device with a video call function such as a desktop computer or a smart conference terminal.
  • the application scenario is preferably a video call.
  • the specific image area where the indoor window is located is determined, and image parameters of that area, such as image brightness and image sharpness, are then adjusted.
  • a method for image processing according to Embodiment 1 of the present invention includes the following operations:
  • the image of the capture space can be captured by the camera in real time, thereby forming a current live image frame.
  • one subject is selected as the target focused image.
  • the dynamic subject in the capture space can be used as the target focused image.
  • the image area corresponding to the dynamic subject is determined in the current live image frame; alternatively, the image corresponding to preset pixel information may be used as the target focused image, in which case the image area corresponding to the preset pixel information in the current live image frame is determined as the target focused image.
  • the actual distance of the target focused image to the front node of the camera may be determined, and the actual distance is equivalent to the focus distance of the camera at this time.
  • the focus distance may be determined according to the current pixel information of the target focused image and the corresponding depth of field information; further, the depth of field range of the image frame captured by the camera may be determined according to the focus distance and the attribute parameters of the camera.
  • the depth of field range is bounded by a depth of field near limit value and a depth of field far limit value: the near limit value is the closest distance to the camera at which an image can be displayed in the current live image frame, and the far limit value is the farthest such distance; therefore, the depth of field far limit value of the current live image frame is determined based on the determined depth of field range.
  • the current real image frame can be understood as an image frame having depth of field information.
  • the image region corresponding to the depth of field far limit value can be determined in the current live image frame, and the determined image area is then subjected to adjustment processing based on its image parameter information.
  • in order to reduce the effect that the light intensity of an indoor window in the captured current live image frame has on the display of a video participant, the image area corresponding to the depth of field far limit value can be regarded as the area where the indoor window is located, and the determined image area can be locally adjusted, thereby achieving the purpose of clearly displaying the video participants.
  • a method for image processing according to Embodiment 1 of the present invention first acquires a current live image frame captured by a camera and determines a target focused image in the current live image frame; then determines a depth of field far limit value of the current live image frame according to the target focused image; and finally performs image parameter information adjustment processing on the image area corresponding to the depth of field far limit value.
  • FIG. 2 is a schematic flowchart diagram of a method for image processing according to Embodiment 2 of the present invention.
  • the embodiment of the present invention is optimized based on the foregoing embodiment.
  • the acquiring of the current live image frame captured by the camera is further optimized as: acquiring current image frames respectively captured by at least two cameras, and performing image synthesis processing on the at least two current image frames to obtain the current live image frame, wherein each pixel in the current live image frame has corresponding depth of field information.
  • the determining of the target focused image in the current live image frame is also optimized as: determining a subject in the current live image frame according to character image features, and determining the current pixel information constituting the subject; determining whether the subject is present in the acquired previous live image frame; if the subject is present, determining the historical pixel information constituting the subject in the previous live image frame, and determining whether the current pixel information matches the historical pixel information; if not, determining that the position of the subject has changed and determining the subject as the target focused image; if so, determining average pixel information according to the current pixel information of each subject and determining the area corresponding to the average pixel information as the target focused image; if the subject is not present, acquiring preset focused pixel information and determining the corresponding area of the focused pixel information in the current live image frame as the target focused image.
  • determining the depth of field limit value of the current live image frame according to the target focused image may be optimized to: determine, according to current pixel information of the target focused image in the current live image frame, a plane coordinate information of the target focused image; determining a depth value of the target focused image according to the depth information corresponding to the current pixel information; determining the target focused image to the camera according to the plane coordinate information and the depth value The actual focus distance; determining the depth of field limit value of the current live image frame according to the actual focus distance and the acquired camera attribute parameter.
  • this embodiment further optimizes the image parameter information adjustment processing on the image region corresponding to the depth of field far limit value as: acquiring image parameter information of the image region corresponding to the depth of field far limit value, the image parameter information including an image RGB ratio, a color contrast, and an image sharpness; and, when the image parameter information does not meet the set standard parameter information, controlling adjustment of the image brightness, color contrast, and/or image sharpness of the image region so that the image parameter information conforms to the standard parameter information.
  • a second embodiment of the present invention provides a method for image processing, which specifically includes the following operations:
  • the at least two cameras used on the smart terminal are set at different positions, so for the same subject, the pixel positions of the subject in the image frames captured by different cameras are different, and the depth of field information of the subject can be determined according to the different pixel position information.
  • S202 Perform image synthesis processing on at least two current image frames respectively captured to obtain a current real image frame.
  • the current image frames captured by different cameras can be combined to obtain a current live image frame having a stereoscopic sense.
  • each pixel in the synthesized current real image frame has corresponding depth of field information.
  • the process of determining the depth of field information of each pixel may be described as: performing stereo matching on the current image frames captured by different cameras to obtain the disparity values of corresponding points across the frames, and then determining the depth of field information of different pixel points according to the relationship between disparity and depth.
  • the depth information of each pixel in the current live image frame may be stored for selection of a subsequent image area to be processed.
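As an illustrative sketch of the disparity-to-depth relationship described above: under the standard rectified-stereo pinhole model, depth = focal length * baseline / disparity. The focal length and baseline figures below are purely illustrative assumptions, since the patent does not specify them.

```python
import math

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth (meters) of a pixel from its stereo disparity (pixels),
    using the rectified-stereo relation depth = f * B / d.
    Zero disparity (no match between the two frames) maps to infinity."""
    if disparity_px <= 0:
        return math.inf
    return focal_px * baseline_m / disparity_px

# Illustrative values: 700 px focal length, 10 cm baseline between
# the two parallel-axis cameras.
for d in (35.0, 70.0, 14.0):
    print(f"{d} px -> {depth_from_disparity(d, 700.0, 0.10):.2f} m")
```

Note the inverse relationship: closer objects produce larger disparities, which is why depth information can be stored per pixel after matching.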
  • steps S203 to S209 detail the determination process of the target focused image.
  • this step identifies the subjects included in the current live image frame by means of preset character image features.
  • the current pixel information of each subject in the current live image frame can also be determined, and the current pixel information can be specifically understood as a range of pixel values of all the pixels constituting one subject.
  • step S204: Determine whether the subject is present in the acquired previous live image frame. If yes, go to step S205; if no, go to step S209.
  • This step can be used to determine whether the subject in the current live image frame also appears in the previous live image frame.
  • different subjects have characteristics that distinguish them from other subjects (such as the color of the subject's clothes and any worn ornaments), so it is possible to determine, according to the characteristics of a subject in the current live image frame, whether that subject is present in the previous live image frame. If the subject determined in the current live image frame is not present, the operation of step S209 can be performed; if the determined subject is present, the operation of step S205 can be performed.
  • in this step, the pixel position of the subject determined in the previous live image frame may be taken as the historical pixel information of the subject.
  • step S206: Determine whether the current pixel information matches the historical pixel information. If not, perform step S207; if yes, perform step S208.
  • this step may determine whether the historical pixel information of the determined subject matches the current pixel information.
  • if the subject is moving, the historical pixel information in the previous live image frame and the current pixel information in the current live image frame cannot completely match, and the operation of step S207 can be performed; if the subject is stationary, its historical pixel information may match the current pixel information, and the operation of step S208 can be performed.
  • step S207: It is determined that the position of the subject has changed, and the subject is determined as the target focused image; then step S210 is performed.
  • when the historical pixel information of the subject does not match the current pixel information, it may be determined that the position of the subject has changed, and the subject may be determined as the target focused image; after the target focused image is determined, the operation of step S210 is performed.
  • the subject with the lowest degree of matching of the historical pixel information and the current pixel information may be selected as the target focused image.
  • the degree of matching between the historical pixel information and the current pixel information may be specifically determined according to the number of matching pixel points, and the smaller the number of matched pixel points, the lower the matching degree.
  • step S208: If the current pixel information of each subject in the current live image frame matches the historical pixel information, it may be determined that the subjects are still. This step may determine the average pixel information of all the subjects in the current live image frame according to the current pixel information of each subject, and determine the area corresponding to the average pixel information as the target focused image; the operation of step S210 can be performed after the target focused image is determined.
  • step S209: The preset focused pixel information is acquired, and the corresponding region of the focused pixel information in the current live image frame is determined as the target focused image; then step S210 is performed.
  • this step handles the case where the subject is not present in the previous live image frame. This generally occurs when the captured current live image frame is the first captured frame, so that no previous live image frame exists, or when the previously captured live image frame does not actually contain the subject. In this case, the preset focused pixel information may be acquired, the area corresponding to the focused pixel information may be determined in the current live image frame, and the determined area may be taken as the target focused image; the operation of step S210 can be performed after the target focused image is determined.
  • the capturable range of the camera disposed on the smart terminal is generally fixed, so this embodiment can set the focused pixel information according to the pixel information corresponding to the focused image determined during the capture of historical image frames.
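The subject-matching flow of steps S203 to S209 can be sketched as follows. This is only a schematic reading of the patent's flow: subjects are represented as sets of pixel coordinates, the "matching degree" is taken as the count of shared pixel points, and the average-pixel region is reduced to a centroid; all of these representations are assumptions made for illustration.

```python
def choose_target_focus(subjects, previous, preset_focus_region):
    """Pick the target focused image region per steps S203-S209.

    subjects: dict mapping a subject id to its current pixel set.
    previous: dict mapping a subject id to its pixel set in the
              previous live image frame (empty if no prior frame).
    preset_focus_region: fallback region for step S209.
    """
    if not subjects or not any(s in previous for s in subjects):
        # S209: subject not seen before -> use the preset focus region.
        return preset_focus_region

    moved = {}
    for sid, current in subjects.items():
        history = previous.get(sid)
        if history is not None and current != history:
            # Fewer shared pixel points means a lower matching degree.
            moved[sid] = len(current & history)

    if moved:
        # S207: focus on the subject whose position changed the most
        # (lowest count of matching pixel points).
        return subjects[min(moved, key=moved.get)]

    # S208: all subjects are still -> focus on the averaged region.
    all_pixels = [p for s in subjects.values() for p in s]
    cx = sum(x for x, _ in all_pixels) / len(all_pixels)
    cy = sum(y for _, y in all_pixels) / len(all_pixels)
    return {(round(cx), round(cy))}
```

In this sketch the three outcomes map directly onto the patent's branches: preset region (S209), moving subject (S207), or averaged region (S208).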
  • the current live image frame is synthesized from the current image frames captured by at least two cameras, and the current live image frame includes the spatial information of each image (the plane coordinate information displayed on the screen and a depth value rendering the stereoscopic effect).
  • this embodiment may determine the corresponding plane coordinate information according to the current pixel information of the target focused image; specifically, an average pixel coordinate value may be determined according to the pixel coordinate values of the pixel points in the current pixel information, and in this embodiment the average pixel coordinate value may be regarded as the plane coordinate information of the target focused image.
  • the present embodiment can determine the depth information corresponding to the average pixel coordinate value, and use the depth information as the depth value of the target focused image.
  • the projection point of the target focused image in stereoscopic space may be determined according to the plane coordinate information and the depth value, with the pixel origin taken at the upper-left corner of the screen of the smart terminal.
  • the actual distance value of the projection point to the pixel origin can be determined according to the plane coordinate information and the depth value, and the calculated actual distance value can be regarded as the actual focus distance of the target focused image to the camera.
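The distance computation just described can be sketched as a Euclidean norm from the pixel origin, under the simplifying assumption that the plane coordinates and the depth value have already been converted to a common metric unit (the patent does not spell out that conversion).

```python
import math

def actual_focus_distance(plane_xy, depth):
    """Distance from the pixel origin (upper-left corner of the screen)
    to the projection point of the target focused image, given its
    averaged plane coordinates (x, y) and its depth value."""
    x, y = plane_xy
    return math.sqrt(x * x + y * y + depth * depth)

# Illustrative: plane coordinates (1.2, 0.9) and depth 3.5, all in meters.
print(round(actual_focus_distance((1.2, 0.9), 3.5), 3))  # 3.808
```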
  • the camera attribute parameters may include a hyperfocal distance and a lens focal length, both of which are determined by the type of camera used. Specifically, according to the actual focus distance and the acquired camera attribute parameters, the depth of field near limit value and far limit value of the current live image frame can be determined by the near-limit formula S_near = (H · D) / (H + D - F) and the far-limit formula S_far = (H · D) / (H - D - F), where S_near denotes the depth of field near limit value, S_far denotes the depth of field far limit value, H denotes the hyperfocal distance of the camera, D denotes the actual focus distance, and F denotes the lens focal length of the camera.
  • for example, if the hyperfocal distance is 6.25 meters (under a given circle-of-confusion standard), the lens focal length of the camera is 50 mm, and the actual focus distance is 4 meters, the depth of field far limit value is 11.36 meters.
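The numeric example can be checked with a short sketch. The near/far relations used here, S_near = H·D/(H + D - F) and S_far = H·D/(H - D - F), are a common textbook form chosen because they reproduce the 11.36 m figure quoted above; the patent's original formula images are not preserved in this text, so treat them as a reconstruction.

```python
def dof_limits(hyperfocal_m, focus_m, focal_len_m):
    """Depth of field near and far limit values (meters), assuming the
    relations S_near = H*D / (H + D - F) and S_far = H*D / (H - D - F).
    The far limit diverges once the focus distance reaches the
    hyperfocal distance."""
    h, d, f = hyperfocal_m, focus_m, focal_len_m
    s_near = h * d / (h + d - f)
    s_far = float("inf") if h - d - f <= 0 else h * d / (h - d - f)
    return s_near, s_far

# Patent example: H = 6.25 m, F = 50 mm = 0.05 m, D = 4 m.
near, far = dof_limits(6.25, 4.0, 0.05)
print(round(near, 2), round(far, 2))  # 2.45 11.36
```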
  • the depth of field far limit value is equivalent to the farthest distance at which the camera can capture an image, and corresponds to the farthest image area in the current live image frame.
  • based on the depth of field information, this embodiment may determine the image area corresponding to the depth of field far limit value, and acquire image parameter information of that image area, such as the image RGB ratio, color contrast, and image sharpness.
  • the image RGB ratio can be used to determine the brightness value of the image region;
  • the color contrast can be a measure of different brightness levels between the brightest white and the darkest black in the light and dark regions of the image region.
  • image sharpness can be understood as an index reflecting the sharpness of the image plane and of image edges; the higher the image sharpness, the higher the detail contrast on the image plane and the clearer the image appears.
  • the image parameter information may be compared with the set standard parameter information, and the image brightness, color contrast, and/or image sharpness adjusted according to the comparison result, so that the image parameter information finally conforms to the standard parameter information.
  • when the image area corresponding to the depth of field far limit value is a window image with higher brightness, the display brightness of the window image can be appropriately reduced, thereby achieving the purpose of clearly displaying the video participants' image information in the current live image frame.
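As a rough sketch of that local adjustment, one might estimate the region's brightness from its RGB values and scale it down toward a standard level when it is over-bright. The BT.601 luma weights and the standard level of 180 are illustrative assumptions, not values from the patent.

```python
def adjust_region_brightness(region, standard_level=180.0):
    """Dim an over-bright image region (e.g. an indoor window at the
    depth of field far limit) toward a standard brightness level.
    region: list of (r, g, b) tuples with channels in 0..255."""
    def luma(p):
        # ITU-R BT.601 weights, used here as the brightness measure.
        return 0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2]

    mean_luma = sum(luma(p) for p in region) / len(region)
    if mean_luma <= standard_level:
        return region  # already conforms to the standard level
    scale = standard_level / mean_luma
    return [tuple(min(255, round(c * scale)) for c in p) for p in region]

bright_window = [(250, 250, 240), (255, 255, 255)]
print(adjust_region_brightness(bright_window))
```

A region already at or below the standard level is returned unchanged, which matches the patent's condition of adjusting only when the parameters do not conform to the standard.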
  • the method for image processing provided by the second embodiment of the present invention details the process of acquiring the image frame, the process of determining the target focused image, and the process of determining the depth of field near and far limit values.
  • the method can acquire an image frame synthesized from dual-camera captures, and can determine the depth of field far limit value of the image frame according to the depth of field information of the synthesized image frame and the determined target focused image, thereby performing adjustment processing on the image region corresponding to the depth of field far limit value.
  • the determination and processing of the target area to be processed are realized efficiently, wholesale processing of the entire image frame is avoided, the flexibility of image processing is improved, and the image processing efficiency during a video call is increased, further enhancing the display effect of video participants on the smart terminal.
  • the embodiment further optimizes: performing brightness enhancement processing on the subject.
  • the adjustment processing of the image region corresponding to the depth of field limit value can be realized, so that the current live image frame has a clear image of the video participant.
  • the person recognized in the current live image frame can be regarded as a video participant; in addition to processing the selected image area, the recognized subject can also be directly subjected to brightness enhancement processing.
  • the specific pixels to be processed may be determined according to the current pixel information of the subject and the corresponding depth of field information; the image parameter information of the to-be-processed area is then determined and adjusted so that the brightness of the subject is improved and the subject is better displayed in the current live image frame.
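A minimal sketch of that per-subject enhancement, assuming the subject's pixels have already been selected from its current pixel information and depth of field information; the gain factor and the sparse image representation are illustrative assumptions.

```python
def enhance_subject(image, subject_pixels, gain=1.25):
    """Brighten only the pixels belonging to the recognised subject.
    image: dict mapping (x, y) -> (r, g, b).
    subject_pixels: set of (x, y) coordinates selected from the
    subject's current pixel information and depth of field information."""
    out = dict(image)
    for xy in subject_pixels:
        if xy in out:
            out[xy] = tuple(min(255, round(c * gain)) for c in out[xy])
    return out

img = {(0, 0): (100, 100, 100), (0, 1): (100, 100, 100)}
print(enhance_subject(img, {(0, 0)}))  # only (0, 0) is brightened
```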
  • FIG. 3 is a structural block diagram of an apparatus for image processing according to Embodiment 3 of the present invention.
  • the device is suitable for image processing of captured image frames during a video call, wherein the device can be implemented by software and/or hardware and is generally integrated on a smart terminal having a video call function.
  • the apparatus includes: a live image acquisition module 31, a focused image determination module 32, a depth of field limit determination module 33, and an image parameter adjustment module 34.
  • the real image acquisition module 31 is configured to acquire a current real image frame captured by the camera;
  • a focused image determining module 32 configured to determine a target focused image in the current live image frame
  • a depth of field limit determining module 33 configured to determine a depth of field limit value of the current live image frame according to the target focused image
  • the image parameter adjustment module 34 is configured to perform image parameter information adjustment processing on the image region corresponding to the depth of field limit value.
  • the device first acquires the current live image frame captured by the camera through the live image acquisition module 31; then determines the target focused image in the current live image frame through the focused image determination module 32; then determines the depth of field far limit value of the current live image frame according to the target focused image through the depth of field limit determination module 33; and finally performs image parameter information adjustment processing on the image region corresponding to the depth of field far limit value through the image parameter adjustment module 34.
  • an apparatus for image processing according to Embodiment 3 of the present invention can perform adjustment processing on a partial image in an image frame captured during a video call, efficiently realizes the determination and processing of the target area to be processed, further increases the flexibility of image processing, and effectively improves the display effect of video participants on smart terminals.
  • the live image acquisition module 31 is specifically configured to: acquire the current image frames respectively captured by at least two cameras, and perform image synthesis processing on the at least two current image frames to obtain the current live image frame, wherein each pixel point in the current live image frame has corresponding depth of field information.
  • the focused image determining module 32 includes:
  • a subject determining unit, configured to determine the subject of the current live image frame according to character image features, and to determine the current pixel information constituting the subject; an information determining unit, configured to determine whether the subject is present in the acquired previous live image frame; a first execution unit, configured to, when the subject is present, determine the historical pixel information constituting the subject in the previous live image frame and determine whether the current pixel information matches the historical pixel information; if not, determine that the position of the subject has changed and determine the subject as the target focused image; if so, determine average pixel information according to the current pixel information of each subject and determine the region corresponding to the average pixel information as the target focused image; and a second execution unit, configured to, when the subject is not present, acquire preset focused pixel information and determine the corresponding region of the focused pixel information in the current live image frame as the target focused image.
  • the focused image determining module 32 further includes: a subject processing unit, configured to perform brightness enhancement processing on the subject after the subject of the current live image frame is determined according to the character image feature .
  • the depth of field limit determination module 33 is configured to: determine the plane coordinate information of the target focused image according to the current pixel information of the target focused image in the current live image frame; determine the depth value of the target focused image according to the depth of field information corresponding to the current pixel information; determine the actual focus distance of the target focused image to the camera according to the plane coordinate information and the depth value; and determine the depth of field far limit value of the current live image frame according to the actual focus distance and the acquired camera attribute parameters.
  • the image parameter adjustment module 34 is specifically configured to: acquire image parameter information of the image region corresponding to the depth of field limit value, the image parameter information including the image RGB ratio, color contrast, and image sharpness; and, when the image parameter information does not conform to the set standard parameter information, adjust the image brightness, color contrast, and/or image sharpness of the image region so that the image parameter information conforms to the standard parameter information.
  • the fourth embodiment of the present invention further provides an intelligent conference terminal, including: at least two cameras with parallel optical axes, and an apparatus for image processing provided by the foregoing embodiments of the present invention.
  • the image processing can be performed by the image processing methods provided in the first embodiment and the second embodiment.
  • the smart conference terminal belongs to a type of electronic device having a video call function; it integrates a video call system together with at least two cameras with parallel optical axes and the apparatus for image processing provided by the above embodiment.
  • since the apparatus for image processing provided by the foregoing embodiment of the present invention is integrated in the smart conference terminal, when a video call is made with other smart terminals having a video call function, the image parameter information of a partial image in the current live image frame captured in real time can be adjusted and processed, which effectively improves the display effect of the video participants on the smart conference terminal and further improves the user experience of the terminal.
  • the storage medium is, for example, a ROM/RAM, a magnetic disk, an optical disk, or the like.
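The selection logic of the subject determining, information determining, and execution units described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the array representation of pixel information, the matching tolerance, and the function and parameter names are all assumptions.

```python
import numpy as np

def select_target_focus(current_pixels, previous_pixels, preset_focus_region, match_tol=0.02):
    """Choose the target focused region, mirroring the unit logic above.

    current_pixels:  (N, 2) array of pixel coordinates constituting the subject
                     in the current frame, or None when no person is captured.
    previous_pixels: pixel coordinates of the subject in the previous frame,
                     or None when the subject was absent from that frame.
    preset_focus_region: fallback region used when no person is present.
    match_tol: assumed relative tolerance for deciding the pixel sets match.
    """
    if current_pixels is None:
        # Second execution unit: no captured person -> use the preset region.
        return preset_focus_region

    if previous_pixels is None or current_pixels.shape != previous_pixels.shape:
        # Subject absent from the previous frame: the whole subject becomes
        # the target focused image.
        return current_pixels

    # First execution unit: compare current and historical pixel information.
    if not np.allclose(current_pixels, previous_pixels, rtol=match_tol):
        # Position changed -> the subject itself is the target focused image.
        return current_pixels

    # Position unchanged -> focus on the region around the average pixel.
    return current_pixels.mean(axis=0)
```

In practice the pixel information would come from a person detector run on each live frame; the sketch only shows how the three branches of the claim relate.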
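The depth of field limit computed by module 33 from the actual focus distance and camera property parameters can be illustrated with the standard thin-lens depth-of-field formulas (hyperfocal distance H = f²/(N·c) + f, far limit = s·(H − f)/(H − s) for s < H). The patent does not specify these formulas; the focal length, f-number, and circle of confusion below are assumed camera property parameters.

```python
def depth_of_field_far_limit(focus_distance_mm, focal_length_mm, f_number, coc_mm=0.03):
    """Far limit of the depth of field for a focus distance s (all in mm).

    Standard thin-lens approximation; the default circle of confusion
    (0.03 mm) is an assumption, not a value taken from the patent.
    """
    f, s = focal_length_mm, focus_distance_mm
    # Hyperfocal distance: focusing at H makes everything from H/2 to
    # infinity acceptably sharp.
    hyperfocal = f * f / (f_number * coc_mm) + f
    if s >= hyperfocal:
        return float("inf")  # the far limit extends to infinity
    return s * (hyperfocal - f) / (hyperfocal - s)
```

Image regions whose depth lies beyond this far limit are the ones whose parameters the adjustment module would then inspect.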
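The check-and-adjust behaviour of the image parameter adjustment module 34 can be sketched as below. The parameter names, the tolerance, and the correction strategy (snapping an out-of-tolerance value to the standard value) are assumptions for illustration only.

```python
def adjust_region_parameters(region_params, standard_params, tol=0.05):
    """Bring measured image parameters into line with standard values.

    region_params / standard_params: dicts with keys such as 'brightness',
    'color_contrast', 'sharpness' (key names assumed). Returns a new dict
    in which every parameter lies within the relative tolerance of its
    standard value; in-tolerance parameters are left untouched.
    """
    adjusted = dict(region_params)
    for key, target in standard_params.items():
        value = adjusted.get(key, target)
        if target and abs(value - target) / abs(target) > tol:
            # Out of tolerance: correct the parameter to the standard value.
            adjusted[key] = target
    return adjusted
```

A real implementation would apply gradual brightness/contrast/sharpness filters to the region's pixels rather than rewriting scalar parameters, but the conformance test is the same.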

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present invention provide an image processing method and device, and an intelligent conference terminal. The method comprises: obtaining a current live image frame captured by a camera, and determining a target focused image in the current live image frame; determining a depth of field far limit of the current live image frame according to the target focused image; and adjusting image parameter information of an image region corresponding to the depth of field far limit. By means of this method, a partial image in an image frame captured during a video call can be adjusted, the target region to be processed can be determined and processed efficiently, the flexibility of image processing is considerably improved, and the display effect of video participants on intelligent terminals is effectively improved.
PCT/CN2017/103282 2017-03-17 2017-09-25 Image processing method and device, and intelligent conference terminal WO2018166170A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710160930.7A CN106803920B (zh) 2017-03-17 2017-03-17 Image processing method and device, and intelligent conference terminal
CN201710160930.7 2017-03-17

Publications (1)

Publication Number Publication Date
WO2018166170A1 true WO2018166170A1 (fr) 2018-09-20

Family

ID=58988136

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/103282 WO2018166170A1 (fr) 2017-03-17 2017-09-25 Image processing method and device, and intelligent conference terminal

Country Status (2)

Country Link
CN (1) CN106803920B (fr)
WO (1) WO2018166170A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112351197A (zh) * 2020-09-25 2021-02-09 南京酷派软件技术有限公司 Shooting parameter adjustment method and device, storage medium, and electronic device
CN114926765A (zh) * 2022-05-18 2022-08-19 上海庄生晓梦信息科技有限公司 Image processing method and device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803920B (zh) * 2017-03-17 2020-07-10 广州视源电子科技股份有限公司 Image processing method and device, and intelligent conference terminal
CN111210471B (zh) * 2018-11-22 2023-08-25 浙江欣奕华智能科技有限公司 Positioning method, device and system
CN110545384B (zh) * 2019-09-23 2021-06-08 Oppo广东移动通信有限公司 Focusing method and device, electronic device, and computer-readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060204034A1 (en) * 2003-06-26 2006-09-14 Eran Steinberg Modification of viewing parameters for digital images using face detection information
CN103324004A (zh) * 2012-03-19 2013-09-25 联想(北京)有限公司 Focusing method and image capture device
CN104982029A (zh) * 2012-12-20 2015-10-14 微软技术许可有限责任公司 Camera with privacy mode
CN105611167A (zh) * 2015-12-30 2016-05-25 联想(北京)有限公司 Focal plane adjustment method and electronic device
CN106803920A (zh) * 2017-03-17 2017-06-06 广州视源电子科技股份有限公司 Image processing method and device, and intelligent conference terminal

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7657171B2 (en) * 2006-06-29 2010-02-02 Scenera Technologies, Llc Method and system for providing background blurring when capturing an image using an image capture device
JP2009290660A (ja) * 2008-05-30 2009-12-10 Seiko Epson Corp Image processing apparatus, image processing method, image processing program, and printing apparatus
CN104184935B (zh) * 2013-05-27 2017-09-12 鸿富锦精密工业(深圳)有限公司 Image capturing device and method
US9282285B2 (en) * 2013-06-10 2016-03-08 Citrix Systems, Inc. Providing user video having a virtual curtain to an online conference
CN103945118B (zh) * 2014-03-14 2017-06-20 华为技术有限公司 Image blurring method and device, and electronic device
CN105100615B (zh) * 2015-07-24 2019-02-26 青岛海信移动通信技术股份有限公司 Image preview method, device and terminal
CN105303543A (zh) * 2015-10-23 2016-02-03 努比亚技术有限公司 Image enhancement method and mobile terminal
CN106331510B (zh) * 2016-10-31 2019-10-15 维沃移动通信有限公司 Backlit photographing method and mobile terminal


Also Published As

Publication number Publication date
CN106803920B (zh) 2020-07-10
CN106803920A (zh) 2017-06-06

Similar Documents

Publication Publication Date Title
US11431915B2 (en) Image acquisition method, electronic device, and non-transitory computer readable storage medium
JP5222939B2 Simulating a shallow depth of field to maximize privacy in videotelephony
CN108846807B Light effect processing method, device, terminal and computer-readable storage medium
US8749607B2 Face equalization in video conferencing
WO2018166170A1 Image processing method and device, and intelligent conference terminal
EP4050881B1 High dynamic range image synthesis method and electronic device
CN103973963B Image acquisition device and image processing method thereof
TW201432616A Image capturing device and image processing method thereof
KR101294735B1 Image processing method and photographing apparatus using the same
CN110324532A Image blurring method and device, storage medium and electronic device
CN108734676A Image processing method and device, electronic device, and computer-readable storage medium
WO2018188277A1 Gaze correction method and device, intelligent conference terminal, and storage medium
CN108154514A Image processing method, apparatus and device
CN111182208B Photographing method and device, storage medium and electronic device
TW201801516A Image capturing device and photographic composition method thereof
CN104853080B Image processing device
CN111246093A Image processing method and device, storage medium and electronic device
TW201340704A Imaging device and image synthesis method thereof
WO2018196854A1 Photographing method, photographing apparatus and mobile terminal
CN106878606B Electronic device-based image generation method and electronic device
CN105933613A Image processing method and device, and mobile terminal
WO2016123850A1 Method for controlling photographing by a terminal, and terminal
WO2016202073A1 Image processing method and apparatus
CN109345602A Image processing method and device, storage medium and electronic device
CN114125408A Image processing method and device, terminal and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17900818

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.01.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17900818

Country of ref document: EP

Kind code of ref document: A1

点击 这是indexloc提供的php浏览器服务,不要输入任何密码和下载