HK1240369A - Image processing method, apparatus and intelligent terminal - Google Patents

Image processing method, apparatus and intelligent terminal

Publication number: HK1240369A
Application number: HK17113789.9A
Authority: HK (Hong Kong)
Prior art keywords: picture, information, determining, display, data
Other languages: Chinese (zh)
Other versions: HK1240369A1 (en)
Inventors: 张俊文, 康琳, 程艳
Original assignee: 斑马智行网络(香港)有限公司
Application HK17113789.9A filed by 斑马智行网络(香港)有限公司
Publication of HK1240369A (en); publication of HK1240369A1 (en)
Description

Picture processing method, apparatus, and intelligent terminal
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a picture processing method, a picture processing apparatus, and an intelligent terminal.
Background
With the development of intelligent terminal technology, more and more users directly use intelligent terminals to take and download pictures, so a large amount of picture data is typically stored on an intelligent terminal.
On an intelligent terminal holding a large amount of picture data, a user generally has to browse the pictures one by one when searching for a picture, which is cumbersome and time-consuming.
Therefore, one technical problem that urgently needs to be solved by those skilled in the art is to provide a picture processing method, a picture processing apparatus, and an intelligent terminal that allow picture data to be searched quickly.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present application is to provide a picture processing method to quickly search picture data.
Correspondingly, the embodiments of the present application also provide a picture processing apparatus and an intelligent terminal, so as to ensure the implementation and application of the above method.
In order to solve the above problem, the present application discloses an image processing method, including: determining data labels corresponding to all the categories and display information of the data labels according to the categories corresponding to all the pictures, wherein the display information is used for indicating the display range of the data labels on a display interface; and displaying the data label according to the display information.
Optionally, the determining, according to the category corresponding to each picture, the data tag corresponding to each category and the display information of the data tag includes: determining data labels corresponding to all categories; and determining the priority of each category, and determining the display information of the data label corresponding to each category according to the priority.
Optionally, determining the priority of each category includes: determining attributes corresponding to each category, and determining priority according to the attributes.
Optionally, the attributes include: dimensions and/or number of pictures.
Optionally, the step of determining the attribute corresponding to each category and determining the priority according to the attribute includes at least one of the following steps: acquiring the number of pictures in a picture set corresponding to each category, and determining the priority of the category according to the number of the pictures; obtaining dimension sorting information corresponding to the dimension to which each category belongs, and determining the priority of the category according to the dimension sorting information.
Optionally, the dimensions include at least one of: time dimension, location dimension, image feature dimension, source dimension.
Optionally, the higher the priority, the larger the display range of the corresponding data tag.
Optionally, the display information is further used to indicate appearance information of the data tag on a display interface, where the appearance information includes at least one of: shape, color.
Optionally, displaying the data tag according to the display information includes: and distributing the data labels in a display interface according to the display information.
Optionally, displaying the data tag according to the display information includes: configuring the data label in a set area of the display interface according to the display information; and when a preset gesture is received, the data label is expanded and displayed to the display interface from a set area.
Optionally, before displaying the data tag according to the display information, the method further includes: and determining the first N data labels according to the display range, and performing a display step on the first N data labels.
Optionally, the method further includes: when a preset operation is received, collapsing the data label displayed in the display interface.
Optionally, the method further includes: and responding to the trigger of the data label, and displaying the picture under the category corresponding to the data label.
Optionally, the method further includes: determining characteristic information of each picture; and aggregating the pictures according to the categories corresponding to the characteristic information, and determining the picture set of each category.
Optionally, the determining the feature information of each picture includes: collecting the stored pictures; feature information of at least one dimension is extracted from each picture.
Optionally, the extracting feature information of at least one dimension from each picture includes: determining the target image characteristics of each picture through image characteristic extraction; and/or extracting target condition features from the image description information of each picture; and/or extracting source characteristics from the source information of each picture.
Optionally, the determining the target image feature of each picture through image feature extraction includes: extracting image characteristics of each picture, and determining the image characteristics of each picture; comparing the image characteristics with each preset standard characteristic respectively to determine corresponding similarity; and when the similarity exceeds a comparison threshold, taking a preset standard feature corresponding to the similarity as a target image feature of the picture.
Optionally, the obtaining of the target condition feature from the image description information of each picture includes: and searching for a time condition and/or a position condition from the image description information of each picture, and taking the time condition and/or the position condition as a target condition characteristic.
Optionally, the extracting source features from the source information of each picture includes: and extracting a storage source and/or a generation source from the source information of each picture, and using the storage source and/or the generation source as source characteristics.
Optionally, aggregating the pictures according to the categories corresponding to the feature information, and determining a picture set of each category, including: determining the category according to the characteristic information, and aggregating the pictures with the same category into a picture set.
Optionally, the method further includes: determining feature information of an updated picture after the updated picture is detected; and aggregating the updated pictures into corresponding categories according to the characteristic information.
The embodiment of the present application further discloses an image processing apparatus, including: the label determining module is used for determining the data labels corresponding to all the categories and the display information of the data labels according to the categories corresponding to all the pictures, wherein the display information is used for indicating the display range of the data labels on a display interface; and the display module is used for displaying the data label according to the display information.
Optionally, the tag determining module includes: the label determining submodule is used for determining the data labels corresponding to all the categories; and the display determining submodule is used for determining the priority of each category and determining the display information of the data labels corresponding to each category according to the priority.
Optionally, the display determining sub-module is configured to determine an attribute corresponding to each category, and determine a priority according to the attribute.
Optionally, the attributes include: dimensions and/or number of pictures.
Optionally, the display determining sub-module is configured to obtain the number of pictures in a picture set corresponding to each category, and determine the priority of the category according to the number of the pictures; and obtaining dimension sorting information corresponding to the dimension to which each category belongs, and determining the priority of the category according to the dimension sorting information.
Optionally, the dimensions include at least one of: time dimension, location dimension, image feature dimension, source dimension.
Optionally, the higher the priority, the larger the display range of the corresponding data tag.
Optionally, the display information is further used to indicate appearance information of the data tag on a display interface, where the appearance information includes at least one of: shape, color.
Optionally, the display module is configured to distribute the data tags in a display interface according to the display information.
Optionally, the display module is configured to configure the data tag in a setting area of the display interface according to the display information; and when a preset gesture is received, the data label is expanded and displayed to the display interface from a set area.
Optionally, the display module is further configured to determine the first N data tags according to the display range, and perform the display step on the first N data tags.
Optionally, the display module is further configured to, when a preset operation is received, collapse the data label displayed in the display interface.
Optionally, the method further includes: and the response module is used for responding to the trigger of the data label and displaying the picture under the category corresponding to the data label.
Optionally, the method further includes: the characteristic aggregation module is used for determining the characteristic information of each picture; and aggregating the pictures according to the categories corresponding to the characteristic information, and determining the picture set of each category.
Optionally, the feature aggregation module includes: the picture collecting submodule is used for collecting the stored pictures; and the feature extraction submodule is used for extracting feature information of at least one dimension from each picture.
Optionally, the feature extraction sub-module is configured to determine a target image feature of each picture through image feature extraction; and/or extracting target condition features from the image description information of each picture; and/or extracting source characteristics from the source information of each picture.
Optionally, the feature extraction sub-module is configured to perform image feature extraction on each picture, and determine an image feature of each picture; comparing the image characteristics with each preset standard characteristic respectively to determine corresponding similarity; and when the similarity exceeds a comparison threshold, taking a preset standard feature corresponding to the similarity as a target image feature of the picture.
Optionally, the feature extraction sub-module is configured to search a time condition and/or a location condition from image description information of each picture, and use the time condition and/or the location condition as a target condition feature.
Optionally, the feature extraction sub-module is configured to extract a storage source and/or a generation source from the source information of each picture, and use the storage source and/or the generation source as the source feature.
Optionally, the feature aggregation module includes: and the aggregation sub-module is used for determining the categories according to the characteristic information and aggregating the pictures with the same category into a picture set.
Optionally, the feature aggregation module is further configured to determine feature information of the updated picture after the updated picture is detected; and aggregating the updated pictures into corresponding categories according to the characteristic information.
The embodiment of the present application further discloses an intelligent terminal, including a memory, a display, a processor, and an input unit, wherein the input unit includes a touch screen, and the processor is configured to perform the method according to the embodiments of the present application.
Compared with the prior art, the embodiment of the application has the following advantages:
in the embodiments of the present application, the data tag corresponding to each category and the display information of the data tag are determined according to the category corresponding to each picture, where the display information indicates the display range of the data tag on the display interface; the data tags are then displayed according to the display information, and the pictures can be presented by category through the data tags, which facilitates searching by the user, simplifies the search operation, and improves search efficiency.
Drawings
FIG. 1 is a flowchart illustrating steps of an embodiment of a method for processing pictures according to the present application;
FIG. 2 is a flow chart of steps of another embodiment of a method of picture processing according to the present application;
FIGS. 3A, 3B, and 3C are schematic diagrams of an interface display according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of an embodiment of a picture processing apparatus according to the present application;
FIG. 5 is a block diagram of another embodiment of a picture processing apparatus according to the present application;
FIG. 6 is a block diagram illustrating a structure of an embodiment of an intelligent terminal according to the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
One of the core concepts of the embodiments of the present application is to provide a picture processing method, an apparatus, and an intelligent terminal, so as to search picture data quickly. The data tag corresponding to each category and the display information of the data tag are determined according to the category corresponding to each picture, where the display information indicates the display range of the data tag on a display interface; the data tags are then displayed according to the display information, and the pictures are presented by category through the data tags, which makes searching convenient for the user, simplifies the search operation, and improves search efficiency.
In this embodiment, the image processing method can be applied to an intelligent terminal, where the intelligent terminal refers to a terminal device with a multimedia function, and the device supports audio, video, data and other functions. In this embodiment, the intelligent terminal has a touch screen, and includes an intelligent mobile terminal such as a smart phone, a tablet computer, and an intelligent wearable device, and may also be a smart television, a personal computer, and other devices having a touch screen.
Example one
Referring to fig. 1, a flowchart illustrating steps of an embodiment of an image processing method according to the present application is shown, which may specifically include the following steps:
Step 102: determining, according to the category corresponding to each picture, the data tag corresponding to each category and the display information of the data tag.
Step 104: displaying the data tag according to the display information.
When many photos, screenshots, and other picture data are stored in the intelligent terminal, finding a particular picture is cumbersome, so this embodiment classifies the pictures and displays corresponding classification tags to let the user find pictures conveniently.
The pictures in the intelligent terminal may be collected first, and the category of each collected picture then determined. In this embodiment, the category of a picture may be determined according to various characteristics such as its source, content, and description information, for example a landscape category, a screenshot category, a recently-shot category, a shot-a-year-ago category, a shot-in-Beijing category, or a recently-shot-in-Beijing category. The categories are not mutually exclusive, that is, one picture may belong to several categories.
For each category, the data tag corresponding to the category and the display information of the data tag are determined according to category information such as the pictures belonging to the category, where the display information indicates the display range of the data tag on a display interface, such as the size and position at which it is displayed. That is, the display of a data tag is related to its category: for example, the size and position (display range) of the tag are configured according to the number of pictures in the category, or according to the time span, distance, and the like corresponding to the category. When the data tag is displayed according to the display information, information about the category's pictures, such as time, place, and quantity, is conveyed visually through the tag, which makes searching convenient for the user.
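As an illustration of how the display range might be derived from the category (a minimal sketch in Python; the interpolation formula and all names are assumptions, not prescribed by the application):

```python
# Sketch of step 102: derive a display range for each category's data tag
# from the size of its picture set. All names and sizes are illustrative.

def tag_display_ranges(category_counts, min_size=40, max_size=120):
    """Map each category to a tag size (e.g. in pixels) that grows
    with the number of pictures in that category."""
    counts = category_counts.values()
    lo, hi = min(counts), max(counts)
    span = (hi - lo) or 1  # avoid division by zero when all counts are equal
    return {
        category: min_size + (max_size - min_size) * (count - lo) / span
        for category, count in category_counts.items()
    }

# Example: three categories with different picture counts.
print(tag_display_ranges({"landscape": 120, "screenshot": 30, "Beijing": 75}))
```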
In summary, the data tag corresponding to each category and the display information of the data tag are determined according to the category corresponding to each picture, where the display information indicates the display range of the data tag on the display interface; the data tags are then displayed according to the display information, and the pictures can be presented by category through the data tags, which facilitates searching by the user, simplifies the search operation, and improves search efficiency.
Example two
In this embodiment, the pictures are classified and aggregated, and corresponding data tags are set for each category, so that the data tags are displayed in the display interface, and a user can conveniently find the pictures.
Before the data tags are determined, the pictures may be classified and aggregated as follows: determine the feature information of each picture, aggregate the pictures according to the categories corresponding to the feature information, and determine the picture set of each category. That is, feature information of various dimensions of each picture is determined first, such as image features and the time, place, and source of acquisition; the categories corresponding to the feature information are then determined, the pictures are aggregated by category, and the picture set of each category is obtained.
In this way, categories can be determined based on the features of the pictures, and data tags determined from the categories, so that the data tags reflect the features of the pictures. This allows the user to search for pictures by their features, improving both the accuracy and the efficiency of the search. An example of determining data tags for picture lookup based on feature classification is as follows.
Referring to fig. 2, a flowchart illustrating steps of another embodiment of the image processing method of the present application is shown, which may specifically include the following steps:
Step 202: collecting the stored pictures.
Step 204: extracting feature information of at least one dimension from each picture.
To make pictures easy to find, the stored picture data may be collected. This may include picture data stored on the intelligent terminal itself, for example pictures obtained by traversing each storage location and folder of the terminal, as well as picture data stored in the network under the user account of the terminal. The pictures may therefore have been generated in various ways, such as downloading, screenshots, and shooting, and may be collected from various storage locations, both local and on the network.
Feature extraction is then performed on each collected piece of picture data, and feature information of at least one dimension is extracted from each one. In this embodiment, feature information of multiple dimensions of the picture data can be extracted, such as the time dimension, the position dimension, the content of the picture itself, and the source, so that the features of a picture are captured in a fine-grained, multi-dimensional way, which facilitates subsequent classification and search.
The feature information in the embodiments of the present application includes target image features, target condition features, source features, and the like. Target image features are features of the image itself, such as people or landscape; target condition features are condition features associated with the picture, such as the shooting time and shooting place; source features describe how the picture was generated and acquired, such as downloading, screenshot, or shooting. Accordingly, extracting feature information of at least one dimension from each picture includes: determining the target image features of each picture through image feature extraction; and/or extracting target condition features from the image description information of each picture; and/or extracting source features from the source information of each picture. Image feature recognition may be performed on the pictures to extract target image features such as people, landscape, self-portrait, or party; target condition features such as shot-within-a-week or shot-in-Beijing may be extracted from the image description information of each picture; and source features such as network download, shooting, or screenshot may be extracted from the source information.
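The three kinds of feature information can be carried in one record per picture; a minimal sketch, with illustrative field names:

```python
# Sketch: one feature record per picture, covering the three kinds of
# feature information named above. Field names are illustrative only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PictureFeatures:
    path: str
    image_features: list[str] = field(default_factory=list)  # e.g. "people", "landscape"
    time_condition: Optional[str] = None                     # e.g. "2017:06:01 10:20:30"
    position_condition: Optional[str] = None                 # e.g. "Beijing"
    source_feature: Optional[str] = None                     # e.g. "screenshot", "download"
```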
Determining the target image features of each picture through image feature extraction includes: performing image feature extraction on each picture to determine its image features; comparing the image features with each preset standard feature to determine the corresponding similarities; and, when a similarity exceeds a comparison threshold, taking the preset standard feature corresponding to that similarity as a target image feature of the picture.
In this embodiment, standard features are preset, by which the features of each piece of picture data, such as a portrait or a landscape, can be identified. When the features of picture data are determined, image feature extraction is performed first to obtain the image features of the picture data; the image features are then compared with each preset standard feature, the similarity between the image features and each preset standard feature is determined, and each similarity is checked in turn against a comparison threshold. When a similarity exceeds the comparison threshold, the preset standard feature corresponding to that similarity is taken as a target image feature of the picture data, thereby determining the target image features (matching preset standard features) of the picture data.
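A minimal sketch of this comparison loop, assuming image features are available as numeric vectors and using cosine similarity as the (otherwise unspecified) similarity measure:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_standard_features(image_feature, standard_features, threshold=0.8):
    """Compare one picture's feature vector against each preset standard
    feature; every match above the comparison threshold becomes a
    target image feature of the picture. Threshold value is illustrative."""
    return [
        name
        for name, standard in standard_features.items()
        if cosine_similarity(image_feature, standard) > threshold
    ]

# Example with toy 3-dimensional feature vectors.
standards = {"people": [1.0, 0.0, 0.0], "landscape": [0.0, 1.0, 0.0]}
print(match_standard_features([0.9, 0.1, 0.0], standards))  # -> ['people']
```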
In addition, target image features can also be determined directly from the camera used and other information associated with the picture. For example, when a user shoots in a specific mode of the terminal's camera, the camera may improve shooting quality through processing such as face recognition, so target image features can be derived from the processing operations performed at shooting time, for example identifying a picture of an object, or a party picture when several people appear in the frame. Users also often share pictures through social applications after shooting, attaching text or tagging people in the photo when sharing; target image features can therefore also be determined from such sharing information, for example tagging the people in a picture posted to WeChat Moments associates them with the corresponding accounts. Target image features can thus be extracted based on shooting-related information and/or the publishing information of a picture.
Acquiring the target condition features from the image description information of each picture includes: searching the image description information of each picture for a time condition and/or a position condition, and taking the time condition and/or position condition as target condition features.
This embodiment may thus search the image description information of a picture for target condition features, for example time conditions such as the capture time of a screenshot, the acquisition time of a downloaded picture, or the shooting time of a photo, and position conditions such as the shooting position, or the device's location at the time of download or screenshot, so as to determine the various condition features related to the picture.
For example, for picture data such as photos, Exif (Exchangeable image file format) data may be obtained as the image description information. Exif embeds information about a digital photo in the header of a JPEG file, including shooting conditions such as aperture, shutter, white balance, ISO, focal length, and the date and time of shooting, as well as the camera brand and model, color space, audio recorded at shooting time, GPS data, a thumbnail, and the like. Various shooting conditions of the picture data can therefore be obtained through the Exif data, and the target condition features determined.
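A sketch of reading these condition features with Pillow; the function name is illustrative, and `_getexif()` is Pillow's flattened, JPEG-only Exif accessor:

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def extract_condition_features(path):
    """Read shooting time and GPS position from a JPEG's Exif header."""
    raw = Image.open(path)._getexif() or {}  # flattened Exif dict (JPEG only)
    exif = {TAGS.get(tag_id, tag_id): value for tag_id, value in raw.items()}
    time_condition = exif.get("DateTimeOriginal") or exif.get("DateTime")
    gps_info = exif.get("GPSInfo") or {}
    position_condition = {GPSTAGS.get(t, t): v for t, v in gps_info.items()}
    return {"time": time_condition, "position": position_condition or None}
```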
Extracting the source features from the source information of each picture includes: extracting a storage source and/or a generation source from the source information of each picture, and using the storage source and/or generation source as source features.
In this embodiment, the intelligent terminal generates and stores pictures in multiple ways, so when pictures are collected by traversal, source information can be determined from where each picture resides, such as a default folder, a folder created by the user, or a network folder; the storage source can thus be determined from the user's folder naming. Pictures may also be obtained by network download, screenshot, shooting, and so on, which determines the generation source. Source features are then extracted from the storage source and the generation source.
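A sketch of deriving a generation source from the storage path; the folder-to-source mapping is an assumption, since the actual layout depends on the terminal and the user's folder naming:

```python
from pathlib import Path

# Illustrative mapping from common folder names to generation sources;
# the real folder layout depends on the terminal and user settings.
SOURCE_FOLDERS = {
    "screenshots": "screenshot",
    "camera": "shooting",
    "download": "network download",
}

def extract_source_feature(path):
    """Infer a picture's source feature from the folders on its path."""
    for part in Path(path).parts:
        source = SOURCE_FOLDERS.get(part.lower())
        if source:
            return source
    return "unknown"

print(extract_source_feature("/sdcard/DCIM/Camera/IMG_001.jpg"))  # -> shooting
```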
Therefore, the characteristic information of various dimensions such as time dimension, position dimension, image characteristic dimension, source dimension and the like can be determined through characteristic extraction.
Step 206: determining the category according to the feature information.
Step 208: aggregating the pictures with the same category into a picture set.
In this embodiment, the feature information can be used to determine the classification category of the picture data, where one or more features may correspond to one category, and each category belongs to at least one of the following dimensions: the time dimension, the position dimension, the image feature dimension, and the source dimension. For example, division by time may yield categories such as within half a year, half a year ago, and one year ago; the position dimension may yield categories such as Beijing, Shanghai, and Hangzhou; the image feature dimension may yield categories such as landscape, people, self-portrait, and travel; the source dimension may yield categories such as network download, screenshot, and shooting; and combining the position and image feature dimensions may yield categories such as Hangzhou travel and Lijiang travel.
The feature information of each piece of picture data is then determined, and the picture data is placed into the picture set of each category its feature information belongs to, so that picture data of the same category is aggregated into one picture set. The categories are not independent of one another: the picture sets corresponding to different categories may contain the same or different picture data, that is, one piece of picture data may belong to several picture sets.
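Because categories are not mutually exclusive, the aggregation is a many-to-many grouping; a minimal sketch:

```python
from collections import defaultdict

def aggregate_by_category(picture_categories):
    """Group pictures into one picture set per category. A picture that
    belongs to several categories appears in several sets."""
    picture_sets = defaultdict(set)
    for picture, categories in picture_categories.items():
        for category in categories:
            picture_sets[category].add(picture)
    return picture_sets

sets_ = aggregate_by_category({
    "a.jpg": ["landscape", "Hangzhou travel"],
    "b.jpg": ["landscape", "screenshot"],
})
print(sets_["landscape"])  # -> {'a.jpg', 'b.jpg'}
```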
Step 210: determining the data tag corresponding to each category.
In this embodiment, a data tag may further be configured for each category, that is, for the picture set corresponding to the category, which makes the picture set convenient to find. After the picture set corresponding to each category is determined, the data tag can be determined from the category: in this embodiment one category corresponds to one data tag, and the name of the data tag may be taken from the category name or the category's description information. For example, among the categories of the time dimension, the data tag for pictures acquired half a year ago may be named 'half a year ago', and the data tag for pictures acquired on a memorial day may be named after that memorial day; among the categories of the position dimension, data tag names configured from the category names include 'Lijiang', 'Hangzhou', and the like; other data tag names determined in this way include 'flowers', 'plants', 'Houhai', 'Xiaobao', 'Lijiang travel', and so on. In the embodiments of the present application, the name of a data tag is the name displayed in the display interface.
Step 212: determining the priority of each category, and determining the display information of the data tag corresponding to each category according to the priority.
In this embodiment, since there are many categories and correspondingly many data tags, a priority may be configured for each category so that the data tags displayed in the display interface are more intuitive and easier to search. The display information of each category's data tag is configured according to the priority, where the display information indicates the display range of the data tag on the display interface; in this embodiment, the higher the priority, the larger the display range of the corresponding data tag.
The attributes corresponding to each category may be determined and the priority determined from those attributes, after which the display range of the category's data tag follows from the priority. For example, the priority may be determined from attributes such as the time, position, and number of pictures corresponding to the category, where the attributes include the dimension and/or the number of pictures. That is, the priority may be determined based on the dimension the category belongs to and the number of pictures it contains: for example, the more pictures in the picture set corresponding to a category, the larger the display range, and the more recent the shooting time or the closer the shooting position, the larger the display range.
In this embodiment, determining the attributes corresponding to each category and determining the priority according to the attributes includes at least one of the following steps: acquiring the number of pictures in the picture set corresponding to each category, and determining the priority of the category according to the number of pictures; and obtaining the dimension ordering information corresponding to the dimension each category belongs to, and determining the priority of the category according to the dimension ordering information. The display information of the category's data tag, such as its display range, is then determined using the priority.
The number of pictures in the picture set corresponding to each category can be obtained and the priority configured accordingly, for example the more pictures, the higher the priority. Dimension ordering information for the dimension a category belongs to, such as orderings by time or by position, can also be obtained, so that priorities for times and positions are configured by recency and distance. Since the categories in this embodiment may be divided along multiple dimensions, dimension ordering information can be set both across dimensions and within each dimension. For example, with an overall dimension ordering of time dimension, image feature dimension, position dimension, and source dimension from high to low, priorities can be determined from the ordering within each dimension of the configured dimension ordering information, and the display ranges of the data tags determined from them, for example configuring the highest priority for the data tag closest in time and the lowest priority for downloaded pictures in the source dimension. Alternatively, for within-dimension orderings, priorities are configured per dimension's own ordering, with the same priority assigned to the same rank position in different dimensions, and so on. If, say, the number of pictures weighs most, then the shooting time, then the shooting position, the categories can be ordered by picture count, with equal counts ordered by shooting time. A priority is thus configured for each category, and the display range of the category's data tag is configured according to the priority.
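One way to realize such a priority scheme (a sketch; both the tie-breaking rule and the dimension order are illustrative assumptions) is to rank by picture count and break ties by a fixed dimension ordering:

```python
# Illustrative dimension ordering, highest first; the application leaves
# the concrete ordering to the implementation.
DIMENSION_ORDER = ["time", "image_feature", "position", "source"]

def category_priorities(categories):
    """categories: list of (name, dimension, picture_count).
    Returns names sorted from highest to lowest priority: more pictures
    first, ties broken by the dimension ranking above."""
    def key(entry):
        name, dimension, count = entry
        return (-count, DIMENSION_ORDER.index(dimension))
    return [name for name, _, _ in sorted(categories, key=key)]

print(category_priorities([
    ("landscape", "image_feature", 80),
    ("this week", "time", 80),      # same count: the time dimension wins
    ("screenshot", "source", 200),  # most pictures: highest priority
]))
# -> ['screenshot', 'this week', 'landscape']
```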
In this embodiment, the display information may also indicate the appearance of the data tag on the display interface, so appearance information may likewise be configured according to the attributes of each category, where the appearance information includes at least one of the following: shape, color. For example, the display element corresponding to a data tag may be configured as a circle or a square in a corresponding color, or different shapes and colors may be configured according to the dimension ordering information, the number of pictures, the priority, and so on.
Step 214: displaying the data tag according to the display information.
After the data tags and information such as their appearance and display range have been configured, the data tags can be displayed according to the display information, that is, shown in the display interface of the intelligent terminal.
In an optional embodiment, displaying the data tags according to the display information includes distributing the data tags across the display interface according to the display information. That is, the display elements of the data tags in the current application are configured according to the display information and then spread over the display interface; they may be distributed randomly, or arranged according to the display range, for example with large-range tags in the middle and small-range tags around the periphery, as shown in fig. 3B.
In another optional embodiment, displaying the data tags according to the display information includes: configuring the data tags at one side of the display interface according to the display information; and, when a preset gesture is received, expanding the data tags from that side into the display interface. The display elements of the data tags in the current application are configured according to the display information and first shown at one side of the display interface; as shown in fig. 3A, the upper portion holds the display elements corresponding to the data tags while the lower portion shows picture data. The user can then trigger the expansion through a preset gesture such as sliding or tapping; when the preset gesture is received, the data tags expand from that side into the display interface, as shown in fig. 3B, spreading over the whole interface.
Before the data tags are displayed according to the display information, the method may further include: determining the first N data tags according to the display range, and performing the display step only on those N tags. Because the screen of the intelligent terminal is limited in size, for example the touch screen of a smartphone is small, the data tags can be ordered by display range and the first N taken, where N is a positive integer; the display step is then performed on the N selected data tags, that is, they are displayed over the whole display interface or at one side of it.
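Selecting the first N tags is then a sort over the display range; a minimal sketch with assumed names:

```python
def top_n_tags(tag_ranges, n):
    """Keep only the N tags with the largest display range for a small screen."""
    ranked = sorted(tag_ranges.items(), key=lambda kv: kv[1], reverse=True)
    return [tag for tag, _ in ranked[:n]]

print(top_n_tags({"this week": 120, "landscape": 90, "download": 45}, n=2))
# -> ['this week', 'landscape']
```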
In this embodiment, after the data tags are displayed in the display interface, they can also be collapsed through a preset operation, that is, when the preset operation is received, the data tags displayed in the display interface are collapsed. For example, sliding up, down, left, or right, shaking the device left and right, or triggering a preset identifier collapses the data tags, after which the pictures are displayed directly or the picture application exits. The direction in which the data tags are collapsed is not limited: for example, they may be collapsed in the direction of the sliding operation, or gathered toward the middle when a left-right shaking operation is received. A preset identifier on the display interface may also be triggered, such as the 'triangle' identifier at the bottom of the display interface in fig. 3B, to collapse the data tags.
Step 216: in response to a trigger on a data tag, displaying the pictures under the category corresponding to the data tag.
After the data tags are displayed, the user can search for picture data based on them, which improves search efficiency. A data tag can be triggered by tapping or the like, and in response to the trigger, the pictures in the category corresponding to the data tag are displayed; as shown in fig. 3C, the pictures corresponding to a data tag are displayed after the user triggers it.
In this embodiment, the picture data in the intelligent terminal changes dynamically: the user may delete some picture data or add new picture data. Deleted picture data can be removed from its picture sets; for newly added picture data, collection can continue, that is, it is determined whether updated picture data exists. If so, step 204 is executed: after the updated picture data is detected, its feature information is determined, and the updated picture data is aggregated into the corresponding categories according to that feature information. If not, no updated picture data exists, and the process ends.
Furthermore, as the number of pictures in the picture set increases or decreases, the appearance information of the data tag, such as the display range, can be adjusted correspondingly.
In practice, the organization and display of a user's full set of photos is closely tied to the user experience, because each piece of picture data carries certain information, such as the shooting time and shooting place, and the features the photo contains, such as animals or landscape. To display the pictures more reasonably, this embodiment organizes them along multiple dimensions: keywords are extracted from all of the user's photos by recording the time, place, and person information contained in the photos, together with aggregate information such as the frequency and number of photos browsed, and the keywords are presented in tag form. The tag form conveys the relative weight of each data tag, that is, the display range is related to the category, so the display of the data tags is more intuitive.
Taking the album application of an intelligent terminal as an example, the embodiments of the present application can display the organized picture data intuitively, in three steps: collection, organization, and presentation.
First, the collection part: information about the picture data needs to be collected before the photos (picture data) are organized. For example, the following three kinds of information can be collected:
1) Photo features: features are extracted from each photo and compared with the preset standard features by a feature comparator; if the similarity exceeds the comparison threshold, the corresponding data tag is added to the photo. A photo may therefore carry multiple tags.
2) Shooting time: the Exif information of the picture is read to obtain the shooting time of the picture.
3) Shooting place: the Exif information of the picture is read to obtain the shooting place of the picture.
The above feature information of each photo is stored in a database.
Next, the organization part: in the embodiments of the present application, organizing means aggregating the photos. Photos with the same feature information can be grouped into one class; pictures with the same shooting place into one group; and photos taken at similar times into one class, with corresponding time tags configured for better ordering by time, such as this week, this month, and half a year ago, yielding the picture set corresponding to each class. After classification is complete, the number of pictures in each picture set is counted, and the sets are sorted and stored.
Finally, the presentation part: for example, the size of each display area is dynamically adjusted according to the number of pictures in the corresponding picture set, and categories of characteristic pictures are displayed preferentially, which comes closer to the user's expectations. The user can thus intuitively gauge, from the size of a display area, how many pictures a given classification holds within the full library.
Part of the tags are displayed at the top of the album home page, and pulling down shows all of them; the tags are displayed according to the number of photos and the degree of association, and tapping a tag shows the corresponding photos.
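A sketch of the display-area sizing described in the presentation part, under the assumption that a tag's area (rather than its side length) is made proportional to the category's share of the full library, so relative counts can be read off directly:

```python
import math

def tag_side_length(count, total, max_side=160):
    """Side of a square tag whose *area* is proportional to the share of
    the category in the full picture library (max_side is assumed)."""
    return max_side * math.sqrt(count / total)

# Example: three categories out of a 500-picture library.
for name, count in [("travel", 245), ("screenshot", 125), ("download", 20)]:
    print(name, round(tag_side_length(count, total=500)))  # 112, 80, 32
```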
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the embodiments. Further, those skilled in the art will also appreciate that the embodiments described in the specification are presently preferred and that no particular act is required of the embodiments of the application.
EXAMPLE III
On the basis of the above embodiments, the present embodiment also provides an image processing apparatus.
Referring to fig. 4, a block diagram of a structure of an embodiment of an image processing apparatus according to the present application is shown, which may specifically include the following modules:
a tag determining module 402, configured to determine, according to the category corresponding to each picture, a data tag corresponding to each category and display information of the data tag, where the display information is used to indicate a display range of the data tag on a display interface.
A display module 404, configured to display the data tag according to the display information.
Referring to fig. 5, a block diagram of another embodiment of the image processing apparatus according to the present application is shown, and specifically, the block diagram may include the following modules:
the feature aggregation module 500 is configured to determine feature information of each picture, and aggregate the pictures according to the category corresponding to the feature information.
The tag determining module 502 is configured to determine, according to the category corresponding to each picture, a data tag corresponding to each category and display information of the data tag, where the display information is used to indicate a display range of the data tag on a display interface.
A display module 504, configured to display the data tag according to the display information.
A response module 506, configured to respond to the trigger on the data tag, and display the picture in the category corresponding to the data tag.
In an optional embodiment of the present application, the tag determining module 502 includes:
the tag determination submodule 5022 is used for determining data tags corresponding to each category.
The display determination sub-module 5024 is configured to determine priorities of the categories, and determine display information of the data tags corresponding to the categories according to the priorities.
The display determination sub-module 5024 is configured to determine attributes corresponding to each category, and determine a priority according to the attributes. The attributes include: dimensions and/or number of pictures.
The display determining sub-module 5024 is configured to obtain the number of pictures in the picture set corresponding to each category, and determine the priority of the category according to the number of the pictures; and obtaining dimension sorting information corresponding to the dimension to which each category belongs, and determining the priority of the category according to the dimension sorting information.
The dimensions include: time dimension, location dimension, image feature dimension, source dimension.
Wherein, the higher the priority is, the larger the display range of the corresponding data tag is.
The display information is further used for indicating appearance information of the data tag on a display interface, wherein the appearance information includes at least one of the following: shape, color.
The display module 504 is configured to distribute the data tags in a display interface according to the display information.
The display module 504 is configured to configure the data tag in a setting area of the display interface according to the display information; and when a preset gesture is received, the data label is expanded and displayed to the display interface from a set area.
The display module 504 is further configured to determine the first N data tags according to the display range, and perform a display step on the first N data tags.
The display module 504 is further configured to, when a preset operation is received, collapse the data tag displayed in the display interface.
Wherein the feature aggregation module 500 comprises:
and a picture collecting sub-module 5002 for collecting the stored pictures.
A feature extraction sub-module 5004 configured to extract feature information of at least one dimension from each picture.
The aggregating submodule 5006 is configured to aggregate the pictures with the same category into a picture set according to the category determined by the feature information.
The feature extraction sub-module 5004 is configured to determine target image features of each picture through image feature extraction; and/or extracting target condition features from the image description information of each picture; and/or extracting source characteristics from the source information of each picture.
The feature extraction sub-module 5004 is configured to perform image feature extraction on each picture, and determine image features of each picture; comparing the image characteristics with each preset standard characteristic respectively to determine corresponding similarity; and when the similarity exceeds a comparison threshold, taking a preset standard feature corresponding to the similarity as a target image feature of the picture.
The feature extraction sub-module 5004 is configured to search for a time condition and/or a location condition from the image description information of each picture, and use the time condition and/or the location condition as a target condition feature.
The feature extraction sub-module 5004 is configured to extract a storage source and/or a generation source from the source information of each picture, and use the storage source and/or the generation source as a source feature.
The feature aggregation module 500 is further configured to determine feature information of the updated picture after the updated picture is detected; and aggregating the updated pictures into corresponding categories according to the characteristic information.
Example four
On the basis of the above embodiment, the embodiment also discloses an intelligent terminal.
Referring to fig. 6, a block diagram of a structure of an embodiment of an intelligent terminal according to the present application is shown, which may specifically include the following modules:
this intelligent terminal 600 includes: memory 610, display 620, processor 630, and input unit 640.
The input unit 640 may be used to receive numeric or character information input by a user and a control signal. Specifically, in the embodiment of the present invention, the input unit 640 may include a touch screen 641, which can collect a touch operation of the user (for example, an operation of the user on the touch screen 641 by using any suitable object or accessory such as a finger, a stylus pen, etc.) on or near the touch screen 641, and drive the corresponding connection device according to a preset program. Of course, the input unit 640 may include other input devices such as a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a mouse, etc., in addition to the touch screen 641.
The display 620 includes a display panel, and optionally, the display panel may be configured in the form of a Liquid Crystal Display (LCD) or an Organic Light-Emitting Diode (OLED). The touch screen may cover the display panel to form a touch display screen, and when the touch display screen detects a touch operation on or near the touch display screen, the touch display screen transmits the touch operation to the processor 630 to perform corresponding processing.
In this embodiment of the application, by calling software programs and/or modules and/or data stored in the memory 610, the processor 630 is configured to: determine, according to the category corresponding to each picture, the data tag corresponding to each category and the display information of the data tag, where the display information is used to indicate the display range of the data tag on a display interface; and display the data tag according to the display information.
Optionally, the determining, according to the category corresponding to each picture, the data tag corresponding to each category and the display information of the data tag includes: determining data labels corresponding to all categories; and determining the priority of each category, and determining the display information of the data label corresponding to each category according to the priority.
Optionally, determining the priority of each category includes: determining attributes corresponding to each category, and determining priority according to the attributes.
Optionally, the attributes include: dimensions and/or number of pictures.
Optionally, the step of determining the attribute corresponding to each category and determining the priority according to the attribute includes at least one of the following steps: acquiring the number of pictures in a picture set corresponding to each category, and determining the priority of the category according to the number of the pictures; obtaining dimension sorting information corresponding to the dimension to which each category belongs, and determining the priority of the category according to the dimension sorting information.
Optionally, the dimensions include at least one of: time dimension, location dimension, image feature dimension, source dimension.
Optionally, the higher the priority, the larger the display range of the corresponding data tag.
Optionally, the display information is further used to indicate appearance information of the data tag on a display interface, where the appearance information includes at least one of: shape, color.
Optionally, displaying the data tag according to the display information includes: and distributing the data labels in a display interface according to the display information.
Optionally, displaying the data tag according to the display information includes: configuring the data label in a set area of the display interface according to the display information; and when a preset gesture is received, the data label is expanded and displayed to the display interface from a set area.
Optionally, before displaying the data tag according to the display information, the method further includes: and determining the first N data labels according to the display range, and performing a display step on the first N data labels.
Optionally, the method further includes: and when a preset operation is received, the data label displayed in the display interface is folded.
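The expand and fold behavior described in the optional clauses above can be sketched as a small state holder; the event names are placeholders for whatever the platform actually reports:

```python
class TagPanel:
    """Tags start collapsed in a set area of the display interface,
    expand across the interface on a preset gesture, and fold back on a
    preset operation."""

    def __init__(self, tags: list[str], collapsed_count: int = 4):
        self.tags = tags
        self.collapsed_count = collapsed_count  # illustrative strip size
        self.expanded = False

    def on_event(self, event: str) -> list[str]:
        if event == "preset_gesture":      # e.g. a pull-down
            self.expanded = True
        elif event == "preset_operation":  # e.g. a pull-up or tap outside
            self.expanded = False
        return self.tags if self.expanded else self.tags[:self.collapsed_count]
```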
Optionally, the method further includes: and responding to the trigger of the data label, and displaying the picture under the category corresponding to the data label.
Optionally, the method further includes: determining characteristic information of each picture; and aggregating the pictures according to the categories corresponding to the characteristic information, and determining the picture set of each category.
Optionally, the determining the feature information of each picture includes: collecting the stored pictures; feature information of at least one dimension is extracted from each picture.
Optionally, the extracting feature information of at least one dimension from each picture includes: determining the target image characteristics of each picture through image characteristic extraction; and/or extracting target condition features from the image description information of each picture; and/or extracting source characteristics from the source information of each picture.
Optionally, the determining the target image feature of each picture through image feature extraction includes: extracting image characteristics of each picture, and determining the image characteristics of each picture; comparing the image characteristics with each preset standard characteristic respectively to determine corresponding similarity; and when the similarity exceeds a comparison threshold, taking a preset standard feature corresponding to the similarity as a target image feature of the picture.
Optionally, the obtaining of the target condition feature from the image description information of each picture includes: and searching for a time condition and/or a position condition from the image description information of each picture, and taking the time condition and/or the position condition as a target condition characteristic.
Optionally, the extracting source features from the source information of each picture includes: and extracting a storage source and/or a generation source from the source information of each picture, and using the storage source and/or the generation source as source characteristics.
Optionally, aggregating the pictures according to the categories corresponding to the feature information, and determining a picture set of each category, including: determining the category according to the characteristic information, and aggregating the pictures with the same category into a picture set.
Optionally, the method further includes: determining feature information of an updated picture after the updated picture is detected; and aggregating the updated pictures into corresponding categories according to the characteristic information.
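A sketch of this incremental update, where extract_categories is a hypothetical hook that recomputes a picture's feature information and returns its category keys:

```python
def on_picture_updated(picture, sets: dict, extract_categories) -> None:
    """After an updated picture is detected, aggregate it into every
    matching category's picture set, creating a new set when the
    category does not exist yet."""
    for key in extract_categories(picture):
        sets.setdefault(key, []).append(picture)
```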
Since the device embodiments are substantially similar to the method embodiments, they are described relatively briefly; for relevant details, refer to the corresponding parts of the method embodiments.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be cross-referenced.
As will be appreciated by one of skill in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
In a typical configuration, the computer device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.

Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, those skilled in the art may make additional variations and modifications to these embodiments once they become aware of the basic inventive concept. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all alterations and modifications that fall within the scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second are used solely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "comprises", "comprising", and any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or terminal that comprises the element.
The picture processing method, picture processing apparatus, and intelligent terminal provided by the present application have been described above in detail. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is intended only to help understand the method of the present application and its core idea. Meanwhile, those of ordinary skill in the art may, according to the idea of the present application, make changes to the specific implementations and the application scope. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (43)

1. An image processing method, comprising:
determining data labels corresponding to all the categories and display information of the data labels according to the categories corresponding to all the pictures, wherein the display information is used for indicating the display range of the data labels on a display interface;
and displaying the data label according to the display information.
2. The method according to claim 1, wherein the determining, according to the category corresponding to each picture, the data tag corresponding to each category and the display information of the data tag comprises:
determining data labels corresponding to all categories;
and determining the priority of each category, and determining the display information of the data label corresponding to each category according to the priority.
3. The method of claim 2, wherein determining the priority of each category comprises:
determining attributes corresponding to each category, and determining priority according to the attributes.
4. The method of claim 3, wherein the attributes comprise: dimensions and/or number of pictures.
5. The method of claim 4, wherein the step of determining the attributes corresponding to each category and determining the priority according to the attributes comprises at least one of:
acquiring the number of pictures in a picture set corresponding to each category, and determining the priority of the category according to the number of the pictures;
obtaining dimension sorting information corresponding to the dimension to which each category belongs, and determining the priority of the category according to the dimension sorting information.
6. The method of claim 4 or 5, wherein the dimensions comprise at least one of: time dimension, location dimension, image feature dimension, source dimension.
7. The method according to any one of claims 2 to 5, wherein the higher the priority, the larger the display range of the corresponding data tag.
8. The method of any one of claims 1 to 5, wherein the display information is further used for indicating appearance information of the data tag on a display interface, wherein the appearance information includes at least one of: shape, color.
9. The method of claim 1, wherein displaying the data tag according to the display information comprises:
and distributing the data labels in a display interface according to the display information.
10. The method of claim 1, wherein displaying the data tag according to the display information comprises:
configuring the data label in a set area of the display interface according to the display information;
and when a preset gesture is received, the data label is expanded and displayed to the display interface from a set area.
11. The method of claim 1, 9 or 10, wherein before displaying the data tag according to the display information, further comprising:
and determining the first N data labels according to the display range, and performing a display step on the first N data labels.
12. The method of claim 1, 9 or 10, further comprising:
and when a preset operation is received, the data label displayed in the display interface is folded.
13. The method of claim 1, further comprising:
and responding to the trigger of the data label, and displaying the picture under the category corresponding to the data label.
14. The method of claim 1, further comprising:
determining characteristic information of each picture;
and aggregating the pictures according to the categories corresponding to the characteristic information, and determining the picture set of each category.
15. The method according to claim 14, wherein the determining the feature information of each picture comprises:
collecting the stored pictures;
feature information of at least one dimension is extracted from each picture.
16. The method according to claim 15, wherein the extracting feature information of at least one dimension from each picture comprises:
determining the target image characteristics of each picture through image characteristic extraction; and/or
Extracting target condition features from the image description information of each picture; and/or
And extracting source characteristics from the source information of each picture.
17. The method of claim 16, wherein determining the target image feature of each picture by image feature extraction comprises:
extracting image characteristics of each picture, and determining the image characteristics of each picture;
comparing the image characteristics with each preset standard characteristic respectively to determine corresponding similarity;
and when the similarity exceeds a comparison threshold, taking a preset standard feature corresponding to the similarity as a target image feature of the picture.
18. The method according to claim 16, wherein the obtaining of the target condition feature from the image description information of each picture comprises:
and searching for a time condition and/or a position condition from the image description information of each picture, and taking the time condition and/or the position condition as a target condition characteristic.
19. The method of claim 16, wherein the extracting source features from the source information of each picture comprises:
and extracting a storage source and/or a generation source from the source information of each picture, and using the storage source and/or the generation source as source characteristics.
20. The method according to claim 14, wherein aggregating the pictures according to the categories corresponding to the feature information, and determining a picture set of each category comprises:
determining the category according to the characteristic information, and aggregating the pictures with the same category into a picture set.
21. The method of claim 16, further comprising:
determining feature information of an updated picture after the updated picture is detected;
and aggregating the updated pictures into corresponding categories according to the characteristic information.
22. A picture processing apparatus, comprising:
the label determining module is used for determining the data labels corresponding to all the categories and the display information of the data labels according to the categories corresponding to all the pictures, wherein the display information is used for indicating the display range of the data labels on a display interface;
and the display module is used for displaying the data label according to the display information.
23. The apparatus of claim 22, wherein the tag determination module comprises:
the label determining submodule is used for determining the data labels corresponding to all the categories;
and the display determining submodule is used for determining the priority of each category and determining the display information of the data labels corresponding to each category according to the priority.
24. The apparatus of claim 23,
and the display determining submodule is used for determining the attributes corresponding to all the categories and determining the priority according to the attributes.
25. The apparatus of claim 24, wherein the attributes comprise: dimensions and/or number of pictures.
26. The apparatus of claim 25,
the display determining submodule is used for acquiring the number of pictures in the picture set corresponding to each category and determining the priority of the category according to the number of the pictures; and obtaining dimension sorting information corresponding to the dimension to which each category belongs, and determining the priority of the category according to the dimension sorting information.
27. The apparatus of claim 25 or 26, wherein the dimensions comprise at least one of: time dimension, location dimension, image feature dimension, source dimension.
28. The apparatus according to any one of claims 23 to 27, wherein the higher the priority, the larger the display range of the corresponding data tag.
29. The apparatus of any one of claims 22 to 27, wherein the display information is further configured to indicate appearance information of the data tag on a display interface, wherein the appearance information includes at least one of: shape, color.
30. The apparatus of claim 22,
and the display module is used for distributing the data labels in a display interface according to the display information.
31. The apparatus of claim 22,
the display module is used for configuring the data label in a set area of the display interface according to the display information; and when a preset gesture is received, the data label is expanded and displayed to the display interface from a set area.
32. The apparatus of claim 22, 30 or 31,
the display module is further configured to determine the first N data tags according to the display range, and perform a display step on the first N data tags.
33. The apparatus of claim 22, 30 or 31,
the display module is further used for receiving the preset operation and collecting the data label displayed in the display interface.
34. The apparatus of claim 22, further comprising:
and the response module is used for responding to the trigger of the data label and displaying the picture under the category corresponding to the data label.
35. The apparatus of claim 22, further comprising:
the characteristic aggregation module is used for determining the characteristic information of each picture; and aggregating the pictures according to the categories corresponding to the characteristic information, and determining the picture set of each category.
36. The apparatus of claim 35, wherein the feature aggregation module comprises:
the picture collecting submodule is used for collecting the stored pictures;
and the feature extraction submodule is used for extracting feature information of at least one dimension from each picture.
37. The apparatus of claim 36,
the feature extraction submodule is used for extracting and determining target image features of each picture through image features; and/or extracting target condition features from the image description information of each picture; and/or extracting source characteristics from the source information of each picture.
38. The apparatus of claim 37,
the feature extraction submodule is used for extracting image features of each picture and determining the image features of each picture; comparing the image characteristics with each preset standard characteristic respectively to determine corresponding similarity; and when the similarity exceeds a comparison threshold, taking a preset standard feature corresponding to the similarity as a target image feature of the picture.
39. The apparatus of claim 37,
the feature extraction submodule is used for searching a time condition and/or a position condition from the image description information of each picture, and the time condition and/or the position condition are used as target condition features.
40. The apparatus of claim 37,
the feature extraction submodule is used for extracting a storage source and/or a generation source from the source information of each picture, and the storage source and/or the generation source are used as source features.
41. The apparatus of claim 35, wherein the feature aggregation module comprises:
and the aggregation sub-module is used for determining the categories according to the characteristic information and aggregating the pictures with the same category into a picture set.
42. The apparatus of claim 37,
the feature aggregation module is further configured to determine feature information of the updated picture after the updated picture is detected; and aggregating the updated pictures into corresponding categories according to the characteristic information.
43. An intelligent terminal, characterized in that, intelligent terminal includes: memory, display, processor and input unit, wherein, the input unit includes: a touch screen;
the processor is configured to perform the method of any of the preceding claims 1-21.
HK17113789.9A 2017-12-27 Image processing method, apparatus and intelligent terminal HK1240369A1 (en)

Publications (2)

Publication Number Publication Date
HK1240369A (en) 2018-05-18
HK1240369A1 (en) 2018-05-18


Similar Documents

Publication Publication Date Title
US10846324B2 (en) Device, method, and user interface for managing and interacting with media content
US20220004573A1 (en) Method for creating view-based representations from multimedia collections
TWI498843B (en) Portable electronic device, content recommendation method and computer-readable medium
JP5934653B2 (en) Image classification device, image classification method, program, recording medium, integrated circuit, model creation device
US20230214091A1 (en) Multimedia object arrangement method, electronic device, and storage medium
US8731308B2 (en) Interactive image selection method
EP3005055B1 (en) Apparatus and method for representing and manipulating metadata
CN104572847B (en) A kind of method and device of photo name
JP6351219B2 (en) Image search apparatus, image search method and program
CN103477317B (en) Content display processing device, content display processing method and integrated circuit
CN106407358B (en) Image searching method and device and mobile terminal
JP2014092955A (en) Similar content search processing device, similar content search processing method and program
WO2017067485A1 (en) Picture management method and device, and terminal
CN111046205A (en) Image searching method, device and readable storage medium
KR101747299B1 (en) Method and apparatus for displaying data object, and computer readable storage medium
CN101499087B (en) Storage management system and method
CN105320514A (en) Picture processing method and device
WO2016048311A1 (en) Media organization
KR102523006B1 (en) Method, apparatus and computer program for providing contents list
HK1240369A1 (en) Image processing method, apparatus and intelligent terminal
HK1240369A (en) Image processing method, apparatus and intelligent terminal
US20140153836A1 (en) Electronic device and image processing method
CN106156252B (en) information processing method and electronic equipment
CN117992628A (en) Image display control method, device and electronic equipment
WO2018076640A1 (en) Information processing method and apparatus