WO2018184465A1 - Picture file processing method, apparatus and storage medium - Google Patents
Picture file processing method, apparatus and storage medium
- Publication number
- WO2018184465A1 (PCT/CN2018/079463)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- color
- rgb
- color table
- table information
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234318—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into objects, e.g. MPEG-4 objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234336—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by media transcoding, e.g. video is transformed into a slideshow of still pictures or audio is converted into text
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440218—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440236—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by media transcoding, e.g. video is transformed into a slideshow of still pictures, audio is converted into text
Definitions
- the present application relates to the field of computer technologies, and in particular, to a picture file processing method, apparatus, and storage medium.
- A picture file is an animated image composed of multiple consecutive frames.
- Picture files are widely used for their dynamic display effects, for example in instant messaging and web page display.
- However, the content of such a file is relatively complex, so a picture file carries a large amount of data, requires significant bandwidth to transmit, and increases bandwidth cost.
- The embodiments of the present application provide a picture file processing method and apparatus, which can reduce the amount of file data transmitted for a picture file and thus reduce bandwidth cost.
- An embodiment of the present application provides a method for processing a picture file, which is applied to a computing device, and may include:
- the code stream data is information generated by the sending device by encoding first color data of each pixel in the original picture file
- the code stream data is decoded to generate second color data, and the second color data is processed based on the color table to generate a second picture file.
- the embodiment of the present application further provides a method for processing a picture file, which is applied to a computing device, and may include:
- each color value in the initial color table is arranged according to a preset sorting manner of values of one color component
- the updated color data is image encoded to generate the second picture file.
- the embodiment of the present application further provides a method for processing a picture file, which is applied to a computing device, and may include:
- receiving a transcoded code stream for the original picture file sent by the transmitting end, and parsing the transcoded code stream to obtain a picture header information data segment and code stream data of the picture file, where the code stream data is generated by the transmitting end by encoding YUV data (Y is a luminance component, UV is a chrominance component; YUV is a video data format), and the YUV data is generated by the transmitting end by converting each frame image in the picture file;
- the embodiment of the present application further provides a method for processing a picture file, which may include:
- transcoded code stream includes the picture header information data segment and the code stream data
- the embodiment of the present application further provides a method for processing a picture file, which is applied to a computing device, and may include:
- obtaining RGB (Red Green Blue) data of each frame image in the original picture file;
- the initial color table information of the RGB data is trained by using the RGB values of the respective pixels to generate local color table information of the RGB data;
- the RGB values of the respective pixels are updated using the local color table information of the RGB data.
- the embodiment of the present application further provides a picture file processing apparatus, which may include a processor and a memory, where the memory stores computer readable instructions, and the processor may be configured to:
- obtaining a transcoded code stream for the original picture file and parsing the transcoded code stream to obtain a picture header information data segment and code stream data, where the picture header information data segment includes a color table of the original picture file, and the code stream data is information generated by the sending device by encoding the first color data of each pixel in the original picture file;
- the embodiment of the present application further provides a picture file processing apparatus, which may include a processor and a memory, where the memory stores computer readable instructions, and the processor may be configured to:
- each color value in the initial color table is arranged according to a preset sorting manner of values of one color component
- the updated color data is image encoded to generate the second picture file.
- the embodiment of the present application further provides a computer readable storage medium storing computer readable instructions, which may cause a processor to execute the methods of the embodiments.
- When the transcoded stream is received, the second color data may be generated according to the code stream data, and the second color data is processed according to the color table to generate a second picture file.
- the time for generating and displaying the second picture file can be shortened, and the complexity of generating the second picture file can be reduced.
- the search time of the color table can be shortened, and the processing efficiency can be improved.
- FIG. 1A is a schematic flowchart of a method for processing a picture file according to an embodiment of the present application.
- FIG. 1B is a schematic flowchart of a method for processing a picture file according to an embodiment of the present application
- FIG. 2 is a sequence diagram of a process of a picture file processing method according to an embodiment of the present application
- FIG. 3 is a schematic diagram of an example of processing a picture file provided by an embodiment of the present application.
- FIG. 4 is a schematic diagram of another example of image file processing provided by an embodiment of the present application.
- FIG. 5 is a schematic diagram of another example of image file processing provided by an embodiment of the present application.
- FIG. 6 is a schematic diagram of another example of image file processing provided by an embodiment of the present application.
- FIG. 7 is a sequence diagram of a process of another image file processing method provided by an embodiment of the present application.
- FIG. 8 is a schematic flowchart diagram of another method for processing a picture file according to an embodiment of the present application.
- FIG. 9 is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application.
- FIG. 10 is a schematic diagram of an example of generating compressed image data according to an embodiment of the present application.
- FIG. 11 is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application.
- FIG. 12 is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application.
- FIG. 13 is a schematic diagram showing an example of generating a picture file according to an embodiment of the present application.
- FIG. 14 is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application.
- FIG. 15 is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application.
- FIG. 16 is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application.
- FIG. 17 is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application.
- FIG. 18 is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application.
- FIG. 19 is a schematic diagram showing an example of a primary chrominance component of a YUV420 mode according to an embodiment of the present application.
- FIG. 20 is a schematic diagram showing an example of a target chrominance component of a YUV420 mode according to an embodiment of the present application
- FIG. 21A is a schematic flowchart of still another method for processing a picture file according to an embodiment of the present application.
- FIG. 21B is a schematic flowchart of still another method for processing a picture file according to an embodiment of the present application.
- FIG. 22 is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application.
- FIG. 23 is a schematic structural diagram of a picture file processing apparatus according to an embodiment of the present disclosure.
- FIG. 24 is a schematic structural diagram of an image conversion unit according to an embodiment of the present disclosure.
- FIG. 25 is a schematic structural diagram of another picture file processing apparatus according to an embodiment of the present disclosure.
- FIG. 26 is a schematic structural diagram of still another picture file processing apparatus according to an embodiment of the present disclosure.
- FIG. 27 is a schematic structural diagram of an image coding unit according to an embodiment of the present disclosure.
- FIG. 29 is a schematic structural diagram of an image processing system according to an embodiment of the present application.
- FIG. 30 is a schematic structural diagram of still another picture file processing apparatus according to an embodiment of the present application.
- FIG. 31 is a schematic structural diagram of another image conversion unit according to an embodiment of the present disclosure.
- FIG. 32 is a schematic structural diagram of another image coding unit according to an embodiment of the present application.
- FIG. 33 is a schematic structural diagram of still another picture file processing apparatus according to an embodiment of the present application.
- FIG. 34 is a schematic structural diagram of still another picture file processing apparatus according to an embodiment of the present disclosure.
- FIG. 35 is a schematic structural diagram of still another picture file processing apparatus according to an embodiment of the present application.
- FIG. 1A is a schematic flowchart of a method for processing a picture file according to an embodiment of the present application.
- the method can be performed by a computing device.
- the method can include the following steps.
- S101A: Receive a transcoded code stream of the original picture file sent by the sending device, and parse the transcoded code stream to obtain a picture header information data segment and code stream data, where the picture header information data segment includes the color table of the original picture file.
- the code stream data is information generated by the sending device to encode first color data of each pixel in the original picture file.
- the first color data may be RGB data, or YUV data, or HSV data or the like.
- the color table may be a color table corresponding to the picture.
- the color table may be a global color table of the picture file or a partial color table of a frame image in the picture file.
- the picture header information data segment further includes delay information and a total number of frames.
- the computing device may perform image encoding on the updated second color data by using the delay information and the total number of frames to generate the second picture file.
- the computing device may generate a local color table corresponding to the second color data by using the color value of each pixel in the second color data and the color table, and use the local color table to update the color value of each pixel.
- For the first frame image, the computing device may use the second color data to train an initial color table of the first frame image to obtain a local color table corresponding to the first frame image, where the initial color table of the first frame image is the color table.
- For the Nth frame image, the computing device may use the second color data to train the initial color table corresponding to the Nth frame image to obtain a local color table corresponding to the Nth frame image.
- the initial color table corresponding to the Nth frame image is a local color table corresponding to the N-1th frame image of the original picture file, and N is an integer greater than 1.
- The computing device can sort the color values in the initial color table according to a preset ordering of one color component to generate a training color table. For each pixel in the second color data, the computing device obtains, in the training color table, a first color value whose value of that color component is closest to that of the pixel, and acquires a first color index of the first color value. Then, within a preset range centered on the first color index in the training color table, it obtains a second color value having the smallest error with the color value of the pixel, and acquires a second color index of the second color value.
- The computing device then modifies a preset number of color values within a preset range around the second color index in the training color table.
- The modified training color table is determined as the local color table.
- the color component described above may be any one of a plurality of parameters representing a color.
- the above color component may be an R component, a G component, or a B component.
- the above color component may be a Y component, a U component, or a V component.
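The coarse-to-fine search and modification steps above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the search radius, modification radius, and update rate are invented for illustration, and the sort component defaults to the first channel.

```python
import numpy as np

def train_color_table(initial_table, pixels, sort_channel=0,
                      search_radius=8, modify_radius=2, learn_rate=0.5):
    # Step 1: sort the initial color table by one color component to form
    # the training color table.
    order = np.argsort(initial_table[:, sort_channel])
    table = initial_table[order].astype(np.float64)
    for px in pixels.astype(np.float64):
        # Step 2 (coarse): first color index = entry whose sort-channel
        # value is closest to the pixel's value on that component.
        first = int(np.argmin(np.abs(table[:, sort_channel] - px[sort_channel])))
        # Step 3 (fine): within a preset range centered on the first color
        # index, pick the entry with the smallest overall color error.
        lo = max(0, first - search_radius)
        hi = min(len(table), first + search_radius + 1)
        second = lo + int(np.argmin(((table[lo:hi] - px) ** 2).sum(axis=1)))
        # Step 4: modify a preset number of entries around the second color
        # index, nudging them toward the pixel's color.
        mlo = max(0, second - modify_radius)
        mhi = min(len(table), second + modify_radius + 1)
        table[mlo:mhi] += learn_rate * (px - table[mlo:mhi])
    return np.clip(table, 0, 255).round().astype(np.uint8)
```

Sorting by one component keeps the coarse step cheap (it scans a single column), which is the stated motivation for shortening the color table search time.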
- The transcoded stream of the original picture file may be sent by the sending device, and the computing device may receive the transcoded code stream sent by the sending device.
- The transmitting device may generate the picture header information data segment and the first color data according to the original picture file, encode the first color data to generate the code stream data, and store the transcoded code stream, where the transcoded code stream includes the picture header information data segment and the code stream data. The transmitting device can read the stored transcoded stream and transmit it to the receiving device as needed.
- the picture file processing method in the embodiment of the present application can be applied to a scene in which a picture file is encoded, decoded, and transmitted, for example, instant messaging, web page display, and the like.
- the transmitting end generates a picture header information data segment of the picture file according to the original picture file, and converts each frame image in the picture file into YUV data, and the transmitting end encodes the YUV data to generate code stream data.
- the transmitting end generates compressed image data, and sends the compressed image data to a receiving end, where the compressed image data includes the picture header information data segment and the code stream data.
- the receiving end receives the compressed image data sent by the sending end, and parses the compressed image data to obtain the picture header information data segment and the code stream data.
- the receiving end generates the YUV data according to the code stream data, and performs encoding processing on the YUV data based on the picture header information data segment to generate a scene of the picture file and the like.
- the transmitting end of the embodiment of the present application may be a terminal device or a background service device having functions such as decoding, encoding, and the like, or may be an image processing module in the terminal device or the service device.
- the receiving end may be a terminal device or a background service device having functions such as picture file encoding and video decompression, or may be an image processing module in the terminal device or the service device.
- the above terminal devices may include computers, tablets, smart phones, notebook computers, palmtop computers, and mobile internet devices (MIDs).
- the sending end may be a terminal device that sends a picture file
- the receiving end may be a background service device that receives an instant messaging application of the picture file.
- the sending end is a background service device of an instant messaging application that forwards a picture file
- the receiving end is a terminal device that receives a picture file, and the like.
- the transmitting end may be a terminal device that sends a picture file
- the receiving end may be a terminal device that receives a picture file.
- the above scenario is only an example. The type of the sender and the receiver can be determined according to the actual running scenario.
- the picture file may be any of a Graphics Interchange Format (GIF) image, an Audio Video Interleaved (AVI) image, a Shockwave Flash (SWF) image, and an Animated Portable Network Graphics (APNG) image.
- An image is used to represent one frame of a picture file.
- A component is used to represent a single matrix among the three sample matrices of an image, or a single sample of such a matrix.
- Luminance is used to represent the sample matrix of the luminance signal Y, or a single sample.
- Chrominance is used to represent the sample matrix or a single sample of either of the Cr and Cb color difference signals.
- the code stream data is used to represent the data obtained after encoding, and may also be described by using a name such as video frame data.
- the image feature information segment is used to represent information such as delay information, total number of frames, and global color table information of the image file.
- a user-defined information segment that indicates information such as configuration parameters, encoder complexity, and the like when encoding three primary colors (Red Green Blue, RGB) data or YUV data.
- the picture header information data segment is used to indicate the beginning end of the compressed image data, and may include an image feature information segment and a user-defined information segment.
- the compressed image data is used to represent the data generated by the image format encapsulation of the code stream data and the image header information data segment, and may also be described by using a name such as an image sequence or a compressed code stream.
- FIG. 1B is a schematic flowchart of a method for processing a picture file according to an embodiment of the present application.
- the embodiment of the present application jointly illustrates a specific process of the image file processing method from the sending end and the receiving end, and the method may include the following steps.
- S101B The sending end generates a picture header information data segment of the picture file according to the original picture file, and converts each frame image in the picture file into YUV data.
- the sender may decode the original picture file to generate a picture header information data segment of the picture file.
- the image file may be an image input by the developer after the image is created, or may be an image received from the other end.
- the picture header information data segment may include delay information of the picture file, a total number of frames, and the like. The delay information records a play interval between each frame of the picture file. The total number of frames is the number of image frames in the picture file.
- the picture header information data segment may further include global color table information of the picture file.
- the global color table information contains RGB values for each pixel of each frame of image. For some image files, all images contained in it share a global color table information. For some other image files, each frame image has its own local color table information.
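The fields described above can be pictured with a small data structure. This is a hypothetical, GIF-like layout; the field names and types are assumptions for illustration, not the patent's actual encoding.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

RGB = Tuple[int, int, int]

@dataclass
class PictureHeaderInfo:
    # Play interval between successive frames, one entry per frame.
    delay_ms: List[int]
    # Number of image frames in the picture file.
    total_frames: int
    # RGB values shared by every frame, when the file uses one global table.
    global_color_table: Optional[List[RGB]] = None
    # Per-frame tables, for files where each frame carries its own table.
    local_color_tables: Optional[List[List[RGB]]] = None
```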
- the transmitting end further converts each frame image in the picture file into YUV data.
- the transmitting end encodes the YUV data to generate code stream data.
- the transmitting end may further perform encoding processing on the YUV data to generate code stream data.
- the encoding may include predictive coding, transform coding, quantization coding, and entropy coding.
- the transmitting end may compress the YUV data by using an IPPP mode.
- the first frame YUV data is an I frame
- the I frame is an intra prediction frame
- the remaining frame YUV data is a P frame
- the P frame is an inter-frame prediction frame.
- This coding mode can effectively compress the file data amount of the picture file, and a fixed quantization parameter (QP) can also be used to stabilize the quality between different frames.
- the entropy coding may include Huffman coding, arithmetic coding, and the like.
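The IPPP frame arrangement and fixed-QP choice above can be sketched as a tiny configuration helper. The QP value of 26 is an illustrative assumption, not a value given in the source.

```python
from dataclasses import dataclass

@dataclass
class EncodeConfig:
    qp: int = 26        # fixed quantization parameter: keeps quality stable across frames
    gop: str = "IPPP"   # first frame intra-coded (I), remaining frames inter-coded (P)

def frame_types(total_frames: int) -> list:
    # Under the IPPP mode: one intra-prediction I frame, then P frames
    # that are predicted from the preceding frame.
    if total_frames < 1:
        return []
    return ["I"] + ["P"] * (total_frames - 1)
```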
- S103B The transmitting end sends the compressed image data to the receiving end.
- the transmitting end may generate compressed image data, and the transmitting end may store the compressed image data.
- The transmitting end sends the compressed image data to the receiving end, for example, when the transmitting end detects that a webpage containing the picture file is opened.
- The transmitting end may also directly send the compressed image data to the receiving end, for example, when a client in an instant messaging application needs to send the picture file to another client and the application service device needs to forward the picture file; the compressed image data includes the picture header information data segment and the code stream data.
- the receiving end receives the compressed image data sent by the sending end, and parses the compressed image data to obtain the picture header information data segment and the code stream data.
- the receiving end receives the compressed image data sent by the sending end, and the receiving end may perform parsing processing on the compressed image data to obtain the picture header information in the compressed image data. Data segment and the code stream data.
- the receiving end generates the YUV data according to the code stream data, and processes the YUV data according to the picture header information data segment to generate the picture file.
- the receiving end decodes the code stream data to generate the YUV data, and encodes the YUV data based on delay information, total frame number, global color table information, and the like in a picture header information data segment. Processing to generate the picture file.
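The receive-side flow described above can be sketched as a small pipeline function. `decode_frame` and `encode_picture` are hypothetical callables standing in for real codec implementations, and the header field names are assumptions.

```python
def rebuild_picture_file(header, decode_frame, encode_picture, code_stream):
    # Decode each chunk of code stream data back into a YUV frame.
    yuv_frames = [decode_frame(chunk) for chunk in code_stream]
    # Hand all frames plus the header fields (delay information, total
    # frame count, global color table) to a picture-file encoder.
    return encode_picture(
        yuv_frames,
        delays=header["delay_ms"],              # hypothetical field names
        total_frames=header["total_frames"],
        color_table=header.get("global_color_table"),
    )
```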
- In this way, the picture header information data segment of the picture file is generated according to the original picture file, each frame image in the picture file is converted into YUV data, compressed image data is then generated from the code stream data obtained by encoding the YUV data and from the picture header information data segment, and the compressed image data is transmitted.
- YUV data may be generated according to the code stream data, and the YUV data is processed based on the image header information data segment to generate a picture file.
- FIG. 2 is a flow chart of a picture file processing method provided in an embodiment of the present application.
- the specific process of the image file processing method is jointly illustrated by the sending end side and the receiving end side, and the method may include the following steps.
- The sending end decodes the original picture file to generate a picture header information data segment of the picture file and RGB data corresponding to each frame image in the picture file.
- the transmitting end may decode the original picture file to generate a picture header information data segment of the picture file and RGB data corresponding to each frame image in the picture file.
- the transmitting end converts the RGB data into YUV data by using a color space conversion formula.
- the transmitting end may convert the RGB data into YUV data using a color space conversion formula.
- The range of the luminance component can be selected as [16, 235], and the range of the chrominance component as [16, 240]; because the value ranges are reduced, the data amount of the converted YUV data can be greatly reduced. It is also possible to select a color space conversion formula in which the luminance and chrominance components range over [0, 255], which reduces the distortion of the converted YUV data.
- the color space conversion formula for the range of luminance components and chrominance components with the range of [0, 255] is:
- Y = 0.299R + 0.587G + 0.114B
- U = -0.1687R - 0.3313G + 0.5B + 128
- V = 0.5R - 0.4187G - 0.0813B + 128
- YUV is a video data format, in which Y is the luminance component and U and V are the chrominance components.
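As an illustrative sketch only, the full-range conversion can be written in Python. The V row matches the coefficient row retained in the text; the Y and U rows are the standard full-range companions and are assumptions here.

```python
def rgb_to_yuv(r, g, b):
    """Full-range ([0, 255]) RGB -> YUV conversion.

    The V row matches the one quoted in the text; the Y and U rows are
    the standard full-range companions and are assumptions here.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.1687 * r - 0.3313 * g + 0.5 * b + 128
    v = 0.5 * r - 0.4187 * g - 0.0813 * b + 128
    # Round and clamp to the 8-bit component range.
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(y), clamp(u), clamp(v)
```

Applying this per pixel during a raster scan of each frame yields the per-frame YUV data described below.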
- the color space conversion formula can perform raster scan processing on the pixels of each frame of image, and convert each frame of RGB data into corresponding YUV data.
- the transmitting end encodes the YUV data to generate code stream data.
- the transmitting end may further perform encoding processing on the YUV data to generate code stream data.
- the user may be allowed to add a configuration parameter, where the configuration parameter is a parameter for encoding the YUV data, and may include an SD mode parameter and an HD mode parameter.
- the SD mode parameter is, for example, a YUV420 mode parameter, and the HD mode parameter is, for example, a YUV444 mode parameter.
- the transmitting end may encode the YUV data by using configuration parameters to generate code stream data.
- the user may also be allowed to add an encoder complexity during encoding of the YUV data.
- the encoder complexity may be a fineness parameter of the encoding determined according to the hardware performance of the transmitting end.
- the encoder complexity may include any of the first complexity, the second complexity, and the third complexity.
- the first complexity is higher than the second complexity, and the second complexity is higher than the third complexity.
- the hardware performance of the transmitting end can be detected to generate a performance value, for example by testing the calculation rate of the central processing unit of the transmitting end. When the performance value is within the first preset value range, it may be determined that the hardware performance of the transmitting end is high, and the encoding method of the first complexity may be recommended.
- when the performance value is within the second preset value range, it may be determined that the hardware performance of the transmitting end is medium, and the encoding method of the second complexity may be recommended.
- when the performance value is within the third preset value range, it may be determined that the hardware performance of the transmitting end is poor, or that real-time transcoding is currently required, and the encoding method of the third complexity may be recommended.
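The mapping from performance value to recommended complexity can be sketched as follows; the concrete numeric ranges are illustrative placeholders, since the text leaves the preset value ranges to developer experience.

```python
def recommend_complexity(performance_value):
    """Map a detected performance value (e.g. a CPU calculation-rate
    score) to a recommended encoder complexity. The numeric range
    boundaries are illustrative placeholders: the text says the preset
    value ranges are set according to developer experience."""
    if performance_value >= 80:        # first preset value range
        return "first complexity"      # strong hardware
    if performance_value >= 50:        # second preset value range
        return "second complexity"     # medium hardware
    # third preset value range: weak hardware or real-time transcoding
    return "third complexity"
```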
- the transmitting end may configure an encoder complexity to encode the YUV data to generate code stream data.
- the foregoing two coding modes may be parallel coding modes; that is, the transmitting end may simultaneously use the configuration parameters and the configured encoder complexity to encode the YUV data to generate code stream data.
- the above performance value, the first preset value range, the second preset value range, and the third preset value range may be specifically set according to the developer's experience.
- the transmitting end sends the compressed image data to the receiving end.
- the receiving end receives the compressed image data sent by the sending end, and parses the compressed image data to obtain the picture header information data segment and the code stream data.
- the receiving end decodes the code stream data to generate the YUV data, and converts the YUV data into RGB data by using a color space conversion formula.
- the receiving end decodes the code stream data to generate the YUV data, and may convert the YUV data into RGB data by using a color space conversion formula.
- the receiving end needs to determine the color space conversion formula used according to the range of values of the luminance component. It can be understood that, for YUV data in the YUV444 mode, a color space conversion formula whose luminance component range is [16, 235] and chrominance component range is [16, 240] may be selected, or one whose luminance and chrominance components both span [0, 255]; the choice corresponds to the color space conversion formula selected when converting the RGB data into YUV data in the foregoing embodiment.
- the chrominance comprises a sample matrix, or a single sample, of either of the Cb and Cr color difference signals. The two color difference signals need to be separately subjected to upsampling processing, and the upsampling method is the same for both, where Cb corresponds to U in YUV and Cr corresponds to V in YUV.
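A minimal sketch of the upsampling step for one color-difference plane follows; nearest-neighbour duplication from 4:2:0 back to full resolution is assumed here, as the text does not name the interpolation method.

```python
def upsample_chroma_420(plane):
    """Nearest-neighbour upsampling of one colour-difference plane (Cb
    or Cr) from 4:2:0 back to full resolution: each subsampled value is
    duplicated into a 2x2 block. Duplication is an assumption (the text
    does not name the interpolation method); both difference signals
    are processed with this same routine."""
    out = []
    for row in plane:
        wide = [v for v in row for _ in (0, 1)]  # duplicate horizontally
        out.append(wide)
        out.append(list(wide))                   # duplicate vertically
    return out
```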
- the color space conversion formula is:
- R = Y + 1.402(V - 128)
- G = Y - 0.3441(U - 128) - 0.7141(V - 128)
- B = Y + 1.772(U - 128)
- the color space conversion formula can perform raster scan processing on the pixels of each frame of image, and convert each frame of YUV data into corresponding RGB data.
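Since the formula line itself is not reproduced in this text, the following sketch uses the standard inverse of the full-range forward matrix as an assumption:

```python
def yuv_to_rgb(y, u, v):
    """Inverse full-range ([0, 255]) YUV -> RGB conversion. The
    coefficients are the standard inverse of the full-range forward
    matrix and are assumptions, since the formula line itself is not
    reproduced in this text."""
    r = y + 1.402 * (v - 128)
    g = y - 0.3441 * (u - 128) - 0.7141 * (v - 128)
    b = y + 1.772 * (u - 128)
    # Round and clamp to the 8-bit component range.
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```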
- the receiving end updates the RGB values of each pixel in the RGB data by using the global color table information.
- the RGB data converted from the YUV data is distorted, and the receiving end may use the global color table information to update the RGB values of each pixel in the RGB data.
- the RGB data may include one or more pixel points. When only one pixel point exists in the RGB data, the RGB value of that pixel point may be updated by using the global color table information; when multiple pixels exist in the RGB data, the RGB values of each of the pixels may be updated by using the global color table information. The receiving end needs to retrain the global color table information to generate local color table information corresponding to the RGB data; in the case where no global color table information exists, the receiving end may generate initialization color table information and train the initialization color table information to generate the local color table information of the RGB data.
- the receiving end may train the global color table information by using the RGB values of each pixel point in the RGB data to generate local color table information of the RGB data, and update the RGB values of the respective pixel points by using the local color table information of the RGB data.
- the receiving end may sort at least one source RGB value in the global color table information according to a preset ordering of the G component (e.g., ascending or descending) to generate training color table information, as shown in FIG.
- assume the global color table information includes color indexes and source RGB values, for example: 5, (8,1,10), 6, (8,9,8), 7, (1,7,6), 8, (10,8,6), 9, (5,8,5), 10, (9,10,1); sorting in ascending order of the G component generates training color table information, for example: 5, (8,1,10), 6, (1,7,6), 7, (5,8,5), 8, (10,8,6), 9, (8,9,8), 10, (9,10,1).
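The sorting step can be sketched as below; the tie-break on equal G components (by the R component) is an assumption chosen so that the output matches the example above.

```python
def build_training_table(color_table, ascending=True):
    """Sort the source RGB values of a colour table by their G
    component and re-attach the original colour-index sequence in
    order. Ties on G are broken by the R component; the text does not
    specify a tie-break, so this is an assumption chosen to reproduce
    its example."""
    indices = [idx for idx, _ in color_table]
    ordered = sorted((rgb for _, rgb in color_table),
                     key=lambda rgb: (rgb[1], rgb[0]),
                     reverse=not ascending)
    return list(zip(indices, ordered))

# The global colour table from the example: (index, (R, G, B)) pairs.
global_table = [(5, (8, 1, 10)), (6, (8, 9, 8)), (7, (1, 7, 6)),
                (8, (10, 8, 6)), (9, (5, 8, 5)), (10, (9, 10, 1))]
training_table = build_training_table(global_table)
```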
- the receiving end acquires, in the training color table information, a first source RGB value whose G component is closest to the G component of the current pixel point in the RGB data, and acquires a first color index of the first source RGB value.
- the receiving end may traverse the source RGB value in the training color table information by using the G component of the current pixel point, and acquire the first source RGB value that is closest to the G component. As shown in FIG. 4, it is assumed that there are four pixel points in the first frame RGB data, and the current pixel point is the first pixel point in the RGB data, and the RGB value of the current pixel point is (9, 9, 9).
- the G component of (8, 9, 8) is closest to the G component of the RGB value of the current pixel point, so (8, 9, 8) is determined as the first source RGB value, and the first color index "9" of the first source RGB value is obtained.
- the receiving end acquires a second source RGB value having the smallest error with the RGB value of the current pixel point within a preset range centered on the first color index in the training color table information, and acquires a second color index of the second source RGB value.
- the receiving end may, centered on the first color index, acquire a plurality of source RGB values within a preset range before and after the first color index, calculate the errors of the first source RGB value and of these source RGB values against the RGB value of the current pixel point, and determine the source RGB value with the smallest error as the second source RGB value.
- according to the above example, the two source RGB values (10, 8, 6) and (9, 10, 1) are obtained centered on "9", and the errors of (10, 8, 6), (8, 9, 8), and (9, 10, 1) with respect to (9, 9, 9) are calculated separately: the error value of the color index "8" is 5, the error value of the color index "9" is 2, and the error value of the color index "10" is 9. Since the error of the color index "9" is the smallest, the source RGB value (8, 9, 8) is determined as the second source RGB value, and the second color index "9" of the second source RGB value is acquired.
- the first source RGB value and first color index obtained may be the same as the second source RGB value and second color index, or may be different; this is determined by the actual execution process.
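The two lookup steps above can be sketched together; the window size of one index on each side and the sum-of-absolute-differences error metric are assumptions consistent with the worked example.

```python
def find_best_index(training, pixel, window=1):
    """Two-step lookup: first locate the entry whose G component is
    closest to the pixel's G (first source RGB value), then, within
    `window` positions on either side, pick the entry whose
    sum-of-absolute-differences error against the pixel is smallest
    (second source RGB value). The window size and the error metric
    are assumptions consistent with the worked example."""
    r, g, b = pixel
    # Step 1: closest G component.
    pos = min(range(len(training)),
              key=lambda i: abs(training[i][1][1] - g))
    # Step 2: smallest total error in the neighbourhood around pos.
    lo, hi = max(0, pos - window), min(len(training), pos + window + 1)
    err = lambda c: abs(c[0] - r) + abs(c[1] - g) + abs(c[2] - b)
    best = min(range(lo, hi), key=lambda i: err(training[i][1]))
    return training[best]

# Training colour table from the example.
training = [(5, (8, 1, 10)), (6, (1, 7, 6)), (7, (5, 8, 5)),
            (8, (10, 8, 6)), (9, (8, 9, 8)), (10, (9, 10, 1))]
best_index, best_rgb = find_best_index(training, (9, 9, 9))
```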
- the receiving end replaces the second source RGB value with the RGB value of the current pixel point, and modifies a plurality of source RGB values within a preset range centered on the second color index by using a preset weight value and the RGB value of the current pixel point. Referring to FIG. 5, according to the above example, the second source RGB value (8, 9, 8) is replaced with the RGB value (9, 9, 9) of the current pixel point, and a plurality of source RGB values within a preset range centered on the second color index may be modified according to the preset weight value and the RGB value of the current pixel point.
- the preset weight value may be a dynamic weight value: the closer a source RGB value is to the second color index, the greater the influence of the RGB value of the current pixel point on it.
- the RGB value (9, 9, 9) of the current pixel point is used to modify (10, 8, 6) and (9, 10, 1).
- the source RGB value of the color index "8" is modified from (10, 8, 6) to (9, 9, 8).
- the source RGB value of the color index "10" is modified from (9, 10, 1) to (9, 9, 7).
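One training step (replacement plus neighbour modification) can be sketched as follows; a fixed weight of 0.7 happens to reproduce the example's numbers, whereas the text describes a dynamic weight.

```python
def apply_training_step(table, pos, pixel, weight=0.7):
    """One training step: the entry at position `pos` is replaced by
    the current pixel's RGB value, and its immediate neighbours are
    pulled toward that value by a weight. A fixed weight of 0.7
    happens to reproduce the example's numbers; the text actually
    describes a dynamic weight that grows closer to the replaced
    index."""
    blend = lambda old: tuple(int(round(weight * p + (1 - weight) * o))
                              for p, o in zip(pixel, old))
    table[pos] = (table[pos][0], pixel)
    for n in (pos - 1, pos + 1):          # preset range of one index
        if 0 <= n < len(table):
            table[n] = (table[n][0], blend(table[n][1]))
    return table

# Training colour table from the example; position 4 holds index "9".
table = [(5, (8, 1, 10)), (6, (1, 7, 6)), (7, (5, 8, 5)),
         (8, (10, 8, 6)), (9, (8, 9, 8)), (10, (9, 10, 1))]
table = apply_training_step(table, 4, (9, 9, 9))
```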
- the receiving end acquires the training color table information obtained after the modification, takes the next pixel point of the current pixel point as the current pixel point, and returns to the step of acquiring, in the training color table information, the first source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- the second pixel point in the RGB data is then taken as the current pixel point, and the training color table information obtained after the modification is trained once again; for the training process, reference may be made to the description of the above training process, and details are not described herein again.
- the receiving end acquires the training color table information obtained after the modification, and determines the training color table information as the local color table information of the RGB data.
- the receiving end may update the RGB values of the pixel points by using local color table information of the RGB data.
- the receiving end may sequentially acquire, in the local color table information of the RGB data, the source RGB value that is the same as each pixel point or has the smallest error, and replace the RGB value of that pixel point with it.
- the receiving end may replace the RGB value of each pixel point with the color index corresponding to the source RGB value that is the same as, or has the smallest error with, that pixel point. Referring to FIG. 6, according to the above example, the RGB value of the first pixel of the RGB data is (9, 9, 9); the error with (9, 9, 8) in the local color table information is the smallest, so the color index "8" corresponding to (9, 9, 8) replaces the RGB value of the first pixel.
- the color index of the second pixel is "10", the color index of the third pixel is "9", and the color index of the fourth pixel is "6".
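The final update, replacing each pixel's RGB value with its best-matching color index, can be sketched with an illustrative table (not the exact trained table derived above).

```python
def index_pixels(local_table, pixels):
    """Replace each pixel's RGB value with the colour index of the
    table entry that matches it exactly or has the smallest
    sum-of-absolute-differences error. The demo table below is
    illustrative, not the exact trained table from the text."""
    err = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
    return [min(local_table, key=lambda e: err(e[1], px))[0]
            for px in pixels]

demo_table = [(8, (9, 9, 8)), (9, (9, 9, 9)), (10, (9, 9, 7))]
indexed = index_pixels(demo_table, [(9, 9, 9), (9, 9, 7), (9, 9, 8)])
```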
- the receiving end may train the initialization color table information by using the RGB values of the respective pixels in the RGB data to generate local color table information of the RGB data, and update the RGB values of the respective pixel points by using the local color table information of the RGB data.
- the receiving end may generate initialization color table information, for example: (0,0,0), (1,1,1), (2,2,2), ..., (255,255,255).
- the receiving end acquires, in the initialization color table information, a third source RGB value whose G component is closest to the G component of the current pixel point in the RGB data, and acquires a third color index of the third source RGB value.
- the receiving end acquires a fourth source RGB value having the smallest error with the RGB value of the current pixel point within a preset range centered on the third color index in the initialization color table information, and acquires a fourth color index of the fourth source RGB value.
- the receiving end replaces the fourth source RGB value with the RGB value of the current pixel point, and modifies a plurality of source RGB values within a preset range centered on the fourth color index by using a preset weight value and the RGB value of the current pixel point.
- the receiving end acquires the initialization color table information obtained after the modification, takes the next pixel point of the current pixel point as the current pixel point, and returns to the step of acquiring, in the initialization color table information, the third source RGB value whose G component is closest to the G component of the current pixel point in the RGB data. When the current pixel point is the last pixel point in the RGB data, the receiving end acquires the initialization color table information obtained after the modification, determines the initialization color table information as the local color table information of the RGB data, and may update the RGB values of the respective pixel points by using the local color table information of the RGB data.
- the RGB data may not be the first frame image in the picture file; that is, the RGB data is the Nth frame image in the picture file, where N is a positive integer greater than 1 and less than or equal to the total number of frames.
- the receiving end may train the local color table information of the N-1th frame RGB data by using the RGB values of the pixels in the RGB data to generate local color table information of the RGB data, and adopt the RGB The local color table information of the data updates the RGB values of the respective pixel points.
- the receiving end may sort at least one source RGB value in the local color table information of the (N-1)th frame RGB data according to the preset ordering of the G component to generate training color table information.
- the receiving end acquires, in the training color table information, a fifth source RGB value whose G component is closest to the G component of the current pixel point in the RGB data, and acquires a fifth color index of the fifth source RGB value.
- the receiving end acquires, within a preset range centered on the fifth color index in the training color table information, a sixth source RGB value having the smallest error with the RGB value of the current pixel point, and acquires a sixth color index of the sixth source RGB value.
- the receiving end replaces the sixth source RGB value with the RGB value of the current pixel point, and modifies a plurality of source RGB values within a preset range centered on the sixth color index by using a preset weight value and the RGB value of the current pixel point.
- the receiving end acquires the training color table information obtained after the modification, takes the next pixel point of the current pixel point as the current pixel point, and returns to the step of acquiring, in the training color table information, the fifth source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- the receiving end acquires the training color table information obtained after the modification, and determines the training color table information as the local color table information of the RGB data.
- the receiving end may update the RGB values of the pixel points by using the local color table information of the RGB data. It should be noted that, in this example, the process of training the training color table information and of updating the RGB values of each pixel by using the local color table information of the RGB data may refer to the execution process described in the foregoing manner, and details are not described herein again.
- it should be noted that using the G component to guide the training of the color table information is only an example; the R component or the B component can also be used to guide the training of the color table information, and the process is similar to that described above and is not repeated here.
- it should be noted that the first color index through the sixth color index, and the corresponding source RGB values, all carry the same meanings of source RGB value and color index; the naming in this manner is only to distinguish three different execution scenarios: when the RGB data is the first frame image in the picture file and global color table information is present in the picture header information data segment; when the RGB data is the first frame image in the picture file and no global color table information exists in the picture header information data segment; and when the RGB data is the Nth frame image in the picture file.
- the receiving end performs image coding on the updated RGB data by using the delay time and the total number of frames to generate the picture file.
- the receiving end may perform image encoding on the color index of each pixel in the RGB data by using Lempel-Ziv-Welch (LZW) encoding based on the delay time and the total number of frames to generate the picture file, and the receiving end may store or display the picture file.
- the manner of the image encoding is specifically determined by the image format of the image file. For example, if the image file to be generated is a GIF image, the image encoding may be GIF encoding or the like.
- FIG. 7 is a flow chart of another method for processing a picture file provided in the embodiment of the present application.
- the specific process of the image file processing method is jointly illustrated by the sending end side, the transiting device side, and the receiving end side, and the method includes the following steps.
- the sending end decodes the original picture file to generate a picture header information data segment of the picture file and RGB data corresponding to each frame image in the picture file.
- the transmitting end may decode the original picture file to generate a picture header information data segment of the picture file and RGB data corresponding to each frame image in the picture file.
- the transmitting end converts the RGB data into YUV data by using a color space conversion formula.
- the transmitting end may convert the RGB data into YUV data using a color space conversion formula.
- the transmitting end encodes the YUV data to generate code stream data.
- the transmitting end may further perform encoding processing on the YUV data to generate code stream data.
- a user may be allowed to add configuration parameters, which may be parameters for encoding the YUV data.
- the transmitting end may encode the YUV data by using configuration parameters to generate code stream data.
- a user may also be allowed to add an encoder complexity, which may be a fineness parameter of the encoding determined according to the hardware performance of the transmitting end.
- the transmitting end may configure an encoder complexity to encode the YUV data to generate code stream data.
- the sending end sends the compressed image data to the relay device.
- the transmitting end may generate compressed image data and store the compressed image data; when detecting a request of the receiving end for the picture file, the transmitting end sends the compressed image data to the receiving end.
- the relay device receives the compressed image data sent by the sending end, and sends the compressed image data to a receiving end.
- the relay device may be a connection device between the sending end and the receiving end; if the sending end and the receiving end cannot connect directly, the relay device performs transit processing on the compressed image data. For example, for two clients in an instant messaging application, the relay device may be a background service device of the instant messaging application.
- the relay device receives the compressed image data sent by the sending end, and may send the compressed image data to the receiving end according to the application identifier of the receiving end indicated by the sending end.
- the receiving end receives the compressed image data sent by the relay device, and parses the compressed image data to obtain the picture header information data segment and the code stream data.
- the receiving end receives the compressed image data sent by the relay device, and the receiving end may perform parsing processing on the compressed image data to obtain the picture header information in the compressed image data. Data segment and the code stream data.
- the receiving end decodes the code stream data to generate the YUV data, and converts the YUV data into RGB data by using a color space conversion formula.
- the receiving end decodes the code stream data to generate the YUV data, and may convert the YUV data into RGB data using a color space conversion formula. In some examples, the receiving end needs to determine the color space conversion formula used according to the range of values of the luminance component.
- the receiving end updates the RGB values of each pixel in the RGB data by using the global color table information.
- the receiving end performs image coding on the updated RGB data by using the delay time and the total number of frames to generate the picture file.
- FIG. 8 is a schematic flowchart diagram of another method for processing a picture file according to an embodiment of the present application.
- the embodiment of the present application describes a specific process of the image file processing method from the sending end side, and the method may include the following steps S301-S303.
- the sender may decode the original picture file to generate a picture header information data segment of the picture file.
- the transmitting end further converts each frame image in the picture file into YUV data.
- the transmitting end may further perform encoding processing on the YUV data to generate code stream data.
- the receiving end decodes the code stream data to generate the YUV data, and processes the YUV data based on the delay information, total frame number, global color table information, and the like in the picture header information data segment to generate the picture file.
- in this way, the picture header information data segment of the picture file is obtained, each frame image in the picture file is converted into YUV data, and compressed image data is then generated from the code stream data obtained by encoding the YUV data together with the picture header information data segment, and the compressed image data is transmitted.
- the code stream data can be decoded to generate YUV data, and the YUV data is then encoded based on the picture header information data segment to generate the picture file.
- FIG. 9 is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application.
- the embodiment of the present application describes a specific process of the image file processing method from the sending end side, and the method may include the following steps S401-S404.
- the transmitting end may decode the original picture file to generate a picture header information data segment of the picture file and RGB data corresponding to each frame image in the picture file.
- the transmitting end may convert the RGB data into YUV data using a color space conversion formula.
- the transmitting end may further perform encoding processing on the YUV data to generate code stream data.
- FIG. 10 is a schematic diagram of an example of generating compressed image data according to an embodiment of the present application.
- delay information, total number of frames, global color table information, and the like of the picture file may be acquired, where the delay information records the play interval between each frame of the picture file, and the total number of frames is the number of image frames in the picture file.
- the picture header information data segment may further include global color table information of the picture file,
- the global color table information contains the RGB values of each pixel of each frame of image. For some picture files, all the images contained in them share one piece of global color table information; for other picture files, each frame image has its own local color table information.
- if global color table information exists in the picture file, the global color table information is acquired; if only local color table information exists in the picture file, the local color table information is not acquired.
- the delay information, the total number of frames, the global color table information, and the like of the picture file may be encapsulated to generate a picture header information data segment of the picture file.
- Simultaneously decoding the picture file can also obtain RGB data corresponding to each frame image in the picture file.
- the RGB data can be converted into YUV data by using a color space conversion formula; raster scan processing can be performed on the pixels of each frame image to convert each frame of RGB data into corresponding YUV data.
- the YUV data may be further subjected to encoding processing to generate code stream data, where the encoding may include predictive coding, transform coding, quantization coding, and entropy coding.
- the user can be allowed to add configuration parameters, and the user can also be allowed to add encoder complexity.
- compressed image data can be generated.
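The patent does not fix a byte layout for the compressed image data; one illustrative container, with hypothetical 4-byte length prefixes so the receiver can split the two parts again, could look like:

```python
import struct

def pack_compressed_image(header_segment, stream_data):
    """Concatenate the picture header information data segment and the
    code stream data, each with a 4-byte big-endian length prefix so
    the receiver can split them again. The text fixes no byte layout;
    this container format is purely illustrative."""
    return (struct.pack(">I", len(header_segment)) + header_segment +
            struct.pack(">I", len(stream_data)) + stream_data)

def parse_compressed_image(blob):
    """Inverse of pack_compressed_image: recover the picture header
    information data segment and the code stream data."""
    hlen = struct.unpack(">I", blob[:4])[0]
    header = blob[4:4 + hlen]
    slen = struct.unpack(">I", blob[4 + hlen:8 + hlen])[0]
    stream = blob[8 + hlen:8 + hlen + slen]
    return header, stream
```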
- FIG. 11 is a flow chart of still another method for processing a picture file provided in the embodiment of the present application.
- the embodiment of the present application describes a specific process of the image file processing method from the receiving end side, and the method may include the following steps S501 and S502.
- the sender may decode the original picture file to generate a picture header information data segment of the picture file.
- the transmitting end further converts each frame image in the picture file into YUV data.
- the transmitting end may further perform encoding processing on the YUV data to generate code stream data.
- FIG. 12 is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application.
- the embodiment of the present application describes a specific process of the image file processing method from the receiving end side, and the method may include the following steps S601-S604.
- the transmitting end may decode the original picture file to generate a picture header information data segment of the picture file and RGB data corresponding to each frame image in the picture file.
- the transmitting end may convert the RGB data into YUV data by using a color space conversion formula.
- the transmitting end may further perform encoding processing on the YUV data to generate code stream data.
- S602. Decode the code stream data to generate the YUV data, and convert the YUV data into RGB data by using a color space conversion formula.
- the receiving end decodes the code stream data to generate the YUV data, and may convert the YUV data into RGB data using a color space conversion formula. In some examples, the receiving end needs to determine the color space conversion formula used according to the range of values of the luminance component.
- the receiving end performs image encoding on the updated RGB data by using the delay time and the total number of frames to generate the picture file.
- the receiving end may perform image coding on a color index of each pixel in the RGB data by using an LZW based on the delay time and the total number of frames to generate the picture file.
- the receiving end may store or display the picture file.
- FIG. 13 is a schematic diagram of an example of generating a picture file according to an embodiment of the present application.
- the compressed image data may be parsed to obtain the picture header information data segment and the code stream data in the compressed image data, where the picture header information data segment may include delay information, total number of frames, global color table information, and the like; the delay information records the play interval between each frame of the picture file, and the total number of frames is the number of image frames in the picture file.
- the global color table information contains the RGB values of each pixel of each frame of image. For some picture files, all the images they contain share one piece of global color table information; for other picture files, each frame image has its own local color table information.
- if global color table information exists in the picture file, it exists in the picture header information data segment; if only local color table information exists in the picture file, the local color table information does not exist in the picture header information data segment.
- the code stream data may be decoded to generate the YUV data, and the YUV data is converted to RGB data using a color space conversion formula. If the currently processed RGB data is the first frame RGB data in the picture file, it is determined whether there is global color table information in the picture header information data segment. If present, the global color table information is trained to generate local color table information of the RGB data by using RGB values of each pixel in the RGB data, and the local color table information of the RGB data is used. The RGB values of the respective pixel points are updated.
- if no global color table information exists, initialization color table information may be generated, the initialization color table information is trained by using the RGB values of the pixels in the RGB data to generate local color table information of the RGB data, and the local color table information of the RGB data is used to update the RGB values of the respective pixel points.
- if the currently processed RGB data is the Nth frame RGB data in the picture file, where N is a positive integer greater than 1, the RGB values of each pixel in the RGB data may be used to train the local color table information of the (N-1)th frame RGB data to generate local color table information of the RGB data, and the RGB values of the respective pixel points are updated by using the local color table information of the RGB data.
- the updated RGB data is image-encoded by using the delay time and the total number of frames to generate the picture file.
- FIG. 14 is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application.
- the picture file processing device in the embodiment of the present application may be a distributed service device, or may be an image processing module in a distributed service device.
- the method in this embodiment may include the following step S701- Step S705.
- the picture file processing device can decode the original picture file to generate a picture header information data segment of the picture file.
- the picture file processing device further converts each frame image in the picture file into YUV data.
- the picture file processing apparatus may further perform encoding processing on the YUV data to generate code stream data, where the encoding may include predictive coding, transform coding, quantization coding, and entropy coding. For example, the picture file processing device may compress the YUV data using an IPPP mode.
- in the IPPP mode, the YUV data of the first frame is an I frame (an intra prediction frame) and the YUV data of the remaining frames are P frames (inter prediction frames), which can effectively compress the amount of file data in the picture file; a fixed QP (quantization parameter) may also be used to stabilize the quality between different frames.
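The IPPP scheduling and fixed-QP choice described above can be sketched as follows (a minimal illustrative sketch in Python; the function name and default QP value are hypothetical, not from the original text):

```python
def ippp_schedule(num_frames, qp=30):
    """Assign frame types for IPPP-mode encoding: the first frame is an
    intra-predicted I frame, every remaining frame is an inter-predicted
    P frame, and one fixed QP is shared by all frames so that quality
    stays stable between frames. Illustrative sketch only."""
    if num_frames < 1:
        return []
    return [("I" if i == 0 else "P", qp) for i in range(num_frames)]
```

The schedule can then drive a frame-by-frame encoder loop, with each tuple giving the prediction type and quantization parameter for that frame.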
- the entropy coding may include Huffman coding, arithmetic coding, and the like.
- the picture file processing device may generate compressed image data, and the picture file processing device may store the compressed image data, the compressed image data including the picture header information data segment and the code stream data.
- the picture file processing device may perform parsing processing on the compressed image data to obtain the picture header information data segment and the code stream data in the compressed image data.
- the picture file processing device decodes the code stream data to generate the YUV data, and encodes the YUV data based on the delay information, the total number of frames, the global color table information, and the like in the picture header information data segment to generate the picture file.
- the picture header information data segment of the picture file is generated according to the original picture file, and each frame image in the picture file is converted into YUV data; compressed image data is then generated from the code stream data obtained by encoding the YUV data and the picture header information data segment, and the compressed image data is stored.
- YUV data may be generated according to the code stream data, and the YUV data is processed based on the picture header information data segment to generate a picture file.
- FIG. 15 is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application.
- the picture file processing device in the embodiment of the present application may be a distributed service device, or may be an image processing module in a distributed service device.
- the embodiment of the present application specifically describes the process of compressing a picture file into compressed image data; the method may include the following steps S801 to S807.
- the picture file processing device may decode the original picture file. It may be understood that the global color table information includes the RGB values used by the pixels of each frame image in the picture file; for some picture files, all the images they contain share one global color table, while for other picture files, each frame image has its own local color table information. The picture file processing device can further determine whether the color table information in the picture file is global color table information of the picture file; if yes, the process proceeds to step S803, and if no, the process proceeds to step S804.
- the picture file processing device may generate a picture header information data segment including delay information, the total number of frames, global color table information, and the like, where the delay information records the play interval time between the frame images in the picture file and the total number of frames is the number of image frames in the picture file; the RGB data corresponding to each frame image in the picture file is also generated.
- the picture file processing device may generate a picture header information data segment including delay information, the total number of frames, and the like, where the delay information records the play interval time between the frame images in the picture file and the total number of frames is the number of image frames in the picture file; the RGB data corresponding to each frame image in the picture file is also generated.
- the picture file processing device may convert the RGB data into YUV data using a color space conversion formula.
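As an illustration of one possible color space conversion formula, the following sketch uses the common full-range BT.601 matrix (an assumption for illustration; the text does not fix which conversion formula is used):

```python
def rgb_to_yuv(r, g, b):
    """Full-range BT.601 RGB -> YUV conversion. The chroma components
    are offset by 128 so that neutral gray maps to U = V = 128."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    clip = lambda x: max(0, min(255, int(round(x))))  # keep 8-bit range
    return clip(y), clip(u), clip(v)
```

Applying this per pixel converts an RGB frame into the YUV data that is subsequently encoded.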
- configuration parameters added by a user may also be supported; the configuration parameters may be parameters for encoding the YUV data.
- the picture file processing device may encode the YUV data using configuration parameters to generate code stream data.
- adding an encoder complexity by the user may also be supported; the encoder complexity may be a fineness-of-encoding parameter determined according to the hardware performance of the picture file processing device.
- the picture file processing device may configure encoder complexity to encode the YUV data to generate code stream data.
- the picture header information data segment of the picture file is generated according to the original picture file, and each frame image in the picture file is converted into YUV data; compressed image data is then generated from the code stream data obtained by encoding the YUV data and the picture header information data segment, and the compressed image data is stored.
- FIG. 16 is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application.
- the picture file processing device in the embodiment of the present application may be a distributed service device, or may be an image processing module in a distributed service device.
- the embodiment of the present application specifically describes the process of decompressing compressed image data into a picture file; the method may include the following steps S901 to S907.
- the picture file processing device decodes the code stream data to generate the YUV data, and may convert the YUV data into RGB data by using a color space conversion formula.
- in some examples, the receiving end needs to determine the color space conversion formula to use based on the range of values of the luminance component.
- the picture file processing device determines whether the global color table information of the picture file is included in the picture header information data segment, and if yes, proceeds to step S904. If no, the process proceeds to step S905.
- the picture file processing apparatus may update the RGB values of each pixel in the RGB data using the global color table information.
- the global color table information is trained using the RGB values of each pixel in the RGB data to generate local color table information of the RGB data, and the RGB values of the respective pixels are updated using the local color table information of the RGB data.
- in some examples, the picture file processing device may train the global color table information using the RGB values of the respective pixels in the RGB data to generate local color table information of the RGB data, and update the RGB values of the respective pixels using the local color table information of the RGB data.
- the picture file processing device may sort at least one source RGB value in the global color table information according to a preset ordering of the G components (e.g., ascending or descending) to generate training color table information.
- the picture file processing device acquires, in the training color table information, a first source RGB value whose G component is closest to the G component of the current pixel point in the RGB data, and acquires the first color index of the first source RGB value.
- the picture file processing device acquires, in a preset range centered on the first color index in the training color table information, a second source RGB value having the smallest error from the RGB value of the current pixel point, and acquires the second color index of the second source RGB value. It may be understood that the picture file processing device may take the first color index as the center, acquire a plurality of source RGB values in a preset range before and after the first color index, calculate the errors of the first source RGB value and of the plurality of source RGB values against the RGB value of the current pixel point, and determine the source RGB value with the smallest error among them as the second source RGB value.
- the picture file processing device replaces the second source RGB value with the RGB value of the current pixel point, and, using a preset weight value and the RGB value of the current pixel point, modifies a plurality of source RGB values within a preset range of the training color table information centered on the second color index.
- the picture file processing device acquires the training color table information obtained after the modification, takes the next pixel point after the current pixel point as the current pixel point, and returns to acquiring, in the training color table information, the first source RGB value whose G component is closest to the G component of the current pixel point in the RGB data. For example, the second pixel point in the RGB data is taken as the current pixel point, and the training color table information obtained after the modification is trained again; the training process in each round can refer to the description of the training process above and is not repeated here.
- after the pixel points are processed, the picture file processing device acquires the training color table information obtained after the modification and determines it as the local color table information of the RGB data.
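The training procedure described above (sort by G component, nearest-G lookup, windowed smallest-error search, replacement, and weighted modification of neighbors) can be sketched as follows. This is a rough illustrative sketch: the table is assumed to be a list of (R, G, B) tuples, and `window` and `weight` are hypothetical stand-ins for the "preset range" and "preset weight value" in the text:

```python
def train_color_table(table, pixels, window=4, weight=0.5):
    """Train a color table against a sequence of pixel RGB values."""
    table = sorted(table, key=lambda c: c[1])          # sort by G component
    for px in pixels:
        # first index: entry whose G component is nearest the pixel's G
        i1 = min(range(len(table)), key=lambda i: abs(table[i][1] - px[1]))
        # second index: smallest RGB error within a window around i1
        lo, hi = max(0, i1 - window), min(len(table), i1 + window + 1)
        i2 = min(range(lo, hi),
                 key=lambda i: sum((a - b) ** 2 for a, b in zip(table[i], px)))
        table[i2] = px                                  # replace best match
        # nudge neighbors of i2 toward the pixel using the preset weight
        for j in range(max(0, i2 - window), min(len(table), i2 + window + 1)):
            if j != i2:
                table[j] = tuple(int(round(weight * p + (1 - weight) * c))
                                 for c, p in zip(table[j], px))
    return table
```

After all pixels have been processed, the returned table plays the role of the local color table information for that frame.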
- the picture file processing device may update the RGB values of the pixel points by using local color table information of the RGB data.
- the picture file processing device may, in the local color table information of the RGB data, sequentially find the source RGB values that are the same as, or have the smallest error from, the respective pixel points; in some examples, the picture file processing device may replace the RGB values of the respective pixel points with the color indexes corresponding to those source RGB values. Please refer to FIG. 6.
- for example, if the RGB value of the first pixel of the RGB data is (9, 9, 9) and the source RGB value with the smallest error from (9, 9, 9) is (9, 9, 8), the RGB value of the first pixel is replaced with the color index "8" corresponding to (9, 9, 8). Similarly, the color index of the second pixel is "10", the color index of the third pixel is "9", and the color index of the fourth pixel is "6".
- the initialization color table information is trained using the RGB values of each pixel in the RGB data to generate local color table information of the RGB data, and the RGB values of the respective pixels are updated using the local color table information of the RGB data.
- in some examples, the picture file processing device may train the initialization color table information using the RGB values of the respective pixels in the RGB data to generate local color table information of the RGB data, and update the RGB values of the respective pixels using the local color table information of the RGB data.
- the picture file processing device may generate initialization color table information, for example: (0, 0, 0), (1, 1, 1), (2, 2, 2), ..., (255, 255, 255). The picture file processing device acquires, in the initialization color table information, a third source RGB value whose G component is closest to the G component of the current pixel point in the RGB data, and acquires the third color index of the third source RGB value; it then acquires, in a preset range centered on the third color index in the initialization color table information, a fourth source RGB value having the smallest error from the RGB value of the current pixel point, and acquires the fourth color index of the fourth source RGB value.
- the picture file processing device replaces the fourth source RGB value with the RGB value of the current pixel point, and, using the preset weight value and the RGB value of the current pixel point, modifies a plurality of source RGB values within a preset range of the initialization color table information centered on the fourth color index.
- the picture file processing device acquires the initialization color table information obtained after the modification, takes the next pixel point after the current pixel point as the current pixel point, and returns to acquiring, in the initialization color table information, the third source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- after the pixel points are processed, the picture file processing device acquires the initialization color table information obtained after the modification and determines it as the local color table information of the RGB data; the picture file processing device may then update the RGB values of the respective pixel points using the local color table information of the RGB data. It should be noted that the process of training the initialization color table information in each instance, and of updating the RGB values of each pixel using the local color table information of the RGB data, can refer to the execution process of the foregoing manner and is not described here.
- the local color table information of the (N-1)th frame RGB data is trained using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, and the RGB values of the respective pixels are updated using the local color table information of the RGB data.
- in some examples, the picture file processing device may sort at least one source RGB value in the local color table information of the (N-1)th frame RGB data according to a preset ordering of the G components to generate training color table information. The picture file processing device acquires, in the training color table information, a fifth source RGB value whose G component is closest to the G component of the current pixel point in the RGB data, and acquires the fifth color index of the fifth source RGB value; it then acquires, in a preset range centered on the fifth color index in the training color table information, a sixth source RGB value having the smallest error from the RGB value of the current pixel point, and acquires the sixth color index of the sixth source RGB value. The picture file processing device replaces the sixth source RGB value with the RGB value of the current pixel point, and, using a preset weight value and the RGB value of the current pixel point, modifies a plurality of source RGB values within a preset range of the training color table information centered on the sixth color index. The picture file processing device then acquires the training color table information obtained after the modification, takes the next pixel point after the current pixel point as the current pixel point, and returns to acquiring, in the training color table information, the fifth source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- S907 Perform image coding on the updated RGB data by using the delay time and the total number of frames to generate the picture file.
- the picture file processing device performs image encoding on the updated RGB data using the delay time and the total number of frames to generate the picture file.
- in some examples, the picture file processing device may perform image coding on the color index of each pixel in the RGB data using LZW coding, based on the delay time and the total number of frames, to generate the picture file.
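The LZW coding of the color-index stream can be illustrated with a minimal dictionary coder. This is a simplified sketch: the actual GIF LZW variant additionally uses variable-width codes plus clear and end-of-information codes, which are omitted here:

```python
def lzw_encode(indices):
    """Minimal LZW encoder over a sequence of color indexes (0-255)."""
    dict_size = 256
    table = {(i,): i for i in range(dict_size)}  # single-symbol entries
    w, out = (), []
    for k in indices:
        wk = w + (k,)
        if wk in table:
            w = wk                        # extend the current phrase
        else:
            out.append(table[w])          # emit code for longest match
            table[wk] = dict_size         # add the new phrase
            dict_size += 1
            w = (k,)
    if w:
        out.append(table[w])
    return out

def lzw_decode(codes):
    """Matching decoder, used here to check the round trip."""
    dict_size = 256
    table = {i: (i,) for i in range(dict_size)}
    w = table[codes[0]]
    out = list(w)
    for code in codes[1:]:
        # the only code that can be unknown is the "KwKwK" special case
        entry = table[code] if code in table else w + (w[0],)
        out.extend(entry)
        table[dict_size] = w + (entry[0],)
        dict_size += 1
        w = entry
    return out
```

A round trip over a short index sequence demonstrates that the dictionary coding is lossless, which is why the color indexes survive the final image coding intact.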
- the picture file processing device may store or display the picture file.
- FIG. 17 is a schematic flowchart of still another method for processing a picture file according to an embodiment of the present application.
- the image file processing method provided by the embodiment of the present application is further described with reference to the embodiment shown in FIG. 15, and the method includes the following steps S1001 to S1009.
- S1002 Determine whether the color table information of the picture file is global color table information of the picture file.
- S1003 Generate a picture header information data segment including delay information, a total number of frames, and global color table information, and RGB data corresponding to each frame image in the picture file.
- S1004 Generate a picture header information data segment including delay information, a total number of frames, and RGB data corresponding to each frame image in the picture file.
- the picture file processing device may obtain a user-defined information data segment, where the user-defined information data segment may include configuration parameters, encoder complexity, and the like, and the configuration parameter may be a parameter for encoding the YUV data. If the obtained configuration parameter is a lossless mode parameter, the RGB data may be directly encoded to generate code stream data. If the obtained configuration parameter is an SD mode parameter or an HD mode parameter, the RGB data needs to be converted into YUV data, and the YUV data is then encoded to generate code stream data. In some examples, the picture file processing device may determine whether the configuration parameter in the user-defined information data segment is a lossless mode parameter; if yes, the process proceeds to step S1006, and if no, the process proceeds to steps S1007-S1008.
- the encoder complexity may be a fineness parameter of the encoding determined according to hardware performance of the picture file processing device.
- the picture file processing device may encode the RGB data to generate code stream data.
- the encoding may include predictive coding, transform coding, quantization coding, and entropy coding.
- in some examples, the picture file processing apparatus may compress the RGB data using an IPPP mode, where the first frame of RGB data is an I frame (an intra prediction frame) and the remaining frames of RGB data are P frames (inter prediction frames), which can effectively compress the file data amount of the picture file; a fixed QP method can also be used to stabilize the quality between different frames.
- the entropy coding may include Huffman coding, arithmetic coding, and the like.
- the picture file processing device may convert the RGB data into YUV data using a color space conversion formula.
- the picture file processing device may add the configuration parameter to the picture header information data segment. In some examples, the picture file processing device may add the user-defined information data segment to the picture header information data segment.
- the steps S1001 to S1004 of the embodiment of the present application may refer to the detailed description of the steps S801 to S804 of the embodiment shown in FIG. 15 respectively, and details are not described herein.
- the picture header information data segment of the picture file is generated according to the original picture file, and each frame image in the picture file is converted into YUV data; compressed image data is then generated from the code stream data obtained by encoding the YUV data and the picture header information data segment, and the compressed image data is stored.
- FIG. 18 is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application. As shown in FIG. 18, the picture file processing method provided by the embodiment of the present application is further described with reference to the embodiment shown in FIG. 16, and the method includes the following steps S1101-step S1112.
- S1101 Parse the compressed image data to obtain the picture header information data segment and the code stream data.
- the picture header information data segment may further include a user-defined information data segment, where the user-defined information data segment may include a configuration parameter, an encoder complexity, and the like; the configuration parameter may be a parameter previously used for encoding the YUV data, and may include any one of an SD mode parameter, an HD mode parameter, and a lossless mode parameter.
- the encoder complexity may be a fineness parameter of the encoding determined according to hardware performance of the transmitting end, and the encoder complexity may include any one of a first complexity, a second complexity, and a third complexity. The first complexity is higher than the second complexity, and the second complexity is higher than the third complexity.
- the above performance value, the first preset value range, the second preset value range, and the third preset value range may be set specifically according to the developer's experience.
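A possible mapping from a performance value to one of the three complexities might look like the following sketch, where the concrete value ranges are purely hypothetical (the text says they are set from the developer's experience):

```python
def select_encoder_complexity(performance_value,
                              first_range=(80, 101),
                              second_range=(50, 80)):
    """Map a hardware performance value to an encoder complexity.
    Higher performance allows a higher complexity, i.e. a finer
    encoding; anything below `second_range` falls into the third
    complexity. The ranges are hypothetical defaults."""
    lo, hi = first_range
    if lo <= performance_value < hi:
        return "first complexity"    # highest fineness of encoding
    lo, hi = second_range
    if lo <= performance_value < hi:
        return "second complexity"
    return "third complexity"        # lowest fineness of encoding
```

With such a mapping, the first complexity is chosen only for high-performance hardware, consistent with the ordering of complexities given above.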
- S1102 Determine whether the configuration parameter in the data segment of the picture header information is a lossless mode parameter.
- the picture file processing device may determine whether the configuration parameter in the picture header information data segment is a lossless mode parameter, and if yes, proceed to step S1103. If no, the process proceeds to steps S1104-S1107.
- the picture file processing device can decode the code stream data, which was generated by directly encoding the RGB data in the lossless mode, to directly generate RGB data.
- S1104 Determine whether the configuration parameter is a YUV444 mode parameter.
- if the configuration parameter in the data segment of the picture header information is not a lossless mode parameter, that is, the configuration parameter in the data segment of the picture header information is an SD mode parameter or an HD mode parameter: the SD (standard definition) mode parameter is preferably a YUV420 mode parameter, and the HD (high definition) mode parameter is preferably a YUV444 mode parameter.
- the picture file processing device may further determine whether the configuration parameter is a YUV444 mode parameter, and if yes, proceed to step S1105. If no, the process proceeds to steps S1106-S1107.
- S1105 Decode the code stream data to generate the YUV data, and convert the YUV data into RGB data by using a color space conversion formula.
- the picture file processing device determines that the configuration parameter is a YUV444 mode parameter, that is, when the YUV data was encoded, the luminance component and the chrominance component of each pixel in the YUV data were completely retained, so the picture file processing device decodes the code stream data to directly generate the YUV data.
- the picture file processing device converts the YUV data into RGB data using a color space conversion formula.
- the picture file processing device determines that the configuration parameter is not a YUV444 mode parameter, that is, when the YUV data was encoded, only the luminance component of each pixel in the YUV data was retained, while the chrominance components of the pixels in the YUV data were compressed. For example, in the process of encoding the YUV data using the YUV420 mode parameter, the chrominance components of four adjacent pixels in the YUV data can be compressed into one chrominance component. The picture file processing apparatus therefore needs to decode the code stream data to generate the YUV data and perform chroma component upsampling processing on the YUV data, that is, restore each chroma component to the chroma components of four pixel points.
- the chrominance includes a sample matrix and single samples of each of the Cb and Cr color difference signals. The two color difference signals need to be upsampled separately, and the upsampling method is the same for both: Cb corresponds to U in YUV, Cr corresponds to V in YUV, and the YUV data consists of a Y image, a Cb image, and a Cr image.
- the following description will be made in conjunction with the Cb image in the YUV420 mode.
- in the compressed Cb image (that is, the source Cb image), the Cb value of each upsampled non-boundary pixel is determined by the Cb values of the four adjacent pixels at the corresponding position of the source Cb image.
- Dst(2x, 2y) = Clip3(0, 255, (src(x-1, y-1) + 3*src(x, y-1) + 3*src(x-1, y) + 9*src(x, y) + 8) >> 4)
- Dst(2x-1, 2y) = Clip3(0, 255, (3*src(x-1, y-1) + src(x, y-1) + 9*src(x-1, y) + 3*src(x, y) + 8) >> 4)
- Dst(2x, 2y-1) = Clip3(0, 255, (3*src(x-1, y-1) + 9*src(x, y-1) + src(x-1, y) + 3*src(x, y) + 8) >> 4)
- Dst(2x-1, 2y-1) = Clip3(0, 255, (9*src(x-1, y-1) + 3*src(x, y-1) + 3*src(x-1, y) + src(x, y) + 8) >> 4)
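The four interior-pixel rules above can be implemented directly; the sketch below computes the four target Cb values contributed by one non-boundary source position (x, y), using the weights 9/3/3/1, the rounding offset 8, and the right shift by 4 from the formulas:

```python
def clip3(lo, hi, v):
    """Clamp v into [lo, hi], matching the Clip3 used in the formulas."""
    return max(lo, min(hi, v))

def upsample_interior(src, x, y):
    """Return the four target Cb values for the non-boundary source
    pixel (x, y); `src` is indexed as src[row][column]."""
    a, b = src[y - 1][x - 1], src[y - 1][x]   # src(x-1,y-1), src(x,y-1)
    c, d = src[y][x - 1], src[y][x]           # src(x-1,y),   src(x,y)
    return {
        (2 * x,     2 * y):     clip3(0, 255, (a + 3 * b + 3 * c + 9 * d + 8) >> 4),
        (2 * x - 1, 2 * y):     clip3(0, 255, (3 * a + b + 9 * c + 3 * d + 8) >> 4),
        (2 * x,     2 * y - 1): clip3(0, 255, (3 * a + 9 * b + c + 3 * d + 8) >> 4),
        (2 * x - 1, 2 * y - 1): clip3(0, 255, (9 * a + 3 * b + 3 * c + d + 8) >> 4),
    }
```

Note that a uniform source region reproduces its value exactly, since the four weights sum to 16 and the offset 8 implements round-to-nearest before the shift.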
- the Cb value of the corner point in the target Cb image is determined by the Cb value of the corner point in the source Cb image.
- for boundary pixels in the first row and the last row of the target Cb image, the Cb value is determined by the Cb values of two adjacent pixels in the first row and the last row of the source Cb image.
- Dst(2x, 2H-1) = Clip3(0, 255, (src(x-1, 2H-1) + 3*src(x, 2H-1) + 2) >> 2)
- Dst(2x-1, 2H-1) = Clip3(0, 255, (3*src(x-1, 2H-1) + src(x, 2H-1) + 2) >> 2)
- for boundary pixels in the first column and the last column of the target Cb image, the Cb value is determined by the Cb values of two adjacent pixel points in the first column and the last column of the source Cb image.
- Dst(0, 2y-1) = Clip3(0, 255, (src(0, y-1) + 3*src(0, y) + 2) >> 2)
- Dst(0, 2y) = Clip3(0, 255, (3*src(0, y) + src(0, y-1) + 2) >> 2)
- Dst(2W-1, 2y-1) = Clip3(0, 255, (src(2W-1, y-1) + 3*src(2W-1, y) + 2) >> 2)
- Dst(2W-1, 2y) = Clip3(0, 255, (3*src(2W-1, y) + src(2W-1, y-1) + 2) >> 2)
- the Cb values of all the pixel points in the target Cb image can be obtained by the above calculation rules. It can be understood that the weight values in the above formulas can be determined according to empirical values. Similarly, the Cr values of all pixel points in the target Cr image can be obtained using the same calculation rules. This completes the chroma component upsampling processing.
- the picture file processing device may convert the YUV data into RGB data using a color space conversion formula. In some examples, the picture file processing device needs to determine the color space conversion formula to use according to the range of values of the luminance component.
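A sketch of a luminance-range-dependent conversion: full-range luma (0-255) is used as-is, while limited-range luma (16-235) is expanded first. The BT.601 coefficients are an assumption for illustration, since the text does not name a particular matrix:

```python
def yuv_to_rgb(y, u, v, full_range=True):
    """Convert one YUV sample back to RGB, selecting the formula
    according to the range of the luminance component."""
    if full_range:
        yy, scale = float(y), 1.0
    else:
        yy = (y - 16) * 255.0 / 219.0     # expand limited-range luma
        scale = 255.0 / 224.0             # expand limited-range chroma
    r = yy + 1.402 * scale * (v - 128)
    g = yy - 0.344 * scale * (u - 128) - 0.714 * scale * (v - 128)
    b = yy + 1.772 * scale * (u - 128)
    clip = lambda x: max(0, min(255, int(round(x))))
    return clip(r), clip(g), clip(b)
```

Here the range check is made explicit as a flag; a receiving end would infer it from the code stream metadata before applying the appropriate formula.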
- the global color table information is trained using the RGB values of each pixel in the RGB data to generate local color table information of the RGB data, and the RGB values of the respective pixels are updated using the local color table information of the RGB data.
- the initialization color table information is trained using the RGB values of each pixel in the RGB data to generate local color table information of the RGB data, and the RGB values of the respective pixels are updated using the local color table information of the RGB data.
- the local color table information of the (N-1)th frame RGB data is trained using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, and the RGB values of the respective pixels are updated using the local color table information of the RGB data.
- S1112 Perform image coding on the updated RGB data by using the delay time and the total number of frames to generate the picture file.
- the steps S1108 to S1112 of the embodiment of the present application may refer to the detailed description of the steps S903 to S907 of the embodiment shown in FIG. 16 respectively, and details are not described herein.
- FIG. 21A is a schematic flowchart diagram of still another method for processing a picture file according to an embodiment of the present application.
- the method can be performed by a computing device. As shown in FIG. 21A, the method may include the following steps.
- S1201A Acquire color data of the first picture file, where the color data includes color values of each pixel point.
- the color data may be RGB data, or YUV data, or the like.
- the computing device can obtain RGB data generated by decoding the first picture file, the RGB data including RGB values of the respective pixel points.
- S1202A Acquire an initial color table, wherein each color value in the initial color table is arranged according to a preset sorting manner of values of one color component.
- the computing device can obtain a global color table in the first picture file.
- the color values in the global color table are sorted according to the value of one color component to obtain the initial color table.
- the computing device can arrange the preset plurality of color values in a predetermined order by the values of one color component to generate the initial color table.
- the initial color table may be (0, 0, 0), (1, 1, 1), ... (255, 255, 255).
- S1203A Search, according to the value of the color component in the color value of each pixel, the corresponding initial color value of each pixel in the initial color table, and update the color values of the respective pixel points using the initial color values.
- the computing device can obtain, in the initial color table, a first color value whose color component is closest to the color component of the pixel point, and obtain the first color index of the first color value.
- a second color value having the smallest error from the color value of the pixel point is then obtained, as the initial color value, in a preset range centered on the first color index in the initial color table.
- the computing device may modify the initial color table by using the color value of each pixel in the color data and the initial color value to obtain a partial color table corresponding to the color data, and adopt the The local color table updates the color values of the respective pixel points.
- the computing device can obtain a second color index of the initial color value, replace the initial color value with the color value of the pixel point, and, using a preset weight value and the color value of the pixel point, modify multiple color values within a preset range of the initial color table centered on the second color index.
- the modified initial color table is determined as the partial color table.
- in some examples, the computing device may train the initial color table of the first frame image using the color data to obtain the local color table corresponding to the first frame image, the initial color table of the first frame image being the initial color table. For the Nth frame image, the computing device may train the initial color table corresponding to the Nth frame image using the second color data to obtain the local color table corresponding to the Nth frame image; the initial color table corresponding to the Nth frame image is the local color table corresponding to the (N-1)th frame image of the original picture file, where N is an integer greater than 1.
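The per-frame chaining described above can be sketched generically; here `train` stands in for the training procedure detailed elsewhere in the text, and the function name is hypothetical:

```python
def tables_for_frames(initial_table, frames, train):
    """Chain color-table training across frames: frame 1 trains the
    initial table; frame N (N > 1) trains the local table obtained
    for frame N-1. Returns the local table of every frame."""
    local_tables = []
    table = initial_table
    for frame in frames:
        table = train(table, frame)   # local table for this frame
        local_tables.append(table)
    return local_tables
```

Because each frame starts from the previous frame's local table, the table adapts gradually across the sequence instead of being retrained from scratch for every frame.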
- in the following, the color data is RGB data, and the RGB data generated by decoding the first picture file is taken as an example for description.
- FIG. 21B is a flowchart of another method for processing a picture file provided in an embodiment of the present application. As shown in FIG. 21B, this embodiment specifically describes the process of training color table information and updating the RGB values of pixels, and the method may include the following steps S1201B-S1203B.
- S1201B: Acquire RGB data generated by decoding the original picture file, and obtain the RGB values of each pixel in the RGB data.
- the picture file processing device may obtain the RGB data generated by decoding the original picture file; it may be understood that decoding the picture file may generate a picture header information data segment of the picture file and the RGB data corresponding to each frame image in the picture file.
- the RGB data may be converted into YUV data, the YUV data is encoded to generate code stream data, and compressed image data including the code stream data and the picture header information data segment is further generated; the compressed image data may be used for storage or transmission.
- after the picture file processing device acquires the compressed image data, it may parse the compressed image data to obtain the picture header information data segment and the code stream data, decode the code stream data to generate YUV data, and further convert the YUV data into RGB data; the picture file processing device acquires the RGB data and obtains the RGB values of the respective pixels in the RGB data.
- S1202B: Train the initial color table information of the RGB data by using the RGB values of the respective pixels to generate the local color table information of the RGB data.
- the picture file processing device may determine whether the picture header information data segment includes the global color table information of the picture file; if yes, the global color table information is determined as the initial color table information of the first frame of RGB data in the picture file; if not, initialization color table information is generated and determined as the initial color table information of the first frame of RGB data in the picture file.
- the initial color table information is trained by using the RGB values of the pixels in the first frame of RGB data to generate the local color table information of the first frame of RGB data; for the Nth frame of RGB data, the local color table information of the (N-1)th frame of RGB data may be used as the initial color table information of the Nth frame of RGB data, and the initial color table information of the Nth frame of RGB data is trained by using the RGB values of the pixels in the Nth frame of RGB data to generate the local color table information of the Nth frame of RGB data, where N is a positive integer greater than 1.
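The frame-chained training above can be sketched as follows. Here `train_table` is a hypothetical helper (given an initial color table and one frame of RGB data, it returns the trained local table); only the chaining structure is taken from the source:

```python
def build_local_tables(frames, train_table, global_table=None):
    """Frame 1 starts from the global color table (or, if absent, a
    generated initialization table); frame N starts from the local
    color table of frame N-1."""
    # Fall back to a grayscale initialization table when no global table exists.
    init = global_table if global_table is not None else [(v, v, v) for v in range(256)]
    local_tables = []
    for frame in frames:
        local = train_table(init, frame)  # train the current frame's table
        local_tables.append(local)
        init = local  # the next frame is trained from this frame's local table
    return local_tables
```

This chaining means each frame's table starts from a table already adapted to the previous frame, rather than from scratch.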
- S1203B: Update the RGB values of each pixel by using the local color table information of the RGB data.
- the picture file processing device may update the RGB values of the pixels by using the local color table information of the RGB data, and then image-encode the updated RGB data by using the delay time and the total number of frames to generate the picture file.
- the local color table information of the RGB data is generated by training the global color table information or the initialization color table information, thereby effectively reducing the distortion of the image.
- FIG. 22 is a schematic flowchart of still another method for processing a picture file according to an embodiment of the present application.
- this embodiment specifically describes the process of training color table information and updating the RGB values of pixels, and the method may include the following steps S1301-S1306.
- S1301: Acquire RGB data generated by decoding the original picture file, and obtain the RGB values of each pixel in the RGB data.
- the picture file processing device may acquire RGB data generated by decoding the original picture file.
- the RGB data may be converted into YUV data, the YUV data is encoded to generate code stream data, and compressed image data including the code stream data and the picture header information data segment is further generated; the compressed image data may be used for storage or for transmission to the picture file processing device.
- S1302: Determine whether the global color table information of the picture file is included in the picture header information data segment generated by decoding the picture file.
- the picture file processing device determines whether the global color table information of the picture file is included in the picture header information data segment; if yes, proceed to step S1303; if no, proceed to step S1304.
- the initialization color table information is trained by using the RGB values of the pixels in the RGB data to generate local color table information of the RGB data.
- the local color table information of the (N-1)th frame RGB data is trained by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data.
- the picture file processing device may update the RGB values of each pixel in the RGB data by using the local color table information of the RGB data, and then image-encode the updated RGB data by using the delay time and the total number of frames to generate the picture file.
- the local color table information of the RGB data is generated by training the global color table information or the initialization color table information, thereby effectively reducing the distortion of the image.
- by using the G component to obtain the closest source RGB value and performing a small-range color index search in the color table information, it is not necessary to traverse the entire color table information, thereby reducing the complexity of training the local color table information and further improving the coding efficiency of the picture file.
- the picture file processing device provided by the embodiments of the present application will be described in detail below with reference to FIG. 23 to FIG. 24. It should be noted that the picture file processing device shown in FIG. 23 to FIG. 24 is used to perform the methods of the embodiments shown in FIG. 8 and FIG. 9 of the present application, and is specifically the transmitting end in the foregoing embodiments. For convenience of description, only the parts related to the embodiments of the present application are shown; for specific technical details that are not disclosed, please refer to the embodiments shown in FIG. 8 and FIG. 9 of the present application.
- FIG. 23 is a schematic structural diagram of a picture file processing apparatus according to an embodiment of the present application.
- the picture file processing apparatus 1 of the embodiment of the present application may include an image converting unit 11, an image compressing unit 12, and a code stream transmitting unit 13.
- the image conversion unit 11 is configured to generate a picture header information data segment of the picture file according to the original picture file, and convert each frame image in the picture file into YUV data.
- the image conversion unit 11 may decode the original picture file to generate a picture header information data segment of the picture file.
- the image conversion unit 11 further converts each frame image in the picture file into YUV data.
- FIG. 24 is a schematic structural diagram of an image conversion unit according to an embodiment of the present application.
- the image conversion unit 11 may include:
- the image decoding sub-unit 111 is configured to decode the original picture file to generate a picture header information data segment of the picture file and RGB data corresponding to each frame image in the picture file.
- the image decoding sub-unit 111 may decode the original picture file to generate a picture header information data segment of the picture file and RGB data corresponding to each frame image in the picture file.
- the image conversion sub-unit 112 is configured to convert the RGB data into YUV data by using a color space conversion formula.
- the image conversion sub-unit 112 may convert the RGB data into YUV data using a color space conversion formula.
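The embodiment only requires *a* color space conversion formula; as one concrete possibility, a commonly used full-range BT.601 matrix is sketched below (the specific coefficients are an assumption, not fixed by the source):

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel to YUV using full-range BT.601 coefficients
    (an assumed choice of conversion formula)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128  # chroma offset to [0, 255]
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    return round(y), round(u), round(v)
```

For example, pure white (255, 255, 255) maps to luma 255 with neutral chroma (128, 128).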
- the image compression unit 12 is configured to encode the YUV data to generate code stream data.
- the image compression unit 12 may further perform encoding processing on the YUV data to generate code stream data, where the encoding may include predictive coding, transform coding, quantization coding, and entropy coding; for example, the image compression unit 12 may compress the YUV data by using the IPPP mode.
- in the IPPP mode, the YUV data of the first frame is an I frame, the I frame being an intra prediction frame, and the YUV data of the remaining frames are P frames, a P frame being an inter prediction frame, which can effectively compress the amount of file data in the picture file; a fixed-QP manner can also be used to stabilize the quality between different frames.
- the entropy coding may include Huffman coding, arithmetic coding, and the like.
- a user may be allowed to add configuration parameters, which may be parameters for encoding the YUV data.
- the image compression unit 12 may encode the YUV data using configuration parameters to generate codestream data.
- the user may also be allowed to add an encoder complexity, which may be a fineness parameter of the encoding determined according to the hardware performance of the picture file processing device 1.
- the encoder complexity may include any one of a first complexity, a second complexity, and a third complexity, the first complexity being higher than the second complexity and the second complexity being higher than the third complexity; for example, the hardware performance of the picture file processing device 1 may be detected to generate a performance value, such as by testing the calculation rate of the central processing unit of the picture file processing device 1.
- when the performance value is within the first preset value range, it may be determined that the hardware performance of the picture file processing device 1 is high, and the coding mode of the first complexity may be recommended.
- the image compression unit 12 may configure the encoder complexity to encode the YUV data to generate codestream data.
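The performance-to-complexity mapping above can be illustrated with the small sketch below; the threshold ranges are hypothetical values chosen for illustration, not values from the source:

```python
def recommend_complexity(perf_value, first_range=(80, 100), second_range=(50, 80)):
    """Map a measured hardware performance value (e.g., a CPU calculation
    rate score) to one of the three encoder complexity levels. The
    preset value ranges here are assumed, not specified by the source."""
    lo1, hi1 = first_range
    lo2, _ = second_range
    if lo1 <= perf_value <= hi1:
        return "first complexity"    # high hardware performance
    if lo2 <= perf_value < lo1:
        return "second complexity"
    return "third complexity"
```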
- the code stream sending unit 13 is configured to send the compressed image data to the receiving end.
- the code stream sending unit 13 may generate compressed image data, and the picture file processing device 1 may store the compressed image data; when detecting a request of the receiving end for the picture file, the code stream sending unit 13 then sends the compressed image data to the receiving end, for example, when the picture file processing device 1 detects that a web page including the picture file is opened.
- the code stream sending unit 13 may also directly send the compressed image data to the receiving end, for example, when a client in an instant messaging application needs to send the picture file to another client, or when the application service device needs to forward the picture file, and so on; the compressed image data includes the picture header information data segment and the code stream data.
- the receiving end receives the compressed image data sent by the picture file processing device 1, and the receiving end may perform parsing processing on the compressed image data to obtain the picture header information data segment and the code stream data in the compressed image data.
- the picture file processing apparatus 1000 may include at least one processor 1001, such as a CPU, at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002.
- the communication bus 1002 is used to implement connection communication between these components.
- the user interface 1003 can include a display and a keyboard.
- the optional user interface 1003 can also include a standard wired interface and a wireless interface.
- Some examples of the network interface 1004 may include a standard wired interface, a wireless interface (such as a WI-FI interface).
- the memory 1005 may be a high speed RAM memory or a non-volatile memory such as at least one disk memory.
- the memory 1005 may also be at least one storage device located remotely from the aforementioned processor 1001 in some instances. As shown in FIG. 25, an operating system, a network communication module, a user interface module, and an image processing application may be included in the memory 1005 as a computer storage medium.
- the network interface 1004 is mainly used to connect to the receiving end and perform data communication with the receiving end.
- the user interface 1003 is mainly used to provide an input interface for the user, and obtain data input by the user.
- the processor 1001 can be used to call the image processing application stored in the memory 1005, and specifically perform the following steps:
- a picture header information data segment of the picture file is generated according to the original picture file, and each frame image in the picture file is converted into YUV data.
- the YUV data is encoded to generate code stream data.
- the compressed image data including the picture header information data segment and the code stream data is sent to the receiving end, so that the receiving end parses the compressed image data to obtain the picture header information data segment and the code stream data, generates the YUV data according to the code stream data, and processes the YUV data based on the picture header information data segment to generate the picture file.
- when generating the picture header information data segment of the picture file according to the original picture file and converting each frame image in the picture file into YUV data, the processor 1001 specifically performs the following steps:
- the original picture file is decoded to generate a picture header information data segment of the picture file and RGB data corresponding to each frame image in the picture file, and the picture header information data segment includes delay information, total frame number, and global color table information.
- the RGB data is converted to YUV data using a color space conversion formula.
- the encoding includes predictive coding, transform coding, quantization coding, and entropy coding.
- the processor 1001 performs the following steps when performing the encoding of the YUV data to generate code stream data:
- the YUV data is encoded by using configuration parameters to generate code stream data
- the configuration parameter is a parameter for encoding the YUV data
- the configuration parameter includes any one of an SD mode parameter, a HD mode parameter, and a lossless mode parameter.
- the processor 1001 performs the following steps when performing the encoding of the YUV data to generate code stream data:
- the encoder complexity is configured to encode the YUV data to generate codestream data, the encoder complexity being a fineness parameter of the encoding determined according to hardware capabilities.
- the picture file is a GIF image.
- it should be noted that the picture file processing device shown in FIG. 26 to FIG. 27 is used to perform the methods of the embodiments shown in FIG. 11 and FIG. 12 of the present application, and is specifically the receiving end in the foregoing embodiments. For convenience of description, only the parts related to the embodiments of the present application are shown; for specific technical details that are not disclosed, please refer to the embodiments shown in FIG. 11 and FIG. 12 of the present application.
- FIG. 26 is a schematic structural diagram of still another picture file processing apparatus according to an embodiment of the present application.
- the picture file processing apparatus 2 of the embodiment of the present application may include an information acquiring unit 21 and an image encoding unit 22.
- the information acquiring unit 21 is configured to receive compressed image data for the original picture file sent by the sending end, and parse the compressed image data to obtain a picture header information data segment and code stream data of the picture file.
- the transmitting end may decode the original picture file to generate a picture header information data segment of the picture file and RGB data corresponding to each frame image in the picture file.
- the transmitting end may convert the RGB data into YUV data by using a color space conversion formula.
- the transmitting end may further perform encoding processing on the YUV data to generate code stream data.
- the transmitting end may generate compressed image data, and the transmitting end may store the compressed image data; when detecting a request of the picture file processing device 2 for the picture file, the transmitting end sends the compressed image data to the picture file processing device 2.
- the information acquiring unit 21 receives the compressed image data sent by the transmitting end, and the information acquiring unit 21 may perform parsing processing on the compressed image data to obtain the picture header information data segment and the code stream data in the compressed image data.
- the image encoding unit 22 is configured to generate the YUV data according to the code stream data, and process the YUV data based on the picture header information data segment to generate the picture file.
- the image encoding unit 22 decodes the code stream data to generate the YUV data, and performs encoding processing on the YUV data based on the delay information, total number of frames, global color table information, and the like in the picture header information data segment to generate the picture file.
- FIG. 27 is a schematic structural diagram of an image coding unit according to an embodiment of the present application.
- the image encoding unit 22 may include:
- the image conversion sub-unit 221 is configured to decode the code stream data to generate the YUV data, and convert the YUV data into RGB data by using a color space conversion formula.
- the image conversion sub-unit 221 decodes the code stream data to generate the YUV data, and may convert the YUV data into RGB data by using a color space conversion formula; in some examples, the image conversion sub-unit 221 needs to determine the color space conversion formula to be adopted according to the range of values of the luminance component.
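The range-dependent choice of inverse conversion formula can be sketched as below. The BT.601 coefficients and the full-range versus limited-range split are assumed illustrations of what "determining the formula according to the range of the luminance component" could look like:

```python
def yuv_to_rgb(y, u, v, full_range=True):
    """Inverse conversion; which formula applies depends on whether the
    luminance component uses the full range [0, 255] or the limited
    range [16, 235]. Coefficients are assumed BT.601 values."""
    if full_range:
        r = y + 1.402 * (v - 128)
        g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
        b = y + 1.772 * (u - 128)
    else:
        c = 1.164 * (y - 16)  # rescale limited-range luma first
        r = c + 1.596 * (v - 128)
        g = c - 0.392 * (u - 128) - 0.813 * (v - 128)
        b = c + 2.017 * (u - 128)
    clip = lambda x: max(0, min(255, round(x)))
    return clip(r), clip(g), clip(b)
```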
- the pixel point updating sub-unit 222 is configured to update the RGB values of each pixel point in the RGB data by using the global color table information.
- the pixel point update sub-unit 222 may update the RGB values of the respective pixels in the RGB data by using the global color table information.
- the pixel point update sub-unit 222 may train the global color table information by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, and update the RGB values of the pixels by using the local color table information of the RGB data.
- the pixel point update sub-unit 222 may sort at least one source RGB value in the color table information according to a preset ordering of the G component (e.g., ascending or descending) to generate training color table information.
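A minimal sketch of generating the training color table, assuming ascending order is the preset ordering chosen:

```python
def make_training_table(color_table):
    """Sort the source RGB values by their G component (ascending here,
    as one of the preset orderings) to produce the training color table."""
    return sorted(color_table, key=lambda rgb: rgb[1])
```

Sorting by G is what makes the later small-range search around a G-closest index possible.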
- the pixel point update sub-unit 222 acquires, in the training color table information, a first source RGB value whose G component is closest to the G component of the current pixel in the RGB data, and acquires a first color index of the first source RGB value.
- the pixel point update sub-unit 222 acquires, within a preset range centered on the first color index in the training color table information, a second source RGB value having the smallest error with respect to the RGB value of the current pixel, and acquires a second color index of the second source RGB value; it may be understood that the pixel point update sub-unit 222 may acquire a plurality of source RGB values within a preset range before and after, and centered on, the first color index, respectively calculate the errors between the RGB value of the current pixel and the first source RGB value and each of the plurality of source RGB values, and determine the source RGB value with the smallest error among them as the second source RGB value.
- the pixel point update sub-unit 222 replaces the second source RGB value with the RGB value of the current pixel, and modifies, by using a preset weight value and the RGB value of the current pixel, a plurality of source RGB values within a preset range centered on the second color index in the training color table information.
- the pixel point update sub-unit 222 obtains the training color table information obtained after the modification, takes the next pixel of the current pixel as the current pixel, and returns to perform the step of acquiring, in the training color table information, the first source RGB value whose G component is closest to the G component of the current pixel in the RGB data; for example, the second pixel in the RGB data is taken as the current pixel, and the training color table information obtained after the modification is trained again.
- the training process in some examples can be referred to the description of the above training process, and will not be described here.
- when the current pixel is the last pixel in the RGB data, the pixel point update sub-unit 222 obtains the training color table information obtained after the modification, and determines the training color table information as the local color table information of the RGB data.
- the pixel point update sub-unit 222 may update the RGB values of the pixel points by using local color table information of the RGB data.
- the pixel point update sub-unit 222 may sequentially obtain, for each pixel in the RGB data, the source RGB value in the local color table information that is the same as, or has the smallest error with respect to, the RGB value of that pixel, and replace the RGB values of the respective pixels accordingly; in some examples, the pixel point update sub-unit 222 may replace the RGB values of the respective pixels with the color indexes corresponding to the source RGB values that are the same or have the smallest error.
- the pixel point update sub-unit 222 may train the initialization color table information by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, and update the RGB values of the pixels by using the local color table information of the RGB data.
- the pixel point update sub-unit 222 may generate initialization color table information, for example: (0, 0, 0), (1, 1, 1), (2, 2, 2), ..., (255, 255, 255).
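The grayscale initialization table listed above is a one-liner to generate:

```python
# Generate the 256-entry grayscale initialization color table
# (0,0,0), (1,1,1), ..., (255,255,255) described above.
init_color_table = [(v, v, v) for v in range(256)]
```

A convenient property of this table is that it is already sorted by G component, so it can serve directly as training color table information.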
- the pixel point update sub-unit 222 acquires, in the initialization color table information, a third source RGB value whose G component is closest to the G component of the current pixel in the RGB data, and acquires a third color index of the third source RGB value.
- the picture file processing device 2 acquires, within a preset range centered on the third color index in the initialization color table information, a fourth source RGB value having the smallest error with respect to the RGB value of the current pixel, and acquires a fourth color index of the fourth source RGB value.
- the pixel point update sub-unit 222 replaces the fourth source RGB value with the RGB value of the current pixel, and modifies, by using a preset weight value and the RGB value of the current pixel, a plurality of source RGB values within a preset range centered on the fourth color index in the initialization color table information; the pixel point update sub-unit 222 then obtains the initialization color table information obtained after the modification, takes the next pixel of the current pixel as the current pixel, and returns to perform the step of acquiring, in the initialization color table information, the third source RGB value whose G component is closest to the G component of the current pixel in the RGB data.
- when the current pixel is the last pixel in the RGB data, the pixel point update sub-unit 222 acquires the initialization color table information obtained after the modification, determines it as the local color table information of the RGB data, and may update the RGB values of the respective pixels by using the local color table information of the RGB data. It should be noted that, for the process of training the initialization color table information and updating the RGB values of each pixel by using the local color table information of the RGB data, reference may be made to the execution process of the foregoing manner, and details are not described here again.
- the pixel point update sub-unit 222 may train the local color table information of the (N-1)th frame RGB data by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, and update the RGB values of the respective pixels by using the local color table information of the RGB data.
- the pixel point update sub-unit 222 may sort at least one source RGB value in the local color table information of the (N-1)th frame RGB data according to a preset ordering of the G component to generate training color table information, and the pixel point update sub-unit 222 acquires, in the training color table information, a fifth source RGB value whose G component is closest to the G component of the current pixel in the RGB data, and acquires a fifth color index of the fifth source RGB value.
- within a preset range centered on the fifth color index in the training color table information, the pixel point update sub-unit 222 acquires a sixth source RGB value having the smallest error with respect to the RGB value of the current pixel, and acquires a sixth color index of the sixth source RGB value; the pixel point update sub-unit 222 replaces the sixth source RGB value with the RGB value of the current pixel, and modifies, by using the preset weight value and the RGB value of the current pixel, a plurality of source RGB values within a preset range centered on the sixth color index in the training color table information.
- the pixel point update sub-unit 222 obtains the training color table information obtained after the modification, takes the next pixel of the current pixel as the current pixel, and returns to perform the step of acquiring, in the training color table information, the fifth source RGB value whose G component is closest to the G component of the current pixel in the RGB data; when the current pixel is the last pixel in the RGB data, the pixel point update sub-unit 222 acquires the training color table information obtained after the modification, and determines the training color table information as the local color table information of the RGB data.
- the pixel point update sub-unit 222 may update the RGB values of the respective pixels by using the local color table information of the RGB data. It should be noted that, for the process of training the training color table information and updating the RGB values of each pixel by using the local color table information of the RGB data, reference may be made to the execution process of the foregoing manner, and details are not described here again.
- the image encoding sub-unit 223 is configured to image-encode the updated RGB data by using the delay time and the total number of frames to generate the picture file.
- the image encoding sub-unit 223 performs image encoding on the updated RGB data using the delay time and the total number of frames to generate the picture file.
- FIG. 28 is a schematic structural diagram of still another picture file processing apparatus according to an embodiment of the present application.
- the picture file processing apparatus 2000 may include at least one processor 2001, such as a CPU, at least one network interface 2004, a user interface 2003, a memory 2005, and at least one communication bus 2002.
- the communication bus 2002 is used to implement connection communication between these components.
- the user interface 2003 may include a display and a keyboard.
- the optional user interface 2003 may further include a standard wired interface and a wireless interface.
- Some examples of the network interface 2004 may include a standard wired interface, a wireless interface (such as a WI-FI interface).
- the memory 2005 may be a high speed RAM memory or a non-volatile memory such as at least one disk memory.
- the memory 2005 may also be at least one storage device located remotely from the aforementioned processor 2001.
- an operating system, a network communication module, a user interface module, and an image processing application may be included in the memory 2005 as a computer storage medium.
- the network interface 2004 is mainly used to connect to the transmitting end and perform data communication with the transmitting end.
- the user interface 2003 is mainly used to provide an input interface for the user, and obtain data input by the user.
- the processor 2001 can be used to call the image processing application stored in the memory 2005, and specifically perform the following steps:
- Receive compressed image data for the original picture file sent by the transmitting end, and parse the compressed image data to obtain the picture header information data segment and the code stream data of the picture file, where the code stream data is information generated by encoding the YUV data converted from each frame image in the picture file.
- the picture header information data segment includes delay information, a total number of frames, and global color table information.
- the processor 2001 performs the following steps when the YUV data is generated according to the code stream data, and the YUV data is processed based on the picture header information data segment to generate the picture file:
- the code stream data is decoded to generate the YUV data, and the YUV data is converted to RGB data using a color space conversion formula.
- the RGB values of the respective pixel points in the RGB data are updated using the global color table information.
- the updated RGB data is image encoded using the delay time and the total number of frames to generate the picture file.
- when the processor 2001 performs the update of the RGB values of each pixel in the RGB data by using the global color table information, the following steps are specifically performed:
- when the RGB data is the first frame image in the picture file and the global color table information exists in the picture header information data segment, the global color table information is trained by using the RGB values of the respective pixels in the RGB data to generate the local color table information of the RGB data, and the RGB values of the respective pixels are updated by using the local color table information of the RGB data.
- when the processor 2001 trains the global color table information by using the RGB values of the respective pixels in the RGB data to generate the local color table information of the RGB data, the following steps are specifically performed:
- At least one source RGB value in the global color table information is sorted according to a preset sorting manner of the G component to generate training color table information.
- the training color table information obtained after the modification is obtained, the next pixel of the current pixel is taken as the current pixel, and the step of acquiring, in the training color table information, the first source RGB value whose G component is closest to the G component of the current pixel in the RGB data is performed again.
- when the processor 2001 performs the update of the RGB values of each pixel in the RGB data by using the global color table information, the following steps are specifically performed:
- the initialization color table information is trained by using the RGB values of the respective pixels in the RGB data to generate the local color table information of the RGB data, and the RGB values of the respective pixels are updated by using the local color table information of the RGB data.
- when the processor 2001 performs the training of the initialization color table information by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, the following steps are specifically performed:
- a third source RGB value whose G component is closest to the G component of the current pixel point in the RGB data is acquired in the initialization color table information, and a third color index of the third source RGB value is acquired.
- the initialization color table information obtained after the modification is acquired, the next pixel point of the current pixel point is taken as the current pixel point, and the process returns to the step of acquiring, in the initialization color table information, the third source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- when the processor 2001 performs the updating of the RGB values of the respective pixel points in the RGB data by using the global color table information, the following steps are specifically performed:
- when the RGB data is the Nth frame image in the picture file, the local color table information of the (N-1)th frame RGB data is trained by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, and the RGB values of the respective pixel points are updated by using the local color table information of the RGB data, where N is a positive integer greater than 1 and less than or equal to the total number of frames.
- when the processor 2001 performs the training of the local color table information of the (N-1)th frame RGB data by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, the following steps are specifically performed:
- At least one source RGB value in the local color table information of the (N-1)th frame RGB data is sorted according to a preset sorting manner of the G component to generate training color table information.
- the training color table information obtained after the modification is acquired, the next pixel point of the current pixel point is taken as the current pixel point, and the process returns to the step of acquiring, in the training color table information, the fifth source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- the picture file is a GIF image.
- FIG. 29 is a schematic structural diagram of an image processing system according to an embodiment of the present application.
- the system may include a transmitting end 1 and a receiving end 2, and the transmitting end 1 and the receiving end 2 may be connected to each other through a network, where the transmitting end 1 is specifically the picture file processing device 1 of the embodiment shown in FIG. , and the receiving end 2 is specifically the picture file processing device 2 of the embodiment shown in FIG. .
- Alternatively, the transmitting end 1 is specifically the picture file processing device 1000 of the embodiment shown in FIG. 25, and the receiving end 2 is specifically the picture file processing device 2000 of the embodiment shown in FIG. .
- the sending end 1 is configured to generate a picture header information data segment of the picture file according to the original picture file, convert each frame image in the picture file into YUV data, encode the YUV data to generate code stream data, and transmit compressed image data to the receiving end 2, where the compressed image data includes the picture header information data segment and the code stream data.
- the receiving end 2 is configured to receive the compressed image data sent by the sending end 1, parse the compressed image data to obtain the picture header information data segment and the code stream data, generate the YUV data according to the code stream data, and process the YUV data based on the picture header information data segment to generate the picture file.
- when the sending end 1 generates the picture header information data segment of the picture file according to the original picture file and converts each frame image in the picture file into YUV data, the sending end 1 is specifically configured to:
- the original picture file is decoded to generate a picture header information data segment of the picture file and RGB data corresponding to each frame image in the picture file, and the picture header information data segment includes delay information, total frame number, and global color table information.
- the RGB data is converted to YUV data using a color space conversion formula.
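The text above does not fix which color space conversion formula is used. A minimal sketch, assuming BT.601 full-range coefficients (an assumption, not stated by the source; the encoder may use different coefficients or ranges):

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel to YUV using BT.601 full-range
    coefficients (illustrative; the source does not fix the formula)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b + 128  # Cb, offset into 0..255
    v = 0.5 * r - 0.419 * g - 0.081 * b + 128   # Cr, offset into 0..255
    # Clamp to the 8-bit range before quantizing.
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(y), clamp(u), clamp(v)

rgb_to_yuv(128, 128, 128)  # a neutral gray maps to (128, 128, 128)
```

For a neutral gray the chroma components sit at the 128 offset, which is a quick sanity check for any coefficient set whose chroma rows sum to zero.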
- the encoding includes predictive coding, transform coding, quantization coding, and entropy coding.
- when the sending end 1 encodes the YUV data to generate the code stream data, the sending end 1 is specifically configured to:
- the YUV data is encoded by using configuration parameters to generate code stream data
- the configuration parameter is a parameter for encoding the YUV data
- the configuration parameter includes any one of an SD mode parameter, a HD mode parameter, and a lossless mode parameter.
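The source does not specify what the SD, HD, and lossless mode parameters contain. A hedged sketch of such a configuration-parameter table, with entirely illustrative names and values:

```python
# Illustrative mapping of the three configuration-parameter modes to
# encoder settings; the concrete keys and values are assumptions,
# not taken from the source text.
CONFIG_MODES = {
    "sd":       {"qp": 32, "lossless": False},  # smaller files, lower quality
    "hd":       {"qp": 22, "lossless": False},  # larger files, higher quality
    "lossless": {"qp": 0,  "lossless": True},   # no quantization loss
}

def select_config(mode):
    """Pick the configuration parameters used to encode the YUV data."""
    return CONFIG_MODES[mode]
```

The point of the mode split is that the same YUV encoding pipeline can trade file size against quality by swapping one parameter set.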
- when the sending end 1 encodes the YUV data to generate the code stream data, the sending end 1 is specifically configured to:
- the encoder complexity is configured to encode the YUV data to generate the code stream data, the encoder complexity being a fineness parameter of the encoding determined according to the hardware performance of the transmitting end 1.
- when the receiving end 2 generates the YUV data according to the code stream data and processes the YUV data based on the picture header information data segment to generate the picture file, the receiving end 2 is specifically configured to:
- the code stream data is decoded to generate the YUV data, and the YUV data is converted to RGB data using a color space conversion formula.
- the RGB values of the respective pixels in the RGB data are updated using the global color table information.
- the updated RGB data is image encoded using the delay time and the total number of frames to generate the picture file.
- when the receiving end 2 updates the RGB values of the respective pixel points in the RGB data by using the global color table information, the receiving end 2 is specifically configured to:
- when the RGB data is the first frame image in the picture file and the global color table information exists in the picture header information data segment, the global color table information is trained by using the RGB values of the respective pixel points in the RGB data to generate local color table information of the RGB data, and the RGB values of the respective pixel points are updated by using the local color table information of the RGB data.
- when the receiving end 2 trains the global color table information by using the RGB values of the respective pixel points in the RGB data to generate the local color table information of the RGB data, the receiving end 2 is specifically configured to:
- At least one source RGB value in the global color table information is sorted according to a preset sorting manner of the G component to generate training color table information.
- a first source RGB value whose G component is closest to the G component of the current pixel point in the RGB data is acquired in the training color table information, and a first color index of the first source RGB value is acquired.
- in a preset range centered on the first color index in the training color table information, a second source RGB value having the smallest error with respect to the RGB value of the current pixel point is acquired, and a second color index of the second source RGB value is acquired.
- the second source RGB value is replaced with the RGB value of the current pixel point, and a plurality of source RGB values within a preset range centered on the second color index in the training color table information are modified by using a preset weight value and the RGB value of the current pixel point.
- the training color table information obtained after the modification is acquired, the next pixel point of the current pixel point is taken as the current pixel point, and the process returns to the step of acquiring, in the training color table information, the first source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- the training color table information obtained after the modification is acquired, and the training color table information is determined as the local color table information of the RGB data.
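The training steps above (G-component sort, nearest-G lookup, windowed minimum-error search, replacement by the current pixel, and weighted modification of the neighboring entries) can be sketched as follows. The window size, weight value, and squared-error metric are illustrative assumptions; the source leaves them as preset parameters:

```python
def train_color_table(global_table, pixels, window=4, weight=0.5):
    """Sketch of the color-table training loop described above.
    `window` (the preset range) and `weight` (the preset weight value)
    are illustrative; the source does not fix their values."""
    # Sort by the G component to form the training color table.
    table = sorted(global_table, key=lambda rgb: rgb[1])
    for px in pixels:
        # First source RGB value: entry with the closest G component.
        first = min(range(len(table)), key=lambda i: abs(table[i][1] - px[1]))
        # Second source RGB value: smallest overall error in a window
        # centered on the first color index.
        lo, hi = max(0, first - window), min(len(table), first + window + 1)
        err = lambda i: sum((table[i][c] - px[c]) ** 2 for c in range(3))
        second = min(range(lo, hi), key=err)
        # Replace the second source RGB value with the current pixel.
        table[second] = tuple(px)
        # Pull neighbors around the second color index toward the pixel.
        for i in range(max(0, second - window), min(len(table), second + window + 1)):
            if i != second:
                table[i] = tuple(
                    int(round(weight * px[c] + (1 - weight) * table[i][c]))
                    for c in range(3)
                )
    return table  # local color table information for this frame

local_table = train_color_table([(0, 0, 0), (128, 128, 128), (255, 255, 255)],
                                [(200, 10, 40)])
```

Each pixel both selects its best-matching entry and nudges nearby entries, so the table gradually adapts to the frame's actual color distribution.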
- when the receiving end 2 updates the RGB values of the respective pixel points in the RGB data by using the global color table information, the receiving end 2 is specifically configured to:
- the initialization color table information is trained by using the RGB values of the respective pixel points in the RGB data to generate local color table information of the RGB data, and the RGB values of the respective pixel points are updated by using the local color table information of the RGB data.
- when the receiving end 2 trains the initialization color table information by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, the receiving end 2 is specifically configured to:
- a third source RGB value whose G component is closest to the G component of the current pixel point in the RGB data is acquired in the initialization color table information, and a third color index of the third source RGB value is acquired.
- in a preset range centered on the third color index in the initialization color table information, a fourth source RGB value having the smallest error with respect to the RGB value of the current pixel point is acquired, and a fourth color index of the fourth source RGB value is acquired.
- the fourth source RGB value is replaced with the RGB value of the current pixel point, and a plurality of source RGB values within a preset range centered on the fourth color index in the initialization color table information are modified by using a preset weight value and the RGB value of the current pixel point.
- the initialization color table information obtained after the modification is acquired, the next pixel point of the current pixel point is taken as the current pixel point, and the process returns to the step of acquiring, in the initialization color table information, the third source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- the initialization color table information obtained after the modification is acquired, and the initialization color table information is determined as the local color table information of the RGB data.
- when the receiving end 2 updates the RGB values of the respective pixel points in the RGB data by using the global color table information, the receiving end 2 is specifically configured to:
- when the RGB data is the Nth frame image in the picture file, the local color table information of the (N-1)th frame RGB data is trained by using the RGB values of the pixels in the RGB data to generate local color table information of the RGB data, and the RGB values of the respective pixel points are updated by using the local color table information of the RGB data, where N is a positive integer greater than 1 and less than or equal to the total number of frames.
- when the receiving end 2 trains the local color table information of the (N-1)th frame RGB data by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, the receiving end 2 is specifically configured to:
- At least one source RGB value in the local color table information of the (N-1)th frame RGB data is sorted according to a preset sorting manner of the G component to generate training color table information.
- a fifth source RGB value whose G component is closest to the G component of the current pixel point in the RGB data is acquired in the training color table information, and a fifth color index of the fifth source RGB value is acquired.
- in a preset range centered on the fifth color index in the training color table information, a sixth source RGB value having the smallest error with respect to the RGB value of the current pixel point is acquired, and a sixth color index of the sixth source RGB value is acquired.
- the sixth source RGB value is replaced with the RGB value of the current pixel point, and a plurality of source RGB values within a preset range centered on the sixth color index in the training color table information are modified by using a preset weight value and the RGB value of the current pixel point.
- the training color table information obtained after the modification is acquired, the next pixel point of the current pixel point is taken as the current pixel point, and the process returns to the step of acquiring, in the training color table information, the fifth source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- the training color table information obtained after the modification is acquired, and the training color table information is determined as the local color table information of the RGB data.
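For frames after the first, the steps above chain the training across frames: frame N starts from the local color table of frame N-1 rather than from the global table, so the table tracks gradual color changes through the animation. A minimal sketch, with a deliberately simplified per-frame training step (the full procedure also does the windowed error search and weighted modification):

```python
def train(table, pixels):
    """Simplified stand-in for the per-frame training described above:
    each pixel replaces the nearest-G table entry."""
    table = sorted(table, key=lambda rgb: rgb[1])  # preset G-component sort
    for px in pixels:
        i = min(range(len(table)), key=lambda j: abs(table[j][1] - px[1]))
        table[i] = tuple(px)
    return table

def build_local_tables(global_table, frames):
    """Frame 1 trains the global color table; frame N (N > 1) trains the
    local color table of frame N-1, as in the steps above."""
    tables, prev = [], global_table
    for pixels in frames:            # frames: per-frame pixel lists
        local = train(prev, pixels)  # local color table for this frame
        tables.append(local)
        prev = local                 # next frame starts from this table
    return tables
```

Because `train` returns a fresh list, each frame's local color table is preserved while the next frame refines its own copy.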
- the picture file is a GIF image.
- The picture file processing device provided by the embodiments of the present application will be described in detail below with reference to FIG. 30 to FIG. 32. It should be noted that the picture file processing device shown in FIG. 30 to FIG. 32 is configured to perform the methods of the embodiments shown in FIG. 14 to FIG. 16. For convenience of description, only the parts related to the embodiments of the present application are shown; for the specific technical details that are not disclosed, please refer to the embodiments shown in FIG. 14 to FIG. 16 of the present application.
- FIG. 30 is a schematic structural diagram of still another picture file processing apparatus according to an embodiment of the present application.
- the picture file processing apparatus 3 of the embodiment of the present application may include an image conversion unit 31, an image compression unit 32, a code stream generation unit 33, an information acquisition unit 34, and an image coding unit 35.
- the image conversion unit 31 is configured to generate a picture header information data segment of the picture file according to the original picture file, and convert each frame image in the picture file into YUV data.
- the image conversion unit 31 may decode the original picture file to generate a picture header information data segment of the picture file, and the image conversion unit 31 further converts each frame image in the picture file into YUV data.
- FIG. 31 is a schematic structural diagram of another image conversion unit according to an embodiment of the present application.
- the image conversion unit 31 may include:
- the image decoding sub-unit 311 is configured to decode the original picture file to generate a picture header information data segment of the picture file and RGB data corresponding to each frame image in the picture file.
- the image decoding sub-unit 311 can decode the original picture file.
- the image decoding sub-unit 311 can determine whether the color table information in the picture file is global color table information of the picture file.
- the image decoding sub-unit 311 may generate a picture header information data segment including delay information, the total number of frames, global color table information, and the like, where the delay information records a play interval between the frame images in the picture file and the total number of frames is the number of image frames in the picture file; the RGB data corresponding to each frame image in the picture file is also generated.
- otherwise, the image decoding sub-unit 311 may generate a picture header information data segment including the delay information, the total number of frames, and the like (without the global color table information), where the delay information records a play interval between the frame images in the picture file and the total number of frames is the number of image frames in the picture file; the RGB data corresponding to each frame image in the picture file is also generated.
- the image conversion sub-unit 312 is configured to convert the RGB data into YUV data by using a color space conversion formula.
- the image conversion sub-unit 312 can convert the RGB data into YUV data using a color space conversion formula.
- the image compression unit 32 is configured to encode the YUV data to generate code stream data.
- the image compression unit 32 may further perform encoding processing on the YUV data to generate code stream data, where the encoding may include predictive coding, transform coding, quantization coding, and entropy coding.
- for example, the image compression unit 32 may compress the YUV data by using an IPPP mode, in which the YUV data of the first frame is an I frame (an intra prediction frame) and the YUV data of the remaining frames is P frames (inter prediction frames), which can effectively compress the amount of file data in the picture file; a fixed QP manner may also be used to stabilize the quality between different frames.
- the entropy coding may include Huffman coding, arithmetic coding, and the like.
- the code stream generating unit 33 is configured to store compressed image data.
- the code stream generating unit 33 may generate and store compressed image data, the compressed image data including the picture header information data segment and the code stream data.
- the information acquiring unit 34 is configured to parse the compressed image data to obtain the picture header information data segment and the code stream data.
- the information acquiring unit 34 may perform parsing processing on the compressed image data to acquire the picture header information data segment and the code stream data in the compressed image data.
- the image encoding unit 35 is configured to generate the YUV data according to the code stream data, and process the YUV data based on the picture header information data segment to generate the picture file.
- the image encoding unit 35 decodes the code stream data to generate the YUV data, and performs encoding processing on the YUV data based on the delay information, the total number of frames, the global color table information, and the like in the picture header information data segment to generate the picture file.
- FIG. 32 is a schematic structural diagram of another image coding unit according to an embodiment of the present application.
- the image encoding unit 35 may include:
- the image conversion sub-unit 351 is configured to decode the code stream data to generate the YUV data, and convert the YUV data into RGB data by using a color space conversion formula.
- the image conversion sub-unit 351 decodes the code stream data to generate the YUV data, and may convert the YUV data into RGB data by using a color space conversion formula; in some examples, the image conversion sub-unit 351 needs to determine the color space conversion formula to be employed based on the range of values of the luminance component.
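The luminance-range determination mentioned above selects between a full-range formula (luminance 0–255) and a limited/video-range formula (luminance 16–235). A hedged sketch assuming BT.601 coefficients (an assumption; the source does not fix the formula):

```python
def yuv_to_rgb(y, u, v, full_range=True):
    """Invert the YUV conversion using BT.601 coefficients
    (illustrative). `full_range` reflects the luminance value range
    the conversion sub-unit must determine before choosing a formula."""
    if full_range:                     # luminance in 0..255
        c, d, e = y, u - 128, v - 128
        r = c + 1.402 * e
        g = c - 0.344 * d - 0.714 * e
        b = c + 1.772 * d
    else:                              # limited range: luminance 16..235
        c, d, e = 1.164 * (y - 16), u - 128, v - 128
        r = c + 1.596 * e
        g = c - 0.392 * d - 0.813 * e
        b = c + 2.017 * d
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```

Picking the wrong range produces visibly crushed or washed-out output, which is why the determination step precedes the conversion.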
- the pixel update subunit 352 is configured to update the RGB values of each pixel in the RGB data by using the global color table information.
- the pixel point update sub-unit 352 determines whether the global color table information of the picture file is included in the picture header information data segment; because the YUV data was previously encoded, the RGB data converted from the YUV data may be distorted, and the picture file processing device 3 may therefore update the RGB values of the respective pixel points in the RGB data by using the global color table information.
- the pixel point update sub-unit 352 may train the global color table information by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, and update the RGB values of the pixels by using the local color table information of the RGB data.
- the pixel point update sub-unit 352 may sort at least one source RGB value in the global color table information according to a preset ordering of the G component (e.g., ascending or descending) to generate training color table information.
- the pixel point update sub-unit 352 acquires, in the training color table information, a first source RGB value whose G component is closest to the G component of the current pixel point in the RGB data, and acquires a first color index of the first source RGB value.
- the pixel point update sub-unit 352 acquires, in a preset range centered on the first color index in the training color table information, a second source RGB value having the smallest error with respect to the RGB value of the current pixel point, and acquires a second color index of the second source RGB value. It may be understood that the pixel point update sub-unit 352 may acquire a plurality of source RGB values within a preset range before and after the first color index, respectively calculate the errors between the RGB value of the current pixel point and the first source RGB value and each of the plurality of source RGB values, and determine the source RGB value with the smallest error as the second source RGB value.
- the pixel point update sub-unit 352 replaces the second source RGB value with the RGB value of the current pixel point, and modifies, by using a preset weight value and the RGB value of the current pixel point, a plurality of source RGB values within a preset range centered on the second color index in the training color table information.
- the picture file processing device 3 acquires the training color table information obtained after the modification, takes the next pixel point of the current pixel point as the current pixel point, and returns to the step of acquiring, in the training color table information, the first source RGB value whose G component is closest to the G component of the current pixel point in the RGB data; for example, the second pixel point in the RGB data is taken as the current pixel point, and the training color table information obtained after the modification is trained again. For the training process in some examples, reference may be made to the description of the above training process, and details are not described herein.
- the pixel point update sub-unit 352 acquires the training color table information obtained after the modification, and determines the training color table information as the local color table information of the RGB data.
- the pixel point update sub-unit 352 may update the RGB values of the pixel points by using local color table information of the RGB data.
- the pixel point update sub-unit 352 may sequentially acquire, from the local color table information of the RGB data, the source RGB value that is the same as, or has the smallest error with respect to, the RGB value of each pixel point, and replace the RGB value of each pixel point accordingly; in some examples, the pixel point update sub-unit 352 may replace the RGB value of each pixel point with the color index corresponding to that source RGB value.
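Once the local color table is final, replacing each pixel's RGB value with the color index of its identical or minimum-error table entry can be sketched as follows (squared error is an illustrative metric; the source only requires "the same or the smallest error"):

```python
def index_pixels(local_table, pixels):
    """Map each pixel to the color index of the identical or
    minimum-error source RGB value in the local color table."""
    def err(a, b):
        return sum((a[c] - b[c]) ** 2 for c in range(3))
    return [min(range(len(local_table)),
                key=lambda i: err(local_table[i], px))
            for px in pixels]

table = [(0, 0, 0), (255, 0, 0), (255, 255, 255)]
index_pixels(table, [(250, 5, 5), (3, 3, 3)])  # -> [1, 0]
```

The resulting index stream, rather than raw RGB triples, is what the subsequent palette-based image coding consumes.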
- the pixel point update sub-unit 352 may train the initialization color table information by using the RGB values of the pixels in the RGB data to generate local color table information of the RGB data, and update the RGB values of the pixels by using the local color table information of the RGB data.
- the pixel point update sub-unit 352 may generate initialization color table information, for example: (0, 0, 0), (1, 1, 1), (2, 2, 2), ..., (255, 255, 255).
- the pixel point update sub-unit 352 acquires, in the initialization color table information, a third source RGB value whose G component is closest to the G component of the current pixel point in the RGB data, and acquires a third color index of the third source RGB value; it then acquires, in a preset range centered on the third color index in the initialization color table information, a fourth source RGB value having the smallest error with respect to the RGB value of the current pixel point, and acquires a fourth color index of the fourth source RGB value; the pixel point update sub-unit 352 replaces the fourth source RGB value with the RGB value of the current pixel point, and modifies, by using a preset weight value and the RGB value of the current pixel point, a plurality of source RGB values within a preset range centered on the fourth color index in the initialization color table information.
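The grayscale initialization color table (0, 0, 0) through (255, 255, 255) described above can be generated directly; note that it is already in ascending order of the G component, so no further sort is needed before training. A minimal sketch:

```python
def make_init_color_table():
    """Generate the initialization color table (0,0,0) .. (255,255,255)
    described above: 256 grayscale entries, already ascending in G."""
    return [(i, i, i) for i in range(256)]

init_table = make_init_color_table()
len(init_table)  # 256 entries
init_table[2]    # (2, 2, 2)
```

Starting from this neutral ramp, the training loop then bends the entries toward the frame's actual colors pixel by pixel.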
- the pixel point update sub-unit 352 acquires the initialization color table information obtained after the modification, determines the initialization color table information as the local color table information of the RGB data, and may update the RGB values of the respective pixel points by using the local color table information of the RGB data. It should be noted that, for the process of training the initialization color table information and updating the RGB values of each pixel by using the local color table information of the RGB data in each instance, reference may be made to the foregoing execution process, and details are not described herein.
- the pixel point update sub-unit 352 may sort at least one source RGB value in the local color table information of the (N-1)th frame RGB data according to a preset sorting manner of the G component to generate training color table information.
- the pixel point update sub-unit 352 acquires, in the training color table information, a fifth source RGB value whose G component is closest to the G component of the current pixel point in the RGB data, and acquires a fifth color index of the fifth source RGB value; it then acquires, in a preset range centered on the fifth color index in the training color table information, a sixth source RGB value having the smallest error with respect to the RGB value of the current pixel point, and acquires a sixth color index of the sixth source RGB value; the pixel point update sub-unit 352 replaces the sixth source RGB value with the RGB value of the current pixel point, and modifies, by using a preset weight value and the RGB value of the current pixel point, a plurality of source RGB values within a preset range centered on the sixth color index in the training color table information.
- the pixel point update sub-unit 352 acquires the training color table information obtained after the modification, takes the next pixel point of the current pixel point as the current pixel point, and returns to the step of acquiring, in the training color table information, the fifth source RGB value whose G component is closest to the G component of the current pixel point in the RGB data; when the current pixel point is the last pixel point, the pixel point update sub-unit 352 acquires the training color table information obtained after the modification and determines it as the local color table information of the RGB data.
- the pixel point update sub-unit 352 may update the RGB values of the respective pixel points by using the local color table information of the RGB data. It should be noted that, for the process of training the training color table information and updating the RGB values of each pixel by using the local color table information of the RGB data in each instance, reference may be made to the foregoing execution process, and details are not described herein.
- the image encoding sub-unit 353 is configured to image-encode the updated RGB data by using the delay time and the total number of frames to generate the picture file.
- the image encoding sub-unit 353 performs image encoding on the updated RGB data by using the delay time and the total number of frames to generate the picture file; in some examples, the image encoding sub-unit 353 may perform image encoding on the color index of each pixel in the RGB data by using LZW to generate the picture file, and the picture file processing device 3 may store or display the picture file according to the delay time and the total number of frames.
- FIG. 33 is a schematic structural diagram of still another picture file processing apparatus according to an embodiment of the present application.
- the picture file processing apparatus 3000 may include at least one processor 3001, such as a CPU, at least one network interface 3004, a user interface 3003, a memory 3005, and at least one communication bus 3002.
- the communication bus 3002 is used to implement connection communication between these components.
- the user interface 3003 can include a display and a keyboard.
- optionally, the user interface 3003 may also include a standard wired interface and a wireless interface.
- in some examples, the network interface 3004 may include a standard wired interface and a wireless interface (such as a Wi-Fi interface).
- the memory 3005 may be a high speed RAM memory or a non-volatile memory such as at least one disk memory.
- the memory 3005 may also be at least one storage device located remotely from the aforementioned processor 3001. As shown in FIG. 33, an operating system, a network communication module, a user interface module, and an image processing application may be included in the memory 3005 as a computer storage medium.
- the network interface 3004 is mainly used for function module connection in a distributed service device, and performs data communication with the function module.
- the user interface 3003 is mainly used to provide an input interface for the user, and obtain data input by the user.
- the processor 3001 can be used to call the image processing application stored in the memory 3005, and specifically perform the following steps:
- a picture header information data segment of the picture file is generated according to the original picture file, and each frame image in the picture file is converted into YUV data.
- the YUV data is encoded to generate code stream data.
- the compressed image data is stored, the compressed image data including the picture header information data segment and the code stream data.
- the code stream data is decoded to generate the YUV data, and the YUV data is encoded based on the picture header information data segment to generate the picture file.
- the processor 3001 performs the following steps when performing the generation of the picture header information data segment of the picture file according to the original picture file and converting each frame image in the picture file into YUV data:
- the original picture file is decoded to generate a picture header information data segment of the picture file and RGB data corresponding to each frame image in the picture file, and the picture header information data segment includes delay information, total frame number, and global color table information.
- the RGB data is converted to YUV data using a color space conversion formula.
- the encoding includes predictive coding, transform coding, quantization coding, and entropy coding.
- the processor 3001 performs the following steps when performing the encoding of the YUV data to generate code stream data:
- the YUV data is encoded by using configuration parameters to generate code stream data
- the configuration parameter is a parameter for encoding the YUV data
- the configuration parameter includes any one of an SD mode parameter, a HD mode parameter, and a lossless mode parameter.
- the processor 3001 performs the following steps when performing the encoding of the YUV data to generate code stream data:
- the encoder complexity is configured to encode the YUV data to generate the code stream data, the encoder complexity being a fineness parameter of the encoding determined according to the hardware performance of the transmitting end.
- when the processor 3001 performs the decoding of the code stream data to generate the YUV data and the encoding processing of the YUV data based on the picture header information data segment to generate the picture file, the following steps are specifically performed:
- the code stream data is decoded to generate the YUV data, and the YUV data is converted to RGB data using a color space conversion formula.
- the RGB values of the respective pixel points in the RGB data are updated using the global color table information.
- the updated RGB data is image encoded using the delay time and the total number of frames to generate the picture file.
- when the processor 3001 performs the updating of the RGB values of the respective pixel points in the RGB data by using the global color table information, the following steps are specifically performed:
- when the RGB data is the first frame image in the picture file and the global color table information exists in the picture header information data segment, the global color table information is trained by using the RGB values of the respective pixel points in the RGB data to generate local color table information of the RGB data, and the RGB values of the respective pixel points are updated by using the local color table information of the RGB data.
- the processor 3001 performs the following steps when performing the training of the global color table information by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data:
- At least one source RGB value in the global color table information is sorted according to a preset sorting manner of the G component to generate training color table information.
- a first source RGB value whose G component is closest to the G component of the current pixel point in the RGB data is acquired in the training color table information, and a first color index of the first source RGB value is acquired.
- in a preset range centered on the first color index in the training color table information, a second source RGB value having the smallest error with respect to the RGB value of the current pixel point is acquired, and a second color index of the second source RGB value is acquired.
- the second source RGB value is replaced with the RGB value of the current pixel point, and a plurality of source RGB values within a preset range centered on the second color index in the training color table information are modified by using a preset weight value and the RGB value of the current pixel point.
- the training color table information obtained after the modification is acquired, the next pixel point of the current pixel point is taken as the current pixel point, and the process returns to the step of acquiring, in the training color table information, the first source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- when the current pixel point is the last pixel point in the RGB data, the training color table information obtained after the modification is acquired, and the training color table information is determined as the local color table information of the RGB data.
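The training steps above can be sketched as follows. The preset range (`window`), the preset weight value (`weight`), and the ascending G-component sort order are assumptions for illustration; the text leaves these parameters unspecified:

```python
import numpy as np

def train_color_table(source_table, pixels, window=4, weight=0.5):
    """Sketch of the local-color-table training described above.

    source_table: the global color table (list of (R, G, B) tuples).
    pixels: iterable of (R, G, B) pixel values, visited in order.
    """
    # Sort source RGB values by G component (ascending, an assumed
    # "preset sorting manner") to form the training color table.
    table = np.array(sorted(source_table, key=lambda c: c[1]), dtype=np.float64)
    n = len(table)
    for px in pixels:
        px = np.asarray(px, dtype=np.float64)
        # First color index: entry whose G component is closest
        # to the current pixel's G component.
        i1 = int(np.argmin(np.abs(table[:, 1] - px[1])))
        # Second color index: minimum-error entry within a preset
        # range centered on the first index (no full traversal).
        lo, hi = max(0, i1 - window), min(n, i1 + window + 1)
        errs = np.sum((table[lo:hi] - px) ** 2, axis=1)
        i2 = lo + int(np.argmin(errs))
        # Replace the matched entry with the pixel's RGB value, then
        # modify neighbouring entries with the preset weight.
        table[i2] = px
        for j in range(max(0, i2 - window), min(n, i2 + window + 1)):
            if j != i2:
                table[j] = (1 - weight) * table[j] + weight * px
    # After the last pixel, the trained table is the local color table.
    return table.astype(np.uint8)
```

With a two-entry table and one pixel, the matched entry becomes the pixel's value and the neighbour is pulled halfway toward it, which mirrors the replace-then-modify behaviour described in the steps.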
- when the processor 3001 performs the update of the RGB values of each pixel in the RGB data by using the global color table information, the following steps are specifically performed:
- when the RGB data is the first frame image in the picture file, and the global color table information does not exist in the picture header information data segment, the initialization color table information is trained using the RGB value of each pixel in the RGB data to generate local color table information of the RGB data, and the RGB values of the respective pixels are updated using the local color table information of the RGB data.
- when the processor 3001 performs the training of the initialization color table information by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, the following steps are specifically performed:
- a third source RGB value whose G component is closest to the G component of the current pixel point in the RGB data is acquired in the initialization color table information, and a third color index of the third source RGB value is acquired.
- a fourth source RGB value having the smallest error with the RGB value of the current pixel point is acquired in a preset range centered on the third color index in the initialization color table information, and a fourth color index of the fourth source RGB value is acquired.
- when the current pixel point is not the last pixel point in the RGB data, the initialization color table information obtained after the modification is acquired, the next pixel point of the current pixel point is taken as the current pixel point, and the process returns to the step of acquiring, in the initialization color table information, the third source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- when the current pixel point is the last pixel point in the RGB data, the initialization color table information obtained after the modification is acquired, and the initialization color table information is determined as the local color table information of the RGB data.
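A minimal sketch of generating the initialization color table, matching the grayscale example values (0, 0, 0), (1, 1, 1), ..., (255, 255, 255) given later in the text; the `size` parameter and the uniform step for smaller tables are assumptions:

```python
def make_init_color_table(size=256):
    """Build a grayscale initialization color table.

    With size=256 this yields (0, 0, 0), (1, 1, 1), ..., (255, 255, 255),
    the example values in the text. Other sizes use a uniform step,
    which is an assumption for illustration.
    """
    step = 256 // size
    return [(v, v, v) for v in range(0, 256, step)]
```

The resulting table is already sorted by G component, so the same training procedure used for the global color table can be applied to it directly.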
- when the processor 3001 performs the update of the RGB values of each pixel in the RGB data by using the global color table information, the following steps are specifically performed:
- when the RGB data is the Nth frame image in the picture file, the local color table information of the (N-1)th frame RGB data is trained using the RGB values of the pixels in the RGB data to generate local color table information of the RGB data, and the RGB values of the respective pixels are updated using the local color table information of the RGB data, where N is a positive integer greater than 1 and less than or equal to the total number of frames.
- when the processor 3001 performs the training of the local color table information of the (N-1)th frame RGB data by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, the following steps are specifically performed:
- At least one source RGB value in the local color table information of the (N-1)th frame RGB data is sorted according to a preset sorting manner of the G component to generate training color table information.
- a fifth source RGB value whose G component is closest to the G component of the current pixel point in the RGB data is acquired in the training color table information, and a fifth color index of the fifth source RGB value is acquired.
- a sixth source RGB value having the smallest error with the RGB value of the current pixel point is acquired in a preset range centered on the fifth color index in the training color table information, and a sixth color index of the sixth source RGB value is acquired.
- when the current pixel point is not the last pixel point in the RGB data, the training color table information obtained after the modification is acquired, the next pixel point of the current pixel point is taken as the current pixel point, and the process returns to the step of acquiring, in the training color table information, the fifth source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- when the current pixel point is the last pixel point in the RGB data, the training color table information obtained after the modification is acquired, and the training color table information is determined as the local color table information of the RGB data.
- the compressed image data includes the picture header information data segment and the code stream data.
- The picture file processing apparatus shown in FIG. 34 to FIG. 35 is used to perform the method of the embodiments shown in FIG. 17 to FIG. 20 of the present application. For details not described herein, please refer to the embodiments shown in FIG. 17 to FIG. 20 of the present application.
- FIG. 34 is a schematic structural diagram of still another picture file processing apparatus according to an embodiment of the present application.
- the picture file processing apparatus 4 of the embodiment of the present application may include a pixel value acquisition unit 41, a color table generation unit 42, and a pixel value update unit 43.
- the pixel value obtaining unit 41 is configured to acquire RGB data generated by processing the original image file, and acquire RGB values of each pixel in the RGB data.
- the pixel value acquisition unit 41 may acquire RGB data generated by decoding the original picture file.
- the RGB data may be converted into YUV data, and the YUV data is encoded to generate code stream data; compressed image data including the code stream data and a picture header information data segment is further generated, and the compressed image data may be used for storage or transmission.
- the pixel value acquisition unit 41 acquires the compressed image data, parses the compressed image data to obtain the picture header information data segment and the code stream data, decodes the code stream data to generate YUV data, and further converts the YUV data into RGB data; the pixel value acquisition unit 41 then acquires the RGB data, and acquires the RGB value of each pixel point in the RGB data.
- the color table generating unit 42 is configured to train the initial color table information of the RGB data by using the RGB values of the respective pixels to generate local color table information of the RGB data.
- the color table generating unit 42 determines whether the global color table information of the picture file is included in the picture header information data segment; because the YUV data was previously generated by encoding, the RGB data converted from the YUV data may be distorted.
- the color table generating unit 42 may update the RGB values of each pixel in the RGB data by using the global color table information. It can be understood that the RGB data may include one or more pixels: when there is only one pixel in the RGB data, the initial color table information may be trained using the RGB value of that pixel; when there are a plurality of pixels, the initial color table information may be trained using the RGB values of each of the plurality of pixels. The color table generating unit 42 needs to retrain the global color table information to generate local color table information matching the RGB data; for the case where there is no global color table information, the color table generating unit 42 may generate initialization color table information, and train the initialization color table information to generate local color table information conforming to the RGB data.
- the color table generating unit 42 may train the global color table information using the RGB values of the respective pixels in the RGB data to generate local color table information of the RGB data.
- the color table generating unit 42 may sort at least one source RGB value in the global color table information according to a preset ordering of the G component (e.g., ascending or descending) to generate training color table information.
- the color table generating unit 42 acquires, in the training color table information, a first source RGB value whose G component is closest to the G component of the current pixel point in the RGB data, and acquires a first color index of the first source RGB value.
- the color table generating unit 42 acquires, in a preset range centered on the first color index in the training color table information, a second source RGB value that has the smallest error with the RGB value of the current pixel point, and acquires a second color index of the second source RGB value. It may be understood that the color table generating unit 42 may acquire a plurality of source RGB values within a preset range before and after the first color index, centered on the first color index, separately calculate the errors between the first source RGB value and each of the plurality of source RGB values and the RGB value of the current pixel point, and determine the source RGB value with the smallest error among them as the second source RGB value.
- the color table generating unit 42 replaces the second source RGB value with the RGB value of the current pixel point, and uses the preset weight value and the RGB value of the current pixel point in the training color table information. A plurality of source RGB values within a preset range centered on the second color index are modified.
- when the current pixel point is not the last pixel point in the RGB data, the color table generating unit 42 acquires the training color table information obtained after the modification, takes the next pixel point of the current pixel point as the current pixel point, and returns to performing the acquisition, in the training color table information, of the first source RGB value whose G component is closest to the G component of the current pixel point in the RGB data. For example, the second pixel point in the RGB data is taken as the current pixel point, and the training color table information obtained after the modification is trained again.
- the training process in each instance can refer to the description of the above training process, and will not be described here again.
- when the current pixel point is the last pixel point in the RGB data, the color table generating unit 42 acquires the training color table information obtained after the modification, and determines the training color table information as the local color table information of the RGB data.
- the color table generating unit 42 may train the initialization color table information using the RGB values of the respective pixels in the RGB data to generate local color table information of the RGB data.
- the color table generating unit 42 may generate initialization color table information, for example: (0, 0, 0), (1, 1, 1), (2, 2, 2), ..., (255, 255, 255). The color table generating unit 42 acquires, in the initialization color table information, a third source RGB value whose G component is closest to the G component of the current pixel point in the RGB data, and acquires a third color index of the third source RGB value; the color table generating unit 42 then acquires, in a preset range centered on the third color index, the fourth source RGB value having the smallest error with the RGB value of the current pixel point, and acquires a fourth color index of the fourth source RGB value.
- the color table generating unit 42 replaces the fourth source RGB value with the RGB value of the current pixel point, and modifies, using a preset weight value and the RGB value of the current pixel point, a plurality of source RGB values within a preset range centered on the fourth color index in the initialization color table information. When the current pixel point is not the last pixel point in the RGB data, the color table generating unit 42 acquires the initialization color table information obtained after the modification, takes the next pixel point of the current pixel point as the current pixel point, and returns to performing the acquisition, in the initialization color table information, of the third source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- when the current pixel point is the last pixel point in the RGB data, the color table generating unit 42 acquires the initialization color table information obtained after the modification, and determines the initialization color table information as the local color table information of the RGB data. It should be noted that the process of training the initialization color table information in each instance may refer to the execution process of the foregoing manner, and details are not described herein again.
- when the RGB data is the Nth frame image in the picture file, the color table generating unit 42 may train the local color table information of the (N-1)th frame RGB data using the RGB values of the pixels in the RGB data to generate local color table information of the RGB data.
- the color table generating unit 42 may sort at least one source RGB value in the local color table information of the (N-1)th frame RGB data according to a preset sorting manner of the G component to generate training color table information. The color table generating unit 42 acquires, in the training color table information, a fifth source RGB value whose G component is closest to the G component of the current pixel point in the RGB data, and acquires a fifth color index of the fifth source RGB value; the color table generating unit 42 then acquires, in a preset range centered on the fifth color index in the training color table information, a sixth source RGB value having the smallest error with the RGB value of the current pixel point, and acquires a sixth color index of the sixth source RGB value. The color table generating unit 42 replaces the sixth source RGB value with the RGB value of the current pixel point, and modifies, using a preset weight value and the RGB value of the current pixel point, a plurality of source RGB values within a preset range centered on the sixth color index in the training color table information.
- when the current pixel point is not the last pixel point in the RGB data, the color table generating unit 42 acquires the training color table information obtained after the modification, takes the next pixel point of the current pixel point as the current pixel point, and returns to performing the acquisition, in the training color table information, of the fifth source RGB value whose G component is closest to the G component of the current pixel point in the RGB data. When the current pixel point is the last pixel point in the RGB data, the color table generating unit 42 acquires the training color table information obtained after the modification, and determines the training color table information as the local color table information of the RGB data. It should be noted that the training of the training color table information in each instance can refer to the execution process of the foregoing manner, and details are not described herein again.
- the pixel value updating unit 43 is configured to update the RGB values of the respective pixel points by using the local color table information of the RGB data.
- the pixel value updating unit 43 may update the RGB values of each pixel in the RGB data by using the local color table information of the RGB data, and the picture file processing apparatus 4 performs image encoding on the updated RGB data using the delay time and the total number of frames to generate the picture file.
- the local color table information of the RGB data is generated by training the global color table information or the initialization color table information, thereby effectively reducing the distortion of the image.
- By using the G component to obtain the closest source RGB value and performing a small-range color index search in the color table information, it is not necessary to traverse the entire color table information, thereby reducing the complexity of training the local color table information and further improving the coding efficiency of the picture file.
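Because the training color table is sorted by G component, the closest-G lookup need not traverse the whole table. One way to realize this (the text does not mandate a particular search method) is a binary search over the G components:

```python
import bisect

def closest_g_index(g_values, g):
    """Return the index of the entry whose G component is closest to g.

    g_values must be sorted ascending, which is exactly why the
    training color table is sorted by G component first. Runs in
    O(log n) instead of scanning all entries.
    """
    i = bisect.bisect_left(g_values, g)
    if i == 0:
        return 0
    if i == len(g_values):
        return len(g_values) - 1
    # Pick the nearer of the two neighbouring entries.
    return i if g_values[i] - g < g - g_values[i - 1] else i - 1
```

The subsequent minimum-error search then only examines the preset range centered on this index, keeping the per-pixel cost small.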
- FIG. 35 is a schematic structural diagram of still another picture file processing apparatus according to an embodiment of the present application.
- the picture file processing apparatus 4000 may include at least one processor 4001, such as a CPU, at least one network interface 4004, a user interface 4003, a memory 4005, and at least one communication bus 4002.
- the communication bus 4002 is used to implement connection communication between these components.
- the user interface 4003 can include a display and a keyboard.
- the optional user interface 4003 can also include a standard wired interface and a wireless interface.
- in some examples, the network interface 4004 may include a standard wired interface or a wireless interface (such as a WI-FI interface).
- the memory 4005 may be a high speed RAM memory or a non-volatile memory such as at least one disk memory.
- the memory 4005 may also be at least one storage device located remotely from the aforementioned processor 4001 in some examples.
- an operating system, a network communication module, a user interface module, and an image processing application may be included in the memory 4005 as a computer storage medium.
- the network interface 4004 is mainly used to connect to the transmitting end and perform data communication with the transmitting end.
- the user interface 4003 is mainly used to provide an input interface for the user, and obtain data input by the user.
- the processor 4001 can be used to call the image processing application stored in the memory 4005, and specifically perform the following steps:
- RGB data generated by processing the original picture file is acquired, and RGB values of each pixel in the RGB data are acquired.
- the initial color table information of the RGB data is trained using the RGB values of the respective pixels to generate local color table information of the RGB data.
- the RGB values of the respective pixels are updated using the local color table information of the RGB data.
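The pixel-update step above can be sketched as a nearest-entry mapping. The squared-distance error metric is an assumption for illustration; the text only says the RGB values are updated using the local color table information:

```python
import numpy as np

def update_pixels(local_table, pixels):
    """Replace each pixel's RGB value with the nearest color-table entry.

    A brute-force nearest-neighbour mapping over the local color
    table; real encoders would also record the chosen color indices
    for palette-based encoding.
    """
    table = np.asarray(local_table, dtype=np.float64)
    out = []
    for px in pixels:
        errs = np.sum((table - np.asarray(px, dtype=np.float64)) ** 2, axis=1)
        out.append(tuple(int(c) for c in table[int(np.argmin(errs))]))
    return out
```

After this update every pixel value is drawn from the local color table, so the frame can be encoded as indices into that table.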
- when the processor 4001 performs the training of the initial color table information of the RGB data by using the RGB values of the respective pixels to generate the local color table information of the RGB data, the following steps are specifically performed:
- when the RGB data is the first frame image in the picture file, and the global color table information exists in the picture header information data segment generated by decoding the picture file, the global color table information is trained using the RGB value of each pixel in the RGB data to generate local color table information of the RGB data.
- when the processor 4001 performs the training of the global color table information by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, the following steps are specifically performed:
- At least one source RGB value in the global color table information is sorted according to a preset sorting manner of the G component to generate training color table information.
- a first source RGB value whose G component is closest to the G component of the current pixel point in the RGB data is acquired in the training color table information, and a first color index of the first source RGB value is acquired.
- a second source RGB value having the smallest error with the RGB value of the current pixel point is acquired in a preset range centered on the first color index in the training color table information, and a second color index of the second source RGB value is acquired.
- when the current pixel point is not the last pixel point in the RGB data, the training color table information obtained after the modification is acquired, the next pixel point of the current pixel point is taken as the current pixel point, and the process returns to the step of acquiring, in the training color table information, the first source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- when the current pixel point is the last pixel point in the RGB data, the training color table information obtained after the modification is acquired, and the training color table information is determined as the local color table information of the RGB data.
- when the processor 4001 performs the training of the initial color table information of the RGB data by using the RGB values of the respective pixels to generate the local color table information of the RGB data, the following steps are specifically performed:
- when the RGB data is the first frame image in the picture file, and the global color table information does not exist in the picture header information data segment generated by decoding the picture file, the initialization color table information is trained using the RGB value of each pixel in the RGB data to generate local color table information of the RGB data.
- when the processor 4001 performs the training of the initialization color table information by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, the following steps are specifically performed:
- a third source RGB value whose G component is closest to the G component of the current pixel point in the RGB data is acquired in the initialization color table information, and a third color index of the third source RGB value is acquired.
- a fourth source RGB value having the smallest error with the RGB value of the current pixel point is acquired in a preset range centered on the third color index in the initialization color table information, and a fourth color index of the fourth source RGB value is acquired.
- when the current pixel point is not the last pixel point in the RGB data, the initialization color table information obtained after the modification is acquired, the next pixel point of the current pixel point is taken as the current pixel point, and the process returns to the step of acquiring, in the initialization color table information, the third source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- when the current pixel point is the last pixel point in the RGB data, the initialization color table information obtained after the modification is acquired, and the initialization color table information is determined as the local color table information of the RGB data.
- when the processor 4001 performs the training of the initial color table information of the RGB data by using the RGB values of the respective pixels to generate the local color table information of the RGB data, the following steps are specifically performed:
- when the RGB data is the Nth frame image in the picture file, the local color table information of the (N-1)th frame RGB data is trained using the RGB values of the pixels in the RGB data to generate local color table information of the RGB data, where N is a positive integer greater than 1 and less than or equal to the total number of frames.
- when the processor 4001 performs the training of the local color table information of the (N-1)th frame RGB data by using the RGB values of the pixels in the RGB data to generate the local color table information of the RGB data, the following steps are specifically performed:
- At least one source RGB value in the local color table information of the (N-1)th frame RGB data is sorted according to a preset sorting manner of the G component to generate training color table information.
- a fifth source RGB value whose G component is closest to the G component of the current pixel point in the RGB data and a fifth color index of the fifth source RGB value are acquired in the training color table information.
- a sixth source RGB value having the smallest error with the RGB value of the current pixel point is acquired in a preset range centered on the fifth color index in the training color table information, and a sixth color index of the sixth source RGB value is acquired.
- when the current pixel point is not the last pixel point in the RGB data, the training color table information obtained after the modification is acquired, the next pixel point of the current pixel point is taken as the current pixel point, and the process returns to the step of acquiring, in the training color table information, the fifth source RGB value whose G component is closest to the G component of the current pixel point in the RGB data.
- when the current pixel point is the last pixel point in the RGB data, the training color table information obtained after the modification is acquired, and the training color table information is determined as the local color table information of the RGB data.
- the picture file is a GIF image.
- the local color table information of the RGB data is generated by training the global color table information or the initialization color table information, thereby effectively reducing the distortion of the image.
- By using the G component to obtain the closest source RGB value and performing a small-range color index search in the color table information, it is not necessary to traverse the entire color table information, thereby reducing the complexity of training the local color table information and further improving the coding efficiency of the picture file.
- the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Color Image Communication Systems (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Compression Of Band Width Or Redundancy In Fax (AREA)
Abstract
Embodiments of the present application provide a picture file processing method and apparatus, the method comprising the following steps: receiving a transcoded code stream of an original picture file sent by a transmitting end, and parsing the transcoded code stream to acquire a picture header information data segment and code stream data of the picture file, the code stream data being information generated by the transmitting end encoding YUV data, and the YUV data being data generated by the transmitting end performing conversion on each frame image in the picture file; generating the YUV data based on the code stream data, and processing the YUV data based on the picture header information data segment to generate the picture file. Use of the present application reduces the file data amount of the picture file during transmission, thereby reducing bandwidth costs.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710225916.0A CN106921869B (zh) | 2017-04-08 | 2017-04-08 | 一种图片文件处理方法及其设备 |
CN201710225916.0 | 2017-04-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018184465A1 true WO2018184465A1 (fr) | 2018-10-11 |
Family
ID=59568640
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/079463 WO2018184465A1 (fr) | 2017-04-08 | 2018-03-19 | Procédé de traitement de fichier d'image, appareil et support de stockage |
Country Status (3)
Country | Link |
---|---|
CN (2) | CN109151503B (fr) |
TW (1) | TWI672942B (fr) |
WO (1) | WO2018184465A1 (fr) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109151503B (zh) * | 2017-04-08 | 2022-03-15 | 腾讯科技(深圳)有限公司 | 一种图片文件处理方法及其设备 |
CN110069728B (zh) * | 2017-10-30 | 2022-08-12 | 北京京东尚科信息技术有限公司 | 用于展示图片的方法及装置 |
US10841458B2 (en) * | 2018-03-02 | 2020-11-17 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium |
CN108933945B (zh) * | 2018-08-17 | 2020-06-19 | 腾讯科技(深圳)有限公司 | 一种gif图片的压缩方法、装置及存储介质 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1516848A (zh) * | 2001-06-15 | 2004-07-28 | | 基于小波变换的图像编解码器中获得编码增益的方法和系统 |
US20080112634A1 (en) * | 2006-11-13 | 2008-05-15 | Samsung Electronics Co., Ltd. | Method and apparatus for image processing |
CN101540901A (zh) * | 2008-03-20 | 2009-09-23 | 华为技术有限公司 | 编解码方法及装置 |
CN102231836A (zh) * | 2011-06-27 | 2011-11-02 | 深圳市茁壮网络股份有限公司 | 一种gif文件在数字电视系统中的处理方法和装置 |
CN106383880A (zh) * | 2016-09-13 | 2017-02-08 | 广州视睿电子科技有限公司 | 一种gif文件的播放方法及系统 |
CN106899861A (zh) * | 2017-04-08 | 2017-06-27 | 腾讯科技(深圳)有限公司 | 一种图片文件处理方法及其设备、系统 |
CN106921869A (zh) * | 2017-04-08 | 2017-07-04 | 腾讯科技(深圳)有限公司 | 一种图片文件处理方法及其设备 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8189908B2 (en) * | 2005-09-02 | 2012-05-29 | Adobe Systems, Inc. | System and method for compressing video data and alpha channel data using a single stream |
CN101459829B (zh) * | 2008-12-25 | 2011-05-04 | 杭州恒生数字设备科技有限公司 | 一种低延迟的全数字监控系统 |
CN101742317B (zh) * | 2009-12-31 | 2012-03-28 | 北京中科大洋科技发展股份有限公司 | 一种带阿尔法透明通道的视频压缩编码方法 |
CN104333762B (zh) * | 2014-11-24 | 2017-10-10 | 成都瑞博慧窗信息技术有限公司 | 一种视频解码方法 |
-
2017
- 2017-04-08 CN CN201810834942.8A patent/CN109151503B/zh active Active
- 2017-04-08 CN CN201710225916.0A patent/CN106921869B/zh active Active
-
2018
- 2018-03-19 WO PCT/CN2018/079463 patent/WO2018184465A1/fr active Application Filing
- 2018-04-03 TW TW107111915A patent/TWI672942B/zh active
Non-Patent Citations (1)
Title |
---|
CAO HONG: "A graphic interchange format file display model", COMPUTER ERA, no. 10, 25 October 2003 (2003-10-25), pages 45 - 46, ISSN: 1006-8228 * |
Also Published As
Publication number | Publication date |
---|---|
CN106921869B (zh) | 2018-09-04 |
CN109151503B (zh) | 2022-03-15 |
TW201838416A (zh) | 2018-10-16 |
CN106921869A (zh) | 2017-07-04 |
CN109151503A (zh) | 2019-01-04 |
TWI672942B (zh) | 2019-09-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI680671B (zh) | 圖片檔處理方法、設備及系統以及儲存介質 | |
US12256089B2 (en) | Coded-block-flag coding and derivation | |
CN113411577B (zh) | 编码方法及装置 | |
RU2668723C2 (ru) | Способ и оборудование для кодирования и декодирования видеосигналов | |
US20150229933A1 (en) | Adaptive screen and video coding scheme | |
TWI672942B (zh) | 圖片檔處理方法、裝置及儲存介質 | |
RU2693185C2 (ru) | Способ кодирования и способ декодирования цветового преобразования и соответствующие устройства | |
WO2017129023A1 (fr) | Procédé de décodage, procédé de codage, appareil de décodage et appareil de codage | |
CN112954367B (zh) | 使用调色板译码的编码器、解码器和相应方法 | |
KR102349788B1 (ko) | 영상의 부호화/복호화 방법 및 장치 | |
CN110754085B (zh) | 用于非4:4:4格式视频内容的颜色重映射 | |
JP2023549210A (ja) | ビデオフレーム圧縮方法、ビデオフレーム伸長方法及び装置 | |
CN112204971A (zh) | 视频图像编码方法、设备及可移动平台 | |
WO2019109955A1 (fr) | Procédé et appareil de prédiction inter-trames et dispositif terminal | |
CN111246208B (zh) | 视频处理方法、装置及电子设备 | |
CN113557727B (zh) | 一种视频解码方法和相关装置 | |
WO2020042853A1 (fr) | Procédé et appareil de prédiction intra | |
WO2021169817A1 (fr) | Procédé de traitement vidéo et dispositif électronique | |
CN115150370B (zh) | 一种图像处理的方法 | |
CN119032566A (zh) | 视频编解码方法、装置、设备、系统及存储介质 | |
WO2023092404A1 (fr) | Procédés et dispositifs de codage et de décodage vidéo, système et support de stockage |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18780447 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18780447 Country of ref document: EP Kind code of ref document: A1 |