CN111754412B - Method and device for constructing data pair and terminal equipment - Google Patents
- Publication number
- CN111754412B CN111754412B CN201910249132.0A CN201910249132A CN111754412B CN 111754412 B CN111754412 B CN 111754412B CN 201910249132 A CN201910249132 A CN 201910249132A CN 111754412 B CN111754412 B CN 111754412B
- Authority
- CN
- China
- Prior art keywords
- sdr
- hdr
- picture
- pictures
- sample
- Prior art date
- Legal status: Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20208—High dynamic range [HDR] image processing
Abstract
The application belongs to the technical field of data processing, and provides a method, an apparatus, a terminal device, and a computer-readable storage medium for constructing data pairs. The method comprises the following steps: acquiring a plurality of HDR sample pictures; acquiring an SDR sample picture corresponding to each HDR sample picture in the plurality of HDR sample pictures; performing gamma correction on the SDR sample picture, and performing different tone mappings on the gamma-corrected SDR sample picture to obtain a plurality of SDR pictures; and performing the same set of tone mappings on each HDR sample picture to obtain a plurality of HDR pictures corresponding to the plurality of SDR pictures, wherein an SDR picture and an HDR picture obtained using the same tone mapping form one SDR and HDR data pair. Through the application, SDR and HDR data pairs can be constructed.
Description
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method, an apparatus, a terminal device, and a computer readable storage medium for constructing a data pair.
Background
With the pupil unchanged, the human eye can perceive a luminance range of approximately 1 to 10⁵ levels. In the television field, standard dynamic range (SDR) video is generally adopted, with a brightness range of 1 to 10³ levels, whereas high dynamic range (HDR) video, which has risen in recent years, covers 1 to 10⁵ levels and can therefore match the brightness range perceivable by the human pupil. Compared with SDR video, HDR video offers great improvements in dynamic range, quantization depth, color gamut, frame rate, and other aspects. However, how to construct SDR and HDR data pairs is a technical problem still to be solved.
Disclosure of Invention
In view of this, embodiments of the present application provide a method, apparatus, terminal device, and computer readable storage medium for constructing a data pair to construct an SDR and HDR data pair.
A first aspect of an embodiment of the present application provides a method for constructing a data pair, the method including:
acquiring a plurality of HDR sample pictures;
Acquiring SDR sample pictures corresponding to each HDR sample picture in the plurality of HDR sample pictures;
performing gamma correction on the SDR sample picture, and performing different tone mapping on the SDR sample picture subjected to gamma correction to obtain a plurality of SDR pictures;
And performing the different tone mapping on each HDR sample picture to obtain a plurality of HDR pictures corresponding to the plurality of SDR pictures, wherein the SDR pictures and the HDR pictures obtained by using the same tone mapping are one SDR and HDR data pair.
A second aspect of an embodiment of the present application provides an apparatus for constructing a data pair, the apparatus comprising:
the first acquisition module is used for acquiring a plurality of HDR sample pictures;
A second obtaining module, configured to obtain an SDR sample picture corresponding to each HDR sample picture in the plurality of HDR sample pictures;
The picture processing module is used for correcting the gamma of the SDR sample pictures and carrying out different tone mapping on the SDR sample pictures after the gamma correction to obtain a plurality of SDR pictures;
And the tone mapping module is used for carrying out different tone mapping on each HDR sample picture to obtain a plurality of HDR pictures corresponding to the plurality of SDR pictures, wherein the SDR pictures and the HDR pictures obtained by using the same tone mapping are one SDR and HDR data pair.
A third aspect of an embodiment of the present application provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to the first aspect described above when the computer program is executed.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method as described in the first aspect above.
A fifth aspect of the application provides a computer program product comprising a computer program which, when executed by one or more processors, implements the steps of the method as described in the first aspect above.
As can be seen from the above, after obtaining a plurality of HDR sample pictures, the present application obtains the SDR sample picture corresponding to each HDR sample picture and performs gamma correction on it, so that its brightness is the same as that of the HDR sample picture. Different tone mappings are then performed on the SDR sample picture to obtain a plurality of SDR pictures, and the same tone mappings are performed on each HDR sample picture to obtain a plurality of corresponding HDR pictures, thereby obtaining a plurality of SDR and HDR data pairs. By performing different tone mappings on the HDR sample pictures and the SDR sample pictures, the scheme of the application obtains a plurality of SDR and HDR data pairs, thereby obtaining a large number of SDR and HDR data pairs on the basis of a small number of HDR sample pictures.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a method for constructing data pairs according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a method for constructing data pairs according to a second embodiment of the present application;
FIG. 3 is a diagram showing an example of the structure of a deep learning model;
FIG. 4 is a schematic diagram of an apparatus for constructing data pairs according to a third embodiment of the present application;
fig. 5 is a schematic diagram of a terminal device according to a fourth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
It should be understood that, the sequence number of each step in this embodiment does not mean the execution sequence, and the execution sequence of each process should be determined by its function and internal logic, and should not limit the implementation process of the embodiment of the present application in any way.
In order to illustrate the technical scheme of the application, the following description is made by specific examples.
Referring to fig. 1, a flowchart of a method for constructing a data pair according to an embodiment of the present application is shown, where the method is applied to a terminal device, and as shown in the figure, the method may include the following steps:
Step S101, a plurality of HDR sample pictures are acquired.
In the embodiment of the present application, the plurality of HDR sample pictures may refer to a plurality of HDR pictures input by a user for acquiring a plurality of SDR and HDR data pairs. The plurality of HDR sample pictures is the basis for acquiring the plurality of SDR and HDR data pairs.
Step S102, obtaining an SDR sample picture corresponding to each HDR sample picture in the plurality of HDR sample pictures.
In the embodiment of the present application, an operation that loses saturated-region information may be performed on each HDR sample picture, so as to obtain, for each HDR sample picture, a corresponding picture that has lost its saturated-region information (i.e., an SDR sample picture).
Optionally, the obtaining the SDR sample picture corresponding to each HDR sample picture in the plurality of HDR sample pictures includes:
Normalizing the values of pixel points in each HDR sample picture in the plurality of HDR sample pictures;
- Counting the pixel points in each HDR sample picture whose values are greater than a first threshold, and multiplying the values of the pixel points in each HDR sample picture by the reciprocal of the first threshold to obtain a first picture;
- and setting to 1 the values of the pixel points in the first picture that are greater than 1, to obtain an SDR sample picture.
In the embodiment of the present application, the values of the pixel points in each HDR sample picture are normalized to between 0 and 1. Histogram statistics can then be performed on the normalized pixel points, and the pixel points whose values are greater than the first threshold are counted; the region where these pixel points are located is the saturated region. Multiplying the pixel values by the reciprocal of the first threshold makes the values of the saturated pixels greater than 1, yielding a first picture that retains the saturated-region information (because the brightness of the saturated region changes, its information can still be displayed, i.e., it is not lost). Setting to 1 the values of the pixel points whose values in the first picture are greater than 1 then yields an SDR sample picture that has lost the saturated-region information (because the saturated pixels are all set to 1, the brightness within the saturated region no longer varies and its information cannot be displayed, i.e., it is lost).
The first threshold is used for determining the saturated region in each HDR sample picture: the region where the pixel points with values greater than the first threshold are located is the saturated region, and the region where the pixel points with values less than or equal to the first threshold are located is the unsaturated region. The first threshold may be preset by a user or calculated according to a preset algorithm. The preset algorithm may perform histogram statistics on the normalized pixel points of each HDR sample picture, count the total number of pixel points and the number of pixel points corresponding to each value, and then, sorting the values from large to small, take as the threshold the minimum value among the top n percent of pixel points. The value of n can be set according to actual requirements, for example a number between 5 and 15, which is not limited herein.
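To make the above procedure concrete, the following is a minimal NumPy sketch of deriving an SDR sample picture from one HDR sample picture. The function name, the percentile-based computation of the first threshold, and the default n = 10 are illustrative assumptions, not part of the patent text:

```python
import numpy as np

def make_sdr_sample(hdr, n_percent=10):
    # Normalize the pixel values of the HDR sample picture to [0, 1].
    norm = hdr.astype(np.float64) / hdr.max()

    # First threshold: the minimum value among the top-n% brightest
    # pixels, obtained here from the histogram via a percentile
    # (an assumed realization of the "preset algorithm" above).
    first_threshold = np.percentile(norm, 100 - n_percent)

    # Multiply every pixel value by the reciprocal of the threshold;
    # saturated pixels (value > threshold) now exceed 1, giving the
    # "first picture", which still carries saturated-region information.
    first_picture = norm / first_threshold

    # Set values greater than 1 to 1: the saturated-region information
    # is discarded, which is what defines the SDR sample picture.
    sdr_sample = np.minimum(first_picture, 1.0)
    return sdr_sample, first_picture
```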
Step S103, gamma correction is carried out on the SDR sample picture, and different tone mapping is carried out on the SDR sample picture after the gamma correction, so that a plurality of SDR pictures are obtained.
Step S104, performing the different tone mapping on each HDR sample picture to obtain a plurality of HDR pictures corresponding to the plurality of SDR pictures.
Wherein the SDR picture and the HDR picture obtained using the same tone mapping are one SDR and HDR data pair.
Optionally, said performing said different tone mapping on said each HDR sample picture comprises:
The different tone mapping is performed on the first picture.
In the embodiment of the application, gamma correction is performed on the SDR sample picture so that its brightness is the same as that of the HDR sample picture, and different tone mappings are then performed on the gamma-corrected SDR sample picture and on the first picture, so that a plurality of SDR and HDR picture data pairs can be obtained. The pixel value may be the gray value of the pixel. For one HDR sample picture, a plurality of SDR pictures and HDR pictures with different exposure levels can be obtained through different tone mappings. It should be noted that corresponding SDR and HDR pictures use the same tone mapping, i.e., the same exposure level. The different tone mappings may be different parameterizations of the same tone mapping algorithm, or different tone mapping algorithms, which is not limited herein.
For example, suppose picture A is an HDR sample picture. The values of the pixel points in picture A are normalized to between 0 and 1, histogram statistics are performed, and the minimum value among the top 10% of pixel values is found to be 0.8 (i.e., pixel points with values above 0.8 account for 10% of the total number of pixel points in picture A). Multiplying the pixel values of picture A by 1.25 (the reciprocal of 0.8) ensures that the pixel points whose values were greater than 0.8 and less than or equal to 1 now have values greater than 1, yielding the first picture. Setting to 1 the values of the pixel points in the first picture that are greater than 1 yields the SDR sample picture that has lost the saturated-region information. Gamma correction is then performed on the SDR sample picture so that its brightness is the same as that of picture A. Performing a first tone mapping and a second tone mapping on the brightness-adjusted SDR sample picture yields a first SDR picture and a second SDR picture; performing the same first and second tone mappings on the first picture yields a first HDR picture and a second HDR picture. The first SDR picture and the first HDR picture form one data pair, and the second SDR picture and the second HDR picture form another.
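Under stated assumptions, the pair-construction step can be sketched as follows. The gamma value of 2.2, its direction, and the exposure-scaled tanh curve standing in for "different tone mappings" are illustrative choices only; the patent requires merely that the same tone mapping be applied to the SDR sample picture and to the first picture within each pair:

```python
import numpy as np

def build_data_pairs(sdr_sample, first_picture, gamma=2.2,
                     exposures=(0.5, 1.0, 2.0)):
    # Gamma correction so the SDR sample picture's brightness matches
    # the HDR sample picture (direction and value are assumptions).
    corrected = np.power(sdr_sample, gamma)

    pairs = []
    for e in exposures:
        # Each exposure level defines one tone mapping; applying the
        # same mapping to both pictures yields one SDR and HDR data pair.
        sdr_pic = np.tanh(e * corrected)
        hdr_pic = np.tanh(e * first_picture)
        pairs.append((sdr_pic, hdr_pic))
    return pairs
```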
According to the embodiment of the application, different tone mapping is carried out on the HDR sample picture and the SDR sample picture, so that a plurality of SDR and HDR data pairs can be obtained, and a large number of SDR and HDR data pairs can be obtained on the basis of a small amount of HDR sample picture.
Referring to fig. 2, a flowchart of a method for constructing a data pair according to a second embodiment of the present application is shown, where the method is applied to a terminal device, and as shown in the figure, the method may include the following steps:
Step S201, a plurality of HDR sample pictures are acquired.
The step is the same as step S101, and the detailed description of step S101 will be omitted here.
Step S202, obtaining SDR sample pictures corresponding to each HDR sample picture in the plurality of HDR sample pictures.
The step is the same as step S102, and the detailed description of step S102 will be omitted herein.
Step S203, gamma correction is carried out on the SDR sample picture, and different tone mapping is carried out on the SDR sample picture after the gamma correction, so as to obtain a plurality of SDR pictures.
The step is the same as step S103, and specific reference may be made to the description related to step S103, which is not repeated here.
Step S204, performing the different tone mapping on each HDR sample picture to obtain a plurality of HDR pictures corresponding to the plurality of SDR pictures.
Wherein the SDR picture and the HDR picture obtained using the same tone mapping are one SDR and HDR data pair.
The step is the same as step S104, and the detailed description of step S104 will be omitted herein.
In step S205, a deep learning model is trained from a plurality of SDR and HDR data pairs.
In the embodiment of the present application, training the deep learning model may refer to making the model learn the mapping relationship between SDR pictures (i.e., pictures that have lost saturated-region information) and HDR pictures (i.e., pictures that retain it), so that the trained deep learning model can reconstruct the saturated-region information of an SDR picture (i.e., recover the information lost in its saturated region) to obtain an HDR picture (an SDR picture whose saturated-region information has been reconstructed is an HDR picture).
In the embodiment of the application, a plurality of SDR and HDR data pairs can be used as training samples to train the deep learning model, adjusting the parameters in the model and improving the accuracy of converting SDR pictures into HDR pictures. The SDR picture and the HDR picture in one data pair have the same content; the difference is that the saturated-region information of the SDR picture is lost while that of the HDR picture is not. The SDR pictures of the data pairs are used as the input of the deep learning model and the corresponding HDR pictures as the target pictures, so that the model learns the mapping relationship between SDR pictures and HDR pictures; the trained model then reconstructs the saturated-region information in an SDR picture to obtain an HDR picture.
Optionally, the deep learning model includes an encoding stage and a decoding stage; training a deep learning model from a plurality of SDR and HDR data pairs includes:
Convolving and downsampling the SDR picture in each SDR and HDR data pair in the encoding stage to obtain a feature map of the SDR picture in each SDR and HDR data pair;
convolving and upsampling the feature map of the SDR picture in each SDR and HDR data pair in the decoding stage, and outputting a predicted HDR picture;
And training the deep learning model by learning, through a preset loss function, the difference between the predicted HDR picture and the HDR picture in each SDR and HDR data pair.
The preset loss function may be any loss function preset by a user, for example an L1 loss function or an L2 loss function, which is not limited herein. The loss function measures the difference between the predicted HDR picture and the HDR picture (i.e., the target picture) in the SDR and HDR data pair; the deep learning model can be optimized through the loss function, and when the loss function reaches a convergence state, the deep learning model has been trained.
FIG. 3 is a schematic diagram of the deep learning model. The deep learning model of the embodiment of the present application mainly includes an encoding stage and a decoding stage. In the encoding stage, convolutions with a stride of 2 are used to perform feature extraction and downsampling on each SDR video frame input to the model, obtaining feature maps of predetermined sizes (fixed fractions of the SDR video frame size). In the decoding stage, picture reconstruction is carried out by means of upsampling followed by convolution, obtaining the output HDR video frame. To reduce the information loss caused by downsampling, a concat layer is added to the deep learning model, splicing together the feature maps of the encoding stage and the corresponding feature maps of the decoding stage. The deep learning model in the embodiment of the application adopts upsampling plus convolution in place of the deconvolution layer commonly used in the deep learning field, avoiding grid artifacts in the reconstructed picture; the difference between the output picture and the target picture is learned through an L2 loss function, iterating continuously to train the deep learning model.
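Read as code, the ingredients named above (stride-2 convolutions in the encoding stage, upsampling plus convolution instead of deconvolution in the decoding stage, concat layers splicing encoder and decoder feature maps, and an L2 loss) can be assembled as in the following hedged PyTorch sketch. Channel counts, network depth, kernel sizes, and the optimizer are illustrative assumptions not fixed by the patent:

```python
import torch
import torch.nn as nn

class SdrToHdrNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoding stage: stride-2 convolutions extract features and downsample.
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # Decoding stage: upsampling + convolution, avoiding deconvolution
        # and the grid artifacts it can introduce.
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec2 = nn.Sequential(nn.Conv2d(64 + 32, 32, 3, padding=1), nn.ReLU())
        self.dec1 = nn.Conv2d(32 + 3, 3, 3, padding=1)

    def forward(self, x):
        f1 = self.enc1(x)   # 1/2 resolution
        f2 = self.enc2(f1)  # 1/4 resolution
        # Concat layers splice encoder feature maps into the decoder.
        d2 = self.dec2(torch.cat([self.up(f2), f1], dim=1))
        return self.dec1(torch.cat([self.up(d2), x], dim=1))

# One training step with the L2 loss mentioned in the description.
model, loss_fn = SdrToHdrNet(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
sdr = torch.rand(1, 3, 64, 64)  # SDR picture (input)
hdr = torch.rand(1, 3, 64, 64)  # HDR picture (target)
opt.zero_grad()
loss = loss_fn(model(sdr), hdr)
loss.backward()
opt.step()
```

Training iterates such steps over the constructed SDR and HDR data pairs until the loss function converges.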
Optionally, the embodiment of the present application further includes:
splitting an SDR video into a plurality of SDR video frames;
- Processing each SDR video frame in the plurality of SDR video frames by using the deep learning model to obtain a corresponding HDR video frame, wherein the plurality of SDR video frames correspond to a plurality of HDR video frames;
Synthesizing the plurality of HDR video frames into an HDR video.
Optionally, the splitting the SDR video into a plurality of SDR video frames includes:
Splitting the SDR video into a plurality of SDR video frames through FFmpeg;
Synthesizing the plurality of HDR video frames into an HDR video includes:
the plurality of HDR video frames are synthesized into an HDR video by FFmpeg.
In the embodiment of the application, the terminal device may first acquire the SDR video to be converted into HDR video (for example, from a network or from a storage device) and split the SDR video into a plurality of SDR video frames in a preset splitting manner. A video typically consists of multiple frames; each SDR video frame is a still picture, and the SDR video frames displayed in rapid succession form the moving SDR video. The preset splitting manner may be any manner preset by a user for splitting the SDR video into SDR video frames, including but not limited to FFmpeg. FFmpeg is a set of open-source computer programs that can be used to record, convert, and stream digital audio and video, and includes the leading audio/video codec library libavcodec, among others.
In the embodiment of the application, the SDR video can be sequentially segmented into a plurality of SDR video frames according to the play sequence of the SDR video frames, and each SDR video frame in the plurality of SDR video frames is sequentially processed by using a trained deep learning model.
It should be noted that, before the terminal device splits the SDR video into a plurality of SDR video frames, the user may select whether to start the video conversion function. When the user selects "yes", the terminal device starts the video conversion function and splits the SDR video into a plurality of SDR video frames; when the user selects "no", the terminal device does not start the video conversion function, i.e., does not convert the SDR video into HDR video, and there is no need to split the SDR video into frames. A physical button or a virtual button may be arranged on the terminal device, through which the user selects whether the video conversion function needs to be started. The video conversion function refers to the function of converting SDR video into HDR video.
Processing one SDR video frame through the deep learning model yields one HDR video frame; processing a plurality of SDR video frames therefore yields a plurality of HDR video frames, i.e., the plurality of SDR video frames correspond to a plurality of HDR video frames.
In the embodiment of the application, after the SDR video is split into a plurality of SDR video frames, each SDR video frame is processed by the trained deep learning model, and the saturated-region information of each SDR video frame can be recovered, thereby obtaining the HDR video frame corresponding to each SDR video frame. The deep learning model is used to convert SDR video frames into HDR video frames. The saturated region may refer to the region where the pixel points whose values are greater than a preset saturation threshold are located, and the information of this region may be called saturated-region information, or saturated-region details. For example, an SDR video frame may contain relatively bright regions such as the sun or street lamps; the details of such regions are usually masked by highlights and cannot be clearly displayed in the SDR video frame (for example, clouds around sunlight). Processing the SDR video frame through the deep learning model can recover the details of these regions. The preset saturation threshold may be a threshold preset by a user for determining the saturated region.
In an embodiment of the present application, after a plurality of HDR video frames are acquired, they may be synthesized into an HDR video using FFmpeg, in the order in which the HDR video frames were obtained. For example, suppose the SDR video is sequentially split, according to playing order, into a first, a second, and a third SDR video frame (i.e., when the SDR video is played, the first frame plays first, then the second, and finally the third). Processing the three frames in turn with the trained deep learning model yields a corresponding first, second, and third HDR video frame, which are synthesized into an HDR video; when the synthesized HDR video is played, the first HDR video frame plays first, then the second, and finally the third.
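A hedged sketch of the split-process-synthesize pipeline, invoking FFmpeg through Python's subprocess module; the file names, the PNG intermediate format, and the frame rate of 25 are illustrative assumptions:

```python
import os
import subprocess

os.makedirs("frames", exist_ok=True)

# Split the SDR video into a plurality of SDR video frames.
subprocess.run(["ffmpeg", "-i", "sdr_input.mp4",
                "frames/sdr_%05d.png"], check=True)

# ... here each frame would be processed with the trained deep
# learning model, writing frames/hdr_%05d.png (see the model
# sketch above) ...

# Synthesize the plurality of HDR video frames into an HDR video,
# in the order in which the frames were obtained.
subprocess.run(["ffmpeg", "-framerate", "25",
                "-i", "frames/hdr_%05d.png",
                "hdr_output.mp4"], check=True)
```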
According to the embodiment of the application, the deep learning model can be trained with the acquired plurality of SDR and HDR data pairs. Using upsampling plus convolution in place of the deconvolution layer commonly used in the deep learning field allows the model to be trained better, and when the trained model converts SDR video into HDR video, grid artifacts in the reconstructed pictures are avoided, improving the picture quality of the HDR video.
Referring to fig. 4, a schematic diagram of an apparatus for constructing data pairs according to the third embodiment of the present application is shown; for convenience of explanation, only the portions related to the embodiment of the present application are shown.
The device comprises:
a first obtaining module 41, configured to obtain a plurality of HDR sample pictures;
a second obtaining module 42, configured to obtain an SDR sample picture corresponding to each HDR sample picture in the plurality of HDR sample pictures;
the picture processing module 43 is configured to perform gamma correction on the SDR sample picture, and perform different tone mapping on the SDR sample picture after gamma correction to obtain a plurality of SDR pictures;
the tone mapping module 44 is configured to perform the different tone mapping on each HDR sample picture to obtain a plurality of HDR pictures corresponding to the plurality of SDR pictures, where the SDR pictures and the HDR pictures obtained by using the same tone mapping are one SDR and HDR data pair.
Optionally, the second obtaining module 42 includes:
a normalization unit, configured to normalize values of pixel points in each of the plurality of HDR sample pictures;
- The pixel processing unit is used for counting the pixel points in each HDR sample picture whose values are greater than a first threshold, and multiplying the values of the pixel points in each HDR sample picture by the reciprocal of the first threshold to obtain a first picture;
- The picture acquisition unit is used for setting to 1 the values of the pixel points in the first picture that are greater than 1, to obtain an SDR sample picture;
the tone mapping module 44 is specifically configured to:
The different tone mapping is performed on the first picture.
Optionally, the apparatus further includes:
Model training module 45 is configured to train a deep learning model based on the plurality of SDR and HDR data pairs.
Optionally, the deep learning model includes an encoding stage and a decoding stage; the model training module 45 includes:
- The feature map obtaining unit is used for convolving and downsampling the SDR picture in each SDR and HDR data pair in the encoding stage to obtain the feature map of the SDR picture in each SDR and HDR data pair;
a picture output unit, configured to perform convolution and upsampling on a feature map of an SDR picture in each of the SDR and HDR data pairs in the decoding stage, and output a predicted HDR picture;
- And the difference prediction unit is used for training the deep learning model by learning, through a preset loss function, the difference between the predicted HDR picture and the HDR picture in each SDR and HDR data pair.
Optionally, the apparatus further includes:
a video slicing module 46 for slicing the SDR video into a plurality of SDR video frames;
- a video frame processing module 47, configured to process each SDR video frame of the plurality of SDR video frames using the deep learning model to obtain a corresponding HDR video frame, where the plurality of SDR video frames correspond to a plurality of HDR video frames;
video synthesis module 48 is configured to synthesize the plurality of HDR video frames into an HDR video.
Optionally, the video slicing module 46 is specifically configured to:
the SDR video is sliced into multiple SDR video frames by FFmpeg.
Optionally, the video synthesis module 48 is specifically configured to:
the plurality of HDR video frames are synthesized into an HDR video by FFmpeg.
The device provided in the embodiment of the present application may be applied to the first and second embodiments of the foregoing method, and details refer to the description of the first and second embodiments of the foregoing method, which are not repeated herein.
Fig. 5 is a schematic diagram of a terminal device according to a fourth embodiment of the present application. As shown in fig. 5, the terminal device 5 of this embodiment includes: a processor 50, a memory 51, and a computer program 52 stored in the memory 51 and executable on the processor 50. When executing the computer program 52, the processor 50 implements the steps of the method embodiments for constructing data pairs described above, such as steps S101 to S104 shown in fig. 1. Alternatively, when executing the computer program 52, the processor 50 implements the functions of the modules/units of the apparatus embodiments described above, such as the functions of modules 41 to 48 shown in fig. 4.
By way of example, the computer program 52 may be partitioned into one or more modules/units that are stored in the memory 51 and executed by the processor 50 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program 52 in the terminal device 5. For example, the computer program 52 may be divided into a first acquisition module, a second acquisition module, a picture processing module, a tone mapping module, a model training module, a video segmentation module, a video frame processing module, and a video synthesis module, each of which specifically functions as follows:
the first acquisition module is used for acquiring a plurality of HDR sample pictures;
A second obtaining module, configured to obtain an SDR sample picture corresponding to each HDR sample picture in the plurality of HDR sample pictures;
The picture processing module is used for correcting the gamma of the SDR sample pictures and carrying out different tone mapping on the SDR sample pictures after the gamma correction to obtain a plurality of SDR pictures;
And the tone mapping module is used for carrying out different tone mapping on each HDR sample picture to obtain a plurality of HDR pictures corresponding to the plurality of SDR pictures, wherein the SDR pictures and the HDR pictures obtained by using the same tone mapping are one SDR and HDR data pair.
Optionally, the second obtaining module includes:
a normalization unit, configured to normalize values of pixel points in each of the plurality of HDR sample pictures;
- The pixel processing unit is used for counting the pixel points in each HDR sample picture whose values are greater than a first threshold, and multiplying the values of the pixel points in each HDR sample picture by the reciprocal of the first threshold to obtain a first picture;
- The picture acquisition unit is used for setting to 1 the values of the pixel points in the first picture that are greater than 1, to obtain an SDR sample picture;
The tone mapping module is specifically configured to:
The different tone mapping is performed on the first picture.
Optionally, the model training module is configured to train the deep learning model according to the plurality of SDR and HDR data pairs.
Optionally, the deep learning model includes an encoding stage and a decoding stage; the model training module 45 includes:
- The feature map obtaining unit is used for convolving and downsampling the SDR picture in each SDR and HDR data pair in the encoding stage to obtain the feature map of the SDR picture in each SDR and HDR data pair;
a picture output unit, configured to perform convolution and upsampling on a feature map of an SDR picture in each of the SDR and HDR data pairs in the decoding stage, and output a predicted HDR picture;
- And the difference prediction unit is used for training the deep learning model by learning, through a preset loss function, the difference between the predicted HDR picture and the HDR picture in each SDR and HDR data pair.
The video segmentation module is used for segmenting the SDR video into a plurality of SDR video frames;
- the video frame processing module is used for processing each SDR video frame in the plurality of SDR video frames by using the deep learning model to obtain a corresponding HDR video frame, wherein the plurality of SDR video frames correspond to a plurality of HDR video frames;
and the video synthesis module is used for synthesizing the plurality of HDR video frames into HDR video.
Optionally, the video slicing module is specifically configured to:
the SDR video is sliced into multiple SDR video frames by FFmpeg.
Optionally, the video synthesis module is specifically configured to:
the plurality of HDR video frames are synthesized into an HDR video by FFmpeg.
The terminal device 5 may be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server, and a television. The terminal device may include, but is not limited to, a processor 50, a memory 51. It will be appreciated by those skilled in the art that fig. 5 is merely an example of the terminal device 5 and does not constitute a limitation of the terminal device 5, and may include more or less components than illustrated, or may combine certain components, or different components, e.g., the terminal device may further include an input-output device, a network access device, a bus, etc.
- The processor 50 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like. A general-purpose processor may be a microprocessor, or any conventional processor or the like.
- The memory 51 may be an internal storage unit of the terminal device 5, such as a hard disk or a memory of the terminal device 5. The memory 51 may also be an external storage device of the terminal device 5, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the terminal device 5. Further, the memory 51 may include both an internal storage unit and an external storage device of the terminal device 5. The memory 51 is used for storing the computer program as well as other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated modules/units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer-readable medium may be appropriately adjusted according to the requirements of legislation and patent practice in each jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.
Claims (10)
1. A method of constructing a data pair, the method comprising:
acquiring a plurality of HDR sample pictures;
Acquiring SDR sample pictures corresponding to each HDR sample picture in the plurality of HDR sample pictures;
for each SDR sample picture, performing gamma correction on the SDR sample picture, and performing different tone mappings on the gamma-corrected SDR sample picture to obtain a plurality of SDR pictures;
And performing the different tone mapping on each HDR sample picture to obtain a plurality of HDR pictures corresponding to the plurality of SDR pictures, wherein the SDR pictures and the HDR pictures obtained by using the same tone mapping are one SDR and HDR data pair.
2. The method of claim 1, wherein the obtaining the SDR sample picture corresponding to each of the plurality of HDR sample pictures comprises:
Normalizing the values of pixel points in each HDR sample picture in the plurality of HDR sample pictures;
- Counting the pixel points in each HDR sample picture whose values are greater than a first threshold, and multiplying the values of the pixel points in each HDR sample picture by the reciprocal of the first threshold to obtain a first picture;
- setting to 1 the values of the pixel points in the first picture that are greater than 1, to obtain an SDR sample picture;
Correspondingly, said performing said different tone mapping on said each HDR sample picture comprises:
The different tone mapping is performed on the first picture.
3. The method of claim 1, wherein the method further comprises:
a deep learning model is trained from a plurality of SDR and HDR data pairs.
4. The method of claim 3, wherein the deep learning model includes an encoding stage and a decoding stage; said training a deep learning model from a plurality of SDR and HDR data pairs comprises:
- Convolving and downsampling the SDR picture in each SDR and HDR data pair in the encoding stage to obtain a feature map of the SDR picture in each SDR and HDR data pair;
- convolving and upsampling the feature map of the SDR picture in each SDR and HDR data pair in the decoding stage, and outputting a predicted HDR picture;
- And training the deep learning model by learning, through a preset loss function, the difference between the predicted HDR picture and the HDR picture in each SDR and HDR data pair.
5. The method of claim 4, wherein the method further comprises:
splitting an SDR video into a plurality of SDR video frames;
- Processing each SDR video frame in the plurality of SDR video frames by using the deep learning model to obtain a corresponding HDR video frame, wherein the plurality of SDR video frames correspond to a plurality of HDR video frames;
Synthesizing the plurality of HDR video frames into an HDR video.
6. The method of claim 5, wherein the splitting the SDR video into a plurality of SDR video frames comprises:
Splitting the SDR video into a plurality of SDR video frames through FFmpeg;
Synthesizing the plurality of HDR video frames into an HDR video includes:
the plurality of HDR video frames are synthesized into an HDR video by FFmpeg.
7. An apparatus for constructing a data pair, the apparatus comprising:
the first acquisition module is used for acquiring a plurality of HDR sample pictures;
A second obtaining module, configured to obtain an SDR sample picture corresponding to each HDR sample picture in the plurality of HDR sample pictures;
The picture processing module is used for correcting the gamma of each SDR sample picture and carrying out different tone mapping on the SDR sample pictures after the gamma correction to obtain a plurality of SDR pictures;
And the tone mapping module is used for carrying out different tone mapping on each HDR sample picture to obtain a plurality of HDR pictures corresponding to the plurality of SDR pictures, wherein the SDR pictures and the HDR pictures obtained by using the same tone mapping are one SDR and HDR data pair.
8. The apparatus of claim 7, wherein the second acquisition module comprises:
a normalization unit, configured to normalize values of pixel points in each of the plurality of HDR sample pictures;
- The pixel processing unit is used for counting the pixel points in each HDR sample picture whose values are greater than a first threshold, and multiplying the values of the pixel points in each HDR sample picture by the reciprocal of the first threshold to obtain a first picture;
- The picture acquisition unit is used for setting to 1 the values of the pixel points in the first picture that are greater than 1, to obtain an SDR sample picture;
The tone mapping module is specifically configured to:
The different tone mapping is performed on the first picture.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 6 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910249132.0A CN111754412B (en) | 2019-03-29 | 2019-03-29 | Method and device for constructing data pair and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111754412A CN111754412A (en) | 2020-10-09 |
CN111754412B true CN111754412B (en) | 2024-04-19 |
Family
ID=72672149
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910249132.0A Active CN111754412B (en) | 2019-03-29 | 2019-03-29 | Method and device for constructing data pair and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111754412B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11803946B2 (en) * | 2020-09-14 | 2023-10-31 | Disney Enterprises, Inc. | Deep SDR-HDR conversion |
CN112738392A (en) * | 2020-12-24 | 2021-04-30 | 上海哔哩哔哩科技有限公司 | Image conversion method and system |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103843058A (en) * | 2011-09-27 | 2014-06-04 | 皇家飞利浦有限公司 | Apparatus and method for dynamic range transforming of images |
CN104995903A (en) * | 2013-02-21 | 2015-10-21 | 皇家飞利浦有限公司 | Improved HDR image encoding and decoding methods and devices |
CN106210921A (en) * | 2016-08-12 | 2016-12-07 | 深圳创维-Rgb电子有限公司 | A kind of image effect method for improving and device thereof |
CN106233706A (en) * | 2014-02-25 | 2016-12-14 | 苹果公司 | For providing the apparatus and method of the back compatible of the video with standard dynamic range and HDR |
CN106686320A (en) * | 2017-01-22 | 2017-05-17 | 宁波星帆信息科技有限公司 | Tone mapping method based on numerical density balance |
WO2017082175A1 (en) * | 2015-11-12 | 2017-05-18 | Sony Corporation | Information processing apparatus, information recording medium, information processing method, and program |
CN107005716A (en) * | 2014-10-10 | 2017-08-01 | 皇家飞利浦有限公司 | Specified for the saturation degree processing that dynamic range maps |
CN107968919A (en) * | 2016-10-20 | 2018-04-27 | 汤姆逊许可公司 | Method and apparatus for inverse tone mapping |
CN108769804A (en) * | 2018-04-25 | 2018-11-06 | 杭州当虹科技股份有限公司 | A kind of format conversion method of high dynamic range video |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10735688B2 (en) * | 2017-07-13 | 2020-08-04 | Samsung Electronics Co., Ltd. | Electronics apparatus, display apparatus and control method thereof |
- 2019-03-29: Application CN201910249132.0A filed in China; granted as patent CN111754412B (status: active)
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | CB02 | Change of applicant information | Applicant changed from TCL Corp. (No. 19 District, Zhongkai Hi-tech Development Zone, Huizhou, Guangdong 516006, China) to TCL Technology Group Co., Ltd. (TCL Science and Technology Building, No. 17, Huifeng Third Road, Zhongkai High-tech Zone, Huizhou, Guangdong 516006, China)
 | GR01 | Patent grant |