CN114529483A - Data processing method, device, terminal and readable storage medium - Google Patents
- Publication number
- CN114529483A (application No. CN202210126015.7A)
- Authority
- CN
- China
- Prior art keywords
- filtering
- filter
- layer
- operator
- radius
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/11—Region-based segmentation
- G06T7/60—Analysis of geometric attributes
- G06T7/70—Determining position or orientation of objects or cameras
- G06T2207/20221—Image fusion; Image merging
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The application provides a data processing method comprising: obtaining the preset filtering radii of a plurality of filtering layers; determining a filtering area of the data to be processed according to the filtering radii of the plurality of filtering layers; and acquiring the image data of the filtering area. With the data processing method, the data processing device, the terminal, and the non-volatile computer-readable storage medium, all the filtering areas required when the data to be processed is filtered by the plurality of filtering layers can be acquired at once. If the data of each layer's filtering area were instead acquired separately as each layer is filtered, the repeated parts shared by the filtering areas of different layers would make the data transmission volume and the time occupied by data transmission large, increasing hardware power consumption and lowering data processing efficiency. The method therefore significantly reduces the data transmission volume and the time occupied by data transmission, reduces hardware power consumption, and improves data processing efficiency.
Description
Technical Field
The present application relates to the field of image technologies, and in particular, to a data processing method, a data processing apparatus, a terminal, and a non-volatile computer-readable storage medium.
Background
At present, when data is processed, the data has to be transferred back and forth between the memory and the storage, and different data processing flows require different data. Each flow therefore transfers its own data between the memory and the storage, so that both the data transmission volume and the time occupied by data transmission are large, which increases hardware power consumption and lowers data processing efficiency.
Disclosure of Invention
The embodiment of the application provides a data processing method, a data processing device, a terminal and a non-volatile computer readable storage medium.
The data processing method comprises: obtaining the preset filtering radii of a plurality of filtering layers; determining a filtering area of the data to be processed according to the filtering radii of the plurality of filtering layers; and acquiring the image data of the filtering area.
The data processing device of the embodiment of the application comprises a first acquisition module, a first determination module, and a second acquisition module. The first acquisition module is used to obtain the preset filtering radii of the plurality of filtering layers; the first determination module is used to determine the filtering area of the data to be processed according to the filtering radii of the plurality of filtering layers; and the second acquisition module is used to acquire the image data of the filtering area.
The terminal of the embodiment of the application comprises a processor configured to: obtain the preset filtering radii of the plurality of filtering layers; determine the filtering area of the data to be processed according to the filtering radii of the plurality of filtering layers; and acquire the image data of the filtering area.
The non-transitory computer-readable storage medium of the present application contains a computer program that, when executed by one or more processors, causes the processors to perform the data processing method, namely: obtaining the preset filtering radii of a plurality of filtering layers; determining the filtering area of the data to be processed according to those radii; and acquiring the image data of the filtering area.
With the data processing method, the data processing device, the terminal, and the non-volatile computer-readable storage medium, obtaining the filtering radii of the multiple filtering layers makes it possible, when the data to be processed is to be filtered by those layers, to acquire all the required filtering areas at once. If the data of each layer's filtering area were acquired separately as each layer is filtered, the repeated parts shared by the filtering areas of different layers would make the data transmission volume and the time occupied by data transmission large, increasing hardware power consumption and lowering data processing efficiency. The present scheme therefore significantly reduces the transmission volume and transmission time, reduces hardware power consumption, and improves data processing efficiency.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart diagram of a data processing method according to some embodiments of the present application;
FIG. 2 is a block schematic diagram of a data processing apparatus according to some embodiments of the present application;
FIG. 3 is a schematic plan view of a terminal according to some embodiments of the present application;
FIGS. 4-6 are schematic illustrations of certain embodiments of the present application;
FIG. 7 is a schematic flow chart diagram of a data processing method according to some embodiments of the present application;
FIG. 8 is a schematic illustration of certain embodiments of the present application;
FIG. 9 is a schematic flow chart diagram of a data processing method according to some embodiments of the present application;
FIG. 10 is a schematic illustration of certain embodiments of the present application;
FIGS. 11 and 12 are schematic flow diagrams of data processing methods according to certain embodiments of the present application;
FIG. 13 is a schematic diagram of a connection between a processor and a computer readable storage medium according to some embodiments of the present application.
Detailed Description
Embodiments of the present application will be further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings identify the same or similar elements or elements having the same or similar functionality throughout. In addition, the embodiments of the present application described below in conjunction with the accompanying drawings are exemplary and are only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the present application.
Referring to fig. 1 to 3, a data processing method according to an embodiment of the present disclosure includes the following steps:
011: acquiring preset filtering radiuses of a plurality of filtering layers;
012: determining a filtering area of the data to be processed according to the filtering radiuses of the multiple filtering layers; and
013: image data of the filter region is acquired.
The data processing apparatus 10 of the present embodiment includes a first acquisition module 11, a first determination module 12, and a second acquisition module 13. The first obtaining module 11, the first determining module 12 and the second obtaining module 13 are configured to perform step 011, step 012 and step 013, respectively. That is, the first obtaining module 11 is configured to obtain preset filter radii of multiple filter layers, and the first determining module 12 is configured to determine a filter area of data to be processed according to the filter radii of the multiple filter layers; the second obtaining module 13 is configured to obtain image data of the filtering area.
The terminal 100 of the present embodiment includes a processor 30. The processor 30 is configured to obtain filtering radii of a plurality of preset filtering layers; determining a filtering area of the data to be processed according to the filtering radiuses of the multiple filtering layers; and acquiring image data of the filtering area. That is, step 011, step 012, and step 013 can be implemented by processor 30.
Specifically, the terminal 100 further includes a housing 40. The terminal 100 may be a mobile phone, a tablet computer, a display device, a notebook computer, a teller machine, a gate, a smart watch, a head-up display device, a game console, etc. As shown in fig. 3, the embodiments of the present application are described taking a mobile phone as an example of the terminal 100; it is understood that the specific form of the terminal 100 is not limited to a mobile phone. The housing 40 may also be used to mount functional modules of the terminal 100, such as a display device, an imaging device, a power supply device, and a communication device, so that the housing 40 protects these functional modules against dust, falls, water, and the like.
The data to be processed may be an image captured by the camera 20 of the terminal 100, or may be an image downloaded from the internet, without limitation. The data to be processed may also be part of the image taken by the camera 20; alternatively, the data to be processed may be multi-frame attitude data, acceleration data, or the like acquired by the terminal 100. The embodiment of the present application takes the data to be processed as the image to be processed as an example for explanation.
When filtering an image to be processed, the filtering is generally performed based on a preset filtering algorithm. The preset filtering algorithm determines the preset size of the image to be processed, the preset number of filtering layers and the preset filtering radius of each filtering layer.
During filtering, if the size of the captured image is larger than the preset size, the captured image may be divided into a plurality of to-be-processed images according to the preset size, so that the processor 30 may directly obtain the position information of each to-be-processed image, such as the vertex coordinates of the to-be-processed image (taking the to-be-processed image as a rectangle). Meanwhile, the processor 30 can also directly obtain the filtering radius of each filtering layer.
Then, the processor 30 may determine the filtering area of each filtering layer according to the position information of the image to be processed and the filtering radius of each filtering layer. For example, the filtering radius is the distance by which the filtering area extends beyond the edge of the image to be processed, and the position information includes the vertex coordinates of the image to be processed (taking the image to be processed as a rectangle), from which the width and height of the image can be determined quickly. The processor 30 may then determine the vertex coordinates of the filtering area from the vertex coordinates of the image to be processed and the filtering radius.
In one example, referring to fig. 4, an image coordinate system is established with the upper left corner of the to-be-processed image A1 as the origin. The width of A1 is 8 pixels (in the W direction) and its height is 8 pixels (in the H direction), so the vertex coordinates of A1 are (0,0), (8,0), (0,8) and (8,8). If the filtering radius is 2 pixels, two rows of pixels are added beyond the upper and lower edges of A1 and two columns beyond the left and right edges, and the vertex coordinates of the filtering region S1 are (-2,-2), (10,-2), (-2,10) and (10,10). In this way, the filtering region S1 of each filtering layer can be determined quickly from the position information of A1 and the filtering radius of that layer.
In other embodiments, the filtering radii include a first filtering radius (e.g., corresponding to the top of the image), a second filtering radius (e.g., corresponding to the right of the image), a third filtering radius (e.g., corresponding to the bottom of the image), and a fourth filtering radius (e.g., corresponding to the left of the image), where the first filtering radius, the second filtering radius, the third filtering radius, and the fourth filtering radius are the same (as in the example shown in fig. 4); or the first filtering radius is the same as the third filtering radius, and the second filtering radius is the same as the fourth filtering radius; or the first filtering radius, the second filtering radius, the third filtering radius and the fourth filtering radius are different from each other.
In another example, referring to fig. 5, the image coordinate system is established with the upper left corner of the to-be-processed image A1 as the origin, and the vertex coordinates of A1 are (0,0), (8,0), (0,8) and (8,8). If the first and third filtering radii are both 2 pixels and the second and fourth filtering radii are both 1 pixel, two rows of pixels are added beyond the upper and lower edges of A1 and one column beyond the left and right edges, and the vertex coordinates of the filtering region S1 are (-1,-2), (9,-2), (-1,10) and (9,10). In this way, the filtering region S1 of each filtering layer can be determined quickly from the position information of A1 and the filtering radii of that layer.
In still another example, referring to fig. 6, the image coordinate system is established with the upper left corner of the to-be-processed image A1 as the origin, and the vertex coordinates of A1 are (0,0), (8,0), (0,8) and (8,8). If the first, second, third, and fourth filtering radii are 4, 3, 2, and 1 pixels respectively, 4 rows and 2 rows of pixels are added beyond the upper and lower edges of A1 respectively, and 1 column and 3 columns are added beyond the left and right edges, so the vertex coordinates of the filtering region S1 are (-1,-4), (11,-4), (-1,10) and (11,10). In this way, the filtering region S1 of each filtering layer can be determined quickly from the position information of A1 and the filtering radii of that layer.
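The three coordinate examples above follow one rule: each edge of the image area is pushed outward by the corresponding filtering radius. A minimal Python sketch of that rule (the function and parameter names are illustrative, not from the patent):

```python
def filter_region(x0, y0, w, h, r_top, r_right, r_bottom, r_left):
    """Expand an image area (top-left corner at (x0, y0), width w, height h)
    by per-edge filtering radii; return the four vertex coordinates of the
    filtering region in the image coordinate system."""
    left, top = x0 - r_left, y0 - r_top
    right, bottom = x0 + w + r_right, y0 + h + r_bottom
    return [(left, top), (right, top), (left, bottom), (right, bottom)]

# Example from fig. 6: 8x8 image at the origin, radii top=4, right=3, bottom=2, left=1
print(filter_region(0, 0, 8, 8, 4, 3, 2, 1))
# -> [(-1, -4), (11, -4), (-1, 10), (11, 10)]
```

With equal radii on all four edges this reduces to the fig. 4 case: `filter_region(0, 0, 8, 8, 2, 2, 2, 2)` gives the vertices (-2,-2), (10,-2), (-2,10), (10,10).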
It will be appreciated that, when filtering is performed, two adjacent filtering layers generally depend on each other: the output image of the upper filtering layer serves as the input image of the lower filtering layer, and the input image of the lower filtering layer is determined by the output image of the lower filtering layer together with the filtering radius of the lower filtering layer.
Therefore, the filtering area of the last filtering layer can be determined from the output image of the last filtering layer and its filtering radius; the image data of that area is the input image of the last layer, which is also the output image of the penultimate layer. Then, from the output image of the penultimate filtering layer and the filtering radius of the penultimate layer, the filtering area of the penultimate layer is determined; the image data of that area is the input image of the penultimate layer, which is also the output image of the antepenultimate layer. Proceeding in this way from the last layer back to the first, the filtering area of the first filtering layer can be determined. This area is the filtering area of the image to be processed, and its image data is the input image of the first filtering layer.
It is understood that the number of filtering layers may be 1, 2, 3, 4, 5, and so on. The description above used 3 filtering layers as an example. If there are two filtering layers, it suffices to determine the filtering area of the penultimate layer as described above, and that area is the filtering area of the image to be processed. If there is 1 filtering layer, the filtering area of the first layer is determined directly from the output image of that layer and its filtering radius, and that area is the filtering area of the image to be processed.
Of course, the filtering radii of any 2, 3, or 4 successively adjacent layers among the multiple filtering layers may also be obtained, so as to obtain the filtering area jointly required by those adjacent layers. The data in that area then needs to be read only once before being used by all of those adjacent layers, which reduces the amount of data read and written during filtering.
For example, with 4 filtering layers, after the filtering of layer 1 is completed, the filtering area corresponding to the input image of layer 2 can be calculated from the filtering radii of layers 2 to 4; the calculation follows the implementation described above for the filtering area of layer 1 and is not repeated here. In this way, the successively adjacent layers 2 to 4 need to read the image data only once, reducing the amount of data read and written during filtering.
In addition, the image size of the output of each filter layer may be fixed, e.g., all the same as the size of the image to be processed. For example, if the captured image is 100 × 100 and the size of the image to be processed is 50 × 50, the captured image is divided into 4 images to be processed. After the filtering process is performed on the image to be processed, the size of the finally output filtered image is also 50 × 50.
Finally, the processor 30 directly obtains the image data in the filtering area of the image to be processed, thereby obtaining all the image data required for the subsequent filtering of the image to be processed. Where image data exists in the filtering area (the part where the filtering area overlaps the captured image), it may be acquired directly; the part of the filtering area where no image data exists (the part that does not overlap the captured image) needs to be filled from the existing image data, so that image data is available for every pixel in the filtering area.
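The fill step above is not specified further in the text; border replication is one common choice for padding a filtering area beyond the captured image. A small sketch of that assumed fill rule using NumPy's `np.pad` with edge replication:

```python
import numpy as np

# Toy 4x4 captured image; assume the filtering area extends 1 pixel
# beyond it on every side.
captured = np.arange(16, dtype=np.float32).reshape(4, 4)

# mode="edge" replicates the border pixels into the part of the filtering
# area that does not overlap the captured image; the overlapping part is
# copied through unchanged.
region_data = np.pad(captured, pad_width=1, mode="edge")
print(region_data.shape)  # (6, 6)
```

Other fill rules (mirror reflection, constant zero) fit the same pattern by changing the `mode` argument; the patent only requires that every pixel of the filtering area end up with data.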
With the data processing method, the data processing device 10, and the terminal 100 of the embodiments of the application, the filtering radii of the multiple filtering layers applied to the data to be processed are obtained, and from them all the filtering areas required for filtering the data with those layers are determined and acquired at once. If the data of each layer's filtering area were acquired separately as each layer is filtered, the repeated parts shared by the filtering areas of different layers would make the data transmission volume and the time occupied by data transmission large, increasing hardware power consumption and lowering data processing efficiency; the present scheme significantly reduces the transmission volume and transmission time, reduces hardware power consumption, and improves data processing efficiency.
Referring to fig. 2, fig. 3 and fig. 7, in some embodiments the filtering layers comprise M layers, where M is a positive integer, and the filtering area of the Mth layer is determined according to the filtering radius of the Mth layer and the image area where the image to be processed is located. Step 012 further includes:
0121: determining the filtering area of the (N-1)th layer according to the filtering radius of the (N-1)th layer and the filtering area of the Nth layer, where N is a positive integer less than or equal to M; and
0122: after the filtering area of the (N-1)th layer is determined, decreasing the value of N by 1 and performing step 0121 again, until the filtering area of the 1st layer is determined and used as the filtering area of the image to be processed.
In certain embodiments, the first determining module 12 is further configured to perform steps 0121 and 0122. That is, the first determining module 12 is further configured to determine the filtering area of the (N-1)th layer according to the filtering radius of the (N-1)th layer and the filtering area of the Nth layer, where N is a positive integer less than or equal to M; and, after the filtering area of the (N-1)th layer is determined, to decrease the value of N by 1 and perform the determining step again, until the filtering area of the 1st layer is determined and used as the filtering area of the image to be processed.
In some embodiments, the processor 30 is further configured to determine the filtering area of the (N-1)th layer according to the filtering radius of the (N-1)th layer and the filtering area of the Nth layer, where N is a positive integer less than or equal to M; and, after the filtering area of the (N-1)th layer is determined, to decrease the value of N by 1 and perform the determining step again, until the filtering area of the 1st layer is determined and used as the filtering area of the image to be processed. That is, steps 0121 and 0122 may be implemented by the processor 30.
Specifically, the filtering layers comprise M layers, where M may be 1, 2, 3, 4, 5, and so on. The Mth layer is the last layer to perform filtering; after the image data of the filtering area of the image to be processed has been processed by the M filtering layers in turn, filtering is complete and the filtered image is generated.
In determining the filtering area of the image to be processed, the processor 30 may first determine the filtering area of the mth layer according to the filtering radius of the mth layer and the image area where the image to be processed is located.
The size of the filtered image may be the same as that of the image to be processed: after the filtering area of the Mth layer is filtered, only the image data in the image area where the image to be processed is located is retained to generate the filtered image. Therefore, the filtering area of the Mth layer can be determined from the image area where the image to be processed is located and the filtering radius of the Mth layer. For example, if the filtering radius is 1 pixel, one row of pixels is added beyond the upper and lower edges of that image area and one column beyond the left and right edges, giving the filtering area of the Mth layer. It can be understood that the image data in the image area where the image to be processed is located may change after each filtering layer performs its filtering.
Then, the processor 30 determines the filtering area of the (N-1)th layer according to the filtering radius of the (N-1)th layer and the filtering area of the Nth layer. Referring to fig. 8, take M as 4, with L1 to L4 denoting layers 1 to 4. Initially the value of N equals M, i.e. the processor 30 determines the filtering area of layer 3 from the filtering radius of layer 3 and the filtering area of layer 4. After the filtering area of layer 3 is determined, N is decreased by 1 to 3, and the processor 30 determines the filtering area of layer 2 from the filtering radius of layer 2 and the filtering area of layer 3. After the filtering area of layer 2 is determined, N is decreased by 1 to 2, and the processor 30 determines the filtering area of layer 1 from the filtering radius of layer 1 and the filtering area of layer 2. Once the filtering area of layer 1 is determined, it can be used as the filtering area of the image to be processed.
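Steps 0121 and 0122 amount to a backward loop from layer M down to layer 1, expanding the area by each layer's radius in turn. A Python sketch with one uniform radius per layer (the function name and the sample radii are illustrative, not from the patent):

```python
def regions_per_layer(image_region, radii):
    """Propagate the filtering area from the last layer (M) back to layer 1.
    image_region: (left, top, right, bottom) of the image to be processed.
    radii: radii[k] is the filtering radius of layer k+1.
    Returns regions with regions[k] = filtering area of layer k+1;
    regions[0] is the filtering area of the image to be processed."""
    M = len(radii)
    regions = [None] * M
    # Layer M: expand the image area by the radius of layer M.
    l, t, r, b = image_region
    rad = radii[M - 1]
    regions[M - 1] = (l - rad, t - rad, r + rad, b + rad)
    # Steps 0121/0122: area of layer N-1 = area of layer N
    # expanded by the radius of layer N-1, for N = M, M-1, ..., 2.
    for n in range(M - 1, 0, -1):
        l, t, r, b = regions[n]
        rad = radii[n - 1]
        regions[n - 1] = (l - rad, t - rad, r + rad, b + rad)
    return regions

# 8x8 image at the origin, three layers with radii 1, 2, 1
print(regions_per_layer((0, 0, 8, 8), [1, 2, 1]))
# -> [(-4, -4, 12, 12), (-3, -3, 11, 11), (-1, -1, 9, 9)]
```

Reading the data of `regions[0]` once then covers the input of every layer, which is the transmission saving the embodiment describes.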
Referring to fig. 2, fig. 3 and fig. 9 again, in some embodiments each filtering layer includes one or more filtering operators, and the filtering area of the filtering operator of the Mth layer is determined according to the filtering radius of that operator and the image area where the image to be processed is located. Step 0121 further includes:
01211: determining the filtering area of each filtering operator of the (N-1)th layer according to the filtering radius of that operator and the filtering area of the associated filtering operator of the Nth layer.
In certain embodiments, the first determining module 12 is further configured to perform step 01211, i.e. to determine the filtering area of each filtering operator of the (N-1)th layer according to the filtering radius of that operator and the filtering area of the associated filtering operator of the Nth layer.
In some embodiments, processor 30 is further configured to determine the filter region for each filter operator of layer N-1 based on the filter radius of the filter operator of layer N-1 and the filter region of the filter operator of layer N. That is, step 01211 can be implemented by processor 30.
In particular, each filter layer may include one or more filter operators, each of which may individually filter the output image of the previous filter layer to generate an intermediate filtered image. Referring to fig. 8, there are 4 filter layers: layer 1 (L1) includes 1 filter operator (OP1), layer 2 (L2) includes 3 filter operators (OP2, OP3, and OP4), layer 3 (L3) includes 2 filter operators (OP5 and OP6), and layer 4 (L4) includes 1 filter operator (OP7).
The first-layer filter operator OP1 filters the image data of the filter region of the image to be processed (image P0 in fig. 8) and outputs a first intermediate filtered image P1. The first intermediate filtered image P1 is input to OP2, OP3, and OP4 of the second layer, each of which processes P1 and outputs a second intermediate filtered image P2, giving 3 second intermediate filtered images. These are input to the third layer: OP5 processes the P2 images output by OP2 and OP3, and OP6 processes the P2 image output by OP4, yielding 2 third intermediate filtered images P3. Both P3 images are input to OP7 at the fourth layer, which processes them and outputs a fourth intermediate filtered image P4, which may serve as the final filtered image.
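The data flow just described can be sketched as a small operator graph. The dictionary below mirrors fig. 8; `apply_op` stands in for whatever per-operator filtering or fusion the implementation provides (an assumption for illustration, not the patent's API):

```python
# Producer graph of fig. 8: each operator lists what feeds it ("P0" = source image).
INPUTS = {
    "OP1": ["P0"],                                    # layer 1
    "OP2": ["OP1"], "OP3": ["OP1"], "OP4": ["OP1"],   # layer 2
    "OP5": ["OP2", "OP3"], "OP6": ["OP4"],            # layer 3
    "OP7": ["OP5", "OP6"],                            # layer 4 (fuses two inputs)
}
ORDER = ["OP1", "OP2", "OP3", "OP4", "OP5", "OP6", "OP7"]

def run_pipeline(image, apply_op):
    """apply_op(name, imgs) filters/fuses its input images into one output."""
    produced = {"P0": image}
    for op in ORDER:
        produced[op] = apply_op(op, [produced[src] for src in INPUTS[op]])
    return produced["OP7"]  # the fourth intermediate image, i.e. the result
```

For instance, with a toy `apply_op` that just sums its inputs, `run_pipeline(1, lambda op, imgs: sum(imgs))` returns 3, reflecting the two branches fused at OP7.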
According to fig. 8, associations exist between filter operators: OP1 and OP2, OP1 and OP3, OP1 and OP4, OP2 and OP5, OP3 and OP5, OP4 and OP6, OP5 and OP7, and OP6 and OP7. For associated filter operators, the output of the previous-layer filter operator serves as the input of the next-layer filter operator.
When calculating the filtering area of each filtering operator of the (N-1)th layer, the filtering area of the filtering operator of the Mth layer (i.e., the filtering area of OP7) is first determined according to the filtering radius of the filtering operator of the Mth layer and the image area where the image to be processed is located. Then, after determining the Nth-layer filter operator associated with each filter operator of the (N-1)th layer, the filter area of each filter operator of the (N-1)th layer can be determined according to the filter radius of that (N-1)th-layer filter operator and the filter area of the associated Nth-layer filter operator.
For example, when N initially equals M = 4, the filtering area of OP5 is determined according to the filtering area of OP7 and the filtering radius of OP5, and the filtering area of OP6 is determined according to the filtering area of OP7 and the filtering radius of OP6. In this way, the filtering region of each filtering operator in each filtering layer is calculated layer by layer until the filtering region of the first-layer filtering operator OP1 is obtained, which is the filtering region of the image to be processed.
It will be appreciated that one filter operator at layer N-1 may be associated with multiple filter operators at layer N. The first filter operator of the (N-1)th layer is associated with the second filter operator of the Nth layer, meaning that the output data of the (N-1)th-layer filter operator is input data of the Nth-layer filter operator, where the first filter operator is any one of the filter operators of the (N-1)th layer and the second filter operator is any one of the filter operators of the Nth layer.
For example, as shown in fig. 8, a first filter operator (i.e., OP1) of the N-1 th layer (taking layer 1 as an example) is associated with a plurality of second filter operators (i.e., OP2, OP3, and OP4) of the nth layer (i.e., layer 2). The first intermediate filtered image P1 output by the OP1 is the input image of the OP2, OP3 and OP4, and in order to ensure that the first intermediate filtered image P1 contains all the image data required by the OP2, OP3 and OP4, after determining the filter regions of the OP2, OP3 and OP4, the intermediate filter region may be determined according to the filter region of each second filter operator of the nth layer (i.e., the filter regions of the OP2, OP3 and OP 4).
As shown in fig. 10, the union of the filter regions of all the filter operators of the current filter layer (e.g., the Nth layer) may be taken as the intermediate filter region S2; for example, the union of the filter regions of OP2, OP3, and OP4 may be taken as the intermediate filter region S2. Since filter regions are generally rectangular, the rectangular region bounding the union of the filter regions of OP2, OP3, and OP4 may be taken as the intermediate filter region S2.
Finally, the filtering area of the first filtering operator is determined according to the intermediate filtering area and the filtering radius of the first filtering operator, so that the image data output after filtering by the first filtering operator contains all the image data required by OP2, OP3, and OP4. In this way, the image data needed by all the filter operators of the current filter layer can be read in a single pass.
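A minimal sketch of these two steps, again assuming rectangles as (x0, y0, x1, y1) tuples and hypothetical helper names:

```python
def intermediate_region(second_regions):
    """Bounding rectangle of the union of the layer-N operator regions (S2)."""
    xs0, ys0, xs1, ys1 = zip(*second_regions)
    return (min(xs0), min(ys0), max(xs1), max(ys1))

def first_operator_region(second_regions, first_radius):
    """Expand the intermediate region by the first operator's filter radius."""
    x0, y0, x1, y1 = intermediate_region(second_regions)
    return (x0 - first_radius, y0 - first_radius,
            x1 + first_radius, y1 + first_radius)
```

The bounding rectangle is used rather than the exact union because filter regions, and DRAM reads, are rectangular.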
Alternatively, the union of the filter regions of the multiple filter operators (at least two) of the current filter layer may be taken as the intermediate filter region S2, such as the union of the filter regions of OP2 and OP3, the union of the filter regions of OP2 and OP4, or the union of the filter regions of OP3 and OP4 may be taken as the intermediate filter region S2.
Then, the filtering region of the first filtering operator is determined according to the intermediate filtering region and the filtering radius of the first filtering operator. As a result, the image data output after filtering by the first filtering operator includes all the image data required by the plurality of filtering operators, which can then be obtained in a single read.
When the (N-1)th layer (taking the 3rd layer as an example) has multiple first filter operators (i.e., OP5 and OP6) associated with OP7 of the Nth layer (i.e., the 4th layer), the two third intermediate filtered images P3 generated by OP5 and OP6 may both be input to OP7. OP7 may filter the two P3 images to obtain two fourth intermediate filtered images P4, then fuse them, for example by weighted fusion of pixels at the same position in the two P4 images, and finally output one fused fourth intermediate filtered image P4.
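The fusion step might look like the following sketch, with equal weights assumed (the patent does not fix the weights) and images represented as nested lists:

```python
def fuse(img_a, img_b, w_a=0.5, w_b=0.5):
    """Weighted fusion of same-position pixels of two equally sized images."""
    return [[w_a * a + w_b * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]
```

Any weighting that sums to 1 preserves the dynamic range of the inputs; unequal weights would favor one branch of the operator graph over the other.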
Referring to fig. 2, 3 and 11, in some embodiments, the data processing method further includes the following steps:
014: determining the coordinate offset of each second filter operator according to the vertex coordinates of the filter region of each second filter operator and the vertex coordinates of the intermediate filter region;
015: when the Nth layer is subjected to filtering processing, determining the vertex coordinates of the filtering area of each second filtering operator according to the coordinate offset of each second filtering operator and the vertex coordinates of the output image of the (N-1)th layer;
016: from the output image, image data corresponding to the filter region of each second filter operator is acquired and processed.
In certain embodiments, the data processing apparatus 10 further comprises a second determination module 14, a third determination module 15, and a first processing module 16. The second determining module 14, the third determining module 15 and the first processing module 16 are configured to perform step 014, step 015 and step 016, respectively. That is, the second determining module 14 is configured to determine the coordinate offset of each second filter operator according to the vertex coordinates of the filter region of each second filter operator and the vertex coordinates of the intermediate filter region; the third determining module 15 is configured to determine, when performing filtering processing on the Nth layer, the vertex coordinates of the filtering region of each second filtering operator according to the coordinate offset of each second filtering operator and the vertex coordinates of the output image of the (N-1)th layer; the first processing module 16 is configured to obtain and process image data corresponding to the filtering region of each second filtering operator from the output image.
In some embodiments, processor 30 is further configured to determine a coordinate offset for each second filter operator based on the vertex coordinates of the filter region of each second filter operator and the vertex coordinates of the intermediate filter region; when the Nth layer is subjected to filtering processing, determine the vertex coordinates of the filtering area of each second filtering operator according to the coordinate offset of each second filtering operator and the vertex coordinates of the output image of the (N-1)th layer; and acquire and process, from the output image, image data corresponding to the filter region of each second filter operator. That is, step 014, step 015 and step 016 may be implemented by the processor 30.
Specifically, one first filter operator may be associated with a plurality of second filter operators, and the filter region of the first filter operator is determined according to the filter regions of the plurality of second filter operators (more precisely, their union, i.e., the intermediate filter region) and the filter radius of the first filter operator.
In order to ensure that the second filter operator can accurately acquire the required image data from the first intermediate filtered image P1 (corresponding to the intermediate filtering region) output by the first filter operator, it is necessary to determine the position of the filtering region of the second filter operator in the intermediate filtering region.
Thus, processor 30 may determine the coordinate offset of each second filter operator from the vertex coordinates of the filter region of each second filter operator and the vertex coordinates of the intermediate filter region. In this way, when the first filtering operator (e.g., the first filtering operator of layer 1) outputs the first intermediate filtered image P1 (i.e., the intermediate filtering region), each second filtering operator can acquire the image data it needs for filtering from the first intermediate filtered image P1.
For example, when filtering is performed in the nth layer (e.g., layer 2), the second filter operator may calculate the vertex coordinates of the filter region of the second filter operator according to the corresponding coordinate offset and the vertex coordinates of the first intermediate filtered image P1, so as to obtain image data corresponding to the filter region of the second filter operator in the first intermediate filtered image P1, and perform filtering processing, thereby ensuring the accuracy of data acquisition and processing.
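As a sketch, the offset bookkeeping reduces to two subtractions at planning time and two additions at filtering time; representing top-left vertices as (x, y) pairs is an assumption for illustration:

```python
def coordinate_offset(op_vertex, intermediate_vertex):
    """Offset of an operator's region vertex relative to the intermediate
    region, computed once when the filter regions are planned."""
    return (op_vertex[0] - intermediate_vertex[0],
            op_vertex[1] - intermediate_vertex[1])

def runtime_region_vertex(offset, output_image_vertex):
    """At layer-N filtering time, recover the operator's region vertex from
    the vertex of the (N-1)th layer's output image."""
    return (output_image_vertex[0] + offset[0],
            output_image_vertex[1] + offset[1])
```

Because the offset is fixed at planning time, the operator's region can be located wherever the output image lands in memory, without re-deriving the region itself.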
Referring to fig. 2, 3 and 12, in some embodiments, the filtering radius includes a fifth filtering radius and a sixth filtering radius, and the data processing method further includes the following steps:
017: determining a first difference value of a fifth filtering radius of the filtering region of each second filtering operator and a fifth filtering radius of the intermediate filtering region, and a second difference value of a sixth filtering radius of the filtering region of each second filtering operator and a sixth filtering radius of the intermediate filtering region;
018: when the Nth layer is subjected to filtering processing, determining the height of the filtering region of each second filtering operator according to the first difference of each second filtering operator and the height of the output image of the (N-1)th layer, and determining the width of the filtering region of each second filtering operator according to the second difference of each second filtering operator and the width of the output image of the (N-1)th layer;
019: determining a filtering area of each second filtering operator according to the width and the height of each second filtering operator; and
020: from the output image, image data corresponding to the filter region of each second filter operator is acquired and processed.
In certain embodiments, the data processing apparatus 10 further comprises a fourth determination module 17, a fifth determination module 18, a sixth determination module 19, and a second processing module 20. The fourth determining module 17, the fifth determining module 18, the sixth determining module 19 and the second processing module 20 are configured to perform step 017, step 018, step 019 and step 020, respectively. That is, the fourth determining module 17 is configured to determine a first difference between the fifth filter radius of the filter region of each second filter operator and the fifth filter radius of the intermediate filter region, and a second difference between the sixth filter radius of the filter region of each second filter operator and the sixth filter radius of the intermediate filter region; the fifth determining module 18 is configured to determine, when performing filtering processing on the Nth layer, the height of the filter region of each second filter operator according to the first difference of each second filter operator and the height of the output image of the (N-1)th layer, and the width of the filter region of each second filter operator according to the second difference of each second filter operator and the width of the output image of the (N-1)th layer; the sixth determining module 19 is configured to determine the filter region of each second filter operator according to the width and the height of each second filter operator; the second processing module 20 is configured to obtain and process image data corresponding to the filter region of each second filter operator from the output image.
In some embodiments, the processor 30 is further configured to determine a first difference between the fifth filter radius of the filter region of each second filter operator and the fifth filter radius of the intermediate filter region, and a second difference between the sixth filter radius of the filter region of each second filter operator and the sixth filter radius of the intermediate filter region; when the Nth layer is subjected to filtering processing, determine the height of the filter region of each second filter operator according to the first difference of each second filter operator and the height of the output image of the (N-1)th layer, and the width of the filter region of each second filter operator according to the second difference of each second filter operator and the width of the output image of the (N-1)th layer; determine the filter region of each second filter operator according to the width and the height of each second filter operator; and obtain and process image data corresponding to the filter region of each second filter operator from the output image. That is, step 017, step 018, step 019 and step 020 may be implemented by the processor 30.
Specifically, the filter radius of a filter operator may include only the fifth filter radius and the sixth filter radius; that is, the filter radii of the upper and lower sides of the filter region are the same (both the fifth filter radius), and the filter radii of the left and right sides are the same (both the sixth filter radius).
For example, when filtering is performed at the Nth layer (e.g., layer 2), the first difference and the second difference of each second filter operator are subtracted, once per side, from the height and the width of the first intermediate filtered image P1 to obtain the height and the width of the filter region of that operator. Processor 30 then determines the filtering area of each second filtering operator from this height and width. For instance, if the first intermediate filtered image P1 is 106*106 and the first difference and the second difference are 2 and 1, respectively, then the height of the filter region of the second filter operator is 106-2*2=102 and its width is 106-2*1=104, i.e., the region is 104*102.
Therefore, when the width and the height of the second filter operator are determined, the filter region of each second filter operator can be quickly determined. Finally, the processor 30 obtains the image data corresponding to the filtering region of the second filtering operator from the first intermediate filtered image P1, and performs filtering processing to ensure the accuracy of data obtaining and processing.
In another embodiment, the filter radii of the four sides of the filter region are the same, so only a single difference, between the filter radius of the filter region of each second filter operator and the filter radius of the intermediate filter region, needs to be calculated. This allows each filter operator, during subsequent filtering, to quickly obtain all the image data it requires from the output image of the previous layer.
In another embodiment, the filter radii of the four sides of the filter region are different, that is, the filter region includes the aforementioned first to fourth filter radii, and when calculating the difference value of the filter radii, four difference values of the first to fourth filter radii of each second filter operator and the first to fourth filter radii of the intermediate filter region need to be calculated, so that when performing subsequent filter processing, each filter operator can quickly obtain all image data required for filtering from the output image output from the previous layer.
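Under the reading used here, namely that the fifth radius acts on the top and bottom sides (height) and the sixth radius on the left and right sides (width), with each difference subtracted once per side, the size computation is a sketch like:

```python
def operator_region_size(out_w, out_h, first_diff, second_diff):
    """Width/height of a second operator's filter region, from the size of
    the (N-1)th layer's output image.

    first_diff:  fifth-radius difference vs. the intermediate region (top/bottom)
    second_diff: sixth-radius difference vs. the intermediate region (left/right)
    """
    height = out_h - 2 * first_diff    # shrink top and bottom
    width = out_w - 2 * second_diff    # shrink left and right
    return width, height
```

With the 106*106 image and differences 2 and 1 from the example, this yields a 104*102 region.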
In one example, referring again to fig. 8, when data is processed by the above data processing method, the input and output data of each filter operator are neither down-sampled nor up-sampled; the filtering radius of each filter operator is 5 pixels, and the output of each filter operator is planned as [w, h], where w = 128 and h = 64.
An existing filtering algorithm reads data from and writes data to the memory 50 (e.g., dynamic random access memory, DRAM) of the terminal 100 separately at each filtering layer; the resulting read and write amounts of the memory 50 for the first to fourth layers are shown in Table 1:
TABLE 1
| Layer | DRAM read | DRAM write |
| --- | --- | --- |
| L1 | (128+2*5)*(64+2*5) = 10212 | 128*64 = 8192 |
| L2 | 10212 | 128*64*3 = 8192*3 |
| L3 | 10212*3 | 128*64*2 = 8192*2 |
| L4 | 10212*2 | 128*64 = 8192 |
| Total | 71484 | 57344 |
When the data processing method of the present application is used to process data, the read/write amount of the memory 50 is as follows:
TABLE 2
| | DRAM read | DRAM write |
| --- | --- | --- |
| Total | (128+2*5*4)*(64+2*5*4) = 17472 | w*h = 8192 |
It can be seen that with this method, the data volume read from DRAM is reduced from 71484 to 17472 and the data volume written to DRAM is reduced from 57344 to 8192. The read/write volume required by the algorithm is thus greatly reduced, which can significantly improve both efficiency and power consumption.
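The totals in Tables 1 and 2 can be reproduced by the following arithmetic (operator read/write counts per layer taken from fig. 8 and Table 1):

```python
w, h, r, layers = 128, 64, 5, 4
padded_tile = (w + 2 * r) * (h + 2 * r)   # one padded read: 138*74 = 10212
out_tile = w * h                          # one written output: 8192

# Existing scheme: per-layer DRAM traffic, reads/writes per the rows of Table 1.
reads = padded_tile * (1 + 1 + 3 + 2)     # L1..L4 reads
writes = out_tile * (1 + 3 + 2 + 1)       # L1..L4 writes

# Proposed scheme: read the fully expanded region once, write the result once.
fused_read = (w + 2 * r * layers) * (h + 2 * r * layers)  # 168*104

print(reads, writes)         # 71484 57344
print(fused_read, out_tile)  # 17472 8192
```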
Referring to fig. 13, an embodiment of the present disclosure further provides a non-volatile computer-readable storage medium 300 storing a computer program 302. When the computer program 302 is executed by one or more processors 30, the processors 30 perform the data processing method of any of the above embodiments.
For example, referring to fig. 1, the computer program 302, when executed by the one or more processors 30, causes the processors 30 to perform the steps of:
011: acquiring preset filtering radiuses of a plurality of filtering layers;
012: determining a filtering area of the data to be processed according to the filtering radiuses of the multiple filtering layers; and
013: image data of the filter region is acquired.
For another example, referring to fig. 7, when the computer program 302 is executed by the one or more processors 30, the processors 30 may further perform the following steps:
0121: determining a filtering area of the N-1 layer according to the filtering radius of the N-1 layer and the filtering area of the N layer, wherein N is a positive integer less than or equal to M;
0122: after the filtering region of the (N-1)th layer is obtained, decreasing the value of N by 1 and performing step 0121 again, i.e., determining the filtering area of the (N-1)th layer according to the filtering radius of the (N-1)th layer and the filtering area of the Nth layer, until the filtering area of the 1st layer is determined and used as the filtering area of the image to be processed.
In the description herein, references to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example" or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Moreover, various embodiments or examples and features of various embodiments or examples described in this specification can be combined and combined by one skilled in the art without being mutually inconsistent.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more program modules for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Claims (13)
1. A method of data processing, comprising:
acquiring preset filtering radiuses of a plurality of filtering layers;
determining a filtering area of data to be processed according to the filtering radiuses of the plurality of filtering layers; and
and acquiring the image data of the filtering area.
2. The data processing method of claim 1, wherein the determining a filtering region of the data to be processed according to the filtering radii of a plurality of filtering layers comprises:
and determining the filtering area according to the filtering radiuses of the plurality of filtering layers and the position information of the data to be processed.
3. The data processing method according to claim 1, wherein the filter layer includes M layers, M is a positive integer, a filter region of the M layer is determined according to a filter radius of the M layer and an image region where the data to be processed is located, and the determining the filter region of the data to be processed according to the filter radius and the position information includes:
determining a filtering area of an N-1 layer according to the filtering radius of the N-1 layer and the filtering area of the Nth layer, wherein N is a positive integer less than or equal to M;
and after determining to obtain the filtering area of the (N-1) th layer, reducing the value of N by 1, and executing the step of determining the filtering area of the (N-1) th layer again according to the filtering radius of the (N-1) th layer and the filtering area of the (N) th layer until determining the filtering area of the (1) th layer as the filtering area of the data to be processed.
4. The data processing method of claim 3, wherein the filter radius comprises a first filter radius, a second filter radius, a third filter radius, and a fourth filter radius, and wherein the first filter radius, the second filter radius, the third filter radius, and the fourth filter radius are the same; or the first filtering radius is the same as the third filtering radius, and the second filtering radius is the same as the fourth filtering radius; or the first filtering radius, the second filtering radius, the third filtering radius and the fourth filtering radius are different from each other.
5. The data processing method according to claim 3, wherein the filter layer includes a filter operator, a filter region of the filter operator of an M-th layer is determined according to a filter radius of the filter operator of the M-th layer and an image region where the data to be processed is located, and the determining of the filter region of an N-1-th layer according to the filter radius of the N-1-th layer and the filter region of the N-th layer includes:
determining the filtering region of each filtering operator of the N-1 th layer according to the filtering radius of the filtering operator of the N-1 th layer and the filtering region of the filtering operator of the N-th layer.
6. The data processing method according to claim 5, wherein the first filter operator of the N-1 th layer is associated with the second filter operator of the N-th layer, and the association of the first filter operator of the N-1 th layer with the second filter operator of the N-th layer comprises the output data of the filter operator of the N-1 th layer being the input data of the filter operator of the N-th layer, the first filter operator being any one of the filter operators of the N-1 th layer, and the second filter operator being any one of the filter operators of the N-th layer;
said determining said filter region of each said filter operator of layer N-1 from said filter radius of said filter operator of layer N-1 and a filter region of said filter operator of layer N, comprising:
determining a middle filtering region according to the filtering region of each second filtering operator of the Nth layer; and
and determining the filtering area of the first filtering operator according to the intermediate filtering area and the filtering radius of the first filtering operator.
7. The data processing method according to claim 6, wherein the second filter operator associated with the first filter operator is plural, and the determining an intermediate filter region from the filter regions of each second filter operator of the Nth layer comprises:
and determining the intermediate filtering region according to a union set of the filtering regions of the plurality of second filtering operators.
8. The data processing method of claim 6, further comprising:
determining the coordinate offset of each second filter operator according to the vertex coordinates of the filter region of each second filter operator and the vertex coordinates of the middle filter region;
when the Nth layer is subjected to filtering processing, determining the vertex coordinates of the filtering area of each second filtering operator according to the coordinate offset of each second filtering operator and the vertex coordinates of the output image of the (N-1)th layer;
and acquiring and processing image data corresponding to the filtering region of each second filtering operator from the output image.
9. The data processing method of claim 6, wherein the filter radius comprises a fifth filter radius and a sixth filter radius; the data processing method further comprises:
determining a first difference of the fifth filter radius of the filter region of each of the second filter operators and the fifth filter radius of the intermediate filter region, and a second difference of the sixth filter radius of the filter region of each of the second filter operators and the sixth filter radius of the intermediate filter region;
when the Nth layer is subjected to filtering processing, determining the height of a filtering region of each second filtering operator according to the first difference value of each second filtering operator and the height of an output image of the (N-1)th layer, and determining the width of the filtering region of each second filtering operator according to the second difference value of each second filtering operator and the width of the output image of the (N-1)th layer;
determining a filtering area of each second filtering operator according to the width and the height of each second filtering operator; and
and acquiring and processing image data corresponding to the filtering region of each second filtering operator from the output image.
10. The data processing method of claim 1, further comprising:
and carrying out filtering processing on the image data of the filtering area to generate a filtering image.
11. A data processing apparatus, characterized by comprising:
the first obtaining module is used for obtaining the preset filtering radiuses of a plurality of filtering layers;
the first determining module is used for determining a filtering area of data to be processed according to the filtering radiuses of the plurality of filtering layers; and
and the second acquisition module is used for acquiring the image data of the filtering area.
12. A terminal, characterized by comprising a processor, wherein the processor is configured to: obtain preset filter radii of a plurality of filter layers; determine a filter region of data to be processed according to the filter radii of the plurality of filter layers; and acquire image data of the filter region.
13. A non-transitory computer-readable storage medium comprising a computer program which, when executed by a processor, causes the processor to perform the data processing method of any one of claims 1-10.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210126015.7A CN114529483A (en) | 2022-02-10 | 2022-02-10 | Data processing method, device, terminal and readable storage medium |
PCT/CN2022/139728 WO2023151386A1 (en) | 2022-02-10 | 2022-12-16 | Data processing method and apparatus, and terminal and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210126015.7A CN114529483A (en) | 2022-02-10 | 2022-02-10 | Data processing method, device, terminal and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114529483A true CN114529483A (en) | 2022-05-24 |
Family
ID=81623225
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210126015.7A Pending CN114529483A (en) | 2022-02-10 | 2022-02-10 | Data processing method, device, terminal and readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114529483A (en) |
WO (1) | WO2023151386A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023151386A1 (en) * | 2022-02-10 | 2023-08-17 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Data processing method and apparatus, and terminal and readable storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070147674A1 (en) * | 2005-12-06 | 2007-06-28 | Lutz Gundel | Method and system for computer aided detection of high contrast objects in tomographic pictures |
CN106412582A (en) * | 2016-10-21 | 2017-02-15 | Peking University Shenzhen Graduate School | Panoramic video region of interest description method and coding method |
CN112639867A (en) * | 2020-05-07 | 2021-04-09 | SZ DJI Technology Co., Ltd. | Image processing method and device |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6141459A (en) * | 1997-09-24 | 2000-10-31 | Sarnoff Corporation | Method and apparatus for processing image pyramid borders |
US8261034B1 (en) * | 2009-12-15 | 2012-09-04 | Ambarella, Inc. | Memory system for cascading region-based filters |
CN102170520B (en) * | 2011-04-29 | 2013-12-25 | Hangzhou Hikvision Digital Technology Co., Ltd. | Cascade filter and dynamic setting method for calibrated denoising intensity thereof |
JP2013178753A (en) * | 2012-02-01 | 2013-09-09 | Canon Inc | Image processing device and method |
CN112150353A (en) * | 2020-09-30 | 2020-12-29 | Guangzhou Huya Technology Co., Ltd. | Image processing method and device, electronic equipment and readable storage medium |
CN114529483A (en) * | 2022-02-10 | 2022-05-24 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Data processing method, device, terminal and readable storage medium |
- 2022-02-10 CN CN202210126015.7A patent/CN114529483A/en active Pending
- 2022-12-16 WO PCT/CN2022/139728 patent/WO2023151386A1/en unknown
Non-Patent Citations (2)
Title |
---|
HONG-DAR LIN, DUAN-CHENG HO: "Detection of pinhole defects on chips and wafers using DCT enhancement in computer vision systems", THE INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY, vol. 34, no. 5, 31 December 2007 (2007-12-31) * |
WANG Zhigang, WANG Wei, XU Xiaoming: "Multi-resolution fuzzy edge detection based on pyramid structure", Journal of Infrared and Millimeter Waves, no. 04, 25 August 2002 (2002-08-25) * |
Also Published As
Publication number | Publication date |
---|---|
WO2023151386A1 (en) | 2023-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108764039B (en) | Neural network, building extraction method of remote sensing image, medium and computing equipment | |
CN109996023B (en) | Image processing method and device | |
US20180137414A1 (en) | Convolution operation device and convolution operation method | |
US20090309896A1 (en) | Multi Instance Unified Shader Engine Filtering System With Level One and Level Two Cache | |
US20220092325A1 (en) | Image processing method and device, electronic apparatus and storage medium | |
CN106648510A (en) | Display method and device for display resolution | |
CN108346131A (en) | A kind of digital image scaling method, device and display equipment | |
US10290132B2 (en) | Graphics processing | |
CN115035128A (en) | Image overlapping sliding window segmentation method and system based on FPGA | |
CN114529483A (en) | Data processing method, device, terminal and readable storage medium | |
CN110689061B (en) | Image processing method, device and system based on alignment feature pyramid network | |
CN113592720B (en) | Image scaling processing method, device, equipment and storage medium | |
CN114419322B (en) | Image instance segmentation method and device, electronic equipment and storage medium | |
CN111882480A (en) | Method, device and system for processing block data and storage medium | |
US11212435B2 (en) | Semiconductor device for image distortion correction processing and image reduction processing | |
US20130236117A1 (en) | Apparatus and method for providing blurred image | |
CN114519661A (en) | Image processing method, device, terminal and readable storage medium | |
US6954207B2 (en) | Method and apparatus for processing pixels based on segments | |
EP1575298B1 (en) | Data storage apparatus, data storage control apparatus, data storage control method, and data storage control program | |
CN111477183A (en) | Reader refresh method, computing device, and computer storage medium | |
CN112070708A (en) | Image processing method, image processing apparatus, electronic device, and storage medium | |
CN113971738B (en) | Image detection method, device, electronic equipment and storage medium | |
CN107273072B (en) | Picture display method and device and electronic equipment | |
CN116152037A (en) | Image deconvolution method and apparatus, storage medium | |
CN114625997A (en) | Page rendering method and device, electronic equipment and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||