Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure. As shown in Fig. 1, the method comprises:
in step S11, acquiring local images of one or more scales based on an image to be processed;
in step S12, performing feature extraction processing on each of the one or more local images to obtain feature maps corresponding to the local images of the one or more scales;
in step S13, segmenting the image to be processed based on the feature maps corresponding to the local images of the one or more scales to obtain a segmentation result.
According to the image processing method of the embodiments of the present disclosure, cropping at one or more scales may be performed on the image to be processed and feature extraction processing may be performed on the resulting local images to obtain feature maps of one or more scales. In this way, fine local features and global features can be obtained simultaneously, that is, fine features of smaller targets as well as more complex global distribution features, which improves the accuracy of the segmentation processing.
In one possible implementation, the image processing method may be performed by an electronic device such as a terminal device or a server. The terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server.
In one possible implementation, the image to be processed may include a medical image, such as a computed tomography angiography (CTA) image, and the target region may include a region where a target such as a coronary vessel is located. Coronary vessels are small targets with a complex distribution; they connect with other vessels (such as pulmonary vessels) and are therefore easily interfered with by noise from those vessels. It should be understood that the image to be processed may also be another type of image, such as a portrait image or a street view image, and the target in the image to be processed may include facial features, pedestrians on a street, vehicles, etc. The present disclosure does not limit the type of the image to be processed or the type of the target.
In one possible implementation manner, in step S11, cropping at one or more scales may be performed on the image to be processed to obtain local images of the one or more scales. Cropping at any scale may be performed on the image to be processed to obtain a local image of that scale; for example, cropping at a smaller scale may be performed to obtain a local image representing detail features, or cropping at a larger scale may be performed to obtain a local image representing global features.
In a possible implementation manner, cropping at multiple scales may also be performed on the image to be processed to obtain local images of multiple scales, and step S11 may include: performing cropping at multiple scales on the image to be processed to obtain a plurality of local images, where the plurality of local images include a first local image of a reference size and a second local image with a size larger than the reference size, and the plurality of local images share the same center.
In one possible implementation manner, cropping at multiple scales can be performed on an image to be processed, so that a plurality of local images of different sizes are obtained, including a first local image of a reference size and a second local image with a size larger than the reference size. During cropping, the image centers may be kept consistent; for example, the cropping may be performed with the image center of the image to be processed as the center, so that the image center of each obtained local image coincides with that of the image to be processed. Alternatively, any pixel of the image to be processed may be taken as the image center of the local images, and the cropping performed around it, so that the image center of each obtained local image is that pixel. Among the obtained local images, the second local image includes more content than the first local image. The present disclosure does not limit the selection of the reference size or the shape of the local images.
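As a concrete illustration of this step, the following is a minimal sketch of concentric multi-scale cropping, assuming a 3D volume stored as a NumPy array, a hypothetical 64³ reference size, and illustrative scale factors of 1, 2, and 3; none of these values are fixed by the present disclosure, and boundary handling near the volume edges is omitted.

```python
import numpy as np

def crop_multiscale(volume: np.ndarray, center, ref_size=(64, 64, 64), scales=(1, 2, 3)):
    """Crop concentric patches of increasing size that share the same center."""
    crops = []
    for s in scales:
        size = tuple(d * s for d in ref_size)
        # Start index per axis so that the patch stays centered on `center`.
        start = [c - sz // 2 for c, sz in zip(center, size)]
        crops.append(volume[tuple(slice(st, st + sz) for st, sz in zip(start, size))])
    return crops  # crops[0] is the first local image x1; the rest are x2, x3, ...

# Example: a 256^3 volume with a 64^3 reference patch centered at (128, 128, 128).
volume = np.random.rand(256, 256, 256).astype(np.float32)
x1, x2, x3 = crop_multiscale(volume, (128, 128, 128))
print(x1.shape, x2.shape, x3.shape)  # (64, 64, 64) (128, 128, 128) (192, 192, 192)
```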
Fig. 2A, 2B, 2C, and 2D illustrate schematic diagrams of cropping an image to be processed according to an embodiment of the present disclosure. As shown in Fig. 2A, cropping at multiple scales may be performed on the image to be processed: cropping with the reference size as the crop size yields a first local image x1 of the reference size (as shown in Fig. 2B), and cropping with crop sizes larger than the reference size yields a second local image x2 (as shown in Fig. 2C) and a second local image x3 (as shown in Fig. 2D), both larger than the reference size. The image centers of the first local image x1 and the second local images x2 and x3 are the same; the second local image x2 contains more content than the first local image x1, and the second local image x3 contains more content than the second local image x2. The first local image x1 of the reference size contains finer local detail features (e.g., detail features of the coronary vessels themselves), while the second local images x2 and x3 contain more global distribution features (e.g., the distribution of coronary vessels and their connections with other vessels), such as associations between the target in the first local image x1 and other regions in the second local image x2 or x3 (e.g., associations between coronary vessels in the first local image x1 and vessels in other regions of the second local image x2 or x3).
In a possible implementation manner, in step S12, if a local image of one scale is obtained by cropping, feature extraction processing may be performed on that local image to obtain a feature map. If local images of multiple scales are obtained, feature extraction processing may be performed on each of them to obtain a feature map of each local image. Step S12 may include: performing feature extraction processing on each of the plurality of local images to obtain a first feature map corresponding to the first local image and a second feature map corresponding to the second local image.
In an example, each local image may be subjected to feature extraction processing by a feature extraction network, which may be a deep learning neural network such as a convolutional neural network, and the present disclosure does not limit the type of the feature extraction network.
In an example, the number of feature extraction networks may be the same as the number of local images, i.e., each feature extraction network performs feature extraction on one local image. For example, the first local image x1 may be input to the feature extraction network 1 for feature extraction processing, the second local image x2 may be input to the feature extraction network 2 for feature extraction processing, and the second local image x3 may be input to the feature extraction network 3 for feature extraction processing.
In one possible implementation, the plurality of feature extraction networks may be identical feature extraction networks, and the sizes of the images input to them may be kept the same. The reference size of the first local image may be used as the input size of the feature extraction networks, and the second local image, which is larger than the reference size, may be processed to reduce its size so as to meet the input requirement of the feature extraction networks. In an example, another size may also be used as the input size; for example, a size may be preset as the input size, and all local images inconsistent with the input size are processed to meet the input requirement of the feature extraction networks. The present disclosure does not limit the selection of the input size.
In one possible implementation manner, when the reference size is used as the input size, performing feature extraction processing on each of the plurality of local images to obtain a first feature map corresponding to the first local image and a second feature map corresponding to the second local image may include: performing downsampling processing on the second local image to obtain a third local image of the reference size; and performing feature extraction processing on the first local image and the third local image respectively to obtain the first feature map corresponding to the first local image and the second feature map corresponding to the second local image.
In one possible implementation, the size of the second local image is larger than the reference size, that is, larger than the input size, so the second local image may be downsampled to obtain a third local image of the reference size, i.e., a third local image that satisfies the input size requirement of the feature extraction network.
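A minimal sketch of this downsampling step, assuming PyTorch tensors of shape (N, C, D, H, W) and trilinear interpolation (the disclosure does not mandate any particular interpolation method), might look like this:

```python
import torch
import torch.nn.functional as F

def downsample_to_reference(patch: torch.Tensor, ref_size=(64, 64, 64)) -> torch.Tensor:
    """Resize an (N, C, D, H, W) patch down to the network input size."""
    return F.interpolate(patch, size=ref_size, mode="trilinear", align_corners=False)

x2 = torch.rand(1, 1, 128, 128, 128)  # second local image, twice the reference size per axis
x3 = torch.rand(1, 1, 192, 192, 192)  # second local image, three times the reference size per axis
x2_ds = downsample_to_reference(x2)   # third local image of the reference size
x3_ds = downsample_to_reference(x3)
print(x2_ds.shape, x3_ds.shape)       # both torch.Size([1, 1, 64, 64, 64])
```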
In one possible implementation, the feature extraction processing may be performed on the first local image and the third local images separately. In an example, they may be input into the corresponding feature extraction networks for feature extraction processing; for example, the first local image x1 may be input to the feature extraction network 1 to obtain the first feature map, the third local image corresponding to the second local image x2 may be input to the feature extraction network 2 to obtain the second feature map corresponding to x2, and the third local image corresponding to the second local image x3 may be input to the feature extraction network 3 to obtain the second feature map corresponding to x3.
In one possible implementation, the first feature map may contain fine local detail features, such as detail features (e.g., shapes, contours, etc.) of the targets (e.g., coronary vessels) in the first local image x1, while the second feature map may have a larger receptive field and contain more global distribution features, such as the distribution of coronary vessels in the second local image x2 or x3 and their connections with other vessels.
In this way, local images of different sizes can be made to meet the input requirement of the feature extraction network through downsampling, so that feature maps of multiple scales, and thus feature information of multiple scales, can be obtained.
In a possible implementation manner, when the plurality of feature extraction networks are identical and the input first and third local images have the same size, the feature maps output by the feature extraction networks, i.e., the first feature map and the second feature map, also have the same size. However, the second feature map is the processing result of the second local image, so the region of the original image to be processed corresponding to the second feature map is larger than the region corresponding to the first feature map, and the feature information included in the second feature map differs from that included in the first feature map.
In a possible implementation manner, the image to be processed may be segmented according to feature maps containing different feature information, and step S13 may include: performing overlay processing on the first feature map and the second feature map to obtain a third feature map; and performing activation processing on the third feature map to obtain a segmentation result of the target region in the image to be processed.
In a possible implementation manner, performing overlay processing on the first feature map and the second feature map to obtain the third feature map may include: performing upsampling processing on the second feature map to obtain a fourth feature map, where the ratio of the size of the fourth feature map to the size of the first feature map is the same as the ratio of the size of the second local image to the size of the first local image; cropping the fourth feature map to obtain a fifth feature map whose size is consistent with that of the first feature map; and performing weighted summation processing on the first feature map and the fifth feature map to obtain the third feature map.
In a possible implementation manner, the second feature map may be subjected to upsampling to obtain the fourth feature map; for example, the upsampling may be performed by interpolation or the like. The present disclosure does not limit the upsampling manner.
In one possible implementation, the first local image and the second local image are both parts of the image to be processed, the first local image is smaller than the second local image, and the two share the same center, i.e., the first local image is part of the second local image. The first feature map is a feature map of the first local image and the fourth feature map is a feature map of the second local image, and the ratio of the size of the fourth feature map to the size of the first feature map may be made equal to the ratio of the size of the second local image to the size of the first local image.
In an example, the images are three-dimensional and the size (volume) of the second local image x2 is 8 times that of the first local image x1; the first local image is a part of the second local image (the first local image x1 coincides with the central region of the second local image x2, and the length, width, and height of the first local image x1 are each half of those of the second local image x2). The ratio of the size of the fourth feature map corresponding to the second local image x2 to the size of the first feature map is also 8, i.e., the length, width, and height of the first feature map are each half of those of the fourth feature map corresponding to the second local image x2, and the central region of the fourth feature map corresponding to the second local image x2 corresponds to the same region in the image to be processed as the first local image x1. In an example, the size of the second local image x3 is 27 times that of the first local image x1; the first local image is a part of the second local image (the first local image x1 coincides with the central region of the second local image x3, and the length, width, and height of the first local image x1 are each one third of those of the second local image x3). The ratio of the size of the fourth feature map corresponding to the second local image x3 to the size of the first feature map is also 27, i.e., the length, width, and height of the first feature map are each one third of those of the fourth feature map corresponding to the second local image x3, and the central region of the fourth feature map corresponding to the second local image x3 corresponds to the same region in the image to be processed as the first local image x1.
In an example, the first feature map may be the same size as the first local image x1, the fourth feature map corresponding to the second local image x2 may be the same size as the second local image x2, and the fourth feature map corresponding to the second local image x3 may be the same size as the second local image x3. The present disclosure does not limit the sizes of the first feature map and the fourth feature map.
In one possible implementation, the fourth feature map may be cropped to obtain a fifth feature map whose size is consistent with that of the first feature map. In an example, the central region of the fourth feature map (which corresponds to the same region of the image to be processed as the first feature map) may be retained, and the other regions may be cropped away, to obtain the fifth feature map. In an example, the fifth feature map corresponding to the second local image x2 is the central region of the fourth feature map corresponding to the second local image x2 (its corresponding region in the image to be processed is the same as that of the first local image x1), and the fifth feature map corresponding to the second local image x3 is the central region of the fourth feature map corresponding to the second local image x3 (its corresponding region in the image to be processed is likewise the same as that of the first local image x1). The fifth feature map contains part of the global features of the central region of the second local image, for example, global features of the central region of the second local image x2 or x3. Specifically, the regions of the fourth feature map other than the central region are cropped away and only the central region (i.e., the fifth feature map) is retained, so the fifth feature map contains the features of the central region of the fourth feature map. The receptive field of the fifth feature map is larger than that of the first feature map and can contain, for example, distribution information of coronary vessels in the region of the image to be processed corresponding to x1, which helps determine the distribution of the coronary vessels and reduce noise interference from other vessels (e.g., pulmonary vessels). That is, the fifth feature map and the first feature map are both feature maps of the region corresponding to x1, but since the parameters (weights, receptive fields, and other parameters) of the feature extraction networks differ, the features of the fifth feature map and the first feature map differ. The amount of feature information for the region corresponding to x1 can thus be increased, providing a richer basis for the segmentation processing and improving its accuracy.
In a possible implementation manner, the first feature map and the fifth feature map may be subjected to weighted summation processing to obtain the third feature map. For example, the first feature map and the fifth feature maps may be weighted and summed pixel by pixel. In an example, the weight of the first feature map is α, the weight of the fifth feature map corresponding to the second local image x2 is β, and the weight of the fifth feature map corresponding to the second local image x3 is γ; according to these weights, the first feature map and the two fifth feature maps may be weighted and summed pixel by pixel to obtain the third feature map. The third feature map contains both the detail features of the first feature map and the global features of the fifth feature maps.
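The upsampling, center-cropping, and weighted summation described above could be sketched as follows, assuming PyTorch feature maps of shape (N, C, D, H, W); the scale factors 2 and 3 and the fixed weights standing in for α, β, and γ are illustrative only (as described later, the weights may instead be learned):

```python
import torch
import torch.nn.functional as F

def center_crop(fmap: torch.Tensor, target_size) -> torch.Tensor:
    """Keep the central region of a feature map (yielding the fifth feature map)."""
    _, _, d, h, w = fmap.shape
    td, th, tw = target_size
    sd, sh, sw = (d - td) // 2, (h - th) // 2, (w - tw) // 2
    return fmap[:, :, sd:sd + td, sh:sh + th, sw:sw + tw]

def fuse(f1, f2_list, scales=(2, 3), weights=(0.5, 0.3, 0.2)):
    """Upsample each second feature map, crop its center, and weight-sum with f1."""
    fused = weights[0] * f1                                # first feature map, weight alpha
    for f2, s, w in zip(f2_list, scales, weights[1:]):
        up = F.interpolate(f2, scale_factor=s, mode="trilinear",
                           align_corners=False)            # fourth feature map
        fused = fused + w * center_crop(up, f1.shape[2:])  # fifth feature map, weight beta/gamma
    return fused                                           # third feature map

f1 = torch.rand(1, 16, 64, 64, 64)   # first feature map (from x1)
f2a = torch.rand(1, 16, 64, 64, 64)  # second feature map (from x2)
f2b = torch.rand(1, 16, 64, 64, 64)  # second feature map (from x3)
print(fuse(f1, [f2a, f2b]).shape)    # torch.Size([1, 16, 64, 64, 64])
```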
In an example, the weighted summation processing may be performed by an overlay network, which may be a deep learning neural network such as a convolutional neural network; the present disclosure does not limit the type of the overlay network.
In this way, the sizes of the multiple feature maps can be unified through upsampling and cropping, and the feature maps can be fused by weighted summation. The resulting third feature map contains richer feature information, which helps determine the details and distribution of the target, reduces noise interference, and improves segmentation accuracy.
In a possible implementation manner, activation processing may be performed on the third feature map, for example, according to the feature information it contains, to obtain a segmentation result of the target region where the target (for example, a coronary vessel) is located. For example, the third feature map may be activated by a softmax activation function to obtain a probability map; other activation functions, such as ReLU, may also be used, and the present disclosure does not limit the activation processing. In the probability map, the value of each pixel represents the probability that the pixel is located in the target region (for example, when the probability is greater than or equal to a probability threshold such as 50%, the pixel may be considered to be located in the target region), and the position of the target region may be determined based on the probability of each pixel. In an example, the segmentation processing may be performed by an activation layer. In an example, the target region may be the region where the target is located within the region of the image to be processed corresponding to x1; for example, coronary vessels in the region corresponding to x1 may be segmented.
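A minimal sketch of this activation step, assuming the third feature map has already been projected to two class channels (background and, e.g., coronary vessel) and assuming the illustrative 50% threshold mentioned above:

```python
import torch

logits = torch.randn(1, 2, 64, 64, 64)  # third feature map projected to 2 class channels (assumed)
probs = torch.softmax(logits, dim=1)    # probability map
mask = probs[:, 1] >= 0.5               # pixels whose target probability meets the threshold
print(mask.shape, mask.dtype)           # torch.Size([1, 64, 64, 64]) torch.bool
```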
In a possible implementation manner, the image processing method is implemented by a neural network that includes a plurality of feature extraction networks, an overlay network, and an activation layer. For example, feature extraction processing may be performed on the plurality of local images by the feature extraction networks, overlay processing may be performed on the first feature map and the fifth feature map by the overlay network, and segmentation processing may be performed on the third feature map by the activation layer. The neural network may be trained before being used to perform the image processing method described above.
In one possible implementation, the method further includes: training the plurality of feature extraction networks, the overlay network, and the activation layer through sample images to obtain the trained plurality of feature extraction networks, overlay network, and activation layer.
In one possible implementation, the sample image may include a medical image, e.g., a CTA image, and the sample image may include labeling information for a target (e.g., coronary vessels).
In one possible implementation, the feature extraction networks may be pre-trained. Cropping at multiple scales can be performed on the sample image to obtain a first sample local image of a reference size and a second sample local image with a size larger than the reference size. The total number of first sample local images and second sample local images is the same as the number of feature extraction networks. In an example, the image centers of the first sample local image and the second sample local image are the same, and the region cropped as the first sample local image is the central region of the second sample local image. For example, the sample image may be cropped to obtain a first sample local image y1 of the reference size, a second sample local image y2 larger than the reference size, and a second sample local image y3 larger than the second sample local image y2.
In one possible implementation manner, the second sample local image may be subjected to downsampling processing to obtain a third sample local image of the reference size. The plurality of feature extraction networks may be identical, and the sizes of the images input to them may be kept uniform; for example, the reference size of the first sample local image may be set as the input size, and the second sample local image, which is larger than the reference size, may be downsampled to reduce its size and obtain the third sample local image satisfying the input requirement of the feature extraction networks, that is, with a size equal to the reference size. In an example, the second sample local images y2 and y3 may each be subjected to downsampling processing to obtain third sample local images of the reference size.
In a possible implementation manner, the first sample local image and the third sample local images may be input into the corresponding feature extraction networks for feature extraction processing to obtain a first sample feature map corresponding to the first sample local image and second sample feature maps corresponding to the second sample local images. In an example, the first sample local image y1 may be input to the feature extraction network 1 to obtain the first sample feature map, the third sample local image corresponding to the second sample local image y2 may be input to the feature extraction network 2 to obtain the second sample feature map corresponding to y2, and the third sample local image corresponding to the second sample local image y3 may be input to the feature extraction network 3 to obtain the second sample feature map corresponding to y3.
In one possible implementation manner, activation processing may be performed on the first sample feature map and the second sample feature maps, for example by a softmax function, to obtain first sample target regions corresponding to the plurality of feature extraction networks. In an example, the activation processing may be performed on the first sample feature map, the second sample feature map corresponding to y2, and the second sample feature map corresponding to y3 to obtain the first sample target region in the first sample local image y1, the first sample target region in the second sample local image y2, and the first sample target region in the second sample local image y3, respectively. That is, a sample target region is determined in each sample local image through the feature extraction processing and activation processing of the corresponding feature extraction network; these sample target regions may contain errors.
In one possible implementation manner, the first network losses of the plurality of feature extraction networks may be determined according to the labeling information of the sample image and the first sample target regions. In an example, the first network loss of the feature extraction network 1 may be determined according to the labeling information of the region of the sample image corresponding to y1 and the first sample target region in the first sample local image y1; the first network loss of the feature extraction network 2 may be determined from the labeling information of the region corresponding to y2 and the first sample target region in the second sample local image y2; and the first network loss of the feature extraction network 3 may be determined from the labeling information of the region corresponding to y3 and the first sample target region in the second sample local image y3. In an example, the first network loss may include a cross-entropy loss and a set similarity loss (Dice loss): the cross-entropy loss and the set similarity loss of each feature extraction network may be determined according to the first sample target region and the corresponding labeling information, and the two may be subjected to weighted summation to obtain the first network loss of that feature extraction network.
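A minimal sketch of such a first network loss, assuming PyTorch, binary labels, and equal weights for the two terms (the disclosure leaves the weighting and the exact Dice formulation open):

```python
import torch
import torch.nn.functional as F

def dice_loss(probs: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Set similarity (Dice) loss between foreground probabilities and binary labels."""
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

def first_network_loss(logits: torch.Tensor, target: torch.Tensor,
                       w_ce: float = 0.5, w_dice: float = 0.5) -> torch.Tensor:
    """Weighted sum of cross-entropy loss and set similarity loss."""
    ce = F.cross_entropy(logits, target)        # logits: (N, 2, D, H, W), target: (N, D, H, W)
    fg = torch.softmax(logits, dim=1)[:, 1]     # foreground (target region) probability
    return w_ce * ce + w_dice * dice_loss(fg, target.float())

logits = torch.randn(1, 2, 64, 64, 64)
labels = torch.randint(0, 2, (1, 64, 64, 64))   # voxel-wise annotation
print(first_network_loss(logits, labels).item())
```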
In one possible implementation, the plurality of feature extraction networks may be trained according to the first network losses to obtain pre-trained feature extraction networks. In an example, each feature extraction network may be trained according to its own first network loss; for example, the feature extraction network 1 is trained according to the first network loss of the feature extraction network 1, the feature extraction network 2 according to the first network loss of the feature extraction network 2, and the feature extraction network 3 according to the first network loss of the feature extraction network 3. In an example, the network parameters of a feature extraction network may be adjusted by back-propagating the first network loss through gradient descent, and the training may be iterated until a first training condition is satisfied. The first training condition may be a number of training iterations (the condition is satisfied when the number of iterations reaches a preset number), or the magnitude or convergence of the first network loss (for example, the condition is satisfied when the first network loss is less than or equal to a preset threshold or converges within a preset interval). The present disclosure does not limit the first training condition. When the first training condition is satisfied, the pre-trained feature extraction networks are obtained.
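The pre-training loop for a single feature extraction network might then be sketched as follows, using plain SGD as the gradient descent method and a fixed iteration count as the first training condition; `network` and `sample_loader` are hypothetical, and `first_network_loss` is the helper sketched above:

```python
import torch

def pretrain(network, sample_loader, max_steps=10000, lr=1e-3):
    optimizer = torch.optim.SGD(network.parameters(), lr=lr)  # gradient descent
    for step, (patch, label) in enumerate(sample_loader, start=1):
        logits = network(patch)                   # predicts the first sample target region
        loss = first_network_loss(logits, label)  # first network loss (CE + Dice)
        optimizer.zero_grad()
        loss.backward()                           # back-propagate the first network loss
        optimizer.step()                          # adjust the network parameters
        if step >= max_steps:                     # first training condition (iteration count)
            break
    return network
```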
In one possible implementation, the neural network may be trained based on a pre-trained feature extraction network.
In one possible implementation manner, training the plurality of feature extraction networks, the overlay network, and the activation layer through the sample image to obtain the trained plurality of feature extraction networks, overlay network, and activation layer includes: performing cropping at multiple scales on the sample image to obtain a fourth sample local image of the reference size and a fifth sample local image with a size larger than the reference size; performing downsampling processing on the fifth sample local image to obtain a sixth sample local image of the reference size; inputting the fourth sample local image and the sixth sample local image into the corresponding feature extraction networks for feature extraction processing to obtain a third sample feature map corresponding to the fourth sample local image and a fourth sample feature map corresponding to the sixth sample local image; performing upsampling processing and cropping processing on the fourth sample feature map to obtain a fifth sample feature map; inputting the third sample feature map and the fifth sample feature map into the overlay network to obtain a sixth sample feature map; inputting the sixth sample feature map into the activation layer for activation processing to obtain a second sample target region of the sample image; determining a second network loss of the neural network according to the second sample target region and the labeling information of the sample image; and training the plurality of feature extraction networks, the overlay network, and the activation layer according to the second network loss to obtain the trained plurality of feature extraction networks, overlay network, and activation layer.
In one possible implementation manner, cropping at multiple scales may be performed on the sample image to obtain a fourth sample local image of the reference size and a fifth sample local image larger than the reference size. The total number of fourth sample local images and fifth sample local images is the same as the number of feature extraction networks. In an example, the image centers of the fourth sample local image and the fifth sample local image are the same, and the region cropped as the fourth sample local image is the central region of the fifth sample local image. For example, the sample image may be cropped to obtain a fourth sample local image z1 of the reference size, a fifth sample local image z2 larger than the reference size, and a fifth sample local image z3 larger than the fifth sample local image z2.
In one possible implementation, the fifth sample local image may be downsampled to obtain a sixth sample local image of the reference size. The plurality of feature extraction networks may have the same structure, and the sizes of the images input to them may be kept uniform; for example, the reference size of the fourth sample local image may be set as the input size, and the fifth sample local image, which is larger than the reference size, may be downsampled to reduce its size and obtain the sixth sample local image satisfying the input requirement of the feature extraction networks, that is, with a size equal to the reference size. In an example, the fifth sample local images z2 and z3 may each be downsampled to obtain sixth sample local images of the reference size.
In a possible implementation manner, the fourth sample local image and the sixth sample local images may be input into the corresponding feature extraction networks for feature extraction processing to obtain a third sample feature map corresponding to the fourth sample local image and fourth sample feature maps corresponding to the sixth sample local images. In an example, the fourth sample local image z1 may be input to the feature extraction network 1 to obtain the third sample feature map, the sixth sample local image corresponding to the fifth sample local image z2 may be input to the feature extraction network 2 to obtain the fourth sample feature map corresponding to z2, and the sixth sample local image corresponding to the fifth sample local image z3 may be input to the feature extraction network 3 to obtain the fourth sample feature map corresponding to z3.
In a possible implementation manner, the fourth sample feature map may be subjected to upsampling and cropping to obtain the fifth sample feature map. In an example, the plurality of feature extraction networks are identical, and the input fourth and sixth sample local images have the same size; therefore, the obtained third and fourth sample feature maps also have the same size. The fourth sample feature map may be upsampled such that the ratio of the size of the upsampled fourth sample feature map to the size of the third sample feature map is the same as the ratio of the size of the fifth sample local image to that of the fourth sample local image. For example, the third sample feature map and the fourth sample feature map may correspond to the size of the fourth sample local image, and the upsampled fourth sample feature map may correspond to the size of the fifth sample local image.
In an example, the central region of the upsampled fourth sample feature map may be retained, and the other regions may be cropped away, to obtain the fifth sample feature map. The region of the sample image corresponding to the fifth sample feature map coincides with the region corresponding to the third sample feature map, i.e., each corresponds to the region of the fourth sample local image z1 in the sample image.
In a possible implementation manner, the third sample feature map and the fifth sample feature map may be input into the overlay network to obtain the sixth sample feature map. This may include: performing weighted summation processing on the third sample feature map and the fifth sample feature map through the overlay network to obtain the sixth sample feature map.
In an example, the overlay network may perform a pixel-by-pixel weighted summation of the third sample feature map and the fifth sample feature maps. In an example, the weight of the third sample feature map is α, the weight of the fifth sample feature map corresponding to the fifth sample local image z2 is β, and the weight of the fifth sample feature map corresponding to the fifth sample local image z3 is γ; α, β, and γ may be network parameters of the overlay network, and their values may be determined through training of the overlay network. After the overlay network processing, the sixth sample feature map may be obtained, which may contain richer feature information.
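An overlay network with learnable scalar weights might be sketched as follows in PyTorch; treating α, β, and γ as a single `nn.Parameter` is one plausible realization of "network parameters determined through training", not a mandated design:

```python
import torch
import torch.nn as nn

class OverlayNetwork(nn.Module):
    """Pixel-wise weighted summation with learnable weights (alpha, beta, gamma)."""
    def __init__(self, num_maps: int = 3):
        super().__init__()
        self.weights = nn.Parameter(torch.full((num_maps,), 1.0 / num_maps))

    def forward(self, feature_maps):
        # All feature maps are assumed to have the same (N, C, D, H, W) shape.
        return sum(w * f for w, f in zip(self.weights, feature_maps))

overlay = OverlayNetwork()
maps = [torch.rand(1, 16, 64, 64, 64) for _ in range(3)]
print(overlay(maps).shape)  # torch.Size([1, 16, 64, 64, 64])
```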
In one possible implementation, the sixth sample feature map may be input into the activation layer to obtain the second sample target region of the sample image. In an example, the activation layer may perform activation processing on the sixth sample feature map through a softmax function to obtain the target region where the target (e.g., coronary vessels) is located. The target region may be the region where the target is located within the region of the sample image corresponding to the fourth sample local image z1, for example, the region where the coronary vessels of z1 are located; that is, the second sample target region may be segmented. The second sample target region may contain errors.
In one possible implementation, the second network loss of the neural network may be determined according to the second sample target region and the annotation information of the sample image (for example, the annotation information of the region of the sample image corresponding to z1). In an example, the second network loss may include a cross-entropy loss and a set similarity loss (Dice loss): the cross-entropy loss and the set similarity loss may be determined according to the second sample target region and the annotation information, and the two may be subjected to weighted summation to obtain the second network loss of the neural network. The present disclosure does not limit the manner of determining the second network loss.
In a possible implementation manner, the plurality of feature extraction networks, the overlay network, and the activation layer may be trained according to the second network loss to obtain the trained plurality of feature extraction networks, overlay network, and activation layer. In an example, the network parameters of the neural network may be adjusted by back-propagating the second network loss through gradient descent, and the training may be iterated until a second training condition is satisfied. The second training condition may be a number of training iterations (the condition is satisfied when the number of iterations reaches a preset number), or the magnitude or convergence of the second network loss (for example, the condition is satisfied when the second network loss is less than or equal to a preset threshold or converges within a preset interval). The present disclosure does not limit the second training condition. After the second training condition is satisfied, the trained feature extraction networks, overlay network, and activation layer are obtained and can be used to segment targets in images.
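Putting the pieces together, one joint training step might look like the following sketch; it reuses the hypothetical helpers from earlier sketches (`center_crop`, `first_network_loss`, `OverlayNetwork`), and `head` is an assumed projection (e.g., a 1×1×1 convolution) from the fused features to two class channels:

```python
import torch
import torch.nn.functional as F

def joint_training_step(nets, overlay, head, optimizer, patches, label, scales=(2, 3)):
    # patches: [z1, downsampled z2, downsampled z3], all of the reference size.
    feats = [net(p) for net, p in zip(nets, patches)]       # third / fourth sample feature maps
    cropped = [center_crop(F.interpolate(f, scale_factor=s, mode="trilinear",
                                         align_corners=False), feats[0].shape[2:])
               for f, s in zip(feats[1:], scales)]          # fifth sample feature maps
    fused = overlay([feats[0]] + cropped)                   # sixth sample feature map
    loss = first_network_loss(head(fused), label)           # second network loss (CE + Dice)
    optimizer.zero_grad()
    loss.backward()                                         # back-propagate the second network loss
    optimizer.step()
    return loss.item()
```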
In this way, the neural network can be trained from the pre-trained feature extraction networks and the sample images, improving the training accuracy of the neural network. Furthermore, by training the overlay weights of the overlay network, appropriate weight parameters can be selected during the training process, improving the effect of feature fusion, optimizing the detail features and global features, and improving the accuracy of the neural network.
In a possible implementation manner, the parameters of the feature extraction networks may be further adjusted subsequently to further improve the accuracy of the neural network. Training the plurality of feature extraction networks, the overlay network, and the activation layer through the sample image to obtain the trained plurality of feature extraction networks, overlay network, and activation layer further comprises: performing activation processing on the fourth sample feature map to obtain a third sample target region; determining a third network loss of the feature extraction network corresponding to the fourth sample feature map according to the third sample target region and the labeling information of the sample image; and training the feature extraction network corresponding to the fourth sample feature map according to the third network loss.
In an example, before the upsampling and cropping processing is performed on the fourth sample feature map, the third sample target region may be obtained by performing activation processing on the fourth sample feature map (e.g., through a softmax function). For example, the third sample target regions are predicted from the fourth sample feature maps obtained by the feature extraction network 2 and the feature extraction network 3: the feature extraction network 2 may predict a third sample target region in the fifth sample local image z2, and the feature extraction network 3 may predict a third sample target region in the fifth sample local image z3.
In an example, the third network loss of the feature extraction network 2 may be obtained using the annotation information of the region of the sample image corresponding to z2 and the third sample target region in the fifth sample local image z2, and the third network loss of the feature extraction network 3 may be obtained using the annotation information of the region corresponding to z3 and the third sample target region in the fifth sample local image z3. The third network loss may include a cross-entropy loss and a set similarity loss (Dice loss); the present disclosure does not limit the third network loss.
In an example, each feature extraction network can be trained with its third network loss, that is, the parameters of the feature extraction networks continue to be adjusted using the sample target regions predicted in the larger local images (e.g., z2 and z3) and the labeling information, improving the accuracy of the neural network.
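A minimal sketch of these auxiliary losses, again reusing the hypothetical `first_network_loss` helper; `heads` and `labels` (per-branch classification heads and the annotations for the regions corresponding to, e.g., z2 and z3) are assumptions for illustration:

```python
def third_network_losses(feats, heads, labels):
    """One third network loss per larger-scale branch (e.g., the z2 and z3 branches)."""
    losses = []
    for f, head, lbl in zip(feats[1:], heads[1:], labels[1:]):
        logits = head(f)                               # activate to predict the third sample target region
        losses.append(first_network_loss(logits, lbl))
    return losses
```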
In this way, the prediction information in the larger local image can be used to further train the feature extraction network, and the accuracy of the neural network can be further improved.
After the training is completed, the trained neural network may be used in an image segmentation process to segment out a target region in an image (e.g., a CTA image).
According to the image processing method of the embodiments of the present disclosure, cropping at multiple scales is performed on the image to be processed, and local images of different sizes can meet the input requirements of the feature extraction networks through downsampling, so that feature maps of multiple scales are obtained. The sizes of the multiple feature maps can be unified through upsampling and cropping, and the feature maps can be fused by weighted summation, so that the obtained third feature map contains richer feature information, which helps determine the details and distribution of the target and reduces noise interference. Furthermore, the neural network can be trained starting from the pre-trained feature extraction networks and the sample images, improving training accuracy. The overlay weights can be obtained by training the overlay network, so that appropriate weight parameters are selected during training, improving the effect of feature fusion and optimizing the detail features and global features; the feature extraction networks are further trained using the prediction information in the larger local images, improving the accuracy of the neural network and of the segmentation processing.
Fig. 3 shows an application schematic diagram of an image processing method according to an embodiment of the present disclosure. As shown in Fig. 3, a neural network for image segmentation may include a feature extraction network 1, a feature extraction network 2, a feature extraction network 3, an overlay network, and an activation layer.
In one possible implementation, the neural network may be trained with sample images, which may include medical images, e.g., CTA images with annotation information for a target (e.g., coronary vessels). The sample image may be preprocessed, for example by resampling and normalization, and the preprocessed sample image may be cropped to obtain a local image x1 of a reference size, a local image x2 with a size larger than the reference size, and a local image x3 with a size larger than x2. Referring to Fig. 3, three local images with the same image center but different sizes, i.e., x1, x2, and x3, may be cut from the sample image. Further, the local images x2 and x3 may be downsampled to the reference size, and the local image x1, the downsampled x2, and the downsampled x3 may be input to the feature extraction network 1, the feature extraction network 2, and the feature extraction network 3, respectively, for training (for example, training based on the output result of each feature extraction network and the annotation information of the sample image), to obtain a pre-trained feature extraction network 1, feature extraction network 2, and feature extraction network 3.
In one possible implementation, training of the neural network may continue with the pre-trained feature extraction network 1, feature extraction network 2, and feature extraction network 3 and the sample images (which may be the same as the sample images described above, or other sample images). In an example, the sample image may be preprocessed and cropped to obtain a local image x1 of the reference size, a local image x2 with a size larger than the reference size, and a local image x3 with a size larger than x2. The local images x2 and x3 may be downsampled to the reference size, and the local image x1, the downsampled x2, and the downsampled x3 may be input to the feature extraction network 1, the feature extraction network 2, and the feature extraction network 3, respectively, so that the feature extraction network 1 obtains a feature map corresponding to the local image x1, the feature extraction network 2 obtains a feature map corresponding to the local image x2, and the feature extraction network 3 obtains a feature map corresponding to the local image x3.
In one possible implementation, the feature map corresponding to the local image x2 and the feature map corresponding to the local image x3 may be upsampled, and the central region corresponding to the local image x1 may be cropped out of each. Further, the feature map corresponding to the local image x1 and the two cropped feature maps may be subjected to overlay processing; for example, the three feature maps may be weighted and summed with weights α, β, and γ through the overlay network, and the overlay result may be input to the activation layer for activation processing to obtain the second sample target region, that is, the predicted target region within the region of the sample image corresponding to the local image x1. The network loss of the neural network can be determined according to this target region and the labeling information of the sample image, and the neural network can be trained using this network loss.
In a possible implementation manner, the feature maps output by the feature extraction network 2 and the feature extraction network 3 may also be activated separately to obtain third sample target regions, that is, the predicted target region within the region of the sample image corresponding to x2 and the predicted target region within the region corresponding to x3. Further, the network losses of the feature extraction network 2 and the feature extraction network 3 may be determined using the third sample target regions and the labeling information of the sample image, and the feature extraction network 2 and the feature extraction network 3 may be trained accordingly.
In a possible implementation manner, after the training process is completed, the neural network may be used to perform segmentation processing on the image to be processed. For example, the image to be processed may be preprocessed and cropped to obtain a local image x1 of the reference size, a local image x2 with a size larger than the reference size, and a local image x3 with a size larger than x2. The local images x2 and x3 may be downsampled to the reference size, and the local image x1, the downsampled x2, and the downsampled x3 may be input to the feature extraction network 1, the feature extraction network 2, and the feature extraction network 3, respectively. Further, after the feature maps output by the feature extraction network 2 and the feature extraction network 3 are upsampled and cropped, they are overlaid with the feature map output by the feature extraction network 1, that is, input into the overlay network. The overlay result output by the overlay network may be input to the activation layer for activation processing, yielding the target region where the target is located within the region of the image to be processed corresponding to x1. For example, when the image to be processed is a CTA image, the neural network may segment the region where the coronary vessels are located within the region of the image to be processed corresponding to x1.
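An end-to-end inference sketch tying the above steps together, under the same assumptions and reusing the hypothetical helpers sketched earlier (`crop_multiscale`, `downsample_to_reference`, `center_crop`, `OverlayNetwork`), might look like this:

```python
import numpy as np
import torch
import torch.nn.functional as F

@torch.no_grad()
def segment(volume: np.ndarray, center, nets, overlay, head, ref=(64, 64, 64)):
    x1, x2, x3 = crop_multiscale(volume, center, ref)         # local images of three scales
    tensors = [torch.from_numpy(c)[None, None] for c in (x1, x2, x3)]
    inputs = [tensors[0]] + [downsample_to_reference(t, ref) for t in tensors[1:]]
    feats = [net(x) for net, x in zip(nets, inputs)]          # first / second feature maps
    fused = overlay([feats[0]] + [
        center_crop(F.interpolate(f, scale_factor=s, mode="trilinear",
                                  align_corners=False), feats[0].shape[2:])
        for f, s in zip(feats[1:], (2, 3))])                  # third feature map
    probs = torch.softmax(head(fused), dim=1)                 # activation processing
    return probs[:, 1] >= 0.5                                 # mask for the region corresponding to x1
```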
In a possible implementation manner, the image processing method can be used in the segmentation processing of coronary vessels, and can utilize feature information of multiple scales to improve the segmentation accuracy, obtain the region where the coronary vessel is located, and provide a basis for subsequent diagnosis (for example, diagnosis of plaque in the vessel, vessel blockage, stenosis, and other diseases). The image processing method can also be used in image segmentation processing in other fields, for example, in the fields of portrait segmentation, object segmentation, and the like, and the application field of the image processing method is not limited by the present disclosure.
Fig. 4 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure. As shown in Fig. 4, the apparatus includes: a local image module 11, configured to obtain local images of one or more scales based on an image to be processed; a feature extraction module 12, configured to perform feature extraction processing on each of the one or more local images to obtain feature maps corresponding to the local images of the one or more scales; and a segmentation module 13, configured to segment the image to be processed based on the feature maps corresponding to the local images of the one or more scales to obtain a segmentation result.
In one possible implementation, the local image module is further configured to: perform cropping at multiple scales on the image to be processed to obtain a plurality of local images, where the plurality of local images include a first local image of a reference size and a second local image with a size larger than the reference size, and the plurality of local images share the same center.
In one possible implementation, the feature extraction module is further configured to: perform feature extraction processing on each of the plurality of local images to obtain a first feature map corresponding to the first local image and a second feature map corresponding to the second local image.
In one possible implementation, the segmentation module is further configured to: perform overlay processing on the first feature map and the second feature map to obtain a third feature map; and perform activation processing on the third feature map to obtain a segmentation result of the target region in the image to be processed.
In one possible implementation, the feature extraction module is further configured to: perform downsampling processing on the second local image to obtain a third local image of the reference size; and perform feature extraction processing on the first local image and the third local image respectively to obtain the first feature map corresponding to the first local image and the second feature map corresponding to the second local image.
In one possible implementation, the segmentation module is further configured to: perform upsampling processing on the second feature map to obtain a fourth feature map, where the ratio of the size of the fourth feature map to the size of the first feature map is the same as the ratio of the size of the second local image to the size of the first local image; crop the fourth feature map to obtain a fifth feature map whose size is consistent with that of the first feature map; and perform weighted summation processing on the first feature map and the fifth feature map to obtain the third feature map.
In one possible implementation, the image processing apparatus is implemented by a neural network including a plurality of feature extraction networks, an overlay network, and an activation layer, and the apparatus further includes: a training module, configured to train the plurality of feature extraction networks, the overlay network, and the activation layer through the sample image to obtain the trained plurality of feature extraction networks, overlay network, and activation layer.
In one possible implementation, the training module is further configured to: perform screenshot processing on the sample image at multiple scales to obtain a fourth sample local image of the reference size and a fifth sample local image of a size larger than the reference size; perform downsampling processing on the fifth sample local image to obtain a sixth sample local image of the reference size; input the fourth sample local image and the sixth sample local image into the corresponding feature extraction networks respectively for feature extraction processing to obtain a third sample feature map corresponding to the fourth sample local image and a fourth sample feature map corresponding to the sixth sample local image; perform upsampling processing and cropping processing on the fourth sample feature map to obtain a fifth sample feature map; input the third sample feature map and the fifth sample feature map into the overlay network to obtain a sixth sample feature map; input the sixth sample feature map into the activation layer for activation processing to obtain a second sample target region of the sample image; determine a second network loss of the neural network according to the second sample target region and the labeling information of the sample image; and train the multiple feature extraction networks, the overlay network, and the activation layer according to the second network loss to obtain the trained feature extraction networks, overlay network, and activation layer.
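As an illustration of this training flow only, the sketch below uses two placeholder convolutional feature extraction networks, learnable overlay weights, and a binary cross-entropy second network loss; none of these specific choices is mandated by the disclosure, and the labeling information is assumed to be a binary mask over the reference-size window.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiScaleSegmenter(nn.Module):
        def __init__(self, channels: int = 1):
            super().__init__()
            # One placeholder feature extraction network per scale.
            self.extract_ref = nn.Conv2d(channels, 1, kernel_size=3, padding=1)
            self.extract_coarse = nn.Conv2d(channels, 1, kernel_size=3, padding=1)
            # The overlay network is modeled here as two learnable fusion weights.
            self.w = nn.Parameter(torch.tensor([0.5, 0.5]))

        def forward(self, fourth_crop, sixth_crop, scale_ratio=2.0):
            third_fm = self.extract_ref(fourth_crop)      # reference-size crop
            fourth_fm = self.extract_coarse(sixth_crop)   # downsampled large crop
            up = F.interpolate(fourth_fm, scale_factor=scale_ratio,
                               mode="bilinear", align_corners=False)
            h, w = third_fm.shape[-2:]
            H, W = up.shape[-2:]
            top, left = (H - h) // 2, (W - w) // 2
            fifth_fm = up[:, :, top:top + h, left:left + w]
            sixth_fm = self.w[0] * third_fm + self.w[1] * fifth_fm  # overlay network
            return torch.sigmoid(sixth_fm)                # activation layer

    def training_step(model, optimizer, fourth_crop, sixth_crop, label):
        pred = model(fourth_crop, sixth_crop)             # second sample target region
        loss = F.binary_cross_entropy(pred, label)        # second network loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()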
In one possible implementation, the training module is further configured to: perform weighted summation processing on the third sample feature map and the fifth sample feature map through the overlay network to obtain the sixth sample feature map.
In one possible implementation, the training module is further configured to: perform activation processing on the fourth sample feature map to obtain a third sample target region; determine a third network loss of the feature extraction network corresponding to the fourth sample feature map according to the third sample target region and the labeling information of the sample image; and train the feature extraction network corresponding to the fourth sample feature map according to the third network loss.
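This auxiliary supervision of the coarse branch could be sketched as follows; the sigmoid activation, the binary cross-entropy loss, and the assumption that the annotation has been resized to match the fourth sample feature map are illustrative.

    import torch
    import torch.nn.functional as F

    def auxiliary_branch_loss(fourth_sample_fm: torch.Tensor,
                              label: torch.Tensor) -> torch.Tensor:
        # Activation processing on the fourth sample feature map yields the
        # third sample target region.
        third_sample_target = torch.sigmoid(fourth_sample_fm)
        # Third network loss: backpropagating it trains only the feature
        # extraction network that produced the fourth sample feature map.
        return F.binary_cross_entropy(third_sample_target, label)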
It is understood that the above method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from the principles and logic described herein; for brevity, such combinations are not described in detail here. Those skilled in the art will also appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides an image processing apparatus, an electronic device, a computer-readable storage medium, and a program, each of which can be used to implement any of the image processing methods provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the method section, which is not repeated here for brevity.
In some embodiments, the functions of, or the modules included in, the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments; for their specific implementation, refer to the descriptions of the above method embodiments, which are not repeated here for brevity.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The embodiments of the present disclosure also provide a computer program product, which includes computer readable code, and when the computer readable code runs on a device, a processor in the device executes instructions for implementing the image processing method provided in any one of the above embodiments.
The embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, which when executed cause a computer to perform the operations of the image processing method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 5 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, or a personal digital assistant.
Referring to fig. 5, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 806 provides power to the various components of the electronic device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 6 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 6, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources, represented by memory 1932, for storing instructions (e.g., applications) executable by the processing component 1922. The application programs stored in the memory 1932 may include one or more modules, each corresponding to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.