Disclosure of Invention
In view of the above, the present disclosure provides a method and an apparatus for calculating echocardiography cardiac parameters and measuring myocardial strain.
According to an aspect of the present disclosure, there is provided an echocardiographic heart parameter calculation and myocardial strain measurement method, which performs processing by a trained neural network, the processing at least including:
classifying the cardiac ultrasound video to obtain a section classification result;
performing image segmentation on the section classification result to obtain a segmentation result;
and obtaining cardiac parameters and myocardial strain according to the segmentation result.
In one possible implementation, the trained neural network includes: a classification network and a segmentation network.
In a possible implementation manner, the classifying the cardiac ultrasound video to obtain a section classification result includes:
acquiring an original cardiac ultrasound image;
converting each section sequence in the original ultrasound image into a section video with the same resolution and the same frame count;
and inputting the section video into the classification network to obtain a section classification result.
In a possible implementation manner, the obtaining a segmentation result by performing image segmentation on the section classification result includes:
screening the section classification result to obtain a designated section;
the designated section includes: at least one of an apical two-chamber two-dimensional section, an apical three-chamber two-dimensional section, and an apical four-chamber two-dimensional section;
carrying out segmentation processing on the specified section through the segmentation network to obtain a segmentation result;
the segmentation result includes: at least one of the contours of the left ventricular endocardium, the left ventricular epicardium, the left atrial endocardium, the right ventricular endocardium, and the right ventricular epicardium.
In one possible implementation, the obtaining of the cardiac parameter and the left ventricular myocardial strain according to the segmentation result includes:
calculating cardiac parameters according to the segmentation result;
and calculating the left ventricular myocardial strain according to the segmentation result and speckle tracking.
In a possible implementation manner, the segmenting the designated section through the segmentation network to obtain a segmentation result includes:
preprocessing the designated section to obtain a preprocessed designated section sequence;
inputting the specified section sequence into the segmentation network to obtain masks of all structures of the heart;
obtaining structural regions of the heart according to the mask;
and obtaining the initial frame number of the cardiac cycle and the positions of the apex and the root of the mitral valve according to each structural region of the heart.
In one possible implementation, the classification network is a three-dimensional convolutional neural network.
According to another aspect of the present disclosure, there is provided an echocardiographic heart parameter calculation and myocardial strain measurement apparatus, which performs processing by a trained neural network, including:
the classification module is used for classifying the cardiac ultrasonic video to obtain a section classification result;
the segmentation module is used for carrying out image segmentation on the section classification result to obtain a segmentation result;
and the calculation module is used for obtaining the heart parameters and the myocardial strain according to the segmentation result.
According to another aspect of the present disclosure, there is provided an echocardiographic heart parameter calculation and myocardial strain measurement apparatus comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, the trained neural network is used to perform automatic section classification and image segmentation on the cardiac ultrasound video, and the measurement results of the cardiac parameters and the myocardial strain are then obtained automatically, thereby effectively reducing the workload of doctors and improving work efficiency.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In recent years, artificial intelligence technology represented by deep learning neural networks has undergone rapid development, and its application in the field of medical image processing is currently a focus of research. Echocardiography, a commonly used medical examination method, has also begun to be used as an analysis object for deep learning models.
On the one hand, in the related art, when a deep learning model is used to automatically classify ultrasound sections, an image is generally extracted at random directly from the ultrasound video file, and the 2D convolutional neural network VGGNet shown in fig. 1 is used to classify the image to obtain the ultrasound section classification result. However, because this approach extracts single images from the video and feeds them into a 2D convolutional neural network for section classification, the motion information of the heart is lost, and the classification is not accurate enough.
On the other hand, in the related art, when performing ultrasound image segmentation, a convolutional neural network model is generally used to segment the left ventricle in the ultrasound image in order to calculate some cardiac parameters. As shown in fig. 2, the convolutional neural network model U-Net is used to segment the left ventricular chamber of the apical four-chamber section: the raw ultrasound frames (Echo Cine Raw Frames) and the optical flow (Echo Cine Optical Flow) are taken as input; within a sliding temporal window, two U-Net encoders serve as the convolutional encoder for each image in the video; after the encoding results are concatenated, the obtained feature maps are either processed by a bidirectional convolutional LSTM (ConvLSTM) layer or passed directly to the decoder through skip connections; and the U-Net decoder, serving as the convolutional decoder, decodes the feature map of each image to produce the final segmentation result of the left ventricle for each frame. However, this ultrasound image segmentation method mainly handles the apical four-chamber section and can only segment the left ventricular structure in the image, which is a strong limitation.
In addition, in the related art, a speckle tracking method is usually adopted to track the motion of pixel points in the myocardium of an ultrasound video to obtain a myocardial strain calculation result. Taking commercial ultrasound analysis software such as TOMTEC as a representative, a speckle tracking technique is used to track the motion of speckles in the myocardium in the ultrasound image, so as to obtain strain indicators during myocardial motion. However, in this method the strain calculation result is greatly affected by image quality, and the position of the left ventricular chamber needs to be specified manually as initialization, so the operation is complicated, and the repeatability and accuracy are poor.
Therefore, the embodiment of the disclosure provides an artificial-intelligence-based echocardiographic cardiac parameter calculation and myocardial strain measurement scheme: automatic section classification and image segmentation are performed on a cardiac ultrasound video through a trained neural network, and the measurement results of cardiac parameters and myocardial strain are then obtained automatically, which effectively reduces the workload of doctors and improves work efficiency.
Figure 3 illustrates a flow chart of a method of echocardiographic cardiac parameter calculation and myocardial strain measurement according to an embodiment of the present disclosure. As shown in fig. 3, the method performs a process by the trained neural network, and the process at least includes:
101, classifying cardiac ultrasonic videos to obtain a section classification result;
102, carrying out image segmentation on the section classification result to obtain a segmentation result;
and 103, obtaining heart parameters and myocardial strain according to the segmentation result.
In order to realize automatic measurement of cardiac parameters and automatic calculation of myocardial strain, in the embodiment of the disclosure, a trained neural network is used to automatically classify the sections of a cardiac ultrasound video; based on the classified images, some sections are selected for image segmentation to obtain a segmentation result (such as the contours of cardiac structures in the image, e.g. the left ventricle, right ventricle, left atrium, and right atrium); based further on the segmentation result, a variety of cardiac parameters are calculated and measurements of myocardial strain are derived automatically.
In one possible implementation, the trained neural network may include: a classification network and a segmentation network. The classification network and the segmentation network may be convolutional neural networks, and specifically may include, but are not limited to, convolutional neural networks such as V-Net, U-Net, VGG, ResNet, and DenseNet, which is not limited in this embodiment. The classification network is used to classify the cardiac ultrasound video to obtain a section classification result; the segmentation network is used to perform image segmentation on the section classification result output by the classification network to obtain a segmentation result.
In a possible implementation manner, in step 101, the classifying the cardiac ultrasound video to obtain a section classification result may include the following steps:
step 10101, acquiring an original cardiac ultrasound image;
10102, converting each section sequence in the original ultrasound image into a section video with the same resolution and the same frame count;
10103, inputting the section video into the classification network to obtain a section classification result.
An original cardiac ultrasound image (i.e., an echocardiogram) is obtained by using the ultrasonic ranging principle: pulsed ultrasound penetrates the chest wall and soft tissue to measure the periodic motion of structures such as the cardiac walls, ventricles, and valves, and the relationship between the motion of each structure and time is displayed as a curve. The original echocardiographic image usually contains a plurality of sequences, each sequence corresponding to one section in the ultrasound examination. In order to make full use of the motion information of the heart, in the embodiment of the disclosure, before classification by the trained neural network, the acquired original cardiac ultrasound image is preprocessed and each section sequence to be classified is converted into a video with the same resolution and the same frame count, so that the processed section video, rather than a single section picture, is used as the input of the neural network; the information contained in the section video is thus fully utilized and the classification accuracy is improved.
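The preprocessing described here, resampling every section sequence to a common resolution and frame count, can be sketched as follows. This is a minimal numpy illustration under assumed defaults (the 16-frame, 112x112 target shape and nearest-neighbour resampling are illustrative choices, not values from the disclosure):

```python
import numpy as np

def standardize_clip(frames, out_hw=(112, 112), out_frames=16):
    """Resample a section sequence (T, H, W) to a fixed resolution and
    frame count, so every clip fed to the classifier has the same shape.
    Nearest-neighbour index resampling is used for brevity."""
    frames = np.asarray(frames, dtype=np.float32)
    t, h, w = frames.shape
    ti = np.linspace(0, t - 1, out_frames).round().astype(int)   # temporal resampling
    yi = np.linspace(0, h - 1, out_hw[0]).round().astype(int)    # spatial rows
    xi = np.linspace(0, w - 1, out_hw[1]).round().astype(int)    # spatial cols
    clip = frames[ti][:, yi][:, :, xi]
    # zero-mean, unit-variance intensity normalization
    return (clip - clip.mean()) / (clip.std() + 1e-8)
```

A real pipeline would use proper interpolation (e.g. bilinear) rather than index rounding; the point is only that all clips leave this step with identical shape and normalized intensities.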
In one possible implementation, the classification network is a three-dimensional convolutional neural network. FIG. 4 illustrates a block diagram of a three-dimensional convolutional neural network model according to an embodiment of the present disclosure. As shown in fig. 4, the three-dimensional convolutional neural network is trained in advance on sample section video files to obtain a trained 3D convolutional neural network. The preprocessed section videos with the same resolution and the same frame count are input into the trained 3D convolutional neural network, and section classification is performed to obtain the classification result of the ultrasound sections (namely, the section category corresponding to each video file). Because the 3D convolutional neural network classifies ultrasound section videos, the cardiac motion information in the ultrasound videos can be effectively utilized, and the type of section can be judged more accurately.
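Why a 3D kernel captures motion where a 2D kernel cannot can be seen in a naive sketch of valid-mode 3D convolution. This is an illustration of the operation only, not the network of fig. 4:

```python
import numpy as np

def conv3d(video, kernel):
    """Valid-mode 3D convolution over a (T, H, W) volume. Because the
    kernel extends over the temporal axis, its response depends on how
    adjacent frames differ, i.e. on motion."""
    T, H, W = video.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(video[t:t+kt, i:i+kh, j:j+kw] * kernel)
    return out
```

With a temporal-difference kernel of shape (2, 1, 1) and weights [-1, +1], a static video yields an all-zero response while any moving structure produces nonzero output, which is exactly the information a frame-by-frame 2D network discards.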
The ultrasound sections that the three-dimensional convolutional neural network can classify cover most conventional cardiac ultrasound video sections, which may include, for example: the parasternal left ventricular long-axis section, the parasternal ascending aorta long-axis section, the parasternal aortic short-axis section, the parasternal left ventricular short-axis sections, and the apical two-chamber, three-chamber, and four-chamber sections.
In a possible implementation manner, in step 102, the obtaining a segmentation result by performing image segmentation on the section classification result may include the following steps:
step 10201, screening the section classification result to obtain a designated section; the designated section includes: at least one of an apical two-chamber two-dimensional section (A2C, Apical 2 Chamber), an apical three-chamber two-dimensional section (A3C, Apical 3 Chamber), and an apical four-chamber two-dimensional section (A4C, Apical 4 Chamber);
in this embodiment, according to the section category corresponding to each section video obtained in step 101, the apical two-chamber two-dimensional section, apical three-chamber two-dimensional section, and apical four-chamber two-dimensional section in each case can be automatically selected for subsequent image segmentation.
Step 10202, segmenting the designated section through the segmentation network to obtain a segmentation result; the segmentation result includes: at least one of the contours of the left ventricular endocardium, the left ventricular epicardium, the left atrial endocardium, the right ventricular endocardium, and the right ventricular epicardium.
In the embodiment of the present disclosure, the segmentation network may be a classification network, a regression network, or a combination of both. For example, the segmentation network may be a multitask convolutional neural network. FIG. 5 illustrates a block diagram of a multitask convolutional neural network model according to an embodiment of the present disclosure. As shown in fig. 5, the multitask convolutional neural network model is trained in advance on segmentation samples to obtain a trained segmentation network; the video of the designated section obtained in the previous step is input into the trained segmentation network, and segmentation is performed to obtain the contours of the left ventricular endocardium, left ventricular epicardium, left atrial endocardium, right ventricular endocardium, and right ventricular epicardium in each frame of the video. In this way, by using dilated convolution and multitask learning, the multitask segmentation network outputs a distance map and a curvature map alongside the segmentation result, effectively guiding the segmentation operation and producing a segmentation result with higher accuracy and continuity. The distance map gives, for each pixel in the image, the shortest straight-line distance to the contour of the segmented object, and the curvature map gives the degree of bending at each point on the segmented object's contour.
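The distance map that the multitask head regresses can be written out explicitly. The brute-force numpy sketch below computes the ground-truth target from a binary mask for illustration (the network learns to predict this map; it does not compute it this way, and a production pipeline would use a fast distance transform):

```python
import numpy as np

def distance_map(mask):
    """For every pixel, the shortest Euclidean distance to the contour of
    the segmented object. The contour is taken as foreground pixels with
    at least one background 4-neighbour."""
    h, w = mask.shape
    pad = np.pad(mask, 1)
    contour = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and min(pad[y, x+1], pad[y+2, x+1],
                                  pad[y+1, x], pad[y+1, x+2]) == 0:
                contour.append((y, x))
    contour = np.asarray(contour, dtype=float)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    # min distance from every pixel to every contour pixel (O(N*M), fine for a sketch)
    d = np.linalg.norm(pts[:, None, :] - contour[None, :, :], axis=2).min(axis=1)
    return d.reshape(h, w)
```

Contour pixels get distance 0 and interior pixels grow with depth, which is what gives the regression loss a smooth gradient toward the correct boundary.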
In one possible implementation manner, in step 10202, the performing, by the segmentation network, a segmentation process on the designated section to obtain a segmentation result may include:
step 1020201, preprocessing the designated section to obtain a preprocessed designated section sequence;
step 1020202, inputting the designated section sequence into the segmentation network to obtain the masks of the cardiac structures;
step 1020203, obtaining structural regions of the heart according to the mask;
and 1020204, obtaining the initial frame number of the cardiac cycle and the positions of the apex and the root of the mitral valve according to each structural region of the heart.
For example, the selected video of the designated section may be preprocessed using image processing techniques: peripheral character information irrelevant to image segmentation is removed, the pixel spacing of the images is unified, and the pixel gray levels are normalized, so as to improve the accuracy of image segmentation.
Furthermore, each picture in the screened video of the designated section is preprocessed as above and then input in sequence into the trained segmentation network (the multitask convolutional neural network); the masks of the structural regions of the heart are output by the segmentation process, and each structural region of the heart is extracted by multiplying its mask with the original picture. After all the pictures in the video of the designated section have been processed, the segmentation result of the cardiac structures over the whole video period is obtained. Thus, on the basis of completing the pixel-level classification task, the segmentation network of the embodiment of the disclosure simultaneously completes the distance and curvature regression tasks for the contour, which effectively improves the segmentation accuracy, and the segmentation result can further assist in determining the positions of the apex and the mitral valve roots.
Based on the obtained segmentation result of the cardiac structures over the whole video period, image processing techniques can further be used to determine the start frame number of a cardiac cycle and the positions of the apex and the mitral valve roots, providing a basis for the subsequent strain measurement. For example, the region corresponding to the left ventricle in each frame may be extracted from the segmentation result and the number of pixels it contains calculated; the frame numbers at which the number of pixels in the left ventricular region reaches a local maximum are then found, and two adjacent such frames are selected as one cardiac cycle. Meanwhile, the region corresponding to the left atrium can be extracted from the segmentation result, the boundary between the left ventricle and the left atrium determined with a support vector machine, and the position of the boundary adjusted according to the image size; the two intersection points of the boundary with the left atrium are taken as the positions of the mitral valve roots. The point in the left ventricular region farthest from the line joining the mitral valve roots is then identified as the apex.
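The cycle detection and apex localization steps can be sketched as follows. This is a minimal numpy illustration under simplifying assumptions: the SVM boundary step is omitted and the two mitral root points are assumed already known; function names are illustrative:

```python
import numpy as np

def cardiac_cycle(lv_pixel_counts):
    """End-diastole is where the per-frame LV pixel count peaks; two
    adjacent peaks delimit one cardiac cycle. Returns (start, end) frames."""
    c = np.asarray(lv_pixel_counts)
    # local maxima: not smaller than the previous frame, larger than the next
    peaks = [i for i in range(1, len(c) - 1) if c[i] >= c[i-1] and c[i] > c[i+1]]
    return (peaks[0], peaks[1]) if len(peaks) >= 2 else None

def apex_position(lv_points, root_a, root_b):
    """The apex is the LV point farthest from the line joining the two
    mitral-valve root points."""
    a, b = np.asarray(root_a, float), np.asarray(root_b, float)
    d = b - a
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)   # unit normal to the line
    pts = np.asarray(lv_points, float)
    dist = np.abs((pts - a) @ n)                      # perpendicular distances
    return tuple(pts[np.argmax(dist)])
```

In practice the pixel-count curve would be smoothed before peak-picking to suppress frame-to-frame segmentation noise.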
In one possible implementation manner, in step 103, the obtaining a cardiac parameter and a left ventricular myocardial strain according to the segmentation result may include:
step 10301, calculating cardiac parameters based on the segmentation result;
based on the start frame number of the cardiac cycle and the apex and mitral valve root positions obtained in step 1020204, various cardiac parameters including: ventricular septal thickness, ejection fraction, left ventricular end-diastolic diameter, left ventricular end-systolic diameter, left ventricular end-diastolic volume, left ventricular end-systolic volume, right ventricular lateral diameter, left atrial anterior posterior diameter, and the like.
Step 10302, calculating the left ventricular myocardial strain based on the segmentation result and speckle tracking.
On the basis of the segmentation result, the left ventricular myocardial strain of the three selected sections (for example, the strain of each of the 17 myocardial segments) is calculated by combining speckle tracking with the segmentation network results. For example, fig. 6 shows a schematic diagram of left ventricular myocardium segmentation according to an embodiment of the present disclosure: the left ventricular myocardium in each frame of the video is extracted using the left ventricular myocardium mask obtained by segmentation. The myocardium is divided according to the 17-segment model using the obtained apex and mitral valve root positions (as shown in fig. 6); the motion information of the feature points in each myocardial segment is obtained by the speckle tracking technique; the feature point motion is corrected with the left ventricular myocardium mask in the segmentation result (for example, after the feature point in the current frame corresponding to a feature point in the previous frame is obtained, its position is compared with the mask contour, and if the deviation exceeds a certain threshold the position is corrected onto the mask contour), reducing speckle tracking errors; and finally the strain of each segment is calculated from the tracking results.
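The per-segment strain and the mask-based correction of tracked points can be sketched as follows. Function names and the 3-pixel threshold are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def lagrangian_strain(l0, l):
    """Lagrangian strain of a myocardial segment: (L - L0) / L0, where L0
    is the segment length at end-diastole. Negative during contraction."""
    return (l - l0) / l0

def correct_tracked_point(point, contour_points, max_dist=3.0):
    """Clamp a speckle-tracked feature point back onto the segmented
    contour when it drifts farther than max_dist pixels from it."""
    p = np.asarray(point, dtype=float)
    contour = np.asarray(contour_points, dtype=float)
    d = np.linalg.norm(contour - p, axis=1)
    i = int(np.argmin(d))
    # only correct when the drift exceeds the threshold; otherwise keep the track
    return tuple(contour[i]) if d[i] > max_dist else tuple(p)
```

A segment shortening from 50 mm to 40 mm, for instance, has a Lagrangian strain of -0.2 (i.e. -20%); tracked points within the threshold of the deep-learning contour are left untouched, so the correction only intervenes when speckle tracking has clearly drifted.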
It should be noted that, in the embodiment of the present disclosure, the cardiac structure segmentation result obtained by deep learning is used to determine the left ventricular myocardium range, the endocardial contour, the apex, and the mitral valve root positions; on this basis, speckle tracking is performed, the endocardial contour obtained by the deep learning segmentation is used to correct the tracking result, and the left ventricular myocardial strain is finally calculated. By combining deep learning with speckle tracking in this way, the accuracy and reliability of the strain calculation are improved and its repeatability is stronger.
Illustratively, fig. 7 shows a flow chart of an echocardiographic heart parameter calculation and myocardial strain measurement method according to an embodiment of the present disclosure. As shown in fig. 7, a cardiac ultrasound original image is obtained, a 3D convolutional neural network is used to perform automatic section classification on the cardiac ultrasound image, a designated section is selected based on the classified image, the convolutional neural network is used to perform image segmentation and cardiac parameter calculation on the designated section, and a measurement result of myocardial strain is obtained automatically by a method combining speckle tracking and the neural network. Therefore, cardiac parameter calculation and strain calculation based on the cardiac ultrasound original image are completed automatically without manual intervention, so that the workload of doctors is effectively reduced, and the working efficiency is greatly improved.
It should be noted that, although the echocardiographic heart parameter calculation and the myocardial strain measurement method are described above by taking the above embodiments as examples, those skilled in the art will understand that the present disclosure should not be limited thereto. In fact, the user can flexibly set each implementation mode according to personal preference and/or actual application scene, as long as the technical scheme of the disclosure is met.
Therefore, the embodiment of the disclosure can perform automatic section classification processing and image segmentation on the heart ultrasonic video through the trained neural network, and further automatically obtain the measurement results of the heart parameters and the myocardial strain, thereby effectively reducing the workload of doctors and improving the working efficiency.
FIG. 8 illustrates a block diagram of an echocardiographic heart parameter calculation and myocardial strain measurement apparatus according to an embodiment of the present disclosure; the apparatus performs processing by the trained neural network, and may include: the classification module 81 is used for classifying the cardiac ultrasound video to obtain a section classification result; a segmentation module 82, configured to perform image segmentation on the section classification result to obtain a segmentation result; and the calculating module 83 is configured to obtain the cardiac parameter and the myocardial strain according to the segmentation result.
In one possible implementation, the trained neural network includes: a classification network and a segmentation network.
In one possible implementation, the classification module may include: an acquisition unit, used for acquiring an original cardiac ultrasound image; a conversion unit, used for converting each section sequence in the original ultrasound image into a section video with the same resolution and the same frame count; and a classification unit, used for inputting the section video into the classification network to obtain a section classification result.
In one possible implementation, the segmentation module may include: a screening submodule, used for screening the section classification result to obtain a designated section; the designated section includes: at least one of an apical two-chamber two-dimensional section, an apical three-chamber two-dimensional section, and an apical four-chamber two-dimensional section; and a segmentation submodule, used for segmenting the designated section through the segmentation network to obtain a segmentation result; the segmentation result includes: at least one of the contours of the left ventricular endocardium, the left ventricular epicardium, the left atrial endocardium, the right ventricular endocardium, and the right ventricular epicardium.
In one possible implementation, the calculation module may include: a cardiac parameter calculation unit, used for calculating cardiac parameters according to the segmentation result; and a myocardial strain calculation unit, used for calculating the left ventricular myocardial strain according to the segmentation result and speckle tracking.
In one possible implementation, the segmentation submodule may include: a preprocessing unit, used for preprocessing the designated section to obtain a preprocessed designated section sequence; a mask unit, used for inputting the designated section sequence into the segmentation network to obtain the masks of the cardiac structures; a heart region unit, used for obtaining the structural regions of the heart according to the masks; and an apex and mitral valve root position unit, used for obtaining the start frame number of the cardiac cycle and the positions of the apex and the mitral valve roots according to the structural regions of the heart.
In one possible implementation, the classification network is a three-dimensional convolutional neural network.
It should be noted that, although the echocardiographic heart parameter calculation and the myocardial strain measurement device are described above by taking the above-described embodiment as an example, those skilled in the art will understand that the present disclosure should not be limited thereto. In fact, the user can flexibly set each implementation mode according to personal preference and/or actual application scene, as long as the technical scheme of the disclosure is met.
Therefore, the embodiment of the disclosure can perform automatic section classification processing and image segmentation on the heart ultrasonic video through the trained neural network, and further automatically obtain the measurement results of the heart parameters and the myocardial strain, thereby effectively reducing the workload of doctors and improving the working efficiency.
Fig. 9 shows a block diagram of an apparatus 1900 for echocardiographic heart parameter calculation and myocardial strain measurement according to an embodiment of the present disclosure. For example, the apparatus 1900 may be provided as a server. Referring to fig. 9, the device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The device 1900 may also include a power component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the apparatus 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanical encoding device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.