CN116564483A - Medical image report generation method, device and computer equipment - Google Patents
Info
- Publication number
- CN116564483A (Application CN202310412339.1A)
- Authority
- CN
- China
- Prior art keywords
- picture
- medical image
- text
- text information
- report
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 68
- 238000012545 processing Methods 0.000 claims abstract description 25
- 238000004590 computer program Methods 0.000 claims description 13
- 238000005520 cutting process Methods 0.000 claims description 11
- 230000004044 response Effects 0.000 claims description 11
- 230000000007 visual effect Effects 0.000 claims description 3
- 230000008569 process Effects 0.000 abstract description 10
- 210000000629 knee joint Anatomy 0.000 description 12
- 210000004556 brain Anatomy 0.000 description 9
- 238000000605 extraction Methods 0.000 description 9
- 238000004422 calculation algorithm Methods 0.000 description 7
- 238000004891 communication Methods 0.000 description 7
- 238000013500 data storage Methods 0.000 description 7
- 238000007689 inspection Methods 0.000 description 6
- 238000006243 chemical reaction Methods 0.000 description 5
- 238000010586 diagram Methods 0.000 description 5
- 210000002414 leg Anatomy 0.000 description 4
- 210000004072 lung Anatomy 0.000 description 4
- 238000010606 normalization Methods 0.000 description 4
- 238000012706 support-vector machine Methods 0.000 description 4
- 210000001264 anterior cruciate ligament Anatomy 0.000 description 3
- 238000004364 calculation method Methods 0.000 description 3
- 238000003745 diagnosis Methods 0.000 description 3
- 210000003127 knee Anatomy 0.000 description 3
- 210000000426 patellar ligament Anatomy 0.000 description 3
- 210000002967 posterior cruciate ligament Anatomy 0.000 description 3
- 230000005856 abnormality Effects 0.000 description 2
- 230000009471 action Effects 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 2
- 210000000544 articulatio talocruralis Anatomy 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 238000002591 computed tomography Methods 0.000 description 2
- 230000001934 delay Effects 0.000 description 2
- 239000003814 drug Substances 0.000 description 2
- 238000003708 edge detection Methods 0.000 description 2
- 230000004927 fusion Effects 0.000 description 2
- 238000003384 imaging method Methods 0.000 description 2
- 210000001503 joint Anatomy 0.000 description 2
- 210000003041 ligament Anatomy 0.000 description 2
- 238000013507 mapping Methods 0.000 description 2
- 210000001370 mediastinum Anatomy 0.000 description 2
- 230000003068 static effect Effects 0.000 description 2
- OKTJSMMVPCPJKN-UHFFFAOYSA-N Carbon Chemical compound [C] OKTJSMMVPCPJKN-UHFFFAOYSA-N 0.000 description 1
- VYZAMTAEIAYCRO-UHFFFAOYSA-N Chromium Chemical compound [Cr] VYZAMTAEIAYCRO-UHFFFAOYSA-N 0.000 description 1
- 206010023215 Joint effusion Diseases 0.000 description 1
- 208000005228 Pericardial Effusion Diseases 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 210000000617 arm Anatomy 0.000 description 1
- 238000012550 audit Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 210000004204 blood vessel Anatomy 0.000 description 1
- 210000000621 bronchi Anatomy 0.000 description 1
- 210000000845 cartilage Anatomy 0.000 description 1
- 238000012790 confirmation Methods 0.000 description 1
- 238000002059 diagnostic imaging Methods 0.000 description 1
- 229940079593 drug Drugs 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 210000000416 exudates and transudate Anatomy 0.000 description 1
- 229910021389 graphene Inorganic materials 0.000 description 1
- 238000003709 image segmentation Methods 0.000 description 1
- 230000003902 lesion Effects 0.000 description 1
- 210000001165 lymph node Anatomy 0.000 description 1
- 238000002595 magnetic resonance imaging Methods 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 230000005499 meniscus Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 210000004224 pleura Anatomy 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 230000008719 thickening Effects 0.000 description 1
- 210000000779 thoracic wall Anatomy 0.000 description 1
- 230000001960 triggered effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G06T7/0014—Biomedical image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/155—Segmentation; Edge detection involving morphological operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/765—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Multimedia (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Quality & Reliability (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Public Health (AREA)
- Medical Treatment And Welfare Office Work (AREA)
Abstract
The application relates to a medical image report generation method, a medical image report generation device and computer equipment, wherein the method comprises the following steps: extracting text in a medical image picture of an inspected object to obtain picture text information; performing similarity comparison between the medical image picture and a sample picture of marked text to determine marked text information of the medical image picture; and generating a reading report based on the picture text information, the marked text information and the medical image picture. Text information is acquired through two approaches, and the text information obtained in both ways is combined with the medical image picture to jointly determine the reading report, thereby improving the accuracy and comprehensiveness of reading-report generation during report processing.
Description
Technical Field
The present invention relates to the field of computer processing technologies, and in particular, to a medical image report generating method, a medical image report generating device, and a computer device.
Background
Medical image reports are an important basis for doctors to diagnose a patient's condition, and the way they are generated has continuously evolved and improved. Early medical image reports were produced by manual input: reporting physicians composed reports based on the results of patients' imaging examinations, combined with their own expertise. A single image report typically runs from a few dozen to several hundred characters, so for hospitals with hundreds or thousands of examinations per day, the heavy workload placed on reporting physicians is evident. In addition, manual input is inefficient, delaying the time at which a patient obtains a report and, to a certain extent, delaying the diagnosis and treatment of the patient's condition. At present, it is difficult to guarantee accuracy when relying on a machine to generate reports.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a medical image report generating method, apparatus, and computer device that can ensure accuracy.
In a first aspect, a medical image report generating method is provided, the method comprising:
extracting text in a medical image picture of an inspected object to obtain picture text information;
performing similarity comparison on the medical image picture and a sample picture of the marked text to determine marked text information of the medical image picture; the marked text information comprises examination parts and focus information corresponding to the medical image picture;
and generating a reading report based on the picture text information, the marked text information and the medical image picture.
In one embodiment, generating a reading report based on the picture text information, the markup text information, and the medical image picture includes:
inquiring a corpus according to the picture text information and the marked text information to obtain a technical term;
and generating a film reading report based on the technical terms and the medical image picture.
In one embodiment, before the step of extracting text in the medical image picture of the object under examination, the method further comprises:
And performing image cutting on the medical image picture to obtain a cut medical image picture.
In one embodiment, extracting text in a medical image picture of an object to be inspected to obtain picture text information includes:
and if the cut medical image picture comprises a text, extracting the text of the cut medical image picture to obtain picture text information.
In one embodiment, performing similarity comparison between the medical image picture and a sample picture of the marked text to determine marked text information of the medical image picture includes:
if the cut medical image picture comprises a text, performing text removal processing on the cut medical image picture to obtain a text-free cut medical image picture;
and comparing the similarity between each text-free cut medical image picture and a sample picture of the marked text, and determining marked text information of the medical image picture.
In one embodiment, comparing the similarity between each text-free cut medical image picture and a sample picture of the marked text to determine marked text information of the medical image picture, including:
normalizing each text-free cut medical image picture to obtain a normalized picture;
Comparing the similarity between the normalized picture and the sample picture of the marked text in the sample library, and determining the sample picture with the highest similarity in the sample library;
and determining the marked text information of the medical image picture based on the marked text of the sample picture with the highest similarity.
In one embodiment, after the step of generating the reading report based on the technical terms, the marked text information and the medical image picture, the method further includes:
sending the film reading report to at least two user devices, so that the user devices display the film reading report;
receiving an image report fed back by user equipment, wherein the image report is generated by the user equipment in response to writing operation of a user on a displayed film reading report;
determining the similarity of any two image reports;
under the condition that the similarity of any two image reports is greater than or equal to a similarity threshold value, determining a target image report; the target image report is any one of two image reports with similarity greater than or equal to a preset similarity threshold.
In one embodiment, determining the similarity of any two image reports includes:
extracting conclusion contents in the image report, wherein the conclusion contents are other contents except the existing contents of the film reading report in the image report;
And determining the similarity of the conclusion contents of any two image reports as the similarity of any two image reports.
In a second aspect, there is provided a medical image report generating apparatus, the apparatus comprising:
the picture text information determining module is used for extracting texts in medical image pictures of the checked objects to obtain picture text information;
the marked text information determining module is used for comparing the similarity between the medical image picture and the sample picture of the marked text to determine marked text information of the medical image picture; the marked text information comprises examination parts and focus information corresponding to the medical image picture;
and the reading report generation module is used for generating a reading report based on the picture text information, the marked text information and the medical image picture.
In a third aspect, a computer device is provided comprising a memory storing a computer program and a processor implementing the steps of the above method when the processor executes the computer program.
The medical image report generation method, the medical image report generation device and the computer equipment have at least the following beneficial effects:
the method comprises the steps of extracting a text from a medical image picture, determining picture text information, wherein the text information is information carried in the picture, comparing the similarity between the medical image picture and a sample picture of a marked text, determining marked text information of the medical image picture, and generating a reading report based on the picture text information, the marked text information and the medical image picture. Text information is acquired through two methods, and a film reading report is generated by combining the text information acquired through the two approaches and the medical image picture, so that the accuracy and the comprehensiveness of film reading report generation in the report processing process are improved. Furthermore, based on the film reading report with higher accuracy, the accuracy of the finally generated report is improved.
Drawings
FIG. 1 is an application environment diagram of a method for generating a medical image report according to one embodiment;
FIG. 2 is a flow chart of a method for generating a medical image report according to one embodiment;
FIG. 3 is a second flowchart of a method for generating a medical image report according to an embodiment;
FIG. 4 is a schematic diagram of an intermediate image during processing of a medical image picture in one embodiment;
FIG. 5 is a third flow chart of a method for generating a medical image report according to one embodiment;
FIG. 6 is a flow chart of a method for generating a medical image report according to one embodiment;
FIG. 7 is a flowchart of a method for generating a medical image report according to one embodiment;
FIG. 8 is a flowchart of a method for generating a medical image report according to an embodiment;
FIG. 9 is a flow chart of a method for generating a medical image report according to one embodiment;
FIG. 10 is a flowchart illustrating a method for generating a medical image report according to one embodiment;
FIG. 11 is a block diagram showing the structure of a medical image report generating apparatus according to an embodiment;
fig. 12 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The medical image report generating method provided by the embodiments of the application can be applied to the application environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. The data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104 or located on a cloud or other network server. The server 104 acquires a medical image picture of the inspected object from the data storage system and performs text extraction on it to obtain picture text information; the medical image picture is compared for similarity with the sample pictures of marked text stored in the data storage system, so that the marked text information of the medical image picture can be determined; the server 104 then generates a reading report from the picture text information, the marked text information and the medical image picture, and outputs the reading report to the terminal 102. The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, Internet-of-Things device, portable wearable device, CT (Computed Tomography) scanner integrated with a computer, and the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a medical image report generating method is provided, which is illustrated by using the method applied to the server 104 in fig. 1 as an example, and includes the following steps:
s202, extracting texts in the medical image pictures of the checked objects to obtain picture text information.
The medical image picture may refer to a medical examination image such as a CT image or an X-ray image. The server 104 may access the HIS (Hospital Information System) and PACS (Picture Archiving and Communication System) based on a B/S network architecture to obtain the medical image picture of the inspected object. The DICOM file corresponding to the inspected object can first be obtained from the HIS system; this file contains the data of the medical image picture, and the DICOM-format data can be converted to obtain a PNG-format medical image picture for generating the reading report.
It should be noted that the medical image picture generally includes identity information (such as name and gender) of the inspected object (such as a patient), and also includes the inspection items of the inspected object (such as a brain examination or a leg examination). The medical image picture may be in PNG (Portable Network Graphics) or JPEG (Joint Photographic Experts Group) format.
The picture text information refers to text information marked in the medical image picture, and includes the examination type (such as a magnetic resonance imaging examination, CT examination or X-ray examination), the examination item (such as a brain examination or leg examination), and the like. For example, when the medical image picture is a brain CT image, the non-image area of the picture is usually marked with a word indicating the brain, such as "brain", and is also marked with the examination type, such as "CT", to indicate which region was examined and the type of examination.
S204, performing similarity comparison on the medical image picture and the sample picture of the marked text, and determining marked text information of the medical image picture.
The sample picture generally refers to a picture with a text mark stored in the data storage system, and the marked text represents the attribute type of the marked sample picture, for example the examined part, focus information and the like. The marked text information of the medical image picture can be determined using a sample picture with high similarity to the medical image picture; based on the correspondence between the two, the marked text information can include the examination part represented by the medical image picture, the corresponding focus information and the like. The examination part may include the arm, brain, knee joint, leg joint, ankle joint, and so on. For example, a sample picture representing a left knee ligament strain may be labelled "left knee" and "ligament strain" when it is stored in the data storage system. The similarity comparison may refer to comparing the outline (i.e. the structure) of the image in the cut picture with that of the image in the sample picture, to comparing the contrast of the corresponding areas of the two pictures, or to comparing the brightness of the corresponding areas; further, one of these three comparison types may be selected, or all three may be compared simultaneously. The manner or type of picture similarity comparison is merely exemplified here, and the specific implementation is not limited. The marked text information refers only to the marked text obtained for the cut picture after the similarity comparison; for example, if the similarity comparison confirms that the cut picture has a high similarity to a sample picture marked "left knee joint", the text "left knee joint" is output as the marked text information.
S206, generating a reading report based on the picture text information, the marked text information and the medical image picture. The process of generating the reading report from these three types of information may include other intermediate steps, or the report may be obtained directly through data fusion and image fusion, which is not restricted here. For example, the picture text information, the marked text information and the medical image picture can be fused according to a preset report format, and the fused document is the reading report. Alternatively, intermediate steps such as corpus querying can first be carried out based on the picture text information and the marked text information, and the result of those intermediate steps, together with the medical image picture, serves as the basis for generating the reading report.
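As a rough illustration of the direct-fusion variant described above, the following Python sketch fills a simple preset report format with the three types of information; the dataclass, its field names and the layout are assumptions made for the example, not elements of the patent.

```python
# Illustrative sketch only: fusing picture text information, marked text
# information and the medical image picture into a reading report according
# to a preset report format. Field names and the template are assumptions.
from dataclasses import dataclass

@dataclass
class ReadingReport:
    picture_text: str   # text extracted from the medical image picture
    marked_text: str    # examination part / focus info from sample matching
    image_path: str     # path to the PNG medical image picture

    def render(self) -> str:
        # A preset report format: a plain-text layout with fixed sections.
        return (
            "=== Reading Report ===\n"
            f"Examination info (picture text): {self.picture_text}\n"
            f"Examination part / focus (marked text): {self.marked_text}\n"
            f"Attached image: {self.image_path}\n"
        )

if __name__ == "__main__":
    report = ReadingReport("CT BRAIN", "brain; no obvious lesion", "patient_001.png")
    print(report.render())
```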
According to the medical image report generation method, the image text information is obtained by carrying out text extraction on the medical image, the marked text information of the medical image is determined by carrying out similarity comparison on the medical image and the sample image, the generation of the reading report is carried out by jointly acting the text information obtained by the two ways and the medical image on the basis of the medical image, and the mode of obtaining the text information in multiple ways provides more dimension data for determining the reading report, so that the accuracy and the reliability of the reading report are improved.
In one embodiment, as shown in fig. 3, the step S206 of generating the reading report based on the picture text information, the mark text information, and the medical image picture includes:
s302, inquiring a corpus according to the picture text information and the mark text information to obtain the technical terms.
The corpus stores the mapping relations between picture text information, marked text information and technical terms; by querying the corpus based on these mapping relations, the related technical terms can be determined and used for generating the subsequent reading report. The technical terms should be understood as terms of art in the medical field. For example, the picture text information may include examination sites such as the arm, brain, knee joint, leg joint or ankle joint; for each examination site, the corpus contains site terms and focus terms related to that site. For example, when the examination site is the knee joint, the related focus terms include "knee joint effusion", "increased signal in the posterior horn of the meniscus", etc., and these are output as technical terms to provide material for the generation of the subsequent reading report.
In the above embodiment, based on the correlation between the professional terms in the corpus and the picture text information and the label text information, the professional terms are screened based on the text information acquired by the picture text information and the label text information, so that the generated film reading report is more accurate.
S304, generating a film reading report based on the technical terms and the medical image picture.
The technical terms and the medical image picture can be fused according to a certain arrangement to generate the reading report. The reading report can include a plurality of fillable options, such as imaging findings and diagnostic opinions; further, the imaging findings can be composed of multiple examination positions and items of focus information, so that a reporting physician can fill in the report conveniently and accurately simply by selecting examination positions and focus information. For example, if the generated report is a knee joint image report, the report includes multiple anatomical parts under the knee-joint branch (such as the anterior cruciate ligament, posterior cruciate ligament and patellar ligament) for the reporting physician to pick, and further includes multiple items of common focus information, such as strain, tear and fracture, to pick from.
In the above embodiment, the text information of the picture is determined by performing text extraction on the medical image picture, and then the marked text information is determined by performing similarity comparison between the medical image picture and the sample picture of the marked text. Inquiring a corpus according to the picture text information and the marked text information to obtain the technical terms. Finally, generating a film reading report based on the technical terms and the medical image pictures, acquiring text information through two methods, inquiring a corpus to acquire the technical terms, providing richer information for the determination of the technical terms, and generating the film reading report by using the technical terms and the medical image pictures, thereby improving the accuracy and the comprehensiveness of the film reading report generation in the report processing process.
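A minimal Python sketch of the corpus query described in this embodiment is shown below; the corpus structure, its entries and the matching rule are assumptions for illustration only.

```python
# Illustrative sketch: querying a corpus for technical terms by examination
# part. The corpus contents and structure here are assumptions for the demo.
CORPUS = {
    "knee joint": {
        "parts": ["anterior cruciate ligament", "posterior cruciate ligament", "patellar ligament"],
        "lesions": ["strain", "tear", "fracture", "knee joint effusion"],
    },
    "lung": {
        "parts": ["bronchi", "pleura", "mediastinum"],
        "lesions": ["exudate", "space-occupying lesion"],
    },
}

def query_terms(picture_text: str, marked_text: str) -> dict:
    """Return corpus terms whose examination part appears in either text source."""
    combined = f"{picture_text} {marked_text}".lower()
    return {part: terms for part, terms in CORPUS.items() if part in combined}

print(query_terms("MRI knee joint", "left knee joint, ligament strain"))
```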
In one embodiment, before the step of extracting text in the medical image picture of the object under examination, further comprises:
and performing image cutting on the medical image picture to obtain a cut medical image picture.
For the medical image recognition processing, the medical image picture of the inspected object may first be cut; for example, the medical image picture may be cut into small pictures along the grid directions with the aid of a Canny edge detection algorithm. In a specific embodiment, the brain CT medical image picture 402 shown in fig. 4 may be cut into four quarters along the transverse and longitudinal directions according to the aspect ratio of the picture, so as to obtain the cut quarter pictures 404. The number of parts into which the medical image picture is divided may be set according to the size of the picture and the required information-extraction accuracy, and is not limited here. The specific implementation of image segmentation with the Canny edge detection algorithm is well known to those skilled in the art and will not be described here.
By cutting the image and performing steps S202 to S206 on the cut medical image pictures, the amount of calculation on a single picture can be reduced, and the processing of steps S202 to S206 can be performed on the cut pictures in parallel, which increases the processing speed and improves the efficiency of medical image report generation.
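The quarter-cutting step can be illustrated with the following Python/OpenCV sketch; the fixed 2x2 split is an assumption, and the Canny edge-detection pre-processing mentioned above is omitted for brevity.

```python
# Illustrative sketch: cutting a medical image picture into four quarters
# along the transverse and longitudinal directions. The file path is a placeholder.
import cv2  # pip install opencv-python

def cut_into_quarters(path: str):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h, w = img.shape[:2]
    return [
        img[:h // 2, :w // 2],   # top-left quarter
        img[:h // 2, w // 2:],   # top-right quarter
        img[h // 2:, :w // 2],   # bottom-left quarter
        img[h // 2:, w // 2:],   # bottom-right quarter
    ]

# for i, quarter in enumerate(cut_into_quarters("brain_ct.png")):
#     cv2.imwrite(f"quarter_{i}.png", quarter)
```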
In one embodiment, as shown in fig. 5, the step S202 of extracting text in a medical image picture of an object to be inspected to obtain picture text information includes:
s502, if the cut medical image picture comprises a text, extracting the text of the cut medical image picture to obtain picture text information.
Specifically, the server identifies each cut picture; for a picture identified as containing text information, the text part is extracted and used as the picture text information of the medical image picture. The cut pictures may include text-only pictures (e.g. 406 in fig. 4), image-only pictures, and pictures containing both text and images. In this way, when step S202 is performed, picture text information is extracted only from the pictures that include text, which reduces the amount of calculation and improves the efficiency of generating the medical image report.
In one embodiment, the process of extracting the text of the cut medical image picture to obtain the picture text information may include:
the text areas are cut according to preset intervals, and a plurality of word areas are obtained.
The word region may include chinese characters, or may include other language characters such as english characters. The preset interval may be a character length formed by a single word, or may be an interval between every two adjacent words, and the preset interval may be adjusted according to actual needs, because the character lengths of the words are different and the intervals are different, which is not limited herein.
And cutting each word area according to the connected domain to obtain different letter pictures.
Cutting by connected domain is a technique well known to those skilled in the art and will not be described here. Since the text region of a medical image picture is usually mostly English, a letter picture generally refers to a picture formed by a single letter obtained after the word in each word region is cut; in other cases, when the word region contains Chinese words, a letter picture refers to a picture formed by each Chinese character. The method is likewise applicable to other languages, which will not be repeated here.
Each letter picture is input into a support vector machine to obtain the letter text in each letter picture; the letter text is obtained through classification learned by the support vector machine.
And splicing the letter texts to obtain the picture text information.
Specifically, after each letter picture is input to the support vector machine, a corresponding letter text is obtained, and under the processing of the support vector machine, each letter text is spliced again into a complete word, so that picture text information (e.g. 410 in fig. 4) is obtained.
By performing mask processing on the cut picture and cutting and dividing the text region multiple times according to different criteria, the accuracy of determining the picture text information is ensured.
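The word-region cutting and support-vector-machine recognition can be sketched as follows in Python; the connected-component ordering, the 16x16 letter size and the assumption of an SVM already trained on letter images with string labels are illustrative choices, not details taken from the patent.

```python
# Illustrative sketch: cut a word region into per-letter pictures via connected
# components and classify each with a pre-trained SVM. Sizes and the model are
# assumptions for the example.
import cv2
import numpy as np
from sklearn.svm import SVC

def letters_from_word_region(word_img: np.ndarray):
    """word_img: 8-bit grayscale word region; returns letter crops left to right."""
    _, binary = cv2.threshold(word_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    boxes = sorted(
        (stats[i] for i in range(1, n)),        # skip background label 0
        key=lambda s: s[cv2.CC_STAT_LEFT],      # order letters left to right
    )
    return [
        cv2.resize(binary[y:y + h, x:x + w], (16, 16))
        for x, y, w, h, _ in boxes
    ]

def recognize_word(word_img: np.ndarray, svm: SVC) -> str:
    """svm is assumed to be already trained on 16x16 letter images with string labels."""
    letters = letters_from_word_region(word_img)
    feats = np.array([l.reshape(-1) / 255.0 for l in letters])
    return "".join(svm.predict(feats))          # splice the letter texts back together
```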
In one embodiment, the cut picture may be masked, and the process of locating text regions in the cut picture may include:
and determining the area of the text in the cut picture. Specifically, in one embodiment, as shown in fig. 4, a sliding window process is performed on each cut picture, so as to obtain an approximate area of the text display portion in the cut picture (e.g., 406).
Then, binarization and dilation are performed on the area where the text is located to obtain the outline of the text display part. Specifically, as shown in fig. 4, within the approximate area determined above, the exact outline of the text display part (as at 408 in fig. 4) can be obtained by binarizing the area and applying at least three dilation operations.
And finally, taking the outline of the text display part as a mask, and extracting the text region in the cut picture.
In this embodiment, the exact outline of the text display part is obtained through binarization and repeated dilation, which ensures the determination of the text area and the accuracy of text-information extraction, and thus the accuracy of the picture text information.
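A minimal sketch of the binarization-and-dilation masking is given below, assuming OpenCV, a 3x3 kernel and exactly three dilation passes; these parameters are illustrative.

```python
# Illustrative sketch: locate the text display area of a cut picture by
# binarization followed by repeated dilation, then use the result as a mask.
import cv2
import numpy as np

def text_mask(cut_img: np.ndarray) -> np.ndarray:
    """cut_img: 8-bit grayscale cut picture; returns a 0/255 mask of the text area."""
    _, binary = cv2.threshold(cut_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    return cv2.dilate(binary, kernel, iterations=3)   # at least three dilations

def extract_text_region(cut_img: np.ndarray) -> np.ndarray:
    """Keep only the masked text area (everything else set to zero)."""
    return cv2.bitwise_and(cut_img, cut_img, mask=text_mask(cut_img))
```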
In one embodiment, as shown in fig. 6, the step S204 of comparing the similarity between the medical image picture and the sample picture of the marked text to determine the marked text information of the medical image picture includes:
S602, if the cut medical image picture includes text, performing text-removal processing on the cut medical image picture to obtain a text-free cut medical image picture. The text-free cut medical image pictures are pictures in which the image parts remain after the text is removed; they are still only parts of the original complete medical image picture produced by the cutting process described above. The cut picture can be processed using the outline of the text display part as a mask to obtain the cut medical image picture without text; masking the cut picture delimits the approximate area of the text display part so that the text area of the cut picture can be located and removed.
And S604, comparing the similarity between each text-free cut medical image picture and a sample picture of the marked text, and determining marked text information of the medical image picture.
The cut medical image picture is subjected to text removal processing to obtain a text-free cut medical image picture, the text-free cut medical image picture is compared with a sample picture of a marked text based on picture similarity, for example, picture feature similarity comparison can be specifically performed, and marked text information of the medical image picture is determined. By the method, the influence of texts on the medical image pictures can be avoided, and the accuracy and reliability of extraction of the marked text information are improved.
Optionally, the similarity comparison between each text-free cut medical image picture and the sample picture of the marked text may be performed by first splicing each text-free cut medical image picture, and then performing similarity comparison between the spliced text-free medical image picture and the sample picture of the marked text.
In one embodiment, as shown in fig. 7, the step S604 of comparing the similarity between each text-free cut medical image picture and the sample picture of the marked text to determine the marked text information of the medical image picture includes:
s702, carrying out normalization processing on each text-free cut medical image picture to obtain a normalized picture. As shown in fig. 4, the normalization process is to re-stitch the text-free cut medical image into a complete medical image (e.g. 412 in fig. 4).
S704, comparing the similarity between the normalized picture and the sample pictures of marked text in the sample library, and determining the sample picture with the highest similarity in the sample library. The comparison types that can be used for the picture similarity comparison are as described in the above embodiments and are not repeated here; further, in order to improve the accuracy of the similarity comparison, a structural similarity algorithm may be introduced to quantify the comparison results and determine the sample picture with the highest similarity.
S706, determining the marked text information of the medical image picture based on the marked text of the sample picture with the highest similarity. The marked text information is the text marked by the sample picture with the highest similarity.
Specifically, based on the determined sample picture with the highest similarity, the corresponding mark text can be directly obtained, and further the mark text information is obtained.
In the above embodiment, the cut medical image picture with the image portion is normalized, that is, the cut medical image picture is spliced again to obtain the complete image, and the similarity comparison is performed based on the structural similarity algorithm, and the sample picture with the highest similarity is determined based on the similarity comparison result, so that the marking text information is determined, and the accuracy of the marking text information is ensured.
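The normalization (re-stitching) and structural-similarity comparison can be sketched as follows; the 512x512 normalized size, the dictionary-shaped sample library and the use of scikit-image's SSIM implementation are assumptions made for the example.

```python
# Illustrative sketch: re-stitch text-free quarters into a normalized picture
# and find the most similar labelled sample with SSIM.
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim  # pip install scikit-image

def normalize(quarters):
    """quarters: [top_left, top_right, bottom_left, bottom_right] grayscale arrays."""
    top = np.hstack(quarters[:2])
    bottom = np.hstack(quarters[2:])
    return cv2.resize(np.vstack([top, bottom]), (512, 512))

def best_match(normalized: np.ndarray, sample_library: dict):
    """sample_library: {marked_text: 512x512 grayscale sample picture}."""
    scores = {text: ssim(normalized, img, data_range=255)
              for text, img in sample_library.items()}
    marked_text = max(scores, key=scores.get)
    return marked_text, scores[marked_text]
```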
In one embodiment, before the step of extracting text in the medical image picture of the inspected object to obtain the picture text information, the method further includes:
and acquiring a medical image file of the checked object, and converting the medical image file into a medical image picture.
The medical image file refers to a medical image taken by medical imaging equipment and stored in DICOM (Digital Imaging and Communications in Medicine) format; for reasons of format compatibility, the DICOM-format medical image file usually needs to be converted into a PNG-format medical image picture before the picture is processed.
The process of acquiring the medical image file may be performed in response to a user writing a report triggering operation.
For example, after a user logs into the system through a browser (Chrome or Edge), the system presents the patients awaiting reports in the form of a list. After the user selects a patient's information on the interface and clicks a "write report" button (the write-report trigger operation can, of course, also be triggered by voice or other operation modes), the server acquires the medical image files from the HIS and PACS systems, obtains the medical image pictures, and acquires the text information in the medical image pictures using the two methods, so as to obtain more reliable key information based on the text information acquired through the two approaches and guide the generation of the reading report. The medical image pictures, the reading report, the report and the like can be displayed on the browser page operated by the user. The user can modify, adjust and save the automatically generated reading report and report on the browser page, and the modified content can be stored in the server's database so that the data can be traced and viewed.
Since a DICOM-format file is usually a sequence, when PNG format conversion is performed, a small number of files are extracted from the sequence by interval sampling (for example, one file out of every 20) and converted to PNG, which reduces the processing workload.
In the embodiment, autonomous format conversion can be realized, so that the system compatibility is improved.
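A minimal sketch of the interval-sampled DICOM-to-PNG conversion, assuming pydicom and Pillow, a *.dcm directory layout, a sampling step of 20 and simple min-max scaling instead of clinical windowing:

```python
# Illustrative sketch: convert every 20th file of a DICOM sequence to PNG.
# Directory layout, the sampling step and the scaling are assumptions.
from pathlib import Path
import numpy as np
import pydicom                      # pip install pydicom
from PIL import Image               # pip install pillow

def dicom_sequence_to_png(dicom_dir: str, out_dir: str, step: int = 20):
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    files = sorted(Path(dicom_dir).glob("*.dcm"))
    for i, f in enumerate(files[::step]):                    # interval sampling
        pixels = pydicom.dcmread(str(f)).pixel_array.astype(np.float32)
        # Min-max scale to 8-bit; a clinical implementation would apply the
        # appropriate window width/level instead.
        rng = float(pixels.max() - pixels.min()) or 1.0
        scaled = (pixels - pixels.min()) / rng * 255.0
        Image.fromarray(scaled.astype(np.uint8)).save(Path(out_dir) / f"slice_{i:03d}.png")
```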
In one embodiment, the film reading report can be directly used as an output text for a user to view, or the film reading report can be sent to a doctor for further writing, so that an image report written by the doctor is obtained, and the image report is output for the user to view.
In one embodiment, as shown in fig. 8, after the step of generating the reading report based on the picture text information, the mark text information, and the medical image picture, the method further includes:
s802, sending the film reading report to at least two user devices, and enabling the user devices to display the film reading report.
The reading report is sent to randomly selected user devices; the user equipment may be a personal computer, a notebook computer, a smartphone, a tablet computer, or the like.
S804, receiving an image report fed back by the user equipment, wherein the image report is generated by the user equipment in response to the writing operation of the user on the displayed film reading report.
Specifically, after the user equipment receives the above reading report and the reporting doctor finishes writing the report on it, the user equipment feeds the image report back to the server; the server receives the fed-back image report and stores it in the data storage system.
S806, determining the similarity of any two image reports.
S808, if the similarity of any two image reports is greater than or equal to a similarity threshold, determining a target image report; the target image report is any one of two image reports with similarity greater than or equal to a preset similarity threshold.
By comparing the similarity of the image reports and, when the similarity is greater than or equal to the similarity threshold, selecting one of the two image reports as the target image report and outputting it, the reliability of the image report can be ensured.
In one embodiment, if the similarity of any two image reports is smaller than the similarity threshold, step S802 is performed until the similarity of any two image reports is greater than or equal to the similarity threshold, and then the target image report is determined.
In one embodiment, when the similarity of any two image reports is smaller than the similarity threshold, the following is performed iteratively: the image reports whose similarity is smaller than the similarity threshold are sent to user equipment that has not yet fed back an image report, and the modified image report fed back by that user equipment is received, until the similarity of any two image reports is greater than or equal to the similarity threshold; the modified image report is generated by the user equipment in response to a writing operation performed by the user on the displayed image report.
Specifically, when the similarity of the image reports written by two reporting doctors is smaller than the similarity threshold, the diagnostic conclusions of the image reports deviate significantly from each other; to avoid misdiagnosis, a third reporting doctor needs to write an image report, and similarity calculation is performed over the three reports until the similarity of any two image reports is greater than or equal to the similarity threshold.
In the above embodiment, when two image reports have low similarity, a third image report is generated and the similarity between any two reports is recalculated; the target image report is not output until the similarity between any two reports is greater than or equal to the similarity threshold, thereby ensuring the accuracy of the image report.
As can be seen from the above embodiments, the image report may be a report including options related to the examination location and options related to the lesion information of the examined object, and the reporting physician may select or write a content matching with the medical image picture according to the text and the medical image picture displayed on the reading report, and this content may be understood as a conclusive content, so as to form the image report.
In order to increase the efficiency of similarity comparison of image reports, in one embodiment, determining the similarity of any two image reports includes:
and extracting conclusion contents in the image report. The conclusive content is other content in the image report than the existing content of the reading report. The conclusive content refers to content for reflecting the result of the inspection item of the inspected object. For example, for lung CT, the contents of qualitative conclusion under examination, such as "clear double lung texture, no abnormality in trend distribution, no exudate and space occupation in lung parenchyma", mediastinum window showing no enlargement of mediastinum lymph node, no enlargement of double lung portals, smooth bronchi, normal blood vessel, no thickening of pleura, no abnormality in rib and chest wall cartilage, no enlargement of heart shadow, no pericardial effusion "are conclusive contents. There are various implementations of extracting conclusive content, for example, by comparing the image report with the film-reading report, and the content newly added in the image report is used as the conclusive content, so for the report writing mode under the hook mode, for example, the similarity comparison of the image reports may refer to comparing whether the options that are hooked in the generated at least two image reports are the same. For another example, a conclusion writing area can be set on the film reading report, and the content of the area in the image report can be directly extracted, so that conclusion content can be obtained.
And determining the similarity of the conclusion contents of any two image reports as the similarity of any two image reports.
In the above embodiment, the target image report is generated based on the result of the similarity comparison of the at least two image reports that are fed back, so as to ensure the accuracy of the target image report.
In one embodiment, step S806 of determining the similarity of any two image reports may include calculating the similarity between any two image reports with a cosine similarity algorithm; the specific algorithm flow is well known to those skilled in the art and will not be described here.
The setting of the similarity threshold may be selected according to the actual accuracy requirement, which is not limited herein.
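The cosine-similarity comparison of conclusive contents can be sketched as follows; the TF-IDF representation and the 0.85 threshold value are assumptions for illustration.

```python
# Illustrative sketch: cosine similarity between the conclusive contents of
# two image reports, using TF-IDF vectors. The threshold value is an assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def report_similarity(conclusion_a: str, conclusion_b: str) -> float:
    vectors = TfidfVectorizer().fit_transform([conclusion_a, conclusion_b])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

SIMILARITY_THRESHOLD = 0.85   # assumed value; choose per accuracy requirements
a = "Knee joint effusion, anterior cruciate ligament strain."
b = "Anterior cruciate ligament strain with knee joint effusion."
print(report_similarity(a, b) >= SIMILARITY_THRESHOLD)
```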
In one embodiment, the report authoring page supports corpus quick selection and retrieval functionality in addition to displaying automatically generated report information. For example, text related to the key information is updated in response to a corpus information selection operation for the corpus information selection interface.
For example, when the key information "knee joint" is identified according to the medical image picture, terms related to the knee joint may be displayed in a tree structure on a page, such as: knee joint- > (anterior cruciate ligament, posterior cruciate ligament, patellar ligament, …) - > (strain, tear, break) for quick selection by the user. If the corpus information (text related to the key information) wanted by the user does not exist in the current tree structure, the user can search the wanted corpus information in the corpus through a search column on the page. For selected corpus information, it is automatically appended to the reading report, for example, to the back of the generated reading report content.
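A minimal sketch of the tree-structured corpus and its search function is shown below, with entries mirroring the knee-joint example above; the data and the matching rule are illustrative assumptions.

```python
# Illustrative sketch: a tree-structured corpus for quick selection and a
# simple keyword search over it. The entries are assumptions for the demo.
CORPUS_TREE = {
    "knee joint": {
        "anterior cruciate ligament": ["strain", "tear", "break"],
        "posterior cruciate ligament": ["strain", "tear", "break"],
        "patellar ligament": ["strain", "tear", "break"],
    },
}

def search_corpus(keyword: str):
    """Return (part, sub-part, lesion) paths whose text contains the keyword."""
    hits = []
    for part, subparts in CORPUS_TREE.items():
        for subpart, lesions in subparts.items():
            for lesion in lesions:
                path = (part, subpart, lesion)
                if any(keyword in node for node in path):
                    hits.append(path)
    return hits

print(search_corpus("cruciate"))
```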
In one embodiment, as shown in fig. 9, after the step of generating the reading report based on the picture text information, the mark text information, and the medical image picture, the method further includes:
s902, displaying a film reading report.
S904, the reading report is updated in response to the adjustment action for the reading report.
Specifically, the reading report generated based on the picture text information, the marking text information and the medical image picture may not be fully suitable for each diagnosis analysis requirement or not match with the habit of the user, for example, the server identifies that the checked part corresponding to the medical image picture of a certain checked object is a knee joint, and other human body parts (such as anterior cruciate ligament, posterior cruciate ligament and patellar ligament) under the knee joint branch and focus options (such as strain, tear and fracture) of the part are displayed at this time, so that the reporting doctor can quickly select to correct the reading report. Further, the reporting physician may also add terms in the corpus to the reading report by way of manual addition to revise the reading report.
In one embodiment, sending a reading report to at least two user devices includes:
in response to a confirmation operation for the reading report, the reading report is sent to at least two user devices.
In this embodiment, the generated reading report is initially confirmed before being sent to the user equipment, which avoids errors in the subsequent image reports caused by problems at the server side and ensures the accuracy of the subsequent image reports.
In one embodiment, as shown in fig. 10, the method further comprises:
s1002, sending a target image report to auditing equipment.
S1004, sending a report-collection prompt message to the device of the inspected object upon receiving feedback information from the auditing device indicating that the audit has passed.
Specifically, after the target image report is generated, it is sent to the auditing device, and an auditing doctor audits the report to judge whether misdiagnosis or similar problems exist. If no misdiagnosis or similar problems are found, the audit is considered passed, and a report-collection prompt message is sent to the patient's device to remind the patient to collect the image report on time.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a medical image report generating device for realizing the medical image report generating method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitations in the embodiments of the device for generating a medical image report provided below may be referred to the above limitations of the method for generating a medical image report, which are not repeated here.
In one embodiment, as shown in fig. 11, there is provided a medical image report generating apparatus including:
the picture text information determining module 1102 is configured to extract text in a medical image picture of an object to be inspected to obtain picture text information;
the marked text information determining module 1104 is configured to compare the similarity between the medical image picture and the sample picture of the marked text, and determine marked text information of the medical image picture; the marked text information comprises examination parts and focus information corresponding to the medical image picture;
the reading report generating module 1106 is configured to generate a reading report based on the technical terms, the marked text information, and the medical image picture.
In one embodiment, the reading report generation module 1106 includes:
the corpus inquiring unit is used for inquiring the corpus according to the picture text information and the marked text information to obtain the technical terms;
and the reading report generating unit is used for generating a reading report based on the professional terms and the medical image picture.
In one embodiment, the medical image report generating apparatus further comprises:
and the image cutting module is used for cutting the image of the medical image picture to obtain the cut medical image picture.
In one embodiment, the picture text information determination module 1102 includes:
and the cutting picture text extraction unit is used for extracting the text of the cut medical image picture to obtain picture text information when the cut medical image picture comprises the text.
In one embodiment, the tag text information determination module 1104 includes:
the text removing unit is used for performing text removing treatment on the cut medical image picture when the cut medical image picture comprises text, so as to obtain a text-free cut medical image picture;
the text image marking text information obtaining unit is used for comparing the similarity between each text-free cut medical image picture and the sample picture of the marked text, and determining the marking text information of the medical image picture.
In one embodiment, the marked text information obtaining unit for text-removed pictures includes:
the normalization unit is used for carrying out normalization processing on each text-free cut medical image picture to obtain normalized pictures;
the highest similarity picture determining unit is used for comparing the similarity between the normalized picture and the sample picture of the marked text in the sample library, and determining the sample picture with the highest similarity in the sample library;
and the high-reliability marked text information determining unit is used for determining the marked text information of the medical image picture based on the marked text of the sample picture with the highest similarity.
In one embodiment, the apparatus further comprises:
the image conversion module is used for acquiring a medical image file of the checked object and converting the medical image file into a medical image.
In one embodiment, the apparatus further comprises:
the film reading report sending module is used for sending film reading reports to at least two user equipment so that the user equipment can display the film reading reports;
the image report receiving module is used for receiving an image report fed back by the user equipment, where the image report is generated by the user equipment in response to a writing operation of the user on the displayed film reading report;
the image report similarity comparison module is used for determining the similarity of any two image reports;
and the target image report determining module is used for determining a target image report when the similarity of any two image reports is greater than or equal to a preset similarity threshold; the target image report is either one of the two image reports whose similarity is greater than or equal to the preset similarity threshold.
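Only the final selection step of the workflow above is sketched here: once the image reports fed back by the user equipments are in hand, every pair is compared and either report of the first pair reaching the threshold is taken as the target image report. The similarity callable and the 0.85 threshold are illustrative assumptions.

```python
from itertools import combinations
from typing import Callable, List, Optional

def select_target_report(image_reports: List[str],
                         similarity: Callable[[str, str], float],
                         threshold: float = 0.85) -> Optional[str]:
    """Return a target image report when any two reports are similar enough."""
    for report_a, report_b in combinations(image_reports, 2):
        if similarity(report_a, report_b) >= threshold:
            return report_a        # either report of the qualifying pair may serve
    return None                    # no pair reached the preset similarity threshold
```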
In one embodiment, the image report similarity comparison module includes:
the conclusion content extraction unit is used for extracting the conclusion content in the image report, where the conclusion content is the content of the image report other than the existing content of the film reading report;
and the conclusion similarity comparison unit is used for determining the similarity of the conclusion contents of any two image reports as the similarity of the two image reports.
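A hedged sketch of the two units above: the conclusion content is taken to be the lines a physician added on top of the displayed film reading report, and the similarity of two conclusions is measured with difflib's sequence ratio. Neither the line-based extraction nor the metric is prescribed by the application.

```python
import difflib

def conclusion_content(image_report: str, reading_report: str) -> str:
    """Keep only the lines of the image report that are not already in the reading report."""
    existing = {line.strip() for line in reading_report.splitlines()}
    added = [line for line in image_report.splitlines()
             if line.strip() and line.strip() not in existing]
    return "\n".join(added)

def conclusion_similarity(report_a: str, report_b: str, reading_report: str) -> float:
    """Similarity of the conclusion contents of two image reports (illustrative metric)."""
    a = conclusion_content(report_a, reading_report)
    b = conclusion_content(report_b, reading_report)
    return difflib.SequenceMatcher(None, a, b).ratio()
```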
In one embodiment, the apparatus further includes:
the film reading report display module is used for displaying film reading reports;
and the reading report updating module is used for updating the reading report in response to an adjustment operation on the reading report.
In one embodiment, the reading report sending module includes:
and a post-acknowledgement report transmitting unit configured to transmit the film reading report to at least two user equipments in response to an acknowledgement operation for the film reading report.
In one embodiment, the apparatus further includes:
the auditing and transmitting module is used for transmitting the target image report to auditing equipment;
and the reminding module is used for sending a report receiving prompt message to the equipment of the inspected object in a case where feedback information indicating that the target image report has passed the audit of the auditing equipment is received.
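As a final illustrative sketch of the auditing and reminding modules above, the callables and the literal feedback value "passed" are assumptions; the application only requires that the prompt message be sent after the audit feedback indicates a pass.

```python
from typing import Callable

def audit_and_notify(target_report: str,
                     send_to_audit_device: Callable[[str], str],
                     notify_subject_device: Callable[[str], None]) -> bool:
    """Forward the target image report for auditing; notify the inspected object's
    device only when the feedback indicates the audit has passed."""
    feedback = send_to_audit_device(target_report)      # e.g. "passed" or "rejected"
    if feedback == "passed":
        notify_subject_device("Your imaging report has been approved and is ready for collection.")
        return True
    return False
```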
The respective modules in the above medical image report generating apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. Each of the above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or may be stored in a memory in the computer device in software form, so that the processor can invoke and execute the operations corresponding to the above modules. The medical image report generating apparatus may further include other modules or units for performing other steps in the medical image report generating method embodiments.
In one embodiment, a computer device is provided, which may be a server, and whose internal structure may be as shown in fig. 12. The computer device includes a processor, a memory, an input/output (I/O) interface and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing report information data. The input/output interface of the computer device is used for exchanging information between the processor and an external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a medical image report generation method.
It will be appreciated by those skilled in the art that the structure shown in fig. 12 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method of the above embodiments when the computer program is executed.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method in the above embodiments.
In an embodiment a computer program product is provided comprising a computer program which, when executed by a processor, implements the steps of the method of the above embodiments.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer readable storage medium; when executed, the computer program may include the steps of the method embodiments described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational databases and non-relational databases; the non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to fall within the scope of this description.
The above embodiments only express several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.
Claims (10)
1. A method of medical image report generation, the method comprising:
extracting text in a medical image picture of an inspected object to obtain picture text information;
performing similarity comparison on the medical image picture and a sample picture of the marked text to determine marked text information of the medical image picture; the marked text information comprises examination parts and focus information corresponding to the medical image picture;
and generating a film reading report based on the picture text information, the marked text information and the medical image picture.
2. The method of claim 1, wherein the generating a film reading report based on the picture text information, the marked text information and the medical image picture comprises:
inquiring a corpus according to the picture text information and the marked text information to obtain a technical term;
and generating the reading report based on the technical term and the medical image picture.
3. The method of claim 1, further comprising, prior to the step of extracting text in the medical image picture of the inspected object:
and performing image cutting on the medical image picture to obtain a cut medical image picture.
4. A method according to claim 3, wherein the extracting text in the medical image picture of the inspected object to obtain picture text information comprises:
and if the cut medical image picture comprises text, extracting the text from the cut medical image picture to obtain the picture text information.
5. The method of claim 3, wherein the comparing the similarity of the medical image picture to the sample picture of marked text to determine the marked text information of the medical image picture comprises:
if the cut medical image picture comprises text, performing text removal processing on the cut medical image picture to obtain a text-free cut medical image picture;
and comparing the similarity between each text-free cut medical image picture and a sample picture of the marked text, and determining marked text information of the medical image picture.
6. The method of claim 5, wherein comparing the similarity of each text-free cut medical image picture to a sample picture of marked text to determine marked text information for the medical image picture, comprises:
normalizing each text-free cut medical image picture to obtain a normalized picture;
comparing the similarity between the normalized picture and a sample picture of a marked text in a sample library, and determining a sample picture with highest similarity in the sample library;
and determining the marked text information of the medical image picture based on the marked text of the sample picture with the highest similarity.
7. The method of any one of claims 1-6, wherein after the step of generating the film reading report based on the picture text information, the marked text information, and the medical image picture, the method further comprises:
sending the film reading report to at least two user devices, so that the user devices display the film reading report;
receiving an image report fed back by the user equipment, wherein the image report is generated by the user equipment in response to writing operation of a user on a displayed film reading report;
determining the similarity of any two image reports;
if the similarity of any two image reports is greater than or equal to a preset similarity threshold, determining a target image report; wherein the target image report is either one of the two image reports whose similarity is greater than or equal to the preset similarity threshold.
8. The method of claim 7, wherein said determining the similarity of any two of said image reports comprises:
extracting conclusion content in the image report, wherein the conclusion content is the content of the image report other than the existing content of the film reading report;
and determining the similarity of the conclusion contents of any two image reports as the similarity of the any two image reports.
9. A medical image report generating apparatus, the apparatus comprising:
the picture text information determining module is used for extracting text in a medical image picture of an inspected object to obtain picture text information;
the marked text information determining module is used for comparing the similarity between the medical image picture and the sample picture of the marked text to determine marked text information of the medical image picture; the marked text information comprises examination parts and focus information corresponding to the medical image picture;
and the reading report generation module is used for generating a film reading report based on the picture text information, the marked text information and the medical image picture.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 8 when the computer program is executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310412339.1A CN116564483A (en) | 2023-04-14 | 2023-04-14 | Medical image report generation method, device and computer equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310412339.1A CN116564483A (en) | 2023-04-14 | 2023-04-14 | Medical image report generation method, device and computer equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116564483A true CN116564483A (en) | 2023-08-08 |
Family
ID=87487044
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310412339.1A Pending CN116564483A (en) | 2023-04-14 | 2023-04-14 | Medical image report generation method, device and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116564483A (en) |
- 2023-04-14 CN CN202310412339.1A patent/CN116564483A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Azizi et al. | Robust and data-efficient generalization of self-supervised machine learning for diagnostic imaging | |
US10929420B2 (en) | Structured report data from a medical text report | |
US11176188B2 (en) | Visualization framework based on document representation learning | |
Beddiar et al. | Automatic captioning for medical imaging (MIC): a rapid review of literature | |
JP6749835B2 (en) | Context-sensitive medical data entry system | |
RU2711305C2 (en) | Binding report/image | |
JP7258772B2 (en) | holistic patient radiology viewer | |
CN109460756B (en) | Medical image processing method and device, electronic equipment and computer readable medium | |
US20230154593A1 (en) | Systems and methods for medical data processing | |
CN112530550A (en) | Image report generation method and device, computer equipment and storage medium | |
CN113656706A (en) | Information pushing method and device based on multi-mode deep learning model | |
US10650923B2 (en) | Automatic creation of imaging story boards from medical imaging studies | |
US10235360B2 (en) | Generation of pictorial reporting diagrams of lesions in anatomical structures | |
US20230005580A1 (en) | Document creation support apparatus, method, and program | |
JP7504987B2 (en) | Information processing device, information processing method, and information processing program | |
WO2023274599A1 (en) | Methods and systems for automated follow-up reading of medical image data | |
US20240119750A1 (en) | Method of generating language feature extraction model, information processing apparatus, information processing method, and program | |
US20230420096A1 (en) | Document creation apparatus, document creation method, and document creation program | |
US12094584B2 (en) | Document creation support apparatus, document creation support method, and program | |
JP2024012644A (en) | Information saving device, method, and program, and analysis record generation device, method, and program | |
CN116564483A (en) | Medical image report generation method, device and computer equipment | |
CN115985506A (en) | Information extraction method and device, storage medium and computer equipment | |
KR102553060B1 (en) | Method, apparatus and program for providing medical image using spine information based on ai | |
US20240296934A1 (en) | Information processing apparatus, information processing method, and program | |
CN118823790A (en) | Interaction method, device, computer equipment, storage medium and program product for medical image and image report |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |