WO2018070285A1 - Image processing device and image processing method - Google Patents
- Publication number
- WO2018070285A1 (PCT/JP2017/035787; JP2017035787W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- learning
- image feature
- image processing
- processing apparatus
- Prior art date
Links
- 238000003672 processing method Methods 0.000 title claims description 22
- 230000003902 lesion Effects 0.000 claims abstract description 224
- 238000000605 extraction Methods 0.000 claims abstract description 57
- 206010028980 Neoplasm Diseases 0.000 claims description 106
- 201000011510 cancer Diseases 0.000 claims description 106
- 230000036210 malignancy Effects 0.000 claims description 106
- 238000013527 convolutional neural network Methods 0.000 claims description 19
- 239000000284 extract Substances 0.000 claims description 11
- 230000004044 response Effects 0.000 claims description 6
- 238000000034 method Methods 0.000 abstract description 33
- 230000008569 process Effects 0.000 abstract description 17
- 238000011176 pooling Methods 0.000 description 10
- 238000003745 diagnosis Methods 0.000 description 7
- 238000010586 diagram Methods 0.000 description 7
- 238000009825 accumulation Methods 0.000 description 6
- 230000006870 function Effects 0.000 description 6
- 238000010801 machine learning Methods 0.000 description 5
- 238000001514 detection method Methods 0.000 description 3
- 238000002595 magnetic resonance imaging Methods 0.000 description 3
- 238000012706 support-vector machine Methods 0.000 description 3
- 230000002159 abnormal effect Effects 0.000 description 2
- 238000003384 imaging method Methods 0.000 description 2
- 230000002685 pulmonary effect Effects 0.000 description 2
- 210000004204 blood vessel Anatomy 0.000 description 1
- 210000000988 bone and bone Anatomy 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000002591 computed tomography Methods 0.000 description 1
- 238000004195 computer-aided diagnosis Methods 0.000 description 1
- 238000007405 data analysis Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 238000002059 diagnostic imaging Methods 0.000 description 1
- 201000010099 disease Diseases 0.000 description 1
- 208000037265 diseases, disorders, signs and symptoms Diseases 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 238000010191 image analysis Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003252 repetitive effect Effects 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
Images
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
Definitions
- the present invention relates to an image processing apparatus, and more particularly to an image processing technique for processing medical images.
- the captured three-dimensional medical image is reconstructed as a series of continuous two-dimensional cross sections, and the image is interpreted by observing these two-dimensional cross-sectional images.
- the three-dimensional resolution of the generated three-dimensional medical image is also improved, and the data size tends to increase.
- the two-dimensional cross-section generation interval described above can be made finer, and more detailed observation of the lesion appearing on the medical image is possible.
- the number of two-dimensional sections is also increasing.
- in the CT apparatus in particular, it has become possible to capture high-quality three-dimensional medical images at a low dose, and the number of CT imaging opportunities tends to increase.
- CAD (Computer Aided Detection)
- this CAD aims to use a computer, applying image processing technology, to automatically or semi-automatically detect shadows, measure their sizes, classify shadows as normal or abnormal, and distinguish the types of abnormal shadows.
- CAD aims to present shadows with a high suspicion of lesion based on image features. Since the purpose of this CAD is to prevent doctors from overlooking lesions, it is often considered desirable to present every shadow with even a slightly elevated suspicion. On the other hand, if too many shadows are presented, the burden on the doctor who must scrutinize each of them increases. Therefore, a method is needed that presents suspected lesion shadows in the form the doctor desires and reduces the doctor's burden.
- Non-Patent Document 1 proposes a method of improving CAD performance by continuously collecting diagnostic data from facilities that use a CAD system developed by machine learning and re-training the system on that data.
- a formula that defines the suspicion of a lesion is generally set using a feature amount obtained from an image, and a shadow having a high suspicion of the lesion is presented.
- because the image features of lesion shadows are highly site-specific, when the feature extraction method designed on the CAD development data set does not generalize to the image quality and findings of actual operation, the expected performance in estimating the suspicion of a lesion is not obtained.
- tuning detection accuracy to the user's wishes involves two kinds of adjustment: adjusting the threshold on the suspicion of the shadows to be presented, and adjusting which feature amounts contribute most to the suspicion calculation; in neither case can the image feature extraction processing itself be adjusted.
- An object of the present invention is to provide an image processing apparatus and an image processing method that enable adjustment of the image feature amount extraction processing for lesion shadows and reduce the burden on a user when interpreting medical images.
- to achieve the above object, the present invention provides an image processing apparatus that presents a suspected lesion area image detected from image data, comprising: an image feature label learning unit that performs learning for classifying image feature labels related to the suspected lesion area image; an image feature amount extraction unit that extracts an image feature amount of the suspected lesion area image using learning parameters of the image feature labels obtained by the learning of the image feature label learning unit; a display unit that displays the suspected lesion area image; a user input unit; and an image feature label learning update unit that updates the learning parameters in response to input from the user input unit.
- the present invention also provides a processing method of an image processing apparatus that includes a display unit and a user input unit and presents a suspected lesion area image detected from image data, in which the image processing apparatus performs learning for classifying image feature labels related to the suspected lesion area image, extracts an image feature amount of the suspected lesion area image using the learning parameters of the image feature labels obtained by the learning, displays the suspected lesion area image on the display unit, and updates the learning parameters in accordance with input from the user input unit.
- FIG. 1 is a block diagram illustrating an example of the overall configuration of an image processing apparatus according to Embodiment 1.
- FIG. 2 is a flowchart illustrating the flow of machine learning processing in the image processing apparatus according to Embodiment 1.
- FIG. 3 is a flowchart illustrating the flow of suspected lesion area malignancy estimation processing in the image processing apparatus according to Embodiment 1.
- FIG. 5 is a flowchart illustrating the flow of image feature amount extraction processing in the image processing apparatus according to Embodiment 1.
- FIG. 6 is a schematic diagram illustrating an example of a plurality of image feature labels according to Embodiment 1.
- FIG. 7 is a diagram illustrating an example of the configuration of an image label learning device for classifying image feature labels according to Embodiment 1.
- FIG. 8 is a diagram illustrating an example of image feature amount extraction processing for a suspected lesion area image according to Embodiment 1.
- FIG. 9 is an explanatory drawing showing an example of the display unit that displays image data and suspected lesion area images according to Embodiment 1, and of the user interface for selecting image feature labels and malignancy correctness information.
- FIG. 10 is a block diagram illustrating an example of the overall configuration of an image processing apparatus according to Embodiment 2.
- FIG. 11 is a flowchart illustrating the flow of image feature amount extraction processing in the image processing apparatus according to Embodiment 2.
- FIG. 12 is a diagram illustrating an example of the configuration of an image label learning device for classifying image feature labels according to Embodiment 2.
- Embodiment 1 is an example of an image processing apparatus capable of adjusting the image feature amount extraction processing for a suspected lesion area image in accordance with input from the user, so that the malignancy of a suspected lesion area detected from a medical image can be calculated with high accuracy and in the form desired by the user.
- that is, it is an embodiment of an image processing apparatus that presents a suspected lesion area image detected from image data, comprising: an image feature label learning unit that performs learning for classifying image feature labels related to the suspected lesion area image; an image feature amount extraction unit that extracts an image feature amount of the suspected lesion area image using the learning parameters of the image feature labels obtained by the learning of the image feature label learning unit; a display unit that displays the suspected lesion area image; a user input unit; and an image feature label learning update unit that updates the learning parameters of the image feature labels in response to input from the user input unit.
- it is also an embodiment of a processing method of an image processing apparatus that includes a display unit and a user input unit and presents a suspected lesion area image detected from image data, in which the image processing apparatus performs learning for classifying image feature labels related to the suspected lesion area image, extracts the image feature amount of the suspected lesion area image using the learning parameters of the image feature labels obtained by the learning, displays the suspected lesion area image on the display unit, and updates the learning parameters in accordance with input from the user input unit.
- in this embodiment, a reconstructed three-dimensional medical image obtained by a CT medical imaging apparatus is described as an example.
- however, the configuration of this embodiment can also be applied to image processing apparatuses based on data obtained by other medical imaging apparatuses.
- for example, data obtained by an MRI imaging apparatus or the like is applicable as long as it yields a three-dimensional image that can be expressed as a stack of a plurality of two-dimensional cross sections and in which lesion characteristics are considered to appear in the pixel distribution.
- the suspected lesion area in this embodiment refers to a point or region with a high suspicion of lesion, determined based on the medical knowledge of the interpreting physician, the medical evidence for the disease diagnosis, and the like.
- here, a target lesion is one that, when it appears on a medical image, is likely to be identifiable from the difference in luminance, or in the distribution of luminance values, relative to the surrounding region, that is, the region with low suspicion of lesion.
- for example, a pulmonary nodule generally appears on a CT image as a region containing many pixels whose CT values are higher than those of the surrounding air region.
- FIG. 1 is a diagram illustrating an example of a system configuration including an image processing apparatus according to the first embodiment.
- the image processing apparatus 100 includes a user input unit 10, an image feature label learning unit 21, an image feature label learning parameter storage unit 22, an image feature amount extraction unit 23, a suspected lesion area malignancy learning unit 24, a suspected lesion area malignancy estimation parameter storage unit 25, a suspected lesion area malignancy estimation unit 26, an image feature label learning update unit 28, a suspected lesion area malignancy learning update unit 29, and a display unit 11.
- the image processing apparatus 100 is configured as a normal computer: the display unit 11 is its display, the storage units are configured in its memory, and each functional block such as the image feature amount extraction unit 23 is realized by program execution on its central processing unit (CPU).
- the medical image DB 20 and the diagnostic image and suspected lesion area image 27 are realized by its external storage device or the like.
- the image feature quantity extraction unit 23 is shown as three blocks, but these are functional blocks that execute the same process of extracting the feature quantity from each input target image.
- the middle block of the three blocks corresponds to step S203 in FIG. 2, which will be described later, the right block corresponds to step S302 in FIG. 3, and the left block corresponds to step S502 in FIG.
- FIG. 2 is a flowchart showing the operation processes of extracting image feature amounts by learning image feature labels for suspected lesion area images, and of learning suspected lesion area malignancy estimation using the extracted image feature amounts. These operation processes are executed by the image feature label learning unit 21, the image feature amount extraction unit 23, and the suspected lesion area malignancy learning unit 24.
- the image feature label learning unit 21 receives information on a suspected lesion region image and the corresponding image feature label from the medical image DB 20.
- the image feature label refers to the type of image feature related to the suspected lesion area, such as the size of the area of the shadow region, the luminance shading, the presence or absence of contact with surrounding existing structures, the occurrence site, and the shape.
- FIG. 6 shows image feature labels 61 to 66 as an example of such image feature labels of the suspicious lesion area.
- the image feature label learning unit 21 performs machine learning for classifying the image feature labels 70, and generates learning parameters thereof.
- here, the CNN (Convolutional Neural Network) method, a known deep learning method, can be used.
- FIG. 7 shows an example of the configuration of a learning device (network) using the CNN method.
- in the learning process using the CNN method, repeating a convolution layer 71, which performs many image filtering operations, and a pooling layer 72, which samples from the output of the convolution layer, makes it possible to automatically generate image feature amounts that express the features of the input learning images optimally, that is, as accurately as possible, so that the images can be identified.
- the last layer of the CNN network configuration shown in FIG. 7 is an identification layer (Classification layer) 73.
- here, the probability (score) that the input image belongs to each preset type (class) is calculated and output as the result (Result) 74; that is, the input image is classified (identified).
- six types of image feature labels 61 to 66 shown in FIG. 6 are set so as to identify: large region area, small region area, sternum-contact type, high luminance, low luminance, and tubular. Settings such as the types of image feature labels and the number of types can also be changed depending on the features of the target images in the suspected lesion area.
- a CNN network is set for each class of the image feature label 70.
- the input learning images for training each CNN network are positive sample data (images belonging to that image feature label class) and negative sample data (images belonging to the other image feature label classes). That is, the image feature label learning unit 21 sets a CNN network for each class of image feature labels.
- when the learning of these networks is completed, the image feature label learning parameter storage unit 22 acquires the parameters of each convolution layer 71, pooling layer 72, and identification layer 73 from the image feature label learning unit 21, and saves them as the image feature label learning parameters.
- the image feature label learning parameters are parameters related to the configuration of the convolution and pooling layers in the known CNN method, for example, the total number of convolution and pooling layers (two each in the case of FIG. 7), or the coefficients and sizes of the convolution filters used in the convolution operations.
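As a concrete illustration of this per-class arrangement, the following is a minimal sketch in PyTorch; the patch size, channel counts, layer sizes, and label names are illustrative assumptions, not values taken from the patent.

```python
import torch
import torch.nn as nn

class FeatureLabelNet(nn.Module):
    """One binary CNN per image feature label class (cf. FIG. 7):
    two convolution + pooling stages followed by an identification layer."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),  # convolution layer 71
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer 72
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # identification layer 73: scores for {belongs, does not belong}
        self.classifier = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):            # x: (batch, 1, 64, 64) grayscale patch
        h = self.features(x)
        return self.classifier(h.flatten(1))  # class scores (Result 74)

# One network per image feature label class; positive samples are images of
# that class, negative samples are images of the other classes.
nets = {label: FeatureLabelNet() for label in
        ["large_area", "small_area", "sternum_contact",
         "high_luminance", "low_luminance", "tubular"]}
```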
- in this embodiment, the purpose of identifying the types of the image feature labels 70 is to extract the image feature amounts of highly site-specific suspected lesion area images with high accuracy and in an automatically adjustable form.
- FIG. 8 shows an example of image feature amount extraction processing of a suspected lesion area image by the image feature amount extraction unit 23.
- in step S203, the image feature quantity extraction unit 23 receives the image feature label learning parameters from the image feature label learning parameter storage unit 22, and extracts the image feature amount of the input suspected lesion area image 80 using a CNN network composed of repeated convolution layers 81 and pooling layers 82, which sample from the convolution layer outputs, followed by a classification layer 83.
- in this embodiment, the output vector of any pooling layer 82 of the CNN network, or an identification score vector obtained by concatenating the results (Result) 84, which are the identification scores of the networks of each class, may be used as the image feature amount.
- a vector formed by concatenating the pooling layer 82 output vector and the identification score vector may also be used.
- the image feature amount extraction unit 23 can identify the type of the image feature label related to the suspected lesion region image using the learning parameter.
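A sketch of this feature construction, reusing the hypothetical `nets` dictionary from the earlier sketch; the choice of pooling output, score vector, or their concatenation is the one described above.

```python
import torch

def extract_feature(nets, patch):
    """Image feature amount of a suspected lesion patch (cf. FIG. 8):
    concatenates each per-class network's last pooling-layer output with
    its identification score into a single feature vector."""
    pooled, scores = [], []
    with torch.no_grad():
        for net in nets.values():
            h = net.features(patch).flatten(1)   # pooling layer output vector
            pooled.append(h)
            s = torch.softmax(net.classifier(h), dim=1)
            scores.append(s[:, 1:2])             # score of the positive class
    # pooling-output vectors and identification-score vector, concatenated
    return torch.cat(pooled + scores, dim=1)

feature = extract_feature(nets, torch.zeros(1, 1, 64, 64))  # dummy patch
```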
- the suspected lesion malignancy learning unit 24 generates an estimated parameter of the suspected lesion malignancy using machine learning in order to calculate the malignancy of the suspected lesion region.
- the suspected lesion area malignancy learning unit 24 receives the image feature amount of the suspected lesion area image obtained by the processing of FIG. 8 from the image feature amount extraction unit 23. Further, the suspected lesion area malignancy learning unit 24 acquires malignancy information corresponding to the suspected lesion area image from the medical image DB 20.
- the suspicious lesion area malignancy learning unit 24 creates a learning device that estimates the suspicious area malignancy using the image feature amount of the suspicious area image and the corresponding malignancy information.
- the suspected lesion malignancy estimation parameter storage unit 25 receives and stores the suspected lesion malignancy estimation parameter from the suspected lesion malignancy learning unit 24.
- the lesion suspicious area malignancy estimation parameter is a parameter in a known machine learning method such as the SVM method.
- the parameter is a boundary line or a boundary surface for classification, for example, a straight line.
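A minimal sketch of this learning step using scikit-learn's SVM; the feature dimensionality and the random training data are placeholders so the example runs end to end, not values from the patent.

```python
import numpy as np
from sklearn.svm import SVC

# Feature vectors of suspected lesion area images (one row per image, e.g.
# from the feature extraction above) and the corresponding malignancy
# information from the medical image DB (1 = malignant, 0 = benign).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 38))        # placeholder feature vectors
y_train = rng.integers(0, 2, size=200)      # placeholder malignancy labels

# The fitted decision boundary plays the role of the suspected lesion area
# malignancy estimation parameters stored in storage unit 25.
svm = SVC(kernel="linear", probability=True)
svm.fit(X_train, y_train)

# Estimation: a continuous malignancy score for a new region image.
malignancy = svm.predict_proba(rng.normal(size=(1, 38)))[0, 1]
```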
- in step S301, the image feature quantity extraction unit 23 receives a diagnostic image and a suspected lesion area image 27.
- step S302 the image feature amount extraction unit 23 receives the image feature label learning parameter from the image feature label learning parameter storage unit 22, and extracts the image feature amount of the input suspicious region image using the CNN network. This is the same process as the process of FIG. 8 described in step S203.
- the suspected lesion malignancy estimation unit 26 receives the suspected lesion malignancy estimation parameter stored in the suspected lesion malignancy estimation parameter storage unit 25. Further, the suspected lesion area malignancy estimation unit 26 receives the image feature amount of the suspected lesion area image previously extracted from the image feature amount extraction unit 23. The suspected lesion area malignancy estimation unit 26 calculates the malignancy of the suspected lesion area in the suspected lesion area image using the suspected lesion area malignancy estimation parameter.
- the display unit 11 receives the diagnostic image, the suspected lesion area image 27, and the corresponding suspected lesion area malignancy, and displays them as the diagnosis result of the image processing apparatus. When there are a plurality of suspected lesion area images, they can be ranked and displayed based on the malignancy estimation result. A detailed display method will be described later using the display screen example shown in FIG.
- in step S401, the image feature label learning update unit 28 receives the displayed suspected lesion area image.
- in step S402, the image feature label learning update unit 28 receives, from the user input unit 10, the image feature label corresponding to the suspected lesion area image input by the user.
- next, the image feature label learning update unit 28 determines whether to update the image feature amount extraction processing. For example, the update may be performed when the image feature label learning update unit 28 has acquired a predetermined number of suspected lesion area images, when a predetermined accumulation period has elapsed, or in accordance with a user instruction. That is, the image feature quantity extraction unit 23 updates the image feature amount extraction processing when a predetermined number of suspected lesion area images have been acquired, when a predetermined accumulation period has elapsed for the acquired images, or when a user instruction has been input from the user input unit 10.
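A sketch of such an update trigger; the count and period thresholds are illustrative assumptions.

```python
import time
from dataclasses import dataclass, field

@dataclass
class UpdatePolicy:
    """Decides when to re-train the image feature label networks."""
    max_images: int = 100            # predetermined number of region images
    max_age_sec: float = 7 * 86400   # predetermined accumulation period
    started: float = field(default_factory=time.monotonic)
    pending: list = field(default_factory=list)

    def add(self, region_image, feature_label):
        self.pending.append((region_image, feature_label))

    def should_update(self, user_requested: bool = False) -> bool:
        return (user_requested                              # user instruction
                or len(self.pending) >= self.max_images     # enough images
                or time.monotonic() - self.started >= self.max_age_sec)
```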
- the image feature label learning update unit 28 receives the image feature label learning parameter from the image feature label learning parameter storage unit 22.
- the image feature label learning update unit 28 updates the learning parameters of the CNN network related to the image feature label.
- the image feature label learning parameter storage unit 22 receives and stores the updated image feature label learning parameter. This makes it possible to automatically adjust the image feature amount extraction process more appropriately for a new lesion-suspicious area image.
- the image feature amount extraction unit 23 can extract the image feature amount of the suspicious lesion region image again using the image feature label learning parameter updated by the image feature label learning update unit 28.
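The storage unit's role can be pictured as simple serialization of the network weights; a sketch, again assuming the hypothetical `nets` from above.

```python
import torch

# Save the updated image feature label learning parameters (the weights of
# every convolution, pooling, and identification layer) ...
torch.save({name: net.state_dict() for name, net in nets.items()},
           "label_learning_params.pt")

# ... and reload them so the extraction unit uses the updated parameters.
restored = torch.load("label_learning_params.pt")
for name, net in nets.items():
    net.load_state_dict(restored[name])
```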
- the image processing apparatus also includes a suspected lesion area malignancy learning update unit 29 that updates the suspected lesion area malignancy estimation parameters in accordance with input from the user input unit 10 for each suspected lesion area image.
- the suspected lesion malignancy learning update unit 29 receives from the user input unit 10 correctness / incorrectness information of malignancy corresponding to the displayed suspected lesion region image input by the user.
- the image feature amount extraction unit 23 receives the displayed suspected lesion area image, further receives the image feature label learning parameters from the image feature label learning parameter storage unit 22, and extracts the image feature amount of the input suspected lesion area image using the CNN network. This is the same processing as step S203.
- the lesion suspicious area malignancy learning update unit 29 receives the extracted image feature amount.
- the suspicious lesion malignancy learning update unit 29 determines whether to update the suspected lesion malignancy.
- for example, the suspected lesion area malignancy may be updated when a predetermined number of suspected lesion area images have been acquired, after a predetermined accumulation period has elapsed, or in accordance with a user instruction.
- the suspected lesion area malignancy learning update unit 29 receives the suspected lesion area malignancy estimation parameters from the suspected lesion area malignancy estimation parameter storage unit 25.
- the suspected lesion malignancy learning update unit 29 performs learning again in order to calculate a new lesion suspected region malignancy estimation parameter.
- a known online learning method may be used.
- the suspected lesion malignancy estimation parameter storage unit 25 receives and stores the updated suspected lesion malignancy estimation parameter.
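As one concrete form of such online re-learning (the patent only says a known online method may be used), here is a sketch with scikit-learn's incrementally trainable linear classifier; the mapping of the user's TP/FP/TN/FN judgement to a corrected label is an assumption for illustration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(loss="log_loss")           # supports partial_fit
model.partial_fit(rng.normal(size=(50, 38)),     # placeholder initial data
                  rng.integers(0, 2, size=50), classes=[0, 1])

def fold_in_feedback(model, feature_vec, judgement):
    """Update the malignancy estimator from one user judgement
    (TP/FN imply the region is truly malignant; FP/TN imply benign)."""
    label = 1 if judgement in ("TP", "FN") else 0
    model.partial_fit(feature_vec.reshape(1, -1), [label])
    return model   # coef_/intercept_ are the updated estimation parameters
```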
- the suspected lesion area malignancy estimation unit 26 receives the updated suspected lesion area malignancy estimation parameters, updates the malignancy estimation result for the displayed suspected lesion area image, and outputs the result to the display unit 11. That is, in the image processing apparatus of this embodiment, the suspected lesion area malignancy estimation unit 26 re-estimates the malignancy of the displayed suspected lesion area image using the estimation parameters updated by the suspected lesion area malignancy learning update unit 29, and the result is displayed again.
- FIG. 9 shows an example of a user interface 91 that displays diagnostic images, diagnosis results using the image processing apparatus 100, image feature label presentation for user input, malignancy correctness information, and the like.
- the display unit 11 receives the diagnostic image and the suspected lesion area image 27. Further, the display unit 11 receives image feature labels corresponding to the respective suspected lesion area images from the image feature amount extraction unit 23. Further, the display unit 11 receives from the suspected lesion malignancy estimation unit 26 the malignancy estimation result and the estimated score corresponding to each suspected lesion image.
- the user interface 91 includes a diagnostic image and suspected lesion area image area 92, an image feature label presentation and selection area 94, a malignancy correctness information selection area 95, and the like.
- the diagnostic image and suspected lesion area image area 92 of the user interface 91 displays the diagnostic image and the suspected lesion area images. At that time, the suspected lesion area images can be ranked and displayed using the malignancy estimation results. That is, the display unit 11 displays the image data, the suspected lesion area images, and the identification results of the image feature labels corresponding to them, and can rank the estimated malignancy scores and sort the suspected lesion area images in descending order of malignancy. In addition, the location corresponding to the selected suspected lesion area image 93 can be indicated on the diagnostic image with a mark or the like.
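A sketch of this ranked presentation; the record format (id, score, label) is a hypothetical one for illustration.

```python
def rank_regions(regions):
    """Sort suspected lesion area images in descending order of the
    estimated malignancy score before display."""
    return sorted(regions, key=lambda r: r["malignancy"], reverse=True)

display_order = rank_regions([
    {"id": 7, "malignancy": 0.31, "label": "tubular"},
    {"id": 3, "malignancy": 0.88, "label": "large_area"},
    {"id": 5, "malignancy": 0.52, "label": "high_luminance"},
])   # -> regions 3, 5, 7, most suspicious first
```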
- the image feature label presentation and selection area 94 of the user interface 91 displays image examples of the predetermined image feature labels, presents the image feature label corresponding to the selected suspected lesion area image, and allows the user to select or modify it.
- the user interface 91 can display the image of the image feature label and the identification result of the image feature label corresponding to the suspected lesion area image, and allow the user to select the correct image feature label corresponding to the suspected lesion area image. Therefore, a check box is arranged below a predetermined image feature label displayed in the image feature label presentation selection area 94.
- in the malignancy correctness information selection area 95 of the user interface 91, the correctness information (TP, FP, TN, FN) of the malignancy corresponding to the selected suspected lesion area image is displayed, and the user can select one of the four types. That is, the user interface 91 allows the user to select the correctness information corresponding to the suspected lesion area image.
- the user interface 91 can allow the user to add a new image feature label in addition to the predetermined image feature label displayed in the image feature label presentation selection area 94.
- in addition to the presented suspected lesion area images, the user interface 91 allows the user to newly add a suspected lesion area image and to select the image feature label and correctness information corresponding to that image.
- that is, the image feature label learning unit 21 of the image processing apparatus can add a new image feature label type in accordance with the image feature label newly added from the user input unit 10 for a suspected lesion area image, and can update the learning parameters accordingly.
- the user can add a new suspected lesion area image on the diagnostic image displayed in the diagnostic image and suspected lesion area image area 92, and can select the malignancy correctness information corresponding to it.
- the image processing apparatus 100 does not include a medical image capturing apparatus.
- however, the image processing apparatus 100 may include a medical image capturing apparatus, or may function as part of a medical image capturing apparatus.
- As described above, according to this embodiment, it is possible to provide an image processing apparatus and a medical imaging apparatus that enable adjustment of the image feature amount extraction processing for lesion shadows and can reduce the burden on the interpreting physician, i.e., the user, when interpreting a large amount of three-dimensional medical images.
- Embodiment 2 is an example of an image processing apparatus provided with a DB update unit, in which, in addition to the predetermined image feature labels, the user can define and add a new image feature label, and can further add suspected lesion area images corresponding to the newly added image feature label to the image DB.
- FIG. 10 shows an example of a system configuration including the image processing apparatus according to the second embodiment.
- as in Embodiment 1, the image processing apparatus 100 includes a user input unit 10, an image feature label learning unit 21, an image feature label learning parameter storage unit 22, an image feature amount extraction unit 23, a suspected lesion area malignancy learning unit 24, and the other units shown in FIG. 1, and additionally includes a medical image DB update unit 30.
- in step S601, the medical image DB update unit 30 receives the displayed suspected lesion area image.
- in step S602, the medical image DB update unit 30 receives, from the user input unit 10, the image feature label corresponding to the suspected lesion area image added by the user.
- in step S603, the suspected lesion area malignancy learning update unit 29 receives, from the user input unit 10, the correctness information of the malignancy corresponding to the displayed suspected lesion area image input by the user.
- in step S604, the medical image DB update unit 30 updates the medical image DB 20 using the displayed diagnostic image, the suspected lesion area image, and the newly added image feature label corresponding to it.
- in step S605, it is determined whether to update the image feature amount extraction processing.
- the image feature amount extraction processing may be updated when the medical image DB update unit 30 acquires a predetermined number of images. Further, the image feature amount extraction process may be updated when a predetermined accumulation period has elapsed. Further, the image feature amount extraction process may be updated in accordance with a user instruction.
- in step S606, the image feature label learning unit 21 performs machine learning for classifying the image feature labels, including the added image feature label, and generates the learning parameters. This is the same as the processing in step S202.
- the learning of the image feature labels in step S606 may use the network configuration based on the CNN method shown in FIG. 7, or the network configuration shown in FIG. 12 may be used. That is, as shown in the network configuration of FIG. 12, instead of one network per class, a single multi-class CNN network that identifies all classes of the image feature labels 120 may be set, using one stack of convolution layers 121 and pooling layers 122, an identification layer 123, and a result 124.
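A sketch of this single multi-class variant, mirroring the earlier per-class sketch; the layer sizes and the assumption of one user-added label are illustrative.

```python
import torch.nn as nn

class MultiLabelNet(nn.Module):
    """One shared CNN (cf. FIG. 12): a single convolution/pooling stack and
    one identification layer covering every image feature label class,
    instead of one binary network per label."""
    def __init__(self, n_labels: int = 7):    # 6 predefined + 1 user-added
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_labels)

    def forward(self, x):                      # x: (batch, 1, 64, 64)
        return self.classifier(self.features(x).flatten(1))
```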
- As described above, according to this embodiment, in addition to the predetermined image feature labels, the user can define and add a new image feature label for new clinical data, and can further add suspected lesion area images corresponding to the newly added image feature label to the image DB.
- the present invention is not limited to the above-described embodiments, and includes various modifications.
- the above-described embodiments have been described in detail for easy understanding of the present invention, and are not necessarily limited to those having all the configurations described.
- a part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
- each of the above-described configurations, functions, processing units, processing means, and the like may be realized by hardware by designing a part or all of them with, for example, an integrated circuit.
- Each of the above-described configurations, functions, and the like may be realized by software by interpreting and executing a program that realizes each function by the processor.
- Information such as programs, tables, and files for realizing each function can be stored in a memory, a hard disk, a recording device such as an SSD (Solid State Drive), or a recording medium such as an IC card, an SD card, or a DVD.
- Example 1: An image processing apparatus that presents a suspected lesion area image detected from image data, comprising: an image feature amount extraction unit that extracts an image feature amount of the suspected lesion area image using learning parameters of image feature labels obtained by learning for classifying the image feature labels related to the suspected lesion area image; and a suspected lesion area malignancy estimation unit that calculates the malignancy of the suspected lesion area using the image feature amount extracted by the image feature amount extraction unit and the suspected lesion area malignancy estimation parameters obtained by learning for estimating the malignancy of the suspected lesion area.
- Example 2: The image processing apparatus according to Example 1, further comprising: a display unit that displays the suspected lesion area image; a user input unit; and an image feature label learning update unit that updates the learning parameters in response to input from the user input unit.
- Example 3: The image processing apparatus according to Example 2, wherein the image feature amount extraction unit updates the image feature amount extraction processing when the image feature label learning update unit has acquired a predetermined number of suspected lesion area images, when a predetermined accumulation period has elapsed for the acquired suspected lesion area images, or when an instruction from the user is input.
- Example 4: The image processing apparatus according to Example 2, wherein the image feature label is a type of image feature related to the suspected lesion area.
- Example 5: The image processing apparatus according to Example 4, wherein the image feature label is the size of the area of the shadow region of the suspected lesion area, the luminance shading, the presence or absence of contact with surrounding existing structures, the occurrence site, or the shape.
- Example 6: An image processing method of an image processing apparatus that presents a suspected lesion area image detected from image data, wherein the image processing apparatus: performs learning for classifying image feature labels related to the suspected lesion area image; extracts an image feature amount of the suspected lesion area image using the learning parameters of the image feature labels obtained by the learning; performs learning of the suspected lesion area malignancy for estimating the malignancy of the suspected lesion area using the extracted image feature amount; and calculates the malignancy of the suspected lesion area using the suspected lesion area malignancy estimation parameters obtained by that learning.
- Example 7: The image processing method according to Example 6, wherein the image feature label, which is a type of image feature related to the suspected lesion area, is the size of the area of the shadow region, the luminance shading, the presence or absence of contact with surrounding existing structures, the occurrence site, or the shape.
- Example 8: The image processing method according to Example 6, wherein the image processing apparatus includes a display unit and a user input unit, displays the suspected lesion area image and the malignancy of the suspected lesion area on the display unit, and updates the learning parameters in response to input from the user input unit.
- Example 9: The image processing method according to Example 8, wherein the image processing apparatus updates the image feature amount extraction processing when it has acquired a predetermined number of suspected lesion area images, when a predetermined accumulation period has elapsed for the acquired suspected lesion area images, or when an instruction from the user is input.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Engineering & Computer Science (AREA)
- Radiology & Medical Imaging (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Optics & Photonics (AREA)
- Pathology (AREA)
- Physics & Mathematics (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- High Energy & Nuclear Physics (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a medical image processing device that makes adjustments, automatically and with a high degree of accuracy, to the process of extracting an image feature quantity of a lesion shadow so as to reduce the burden on a user. This image processing device presents a suspected lesion site image detected from image data captured for medical purposes. The image processing device has: an image feature label learning unit (21) for classifying an image feature label with regard to the suspected lesion site image; an image feature quantity extraction unit (23) for extracting an image feature quantity with regard to the suspected lesion site image using a learning parameter for the image feature label obtained through learning; a display unit (11) for displaying the suspected lesion site image; a user input unit (10); and an image feature label learning update unit (29) for updating the learning parameter in accordance with an input from the user input unit (10).
Description
The present invention relates to an image processing apparatus, and more particularly to an image processing technique for processing medical images.
In diagnosis using medical imaging apparatuses typified by X-ray CT (X-ray Computed Tomography) and MRI (Magnetic Resonance Imaging) apparatuses, the captured three-dimensional medical image is generally reconstructed as a series of continuous two-dimensional cross sections, and interpretation is performed by observing these two-dimensional cross-sectional images.
With the advancement of these imaging apparatuses, the three-dimensional resolution of the generated three-dimensional medical images has also improved, and data sizes tend to increase. In particular, the two-dimensional cross-section generation interval described above can be made finer, enabling more detailed observation of lesions appearing on the medical image, but as a result the number of two-dimensional sections per three-dimensional image is also increasing. In the CT apparatus especially, it has become possible to capture high-quality three-dimensional medical images at a low dose, and CT imaging opportunities also tend to increase.
For these reasons, development of a computer-aided diagnosis technology called CAD (Computer Aided Detection) is being promoted to reduce the burden on doctors and technicians when interpreting vast numbers of three-dimensional medical images and, above all, to prevent lesions from being overlooked. This CAD aims to use a computer, applying image processing technology, to automatically or semi-automatically detect shadows, measure their sizes, classify shadows as normal or abnormal, and distinguish the lesion types of abnormal shadows.
Here, we describe CAD whose purpose is to present shadows with a high suspicion of lesion based on image features. Since this CAD is intended to prevent doctors from overlooking lesions, it is often considered desirable to present every shadow with even a slightly elevated suspicion of lesion. On the other hand, if too many shadows are presented, the burden on the doctor who must scrutinize each of them increases. Therefore, a method is needed that presents suspected lesion shadows in the form the doctor desires and reduces the doctor's burden.
In order to solve these problems, for example, Non-Patent Document 1 proposes a method of improving CAD performance by continuously collecting diagnostic data from facilities that use a CAD system developed by machine learning and re-training the system on that data.
In the automatic detection of suspected lesion shadows by CAD, a formula defining the suspicion of a lesion is generally set using feature amounts obtained from the image, and shadows with a high suspicion of lesion are presented. However, because the image features of lesion shadows are highly site-specific, when the feature extraction method designed on the CAD development data set does not generalize to the image quality and findings of actual operation, the expected performance in estimating lesion suspicion is not obtained. Moreover, tuning detection accuracy to the wishes of the doctor, the user, involves two kinds of adjustment: adjusting the threshold on the suspicion of the shadows to be presented, and adjusting which feature amounts contribute most to the suspicion calculation; in neither case can the image feature extraction processing itself be adjusted.
An object of the present invention is to provide an image processing apparatus and an image processing method that enable adjustment of the image feature amount extraction processing for lesion shadows and reduce the burden on a user when interpreting medical images.
In order to achieve the above object, the present invention provides an image processing apparatus that presents a suspected lesion area image detected from image data, comprising: an image feature label learning unit that performs learning for classifying image feature labels related to the suspected lesion area image; an image feature amount extraction unit that extracts an image feature amount of the suspected lesion area image using learning parameters of the image feature labels obtained by the learning of the image feature label learning unit; a display unit that displays the suspected lesion area image; a user input unit; and an image feature label learning update unit that updates the learning parameters in response to input from the user input unit.
In order to achieve the above object, the present invention also provides a processing method of an image processing apparatus that includes a display unit and a user input unit and presents a suspected lesion area image detected from image data, in which the image processing apparatus performs learning for classifying image feature labels related to the suspected lesion area image, extracts an image feature amount of the suspected lesion area image using the learning parameters of the image feature labels obtained by the learning, displays the suspected lesion area image on the display unit, and updates the learning parameters in accordance with input from the user input unit.
According to the present invention, it becomes possible to adjust the image feature amount extraction processing for lesion shadows in the form desired by the user.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. Note that components having the same function are denoted by the same reference symbols throughout the drawings for describing the embodiment, and the repetitive description thereof will be omitted.
In this embodiment, in order to calculate the malignancy of a suspected lesion area image detected from a medical image with high accuracy and in the form desired by the user, an image processing apparatus is described that can adjust the image feature amount extraction processing for the suspected lesion area image in accordance with input from the user. That is, it is an embodiment of an image processing apparatus that presents a suspected lesion area image detected from image data, comprising: an image feature label learning unit that performs learning for classifying image feature labels related to the suspected lesion area image; an image feature amount extraction unit that extracts an image feature amount of the suspected lesion area image using the learning parameters of the image feature labels obtained by the learning of the image feature label learning unit; a display unit that displays the suspected lesion area image; a user input unit; and an image feature label learning update unit that updates the learning parameters of the image feature labels in response to input from the user input unit.
It is also an embodiment of a processing method of an image processing apparatus that includes a display unit and a user input unit and presents a suspected lesion area image detected from image data, in which the image processing apparatus performs learning for classifying image feature labels related to the suspected lesion area image, extracts the image feature amount of the suspected lesion area image using the learning parameters of the image feature labels obtained by the learning, displays the suspected lesion area image on the display unit, and updates the learning parameters in accordance with input from the user input unit.
In this embodiment, a reconstructed three-dimensional medical image obtained by a CT medical imaging apparatus is described as an example; however, the configuration of this embodiment can also be applied to image processing apparatuses based on data obtained by other medical imaging apparatuses. For example, data obtained by an MRI imaging apparatus or the like is applicable as long as it yields a three-dimensional image that can be expressed as a stack of a plurality of two-dimensional cross sections and in which lesion characteristics are considered to appear in the pixel distribution.
The suspected lesion area in this embodiment refers to a point or region with a high suspicion of lesion, determined based on the medical knowledge of the interpreting physician, the medical evidence for the disease diagnosis, and the like. Here, a target lesion is assumed to be one that, when it appears on a medical image, is likely to be identifiable from the difference in luminance, or in the distribution of luminance values, relative to the surrounding region, that is, the region with low suspicion of lesion. For example, it is generally known that a pulmonary nodule appears on a CT image as a region containing many pixels whose CT values are higher than those of the surrounding air region. Other objects containing many high-luminance pixels on a chest CT image include blood vessels and bones, but it is said that, based on the distribution shape of the high luminance values, these can be distinguished and the degree of suspicion that a region is a lesion, i.e., a pulmonary nodule in this example, can be determined.
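As a rough illustration of such luminance-based candidate extraction (not the patent's own algorithm), the following sketch thresholds a CT volume against the air level and keeps connected components; the threshold and minimum size are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def candidate_regions(ct_volume, hu_threshold=-400.0, min_voxels=20):
    """Suspected-lesion candidates: connected regions of voxels whose CT
    value is well above the surrounding air (about -1000 HU)."""
    mask = ct_volume > hu_threshold          # pixels brighter than air
    labels, n = ndimage.label(mask)          # 3D connected components
    regions = []
    for i in range(1, n + 1):
        voxels = np.argwhere(labels == i)
        if len(voxels) >= min_voxels:        # discard tiny specks
            regions.append(voxels)
    # a real system would also use the distribution shape to separate
    # nodules from elongated structures such as blood vessels
    return regions
```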
FIG. 1 is a diagram illustrating an example of a system configuration including the image processing apparatus according to Embodiment 1. As shown in FIG. 1, the image processing apparatus 100 includes a user input unit 10, an image feature label learning unit 21, an image feature label learning parameter storage unit 22, an image feature amount extraction unit 23, a suspected lesion area malignancy learning unit 24, a suspected lesion area malignancy estimation parameter storage unit 25, a suspected lesion area malignancy estimation unit 26, an image feature label learning update unit 28, a suspected lesion area malignancy learning update unit 29, and a display unit 11. The image processing apparatus 100 is configured as a normal computer: the display unit 11 is its display, the storage units are configured in its memory, and each functional block such as the image feature amount extraction unit 23 is realized by program execution on its central processing unit (CPU). The medical image DB 20 and the diagnostic image and suspected lesion area image 27 are realized by its external storage device or the like. In FIG. 1, the image feature quantity extraction unit 23 is shown as three blocks, but these are functional blocks that execute the same processing of extracting the feature amount from each input target image. The middle of the three blocks corresponds to step S203 in FIG. 2, described later, the right block corresponds to step S302 in FIG. 3, and the left block corresponds to step S502 in FIG. 5.
Next, the operation processing of the image processing apparatus 100 shown in FIG. 1 will be described using the flowcharts shown in FIGS. 2 to 5. FIG. 2 is a flowchart showing the operation processes of extracting image feature amounts by learning image feature labels for suspected lesion area images, and of learning suspected lesion area malignancy estimation using the extracted image feature amounts. These operation processes are executed by the image feature label learning unit 21, the image feature amount extraction unit 23, and the suspected lesion area malignancy learning unit 24.
First, in step S201 of FIG. 2, the image feature label learning unit 21 receives information on a suspected lesion area image and the corresponding image feature label from the medical image DB 20. Here, the image feature label refers to the type of image feature related to the suspected lesion area, such as the size of the area of the shadow region, the luminance shading, the presence or absence of contact with surrounding existing structures, the occurrence site, and the shape. FIG. 6 shows image feature labels 61 to 66 as examples of such image feature labels of the suspected lesion area.
Next, in step S202, the image feature label learning unit 21 performs machine learning for classifying the image feature labels 70 and generates the corresponding learning parameters. Here, the CNN (Convolutional Neural Network) method, a known deep learning method, can be used. FIG. 7 shows an example of the configuration of a learning device (network) using the CNN method. In the learning process using the CNN method, repeating a convolution layer 71, which performs many image filtering operations, and a pooling layer 72, which samples from the output of the convolution layer, makes it possible to automatically generate image feature amounts that express the features of the input learning images optimally, that is, as accurately as possible, so that the images can be identified.
In FIG. 7, only two convolution layers 71 and two pooling layers 72 are shown for convenience; in practice, performance can be improved by configuring a larger number of layers. The last layer of the CNN configuration shown in FIG. 7 is the classification layer 73, which computes the probability (score) that the input image belongs to each preset type (class) and outputs it as the result 74; in other words, it classifies (identifies) the input image. In this embodiment the network is set up to identify the six image feature labels 61 to 66 shown in FIG. 6 (large region area, small region area, sternum-contact type, dense shading, faint shading, and tubular), but the kinds of image feature labels and their number can be changed according to the characteristics of the target suspected lesion region images.
In this embodiment, a separate CNN network is set up for each class of the image feature labels 70. The training images for each CNN network consist of positive samples, which are images belonging to that image feature label class, and negative samples, which are images belonging to the other image feature label classes. That is, the image feature label learning unit 21 sets up one CNN network per image feature label class. When training of a network is complete, the image feature label learning parameter storage unit 22 obtains the parameters of each convolution layer 71, pooling layer 72, and classification layer 73 from the image feature label learning unit 21 and stores them as image feature label learning parameters. These learning parameters are the parameters of the convolution and pooling layers in the known CNN method, for example the total number of convolution and pooling layers (two each in FIG. 7) and the coefficients and sizes of the convolution filters used in the convolution operations.
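A sketch of the per-class setup just described, under the assumption that the training data arrive as a tensor of images with integer class labels: one binary network per image feature label class, with positives drawn from that class and negatives from the rest. `FeatureLabelCNN`, the file names, and the epoch count are illustrative carry-overs from the previous sketch, not part of the specification.

```python
# One binary CNN per image feature label class: for each class, positives are
# images of that class and negatives are images of the other classes.
import torch
import torch.nn as nn

def train_one_vs_rest(images: torch.Tensor, labels: torch.Tensor, num_labels: int):
    networks = {}
    for cls in range(num_labels):
        net = FeatureLabelCNN(num_classes=2)
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        target = (labels == cls).long()      # 1 = positive sample, 0 = negative
        for _ in range(10):                  # illustrative epoch count
            opt.zero_grad()
            loss = nn.functional.cross_entropy(net(images), target)
            loss.backward()
            opt.step()
        networks[cls] = net
        # The trained weights play the role of the "image feature label
        # learning parameters" held by storage unit 22.
        torch.save(net.state_dict(), f"feature_label_params_{cls}.pt")
    return networks
```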
In this embodiment, the purpose of identifying the types of the image feature labels 70 is to extract image features of highly distinctive suspected lesion region images accurately and in an automatically adjustable form. FIG. 8 shows an example of how the image feature extraction unit 23 extracts the image feature of a suspected lesion region image. In step S203, the image feature extraction unit 23 receives the image feature label learning parameters from the image feature label learning parameter storage unit 22 and extracts the image feature of the input suspected lesion region image 80 using a CNN network that consists of repeated convolution layers 81 and pooling layers 82, which subsample the convolution output, followed by a classification layer 83, as shown in FIG. 8. As the image feature, the output vector of any pooling layer 82 of the CNN network may be used, or an identification-score vector formed by concatenating the results 84, i.e., the identification scores of the per-class networks. A vector concatenating the pooling-layer output vector and the identification-score vector may also be used. In this way, the image feature extraction unit 23 can identify the types of the image feature labels of a suspected lesion region image using the learning parameters.
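The two feature variants just described, a pooling-layer output vector and a concatenated identification-score vector, might be realized as in the following sketch; it reuses the illustrative one-vs-rest networks from the earlier sketch, and returning the concatenation of both variants is a hypothetical design choice, not something the text mandates.

```python
# Sketch of the feature variants described above.
import torch

@torch.no_grad()
def extract_feature(image: torch.Tensor, networks: dict) -> torch.Tensor:
    image = image.unsqueeze(0)                       # add batch dimension
    # Variant 1: output vector of the last pooling layer of one network.
    pooled = networks[0].features(image).flatten(1)
    # Variant 2: identification-score vector, one score pair per label class.
    scores = torch.cat([net(image) for net in networks.values()], dim=1)
    # Variant 3: both concatenated into a single feature vector.
    return torch.cat([pooled, scores], dim=1).squeeze(0)
```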
Next, in steps S204 and S205, the suspected lesion region malignancy learning unit 24 generates estimation parameters for suspected-lesion-region malignancy by machine learning, so that the malignancy of a suspected lesion region can be calculated. In step S204, the suspected lesion region malignancy learning unit 24 receives from the image feature extraction unit 23 the image features of the suspected lesion region images obtained by the process of FIG. 8, and acquires from the medical image DB 20 the malignancy information corresponding to those images. In step S205, the suspected lesion region malignancy learning unit 24 builds a learner that estimates suspected-lesion-region malignancy from the image features and the corresponding malignancy information. The known SVM (Support Vector Machine) method can be used here, as can the Ranking SVM method, which makes it possible to rank malignancy. When learning is complete, the suspected lesion region malignancy estimation parameter storage unit 25 receives the suspected lesion region malignancy estimation parameters from the suspected lesion region malignancy learning unit 24 and stores them. These estimation parameters are the parameters of a known machine-learning method such as the SVM: for example, the position of the decision boundary (a line or surface) in the feature space; when the space is divided by a straight line y = a*x + b, they are the coefficients a and b.
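A minimal sketch of step S205 using scikit-learn. A plain linear SVC stands in for the Ranking SVM named in the text (which scikit-learn does not provide), and the random features and labels are placeholders for real training data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 32))          # illustrative image features
malignant = rng.integers(0, 2, size=100)       # illustrative malignancy labels

estimator = SVC(kernel="linear", probability=True)
estimator.fit(features, malignant)
# For a linear boundary, coef_ and intercept_ are the analogue of the
# coefficients a and b of the separating line y = a*x + b in the text.
print(estimator.coef_.shape, estimator.intercept_)
```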
Next, the process of estimating the malignancy of a suspected lesion region image input as a diagnosis target will be described with reference to the flowchart of FIG. 3. This process is executed by the image feature extraction unit 23 and the suspected lesion region malignancy estimation unit 26. In step S301, the image feature extraction unit 23 receives the diagnostic image and suspected lesion region image 27. In step S302, the image feature extraction unit 23 receives the image feature label learning parameters from the image feature label learning parameter storage unit 22 and extracts the image feature of the input suspected lesion region image using the CNN network; this is the same processing as in FIG. 8, described for step S203.
In step S303, the suspected lesion region malignancy estimation unit 26 receives the suspected lesion region malignancy estimation parameters stored in the suspected lesion region malignancy estimation parameter storage unit 25, and also receives from the image feature extraction unit 23 the image feature extracted from the suspected lesion region image. Using the estimation parameters, the suspected lesion region malignancy estimation unit 26 calculates the malignancy of the suspected lesion region in that image. In step S304, the display unit 11 receives the diagnostic image and suspected lesion region image 27 together with the corresponding malignancy and displays them as the diagnostic result of the image processing apparatus. When there are multiple suspected lesion region images, they can also be ranked and displayed according to the estimated malignancy. The display method is described in detail later using the example screen shown in FIG. 9.
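A sketch of the diagnosis-time flow in steps S301 to S304, assuming the `extract_feature` helper, the per-class `networks`, and the trained `estimator` from the earlier sketches; using the predicted probability as the malignancy score for ranking is an illustrative choice.

```python
# Extract a feature for each candidate region with the stored label-learning
# parameters, score it with the malignancy estimator, and rank candidates.
import numpy as np

def rank_candidates(region_images, networks, estimator):
    feats = np.stack([extract_feature(img, networks).numpy() for img in region_images])
    scores = estimator.predict_proba(feats)[:, 1]   # estimated malignancy score
    order = np.argsort(-scores)                     # descending malignancy
    return [(int(i), float(scores[i])) for i in order]
```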
Next, the process of updating the image feature extraction process using the image feature labels that the user, a physician, enters for displayed suspected lesion region images will be described with reference to the flowchart of FIG. 4. This process is executed by the image feature label learning update unit 28 and the image feature extraction unit 23. First, in step S401, the image feature label learning update unit 28 receives the displayed suspected lesion region images. In step S402, the image feature label learning update unit 28 receives from the user input unit 10 the image feature labels that the user entered for those images.
In step S403, the image feature label learning update unit 28 decides whether to update the image feature extraction process. For example, the extraction process may be updated once the image feature label learning update unit 28 has acquired a predetermined number of suspected lesion region images, once a predetermined accumulation period has elapsed, or in response to a user instruction. That is, the image feature extraction unit 23 updates the image feature extraction process when the image feature label learning update unit 28 has acquired a predetermined number of suspected lesion region images, when the acquired suspected lesion region images have been accumulated for a predetermined period, or when a user instruction is input from the user input unit 10.
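The three update triggers just listed (count threshold, accumulation period, explicit user instruction) could be combined as in the following sketch; the class name and the threshold values are illustrative assumptions.

```python
import time

class UpdatePolicy:
    """Decide whether to rerun learning, as in step S403."""
    def __init__(self, min_images: int = 50, max_age_s: float = 7 * 24 * 3600.0):
        self.min_images = min_images
        self.max_age_s = max_age_s
        self.pending: list[float] = []   # arrival times of accumulated images

    def add_image(self) -> None:
        self.pending.append(time.time())

    def should_update(self, user_requested: bool = False) -> bool:
        if user_requested:                          # explicit user instruction
            return True
        if len(self.pending) >= self.min_images:    # enough new labeled images
            return True
        # oldest pending image older than the accumulation period
        return bool(self.pending) and time.time() - self.pending[0] > self.max_age_s
```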
When the image feature extraction process is to be updated (Yes), in step S404 the image feature label learning update unit 28 receives the image feature label learning parameters from the image feature label learning parameter storage unit 22. In step S405, the image feature label learning update unit 28 updates the learning parameters of the CNN networks for the image feature labels, and the image feature label learning parameter storage unit 22 receives and stores the updated parameters. This makes it possible to adjust the image feature extraction process automatically and more appropriately for new suspected lesion region images. The image feature extraction unit 23 can then re-extract the image features of the suspected lesion region images using the learning parameters updated by the image feature label learning update unit 28.
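Step S405 might look like the following fine-tuning sketch, which reloads the stored parameters, continues training on the newly labeled images, and stores the result back; the file naming, learning rate, and epoch count are assumptions carried over from the earlier sketches.

```python
import torch
import torch.nn as nn

def update_label_network(new_images: torch.Tensor, new_targets: torch.Tensor, cls: int):
    # Reload the stored "image feature label learning parameters" (unit 22).
    net = FeatureLabelCNN(num_classes=2)
    net.load_state_dict(torch.load(f"feature_label_params_{cls}.pt"))
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)  # small LR for fine-tuning
    for _ in range(5):                                 # illustrative epoch count
        opt.zero_grad()
        nn.functional.cross_entropy(net(new_images), new_targets).backward()
        opt.step()
    # Store the refreshed parameters back.
    torch.save(net.state_dict(), f"feature_label_params_{cls}.pt")
```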
Next, the process of updating the suspected lesion region malignancy estimation parameters using the malignancy correctness information that the user enters for displayed suspected lesion region images will be described with reference to the flowchart of FIG. 5. This process is executed by the image feature extraction unit 23 and the suspected lesion region malignancy learning update unit 29. That is, the image processing apparatus of this embodiment includes the suspected lesion region malignancy learning update unit 29, which updates the suspected lesion region malignancy estimation parameters in response to input from the user input unit 10 for each suspected lesion region image.
In step S501, the suspected lesion region malignancy learning update unit 29 receives from the user input unit 10 the malignancy correctness information that the user entered for the displayed suspected lesion region images. In step S502, the image feature extraction unit 23 receives the displayed suspected lesion region images and the image feature label learning parameters from the image feature label learning parameter storage unit 22, and extracts the image features of the input images using the CNN network; this is the same processing as step S203. The suspected lesion region malignancy learning update unit 29 receives the extracted image features.
In step S503, the suspected lesion region malignancy learning update unit 29 decides whether to update the suspected-lesion-region malignancy estimation. As with the update of the image feature extraction process, the update may be performed, for example, once the suspected lesion region malignancy learning update unit 29 has acquired the image features of a predetermined number of images, once a predetermined accumulation period has elapsed, or in response to a user instruction. When the update is to be performed (Yes), in step S504 the suspected lesion region malignancy learning update unit 29 receives the suspected lesion region malignancy estimation parameters from the suspected lesion region malignancy estimation parameter storage unit 25.
Next, in step S505, the suspected lesion region malignancy learning update unit 29 performs learning again to compute new suspected lesion region malignancy estimation parameters; a known online learning method may be used here. The suspected lesion region malignancy estimation parameter storage unit 25 receives and stores the updated parameters. In step S506, the suspected lesion region malignancy estimation unit 26 receives the updated estimation parameters, updates the malignancy estimates for the displayed suspected lesion region images, and outputs the results to the display unit 11. That is, in the image processing apparatus of this embodiment, the suspected lesion region malignancy estimation unit 26 re-estimates the malignancy of the suspected lesion regions in the displayed images using the estimation parameters updated by the suspected lesion region malignancy learning update unit 29, and displays the results.
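Since the text only says that a known online learning method may be used, the following sketch shows one common choice, scikit-learn's SGDClassifier with partial_fit, standing in for a full refit of the malignancy estimator; the model and loss are assumptions, not the patent's method.

```python
# Incrementally absorb new user feedback instead of refitting from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

online_estimator = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])                       # benign / malignant

def absorb_feedback(feats: np.ndarray, labels: np.ndarray) -> None:
    # partial_fit needs the full class list on the first call.
    online_estimator.partial_fit(feats, labels, classes=classes)
```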
The operation of the display unit 11 will now be described with reference to FIG. 9. FIG. 9 shows an example of a user interface 91 that displays the diagnostic image, the diagnostic results produced by the image processing apparatus 100, image feature labels presented for user input, malignancy correctness information, and so on. The display unit 11 receives the diagnostic image and suspected lesion region image 27, receives from the image feature extraction unit 23 the image feature labels corresponding to each suspected lesion region image, and receives from the suspected lesion region malignancy estimation unit 26 the malignancy estimation results and estimated scores corresponding to each image. The user interface 91 displays a diagnostic image and suspected lesion region image area 92, an image feature label presentation and selection area 94, a malignancy correctness information selection area 95, and so on.
The diagnostic image and suspected lesion region image area 92 of the user interface 91 displays the diagnostic image and the suspected lesion region images. The suspected lesion region images can be ranked for display using the malignancy estimation results: the display unit 11 displays the image data, the suspected lesion region images, and the identification results of the image feature labels corresponding to them, ranks the images by their estimated malignancy scores, and sorts them in descending order of malignancy. The location corresponding to a selected suspected lesion region image 93 can also be indicated on the diagnostic image with a mark or the like.
The image feature label presentation and selection area 94 of the user interface 91 displays example images of the predetermined image feature labels and presents the image feature labels corresponding to the selected suspected lesion region image, letting the user select or correct them. The user interface 91 can display the image feature label images together with the identification result of the image feature label for the suspected lesion region image and let the user select the correct label for that image; for this purpose, a check box is placed under each predetermined image feature label displayed in the image feature label presentation and selection area 94. In addition, the malignancy correctness information selection area 95 of the user interface 91 displays the malignancy correctness options TP, FP, TN, and FN for the selected suspected lesion region image and lets the user choose one of the four. That is, the user interface 91 lets the user select the correctness information corresponding to the suspected lesion region image.
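The TP/FP/TN/FN selection described above can be folded back into ground-truth labels for relearning the malignancy estimator, since TP and FN both mean the region is actually malignant while FP and TN mean it is not; the enum and function names in this sketch are hypothetical.

```python
from enum import Enum

class Feedback(Enum):
    TP = "true positive"
    FP = "false positive"
    TN = "true negative"
    FN = "false negative"

def feedback_to_label(fb: Feedback) -> int:
    # Actually malignant if predicted-and-confirmed (TP) or missed (FN).
    return 1 if fb in (Feedback.TP, Feedback.FN) else 0

assert feedback_to_label(Feedback.FN) == 1   # a missed malignancy is a positive
```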
Furthermore, the user interface 91 lets the user add new image feature labels beyond the predetermined ones displayed in the image feature label presentation and selection area 94. In other words, the user interface 91 lets the user add new suspected lesion region images in addition to the presented ones and select the image feature labels and correctness information corresponding to them. That is, the image feature label learning unit 21 of the image processing apparatus of this embodiment adds new image feature label types according to the image feature labels newly added from the user input unit 10 for suspected lesion region images, and updates the learning parameters. The user can also add a new suspected lesion region image, for example by using the diagnostic image displayed in the diagnostic image and suspected lesion region image area 92, and select the corresponding malignancy correctness information.
In the first embodiment described above, the image processing apparatus 100 does not include a medical imaging apparatus; however, the image processing apparatus 100 may include one, or may function as part of a medical imaging apparatus. According to this embodiment, it is possible to provide an image processing apparatus and a medical imaging apparatus that allow the extraction of image features of lesion shadows to be adjusted and that reduce the burden on the reader, the user, when interpreting large volumes of three-dimensional medical images.
The second embodiment is an image processing apparatus that, in addition to the predetermined image feature labels, lets the user define and add new image feature labels, and that includes a medical image DB update unit which adds the suspected lesion region images corresponding to the newly added labels to the image DB. FIG. 10 shows an example of a system configuration including the image processing apparatus according to the second embodiment.
As shown in FIG. 10, the image processing apparatus 100 of this embodiment comprises a user input unit 10, an image feature label learning unit 21, an image feature label learning parameter storage unit 22, an image feature extraction unit 23, a suspected lesion region malignancy learning unit 24, a suspected lesion region malignancy estimation parameter storage unit 25, a suspected lesion region malignancy estimation unit 26, a suspected lesion region malignancy learning update unit 29, a medical image DB update unit 30, and a display unit 11.
Next, the operation of the image processing apparatus 100 shown in FIG. 10 will be described with reference to the flowchart of FIG. 11. In step S601, the medical image DB update unit 30 receives the displayed suspected lesion region images. In step S602, the medical image DB update unit 30 receives from the user input unit 10 the image feature labels corresponding to the suspected lesion region images added by the user. In step S603, the suspected lesion region malignancy learning update unit 29 receives from the user input unit 10 the malignancy correctness information that the user entered for the displayed suspected lesion region images. In step S604, the medical image DB update unit 30 updates the medical image DB 20 using the displayed diagnostic images, the suspected lesion region images, and the corresponding newly added image feature labels.
In step S605, it is determined whether to update the image feature extraction process. For example, the extraction process may be updated once the medical image DB update unit 30 has acquired a predetermined number of images, once a predetermined accumulation period has elapsed, or in response to a user instruction. When the extraction process is to be updated (Yes), in step S606 the image feature label learning unit 21 performs machine learning for classifying the image feature labels, including the newly added ones, and generates the learning parameters; this is the same processing as step S202.
For the image feature label learning in step S606, the CNN-based network configuration shown in FIG. 7 may be used, or the network configuration shown in FIG. 12 may be used instead. That is, as shown in FIG. 12, a single CNN network consisting of convolution layers 121, pooling layers 122, a classification layer 123, and a result 124 may be configured to identify all the classes of the image feature labels 120 at once, as one multi-class classifier.
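The single multi-class network of FIG. 12 could be sketched by reusing the illustrative `FeatureLabelCNN` from the earlier sketch with its class count set to the number of image feature label classes; the six-class count merely mirrors the labels of FIG. 6.

```python
import torch

# One network, all image feature label classes at once (Fig. 12 variant),
# instead of the per-class one-vs-rest networks of the first embodiment.
num_label_classes = 6                     # e.g. the six labels of Fig. 6
multiclass_net = FeatureLabelCNN(num_classes=num_label_classes)

scores = multiclass_net(torch.randn(1, 1, 64, 64))
predicted_label = int(scores.argmax(dim=1))   # index of the most likely label class
```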
With the configuration of this embodiment, in addition to the predetermined image feature labels, the user can define and add new image feature labels for new clinical data, and the suspected lesion region images corresponding to the newly added labels can be added to the image DB.
The present invention is not limited to the embodiments described above and includes various modifications. For example, the embodiments above are described in detail to explain the invention clearly, and the invention is not necessarily limited to configurations having all of the described elements. Part of the configuration of one embodiment can be replaced with the configuration of another, the configuration of one embodiment can be added to that of another, and part of the configuration of each embodiment can have other configurations added, deleted, or substituted.
Each of the configurations, functions, processing units, processing means, and the like described above may be realized partly or wholly in hardware, for example by designing them as integrated circuits, or in software, by a processor interpreting and executing programs that implement the respective functions. Information such as the programs, tables, and files that implement the functions can be stored in memory, in a recording device such as a hard disk or SSD (Solid State Drive), or on a recording medium such as an IC card, SD card, or DVD.
According to the present invention described above, the extraction of image features of lesion shadows can be adjusted, reducing the burden on the reader, the user, when interpreting large volumes of three-dimensional medical images. The description above includes various inventions in addition to the invention set forth in the claims; examples are listed below.
<Example 1>
An image processing apparatus that presents a suspected lesion region image detected from image data, comprising:
an image feature extraction unit that extracts an image feature of the suspected lesion region image using the learning parameters of image feature labels obtained by learning to classify the image feature labels relating to the suspected lesion region image; and
a suspected lesion region malignancy estimation unit that calculates the malignancy of the suspected lesion region using the suspected lesion region malignancy estimation parameters obtained by learning to estimate the malignancy of the suspected lesion region from the image features extracted by the image feature extraction unit.
<Example 2>
The image processing apparatus according to Example 1, further comprising:
a display unit that displays the suspected lesion region image;
a user input unit; and
an image feature label learning update unit that updates the learning parameters in response to input from the user input unit.
<Example 3>
The image processing apparatus according to Example 2, wherein
the image feature extraction unit updates the image feature extraction process when the image feature label learning update unit has acquired a predetermined number of suspected lesion region images, when the acquired suspected lesion region images have been accumulated for a predetermined period, or when an instruction from the user is input.
<Example 4>
The image processing apparatus according to Example 2, wherein
the image feature label is a type of image feature relating to the suspected lesion region.
<Example 5>
The image processing apparatus according to Example 4, wherein
the image feature labels are the area of the shadow region of the suspected lesion region, the density of its luminance, the presence or absence of contact with surrounding existing structures, its site of occurrence, and its shape.
<Example 6>
An image processing method of an image processing apparatus that presents a suspected lesion region image detected from image data, wherein the image processing apparatus:
performs learning to classify image feature labels relating to the suspected lesion region image;
extracts an image feature of the suspected lesion region image using the learning parameters of the image feature labels obtained by the learning;
learns suspected-lesion-region malignancy for estimating the malignancy of the suspected lesion region using the extracted image feature; and
calculates the malignancy of the suspected lesion region using the suspected lesion region malignancy estimation parameters obtained by the malignancy learning.
<Example 7>
The image processing method according to Example 6, wherein
the image feature label is a type of image feature relating to the suspected lesion region: the area of the shadow region, the density of its luminance, the presence or absence of contact with surrounding existing structures, its site of occurrence, or its shape.
<Example 8>
The image processing method according to Example 6, wherein
the image processing apparatus comprises a display unit and a user input unit,
displays the suspected lesion region image and the malignancy of the suspected lesion region on the display unit, and
updates the learning parameters in response to input from the user input unit.
<Example 9>
The image processing method according to Example 8, wherein
the image processing apparatus updates the image feature extraction process when it has acquired a predetermined number of suspected lesion region images, when the acquired suspected lesion region images have been accumulated for a predetermined period, or when an instruction from the user is input.
10 user input unit
11 display unit
20 medical image DB
21 image feature label learning unit
22 image feature label learning parameter storage unit
23 image feature extraction unit
24 suspected lesion region malignancy learning unit
25 suspected lesion region malignancy estimation parameter storage unit
26 suspected lesion region malignancy estimation unit
27 diagnostic image and suspected lesion region image
28 image feature label learning update unit
29 suspected lesion region malignancy learning update unit
30 medical image DB update unit
61-66, 70, 120 image feature labels
71, 81, 121 convolution layers
72, 82, 122 pooling layers
73, 83, 123 classification layers
74, 84, 124 results
80 suspected lesion region image
91 user interface
92 diagnostic image and suspected lesion region image area
93 selected suspected lesion region image
94 image feature label presentation and selection area
95 malignancy correctness information selection area
100 image processing apparatus
Claims (15)
1. An image processing apparatus that presents a suspected lesion region image detected from image data, comprising:
an image feature label learning unit that performs learning to classify image feature labels relating to the suspected lesion region image;
an image feature extraction unit that extracts an image feature of the suspected lesion region image using the learning parameters of the image feature labels obtained by the learning of the image feature label learning unit;
a display unit that displays the suspected lesion region image;
a user input unit; and
an image feature label learning update unit that updates the learning parameters in response to input from the user input unit.

2. The image processing apparatus according to claim 1, wherein
the image feature extraction unit re-extracts the image feature of the suspected lesion region image using the learning parameters updated by the image feature label learning update unit.

3. The image processing apparatus according to claim 1, further comprising:
a suspected lesion region malignancy learning unit that learns to estimate the malignancy of a suspected lesion region using the image features extracted by the image feature extraction unit;
a suspected lesion region malignancy estimation unit that calculates the malignancy of the suspected lesion region using the suspected lesion region malignancy estimation parameters obtained by the suspected lesion region malignancy learning unit; and
a suspected lesion region malignancy learning update unit that updates the suspected lesion region malignancy estimation parameters in response to input from the user input unit for each suspected lesion region image.

4. The image processing apparatus according to claim 3, wherein
the suspected lesion region malignancy estimation unit re-estimates the malignancy of the suspected lesion region in the suspected lesion region image using the suspected lesion region malignancy estimation parameters updated by the suspected lesion region malignancy learning update unit.

5. The image processing apparatus according to claim 1, wherein
the image feature extraction unit identifies the type of the image feature label relating to the suspected lesion region image using the learning parameters.

6. The image processing apparatus according to claim 1, wherein
the image feature label learning unit adds a new image feature label type in accordance with an image feature label newly added from the user input unit for the suspected lesion region image, and updates the learning parameters.

7. The image processing apparatus according to claim 1, wherein
the display unit displays the image data, the suspected lesion region images, and the identification results of the image feature labels corresponding to the suspected lesion region images, ranks the suspected lesion region images by their estimated malignancy scores, and sorts them in descending order of malignancy.

8. The image processing apparatus according to claim 7, wherein
the display unit displays images of the image feature labels and the identification result of the image feature label corresponding to the suspected lesion region image, and lets the user select the correct image feature label for the suspected lesion region image.

9. The image processing apparatus according to claim 7, wherein
the display unit lets the user select the correctness information corresponding to the suspected lesion region image.

10. The image processing apparatus according to claim 7, wherein
the display unit lets the user add a new suspected lesion region image in addition to the presented suspected lesion region images and select the image feature label and correctness information corresponding to it.

11. The image processing apparatus according to claim 1, wherein
the image feature label learning unit sets up a CNN (Convolutional Neural Network) for each image feature label class, and
in the training data for each CNN, the positive samples are images belonging to that image feature label class and the negative samples are images belonging to the other image feature label classes.

12. An image processing method of an image processing apparatus that comprises a display unit and a user input unit and presents a suspected lesion region image detected from image data, wherein the image processing apparatus:
performs learning to classify image feature labels relating to the suspected lesion region image;
extracts an image feature of the suspected lesion region image using the learning parameters of the image feature labels obtained by the learning;
displays the suspected lesion region image on the display unit; and
updates the learning parameters in response to input from the user input unit.

13. The image processing method according to claim 12, wherein
the image processing apparatus re-extracts the image feature of the suspected lesion region image using the updated learning parameters.

14. The image processing method according to claim 12, wherein
the image processing apparatus identifies the type of the image feature label relating to the suspected lesion region image using the learning parameters.

15. The image processing method according to claim 12, wherein
the image processing apparatus adds a new image feature label type in accordance with an image feature label newly added from the user input unit for the suspected lesion region image, and updates the learning parameters.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016202782A JP2018061771A (en) | 2016-10-14 | 2016-10-14 | Image processing apparatus and image processing method |
JP2016-202782 | 2016-10-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018070285A1 true WO2018070285A1 (en) | 2018-04-19 |
Family
ID=61905580
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/035787 WO2018070285A1 (en) | 2016-10-14 | 2017-10-02 | Image processing device and image processing method |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2018061771A (en) |
WO (1) | WO2018070285A1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020035991A1 (en) * | 2018-08-14 | 2020-02-20 | キヤノン株式会社 | Medical information processing device, medical information processing method, and program |
JP2020062340A (en) * | 2018-10-19 | 2020-04-23 | キヤノンメディカルシステムズ株式会社 | Image processing apparatus and program |
WO2020099986A1 (en) * | 2018-11-15 | 2020-05-22 | 株式会社半導体エネルギー研究所 | Content classification method |
CN112334070A (en) * | 2018-06-28 | 2021-02-05 | 富士胶片株式会社 | Medical image processing apparatus and method, machine learning system, program, and storage medium |
CN112862741A (en) * | 2019-11-12 | 2021-05-28 | 株式会社日立制作所 | Medical image processing apparatus, medical image processing method, and medical image processing program |
CN116645485A (en) * | 2023-06-02 | 2023-08-25 | 中交一公局第二工程有限公司 | Ancient building model construction method based on unmanned aerial vehicle oblique photography |
US11914918B2 (en) | 2018-08-14 | 2024-02-27 | Canon Kabushiki Kaisha | Medical information processing apparatus, medical information processing method, and non-transitory computer-readable storage medium |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7321671B2 (en) * | 2018-04-20 | 2023-08-07 | キヤノン株式会社 | Information processing device, information processing method, information processing system and program |
EP3564961A1 (en) | 2018-05-03 | 2019-11-06 | Koninklijke Philips N.V. | Interactive coronary labeling using interventional x-ray images and deep learning |
US11488306B2 (en) * | 2018-06-14 | 2022-11-01 | Kheiron Medical Technologies Ltd | Immediate workup |
JP7210175B2 (en) * | 2018-07-18 | 2023-01-23 | キヤノンメディカルシステムズ株式会社 | Medical information processing device, medical information processing system and medical information processing program |
WO2020110774A1 (en) | 2018-11-30 | 2020-06-04 | 富士フイルム株式会社 | Image processing device, image processing method, and program |
WO2020110278A1 (en) * | 2018-11-30 | 2020-06-04 | オリンパス株式会社 | Information processing system, endoscope system, trained model, information storage medium, and information processing method |
WO2020262681A1 (en) * | 2019-06-28 | 2020-12-30 | 富士フイルム株式会社 | Learning device, method, and program, medical image processing device, method, and program, and discriminator |
JP7394588B2 (en) * | 2019-11-07 | 2023-12-08 | キヤノン株式会社 | Information processing device, information processing method, and imaging system |
JP7408361B2 (en) | 2019-11-29 | 2024-01-05 | 富士フイルムヘルスケア株式会社 | Medical image diagnosis support system, medical image processing device, and medical image processing method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007527743A (en) * | 2004-02-03 | 2007-10-04 | シーメンス メディカル ソリューションズ ユーエスエー インコーポレイテッド | System and method for automatic diagnosis and decision support for heart related diseases and conditions |
JP2014534822A (en) * | 2011-10-14 | 2014-12-25 | 富士フイルム株式会社 | Coronary artery calcium scoring based on a model |
JP2015116319A (en) * | 2013-12-18 | 2015-06-25 | パナソニックIpマネジメント株式会社 | Diagnosis support device, diagnosis support method, and diagnosis support program |
JP2016007270A (en) * | 2014-06-23 | 2016-01-18 | 東芝メディカルシステムズ株式会社 | Medical image processor |
JP2016016265A (en) * | 2014-07-10 | 2016-02-01 | 株式会社東芝 | Image processing apparatus, image processing method and medical image diagnostic apparatus |
-
2016
- 2016-10-14 JP JP2016202782A patent/JP2018061771A/en active Pending
-
2017
- 2017-10-02 WO PCT/JP2017/035787 patent/WO2018070285A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007527743A (en) * | 2004-02-03 | 2007-10-04 | シーメンス メディカル ソリューションズ ユーエスエー インコーポレイテッド | System and method for automatic diagnosis and decision support for heart related diseases and conditions |
JP2014534822A (en) * | 2011-10-14 | 2014-12-25 | 富士フイルム株式会社 | Coronary artery calcium scoring based on a model |
JP2015116319A (en) * | 2013-12-18 | 2015-06-25 | パナソニックIpマネジメント株式会社 | Diagnosis support device, diagnosis support method, and diagnosis support program |
JP2016007270A (en) * | 2014-06-23 | 2016-01-18 | 東芝メディカルシステムズ株式会社 | Medical image processor |
JP2016016265A (en) * | 2014-07-10 | 2016-02-01 | 株式会社東芝 | Image processing apparatus, image processing method and medical image diagnostic apparatus |
Non-Patent Citations (1)
Title |
---|
NOMURA, YUKIHIRO ET AL.: "CIRCUS: an MDA platform for clinical image analysis in hospitals", TRANSACTIONS ON MASS-DATA ANALYSIS OF IMAGES AND SIGNALS, vol. 2, no. 1, September 2010 (2010-09-01), pages 112 - 127, XP055502121 * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112334070A (en) * | 2018-06-28 | 2021-02-05 | Fujifilm Corporation | Medical image processing apparatus and method, machine learning system, program, and storage medium |
CN112334070B (en) * | 2018-06-28 | 2024-03-26 | Fujifilm Corporation | Medical image processing device and method, machine learning system, and storage medium |
US12009104B2 (en) | 2018-06-28 | 2024-06-11 | Fujifilm Corporation | Medical image processing apparatus, medical image processing method, machine learning system, and program |
WO2020035991A1 (en) * | 2018-08-14 | 2020-02-20 | キヤノン株式会社 | Medical information processing device, medical information processing method, and program |
US11914918B2 (en) | 2018-08-14 | 2024-02-27 | Canon Kabushiki Kaisha | Medical information processing apparatus, medical information processing method, and non-transitory computer-readable storage medium |
US12131235B2 (en) | 2018-08-14 | 2024-10-29 | Canon Kabushiki Kaisha | Medical information processing apparatus, medical information processing method, and non-transitory computer-readable storage medium |
JP2020062340A (en) * | 2018-10-19 | 2020-04-23 | キヤノンメディカルシステムズ株式会社 | Image processing apparatus and program |
JP7325942B2 (en) | 2018-10-19 | 2023-08-15 | キヤノンメディカルシステムズ株式会社 | Image processing device and program |
WO2020099986A1 (en) * | 2018-11-15 | 2020-05-22 | Semiconductor Energy Laboratory Co., Ltd. | Content classification method |
CN112862741A (en) * | 2019-11-12 | 2021-05-28 | Hitachi, Ltd. | Medical image processing apparatus, medical image processing method, and medical image processing program |
CN116645485A (en) * | 2023-06-02 | 2023-08-25 | CCCC First Highway Second Engineering Co., Ltd. | Ancient building model construction method based on unmanned aerial vehicle oblique photography |
CN116645485B (en) * | 2023-06-02 | 2024-02-27 | CCCC First Highway Second Engineering Co., Ltd. | Ancient building model construction method based on unmanned aerial vehicle oblique photography |
Also Published As
Publication number | Publication date |
---|---|
JP2018061771A (en) | 2018-04-19 |
Similar Documents
Publication | Title
---|---|
WO2018070285A1 (en) | Image processing device and image processing method |
US11850021B2 (en) | Dynamic self-learning medical image method and system | |
US11250048B2 (en) | Control method and non-transitory computer-readable recording medium for comparing medical images | |
CN110232383B (en) | Focus image recognition method and focus image recognition system based on deep learning model | |
US10222954B2 (en) | Image display apparatus, display control apparatus and display control method using thumbnail images | |
US9282929B2 (en) | Apparatus and method for estimating malignant tumor | |
JP5868231B2 (en) | Medical image diagnosis support apparatus, medical image diagnosis support method, and computer program | |
JP4911029B2 (en) | Abnormal shadow candidate detection method, abnormal shadow candidate detection device | |
CN111919260A (en) | Surgical video retrieval based on preoperative images | |
JP6318739B2 (en) | Image processing apparatus and program | |
US20170308661A1 (en) | Diagnosis support apparatus and method of controlling the same | |
JP5661890B2 (en) | Information processing apparatus, information processing method, and program | |
JP2010167042A (en) | Medical diagnostic support apparatus and control method of the same and program | |
JP6824845B2 (en) | Image processing systems, equipment, methods and programs | |
JP7413011B2 (en) | Medical information processing equipment | |
US20150363054A1 (en) | Medical image display apparatus, method for controlling the same | |
JP2007151645A (en) | Medical diagnostic imaging support system | |
JP6309417B2 (en) | Detector generating apparatus, method and program, and image detecting apparatus | |
US10307124B2 (en) | Image display device, method, and program for determining common regions in images | |
US11836923B2 (en) | Image processing apparatus, image processing method, and storage medium | |
WO2021246013A1 (en) | Diagnostic imaging method, diagnostic imaging assisting device, and computer system | |
JP5533198B2 (en) | Medical image display apparatus and program | |
JP2021189962A (en) | Medical information processing device, medical information processing system, medical information processing method, and program | |
JP6313741B2 (en) | Image processing apparatus and method of operating image processing apparatus | |
EP4421736A1 (en) | Image processing apparatus, image processing method, and storage medium |
Legal Events
Code | Title | Description
---|---|---|
121 | EP: The EPO has been informed by WIPO that EP was designated in this application | Ref document number: 17860923; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | EP: PCT application non-entry in European phase | Ref document number: 17860923; Country of ref document: EP; Kind code of ref document: A1 |