
TWI850670B - System and method for cardiovascular risk prediction and computer readable medium thereof - Google Patents

System and method for cardiovascular risk prediction and computer readable medium thereof

Info

Publication number
TWI850670B
TWI850670B (application TW111120307A)
Authority
TW
Taiwan
Prior art keywords
region
medical image
machine learning
learning model
calcification
Prior art date
Application number
TW111120307A
Other languages
Chinese (zh)
Other versions
TW202349409A (en)
Inventor
王宗道
李文正
黃裕城
曾秋旺
李正匡
王偉仲
周呈霙
Original Assignee
國立臺灣大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立臺灣大學 filed Critical 國立臺灣大學
Priority to TW111120307A priority Critical patent/TWI850670B/en
Publication of TW202349409A publication Critical patent/TW202349409A/en
Application granted granted Critical
Publication of TWI850670B publication Critical patent/TWI850670B/en

Landscapes

  • Magnetic Resonance Imaging Apparatus (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

Provided are a system and a method for cardiovascular risk prediction, where artificial intelligence is utilized to perform segmentation on non-contrast or contrast medical images to identify precise regions of the heart, pericardium, and aorta of a subject, such that the adipose tissue volume and calcium score can be derived from the medical images to assist in cardiovascular risk prediction. Also provided is a computer readable medium for storing a computer executable code to implement the method.

Description

用於心血管風險預測的系統與方法及其電腦可讀媒介 System and method for cardiovascular risk prediction and computer-readable medium thereof

本發明係關於醫學影像分析,尤其關於一種用於心血管風險預測的系統與方法及其電腦可讀媒介。 The present invention relates to medical image analysis, and more particularly to a system and method for cardiovascular risk prediction and a computer-readable medium thereof.

在醫學影像分析領域中,心臟/主動脈及心臟/主動脈周圍或內部的脂肪組織(如心外膜脂肪組織,EAT)中鈣化的量化,是未來心血管風險的重要預測指標。尤其,心臟/主動脈鈣化的量化可反映主動脈及冠狀動脈中鈣化斑塊的數量,心包內EAT的量化可能反映冠狀動脈周圍脂肪的區域厚度,該脂肪可導致發炎及/或冠狀動脈粥樣硬化。 In the field of medical image analysis, quantification of calcification in the heart/aorta and the adipose tissue surrounding or within the heart/aorta (e.g., epicardial adipose tissue, EAT) is an important predictor of future cardiovascular risk. In particular, quantification of heart/aorta calcification can reflect the amount of calcified plaques in the aorta and coronary arteries, and quantification of intrapericardial EAT may reflect the regional thickness of the fat surrounding the coronary arteries, which can lead to inflammation and/or coronary atherosclerosis.

用於識別上述變量的現有技術,例如侵入性診斷測試(例如,心導管手術),可能有助於獲得準確的病灶識別,但上述技術與額外的手術風險及醫療費用相關。因此,在市場上,診斷心血管風險的非侵入性方法具有很高的價值。 Existing technologies for identifying the above variables, such as invasive diagnostic tests (e.g., cardiac catheterization), may help obtain accurate lesion identification, but the above technologies are associated with additional surgical risks and medical expenses. Therefore, non-invasive methods for diagnosing cardiovascular risk are highly valued in the market.

基於上述理由,本領域亟需利用人工智慧由非顯影(non-contrast)或顯影(contrast)醫學影像中分割心臟、主動脈及/或心包的區域,並從中導出EAT和鈣化積分以預測心血管風險。 For the above reasons, there is an urgent need in the art to use artificial intelligence to segment the regions of the heart, aorta, and/or pericardium from non-contrast or contrast medical images, and to derive EAT and calcium scores from them to predict cardiovascular risk.

綜上所述,本揭露提供一種用於心血管風險預測的系統,其包括:用於從醫學影像中分割出區域的分割模組;以及用於從醫學影像的區域中提取分析結果的提取模組。 In summary, the present disclosure provides a system for cardiovascular risk prediction, which includes: a segmentation module for segmenting a region from a medical image; and an extraction module for extracting analysis results from the region of the medical image.

本揭露還提供一種心血管風險預測方法,包括:配置分割模組以從醫學影像中分割出區域;以及配置提取模組以從該醫學影像的區域中提取分析結果。 The present disclosure also provides a cardiovascular risk prediction method, comprising: configuring a segmentation module to segment a region from a medical image; and configuring an extraction module to extract an analysis result from the region of the medical image.

在本揭露的至少一個實施態樣中,醫學影像為非顯影電腦斷層掃描影像。在本公開的至少一個實施態樣中,醫學影像為顯影醫學影像。 In at least one embodiment of the present disclosure, the medical image is a non-contrast computed tomography image. In at least one embodiment of the present disclosure, the medical image is a contrast-enhanced medical image.

在本揭露的至少一個實施態樣中,分割模組是由機器學習模型來執行,以從醫學影像中分割出區域,並且該機器學習模型具有包括一編碼器部分、一解碼器部分、一注意力機制以及一變分自編碼器解碼器分支的網路架構。 In at least one embodiment of the present disclosure, the segmentation module is implemented by a machine learning model to segment regions from the medical image, and the machine learning model has a network architecture including an encoder portion, a decoder portion, an attention mechanism, and a variational autoencoder (VAE) decoder branch.

在本揭露的至少一個實施態樣中,注意力機制係配置用於挑出(highlight)通過編碼器部分及解碼器部分之間的殘差連接的顯著特徵(salient feature),並且變分自編碼器解碼器分支係配置用於在機器學習模型訓練期間基於編碼器部分之端點的特徵重建醫學影像。 In at least one embodiment of the present disclosure, the attention mechanism is configured to highlight the salient features passed through the residual connections between the encoder portion and the decoder portion, and the variational autoencoder decoder branch is configured to reconstruct the medical image based on the features at the endpoint of the encoder portion during training of the machine learning model.

在本揭露的至少一個實施態樣中,復包括用於藉由以下步驟為機器學習模型提供訓練的模型訓練模組:將訓練資料預處理成預定的一致性;藉由對訓練資料執行隨機裁剪、隨機空間翻轉及/或隨機強度縮放或平移來增強訓練資料;使用訓練資料訓練機器學習模型;以及使用損失函數驗證機器學習模型的訓練結果。 In at least one embodiment of the present disclosure, a model training module is further included for training the machine learning model by the following steps: preprocessing training data to a predetermined consistency; augmenting the training data by performing random cropping, random spatial flipping, and/or random intensity scaling or shifting on the training data; training the machine learning model with the training data; and validating the training results of the machine learning model with a loss function.
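The augmentation steps listed above can be sketched in plain NumPy; the crop size, flip probability, and intensity ranges below are illustrative assumptions rather than values from the disclosure:

```python
import numpy as np

def augment(volume, crop_shape=(64, 64, 64), rng=None):
    """Apply a random crop, random spatial flips, and a random intensity
    scale/shift to a 3D volume, mirroring the augmentation steps above."""
    rng = rng or np.random.default_rng()
    # Random crop: pick a corner so the crop fits entirely inside the volume.
    corner = [rng.integers(0, s - c + 1) for s, c in zip(volume.shape, crop_shape)]
    out = volume[tuple(slice(o, o + c) for o, c in zip(corner, crop_shape))].copy()
    # Random spatial flip along each axis with probability 0.5.
    for axis in range(3):
        if rng.random() < 0.5:
            out = np.flip(out, axis=axis)
    # Random intensity scaling and shifting.
    scale = rng.uniform(0.9, 1.1)
    shift = rng.uniform(-0.1, 0.1)
    return out * scale + shift

vol = np.random.default_rng(0).normal(size=(96, 96, 96)).astype(np.float32)
patch = augment(vol, rng=np.random.default_rng(1))
print(patch.shape)  # (64, 64, 64)
```

In practice each transform would be applied independently per training iteration; this sketch folds them into one call for brevity.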

在本揭露的至少一個實施態樣中,該訓練資料是通過手動及/或在輔助標註模型的幫助下標記非顯影或顯影醫學影像而產生。 In at least one embodiment of the present disclosure, the training data is generated by labeling non-contrast or contrast medical images manually and/or with the help of an auxiliary annotation model.

在本揭露的至少一個實施態樣中,該分析結果包括該區域的脂肪組織體積,以及該提取模組包括配置為透過以下步驟量化該區域中的心包內的脂肪組織體積的脂肪提取單元:基於電腦斷層下的衰減係數計算心包的亨氏單位值(Hounsfield unit value);根據雜訊容限定義該亨氏單位值的正負標準偏差範圍;以及根據該範圍確定該心包內的該脂肪組織體積。在一些實施態樣中,該脂肪組織可以是心外膜脂肪組織,但本揭露不限於此。 In at least one embodiment of the present disclosure, the analysis result includes the adipose tissue volume of the region, and the extraction module includes a fat extraction unit configured to quantify the adipose tissue volume within the pericardium in the region by the following steps: calculating the Hounsfield unit value of the pericardium based on the attenuation coefficient under computed tomography; defining a range of plus/minus standard deviations around the Hounsfield unit value according to a noise tolerance; and determining the adipose tissue volume within the pericardium according to the range. In some embodiments, the adipose tissue may be epicardial adipose tissue, but the present disclosure is not limited thereto.
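As a rough sketch of the quantification steps above: count the voxels whose attenuation falls inside a Hounsfield-unit window within the pericardium mask, then convert the count to a volume. The center HU value, standard deviation, tolerance multiplier, and voxel spacing below are hypothetical placeholders (the disclosure derives the window from the pericardium itself):

```python
import numpy as np

def eat_volume_ml(hu_volume, pericardium_mask, center_hu=-100.0, n_sd=1.5,
                  sd_hu=60.0, spacing_mm=(2.0, 2.0, 2.0)):
    """Count voxels with HU in center_hu +/- n_sd * sd_hu inside the
    pericardium mask and convert the voxel count to milliliters."""
    lo, hi = center_hu - n_sd * sd_hu, center_hu + n_sd * sd_hu
    fat = pericardium_mask & (hu_volume >= lo) & (hu_volume <= hi)
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return fat.sum() * voxel_mm3 / 1000.0  # mm^3 -> mL

hu = np.zeros((10, 10, 10), dtype=np.float32)
hu[:1] = -100.0  # 100 voxels of fat-like attenuation
mask = np.ones_like(hu, dtype=bool)
print(eat_volume_ml(hu, mask))  # 0.8
```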

在本揭露的至少一個實施態樣中,該分析結果包括該區域的鈣化積分,且其中該提取模組包括配置為通過以下步驟量化來自該區域的心臟或主動脈的該鈣化積分的鈣提取單元:基於藉由心臟血管鈣化積分(Agatston score)定義的切點從該區域中識別出鈣區域;擷取該鈣區域的3D影像;藉由分類器分析該3D影像以確定該鈣區域的分類;指定該鈣區域的鈣化積分;以及產生熱圖以說明該鈣區域及該鈣化積分。 In at least one embodiment of the present disclosure, the analysis result includes the calcium score of the region, and the extraction module includes a calcium extraction unit configured to quantify the calcium score of the heart or aorta from the region by the following steps: identifying a calcium region from the region based on a cut-off point defined by the cardiovascular calcium score (Agatston score); extracting a 3D image of the calcium region; analyzing the 3D image with a classifier to determine the classification of the calcium region; assigning a calcium score to the calcium region; and generating a heat map to illustrate the calcium region and the calcium score.
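The cut-off-based identification and scoring steps above can be illustrated with a simplified per-slice Agatston-style sketch (the 130 HU cut-off and density weights 1–4 are the conventional Agatston parameters); the CNN classifier and heat-map steps are omitted, so this is an illustration rather than the disclosure's full pipeline:

```python
import numpy as np
from scipy import ndimage

def density_weight(max_hu):
    """Conventional Agatston density factor for a lesion's peak HU."""
    if max_hu >= 400: return 4
    if max_hu >= 300: return 3
    if max_hu >= 200: return 2
    return 1

def agatston_slice_score(hu_slice, pixel_area_mm2, threshold_hu=130.0):
    """Score one axial slice: label connected regions above the cut-off,
    then sum lesion area times the density weight over all lesions."""
    candidate = hu_slice >= threshold_hu
    labels, n = ndimage.label(candidate)
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        score += lesion.sum() * pixel_area_mm2 * density_weight(hu_slice[lesion].max())
    return score

slice_hu = np.zeros((16, 16), dtype=np.float32)
slice_hu[4:6, 4:6] = 450.0  # one 2x2 lesion with peak HU >= 400
print(agatston_slice_score(slice_hu, pixel_area_mm2=1.0))  # 16.0
```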

在本揭露的至少一個實施態樣中,復包括用於通過以下步驟將該醫學影像預處理成預定的一致性的預處理模組:將該醫學影像的3D體積重新取樣為2×2×2毫米的間距;將該3D體積的強度正規化為均值為零的單位標準差;以及將該3D體積轉換為通道第一維(channel-first)矩陣的形式。 In at least one embodiment of the present disclosure, a preprocessing module is further included for preprocessing the medical image to a predetermined consistency by: resampling the 3D volume of the medical image to a spacing of 2×2×2 mm; normalizing the intensity of the 3D volume to zero mean and unit standard deviation; and converting the 3D volume into channel-first matrix form.

在本揭露的至少一個實施態樣中,復包括配置為基於該分析結果呈現心血管風險預測分數的輸出模組。 In at least one embodiment of the present disclosure, an output module is further included that is configured to present a cardiovascular risk prediction score based on the analysis results.

本揭露進一步提供一種電腦可讀媒介,其儲存有電腦可執行代碼,且執行該電腦可執行代碼實現上述方法。 The present disclosure further provides a computer-readable medium that stores computer-executable code, and the computer-executable code is executed to implement the above method.

1:系統 1: System

10:影像取得模組 10: Image acquisition module

121:起始點 121: Starting point

122:心尖 122: Cardiac apex

123:心包 123: Pericardium

131:主動脈弓的邊緣 131: Edge of the aortic arch

132:上行升主動脈的邊緣 132: Edge of the ascending aorta

133:下行主動脈的邊緣 133: Edge of the descending aorta

134:上行升主動脈 134: Ascending aorta

135:主動脈弓 135: Aortic arch

136:第一下行主動脈 136: First descending aorta

137:第二下行主動脈 137: Second descending aorta

20:預處理模組 20: Preprocessing module

30:分割模組 30: Segmentation module

30A:編碼器部分 30A: Encoder part

30B:解碼器部分 30B: Decoder part

30C:注意力機制 30C: Attention mechanism

30D:變分自編碼器解碼器分支 30D: Variational autoencoder-decoder branch

31:心包及心臟 31: Pericardium and heart

32:上行升主動脈 32: Ascending aorta

33:下行主動脈 33: Descending aorta

34:心包膜 34: Pericardium

301:殘差卷積塊 301: Residual convolution block

302:注意閥 302: Attention gate

40:提取模組 40: Extraction module

41:脂肪提取單元 41: Fat extraction unit

42:鈣提取單元 42: Calcium extraction unit

50:輸出模組 50: Output module

60:模型訓練模組 60: Model training module

501:原始圖片 501: Original image

502:分割結果 502: Segmentation result

503:結果 503: Result

504:存活分析 504:Survival analysis

CL:中心線 CL: Center Line

F1-F2:基準點 F1-F2: Reference points

H1-H3:醫院 H1-H3: Hospital

S1-S5:步驟 S1-S5: Steps

S61-S65:步驟 S61-S65: Steps

S621-S624:步驟 S621-S624: Steps

x:跳過的特徵 x: Skipped features

x':先前跳過的特徵 x': previously skipped feature

p:注意邏輯特 p: Attention logit

p':先前的注意邏輯特 p': Previous attention logit

g:閥控信號 g: Gating signal

圖1是根據本揭露的用於心血管風險預測的系統的示意圖。 FIG1 is a schematic diagram of a system for cardiovascular risk prediction according to the present disclosure.

圖2是根據本揭露的用於心血管風險預測的步驟的流程圖。 FIG2 is a flow chart of the steps for cardiovascular risk prediction according to the present disclosure.

圖3A及圖3B為藉由根據本揭露不同成像能力的掃描裝置所拍攝的影像示意圖。 Figures 3A and 3B are schematic diagrams of images captured by scanning devices with different imaging capabilities according to the present disclosure.

圖4A及圖4B是根據本揭露的機器學習模型的網路架構的示意圖。 Figures 4A and 4B are schematic diagrams of the network architecture of the machine learning model disclosed herein.

圖5A至圖5E是根據本揭露的從醫學影像中分割的區域的示意圖。 Figures 5A to 5E are schematic diagrams of regions segmented from medical images according to the present disclosure.

圖6是根據本揭露從醫學影像中提取的心外膜脂肪組織(EAT)的示意圖。 FIG6 is a schematic diagram of epicardial adipose tissue (EAT) extracted from medical images according to the present disclosure.

圖7是根據本揭露從醫學影像中提取的鈣化積分的示意圖。 FIG. 7 is a schematic diagram of the calcium score extracted from a medical image according to the present disclosure.

圖8是根據本揭露用於分類鈣區域的分類器的網路架構的示意圖。 FIG8 is a schematic diagram of the network architecture of a classifier for classifying calcium regions according to the present disclosure.

圖9是根據本揭露的心血管風險預測報告的示意圖。 FIG9 is a schematic diagram of a cardiovascular risk prediction report according to the present disclosure.

圖10是根據本揭露用於訓練機器學習模型的步驟的流程圖。 FIG10 is a flow chart of the steps for training a machine learning model according to the present disclosure.

圖11是聯合學習(federated learning)概念的示意圖。 Figure 11 is a schematic diagram of the concept of federated learning.

圖12A-1至圖12D-3是手動標記心臟的訓練資料的過程的示意圖。 Figures 12A-1 to 12D-3 are schematic diagrams of the process of manually marking the training data of the heart.

圖13A至圖13G-2是手動標記主動脈的訓練資料的過程的示意圖。 Figures 13A to 13G-2 are schematic diagrams of the process of manually marking the training data of the aorta.

圖14是根據本揭露的用於訓練機器學習模型的步驟的流程圖。 FIG. 14 is a flow chart of the steps for training a machine learning model according to the present disclosure.

為了詳細說明本揭露,提供以下實施態樣。熟知本領域技術人員在閱讀了本說明書的公開內容後,可以很容易地理解本揭露的優點及效果,也可以在其他不同的實施態樣中實施或應用。因此,可以修改及/或改變用於執行本揭露的以下實施態樣而不違背其針對不同態樣及應用的範圍,並且本文公開的本揭露範圍內的任何元素或方法可以與任何其他在本公開的任何實施態樣中公開的元素或方法組合。 In order to explain the present disclosure in detail, the following implementations are provided. After reading the disclosure of this specification, a person skilled in the art can easily understand the advantages and effects of the present disclosure, and can also implement or apply it in other different implementations. Therefore, the following implementations for executing the present disclosure can be modified and/or changed without violating the scope of different aspects and applications, and any element or method within the scope of the present disclosure disclosed herein can be combined with any other element or method disclosed in any implementation of the present disclosure.

本揭露的附圖所示的比例關係、結構、尺寸等特徵僅用於說明本文所描述的實施態樣,以便熟知本領域技術人員能夠從中閱讀和理解本揭露,並不旨在侷限本揭露的範圍。在不影響本揭露的設計目的及效果的情況下,任何對上述特徵的變動、修改或調整,均應落入本揭露技術內容的範圍內。 The proportions, structures, dimensions and other features shown in the attached figures of this disclosure are only used to illustrate the implementation described herein so that those skilled in the art can read and understand this disclosure, and are not intended to limit the scope of this disclosure. Any changes, modifications or adjustments to the above features without affecting the design purpose and effect of this disclosure shall fall within the scope of the technical content of this disclosure.

如本文所用,諸如「第一」、「第二」等的順序術語僅被引用以方便描述或區分諸如元件、組件、結構、區域、部件、裝置、系統等的限制,其並非旨在限制本揭露的範圍,亦不旨在限制這些限制之間的空間序列。此外,除非另有說明,諸如「一」、「該」之類的單數形式的措詞也屬於復數形式,並且諸如「或」及「及/或」之類的措詞可以互換使用。 As used herein, ordinal terms such as "first", "second", etc. are only used to facilitate description or distinction of limitations such as elements, components, structures, regions, parts, devices, systems, etc., and are not intended to limit the scope of the present disclosure, nor are they intended to limit the spatial sequence between these limitations. In addition, unless otherwise specified, singular terms such as "one", "the" and the like also belong to the plural form, and terms such as "or" and "and/or" can be used interchangeably.

如本文所用,「目標」、「個體」和「患者」的術語可以互換並且是指動物,例如包括人類物種的哺乳動物。除非明確指出一種性別,否則「目標」旨在指男性及女性。 As used herein, the terms "subject," "individual," and "patient" are interchangeable and refer to animals, such as mammals including the human species. Unless a gender is explicitly indicated, "subject" is intended to refer to both males and females.

本文所述「包括」、「包含」、「具有」、「含有」或其他類似 術語旨在非排除其他要件。例如,當描述一個目標「包括」一個限制時,除非另有說明,它可能額外包括其他元素、組件、結構、區域、部件、裝置、系統、步驟或連接等,並且不應排除其他限制。 The terms "include", "comprises", "has", "contains" or other similar terms described herein are not intended to exclude other elements. For example, when an object is described as "including" a limitation, unless otherwise stated, it may additionally include other elements, components, structures, regions, parts, devices, systems, steps or connections, etc., and other limitations should not be excluded.

如本文所用,提及一個或多個元素的列表「至少一個」的短語,應理解為表示選自元素列表中的任何一個或多個元素的至少一個元素,但不一定包括元素列表中列出的每個元素中的至少一個,並且不排除元素列表中元素的任何組合。該定義還允許除了在「至少一個」的短語所指的元素列表中標識的元素之外,可以可選地存在元素,無論與那些標識的元素相關還是不相關。因此,作為非限制性示例,「A及B中的至少一個」(或等效地,「A或B中的至少一個」,或等效地,「A及/或B中的至少一個」)可以在一個實施方案中,指至少一種任選地包括多於一個的A且不存在B(並且任選地包括除B以外的元素);在另一個實施方案中,指至少一種任選地包括多於一個的B且不存在A(並且任選地包括除A之外的元素);在又一個實施方案中,指至少一種任選地包括多於一個的A以及至少一種任選地包括多於一個的B(並且任選地包括其它元素)。 As used herein, the phrase "at least one" referring to a list of one or more elements should be understood to mean at least one element selected from any one or more elements in the list of elements, but does not necessarily include at least one of each element listed in the list of elements, and does not exclude any combination of elements in the list of elements. This definition also allows that in addition to the elements identified in the list of elements to which the phrase "at least one" refers, elements may optionally be present, whether related or unrelated to those identified elements. Thus, as a non-limiting example, "at least one of A and B" (or equivalently, "at least one of A or B", or equivalently, "at least one of A and/or B") may refer, in one embodiment, to at least one optionally including more than one A and no B (and optionally including elements other than B); in another embodiment, to at least one optionally including more than one B and no A (and optionally including elements other than A); in yet another embodiment, to at least one optionally including more than one A and at least one optionally including more than one B (and optionally including other elements).

如本文所用,「一個或多個」及「至少一個」可以具有相同的含義並且包括一個、兩個、三個或更多個。 As used herein, "one or more" and "at least one" may have the same meaning and include one, two, three or more.

參考圖1,揭露本申請的用於心血管風險預測的系統1。在至少一個實施態樣中,系統1主要包括一影像取得模組10、一預處理模組20、一分割模組30、一提取模組40、一輸出模組50及一模型訓練模組60。系統1中的各個元素之間的箭頭代表它們之間的操作關係及資訊傳輸方向,可以經由任何合適的有線或無線方式來實現。 Referring to FIG. 1 , the present application discloses a system 1 for cardiovascular risk prediction. In at least one embodiment, the system 1 mainly includes an image acquisition module 10, a preprocessing module 20, a segmentation module 30, an extraction module 40, an output module 50, and a model training module 60. The arrows between the elements in the system 1 represent the operational relationship and information transmission direction between them, which can be implemented by any suitable wired or wireless method.

在一些實施態樣中,影像取得模組10可耦接至或植入掃描裝置中以取得個體(例如,患者)的醫學影像。在較佳實施態樣中,該掃描裝置是由飛利浦(Brilliance iCT、Brilliance CT)、通用電子(Lightspeed VCT、Revolution CT)、西門子(SOMATOM Definition AS)、佳能(Aquilion PRIME)或類似者提供之電腦斷層(CT)掃描儀,且得到的醫學影像為非顯影電腦斷層影像。在本文所述的實施態樣中,影像取得模組10還可以耦接或植入安裝在任意醫院內的醫療影像儲傳系統(picture archiving and communication system,PACS),使得藉由PACS儲存的醫學影像可以由影像取得模組10聯繫用於接續在系統1中進行處理。在進一步的實施態樣中,影像取得模組10還可以藉由提供用於上傳/輸入醫學影像的交互界面,藉由手動方式接收個體的醫學影像。然而,以上討論的掃描裝置、醫學影像及影像取得模組10亦可以以其他合適的形式實現,因此,並不意味著限制本揭露的範圍。將在本揭露後面進一步描述影像取得模組10的詳細功能。 In some embodiments, the image acquisition module 10 can be coupled to or embedded in a scanning device to acquire medical images of a subject (e.g., a patient). In a preferred embodiment, the scanning device is a computed tomography (CT) scanner provided by Philips (Brilliance iCT, Brilliance CT), General Electric (Lightspeed VCT, Revolution CT), Siemens (SOMATOM Definition AS), Canon (Aquilion PRIME), or the like, and the acquired medical images are non-contrast computed tomography images. In the embodiments described herein, the image acquisition module 10 can also be coupled to or embedded in a picture archiving and communication system (PACS) installed in any hospital, so that the medical images stored by the PACS can be accessed by the image acquisition module 10 for subsequent processing in the system 1. In a further embodiment, the image acquisition module 10 can also receive the medical images of a subject manually by providing an interactive interface for uploading/inputting medical images. However, the scanning devices, medical images, and image acquisition module 10 discussed above can also be implemented in other suitable forms and, therefore, are not meant to limit the scope of the present disclosure. The detailed functions of the image acquisition module 10 will be further described later in the present disclosure.

在一些實施態樣中,預處理模組20係配置為在系統1中執行分析程序之前維持由影像取得模組10取得的醫學影像的一致性。在本文描述的實施態樣中,藉由預處理模組20執行的任務可包括但不限於醫學影像的重新取樣、正規化及轉換。將在本揭露後面進一步描述預處理模組20的詳細功能。 In some embodiments, the preprocessing module 20 is configured to maintain the consistency of the medical image acquired by the image acquisition module 10 before performing the analysis process in the system 1. In the embodiments described herein, the tasks performed by the preprocessing module 20 may include but are not limited to resampling, normalization, and transformation of the medical image. The detailed functions of the preprocessing module 20 will be further described later in this disclosure.

在一些實施態樣中,分割模組30係配置為從給定的醫學影像(經由預處理模組20的預處理之後)分割心臟、心包及/或主動脈的區域,以用於心血管風險預測的進一步分析。在本文描述的實施態樣中,分割模組30係配置成由執行分割任務的機器學習模型來實現。機器學習模型可以基於如決策樹、卷積神經網路(CNN)、遞歸神經網路(RNN)等算法或其任意組合來開發,而本揭露不限於此。將在本揭露後面進一步描述分割模組30的詳細功能。 In some embodiments, the segmentation module 30 is configured to segment the regions of the heart, pericardium, and/or aorta from a given medical image (after preprocessing by the preprocessing module 20) for further analysis for cardiovascular risk prediction. In the embodiments described herein, the segmentation module 30 is implemented by a machine learning model that performs the segmentation task. The machine learning model can be developed based on algorithms such as decision trees, convolutional neural networks (CNN), recurrent neural networks (RNN), or any combination thereof, but the present disclosure is not limited thereto. The detailed functions of the segmentation module 30 will be further described later in the present disclosure.

在一些實施態樣中,提取模組40係配置為從分割後的醫學影像中提取分析結果。在本文所述的實施態樣中,提取模組40包括配置為從所述醫學影像計算脂肪組織體積的脂肪提取單元41,以及配置為從所述醫學影像計算鈣化積分的鈣提取單元42。應當注意的是,提取模組40還可包括用於從醫學影像中提取其他資訊以輔助心血管風險預測的其他單元,而本揭露不限於此。將在本揭露後面進一步描述提取模組40的詳細功能。 In some embodiments, the extraction module 40 is configured to extract analysis results from the segmented medical image. In the embodiments described herein, the extraction module 40 includes a fat extraction unit 41 configured to calculate the adipose tissue volume from the medical image, and a calcium extraction unit 42 configured to calculate the calcium score from the medical image. It should be noted that the extraction module 40 may also include other units for extracting other information from the medical image to assist cardiovascular risk prediction, but the present disclosure is not limited thereto. The detailed functions of the extraction module 40 will be further described later in the present disclosure.

在一些實施態樣中,輸出模組50係配置為在對醫學影像進行分析之後,輸出關於個體心血管風險的分析結果。在本文描述的實施態樣中,分析結果可以以報告的形式實現,並指示來自所述醫學影像的分割區域、脂肪組織體積、鈣化積分等資訊。然而,也可以使用用於呈現分析結果的其他形式並且不應限制本揭露的範圍。將在本揭露後面進一步描述輸出模組50的詳細功能。 In some embodiments, the output module 50 is configured to output the analysis result of the subject's cardiovascular risk after the medical image is analyzed. In the embodiments described herein, the analysis result can be implemented in the form of a report and indicate information such as the segmented regions, adipose tissue volume, and calcium score derived from the medical image. However, other forms of presenting the analysis result may also be used and should not limit the scope of the present disclosure. The detailed functions of the output module 50 will be further described later in the present disclosure.

在一些實施態樣中,模型訓練模組60係配置為在部署到分割模組30之前提供機器學習模型的訓練。在本文描述的實施態樣中,機器學習模型的訓練是基於聯合學習及/或自適應學習而執行,使得機器學習模型可基於從不同機構及/或掃描裝置的臨床實踐中收集的經更新的醫學影像及參數設置來不斷改進其分割準確性,即使機器學習模型已經部署於實際使用中。將在本揭露後面進一步描述模型訓練模組60的詳細功能。 In some embodiments, the model training module 60 is configured to train the machine learning model before it is deployed to the segmentation module 30. In the embodiments described herein, the training of the machine learning model is performed based on federated learning and/or adaptive learning, so that the machine learning model can continuously improve its segmentation accuracy based on updated medical images and parameter settings collected from the clinical practice of different institutions and/or scanning devices, even after the machine learning model has been deployed in actual use. The detailed functions of the model training module 60 will be further described later in the present disclosure.
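The federated part of such training can be illustrated with a minimal FedAvg-style aggregation sketch, in which each hospital contributes model weights rather than raw images; the layer shapes and per-site data sizes below are hypothetical:

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """Aggregate per-site model weights into a global model by a
    data-size-weighted average (FedAvg-style), so raw images never
    leave each hospital."""
    total = float(sum(site_sizes))
    return [
        sum(w[i] * (n / total) for w, n in zip(site_weights, site_sizes))
        for i in range(len(site_weights[0]))
    ]

# Two hypothetical hospitals, each holding one weight matrix and one bias.
h1 = [np.ones((2, 2)), np.zeros(2)]
h2 = [3 * np.ones((2, 2)), np.ones(2)]
global_w = federated_average([h1, h2], site_sizes=[100, 300])
print(global_w[0][0, 0])  # 0.25*1 + 0.75*3 = 2.5
```

In a full round, the aggregated weights would be sent back to every site for further local training, repeating until convergence.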

在一些實施態樣中,系統1的該些元件可以單獨地實現為任何合適的電腦設備、裝置、程序、系統等,但是本揭露不限於此。在一些實施態樣中,影像取得模組10、預處理模組20、分割模組30、提取模組40、輸出模組50及模型訓練模組60中的任意兩個或多個可整合而不是實現為不同的單位。在一些實施態樣中,所述元件也可以在雲端電腦環境中實現。然而,在不偏離本揭露的操作理念的情況下,系統1的所述元件的配置可以以任何合適的形式實現並且 不應限制本揭露的範圍。 In some embodiments, the components of system 1 can be individually implemented as any suitable computer equipment, devices, programs, systems, etc., but the present disclosure is not limited thereto. In some embodiments, any two or more of the image acquisition module 10, the preprocessing module 20, the segmentation module 30, the extraction module 40, the output module 50, and the model training module 60 can be integrated rather than implemented as different units. In some embodiments, the components can also be implemented in a cloud computer environment. However, without departing from the operating concept of the present disclosure, the configuration of the components of system 1 can be implemented in any suitable form and should not limit the scope of the present disclosure.

參考圖2,揭露描述利用系統1的元件進行心血管風險預測的步驟的流程圖,而圖3A-圖3B、圖4A-圖4B、圖5A-圖5E及圖6-9亦被引用以藉由參考來說明每個步驟的執行細節。應當理解的,圖2所示的步驟是在分割模組30的機器學習模型經過充分訓練並準備好實際使用的基礎上執行的。然而,機器學習模型的訓練過程也可以在自適應學習的概念下於圖2的步驟中執行,且因此不會干擾圖2中的所述步驟。 Referring to FIG. 2 , a flow chart describing the steps of cardiovascular risk prediction using the elements of system 1 is disclosed, and FIGS. 3A-3B , 4A-4B , 5A-5E and 6-9 are also cited to illustrate the execution details of each step by reference. It should be understood that the steps shown in FIG. 2 are performed on the basis that the machine learning model of the segmentation module 30 is fully trained and ready for actual use. However, the training process of the machine learning model can also be performed in the steps of FIG. 2 under the concept of adaptive learning, and therefore will not interfere with the steps in FIG. 2 .

在步驟S1,影像取得模組10(例如,從掃描裝置及/或PACS)獲得一個或多個醫學影像。在本文描述的實施態樣中,影像取得模組10係配置為接收醫學數位影像傳輸協定(digital imaging communications in medicine,DICOM)標準的醫學影像,並且影像取得模組10可提供圖形用戶界面(GUI)供用戶手動或自動上傳及輸入醫學影像。如上所述,醫學影像不限於電腦斷層掃描(CT)影像,還可以是核磁共振成像(MRI)影像、單光子射出電腦斷層掃描(SPECT)影像、正電子斷層掃描(PET)影像等,其中本揭露不限於此。此外,由於出於分析目的的感興趣區域,所獲得的醫學影像應至少包含個體心臟周圍的區域(例如,胸腔電腦斷層影像),但醫學影像的內容可能因成像能力而異(例如,參見圖3A及圖3B,其中醫院A的掃描裝置可以僅對個體的心臟進行成像,而醫院B的掃描裝置則可以對個體的整個胸腔進行成像),然而本揭露不限於此。在替代實施態樣中,影像取得模組10可以實施一過濾機制以確保醫學影像在分析之前確實與個體中的感興趣區域相關聯。 In step S1, the image acquisition module 10 obtains one or more medical images (for example, from a scanning device and/or PACS). In the embodiment described herein, the image acquisition module 10 is configured to receive medical images in accordance with the digital imaging communications in medicine (DICOM) standard, and the image acquisition module 10 may provide a graphical user interface (GUI) for users to manually or automatically upload and input medical images. As described above, the medical images are not limited to computer tomography (CT) images, but may also be magnetic resonance imaging (MRI) images, single photon emission computed tomography (SPECT) images, positron emission tomography (PET) images, etc., but the present disclosure is not limited thereto. In addition, due to the region of interest for analysis purposes, the obtained medical image should at least include the area around the individual's heart (e.g., chest CT image), but the content of the medical image may vary depending on the imaging capability (e.g., see Figures 3A and 3B, where the scanning device of Hospital A can only image the individual's heart, while the scanning device of Hospital B can image the individual's entire chest), but the present disclosure is not limited to this. In an alternative embodiment, the image acquisition module 10 can implement a filtering mechanism to ensure that the medical image is indeed associated with the region of interest in the individual before analysis.

在步驟S2,預處理模組20將一個或多個醫學影像預處理成預定的一致性後進行分析。在本文所述的一些實施態樣中,醫學影像的預處理包括以 下步驟:將醫學影像的三維(3D)體積重新取樣為2×2×2毫米的間距;將醫學影像的重新取樣的3D體積的強度正規化為均值為零的單位標準差(即均值為零,而標準差為一);以及將中間影像的正規化3D體積轉換為通道第一維(channel-first)矩陣的形式。在至少一個實施態樣中,醫學影像的3D體積的所述重新取樣可以避免在分割模組30的後續處理期間儲存空間不足。在至少一個實施態樣中,醫學影像的重新取樣的3D體積的所述正規化強度可使來自不同掃描裝置的醫學影像保持一致,從而使分割模組30得到更好的分割結果。在至少一個實施態樣中,所述轉換醫學影像的正規化3D體積可幫助加快分割模組30後續處理過程中的運算速度。在醫學影像預處理之後,即可以作為分割模組30的輸入以進行進一步處理。 In step S2, the preprocessing module 20 preprocesses one or more medical images to a predetermined consistency before analysis. In some embodiments described herein, the preprocessing of the medical images includes the following steps: resampling the three-dimensional (3D) volume of the medical image to a spacing of 2×2×2 mm; normalizing the intensity of the resampled 3D volume of the medical image to a unit standard deviation with a mean of zero (i.e., the mean is zero and the standard deviation is one); and converting the normalized 3D volume of the intermediate image to a channel-first matrix form. In at least one embodiment, the resampling of the 3D volume of the medical image can avoid insufficient storage space during subsequent processing by the segmentation module 30. In at least one embodiment, the normalized intensity of the resampled 3D volume of the medical image can make the medical images from different scanning devices consistent, so that the segmentation module 30 can obtain better segmentation results. In at least one embodiment, the normalized 3D volume of the converted medical image can help speed up the calculation speed in the subsequent processing of the segmentation module 30. After the medical image is pre-processed, it can be used as the input of the segmentation module 30 for further processing.
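The three preprocessing steps above (resampling to a 2×2×2 mm spacing, zero-mean/unit-variance normalization, conversion to a channel-first array) can be sketched as follows; `scipy.ndimage.zoom` is used here as one convenient linear resampling routine, an assumption rather than the disclosure's stated implementation:

```python
import numpy as np
from scipy import ndimage

def preprocess(volume, spacing_mm, target_mm=(2.0, 2.0, 2.0)):
    """Resample a 3D volume to the target spacing, normalize it to zero
    mean and unit standard deviation, and add a leading channel axis."""
    zoom = [s / t for s, t in zip(spacing_mm, target_mm)]
    resampled = ndimage.zoom(volume.astype(np.float32), zoom, order=1)
    normalized = (resampled - resampled.mean()) / (resampled.std() + 1e-8)
    return normalized[np.newaxis, ...]  # channel-first: (1, D, H, W)

vol = np.random.default_rng(0).normal(size=(40, 40, 40)).astype(np.float32)
out = preprocess(vol, spacing_mm=(1.0, 1.0, 1.0))
print(out.shape)  # (1, 20, 20, 20)
```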

在步驟S3,藉由分割模組30的機器學習模型對一個或多個醫學影像進行分割,以識別個體的心臟、心包及主動脈的區域。在本文描述的實施態樣中,來自分割模組30的機器學習模型的輸出係標記有心臟、心包、上行升主動脈及下行主動脈的區域的個體的醫學影像,並將分別對其進行處理以在後續處理中從中確定脂肪組織體積及鈣化積分。 In step S3, one or more medical images are segmented by the machine learning model of the segmentation module 30 to identify the regions of the heart, pericardium, and aorta of the individual. In the embodiment described herein, the output from the machine learning model of the segmentation module 30 is a medical image of the individual with the regions of the heart, pericardium, ascending aorta, and descending aorta labeled, and they are processed separately to determine the fat tissue volume and calcification integral therefrom in subsequent processing.

參考圖4A,揭露本文使用的機器學習模型的網路架構,其包括編碼器部分30A、解碼器部分30B、注意力機制30C及變分自編碼器(VAE)解碼器分支30D。從所示的網路架構可看出,機器學習模型係基於U-Net結構,其中的每個殘差卷積塊301由6個算術運算堆疊而成,即群組正規化(如圖4A所示之Norm)、整流線性單元(ReLU)(如圖4A所示之Act)、卷積(如圖4A所示之Conv),群組正規化、ReLU及卷積,使用快捷方式(shortcut connecting)連接及最初的16個過濾器(filter)。在至少一個實施態樣中,編碼器部分30A係配置為逐漸縮小輸 入(醫學影像)的影像尺寸(參見編碼器部分30A的向下箭頭),同時在輸入的特徵提取過程中增加輸入的特徵尺寸。在至少一個實施態樣中,解碼器部分30B係配置為逐漸減小特徵尺寸,同時從編碼器部分30A的端點逐級增大特徵的影像尺寸(參見解碼器部分30B的向上箭頭)直到產生與輸入具有相同空間大小的輸出。在至少一個實施態樣中,注意力機制30C被配置為突出通過編碼器部分30A和解碼器部分30B之間的殘差連接(參見注意力機制30C周圍的水平箭頭)傳遞的顯著特徵。在至少一個實施態樣中,變分自編碼器(VAE)解碼器分支30D係配置為基於來自編碼器部分30A的端點的特徵,遵循解碼器部分30B的相同架構來重建輸入(醫學影像),若訓練資料有限,則這有助於在訓練期間向編碼器部分30A添加額外的指導及規範化。 Referring to FIG. 4A , the network architecture of the machine learning model used in this paper is disclosed, which includes an encoder part 30A, a decoder part 30B, an attention mechanism 30C, and a variational autoencoder (VAE) decoder branch 30D. From the network architecture shown, it can be seen that the machine learning model is based on a U-Net structure, in which each residual convolution block 301 is composed of 6 arithmetic operations stacked, namely group normalization (Norm as shown in FIG. 4A ), rectified linear unit (ReLU) (Act as shown in FIG. 4A ), convolution (Conv as shown in FIG. 4A ), group normalization, ReLU and convolution, using shortcuts (shortcut connecting) to connect and the initial 16 filters (filter). In at least one embodiment, the encoder portion 30A is configured to gradually reduce the image size of the input (medical image) (see the downward arrow of the encoder portion 30A) while increasing the feature size of the input during the feature extraction process of the input. In at least one embodiment, the decoder portion 30B is configured to gradually reduce the feature size while gradually increasing the image size of the feature from the end point of the encoder portion 30A (see the upward arrow of the decoder portion 30B) until an output having the same spatial size as the input is generated. 
In at least one embodiment, the attention mechanism 30C is configured to highlight the salient features transmitted through the residual connection between the encoder portion 30A and the decoder portion 30B (see the horizontal arrows around the attention mechanism 30C). In at least one implementation, the variational autoencoder (VAE) decoder branch 30D is configured to reconstruct the input (medical image) based on features from the endpoints of the encoder portion 30A, following the same architecture of the decoder portion 30B, which helps add additional guidance and regularization to the encoder portion 30A during training if the training data is limited.
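The encoder/decoder behavior described above can be illustrated with a small shape-bookkeeping sketch. This is a simplified illustration, not the patented network itself: the initial 16 filters follow the text, while the number of levels, the stride-2 halving, and the channel doubling are assumptions.

```python
# Sketch of how spatial size and feature (channel) size evolve through the
# U-Net-style encoder and decoder described above. Assumed: 4 levels with
# stride-2 downsampling and channel doubling from the initial 16 filters.

def encoder_shapes(spatial, channels=16, levels=4):
    """Spatial size shrinks while feature size grows at each encoder level."""
    shapes = []
    for _ in range(levels):
        shapes.append((spatial, channels))
        spatial //= 2      # image size reduced (downward arrows in FIG. 4A)
        channels *= 2      # feature size increased during extraction
    return shapes

def decoder_shapes(enc_shapes):
    """Decoder mirrors the encoder until the output matches the input size."""
    return list(reversed(enc_shapes))

enc = encoder_shapes(160)
dec = decoder_shapes(enc)
```

Because the decoder simply mirrors the encoder levels here, the last decoder shape has the same spatial size as the network input, matching the text's requirement that the output share the input's spatial size.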

參考圖4A及圖4B,注意力機制30C包括多個注意閥(attention gate)302,其對應於解碼器部分30B的階級並且藉由上採樣、ReLU、卷積及Sigmoid函數的操作堆疊,其中先前的注意邏輯特(attention logit)p'攜帶從前一層學習的資訊,閥控信號g攜帶來自先前較粗尺度的上下文資訊,以及先前跳過的特徵(skipped feature)x'(即,跳過的特徵x通過編碼器部分30A的相應級別的兩個殘差卷積塊301)由每個注意閥302在進入下一個注意閥302及/或與解碼器部分30B連接用於輸出之前產生注意邏輯特p。 4A and 4B , the attention mechanism 30C includes a plurality of attention gates 302, which correspond to the stages of the decoder part 30B and are stacked by upsampling, ReLU, convolution and sigmoid functions, wherein the previous attention logit p' carries the information learned from the previous layer, the gate control signal g carries the contextual information from the previous coarse scale, and the previously skipped features. feature)x' (i.e., the skipped feature x passes through the two residual convolution blocks 301 of the corresponding level of the encoder part 30A) and generates an attention logic feature p by each attention valve 302 before entering the next attention valve 302 and/or connecting to the decoder part 30B for output.
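A toy one-dimensional sketch of the additive attention gating described above follows. The upsampling and convolution operations of FIG. 4B are collapsed into scalar weights, and all weight values are illustrative assumptions rather than values from the patent.

```python
import math

# Toy 1-D sketch of an attention gate: the skipped feature x', the gating
# signal g from the coarser scale, and the previous attention logit p' are
# combined, squashed through a sigmoid, and used to scale the skipped
# features so that salient features are highlighted.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def attention_gate(x_skip, g, p_prev, wx=0.8, wg=0.6, wp=0.4):
    # attention logit p combines the three inputs (upsample/conv omitted)
    logits = [wx * x + wg * g + wp * p_prev for x in x_skip]
    alphas = [sigmoid(l) for l in logits]            # gate values in (0, 1)
    gated = [a * x for a, x in zip(alphas, x_skip)]  # re-weighted features
    return gated, logits

gated, p = attention_gate([1.0, -2.0, 3.0], g=0.5, p_prev=0.1)
```

Since each gate value lies strictly between 0 and 1, gating can only attenuate a feature's magnitude, which is how less salient skip features get suppressed.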

參考圖5A-5E,示出了分割模組30處理的醫學影像的分割結果的示例,其中圖5A、圖5B及圖5C示出了心包及心臟31、上行升主動脈32及下行主動脈33的分割區域分別從個體的矢狀面、冠狀面及水平面觀察到的二維(2D)視圖;圖5D示出了從個體的水平面觀察到的2D視圖中呈現的心包膜34的分段區域;及圖5E示出了在個體胸腔的3D視圖中呈現的心包及心臟31、上行 升主動脈32及下行主動脈33的分段區域。 Referring to FIGS. 5A-5E, examples of segmentation results of medical images processed by the segmentation module 30 are shown, wherein FIGS. 5A, 5B and 5C show segmented regions of the pericardium and heart 31, the ascending aorta 32 and the descending aorta 33 respectively observed from the sagittal plane, the coronal plane and the horizontal plane of the individual; FIG. 5D shows a segmented region of the pericardium 34 presented in a 2D view observed from the horizontal plane of the individual; and FIG. 5E shows a segmented region of the pericardium and heart 31, the ascending aorta 32 and the descending aorta 33 presented in a 3D view of the chest cavity of the individual.

在圖2中的步驟S4,提取模組40分析一個或多個醫學影像以基於分割結果提取脂肪組織(例如,EAT)體積及鈣化積分。在本文描述的實施態樣中,提取模組40係配置為從醫學影像中有序地排除不重要的部分(例如,椎骨及胸骨),然後基於醫學影像中呈現的分割區域的亨氏單位(HU)值量化所述脂肪組織體積及鈣化積分(例如,經由脂肪提取單元41及鈣提取單元42)。 In step S4 of FIG. 2 , the extraction module 40 analyzes one or more medical images to extract the volume and calcification integral of fat tissue (e.g., EAT) based on the segmentation results. In the embodiment described herein, the extraction module 40 is configured to exclude unimportant parts (e.g., vertebrae and sternum) from the medical image in an orderly manner, and then quantify the volume and calcification integral of the fat tissue based on the Hounsfield Unit (HU) value of the segmented region presented in the medical image (e.g., via the fat extraction unit 41 and the calcium extraction unit 42).

在至少一個實施態樣中,脂肪提取單元41係配置為藉由以下方式量化來自醫學影像的EAT體積:基於電腦斷層下的衰減係數(例如,水、空氣及/或鈣的衰減係數)計算藉由分割模組30分割的心包的HU值;基於雜訊容限定義所述HU值的正負標準偏差範圍;以及基於所述範圍確定心包內的EAT體積及位置。藉由脂肪提取單元41提取的EAT體積及位置的示例如圖6所示,其中EAT的位置在心包上以點表示,而被提取的EAT體積為140cm3。在一些實施態樣中,X射線在掃描過程中的不同電子能量可能會導致水、空氣及/或鈣化的不同測量衰減,本揭露的演算法可以根據電子能量的參數進行校準以獲得更準確的鈣化積分或其分類。 In at least one embodiment, the fat extraction unit 41 is configured to quantify the EAT volume from the medical image by: calculating the HU value of the pericardium segmented by the segmentation module 30 based on the attenuation coefficient under CT (e.g., the attenuation coefficient of water, air and/or calcium); defining the positive and negative standard deviation range of the HU value based on the noise tolerance; and determining the EAT volume and position in the pericardium based on the range. An example of the EAT volume and position extracted by the fat extraction unit 41 is shown in FIG6 , where the position of the EAT is represented by a point on the pericardium, and the extracted EAT volume is 140 cm 3 . In some embodiments, different electron energies of X-rays during scanning may result in different measured attenuations of water, air and/or calcification. The algorithm disclosed herein can be calibrated based on the parameters of electron energy to obtain a more accurate calcification integral or classification thereof.
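The HU-band quantification described above can be sketched as follows. The reference HU of fat (about -90), the standard deviation (about 30), the two-SD noise tolerance, and the voxel size are all illustrative assumptions, not values taken from the patent.

```python
# Sketch of the EAT quantification described above: voxels inside the
# segmented pericardium whose HU values fall within a band around a fat
# reference value (mean +/- n*SD, per the noise tolerance) are counted as
# epicardial adipose tissue, and the volume follows from the voxel size.

FAT_MEAN_HU = -90.0
FAT_SD_HU = 30.0
N_SD = 2.0  # noise tolerance: +/- 2 standard deviations (assumed)

def eat_volume(pericardium_hu, voxel_mm3):
    lo = FAT_MEAN_HU - N_SD * FAT_SD_HU   # lower bound of the fat band
    hi = FAT_MEAN_HU + N_SD * FAT_SD_HU   # upper bound of the fat band
    fat_voxels = [hu for hu in pericardium_hu if lo <= hu <= hi]
    volume_cm3 = len(fat_voxels) * voxel_mm3 / 1000.0
    return volume_cm3, (lo, hi)

# pericardium region: two fat-range voxels, one water-like, one calcified
vol, band = eat_volume([-80.0, -140.0, 10.0, 400.0], voxel_mm3=500.0)
```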

在至少一個實施態樣中，鈣提取單元42係配置為藉由以下方式量化來自醫學影像的鈣化積分：基於藉由130HU的心臟血管鈣化積分(Agatston score)定義的切割點從醫學影像上的分割區域中識別一個或多個鈣區域；擷取鈣區域的複數個3D影像；藉由分類器(例如，DenseNet)分析複數個3D影像以對鈣區域進行分類，其中鈣區域的分類可以包括但不限於下表1中描述的那些(其中梁柱列表示範疇領域，左列代表所述範疇領域的主要類別，右欄代表所述範疇領域的子類別)；指定每個鈣區域的鈣化積分；以及產生熱圖(例如，經由梯度類加權啟用對映Gradient-weighted Class Activation Mapping技術)來顯示鈣區域及其相應的鈣化積分。藉由鈣提取單元42提取的鈣區域及其對應的鈣化積分的示例如圖7所示，其中心臟、上行升主動脈及下行主動脈上的鈣區域的區域呈現為彩色區域，且所述區域對應的鈣化積分分別為52、450及1282。 In at least one embodiment, the calcium extraction unit 42 is configured to quantify the calcification score from the medical image by: identifying one or more calcium regions from the segmented regions of the medical image based on a cut-off point defined by a cardiovascular calcification score (Agatston score) of 130 HU; capturing a plurality of 3D images of the calcium regions; analyzing the plurality of 3D images by a classifier (e.g., DenseNet) to classify the calcium regions, where the classifications of the calcium regions may include but are not limited to those described in Table 1 below (the columns represent category domains, the left column represents the main categories of the category domains, and the right column represents the subcategories of the category domains); assigning a calcification score to each calcium region; and generating a heat map (e.g., via the Gradient-weighted Class Activation Mapping technique) to display the calcium regions and their corresponding calcification scores. An example of the calcium regions extracted by the calcium extraction unit 42 and their corresponding calcification scores is shown in FIG. 7, in which the calcium regions on the heart, ascending aorta, and descending aorta are presented as colored areas, and the calcification scores corresponding to those areas are 52, 450, and 1282, respectively.
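A minimal sketch of Agatston-style scoring with the 130 HU cut-off mentioned above follows. The density-weight bands (1: 130-199 HU, 2: 200-299 HU, 3: 300-399 HU, 4: ≥400 HU) are the conventional Agatston definition, assumed here rather than quoted from the patent, and lesions are given as pre-grouped pixel lists rather than found by connected-component analysis.

```python
# Sketch of Agatston-style calcium scoring: each lesion contributes
# (area in mm^2) x (density weight), where the weight comes from the
# lesion's peak HU value and pixels below the 130 HU cut-off are ignored.

def density_weight(peak_hu):
    if peak_hu < 130:
        return 0
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions, pixel_mm2):
    """lesions: list of HU-value lists, one list per connected calcium region."""
    score = 0.0
    for hu_values in lesions:
        pixels = [hu for hu in hu_values if hu >= 130]  # cut-off point
        if not pixels:
            continue
        score += len(pixels) * pixel_mm2 * density_weight(max(pixels))
    return score

score = agatston_score([[150.0, 180.0, 190.0], [420.0, 410.0]], pixel_mm2=0.5)
```

The first lesion peaks below 200 HU (weight 1) while the second exceeds 400 HU (weight 4), so the score rewards dense calcifications more heavily than large faint ones.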

表1.分類器識別的鈣區域類別 Table 1. Calcium region categories identified by the classifier
Figure 111120307-A0202-12-0013-1
Figure 111120307-A0202-12-0014-2

參考圖8,鈣提取單元42使用的分類器是基於網路架構開發的,其中分類器的神經網路內的卷積塊被設計為相互互連(例如,通過殘差連接,skip connection),使得分類器可以學習最佳傳輸路徑,以便從醫學影像中有效地確定鈣區域。在此所描述的實施態樣中,分類器的神經網路總共由121層構成,但是分類器的神經網路的層數可以根據需要進行改變,本揭露不限於此。 Referring to FIG8 , the classifier used in the calcium extraction unit 42 is developed based on a network architecture, wherein the convolution blocks within the neural network of the classifier are designed to be interconnected (e.g., via a residual connection, skip connection) so that the classifier can learn the best transmission path to effectively determine the calcium area from the medical image. In the embodiment described herein, the neural network of the classifier is composed of a total of 121 layers, but the number of layers of the neural network of the classifier can be changed as needed, and the present disclosure is not limited thereto.

在圖2的步驟S5中,輸出模組50將來自分割模組30的分割結果以及提取模組40提取的脂肪組織體積及鈣化積分進行整理,以生成分析結果。在本文所述的實施態樣中,輸出模組50係還配置為基於在系統1的先前處理步驟(步驟S1-S4)中分析的資訊計算心血管風險預測分數(存活機率)。參考圖9,揭露輸出模組50產生的分析結果報告的示例,其中個體的醫學影像的原始圖片501、來自原始圖片501的分割區域的分割結果502(藉由分割模組30產生)、關 於分割結果502的量化值(藉由提取模組40計算)的結果503、以及關於個體的心血管風險預測分數的存活分析504。需注意的是,分析結果的格式不限於所示的報告,而可以以任何合適的物理或虛擬形式呈現。 In step S5 of FIG2 , the output module 50 organizes the segmentation results from the segmentation module 30 and the adipose tissue volume and calcification integral extracted by the extraction module 40 to generate an analysis result. In the embodiment described herein, the output module 50 is further configured to calculate a cardiovascular risk prediction score (survival probability) based on the information analyzed in the previous processing steps (steps S1-S4) of the system 1. Referring to FIG. 9 , an example of an analysis result report generated by the output module 50 is disclosed, wherein an original image 501 of an individual's medical image, a segmentation result 502 (generated by the segmentation module 30 ) of a segmented region from the original image 501 , a result 503 of a quantitative value of the segmentation result 502 (calculated by the extraction module 40 ), and a survival analysis 504 of the individual's cardiovascular risk prediction score. It should be noted that the format of the analysis result is not limited to the report shown, but can be presented in any suitable physical or virtual form.

在本文描述的實施態樣中，心血管風險預測分數的公式是建立在來自全民健康保險資料庫的接受胸腔電腦斷層掃描的患者的研究樣本上，透過以下步驟：收集來自總共1970名患者的研究樣本，其中研究樣本包括患者影像資訊、門診資訊、住院資訊、用藥資訊、投保(死亡)記錄等資料；將研究樣本串聯起來，對上述患者進行一代追踪研究，約每年2633.2人；收集(例如，使影像資訊預先通過分割模組30的機器學習模型，或者在來自國民健康保險資料庫的影像資訊上臨床醫生已經標記的收集記錄)研究樣本影像資訊中的心臟、上行升主動脈及下行主動脈的鈣化積分及脂肪組織體積；從研究樣本的門診資訊、住院資訊及用藥資訊中收集基本的人口統計學資訊(如性別、年齡等)及共病症資訊(定義了約57項共病症)；從經由限制性立方樣條(restricted cubic spline)收集的資訊中平滑化(smoothing)連續變量(例如年齡、鈣化積分等)；並使用基於平滑連續變量的Cox回歸分析構建心血管風險預測分數公式。如圖9所示，心血管風險預測分數以個體從報告創建日期算起的年數計算遭遇事件(例如，再住院或死亡)的機率呈現。 In the embodiments described herein, the formula for the cardiovascular risk prediction score is established on a study sample of patients from the National Health Insurance database who underwent chest computed tomography, through the following steps: collecting a study sample of a total of 1,970 patients, where the study sample includes patient imaging information, outpatient information, hospitalization information, medication information, and insurance (death) records; linking the study sample records to conduct a cohort follow-up of these patients, amounting to approximately 2,633.2 person-years; collecting the calcification scores and adipose tissue volumes of the heart, ascending aorta, and descending aorta from the imaging information of the study sample (e.g., by passing the imaging information through the machine learning model of the segmentation module 30 in advance, or from records that clinicians have already labeled on imaging information from the National Health Insurance database); collecting basic demographic information (e.g., sex, age) and comorbidity information (about 57 comorbidities are defined) from the outpatient, hospitalization, and medication information of the study sample; smoothing continuous variables (e.g., age, calcification score) via restricted cubic splines; and constructing the cardiovascular risk prediction score formula using a Cox regression analysis based on the smoothed continuous variables. As shown in FIG. 9, the cardiovascular risk prediction score is presented as the probability of the individual experiencing an event (e.g., rehospitalization or death) within a number of years counted from the report creation date.
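How a fitted Cox model turns covariates into the survival probability shown in the report can be sketched as S(t|x) = S0(t)^exp(Σβi·xi). The baseline survival values, coefficients, and reference covariates below are fabricated placeholders for illustration only, not the patented score formula.

```python
import math

# Sketch of Cox-regression-based survival prediction: a linear predictor is
# built from centered covariates, and the baseline survival S0(t) is raised
# to exp(linear predictor). All numeric values here are assumed.

BASELINE_SURVIVAL = {1: 0.97, 3: 0.92, 5: 0.85}  # S0(t) at t years (assumed)
BETA = {"age": 0.02, "calcium_score": 0.0005, "eat_volume": 0.003}
REFERENCE = {"age": 60.0, "calcium_score": 100.0, "eat_volume": 100.0}

def survival_probability(covariates, years):
    lp = sum(BETA[k] * (covariates[k] - REFERENCE[k]) for k in BETA)
    return BASELINE_SURVIVAL[years] ** math.exp(lp)

low = survival_probability(
    {"age": 55.0, "calcium_score": 52.0, "eat_volume": 90.0}, 5)
high = survival_probability(
    {"age": 75.0, "calcium_score": 1282.0, "eat_volume": 200.0}, 5)
```

With positive coefficients, larger calcium scores and adipose volumes raise the hazard and therefore lower the predicted survival probability, mirroring how the report expresses event risk over the years from its creation date.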

參考圖10,揭露藉由模型訓練模組60執行的用於訓練分割模組30的機器學習模型的步驟的流程圖,其中圖11、圖12A-1至圖12D-3、圖13A至圖13G-2及圖14係引用來說明每個步驟的執行細節。應當理解的,機器學習模型的訓練可以獨立於系統1的其他元件進行操作(即圖2所示的步驟S1至S5),因此在實際使用過程中不會干擾系統1的操作。 Referring to FIG. 10 , a flowchart of the steps for training the machine learning model of the segmentation module 30 executed by the model training module 60 is disclosed, wherein FIG. 11 , FIG. 12A-1 to FIG. 12D-3 , FIG. 13A to FIG. 13G-2 and FIG. 14 are cited to illustrate the execution details of each step. It should be understood that the training of the machine learning model can be operated independently of other components of the system 1 (i.e., steps S1 to S5 shown in FIG. 2 ), and therefore will not interfere with the operation of the system 1 during actual use.

在本文描述的實施態樣中,機器學習模型的訓練可以在任何合適的開發平台中實現,例如NVIDIA DGX、NVIDIA EGX、TensorFlow、Caffe等,並且可以利用任何合適的架構,例如NVIDIA Clara imaging、Horovod等,而本揭露不限於此。在一些實施態樣中,圖10中描述的每個步驟可以藉由設置在模型訓練模組60內的單獨或指定的單元來實現,而本揭露不限於此。在其他實施態樣中,模型訓練模組60係還用於提供GUI或其他指示機制來引導用戶完成圖10中所述的步驟,而本揭露亦不限於此。 In the embodiments described herein, the training of the machine learning model can be implemented in any suitable development platform, such as NVIDIA DGX, NVIDIA EGX, TensorFlow, Caffe, etc., and can utilize any suitable architecture, such as NVIDIA Clara imaging, Horovod, etc., but the present disclosure is not limited thereto. In some embodiments, each step described in FIG. 10 can be implemented by a separate or designated unit disposed in the model training module 60, but the present disclosure is not limited thereto. In other embodiments, the model training module 60 is also used to provide a GUI or other instruction mechanism to guide the user to complete the steps described in FIG. 10, but the present disclosure is not limited thereto.

在至少一個實施態樣中,機器學習模型的訓練是基於聯合學習來執行的。一般來說,在隱私保護的限制下,很難收集大量的醫療資料來訓練強大的神經網路,並且在多個醫療機構(例如醫院)之間共享資料通常是不可行的。因此,聯合學習藉由將神經網路訓練分散在各個醫療機構中,而僅在各個醫療機構之間共享訓練權重來完成機器學習模型的訓練,從而解決了上述問題。 In at least one implementation, the training of the machine learning model is performed based on federated learning. Generally speaking, it is difficult to collect a large amount of medical data to train a powerful neural network under the constraints of privacy protection, and it is usually not feasible to share data between multiple medical institutions (such as hospitals). Therefore, federated learning solves the above problems by distributing the training of the neural network among various medical institutions and only sharing the training weights among various medical institutions to complete the training of the machine learning model.

在本文描述的實施態樣中，如圖11所示，有三個醫院(分別表示為H1、H2及H3)參與分割模組30的機器學習模型的開發，由此聯合學習將建立伺服器端(開發人員)與客戶端(醫院)之間的關係以進行機器學習模型的訓練。 In the embodiments described herein, as shown in FIG. 11, three hospitals (denoted as H1, H2, and H3, respectively) participate in the development of the machine learning model of the segmentation module 30, whereby federated learning establishes a relationship between the server side (developer) and the client side (hospitals) for training the machine learning model.

如圖11所示，服務器端首先將初始化/預訓練的全局權重分發給客戶端。然後，客戶端的醫院將基於全局權重，分別使用相應醫院H1/H2/H3中的患者資料訓練一個本機模型(local model)。接下來，在本機模型訓練期間從客戶端派生的本機權重將應用差分隱私(例如，選擇性參數共享、稀疏向量技術等技術)，以在返回到服務器端之前防止模型反轉。最後，客戶端將本機權重回傳給服務器端，而伺服器端將根據其貢獻(例如，貢獻係由每個醫院提供的資料量決定)聚合出本機權重，作為用於訓練機器學習模型的更新的全局權重。訓練完成後，可將更新後的全局權重再次下發給客戶端，以用於實際部署機器學習模型。 As shown in FIG. 11, the server side first distributes the initialized/pre-trained global weights to the clients. The hospitals on the client side then each train a local model based on the global weights, using the patient data of the corresponding hospital H1/H2/H3. Next, differential privacy techniques (e.g., selective parameter sharing, the sparse vector technique, etc.) are applied to the local weights derived on the client side during local model training, to prevent model inversion before the weights are returned to the server side. Finally, the clients transmit the local weights back to the server side, and the server side aggregates the local weights according to their contributions (e.g., the contribution is determined by the amount of data provided by each hospital) into updated global weights for training the machine learning model. After training is completed, the updated global weights can be distributed to the clients again for actual deployment of the machine learning model.
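The contribution-weighted aggregation step described above can be sketched as a federated-averaging routine. The differential-privacy step that runs before weights leave each client is omitted here, and the weight vectors and sample counts are illustrative.

```python
# Sketch of the server-side aggregation: each client's local weights are
# averaged in proportion to its contribution, here taken as the amount of
# data each hospital provided.

def aggregate(local_weights, sample_counts):
    """local_weights: list of per-client weight vectors of equal length."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    global_w = [0.0] * dim
    for weights, n in zip(local_weights, sample_counts):
        for i, w in enumerate(weights):
            global_w[i] += w * n          # accumulate contribution-weighted sums
    return [w / total for w in global_w]  # normalize into the global weights

# hospitals H1/H2/H3 with different data volumes
global_weights = aggregate([[2.0, 0.0], [4.0, 1.0], [6.0, 2.0]],
                           sample_counts=[100, 300, 100])
```

Because H2 contributed three times as much data, the aggregate lands closest to H2's local weights, which is the intended effect of contribution-based weighting.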

基於圖11所解釋的聯合學習的概念，訓練分割模組30的機器學習模型的步驟將描述如下。 Based on the concept of federated learning explained with reference to FIG. 11, the steps of training the machine learning model of the segmentation module 30 are described as follows.

在圖10中的步驟S61，機器學習模型的訓練資料係使用AI輔助註釋(AI-Assisted Annotation,AIAA)工具(如NVIDIA Clara成像提供的NVIDIA Clara train SDK)或標記工具(如3D slicer(三維重建軟體)、醫學影像交互工具包(Medical Imaging Interaction Toolkit,MITK)等)進行標記。然而，用於標記訓練資料的工具不限於此處描述的，並且可以根據需要而使用其他合適的工具，本揭露不限於此。 In step S61 of FIG. 10, the training data for the machine learning model is labeled using an AI-Assisted Annotation (AIAA) tool (e.g., the NVIDIA Clara Train SDK provided by NVIDIA Clara Imaging) or a labeling tool (e.g., 3D Slicer (a three-dimensional reconstruction software), the Medical Imaging Interaction Toolkit (MITK), etc.). However, the tools used to label the training data are not limited to those described herein, and other suitable tools may be used as needed; the present disclosure is not limited thereto.

此外,在聯合學習的情況下,考慮到隱私保護,在選擇標記之前,可以先對非顯影或顯影醫學影像進行去標識化,從而去除任何標識符(例如但不限於名稱、地址、出生日期、住院日期、出院日期、死亡日期、電話號碼、傳真號碼、電子郵箱地址、社會安全號號、病歷號、健保卡號、證件號碼、駕照號、車牌號、醫療材料產品編號、個人URL地址、個人IP地址、生物特徵資料、全臉影像或其他)可能會暴露貢獻醫療影像的個體的身份。 In addition, in the case of joint learning, considering privacy protection, non-visual or visual medical images can be de-identified before selecting labels, thereby removing any identifiers (such as but not limited to name, address, date of birth, hospitalization date, discharge date, date of death, telephone number, fax number, email address, social security number, medical record number, health insurance card number, ID number, driver's license number, license plate number, medical material product number, personal URL address, personal IP address, biometric data, full face image or other) that may reveal the identity of the individual who contributed the medical image.

需要說明的是,機器學習模型的訓練資料選擇規則可以有更多或更少的規則,所描述的這些要求僅用於示例,不應限製本揭露的範圍。 It should be noted that the training data selection rules of the machine learning model can have more or fewer rules, and the requirements described are only for example and should not limit the scope of this disclosure.

在本文描述的實施態樣中,圖12A-1至圖12D-3及圖13A至圖13G-2示出標記訓練資料的過程,其中圖12A-1至圖12D-3示出了從特定個體的醫學影像手動標記出心包的過程,而圖13A至圖13G-2示出了從特定個體的醫學影像手動標記出主動脈的過程。 In the embodiments described herein, FIGS. 12A-1 to 12D-3 and FIGS. 13A to 13G-2 illustrate the process of marking training data, wherein FIGS. 12A-1 to 12D-3 illustrate the process of manually marking the pericardium from a medical image of a specific individual, and FIGS. 13A to 13G-2 illustrate the process of manually marking the aorta from a medical image of a specific individual.

在從醫學影像手動標記特定個體的心包的情況下,該過程藉由從右心室底部的肺幹的起始點121開始繪製心臟的上邊緣來開始。圖12A-1至圖12A-3示出了分別從個體的水平面(圖12A-1)、矢狀面(圖12A-2)及冠狀面(圖12A-3)觀察的用於繪製心包上邊緣的起始點121的位置,其中在水平面(圖12A-1)的起始點121處觀察到的上邊緣的區域係隨後被圈出。在這個階段,需要注意不要將食道包括在心包的所述上邊緣的區域中。 In the case of manually marking the pericardium of a particular individual from medical images, the process begins by drawing the upper edge of the heart starting from the starting point 121 of the pulmonary trunk at the bottom of the right ventricle. Figures 12A-1 to 12A-3 show the location of the starting point 121 for drawing the upper edge of the pericardium as observed from the horizontal plane (Figure 12A-1), sagittal plane (Figure 12A-2), and coronal plane (Figure 12A-3) of the individual, respectively, where the area of the upper edge observed at the starting point 121 in the horizontal plane (Figure 12A-1) is then circled. At this stage, care should be taken not to include the esophagus in the area of the upper edge of the pericardium.

然後,該過程繼續繪製終止於心臟的心尖122的心包的下邊緣。圖12B-1至圖12B-3示出了分別從個體的水平面(圖12B-1)、矢狀面(圖12B-2)及冠狀面(圖12B-3)觀察的用於繪製心臟下邊緣的心尖122的位置,其中在水平面(圖12B-1)的心尖122處觀察到的下邊緣區域係隨後被圈出。在這個階段,需要注意不要將肝臟包括在心包的所述下邊緣區域中。 The process then continues with drawing the inferior margin of the pericardium ending at the apex 122 of the heart. Figures 12B-1 to 12B-3 show the location of the apex 122 used to draw the inferior margin of the heart as viewed from the horizontal plane (Figure 12B-1), sagittal plane (Figure 12B-2), and coronal plane (Figure 12B-3) of the individual, respectively, where the inferior margin area observed at the apex 122 in the horizontal plane (Figure 12B-1) is then circled. At this stage, care should be taken not to include the liver in the inferior margin area of the pericardium.

接著,藉由沿著心臟的心包123繪製心臟邊界(連接上邊緣及下邊緣)來結束該過程。圖12C-1至圖12C-3示出了分別從目標的水平面(圖12C-1)、矢狀面(圖12C-2)及冠狀面(圖12C-3)繪製心包123的位置。在這個階段,需要注意不要將胸骨包括在所述心臟邊界的區域中。 Next, the process is completed by drawing the heart border along the pericardium 123 of the heart (connecting the upper and lower edges). Figures 12C-1 to 12C-3 show the location of the pericardium 123 drawn from the horizontal plane (Figure 12C-1), sagittal plane (Figure 12C-2), and coronal plane (Figure 12C-3) of the target, respectively. At this stage, care should be taken not to include the sternum in the area of the heart border.

圖12D-1至圖12D-3示出了說明在標記完成後個體的心臟及心包的示例,其中心臟的圈選區域分別顯示在個體的水平面(圖12D-1)、矢狀面(圖12D-2)及冠狀面(圖12D-3)。從這裡開始,在用作為訓練資料之前,可以附加地應用平滑函數來平衡圈選區域的粗糙度。 Figures 12D-1 to 12D-3 show examples of an individual's heart and pericardium after labeling, with the circled area of the heart displayed in the individual's horizontal plane (Figure 12D-1), sagittal plane (Figure 12D-2), and coronal plane (Figure 12D-3). From here, a smoothing function can be additionally applied to balance the roughness of the circled area before using it as training data.

在從影像手動標記特定個體的主動脈的情況下，該過程藉由確定與頭肱動脈相交的主動脈的基準點F2來開始。圖13A示出了從個體的矢狀面觀察到的基準點F2的位置。 In the case of manually labeling the aorta of a specific individual from the images, the process begins by determining a reference point F2 of the aorta where it intersects the brachiocephalic artery. FIG. 13A shows the position of the reference point F2 observed from the sagittal plane of the individual.

然後,該過程繼續確定與左鎖骨下動脈相交的主動脈的另一個基準點F1。圖13B示出了從個體的矢狀面觀察到的基準點F1的位置。 The process then continues by identifying another reference point F1 of the aorta that intersects the left subclavian artery. Figure 13B shows the location of reference point F1 as viewed from the sagittal plane of the subject.

接下來,該過程繼續繪製主動脈弓的邊緣131。圖13C-1至圖13C-3示出了分別從個體的水平面(圖13C-1)、矢狀面(圖13C-2)及冠狀面(圖13C-3)觀察的用於繪製主動脈弓的邊緣131的位置。 Next, the process continues with drawing the edge 131 of the aortic arch. Figures 13C-1 to 13C-3 show the position of the edge 131 used to draw the aortic arch as viewed from the horizontal plane (Figure 13C-1), sagittal plane (Figure 13C-2), and coronal plane (Figure 13C-3) of the individual.

此外,該過程繼續繪製上行升主動脈的邊緣132。圖13D-1至圖13D-3示出了分別從個體的水平面(圖13D-1)、矢狀面(圖13D-2)及冠狀面(圖13D-3)觀察到的用於繪製上行升主動脈的邊緣132的位置。 In addition, the process continues to draw the edge 132 of the ascending aorta. Figures 13D-1 to 13D-3 show the position of the edge 132 used to draw the ascending aorta observed from the horizontal plane (Figure 13D-1), sagittal plane (Figure 13D-2), and coronal plane (Figure 13D-3) of the individual.

然後,該過程繼續按照與圖13C-1至圖13D-3中描述的類似程序來標記下行主動脈的邊緣133,於此不再贅述。圖13E係包括以說明在標記後從個體的矢狀平面觀察到的下行主動脈的邊緣133的示例,下行主動脈的邊界在該處終止(參見下行主動脈邊緣的下邊界)於髂總動脈上方。 The process then continues with marking the edge 133 of the descending aorta in a similar procedure as described in FIGS. 13C-1 to 13D-3, which are not described in detail herein. FIG. 13E is included to illustrate an example of the edge 133 of the descending aorta as viewed from the sagittal plane of the individual after marking, where the border of the descending aorta terminates (see the lower border of the descending aorta edge) above the common iliac artery.

接下來,該過程繼續從確定的主動脈邊緣提取中心線CL(通過3D slicer的提取功能),以確保主動脈的精確標記。圖13F以3D視圖示出了被提取的中心線CL及其與上行升主動脈134、主動脈弓135、第一下行主動脈136及第二下行主動脈137的關係。具體地,該過程繼續根據上述基準點F1及F2確定主動脈的截面,其中心臟與基準點F2之間的主動脈(主動脈根部)區段係定義為上行升主動脈134,在基準點F1及F2之間的主動脈區段係定義為主動脈弓135,從基準點F1下降到主動脈相應於基準點F2的高度(當在基準點F2的這個高度從水平面觀察時,主動脈似乎是分開的)的另一端(遠離主動脈根部)的主動脈區段係定義為第一下行主動脈136,而主動脈的剩餘區段被定義為第二下行主動脈137。 Next, the process continues to extract the centerline CL from the determined aortic edge (via the extraction function of the 3D slicer) to ensure accurate marking of the aorta. FIG13F shows the extracted centerline CL and its relationship with the ascending aorta 134, the aortic arch 135, the first descending aorta 136, and the second descending aorta 137 in a 3D view. Specifically, the process continues to determine the cross-section of the aorta based on the above-mentioned reference points F1 and F2, wherein the aorta segment between the heart and the reference point F2 (the aorta root) is defined as the ascending aorta 134, the aorta segment between the reference points F1 and F2 is defined as the aorta arch 135, the aorta segment descending from the reference point F1 to the other end (far from the aorta root) of the aorta corresponding to the height of the reference point F2 (when the aorta is observed from the horizontal plane at this height of the reference point F2) is defined as the first descending aorta 136, and the remaining segment of the aorta is defined as the second descending aorta 137.
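The four-segment definition above can be sketched by parameterizing the aorta with an arc position s along the extracted centerline CL (s = 0 at the aortic root). The landmark positions of F2, F1, and the point where the descending side passes the height of F2 again are illustrative assumptions.

```python
# Sketch of the aortic segment labeling described above. s_f2 and s_f1 are
# the centerline positions of reference points F2 (brachiocephalic crossing)
# and F1 (left subclavian crossing); s_desc is where the descending side
# reaches the height of F2 again. All position values are assumed.

def label_segment(s, s_f2, s_f1, s_desc):
    if s <= s_f2:
        return "ascending aorta"          # root to F2 (134)
    if s <= s_f1:
        return "aortic arch"              # F2 to F1 (135)
    if s <= s_desc:
        return "first descending aorta"   # F1 down to the height of F2 (136)
    return "second descending aorta"      # remaining segment (137)

labels = [label_segment(s, s_f2=60.0, s_f1=90.0, s_desc=120.0)
          for s in (30.0, 75.0, 100.0, 300.0)]
```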

圖13G-1及圖13G-2進一步示出了分別從個體的水平面(圖13G-1)及矢狀面(圖13G-2)觀察到的上行升主動脈134、主動脈弓135、第一下行主動脈136及第二下行主動脈137的位置的示例。 Figures 13G-1 and 13G-2 further show examples of the positions of the ascending aorta 134, the aortic arch 135, the first descending aorta 136, and the second descending aorta 137 observed from the horizontal plane (Figure 13G-1) and the sagittal plane (Figure 13G-2) of the individual.

在標記個體的主動脈之後,可以應用平滑函數以在用作為訓練資料之前平衡圈選區域的粗糙度。 After marking the individual aortas, a smoothing function can be applied to balance the roughness of the selected region before using it as training data.

在此描述的實施態樣中,在以收集足夠量的訓練資料為目的下,更詳細的對訓練資料進行標記的過程包括以下步驟(參見圖10中步驟S61和S62之間的關係):手動標記(藉由3D slicer)10個醫學影像實例(參見圖12A-1到圖12D-3及圖13A到圖13G-2中描述的過程);基於所述10個醫學影像實例訓練第一版機器學習模型;使用第一版機器學習模型作為AIAA工具的輔助標註模型,以輔助人工標註更多的醫學影像;基於更多醫學影像訓練第二版機器學習模型;通過人工標註及輔助標註模型重複上述標註,以訓練第二版機器學習模型直到訓練資料足夠。 In the embodiment described herein, for the purpose of collecting sufficient training data, a more detailed process of labeling the training data includes the following steps (see the relationship between steps S61 and S62 in FIG. 10 ): manual labeling (by 3D slicer) 10 medical image examples (see the process described in Figures 12A-1 to 12D-3 and Figures 13A to 13G-2); train the first version of the machine learning model based on the 10 medical image examples; use the first version of the machine learning model as an auxiliary annotation model of the AIAA tool to assist in manual annotation of more medical images; train the second version of the machine learning model based on more medical images; repeat the above annotation through manual annotation and the auxiliary annotation model to train the second version of the machine learning model until the training data is sufficient.

在圖10的步驟S62中,使用訓練資料訓練機器學習模型(第二版機器學習模型)。圖14示出了用於訓練機器學習模型的步驟的流程圖,下文將對其進行描述。 In step S62 of FIG10 , the machine learning model (second version machine learning model) is trained using training data. FIG14 shows a flow chart of the steps for training the machine learning model, which will be described below.

在步驟S621,藉由對訓練資料進行重新取樣、正規化及轉換,將訓練資料預處理成預定的一致性。訓練資料的預處理與上述預處理模組20中描述的類似,在此不再贅述。 In step S621, the training data is preprocessed to a predetermined consistency by resampling, normalizing and transforming the training data. The preprocessing of the training data is similar to that described in the above-mentioned preprocessing module 20 and will not be described in detail here.

在步驟S622，將進一步加強訓練資料以防止過度配適(overfitting)。在本文描述的實施態樣中，從訓練資料預處理的醫學影像的3D體積能首先被隨機裁剪成隨機大小(例如，最大為160×160×160像素及/或最小為64×64×64像素)。接著，可以用固定大小(例如，160×160×160像素)填充隨機裁剪的醫學影像3D體積，以確保機器學習模型的平滑訓練。此處還可以使用其他增強技術(augmentation technique)，例如但不限於：在訓練期間利用隨機空間翻轉增強處理醫學影像的3D體積以增強訓練資料的偏差、在無底色下縮放或移動醫學影像的3D體積的強度，以正規化機器學習模型。因此，即使來自不同醫院的掃描設備具有不同的成像能力，機器學習模型仍然能泛化以將各種類型的醫學影像作為訓練資料進行處理。 In step S622, the training data is further augmented to prevent overfitting. In the embodiments described herein, the 3D volumes of the medical images preprocessed from the training data can first be randomly cropped to a random size (e.g., at most 160×160×160 pixels and/or at least 64×64×64 pixels). The randomly cropped 3D volumes of the medical images can then be padded to a fixed size (e.g., 160×160×160 pixels) to ensure smooth training of the machine learning model. Other augmentation techniques may also be used here, such as but not limited to: augmenting the 3D volumes of the medical images with random spatial flips during training to counter bias in the training data, and scaling or shifting the intensities of the 3D volumes of the medical images over a zero background to regularize the machine learning model. Therefore, even if the scanning equipment from different hospitals has different imaging capabilities, the machine learning model can still generalize to process various types of medical images as training data.
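The crop-and-pad augmentation described above can be sketched as follows, scaled down to an 8×8×8 volume (the text uses 64-160 per axis) so the example stays small. The size range and padding value are illustrative.

```python
import random

# Sketch of the augmentation in step S622: the volume is randomly cropped to
# a random size per axis, then zero-padded back to a fixed size so that every
# training sample has the same shape.

def random_crop_pad(vol, min_size=4, fixed_size=8, rng=random):
    def axis_len(v, depth):  # current length along a given axis
        return len(v) if depth == 0 else axis_len(v[0], depth - 1)

    size = [rng.randint(min_size, fixed_size) for _ in range(3)]
    start = [rng.randint(0, axis_len(vol, d) - size[d]) for d in range(3)]
    cropped = [[row[start[2]:start[2] + size[2]]
                for row in plane[start[1]:start[1] + size[1]]]
               for plane in vol[start[0]:start[0] + size[0]]]
    # zero-pad each axis back to the fixed size
    for plane in cropped:
        for row in plane:
            row.extend([0.0] * (fixed_size - len(row)))
        plane.extend([[0.0] * fixed_size for _ in range(fixed_size - len(plane))])
    cropped.extend([[[0.0] * fixed_size for _ in range(fixed_size)]
                    for _ in range(fixed_size - len(cropped))])
    return cropped

volume = [[[1.0] * 8 for _ in range(8)] for _ in range(8)]
augmented = random_crop_pad(volume)
```

Whatever random crop size is drawn, the padded output always has the fixed shape, which is what lets batches of differently-cropped volumes be stacked for training.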

在步驟S623,將訓練資料發送到機器學習模型以進行訓練。須注意的是,機器學習模型在訓練時的網路架構係與圖4A及圖4B中描述的相同,在此不再贅述。 In step S623, the training data is sent to the machine learning model for training. It should be noted that the network architecture of the machine learning model during training is the same as that described in Figures 4A and 4B, and will not be repeated here.

在步驟S624,將輸出機器學習模型的訓練結果(例如在訓練過程中藉由機器學習模型輸出的分割結果)以供後續處理驗證。 In step S624, the training results of the machine learning model (e.g., the segmentation results output by the machine learning model during the training process) are output for subsequent processing and verification.

接續圖10的步驟S63,對正在訓練的機器學習模型進行驗證,以量化其分割性能(基於圖14中步驟S624的訓練結果)。在本文描述的實施態樣中,使用損失函數來驗證機器學習模型的分割性能,其可以表示為根據DICE相似度係數(dice similarity coefficient,DSC)設計的骰子損失,定義如下: Continuing from step S63 of FIG. 10 , the machine learning model being trained is verified to quantify its segmentation performance (based on the training results of step S624 in FIG. 14 ). In the implementation described herein, a loss function is used to verify the segmentation performance of the machine learning model, which can be expressed as a dice loss designed according to the DICE similarity coefficient (DSC), which is defined as follows:

DSC = 2|y∩ŷ| / (|y|+|ŷ|)

Dice Loss = 1 - DSC

其中y表示機器學習模型要分割的目標區域(即訓練資料中標記的心臟/心包及主動脈區域)，ŷ表示在訓練期間機器學習模型實際分割的預測區域。 where y represents the target region to be segmented by the machine learning model (i.e., the heart/pericardium and aorta regions labeled in the training data), and ŷ represents the predicted region actually segmented by the machine learning model during training.

由上可知,DSC旨在衡量機器學習模型識別的目標區域與預測區域的相似度,因此是評價機器學習模型分割性能的可量化度量。 From the above, we can see that DSC aims to measure the similarity between the target region recognized by the machine learning model and the predicted region, so it is a quantifiable measure for evaluating the segmentation performance of the machine learning model.
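The DSC and Dice loss above can be sketched on binary voxel masks as follows. Training pipelines usually add a small smoothing term to avoid division by zero on empty masks; that term is omitted here for clarity.

```python
# Sketch of the Dice loss used for validation: DSC measures the overlap
# between the labeled target mask y and the predicted mask y_hat, and the
# loss is 1 - DSC. Masks are flat lists of 0/1 voxel labels.

def dice_loss(y, y_hat):
    intersection = sum(a * b for a, b in zip(y, y_hat))
    dsc = 2.0 * intersection / (sum(y) + sum(y_hat))
    return 1.0 - dsc

perfect = dice_loss([1, 1, 0, 0], [1, 1, 0, 0])           # identical masks
half = dice_loss([1, 1, 1, 1, 0, 0, 0, 0],
                 [0, 0, 1, 1, 1, 1, 0, 0])                # half overlap
```

A perfect prediction gives DSC = 1 (loss 0), while a prediction overlapping half of the target gives DSC = 0.5, so minimizing the loss drives the predicted region toward the labeled one.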

基於損失函數,模型訓練模組60能將機器學習模型重新訓練(參見圖10中步驟S63及S62之間的關係)或使機器學習模型準備好實際使用(即,進行圖10中的步驟S64)。在本文所描述的實施態樣中,機器學習模型在投入實際使用之前至少應該達到下表2中列出的DSC。 Based on the loss function, the model training module 60 can retrain the machine learning model (see the relationship between steps S63 and S62 in FIG. 10 ) or prepare the machine learning model for actual use (i.e., perform step S64 in FIG. 10 ). In the implementation described herein, the machine learning model should at least achieve the DSC listed in Table 2 below before being put into actual use.

表2.機器學習模型的訓練結果 Table 2. Training results of the machine learning model
Figure 111120307-A0202-12-0022-4

具有足夠分割性能的機器學習模型係於步驟S64輸出,並在訓練結束時於步驟S65部署(例如,通過NVIDIA Clara Imaging提供的NVIDIA Clara deploy SDK)到分割模組30。如果首先將機器學習模型部署到分割模組30,則系統1的元件因此能投入實際使用以實時地對任意個體進行心血管風險預測。然而,模型訓練模組60仍可在臨床實踐期間基於更新的訓練資料及/或參數設置實時優化分割模組30的機器學習模型的性能。 The machine learning model with sufficient segmentation performance is output in step S64 and deployed (e.g., via the NVIDIA Clara deploy SDK provided by NVIDIA Clara Imaging) to the segmentation module 30 in step S65 at the end of training. If the machine learning model is first deployed to the segmentation module 30, the components of the system 1 can therefore be put into practical use to predict cardiovascular risk for any individual in real time. However, the model training module 60 can still optimize the performance of the machine learning model of the segmentation module 30 in real time based on updated training data and/or parameter settings during clinical practice.

在本文描述的進一步實施態樣中,復提供計算機可讀媒介,其儲存電腦可執行代碼,並且該電腦可執行代碼係配置為於執行後實現如上所述圖2的步驟S1至S5、圖10的步驟S61至S65,及/或圖14的步驟S621至S624。 In a further embodiment described herein, a computer-readable medium is further provided, which stores computer-executable code, and the computer-executable code is configured to implement steps S1 to S5 of FIG. 2 , steps S61 to S65 of FIG. 10 , and/or steps S621 to S624 of FIG. 14 as described above after execution.

綜上所述，本揭露利用人工智慧對醫學影像進行分割，以識別個體的心臟、主動脈及/或心包的精確區域，從而從無顯影劑或有顯影劑的用於心血管風險預測的醫學影像中得出脂肪組織體積及鈣化積分。 In summary, the present disclosure utilizes artificial intelligence to segment medical images to identify the precise regions of the heart, aorta, and/or pericardium of an individual, thereby deriving the adipose tissue volume and calcium score from non-contrast or contrast-enhanced medical images for cardiovascular risk prediction.
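以下以Python概略示意如何由分割後之CT體積依亨氏單位值讀出脂肪體積與鈣化候選區（僅為說明性示意；−190至−30 HU之脂肪窗為常見慣用範圍而非本揭露基於雜訊容限之範圍，130 HU為Agatston切點，函式名稱為本文之假設）。 As a concrete illustration of how HU-based quantities can be read off a segmented CT volume, the sketch below applies a fixed adipose-tissue HU window and the 130-HU Agatston cut-point inside a segmented region (assumptions: the −190 to −30 HU fat window is a commonly used convention rather than the disclosure's noise-tolerance-based range, and the function names are hypothetical):

```python
import numpy as np

FAT_HU_WINDOW = (-190, -30)   # commonly used adipose-tissue HU range (assumption)
CALCIUM_HU_CUTOFF = 130       # Agatston cut-point for candidate calcification

def adipose_volume_ml(ct_hu: np.ndarray, region_mask: np.ndarray,
                      voxel_volume_mm3: float) -> float:
    """Volume (mL) of voxels inside the segmented region whose HU
    falls within the adipose-tissue window."""
    fat = (ct_hu >= FAT_HU_WINDOW[0]) & (ct_hu <= FAT_HU_WINDOW[1]) & region_mask
    return float(fat.sum()) * voxel_volume_mm3 / 1000.0

def calcified_mask(ct_hu: np.ndarray, region_mask: np.ndarray) -> np.ndarray:
    """Binary mask of candidate calcified voxels inside the segmented region."""
    return (ct_hu >= CALCIUM_HU_CUTOFF) & region_mask
```

The full Agatston score would additionally weight each calcified lesion by its peak HU and lesion area, as defined by the standard scoring scheme.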

1:系統 1: System

10:影像取得模組 10: Image acquisition module

20:預處理模組 20: Preprocessing module

30:分割模組 30: Split module

40:提取模組 40: Extraction module

41:脂肪提取單元 41: Fat extraction unit

42:鈣提取單元 42: Calcium extraction unit

50:輸出模組 50: Output module

60:模型訓練模組 60: Model training module

Claims (20)

1. 一種用於心血管風險預測的系統，其包括：配置用於從醫學影像中分割出一區域的分割模組，該分割模組的機器學習模型識別個體的心臟、心包及主動脈的區域，以輸出標記有心臟、心包、上行升主動脈及下行主動脈的區域的該個體的該醫學影像的分割結果；配置用於分析該醫學影像的該區域的提取模組，該提取模組基於該分割結果，從該醫學影像中排除非脂肪組織和鈣化部分後，基於該醫學影像中呈現的分割區域的亨氏單位值量化脂肪組織體積及鈣化積分，以提取該脂肪組織體積及該鈣化積分；以及配置用於將來自該分割模組的該分割結果以及該提取模組提取的該脂肪組織體積及該鈣化積分進行整理，以生成分析結果的輸出模組，其中，該機器學習模型具有包括一編碼器部分、一解碼器部分以及一注意力機制的網路架構，且該注意力機制係配置為挑出通過該編碼器部分及該解碼器部分之間的殘差連接的顯著特徵。 A system for cardiovascular risk prediction, comprising: a segmentation module configured to segment a region from a medical image, wherein a machine learning model of the segmentation module identifies regions of the heart, pericardium, and aorta of an individual, so as to output a segmentation result of the medical image of the individual with the regions of the heart, the pericardium, the ascending aorta, and the descending aorta marked; an extraction module configured to analyze the region of the medical image, wherein the extraction module, based on the segmentation result, excludes non-adipose tissue and calcified portions from the medical image and then quantifies an adipose tissue volume and a calcium score based on Hounsfield unit values of the segmented regions presented in the medical image, so as to extract the adipose tissue volume and the calcium score; and an output module configured to organize the segmentation result from the segmentation module and the adipose tissue volume and the calcium score extracted by the extraction module to generate an analysis result, wherein the machine learning model has a network architecture including an encoder part, a decoder part, and an attention mechanism, and the attention mechanism is configured to pick out salient features passed through residual connections between the encoder part and the decoder part.

2. 如請求項1所述之系統，其中，該醫學影像為非顯影或顯影電腦斷層掃描影像。 The system of claim 1, wherein the medical image is a non-contrast or contrast-enhanced computed tomography image.

3. 如請求項1所述之系統，其中，該分割模組係由該機器學習模型來執行，以從該醫學影像中分割出該區域，且其中，該機器學習模型之該網路架構復包括一變分自編碼器解碼器分支。 The system of claim 1, wherein the segmentation module is executed by the machine learning model to segment the region from the medical image, and wherein the network architecture of the machine learning model further includes a variational autoencoder decoder branch.

4. 如請求項3所述之系統，其中，該變分自編碼器解碼器分支係配置為於該機器學習模型訓練期間基於該編碼器部分之端點的特徵重建該醫學影像。 The system of claim 3, wherein the variational autoencoder decoder branch is configured to reconstruct the medical image based on features at an endpoint of the encoder part during training of the machine learning model.

5. 如請求項3所述之系統，復包括配置為藉由以下步驟提供該機器學習模型訓練的模型訓練模組：將訓練資料預處理成預定的一致性；藉由對該訓練資料執行隨機裁剪、隨機空間翻轉及/或隨機強度縮放或平移來增強該訓練資料；使用該訓練資料訓練機器學習模型；以及使用一損失函數驗證該機器學習模型的訓練結果。 The system of claim 3, further comprising a model training module configured to provide training of the machine learning model by: preprocessing training data to a predetermined consistency; augmenting the training data by performing random cropping, random spatial flipping, and/or random intensity scaling or shifting on the training data; training the machine learning model using the training data; and validating a training result of the machine learning model using a loss function.

6. 如請求項5所述之系統，其中，該訓練資料係經由手動及/或在輔助標註模型的幫助下標記該醫學影像而產生。 The system of claim 5, wherein the training data is generated by labeling the medical image manually and/or with the aid of an auxiliary annotation model.

7. 如請求項1所述之系統，其中，該分析結果包括該區域的該脂肪組織體積，且其中，該提取模組包括配置為透過以下步驟量化該區域中的心包內的該脂肪組織體積的脂肪提取單元：基於電腦斷層下的衰減係數計算該心包的亨氏單位值(Hounsfield unit value)；基於雜訊容限定義該亨氏單位值的正負標準偏差範圍；以及基於該範圍確定該心包內的該脂肪組織體積。 The system of claim 1, wherein the analysis result includes the adipose tissue volume of the region, and wherein the extraction module includes a fat extraction unit configured to quantify the adipose tissue volume within the pericardium in the region by: calculating a Hounsfield unit value of the pericardium based on attenuation coefficients under computed tomography; defining a range of plus and minus standard deviations of the Hounsfield unit value based on a noise tolerance; and determining the adipose tissue volume within the pericardium based on the range.

8. 如請求項1所述之系統，其中，該分析結果包括該區域的該鈣化積分，且其中，該提取模組包括配置為透過以下步驟量化來自該區域的心臟或主動脈的該鈣化積分的鈣提取單元：基於藉由心臟血管鈣化積分(Agatston score)定義的切點從該區域中識別出鈣區域；擷取該鈣區域的3D影像；藉由一分類器分析該3D影像以確定該鈣區域的分類；指定該鈣區域的鈣化積分；以及產生一熱圖以示出該鈣區域及該鈣化積分。 The system of claim 1, wherein the analysis result includes the calcium score of the region, and wherein the extraction module includes a calcium extraction unit configured to quantify the calcium score of the heart or the aorta from the region by: identifying a calcium region from the region based on a cut-off point defined by the Agatston score; capturing a 3D image of the calcium region; analyzing the 3D image by a classifier to determine a classification of the calcium region; assigning a calcium score to the calcium region; and generating a heat map to show the calcium region and the calcium score.

9. 如請求項1所述之系統，復包括配置為透過以下步驟將該醫學影像預處理成預定的一致性的預處理模組：將該醫學影像的3D體積重新取樣為2×2×2毫米的間距或任何預定的尺寸；以及將該3D體積的強度正規化為均值為零的單位標準差。 The system of claim 1, further comprising a preprocessing module configured to preprocess the medical image to a predetermined consistency by: resampling a 3D volume of the medical image to a spacing of 2×2×2 mm or any predetermined size; and normalizing an intensity of the 3D volume to zero mean and unit standard deviation.

10. 如請求項1所述之系統，其中，該輸出模組基於該分析結果呈現一心血管風險預測分數。 The system of claim 1, wherein the output module presents a cardiovascular risk prediction score based on the analysis result.

11. 一種心血管風險預測方法，係於電腦或伺服器上執行，該心血管風險預測方法包括以下步驟：藉由一分割模組從一醫學影像中分割出一區域，該分割模組的機器學習模型識別個體的心臟、心包及主動脈的區域，以輸出標記有心臟、心包、上行升主動脈及下行主動脈的區域的該個體的該醫學影像的分割結果；藉由一提取模組分析該醫學影像的該區域，基於該分割結果，從該醫學影像中排除非脂肪組織和鈣化部分後，基於該醫學影像中呈現的分割區域的亨氏單位值量化脂肪組織體積及鈣化積分，以提取該脂肪組織體積及該鈣化積分；以及藉由一輸出模組將來自該分割模組的該分割結果以及該提取模組提取的該脂肪組織體積及該鈣化積分進行整理，以生成分析結果，其中，該機器學習模型具有包括一編碼器部分、一解碼器部分以及一注意力機制的網路架構，且該注意力機制係配置為挑出通過該編碼器部分及該解碼器部分之間的殘差連接的顯著特徵。 A cardiovascular risk prediction method, executed on a computer or a server, the method comprising: segmenting a region from a medical image by a segmentation module, wherein a machine learning model of the segmentation module identifies regions of the heart, pericardium, and aorta of an individual, so as to output a segmentation result of the medical image of the individual with the regions of the heart, the pericardium, the ascending aorta, and the descending aorta marked; analyzing the region of the medical image by an extraction module which, based on the segmentation result, excludes non-adipose tissue and calcified portions from the medical image and then quantifies an adipose tissue volume and a calcium score based on Hounsfield unit values of the segmented regions presented in the medical image, so as to extract the adipose tissue volume and the calcium score; and organizing, by an output module, the segmentation result from the segmentation module and the adipose tissue volume and the calcium score extracted by the extraction module to generate an analysis result, wherein the machine learning model has a network architecture including an encoder part, a decoder part, and an attention mechanism, and the attention mechanism is configured to pick out salient features passed through residual connections between the encoder part and the decoder part.

12. 如請求項11所述之方法，其中，該醫學影像係一電腦斷層掃描影像。 The method of claim 11, wherein the medical image is a computed tomography image.

13. 如請求項11所述之方法，其中，該分割模組係由該機器學習模型來執行，以從該醫學影像中分割出該區域，且其中，該機器學習模型之該網路架構復包括一變分自編碼器解碼器分支。 The method of claim 11, wherein the segmentation module is executed by the machine learning model to segment the region from the medical image, and wherein the network architecture of the machine learning model further includes a variational autoencoder decoder branch.

14. 如請求項13所述之方法，其中，該變分自編碼器解碼器分支係配置為於該機器學習模型訓練期間基於該編碼器部分之端點的特徵重建該醫學影像。 The method of claim 13, wherein the variational autoencoder decoder branch is configured to reconstruct the medical image based on features at an endpoint of the encoder part during training of the machine learning model.

15. 如請求項13所述之方法，復包括配置為藉由以下步驟提供該機器學習模型訓練的模型訓練模組：將訓練資料預處理成預定的一致性，其中，該訓練資料係經由手動及/或在輔助標註模型的幫助下標記該醫學影像而產生；藉由對該訓練資料執行隨機裁剪、隨機空間翻轉及/或隨機強度縮放或平移來加強該訓練資料；使用該訓練資料訓練機器學習模型；以及使用一損失函數驗證該機器學習模型的訓練結果。 The method of claim 13, further comprising a model training module configured to provide training of the machine learning model by: preprocessing training data to a predetermined consistency, wherein the training data is generated by labeling the medical image manually and/or with the aid of an auxiliary annotation model; augmenting the training data by performing random cropping, random spatial flipping, and/or random intensity scaling or shifting on the training data; training the machine learning model using the training data; and validating a training result of the machine learning model using a loss function.

16. 如請求項11所述之方法，其中，該分析結果包括該區域的該脂肪組織體積，且其中，該提取模組包括配置為透過以下步驟量化該區域中的心包內的該脂肪組織體積的脂肪提取單元：基於電腦斷層下的衰減係數計算該心包的亨氏單位值；基於雜訊容限定義該亨氏單位值的正負標準偏差範圍；以及基於該範圍確定該心包內的該脂肪組織體積。 The method of claim 11, wherein the analysis result includes the adipose tissue volume of the region, and wherein the extraction module includes a fat extraction unit configured to quantify the adipose tissue volume within the pericardium in the region by: calculating a Hounsfield unit value of the pericardium based on attenuation coefficients under computed tomography; defining a range of plus and minus standard deviations of the Hounsfield unit value based on a noise tolerance; and determining the adipose tissue volume within the pericardium based on the range.

17. 如請求項11所述之方法，其中，該分析結果包括該區域的該鈣化積分，且其中，該提取模組包括配置為透過以下步驟量化來自該區域的心臟或主動脈的該鈣化積分的鈣提取單元：基於藉由心臟血管鈣化積分定義的切點從該區域中識別出鈣區域；擷取該鈣區域的3D影像；藉由一分類器分析該3D影像以確定該鈣區域的分類；指定該鈣區域的鈣化積分；以及產生一熱圖以示出該鈣區域及該鈣化積分。 The method of claim 11, wherein the analysis result includes the calcium score of the region, and wherein the extraction module includes a calcium extraction unit configured to quantify the calcium score of the heart or the aorta from the region by: identifying a calcium region from the region based on a cut-off point defined by the Agatston score; capturing a 3D image of the calcium region; analyzing the 3D image by a classifier to determine a classification of the calcium region; assigning a calcium score to the calcium region; and generating a heat map to show the calcium region and the calcium score.

18. 如請求項11所述之方法，復包括配置為透過以下步驟將該醫學影像預處理成預定的一致性的預處理模組：將該醫學影像的3D體積重新取樣為2×2×2毫米的間距或任何預定的尺寸；以及將該3D體積的強度正規化為均值為零的單位標準差。 The method of claim 11, further comprising a preprocessing module configured to preprocess the medical image to a predetermined consistency by: resampling a 3D volume of the medical image to a spacing of 2×2×2 mm or any predetermined size; and normalizing an intensity of the 3D volume to zero mean and unit standard deviation.

19. 如請求項11所述之方法，其中，該輸出模組基於該分析結果呈現一心血管風險預測分數。 The method of claim 11, wherein the output module presents a cardiovascular risk prediction score based on the analysis result.

20. 一種電腦可讀媒介，其儲存有一電腦可執行代碼，且執行該電腦可執行代碼實現如請求項11所述之方法。 A computer-readable medium storing computer-executable code which, when executed, implements the method of claim 11.
TW111120307A 2022-05-31 2022-05-31 System and method for cardiovascular risk prediction and computer readable medium thereof TWI850670B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW111120307A TWI850670B (en) 2022-05-31 2022-05-31 System and method for cardiovascular risk prediction and computer readable medium thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW111120307A TWI850670B (en) 2022-05-31 2022-05-31 System and method for cardiovascular risk prediction and computer readable medium thereof

Publications (2)

Publication Number Publication Date
TW202349409A TW202349409A (en) 2023-12-16
TWI850670B true TWI850670B (en) 2024-08-01

Family

ID=90039210

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111120307A TWI850670B (en) 2022-05-31 2022-05-31 System and method for cardiovascular risk prediction and computer readable medium thereof

Country Status (1)

Country Link
TW (1) TWI850670B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260209A (en) * 2020-01-14 2020-06-09 山东大学 Cardiovascular disease risk prediction and evaluation system combining electronic medical record and medical image
US11120549B2 (en) * 2020-01-07 2021-09-14 Cleerly, Inc. Systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking
US11276170B2 (en) * 2020-01-07 2022-03-15 Cleerly, Inc. Systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking


Also Published As

Publication number Publication date
TW202349409A (en) 2023-12-16

Similar Documents

Publication Publication Date Title
Bernard et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved?
Houssein et al. Deep and machine learning techniques for medical imaging-based breast cancer: A comprehensive review
Apostolopoulos et al. Multi-input deep learning approach for cardiovascular disease diagnosis using myocardial perfusion imaging and clinical data
CN108257135A (en) The assistant diagnosis system of medical image features is understood based on deep learning method
Sugimori Classification of computed tomography images in different slice positions using deep learning
CN106372390A (en) Deep convolutional neural network-based lung cancer preventing self-service health cloud service system
JP7034306B2 (en) Region segmentation device, method and program, similarity determination device, method and program, and feature quantity derivation device, method and program
CN109308495A (en) From the device and system of the medical image automatic Prediction physiological status of patient
He et al. Automatic segmentation and quantification of epicardial adipose tissue from coronary computed tomography angiography
Lee et al. Machine learning and coronary artery calcium scoring
JP7004829B2 (en) Similarity determination device, method and program
Abdelrahman et al. Efficientnet family u-net models for deep learning semantic segmentation of kidney tumors on ct images
Patel An overview and application of deep convolutional neural networks for medical image segmentation
Kaliyugarasan et al. Pulmonary nodule classification in lung cancer from 3D thoracic CT scans using fastai and MONAI
Souid et al. Xception-ResNet autoencoder for pneumothorax segmentation
Ke et al. Biological gender estimation from panoramic dental x-ray images based on multiple feature fusion model
WO2021187483A1 (en) Document creation assistance device, method, and program
Cheng et al. Dr. Pecker: A Deep Learning-Based Computer-Aided Diagnosis System in Medical Imaging
TWI850670B (en) System and method for cardiovascular risk prediction and computer readable medium thereof
Balasubramaniam et al. Medical Image Analysis Based on Deep Learning Approach for Early Diagnosis of Diseases
Oniga et al. Applications of ai and hpc in the health domain
AU2019204365B1 (en) Method and System for Image Segmentation and Identification
US20240005479A1 (en) System and method for cardiovascular risk prediction and computer readable medium thereof
Waseem Sabir et al. FibroVit—Vision transformer-based framework for detection and classification of pulmonary fibrosis from chest CT images
Bhushan Liver cancer detection using hybrid approach-based convolutional neural network (HABCNN)