
TWI740361B - Artificial intelligence operation assistive system and method thereof - Google Patents

Artificial intelligence operation assistive system and method thereof

Info

Publication number
TWI740361B
TWI740361B (application TW109102552A)
Authority
TW
Taiwan
Prior art keywords
target object
artificial intelligence
portable device
auxiliary platform
environment image
Prior art date
Application number
TW109102552A
Other languages
Chinese (zh)
Other versions
TW202129483A (en)
Inventor
陳文彬
Original Assignee
國眾電腦股份有限公司
Priority date
Filing date
Publication date
Application filed by 國眾電腦股份有限公司 filed Critical 國眾電腦股份有限公司
Priority to TW109102552A priority Critical patent/TWI740361B/en
Publication of TW202129483A publication Critical patent/TW202129483A/en
Application granted granted Critical
Publication of TWI740361B publication Critical patent/TWI740361B/en

Landscapes

  • Testing And Monitoring For Control Systems (AREA)

Abstract

An artificial intelligence operation assistive system comprises a portable device and an artificial intelligence operation assistive platform. The portable device comprises a camera module, a wireless communication module, a processor, and a display module. The camera module is configured to obtain an environment image containing a target object. The wireless communication module is configured to send the environment image and to receive an operation signal. The processor is configured to control the wireless communication module to send the environment image and to generate an operation layer according to the operation signal. The display module is configured to display the operation layer. The artificial intelligence operation assistive platform is configured to receive the environment image, perform a recognition procedure to identify the target object, and generate and send the operation signal according to the target object and an object database.

Description

Intelligent operation assistance system and intelligent operation assistance method

The present invention relates to image recognition, and in particular to an intelligent operation assistance system and method based on Mixed Reality (MR) technology and a high-speed communication network.

Plants in the petrochemical and chemical industries contain complex pipelines, a wide variety of equipment, and many kinds of instruments. To keep the assets and equipment in a plant safe, enough personnel must be deployed to carry out inspections, both to monitor routine equipment indicator data and to handle emergencies.

However, new inspection personnel cannot immediately become familiar with the operating procedures and troubleshooting mechanisms of every piece of equipment, so operating errors occur frequently. Even when inspectors use a walkie-talkie to talk with the central control center and obtain operating instructions, ambiguous communication between the two parties can still lead the inspector to operate incorrectly, so that the fault cannot be cleared or the equipment stops working, potentially causing losses to the plant that are difficult to estimate.

In view of this, the present invention proposes an intelligent operation assistance system and an intelligent operation assistance method that provide guided equipment operation assistance in a petrochemical plant over a high-speed communication environment built on an enterprise private network using dedicated spectrum of fifth-generation mobile communication technology (5th Generation mobile networks, 5G). This allows new inspection personnel to operate the complex equipment and instruments in a petrochemical plant correctly, reducing the possibility of human error.

An intelligent operation assistance system according to an embodiment of the present invention includes a portable device and an artificial intelligence operation assistance platform. The portable device includes a camera module, a wireless communication module, a processor, and a display module. The camera module obtains an environment image that contains a target object. The wireless communication module sends the environment image and receives an operation signal. The processor is electrically connected to the wireless communication module and the camera module; it controls the wireless communication module to send the environment image and generates an operation layer according to the operation signal. The display module presents the operation layer. The artificial intelligence operation assistance platform is communicatively connected to the wireless communication module; it receives the environment image, performs a recognition procedure to identify the target object, and generates and sends the operation signal according to the target object and an object database.

An intelligent operation assistance method according to an embodiment of the present invention includes: a camera module of a portable device obtains an environment image that contains a target object; a processor of the portable device controls a wireless communication module of the portable device to send the environment image; an artificial intelligence operation assistance platform receives the environment image and performs a recognition procedure to identify the target object; the artificial intelligence operation assistance platform generates and sends an operation signal according to the target object and an object database; the wireless communication module of the portable device receives the operation signal; the processor of the portable device generates an operation layer according to the operation signal; and a display module of the portable device presents the operation layer.

The above description of this disclosure and the following description of the embodiments are intended to demonstrate and explain the spirit and principles of the present invention and to provide further explanation of the scope of the patent claims.

The detailed features and advantages of the present invention are described in the following embodiments in sufficient detail to enable anyone skilled in the relevant art to understand and implement the technical content of the present invention. Based on the content disclosed in this specification, the claims, and the drawings, anyone skilled in the relevant art can readily understand the related objectives and advantages of the present invention. The following embodiments further illustrate the viewpoints of the present invention in detail, but they do not limit the scope of the present invention in any way.

Please refer to FIG. 1, which shows a block diagram of an intelligent operation assistance system 100 according to an embodiment of the present invention. The intelligent operation assistance system 100 is suitable for a petrochemical plant in which a high-speed mobile communication network N has been deployed. In one embodiment, the high-speed mobile communication network N is composed of a plurality of wireless access points AP and base stations, and the wireless access points AP are connected to one another through an enterprise private network using dedicated spectrum of fifth-generation mobile communication technology (5th Generation mobile networks, 5G). As shown in FIG. 1, the intelligent operation assistance system 100 includes a portable device 3 and an artificial intelligence operation assistance platform 5. The portable device 3 is communicatively connected to the artificial intelligence operation assistance platform 5 through one of the aforementioned wireless access points AP.

As shown in FIG. 1, the portable device 3, which transmits the environment images seen by the inspector, includes a camera module 32, a wireless communication module 34, a processor 36, and a display module 38. The processor 36 is electrically connected to the camera module 32, the wireless communication module 34, and the display module 38. The artificial intelligence operation assistance platform 5 is communicatively connected to the wireless communication module 34 to receive environment images. In one embodiment, the artificial intelligence operation assistance platform 5 is a server or a server cluster that is communicatively connected to the high-speed mobile communication network N implemented with 5G. In this way, the artificial intelligence operation assistance platform 5 can receive the environment image returned by each portable device 3 with relatively low latency.

Please refer to FIG. 2, which shows a schematic diagram of the appearance of the portable device 3 according to an embodiment of the present invention. In one embodiment, the portable device 3 is a smart helmet worn by the inspector. Because the camera module 32 is positioned near the eyes of the inspector wearing the smart-helmet form of the portable device 3, the inspector can easily use the camera module 32 to capture images of the various pieces of equipment in the plant that the inspector sees. In other embodiments, the portable device 3 may instead take the form of smart glasses, a smartphone with a camera module 32, a tablet computer with a camera module 32, or a personal digital assistant (PDA) with a camera module 32. The present invention does not limit the appearance or hardware type of the portable device 3.

In the embodiment shown in FIG. 2, the display module 38 is disposed at the lens position. The display module 38 is a small screen with transparency, so the inspector's line of sight can pass through the display module 38 to see the plant equipment in front of the inspector. At the same time, the inspector can also see the display layer presented on the small lens-shaped screen. In this embodiment, the display module 38 can therefore provide superimposed images in mixed reality (MR) or augmented reality (AR) form for the inspector to view. In another embodiment, the display module 38 is an ordinary screen, such as the display panel of a tablet computer; the user can see on the display module 38 the real-scene image captured by the camera module 32, or the three-dimensional object image sent by the artificial intelligence operation assistance platform 5. The present invention does not limit the hardware type of the display module 38.

Please refer to FIG. 3, which shows a flowchart of an intelligent operation assistance method according to an embodiment of the present invention. The intelligent operation assistance method is applicable to the aforementioned intelligent operation assistance system 100. The composition and implementation of the components of the intelligent operation assistance system 100 are further described below in conjunction with the flow of the intelligent operation assistance method shown in FIG. 3.

Please refer to step S1: the camera module 32 of the portable device 3 obtains and sends an environment image. The environment image contains a target object. The target object generally refers to equipment or instruments that the inspector can operate. For example, an inspector wearing the portable device 3 shown in FIG. 2 arrives at a designated coordinate point and starts to inspect or operate the petrochemical pipelines, equipment, or instruments around that coordinate point. First, the camera module 32 of the portable device 3 obtains the environment image containing the target object; then, through the aforementioned high-speed mobile communication network N realized by the 5G enterprise private network, the processor 36 of the portable device 3 controls the wireless communication module 34 of the portable device 3 to send the environment image containing the target object to the artificial intelligence operation assistance platform 5.
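
The following is a minimal sketch of step S1, assuming the platform exposes an HTTP upload endpoint; the URL, field names, and device ID are illustrative assumptions and not part of the patent.

```python
# Capture one environment image from the camera module and send it to the
# artificial intelligence operation assistance platform over the network.
import cv2
import requests

PLATFORM_URL = "http://ai-assist-platform.local/api/environment-image"  # assumed endpoint

def capture_and_send(device_id: str) -> None:
    cap = cv2.VideoCapture(0)            # camera module 32
    ok, frame = cap.read()               # one environment image
    cap.release()
    if not ok:
        raise RuntimeError("camera capture failed")
    ok, jpeg = cv2.imencode(".jpg", frame)   # compress before sending over 5G
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    requests.post(
        PLATFORM_URL,
        files={"image": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
        data={"device_id": device_id},
        timeout=5,
    )

if __name__ == "__main__":
    capture_and_send("helmet-001")
```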

Please refer to step S2: the artificial intelligence operation assistance platform 5 identifies the target object in the environment image. In one embodiment, the artificial intelligence operation assistance platform 5 executes an image recognition procedure and loads an object database to identify the target object in the environment image. The object database contains image information for all of the equipment in the petrochemical plant, for example in the form of computer-aided design (CAD) files. Therefore, no matter from which viewing angle the inspector looks at the target object, the artificial intelligence operation assistance platform 5 can identify what type of target object appears in the environment image.
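
Below is a minimal sketch of one way such a recognition procedure could work, assuming the object database has been pre-rendered into template images (one or more views per piece of equipment derived from its CAD file). The feature-matching strategy and thresholds are illustrative assumptions, not the patented method.

```python
# Match an incoming environment image against per-equipment template views.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def identify_target(environment_img, object_database):
    """object_database: dict mapping equipment_id -> list of template images."""
    kp_env, des_env = orb.detectAndCompute(environment_img, None)
    best_id, best_score = None, 0
    for equipment_id, templates in object_database.items():
        for template in templates:                       # views rendered from CAD files
            kp_t, des_t = orb.detectAndCompute(template, None)
            if des_env is None or des_t is None:
                continue
            matches = matcher.match(des_env, des_t)
            good = [m for m in matches if m.distance < 40]   # assumed distance threshold
            if len(good) > best_score:
                best_id, best_score = equipment_id, len(good)
    return best_id if best_score >= 25 else None         # assumed acceptance threshold
```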

In another embodiment of step S2, when the target object has a readable value (for example, a meter on a pipeline), the recognition procedure executed by the artificial intelligence operation assistance platform 5 further includes confirming this readable value. For example, the inspector confirms the readable value, enters it through an input element of the portable device 3, and returns it to the artificial intelligence operation assistance platform 5. Alternatively, the inspector can read the value with another instrument and send it back to the artificial intelligence operation assistance platform 5. Alternatively, the meter is of the artificial intelligence of things (AIoT) type and automatically returns the readable value to the artificial intelligence operation assistance platform 5 through the high-speed communication network N at fixed intervals. Alternatively, the artificial intelligence operation assistance platform 5 identifies the readable value directly from the environment image captured by the camera module 32. Accordingly, the present invention does not limit the way in which the artificial intelligence operation assistance platform 5 confirms the readable value.
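
As one possible reading of the AIoT path above, a smart meter could push its readable value to the platform at a fixed period. The endpoint, payload fields, and period below are assumptions for illustration only.

```python
# Periodically report a meter's readable value to the assistance platform.
import time
import requests

PLATFORM_URL = "http://ai-assist-platform.local/api/meter-readings"  # assumed endpoint

def report_forever(meter_id: str, read_value, period_s: float = 60.0) -> None:
    """read_value: callable returning the current meter reading."""
    while True:
        requests.post(
            PLATFORM_URL,
            json={"meter_id": meter_id, "value": read_value(), "ts": time.time()},
            timeout=5,
        )
        time.sleep(period_s)   # fixed reporting period over the high-speed network
```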

In yet another embodiment of step S2, before the artificial intelligence operation assistance platform 5 executes the recognition procedure to identify the target object, it also identifies the identifiable objects around the target object. For example, the artificial intelligence operation assistance platform 5 loads a three-dimensional model database and then executes the recognition procedure on the environment image and the three-dimensional model database to find all identifiable objects in the environment image. Based on the distribution of these identifiable objects, the artificial intelligence operation assistance platform 5 compares the object distribution information stored in the three-dimensional model database to confirm the coordinate point of the inspector's current position. If the artificial intelligence operation assistance platform 5 determines that the target object is indeed near the coordinate point where the inspector is located, the artificial intelligence operation assistance platform 5 can send a notification signal to direct the inspector's line of sight toward the location of the target object. The aforementioned three-dimensional model database includes one or more computer-aided design (CAD) files and a plurality of coordinate points. For example, the three-dimensional model database can be built from the CAD files used when the petrochemical plant was designed. In other embodiments, laser scanning or the detailed construction drawings of the petrochemical plant can be used to reverse-engineer a three-dimensional model database of the pipelines and other equipment. For example, a three-dimensional scanner can obtain spatial information about the surfaces of pipelines or equipment, and point cloud recognition software can convert it into a solid model with physical meaning. As another example, the coordinates of the start and end points or reference points of a pipeline can be found from the detailed construction drawings, and a three-dimensional model database can then be built semi-automatically with a CAD software system. As described above, only after confirming the inspector's position and that the inspector's line of sight faces the target object does the artificial intelligence operation assistance platform 5 execute the recognition procedure based on the environment image and the position information to identify the target object. Because the installation positions of the pipelines, equipment, and instruments in a petrochemical plant are essentially fixed, this mechanism allows all equipment belonging to that position information to be pre-filtered, which reduces the time the recognition procedure needs to identify the target object.
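
The sketch below illustrates the localization idea: compare the set of identifiable objects detected in the environment image against the object distribution stored per coordinate point in the 3D model database, then check whether the target object is expected near the best-matching coordinate. The scoring rule (Jaccard similarity) and data layout are assumptions, not the patented algorithm.

```python
# Infer the inspector's coordinate point from the detected object distribution
# and pre-filter the equipment that the recognition procedure must consider.
def locate_and_prefilter(detected_labels, distribution_db, target_label):
    """distribution_db: dict mapping coordinate_point -> set of equipment labels."""
    detected = set(detected_labels)
    best_point, best_score = None, 0.0
    for point, expected in distribution_db.items():
        union = detected | expected
        score = len(detected & expected) / len(union) if union else 0.0
        if score > best_score:
            best_point, best_score = point, score
    target_nearby = best_point is not None and target_label in distribution_db[best_point]
    # Only equipment registered at best_point needs to be considered next,
    # which shortens the identification time.
    candidates = distribution_db.get(best_point, set())
    return best_point, target_nearby, candidates
```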

Please refer to step S3: the artificial intelligence operation assistance platform 5 generates and sends the operation signal according to the target object and the object database. Specifically, after the target object is confirmed in step S2, the artificial intelligence operation assistance platform 5 further outputs the operation signal applicable to that target object according to the target object and the object database. In one embodiment, the object database stores not only the images of all target objects but also the target-object state judgment model applicable to each target object. In practice, the artificial intelligence operation assistance platform 5 collects in advance a plurality of state parameters associated with the target object, such as the factory specifications of the target object, the numerical items that can be adjusted, and the states the target object should exhibit at different points in time (such as temperature, pressure, or meter readings). The artificial intelligence operation assistance platform 5 takes the collected state parameters associated with the target object as the output layer and feeds them into a neural-network algorithm, thereby producing a target-object state judgment model whose output is the aforementioned operation parameter. The neural-network algorithm is, for example, a multilayer perceptron (MLP), a long short-term memory network (LSTM), or a convolutional neural network (CNN).
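
As a concrete illustration, the sketch below shows a target-object state judgment model using the MLP variant named above. The feature layout (current readings plus specification values) and the output being a vector of operation parameters are illustrative assumptions; the patent does not fix a particular architecture or training procedure.

```python
# A small MLP that maps state parameters of the identified target object to
# operation parameters carried in the operation signal.
import torch
import torch.nn as nn

class StateJudgmentMLP(nn.Module):
    def __init__(self, n_state_params: int, n_operation_params: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_state_params, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, n_operation_params),   # output: operation parameters
        )

    def forward(self, state_params: torch.Tensor) -> torch.Tensor:
        return self.net(state_params)

# Usage: feed the current state parameters of the target object (e.g.
# temperature, pressure, meter reading, rated limits) and obtain the
# operation parameters to embed in the operation signal.
model = StateJudgmentMLP(n_state_params=6, n_operation_params=2)
example_state = torch.tensor([[85.0, 2.4, 12.0, 100.0, 3.0, 0.0]])
operation_params = model(example_state)
```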

In yet another embodiment, the artificial intelligence operation assistance platform 5 sets the operation parameter according to the readable value. Examples of setting the operation parameter include: when the temperature of the equipment exceeds a first threshold, rotating the operating valve associated with temperature control on the equipment by a specified angle to adjust the temperature; or, when the pressure gauge of the equipment falls below a second threshold, turning on the boost switch on the equipment. It should be noted that the artificial intelligence operation assistance platform 5 of the present invention can dynamically adjust the configuration values of the operation parameters according to the current state of the equipment, so that the operation signal suits the inspector's current situation.
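
A minimal sketch of the threshold rules just described follows; the threshold values, valve angle, and signal fields are illustrative assumptions chosen only to make the example runnable.

```python
# Build an operation signal from readable values using simple threshold rules.
FIRST_TEMP_THRESHOLD_C = 80.0          # assumed first threshold (temperature)
SECOND_PRESSURE_THRESHOLD_BAR = 1.5    # assumed second threshold (pressure)

def build_operation_signal(temperature_c: float, pressure_bar: float) -> dict:
    actions = []
    if temperature_c > FIRST_TEMP_THRESHOLD_C:
        actions.append({"action": "rotate_valve",
                        "target": "temperature_control_valve",
                        "angle_deg": 90})          # rotate by a specified angle
    if pressure_bar < SECOND_PRESSURE_THRESHOLD_BAR:
        actions.append({"action": "switch_on", "target": "boost_switch"})
    return {"operation_parameters": actions}

print(build_operation_signal(temperature_c=86.2, pressure_bar=1.1))
```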

Please refer to steps S4 and S5: the portable device 3 receives the operation signal and generates an operation layer according to it, and the display module 38 of the portable device 3 presents the operation layer. Specifically, the processor 36 of the portable device 3 generates the operation layer according to the operation signal and the operation parameters it contains. The operation layer is, for example, a text message or a graphic message. In other embodiments, the processor 36 also generates an operation voice message according to the operation signal and plays it through a speaker on the portable device 3.

In one embodiment, the display module 38 is a non-transparent display panel that presents a superimposed image of the operation layer and the environment image. The environment image here may be the real-scene image captured by the camera module 32; the operation layer is overlaid on this real-scene image to form the superimposed image, achieving an augmented reality (AR) effect. The environment image may also be a three-dimensional object image of the target object sent by the artificial intelligence operation assistance platform 5; the operation layer is overlaid on this virtual image to form the superimposed image, achieving a virtual reality (VR) or mixed reality (MR) effect. In another embodiment, the display module 38 is a display panel with transparency that presents the operation layer superimposed on the physical target object. In the above embodiments, the portable device 3 further includes an inertial measurement unit (IMU) that senses the angle of the inspector's line of sight toward the target object and feeds this angle back to the processor 36. The processor 36 then adjusts how the operation layer is presented according to this line-of-sight angle, so that the virtual image presented by the display module 38 and the target object are presented together in a better form.
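
The sketch below shows one simple way to compose the operation layer onto the environment image and compensate for the line-of-sight angle reported by the IMU. The in-plane rotation used here is an illustrative stand-in for a full pose adjustment, and the blending weights are assumptions.

```python
# Overlay a semi-transparent operation layer on the live frame, rotated to
# match the inspector's viewing angle from the IMU.
import cv2
import numpy as np

def render_operation_layer(frame: np.ndarray, instruction: str,
                           view_angle_deg: float) -> np.ndarray:
    h, w = frame.shape[:2]
    layer = np.zeros_like(frame)
    cv2.putText(layer, instruction, (30, h // 2),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    # Compensate the overlay for the line-of-sight angle sensed by the IMU.
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), view_angle_deg, 1.0)
    layer = cv2.warpAffine(layer, rot, (w, h))
    return cv2.addWeighted(frame, 1.0, layer, 0.6, 0)   # semi-transparent overlay

# Usage: overlay "Rotate handle 77 from position A to position B" on the frame.
```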

Please refer to FIG. 4, which shows a schematic diagram of the inspector viewing the operation layer L of an embodiment of the present invention through the display module 38. As shown in FIG. 4, through the transparent-lens display module 38 the inspector can see the physical target objects (including the pipeline 71, the meter 73, the ball valve 75, and the handle 77) while simultaneously seeing the operation layer L presented by the display module 38 in mixed reality above the physical devices 71 to 77 (for example, the display module 38 presents this operation layer in a semi-transparent form). The inspector then follows the instructions of the operation signal to rotate the handle 77 from the operation start position A to the operation end position B.

In summary, the intelligent operation assistance system proposed by the present invention can improve the accuracy with which inspectors operate equipment and improve industrial safety. It can also dynamically adjust operation instructions according to the current state of the equipment, collect and feed back inspection information, and monitor and manage the assets and equipment in a petrochemical plant.

Although the present invention is disclosed above in the foregoing embodiments, they are not intended to limit the present invention. Any changes and modifications made without departing from the spirit and scope of the present invention fall within the scope of patent protection of the present invention. For the scope of protection defined by the present invention, please refer to the appended claims.

100 … intelligent operation assistance system; 3 … portable device; 32 … camera module; 34 … wireless communication module; 36 … processor; 38 … display module; 5 … artificial intelligence operation assistance platform; N … high-speed mobile communication network; AP … wireless access point; S1–S5 … steps; A … operation start position; B … operation end position; 71 … pipeline; 73 … meter; 75 … ball valve; 77 … handle; L … operation layer

FIG. 1 is a block diagram of an intelligent operation assistance system according to an embodiment of the present invention. FIG. 2 is a schematic diagram of the appearance of a portable device according to an embodiment of the present invention. FIG. 3 is a flowchart of an intelligent operation assistance method according to an embodiment of the present invention. FIG. 4 is a schematic diagram of an operation layer according to an embodiment of the present invention.

100 … intelligent operation assistance system; 3 … portable device; 32 … camera module; 34 … wireless communication module; 36 … processor; 38 … display module; 5 … artificial intelligence operation assistance platform; N … high-speed mobile communication network; AP … wireless access point

Claims (6)

1. An intelligent operation assistance system, comprising: a portable device, comprising: a camera module configured to obtain an environment image, the environment image containing a target object; a wireless communication module configured to send the environment image and receive an operation signal; an inertial measurement unit configured to obtain a shooting angle of the camera module; a processor electrically connected to the wireless communication module and the camera module, the processor controlling the wireless communication module to send the environment image and generating an operation layer according to the operation signal and the shooting angle; and a display module electrically connected to the processor and configured to present the operation layer; and an artificial intelligence operation assistance platform communicatively connected to the wireless communication module, the artificial intelligence operation assistance platform being configured to receive the environment image and execute a recognition procedure to identify the target object and a plurality of identifiable objects around the target object, and further configured to generate and send the operation signal according to the target object and an object database; wherein the artificial intelligence operation assistance platform further confirms a coordinate position of the portable device according to the identifiable objects and object distribution information, and determines whether the target object is present according to the coordinate position; wherein the target object has a readable value, the operation signal further includes an operation parameter, the recognition procedure is further used to identify the readable value, and the artificial intelligence operation assistance platform is further used to set the operation parameter according to the readable value; and wherein the artificial intelligence operation assistance platform is further used to collect a plurality of state parameters associated with the target object and to generate a target-object state judgment model according to the state parameters and a neural-network algorithm, an output of the target-object state judgment model being the operation parameter.

2. The intelligent operation assistance system according to claim 1, which is applicable to a high-speed network environment, the high-speed network environment comprising a plurality of wireless access points, the wireless access points being connected to one another through an enterprise private network using dedicated spectrum of fifth-generation mobile communication technology (5th Generation mobile networks, 5G), and the wireless communication module being communicatively connected to the artificial intelligence operation assistance platform through one of the wireless access points.

3. The intelligent operation assistance system according to claim 1, wherein the display module is further configured to present a superimposed image of the operation layer and the environment image.

4. An intelligent operation assistance method, comprising: obtaining an environment image with a camera module of a portable device, the environment image containing a target object; obtaining a shooting angle of the camera module with an inertial measurement unit of the portable device; controlling, by a processor of the portable device, a wireless communication module of the portable device to send the environment image; receiving the environment image with an artificial intelligence operation assistance platform and executing a recognition procedure to identify the target object; generating and sending, by the artificial intelligence operation assistance platform, an operation signal according to the target object and an object database; receiving the operation signal with the wireless communication module of the portable device; generating, by the processor of the portable device, an operation layer according to the operation signal and the shooting angle; and presenting the operation layer with a display module of the portable device; wherein the target object has a readable value, the operation signal further includes an operation parameter, the recognition procedure is further used to identify the readable value, and the artificial intelligence operation assistance platform is further used to set the operation parameter according to the readable value; wherein the artificial intelligence operation assistance platform is further used to collect a plurality of state parameters associated with the target object and to generate a target-object state judgment model according to the state parameters and a neural-network algorithm, an output of the target-object state judgment model being the operation parameter; and wherein, before the artificial intelligence operation assistance platform receives the environment image and executes the recognition procedure to identify the target object, the method further comprises: executing the recognition procedure with the artificial intelligence operation assistance platform to identify a plurality of identifiable objects around the target object; and confirming, by the artificial intelligence operation assistance platform, a coordinate position of the portable device according to the identifiable objects and object distribution information, and determining whether the target object is present according to the coordinate position.

5. The intelligent operation assistance method according to claim 4, which is applicable to a high-speed network environment, the high-speed network environment comprising a plurality of wireless access points, the wireless access points being connected to one another through an enterprise private network using dedicated spectrum of fifth-generation mobile communication technology (5th Generation mobile networks, 5G), and the wireless communication module being communicatively connected to the artificial intelligence operation assistance platform through one of the wireless access points.

6. The intelligent operation assistance method according to claim 4, wherein after the processor of the portable device controls the wireless communication module of the portable device to send the environment image, the method further comprises: loading a three-dimensional model database with the artificial intelligence operation assistance platform; and executing the recognition procedure with the artificial intelligence operation assistance platform according to the environment image and the three-dimensional model database to obtain position information; wherein the recognition procedure executed by the artificial intelligence operation assistance platform identifies the target object according to the environment image and the position information.
TW109102552A 2020-01-22 2020-01-22 Artificial intelligence operation assistive system and method thereof TWI740361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109102552A TWI740361B (en) 2020-01-22 2020-01-22 Artificial intelligence operation assistive system and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW109102552A TWI740361B (en) 2020-01-22 2020-01-22 Artificial intelligence operation assistive system and method thereof

Publications (2)

Publication Number Publication Date
TW202129483A TW202129483A (en) 2021-08-01
TWI740361B true TWI740361B (en) 2021-09-21

Family

ID=78282746

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109102552A TWI740361B (en) 2020-01-22 2020-01-22 Artificial intelligence operation assistive system and method thereof

Country Status (1)

Country Link
TW (1) TWI740361B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI814208B (en) * 2021-09-28 2023-09-01 宏達國際電子股份有限公司 Virtual image display device and object selection method for virtual image thereof

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201351174A (en) * 2012-03-30 2013-12-16 Intel Corp Techniques for intelligent media show across multiple devices
TW201514761A (en) * 2013-10-04 2015-04-16 Tatung Co Method for controlling electronic apparatus, handheld electronic apparatus and monitoring system
TW201616370A (en) * 2012-05-31 2016-05-01 英特爾股份有限公司 Method for providing augmented reality service, server and computer-readable recording medium
TW201619754A (en) * 2014-11-25 2016-06-01 Univ Kaohsiung Medical Medical image object-oriented interface auxiliary explanation control system and method thereof
TW201814361A (en) * 2016-09-30 2018-04-16 日商索尼互動娛樂股份有限公司 Head mounted display with heat image generation
TW201816548A (en) * 2016-10-26 2018-05-01 宏達國際電子股份有限公司 Virtual reality interaction method, device and system
TWM569007U (en) * 2018-07-13 2018-10-21 拓集科技股份有限公司 Systems for virtual reality browsing
TWM578822U (en) * 2019-01-09 2019-06-01 國立雲林科技大學 AR (Augmented reality) education game system
TW201937461A (en) * 2018-02-22 2019-09-16 亞東技術學院 Interactive training and testing apparatus

Also Published As

Publication number Publication date
TW202129483A (en) 2021-08-01

Similar Documents

Publication Publication Date Title
Dini et al. Application of augmented reality techniques in through-life engineering services
CN106340217B (en) Manufacturing equipment intelligence system and its implementation based on augmented reality
US20180061269A1 (en) Control and safety system maintenance training simulator
EP2568355A2 (en) Combined stereo camera and stereo display interaction
EP3631712B1 (en) Remote collaboration based on multi-modal communications and 3d model visualization in a shared virtual workspace
WO2012142250A1 (en) Augumented reality system
US11192249B2 (en) Simulation device for robot
JP2018073377A (en) Application system of total sensing positioning technique based on 3d information model
US10970935B2 (en) Body pose message system
CN105565103A (en) Method and device for detecting elevator failures
US10964104B2 (en) Remote monitoring and assistance techniques with volumetric three-dimensional imaging
US20180342054A1 (en) System and method for constructing augmented and virtual reality interfaces from sensor input
CN107257946B (en) System for virtual debugging
TWI740361B (en) Artificial intelligence operation assistive system and method thereof
Szajna et al. The application of augmented reality technology in the production processes
Raj et al. Augmented reality and deep learning based system for assisting assembly process
KR20190095849A (en) Real time and multi local cross remote control system and method using Mixed Reality Device
TWM597913U (en) Artificial intelligence operation assistive system
US11340693B2 (en) Augmented reality interactive messages and instructions for batch manufacturing and procedural operations
WO2021127529A1 (en) Virtual reality to reality system
CN105900029B (en) Use of the live video streams in Process Control System
Cardenas et al. AARON: assistive augmented reality operations and navigation system for NASA’s exploration extravehicular mobility unit (xEMU)
TWI730605B (en) Artificial intelligence patrol system and artificial intelligence patrol method
TWM594124U (en) Artificial intelligence patrol system
EP3502836B1 (en) Method for operating an augmented interactive reality system