
TWI835320B - UAV visual navigation system and method for ground feature image positioning and correction - Google Patents


Info

Publication number
TWI835320B
TWI835320B (application TW111137107A)
Authority
TW
Taiwan
Prior art keywords
image
coordinates
landmark
current
images
Prior art date
Application number
TW111137107A
Other languages
Chinese (zh)
Other versions
TW202413886A (en)
Inventor
劉吉軒
李恭儀
Original Assignee
國立政治大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立政治大學 filed Critical 國立政治大學
Priority to TW111137107A priority Critical patent/TWI835320B/en
Application granted granted Critical
Publication of TWI835320B publication Critical patent/TWI835320B/en
Publication of TW202413886A publication Critical patent/TW202413886A/en

Landscapes

  • Navigation (AREA)

Abstract

A UAV visual navigation method for ground-feature image positioning and correction is implemented on a UAV. Landmark images and landmark coordinates, together with the surrounding images around each landmark and their surrounding coordinates, are stored in advance. During flight, correction-point images are read according to the relative position between the previous frame's image and the target landmark coordinates, and the method determines whether the current image matches any correction-point image. If it does, the current coordinates are determined by comparing the current image against the matched correction-point image, whose coordinates are precisely known; if not, the current coordinates are determined by comparing the current image against the previous image and its previous coordinates. The invention combines previous-image comparison, which produces current coordinates quickly, with landmark-image matching, which corrects coordinates precisely, so that the UAV uses computing resources efficiently and performs flight missions with low error.

Description

UAV visual navigation system and method for ground feature image positioning and correction

A UAV visual navigation system and method, and in particular a UAV visual navigation system and method for ground-feature image positioning and correction.

When a UAV performs an automatic flight mission, it must determine its next flight direction using a specific automatic navigation method. Visual navigation is a low-cost navigation option for UAVs, requiring only a camera and an onboard computer capable of the necessary computation.

In one UAV visual-navigation technique, consecutive images captured in flight are feature-matched in sequence to estimate the UAV's displacement relative to the previous image, and thereby its flight direction and position. However, feature extraction and matching on consecutive images can produce wrong matches or matches to similar-looking objects, and sequential matching itself introduces error; without correction, the error grows ever larger and degrades the outcome of the flight mission.

In another technique, images captured by the UAV are matched against known landmark images and coordinates stored in advance in a database to determine the UAV's position. A landmark image here means a partial aerial image, taken from high altitude, of a building, object, or local environment with a distinctive appearance. However, matching every captured image against multiple landmark images consumes substantial computing resources, while matching only at fixed intervals risks missing landmarks. When the number of landmark images and their related data is large, matching against all of them further increases the computational cost. In addition, judging the correctness of a landmark-matching result, or choosing among multiple matching landmark images in the database, is itself a difficult problem.

In summary, existing UAV automatic navigation methods require further improvement.

In view of the remaining problems of existing UAV automatic navigation methods, including consecutive-image matching error, wrong matches, computing-resource consumption, and the judgment of landmark-matching results, the present invention provides a UAV visual navigation method for ground-feature image positioning and correction, executed by a processor, comprising the following steps: capturing a current image below the UAV and reading a previous image and a previous coordinate corresponding to the previous image; reading, according to the previous coordinate and a landmark coordinate, a plurality of correction-point images corresponding to the landmark coordinate; matching the current image against each of the correction-point images to determine whether the current image matches any of them; if the current image matches a correction-point image, determining a current coordinate from the current image, the matched correction-point image, and its corresponding correction-point coordinate; if not, determining the current coordinate from the current image, the previous image, and its corresponding previous coordinate; and generating a navigation flight command from the current coordinate and the landmark coordinate so that the UAV flies toward the landmark coordinate.

The present invention also provides a UAV visual navigation system for ground-feature image positioning and correction, comprising: a camera that captures a current image; a storage, electrically connected to the camera, that stores a plurality of previous images and corresponding previous coordinates, a navigation map, flight-mission information, a plurality of landmark images and corresponding landmark coordinates, a plurality of surrounding images corresponding to each landmark image, and a plurality of surrounding coordinates corresponding to those surrounding images; and a processor, electrically connected to the storage, that executes the UAV visual navigation method for ground-feature image positioning and correction described above.

When determining the UAV's current coordinate, the UAV visual navigation method of the present invention first matches the current image against the plurality of correction-point images corresponding to the landmark coordinate. If the current image matches one of the correction-point images, the current coordinate is determined from that matched correction-point image; if it matches none of them, the current coordinate is determined from the current image relative to the previous image, i.e. the frame captured immediately before, by computing the displacement vector between the two images. Since the previous coordinate of the previous image is known, the current coordinate can be derived from the previous coordinate plus the displacement vector. Finally, a flight command is generated from the displacement between the current coordinate and the landmark coordinate so that the UAV flies toward the landmark coordinate.
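The coordinate update described above (previous coordinate plus displacement vector) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name and the assumption that a known ground-sampling scale converts image-space displacement into map units are the editor's, chosen for clarity.

```python
def current_coordinate(prev_coord, pixel_shift, meters_per_pixel):
    """X = X' + displacement: convert the image-space shift between the
    previous and current frames into map units and add it to the known
    previous coordinate. meters_per_pixel depends on flight altitude and
    camera intrinsics (assumed known here)."""
    dx, dy = pixel_shift
    return (prev_coord[0] + dx * meters_per_pixel,
            prev_coord[1] + dy * meters_per_pixel)
```

Because each estimate builds on the last, any per-frame error compounds, which is exactly why the correction-point matching described below is needed.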

The present invention combines two visual-navigation techniques to determine the UAV's current coordinate and generate flight commands: estimating the current position from the relative movement with respect to previous images, and correcting the position with pre-stored correction-point images associated with landmark coordinates. When the UAV is far from a landmark coordinate, the current image cannot match any correction-point image near the landmark, so the current coordinate is determined from the relative movement between the current and previous images; when the UAV flies near the landmark coordinate and the current image begins to match a correction-point image, the current coordinate is determined from the correction-point coordinate of the matched correction-point image.

Since the correction-point images and their corresponding coordinates are precisely known, determining the current coordinate from a matched correction-point image corrects the coordinate, preventing the ever-growing error that accumulates when the current coordinate is continually estimated from previous images. In other words, whenever the UAV passes a landmark coordinate, its current coordinate is corrected against a known image and a known coordinate, keeping the visually navigated coordinate within a bounded error of the actual coordinate throughout the flight and improving the positioning accuracy of UAV visual navigation.

11: Camera

12: Storage

13: Processor

M: Navigation map

T1, T2: Landmarks

M0': Previous image

M0: Current image

M1, M2: Landmark images

M11~M14, M21~M24: Surrounding images

P1, P2: Landmark coordinates

P11~P14, P21~P24: Surrounding coordinates

X: Current coordinate

X': Previous coordinate

R: Expected path

Figure 1 is a block diagram of the UAV visual navigation system for ground-feature image positioning and correction of the present invention.

Figure 2A is a schematic diagram of the navigation map used by the UAV visual navigation method of the present invention.

Figure 2B is a schematic diagram of the navigation map used by the UAV visual navigation method of the present invention.

Figure 3 is a flowchart of the UAV visual navigation method of the present invention.

Figure 4A is a detailed flowchart of step S102 of the UAV visual navigation method of the present invention.

Figure 4B is a schematic diagram of an application state of step S102 of the UAV visual navigation method of the present invention.

Figure 5A is a detailed flowchart of step S103 of the UAV visual navigation method of the present invention.

Figure 5B is a schematic diagram of an application state of step S103 of the UAV visual navigation method of the present invention.

Figure 5C is a further detailed flowchart of step S103 of the UAV visual navigation method of the present invention.

Figure 6A is a detailed flowchart of step S104 of the UAV visual navigation method of the present invention.

Figure 6B is a schematic diagram of an application state of step S104 of the UAV visual navigation method of the present invention.

Figure 7A is a detailed flowchart of step S105 of the UAV visual navigation method of the present invention.

Figure 7B is a schematic diagram of an application state of step S105 of the UAV visual navigation method of the present invention.

Figure 8A is a flowchart of a preferred embodiment of the UAV visual navigation method of the present invention.

Figure 8B is a schematic diagram of an application state of step S109 of the UAV visual navigation method of the present invention.

Figure 8C is a schematic diagram of another application state of step S109 of the UAV visual navigation method of the present invention.

Referring to Figure 1 and Figures 2A-2B, the UAV visual navigation system for ground-feature image positioning and correction of the present invention is implemented on an unmanned aerial vehicle (hereinafter, UAV) and comprises a camera 11, a storage 12, and a processor 13. The camera 11 and the processor 13 are electrically connected to the storage 12. The camera 11 continuously captures images directly below the UAV during flight; the most recently captured image is the current image M0, and once the next current image has been captured, the previously captured image is stored in the storage 12 as the previous image M0'. Because the camera 11 captures images directly beneath the UAV, the coordinate of the center point of the current image can be taken as the UAV's position, i.e. the current coordinate X to be determined, while the center-point coordinates of the previous images are the UAV's earlier positions, i.e. the already determined previous coordinates X'. The storage 12 also stores a navigation map M, flight-mission information, landmark images M1 and M2 with corresponding landmark coordinates P1 and P2, surrounding images M11~M14 and M21~M24 corresponding to the landmark images M1 and M2, and surrounding coordinates P11~P14 and P21~P24 corresponding to those surrounding images. The UAV may be, for example, a quadcopter, an octocopter, a single-rotor aircraft, or a fixed-wing aircraft; the invention is not limited in this respect.

In an embodiment of the present invention, the navigation map M contains a real-scene ground map of the area in which the flight mission is to be performed. The navigation map M is generated, for example, by first projecting a real-scene world map into a square world map, then partitioning the square world map into regions by Quadkey computation or a similar map-partitioning method, selecting the navigation area according to the mission area, generating the navigation map, and assigning the navigation map a normalized coordinate system.
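The Quadkey computation mentioned above can be sketched as follows. This follows the standard Bing-Maps-style quadkey scheme, in which the bits of the tile's x and y coordinates are interleaved into a base-4 string, one digit per zoom level; it is given here only as an illustration of the partitioning idea, not as the patent's exact procedure.

```python
def quadkey(tile_x: int, tile_y: int, level: int) -> str:
    """Encode square-map tile coordinates into a quadkey string.
    Each digit (0-3) selects one quadrant per zoom level, so the key
    prefix identifies the enclosing region at every coarser level."""
    digits = []
    for i in range(level, 0, -1):
        mask = 1 << (i - 1)
        d = 0
        if tile_x & mask:
            d += 1          # east half
        if tile_y & mask:
            d += 2          # south half
        digits.append(str(d))
    return "".join(digits)
```

A useful property for selecting a navigation area is that all tiles inside a region share the region's quadkey as a prefix, so region selection reduces to a string-prefix test.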

For ease of reference, the following description uses simplified drawings of the navigation map M, the ground features, and the landmarks T1 and T2.

Referring to Figure 2A, a plurality of distinctive ground features are selected in advance as landmarks T1 and T2 on the navigation map M, and the flight mission is a flight route planned within the scope of the navigation map M. In one embodiment, the flight route consists of at least one, and preferably several, landmarks T1, T2, etc. on the navigation map M, and the flight-mission information contains the sequence number of each landmark. When the UAV takes off, the goal is, for example, to fly from an initial position to each landmark T1, T2, etc. in order, until the final landmark is reached. The description below uses, as an example, a flight mission that visits landmarks T1 and T2 in sequence.

Referring to Figure 2B, in one embodiment, for the known landmarks T1 and T2 the following are collected and stored in advance: the landmark image M1 of landmark T1 and the landmark image M2 of landmark T2; the landmark coordinates P1 of T1 and P2 of T2; the surrounding images M11~M14 of T1 and M21~M24 of T2; and the surrounding coordinates P11~P14 of the surrounding images M11~M14 and P21~P24 of the surrounding images M21~M24.

The landmarks T1 and T2 have known landmark coordinates P1 and P2 on the navigation map M, and the landmark images M1 and M2 are ground images collected before the flight mission, centered on the landmark coordinates P1 and P2. In addition, a plurality of surrounding images M11~M14 and M21~M24 adjacent to the landmarks T1 and T2 are collected in advance. In the embodiment of Figure 2B these are, for example, four surrounding images located above, below, to the left of, and to the right of each landmark T1, T2. More specifically, the surrounding images M11~M14 and M21~M24 are, for example, ground images captured with their center points at a specific distance north, south, east, and west of the landmark coordinates P1 and P2 on the navigation map M, so the center point of each surrounding image M11~M14, M21~M24 is its surrounding coordinate P11~P14, P21~P24. Alternatively, surrounding areas with prominent ground features near the landmark coordinates P1 and P2 may be chosen arbitrarily for capturing the surrounding images, and the number of surrounding images may be set according to the matching requirements, as long as the reference coordinate of each surrounding image is precisely known; the invention is not limited in this respect.
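The north/south/east/west layout of the surrounding coordinates can be expressed directly in the normalized map coordinate system. A minimal sketch, assuming x increases eastward and y increases northward; the function name and dictionary keys are illustrative, not from the patent.

```python
def surrounding_coordinates(landmark, offset):
    """Surrounding coordinates at a fixed offset north, south, east, and
    west of a landmark coordinate (x, y) on the normalized map."""
    x, y = landmark
    return {
        "north": (x, y + offset),
        "south": (x, y - offset),
        "east":  (x + offset, y),
        "west":  (x - offset, y),
    }
```

Each returned point would be the center of one pre-collected surrounding image, matching the statement that the center point of each surrounding image is its surrounding coordinate.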

Referring to Figure 3, the UAV visual navigation method for ground-feature image positioning and correction of the present invention is executed by the processor 13 and comprises the following steps:
S101: capture a current image M0, and read a previous image M0' and a previous coordinate X' corresponding to the previous image M0', where the previous coordinate X' is the UAV coordinate produced by this method for the previous frame M0';
S102: according to the relative position of the previous coordinate X' and a landmark coordinate on the navigation map M, read the correction-point images M1, M12, M13 corresponding to the landmark coordinate P1;
S103: match the current image M0 against each of the correction-point images to determine whether the current image M0 matches any of the correction-point images M1, M12, M13;
S104: if the current image M0 matches one of the correction-point images, determine a current coordinate X from the current image M0 and the correction-point coordinate corresponding to the matched correction-point image;
S105: if not, determine the current coordinate X from the current image M0, the previous image M0', and its corresponding previous coordinate X'; and
S106: generate a flight command from the current coordinate X and the landmark coordinate.
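The branch structure of steps S103~S105 can be sketched in toy form. This is a deliberately simplified illustration: images are stand-in strings and a "match" is plain equality, whereas the patent's step S103 uses feature-point matching; the class and function names are the editor's.

```python
from dataclasses import dataclass

@dataclass
class CorrectionPoint:
    image: str          # stand-in for the stored correction-point image
    coord: tuple        # known correction-point coordinate (x, y)

def navigation_step(current_image, prev_coord, correction_points, displacement):
    """One pass of S103~S105: if the current image matches a correction
    point, adopt its known coordinate (S104); otherwise dead-reckon from
    the previous coordinate plus the inter-frame displacement (S105)."""
    matched = next((c for c in correction_points
                    if c.image == current_image), None)
    if matched is not None:
        return matched.coord
    return (prev_coord[0] + displacement[0],
            prev_coord[1] + displacement[1])
```

Note how the corrected branch discards the accumulated dead-reckoning estimate entirely, which is what resets the error whenever the UAV passes a correction point.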

Referring to Figures 4A and 4B, in an embodiment of the present invention step S102 comprises the following sub-steps:
S1021: read a landmark image M1 according to the landmark coordinate P1, and read the surrounding images M11~M14 associated with the landmark image M1, the surrounding images M11~M14 having corresponding surrounding coordinates P11~P14;
S1022: according to an expected path R between the previous coordinate X' and the landmark coordinate P1, select as the correction-point images M1, M12, M13 those landmark or surrounding images whose landmark coordinate P1 or surrounding coordinate P11~P14 lies within a preset distance threshold of the expected path R.

In step S102, given the previous coordinate X' read by the UAV, the processor 13 first reads the landmark image M1 and the associated surrounding images M11~M14 according to the landmark coordinate P1 of the current target landmark T1, and computes an expected path R between the previous coordinate X' and the landmark coordinate P1. In the embodiment of Figure 4B, the expected path R is the line segment connecting the previous coordinate X' and the landmark coordinate P1. The correction-point images are then selected from the landmark image M1 and the surrounding images M11~M14 by choosing those surrounding images whose surrounding coordinates P11~P14 lie within a preset perpendicular distance of the expected path R. Since the landmark coordinate P1 is the end point of the expected path R, the landmark image M1 is always selected as one of the correction-point images. In the example of Figure 4B, the surrounding coordinates whose perpendicular distance to the expected path R is smaller than the preset distance belong to the surrounding images M12 and M13, so the correction-point images are the surrounding images M12 and M13 and the landmark image M1.
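The selection rule above (keep candidates within a perpendicular-distance threshold of R, always keep the landmark itself) can be sketched as follows. The geometry is standard point-to-segment distance; function names and argument shapes are illustrative.

```python
import math

def distance_to_segment(p, a, b):
    """Perpendicular distance from point p to segment a-b, clamped to
    the segment ends so points past either end use the end distance."""
    ax, ay = a; bx, by = b; px, py = p
    abx, aby = bx - ax, by - ay
    ab2 = abx * abx + aby * aby
    if ab2 == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab2))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def select_correction_points(prev_coord, landmark_coord, surrounding, threshold):
    """Keep surrounding coordinates within `threshold` of the expected
    path R (prev_coord -> landmark_coord); the landmark coordinate,
    being the end point of R, is always kept."""
    kept = [landmark_coord]
    kept += [s for s in surrounding
             if distance_to_segment(s, prev_coord, landmark_coord) <= threshold]
    return kept
```

Candidates far from R are excluded before any image matching is attempted, which is the source of the resource saving described below.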

By selecting the correction-point images and coordinates through S102 and its sub-steps S1021~S1022, surrounding images with a low probability of being passed, such as M11, M14, and M21~M24, are dynamically excluded based on the previous coordinate X' and the target landmark coordinate P1. When matching the current image M0 against the correction-point images M1, M12, M13 to determine whether the UAV is approaching the landmark T1, the processor 13 therefore does not need to compare all of the surrounding images M11~M14 and M21~M24, reducing the consumption of computing resources.

Referring to Figures 5A and 5B, in an embodiment of the present invention step S103 performs the following sub-steps for each correction-point image:
S1031: perform feature-point extraction and feature-point matching on the current image M0 and each correction-point image M1, M12, M13 to produce a plurality of matched feature-point pairs;
S1032: compute the matching confidence and matching angle of the matched feature-point pairs;
S1033: determine whether the number of matched feature-point pairs is greater than a count threshold, whether the average matching confidence of the matched pairs is greater than a confidence threshold, and whether the average matching angle of the matched pairs satisfies an angle condition;
S1034: if so, determine that the current image M0 matches the correction-point image (M13 in this example).

In step S103, as shown in Figure 5B, taking the surrounding image M13 as an example of a correction-point image: the image on the left is the current image M0 at a given moment, the image on the right is the surrounding image M13, and the dashed lines connecting the two sides are the lines L joining the matched feature-point pairs produced by feature-point extraction and matching. The matching confidence is a confidence value computed by the matching algorithm for each matched feature-point pair; the match count is the total number of matched pairs produced; and the matching angle is the angle of each matched pair's connecting line relative to a horizontal direction. The concrete range of the confidence value differs with the matching algorithm used, and the confidence threshold can be set accordingly by the user. The matching angle likewise depends on the chosen coordinate axes and horizontal direction. With the current image M0 and the correction-point image M13 placed side by side as in Figure 5B, the closer a connecting line L is to the horizontal in the figure, i.e. the closer its matching angle is to 180°, the better the current image M0 agrees with the correction-point image M13. Preferably, a deep-learning model is used for feature-point extraction and matching between the current image M0 and the correction-point images M1, M12, M13, since the interpretability of its extracted feature points and the accuracy of its matches are better than those of traditional methods.

After feature-point extraction and matching between the current image M0 and the correction-point images M1, M12, M13 is complete, three values are computed: the number of matched feature-point pairs, the average matching confidence over all matched pairs, and the average matching angle over all matched pairs. Only when these three values respectively exceed the count threshold, exceed the confidence threshold, and satisfy the angle condition is the current image M0 judged to match the correction-point image, which improves the correctness of the match between the current image M0 and the correction-point image.
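The three-criteria check of sub-step S1033 can be sketched as a single predicate. The angle condition here is modeled as "average angle within a tolerance of 180°", following the side-by-side layout of Figure 5B; the tolerance, the input shape, and the function name are the editor's assumptions.

```python
def passes_match_check(pairs, count_thresh, conf_thresh,
                       target_angle=180.0, angle_tol=10.0):
    """S1033 sketch: pairs is a list of (confidence, angle_degrees),
    one entry per matched feature-point pair. All three criteria
    (count, average confidence, average angle) must hold."""
    if len(pairs) <= count_thresh:
        return False
    avg_conf = sum(c for c, _ in pairs) / len(pairs)
    avg_angle = sum(a for _, a in pairs) / len(pairs)
    return (avg_conf > conf_thresh
            and abs(avg_angle - target_angle) <= angle_tol)
```

Requiring all three criteria at once is what guards against spurious matches to similar-looking ground features: a handful of confident but misaligned pairs fails the angle test, and many weak pairs fail the confidence test.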

Referring to Figure 5C, in a further preferred embodiment, after the number of matched feature-point pairs has been judged greater than the count threshold, the average matching confidence greater than the confidence threshold, and the average matching angle compliant with the angle condition, the method further comprises the following steps:
S1033A: determining whether the number of times the matched-pair count of the current image M0 against the correction-point image has exceeded the count threshold is greater than a count-occurrence threshold;
S1033B: determining whether the number of times the average matching confidence of the current image M0 against the correction-point image has exceeded the confidence threshold is greater than a confidence-occurrence threshold;
S1033C: determining whether the number of times the average matching angle of the current image M0 against the correction-point image has satisfied the angle condition is greater than an angle-occurrence threshold;
and only if all three hold is the current image M0 judged to match the correction-point image.

在本較佳實施例中，該當前影像M0符合該三種數值條件時，進一步判斷當連續複數張當前影像M0符合各該數值條件分別的次數閥值，才進一步採用該校正點影像判斷當前座標X。所屬領域中具有通常知識者應可理解的是，步驟S1033A~S1033C之執行順序係可調整，並達到相同技術效果的。舉例而言，可以設定當該當前影像M0與該校正點影像的匹配特徵點對的數量大於該數量閥值(數量次數閥值為1)，已有連續3幀當前影像M0與該校正點影像的匹配可信度大於該可信度閥值(可信度次數閥值為3)，且有連續5幀的前影像與該校正點影像的平均匹配角度符合該角度條件(角度次數閥值為5)，才判斷目前的該當前影像M0符合該校正點影像，以避免可能的誤判，並保證採用該校正點影像之正確性。 In this preferred embodiment, when the current image M0 satisfies the three value conditions, the correction point image is adopted for determining the current coordinate X only after a plurality of consecutive current images M0 have satisfied each value condition for its respective occurrence threshold. Those of ordinary skill in the art will understand that the execution order of steps S1033A~S1033C can be adjusted while achieving the same technical effect. For example, it can be set that only when the number of matched feature point pairs between the current image M0 and the correction point image exceeds the count threshold in the current frame (count-occurrence threshold of 1), the matching confidence between the current image M0 and the correction point image has exceeded the confidence threshold for 3 consecutive frames (confidence-occurrence threshold of 3), and the average matching angle between the preceding images and the correction point image has satisfied the angle condition for 5 consecutive frames (angle-occurrence threshold of 5), is the present current image M0 judged to match the correction point image, so as to avoid possible misjudgment and guarantee the correctness of adopting the correction point image.
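The consecutive-frame gating of steps S1033A~S1033C can be modeled as three independent streak counters. The class below is a minimal sketch under that reading, with the (1, 3, 5) occurrence thresholds borrowed from the example above; the exact reset behavior on a failed frame is an assumption.

```python
class CorrectionPointGate:
    """Adopt a correction-point image only after each per-frame condition
    has held for its required number of consecutive frames."""

    def __init__(self, need_count=1, need_conf=3, need_angle=5):
        self.need = (need_count, need_conf, need_angle)
        self.streaks = [0, 0, 0]

    def update(self, count_ok, conf_ok, angle_ok):
        """Feed one frame's three boolean results; a failed condition
        resets its streak. Return True once every streak meets its
        occurrence threshold."""
        for i, ok in enumerate((count_ok, conf_ok, angle_ok)):
            self.streaks[i] = self.streaks[i] + 1 if ok else 0
        return all(s >= n for s, n in zip(self.streaks, self.need))
```

With the default thresholds, five consecutive fully passing frames are needed before the gate opens, which matches the intent of rejecting one-off spurious matches.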

在步驟S103中,若判斷該當前影像M0符合其中一校正點影像,則採用該校正點影像判斷當前座標X,即進行步驟S104。若判斷該當前影像M0不符合任何一校正點影像,則根據該先前影像M0’判斷當前座標X,即進行步驟S105。 In step S103, if it is determined that the current image M0 matches one of the correction point images, the correction point image is used to determine the current coordinate X, that is, step S104 is performed. If it is determined that the current image M0 does not match any correction point image, the current coordinate X is determined based on the previous image M0', that is, step S105 is performed.

請參閱圖6A及6B所示，當執行步驟S104時，係包含以下子步驟：S1041：根據該當前影像M0及該校正點影像中的該些匹配特徵點對，計算該當前影像M0相對該校正點影像的一轉換矩陣；S1042：根據該校正點座標及該當前影像M0相對該校正點影像的該轉換矩陣計算該當前座標X。 Referring to FIGS. 6A and 6B, step S104 includes the following sub-steps: S1041: calculate a transformation matrix of the current image M0 relative to the correction point image based on the matched feature point pairs in the current image M0 and the correction point image; S1042: calculate the current coordinate X based on the correction point coordinate and the transformation matrix of the current image M0 relative to the correction point image.

如圖6B所示，延續圖5B之例子，若當前影像M0符合其中一校正點影像(M13)，則通過步驟S103中所產生的匹配特徵點對進一步計算該轉換矩陣。根據該轉換矩陣，將左邊的當前影像M0之中心點，即當前座標X，投影至右邊的該校正點影像M13中。由於該校正點座標P13係已知的，故根據校正點座標P13以及當前影像M0之中心點在該校正點影像M13之投影(右邊影像中的X)的相對位置，則可計算該當前座標X。須強調的是，如同針對步驟S103之說明，此處的當前影像與校正點影像之間的匹配特徵點對係利用深度學習模型特徵點提取所產生的，具有強健的計算基礎及更佳的準確性，故在通過校正點影像計算當前座標X，以作為無人機的位置校正根據時，能提供良好的準確度及校正效果。 As shown in FIG. 6B, continuing the example of FIG. 5B, if the current image M0 matches one of the correction point images (M13), the transformation matrix is further calculated from the matched feature point pairs produced in step S103. Using the transformation matrix, the center point of the current image M0 on the left, i.e., the current coordinate X, is projected into the correction point image M13 on the right. Since the correction point coordinate P13 is known, the current coordinate X can be calculated from the correction point coordinate P13 and the relative position of the projection of the current image M0's center point in the correction point image M13 (the X in the right-hand image). It should be emphasized that, as explained for step S103, the matched feature point pairs between the current image and the correction point image here are produced by deep-learning feature point extraction, which has a robust computational basis and better accuracy; therefore, when the current coordinate X is calculated through the correction point image and used as the basis for correcting the UAV's position, good accuracy and correction performance can be provided.
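Steps S1041/S1042 can be sketched as follows, assuming the transformation matrix is a 3x3 planar homography H (row-major nested lists) mapping current-image pixels into correction-point-image pixels, north-up imagery, and a known ground resolution in metres per pixel; none of these conventions are fixed by the patent text.

```python
def project_point(H, x, y):
    """Apply a 3x3 homography to pixel (x, y) and dehomogenize."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def coordinate_from_correction_point(H, img_w, img_h,
                                     ref_center_px, ref_coord, m_per_px):
    """Project the current image's centre into the correction-point image,
    then offset the known correction-point coordinate by the pixel
    displacement scaled to map units (image y grows downward, map
    northing grows upward)."""
    u, v = project_point(H, img_w / 2, img_h / 2)
    dx_px = u - ref_center_px[0]
    dy_px = v - ref_center_px[1]
    return (ref_coord[0] + dx_px * m_per_px,
            ref_coord[1] - dy_px * m_per_px)
```

With the identity homography the current coordinate coincides with the correction-point coordinate, as expected when the two images are perfectly aligned.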

請參閱圖7A及7B所示，當執行步驟S105前，係先執行以下步驟：S105A：對該當前影像M0與該先前影像M0'分別進行特徵點提取，以分別產生複數特徵點；S105B：匹配該當前影像M0中的特徵點及該先前影像M0'中的特徵點，以計算該當前影像M0相對該先前影像M0'的一轉換矩陣；且當執行S105時，係執行以下內容：S105'：根據該先前座標X'及該當前影像M0相對該先前影像M0'的該轉換矩陣計算該當前座標X。 Referring to FIGS. 7A and 7B, before step S105 is executed, the following steps are first performed: S105A: perform feature point extraction on the current image M0 and the previous image M0' respectively, so as to produce a plurality of feature points for each; S105B: match the feature points in the current image M0 with the feature points in the previous image M0' to calculate a transformation matrix of the current image M0 relative to the previous image M0'; and when S105 is executed, the following is performed: S105': calculate the current coordinate X based on the previous coordinate X' and the transformation matrix of the current image M0 relative to the previous image M0'.

其中，步驟S105A及S105B係執行步驟S105(S105')之預步驟，例如是與步驟S103同時執行。此外，在步驟S103中，若判斷結果為「是」，則不會執行到步驟S105。在步驟S105A及S105B中，該處理器13提取當前影像M0及先前影像M0'的特徵點，再根據當前影像M0及先前影像M0'的特徵點產生一轉換矩陣，或可稱一投影矩陣，該轉換矩陣代表該當前影像M0相對該先前影像M0'或該校正點影像的一相對位移，最後在步驟S105'中，藉由該轉換矩陣計算該當前座標X。其產生的效果如圖7B所示，其中右邊為先前影像M0'，左邊為當前影像M0，虛線箭頭為該處理器13根據先前影像M0'及當前影像M0的特徵點計算之轉換矩陣之投影示意。根據該轉換矩陣，將當前影像M0之中心點投影至該先前影像M0'中，由於該先前座標X'係已知的，故根據先前座標X'以及當前影像M0之中心點在該先前影像M0'之投影的相對位置，則可計算該當前座標X。 Steps S105A and S105B are preparatory steps for step S105 (S105') and are executed, for example, simultaneously with step S103. In addition, if the determination in step S103 is "yes", step S105 is not executed. In steps S105A and S105B, the processor 13 extracts the feature points of the current image M0 and the previous image M0', and then generates a transformation matrix, which may also be called a projection matrix, from those feature points; the transformation matrix represents a relative displacement of the current image M0 with respect to the previous image M0' or the correction point image. Finally, in step S105', the current coordinate X is calculated using the transformation matrix. The resulting effect is shown in FIG. 7B, in which the right side is the previous image M0', the left side is the current image M0, and the dotted arrows illustrate the projection of the transformation matrix calculated by the processor 13 from the feature points of the previous image M0' and the current image M0. Using the transformation matrix, the center point of the current image M0 is projected into the previous image M0'; since the previous coordinate X' is known, the current coordinate X can be calculated from the previous coordinate X' and the relative position of the projection of the current image M0's center point in the previous image M0'.
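When the motion between consecutive frames is close to a pure translation, the transformation matrix of S105B degenerates to a 2D pixel offset, which can be estimated as the mean displacement of the matched pairs. The sketch below uses that simplification; the sign conventions (nadir camera, north-up image, image y pointing down) are assumptions for illustration only.

```python
def translation_from_matches(pts_prev, pts_curr):
    """Mean pixel displacement of matched feature points from the
    previous frame to the current one (pure-translation model)."""
    n = len(pts_prev)
    dx = sum(c[0] - p[0] for p, c in zip(pts_prev, pts_curr)) / n
    dy = sum(c[1] - p[1] for p, c in zip(pts_prev, pts_curr)) / n
    return dx, dy

def dead_reckon(prev_coord, dx_px, dy_px, m_per_px):
    """Convert apparent feature drift into UAV motion: features drifting
    left (dx < 0) mean the UAV moved east; features drifting down
    (dy > 0) mean it moved north (north-up, y-down image assumed)."""
    return (prev_coord[0] - dx_px * m_per_px,
            prev_coord[1] + dy_px * m_per_px)
```

A full implementation would estimate a homography from the matches instead, which also absorbs rotation and altitude change between frames.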

較佳的,對該當前影像M0與該先前影像M0’或該校正點影像分別進行特徵點提取係通過尺度不變特徵轉換演算法(Scale-invariant feature transform,SIFT)影像特徵點提取演算法,而匹配該當前影像M0中的特徵點及該先前影像M0’或該校正點影像中的特徵點是通過K近鄰(K Nearest Neighbors,KNN)演算法進行匹配。 Preferably, the feature point extraction of the current image M0 and the previous image M0' or the correction point image is performed through a scale-invariant feature transform (SIFT) image feature point extraction algorithm. The matching of the feature points in the current image M0 and the feature points in the previous image M0' or the correction point image is performed through a K Nearest Neighbors (KNN) algorithm.
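The KNN matching stage can be illustrated without any image library by treating descriptors as plain float vectors. The Lowe-style ratio test shown here is a common companion to KNN descriptor matching; it is an assumption for this sketch, not a requirement stated in the text.

```python
def knn_match(desc_a, desc_b, ratio=0.75):
    """For each descriptor in desc_a, find its two nearest neighbours in
    desc_b (squared Euclidean distance) and keep the match only when the
    best candidate is clearly closer than the second best (ratio test)."""
    def dist2(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist2(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        # compare squared distances, so square the ratio as well
        if dist2(d, desc_b[best]) < (ratio ** 2) * dist2(d, desc_b[second]):
            matches.append((i, best))
    return matches
```

In practice the same ratio test is applied to SIFT descriptors (128-dimensional vectors); descriptors that are ambiguous, i.e. almost equally close to two candidates, are discarded.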

請一併參閱圖8A及8B，本發明進一步包含以下步驟：S107：計算該當前座標X與該地標座標P1的距離；S108：判斷該距離是否小於一地標抵達閥值；S109：若是，判斷抵達該地標座標P1，並根據一飛行任務資訊更新該地標座標及該地標影像，並回到步驟S101；若否，則直接回到步驟S101。 Referring to FIGS. 8A and 8B together, the present invention further includes the following steps: S107: calculate the distance between the current coordinate X and the landmark coordinate P1; S108: determine whether the distance is less than a landmark arrival threshold; S109: if so, judge that the landmark coordinate P1 has been reached, update the landmark coordinate and the landmark image according to flight mission information, and return to step S101; if not, return directly to step S101.

以圖8B所示的導航地圖M及無人機所在當前座標X為例，為了進一步避免無人機在尚未確實抵達地標座標P1即提早離開，故在推算出當前座標X後，還進一步計算該當前座標X與該地標座標P1的距離是否足夠接近，若距離小於該地標抵達閥值，則判斷該無人機的當前座標X已經抵達該地標座標P1。根據飛行任務資訊，該處理器13會將目前作為目標的地標座標P1更新為下一個目標的地標座標P2，並且更新讀取對應的地標影像M2、周圍影像M21~M24、周圍座標P21~P24等資訊，以繼續進行向下一個地標座標P2前進的飛行任務。 Taking the navigation map M and the UAV's current coordinate X shown in FIG. 8B as an example, in order to further prevent the UAV from leaving early before it has actually reached the landmark coordinate P1, after the current coordinate X is estimated, it is further checked whether the distance between the current coordinate X and the landmark coordinate P1 is close enough; if the distance is less than the landmark arrival threshold, the UAV's current coordinate X is judged to have reached the landmark coordinate P1. According to the flight mission information, the processor 13 updates the current target landmark coordinate P1 to the next target's landmark coordinate P2, and reads the corresponding landmark image M2, surrounding images M21~M24, surrounding coordinates P21~P24, and related information, so as to continue the flight mission toward the next landmark coordinate P2.
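The arrival test and waypoint hand-over of steps S107~S109 reduce to a small state update. The list-of-waypoints representation below is an assumption for illustration; the patent keeps this information in the flight mission information together with per-landmark images.

```python
import math

def advance_mission(current, waypoints, idx, arrive_thresh):
    """Return the updated target index: move on to the next landmark once
    the current coordinate is within the arrival threshold of the present
    target, and return None when the last landmark has been reached."""
    if idx is None or idx >= len(waypoints):
        return None
    tx, ty = waypoints[idx]
    if math.hypot(current[0] - tx, current[1] - ty) < arrive_thresh:
        idx += 1
        if idx >= len(waypoints):
            return None  # mission complete
    return idx
```

On a real system, advancing `idx` would also trigger reloading the next landmark image and its surrounding images, as described in the text.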

請參閱圖8C所示，當該無人機判斷抵達地標座標P1，該處理器13則會先清空校正點影像及校正點座標，並且依據飛行任務資訊更新地標座標P1為新的地標座標P2，並依當下的先前座標X'，即判斷抵達地標座標P1的當前座標X，與新的地標座標P2再次計算預期路徑R，並重新選擇校正點影像。圖8C所繪示為該當前座標X已被判斷抵達地標座標P1，故該處理器13根據新的地標座標P2計算新的預期路徑R後，由該等周圍座標P11~P14、P21~P24中選擇距離該預期路徑R的垂直距離在該預設距離內者所在的該周圍影像M11~M14、M21~M24，以及地標影像M2為新的校正點影像。在圖8C的例子中，該處理器13例如是選擇周圍影像M14、M22、M23，以及地標影像M2為該等校正點影像。 Referring to FIG. 8C, when the UAV determines that the landmark coordinate P1 has been reached, the processor 13 first clears the correction point images and correction point coordinates, updates the landmark coordinate P1 to the new landmark coordinate P2 according to the flight mission information, and then recalculates the expected path R from the present previous coordinate X' (i.e., the current coordinate X judged to have reached the landmark coordinate P1) and the new landmark coordinate P2, and reselects the correction point images. FIG. 8C depicts the case where the current coordinate X has been judged to have reached the landmark coordinate P1; after the processor 13 calculates the new expected path R based on the new landmark coordinate P2, it selects, from the surrounding coordinates P11~P14 and P21~P24, the surrounding images M11~M14, M21~M24 whose perpendicular distance to the expected path R is within the preset distance, together with the landmark image M2, as the new correction point images. In the example of FIG. 8C, the processor 13 selects, for instance, the surrounding images M14, M22, M23 and the landmark image M2 as the correction point images.
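Selecting the new correction point images amounts to a point-to-segment distance filter along the expected path R. The sketch below assumes planar coordinates and a candidate list of (name, coordinate) pairs; both are illustrative conventions rather than details from the patent.

```python
import math

def point_segment_distance(p, a, b):
    """Shortest distance from point p to the segment a-b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg2 = dx * dx + dy * dy
    if seg2 == 0:
        return math.hypot(px - ax, py - ay)
    # clamp the projection parameter so endpoints are handled correctly
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def select_correction_points(prev_coord, landmark_coord, candidates, max_dist):
    """Keep candidates (name, coord) whose perpendicular distance to the
    expected path prev_coord -> landmark_coord is within max_dist; the
    landmark itself always qualifies (distance 0 at the segment end)."""
    return [name for name, c in candidates
            if point_segment_distance(c, prev_coord, landmark_coord) <= max_dist]
```

Candidates far off the corridor between the two waypoints are skipped, which is what keeps the number of correction-point matches per frame small.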

此外，在一實施例中，當處理器13判斷抵達地標座標，係先根據該飛行任務資訊判斷是否還有尚未抵達的地標座標；若有，才根據飛行任務資訊更新地標座標及地標影像、周圍影像；若無，則結束該飛行任務。 In addition, in one embodiment, when the processor 13 determines that the landmark coordinate has been reached, it first determines, based on the flight mission information, whether there are landmark coordinates that have not yet been reached; if so, it updates the landmark coordinate, the landmark image, and the surrounding images according to the flight mission information; if not, the flight mission ends.

綜上所述，相較傳統的特徵點提取與匹配較不準確，傳統的深度學習的特徵點提取與匹配較耗費運算資源，本發明的地面特徵影像定位及校正之無人機視覺導航方法及系統在根據先前影像M0'進行自我定位時具有較高的匹配速度，在根據地標影像進行匹配時要求準確，從而結合二種定位方法的優點幫助無人機順利在導航地圖M上飛行。藉由先前座標X'與地標座標的相對位置設定校正點採納機制，有效的減少需要匹配的校正點影像數量，減少運算資源耗損，且同時設定三種不同的數值閥值以及次數閥值判斷當前影像M0是否符合校正點影像，如此一來能夠有效提高判斷準確性並排除多數的誤判情形。而藉由事先收集地標影像的周圍影像，一併納入校正點影像的判斷，可以有效判斷無人機相對該地標影像的所在位置，幫助匹配過程，避免提早誤判已經到達地標，使得無人機提早轉彎的情況發生。 In summary, traditional feature point extraction and matching is less accurate, while deep-learning feature point extraction and matching consumes more computing resources. The UAV visual navigation method and system for ground feature image positioning and correction of the present invention offers high matching speed when self-localizing from the previous image M0' and demands accuracy when matching against landmark images, thereby combining the advantages of the two positioning methods to help the UAV fly smoothly over the navigation map M. By setting a correction point adoption mechanism based on the relative position between the previous coordinate X' and the landmark coordinate, the number of correction point images that need to be matched is effectively reduced, lowering the consumption of computing resources; meanwhile, three different value thresholds together with occurrence thresholds are set for judging whether the current image M0 matches a correction point image, which effectively improves judgment accuracy and excludes most misjudgment cases. Moreover, by collecting the surrounding images of a landmark image in advance and including them in the correction point image judgment, the UAV's position relative to the landmark image can be effectively determined, aiding the matching process and avoiding an early misjudgment that the landmark has been reached, which would cause the UAV to turn too early.

S101~S106:步驟 S101~S106: Steps

Claims (9)

一種地面特徵影像定位及校正之無人機視覺導航方法，由一處理器執行，包含以下步驟：擷取一無人機的下方的一當前影像，並讀取一先前影像及對應該先前影像的一先前座標；其中，該先前影像為擷取該當前影像之前所擷取的前一幀影像；根據該先前座標及一地標座標，讀取對應該地標座標的複數校正點影像；將該當前影像與該等校正點影像分別進行影像匹配，以判斷該當前影像是否符合其中一校正點影像；若該當前影像符合其中一校正點影像，根據該當前影像、符合的該校正點影像及其對應的一校正點座標判斷一當前座標；若否，根據該當前影像、該先前影像及其對應的該先前座標判斷該當前座標；以及根據該當前座標及該地標座標產生一導航飛行指令，使該無人機向該地標座標飛行。 A UAV visual navigation method for ground feature image positioning and correction, executed by a processor, comprising the following steps: capturing a current image below a UAV, and reading a previous image and a previous coordinate corresponding to the previous image, wherein the previous image is the frame captured immediately before the current image; reading, based on the previous coordinate and a landmark coordinate, a plurality of correction point images corresponding to the landmark coordinate; performing image matching between the current image and each of the correction point images to determine whether the current image matches one of the correction point images; if the current image matches one of the correction point images, determining a current coordinate based on the current image, the matched correction point image, and its corresponding correction point coordinate; if not, determining the current coordinate based on the current image, the previous image, and its corresponding previous coordinate; and generating a navigation flight command based on the current coordinate and the landmark coordinate to make the UAV fly toward the landmark coordinate. 
如請求項1所述的地面特徵影像定位及校正之無人機視覺導航方法，其中，當執行「根據該先前座標及一地標座標，讀取對應該地標座標的複數校正點影像」的步驟時，係包含以下子步驟：根據該地標座標讀取一地標影像，以及讀取與該地標影像相關的複數周圍影像，該等周圍影像具有對應的周圍座標；根據該先前座標及該地標座標之間的一預期路徑，選取地標座標或周圍座標距離該預期路徑的距離小於一預設距離閥值的地標影像或周圍影像為該等校正點影像。 The UAV visual navigation method for ground feature image positioning and correction according to claim 1, wherein the step of "reading, based on the previous coordinate and a landmark coordinate, a plurality of correction point images corresponding to the landmark coordinate" includes the following sub-steps: reading a landmark image according to the landmark coordinate, and reading a plurality of surrounding images related to the landmark image, the surrounding images having corresponding surrounding coordinates; and, based on an expected path between the previous coordinate and the landmark coordinate, selecting as the correction point images those landmark or surrounding images whose landmark coordinate or surrounding coordinate lies at a distance from the expected path smaller than a preset distance threshold. 
如請求項1所述的地面特徵影像定位及校正之無人機視覺導航方法,其中,當執行「將該當前影像與該等校正點影像分別進行影像匹配,以判斷該當前影像是否符合其中一校正點影像」之步驟時,係分別對各該校正點影像進行以下子步驟:對該當前影像及該校正點影像分別進行特徵點提取及特徵點匹配,以產生複數匹配特徵點對;計算該等匹配特徵點對的匹配可信度及匹配角度;其中,匹配角度為每一匹配特徵點對的連線相對一水平方向的一角度值;判斷匹配特徵點對的數量是否大於一數量閥值、各該匹配特徵點對的平均匹配可信度是否大於一可信度閥值,且各該匹配特徵點對的平均匹配角度是否符合一角度條件;若是,判斷該當前影像符合該校正點影像。 The UAV visual navigation method for locating and correcting ground feature images as described in request item 1, wherein when executing "match the current image with the correction point images respectively to determine whether the current image conforms to one of the corrections" point image", the following sub-steps are performed on each correction point image: feature point extraction and feature point matching are performed on the current image and the correction point image respectively to generate a plurality of matching feature point pairs; calculate the The matching credibility and matching angle of the matching feature point pairs; where the matching angle is an angle value of the connection line of each matching feature point pair relative to a horizontal direction; determine whether the number of matching feature point pairs is greater than a quantity threshold, Whether the average matching credibility of each matching feature point pair is greater than a credibility threshold, and whether the average matching angle of each matching feature point pair meets an angle condition; if so, it is determined that the current image meets the correction point image. 
如請求項3所述的地面特徵影像定位及校正之無人機視覺導航方法，其中，當判斷匹配特徵點對的數量大於該數量閥值、各該匹配特徵點對的平均匹配可信度大於該可信度閥值，且各該匹配特徵點對的平均匹配角度符合該角度條件，進一步包含以下步驟：判斷當前影像與該校正點影像的匹配特徵點對的數量大於該數量閥值的次數是否大於一數量次數閥值；判斷當前影像與該校正點影像的平均匹配可信度大於該可信度閥值的次數是否大於一可信度次數閥值；判斷當前影像與該校正點影像的平均匹配角度符合該角度條件的次數是否大於一角度次數閥值；若皆是，才判斷該當前影像符合該校正點影像。 The UAV visual navigation method for ground feature image positioning and correction according to claim 3, wherein, after it is determined that the number of matched feature point pairs is greater than the count threshold, that the average matching confidence of the matched pairs is greater than the confidence threshold, and that the average matching angle of the matched pairs satisfies the angle condition, the method further comprises the following steps: determining whether the number of times the count of matched feature point pairs between the current image and the correction point image has exceeded the count threshold is greater than a count-occurrence threshold; determining whether the number of times the average matching confidence between the current image and the correction point image has exceeded the confidence threshold is greater than a confidence-occurrence threshold; determining whether the number of times the average matching angle between the current image and the correction point image has satisfied the angle condition is greater than an angle-occurrence threshold; and judging the current image to match the correction point image only if all of the above hold. 
如請求項2所述的地面特徵影像定位及校正之無人機視覺導航方法，其中，「根據符合的該校正點影像對應的校正點座標判斷該當前座標」的步驟，進一步包含以下子步驟：根據該當前影像及該校正點影像中的該些匹配特徵點對，計算該當前影像相對該校正點影像的一轉換矩陣；根據該校正點座標及該當前影像相對該校正點影像的該轉換矩陣計算該當前座標。 The UAV visual navigation method for ground feature image positioning and correction according to claim 2, wherein the step of "determining the current coordinate based on the correction point coordinate corresponding to the matched correction point image" further includes the following sub-steps: calculating a transformation matrix of the current image relative to the correction point image based on the matched feature point pairs in the current image and the correction point image; and calculating the current coordinate based on the correction point coordinate and the transformation matrix of the current image relative to the correction point image. 

如請求項2所述的地面特徵影像定位及校正之無人機視覺導航方法，其中，當符合的該校正點影像係該地標影像時，進一步包含以下步驟：計算該當前座標與該地標座標的距離；判斷該距離是否小於一地標抵達閥值；若是，判斷抵達該地標座標，並根據一飛行任務資訊更新該地標座標及該地標影像。 The UAV visual navigation method for ground feature image positioning and correction according to claim 2, wherein, when the matched correction point image is the landmark image, the method further comprises the following steps: calculating the distance between the current coordinate and the landmark coordinate; determining whether the distance is less than a landmark arrival threshold; and, if so, judging that the landmark coordinate has been reached, and updating the landmark coordinate and the landmark image according to flight mission information. 
如請求項1所述的地面特徵影像定位及校正之無人機視覺導航方法，其中在執行「根據該當前影像、該先前影像及其對應的該先前座標判斷該當前座標」的步驟前，先進行包含以下步驟：對該當前影像與該先前影像分別進行特徵點提取，以分別產生複數特徵點；匹配該當前影像中的特徵點及該先前影像中的特徵點，以計算該當前影像相對該先前影像的一轉換矩陣；且當執行「根據該當前影像、該先前影像及其對應的該先前座標判斷該當前座標」的步驟時，係根據該先前座標及該當前影像相對該先前影像的該轉換矩陣計算該當前座標。 The UAV visual navigation method for ground feature image positioning and correction according to claim 1, wherein, before the step of "determining the current coordinate based on the current image, the previous image, and its corresponding previous coordinate" is executed, the following steps are performed: performing feature point extraction on the current image and the previous image respectively, so as to produce a plurality of feature points for each; and matching the feature points in the current image with the feature points in the previous image to calculate a transformation matrix of the current image relative to the previous image; and wherein, when the step of "determining the current coordinate based on the current image, the previous image, and its corresponding previous coordinate" is executed, the current coordinate is calculated based on the previous coordinate and the transformation matrix of the current image relative to the previous image. 

如請求項1所述的地面特徵影像定位及校正之無人機視覺導航方法，進一步包含：儲存該導航地圖、該等地標影像、地標座標、該等周圍影像、及該等周圍影像資訊；其中，該飛行任務資訊包含各該地標座標對應的順序編號。 The UAV visual navigation method for ground feature image positioning and correction according to claim 1, further comprising: storing the navigation map, the landmark images, the landmark coordinates, the surrounding images, and the surrounding image information; wherein the flight mission information includes a sequence number corresponding to each of the landmark coordinates. 
一種地面特徵影像定位及校正之無人機視覺導航系統,包含:一攝影機,擷取一當前影像;一儲存器,電性連接該攝影機,儲存複數先前影像及對應的先前座標、一導航地圖、一飛行任務資訊、複數地標影像及對應的複數地標座標、各該地標影像對應的複數周圍影像、以及該等周圍影像對應的複數周圍座標;一處理器,電性連接該儲存器,執行如請求項1至8中任一項所述的地面特徵影像定位及校正之無人機視覺導航方法。 A UAV visual navigation system for positioning and correcting ground feature images, including: a camera that captures a current image; a memory that is electrically connected to the camera and stores a plurality of previous images and corresponding previous coordinates, a navigation map, and a Flight mission information, a plurality of landmark images and a plurality of corresponding landmark coordinates, a plurality of surrounding images corresponding to each landmark image, and a plurality of surrounding coordinates corresponding to the surrounding images; a processor electrically connected to the storage to execute the requested item UAV visual navigation method for ground feature image positioning and correction described in any one of 1 to 8.
TW111137107A 2022-09-29 2022-09-29 UAV visual navigation system and method for ground feature image positioning and correction TWI835320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW111137107A TWI835320B (en) 2022-09-29 2022-09-29 UAV visual navigation system and method for ground feature image positioning and correction


Publications (2)

Publication Number Publication Date
TWI835320B true TWI835320B (en) 2024-03-11
TW202413886A TW202413886A (en) 2024-04-01

Family

ID=91269719

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111137107A TWI835320B (en) 2022-09-29 2022-09-29 UAV visual navigation system and method for ground feature image positioning and correction

Country Status (1)

Country Link
TW (1) TWI835320B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103822635A (en) * 2014-03-05 2014-05-28 北京航空航天大学 Visual information based real-time calculation method of spatial position of flying unmanned aircraft
US10198011B2 (en) * 2017-07-06 2019-02-05 Top Flight Technologies, Inc. Navigation system for a drone
TW201915945A (en) * 2017-09-15 2019-04-16 林永淵 System and method for unmanned aircraft image analysis
WO2019084825A1 (en) * 2017-10-31 2019-05-09 深圳市大疆创新科技有限公司 Image processing method and device, and unmanned aerial vehicle
US10515458B1 (en) * 2017-09-06 2019-12-24 The United States Of America, As Represented By The Secretary Of The Navy Image-matching navigation method and apparatus for aerial vehicles
US10914590B2 (en) * 2014-03-24 2021-02-09 SZ DJI Technology Co., Ltd. Methods and systems for determining a state of an unmanned aerial vehicle
EP3825954A1 (en) * 2018-07-18 2021-05-26 SZ DJI Technology Co., Ltd. Photographing method and device and unmanned aerial vehicle
CN113256719A (en) * 2021-06-03 2021-08-13 舵敏智能科技(苏州)有限公司 Parking navigation positioning method and device, electronic equipment and storage medium
US20210358102A1 (en) * 2017-10-11 2021-11-18 Hitachi Systems Ltd. Aircraft-utilizing deterioration diagnosis system
US20220244054A1 (en) * 2015-12-09 2022-08-04 SZ DJI Technology Co., Ltd. Systems and methods for auto-return
TW202238076A (en) * 2021-03-24 2022-10-01 百一電子股份有限公司 Indoor positioning and searching object method for intelligent unmanned vehicle system


Also Published As

Publication number Publication date
TW202413886A (en) 2024-04-01

Similar Documents

Publication Publication Date Title
CN108769821B (en) Scene of game describes method, apparatus, equipment and storage medium
WO2020186678A1 (en) Three-dimensional map constructing method and apparatus for unmanned aerial vehicle, computer device, and storage medium
CN107909600B (en) Unmanned aerial vehicle real-time moving target classification and detection method based on vision
CN110555901B (en) Method, device, equipment and storage medium for positioning and mapping dynamic and static scenes
WO2020014909A1 (en) Photographing method and device and unmanned aerial vehicle
WO2022183785A1 (en) Robot positioning method and apparatus, robot, and readable storage medium
US9058538B1 (en) Bundle adjustment based on image capture intervals
WO2020103110A1 (en) Image boundary acquisition method and device based on point cloud map and aircraft
WO2019196476A1 (en) Laser sensor-based map generation
CN109425348B (en) Method and device for simultaneously positioning and establishing image
WO2020228694A1 (en) Camera pose information detection method and apparatus, and corresponding intelligent driving device
JP6860620B2 (en) Information processing equipment, information processing methods, and programs
CN112414403B (en) Robot positioning and attitude determining method, equipment and storage medium
JP7147753B2 (en) Information processing device, information processing method, and program
CN109405830B (en) Unmanned aerial vehicle automatic inspection method based on line coordinate sequence
JP2017228111A (en) Unmanned aircraft, control method of unmanned aircraft and control program of unmanned aircraft
WO2021016806A1 (en) High-precision map positioning method, system and platform, and computer-readable storage medium
Zheng et al. Robust and accurate monocular visual navigation combining IMU for a quadrotor
CN111510704A (en) Method for correcting camera dislocation and device using same
TWI835320B (en) UAV visual navigation system and method for ground feature image positioning and correction
CN113034347A (en) Oblique photographic image processing method, device, processing equipment and storage medium
CN112050814A (en) Unmanned aerial vehicle visual navigation system and method for indoor transformer substation
CN111461008A (en) Unmanned aerial vehicle aerial shooting target detection method combining scene perspective information
WO2020098532A1 (en) Method for positioning mobile robot, and mobile robot
CN113470093B (en) Video jelly effect detection method, device and equipment based on aerial image processing