
CN110631588B - Unmanned aerial vehicle visual navigation positioning method based on RBF network - Google Patents

Unmanned aerial vehicle visual navigation positioning method based on RBF network Download PDF

Info

Publication number
CN110631588B
Authority
CN
China
Prior art keywords
image
feature point
descriptor
positioning
visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910924244.1A
Other languages
Chinese (zh)
Other versions
CN110631588A (en)
Inventor
贾海涛
吴婕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201910924244.1A
Publication of CN110631588A
Application granted
Publication of CN110631588B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 — Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a UAV visual navigation and positioning method based on an RBF network. The scheme is as follows: while the GNSS signal is available, images are collected by the camera and image frames are extracted; feature points are detected in each image and descriptors are extracted, retaining only the feature point information of the image; this descriptor processing is repeated for each extracted image frame, and the descriptor information is stored in a visual database together with the positioning information. When the GNSS signal is lost, images captured by the camera are extracted and descriptors are extracted in the same way; an RBF network classifier is trained with the information in the visual database, a neighborhood search is then performed on the generated descriptors with the RBF network classifier to estimate the optimal matching position, and the current positioning information is obtained from the positioning information recorded for that position. When the GNSS signal is lost, the visual database constructed by the invention enables positioning and navigation of the UAV, and because the database stores only the feature point descriptor information of the images, its memory footprint is small.

Description

A UAV visual navigation and positioning method based on an RBF network

Technical Field

The invention belongs to the technical field of unmanned aerial vehicle (UAV) navigation and positioning, and in particular relates to a UAV visual navigation and positioning method based on an RBF (Radial Basis Function) network.

Background Art

The integrated positioning system of a UAV plays a vital role in its stability and integrity. The most common positioning solution combines a Global Navigation Satellite System (GNSS) and an Inertial Navigation System (INS) within a multi-sensor fusion framework. In this arrangement, GNSS serves as a simple and economical means of constraining the unbounded errors produced by the INS sensors during positioning. In fact, the INS integrates over time while iteratively acquiring data from multiple sensors to obtain an approximate UAV position, and in this process the measurement errors produced by the sensors accumulate rapidly and grow without bound. Most UAVs therefore fuse INS and GNSS data within an Extended Kalman Filter (EKF) framework, which combines the short-term accuracy of the INS with the long-term accuracy of the GNSS and thereby effectively suppresses positioning errors. For this reason, GNSS is widely used on all kinds of UAVs.

Despite its advantages, GNSS has proven unreliable in several documented instances. Outdoor scenes such as urban canyons, forests, jungles and rainy areas have also proven vulnerable to deliberate attacks and unintentional environmental interference. In addition, UAVs that rely on GNSS have been shown on multiple occasions to be susceptible to signal spoofing, and such attacks are now a reality. A drawback of using GNSS for UAV navigation is the radio communication required to obtain positioning data; radio communication systems are generally prone to availability problems, interference and signal alteration. The fundamental reason for using GNSS/INS fusion is its reliance on global information obtained from GNSS to solve the local positioning problem. To address these issues, suitable navigation sensors and new navigation algorithms are introduced to solve the UAV navigation and positioning problem under wireless communication interference and short- or long-term GNSS/INS failure.

A popular approach to reliably determining UAV position in GNSS-denied or GNSS-degraded outdoor environments combines a monocular 2D camera with vision-based techniques. These techniques fall into two categories: those that use prior knowledge of the environment and those that do not. Among techniques that use prior knowledge, map-based navigation appears to be the most advanced: images taken by the UAV are matched against high-resolution landmark satellite images or landmark images from previous flights. The limitations of this solution include the need for a large geographic image database, which the onboard equipment must access over a network connection, and the need for prior knowledge of the starting point or a predefined boundary. Map-based solutions therefore suffer from severe limitations that hinder their application in real scenarios. The second category of vision-based techniques does not have this restriction, because no prior knowledge of the environment is required. Solutions in this category include visual odometry and simultaneous localization and mapping (SLAM). In visual odometry, the motion of the UAV is estimated by tracking features or pixels between consecutive images obtained from a monocular camera. However, even state-of-the-art monocular visual odometry degrades over time, because the current position estimate is based on previous estimates, leading to an accumulation of errors. Compared with visual odometry, SLAM solves the localization problem while building a map of the environment. Map building requires multiple steps, such as tracking, relocalization and loop closing, so such solutions always come with heavy computation and memory usage.

Summary of the Invention

The object of the present invention is to address the problems above by providing a UAV visual navigation and positioning method based on an RBF network. Ground-image feature descriptors are collected while the UAV is in flight; an RBF network classifier trained on this feature descriptor data set then performs a neighborhood search on the feature point descriptors of the currently collected image to find the optimal matching position of the current image, from which relatively accurate positioning information for the UAV's location is estimated.

The RBF network-based UAV visual navigation and positioning method of the present invention comprises the following steps:

Step S1: set up an RBF neural network for matching the feature point descriptors of images, and train the neural network;

The training samples are images collected by the onboard camera during UAV flight; the feature vector of each training sample is the feature point descriptor of the image obtained by ORB feature point detection;

Step S2: construct a visual database during UAV flight:

During UAV flight, images are collected by the onboard camera, ORB feature point detection is performed on each collected image, and the descriptor of every feature point is extracted to obtain the feature point descriptors of the current image; the feature point descriptors of the image are stored in the visual database together with the positioning information at the time the image was collected;

Step S3: UAV visual navigation and positioning based on the visual database:

At a fixed interval, an image collected by the onboard camera is extracted as the image to be matched;

ORB feature point detection is performed on the image to be matched and the descriptor of each feature point is extracted, yielding the feature point descriptors of the image to be matched;

The feature point descriptors of the image to be matched are input into the trained RBF neural network, and a neighborhood search is performed to obtain the optimal matching feature point descriptor for the image to be matched in the visual database;

The current visual navigation positioning result of the UAV is then obtained from the positioning information recorded in the database for the optimal matching feature point descriptor.

Further, step S3 also includes: detecting whether the similarity between the optimal matching feature point descriptor and the feature point descriptor of the image to be matched is less than a preset similarity threshold; if so, the current visual navigation positioning result of the UAV is obtained from the positioning information recorded in the database for the optimal matching feature point descriptor; otherwise, navigation continues based on the most recently obtained visual navigation positioning result.
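
A minimal end-to-end sketch of steps S1–S3 is given below (Python with OpenCV; the database layout and the classifier's `nearest_entry` interface are illustrative assumptions, not part of the patent, and the similarity check of step S3 then decides whether the returned fix or the last known one is used):

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)   # ORB: one 256-bit descriptor per keypoint
visual_db = []                        # step S2: list of (descriptors, positioning_info)

def record_frame(image, positioning_info):
    """Step S2: keep only the descriptors and the fix at capture time, never the pixels."""
    _, desc = orb.detectAndCompute(image, None)
    if desc is not None:
        visual_db.append((desc, positioning_info))

def visual_fix(image, classifier):
    """Step S3: neighborhood search over the database with the trained RBF network."""
    _, desc = orb.detectAndCompute(image, None)
    entry_idx, similarity = classifier.nearest_entry(desc)   # assumed classifier API
    return visual_db[entry_idx][1], similarity               # recorded fix + similarity
```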

In summary, owing to the adoption of the above technical solution, the beneficial effects of the present invention are:

(1) The visual database stores only the feature point descriptor information of the images, reducing the memory footprint;

(2) Images taken by the UAV can be matched by accessing the visual database directly, without a reference image library;

(3) Neighborhood search over feature descriptors is performed with the trained RBF network to obtain the best matching position and estimate the positioning information.

Brief Description of the Drawings

Figure 1 is the overall framework of the visual positioning system;

Figure 2 is the flowchart of ORB feature point detection;

Figure 3 is a schematic diagram of coarse feature point extraction during ORB feature point extraction;

Figure 4 is a schematic diagram of the RBF network structure;

Figure 5 is a schematic diagram of the RBF network matching and positioning process.

Detailed Description of Embodiments

To make the purpose, technical solution and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the embodiments and the accompanying drawings.

The vision-based UAV navigation and positioning method of the present invention collects ground-image feature descriptors of the area where the UAV is currently located; an RBF network classifier trained on the feature descriptor data set performs a neighborhood search on the feature point descriptors of the collected image to find the image's optimal matching position, from which relatively accurate positioning information for the UAV's location is estimated.

Referring to Figure 1, the vision-based UAV navigation and positioning method of the present invention consists of two main parts: data collection on the outbound flight, and positioning estimation on the return flight.

In the data collection part, images are collected by the camera and image frames are extracted from it; feature points are detected in each image and descriptors are extracted, the image data are discarded and only the feature point information of the image is retained; this descriptor processing is repeated for each extracted image frame, and the descriptor information is stored in the visual database together with the positioning information.

In positioning estimation, when the GNSS signal is lost, images captured by the camera are extracted and descriptors are extracted in the same way; the RBF network classifier is trained with the information in the visual database, a neighborhood search is then performed on the generated descriptors with the RBF network classifier to estimate the optimal matching position, and finally the positioning information of the current image is estimated from the positioning information of the optimal matching position stored in the visual database.

The specific implementation steps of the present invention are as follows:

(1) Data collection.

Images are collected from the onboard camera, ORB (Oriented FAST and Rotated BRIEF) feature point detection is performed on each image frame, and the descriptor of every feature point is extracted; database entries are then created and stored, where each entry consists of the previously extracted set of feature point descriptors and the corresponding positioning information. The positioning information consists of the attitude information and position information provided by the UAV's onboard application, and the format or nature of this information depends strongly on the specific application used.
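
As a concrete illustration of what one such database entry might look like, the following sketch defines an entry holding only descriptors plus pose and position data (the field names and NumPy layout are assumptions for illustration):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DatabaseEntry:
    """One visual-database entry: descriptors only, no image pixels are kept."""
    descriptors: np.ndarray   # shape (num_keypoints, 32): 256-bit ORB descriptors as bytes
    attitude: np.ndarray      # assumed (roll, pitch, yaw) from the onboard application
    position: np.ndarray      # assumed (latitude, longitude, altitude) at capture time

def make_entry(descriptors, attitude, position):
    # Store uint8 descriptors directly; 500 keypoints cost only about 16 KB per image.
    return DatabaseEntry(np.asarray(descriptors, dtype=np.uint8),
                         np.asarray(attitude, dtype=np.float64),
                         np.asarray(position, dtype=np.float64))
```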

(2) Feature extraction.

ORB feature point detection uses the FAST (Features from Accelerated Segment Test) algorithm to detect feature points on every level of the scale pyramid. Based on the image gray values around a candidate feature point, the pixel values on a circle around the candidate are examined; if enough pixels in this neighborhood differ sufficiently from the candidate's gray value (i.e., the gray-value difference exceeds a preset threshold), the candidate is considered a feature point.

The specific steps are as follows:

1) ORB feature point detection.

Referring to Figure 2, ORB feature point detection first applies FAST corner detection to the input image; the Harris corner measure is then computed for the selected FAST feature points, and the N feature points with the largest responses are kept after sorting; next, the gray-scale centroid method is used to compute the orientation of each ORB feature point, and BRIEF is used as the feature point description method; finally, a 256-bit binary descriptor (256 point-pair comparisons) is generated for each feature point.

That is, the ORB feature detector first detects FAST feature points, then uses the Harris corner measure to compute a response value for each selected FAST feature point and keeps the top N feature points with the largest responses.

The corner response function f_CRF of a FAST feature point is defined as:

f_CRF(x) = 1 if |I(x) − I(p)| > ε_d, and f_CRF(x) = 0 otherwise

where ε_d is a threshold, I(x) is the pixel value of a pixel x in the neighborhood of the point under test, and I(p) is the pixel value of the current point under test p.

The sum of the corner response function values over the point under test and all of its corresponding surrounding points is denoted N; when N is greater than a set threshold, the point under test is a FAST feature point, and the threshold is usually taken as 12.
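
A minimal sketch of this FAST-style test on the 16-pixel circle of radius 3 is given below (pure NumPy; the circle offsets and the count threshold of 12 follow the description, everything else is an illustrative assumption):

```python
import numpy as np

# Offsets of the 16 pixels on a radius-3 circle around the candidate (positions 1..16).
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(gray, x, y, eps_d=20, count_threshold=12):
    """Return True if point (x, y) passes the FAST corner response test."""
    p = int(gray[y, x])
    # f_CRF = 1 for every circle pixel whose intensity differs from p by more than eps_d.
    responses = [1 if abs(int(gray[y + dy, x + dx]) - p) > eps_d else 0
                 for dx, dy in CIRCLE]
    return sum(responses) > count_threshold   # N > 12 -> FAST feature point
```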

The specific processing flow of ORB feature point extraction is:

Step 1: coarse feature point extraction. Select a point in the image, denoted p, and take a circle of radius 3 pixels centered on p. Examine the pixel values at circle positions 1, 5, 9 and 13 (as shown in Figure 3, the circle contains 16 positions in total; during coarse extraction only the four points above, below, left and right of the center p are checked). If at least 3 of these 4 points have pixel values greater than or less than the pixel value of p by more than the threshold, p is considered a feature point.

Step 2: remove locally dense points. Non-maximum suppression is applied: the feature point at each local maximum is kept and the remaining feature points are deleted.

Step 3: scale invariance of feature points. An image pyramid is built to achieve multi-scale invariance of the feature points. A scale factor scale (e.g. 1.2) and a number of pyramid levels nlevels (e.g. 8) are set. The original image is downsampled by the scale factor into nlevels layers; the relationship between each downsampled layer I' and the original image I is:

I' = I / scale^k,  k = 1, 2, …, 8

Step 4: rotation invariance of feature points. The gray-scale centroid method is used to compute the orientation of each feature point: the moment of the image patch within radius r of the feature point defines a centroid, and the vector from the feature point to the centroid gives the feature point's orientation.

The angle θ of the vector from the feature point to the centroid C is the dominant orientation of the feature point:

θ = arctan2(C_y, C_x)

where (C_x, C_y) are the coordinates of the centroid C.
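
A small sketch of this intensity-centroid orientation computation on a square patch is shown below (NumPy; the patch radius and the use of a square window instead of a circular one are simplifying assumptions):

```python
import numpy as np

def keypoint_orientation(gray, x, y, r=15):
    """Dominant orientation of a keypoint from the intensity centroid of its patch."""
    patch = gray[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    m00 = patch.sum()
    m10 = (xs * patch).sum()          # first-order moment along x
    m01 = (ys * patch).sum()          # first-order moment along y
    cx, cy = m10 / m00, m01 / m00     # centroid C relative to the keypoint
    return np.arctan2(cy, cx)         # theta = arctan2(C_y, C_x)
```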

2) Feature point descriptor generation.

The ORB feature uses the BRIEF descriptor as its feature point description method. A BRIEF descriptor consists of a binary string of length n; in this embodiment, n is taken as 256. One binary bit τ(p; x, y) of the descriptor is computed as:

τ(p; x, y) = 1 if p(x) < p(y), and τ(p; x, y) = 0 otherwise

where p(x) and p(y) are the gray values of the two points of a point pair. The feature descriptor f_n(p) formed from n point pairs can then be expressed as:

f_n(p) = Σ_{1≤i≤n} 2^(i−1) τ(p; x_i, y_i)

To make the descriptor rotation invariant, an affine (rotation) transformation matrix R_θ is applied to the generating matrix S, giving the rotation-corrected version S_θ:

S_θ = R_θ S

where the generating matrix S collects the coordinates of the n test point pairs (x_i, y_i), i = 1, 2, …, n,

S = ( x_1 … x_n ; y_1 … y_n ),

R_θ = ( cos θ  −sin θ ; sin θ  cos θ ) is the rotation matrix for angle θ, and θ is the dominant orientation of the feature point.

The resulting feature point descriptor is g_n(p, θ) = f_n(p) | (x_i, y_i) ∈ S_θ, which forms the 256-bit descriptor of the feature point.
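
In practice the detection-plus-description pipeline of this section maps onto a few OpenCV calls, as in the sketch below (the parameter values are illustrative assumptions; OpenCV's ORB performs the pyramid, Harris ranking, orientation and rotated-BRIEF steps internally):

```python
import cv2

# ORB detector mirroring the parameters discussed above: 8 pyramid levels,
# scale factor 1.2, Harris ranking of FAST corners, 256-bit (32-byte) descriptors.
orb = cv2.ORB_create(nfeatures=500,
                     scaleFactor=1.2,
                     nlevels=8,
                     scoreType=cv2.ORB_HARRIS_SCORE,
                     fastThreshold=20)

image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # assumed input frame
keypoints, descriptors = orb.detectAndCompute(image, None)
# keypoints[i].angle holds the intensity-centroid orientation in degrees;
# descriptors is a (num_keypoints, 32) uint8 array of 256-bit rotated-BRIEF strings.
```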

(3) Matching and positioning based on the RBF neural network.

When the UAV's GNSS/INS signal becomes unavailable, the system instructs the UAV to return home. Using the UAV motion information stored in the feature database, the image descriptors extracted on the return flight are matched against the descriptors previously inserted into the database to obtain positioning information. The matching and positioning system based on the RBF neural network consists of network pattern training and pattern positioning. The specific procedure is:

1) Set the training mode.

The training mode is set, the training samples are learned, and classification decisions are provided.

The RBF network contains only one hidden layer; it uses the distance between the input value and a center vector as the argument of its activation function, and uses a radial basis function as that activation function. This local approximation simplifies the computation, because for a given input X only some of the neurons respond while the others are approximately 0, and only the weights w of the responding neurons need to be adjusted.

Referring to Figure 4, the RBF neural network consists of an input layer, a hidden layer and an output layer, where:

in the input layer, the transformation from the input space to the hidden-layer space is nonlinear;

in the hidden layer, the neurons use a radial basis function as the activation function, and the transformation from the hidden layer to the output-layer space is linear;

the output layer, whose neurons use linear functions, forms a linear combination of the hidden-layer neuron outputs.

The RBF network uses radial basis functions as the "basis" of the hidden units to form the hidden-layer space, mapping the input vector directly into that space. Once the center points are determined, the mapping is determined. The mapping from input to output of the network is nonlinear, while the network output is linear in the adjustable parameters, so the connection weights can be solved directly from a system of linear equations, which greatly speeds up learning and avoids local minima.

In this embodiment, the weights from the input layer to the hidden layer of the RBF neural network are fixed at 1; the transfer function of the hidden-layer units is a radial basis function, and each hidden-layer neuron takes as the input of its activation function the vector distance between its weight vector w_i and the input vector, multiplied by a bias b_i. Taking the radial basis function to be a Gaussian, the output of the neuron is:

R_i(X) = exp( −‖X − x_i‖² / (2σ²) )

where X denotes the input data, i.e. the input vector, x_i is the center of the basis function, and σ is the width parameter of the function, which determines how each radial-basis neuron responds to its input vector.
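
A minimal sketch of this Gaussian hidden layer is given below (NumPy; the array shapes and the single shared width σ follow the description, the function name is an assumption):

```python
import numpy as np

def rbf_hidden_layer(X, centers, sigma):
    """Gaussian RBF activations R_i(X) = exp(-||X - x_i||^2 / (2 sigma^2)).

    X:       (n_samples, dim) input descriptor vectors
    centers: (n_centers, dim) basis-function centers x_i
    returns: (n_samples, n_centers) hidden-layer outputs
    """
    # Squared Euclidean distances between every input and every center.
    sq_dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))
```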

2) RBF neural network learning.

The RBF network has three sets of parameters to learn: the basis function centers x_i, the variance (width) σ, and the weights w between the hidden layer and the output layer.

i. Determine the basis function centers x_i.

The feature database is generated from the feature descriptor vectors of the images collected by the camera, and the kernel function centers x_i are determined with the k-means clustering algorithm. I distinct samples are chosen at random from the training samples as the initial centers x_i(0); a training sample X_k is then input at random and the center nearest to it is found, i.e. the index i(X_k) satisfying:

i(X_k) = arg min_i ‖X_k − x_i(n)‖

where i = 1, 2, …, I, x_i(n) denotes the i-th center of the radial basis function at the n-th iteration, and the iteration counter is initialized to n = 0. The basis function centers are adjusted according to:

x_i(n+1) = x_i(n) + γ[X_k − x_i(n)],  if i = i(X_k);  x_i(n+1) = x_i(n) otherwise

where γ is the learning step size, 0 < γ < 1.

That is, the basis function centers are updated iteratively with the rule x_i(n+1) = x_i(n) + γ[X_k(n) − x_i(n)]. When the change in the result between the two most recent iterations does not exceed a preset threshold, updating stops (learning ends); at that point x_i(n+1) is considered approximately equal to x_i(n), and the last updated basis function centers are taken as the final training output x_i (i = 1, 2, …, I). Otherwise, n = n + 1 and the process repeats.
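
The following sketch shows this center-selection procedure (NumPy; the stopping threshold and maximum iteration count are illustrative assumptions):

```python
import numpy as np

def select_centers(samples, num_centers, gamma=0.1, tol=1e-4, max_iter=1000):
    """Competitive (k-means style) selection of RBF centers from descriptor samples."""
    rng = np.random.default_rng(0)
    # Initial centers x_i(0): I distinct samples chosen at random.
    centers = samples[rng.choice(len(samples), num_centers, replace=False)].astype(float)
    for _ in range(max_iter):
        previous = centers.copy()
        for X_k in samples[rng.permutation(len(samples))]:
            nearest = np.argmin(np.linalg.norm(centers - X_k, axis=1))  # i(X_k)
            centers[nearest] += gamma * (X_k - centers[nearest])        # update rule
        if np.abs(centers - previous).max() <= tol:   # change below threshold: stop
            break
    return centers
```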

ii. Determine the basis function variance σ.

Once the centers of the RBF neural network have been determined, the width is given by:

σ = d_max / √(2M)

where M is the number of hidden-layer units and d_max is the maximum distance between the selected centers.

iii. Determine the weights w from the hidden layer to the output layer.

The connection weights from the hidden-layer units to the output-layer units are computed by the least-squares method; the hidden-layer response of the q-th input sample at the i-th center is

g_qi = exp( −(M / d_max²) ‖X_q − x_i‖² ),  q = 1, 2, …, N,  i = 1, 2, …, I

where g_qi denotes the weight between the vector of the q-th input sample and the i-th basis function center, X_q is the vector of the q-th input sample, and N is the number of samples; the output-layer weights w are then obtained from these responses by least squares.
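
Putting the three learning steps together, the sketch below computes the width σ and solves the output weights by least squares (NumPy; the one-hot target encoding is an assumption about how the classifier's desired outputs are represented):

```python
import numpy as np

def train_rbf(samples, targets, centers):
    """Width + least-squares output weights for an RBF classifier.

    samples: (N, dim) training descriptors; targets: (N, n_classes) desired outputs
    (assumed one-hot); centers: (M, dim) basis-function centers from select_centers().
    """
    M = len(centers)
    # sigma = d_max / sqrt(2M), with d_max the largest distance between centers.
    d_max = max(np.linalg.norm(a - b) for a in centers for b in centers)
    sigma = d_max / np.sqrt(2.0 * M)
    # Hidden-layer response matrix G with entries g_qi (equivalent to the Gaussian form).
    sq_dist = ((samples[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    G = np.exp(-(M / d_max ** 2) * sq_dist)
    # Hidden-to-output weights solved directly by linear least squares.
    W, *_ = np.linalg.lstsq(G, targets, rcond=None)
    return centers, sigma, W
```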

3) Matching and positioning.

Given the temporal ordering of the images captured by the UAV, on the return flight an image captured by the camera can be extracted at a fixed frame interval and its features extracted, for example once every ten frames. A feature descriptor vector is generated and the trained RBF network classifier performs a neighborhood search to find the optimal matching position: based on the trained RBF network classifier, the optimal match between the currently extracted feature descriptors and the feature descriptors of the images taken during the outbound flight (from the departure point to the destination) stored in the database is obtained, and it is checked whether the similarity between the currently extracted descriptor and the optimal matching result does not exceed the preset similarity threshold; if so, the position of the current optimal matching result is taken as the UAV's current position estimate on the return flight, yielding the positioning information.

Further, error compensation of the navigation system may be applied to the obtained position estimate to yield the positioning information. If the similarity with the optimal matching position is lower than the predefined similarity threshold, the position can be marked as unknown; ground images of the UAV's current area continue to be collected, and navigation proceeds based on the UAV's speed and attitude information and the most recently obtained positioning result.

The error-compensation formula of the navigation system is:

P̃ = P̂ + ē_n

where P̂ denotes the position estimation result (the position of the current optimal matching result) and ē_n denotes the average standard error of the most recent n final position estimation results P̃_j (alternatively, of the position estimation results P̂_j), j = 1, 2, …, n, with n a preset value. That is, the average standard error of the most recently obtained positioning results is used as the current compensation amount, and the error-compensated position estimate is taken as the current final position estimation result P̃.
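
A compact sketch of this return-flight matching and error-compensation step is shown below (the classifier interface, the exact form of the average-standard-error term and the history length n are illustrative assumptions):

```python
import numpy as np
from collections import deque

recent_fixes = deque(maxlen=10)   # last n final position estimates (n assumed = 10)

def compensated_fix(descriptors, classifier, db_positions, sim_threshold=0.8):
    """Return-flight localization with the navigation-error compensation of this section."""
    entry_idx, similarity = classifier.nearest_entry(descriptors)   # assumed API
    if similarity < sim_threshold:
        return None                              # position unknown: fall back to dead reckoning
    p_est = np.asarray(db_positions[entry_idx])  # P^ : position stored with the best match
    # Compensation term: average standard error of the recent final estimates.
    e_avg = np.std(np.stack(recent_fixes), axis=0).mean() if len(recent_fixes) > 1 else 0.0
    p_final = p_est + e_avg                      # P~ = P^ + e_avg
    recent_fixes.append(p_final)
    return p_final
```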

Referring to Figure 5, the RBF network matching and positioning process of the present invention is as follows:

First, the feature descriptor information stored in the visual database is randomly sampled, and the RBF network is trained on the sampled binary feature descriptor data: the training mode is set and k-means clustering is used to determine the RBF centers; the RBF width d is determined from the obtained centers; from the RBF centers and width, the least-squares method determines the connection weights from the hidden layer to the output layer, which finally fixes the RBF network structure.

Based on the trained RBF network structure, neighborhood matching is performed on the feature point descriptors generated from the images collected on the return flight to determine the optimal matching position; finally, the positioning of the current image on the return flight is estimated from the positioning information stored for that position.

In summary, the data collection process of the present invention begins with acquiring images from the camera; ORB feature point extraction is applied to detect feature points in each image and a descriptor is extracted for each keypoint. Database entries are created and stored, each consisting of the previously extracted descriptors and the positioning information, where the positioning information includes the UAV's attitude information and position information. The RBF network has three main sets of parameters to solve: the centers of the basis functions, the variance, and the weights from the hidden layer to the output layer. The present invention adopts the self-organizing center-selection learning method: the first step uses an unsupervised learning process to solve the centers and variance of the hidden-layer basis functions; the second step uses a supervised learning process in which the weights between the hidden layer and the output layer are obtained directly by the least-squares method. The visual matching and positioning process likewise begins with capturing images while the UAV is returning home; to reduce the similarity between adjacent images, one image is extracted every fixed number of frames, keypoints are then detected, and descriptors are extracted for each keypoint with the same feature point extraction method as in the data collection process. Using the RBF network, the nearest distance between the current image and the descriptors previously inserted into the database is obtained and the optimal matching position is found; the positioning information of the current image is then estimated from the optimal matching position.

The above are only specific embodiments of the present invention. Unless otherwise stated, any feature disclosed in this specification may be replaced by other equivalent features or alternative features serving a similar purpose; all the disclosed features, or all the steps of the methods or processes, may be combined in any manner, except for mutually exclusive features and/or steps.

Claims (5)

1. An unmanned aerial vehicle visual navigation positioning method based on an RBF network, characterized by comprising the following steps:
step S1: setting an RBF neural network for matching the feature point descriptors of the image, and training the neural network;
the RBF neural network comprises an input layer, a hidden layer and an output layer, wherein a transfer function of the hidden layer adopts a radial basis function;
the training samples are: in the navigation process of the unmanned aerial vehicle, images are acquired through an onboard camera; the feature vectors of the training samples are: feature point descriptors of the image obtained through ORB feature point detection processing;
step S2: constructing a visual database of the unmanned aerial vehicle during navigation:
in the navigation process of the unmanned aerial vehicle, images are collected through an airborne camera, ORB feature point detection processing is carried out on the collected images, descriptors of all feature points are extracted, and feature point descriptors of the current images are obtained; storing the feature point descriptors of the image and positioning information during image acquisition into a visual database;
and step S3: unmanned aerial vehicle vision navigation positioning based on visual database:
based on a fixed interval period, extracting an image acquired by an airborne camera to serve as an image to be matched;
carrying out ORB feature point detection processing on the image to be matched, and extracting a descriptor of each feature point to obtain a feature point descriptor of the image to be matched;
inputting the feature point descriptor of the image to be matched into a trained RBF neural network, and performing neighborhood search to obtain the optimal matching feature point descriptor of the image to be matched in a visual database;
detecting whether the similarity between the optimal matching feature point descriptor and the feature point descriptor of the image to be matched is smaller than a preset similarity threshold; if so, continuing navigating based on the recently obtained visual navigation positioning result; if not, obtaining the current position estimation result P̂ of the unmanned aerial vehicle based on the positioning information recorded in the database for the optimal matching feature point descriptor, and carrying out error compensation of the navigation system on the obtained position estimation result P̂ to obtain the current visual navigation positioning result P̃ of the unmanned aerial vehicle, wherein P̃ = P̂ + ē_n, and ē_n is the average standard error of the visual navigation positioning results P̃_j of the most recent previous n times or, alternatively, of the position estimation results P̂_j of the most recent previous n times, j = 1, 2, …, n, where n is a preset value.
2. The method of claim 1, wherein the weights between the input layer and the hidden layer of the RBF neural network are fixed to 1.
3. The method of claim 2, wherein in training the RBF neural network, the basis function centers of the radial basis functions are determined using a k-means clustering algorithm;
the variance σ of the radial basis function is set as:
σ = d_max / √(2M)
wherein M is the number of units in the hidden layer and d_max is the maximum distance between the centers of the basis functions;
the weights W between the hidden layer and the output layer are obtained by least squares from the hidden-layer responses
g_qi = exp( −(M / d_max²) ‖x_q − x_i‖² )
wherein x_q is the feature vector of the q-th input sample and x_i is the i-th basis function center.
4. The method of claim 1, wherein the positioning information comprises pose information and position information of the drone.
5. The method according to claim 1, wherein in step S3, the images collected by the onboard camera are extracted once every ten frames.
CN201910924244.1A 2019-09-23 2019-09-23 Unmanned aerial vehicle visual navigation positioning method based on RBF network Active CN110631588B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910924244.1A CN110631588B (en) 2019-09-23 2019-09-23 Unmanned aerial vehicle visual navigation positioning method based on RBF network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910924244.1A CN110631588B (en) 2019-09-23 2019-09-23 Unmanned aerial vehicle visual navigation positioning method based on RBF network

Publications (2)

Publication Number Publication Date
CN110631588A CN110631588A (en) 2019-12-31
CN110631588B true CN110631588B (en) 2022-11-18

Family

ID=68972992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910924244.1A Active CN110631588B (en) 2019-09-23 2019-09-23 Unmanned aerial vehicle visual navigation positioning method based on RBF network

Country Status (1)

Country Link
CN (1) CN110631588B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111221340B (en) * 2020-02-10 2023-04-07 电子科技大学 Design method of migratable visual navigation based on coarse-grained features
CN111833395B (en) * 2020-06-04 2022-11-29 西安电子科技大学 A single target positioning method and device for direction finding system based on neural network model
CN111860375A (en) * 2020-07-23 2020-10-30 南京科沃信息技术有限公司 Plant protection unmanned aerial vehicle ground monitoring system and monitoring method thereof
CN114202583A (en) * 2021-12-10 2022-03-18 中国科学院空间应用工程与技术中心 A visual positioning method and system for unmanned aerial vehicles
CN113936064B (en) * 2021-12-17 2022-05-20 荣耀终端有限公司 Positioning method and device
CN115729269B (en) * 2022-12-27 2024-02-20 深圳市逗映科技有限公司 Unmanned aerial vehicle intelligent recognition system based on machine vision

Family Cites Families (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5043737A (en) * 1990-06-05 1991-08-27 Hughes Aircraft Company Precision satellite tracking system
JP3256332B2 (en) * 1993-05-24 2002-02-12 郁男 荒井 Distance measuring method and distance measuring device
US6249606B1 (en) * 1998-02-19 2001-06-19 Mindmaker, Inc. Method and system for gesture category recognition and training using a feature vector
TW475991B (en) * 1998-12-28 2002-02-11 Nippon Kouatsu Electric Co Ltd Fault point location system
TW426882B (en) * 1999-10-29 2001-03-21 Taiwan Semiconductor Mfg Overlap statistic process control with efficiency by using positive and negative feedback overlap correction system
JP4020143B2 (en) * 2006-02-20 2007-12-12 トヨタ自動車株式会社 Positioning system, positioning method and car navigation system
DE102006055563B3 (en) * 2006-11-24 2008-01-03 Ford Global Technologies, LLC, Dearborn Correcting desired value deviations of fuel injected into internal combustion engine involves computing deviation value using square error method and correcting deviation based on computed deviation value
CN101118280B (en) * 2007-08-31 2011-06-01 西安电子科技大学 Node self-localization method in distributed wireless sensor network
CN101476891A (en) * 2008-01-02 2009-07-08 丘玓 Accurate navigation system and method for movable object
US8165728B2 (en) * 2008-08-19 2012-04-24 The United States Of America As Represented By The Secretary Of The Navy Method and system for providing a GPS-based position
CN101655561A (en) * 2009-09-14 2010-02-24 南京莱斯信息技术股份有限公司 Federated Kalman filtering-based method for fusing multilateration data and radar data
CN101860622B (en) * 2010-06-11 2014-07-16 中兴通讯股份有限公司 Device and method for unlocking mobile phone
CN102387526B (en) * 2010-08-30 2016-02-10 中兴通讯股份有限公司 A kind of method and device improving wireless cellular system positioning precision
CN103561463B (en) * 2013-10-24 2016-06-29 电子科技大学 A kind of RBF neural indoor orientation method based on sample clustering
CN103983263A (en) * 2014-05-30 2014-08-13 东南大学 Inertia/visual integrated navigation method adopting iterated extended Kalman filter and neural network
CN104330084B (en) * 2014-11-13 2017-06-16 东南大学 A kind of submarine navigation device neural network aiding Combinated navigation method
US9652688B2 (en) * 2014-11-26 2017-05-16 Captricity, Inc. Analyzing content of digital images
CN105891863B (en) * 2016-06-16 2018-03-20 东南大学 It is a kind of based on highly constrained EKF localization method
CN106203261A (en) * 2016-06-24 2016-12-07 大连理工大学 Unmanned vehicle field water based on SVM and SURF detection and tracking
US10514711B2 (en) * 2016-10-09 2019-12-24 Airspace Systems, Inc. Flight control using computer vision
CN106709909B (en) * 2016-12-13 2019-06-25 重庆理工大学 A kind of flexible robot's visual identity and positioning system based on deep learning
CN106780484A (en) * 2017-01-11 2017-05-31 山东大学 Robot interframe position and orientation estimation method based on convolutional neural networks Feature Descriptor
CN107030699B (en) * 2017-05-18 2020-03-10 广州视源电子科技股份有限公司 Pose error correction method and device, robot and storage medium
CN108426576B (en) * 2017-09-15 2021-05-28 辽宁科技大学 Method and system for aircraft path planning based on visual navigation of marker points and SINS
CN107808407B (en) * 2017-10-16 2020-12-18 亿航智能设备(广州)有限公司 Binocular camera-based unmanned aerial vehicle vision SLAM method, unmanned aerial vehicle and storage medium
CN108051836B (en) * 2017-11-02 2022-06-10 中兴通讯股份有限公司 Positioning method, device, server and system
CN107909600B (en) * 2017-11-04 2021-05-11 南京奇蛙智能科技有限公司 Unmanned aerial vehicle real-time moving target classification and detection method based on vision
CN107862705B (en) * 2017-11-21 2021-03-30 重庆邮电大学 A small target detection method for unmanned aerial vehicles based on motion features and deep learning features
CN108153334B (en) * 2017-12-01 2020-09-25 南京航空航天大学 Visual autonomous return and landing method and system for unmanned helicopter without cooperative target
CN108168539B (en) * 2017-12-21 2021-07-27 儒安物联科技集团有限公司 Blind person navigation method, device and system based on computer vision
CN109959898B (en) * 2017-12-26 2023-04-07 中国船舶重工集团公司七五〇试验场 Self-calibration method for base type underwater sound passive positioning array
CN108364314B (en) * 2018-01-12 2021-01-29 香港科技大学深圳研究院 Positioning method, system and medium
CN108820233B (en) * 2018-07-05 2022-05-06 西京学院 Visual landing guiding method for fixed-wing unmanned aerial vehicle
CN109141194A (en) * 2018-07-27 2019-01-04 成都飞机工业(集团)有限责任公司 A kind of rotation pivot angle head positioning accuracy measures compensation method indirectly
CN109238288A (en) * 2018-09-10 2019-01-18 电子科技大学 Autonomous navigation method in a kind of unmanned plane room
CN109739254B (en) * 2018-11-20 2021-11-09 国网浙江省电力有限公司信息通信分公司 Unmanned aerial vehicle adopting visual image positioning in power inspection and positioning method thereof
CN109670513A (en) * 2018-11-27 2019-04-23 西安交通大学 A kind of piston attitude detecting method based on bag of words and support vector machines
CN109445449B (en) * 2018-11-29 2019-10-22 浙江大学 A high subsonic UAV ultra-low altitude flight control system and method
CN109615645A (en) * 2018-12-07 2019-04-12 国网四川省电力公司电力科学研究院 Vision-based feature point extraction method
CN109658445A (en) * 2018-12-14 2019-04-19 北京旷视科技有限公司 Network training method, increment build drawing method, localization method, device and equipment
CN109859225A (en) * 2018-12-24 2019-06-07 中国电子科技集团公司第二十研究所 A kind of unmanned plane scene matching aided navigation localization method based on improvement ORB Feature Points Matching
CN109765930B (en) * 2019-01-29 2021-11-30 理光软件研究所(北京)有限公司 Unmanned aerial vehicle vision navigation
CN109991633A (en) * 2019-03-05 2019-07-09 上海卫星工程研究所 A kind of low orbit satellite orbit determination in real time method
CN110058602A (en) * 2019-03-27 2019-07-26 天津大学 Autonomous positioning method of multi-rotor UAV based on depth vision
CN110032965B (en) * 2019-04-10 2023-06-27 南京理工大学 Visual Positioning Method Based on Remote Sensing Image

Also Published As

Publication number Publication date
CN110631588A (en) 2019-12-31

Similar Documents

Publication Publication Date Title
CN110631588B (en) Unmanned aerial vehicle visual navigation positioning method based on RBF network
Leira et al. Object detection, recognition, and tracking from UAVs using a thermal camera
Costea et al. Aerial image geolocalization from recognition and matching of roads and intersections
JP6411917B2 (en) Self-position estimation apparatus and moving body
CN103609178B (en) The identification of place auxiliary
Mantelli et al. A novel measurement model based on abBRIEF for global localization of a UAV over satellite images
CN113485441A (en) Distribution network inspection method combining unmanned aerial vehicle high-precision positioning and visual tracking technology
US20220057213A1 (en) Vision-based navigation system
JP2019527832A (en) System and method for accurate localization and mapping
CN106529538A (en) Method and device for positioning aircraft
CN109492580B (en) Multi-size aerial image positioning method based on neighborhood significance reference of full convolution network
CN111666855B (en) Method, system and electronic equipment for extracting three-dimensional parameters of animals based on drone
Tao et al. Scene context-driven vehicle detection in high-resolution aerial images
CN114325634B (en) A highly robust method for extracting traversable areas in wild environments based on LiDAR
CN114238675B (en) A method for UAV ground target positioning based on heterogeneous image matching
Vishal et al. Accurate localization by fusing images and GPS signals
WO2018207426A1 (en) Information processing device, information processing method, and program
Dumble et al. Airborne vision-aided navigation using road intersection features
CN113012215A (en) Method, system and equipment for space positioning
EP4128030A1 (en) Detection of environmental changes to delivery zone
CN110636248A (en) Target tracking method and device
CN117876723A (en) A global retrieval and positioning method for UAV aerial images in a denied environment
Venable et al. Large scale image aided navigation
CN114046790B (en) A method for detecting double loops in factor graphs
Samano et al. Global aerial localisation using image and map embeddings

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant