
CN115964446A - A method for interactive processing of radar data based on mobile terminal - Google Patents

A method for interactive processing of radar data based on mobile terminal

Info

Publication number
CN115964446A
Authority
CN
China
Prior art keywords
point cloud
data
vehicle
obstacle
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211628850.7A
Other languages
Chinese (zh)
Other versions
CN115964446B (en)
Inventor
马楠
姚永强
张欢
徐成
郭聪
吴祉璇
许根宝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN202211628850.7A priority Critical patent/CN115964446B/en
Publication of CN115964446A publication Critical patent/CN115964446A/en
Application granted granted Critical
Publication of CN115964446B publication Critical patent/CN115964446B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Traffic Control Systems (AREA)

Abstract

A radar data interactive processing method based on a mobile terminal, belonging to the field of autonomous driving. According to the invention, the raw point cloud data is processed quickly and accurately at the computation/generation end, and the radar data is sent from the computation/generation end to the mobile terminal for display. This solves the problems that display equipment occupies space and is difficult to move, offers high flexibility, and allows more testers to participate in debugging at the same time. In addition, the radar data is further processed on the mobile terminal and obstacle information is represented by simple models, so that more passengers can understand the radar data and can fully trust every decision made by the unmanned vehicle while riding in it. The invention solves the problems that the computation/generation end is difficult to move and that ordinary users do not understand radar data.

Description

A Method for Interactive Processing of Radar Data Based on a Mobile Terminal

Technical Field

The invention belongs to the field of autonomous driving.

Background Art

Autonomous vehicles make correct decisions and exercise control based on perception data, and perception depends on sensors such as radar and cameras. A lidar emits a detection signal (laser beam) toward a target and compares the received signal reflected from the target (the target echo) with the transmitted signal; after appropriate processing, information about the target can be obtained, such as distance, azimuth, height, speed, attitude, and even shape, so that the target can be detected, tracked, and identified. However, raw radar data is complex point cloud data. For people who work in the autonomous driving field, understanding this data is not especially difficult, but when an autonomous vehicle is driving on the road in front of many passengers, a simple and easy-to-understand way of presenting the radar data becomes very important. A mobile-terminal-based radar data interactive processing method is therefore proposed: the computation/generation end processes the raw radar data and transmits it to the mobile terminal over the UDP protocol, and a corresponding algorithm displays the obstacles; the method has the advantages of being easy to understand and offering flexible, convenient interactive display.

In traditional unmanned vehicles, raw data is usually displayed on the central control screen, so testers must sit inside the vehicle to follow in real time the data the vehicle perceives and the correctness of its decisions. Regarding display interfaces, "Display Screen Panel with Graphical User Interface for Lidar Data" (publication No. CN307549670S) designs a lidar display interface suitable for computers, mobile phones, and tablets, but it can only replay lidar data; in unmanned driving the vehicle moves fast and decisions must be made from data perceived in real time, so that method cannot meet the need for real-time radar data processing and display. Regarding radar data processing, "Ground Point Cloud Filtering Method, System, Device and Storage Medium for Lidar" (publication No. CN115166700A) structurally encodes the point cloud and uses conditional queries over neighboring points to retain environment points while filtering out ground points and noise; filtering ground points with neighbor information can guarantee accuracy, but it cannot guarantee real-time performance.

To address the above shortcomings, the present invention proposes a mobile-terminal-based radar data interactive processing method that processes the raw point cloud data quickly and accurately at the computation/generation end and sends the radar data, after processing at the computation/generation end, to the mobile terminal for display. This solves the problems that display equipment occupies space and is difficult to move, offers high flexibility, and allows more testers to participate in debugging at the same time. In addition, the radar data is further processed on the mobile terminal and obstacle information is represented by simple models, so that more passengers can understand the radar data and can fully trust every decision the unmanned vehicle makes while riding in it. The purpose of the present invention is to provide a mobile-terminal-based radar data interactive processing method that solves the problems that the computation/generation end is difficult to move and that ordinary users do not understand radar data.

Summary of the Invention

The whole system consists of a data acquisition unit, a processing unit, and a display unit. The data acquisition unit and the processing unit run on the computation/generation end, and the display unit runs on the mobile terminal. The data acquisition unit collects two kinds of data: a set of unordered raw point cloud data obtained from the vehicle-mounted 3D lidar, and vehicle information obtained over the CAN bus. The processing unit clusters and segments the point cloud data into multiple independent subsets and, on this basis, performs target classification and recognition. The display unit shows the vehicle information and the recognized obstacles on the mobile terminal.

1. Acquisition of point cloud data

A set of unordered raw point cloud data is obtained from the vehicle-mounted 3D lidar. The point cloud data should at least include points of the road area scanned while the vehicle is driving, and each point should include coordinate information together with a timestamp and the direction of the beam.

2. Processing of point cloud data

The processing unit processes the raw point cloud data. The preprocessing stage filters out ground points and performs target clustering and segmentation; the recognition stage computes the geometric data of each clustered point cloud and then identifies obstacles.

1) Convert the raw point cloud data into a depth image, in which each pixel stores the measured distance from the sensor to the object.
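As an illustration only, the following sketch converts an unordered point cloud into such a depth (range) image by spherical projection; the NumPy implementation, angular resolutions, and vertical field of view are assumptions made for the example and are not specified in the patent.

```python
import numpy as np

def point_cloud_to_range_image(points, h_res=0.2, v_res=0.4,
                               v_fov=(-25.0, 15.0)):
    """Project an (N, 3) array of x, y, z points into a depth image.

    Each pixel stores the measured distance from the sensor to the object;
    the resolutions and vertical field of view are illustrative values only.
    """
    points = np.asarray(points, dtype=np.float64)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)                 # range of each return
    azimuth = np.degrees(np.arctan2(y, x))                # horizontal beam angle
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-6)))

    cols = ((azimuth + 180.0) / h_res).astype(int)
    rows = ((v_fov[1] - elevation) / v_res).astype(int)

    width = int(360.0 / h_res)
    height = int((v_fov[1] - v_fov[0]) / v_res) + 1
    image = np.zeros((height, width), dtype=np.float32)

    valid = (rows >= 0) & (rows < height) & (cols >= 0) & (cols < width)
    image[rows[valid], cols[valid]] = r[valid]            # keep the measured range
    return image
```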

2) Filter out the ground point cloud data

(1) Traverse all points in the point cloud set of the point cloud map and obtain the road surface height of the driving road corresponding to each point;

(2) Based on each point's coordinates in the point cloud map, determine its height relative to the driving road;

(3) If the height coordinate of any point is less than or equal to the corresponding road surface height, remove that point from the point cloud set, reducing the amount of point cloud data.
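A minimal sketch of steps (1) to (3) is given below; the NumPy representation and the hypothetical road_height_at lookup used to obtain the road surface height are assumptions made for illustration.

```python
import numpy as np

def filter_ground_points(points, road_height_at):
    """Remove points at or below the local road surface.

    points          -- (N, 3) array of x, y, z coordinates
    road_height_at  -- hypothetical callback returning the road surface
                       height for a given (x, y) position
    Returns the reduced point set with ground points removed.
    """
    keep = []
    for p in points:
        surface = road_height_at(p[0], p[1])   # step (1): road height lookup
        height = p[2]                          # step (2): height of the point
        if height > surface:                   # step (3): drop points at or below the road
            keep.append(p)
    return np.asarray(keep)
```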

3) Cluster the point cloud data

(1) Point cloud data clustering

Using the KD-Tree nearest-neighbor query algorithm, compute the Euclidean distance from neighboring points to the target point and cluster according to that distance. Repeat the process until all new points have been processed.
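A minimal sketch of this KD-Tree-based Euclidean clustering is shown below; it assumes scipy.spatial.cKDTree for the neighbor queries, and the distance tolerance and minimum cluster size are illustrative parameters not fixed by the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, tolerance=0.5, min_size=10):
    """Group points whose mutual Euclidean distance stays below `tolerance`."""
    points = np.asarray(points)
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            idx = queue.pop()
            # all neighbors of the current point within the distance threshold
            for nb in tree.query_ball_point(points[idx], r=tolerance):
                if nb in unvisited:
                    unvisited.remove(nb)
                    queue.append(nb)
                    cluster.append(nb)
        if len(cluster) >= min_size:
            clusters.append(np.asarray(cluster))
    return clusters
```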

(2) Obstacle recognition

For each cluster, the minimum-area rectangle around the obstacle is obtained based on the minimum convex hull method, yielding a cuboid bounding box. Feature extraction and classification are performed on the box region to identify the target obstacle. The obstacle information includes corner coordinates, type, and id.
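One common way to realize the minimum-area rectangle is to project the cluster onto the ground plane and run a rotating-calipers routine on its convex hull, for example via OpenCV; the sketch below is an assumed implementation along those lines, with the cuboid height taken from the cluster's z extent.

```python
import numpy as np
import cv2

def cluster_bounding_box(cluster_points):
    """Return the 4 ground-plane corners and z extent of one clustered obstacle.

    cluster_points -- (N, 3) array of points belonging to a single cluster.
    cv2.minAreaRect runs rotating calipers on the convex hull of the 2D
    footprint; combining the footprint with z_min / z_max gives a cuboid box.
    """
    xy = cluster_points[:, :2].astype(np.float32)
    rect = cv2.minAreaRect(xy)            # ((cx, cy), (w, h), angle)
    corners_2d = cv2.boxPoints(rect)      # 4 corner points of the footprint
    z_min = float(cluster_points[:, 2].min())
    z_max = float(cluster_points[:, 2].max())
    return corners_2d, (z_min, z_max)
```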

3. Acquisition of vehicle status information

Vehicle information is obtained over the CAN bus. The vehicle status information includes steering wheel angle, battery level, speed, gear position, accelerator opening, and brake opening.

4. Data communication

1) Transmit data over the UDP protocol

(1) Bounding boxes are drawn from the obstacle information recognized in the radar point cloud data. To cope with the huge volume of point cloud data, only the corner coordinates of each obstacle's bounding rectangle and the obstacle category are transmitted.

(2) Transmit byte data over the UDP protocol

Every eight bytes encode one item of obstacle or vehicle information. Transmission uses two ports, and each port transmits 1498 bytes of data per frame. One port sends vehicle status information and the other sends radar data; the two ports send cyclically.
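A minimal sender sketch of this two-port scheme follows; the mobile terminal address, the port numbers, and the little-endian 8-byte encoding of each value are assumptions for illustration, since the patent only fixes the 8-bytes-per-item convention and the 1498-byte frame size.

```python
import socket
import struct

MOBILE_IP = "192.168.1.20"               # assumed address of the mobile terminal
VEHICLE_PORT, RADAR_PORT = 9001, 9002    # assumed port numbers
FRAME_SIZE = 1498                        # frame size stated in the description

def pack_fields(values):
    """Pack each value into 8 bytes (assumed little-endian doubles), pad to one frame."""
    payload = b"".join(struct.pack("<d", v) for v in values)
    return payload.ljust(FRAME_SIZE, b"\x00")[:FRAME_SIZE]

def send_cycle(vehicle_state, obstacle_fields):
    """Send one vehicle-state frame and one radar frame; call repeatedly to send cyclically."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(pack_fields(vehicle_state), (MOBILE_IP, VEHICLE_PORT))
    sock.sendto(pack_fields(obstacle_fields), (MOBILE_IP, RADAR_PORT))
    sock.close()
```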

5. Display and interaction of the data

1) The mobile terminal receives the obstacle data and the vehicle data.

2) Convert the obstacle information between coordinate systems

The radar obstacle data takes the center point of the rear of the vehicle as its origin, in meters, forming the vehicle coordinate system. When displayed on the mobile terminal, an image coordinate system is used: a control drawn in the middle of the interface displays the lane information, with the upper-left corner of the displayed lane as the origin, in pixels. The vehicle coordinate system therefore needs to be converted into the image coordinate system. In the vehicle coordinate system, the X axis points to the right of the origin and the Y axis points upward; in the image coordinate system, the x axis points to the right of the origin and the y axis points downward. The coordinate conversion formulas are

x' = w/2 + X' * (w/m)

y' = h - Y' * (h/n)

where w is the pixel width of the lane displayed on the screen, h is the pixel height of the lane displayed on the screen, m is the maximum lateral distance (in meters) between a displayed obstacle and the ego vehicle, n is the maximum longitudinal distance (in meters) between a displayed obstacle and the ego vehicle, X' is the actual lateral distance (in meters) from the obstacle to the ego vehicle, Y' is the actual longitudinal distance (in meters) from the obstacle to the ego vehicle, x' is the computed horizontal pixel distance of the obstacle from the ego vehicle in the interface, and y' is the computed vertical pixel distance of the obstacle from the ego vehicle in the interface.
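The conversion is a direct transcription of the two formulas; the sketch below shows it as a small function, with the parameter values left to the caller.

```python
def vehicle_to_image(X, Y, w, h, m, n):
    """Convert vehicle coordinates (meters) to image coordinates (pixels).

    X, Y -- lateral / longitudinal distance of the obstacle from the vehicle
    w, h -- pixel width / height of the lane area drawn on the screen
    m, n -- lateral / longitudinal display range used in the denominators
    """
    x = w / 2 + X * (w / m)   # x' = w/2 + X' * (w/m)
    y = h - Y * (h / n)       # y' = h  - Y' * (h/n)
    return x, y
```

With the values used in the Embodiment 3 formulas further below (w = 1800, h = 1600 and denominators 12 and 50), vehicle_to_image(2, 3, 1800, 1600, 12, 50) returns (1200.0, 1504.0), matching the worked example there.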

3) Develop the mobile terminal interface to display the vehicle information (vehicle speed, battery level, steering wheel angle, accelerator opening, brake opening, gear position) and the obstacle information.

Brief Description of the Drawings

Fig. 1 is the workflow diagram of the mobile-terminal-based radar data interactive processing method of the present invention;

Fig. 2 is a schematic diagram of the coordinate conversion in the mobile-terminal-based radar data interactive processing method of the present invention;

Fig. 3 shows the mobile terminal interface of the mobile-terminal-based radar data interactive processing method of the present invention.

Detailed Description of the Embodiments

With reference to the accompanying drawings and through the description of the embodiments, the specific implementation of the present invention, including the shape and structure of each component, the mutual position and connection relationships between the parts, the function and working principle of each part, the manufacturing process, and the method of operation and use, is described in further detail, to help those skilled in the art gain a more complete, accurate, and in-depth understanding of the inventive concept and technical solution of the present invention.

In this embodiment, the computation/generation end is a Jetson AGX Orin, which is the smallest, most powerful, and most energy-efficient AI supercomputer released by NVIDIA, capable of 200 trillion operations per second (TOPS). The mobile terminal is a Huawei M6 tablet, which is small and basically meets the display requirements.

Embodiment 1. Fig. 1 is the workflow diagram of the mobile-terminal-based radar data interactive processing method of the present invention. As shown in the figure, the raw point cloud data is first obtained from the vehicle-mounted 3D lidar, and the vehicle information, including steering wheel angle, battery level, speed, gear position, accelerator opening, and brake opening, is obtained over the CAN bus. The ground point cloud data is then filtered out to reduce the amount of point cloud data, a clustering algorithm groups the remaining points belonging to the same obstacle, and obstacles are recognized from the clustered data. Finally, a UDP-based data transmission protocol transmits the vehicle information and the processed, recognized point cloud data to the mobile terminal in real time for display.

Most lidars provide raw data in the form of individual range readings for each laser beam, together with a timestamp and the direction of the beam, so the data can be converted directly into a depth image. Each pixel of this depth image stores the measured distance from the sensor to the object. To filter the ground point cloud data, all points in the point cloud set of the point cloud map are traversed and the road surface height of the driving road corresponding to each point is queried; based on each point's coordinates in the point cloud map, its height relative to the driving road is determined; if the height coordinate of any point is less than or equal to the corresponding road surface height, that point is removed from the point cloud set, which completes the ground filtering and reduces the amount of point cloud data. The point cloud data is then grouped by Euclidean clustering: the KD-Tree nearest-neighbor query computes the Euclidean distance from neighboring points to the target point, points are clustered according to that distance, and the process is repeated until all new points have been processed. After the point cloud of each obstacle is obtained, the points around the obstacle are found with the minimum convex hull method and the minimum-area enclosing rectangle is computed from them; when a 3D object is enclosed, a cuboid bounding box is obtained. The geometric relationships of the cluster inside each box are then computed, and the target obstacle is identified from the results.

Embodiment 2: UDP data transmission process

Data is transmitted over the UDP protocol using two ports, and each port transmits 1498 bytes of data per frame. One port sends vehicle status information and the other sends radar data; the two ports send cyclically. On both ports, bytes 1 to 6 describe the source device MAC address, bytes 7 to 14 the target device MAC address, bytes 15 to 24 the source device IP address, bytes 25 to 34 the target device IP address, bytes 35 to 38 the source device data transmission port, and bytes 39 to 42 the target device data receiving port; the remaining bytes are data bytes. Each obstacle record transmitted on the first port by the computation/generation end occupies 72 bytes: the first 16*4 bytes carry the corner coordinate information, with every 8 bytes representing an x or y coordinate, and the last 8 bytes carry the type information. To transmit more obstacle data, only four corner coordinates are transmitted (for example front-upper-left, front-upper-right, rear-lower-left, and rear-lower-right), from which the obstacle's rectangular box can be reconstructed. In the vehicle information transmitted by the computation/generation end on the second port, every 8 bytes represent one value; for example, bytes 43 to 50 carry the steering wheel angle. Since the transmitted vehicle information is limited, the remaining unoccupied bytes can be used to transmit obstacle data or to add other vehicle information later.
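As an illustration of the 72-byte obstacle record described above (four corner coordinates at 8 bytes per component, followed by 8 bytes of type information), a hedged packing sketch follows; encoding the coordinates as little-endian doubles and the type as a little-endian 64-bit integer is an assumption, since the patent does not specify the binary format.

```python
import struct

def pack_obstacle(corners_xy, obstacle_type):
    """Pack one obstacle into the 72-byte record described above.

    corners_xy    -- four (x, y) corner coordinates, e.g. front-upper-left,
                     front-upper-right, rear-lower-left, rear-lower-right
    obstacle_type -- integer type code of the obstacle
    """
    record = b""
    for x, y in corners_xy:                       # 4 corners * 2 values * 8 bytes = 64 bytes
        record += struct.pack("<d", x)
        record += struct.pack("<d", y)
    record += struct.pack("<q", obstacle_type)    # final 8 bytes: type code
    assert len(record) == 72
    return record
```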

Embodiment 3. Fig. 2 is a schematic diagram of the coordinate conversion in the mobile-terminal-based radar data interactive processing method of the present invention. The origin (0, 0) of the image coordinate system is the upper-left corner of the lane displayed on the screen, with the x axis pointing right and the y axis pointing down. Assume the lane displayed on the mobile terminal has pixel width w and pixel height h, i.e. the pixel coordinates of the lower-right corner of the lane are (w, h). Taking the Huawei M6 tablet as an example, the lane displayed in the interface has pixel width w = 1800 and pixel height h = 1600. The origin of the vehicle coordinate system is the center point of the rear of the vehicle, with the X axis pointing right and the Y axis pointing toward the front of the vehicle. Suppose there are three lanes, each 4 meters wide and 50 meters long, and the ego vehicle is directly below the middle lane; then the displayed obstacles lie within a lateral range X ∈ [-6, 6] and a longitudinal range Y ∈ [0, 50] relative to the ego vehicle, so the coordinate conversion formulas become:

x' = 1800/2 + X' * (1800/12)

y' = 1600 - Y' * (1600/50)

For example, if the obstacle coordinates received by the vehicle are (2, 3), i.e. the actual lateral distance from the ego vehicle is X = 2 m and the longitudinal distance is Y = 3 m, then the position displayed in the interface lane is

x' = 1800/2 + 2 * (1800/12) = 1200 px

y' = 1600 - 3 * (1600/50) = 1504 px

Embodiment 4. Fig. 3 shows the mobile terminal interface of the mobile-terminal-based radar data interactive processing method of the present invention. The interface displays three lanes, each 4 meters wide and 50 meters long, with the ego vehicle directly below the middle lane; for visual appeal, the lane plane is tilted 30 degrees into the interface. The upper part of the interface displays, from left to right, the battery level, vehicle speed, and steering wheel angle, all as integers. The accelerator opening and brake opening are displayed on the left as progress bars, in percent; for example, if the accelerator is at 34%, then 34% of the progress bar is blue. The vehicle gear information is shown on the right. For obstacles, the interface displays obstacles within ±6 meters to the left and right of the vehicle and up to 50 meters in front of it. The corresponding obstacle model is displayed according to the obstacle type data sent by the computation/generation end, and the obstacle model is translated by associating obstacle ids across transmitted data frames, which keeps the obstacle motion smooth.

Claims (1)

1. A mobile-terminal-based radar data interactive processing method, characterized by:

I. Acquisition of point cloud data
A set of unordered raw point cloud data is obtained from the vehicle-mounted 3D lidar; the point cloud data should at least include points of the road area scanned while the vehicle is driving, and each point should include coordinate information together with a timestamp and the direction of the beam;

II. Processing of point cloud data
The processing unit processes the raw point cloud data; the preprocessing stage filters out ground points and performs target clustering and segmentation, and the recognition stage computes the geometric data of each clustered point cloud and then identifies obstacles;
1) Convert the raw point cloud data into a depth image, in which each pixel stores the measured distance from the sensor to the object;
2) Filter out the ground point cloud data:
(1) Traverse all points in the point cloud set of the point cloud map and obtain the road surface height of the driving road corresponding to each point;
(2) Based on each point's coordinates in the point cloud map, determine its height relative to the driving road;
(3) If the height coordinate of any point is less than or equal to the corresponding road surface height, remove that point from the point cloud set, reducing the amount of point cloud data;
3) Cluster the point cloud data:
(1) Point cloud data clustering: using the KD-Tree nearest-neighbor query algorithm, compute the Euclidean distance from neighboring points to the target point and cluster according to that distance; repeat the process until all new points have been processed;
(2) Obstacle recognition: for each cluster, obtain the minimum-area rectangle around the obstacle based on the minimum convex hull method, yielding a cuboid bounding box; perform feature extraction and classification on the box region to identify the target obstacle; the obstacle information includes corner coordinates, type, and id;

III. Acquisition of vehicle status information
Vehicle information is obtained over the CAN bus; the vehicle status information includes steering wheel angle, battery level, speed, gear position, accelerator opening, and brake opening;

IV. Data communication
1) Transmit data over the UDP protocol:
(1) Bounding boxes are drawn from the obstacle information recognized in the radar point cloud data; during transmission, only the corner coordinates of each obstacle's bounding rectangle and the obstacle category are transmitted;
(2) Transmit byte data over the UDP protocol: every eight bytes encode one item of obstacle or vehicle information; transmission uses two ports, and each port transmits 1498 bytes of data per frame; one port sends vehicle status information and the other sends radar data, and the two ports send cyclically;

V. Display and interaction of the data
1) The mobile terminal receives the obstacle data and the vehicle data;
2) Convert the obstacle information between coordinate systems: the radar obstacle data takes the center point of the rear of the vehicle as its origin, in meters, forming the vehicle coordinate system; when displayed on the mobile terminal, an image coordinate system is used, with a control drawn in the middle of the interface to display the lane information and the upper-left corner of the displayed lane as the origin, in pixels; the vehicle coordinate system therefore needs to be converted into the image coordinate system; in the vehicle coordinate system, the X axis points to the right of the origin and the Y axis points upward; in the image coordinate system, the x axis points to the right of the origin and the y axis points downward; the coordinate conversion formulas are
x' = w/2 + X' * (w/m)
y' = h - Y' * (h/n)
where w is the pixel width of the lane displayed on the screen, h is the pixel height of the lane displayed on the screen, m is the maximum lateral distance between a displayed obstacle and the ego vehicle, n is the maximum longitudinal distance between a displayed obstacle and the ego vehicle, X' is the actual lateral distance from the obstacle to the ego vehicle, Y' is the actual longitudinal distance from the obstacle to the ego vehicle, x' is the computed horizontal pixel distance of the obstacle from the ego vehicle in the interface, and y' is the computed vertical pixel distance of the obstacle from the ego vehicle in the interface;
3) Develop the mobile terminal interface to display the vehicle information and the obstacle information.
CN202211628850.7A 2022-12-18 2022-12-18 A radar data interactive processing method based on mobile terminal Active CN115964446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211628850.7A CN115964446B (en) 2022-12-18 2022-12-18 A radar data interactive processing method based on mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211628850.7A CN115964446B (en) 2022-12-18 2022-12-18 A radar data interactive processing method based on mobile terminal

Publications (2)

Publication Number Publication Date
CN115964446A true CN115964446A (en) 2023-04-14
CN115964446B CN115964446B (en) 2024-07-02

Family

ID=87362907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211628850.7A Active CN115964446B (en) 2022-12-18 2022-12-18 A radar data interactive processing method based on mobile terminal

Country Status (1)

Country Link
CN (1) CN115964446B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226833A (en) * 2013-05-08 2013-07-31 清华大学 Point cloud data partitioning method based on three-dimensional laser radar
CN110726993A (en) * 2019-09-06 2020-01-24 武汉光庭科技有限公司 Obstacle detection method using single line laser radar and millimeter wave radar
CN110816527A (en) * 2019-11-27 2020-02-21 奇瑞汽车股份有限公司 Vehicle-mounted night vision safety method and system
CN111469764A (en) * 2020-04-15 2020-07-31 厦门华厦学院 Prediction control method based on mathematical model
CN111985322A (en) * 2020-07-14 2020-11-24 西安理工大学 A Lidar-based Road Environment Element Perception Method
CN113887276A (en) * 2021-08-20 2022-01-04 苏州易航远智智能科技有限公司 Image-based forward main target detection method
WO2022022694A1 (en) * 2020-07-31 2022-02-03 北京智行者科技有限公司 Method and system for sensing automated driving environment
CN114488194A (en) * 2022-01-21 2022-05-13 常州大学 Method for detecting and identifying targets under structured road of intelligent driving vehicle
CN114488073A (en) * 2022-02-14 2022-05-13 中国第一汽车股份有限公司 Method for processing point cloud data acquired by laser radar
CN115166700A (en) * 2022-06-30 2022-10-11 上海西井信息科技有限公司 Ground point cloud filtering method, system, equipment and storage medium for laser radar

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226833A (en) * 2013-05-08 2013-07-31 清华大学 Point cloud data partitioning method based on three-dimensional laser radar
CN110726993A (en) * 2019-09-06 2020-01-24 武汉光庭科技有限公司 Obstacle detection method using single line laser radar and millimeter wave radar
CN110816527A (en) * 2019-11-27 2020-02-21 奇瑞汽车股份有限公司 Vehicle-mounted night vision safety method and system
CN111469764A (en) * 2020-04-15 2020-07-31 厦门华厦学院 Prediction control method based on mathematical model
CN111985322A (en) * 2020-07-14 2020-11-24 西安理工大学 A Lidar-based Road Environment Element Perception Method
WO2022022694A1 (en) * 2020-07-31 2022-02-03 北京智行者科技有限公司 Method and system for sensing automated driving environment
CN113887276A (en) * 2021-08-20 2022-01-04 苏州易航远智智能科技有限公司 Image-based forward main target detection method
CN114488194A (en) * 2022-01-21 2022-05-13 常州大学 Method for detecting and identifying targets under structured road of intelligent driving vehicle
CN114488073A (en) * 2022-02-14 2022-05-13 中国第一汽车股份有限公司 Method for processing point cloud data acquired by laser radar
CN115166700A (en) * 2022-06-30 2022-10-11 上海西井信息科技有限公司 Ground point cloud filtering method, system, equipment and storage medium for laser radar

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MIAO, Yiming; TANG, Yuan; ALZAHRANI, Bander A.; BARNAWI, Ahmed; ALAFIF, Tarik: "Airborne LiDAR Assisted Obstacle Recognition and Intrusion Detection Towards Unmanned Aerial Vehicle: Architecture, Modeling and Evaluation", IEEE Transactions on Intelligent Transportation Systems, 1 October 2020 (2020-10-01), pages 4531, XP011865597, DOI: 10.1109/TITS.2020.3023189 *
王柠 (WANG, Ning): "Research on Obstacle Detection Methods Based on Lidar" (基于激光雷达的障碍物检测方法研究), China Master's Theses Full-text Database, 15 February 2022 (2022-02-15), pages 035-267 *
王灿; 孔斌; 杨静; 王智灵; 祝辉 (WANG, Can; KONG, Bin; YANG, Jing; WANG, Zhiling; ZHU, Hui): "Road Boundary Extraction and Obstacle Detection Algorithm Based on 3D Lidar" (基于三维激光雷达的道路边界提取和障碍物检测算法), Pattern Recognition and Artificial Intelligence (模式识别与人工智能), no. 04, 15 April 2020 (2020-04-15), pages 70-79 *

Also Published As

Publication number Publication date
CN115964446B (en) 2024-07-02

Similar Documents

Publication Publication Date Title
CN110780305B (en) Track cone detection and target point tracking method based on multi-line laser radar
WO2022156175A1 (en) Detection method, system, and device based on fusion of image and point cloud information, and storage medium
CN103940434B (en) Real-time lane detection system based on monocular vision and inertial navigation unit
CN112894832A (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
CN110111414A (en) A kind of orthography generation method based on three-dimensional laser point cloud
CN108596058A (en) Running disorder object distance measuring method based on computer vision
CN110517349A (en) A 3D Vehicle Target Detection Method Based on Monocular Vision and Geometric Constraints
CN108564525A (en) A kind of 3D point cloud 2Dization data processing method based on multi-line laser radar
CN108955702A (en) Based on the lane of three-dimensional laser and GPS inertial navigation system grade map creation system
CN103386975A (en) Vehicle obstacle avoidance method and system based on machine vision
CN102938142A (en) Method for filling indoor light detection and ranging (LiDAR) missing data based on Kinect
Xu et al. Object detection based on fusion of sparse point cloud and image information
CN111880191A (en) Map generation method based on multi-agent laser radar and visual information fusion
Barua et al. A self-driving car implementation using computer vision for detection and navigation
CN107796373A (en) A kind of distance-finding method of the front vehicles monocular vision based on track plane geometry model-driven
CN115046541A (en) Topological map construction and mine car positioning system under underground mine environment
CN114639115A (en) 3D pedestrian detection method based on fusion of human body key points and laser radar
CN208937705U (en) A device for deep fusion of multi-source heterogeneous sensor features
CN116573017A (en) Method, system, device and medium for sensing foreign objects in urban rail train running boundary
WO2021189420A1 (en) Data processing method and device
CN109902542A (en) Dynamic ground detection method of 3D sensor
CN115267756A (en) Monocular real-time distance measurement method based on deep learning target detection
CN115964446B (en) A radar data interactive processing method based on mobile terminal
CN114092778A (en) Radar camera data fusion system and method based on characterization learning
CN111638487B (en) Automatic parking test equipment and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant