
CN114640801A - Vehicle-end panoramic view angle auxiliary driving system based on image fusion - Google Patents

Vehicle-end panoramic view angle auxiliary driving system based on image fusion

Info

Publication number
CN114640801A
CN114640801A (application CN202210124847.5A; granted publication CN114640801B)
Authority
CN
China
Prior art keywords
image
panoramic
fisheye
video
image processing
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210124847.5A
Other languages
Chinese (zh)
Other versions
CN114640801B (en)
Inventor
仇翔
赵嘉楠
应皓哲
禹鑫燚
欧林林
魏岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Zhejiang University of Technology ZJUT
Priority to CN202210124847.5A
Publication of CN114640801A
Application granted
Publication of CN114640801B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Optimization (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Signal Processing (AREA)
  • Pure & Applied Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Algebra (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)

Abstract

A vehicle-end panoramic-view assisted driving system based on image fusion, comprising: an image acquisition module that captures 360° road information around the vehicle body, an embedded image processing device that processes the acquisition module's output images in real time, and an image display device that shows the vehicle-end panoramic image. The embedded image processing device is physically connected to the other two devices by cables. Three fisheye cameras with 180° fields of view, mounted at different positions on the vehicle, capture the images, and a video encoder together with a video capture card merges the multiple channels of analog video into a single channel. The embedded image processing device passes the merged digital video through a fisheye image processing module and a panoramic image stitcher, which stitch the three digital video streams from different angles into one panoramic image, and a Web panorama player displays the stitched panorama on the image display device. The invention can eliminate the blind zones in the field of view of a large special-purpose vehicle in motion.

Description

Vehicle-end panoramic-view assisted driving system based on image fusion

Technical Field

The invention relates to the field of safe driving of large special-purpose vehicles, and in particular to a vehicle-end panoramic-view assisted driving system based on image fusion.

Background

In recent years, as China's urbanization has advanced, more and more large special-purpose vehicles have appeared on urban roads, including but not limited to city buses, cement mixer trucks, and muck trucks. These vehicles greatly facilitate daily production and life, but they typically have long, tall bodies, which give them wide blind zones. When driving on the road, and especially when turning, serious traffic accidents occur because of these large blind zones, posing a significant safety hazard to other vehicles and to pedestrians. Major cities across the country have already adopted a "must stop before turning right" policy for large special-purpose vehicles, yet accidents caused by their blind zones continue to happen. Therefore, to improve the driving safety of large special-purpose vehicles and to minimize the danger to other road vehicles and pedestrians, the blind zones that exist while these vehicles travel on the road must be reduced or even completely eliminated.

To this end, various systems for reducing vehicles' blind zones have been researched and developed. Wang Lujie et al. proposed a vehicle-mounted system with probes mounted in front of the rearview mirrors (Wang Lujie; Ma Feilong; Shi Diao; et al. Vehicle-mounted system with front-mounted rearview-mirror probes [P]. Chinese Patent: CN112776729A, 2021-05-11), which installs foreign-object detection components on the vehicle body to address blind zones, but suffers from limitations such as a complicated design and high cost. Ye Shengjuan et al. proposed an automotive imaging device for eliminating blind zones (Ye Shengjuan; Wang Haifeng; Yang Yingfei; et al. An automotive imaging device for eliminating blind spots in the automotive field of view [P]. Chinese Patent: CN213948279U, 2021-08-13), which mounts a fixed display on the vehicle and adjusts the display's fixed angle; however, its viewing angle is fixed, the display angle must be adjusted manually, and it cannot show complete information about the environment around the vehicle body.

Summary of the Invention

To overcome the above problems of the prior art, the present invention provides a vehicle-end panoramic-view assisted driving system based on image fusion, which aims to reduce or even completely eliminate the blind zones of large special-purpose vehicles and thereby improve their driving safety.

The vehicle-end panoramic-view assisted driving system based on image fusion of the present invention comprises an image acquisition device, an embedded image processing device, and an image display device.

The image acquisition device uses three fisheye cameras to capture the 360° road environment around the vehicle body. A video encoder merges the output images of the three fisheye cameras into a single channel of analog video, a video capture card converts that channel into digital video, and the digital video is transmitted over a cable to the fisheye image processing module in the embedded image processing device for image processing;

The embedded image processing device comprises a fisheye image processing module, a Web panorama player, and a panoramic image stitcher.

The fisheye image processing module processes the original fisheye images output by the video capture card. Because a fisheye camera captures a very wide field of view, the pixel information toward the edges of the image is severely distorted, so the original fisheye image must be rectified into an unwrapped view by longitude–latitude expansion to improve the quality of the final panoramic stitching. The method applies a series of transformations to the pixel coordinates of the fisheye image: coordinates in the 2D Cartesian system are mapped into a spherical Cartesian coordinate system, the spherical coordinates are then converted into longitude–latitude coordinates, and finally the pixels are remapped according to those longitude–latitude coordinates, converting the fisheye image into the unwrapped view. The specific steps are as follows:

1) After the original fisheye image is obtained, a circular mask function, defined by the center and radius of the fisheye image circle for each of the three views, crops out the target image region. The pixel coordinates of the cropped region lie in the range given by formula (1):

x ∈ [0, cols−1],  y ∈ [0, rows−1]    (1)

where x and y are the horizontal and vertical pixel coordinates of the cropped image, cols is the width of the original fisheye image, and rows is its height;

2) To control the resolution of the final fused video, the size of the picture output by step 1) must be controlled;

3) The pixel coordinates (x, y) of the cropped image region are converted from the 2D Cartesian coordinate system into normalized coordinates A(x_A, y_A); the conversion is given by formula (2):

[formula (2) is reproduced only as an image in the original document]

where x and y are the horizontal and vertical pixel coordinates of the cropped image, cols is the width of the original fisheye image, and rows is its height;

4) The normalized coordinates A(x_A, y_A) are converted into spherical three-dimensional Cartesian coordinates P(x_p, y_p, z_p), as given by formulas (3) and (4):

P(p, φ, θ)    (3)

[formula (4) is reproduced only as an image in the original document]

where p is the radial distance along the line OP from the origin O to a point on the sphere, θ is the angle between OP and the z-axis, φ is the angle between the projection of OP onto the xOy plane and the x-axis, r is the radius of the sphere, and F is the focal length of the fisheye camera. The spherical coordinates are converted into Cartesian coordinates according to formula (5):

x_p = p sinθ cosφ,  y_p = p sinθ sinφ,  z_p = p cosθ    (5)

5) The spatial coordinates P are converted into longitude–latitude coordinates; the conversion is given by formula (6):

[formula (6) is reproduced only as an image in the original document]

where x_p, y_p, z_p are the coordinates of point P, and latitude and longitude are the latitude and longitude coordinates;

6) The longitude–latitude coordinates from step 5) are mapped to the pixel coordinates (x_o, y_o) of the unwrapped image; the mapping is given by formula (7):

[formula (7) is reproduced only as an image in the original document]

where x_o is the horizontal pixel coordinate in the unwrapped image and y_o is the vertical pixel coordinate;

7) After the pixel mapping is complete, black gap points appear in the picture wherever no source pixel was mapped; these black regions are filled with cubic interpolation so that the output image is complete.
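The seven steps above amount to building a per-pixel lookup from the unwrapped (longitude–latitude) image back into the fisheye circle, then resampling. A minimal pure-Python sketch of that lookup follows. It is hedged: it assumes an equidistant fisheye model (r = F·θ) and an image circle centered at (cx, cy), since the patent's formulas (2) and (4) survive only as images; all parameter names here are illustrative.

```python
# Inverse mapping for fisheye -> unwrapped (equirectangular) rectification.
# ASSUMPTIONS not taken from the patent: an equidistant projection
# (r = f * theta) and an image circle centered at (cx, cy).
import math

def equirect_to_fisheye(xo, yo, out_w, out_h, cx, cy, f):
    """Map an output (unwrapped) pixel (xo, yo) to fisheye input coordinates.

    The patent derives the forward chain fisheye -> sphere -> lat/long;
    an implementation usually evaluates the inverse per output pixel so
    every destination pixel gets a source sample, and step 7's gap
    filling reduces to the interpolation mode of the resampler.
    """
    # Output pixel -> longitude/latitude for a 180-degree field of view.
    lon = (xo / out_w - 0.5) * math.pi          # in [-pi/2, pi/2]
    lat = (0.5 - yo / out_h) * math.pi          # in [-pi/2, pi/2]

    # Latitude/longitude -> point on the unit viewing sphere (cf. formula
    # (5)), with the camera looking along +z.
    sx = math.cos(lat) * math.sin(lon)
    sy = math.sin(lat)
    sz = math.cos(lat) * math.cos(lon)

    # Sphere -> fisheye polar coordinates: theta is the angle from the
    # optical axis, phi the azimuth in the image plane.
    theta = math.acos(max(-1.0, min(1.0, sz)))
    phi = math.atan2(sy, sx)

    # Equidistant projection: radial distance grows linearly with theta.
    r = f * theta
    return cx + r * math.cos(phi), cy + r * math.sin(phi)
```

Evaluating this for every output pixel yields the two lookup maps that, for example, OpenCV's `cv2.remap` with `cv2.INTER_CUBIC` would consume, performing the resampling and the cubic gap filling of step 7 in one call.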

The panoramic image stitcher stitches the three fisheye images processed by the fisheye image processing module into a panoramic image. To keep the stitched field of view consistent, the three fisheye cameras facing different directions are numbered in a fixed order, and that order is kept unchanged in all subsequent operations. During image processing, the SIFT algorithm computes the feature points of each image, which serve as local image descriptors invariant to scale, zoom, rotation, and affine transformation. Matching feature points between adjacent images are then found, and the RANSAC method further filters the feature matches; the homography matrix is computed from the mapping between the matched feature points. Finally, the images are perspective-transformed with the computed homography and stitched together, realizing vehicle-end panoramic image stitching. The specific steps are as follows:

1) After the fisheye images processed by the fisheye image processing module are obtained, the images from the different angles are given fixed numbers in sequence, and the numbering is kept consistent for all subsequent images;

2) The SIFT algorithm provided by OpenCV computes the feature points of each image, which serve as local image descriptors invariant to scale, zoom, rotation, and affine transformation;

3) Image stitching also requires matching feature points between adjacent images, so the invention coarsely matches the fisheye images of the three views by computing a Euclidean distance measure, then filters the feature points of each image pair with the SIFT matching criterion that compares the nearest-neighbor Euclidean distance against the second-nearest: a pair is accepted as a match when the ratio of the nearest to the second-nearest distance is less than 0.8;

4) The coarse matches from step 3) are further filtered with the RANSAC method to remove mismatches, improving the accuracy of subsequent processing; the mapping between the remaining feature points is then used to compute the homography matrix;

5) The fisheye images processed by the fisheye image processing module are perspective-transformed with the homography matrix computed in step 4), the transformed images are stitched together, and finally the video stream is synthesized, realizing panoramic stitching.
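The ratio test of step 3) can be stated compactly: keep a putative match only when its nearest-neighbor descriptor distance is below 0.8 times the second-nearest. The toy, library-free sketch below illustrates just that rule; in a real pipeline the descriptors would come from OpenCV's SIFT and the surviving pairs would feed a RANSAC homography estimator (step 4), neither of which is reproduced here.

```python
# Lowe-style ratio test (step 3): a match is kept only when the nearest
# descriptor distance is < 0.8 x the second-nearest. Toy descriptors;
# real ones are 128-dimensional SIFT vectors.

def euclidean(a, b):
    # Plain Euclidean distance between two descriptor vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Return index pairs (i, j): descriptor i in image A matched to j in B."""
    matches = []
    for i, da in enumerate(desc_a):
        # Distances from this descriptor to every candidate in image B.
        dists = sorted((euclidean(da, db), j) for j, db in enumerate(desc_b))
        # Accept only an unambiguous nearest neighbor.
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

The 0.8 threshold is the value the patent states; ambiguous features (two near-equal candidates) are rejected, which is exactly what makes the subsequent RANSAC homography fit well-conditioned.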

The Web panorama player displays the panoramic images output by the panoramic image stitcher in a web page. To reduce video display latency, the player is built as a front end on the rtc.js player plug-in, and to let the front end support panoramic video playback it combines three.js, the HTML video tag, and rtc.js. The player uses three.js to build a spherical model and applies the video tag as the render material for the sphere's surface, so that the panoramic video is projected onto the sphere. With a browser installed on the embedded image processing device, the panoramic image can then be viewed on the image display device;

The image display device is physically connected to the embedded image processing device and displays the panoramic image presented by the Web player.

Compared with the prior art, the beneficial effects of the invention are: with only three fisheye cameras, 360° environmental information around the vehicle body can be obtained; the video encoder and video capture card merge three channels of analog video into one channel of digital video, which is fed to the embedded image processing device for further processing, greatly saving the embedded device's port resources while keeping the overall design cost low. At the same time, an embedded device with an integrated AI chip improves real-time video processing and image output capability, and a purpose-built panorama player plays back the panoramic video. Together these greatly reduce or even completely eliminate the wide blind zones of large special-purpose vehicles in motion, providing a good assisted-driving effect.

Brief Description of the Drawings

Fig. 1 is the overall system block diagram of the invention;

Fig. 2 is a schematic diagram of the camera installation of the invention;

Fig. 3 is the fisheye image processing flowchart of the invention;

Fig. 4 is the processing flowchart of the image stitching and fusion of the invention.

Detailed Description

An example of the invention is described in further detail below with reference to the accompanying drawings:

As shown in Fig. 1, the vehicle-end panoramic-view assisted driving system based on image fusion consists of three parts: an image acquisition device, an embedded image processing device, and an image display device. The embedded image processing device is physically connected to the image acquisition device and the image display device by cables. The image acquisition device mainly merges the analog video images output by the multiple fisheye cameras into one channel of digital video using hardware encoding, via a video encoder, a video capture card, and similar hardware; this reduces the amount of data to transmit while feeding the digital video into the embedded image processing device for image processing. After receiving the input digital images, the embedded image processing device first rectifies the severely distorted original fisheye images into unwrapped views by coordinate transformation, obtaining richer image information, and then stitches the three unwrapped fisheye images into a video stream. Finally, the Web panorama player presents the stitched panoramic image on the image display device, greatly reducing or even completely eliminating the wide blind zones of a large special-purpose vehicle in motion and improving its driving safety.

As shown in Fig. 2, the fisheye cameras used in the invention have a 180° field of view. To capture 360° environmental information around the vehicle body with a good stitching result, the three fisheye cameras must be installed at different positions at the same height, spaced 120° apart. By following the installation diagram of Fig. 2 and adjusting the mounting positions of the three cameras on the large special-purpose vehicle according to the final displayed image, 360° panoramic image information of the vehicle's surroundings can be obtained, yielding a good assisted-driving effect.
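The coverage arithmetic behind this layout, which the patent does not spell out, is easy to check:

```latex
% Adjacent cameras are mounted 120 degrees apart, each with a 180-degree FOV.
\text{overlap per adjacent pair} = 180^{\circ} - 120^{\circ} = 60^{\circ}
\qquad
\text{total coverage} = 3 \times 180^{\circ} - 3 \times 60^{\circ} = 360^{\circ}
```

Each adjacent pair therefore shares a 60° overlap band, which is what gives the SIFT-based stitcher enough common features to match.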

As shown in Fig. 3, the fisheye image processing module achieves an optimal output image through a series of transformations of the pixel coordinates in the fisheye image, pixel remapping, and image filling of gap points. Its main steps are as follows:

1) After the original fisheye image is obtained, a circular mask function, defined by the center and radius of the fisheye image circle for each of the three views, crops out the target image region. The pixel coordinates of the cropped region lie in the range given by formula (1):

x ∈ [0, cols−1],  y ∈ [0, rows−1]    (1)

where x and y are the horizontal and vertical pixel coordinates of the cropped image, cols is the width of the original fisheye image, and rows is its height;

2) To control the resolution of the final fused video, the size of the picture output by step 1) must be controlled;

3) The pixel coordinates (x, y) of the cropped image region are converted from the 2D Cartesian coordinate system into normalized coordinates A(x_A, y_A); the conversion is given by formula (2):

[formula (2) is reproduced only as an image in the original document]

where x and y are the horizontal and vertical pixel coordinates of the cropped image, cols is the width of the original fisheye image, and rows is its height;

4) The normalized coordinates A(x_A, y_A) are converted into spherical three-dimensional Cartesian coordinates P(x_p, y_p, z_p), as given by formulas (3) and (4):

P(p, φ, θ)    (3)

[formula (4) is reproduced only as an image in the original document]

where p is the radial distance along the line OP from the origin O to a point on the sphere, θ is the angle between OP and the z-axis, φ is the angle between the projection of OP onto the xOy plane and the x-axis, r is the radius of the sphere, and F is the focal length of the fisheye camera. The spherical coordinates are converted into Cartesian coordinates according to formula (5):

x_p = p sinθ cosφ,  y_p = p sinθ sinφ,  z_p = p cosθ    (5)

5) The spatial coordinates P are converted into longitude–latitude coordinates; the conversion is given by formula (6):

[formula (6) is reproduced only as an image in the original document]

where x_p, y_p, z_p are the coordinates of point P, and latitude and longitude are the latitude and longitude coordinates;

6) The longitude–latitude coordinates from step 5) are mapped to the pixel coordinates (x_o, y_o) of the unwrapped image; the mapping is given by formula (7):

[formula (7) is reproduced only as an image in the original document]

where x_o is the horizontal pixel coordinate in the unwrapped image and y_o is the vertical pixel coordinate;

7) After the pixel mapping is complete, black gap points appear in the picture wherever no source pixel was mapped; these black regions are filled with cubic interpolation so that the output image is complete.

As shown in Fig. 4, the technique of the panoramic image stitcher comprises image feature-point extraction, matching of feature points between adjacent images, finding the mapping between matched feature points, computation of the homography matrix, perspective transformation of the images, and image stitching. Its main steps are as follows:

1) After the fisheye images processed by the fisheye image processing module are obtained, the images from the different angles are given fixed numbers in sequence, and the numbering is kept consistent for all subsequent images;

2) The SIFT algorithm provided by OpenCV computes the feature points of each image, which serve as local image descriptors invariant to scale, zoom, rotation, and affine transformation;

3) Image stitching also requires matching feature points between adjacent images, so the invention coarsely matches the fisheye images of the three views by computing a Euclidean distance measure, then filters the feature points of each image pair with the SIFT matching criterion that compares the nearest-neighbor Euclidean distance against the second-nearest: a pair is accepted as a match when the ratio of the nearest to the second-nearest distance is less than 0.8;

4) The coarse matches from step 3) are further filtered with the RANSAC method to remove mismatches, improving the accuracy of subsequent processing; the mapping between the remaining feature points is then used to compute the homography matrix;

5) The fisheye images processed by the fisheye image processing module are perspective-transformed with the homography matrix computed in step 4), the transformed images are stitched together, and finally the video stream is synthesized, realizing panoramic stitching.

The content described in the embodiments of this specification merely enumerates realization forms of the inventive concept; the protection scope of the present invention should not be regarded as limited to the specific forms stated in the embodiments, but also extends to equivalent technical means that a person skilled in the art can conceive on the basis of the inventive concept.

Claims (3)

1. A vehicle-end panoramic-view driver assistance system based on image fusion, characterized by comprising: an image acquisition module, an embedded image processing device, and an image display device, wherein the image acquisition module is used for acquiring 360-degree road environment information around the vehicle body;
the image acquisition module uses three fisheye cameras to capture the 360-degree road environment around the vehicle body, integrates the output images of the three fisheye cameras into one analog video with a video encoder, converts the analog video into a digital video with a video capture card, and transmits the digital video through a cable to the fisheye image processing module in the embedded image processing device for image processing;
the embedded image processing device comprises a fisheye image processing module, a panoramic image stitcher, and a Web panorama player;
the fisheye image processing module processes the original fisheye images output by the video capture card; each original fisheye image is corrected into a ring view by longitude-latitude unfolding to improve the final panoramic stitching effect: the pixel coordinates in the fisheye image are transformed through a series of changes from a 2D Cartesian coordinate system into a spherical Cartesian coordinate system, the coordinates in the spherical Cartesian coordinate system are then converted into longitude and latitude coordinates, and finally the pixel points are remapped on the basis of the longitude and latitude coordinates to convert the fisheye image into a circular view; the specific operation steps are as follows:
1) after the original fisheye image is obtained, a circular mask function is written using the circle center and radius of the fisheye imaging in each of the three views to crop the target image area; the pixel coordinate range of the cropped area is given by formula (1):
x∈[0,cols-1],y∈[0,rows-1] (1)
wherein x and y are respectively the abscissa and ordinate of the cropped image's pixel coordinates, cols is the width of the original fisheye image, and rows is its height;
2) in order to control the resolution of the final fused video, the size of the image output in step 1) needs to be controlled;
3) converting the pixel coordinate point (x, y) of the cropped image area from the 2D Cartesian coordinate system to standard coordinates A(x_A, y_A); the conversion relationship is shown in formula (2):
[formula (2): shown as an image in the original publication; not reproduced]
wherein x and y are respectively the abscissa and ordinate of the cropped image's pixel coordinates, cols is the width of the original fisheye image, and rows is its height;
4) converting the standard coordinates A(x_A, y_A) into three-dimensional spherical Cartesian coordinates P(x_p, y_p, z_p); the conversion is shown in formulas (3) and (4):
P(p,φ,θ) (3)
[formula (4): shown as an image in the original publication; not reproduced]
wherein p is the radial distance of the line OP connecting a point on the sphere with the origin O, θ is the angle between OP and the z-axis, φ is the angle between the projection of OP onto the xOy plane and the x-axis, r is the radius of the sphere, and F is the focal length of the fisheye camera; the spherical coordinate system is converted into a Cartesian coordinate system according to formula (5):
x_p = p sinθ cosφ,  y_p = p sinθ sinφ,  z_p = p cosθ   (5)
5) converting the spatial coordinates of point P into longitude and latitude coordinates; the conversion relation is shown in formula (6):
[formula (6): shown as an image in the original publication; not reproduced]
wherein x_p, y_p, z_p are the coordinates of point P, latitude is the latitude coordinate, and longitude is the longitude coordinate;
6) converting and mapping the longitude and latitude coordinates from step 5) into the pixel coordinates (x_o, y_o) of the unfolded map; the mapping relationship is shown in formula (7):
[formula (7): shown as an image in the original publication; not reproduced]
wherein x_o denotes the pixel abscissa and y_o the pixel ordinate in the unfolded image;
7) after the pixel mapping is completed, black hole points that no pixel was mapped to appear in the picture; a cubic interpolation algorithm is used to fill these black areas so that a complete image is output;
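Formula (5) in step 4) can be checked numerically with a minimal pure-Python sketch (the function name is illustrative, not from the patent):

```python
import math

def spherical_to_cartesian(p, theta, phi):
    # formula (5): theta is measured from the z-axis, phi in the xOy plane
    x_p = p * math.sin(theta) * math.cos(phi)
    y_p = p * math.sin(theta) * math.sin(phi)
    z_p = p * math.cos(theta)
    return x_p, y_p, z_p

# a point on the equator of a unit sphere: theta = 90 degrees, phi = 0
x, y, z = spherical_to_cartesian(1.0, math.pi / 2, 0.0)
print(round(x, 6), round(y, 6), round(z, 6))  # 1.0 0.0 0.0
```

The equator point lands on the x-axis as expected, and the north pole (theta = 0) would land on the z-axis.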
the panoramic image stitcher stitches the three fisheye images processed by the fisheye image processing module into a panorama, comprising: in order to ensure the continuity of the stitched view, the fisheye cameras in the three different directions are numbered in a fixed order, and this order is kept unchanged in all subsequent operations; during image processing, the feature points of each image are computed with the SIFT algorithm and used as local image descriptors invariant to scale-space change, scaling, rotation, and affine transformation; matching feature points between adjacent images are then searched for, and the RANSAC method further screens the feature matches, so that a homography matrix is computed from the mapping relationship between the matched feature points; finally, the images are perspective-transformed according to the computed homography matrix and stitched, realizing vehicle-end panoramic image stitching; the specific operation steps are as follows:
(1) when the fisheye images processed by the fisheye image processing module are obtained, the images from the different viewing angles are assigned fixed numbers in turn, and subsequent image numbering is kept consistent;
(2) the feature points of each image are computed with OpenCV's built-in SIFT algorithm and used as local image descriptors invariant to scale-space change, scaling, rotation, and affine transformation;
(3) matching feature points between adjacent images are searched for during stitching: the fisheye images of the three viewing angles are coarsely matched by computing a Euclidean distance measure, the candidates are then screened with the SIFT ratio test comparing the nearest-neighbor Euclidean distance with the second-nearest-neighbor Euclidean distance, and a feature pair is selected as a match when the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is less than 0.8;
(4) the coarse matches from step (3) are further screened with the RANSAC method to remove mismatches, improving the accuracy of subsequent image processing; the mapping relationship between the remaining feature points is then used to compute a homography matrix;
(5) the fisheye images processed by the fisheye image processing module are perspective-transformed with the homography matrix computed in step (4), the transformed images are stitched, and finally a video stream is synthesized, realizing panoramic stitching;
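The perspective transformation in step (5) applies a 3×3 homography to each pixel; below is a minimal pure-Python sketch of the point-wise operation (the matrix H is an arbitrary translation homography chosen for illustration, not one estimated from real matches):

```python
def apply_homography(H, x, y):
    # homogeneous multiplication followed by the perspective divide
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# a pure-translation homography shifting points by (10, 5)
H = [[1, 0, 10],
     [0, 1, 5],
     [0, 0, 1]]
print(apply_homography(H, 2, 3))  # (12.0, 8.0)
```

In a real pipeline the 8 degrees of freedom of H would come from the RANSAC-filtered matches of step (4), e.g. via OpenCV's findHomography, and the warp would be applied image-wide.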
the Web panorama player displays the panoramic image output by the panoramic image stitcher on a web page; to reduce the latency of video display, the Web panorama player builds the front-end player on the rtc.js player plug-in; to enable the front end to play panoramic video, panoramic playback is realized with the three.js + video tag + rtc.js technique: the Web panorama player builds a spherical model with three.js and textures the sphere using the video tag as the sphere-surface rendering material, thereby projecting the panoramic video onto the sphere; a browser is installed on the embedded image processing device, and the panoramic image is browsed on the image display device;
the image display device is physically connected to the embedded image processing device and displays the panoramic image presented by the Web panorama player.
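The inverse pixel-mapping-plus-hole-handling structure of steps 1)–7) in claim 1 can be sketched as a generic remap loop in pure Python; `map_fn` is a hypothetical stand-in for the chain of formulas (2)–(7), and the toy mapping below is a horizontal flip, not the patent's actual transform:

```python
def remap(src, width, height, map_fn, fill=0):
    """Inverse mapping: each output pixel samples the source pixel that
    map_fn points at; None or out-of-range leaves a hole filled with
    `fill` -- the patent fills such holes by cubic interpolation instead."""
    rows, cols = len(src), len(src[0])
    out = [[fill] * width for _ in range(height)]
    for yo in range(height):
        for xo in range(width):
            m = map_fn(xo, yo)
            if m is not None:
                x, y = m
                if 0 <= x < cols and 0 <= y < rows:
                    out[yo][xo] = src[y][x]
    return out

# toy 2x2 "image" and a horizontal-flip mapping
src = [[1, 2],
       [3, 4]]
flipped = remap(src, 2, 2, lambda xo, yo: (1 - xo, yo))
print(flipped)  # [[2, 1], [4, 3]]
```

Iterating over the output and mapping back to the source, as here, is what keeps the unfolded image dense everywhere `map_fn` is defined; only pixels with no valid source become the black holes that step 7) fills.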
2. The vehicle-end panoramic-view driver assistance system based on image fusion as claimed in claim 1, characterized in that: the embedded image processing device uses the Atlas 200 acceleration module as its AI computing chip.
3. The vehicle-end panoramic-view driver assistance system based on image fusion as claimed in claim 1, characterized in that: the viewing angle of each fisheye camera is 180°, the three fisheye cameras are mounted at different positions at the same height, and adjacent fisheye cameras are required to be 120° apart.
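A quick arithmetic check of the camera geometry in claim 3: with 180° lenses whose optical axes are 120° apart, adjacent views overlap by 60°, which provides the shared features that the stitcher in claim 1 relies on:

```python
fov = 180       # field of view of each fisheye lens, degrees (claim 3)
spacing = 120   # angle between adjacent camera axes, degrees (claim 3)
cameras = 3

overlap = fov - spacing       # overlap shared by each adjacent pair of views
coverage = cameras * spacing  # total sweep of the three camera axes
print(overlap, coverage)      # 60 360
```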
CN202210124847.5A 2022-02-10 2022-02-10 A vehicle-side panoramic view assisted driving system based on image fusion Active CN114640801B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210124847.5A CN114640801B (en) 2022-02-10 2022-02-10 A vehicle-side panoramic view assisted driving system based on image fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210124847.5A CN114640801B (en) 2022-02-10 2022-02-10 A vehicle-side panoramic view assisted driving system based on image fusion

Publications (2)

Publication Number Publication Date
CN114640801A true CN114640801A (en) 2022-06-17
CN114640801B CN114640801B (en) 2024-02-20

Family

ID=81946324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210124847.5A Active CN114640801B (en) 2022-02-10 2022-02-10 A vehicle-side panoramic view assisted driving system based on image fusion

Country Status (1)

Country Link
CN (1) CN114640801B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116245748A (en) * 2022-12-23 2023-06-09 珠海视熙科技有限公司 Distortion correction method, device, equipment, system and storage medium for ring-looking lens
CN117893719A (en) * 2024-03-15 2024-04-16 鹰驾科技(深圳)有限公司 Method and system for splicing self-adaptive vehicle body in all-around manner
CN117935127A (en) * 2024-03-22 2024-04-26 国任财产保险股份有限公司 Intelligent damage assessment method and system for panoramic video exploration

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106357976A (en) * 2016-08-30 2017-01-25 深圳市保千里电子有限公司 Omni-directional panoramic image generating method and device
CN106683045A (en) * 2016-09-28 2017-05-17 深圳市优象计算技术有限公司 Binocular camera-based panoramic image splicing method
US20180176465A1 (en) * 2016-12-16 2018-06-21 Prolific Technology Inc. Image processing method for immediately producing panoramic images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106357976A (en) * 2016-08-30 2017-01-25 深圳市保千里电子有限公司 Omni-directional panoramic image generating method and device
CN106683045A (en) * 2016-09-28 2017-05-17 深圳市优象计算技术有限公司 Binocular camera-based panoramic image splicing method
US20180176465A1 (en) * 2016-12-16 2018-06-21 Prolific Technology Inc. Image processing method for immediately producing panoramic images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HE Linfei; ZHU Yu; LIN Jiajun; HUANG Junjian; CHEN Xudong: "Binocular fisheye panoramic image generation based on spherical space matching", Computer Applications and Software, no. 02 *
CAO Libo; XIA Jiahao; LIAO Jiacai; ZHANG Guanjun; ZHANG Ruifeng: "Fast generation method of vehicle-mounted panorama based on a 3D spherical surface", China Journal of Highway and Transport, no. 01 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116245748A (en) * 2022-12-23 2023-06-09 珠海视熙科技有限公司 Distortion correction method, device, equipment, system and storage medium for ring-looking lens
CN116245748B (en) * 2022-12-23 2024-04-26 珠海视熙科技有限公司 Distortion correction method, device, equipment, system and storage medium for ring-looking lens
CN117893719A (en) * 2024-03-15 2024-04-16 鹰驾科技(深圳)有限公司 Method and system for splicing self-adaptive vehicle body in all-around manner
CN117893719B (en) * 2024-03-15 2024-12-03 鹰驾科技(深圳)有限公司 Method and system for splicing self-adaptive vehicle body in all-around manner
CN117935127A (en) * 2024-03-22 2024-04-26 国任财产保险股份有限公司 Intelligent damage assessment method and system for panoramic video exploration
CN117935127B (en) * 2024-03-22 2024-06-04 国任财产保险股份有限公司 Intelligent damage assessment method and system for panoramic video exploration

Also Published As

Publication number Publication date
CN114640801B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
CN114640801B (en) A vehicle-side panoramic view assisted driving system based on image fusion
CN108263283B (en) Method for calibrating and splicing panoramic all-round looking system of multi-marshalling variable-angle vehicle
CN107133988B (en) Calibration method and calibration system for camera in vehicle-mounted panoramic looking-around system
US9858639B2 (en) Imaging surface modeling for camera modeling and virtual view synthesis
CN102045546B (en) Panoramic parking assist system
CN106952311B (en) Auxiliary parking system and method based on panoramic stitching data mapping table
US8553081B2 (en) Apparatus and method for displaying an image of vehicle surroundings
CN103770706B (en) Dynamic reversing mirror indicating characteristic
CN110381255A (en) Using the Vehicular video monitoring system and method for 360 panoramic looking-around technologies
CN109087251B (en) Vehicle-mounted panoramic image display method and system
CN109435852A (en) A kind of panorama type DAS (Driver Assistant System) and method for large truck
CN110363085B (en) Method for realizing looking around of heavy articulated vehicle based on articulation angle compensation
US20100171828A1 (en) Driving Assistance System And Connected Vehicles
CN113468991B (en) Parking space detection method based on panoramic video
JP2008077628A (en) Image processor and vehicle surrounding visual field support device and method
CN103247030A (en) Fisheye image correction method of vehicle panoramic display system based on spherical projection model and inverse transformation model
CN101442618A (en) Method for synthesizing 360 DEG ring-shaped video of vehicle assistant drive
CN105763854A (en) Omnidirectional imaging system based on monocular camera, and imaging method thereof
KR20040111329A (en) Drive assisting system
CN102881016A (en) Vehicle 360-degree surrounding reconstruction method based on internet of vehicles
CN111968184B (en) Method, device and medium for realizing view follow-up in panoramic looking-around system
CN102291541A (en) Virtual synthesis display system of vehicle
CN102164274A (en) Vehicle-mounted virtual panoramic system with variable field of view
CN110736472A (en) An indoor high-precision map representation method based on the fusion of vehicle surround view image and millimeter-wave radar
CN110689506A (en) Panoramic stitching method, automotive panoramic stitching method and panoramic system thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant