
CN110884501B - Vehicle perception data processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN110884501B
CN110884501B (application CN201911202063.4A)
Authority
CN
China
Prior art keywords
vehicle
model
object information
data
moving object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911202063.4A
Other languages
Chinese (zh)
Other versions
CN110884501A
Inventor
李佳
颜卿
袁一
潘晓良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Nonda Intelligent Technology Co ltd
Original Assignee
Shanghai Nonda Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Nonda Intelligent Technology Co ltd filed Critical Shanghai Nonda Intelligent Technology Co ltd
Priority to CN201911202063.4A
Publication of CN110884501A
Application granted
Publication of CN110884501B
Active
Anticipated expiration

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001Details of the control system
    • B60W2050/0019Control system elements or transfer functions
    • B60W2050/0028Mathematical models, e.g. for simulation
    • B60W2050/0031Mathematical model of the vehicle
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001Details of the control system
    • B60W2050/0043Signal treatments, identification of variables or parameters, parameter estimation or state estimation

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a vehicle perception data processing method and device, electronic equipment, and a storage medium, wherein the method comprises: obtaining vehicle object information of a second vehicle ahead of a first vehicle, the vehicle object information comprising at least one of appearance data, relative position data, relative distance data, brake data, and vehicle identification information of the corresponding vehicle; receiving moving object information, transmitted by the second vehicle, of a moving object in front of the second vehicle; and displaying a first model of the first vehicle, a second model of the second vehicle, and a third model of the moving object in a display interface according to the vehicle object information and the moving object information of the second vehicle. The invention can accurately feed back to the driver, in real time, the situation of the vehicle ahead and of the moving object one vehicle further ahead, so that the information fed back can comprehensively and effectively assist the driver's driving behavior, thereby improving safety.

Description

Vehicle perception data processing method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of vehicles, and in particular, to a vehicle sensing data processing method, device, electronic device, and storage medium.
Background
During the running of a vehicle, the vehicle may detect the external environment using functions such as ADAS and, based on the detection results, guide the driving behavior of the driver.
In the prior art, a vehicle can detect only the vehicle and pedestrians directly ahead of it; it cannot detect the vehicles and pedestrians in front of the preceding vehicle. If the preceding vehicle is a large vehicle such as a truck or a bus, the situation in front of that large vehicle cannot be known, which is unfavorable to the driver's driving behavior.
Therefore, in the prior art a vehicle can perceive only the nearest vehicle ahead; such limited information can hardly assist the driver's driving behavior comprehensively and effectively.
Disclosure of Invention
The invention provides a vehicle perception data processing method and device, electronic equipment, and a storage medium, aiming to solve the problems that a vehicle can perceive only the nearest vehicle ahead and that such limited information can hardly assist the driver's driving behavior comprehensively and effectively.
According to a first aspect of the present invention, a vehicle perception data processing method is provided, applied to an intelligent vehicle-mounted terminal of a first vehicle, and comprising:
obtaining vehicle object information of a second vehicle ahead of the first vehicle; the vehicle object information comprises at least one of appearance data, relative position data, relative distance data, brake data and vehicle identification information of a corresponding vehicle;
receiving moving object information in front of the second vehicle sent by the second vehicle, the moving object information being used for describing a moving object in front of the second vehicle;
and respectively displaying a first model of the first vehicle, a second model of the second vehicle and a third model of the moving object in a display interface according to the vehicle object information and the moving object information of the second vehicle.
Optionally, the vehicle object information and the moving object information of the second vehicle are detected by an ADAS detection component of the corresponding vehicle.
Optionally, the moving object information includes at least one of:
pedestrian information of a pedestrian in front of the second vehicle;
object information of an object in front of the second vehicle;
vehicle object information of a third vehicle ahead of the second vehicle.
Optionally, the moving object information includes relative position data and/or relative distance data of the moving object with respect to the second vehicle;
displaying a first model of the first vehicle, a second model of the second vehicle, and a third model of the moving object in an interface according to the vehicle object information and the moving object information of the second vehicle, respectively, including:
determining a first relative parameter according to relative position data and/or relative distance data of the second vehicle relative to the first vehicle, wherein the first relative parameter is used for representing the relative position and/or relative distance between the first model and the second model;
determining a second relative parameter from the relative position data and/or relative distance data of the moving object relative to the second vehicle, the second relative parameter being used to characterize the relative position and/or relative distance between the second model and the third model;
displaying the first model, the second model and the third model in the interface according to the first relative parameter and the second relative parameter.
Optionally, if the moving object information includes vehicle object information of a third vehicle in front of the second vehicle, and the vehicle object information further includes the appearance data and/or the vehicle identification data, then:
the second model is selected from a model library according to the appearance data and/or the vehicle identification data of the second vehicle, and the third model is selected from the model library according to the appearance data and/or the vehicle identification data of the third vehicle.
Optionally, if the moving object information includes vehicle object information of a third vehicle in front of the second vehicle, and the vehicle object information further includes braking data, then:
displaying a first model of the first vehicle, a second model of the second vehicle, and a third model of the moving object in a display interface according to the vehicle object information and the moving object information of the second vehicle, respectively, including:
and representing the braking data of the second vehicle by using the braking display unit of the second model in the display interface, and representing the braking data of the third vehicle by using the braking display unit of the third model in the display interface.
Optionally, in the display interface, the first model, the second model, and the third model are displayed in a perspective manner, or: in the display interface, the first model, the second model and the third model are displayed in a top view mode.
Optionally, the first vehicle and the second vehicle communicate with each other through a 4G LTE module or a 5G V2X module.
According to a second aspect of the present invention, there is provided a vehicle-perceived data processing apparatus comprising:
an acquisition module for acquiring vehicle object information of a second vehicle in front of a first vehicle; the vehicle object information comprises at least one of appearance data, relative position data, relative distance data, brake data and vehicle identification information of a corresponding vehicle;
the receiving module is used for receiving moving object information in front of the second vehicle, which is sent by the second vehicle, and the moving object information is used for describing a moving object in front of the second vehicle;
and the display module is used for respectively displaying the first model of the first vehicle, the second model of the second vehicle and the third model of the moving object in a display interface according to the vehicle object information and the moving object information of the second vehicle.
According to a third aspect of the invention, there is provided an electronic device comprising a memory and a processor,
the memory is used for storing codes;
the processor is configured to execute the code in the memory to implement the method according to the first aspect and its alternatives.
According to a fourth aspect of the present invention, there is provided a storage medium having a program stored thereon, wherein the program, when executed by a processor, implements the method of the first aspect and its alternatives.
According to the vehicle perception data processing method and device, electronic equipment, and storage medium provided by the invention, the perception result of the first vehicle regarding the vehicle ahead (namely the vehicle object information of the second vehicle) and the perception result of that vehicle regarding the moving object in front of it (namely the moving object information) can be represented by models in a display interface. The real-time situation of the moving object one vehicle further ahead can thus be accurately fed back to the driver, and the information fed back can comprehensively and effectively assist the driver's driving behavior, thereby improving safety.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart of a vehicle awareness data processing method according to an embodiment of the invention;
FIG. 2 is a first schematic interface diagram of a display interface according to an embodiment of the present invention;
FIG. 3 is a second schematic interface diagram of a display interface according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating step S13 according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of program modules of a vehicle-aware data processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
FIG. 1 is a flow chart illustrating a method for processing vehicle awareness data according to an embodiment of the invention.
The vehicle perception data processing method of this embodiment can be applied to an intelligent vehicle-mounted terminal.
The intelligent vehicle-mounted terminal may be the head unit of the vehicle, any other intelligent device connected to the head unit, or the driver's own terminal device such as a mobile phone or a tablet computer.
Referring to fig. 1, the vehicle perception data processing method, applied to an intelligent vehicle-mounted terminal of a first vehicle, includes:
S11: obtaining vehicle object information of a second vehicle ahead of the first vehicle;
S12: receiving moving object information in front of the second vehicle, which is transmitted by the second vehicle.
The information obtained in step S11 may be acquired by the first vehicle's own equipment; for example, it may be detected by an ADAS detection component of the first vehicle. In addition to the vehicle object information of the second vehicle, a pedestrian or other moving object in front of the first vehicle may also be detected and displayed in the interface. Likewise, the moving object information may be detected by an ADAS detection component of the second vehicle and sent to the first vehicle via the corresponding communication module.
ADAS here refers to an Advanced Driver Assistance System. Its functions may include: using various sensors mounted on the vehicle (millimeter-wave radar, lidar, monocular/binocular cameras, and satellite navigation) to sense the surrounding environment at any time during driving, collect data, identify, detect, and track static and dynamic objects, and perform calculation and analysis in combination with the navigator's map data. The analysis results for the preceding vehicle may serve as the vehicle object information of this embodiment, and the analysis results for a preceding pedestrian, animal, or other moving object may serve as moving object information other than vehicle object information.
In one embodiment, the moving object information includes at least one of:
pedestrian information of a pedestrian in front of the second vehicle;
object information of an object in front of the second vehicle;
vehicle object information of a third vehicle ahead of the second vehicle.
Specifically, whether the moving object is a vehicle, a general object, or a pedestrian, the moving object information may include relative position data and relative distance data between the corresponding moving object and the vehicle, and may further include speed data, identification data, acceleration data, shape data, and the like. If the moving object information is vehicle object information, it may include, for example, at least one of shape data, relative position data, relative distance data, braking data, and vehicle identification information of the corresponding vehicle.
The shape data may, for example, describe the width, height, and other dimensions of a vehicle, pedestrian, or object, and in some scenes may also describe their design lines and contours. For example, the shape data of a vehicle acting as the moving object may be determined by detection on captured images of the vehicle's rear.
The relative position data may be data representing the relative position between vehicles, or between a vehicle and a moving object other than a vehicle (e.g., a pedestrian). If the moving object is a third vehicle, the third vehicle and the second vehicle may be in the same lane or in different lanes, and the relative position data may specifically indicate whether the third vehicle is ahead in the own lane or ahead in an adjacent lane. As another example, the positional relationship between vehicles may change as the lane curves, and the relative position data may characterize this as well. The relative position may refer to a relative direction and may also include a relative distance.
Further, the vehicles referred to above may be automobiles or non-automobiles such as bicycles.
The relative distance data may be any data characterizing the distance between vehicles, or between a vehicle and a moving object other than a vehicle (e.g., a pedestrian); it can be detected using any ranging sensor.
The braking data may be any data representing whether a vehicle is braking; for example, the braking data may be obtained from collected images of the brake lights at the vehicle's rear.
The identification information may be any information that can distinguish between vehicles or other moving objects, such as license plate information, vehicle brand information, or the identity represented by persons on the road (such as police or sanitation workers); the identification information of a vehicle may be recognized from collected images of the vehicle's rear.
The speed data and acceleration data may be, for example, the absolute speed and absolute acceleration of the moving object, or the relative speed and relative acceleration.
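As a concrete illustration, the fields enumerated above could be gathered into a single record. The following Python sketch is purely hypothetical (the embodiment does not prescribe any data layout); all field names are assumptions:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MovingObjectInfo:
    """Illustrative container for the moving object information fields
    described above; field names are assumptions, not from this embodiment."""
    kind: str                                    # "vehicle", "pedestrian", or "object"
    relative_position: Tuple[float, float]       # (lateral, longitudinal) metres w.r.t. the sensing vehicle
    relative_distance: float                     # metres, from any ranging sensor
    shape: Optional[Tuple[float, float]] = None  # (width, height) metres
    braking: Optional[bool] = None               # e.g. inferred from brake-light images
    identification: Optional[str] = None         # e.g. a licence-plate string
    relative_speed: Optional[float] = None       # metres per second

# A third vehicle 30 m directly ahead of the second vehicle, currently braking.
info = MovingObjectInfo(kind="vehicle",
                        relative_position=(0.0, 30.0),
                        relative_distance=30.0,
                        braking=True)
```

Optional fields default to `None`, matching the "at least one of" phrasing: any subset of the data items may be present.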
To implement step S12, communication between the first vehicle and the second vehicle may be realized in any manner; for example, the first vehicle and the second vehicle communicate with each other through a 4G LTE module or a 5G V2X module.
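A minimal sketch of what the second vehicle might transmit over such a module is shown below. The JSON field names are illustrative assumptions, not a standardized V2X message format:

```python
import json

def encode_moving_object_message(kind, relative_position, relative_distance, braking):
    """Serialize one moving-object record for transmission to the first vehicle."""
    return json.dumps({
        "kind": kind,                            # "vehicle", "pedestrian", or "object"
        "relative_position": relative_position,  # w.r.t. the second vehicle, metres
        "relative_distance": relative_distance,  # metres
        "braking": braking,                      # True if the object is braking
    })

def decode_moving_object_message(raw):
    """Parse a received moving-object record on the first vehicle's side."""
    return json.loads(raw)

msg = encode_moving_object_message("vehicle", [0.0, 30.0], 30.0, True)
```

The same payload could equally be carried over LTE or a 5G V2X sidelink; only the transport differs.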
After step S12, the method may further include:
S13: displaying a first model of the first vehicle, a second model of the second vehicle, and a third model of the moving object, respectively, in a display interface according to the vehicle object information and the moving object information of the second vehicle.
In one example, the information measured by the second vehicle may be projected into the coordinate system of the first vehicle so that the two sets of information can be combined. The displayed result can then represent the vehicle object information and the moving object information to a certain extent. This embodiment thus represents, with models in the display interface, both the first vehicle's perception of the vehicle ahead (namely the vehicle object information of the second vehicle) and that vehicle's perception of the moving object in front of it (namely the moving object information). The real-time condition of vehicles or other moving objects one vehicle further ahead can therefore be accurately fed back to the driver, and the information fed back can comprehensively and effectively assist the driver's driving behavior, improving safety.
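The coordinate-system projection mentioned here can be sketched as a composition of relative offsets. This is a simplified illustration that assumes both vehicles share the same heading; a real implementation would also account for the relative yaw between the two vehicles:

```python
def project_to_first_vehicle(second_in_first, object_in_second):
    """Place a moving object, measured relative to the second vehicle, into
    the first vehicle's coordinate frame by composing the two offsets.
    Offsets are (lateral, longitudinal) tuples in metres."""
    return (second_in_first[0] + object_in_second[0],
            second_in_first[1] + object_in_second[1])

# Second vehicle 25 m ahead in the same lane; a moving object 30 m ahead of
# it ends up 55 m ahead of the first vehicle.
third_in_first = project_to_first_vehicle((0.0, 25.0), (0.0, 30.0))
```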
FIG. 2 is a first schematic interface diagram of a display interface according to an embodiment of the present invention; fig. 3 is a second interface schematic diagram of a display interface according to an embodiment of the present invention.
Taking fig. 2 as an example, in the display interface the first model, the second model, and the third model are displayed in a perspective manner. The perspective manner may specifically refer to a perspective effect generated with the third model as the origin, which makes it easier for the driver to project the vehicle into the displayed scene and thereby understand it more intuitively.
Taking fig. 3 as an example, in the display interface the first model, the second model, and the third model are displayed in a top view manner. This can match the display mode of a navigation interface, making it convenient for the driver to combine navigation with the display interface. In addition, combined with map information, the first, second, and third models can be displayed within the roads on the map, so that the driver can intuitively grasp the driving route.
Fig. 4 is a flowchart illustrating step S13 according to an embodiment of the present invention.
The embodiment shown in fig. 4 may be applied to a case where the vehicle object information includes the relative position data and/or the relative distance data.
Step S13 may include:
S131: determining a first relative parameter from relative position data and/or relative distance data of the second vehicle relative to the first vehicle;
S132: determining a second relative parameter from the relative position data and/or relative distance data of the moving object relative to the second vehicle.
The first relative parameter may be understood as characterizing the relative position and/or relative distance between the first model and the second model; the relative position and relative distance between models may be, for example, the result of projecting and converting the relative position and relative distance between the vehicles.
The second relative parameter may be understood as characterizing the relative position and/or relative distance between the second model and the third model; the relative position and relative distance between models may be, for example, the result of projecting and converting the relative position and relative distance between the vehicle and the moving object.
Steps S131 and S132 may be performed simultaneously or sequentially, and either may be performed first; their order is arbitrary and is not limited to that shown in fig. 4.
Further, before step S131, the method may also include: determining that the distance between the second vehicle and the moving object (e.g., the third vehicle) and/or the distance between the first vehicle and the second vehicle is less than a threshold. In other words, only the situation when the objects are close enough is characterized; if they are far away, the corresponding models need not be displayed.
Referring to fig. 4, after steps S131 and S132, the method may include:
S133: displaying the first model, the second model, and the third model in the interface according to the first relative parameter and the second relative parameter.
Through the above embodiment, the relative relationships between the vehicles, and/or between a vehicle and the moving object, can be characterized by the positions and/or relative distances of the first, second, and third models, giving the driver sufficient and accurate knowledge of those relationships.
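Steps S131 to S133, together with the optional distance threshold mentioned earlier, can be sketched as follows. The pixel scale, screen origin, and threshold value are arbitrary assumptions chosen for illustration:

```python
DISPLAY_THRESHOLD_M = 150.0  # assumed cut-off: farther objects are not drawn

def to_screen(offset_m, pixels_per_metre=4.0, origin=(160, 420)):
    """Map a (lateral, longitudinal) offset in metres to top-view screen
    pixels; the first model sits at `origin`, and a greater longitudinal
    distance moves a model up the screen."""
    lat, lon = offset_m
    return (int(origin[0] + lat * pixels_per_metre),
            int(origin[1] - lon * pixels_per_metre))

def layout_models(second_rel, object_rel):
    """Determine the first relative parameter (second vehicle w.r.t. first)
    and the second relative parameter (moving object w.r.t. second vehicle),
    then position the corresponding models, skipping any beyond the threshold."""
    positions = {"first": to_screen((0.0, 0.0))}
    if second_rel[1] < DISPLAY_THRESHOLD_M:
        positions["second"] = to_screen(second_rel)
        if object_rel[1] < DISPLAY_THRESHOLD_M:
            object_abs = (second_rel[0] + object_rel[0],
                          second_rel[1] + object_rel[1])
            positions["third"] = to_screen(object_abs)
    return positions
```

With a second vehicle 25 m ahead and a moving object 30 m beyond it, all three models are placed in a vertical column up the screen; an object beyond the threshold is simply omitted.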
In one embodiment, in the case that the vehicle object information further includes braking data, after step S133, the method may further include:
S134: representing the braking data of the second vehicle by the brake display unit of the second model in the display interface, and representing the braking data of the third vehicle by the brake display unit of the third model in the display interface.
The brake display unit may be, for example, a brake lamp in the model, or another display unit independent of the model; whether the corresponding vehicle is braking may be represented in any manner, such as text, color, or graphics. In addition, braking may also be represented by the color of the whole model, e.g., of the first model, the second model, and so on.
In a specific implementation, only steps S131 to S133 may be implemented without step S134, in which case whether a vehicle is braking need not be characterized; alternatively, only step S134 may be performed without steps S131 to S133, in which case the relative positions and distances between the models may be fixed.
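One way the brake display unit described above could behave, using the color-plus-text manners named in the text. The color values and badge string are illustrative assumptions:

```python
def brake_display(model_name, braking):
    """Render state for a model's brake display unit: a lamp colour plus an
    optional text badge indicating that the corresponding vehicle is braking."""
    return {
        "model": model_name,
        "lamp_colour": "red" if braking else "grey",
        "badge": "BRAKING" if braking else "",
    }

# Second vehicle braking, third vehicle not braking.
states = [brake_display("second", True), brake_display("third", False)]
```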
In one embodiment, the vehicle object information further comprises the shape data and/or vehicle identification data; the second model is then selected from a model library according to the shape data and/or vehicle identification data of the second vehicle, and the third model is selected from the model library according to the shape data and/or vehicle identification data of the third vehicle.
In the above embodiment, the model library may merely distinguish different types of vehicles (e.g., passenger cars, trucks, large vehicles, small vehicles, seven-seaters, five-seaters) with different models, or it may further distinguish models for different brands and models of vehicles.
Through the above embodiment, different vehicles can conveniently be displayed differently, helping the driver understand the situation of the vehicles ahead more intuitively.
In a specific implementation, this embodiment does not exclude representing vehicles with a unified model.
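Selection from the model library might key first on identification data and then fall back to coarse shape-based classification. Every key, file name, and threshold below is an illustrative assumption rather than anything prescribed by this embodiment:

```python
MODEL_LIBRARY = {
    "truck": "truck_model.obj",
    "bus": "bus_model.obj",
    "car": "car_model.obj",  # unified fallback model
}

# Hypothetical mapping from recognized brand identifiers to model types.
BRAND_TO_TYPE = {"heavy_truck_brand": "truck"}

def select_model(shape=None, identification=None):
    """Choose a display model from shape data (width, height in metres)
    and/or vehicle identification data, defaulting to a generic car model."""
    if identification in BRAND_TO_TYPE:
        return MODEL_LIBRARY[BRAND_TO_TYPE[identification]]
    if shape is not None:
        width, height = shape
        if height > 3.0:  # tall profile: treat as a large vehicle
            return MODEL_LIBRARY["bus" if width < 2.6 else "truck"]
    return MODEL_LIBRARY["car"]
```

A finer-grained library could store one model per brand and variant; the fallback branch corresponds to the unified-model implementation the text also permits.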
In summary, the vehicle perception data processing method provided by this embodiment can represent, with models in the display interface, the perception result of the first vehicle regarding the vehicle ahead (namely the vehicle object information of the second vehicle) and the perception result of that vehicle regarding the moving object in front of it (namely the moving object information). The real-time situation of the moving object one vehicle further ahead can thus be accurately fed back to the driver, and the information fed back can comprehensively and effectively assist the driver's driving behavior and improve safety.
FIG. 5 is a block diagram of a program of a vehicle sensing data processing device according to an embodiment of the present invention.
Referring to fig. 5, a vehicle-aware data processing apparatus 300 includes:
an obtaining module 301, configured to obtain vehicle object information of a second vehicle in front of a first vehicle; the vehicle object information comprises at least one of appearance data, relative position data, relative distance data, brake data and vehicle identification information of a corresponding vehicle;
a receiving module 302, configured to receive moving object information in front of the second vehicle sent by the second vehicle, where the moving object information is used to describe a moving object in front of the second vehicle;
a display module 303, configured to display, in a display interface, the first model of the first vehicle, the second model of the second vehicle, and the third model of the moving object according to the vehicle object information and the moving object information of the second vehicle, respectively.
Optionally, the vehicle object information and the moving object information of the second vehicle are detected by an ADAS detection component of the corresponding vehicle.
Optionally, the moving object information includes at least one of:
pedestrian information of a pedestrian in front of the second vehicle;
object information of an object in front of the second vehicle;
vehicle object information of a third vehicle ahead of the second vehicle.
Optionally, the moving object information includes relative position data and/or relative distance data of the moving object with respect to the second vehicle;
the display module 303 is specifically configured to:
determining a first relative parameter according to relative position data and/or relative distance data of the second vehicle relative to the first vehicle, wherein the first relative parameter is used for representing the relative position and/or relative distance between the first model and the second model;
determining a second relative parameter from the relative position data and/or relative distance data of the moving object relative to the second vehicle, the second relative parameter being used to characterize the relative position and/or relative distance between the second model and the third model;
displaying the first model, the second model and the third model in the interface according to the first relative parameter and the second relative parameter.
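As a concrete illustration of the two relative parameters described above, the following sketch (coordinate convention and function names are assumptions, not part of the disclosure) places the three models in a common display frame: the first model sits at the origin, the second model is placed by its offset from the first vehicle, and the third model is placed by composing the two offsets, since the moving object is only known relative to the second vehicle:

```python
# Hypothetical sketch of the display-module placement logic.
# Coordinates are (lateral, longitudinal) offsets; the first vehicle
# (ego) is the origin of the display frame.

def place_models(second_rel, moving_rel):
    """second_rel: offset of the second vehicle relative to the first vehicle;
    moving_rel: offset of the moving object relative to the second vehicle.
    Returns display positions for the first, second, and third models."""
    first_pos = (0.0, 0.0)                      # first model at ego origin
    # First relative parameter: second model placed by its offset from ego.
    second_pos = (second_rel[0], second_rel[1])
    # Second relative parameter: third model placed by composing the two
    # offsets, because the moving object is reported relative to vehicle 2.
    third_pos = (second_rel[0] + moving_rel[0],
                 second_rel[1] + moving_rel[1])
    return first_pos, second_pos, third_pos
```

For example, with the second vehicle 20 m ahead and the moving object 15 m ahead of it, the third model lands 35 m ahead of the first model.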
Optionally, if the moving object information includes vehicle object information of a third vehicle in front of the second vehicle, and the vehicle object information further includes the appearance data and/or the vehicle identification data, then:
the second model is selected and determined in a model base according to the appearance data and/or the vehicle identification data of the second vehicle, and the third model is selected and determined in the model base according to the appearance data and/or the vehicle identification data of the third vehicle.
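The model-base selection described above might be sketched as a simple lookup (the model base keys, file names, and fallback behavior are all illustrative assumptions, not disclosed details), preferring vehicle identification data and falling back to a coarse appearance category:

```python
# Hypothetical model-base lookup: choose a display model from vehicle
# identification data first, then from appearance data, else a default.
MODEL_BASE = {
    "sedan": "generic_sedan.obj",
    "suv": "generic_suv.obj",
    "truck": "generic_truck.obj",
}
DEFAULT_MODEL = "generic_vehicle.obj"

def select_model(vehicle_id_data=None, appearance_data=None):
    # Prefer exact identification (e.g. a recognized model code) ...
    if vehicle_id_data and vehicle_id_data in MODEL_BASE:
        return MODEL_BASE[vehicle_id_data]
    # ... otherwise fall back to the appearance category.
    if appearance_data and appearance_data in MODEL_BASE:
        return MODEL_BASE[appearance_data]
    return DEFAULT_MODEL
```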
Optionally, if the moving object information includes vehicle object information of a third vehicle in front of the second vehicle, and the vehicle object information further includes braking data, then:
the display module 303 is specifically configured to:
represent the braking data of the second vehicle with the braking display unit of the second model in the display interface, and represent the braking data of the third vehicle with the braking display unit of the third model in the display interface.
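A minimal sketch of the braking display unit, under the assumption (not stated in the disclosure) that each model carries a toggleable brake-light element driven by the received braking data:

```python
# Hypothetical model object whose brake display unit (e.g. a tail-light
# element) is toggled from the braking data received for that vehicle.
class VehicleModel:
    def __init__(self, name):
        self.name = name
        self.brake_lit = False      # state of the braking display unit

    def apply_brake_data(self, braking):
        # Mirror the received braking data onto the display unit.
        self.brake_lit = bool(braking)
        return self
```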
Optionally, in the display interface, the first model, the second model and the third model are displayed in a perspective view, or in a top view.
Optionally, the first vehicle and the second vehicle communicate with each other through a 4G LTE module or a 5G V2X module.
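The vehicle-to-vehicle exchange over the 4G LTE or 5G V2X link could carry the moving object information in any serialized form; the sketch below uses JSON with invented field names purely for illustration (the disclosure does not specify a wire format):

```python
# Hypothetical wire format for the vehicle-to-vehicle exchange: the
# second vehicle serializes its forward perception result, and the
# first vehicle deserializes it on receipt. Field names are assumptions.
import json

def encode_moving_object_info(obj_type, rel_pos, rel_dist, braking=False):
    return json.dumps({
        "type": obj_type,      # "pedestrian", "object", or "vehicle"
        "rel_pos": rel_pos,    # relative to the sending (second) vehicle
        "rel_dist": rel_dist,
        "braking": braking,
    })

def decode_moving_object_info(payload):
    return json.loads(payload)
```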
In summary, the vehicle perception data processing device provided by this embodiment represents, with models in a display interface, both the perception result for the vehicle ahead (i.e., the vehicle object information of the second vehicle) and that vehicle's perception result for the moving object ahead of it (i.e., the moving object information). The real-time situation of vehicles one or more positions ahead can thereby be accurately fed back to the driver, comprehensively and effectively assisting the driver's driving behavior and improving safety.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Referring to fig. 6, an electronic device 40 is provided, which includes:
a processor 41; and
a memory 42 for storing executable instructions of the processor;
wherein the processor 41 is configured to perform the above-mentioned method via execution of the executable instructions.
The processor 41 is capable of communicating with the memory 42 via the bus 43.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-mentioned method.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware instructed by a program. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described therein may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A vehicle perception data processing method is applied to an intelligent vehicle-mounted terminal of a first vehicle, and is characterized by comprising the following steps:
obtaining vehicle object information of a second vehicle ahead of the first vehicle; the vehicle object information comprises at least one of appearance data, relative position data, relative distance data, brake data and vehicle identification information of a corresponding vehicle;
receiving moving object information in front of the second vehicle sent by the second vehicle, wherein the moving object information is used for describing a moving object in front of the second vehicle;
respectively displaying a first model of the first vehicle, a second model of the second vehicle and a third model of the moving object in a display interface according to the vehicle object information and the moving object information of the second vehicle;
the moving object information includes relative position data and/or relative distance data of the moving object with respect to the second vehicle;
displaying a first model of the first vehicle, a second model of the second vehicle, and a third model of the moving object in an interface according to the vehicle object information and the moving object information of the second vehicle, respectively, including:
determining a first relative parameter according to relative position data and/or relative distance data of the second vehicle relative to the first vehicle, wherein the first relative parameter is used for representing the relative position and/or relative distance between the first model and the second model;
determining a second relative parameter from the relative position data and/or relative distance data of the moving object relative to the second vehicle, the second relative parameter being used to characterize the relative position and/or relative distance between the second model and the third model;
displaying the first model, the second model and the third model in the interface according to the first relative parameter and the second relative parameter;
if the moving object information includes vehicle object information of a third vehicle in front of the second vehicle, and the vehicle object information includes the appearance data and/or the vehicle identification data, then:
the second model is selected and determined in a model base according to the appearance data and/or the vehicle identification data of the second vehicle, and the third model is selected and determined in the model base according to the appearance data and/or the vehicle identification data of the third vehicle.
2. The method of claim 1, wherein the vehicle object information and the moving object information of the second vehicle are detected by an ADAS detection component of the corresponding vehicle.
3. The method of claim 1, wherein the moving object information comprises at least one of:
pedestrian information of a pedestrian in front of the second vehicle;
object information of an object in front of the second vehicle;
vehicle object information of a third vehicle ahead of the second vehicle.
4. The method according to claim 1, wherein if the moving object information includes vehicle object information of a third vehicle ahead of the second vehicle, the vehicle object information including braking data, then:
displaying a first model of the first vehicle, a second model of the second vehicle, and a third model of the moving object in a display interface according to the vehicle object information and the moving object information of the second vehicle, respectively, including:
representing the braking data of the second vehicle with the braking display unit of the second model in the display interface, and representing the braking data of the third vehicle with the braking display unit of the third model in the display interface.
5. The method of any one of claims 1 to 4, wherein, in the display interface, the first model, the second model and the third model are displayed in a perspective view, or in a top view.
6. The method according to any one of claims 1 to 4, wherein the first vehicle and the second vehicle communicate via a 4G LTE module or a 5G V2X module.
7. A vehicle perception data processing apparatus, comprising:
an acquisition module for acquiring vehicle object information of a second vehicle in front of a first vehicle; the vehicle object information comprises at least one of appearance data, relative position data, relative distance data, brake data and vehicle identification information of a corresponding vehicle;
the receiving module is used for receiving moving object information in front of the second vehicle, which is sent by the second vehicle, and the moving object information is used for describing a moving object in front of the second vehicle;
the display module is used for respectively displaying a first model of the first vehicle, a second model of the second vehicle and a third model of the moving object in a display interface according to the vehicle object information and the moving object information of the second vehicle;
the moving object information includes relative position data and/or relative distance data of the moving object with respect to the second vehicle;
the display module is specifically configured to:
determining a first relative parameter according to relative position data and/or relative distance data of the second vehicle relative to the first vehicle, wherein the first relative parameter is used for representing the relative position and/or relative distance between the first model and the second model;
determining a second relative parameter from the relative position data and/or relative distance data of the moving object relative to the second vehicle, the second relative parameter being used to characterize the relative position and/or relative distance between the second model and the third model;
displaying the first model, the second model and the third model in the interface according to the first relative parameter and the second relative parameter;
if the moving object information includes vehicle object information of a third vehicle in front of the second vehicle, and the vehicle object information includes the appearance data and/or the vehicle identification data, then:
the second model is selected and determined in a model base according to the appearance data and/or the vehicle identification data of the second vehicle, and the third model is selected and determined in the model base according to the appearance data and/or the vehicle identification data of the third vehicle.
8. An electronic device, comprising a memory and a processor,
the memory is used for storing codes;
the processor to execute code in the memory to implement the method of any one of claims 1 to 6.
9. A storage medium having a program stored thereon, the program being characterized in that it implements the method of any one of claims 1 to 6 when executed by a processor.
CN201911202063.4A 2019-11-29 2019-11-29 Vehicle perception data processing method and device, electronic equipment and storage medium Active CN110884501B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911202063.4A CN110884501B (en) 2019-11-29 2019-11-29 Vehicle perception data processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN110884501A CN110884501A (en) 2020-03-17
CN110884501B true CN110884501B (en) 2021-09-10

Family

ID=69749578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911202063.4A Active CN110884501B (en) 2019-11-29 2019-11-29 Vehicle perception data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110884501B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014198325A1 (en) * 2013-06-13 2014-12-18 Telefonaktiebolaget L M Ericsson (Publ) Controlling vehicle-to-vehicle communication using a distribution scheme
DE102016223579A1 (en) * 2016-11-28 2018-05-30 Robert Bosch Gmbh Method for determining data of a traffic situation
CN108932873A (en) * 2018-08-02 2018-12-04 成都秦川物联网科技股份有限公司 Vehicle safety hidden danger method for early warning and car networking system based on car networking
US10475338B1 (en) * 2018-09-27 2019-11-12 Melodie Noel Monitoring and reporting traffic information
DE102019000435A1 (en) * 2019-01-22 2019-06-06 Daimler Ag Method for visualizing camera data


Similar Documents

Publication Publication Date Title
US11847917B2 (en) Fixation generation for machine learning
US11967109B2 (en) Vehicle localization using cameras
US11610411B2 (en) Driver assistance system and method for displaying traffic information
CN107004360A (en) Radar for vehicle method and system
CN111856417B (en) Performance analysis method, device, terminal and storage medium of vehicle millimeter wave radar
CN110341621B (en) Obstacle detection method and device
CN106882184A (en) Driving safety system based on the pure image procossings of ADAS
CN113435224B (en) Method and device for acquiring 3D information of vehicle
CN110884501B (en) Vehicle perception data processing method and device, electronic equipment and storage medium
CN114503044A (en) System and method for automatically labeling objects in 3D point clouds
US20220101025A1 (en) Temporary stop detection device, temporary stop detection system, and recording medium
CN112805200B (en) Snapshot image of traffic scene
EP4435739A1 (en) Method and system for associating environment perception data with data indicative of a friction condition, computer program, computer-readable storage medium, data processing apparatus, vehicle, environment perception data and use thereof
US20240249493A1 (en) Systems and methods for detecting a driving area in a video
CN110108289A (en) Automobile navigation method, Vehicular navigation system, car-mounted terminal and vehicle
CN112889070A (en) Snapshot images for training road models
WO2020073270A1 (en) Snapshot image of traffic scenario
CN112805716A (en) Snapshot images for training event detectors
CN117894095A (en) Method for recording vehicle data and data recording system
CN115272620A (en) Road information visual presentation method and device, storage medium and electronic equipment
CN117807263A (en) Vehicle data processing method and device, medium, equipment and vehicle
CN113415289A (en) Identification device and method for unmanned vehicle
CN113689691A (en) Traffic detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant