CN111724616A - Method and device for acquiring and sharing data based on artificial intelligence - Google Patents
- Publication number
- CN111724616A (application CN202010525954.XA)
- Authority
- CN
- China
- Prior art keywords
- vehicle-mounted terminal
- road condition
- real-time
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0968—Systems involving transmission of navigation instructions to the vehicle
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
Abstract
The embodiments of the application disclose a method for acquiring and sharing data based on artificial intelligence, comprising the following steps: a first vehicle-mounted terminal obtains an external image captured by a first image sensor and identifies road condition data in the image using a neural network; the first vehicle-mounted terminal obtains three-dimensional point cloud data collected by a first laser radar and fuses the point cloud data with the image; a second vehicle-mounted terminal obtains its own positioning information; the second vehicle-mounted terminal receives the positioning information of the first vehicle-mounted terminal, the road condition data and the distances corresponding to the road conditions sent by the first vehicle-mounted terminal; and the second vehicle-mounted terminal confirms the distance between the first and second vehicle-mounted terminals based on the road condition data, starts real-time position sharing between the two terminals, and performs a real-time navigation operation or a safe road condition prompting operation based on the real-time position sharing, wherein the second vehicle-mounted terminal is located on a second vehicle.
Description
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and an apparatus for data acquisition and sharing based on artificial intelligence.
Background
In the field of the Internet of Vehicles, communication can already be established between different vehicles (V2V communication) to share entertainment content and road conditions between two vehicles. In addition, vehicle-road cooperation is maturing: an on-board unit (OBU) in a vehicle can communicate with a roadside unit (RSU), and the RSU can share road condition information with different vehicles carrying OBUs, realizing vehicle-to-infrastructure (V2I) communication.
As shown in Fig. 1, in a V2I communication system each RSU has a certain propagation radius; vehicles carrying OBUs can communicate with one another and can also share traffic information through the RSU. However, current V2V and V2I technologies cannot perform real-time navigation or obstacle-avoidance operations based on the relationships between different vehicles. For example, they cannot use the road conditions observed by a preceding vehicle to give a following vehicle a safety prompt (such as avoiding a pedestrian or a pothole) or real-time navigation (selecting an optimal route based on those road conditions), which results in low efficiency of use.
Disclosure of Invention
The embodiments of the application provide a method and a device for acquiring and sharing data based on artificial intelligence, which can feed the road conditions observed by a preceding vehicle back to a following vehicle in real time, solving the problem of the low use efficiency of V2V or V2I in the prior art.
The embodiment of the invention provides a data acquisition and sharing method based on artificial intelligence, which comprises the following steps:
the method comprises the steps that a first vehicle-mounted terminal obtains an external image captured by a first image sensor, and identifies road condition data in the image based on a neural network technology, wherein the road condition data comprises traffic signal data, congestion indexes, emergencies and surrounding environment information of a vehicle;
the first vehicle-mounted terminal acquires three-dimensional point cloud data acquired by the first laser radar, fuses the point cloud data with the image and acquires distances corresponding to different road conditions in the image, wherein the first vehicle-mounted terminal, the first image sensor and the first laser radar are positioned on a first vehicle, and the distances corresponding to the different road conditions comprise the distances between different targets in the surrounding environment of the vehicle and the first laser radar;
the second vehicle-mounted terminal acquires self-positioning information;
the second vehicle-mounted terminal directly or indirectly establishes communication with the first vehicle-mounted terminal, and receives first vehicle-mounted terminal positioning information, road condition data and a distance corresponding to the road condition, which are sent by the first vehicle-mounted terminal;
and the second vehicle-mounted terminal confirms the distance between the first vehicle-mounted terminal and the second vehicle-mounted terminal based on the road condition data, starts the real-time position sharing of the first vehicle-mounted terminal and the second vehicle-mounted terminal, and carries out real-time navigation operation or safe road condition prompting operation based on the real-time position sharing, wherein the second vehicle-mounted terminal is positioned on a second vehicle.
Optionally, the starting of the real-time location sharing between the first vehicle-mounted terminal and the second vehicle-mounted terminal, and performing a real-time navigation operation based on the real-time location sharing include:
the second vehicle-mounted terminal determines the position relation between the first vehicle-mounted terminal and the second vehicle-mounted terminal based on self positioning information and the positioning information of the first vehicle-mounted terminal;
the second vehicle-mounted terminal takes the position of the first vehicle-mounted terminal as a destination and takes self positioning information as a starting place to perform navigation planning;
and the second vehicle-mounted terminal selects an optimal navigation path in the navigation planning based on the road condition data sent by the first vehicle-mounted terminal and the distance information corresponding to the road condition.
Optionally, the selecting, by the second vehicle-mounted terminal, an optimal navigation path in the navigation plan based on the road condition data sent by the first vehicle-mounted terminal and the distance information corresponding to the road condition includes:
the second vehicle-mounted terminal determines the navigation path that takes the least time based on a combination of one or more of the congestion degree in the road conditions, an emergency and the surrounding conditions of the vehicle; or,
the second vehicle-mounted terminal judges the surrounding environment state of the first terminal based on the road conditions and the distance information in them, and changes the navigation path in real time based on that state, so as to seek the navigation path with the shortest time or the least congestion.
Optionally, the second vehicle-mounted terminal performs a safe road condition prompting operation, including:
the second vehicle-mounted terminal acquires road condition information of the first vehicle-mounted terminal when sharing the real-time position;
when the road condition information is traffic signal data, the second vehicle-mounted terminal carries out signal lamp blind area prompting;
when the road condition information is a congestion index, the second vehicle-mounted terminal carries out congestion prompt according to the congestion index;
when the road condition information is an emergency, the second vehicle-mounted terminal carries out a detour prompt according to the emergency;
when the road condition information is the surrounding environment information of the vehicle, the second vehicle-mounted terminal calculates the distance between the second vehicle-mounted terminal and the first vehicle-mounted terminal according to the self positioning information and the positioning information of the first vehicle-mounted terminal;
the second vehicle-mounted terminal determines the distances between different target objects and the first vehicle-mounted terminal based on the distances corresponding to the surrounding environment and the road condition of the vehicle;
and the second vehicle-mounted terminal estimates the distance between the different target objects and the second vehicle-mounted terminal based on the distance between the different target objects and the first vehicle-mounted terminal and the distance between the first vehicle-mounted terminal and the second vehicle-mounted terminal, and carries out safety prompt.
Optionally, the second vehicle-mounted terminal directly or indirectly establishes communication with the first vehicle-mounted terminal, and includes:
the second vehicle-mounted terminal and the first vehicle-mounted terminal establish a handshake protocol and communicate through a TCP/IP protocol, or,
and the second vehicle-mounted terminal establishes communication with the first terminal through a Road Side Unit (RSU).
Optionally, the acquiring, by the first vehicle-mounted terminal, three-dimensional point cloud data acquired by the first laser radar, and fusing the point cloud data with the image includes:
calibrating the three-dimensional point cloud data and the image area, and identifying different three-dimensional point cloud data corresponding to different image areas;
calculating the position and the posture of the laser radar sensor under the coordinate system of the image acquisition device; and calculating orientation angles and length, width and height of different target objects based on the corresponding three-dimensional point cloud data in the image.
Optionally, the performing a real-time navigation operation based on the real-time location sharing includes:
and the second vehicle-mounted terminal predicts the path track of the first vehicle-mounted terminal based on the real-time position sharing, dynamically adjusts the navigation path based on the predicted path track, and performs real-time navigation based on the dynamically adjusted navigation path.
The embodiment of the invention provides a data acquisition and sharing device based on artificial intelligence, which comprises: a processor and a memory for storing a computer program capable of running on the processor; the processor is configured to execute the artificial intelligence based data acquisition and sharing method when running the computer program.
The embodiment of the invention provides a computer-readable storage medium, on which computer-executable instructions are stored, and the computer-executable instructions are used for executing the artificial intelligence-based data acquisition and sharing method.
According to the method for acquiring and sharing data based on artificial intelligence provided by the embodiments of the application, the data acquired by the first vehicle-mounted terminal is shared with the second vehicle-mounted terminal, so that the second vehicle-mounted terminal can perform real-time position sharing and safety prompting based on the first vehicle-mounted terminal, and can dynamically optimize the navigation path and issue necessary safety prompts based on the dynamic positions and road conditions. This solves the problem that real-time position sharing and safety prompting cannot be performed in the prior art, improves the utilization of V2V and V2I, improves navigation efficiency, and improves user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments will be briefly introduced below.
FIG. 1 is a prior art Internet of vehicles network architecture diagram;
FIG. 2 is a flow diagram of an artificial intelligence based data acquisition and sharing method in one embodiment;
FIG. 3 is a diagram illustrating a scenario of a method for artificial intelligence data acquisition and sharing in one embodiment;
FIG. 4 is a block diagram of an electronic device in one embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
FIG. 2 is a flow diagram of a method for artificial intelligence based data collection and sharing in one embodiment. The method in the embodiment comprises the following steps:
s101, a first vehicle-mounted terminal acquires an external image captured by a first image sensor, and identifies road condition data in the image based on a neural network technology, wherein the road condition data comprises traffic signal data, congestion indexes, emergency events and vehicle surrounding environment information;
the first vehicle-mounted terminal can be based On a vehicle-mounted communication terminal/vehicle-mounted Unit (OBU), such as a T-Box, a PC and the like, and is similar to the brain of an automobile, On one hand, the first vehicle-mounted terminal can collect data uploaded by each sensor in the vehicle, process the data, and issue various instructions to different sensors to realize vehicle-vehicle interaction and human-vehicle interaction, and On the other hand, the first vehicle-mounted terminal is provided with a wireless communication antenna to realize external wireless communication, and can perform data interaction externally in a bluetooth mode, an infrared mode, an LTE mode, a 5G mode and the like, for example, information interaction with a Road Side Unit (RSU). Meanwhile, the first vehicle-mounted terminal may further include components such as a GPS and a camera for implementing functions such as driver monitoring the DMS and self-positioning.
The first image sensor is mounted and fixed on the outside of the automobile and is used to capture images of the vehicle's surroundings. In general, a plurality of first image sensors can be placed to acquire image data covering 360 degrees around the vehicle; for convenience of description, the number of first image sensors is taken to be one, but in an actual scene several image sensors may coexist and acquire images in several different directions at the same time.
After the first vehicle-mounted terminal acquires the external image captured by the first image sensor, it processes the image with a neural network to obtain road condition data. The neural network can identify the name, size and orientation of target objects outside the vehicle, thereby obtaining traffic signal data, congestion indexes, emergencies and vehicle surrounding environment information. For example, it can identify the state of traffic lights (red, yellow, green), the type of road, the content of traffic signs, the weather, and surrounding vehicles, pedestrians and roadside facilities. Specifically, the first vehicle-mounted terminal performs convolution on the image frames containing the target object with a convolutional neural network and extracts the image features corresponding to the target object; a long short-term memory (LSTM) network assigns a weight to each image frame based on these features, and the motion features of the target object are captured with an optical-flow method on the weighted frames; the name, size and orientation of the target object are then determined from these motion features. The specific neural network recognition algorithm is prior art, for example the algorithm described in Chinese patent CN110843794B, and is not described again in the embodiments of the present invention.
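As an illustrative sketch only (not the recognition network of the cited prior art), the following PyTorch snippet shows how a convolutional backbone and an LSTM could be combined to classify a detected target and regress its coarse size and orientation; the layer sizes, class count and head names are all assumptions.

```python
# Illustrative sketch of the CNN + LSTM pipeline described above.
# All layer sizes, class counts and names are assumptions for illustration;
# the patent itself refers to prior-art recognition algorithms (e.g. CN110843794B).
import torch
import torch.nn as nn

class RoadConditionRecognizer(nn.Module):
    def __init__(self, num_classes: int = 10, hidden_size: int = 128):
        super().__init__()
        # Convolutional backbone: extracts per-frame image features of the target object.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # LSTM: weights the sequence of frame features to capture motion cues.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden_size, batch_first=True)
        # Heads: target class (name) and a coarse size/orientation regression.
        self.cls_head = nn.Linear(hidden_size, num_classes)
        self.box_head = nn.Linear(hidden_size, 4)  # e.g. width, height, depth, yaw

    def forward(self, frames: torch.Tensor):
        # frames: (batch, time, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.backbone(frames.view(b * t, c, h, w)).view(b, t, -1)
        seq_out, _ = self.lstm(feats)
        last = seq_out[:, -1]  # summary of the weighted frame sequence
        return self.cls_head(last), self.box_head(last)

# Example: 5 consecutive frames of a 64x64 crop around a detected object.
logits, box = RoadConditionRecognizer()(torch.randn(1, 5, 3, 64, 64))
```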
In the embodiment of the invention, the traffic signal data can be the state of a traffic signal, for example the state of the traffic lights or whether a signal is damaged. The congestion index represents the congestion level of the current road segment; in the prior art the congestion level of a road can be represented by colours such as red, yellow and green, and this data is typically detected by a roadside unit and sent to a terminal such as a mobile phone for navigation prompting, but the vehicle itself also has the capability of identifying the congestion level, using information such as the vehicle speed and the number of surrounding vehicles as criteria. The emergency can be an emergency such as a vehicle collision or a sudden obstacle, which can be identified and predicted with a neural network as in existing autonomous-driving technology. The vehicle surrounding environment information is obtained by image recognition and, using the V2I technology of the Internet of Vehicles, from monitored roadside perception data, for example the name of the current road segment, the names of surrounding buildings, and road signs.
S102, the first vehicle-mounted terminal acquires three-dimensional point cloud data acquired by the first laser radar, fuses the point cloud data with the image and acquires distances corresponding to different road conditions in the image, wherein the first vehicle-mounted terminal, the first image sensor and the first laser radar are located on a first vehicle, and the distances corresponding to the different road conditions comprise the distances between different targets in the surrounding environment of the vehicle and the first laser radar;
the LiDAR is a device for detecting the distance by emitting and reflecting laser, and in the actual use process, the LiDAR is fixed around an automobile, detects target objects around the automobile and generates a three-dimensional point cloud picture so as to identify the distance between the target objects and the LiDAR. In the embodiment of the invention, a multi-sensor fusion technology is adopted to perform the fusion of point cloud data and image data, and the multi-sensor fusion technology is mainly used for the detection and tracking of target objects and applied to the fields of automatic driving, advanced auxiliary driving, safety early warning, traffic scheduling and the like. For example, in the patent application of CN110363820A in south-east university, a lidar point cloud data set of a camera view angle is obtained through joint calibration of a lidar and a camera; before data is input into a neural network, firstly, performing spherical projection on a laser radar data set to obtain dense and two-dimensional data; and finally, fusing the characteristics of two modes through a1 x 1 rolling block to realize target detection based on radar and vision pre-fusion. And finally, a weighted post-fusion mode is adopted, two corresponding inputs of the laser radar and the image are adopted, the characteristics are respectively learned, and finally fusion is carried out, so that the accuracy of target identification can be improved, and the three-dimensional information of the target can be simultaneously obtained.
In the embodiment of the present invention, the fusion technology of the laser radar and the image sensor may be: calibrating the three-dimensional point cloud data and the image area, and identifying different three-dimensional point cloud data corresponding to different image areas; calculating the position and the posture of the laser radar sensor under the coordinate system of the image acquisition device; and calculating orientation angles and length, width and height of different target objects based on corresponding three-dimensional point cloud data in the image.
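The fusion step can be pictured with a minimal sketch: lidar points are projected into the image with an assumed extrinsic/intrinsic calibration, and the median lidar range of the points falling inside a detected image region is taken as that road condition's distance. The calibration matrices, region box and use of a median here are assumptions for illustration, not values from this application.

```python
# Minimal sketch of the lidar-image fusion step: project 3D points into the image
# with an assumed calibration, then read off a per-region distance. The matrices
# and region boxes here are placeholders, not calibration values from the patent.
import numpy as np

def project_points(points_xyz: np.ndarray, extrinsic: np.ndarray, intrinsic: np.ndarray):
    """points_xyz: (N,3) lidar points; returns (N,2) pixel coords and (N,) depths."""
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # homogeneous
    cam = (extrinsic @ pts_h.T).T[:, :3]                            # lidar -> camera frame
    depths = cam[:, 2]
    pix = (intrinsic @ cam.T).T
    pix = pix[:, :2] / pix[:, 2:3]                                  # perspective divide
    return pix, depths

def region_distance(pix, depths, box):
    """Median lidar range of the points falling inside an image region (x1,y1,x2,y2)."""
    x1, y1, x2, y2 = box
    mask = (pix[:, 0] >= x1) & (pix[:, 0] < x2) & (pix[:, 1] >= y1) & (pix[:, 1] < y2) & (depths > 0)
    return float(np.median(depths[mask])) if mask.any() else None

# Assumed identity extrinsics and a toy pinhole intrinsic for illustration.
extrinsic = np.eye(4)[:3]  # 3x4 [R|t]; here the lidar frame coincides with the camera frame
intrinsic = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
points = np.random.uniform([-5, -2, 2], [5, 2, 40], size=(1000, 3))  # fake point cloud
pix, depths = project_points(points, extrinsic, intrinsic)
print(region_distance(pix, depths, box=(300, 200, 360, 280)))  # distance for one detected region
```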
This embodiment uses existing neural network technology, and the present invention is not limited to it. It should be noted that, after the lidar data and the image are fused, the distances between the various target objects in the image and the lidar can be obtained; the distance of each target object (road condition) is thus calibrated and provided to the first vehicle-mounted terminal in real time.
S103, the second vehicle-mounted terminal acquires self-positioning information;
The second vehicle-mounted terminal can be the same type of terminal product as the first vehicle-mounted terminal or a different one; it comprises a GPS/LBS positioning module and can position itself in real time.
S104, the second vehicle-mounted terminal directly or indirectly establishes communication with the first vehicle-mounted terminal, and receives first vehicle-mounted terminal positioning information, road condition data and a distance corresponding to the road condition, which are sent by the first vehicle-mounted terminal;
the second vehicle-mounted terminal establishes communication with the first vehicle-mounted terminal directly or indirectly, for example, the second vehicle-mounted terminal can perform communication between vehicles according to a Dedicated Short Range Communication (DSRC) protocol, and can also perform message passing by using the RSU as a message broker. In addition, the second vehicle-mounted terminal and the first vehicle-mounted terminal can also establish a handshake protocol to communicate through a TCP/IP protocol.
After the first vehicle-mounted terminal and the second vehicle-mounted terminal have established communication, the first vehicle-mounted terminal can send its positioning information, the road condition information and the distances corresponding to the road conditions to the second vehicle-mounted terminal in real time. The second vehicle-mounted terminal is located in the second vehicle, and through it alone the second vehicle can acquire, in real time, the image information, road condition information and so on collected by the first vehicle. The second vehicle can therefore share the information acquired by the first vehicle without needing its own lidar or image sensor; when the first vehicle is close to the second vehicle, the first vehicle can serve as the "eyes" of the second vehicle and help it identify obstacles, road conditions and the like.
S105, the second vehicle-mounted terminal confirms the distance between the first vehicle-mounted terminal and the second vehicle-mounted terminal based on the road condition data, starts real-time position sharing of the first vehicle-mounted terminal and the second vehicle-mounted terminal, and carries out real-time navigation operation or safe road condition prompting operation based on the real-time position sharing, wherein the second vehicle-mounted terminal is located on a second vehicle.
In the embodiment of the invention, two schemes are provided based on the real-time position sharing: the first is navigation, and the second is the safe road condition prompting operation.
For the first scheme, the second vehicle-mounted terminal determines the position relationship between the first vehicle-mounted terminal and the second vehicle-mounted terminal based on the self-positioning information and the positioning information of the first vehicle-mounted terminal; and the second vehicle-mounted terminal sets a navigation path by taking the position of the first vehicle-mounted terminal as a destination and the self-positioning information as a starting place, performs navigation planning, and selects the optimal navigation path in the navigation planning based on the road condition data sent by the first vehicle-mounted terminal and the distance information corresponding to the road condition.
Optionally, the second vehicle-mounted terminal predicts a path track of the first vehicle-mounted terminal based on the real-time position sharing, dynamically adjusts the navigation path based on the predicted path track, and performs real-time navigation based on the dynamically adjusted navigation path.
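A simple way to picture this step (the application does not prescribe a prediction model, so the constant-velocity extrapolation below is an assumption) is to extrapolate the first terminal's shared track a few seconds ahead and feed the predicted point back into the route planner.

```python
# Illustrative constant-velocity extrapolation of the first terminal's track from its
# shared real-time positions; this simple linear predictor is an assumption used
# only to show the idea of dynamically adjusting the navigation path.
def predict_next_position(track, horizon_s: float = 5.0):
    """track: list of (t_seconds, lat, lon) samples from real-time position sharing."""
    (t0, lat0, lon0), (t1, lat1, lon1) = track[-2], track[-1]
    dt = t1 - t0
    v_lat, v_lon = (lat1 - lat0) / dt, (lon1 - lon0) / dt
    return lat1 + v_lat * horizon_s, lon1 + v_lon * horizon_s

track = [(0.0, 31.2300, 121.4730), (2.0, 31.2304, 121.4737)]
print(predict_next_position(track))  # predicted position used to re-plan the route
```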
The selection of the optimal navigation path in the navigation plan may specifically be: the second vehicle-mounted terminal determines the navigation path that takes the least time based on one or more of the congestion degree in the road conditions, an emergency and the vehicle's surrounding conditions; or the second vehicle-mounted terminal judges the surrounding environment state of the first terminal based on the road conditions and the distance information in them, and changes the navigation path in real time based on that state, so as to seek the navigation path with the shortest time or the least congestion.
This is explained below with reference to Fig. 3. As shown in Fig. 3, assume that the vehicle 14 is a first vehicle with a first vehicle-mounted terminal inside, and that the vehicle 11 is a second vehicle with a second vehicle-mounted terminal inside. The vehicle 11 and the vehicle 14 can communicate directly through a communication protocol, or can relay messages through the RSU 15, where the RSU 15 and the core network 16 exchange data over public networks such as LTE and 5G NR to realize intelligent-traffic big-data interaction.
Assume that the vehicle 11 takes its own position as the starting point (point A) and the position of the vehicle 14 as the destination (point B), and navigates with an algorithm such as simultaneous localization and mapping (SLAM). There may be several navigation paths between A and B, and the question is how to select the one that is shortest in time or least congested; this requires the road conditions and the corresponding distance data collected by the first vehicle-mounted terminal. For example, suppose there are three routes from A to B, denoted a1, a2 and a3, and that based on the congestion degree a1 is the most congested, a2 the second and a3 the lightest; then a3 can be selected as the path for real-time navigation. The congestion degree itself is derived from the recognition of the surrounding objects (the other vehicles) and of the distances between those objects and the first vehicle-mounted terminal: for example, more than 10 surrounding vehicles, all within 5 m of the first laser radar, may be defined as serious congestion, with milder vehicle counts or distances defining medium congestion. In this way the vehicle conditions around the first vehicle can be obtained and the congestion degree judged; the second vehicle judges the current road condition of the first vehicle on the basis of the information the first vehicle provides. Therefore, after the second vehicle receives the information sent by the first vehicle, it knows how congested the first vehicle's road is and, when that road is congested, selects a driving route that bypasses the first vehicle instead of the navigation routes that follow the first vehicle's road, adjusting dynamically according to the real-time situation. The congestion degree of the other navigation routes (e.g. a2 and a3) can be obtained through a roadside unit (RSU), or indirectly from a third or fourth vehicle-mounted terminal of the same kind as the first, which collect the congestion degree on those routes. In addition, the second vehicle-mounted terminal can learn of emergencies from the information sent by the first vehicle-mounted terminal (such as pedestrians, vehicle collision accidents, road-surface collapse, or sudden obstacles blocking traffic) and thus continue to dynamically change the navigation route.
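The congestion rule and route choice in the example above can be sketched as follows; the "more than 10 vehicles within 5 m" threshold is taken from the worked example, while treating anything milder as medium or light congestion, and the route labels, are assumptions.

```python
# Toy sketch of the congestion-degree rule and route choice described above. The
# "more than 10 vehicles within 5 m" threshold comes from the worked example; the
# medium/light split and the route labels are assumptions for illustration.
def congestion_level(vehicle_distances_m, serious_count=10, serious_radius_m=5.0):
    close = [d for d in vehicle_distances_m if d < serious_radius_m]
    if len(close) > serious_count:
        return "serious"
    if close:
        return "medium"
    return "light"

def pick_route(routes):
    """routes: dict mapping route name -> congestion level; prefer the lightest."""
    order = {"light": 0, "medium": 1, "serious": 2}
    return min(routes, key=lambda r: order[routes[r]])

# a1 reported via the first terminal's sensing; a2/a3 e.g. via an RSU or other terminals.
print(pick_route({"a1": "serious", "a2": "medium", "a3": "light"}))  # -> a3
```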
Similarly, the vehicle 11 can acquire the vehicle surrounding conditions (such as the vehicle surrounding building information, the traffic light information, and the like) of the vehicle 14, thereby assisting the vehicle 11 in navigation.
For the second scheme, the second vehicle-mounted terminal acquires the road condition information of the first vehicle-mounted terminal when the real-time position is shared;
when the road condition information is traffic signal data, the second vehicle-mounted terminal prompts signal lamp blind areas; taking fig. 3 as an example, it is assumed that the vehicle 12 is a first vehicle provided with a first vehicle-mounted terminal, the vehicle 11 is a second vehicle provided with a second vehicle-mounted terminal, and the first vehicle is in front of the second vehicle, when passing a traffic light, because the vehicle height of the vehicle 12 is greater than that of the vehicle 11, the driver of the vehicle 11 cannot see the display condition of the traffic light clearly, and is easy to cause a false red light running.
When the road condition information is a congestion index, the second vehicle-mounted terminal gives a congestion prompt according to the congestion index; for example, when the road condition information indicates that the road currently being traveled is congested, the second vehicle-mounted terminal can prompt a detour.
When the road condition information is an emergency (pedestrian injury, vehicle collision and the like), the second vehicle-mounted terminal carries out detour prompting according to the emergency;
when the road condition information is the surrounding environment information of the vehicle, the second vehicle-mounted terminal calculates the distance between the second vehicle-mounted terminal and the first vehicle-mounted terminal according to the self positioning information and the positioning information of the first vehicle-mounted terminal; the second vehicle-mounted terminal determines the distances between different target objects and the first vehicle-mounted terminal based on the distances corresponding to the surrounding environment and the road condition of the vehicle; the second vehicle-mounted terminal estimates the distances between different target objects and the second vehicle-mounted terminal based on the distances between the different target objects and the first vehicle-mounted terminal and the distances between the first vehicle-mounted terminal and the second vehicle-mounted terminal, and carries out safety prompt. As shown in fig. 3, it is assumed that the target object is a tree, the vehicle 11 is a second vehicle, the vehicle 12 is a first vehicle, the tree is located on the right side of the vehicle 12, the vehicle 12 estimates that the distance between the tree and the vehicle 12 is 5 meters through a sensor fusion technology of a first laser radar and a first image sensor, and locates a specific position of the tree, and the vehicle 11 can obtain a linear distance between its own position and the tree position based on its own position, the position of the vehicle 12, and the position of the tree, so as to perform a safety prompt within a threshold range, for example, by using voice broadcast: the method of prompting that big trees are located 5 meters on the right side of the driving road and safety is noticed is adopted.
The embodiment of the invention also provides a data acquisition and sharing device based on artificial intelligence, which comprises: a processor and a memory for storing a computer program capable of running on the processor; the processor is configured to execute the artificial intelligence based data acquisition and sharing method in the above embodiment when running the computer program.
The embodiment of the present invention further provides a computer-readable storage medium, on which computer-executable instructions are stored, where the computer-executable instructions are used to execute the artificial intelligence based data acquisition and sharing method in the foregoing embodiments.
Fig. 4 is a schematic diagram of the hardware composition of an artificial intelligence based data acquisition and sharing device (e.g., a first vehicle-mounted terminal or a second vehicle-mounted terminal) according to an embodiment. It will be appreciated that Fig. 4 only shows a simplified design of the electronic device. In practical applications, the electronic device may also include other necessary components, including but not limited to any number of input/output devices, processors, controllers, memories and the like; all electronic devices that can implement the artificial intelligence based data acquisition and sharing method of the embodiments of the present application are within the protection scope of the present application.
The memory includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or a compact disc read-only memory (CD-ROM), and is used for storing instructions and data.
The input means are for inputting data and/or signals and the output means are for outputting data and/or signals. The output means and the input means may be separate devices or may be an integral device.
The processor may include one or more processors, for example, one or more Central Processing Units (CPUs), and in the case of one CPU, the CPU may be a single-core CPU or a multi-core CPU. The processor may also include one or more special purpose processors, which may include GPUs, FPGAs, etc., for accelerated processing.
The memory is used to store program codes and data of the network device.
The processor is used for calling the program codes and data in the memory and executing the steps in the method embodiment. Specifically, reference may be made to the description of the method embodiment, which is not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. For example, the division of the unit is only one logical function division, and other division may be implemented in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. The shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the present application are wholly or partially generated when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on or transmitted over a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)), or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that includes one or more of the available media. The usable medium may be a read-only memory (ROM), or a Random Access Memory (RAM), or a magnetic medium, such as a floppy disk, a hard disk, a magnetic tape, a magnetic disk, or an optical medium, such as a Digital Versatile Disk (DVD), or a semiconductor medium, such as a Solid State Disk (SSD).
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (9)
1. A method for acquiring and sharing data based on artificial intelligence is characterized by comprising the following steps:
the method comprises the steps that a first vehicle-mounted terminal obtains an external image captured by a first image sensor, and identifies road condition data in the image based on a neural network technology, wherein the road condition data comprises traffic signal data, congestion indexes, emergencies and surrounding environment information of a vehicle;
the first vehicle-mounted terminal acquires three-dimensional point cloud data acquired by the first laser radar, fuses the point cloud data with the image and acquires distances corresponding to different road conditions in the image, wherein the first vehicle-mounted terminal, the first image sensor and the first laser radar are positioned on a first vehicle, and the distances corresponding to the different road conditions comprise the distances between different targets in the surrounding environment of the vehicle and the first laser radar;
the second vehicle-mounted terminal acquires self-positioning information;
the second vehicle-mounted terminal directly or indirectly establishes communication with the first vehicle-mounted terminal, and receives first vehicle-mounted terminal positioning information, road condition data and a distance corresponding to the road condition, which are sent by the first vehicle-mounted terminal;
and the second vehicle-mounted terminal confirms the distance between the first vehicle-mounted terminal and the second vehicle-mounted terminal based on the road condition data, starts the real-time position sharing of the first vehicle-mounted terminal and the second vehicle-mounted terminal, and carries out real-time navigation operation or safe road condition prompting operation based on the real-time position sharing, wherein the second vehicle-mounted terminal is positioned on a second vehicle.
2. The method of claim 1, wherein the enabling of real-time location sharing of the first vehicle-mounted terminal and the second vehicle-mounted terminal and performing real-time navigation operations based on the real-time location sharing comprises:
the second vehicle-mounted terminal determines the position relation between the first vehicle-mounted terminal and the second vehicle-mounted terminal based on self positioning information and the positioning information of the first vehicle-mounted terminal;
the second vehicle-mounted terminal takes the position of the first vehicle-mounted terminal as a destination and takes self positioning information as a starting place to perform navigation planning;
and the second vehicle-mounted terminal selects an optimal navigation path in the navigation planning based on the road condition data sent by the first vehicle-mounted terminal and the distance information corresponding to the road condition.
3. The method according to claim 2, wherein the selecting, by the second vehicle-mounted terminal, the optimal navigation path in the navigation plan based on the road condition data and the distance information corresponding to the road condition sent by the first vehicle-mounted terminal comprises:
the second vehicle-mounted terminal determines the navigation path that takes the least time based on a combination of one or more of the congestion degree in the road conditions, an emergency and the surrounding conditions of the vehicle; or,
the second vehicle-mounted terminal judges the surrounding environment state of the first terminal based on the road conditions and the distance information in them, and changes the navigation path in real time based on that state, so as to seek the navigation path with the shortest time or the least congestion.
4. The method according to claim 1, wherein the second vehicle-mounted terminal performs a safe road condition prompting operation, comprising:
the second vehicle-mounted terminal acquires road condition information of the first vehicle-mounted terminal when sharing the real-time position;
when the road condition information is traffic signal data, the second vehicle-mounted terminal carries out signal lamp blind area prompting;
when the road condition information is a congestion index, the second vehicle-mounted terminal carries out congestion prompt according to the congestion index;
when the road condition information is an emergency, the second vehicle-mounted terminal carries out a detour prompt according to the emergency;
when the road condition information is the surrounding environment information of the vehicle, the second vehicle-mounted terminal calculates the distance between the second vehicle-mounted terminal and the first vehicle-mounted terminal according to the self positioning information and the positioning information of the first vehicle-mounted terminal;
the second vehicle-mounted terminal determines the distances between different target objects and the first vehicle-mounted terminal based on the distances corresponding to the surrounding environment and the road condition of the vehicle;
and the second vehicle-mounted terminal estimates the distance between the different target objects and the second vehicle-mounted terminal based on the distance between the different target objects and the first vehicle-mounted terminal and the distance between the first vehicle-mounted terminal and the second vehicle-mounted terminal, and carries out safety prompt.
5. The method of claim 1, wherein the second vehicle-mounted terminal establishing communication, directly or indirectly, with the first vehicle-mounted terminal comprises:
the second vehicle-mounted terminal and the first vehicle-mounted terminal establish a handshake protocol and communicate through a TCP/IP protocol, or,
and the second vehicle-mounted terminal establishes communication with the first terminal through a Road Side Unit (RSU).
6. The method according to any one of claims 1 to 5, wherein the acquiring, by the first onboard terminal, the three-dimensional point cloud data acquired by the first lidar and fusing the point cloud data with the image comprises:
calibrating the three-dimensional point cloud data and the image area, and identifying different three-dimensional point cloud data corresponding to different image areas;
calculating the position and the posture of the laser radar sensor under the coordinate system of the image acquisition device; and calculating orientation angles and length, width and height of different target objects based on the corresponding three-dimensional point cloud data in the image.
7. The method of claim 1, 4 or 5, wherein performing a real-time navigation operation based on the real-time location sharing comprises:
and the second vehicle-mounted terminal predicts the path track of the first vehicle-mounted terminal based on the real-time position sharing, dynamically adjusts the navigation path based on the predicted path track, and performs real-time navigation based on the dynamically adjusted navigation path.
8. An artificial intelligence based data acquisition and sharing apparatus, the apparatus comprising: a processor and a memory for storing a computer program capable of running on the processor; wherein the processor is configured to execute the artificial intelligence based data acquisition and sharing method of any one of claims 1 to 7 when running the computer program.
9. A computer-readable storage medium having stored thereon computer-executable instructions for performing the artificial intelligence based data acquisition and sharing method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010525954.XA CN111724616B (en) | 2020-06-11 | 2020-06-11 | Method and device for acquiring and sharing data based on artificial intelligence |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010525954.XA CN111724616B (en) | 2020-06-11 | 2020-06-11 | Method and device for acquiring and sharing data based on artificial intelligence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111724616A true CN111724616A (en) | 2020-09-29 |
CN111724616B CN111724616B (en) | 2021-11-05 |
Family
ID=72567941
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010525954.XA Active CN111724616B (en) | 2020-06-11 | 2020-06-11 | Method and device for acquiring and sharing data based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111724616B (en) |
-
2020
- 2020-06-11 CN CN202010525954.XA patent/CN111724616B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20140025096A (en) * | 2012-08-21 | 2014-03-04 | 현대모비스 주식회사 | Hazard loading vehicle warning method and apparatus |
CN105976609A (en) * | 2015-11-06 | 2016-09-28 | 乐卡汽车智能科技(北京)有限公司 | Vehicle data processing system and method |
CN107839610A (en) * | 2017-02-10 | 2018-03-27 | 问众智能信息科技(北京)有限公司 | It is a kind of that the method and system to be navigated with car is realized by intelligent back vision mirror |
CN108180916A (en) * | 2017-12-20 | 2018-06-19 | 奇瑞汽车股份有限公司 | Vehicle location sharing method and system |
CN109696173A (en) * | 2019-02-20 | 2019-04-30 | 苏州风图智能科技有限公司 | A kind of car body air navigation aid and device |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112785371A (en) * | 2021-01-11 | 2021-05-11 | 上海钧正网络科技有限公司 | Shared device position prediction method, device and storage medium |
CN113112840A (en) * | 2021-03-15 | 2021-07-13 | 上海交通大学 | Unmanned vehicle over-the-horizon navigation system and method based on vehicle-road cooperation |
CN113593224A (en) * | 2021-07-14 | 2021-11-02 | 广州小鹏汽车科技有限公司 | Road condition sharing method and device, vehicle-mounted terminal and storage medium |
CN114093163A (en) * | 2021-11-10 | 2022-02-25 | 山东旗帜信息有限公司 | Vehicle monitoring method, device and storage medium for expressway |
GB2613400A (en) * | 2021-12-01 | 2023-06-07 | Motional Ad Llc | Automatically detecting traffic signals using sensor data |
GB2613400B (en) * | 2021-12-01 | 2024-01-10 | Motional Ad Llc | Automatically detecting traffic signals using sensor data |
US12046049B2 (en) | 2021-12-01 | 2024-07-23 | Motional Ad Llc | Automatically detecting traffic signals using sensor data |
CN114399915A (en) * | 2022-01-31 | 2022-04-26 | 重庆长安汽车股份有限公司 | Traffic light intersection safety auxiliary system and operation method |
CN115063559A (en) * | 2022-05-12 | 2022-09-16 | 北京鉴智机器人科技有限公司 | Augmented reality AR road condition generation method and device and vehicle-mounted AR system |
CN116052410A (en) * | 2023-02-28 | 2023-05-02 | 重庆长安汽车股份有限公司 | Motorcade management method, motorcade management system, electronic equipment and storage medium |
CN117191070A (en) * | 2023-08-28 | 2023-12-08 | 重庆赛力斯新能源汽车设计院有限公司 | Navigation position sharing method, device, equipment and storage medium |
CN117058210A (en) * | 2023-10-11 | 2023-11-14 | 比亚迪股份有限公司 | Distance calculation method and device based on vehicle-mounted sensor, storage medium and vehicle |
Also Published As
Publication number | Publication date |
---|---|
CN111724616B (en) | 2021-11-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111724616B (en) | Method and device for acquiring and sharing data based on artificial intelligence | |
US11238738B2 (en) | Information providing system, server, mobile terminal, and computer program | |
US11967230B2 (en) | System and method for using V2X and sensor data | |
JP7381716B2 (en) | Map update method, device, and storage medium | |
US11531354B2 (en) | Image processing apparatus and image processing method | |
US9373255B2 (en) | Method and system for producing an up-to-date situation depiction | |
EP2700032B1 (en) | A comprehensive and intelligent system for managing traffic and emergency services | |
US20210014643A1 (en) | Communication control device, communication control method, and computer program | |
CN111708358A (en) | Operation of a vehicle in an emergency | |
CN113345269B (en) | Vehicle danger early warning method, device and equipment based on V2X vehicle networking cooperation | |
US20230109909A1 (en) | Object detection using radar and lidar fusion | |
CN114041176A (en) | Security performance evaluation device, security performance evaluation method, information processing device, and information processing method | |
CN112238862A (en) | Open and safety monitoring system for autonomous driving platform | |
US11741839B2 (en) | Traffic safety assistance device, mobile information terminal, and program | |
JP2020091614A (en) | Information providing system, server, mobile terminal, and computer program | |
JP6903598B2 (en) | Information processing equipment, information processing methods, information processing programs, and mobiles | |
WO2023250290A1 (en) | Post drop-off passenger assistance | |
US20210110708A1 (en) | Hierarchical integrated traffic management system for managing vehicles | |
JP2020091652A (en) | Information providing system, server, and computer program | |
WO2023189878A1 (en) | Intersection-based offboard vehicle path generation | |
CN116229407A (en) | Method for a vehicle, vehicle and storage medium | |
WO2020241273A1 (en) | Vehicular communication system, onboard device, control method, and computer program | |
JP2020091612A (en) | Information providing system, server, and computer program | |
WO2023189881A1 (en) | Collision warning based on intersection information from map messages | |
WO2023189880A1 (en) | Path prediction based on intersection information from map messages |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20211013
Address after: 221000 1205-1206, building E1, software park, Xuzhou Economic and Technological Development Zone, Jiangsu Province
Applicant after: Xuzhou Guoyun Information Technology Co.,Ltd.
Address before: 518000 B608, building 15, jiayushan, Xinyi, Longgang District, Shenzhen City, Guangdong Province
Applicant before: Fan Xin
TA01 | Transfer of patent application right | ||
GR01 | Patent grant | ||
GR01 | Patent grant |