Detailed Description
The technical solution of the invention is further elaborated below with reference to the drawings and the specific embodiments in the specification. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
As shown in fig. 1, in one embodiment, an application environment for the binocular stereo data processing method provided by the embodiment of the invention includes a vehicle 100, cameras 200 disposed on the vehicle 100 to form binocular stereo vision, and an intelligent driving device 300 disposed on the vehicle 100. The intelligent driving device 300 may be any smart device capable of running a computer program implementing the binocular stereo data processing method, such as an on-vehicle computer, an on-vehicle controller, a vehicle driving control system, or a mobile terminal. In an automatic driving scene, the intelligent driving device 300 determines in real time whether the lane line information in the current field of view meets the set condition. When it does, a data processing strategy that uses the lane line information as prior information is selected as the target data processing strategy; when it does not, a data processing strategy that uses lane prior knowledge combined with camera attitude data as prior information is selected as the target data processing strategy. A region of interest corresponding to a visual image, for example the left visual image, is then determined according to the target data processing strategy, and stereo matching is performed on the visual image and the other visual image according to the region of interest: either the left and right visual images are matched first and the resulting point cloud data is filtered by the region of interest, or stereo matching is performed directly between the region of interest of the left visual image and the right visual image, in which case the region of interest in the right visual image is implicitly determined by the region of interest in the left visual image. Filtered binocular stereo data are thereby obtained,
and accurate, efficient detection of obstacles in the current field of view is achieved according to the filtered binocular stereo data. It should be understood that a visual image herein may refer to either the left visual image or the right visual image; accordingly, when a visual image refers to the right visual image, the other visual image refers to the left visual image, and when a visual image refers to the left visual image, the other visual image refers to the right visual image.
As shown in fig. 2, in one embodiment, there is provided a binocular stereo data processing method which may be applied to the intelligent driving apparatus shown in fig. 1, the method including:
Step 101, determining whether the lane line information in the current field of view meets the set condition, and selecting a matched target data processing strategy according to the determination result;
the lane line information is information indicating the position, number, shape, and the like of a lane included in a road surface on which the vehicle is currently traveling. The maximum range that a field-of-view vehicle can observe through a camera in a driving scene is usually expressed in terms of angles, and generally the larger the field of view, the larger the observation range. Here, determining whether the lane line information in the current field of view meets the setting condition means determining whether a confidence of the lane line information that can be acquired in the current field of view is higher than a threshold. When the lane line information meets the set conditions, the position, the shape and the like of the lane where the vehicle currently runs in the current view field can be determined according to the lane lines. When the lane line information does not meet the set condition, the road where the vehicle currently runs in the current view field is represented as an unstructured road or effective information capable of identifying the lane where the vehicle currently runs cannot be obtained.
Here, selecting the matched target data processing strategy according to the determination result includes: when the lane line information meets the set condition, selecting, as the target data processing strategy, a data processing strategy that masks the corresponding visual image using the lane line information as prior information; and when the lane line information does not meet the set condition, selecting, as the target data processing strategy, a data processing strategy that masks the point cloud data obtained after stereo matching of the binocular stereo images, using lane prior knowledge and camera attitude data as prior information. Prior information refers to experience or historical data used to determine the data processing strategy. Masking refers to covering the image to be processed, wholly or partially, with a selected image, graphic, or object to control the area or process of image processing.
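For illustration, the strategy selection described above can be sketched as follows. This is a minimal sketch: the class and function names, the confidence threshold of 0.8, and the confidence scale of [0, 1] are assumptions for the example, not part of the invention.

```python
# Illustrative sketch of target strategy selection (names hypothetical).
from dataclasses import dataclass
from enum import Enum, auto

class Strategy(Enum):
    LANE_LINE_PRIOR = auto()       # mask the visual image using lane line info
    LANE_KNOWLEDGE_PRIOR = auto()  # mask point cloud via lane knowledge + camera pose

@dataclass
class LaneLineInfo:
    detected: bool
    confidence: float  # assumed to lie in [0, 1]

def select_strategy(info: LaneLineInfo, threshold: float = 0.8) -> Strategy:
    """Lane line information "meets the set condition" when it is detected
    and its confidence exceeds the threshold (threshold value assumed)."""
    if info.detected and info.confidence > threshold:
        return Strategy.LANE_LINE_PRIOR
    return Strategy.LANE_KNOWLEDGE_PRIOR
```

In practice the threshold would be tuned to the lane detector in use; the branch structure is what matters here.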
Step 103, determining a region of interest corresponding to a visual image according to the target data processing strategy;
here, the Region Of Interest (ROI) is one image Region selected from an image, and this image Region is regarded as an important point Of Interest for image analysis. The visual image may be any one of binocular visual images, such as a left-eye visual image. By determining the region of interest corresponding to the visual image and then performing the next processing, the range of the image to be processed can be adjusted by reasonably determining the region of interest, the processing time is reduced, and the processing precision is increased. The target data processing strategy is matched according to the determination result of whether the lane line information in the current view field meets the set condition, and when the lane line information meets the set condition, the target data processing strategy based on the lane line information as prior information can be selected; when the lane line information does not accord with the setting condition, a target data processing strategy which takes the combination of lane priori knowledge and camera attitude data as the priori information can be selected, so that the accuracy of the region of interest can be improved by selecting the target data processing strategy which is matched with the determination result of whether the lane line information accords with the setting condition, the processing range can be accurately reduced, the false target point and non-obstacle data in the binocular stereoscopic vision image can be effectively removed, and the obstacle detection precision is improved.
Step 105, performing stereo matching on the visual image and the other visual image according to the region of interest to obtain filtered binocular stereo data.
Here, stereo matching refers to finding matching corresponding points between different visual images. By performing stereo matching on the visual image and the other visual image based on the region of interest, the intelligent driving device can identify false target points or non-obstacle data from the matching result, in the other visual image, of the target points determined by the region of interest. The stereo matching of one visual image with the other according to the region of interest may proceed in either of two ways: first performing stereo matching on the two visual images and then filtering the resulting point cloud data according to the region of interest; or performing stereo matching directly between the region of interest of one visual image and the other visual image, in which case the region of interest of the other visual image is implicitly determined by the region of interest of the first. Because only one region of interest is explicitly computed, the calculation time for the region of interest of the other visual image is saved, the calculation amount is reduced, and obstacle detection accuracy is still ensured. For example, based on the position of a data point A in the region of interest of the left visual image, it is determined whether a matching data point A' exists in the right visual image; if not, the data point A is identified as a false target point; if so, the position and shape of the object jointly defined by A and A' can be further used to determine whether it is an obstacle or a non-obstacle.
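The point-cloud filtering variant can be sketched as follows. The representation of each matched point as a ((u, v), (X, Y, Z)) pair and the use of a standard ray-casting polygon test are assumptions for the example; all names are illustrative.

```python
# Illustrative sketch: filter stereo-matched point cloud by an image-plane ROI.
# Each point carries the pixel (u, v) at which it was matched; points whose
# pixel falls outside the ROI polygon are discarded.

def point_in_polygon(u, v, polygon):
    """Ray-casting test: is pixel (u, v) inside the polygon of (x, y) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of (u, v).
        if (y1 > v) != (y2 > v):
            x_cross = x1 + (v - y1) * (x2 - x1) / (y2 - y1)
            if u < x_cross:
                inside = not inside
    return inside

def filter_point_cloud(points, roi_polygon):
    """Keep only points whose matching pixel lies inside the ROI.

    points: iterable of ((u, v), (X, Y, Z)) pairs (assumed layout).
    """
    return [p for p in points if point_in_polygon(p[0][0], p[0][1], roi_polygon)]
```

A production system would typically rasterize the ROI into a binary mask once and index it per pixel instead of testing each point against the polygon.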
In the binocular stereo data processing method provided by this embodiment, whether the lane line information in the current field of view meets the set condition is determined, a matched target data processing strategy is selected according to the determination result, the region of interest corresponding to a visual image is determined according to the target data processing strategy, and stereo matching is performed on the visual image and the other visual image according to the region of interest to obtain filtered binocular stereo data. Because the target data processing strategy is determined according to whether the lane line information in the current field of view meets the set condition, a corresponding data processing strategy can be set in a targeted manner for the actual conditions of different fields of view, and the region of interest in the visual image can be determined more accurately. For the binocular visual images obtained by binocular stereo vision, the region of interest corresponding to one visual image is determined according to the target data processing strategy and stereo matching is performed with the other visual image; by reasonably determining the region of interest of one visual image, the calculation time for the region of interest of the other visual image is saved and the calculation amount is greatly reduced, while stereo matching effectively eliminates false target points and non-obstacle data and improves obstacle detection precision.
In one embodiment, in step 101, the determining whether the lane line information in the current field of view meets the setting condition includes:
acquiring a visual image corresponding to the current field of view, identifying the visual image to extract lane line information, and determining whether the lane line information in the current field of view meets the set condition according to the confidence of the lane line information.
Here, the intelligent driving device may determine lane line information through image recognition on the visual image captured by the camera for the current field of view. Confidence characterizes how reliable a measured value is, i.e., the probability that the true value of the measured parameter falls within a given range around the measurement. Determining whether the lane line information in the current field of view meets the set condition according to its confidence means: when lane line information is identified from the visual image and its confidence is higher than a threshold, the lane line information is determined to meet the set condition; otherwise, if effective lane line information cannot be identified or its confidence is lower than the threshold, it is determined not to meet the set condition. Requiring the confidence to exceed the threshold ensures that, when the strategy using lane line information as prior information is selected, the lane line information is accurate and complete, guaranteeing the accuracy of subsequent data processing.
In another embodiment, in step 101, the determining whether the lane line information in the current field of view meets the setting condition includes:
detecting lane line information in the current view field;
when lane line information is detected, determining whether the lane line information in the current view field meets a set condition according to the confidence of the lane line information;
and when the lane line information is not detected, determining that the lane line information in the current view field does not accord with the set condition.
Here, the intelligent driving device may detect lane line information in the current field of view before acquiring the corresponding visual image. The confidence characterizes the reliability of the detected lane line information, i.e., the probability that it reflects the true value. When lane line information is detected and its confidence is higher than a threshold, the lane line information in the current field of view is determined to meet the set condition; when the confidence is lower than the threshold or no lane line information is detected, it is determined not to meet the set condition. Requiring the confidence to exceed the threshold ensures that the lane line information is accurate and complete when the strategy using it as prior information is selected, guaranteeing the accuracy of subsequent data processing. Because the lane line information is detected before the visual image is acquired, detection algorithms other than image recognition can also be used, which facilitates independent maintenance and upgrading of the functional module and makes the implementation more flexible.
In one embodiment, when the lane line information meets the set condition, the step 103 of determining the region of interest in a visual image according to the target data processing strategy includes:
acquiring end points of lane lines on two corresponding sides of a target lane in a visual image, wherein the end points comprise a first end point far away from a road vanishing point and a second end point close to the road vanishing point;
determining a region of interest based on the positions of the first end point and the second end point, the slope at the first end point and/or the second end point, and the lane line.
When the lane line information meets the set condition, a data processing strategy using the lane line information as prior information is correspondingly selected as the target data processing strategy, and the region of interest corresponding to a visual image is determined according to this strategy. The target lane refers to the lane in which the vehicle is currently traveling. The end points of the lane lines on the two sides of the target lane can be determined according to the road vanishing point of the target lane in the visual image: the end point far from the road vanishing point is the first end point, and the end point close to it is the second end point. The road vanishing point is the intersection at which the lane lines on both sides of the target lane converge when extended along the driving direction. It will be appreciated that, taking a forward-traveling vehicle as an example, the first and second end points are both located ahead of the vehicle's own position. As shown in fig. 3, the lane lines on the two sides of the target lane are denoted L1 and L2, their first end points A1 and A2, and their second end points B1 and B2.
According to the positions of the first and second end points, the slope at the first end point and/or the second end point, and the lane lines, the lane and its adjacent area can be determined as the area in which obstacles must be detected while the vehicle is traveling, and this area is taken as the region of interest in the visual image. This ensures the accuracy of obstacle detection and driving safety while minimizing interference information, reducing the calculation amount, and improving detection efficiency.
Further, the determining the region of interest based on the positions of the first end point and the second end point, the slope at the first end point and/or the second end point, and the lane line includes:
determining an initial region of interest based on a region formed by the connecting line of the first end point, the connecting line of the second end point and the lane line;
determining a first lane extension line according to the position of the first end point and the slope of the first end point, and determining a first adjacent area based on an area formed by a connecting line of the first end point and the first lane extension line;
determining a second lane extension line according to the position of the second end point and the slope at the second end point, and determining a height adjacent region based on the area formed by the connecting line of the second end points and the second lane extension lines; or determining a height extension line at a set angle relative to the target lane according to the position of the second end point, and determining a height adjacent region based on the area formed by the connecting line of the second end points and the height extension lines;
and combining the first adjacent area, the height adjacent area and the initial region of interest to determine a region of interest.
Here, the first adjacent region, determined from the position of the first end point (the end far from the road vanishing point of the target lane) and the slope at the first end point, is taken as part of the region of interest; this improves the reliability of obstacle detection and further secures safety ahead in the driving direction. The height extension line is determined at a set angle, usually 90 degrees, relative to the target lane from the position of the second end point (the end close to the road vanishing point), and the height adjacent area is taken as part of the region of interest; this ensures that obstacles ahead whose height exceeds the road vanishing point are fully detected, improving detection accuracy and driving safety. Referring again to fig. 3, the first lane extension lines determined from the positions of the first end points A1, A2 and the slopes at those points are denoted L3 and L4; the initial region of interest is region 1, the first adjacent region is region 2, the height extension lines are L5 and L6, and the height adjacent region is region 3. Optionally, referring to fig. 4, second lane extension lines may instead be determined from the positions of the second end points B1 and B2 and the slopes at those points, and the height adjacent area determined from them. The second lane extension lines are denoted L7 and L8, and the height adjacent region determined from them is region 4 in fig. 4. This narrows the range of the region of interest and eliminates interference information to the greatest extent, improving detection efficiency and accuracy while still ensuring that obstacles ahead whose height exceeds the road vanishing point can be detected.
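The construction of the region of interest from the lane line end points can be sketched as follows. The image coordinate convention (origin at top left, v increasing downward), the slope definition (dv/du), the height margin, and all function names are assumptions for the example; traversing the merged areas as a single closed polygon is one simple way of combining them.

```python
# Illustrative sketch: assemble the ROI polygon from the lane line end points.
# A1, A2: first end points (far from the road vanishing point, near the vehicle);
# B1, B2: second end points (close to the road vanishing point).

def extend_along_slope(point, slope, v_target):
    """Point where the line through `point` with slope dv/du (assumed nonzero)
    reaches image row v_target."""
    u, v = point
    return (u + (v_target - v) / slope, v_target)

def build_roi_polygon(A1, A2, B1, B2, slope1, slope2,
                      image_height, height_margin):
    """Merge the initial ROI, the first adjacent area, and the height
    adjacent area into one closed polygon (left side bottom-up, then
    right side top-down)."""
    # First adjacent area: extend the lane lines from A1/A2 to the image bottom.
    E1 = extend_along_slope(A1, slope1, image_height - 1)
    E2 = extend_along_slope(A2, slope2, image_height - 1)
    # Height adjacent area: vertical extension (set angle of 90 degrees)
    # rising `height_margin` pixels above the second end points.
    T1 = (B1[0], B1[1] - height_margin)
    T2 = (B2[0], B2[1] - height_margin)
    return [E1, A1, B1, T1, T2, B2, A2, E2]
```

The variant of fig. 4 would replace the vertical extension above B1/B2 with points on the second lane extension lines, narrowing the polygon near the vanishing point.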
In an embodiment, when the lane line information does not meet the set condition, the step 103 of determining a region of interest corresponding to a visual image according to the target data processing strategy includes:
acquiring the position of a target vehicle, and determining a region of interest in a world coordinate system in front of the position of the target vehicle according to the set lane width, vehicle height, and effective detection distance;
acquiring attitude data of an image acquisition device;
and performing a Euclidean transformation on the region of interest in the world coordinate system according to the attitude data, and determining the region of interest corresponding to the visual image from the transformed result.
When the lane line information does not meet the set condition, a data processing strategy using lane prior knowledge and camera attitude data as prior information is correspondingly selected as the target data processing strategy, and the region of interest corresponding to a visual image is determined according to this strategy. The target vehicle refers to the vehicle equipped with the intelligent driving device. Here, the lane prior knowledge includes the set lane width W, the vehicle height H, and the effective detection distance L. The set lane width and vehicle height can be determined from the conventional lane width and vehicle height, respectively. The effective detection distance is determined from the maximum distance that the current image acquisition device can detect, and usually does not exceed that maximum. The area the vehicle is about to travel through is determined in front of the target vehicle's position, taken as the area in which obstacles must be detected while the vehicle is traveling, and thus determined as the region of interest in the visual image.
The image acquisition device is a device for acquiring binocular visual images, such as a camera. The origin of the world coordinate system is determined from the position of the target vehicle, the vertex coordinates of the region of interest on the coordinate axes of the world coordinate system are determined from the lane width, the vehicle height, and the effective detection distance, and a Euclidean transformation is applied to the region of interest in the world coordinate system using the attitude data of the image acquisition device, yielding the region of interest in the image acquisition device coordinate system, from which the region of interest corresponding to a visual image can be determined. It should be noted that, in the embodiment of the invention, the binocular stereo vision images may be obtained from two corresponding cameras, or one image may be acquired by a monocular camera and the other visual image obtained by converting it; this is not limited herein. When the image acquisition device is a binocular camera, each camera corresponds to one visual image. When the image acquisition device is a monocular camera, the region of interest is obtained in the camera coordinate system: when the visual image directly acquired by the monocular camera is the left visual image, the region of interest in the camera coordinate system is the region of interest corresponding to the left visual image; when it is the right visual image, the region of interest correspondingly corresponds to the right visual image.
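The vertices of the three-dimensional region of interest in the world coordinate system can be sketched as follows. The axis convention (origin at the vehicle position on the road surface, X lateral, Y up, Z forward along the driving direction) and the function name are assumptions for the example; the text only fixes that the box is defined by the lane width W, vehicle height H, and effective detection distance L.

```python
# Illustrative sketch: eight vertices of the 3D ROI box ahead of the vehicle.
# Convention assumed here: origin at the vehicle position on the road surface,
# X lateral (centered on the lane), Y up, Z forward along the driving direction.

def world_roi_vertices(lane_width, vehicle_height, detect_dist):
    half_w = lane_width / 2.0
    return [
        (x, y, z)
        for z in (0.0, detect_dist)     # from the vehicle to the effective distance
        for y in (0.0, vehicle_height)  # from the road surface up to the vehicle height
        for x in (-half_w, half_w)      # spanning the set lane width
    ]
```

With W = 3.5 m, H = 2 m, and L = 50 m, this yields a 3.5 m x 2 m x 50 m corridor ahead of the vehicle.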
Further, the acquiring of the posture data of the image acquisition device includes:
acquiring attitude data of the image acquisition device relative to the world coordinate system, wherein the attitude data comprises a pitch angle, a roll angle and a yaw angle;
the Euclidean transformation is carried out on the region of interest under the world coordinate system according to the attitude data, and the region of interest corresponding to the transformed visual image is determined, wherein the Euclidean transformation comprises the following steps:
and determining a rotation matrix according to the pitch angle, the roll angle and the yaw angle, and determining the region of interest corresponding to the transformed visual image according to the product of the vertex coordinates of the region of interest in the world coordinate system and the rotation matrix.
Referring to fig. 5, the attitude data of the image acquisition device may include a pitch angle, a roll angle, and a yaw angle. In an alternative embodiment, the rotation matrix determined from the pitch, roll, and yaw angles is denoted by R:
The region of interest in the image acquisition device coordinate system is determined from the product of the rotation matrix and the vertex coordinates of the region of interest in the world coordinate system, so that the region of interest corresponding to a visual image can be determined; taking a vertex P as an example, the Euclidean transformation maps it to P' = R·P. Any point of the region of interest in the world coordinate system can be converted by this Euclidean transformation into the corresponding point in the image acquisition device coordinate system, so the region of interest in that coordinate system, i.e., the region of interest in the corresponding visual image, can be determined from the region of interest in the world coordinate system.
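The rotation and vertex transformation can be sketched as follows. The axis assignment and multiplication order, R = Rz(yaw)·Ry(pitch)·Rx(roll), are one common convention and are assumptions here, since the text does not fix them; all names are illustrative.

```python
# Illustrative sketch: rotation matrix from attitude angles and the
# Euclidean transformation P' = R . P of an ROI vertex.
import math

def mat_mul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation_matrix(pitch, roll, yaw):
    """R = Rz(yaw) @ Ry(pitch) @ Rx(roll); axis assignment assumed."""
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    cy, sy = math.cos(yaw), math.sin(yaw)
    rx = [[1, 0, 0], [0, cr, -sr], [0, sr, cr]]   # roll about X
    ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]   # pitch about Y
    rz = [[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]]   # yaw about Z
    return mat_mul(rz, mat_mul(ry, rx))

def transform_vertex(R, p):
    """Euclidean rotation of vertex p = (x, y, z): returns R @ p."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))
```

Applying `transform_vertex` to each vertex of the world-coordinate ROI box yields the ROI in the image acquisition device coordinate system.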
In the embodiment of the invention, when the lane line information does not meet the set condition, the vehicle may be traveling on an unstructured road in the current field of view, or effective lane line information cannot be obtained while traveling on a structured road. The region of interest is then determined from the area the vehicle is about to enter, based on the current position of the target vehicle, so that obstacles ahead can be accurately detected in different road environments. The three-dimensional region of interest is defined by creating a world coordinate system, and the region of interest in the visual image is obtained by a Euclidean transformation based on the attitude data of the image acquisition device. This improves the accuracy of determining the region of interest, narrows its range as much as possible, eliminates interference information to the greatest extent, and improves detection efficiency and accuracy.
Referring to fig. 6, an implementation of the binocular stereo data processing method according to an embodiment of the invention is described below using an alternative embodiment in which the image acquisition device is specifically a camera. The method includes:
step S11, obtaining lane line information;
step S13, determining whether the lane line information is successfully acquired; if yes, go to step S14; if not, executing S25-S28;
step S14, determining whether the confidence of the lane line information is higher than a threshold value; if yes, go to steps S15-S18; if not, executing steps S25-S28;
step S15, generating an initial ROI according to the lane line information;
step S16, obtaining a first adjacent area and merging the first adjacent area into the ROI based on the extension of the lane line at the end point far away from one end of the road vanishing point;
step S17, obtaining a highly adjacent region and merging the highly adjacent region into the ROI based on the extension of the lane line at the end point close to one end of the road vanishing point;
step S18, obtaining filtered binocular stereo data by stereo matching of the ROI corresponding to one visual image and the other visual image;
step S25, forming an ROI under an initialized world coordinate system according to the lane priori knowledge;
step S26, acquiring camera attitude data, wherein the camera attitude data comprises a pitch angle, a roll angle and a yaw angle;
step S27, determining a conversion matrix according to the camera attitude data, and converting the ROI under the world coordinate system to obtain a corresponding ROI under the camera coordinate system;
and step S28, performing stereo matching on the visual image and the other visual image, and filtering binocular stereo point cloud data obtained by matching according to the ROI under the camera coordinate system to obtain filtered binocular stereo data.
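The overall flow of steps S11 to S28 can be sketched as follows. Every helper passed in is a hypothetical stand-in for the corresponding step, and the threshold value is an assumption; only the branching structure mirrors the flow of fig. 6.

```python
# Illustrative sketch of the S11-S28 flow (all helper names hypothetical).

def process_frame(left_img, right_img, detect_lanes, build_lane_roi,
                  build_world_roi, get_camera_pose, to_camera_frame,
                  stereo_match, filter_cloud, threshold=0.8):
    info = detect_lanes(left_img)                          # S11, S13
    if info is not None and info.confidence > threshold:   # S14
        roi = build_lane_roi(info)                         # S15-S17
        return stereo_match(left_img, right_img, roi=roi)  # S18: match within ROI
    world_roi = build_world_roi()                          # S25
    pose = get_camera_pose()                               # S26
    cam_roi = to_camera_frame(world_roi, pose)             # S27
    cloud = stereo_match(left_img, right_img)              # S28: match full images...
    return filter_cloud(cloud, cam_roi)                    # ...then filter the cloud
```

Injecting the helpers as parameters keeps the branch logic testable independently of any particular lane detector or matcher.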
In the embodiment of the invention, the manner of generating the matched ROI can be selected according to the lane line information and its confidence, so that the ROI in the visual image can be determined accurately and efficiently under the different actual road conditions of the current driving scene, or the different conditions of lane line information acquisition. The range of the ROI is narrowed as much as possible, interference information is eliminated to the greatest extent on the premise of accurate and efficient ROI determination, and detection efficiency and accuracy are improved.
As shown in fig. 7, in one embodiment, there is provided a binocular stereo data processing apparatus including a policy selection module 11, an ROI determination module 13, and a stereo matching module 15. The policy selection module 11 is configured to determine whether the lane line information in the current field of view meets the set condition and to select a matched target data processing strategy according to the determination result. The ROI determination module 13 is configured to determine the region of interest corresponding to a visual image according to the target data processing strategy. The stereo matching module 15 is configured to perform stereo matching on the visual image and the other visual image according to the region of interest to obtain filtered binocular stereo data.
In an embodiment, the policy selection module 11 is specifically configured to acquire a visual image corresponding to the current field of view, identify the visual image to extract lane line information in the visual image, and determine whether the lane line information in the current field of view meets a setting condition according to a confidence level of the lane line information.
In another embodiment, the policy selection module 11 is specifically configured to detect lane line information in the current field of view; when lane line information is detected, to determine whether the lane line information in the current field of view meets the set condition according to the confidence of the lane line information; and when no lane line information is detected, to determine that the lane line information in the current field of view does not meet the set condition.
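The selection logic of the policy selection module can be sketched as follows. The confidence threshold and the dict representation of lane line information are assumptions for illustration; the embodiments do not fix either.

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed value; the embodiments leave the threshold open

def select_strategy(lane_line_info):
    """Select the target data processing strategy for the current field of view.

    lane_line_info is None when no lane line was detected; otherwise it is
    assumed to be a dict carrying a detection confidence in [0, 1].
    """
    if lane_line_info is None:
        # No lane line detected: the set condition is not met
        return "lane_prior_knowledge_with_camera_pose"
    if lane_line_info["confidence"] >= CONFIDENCE_THRESHOLD:
        # Lane line information meets the set condition: use it as the prior
        return "lane_line_as_prior"
    # Detected but low confidence: fall back to prior knowledge + camera pose
    return "lane_prior_knowledge_with_camera_pose"
```

The two returned strategy names correspond to the two ROI generation paths of the ROI determination module described below.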
In one embodiment, the ROI determination module 13 includes an endpoint unit and an ROI unit. When the lane line information meets the set condition, the endpoint unit is configured to obtain the endpoints of the lane lines on the two sides of a target lane in the visual image, where the endpoints include a first endpoint far from the road vanishing point and a second endpoint close to the road vanishing point. The ROI unit is configured to determine the region of interest based on the positions of the first endpoint and the second endpoint, the slope of the lane line at the first endpoint and/or the second endpoint, and the lane line.
The ROI unit comprises an initialization unit, a first adjacent region determination unit, a second adjacent region determination unit, and a merging unit. The initialization unit is configured to determine an initial region of interest based on the region enclosed by the connecting line of the first endpoints, the connecting line of the second endpoints, and the lane lines. The first adjacent region determination unit is configured to determine a first lane extension line according to the position of the first endpoint and the slope at the first endpoint, and to determine a first adjacent region based on the region enclosed by the connecting line of the first endpoints and the first lane extension lines. The second adjacent region determination unit is configured to determine a second lane extension line according to the position of the second endpoint and the slope at the second endpoint, and to determine a height-adjacent region based on the region enclosed by the connecting line of the second endpoints and the second lane extension lines; alternatively, it determines a height extension line at a set angle with respect to the target lane based on the position of the second endpoint, and determines the height-adjacent region based on the region enclosed by the connecting line of the second endpoints and the height extension lines. The merging unit is configured to merge the first adjacent region, the height-adjacent region, and the initial region of interest to determine the region of interest.
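The extension and merging steps above can be sketched in simplified form. Extending a lane line past an endpoint along its local slope is simple linear extrapolation; merging is shown here as a joint bounding box, which is a crude stand-in for the polygonal union the merging unit would perform. The extension length is an illustrative choice, not fixed by the embodiments.

```python
def extend_lane_line(endpoint, slope, dy):
    """Extend a lane line linearly past an endpoint using the local slope.

    endpoint : (x, y) image coordinates of the lane-line endpoint
    slope    : dy/dx of the lane line at that endpoint (assumed nonzero)
    dy       : vertical extension in pixels (illustrative length)

    Returns the far end of the extension line.
    """
    x, y = endpoint
    # A vertical step of dy along a line with slope dy/dx moves dx = dy / slope
    return (x + dy / slope, y + dy)

def merge_regions(*regions):
    """Merge polygonal regions (lists of (x, y) vertices) into one region of
    interest, approximated here by the joint bounding box
    (x_min, y_min, x_max, y_max)."""
    xs = [x for region in regions for x, _ in region]
    ys = [y for region in regions for _, y in region]
    return (min(xs), min(ys), max(xs), max(ys))
```

For example, the first adjacent region would be built from the two extension-line endpoints returned by `extend_lane_line` for the left and right lane lines, and then merged with the initial region of interest and the height-adjacent region.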
In another embodiment, the ROI determination module 13 includes a first ROI unit, a conversion unit, and a second ROI unit. The first ROI unit is configured to acquire the position of the target vehicle, and to determine a region of interest in the world coordinate system in front of the position of the target vehicle according to the set lane width, vehicle height, and effective detection distance. The conversion unit is configured to acquire attitude data of the image acquisition device. The second ROI unit is configured to perform a Euclidean transformation on the region of interest in the world coordinate system according to the attitude data, and to determine the region of interest corresponding to the transformed visual image.
The conversion unit is specifically configured to acquire attitude data of the image acquisition device relative to the world coordinate system, where the attitude data includes a pitch angle, a roll angle, and a yaw angle. The second ROI unit is specifically configured to determine a rotation matrix according to the pitch angle, the roll angle, and the yaw angle, and to determine the region of interest corresponding to the transformed visual image according to the product of the vertex coordinates of the region of interest in the world coordinate system and the rotation matrix.
It should be noted that the binocular stereo data processing apparatus provided in the above embodiment is illustrated by the division of the above program modules by way of example only; in practical applications, the above processing may be distributed among different program modules as needed, that is, the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the binocular stereo data processing apparatus and the binocular stereo data processing method provided in the above embodiments belong to the same concept, and the specific implementation processes thereof are described in detail in the method embodiments and are not repeated here.
The embodiment of the present invention further provides an intelligent driving device, which may be a vehicle-mounted device installed on a vehicle as shown in fig. 1; it can be understood that the intelligent driving device may also refer to a vehicle or the like that includes the corresponding vehicle-mounted device. Referring to fig. 8, the intelligent driving device includes a processor 201 and a storage 202 for storing a computer program capable of running on the processor 201, wherein the processor 201 is configured to execute the steps of the binocular stereo data processing method provided in any embodiment of the present application when running the computer program. The processor 201 and the storage 202 are not limited to one each; there may be one or more of each. The intelligent driving device further includes a memory 203, a network interface 204, and a system bus 205 connecting the memory 203, the network interface 204, the processor 201, and the storage 202. The storage 202 stores an operating system and the computer program corresponding to a virtual binocular stereo data processing apparatus for implementing the binocular stereo data processing method provided by the embodiment of the invention. The processor 201 is used to support the operation of the entire intelligent driving device. The memory 203 may be used to provide an environment for the execution of the computer program in the storage 202. The network interface 204 may be used for network communication with external server devices, terminal devices, and the like, to receive or transmit data, for example to obtain driving control instructions input by a user.
Embodiments of the present invention further provide a computer storage medium, for example a memory storing a computer program, where the computer program is executable by a processor to perform the steps of the binocular stereo data processing method provided in any embodiment of the present invention. The computer storage medium may be an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; or it may be any of various devices including one of, or any combination of, the above memories.
The above description covers only specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto; any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all such changes or substitutions shall be covered within the scope of protection of the present invention. The scope of the invention is to be determined by the scope of the appended claims.