
CN113246931A - Vehicle control method and device, electronic equipment and storage medium - Google Patents

Vehicle control method and device, electronic equipment and storage medium

Info

Publication number
CN113246931A
CN113246931A
Authority
CN
China
Prior art keywords
pedestrian
image
preset
vehicle
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110650767.9A
Other languages
Chinese (zh)
Other versions
CN113246931B (en)
Inventor
张发恩
么琳
郭慧娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Innovation Qizhi Chengdu Technology Co ltd
Original Assignee
Innovation Qizhi Chengdu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Innovation Qizhi Chengdu Technology Co ltd filed Critical Innovation Qizhi Chengdu Technology Co ltd
Priority to CN202110650767.9A priority Critical patent/CN113246931B/en
Publication of CN113246931A publication Critical patent/CN113246931A/en
Application granted granted Critical
Publication of CN113246931B publication Critical patent/CN113246931B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60T VEHICLE BRAKE CONTROL SYSTEMS OR PARTS THEREOF; BRAKE CONTROL SYSTEMS OR PARTS THEREOF, IN GENERAL; ARRANGEMENT OF BRAKING ELEMENTS ON VEHICLES IN GENERAL; PORTABLE DEVICES FOR PREVENTING UNWANTED MOVEMENT OF VEHICLES; VEHICLE MODIFICATIONS TO FACILITATE COOLING OF BRAKES
    • B60T 7/00 Brake-action initiating means
    • B60T 7/12 Brake-action initiating means for automatic initiation; for initiation not subject to will of driver or passenger
    • B60T 7/22 Brake-action initiating means for automatic initiation; for initiation not subject to will of driver or passenger initiated by contact of vehicle, e.g. bumper, with an external object, e.g. another vehicle, or by means of contactless obstacle detectors mounted on the vehicle
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 21/00 Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R 21/01 Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
    • B60R 21/013 Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over
    • B60R 21/0134 Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over responsive to imminent contact with an obstacle, e.g. using radar systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Mechanical Engineering (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Transportation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a vehicle control method and device, an electronic device, and a storage medium, and belongs to the technical field of image processing. The vehicle control method is used for controlling a vehicle on which a camera group is arranged, the camera group comprising an infrared camera and a visible light camera with the same shooting angle; the method comprises the following steps: acquiring an infrared image and a visible light image shot at the same shooting angle and the same moment; performing pedestrian detection based on the infrared image and the visible light image to obtain a detection result; when the detection result indicates that a pedestrian exists in the driving direction of the vehicle, detecting whether the pedestrian is located in a preset warning area or a preset braking area; and controlling the vehicle to warn or brake when the pedestrian is located in the preset warning area or the preset braking area. Performing low-visibility pedestrian detection on images shot by two cameras with the same shooting angle greatly improves the accuracy of pedestrian detection.

Description

Vehicle control method and device, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a vehicle control method and device, electronic equipment and a storage medium.
Background
Underground workings lie far below the surface, and their tunnels depend entirely on limited artificial lighting; at the same time, heavy smoke blocks the light, so the scene is dim and visibility is very low. People are difficult to see clearly, and a person may walk into the path of a vehicle without being noticed by the driver. Conventional judgment based on infrared images can mistake other heat sources, such as engines and cables, for a human body; in conventional visible light images taken under low visibility, objects are difficult to make out and human behavior is easily misjudged, which compromises the safety of the driving system.
Disclosure of Invention
In view of the above, an object of the present application is to provide a vehicle control method, a vehicle control apparatus, an electronic device, and a storage medium, so as to solve the problem that the conventional detection method is prone to erroneous determination.
The embodiment of the application is realized as follows:
In a first aspect, an embodiment of the present application provides a vehicle control method for controlling a vehicle on which a camera group is arranged, the camera group comprising an infrared camera and a visible light camera with the same shooting angle; the method comprises the following steps: acquiring an infrared image and a visible light image shot at the same shooting angle and the same moment; performing pedestrian detection based on the infrared image and the visible light image to obtain a detection result; when the detection result indicates that a pedestrian exists in the driving direction of the vehicle, detecting whether the pedestrian is located in a preset warning area or a preset braking area; and controlling the vehicle to warn or brake when the pedestrian is located in the preset warning area or the preset braking area. In the embodiment of the application, pedestrian detection under low visibility is performed on images shot by two cameras with the same shooting angle, which greatly improves the accuracy of pedestrian detection and reduces the possibility of misjudgment; driving safety is therefore improved, and the driving experience is improved by avoiding repeated stops caused by misjudgment.
With reference to a possible implementation manner of the embodiment of the first aspect, performing pedestrian detection based on the infrared image and the visible light image to obtain a detection result includes: respectively preprocessing the infrared image and the visible light image so that the processed infrared image and the processed visible light image have the same size; splicing the processed infrared image and the processed visible light image to obtain a spliced image; and inputting the spliced image into a pre-trained pedestrian detection model for pedestrian detection to obtain a detection result. In the embodiment of the application, the two kinds of images are preprocessed so that the processed infrared image and the processed visible light image have the same size and are then spliced, and a pre-trained pedestrian detection model detects pedestrians in the spliced image; the detection result is obtained quickly, and because the two kinds of images are combined, detection accuracy is improved at the same time.
With reference to one possible implementation manner of the embodiment of the first aspect, the pedestrian detection model includes a human head detection model and a human body detection model; inputting the spliced image into a pre-trained pedestrian detection model for pedestrian detection includes: inputting the spliced image into the human head detection model for human head detection, and inputting the spliced image into the human body detection model for human body detection. In the embodiment of the application, two models are used for pedestrian detection, which improves the accuracy of the detection result and avoids inaccurate results caused by the limitations of a single model.
With reference to a possible implementation manner of the embodiment of the first aspect, inputting the spliced image into a pre-trained pedestrian detection model for pedestrian detection includes: cropping the spliced image to obtain a central area image of the spliced image; enlarging the central area image to the size of the spliced image; and inputting the enlarged central area image into the pedestrian detection model for pedestrian detection. In the embodiment of the application, the spliced image is cropped to obtain its central area image, the central area image is enlarged, and the result is input into the model for pedestrian detection, which avoids failing to identify distant pedestrians.
With reference to one possible implementation manner of the embodiment of the first aspect, the detecting whether the pedestrian is located in the preset warning area or the preset braking area includes: determining the position distance of the pedestrian from the vehicle; judging whether the position distance is located in the preset alarm area or the preset brake area; and if the position distance is located in the preset warning area or the preset braking area, representing that the pedestrian is located in the preset warning area or the preset braking area.
With reference to one possible implementation manner of the embodiment of the first aspect, determining the position distance of the pedestrian from the vehicle includes: obtaining a first distance between the pedestrian and the vehicle according to the area of the pedestrian's head in the image and a preset inverse-proportion parameter characterizing the relation between distance and area; obtaining the horizontal position of the feet in the image according to the position coordinates of the pedestrian's feet in the image, and obtaining a second distance between the pedestrian and the vehicle according to the horizontal position of the feet in the image and a preset relation between horizontal position and distance; and determining the average value of the first distance and the second distance, the average value being the position distance of the pedestrian from the vehicle. In the embodiment of the application, the distance between the pedestrian and the vehicle is obtained from different angles and the distances are then averaged to give the position distance between the pedestrian and the vehicle, which avoids the error of a distance obtained from a single angle and improves the accuracy of the result.
With reference to one possible implementation manner of the embodiment of the first aspect, the camera groups include a first camera group located at the left front of the vehicle driving direction and a second camera group located at the right front of the vehicle driving direction, and the visible light camera in the first camera group and the visible light camera in the second camera group form a binocular camera; the method further includes: obtaining a third distance between the pedestrian and the vehicle based on the binocular ranging principle; wherein the position distance of the pedestrian from the vehicle is the average value of the first distance, the second distance and the third distance. In this embodiment of the application, when a binocular camera is formed, a third distance between the pedestrian and the vehicle is also obtained based on the binocular ranging principle, and the average of the distances obtained from the three angles is taken as the position distance of the pedestrian from the vehicle, which further improves the accuracy of the result.
With reference to one possible implementation manner of the embodiment of the first aspect, the detecting whether the pedestrian is located in the preset warning area or the preset braking area includes: calculating the intersection ratio of the current area of the pedestrian and the preset warning area or the preset braking area; detecting whether the pedestrian is located in the preset alarm area or the preset brake area by judging whether the intersection ratio is larger than a preset threshold value or not; if the intersection ratio is larger than the preset threshold value, it is represented that the pedestrian is located in the preset warning area or the preset braking area. In the embodiment of the application, whether the pedestrian is located in the preset warning area or the preset braking area is detected by calculating the intersection ratio of the area where the pedestrian is located and the preset warning area or the preset braking area, so that the flexibility and the practicability of the scheme are further enhanced.
In a second aspect, an embodiment of the present application further provides a vehicle control device, configured to control a vehicle, where a camera group is arranged on the vehicle, and the camera group includes an infrared camera and a visible light camera that have the same shooting angle; the device comprises: the device comprises an acquisition module and a processing module; the acquisition module is used for acquiring the infrared image and the visible light image which are shot at the same shooting angle and the same moment; and the processing module is used for detecting pedestrians based on the infrared image and the visible light image to obtain a detection result, detecting whether the pedestrians are located in a preset alarm area or a preset brake area when the detection result represents that the pedestrians are located in the vehicle running direction, and controlling the vehicle to alarm or brake when the pedestrians are located in the preset alarm area or the preset brake area.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a memory and a processor, the processor being connected to the memory, and the processor being configured to invoke a program stored in the memory to perform the method according to the first aspect embodiment and/or any possible implementation manner of the first aspect embodiment.
In a fourth aspect, embodiments of the present application further provide a storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the method provided in the foregoing first aspect and/or any one of the possible implementation manners of the first aspect.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort. The foregoing and other objects, features and advantages of the application will be apparent from the accompanying drawings. Like reference numerals refer to like parts throughout the drawings. The drawings are not necessarily drawn to scale; emphasis is instead placed upon illustrating the subject matter of the present application.
Fig. 1 shows a schematic flow chart of a vehicle control method provided in an embodiment of the present application.
Fig. 2 is a schematic diagram illustrating a central area image according to an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of a principle of binocular ranging provided in an embodiment of the present application.
Fig. 4 shows a block schematic diagram of a vehicle control device provided in an embodiment of the present application.
Fig. 5 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, relational terms such as "first," "second," and the like may be used solely in the description herein to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Further, the term "and/or" in the present application is only one kind of association relationship describing the associated object, and means that three kinds of relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone.
The embodiment of the application provides a vehicle control method for controlling a vehicle. The vehicle is provided with a camera group, and the camera group comprises an infrared camera and a visible light camera with the same shooting angle. The camera group is used for collecting infrared images and visible light images in the driving direction of the vehicle. Optionally, camera groups are installed at both the front and the rear of the vehicle, that is, at the head and the tail; when the vehicle drives forward, the camera group at the head collects the infrared image and the visible light image in the driving direction, and when the vehicle reverses, the camera group at the tail collects them, as sketched below.
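As a simple illustration of this selection logic, the following minimal sketch picks the camera group facing the current driving direction; the CameraGroup structure and the reversing signal are assumptions for illustration, not part of the patent.

```python
# A minimal sketch, assuming a CameraGroup handle per vehicle end and a
# boolean reversing signal from the vehicle; both are hypothetical.
from dataclasses import dataclass

@dataclass
class CameraGroup:
    infrared: object      # handle to the infrared camera
    visible: object       # handle to the visible light camera

def active_group(head: CameraGroup, tail: CameraGroup, reversing: bool) -> CameraGroup:
    """Return the camera group that faces the current driving direction."""
    return tail if reversing else head
```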
The following describes a vehicle control method provided in an embodiment of the present application with reference to fig. 1.
Step S101: and acquiring the infrared image and the visible light image which are shot at the same shooting angle and the same time.
In order to improve the accuracy of pedestrian detection, in the embodiment of the application, the infrared image and the visible light image which are shot at the same shooting angle and the same moment are obtained, the pedestrian detection under the low visibility is carried out based on the images shot by the two cameras with the same shooting angle, and the accuracy of the pedestrian detection can be greatly improved.
It should be noted that "the same shooting angle" does not require the shooting angles of the two cameras to be exactly identical; as long as the difference is within an error tolerance, such as 5%, the angles are considered the same shooting angle. Similarly, "shot at the same moment" does not require the capture times of the two cameras to be exactly identical; as long as the difference is within an error tolerance, such as 5%, the images are considered shot at the same moment.
Step S102: and carrying out pedestrian detection based on the infrared image and the visible light image to obtain a detection result.
After the infrared image and the visible light image which are shot at the same shooting angle and the same moment are obtained, pedestrian detection can be carried out based on the infrared image and the visible light image, and a detection result is obtained.
In an alternative embodiment, the specific process of step S102 may be: respectively preprocessing the infrared image and the visible light image so that the processed infrared image and the processed visible light image have the same size, splicing the processed infrared image and the processed visible light image to obtain a spliced image, and inputting the spliced image into a pre-trained pedestrian detection model for pedestrian detection to obtain a detection result. For example, the infrared image and the visible light image are processed into images with the same width, height and number of channels, and the two images of identical width × height × channel count are then spliced.
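A minimal sketch of this preprocessing and splicing step follows; the target resolution and the choice of concatenating along the channel axis are assumptions for illustration, since the text only requires that both processed images end up the same size before splicing.

```python
import cv2
import numpy as np

TARGET_W, TARGET_H = 640, 480  # assumed working resolution

def preprocess_and_splice(ir_img: np.ndarray, vis_img: np.ndarray) -> np.ndarray:
    """Resize both images to a common size, then stack them channel-wise."""
    ir = cv2.resize(ir_img, (TARGET_W, TARGET_H))
    vis = cv2.resize(vis_img, (TARGET_W, TARGET_H))
    if ir.ndim == 2:               # thermal image delivered as a single channel
        ir = ir[:, :, np.newaxis]
    if vis.ndim == 2:              # guard in case the visible image is grayscale
        vis = vis[:, :, np.newaxis]
    # Spliced tensor of shape H x W x (C_ir + C_vis), fed to the detector.
    return np.concatenate([ir, vis], axis=2)
```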
The pedestrian detection model is a network model trained in advance and used for pedestrian detection. In one embodiment, the pedestrian detection model includes a human head detection model for head detection and a human body detection model for human body (including head) detection. In this embodiment, when one of the two detection models detects a pedestrian, it may be indicated that there is a pedestrian in the vehicle driving direction, and the accuracy of the detection result may be improved by using the two detection models. It should be noted that, of course, the pedestrian detection model may be only a human head detection model or a human body detection model, and therefore, a scheme including two detection models cannot be understood as a limitation to the present application.
When the pedestrian detection model includes a human head detection model for head detection and a human body detection model for human body (including head) detection, the process of inputting the stitched image into the pre-trained pedestrian detection model for pedestrian detection may be: inputting the spliced image into a human head detection model for human head detection, and inputting the spliced image into a human body detection model for human body detection. In this embodiment, it may be that one of the two detection models detects a pedestrian, indicating that there is a pedestrian in the vehicle driving direction, and that no pedestrian is considered to be present in the vehicle driving direction unless both detection models detect no pedestrian.
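The OR-fusion rule described above can be sketched as follows; the model objects and their detect() interface are hypothetical placeholders standing in for the trained head and body detectors.

```python
def pedestrian_present(spliced, head_model, body_model) -> bool:
    """Report a pedestrian if either detector fires (hypothetical interface)."""
    head_boxes = head_model.detect(spliced)   # bounding boxes of heads
    body_boxes = body_model.detect(spliced)   # bounding boxes of whole bodies
    # Only when *both* models find nothing is the driving direction clear.
    return len(head_boxes) > 0 or len(body_boxes) > 0
```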
When performing pedestrian detection, considering that distant pedestrians may fail to be identified, in an optional implementation, the process of inputting the spliced image into the pre-trained pedestrian detection model for pedestrian detection may be: first cropping the spliced image to obtain its central area image, then enlarging the central area image to the size of the spliced image, and then inputting the enlarged central area image into the pedestrian detection model for pedestrian detection. In this embodiment, the spliced image is not directly input into the pre-trained pedestrian detection model; it is first cropped, the cropped image is then enlarged so that its size matches the original image, and the result is input into the pedestrian detection model. For example, the spliced image is cropped to an image spanning from 1/4 to 3/4 of the original width and from 0 to 1/2 of the original height. For ease of understanding, the schematic diagram shown in fig. 2 is used for description; the area indicated by the dashed box is the cropped central area image. The size of the central area image is not limited to the dashed box shown in fig. 2 and may be other sizes, for example an image spanning from 1/5 to 4/5 of the original width and from 0 to 1/2 of the original height, or an image spanning from 1/4 to 3/4 of the original width and from 1/4 to 3/4 of the original height; the size may be set as necessary. A sketch of this crop-and-enlarge step follows.
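The sketch below uses the example crop window from the text (width 1/4 to 3/4, height 0 to 1/2); the window is an example rather than a fixed choice, as the text notes.

```python
import cv2
import numpy as np

def central_crop_enlarged(spliced: np.ndarray) -> np.ndarray:
    """Crop the central area and enlarge it back to the spliced image's size."""
    h, w = spliced.shape[:2]
    crop = spliced[0:h // 2, w // 4:3 * w // 4]   # central-area image
    return cv2.resize(crop, (w, h))               # enlarge before detection
```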
It should be noted that, when the pedestrian detection model includes a human head detection model and a human body detection model, the spliced images that are cut and then enlarged may be respectively input into the two models to perform human head and human body detection.
When the human head detection model is trained, a spliced image obtained by splicing an infrared image and a visible light image is used as a sample, together with a head annotation file (each head in the spliced image is given a bounding box labeling its position information), to train a neural network model such as a Retinaface network, yielding a trained human head detection model for head detection; the model outputs the top-left corner coordinates of each detected head target and the target's height and width. The spliced image is obtained by splicing an infrared image and a visible light image with the same width × height × channel count. It should be noted that the specific process of model training is well known to those skilled in the art and is not described here.
When the human body detection model is trained, a spliced image obtained by splicing an infrared image and a visible light image is used as a sample, together with a pedestrian annotation file (each pedestrian in the spliced image is given a bounding box labeling its position information), to train a neural network model such as a MobileNetV1-SSD network, yielding a trained human body detection model for body detection; the model outputs the top-left corner coordinates of each detected whole-pedestrian target and the target's height and width. The spliced image is obtained by splicing an infrared image and a visible light image with the same width × height × channel count. It should be noted that the specific process of model training is well known to those skilled in the art and is not described here.
When training the model, the stitched image that is cut and then enlarged in size may be used as a sample to enhance the detection capability of the model.
Step S103: and when the detection result represents that the pedestrian exists in the driving direction of the vehicle, detecting whether the pedestrian is located in a preset alarm area or a preset brake area.
And when the detection result indicates that a pedestrian exists in the driving direction of the vehicle, detecting whether the pedestrian is located in a preset alarm area or a preset braking area, and executing the step S104 when the pedestrian is located in the preset alarm area or the preset braking area. And when the pedestrian is not located in the preset alarm area or the preset brake area, continuing to detect.
In one embodiment, the implementation process of step S103 may be: determining the position distance between the pedestrian and the vehicle, judging whether the position distance is located in a preset warning area or a preset braking area, if so, representing that the pedestrian is located in the preset warning area or the preset braking area, and if not, representing that the pedestrian is not located in the preset warning area or the preset braking area.
Optionally, the process of determining the position distance between the pedestrian and the vehicle may be: obtaining a first distance between the pedestrian and the vehicle according to the area of the pedestrian's head in the image and a preset inverse-proportion parameter relating distance to area; obtaining the horizontal position of the feet in the image according to the position coordinates of the pedestrian's feet in the image, and obtaining a second distance between the pedestrian and the vehicle according to the horizontal position of the feet in the image and a preset relation between horizontal position and distance; and determining the average value of the first distance and the second distance, the average value being the position distance between the pedestrian and the vehicle. In this embodiment, the distances between the pedestrian and the vehicle are obtained from different angles and then averaged to give the position distance between the pedestrian and the vehicle, which avoids the error of a distance obtained from a single angle and improves the accuracy of the result.
The process of obtaining the first distance between the pedestrian and the vehicle is described here. With fixed camera parameters, the head areas of different people can be considered roughly the same; since the camera is mounted on the vehicle, the distance from the head to the camera is the distance from the pedestrian to the vehicle. Let D denote the distance from the head to the camera, H the height of the head bounding box, W its width, and K the inverse-proportion parameter relating distance and area; then

D = K / (H × W)

With fixed camera parameters, the inverse-proportion parameter K is constant and known. H and W can be obtained from the head detection model, which outputs the top-left corner coordinates of each detected head target and the target's height and width, so the distance D can be obtained.
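As a sketch, the first-distance estimate reduces to a single division; the calibration constant below is a made-up placeholder for the K that would be measured once for a fixed camera setup.

```python
K_DIST_AREA = 1.8e5  # hypothetical calibration constant (pixel^2 · meters)

def first_distance(head_h: float, head_w: float) -> float:
    """Distance D = K / (H * W) from the head bounding-box size in pixels."""
    return K_DIST_AREA / (head_h * head_w)
```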
The following describes how the second distance between the pedestrian and the vehicle is obtained. Because people's heights vary, the heads of different people at the same distance from the camera are not necessarily on the same horizontal line, but their feet are on the same horizontal line. With the camera's angle and height fixed, a series of horizontal lines is calibrated in advance to mark the horizontal positions in the image of a pedestrian's feet at distances of 1 meter, 5 meters, 10 meters, 20 meters and so on from the camera. The horizontal position of the feet in the image is obtained from the position coordinates of the pedestrian's feet in the image, and the second distance between the pedestrian and the vehicle is then obtained from the preset relation between horizontal position and distance. The position coordinates of the pedestrian's feet in the image can be obtained from the output of the human body detection model: the model outputs the top-left corner coordinates of the detected whole-pedestrian target and the target's height and width, and adding the target's height to the top-left corner coordinate gives the position of the feet in the image.
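A sketch of this lookup follows, interpolating linearly between the pre-calibrated horizontal lines; the calibration table values are hypothetical.

```python
import numpy as np

# Image row (pixels from the top) at which feet appear, per calibrated distance.
CALIB_ROWS = np.array([470.0, 380.0, 330.0, 300.0])   # hypothetical rows
CALIB_DIST = np.array([1.0, 5.0, 10.0, 20.0])         # meters from the camera

def second_distance(box_top_y: float, box_height: float) -> float:
    """Distance from the row of the pedestrian's feet (box bottom edge)."""
    feet_row = box_top_y + box_height
    # np.interp needs increasing x; rows shrink with distance, so reverse both.
    return float(np.interp(feet_row, CALIB_ROWS[::-1], CALIB_DIST[::-1]))
```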
The camera groups may include a first camera group located at the left front of the vehicle driving direction and a second camera group located at the right front, that is, two camera groups are arranged at both the front and the rear of the vehicle, so that whether the vehicle moves forward or backward, two camera groups (one at the left front and one at the right front of the driving direction) are working in the driving direction. For example, if the vehicle drives forward, the images used for pedestrian detection are collected by the first camera group at the left front of the vehicle head and the second camera group at the right front of the vehicle head; if the vehicle reverses, they are collected by the first camera group at the left front of the vehicle tail and the second camera group at the right front of the vehicle tail. The visible light camera in the first camera group and the visible light camera in the second camera group form a binocular camera. In this case, a third distance between the pedestrian and the vehicle may also be obtained based on the binocular ranging principle, and in an alternative embodiment the position distance of the pedestrian from the vehicle is the average value of the first distance, the second distance and the third distance.
The principle of binocular ranging is explained below with reference to fig. 3: O_R and O_T are the optical centers of the two visible light cameras, and the images of the object U on the two cameras' photoreceptors are P and P'. Let f be the focal length of the cameras and B the distance between the centers of the two cameras; then the distance from P to P' is

dis = B − (X_R − X_T)

By the principle of similar triangles,

(B − (X_R − X_T)) / B = (Z − f) / Z

and transforming the formula gives

Z = (f × B) / (X_R − X_T)

where Z is the third distance between the pedestrian and the vehicle.
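A sketch of this computation follows; the focal length and baseline values are hypothetical placeholders for quantities fixed by the camera installation.

```python
FOCAL_PX = 800.0     # focal length f in pixels (assumed)
BASELINE_M = 0.60    # baseline B between camera centers in meters (assumed)

def third_distance(x_r: float, x_t: float) -> float:
    """Depth Z = f * B / (X_R - X_T) from the horizontal pixel disparity."""
    disparity = x_r - x_t
    if disparity <= 0:
        raise ValueError("disparity must be positive for a valid depth")
    return FOCAL_PX * BASELINE_M / disparity
```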
The distance from the pedestrian to the vehicle may be a distance obtained at a single angle, such as a first distance, a second distance, or a third distance, and therefore the above-mentioned average value is not to be construed as a limitation to the present application.
The distance between the preset warning area and the vehicle is greater than the distance between the preset braking area and the vehicle, for example, an area 10 to 15 meters away from the vehicle may be the preset warning area, and an area 0 to 8 meters away from the vehicle may be the preset braking area. It should be noted that the preset warning area and the preset braking area are set according to actual needs by combining the driving speed of the vehicle, and the distances from the preset warning area and the preset braking area to the vehicle are different corresponding to different application scenes.
In one embodiment, the implementation process of step S103 may be: calculating the intersection ratio of the current area where the pedestrian is located and a preset alarm area or a preset brake area, detecting whether the pedestrian is located in the preset alarm area or the preset brake area by judging whether the intersection ratio is larger than a preset threshold value, and if the intersection ratio is larger than the preset threshold value (such as 0.2), representing that the pedestrian is located in the preset alarm area or the preset brake area. The intersection ratio is a ratio of an overlapping (intersection) portion of the area where the pedestrian is currently located and the preset warning area or the preset braking area and a union portion of the area where the pedestrian is currently located and the preset warning area or the preset braking area.
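The intersection-over-union test can be sketched as follows; boxes are (x1, y1, x2, y2) rectangles in image coordinates, and the 0.2 threshold follows the example in the text.

```python
IOU_THRESHOLD = 0.2  # example threshold from the text

def iou(box_a, box_b) -> float:
    """Intersection area divided by union area of two rectangles."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def pedestrian_in_zone(pedestrian_box, zone_box) -> bool:
    return iou(pedestrian_box, zone_box) > IOU_THRESHOLD
```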
It should be noted that, in addition to the intersection ratio, it may also be detected whether the pedestrian is located in the preset warning region or the preset braking region according to a ratio of an overlapping (intersection) portion of the current region where the pedestrian is located and the preset warning region or the preset braking region to the current region where the pedestrian is located. And if the ratio of the overlapping (intersection) part of the current area of the pedestrian and the preset alarm area or the preset brake area to the current area of the pedestrian is more than 0.2, determining that the pedestrian is located in the preset alarm area or the preset brake area.
Step S104: controlling vehicle warning or vehicle braking.
When the pedestrian is located in the preset warning area or the preset braking area, the vehicle is controlled to warn or brake. When the pedestrian is located in the preset warning area, the vehicle is controlled to give an alarm to remind the pedestrian to avoid the vehicle in time; for example, the alarm may be given by audio broadcast. When the pedestrian is located in the preset braking area, the vehicle is controlled to brake, avoiding a collision with the pedestrian if the vehicle were to continue forward.
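Tying the steps together, the sketch below maps a pedestrian's position distance to an action; the zone boundaries reuse the example distances from the text (warning 10–15 m, braking 0–8 m) and are assumptions, since the text states the zones depend on vehicle speed and application scenario.

```python
WARN_NEAR, WARN_FAR = 10.0, 15.0    # example warning zone, meters
BRAKE_NEAR, BRAKE_FAR = 0.0, 8.0    # example braking zone, meters

def control_action(distance_m: float) -> str:
    """Choose the vehicle action from the pedestrian's position distance."""
    if BRAKE_NEAR <= distance_m <= BRAKE_FAR:
        return "brake"       # pedestrian inside the preset braking area
    if WARN_NEAR <= distance_m <= WARN_FAR:
        return "warn"        # pedestrian inside the preset warning area
    return "continue"        # keep detecting (step S103 loops)
```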
It should be noted that the execution subject of the vehicle control method may be the vehicle itself, or may be other devices, such as a computer.
The embodiment of the present application also provides a vehicle control apparatus 100 for controlling a vehicle, as shown in fig. 4. The vehicle is provided with a camera group, and the camera group comprises an infrared camera and a visible light camera with the same shooting angle. The vehicle control device 100 includes: an acquisition module 110 and a processing module 120.
The acquiring module 110 is configured to acquire an infrared image and a visible light image captured at the same capturing angle and the same time.
The processing module 120 is configured to perform pedestrian detection based on the infrared image and the visible light image to obtain a detection result, detect whether a pedestrian is located in a preset warning region or a preset braking region when the detection result represents that there is a pedestrian in a vehicle driving direction, and control the vehicle to warn or brake when the pedestrian is located in the preset warning region or the preset braking region.
Optionally, the processing module 120 is configured to: respectively preprocessing the infrared image and the visible light image to enable the processed infrared image and the processed visible light image to be the same in size; splicing the processed infrared image and the processed visible light image to obtain a spliced image; and inputting the spliced image into a pre-trained pedestrian detection model for pedestrian detection to obtain a detection result.
The pedestrian detection model includes a human head detection model and a human body detection model, and optionally, the processing module 120 is specifically configured to: inputting the spliced image into the human head detection model for human head detection, and inputting the spliced image into the human body detection model for human body detection.
Optionally, the processing module 120 is specifically configured to: cutting the spliced image to obtain a central area image of the spliced image; amplifying the central area image according to the size of the spliced image; and inputting the central area image with the enlarged size into the pedestrian detection model for pedestrian detection.
Optionally, the processing module 120 is configured to: determining the position distance of the pedestrian from the vehicle; judging whether the position distance is located in the preset alarm area or the preset brake area; and if the position distance is located in the preset warning area or the preset braking area, representing that the pedestrian is located in the preset warning area or the preset braking area.
Optionally, the processing module 120 is specifically configured to: obtaining a first distance between the pedestrian and the vehicle according to the area of the head of the pedestrian in the image and a preset inverse-proportion parameter characterizing the relation between distance and area; obtaining the horizontal position of the feet in the image according to the position coordinates of the pedestrian's feet in the image, and obtaining a second distance between the pedestrian and the vehicle according to the horizontal position of the feet in the image and a preset relation between horizontal position and distance; and determining the average value of the first distance and the second distance, wherein the average value is the position distance of the pedestrian from the vehicle.
The camera groups comprise a first camera group located at the left front of the vehicle driving direction and a second camera group located at the right front of the vehicle driving direction, and the visible light camera in the first camera group and the visible light camera in the second camera group form a binocular camera. Optionally, the processing module 120 is further configured to obtain a third distance between the pedestrian and the vehicle based on the binocular ranging principle; wherein the position distance of the pedestrian from the vehicle is the average value of the first distance, the second distance and the third distance.
Optionally, the processing module 120 is configured to: calculating the intersection ratio of the current area of the pedestrian and the preset warning area or the preset braking area; detecting whether the pedestrian is located in the preset alarm area or the preset brake area by judging whether the intersection ratio is larger than a preset threshold value or not; if the intersection ratio is larger than the preset threshold value, it is represented that the pedestrian is located in the preset warning area or the preset braking area.
The vehicle control apparatus 100 according to the embodiment of the present application has the same implementation principle and the same technical effects as those of the foregoing method embodiments, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiments for parts of the apparatus embodiments that are not mentioned.
Based on the same inventive concept, as shown in fig. 5, fig. 5 illustrates a block diagram of an electronic device 200 according to an embodiment of the present application. The electronic device 200 includes a transceiver 210, a memory 220, a communication bus 230, and a processor 240. If the electronic device 200 is a vehicle, the electronic device further includes a camera group located on the vehicle, where the camera group includes an infrared camera and a visible light camera having the same shooting angle, the infrared camera is used to collect an infrared image of the vehicle traveling direction, and the visible light camera is used to collect a visible light image of the vehicle traveling direction.
The elements of the transceiver 210, the memory 220, and the processor 240 are electrically connected to each other directly or indirectly to achieve data transmission or interaction. For example, the components may be electrically coupled to each other via one or more communication buses 230 or signal lines. The transceiver 210 is used for transceiving data. The memory 220 is used to store a computer program such as the software functional module shown in fig. 4, that is, the vehicle control apparatus 100. The vehicle control apparatus 100 includes at least one software function module, which may be stored in the memory 220 in the form of software or firmware (firmware) or solidified in an Operating System (OS) of the electronic device 200. The processor 240 is configured to execute an executable module stored in the memory 220, such as a software functional module or a computer program included in the vehicle control apparatus 100. For example, the processor 240 is configured to obtain an infrared image and a visible light image captured at the same capturing angle and the same time; carrying out pedestrian detection based on the infrared image and the visible light image to obtain a detection result; when the detection result indicates that a pedestrian exists in the driving direction of the vehicle, detecting whether the pedestrian is located in a preset alarm area or a preset brake area; and when the pedestrian is positioned in the preset warning area or the preset braking area, controlling the vehicle to warn or brake.
The Memory 220 may be, but is not limited to, a Random Access Memory (RAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 240 may be an integrated circuit chip having signal processing capability. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor 240 may be any conventional processor or the like.
The electronic device 200 includes, but is not limited to, a vehicle itself with a camera group mounted thereon, or a computer, a server, and the like.
The embodiment of the present application further provides a non-volatile computer-readable storage medium (hereinafter, referred to as a storage medium), where a computer program is stored on the storage medium, and when the computer program is run by the electronic device 200 as described above, the vehicle control method described above is executed.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a notebook computer, a server, or an electronic device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. The vehicle control method is characterized by being used for controlling a vehicle, wherein a camera group is arranged on the vehicle, and the camera group comprises an infrared camera and a visible light camera which have the same shooting angle; the method comprises the following steps:
acquiring an infrared image and a visible light image which are shot at the same shooting angle and the same time;
carrying out pedestrian detection based on the infrared image and the visible light image to obtain a detection result;
when the detection result indicates that a pedestrian exists in the driving direction of the vehicle, detecting whether the pedestrian is located in a preset alarm area or a preset brake area;
and when the pedestrian is positioned in the preset warning area or the preset braking area, controlling the vehicle to warn or brake.
2. The method of claim 1, wherein performing pedestrian detection based on the infrared image and the visible light image to obtain a detection result comprises:
respectively preprocessing the infrared image and the visible light image to enable the processed infrared image and the processed visible light image to be the same in size;
splicing the processed infrared image and the processed visible light image to obtain a spliced image;
and inputting the spliced image into a pre-trained pedestrian detection model for pedestrian detection to obtain a detection result.
3. The method of claim 2, wherein the pedestrian detection model comprises a human head detection model and a human body detection model; inputting the spliced image into a pre-trained pedestrian detection model for pedestrian detection, and the method comprises the following steps:
inputting the spliced image into the human head detection model for human head detection, and inputting the spliced image into the human body detection model for human body detection.
4. The method of claim 2, wherein inputting the stitched image into a pre-trained pedestrian detection model for pedestrian detection comprises:
cutting the spliced image to obtain a central area image of the spliced image;
amplifying the central area image according to the size of the spliced image;
and inputting the central area image with the enlarged size into the pedestrian detection model for pedestrian detection.
5. The method of claim 1, wherein detecting whether the pedestrian is located in a preset warning zone or a preset braking zone comprises:
determining the position distance of the pedestrian from the vehicle;
judging whether the position distance is located in the preset alarm area or the preset brake area;
and if the position distance is located in the preset warning area or the preset braking area, representing that the pedestrian is located in the preset warning area or the preset braking area.
6. The method of claim 5, wherein determining the positional distance of the pedestrian from the vehicle comprises:
obtaining a first distance between the pedestrian and the vehicle according to the area of the head of the pedestrian in the image and a preset inverse-proportion parameter characterizing the relation between distance and area;
obtaining the horizontal position of the foot in the image according to the position coordinates of the foot of the pedestrian in the image, and obtaining a second distance between the pedestrian and the vehicle according to the horizontal position of the foot in the image and a preset relation between the horizontal position and the distance;
and determining the average value of the first distance and the second distance, wherein the average value is the position distance of the pedestrian from the vehicle.
7. The method of claim 6, wherein the camera groups comprise a first camera group positioned at the left front of the vehicle travel direction and a second camera group positioned at the right front of the vehicle travel direction, the visible light camera in the first camera group and the visible light camera in the second camera group forming a binocular camera, the method further comprising:
based on a binocular distance measurement principle, obtaining a third distance between the pedestrian and the vehicle;
wherein the position distance of the pedestrian from the vehicle is an average value of the first distance, the second distance and the third distance.
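The binocular ranging principle invoked here is the standard stereo relation Z = f * B / d. A sketch with assumed calibration values (focal length in pixels, baseline in metres); the patent does not state these.

def binocular_distance(disparity_px, focal_px=1000.0, baseline_m=0.3):
    # Third estimate: depth from disparity between the left-front and
    # right-front visible light cameras. Calibration values are assumed.
    return focal_px * baseline_m / disparity_px

def fused_distance(d1, d2, d3):
    # Claim 7 averages all three estimates.
    return (d1 + d2 + d3) / 3.0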
8. The method of claim 1, wherein detecting whether the pedestrian is located in the preset warning area or the preset braking area comprises:
calculating the intersection-over-union ratio between the pedestrian's current area and the preset warning area or the preset braking area; and
judging whether the intersection-over-union ratio is greater than a preset threshold; if so, determining that the pedestrian is located in the preset warning area or the preset braking area.
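A sketch of the intersection-over-union test; boxes are (x1, y1, x2, y2) in image coordinates, and the threshold value is an assumption since the claim leaves it preset.

def iou(box_a, box_b):
    # Intersection-over-union of two axis-aligned rectangles.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

IOU_THRESHOLD = 0.3  # assumed value

def pedestrian_in_zone(pedestrian_box, zone_box):
    return iou(pedestrian_box, zone_box) > IOU_THRESHOLD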
9. A vehicle control device for controlling a vehicle, wherein a camera group comprising an infrared camera and a visible light camera with the same shooting angle is arranged on the vehicle, the device comprising:
an acquisition module configured to acquire an infrared image and a visible light image shot at the same shooting angle and at the same moment; and
a processing module configured to perform pedestrian detection based on the infrared image and the visible light image to obtain a detection result, to detect, when the detection result indicates that a pedestrian is present in the vehicle driving direction, whether the pedestrian is located in a preset warning area or a preset braking area, and to control the vehicle to warn or brake when the pedestrian is located in the preset warning area or the preset braking area.
10. An electronic device, comprising a memory and a processor coupled to the memory, the processor being configured to invoke a program stored in the memory to perform the method of any one of claims 1-8.
11. A storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any one of claims 1-8.
CN202110650767.9A 2021-06-11 2021-06-11 Vehicle control method and device, electronic equipment and storage medium Active CN113246931B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110650767.9A CN113246931B (en) 2021-06-11 2021-06-11 Vehicle control method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113246931A true CN113246931A (en) 2021-08-13
CN113246931B CN113246931B (en) 2021-09-28

Family

ID=77187522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110650767.9A Active CN113246931B (en) 2021-06-11 2021-06-11 Vehicle control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113246931B (en)


Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020125383A1 (en) * 2001-03-12 2002-09-12 Honda Giken Kogyo Kabushiki Kaisha Distance measuring sensor mounting structure
JP3987048B2 (en) * 2003-03-20 2007-10-03 本田技研工業株式会社 Vehicle periphery monitoring device
EP2227406B1 (en) * 2007-11-12 2015-03-18 Autoliv Development AB A vehicle safety system
CN102985958A (en) * 2010-08-31 2013-03-20 本田技研工业株式会社 Vehicle surroundings monitoring device
DE202013012294U1 (en) * 2012-04-27 2016-02-10 Dynamic Research, Inc. Devices and systems for testing traffic accident prevention technology
CN103129468A (en) * 2013-02-19 2013-06-05 河海大学常州校区 Vehicle-mounted roadblock recognition system and method based on laser imaging technique
US20180079408A1 (en) * 2015-03-31 2018-03-22 Denso Corporation Object detection apparatus and object detection method
CN107406071A (en) * 2015-04-01 2017-11-28 Plk科技株式会社 Pedestrian's identification device and its method
KR20160121026A (en) * 2015-04-09 2016-10-19 현대자동차주식회사 Apparatus and method for estimating distance to a pedestrian
CN112026727A (en) * 2015-05-12 2020-12-04 深圳市大疆创新科技有限公司 Apparatus and method for identifying or detecting obstacles
US20200139960A1 (en) * 2016-04-11 2020-05-07 David E. Newman Systems and methods for hazard mitigation
CN106297142A (en) * 2016-08-17 2017-01-04 云南电网有限责任公司电力科学研究院 A kind of unmanned plane mountain fire exploration control method and system
CN206421414U (en) * 2017-01-19 2017-08-18 南京航空航天大学 A kind of automatic Pilot barrier vision detection system
CN108388837A (en) * 2017-02-03 2018-08-10 福特全球技术公司 A kind of system and method for assessing the inside of autonomous vehicle
CN109455171A (en) * 2018-11-12 2019-03-12 成都扬发机电有限公司 Special vehicle and obstacle distance detection method
CN210416497U (en) * 2019-05-16 2020-04-28 清华大学天津高端装备研究院 Electric automobile multisensor intelligence collection system
CN110956069A (en) * 2019-05-30 2020-04-03 初速度(苏州)科技有限公司 Pedestrian 3D position detection method and device and vehicle-mounted terminal
CN111767868A (en) * 2020-06-30 2020-10-13 创新奇智(北京)科技有限公司 Face detection method and device, electronic equipment and storage medium
CN112464782A (en) * 2020-11-24 2021-03-09 惠州华阳通用电子有限公司 Pedestrian identification method and system
CN112896159A (en) * 2021-03-11 2021-06-04 宁波均联智行科技股份有限公司 Driving safety early warning method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LUO, WEN: "Research on Fusion Methods for Infrared and Visible Light Images", China Master's Theses Full-text Database (Electronic Journals), Information Science and Technology Series *

Also Published As

Publication number Publication date
CN113246931B (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN105549023B (en) Article detection device and its method of work
US11620837B2 (en) Systems and methods for augmenting upright object detection
CN106503653B (en) Region labeling method and device and electronic equipment
JP5867273B2 (en) Approaching object detection device, approaching object detection method, and computer program for approaching object detection
CA2958832C (en) Method and axle-counting device for contact-free axle counting of a vehicle and axle-counting system for road traffic
KR102585219B1 (en) Device and method to control speed of vehicle
US9195895B1 (en) Systems and methods for detecting traffic signs
US10402665B2 (en) Systems and methods for detecting traffic signs
JP6450294B2 (en) Object detection apparatus, object detection method, and program
EP2807642B1 (en) Method for operating a driver assistance device of a motor vehicle, driver assistance device and motor vehicle
JPWO2020171983A5 (en)
KR20130007243A (en) Method and system for warning forward collision using camera
JP2004531424A5 (en)
US11126875B2 (en) Method and device of multi-focal sensing of an obstacle and non-volatile computer-readable storage medium
KR20170127036A (en) Method and apparatus for detecting and assessing road reflections
JP4940177B2 (en) Traffic flow measuring device
CN114705121A (en) Vehicle pose measuring method and device, electronic equipment and storage medium
Petrovai et al. A stereovision based approach for detecting and tracking lane and forward obstacles on mobile devices
CN107886729B (en) Vehicle identification method and device and vehicle
Leu et al. High speed stereo vision based automotive collision warning system
CN113246931B (en) Vehicle control method and device, electronic equipment and storage medium
US20220406077A1 (en) Method and system for estimating road lane geometry
CN112896070A (en) Parking space obstacle detection method and device and computer readable storage medium
KR20160017269A (en) Device and method for detecting pedestrians
JP7435429B2 (en) Safe driving level evaluation device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant