CN112232299B - Automatic navigation method for rescuing water-falling automobile based on deep learning - Google Patents
- Publication number
- CN112232299B (application CN202011244272.8A)
- Authority
- CN
- China
- Prior art keywords
- water
- automobile
- falling
- camera
- detection frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Abstract
The invention discloses an automatic navigation method, based on deep learning, for rescuing an automobile that has fallen into water, and belongs to the technical field of intelligent control. The method comprises two main parts. The first part is detection of a water-falling automobile: a data set of water-falling automobile images is built and trained with the YOLO-v4 deep-learning object-detection framework to obtain a network model that can detect a water-falling automobile, and images captured by a monitoring camera are fed into the network to decide whether an automobile has fallen into water. The second part is automatic navigation: when a water-falling automobile is detected, the center coordinates of the detection box produced by the detection part are compared with the center coordinates of the camera to correct the heading of the device, so that the device navigates autonomously to the side of the water-falling automobile, where rescue can begin. The beneficial effects of the invention are that the device automatically detects whether an automobile has fallen into water and automatically navigates to it.
Description
Technical Field
The invention belongs to the technical field of intelligent control, and particularly relates to an automatic navigation method for rescuing a water-falling automobile based on deep learning.
Background
Accidents in which automobiles fall into water occur frequently, and statistics show that the survival rate of the occupants after a vehicle enters the water is very low. Discovering a water-falling automobile in time and raising the alarm immediately, so that rescuers can reach the scene as quickly as possible, is therefore an urgent problem for rescue work. With the rapid development of rescue equipment, automatically controlled rescue devices based on intelligent recognition have emerged; they respond automatically and quickly, greatly improving the timeliness and success rate of rescue. At present, however, the combination of automatic recognition with automatic control still has many shortcomings, which limit its application in the rescue field.
Disclosure of Invention
The invention discloses an automatic navigation method, based on deep learning, for rescuing a water-falling automobile. The whole method comprises two main parts. The first part is detection of a water-falling automobile: a data set of water-falling automobile images is built and trained with the YOLO-v4 deep-learning object-detection framework to obtain a network model that can detect a water-falling automobile, and images captured by a monitoring camera are fed into the network to decide whether an automobile has fallen into water. The second part is automatic navigation: when a water-falling automobile is detected, the device compares the center coordinates of the detection box produced by the detection part with the center coordinates of the camera to correct its heading, and judges whether it has reached the side of the water-falling automobile from the ratio of the detection-box area to the area of the whole input image, so that the device navigates autonomously to the side of the water-falling automobile, where rescue can begin.
In order to achieve the above purpose, the invention is realized by the following technical scheme:
the method for detecting whether an automobile has fallen into water comprises the following steps:
step A1: establish an image data set of water-falling automobiles;
step A2: divide the data set in a fixed proportion into a training set, a verification set and a test set;
step A3: use the LabelImg annotation tool to mark the position of the water-falling automobile in each image of the data set, and store the annotation information generated for each picture in XML file format;
step A4: train the YOLO-v4 deep-learning object-detection network on the data set to obtain the final network model for detecting a water-falling automobile;
step A5: feed the images captured by the monitoring camera into the network model established in steps A1 to A4 to judge whether an automobile has fallen into the water area; if so, the water-falling automobile is boxed in the monitoring picture;
step A6: the rescue device navigates to the water-falling automobile according to the image information and carries out the rescue.
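The annotation files produced by LabelImg in step A3 follow the Pascal VOC XML layout. The sketch below shows how one such file can be read back; the file content and the label name `car_in_water` are illustrative assumptions, not taken from the patent:

```python
import xml.etree.ElementTree as ET

# Illustrative LabelImg (Pascal VOC) annotation for one water-falling automobile.
SAMPLE_XML = """<annotation>
  <filename>car_0001.jpg</filename>
  <object>
    <name>car_in_water</name>
    <bndbox>
      <xmin>120</xmin><ymin>80</ymin><xmax>360</xmax><ymax>240</ymax>
    </bndbox>
  </object>
</annotation>"""

def read_boxes(xml_text):
    """Return (label, xmin, ymin, xmax, ymax) tuples from a VOC annotation."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        b = obj.find("bndbox")
        boxes.append((obj.findtext("name"),
                      int(b.findtext("xmin")), int(b.findtext("ymin")),
                      int(b.findtext("xmax")), int(b.findtext("ymax"))))
    return boxes
```

Each annotation file holds one `object` element per marked vehicle, so the same reader also covers images with several boxes.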
Specifically, the automatic navigation method of the invention comprises the following steps:
step B1: when the monitoring camera mounted on the device captures an image of the surrounding water area, measure the length h and width w of the input image, then draw a rectangular frame of length h and width w with the top-right corner of the image as the origin (0, 0). Find the center point of the rectangular frame, namely (h/2, w/2); this point defines the forward direction of the camera. The orientation of the camera controls the orientation of the whole device: when the camera points in its forward direction, the device advances.
Step B2: according to step A5, when a water-falling automobile is detected, a detection box is produced; record the coordinate information (x, y, h1, w1) of the detection box, where (x, y) are the vertex coordinates of the box, h1 is its length and w1 is its width.
Step B3: from the detection-box coordinates obtained in step B2, find the center point of the detection box, namely (x + h1/2, y + w1/2), and compare it with the center point of the rectangular frame to adjust the heading of the device. The direction is adjusted as follows:
(1) If h/2 = x + h1/2 and w/2 = y + w1/2, the camera is aimed straight at the water-falling automobile, and the device may advance.
(2) If h/2 > x + h1/2 and w/2 = y + w1/2, the water-falling automobile is to the left of the camera's forward direction. Rotate the camera to the left until condition (1) is met, then advance.
(3) If h/2 > x + h1/2 and w/2 < y + w1/2, the water-falling automobile is to the upper left of the camera's forward direction. Rotate the camera up and to the left until condition (1) is met, then advance.
(4) If h/2 = x + h1/2 and w/2 < y + w1/2, the water-falling automobile is above the camera's forward direction. Rotate the camera upwards until condition (1) is met, then advance.
(5) If h/2 < x + h1/2 and w/2 < y + w1/2, the water-falling automobile is to the upper right of the camera's forward direction. Rotate the camera up and to the right until condition (1) is met, then advance.
(6) If h/2 < x + h1/2 and w/2 = y + w1/2, the water-falling automobile is to the right of the camera's forward direction. Rotate the camera to the right until condition (1) is met, then advance.
(7) If h/2 < x + h1/2 and w/2 > y + w1/2, the water-falling automobile is to the lower right of the camera's forward direction. Rotate the camera down and to the right until condition (1) is met, then advance.
(8) If h/2 = x + h1/2 and w/2 > y + w1/2, the water-falling automobile is below the camera's forward direction. Rotate the camera downwards until condition (1) is met, then advance.
(9) If h/2 > x + h1/2 and w/2 > y + w1/2, the water-falling automobile is to the lower left of the camera's forward direction. Rotate the camera down and to the left until condition (1) is met, then advance.
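The cases above reduce to two sign comparisons, one per image axis. A minimal Python sketch of the decision, using the patent's symbols (the function and direction names are my own, not from the patent):

```python
def steering_direction(h, w, x, y, h1, w1):
    """Compare the detection-box center (x + h1/2, y + w1/2) with the
    camera center (h/2, w/2) and name the turn the camera must make.
    Returns 'forward' when the two centers coincide (case (1))."""
    cx, cy = x + h1 / 2, y + w1 / 2                       # detection-box center
    horiz = "left" if h / 2 > cx else "right" if h / 2 < cx else ""
    vert = "up" if w / 2 < cy else "down" if w / 2 > cy else ""
    return (horiz + "-" + vert).strip("-") or "forward"
```

For example, a box centered left of and above the camera center yields "left-up", matching case (3).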
Step B4: as the device advances, whether it has reached the side of the water-falling automobile is judged from the ratio of the detection-box area to the rectangular-frame area. From the rectangular-frame information of step B1 and the detection-box information of step B3, the rectangular-frame area is S = h × w and the detection-box area is S1 = h1 × w1; their ratio is T = S1/S. Whether the device has reached the side of the water-falling automobile is then decided as follows:
(1) If T >= 0.6, the device has reached the side of the water-falling automobile and can carry out the rescue.
(2) If T < 0.6, the device has not yet reached the side of the water-falling automobile and simply keeps advancing.
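The arrival test of step B4 can be sketched directly; the 0.6 threshold is the one stated above:

```python
ARRIVAL_RATIO = 0.6  # threshold T from step B4

def has_arrived(h, w, h1, w1):
    """True once the detection-box area S1 = h1*w1 fills at least 60%
    of the camera-frame area S = h*w."""
    return (h1 * w1) / (h * w) >= ARRIVAL_RATIO
```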
The beneficial effects are:
1. the device automatically detects whether an automobile has fallen into water;
2. the device automatically navigates to the water-falling automobile.
Description of the drawings:
FIG. 1 is a flow chart of a detection portion of the present invention;
fig. 2 is a flow chart of the overall method of the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the drawings and examples, which serve only to illustrate the invention and are not to be construed as limiting its scope.
The camera captures the condition of the surrounding water area, and the input image is examined with the YOLO-v4 algorithm to judge whether an automobile has fallen into the water area; if so, the water-falling automobile is detected and boxed. The camera controls the travel direction of the device: the exact center of the camera view is the forward direction. If the detection box is not in the middle of the frame, the heading of the device is adjusted according to the box's current position. Once the detection box sits in the middle of the image, the device is facing the water-falling automobile; it then advances until it reaches the automobile's side, where rescue begins.
Step A1: establishing an image data set of the automobile falling into water, collecting 1500 images of the automobile falling into water on a network, and numbering the images;
step A2: the dataset was assembled as per 8:1: the ratio of 1 is divided into a training set, a verification set and a test set. Wherein the training set is used to produce a network model and weights; the verification set is used for selecting a model with the best effect; the test set is used for evaluating the performance of the model;
step A3: the method comprises the steps that a LabelImg marking tool is utilized to correspondingly mark all the water falling vehicles in the images in the data set in an original image, and marking information generated by each image is stored in an xml file format;
step A4: placing the marked data set into a deep learning target detection network of YOLO-v4 to perform corresponding training, so that a final network model for detecting the automobile falling into water can be obtained;
step A5: and (3) embedding the deep learning network model established in the steps A1 to A4 into the device, so that the device can judge whether the automobile falls into the water area or not, and if the automobile falls into the water area, selecting the automobile falling into the water in a frame in the monitoring picture.
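The 8:1:1 split of step A2 might be sketched as follows; the seeded shuffle is an assumption of mine, not something the patent specifies:

```python
import random

def split_dataset(image_ids, seed=0):
    """Shuffle and split image ids 8:1:1 into train/val/test sets (step A2)."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)   # fixed seed keeps the split reproducible
    n = len(ids)
    n_train, n_val = int(n * 0.8), int(n * 0.1)
    return (ids[:n_train],
            ids[n_train:n_train + n_val],
            ids[n_train + n_val:])
```

On the 1500 collected images this yields 1200 training, 150 verification and 150 test images.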
Step B1.1: when the monitoring camera mounted on the device captures an image of the surrounding water area, measure the length h and width w of the input image, then draw a rectangular frame of length h and width w with the top-right corner of the image as the origin (0, 0). Find the center point of the rectangular frame, namely (h/2, w/2); this point defines the forward direction of the camera. The orientation of the camera controls the orientation of the whole device: when the camera points in its forward direction, the device advances.
Step B1.2: when the monitoring camera mounted on the device captures an image of the surrounding water area, the image is fed into the network model for detection; when a water-falling automobile is detected, a detection box is produced. Record the coordinate information (x, y, h1, w1) of the detection box, where (x, y) are the vertex coordinates of the box, h1 is its length and w1 is its width.
Step B3: from the detection-box coordinates obtained in step B1.2, find the center point of the detection box, namely (x + h1/2, y + w1/2), and compare it with the center point of the rectangular frame to adjust the heading of the device. The direction is adjusted as follows:
(1) If h/2 = x + h1/2 and w/2 = y + w1/2, the camera is aimed straight at the water-falling automobile, and the device may advance.
(2) If h/2 > x + h1/2 and w/2 = y + w1/2, the water-falling automobile is to the left of the camera's forward direction. Rotate the camera to the left until condition (1) is met, then advance.
(3) If h/2 > x + h1/2 and w/2 < y + w1/2, the water-falling automobile is to the upper left of the camera's forward direction. Rotate the camera up and to the left until condition (1) is met, then advance.
(4) If h/2 = x + h1/2 and w/2 < y + w1/2, the water-falling automobile is above the camera's forward direction. Rotate the camera upwards until condition (1) is met, then advance.
(5) If h/2 < x + h1/2 and w/2 < y + w1/2, the water-falling automobile is to the upper right of the camera's forward direction. Rotate the camera up and to the right until condition (1) is met, then advance.
(6) If h/2 < x + h1/2 and w/2 = y + w1/2, the water-falling automobile is to the right of the camera's forward direction. Rotate the camera to the right until condition (1) is met, then advance.
(7) If h/2 < x + h1/2 and w/2 > y + w1/2, the water-falling automobile is to the lower right of the camera's forward direction. Rotate the camera down and to the right until condition (1) is met, then advance.
(8) If h/2 = x + h1/2 and w/2 > y + w1/2, the water-falling automobile is below the camera's forward direction. Rotate the camera downwards until condition (1) is met, then advance.
(9) If h/2 > x + h1/2 and w/2 > y + w1/2, the water-falling automobile is to the lower left of the camera's forward direction. Rotate the camera down and to the left until condition (1) is met, then advance.
Step B4: as the device advances, whether it has reached the side of the water-falling automobile is judged from the ratio of the detection-box area to the rectangular-frame area. From the rectangular-frame information of step B1.1 and the detection-box information of step B3, the rectangular-frame area is S = h × w and the detection-box area is S1 = h1 × w1; their ratio is T = S1/S. Whether the device has reached the side of the water-falling automobile is then decided as follows:
(1) If T >= 0.6, the device has reached the side of the water-falling automobile and can carry out the rescue.
(2) If T < 0.6, the device has not yet reached the side of the water-falling automobile and simply keeps advancing.
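Putting the detection, aiming and arrival steps together, the control loop can be sketched with pre-computed detections standing in for the camera and the YOLO-v4 model; the action names and the stubbed detector input are my own illustration, not the patent's implementation:

```python
def navigate(frames, h, w):
    """Walk through per-frame detections (x, y, h1, w1), or None when no
    water-falling automobile is found, and emit the action the device
    would take in each frame."""
    actions = []
    for box in frames:
        if box is None:                # no water-falling automobile detected
            actions.append("search")
            continue
        x, y, h1, w1 = box
        cx, cy = x + h1 / 2, y + w1 / 2
        if (cx, cy) != (h / 2, w / 2):
            actions.append("turn")     # cases (2)-(9): re-aim the camera
        elif (h1 * w1) / (h * w) >= 0.6:
            actions.append("rescue")   # step B4: device is alongside the car
        else:
            actions.append("advance")  # aimed, but not yet close enough
    return actions
```

A real deployment would read frames from the camera and run the trained detector instead of consuming a prepared list.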
Claims (2)
1. The automatic navigation method for rescuing the automobile falling into water based on deep learning is characterized by comprising the following steps:
step A1: establishing an image data set of the automobile falling into water;
step A2: dividing the established data set according to a certain proportion, so as to divide the whole data set into a training set, a verification set and a test set;
step A3: the method comprises the steps of using a LabelImg marking tool to correspondingly mark the position of a water falling automobile in an image in a data set in an original image, and storing marking information generated by each picture in an xml file format;
step A4: putting the data set into a deep learning target detection network of YOLO-v4 for corresponding training, so that a final network model for detecting the automobile falling into water can be obtained;
step A5: the monitoring camera inputs the shot image into the deep learning network model established in the steps A1 to A4, so as to judge whether the automobile falls into water or not, and if the automobile is detected to fall into water, the automobile falling into water is selected in a frame in the monitoring picture;
step A6: the rescue device automatically navigates to the automobile water falling position according to the image information to rescue;
the automatic navigation method in step A6 comprises the following steps:
step B1: when a monitoring camera mounted on the device captures an image of the surrounding water area, measure the length h and width w of the input image, then draw a rectangular frame of length h and width w with the top-right corner of the image as the origin (0, 0), and find the center point of the rectangular frame, namely (h/2, w/2); this point defines the forward direction of the camera, the orientation of the camera controls the orientation of the whole device, and when the camera points in its forward direction the device advances;
step B2: according to step A5, when a water-falling automobile is detected, a detection box is produced; record the coordinate information (x, y, h1, w1) of the detection box, where (x, y) are the vertex coordinates of the box, h1 is its length and w1 is its width;
step B3: from the detection-box coordinates obtained in step B2, find the center point of the detection box, namely (x + h1/2, y + w1/2), and compare it with the center point of the rectangular frame to adjust the heading of the device;
step B4: in the advancing process of the device, judging whether the device reaches the side of the automobile falling into water according to the ratio of the area of the rectangular frame to the area of the detection frame;
according to the rectangular-frame information of step B1 and the detection-box information of step B3, the rectangular-frame area is S = h × w and the detection-box area is S1 = h1 × w1; their ratio T = S1/S finally determines whether the device has reached the side of the water-falling automobile, as follows:
(1) if T >= 0.6, the device has reached the side of the water-falling automobile and can carry out the rescue;
(2) if T < 0.6, the device has not yet reached the side of the water-falling automobile and simply keeps advancing.
2. The automatic navigation method for rescuing a water-falling automobile based on deep learning of claim 1, wherein the direction in step B3 is adjusted as follows:
(1) if h/2 = x + h1/2 and w/2 = y + w1/2, it is judged that the camera is aimed at the water-falling automobile, and the device may advance;
(2) if h/2 > x + h1/2 and w/2 = y + w1/2, it is judged that the water-falling automobile is to the left of the camera's forward direction; the camera is rotated to the left until condition (1) is met, and the device advances;
(3) if h/2 > x + h1/2 and w/2 < y + w1/2, it is judged that the water-falling automobile is to the upper left of the camera's forward direction; the camera is rotated up and to the left until condition (1) is met, and the device advances;
(4) if h/2 = x + h1/2 and w/2 < y + w1/2, it is judged that the water-falling automobile is above the camera's forward direction; the camera is rotated upwards until condition (1) is met, and the device advances;
(5) if h/2 < x + h1/2 and w/2 < y + w1/2, it is judged that the water-falling automobile is to the upper right of the camera's forward direction; the camera is rotated up and to the right until condition (1) is met, and the device advances;
(6) if h/2 < x + h1/2 and w/2 = y + w1/2, it is judged that the water-falling automobile is to the right of the camera's forward direction; the camera is rotated to the right until condition (1) is met, and the device advances;
(7) if h/2 < x + h1/2 and w/2 > y + w1/2, it is judged that the water-falling automobile is to the lower right of the camera's forward direction; the camera is rotated down and to the right until condition (1) is met, and the device advances;
(8) if h/2 = x + h1/2 and w/2 > y + w1/2, it is judged that the water-falling automobile is below the camera's forward direction; the camera is rotated downwards until condition (1) is met, and the device advances;
(9) if h/2 > x + h1/2 and w/2 > y + w1/2, it is judged that the water-falling automobile is to the lower left of the camera's forward direction; the camera is rotated down and to the left until condition (1) is met, and the device advances.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011244272.8A CN112232299B (en) | 2020-11-09 | 2020-11-09 | Automatic navigation method for rescuing water-falling automobile based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112232299A CN112232299A (en) | 2021-01-15 |
CN112232299B true CN112232299B (en) | 2023-10-27 |
Family
ID=74123191
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011244272.8A Active CN112232299B (en) | 2020-11-09 | 2020-11-09 | Automatic navigation method for rescuing water-falling automobile based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112232299B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115063757A (en) * | 2022-06-24 | 2022-09-16 | 杭州鸿泉物联网技术股份有限公司 | Vehicle drowning identification method and device |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5822707A (en) * | 1992-05-05 | 1998-10-13 | Automotive Technologies International, Inc. | Automatic vehicle seat adjuster |
CN106427862A (en) * | 2016-12-07 | 2017-02-22 | 合肥工业大学 | Prewarning and lifesaving system and method for automobile falling into water |
CN106845342A (en) * | 2016-12-15 | 2017-06-13 | 重庆凯泽科技股份有限公司 | A kind of intelligence community monitoring system and method |
CN106828826A (en) * | 2017-02-27 | 2017-06-13 | 上海交通大学 | One kind automation rescue at sea method |
CN107566621A (en) * | 2017-08-23 | 2018-01-09 | 努比亚技术有限公司 | Drowning protection method and mobile terminal |
CN108312995A (en) * | 2018-02-09 | 2018-07-24 | 吉林大学 | A kind of automobile falls into active life saving system and its control method in water |
CN110119718A (en) * | 2019-05-15 | 2019-08-13 | 燕山大学 | A kind of overboard detection and Survivable Control System based on deep learning |
CN110853301A (en) * | 2019-12-09 | 2020-02-28 | 王迪 | Swimming pool drowning prevention identification method based on machine learning |
CN111028480A (en) * | 2019-12-06 | 2020-04-17 | 江西洪都航空工业集团有限责任公司 | Drowning detection and alarm system |
CN111178236A (en) * | 2019-12-27 | 2020-05-19 | 清华大学苏州汽车研究院(吴江) | Parking space detection method based on deep learning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6266238B2 (en) * | 2013-07-03 | 2018-01-24 | クラリオン株式会社 | Approaching object detection system and vehicle |
- 2020-11-09: CN application CN202011244272.8A filed; patent CN112232299B granted (status: active)
Also Published As
Publication number | Publication date |
---|---|
CN112232299A (en) | 2021-01-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |