
CN114494694B - Semantic mapping method under plantation scene - Google Patents

Semantic mapping method under plantation scene

Info

Publication number
CN114494694B
CN114494694B (application CN202210027704.2A)
Authority
CN
China
Prior art keywords
plantation
altitude
semantic
unmanned aerial vehicle
Prior art date
Legal status
Active
Application number
CN202210027704.2A
Other languages
Chinese (zh)
Other versions
CN114494694A (en)
Inventor
徐一鸣
孙飞
周宇晨
陆观
王栗
傅振宇
丁子桁
周方杰
张凯
彭玮
张媛
周霖
戴洲
Current Assignee
Nantong University
Original Assignee
Nantong University
Priority date
Filing date
Publication date
Application filed by Nantong University filed Critical Nantong University
Priority to CN202210027704.2A priority Critical patent/CN114494694B/en
Publication of CN114494694A publication Critical patent/CN114494694A/en
Application granted granted Critical
Publication of CN114494694B publication Critical patent/CN114494694B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a semantic mapping method under a plantation scene. The plantation is divided into different areas, unmanned aerial vehicle flight parameters are set, and the center point shot by the camera is determined; high-altitude canopy images and low-altitude branch-and-leaf images are acquired separately with the unmanned aerial vehicle, and a capsule neural network is trained to obtain two classifiers, namely a high-altitude canopy classifier and a low-altitude branch-and-leaf classifier; landforms and tree species are classified with the high-altitude canopy classifier; the classification result of the previous step is checked with the low-altitude branch-and-leaf classifier; and the checked semantic pictures are stitched to generate a plantation semantic map. The method has the advantages of low cost, a small training set, high accuracy, and low sensitivity to shooting interference; it improves the efficiency of tree species identification and classification, and performs large-scale, efficient identification of the plantation through canopy images, giving the model practical engineering application value.

Description

Semantic mapping method under plantation scene
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a semantic mapping method under a plantation scene.
Background Art
Existing tree species identification and classification methods fall mainly into two types. The first is based on spectral image recognition: spectral images are acquired and traditional machine learning algorithms classify the tree species in the plantation. This requires the unmanned aerial vehicle to carry a spectral camera for aerial photography, which is costly and risky; moreover, traditional machine learning requires manual feature extraction, the results depend heavily on the chosen feature types, and the generalization ability of the model is poor. The second is based on RGB image recognition with a convolutional neural network. This method needs no manual feature extraction, but it requires a large amount of data as a training set, has low model robustness, and cannot accurately classify distorted images.
Disclosure of Invention
The invention aims to: overcome the defects in the prior art. For specific plantation scenes, and using existing technical means, the invention designs an efficient and highly accurate plantation semantic map construction method that facilitates efficient management and maintenance of plantation trees and monitoring of the growth conditions of different tree species.
The technical scheme is as follows: a semantic mapping method in a plantation scene comprises the following steps:
Step S1: dividing the plantation into different areas, setting the unmanned aerial vehicle flight parameters, and determining the center point shot by the camera;
Step S2: acquiring high-altitude canopy and low-altitude branch-and-leaf images with the unmanned aerial vehicle, and training a capsule neural network to obtain two classifiers, namely a high-altitude canopy classifier and a low-altitude branch-and-leaf classifier;
Step S3: classifying landforms and tree species with the high-altitude canopy classifier;
Step S4: checking the classification result of the previous step with the low-altitude branch-and-leaf classifier;
Step S5: stitching the checked semantic pictures to generate the plantation semantic map.
Further, in step S1, to construct the plantation semantic map, the plantation is divided into different areas in consideration of imaging size limitations, and shooting, identification, and stitching are carried out with each area set as a standard square acquisition area. The unmanned aerial vehicle flight is restricted to a safe altitude range, and its nadir-shot position is set at the center of each standard area. A worker controls the unmanned aerial vehicle to shoot the plantation at fixed points; the fixed-point spacing is determined by the ground area covered by a single frame shot from the unmanned aerial vehicle.
Further, the ground area covered by a single frame, i.e., the lens field of view of the unmanned aerial vehicle camera, is calculated as: field of view FOV = working distance (WD) × target surface size (H or V) / focal length f. The working distance WD equals the working height of the unmanned aerial vehicle, obtained as H = L × sin θ from the distance L and angle θ between the unmanned aerial vehicle and the operator's signal receiver; the target surface size is the sensor size of the lens. Considering possible distortion at the image edges, adjacent images are kept overlapping by a certain proportion during shooting, and an effective imaging field of view is set based on the lens field of view.
Further, in step S2, an unmanned aerial vehicle carrying an RGB camera performs nadir shooting of the plantation area at high altitude to obtain canopy images of ponds, grasslands, shrubs, and different tree species, and then collects branch-and-leaf images of different tree species at low altitude. The images are preprocessed: median filtering reduces noise, and lower-resolution images are interpolated to improve resolution. According to the actual environment, LabelImg software is used to divide the dataset into categories of ponds, sand, mud, grassland, shrubs, and various economic trees. Compared with a convolutional neural network, the capsule network needs a smaller training set; moreover, the capsule layers in a capsule neural network process data in vector format, giving better recognition and classification performance on the complex plantation scenes of overlapping trees and intersecting branches and leaves. A capsule neural network is constructed and trained with the high-altitude canopy dataset and the low-altitude branch-and-leaf dataset to obtain the high-altitude canopy classifier and the low-altitude branch-and-leaf classifier.
Further, in step S3, the unmanned aerial vehicle flies at high altitude over the plantation, hovers at the center of each acquisition area, and performs nadir shooting to acquire the plantation landform and the canopy profile information of different trees; the high-altitude canopy classifier then classifies the environment and tree species efficiently in real time.
Further, in step S4, the sampled objects are classification-checked with the low-altitude branch-and-leaf classifier; whether the check result is consistent with the first classification result is determined, and if not, the semantic picture obtained from the first classification is updated.
Further, in step S5, images of different acquisition areas overlap during aerial photography, so constructing the panoramic semantic map of the plantation requires image stitching to eliminate the negative effects of overlap. First, for two given area images, feature points are extracted with the SIFT algorithm and matched by their feature expressions, i.e., feature point attributes, to find the common feature points of the two images, namely the feature points of the overlapping region. The homography matrix between the two area pictures is then calculated from the matched feature points to obtain the position of one area relative to the other; the homography is solved with the RANSAC algorithm, and the two pictures are registered with bundle adjustment optimization. Horizontal correction and exposure compensation are applied before stitching to improve the consistency of the two pictures. A graph-cut method finds the most similar seam line in the overlapping area, a Laplacian fusion algorithm blends the pixels near the seam, and only the image on one side of the seam is retained, yielding the stitched semantic picture of the two areas. The same stitching process is applied to the semantic pictures of the other areas to finally obtain the complete plantation semantic map. The semantic map is an overall image of the plantation containing the semantic information of the various landforms and tree species in the park, which facilitates subsequent monitoring and management of tree growth.
Compared with the prior art, the semantic mapping method under the plantation scene of the above technical scheme obtains the plantation semantic map from unmanned-aerial-vehicle RGB images, performs two rounds of classification and identification with a capsule neural network, and finally carries out image stitching. The method has the advantages of low cost, a small training set, high accuracy, and low sensitivity to shooting interference; it improves the efficiency of tree species identification and classification, and identifies the plantation over a large range with high efficiency through canopy images, giving the model practical engineering application value.
Drawings
FIG. 1 is a flow chart of the plantation tree species semantic map construction;
FIG. 2 is a diagram of the capsule neural network classification model for plantation tree species;
FIG. 3 is a division diagram of the plantation areas;
FIG. 4 is a diagram of the unmanned aerial vehicle acquisition scheme;
FIG. 5 is an image stitching diagram.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to FIGS. 1 to 5, a semantic mapping method in a plantation scene comprises the following steps:
Step S1: dividing the plantation into different areas, setting the unmanned aerial vehicle flight parameters, and determining the center point shot by the camera. To construct the plantation semantic map, the plantation is divided into different areas in consideration of imaging size limitations, and shooting, identification, and stitching are carried out with each area set as a standard square acquisition area. The unmanned aerial vehicle flight is restricted to a safe altitude range, and its nadir-shot position is set at the center of each standard area. A worker controls the unmanned aerial vehicle to shoot the plantation at fixed points; the fixed-point spacing is determined by the ground area covered by a single frame shot from the unmanned aerial vehicle.
The ground area covered by a single frame, i.e., the lens field of view of the unmanned aerial vehicle camera, is calculated as: field of view FOV = working distance (WD) × target surface size (H or V) / focal length f. The working distance WD equals the working height of the unmanned aerial vehicle, obtained as H = L × sin θ from the distance L and angle θ between the unmanned aerial vehicle and the operator's signal receiver; the target surface size is the sensor size of the lens. Considering possible distortion at the image edges, adjacent images are kept overlapping by a certain proportion during shooting, and an effective imaging field of view is set based on the lens field of view, as shown in the sketch below.
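For illustration only (this sketch is not part of the patented disclosure), the formulas above can be turned into a waypoint grid as follows; the sensor width, focal length, overlap ratio, and region size are assumed example values.

```python
import numpy as np

def ground_footprint(working_distance_m, sensor_size_mm, focal_length_mm):
    """Ground coverage of one frame: FOV = WD x target surface size / focal length."""
    return working_distance_m * sensor_size_mm / focal_length_mm

# Assumed example parameters (not from the patent).
L, theta = 120.0, np.deg2rad(60.0)   # receiver-to-drone distance and elevation angle
H = L * np.sin(theta)                # working height: H = L * sin(theta)
fov_h = ground_footprint(H, sensor_size_mm=13.2, focal_length_mm=8.8)
overlap = 0.2                        # keep ~20% overlap between adjacent frames
step = fov_h * (1.0 - overlap)       # fixed-point (waypoint) spacing

# Waypoint centres for a square plantation region of side `side` metres.
side = 500.0
centres = [(x, y) for x in np.arange(step / 2, side, step)
                  for y in np.arange(step / 2, side, step)]
print(f"height={H:.1f} m, footprint={fov_h:.1f} m, "
      f"spacing={step:.1f} m, {len(centres)} waypoints")
```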
Step S2: acquiring high-altitude canopy and low-altitude branch-and-leaf images with the unmanned aerial vehicle, and training a capsule neural network to obtain two classifiers. An unmanned aerial vehicle carrying an RGB camera performs nadir shooting of the plantation area at high altitude to obtain canopy images of ponds, grasslands, shrubs, and different tree species, and branch-and-leaf images of different tree species are collected at low altitude. The images are preprocessed: median filtering reduces noise, and lower-resolution images are interpolated to improve resolution. According to the actual environment, LabelImg software is used to divide this dataset into categories of ponds, sand, mud, grasslands, shrubs, and various economic trees such as camphor trees and willows.
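A brief sketch of this preprocessing, assuming OpenCV; bilinear interpolation is chosen to match the G06T3/4007 classification above, and the kernel size and minimum width are assumed values not specified in the patent.

```python
import cv2

def preprocess(path, min_width=1024):
    """Median-filter noise reduction, then bilinear upsampling of low-resolution frames."""
    img = cv2.imread(path)            # BGR frame from the drone camera
    img = cv2.medianBlur(img, 5)      # 5x5 median filter suppresses salt-and-pepper noise
    h, w = img.shape[:2]
    if w < min_width:                 # only upsample genuinely low-resolution images
        scale = min_width / w
        img = cv2.resize(img, (int(w * scale), int(h * scale)),
                         interpolation=cv2.INTER_LINEAR)
    return img
```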
Compared with a convolutional neural network, the capsule network needs a smaller training set, and the capsule layers in a capsule neural network process data in vector format, which gives better recognition and classification performance on the complex plantation scenes of overlapping trees and intersecting branches and leaves. A capsule neural network is built and trained with the high-altitude canopy dataset and the low-altitude branch-and-leaf dataset to obtain the high-altitude canopy classifier and the low-altitude branch-and-leaf classifier.
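The patent does not disclose the exact network architecture; as a minimal PyTorch sketch of the capsule mechanism it relies on (the vector "squash" non-linearity and dynamic routing in the standard formulation of Sabour et al.), the dimensions and the six example classes below are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Capsule non-linearity: shrinks vector length into (0, 1), keeps direction."""
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

class RoutingCapsules(nn.Module):
    """Dynamic routing from `in_caps` capsules of dim `in_dim` to `out_caps` of dim `out_dim`."""
    def __init__(self, in_caps, in_dim, out_caps, out_dim, iters=3):
        super().__init__()
        self.iters = iters
        self.W = nn.Parameter(0.01 * torch.randn(1, out_caps, in_caps, out_dim, in_dim))

    def forward(self, u):                        # u: (B, in_caps, in_dim)
        u = u[:, None, :, :, None]               # (B, 1, in_caps, in_dim, 1)
        u_hat = (self.W @ u).squeeze(-1)         # prediction vectors (B, out_caps, in_caps, out_dim)
        b = torch.zeros(*u_hat.shape[:3], 1, device=u.device)
        for _ in range(self.iters):
            c = F.softmax(b, dim=1)              # routing coefficients over output capsules
            v = squash((c * u_hat).sum(dim=2))   # (B, out_caps, out_dim)
            b = b + (u_hat * v[:, :, None, :]).sum(dim=-1, keepdim=True)
        return v                                 # class presence = vector length

# e.g. 6 classes (pond, sand, mud, grassland, shrub, tree) from 32*6*6 primary capsules of dim 8
caps = RoutingCapsules(in_caps=32 * 6 * 6, in_dim=8, out_caps=6, out_dim=16)
scores = caps(torch.randn(2, 32 * 6 * 6, 8)).norm(dim=-1)   # (2, 6) class scores
```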
Step S3: classifying landforms and tree species with the high-altitude canopy classifier. The unmanned aerial vehicle flies at high altitude over the plantation, hovers at the center of each acquisition area, and performs nadir shooting to acquire the plantation landform and the canopy profile information of different trees. The high-altitude canopy classifier classifies the environment and tree species efficiently in real time, distinguishing ponds, grasslands, sand, poplars, willows, and the like.
Step S4: checking the classification result of the previous step with the low-altitude branch-and-leaf classifier. When tree species are classified with the high-altitude canopy classifier, landform information differs greatly between classes and can be classified accurately; however, the canopy information of a small number of tree species differs only slightly, so low classification confidence easily occurs, and the result of the first classification must be checked. Each region is sampled, collecting all samples with low confidence and a portion of samples with normal confidence. The key acquisition areas are those intersecting the edge regions, where image distortion occurs easily, information loss is large, and the first classification confidence is low.
The sampled objects are classification-checked with the low-altitude branch-and-leaf classifier; if the check result is inconsistent with the first classification result, the semantic picture obtained from the first classification is updated accordingly.
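A minimal sketch of this check-and-update logic (steps S3–S4) follows; the confidence threshold, data layout, and classifier interface are assumptions for illustration only.

```python
# Hedged sketch: re-check low-confidence high-altitude classifications against
# the low-altitude branch-and-leaf classifier and update the semantic map.
CONF_THRESHOLD = 0.7   # assumed value; samples below it are re-checked

def update_semantic_map(semantic_map, samples, low_alt_classifier):
    """semantic_map: {region_id: label}; samples: (region_id, label, conf, branch_img) tuples."""
    for region_id, canopy_label, conf, branch_img in samples:
        if conf >= CONF_THRESHOLD:
            continue                                  # confident first-pass result stands
        leaf_label = low_alt_classifier(branch_img)   # second-stage check at low altitude
        if leaf_label != canopy_label:                # inconsistent: update the first result
            semantic_map[region_id] = leaf_label
    return semantic_map
```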
Step S5: stitching the checked semantic pictures to generate the plantation semantic map. Images of different acquisition areas overlap during aerial photography, so constructing the plantation panoramic semantic map requires image stitching to eliminate the negative effects of overlap. First, for two given area images, feature points are extracted with the SIFT algorithm and matched by their feature expressions (feature point attributes) to find the common feature points of the two images, i.e., the feature points of the overlapping region. The homography matrix between the two area pictures is then calculated from the matched feature points to obtain the position of one area relative to the other; the homography is solved with the RANSAC algorithm, and the two pictures are registered with bundle adjustment optimization. Horizontal correction and exposure compensation are applied before stitching to improve the consistency of the two pictures. A graph-cut method finds the most similar seam line in the overlapping area, a Laplacian fusion algorithm blends the pixels near the seam, and only the image on one side of the seam is retained, yielding the stitched semantic picture of the two areas. The same stitching process is applied to the semantic pictures of the other areas to finally obtain the complete plantation semantic map. The semantic map is an overall image of the plantation containing the semantic information of the various landforms and tree species in the park, which facilitates subsequent monitoring and management of tree growth.
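The core of this pipeline can be sketched with OpenCV as below. This is a simplified illustration: it keeps the SIFT matching and RANSAC homography described above but replaces the graph-cut seam search and Laplacian blending with a naive hard seam, and assumes a mostly horizontal overlap between the two area images.

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b, ratio=0.75, ransac_thresh=4.0):
    """SIFT feature matching + RANSAC homography, then warp img_b into img_a's frame."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY), None)
    kp_b, des_b = sift.detectAndCompute(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY), None)
    # Lowe's ratio test keeps only distinctive matches, i.e. overlap-region points.
    matches = cv2.BFMatcher().knnMatch(des_b, des_a, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)  # b relative to a
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))  # assumed horizontal overlap
    canvas[:, :w] = img_a                               # naive seam: keep a on its side
    return canvas
```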
The foregoing is a preferred embodiment of the present invention. It will be apparent to those skilled in the art that variations and modifications can be made without departing from the principles of the invention, and these should also be considered within the scope of the invention.

Claims (7)

1. A semantic mapping method in a plantation scene, characterized by comprising the following steps:
Step S1: dividing the plantation into different areas, setting the unmanned aerial vehicle flight parameters, and determining the center point shot by the camera;
Step S2: acquiring high-altitude canopy and low-altitude branch-and-leaf images with the unmanned aerial vehicle, and training a capsule neural network to obtain two classifiers, namely a high-altitude canopy classifier and a low-altitude branch-and-leaf classifier;
Step S3: classifying landforms and tree species with the high-altitude canopy classifier;
Step S4: checking the classification result of the previous step with the low-altitude branch-and-leaf classifier;
Step S5: stitching the checked semantic pictures to generate the plantation semantic map.
2. The semantic mapping method in a plantation scene according to claim 1, characterized in that in step S1, to construct the plantation semantic map, the plantation is divided into different areas in consideration of imaging size limitations, and shooting, identification, and stitching are carried out with each area set as a standard square acquisition area; the unmanned aerial vehicle flight is restricted to a safe altitude range, its nadir-shot position is set at the center of each standard area, and a worker controls the unmanned aerial vehicle to shoot the plantation at fixed points, the fixed-point spacing being determined by the ground area covered by a single frame shot from the unmanned aerial vehicle.
3. The semantic mapping method in a plantation scene according to claim 2, characterized in that the ground area covered by a single frame, i.e., the lens field of view of the unmanned aerial vehicle camera, is calculated as: field of view FOV = working distance (WD) × target surface size (H or V) / focal length f, where the working distance WD equals the working height of the unmanned aerial vehicle, obtained as H = L × sin θ from the distance L and angle θ between the unmanned aerial vehicle and the operator's signal receiver, and the target surface size is the sensor size of the lens; considering possible distortion at the image edges, adjacent images are kept overlapping by a certain proportion during shooting, and an effective imaging field of view is set based on the lens field of view.
4. The semantic mapping method in a plantation scene according to claim 1, characterized in that in step S2, an unmanned aerial vehicle carrying an RGB camera performs nadir shooting of the plantation area at high altitude to obtain canopy images of ponds, grasslands, shrubs, and different tree species, and branch-and-leaf images of different tree species are then collected at low altitude; the images are preprocessed, with median filtering reducing noise and interpolation improving the resolution of lower-resolution images; according to the actual environment, LabelImg software is used to divide the dataset into categories of ponds, sand, mud, grassland, shrubs, and various economic trees; compared with a convolutional neural network, the capsule network needs a smaller training set, and the capsule layers in the capsule neural network process data in vector format, giving better recognition and classification performance on the complex plantation scenes of overlapping trees and intersecting branches and leaves; a capsule neural network is constructed and trained with the high-altitude canopy dataset and the low-altitude branch-and-leaf dataset to obtain the high-altitude canopy classifier and the low-altitude branch-and-leaf classifier.
5. The semantic mapping method in a plantation scene according to claim 1, characterized in that in step S3, the unmanned aerial vehicle flies at high altitude over the plantation, hovers at the center of each acquisition area, and performs nadir shooting to acquire the plantation landform and the canopy profile information of different trees, and the high-altitude canopy classifier classifies the environment and tree species efficiently in real time.
6. The semantic mapping method in a plantation scene according to claim 1, characterized in that in step S4, the sampled objects are classification-checked with the low-altitude branch-and-leaf classifier; whether the check result is consistent with the first classification result is determined, and if not, the semantic picture obtained from the first classification is updated.
7. The semantic mapping method in a plantation scene according to claim 1, characterized in that in step S5, images of different acquisition areas overlap during aerial photography, so constructing the plantation panoramic semantic map requires image stitching to eliminate the negative effects of overlap; first, for two given area images, feature points are extracted with the SIFT algorithm and matched by their feature expressions, i.e., feature point attributes, to find the common feature points of the two images, namely the feature points of the overlapping region; the homography matrix between the two area pictures is then calculated from the matched feature points to obtain the position of one area relative to the other, the homography is solved with the RANSAC algorithm, and the two pictures are registered with bundle adjustment optimization; horizontal correction and exposure compensation are applied before stitching to improve the consistency of the two pictures; a graph-cut method finds the most similar seam line in the overlapping area, a Laplacian fusion algorithm blends the pixels near the seam, and only the image on one side of the seam is retained, yielding the stitched semantic picture of the two areas; the same stitching process is applied to the semantic pictures of the other areas to finally obtain the complete plantation semantic map; the semantic map is an overall image of the plantation containing the semantic information of the various landforms and tree species in the park, which facilitates subsequent monitoring and management of tree growth.
CN202210027704.2A 2022-01-11 2022-01-11 Semantic mapping method under plantation scene Active CN114494694B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210027704.2A CN114494694B (en) 2022-01-11 2022-01-11 Semantic mapping method under plantation scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210027704.2A CN114494694B (en) 2022-01-11 2022-01-11 Semantic mapping method under plantation scene

Publications (2)

Publication Number Publication Date
CN114494694A CN114494694A (en) 2022-05-13
CN114494694B true CN114494694B (en) 2024-09-20

Family

ID=81512327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210027704.2A Active CN114494694B (en) 2022-01-11 2022-01-11 Semantic mapping method under plantation scene

Country Status (1)

Country Link
CN (1) CN114494694B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111551167A (en) * 2020-02-10 2020-08-18 江苏盖亚环境科技股份有限公司 Global navigation auxiliary method based on unmanned aerial vehicle shooting and semantic segmentation
WO2021226900A1 (en) * 2020-05-14 2021-11-18 安徽中科智能感知产业技术研究院有限责任公司 Cotton crop row detection method and apparatus based on computer vision, and storage medium


Also Published As

Publication number Publication date
CN114494694A (en) 2022-05-13

Similar Documents

Publication Title
CN115439424B (en) Intelligent detection method for aerial video images of unmanned aerial vehicle
CN111540048B (en) Fine live-action three-dimensional modeling method based on space-ground fusion
KR101105795B1 (en) Automatic processing of aerial images
CN111508260A (en) Vehicle parking space detection method, device and system
WO2020103110A1 (en) Image boundary acquisition method and device based on point cloud map and aircraft
CN111551167B (en) Global navigation auxiliary method based on unmanned aerial vehicle shooting and semantic segmentation
CN110276254B (en) Unmanned aerial vehicle-based automatic recognition and early warning method for bootlegged area bootlegged
Tan et al. Integrating Advanced Computer Vision and AI Algorithms for Autonomous Driving Systems
CN115240093B (en) Automatic power transmission channel inspection method based on visible light and laser radar point cloud fusion
US11769225B2 (en) Image processing apparatus, image processing method, and program
CN111765974B (en) Wild animal observation system and method based on miniature refrigeration thermal infrared imager
CN108320304A (en) A kind of automatic edit methods and system of unmanned plane video media
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
CN114973028A (en) Aerial video image real-time change detection method and system
CN117409339A (en) Unmanned aerial vehicle crop state visual identification method for air-ground coordination
CN114967731A (en) Unmanned aerial vehicle-based automatic field personnel searching method
CN115240089A (en) Vehicle detection method of aerial remote sensing image
CN109961043B (en) Single-tree height measuring method and system based on high-resolution image of unmanned aerial vehicle
CN110458945B (en) Automatic modeling method and system by combining aerial oblique photography with video data
CN116030194A (en) Air-ground collaborative live-action three-dimensional modeling optimization method based on target detection avoidance
CN108195359B (en) Method and system for acquiring spatial data
CN114494694B (en) Semantic mapping method under plantation scene
CN114581307A (en) Multi-image stitching method, system, device and medium for target tracking identification
CN113096016A (en) Low-altitude aerial image splicing method and system
Çömert et al. Detection of collapsed building from unmanned aerial vehicle data with object based image classification

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant