CN117173631A - Method and system for monitoring biodiversity - Google Patents
Method and system for monitoring biodiversity
- Publication number: CN117173631A (application number CN202311145503.3A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Image Processing (AREA)
Abstract
The application provides a method and a system for monitoring biodiversity. The monitoring system comprises a biodiversity monitoring device that includes an infrared sensor for sensing biological infrared signals in a monitoring area and a high-definition camera for capturing high-definition biological images in that area; a terminal edge computing device that controls the high-definition camera to capture high-definition biological images according to the biological infrared signals sensed by the infrared sensor; a biological species intelligent recognition processor that recognizes and classifies the high-definition biological images with a species recognition algorithm to obtain the distribution information of biological species; and a biodiversity visual monitoring platform that three-dimensionally reconstructs the distribution information of biological species from the biological infrared signals and high-definition biological images and displays it. The technical scheme of the application can solve the problems in the prior art that infrared-triggered cameras have a small monitoring range and are difficult to adapt to biodiversity monitoring requirements.
Description
Technical Field
The application relates to the technical field of digital monitoring, in particular to a method and a system for monitoring biodiversity.
Background
The world is gradually entering the urban age. Effective use of urban land and scientific management of natural ecosystems can benefit cities, their surrounding residents, and biodiversity at the same time, so cities are becoming an important component of solutions for curbing global biodiversity loss. Urban infrastructure plays an important role in biodiversity protection, and the role of cities in protecting biodiversity is becoming increasingly important.
With continuing interest in urban biodiversity, the objects people observe and monitor are no longer limited to large animals that have already been found; attention is increasingly shifting to small animals and plants that are not easily detected, because the initial stage of a biological invasion is often hard to discover, and by the time it is found, irreversible damage has already been done to the local ecological environment and biodiversity. To solve this problem, better meet current demands for biodiversity monitoring, and further improve monitoring capability, the requirements on the species coverage and monitoring range of camera monitoring systems are gradually rising. In recent years, with the continuous development of information and communication technology, Internet-of-Things cloud technology, and high-definition image acquisition technology, the various technical problems of monitoring biodiversity in natural environments are gradually being solved. The prior art mostly adopts intelligent automatic monitoring systems for biodiversity, in which infrared-triggered cameras are used to monitor various organisms in the natural environment.
However, the monitoring range of an infrared-triggered camera is usually small, so the monitoring range of most such camera monitoring systems only reaches about 10 meters and can only sense large animals; small animals such as hedgehogs, lynxes, and rabbits are difficult to sense and monitor. In addition, society's requirements for biodiversity monitoring have gradually risen, and demands on biodiversity in natural environments are also increasing, so real-time monitoring and timely feedback of image information are also required for both animals and plants.
Summary of the application
The application provides a biodiversity monitoring scheme that can solve the problems in the prior art that infrared-triggered cameras have a small monitoring range and are difficult to adapt to biodiversity monitoring requirements.
To this end, the 'one-recognition, multi-sensing' biodiversity monitoring system provided by this project features a large sensing space, a wide monitoring range, and high photo definition. In addition, the system can monitor animals, birds, and fish in real time, record the change process of vegetation in the area, capture and promptly feed back information on biological species in the current monitoring area, and issue timely early warnings of biological invasions on the designed operating system for biodiversity protection, which is of great significance for biodiversity protection and ecosystem research in Beijing.
To solve the above problems, according to a first aspect of the present application, the present application provides a biodiversity monitoring system, comprising:
the biological diversity monitoring device comprises an infrared sensor and a high-definition camera, wherein the infrared sensor is used for sensing biological infrared signals in a monitoring area, and the high-definition camera is used for capturing high-definition biological images in the monitoring area;
the terminal edge computing equipment is electrically connected with the biodiversity monitoring equipment and is used for controlling the high-definition camera to capture a high-definition biological image according to the biological infrared signals sensed by the infrared sensors;
the biological species intelligent recognition processor is electrically connected with the terminal edge computing equipment and is used for intelligently recognizing and classifying the high-definition biological images according to a species recognition algorithm to obtain the distribution information of biological species;
the biological diversity visual monitoring platform is electrically connected with the biological species intelligent identification processor and is used for three-dimensionally reconstructing the distribution information of the biological species by using the biological infrared signals and high-definition biological images, and for displaying that distribution information.
Preferably, in the above monitoring system, the biodiversity monitoring device further includes:
the power supply equipment, the control cradle head, and the sensing equipment; wherein,
the power supply equipment is respectively and electrically connected with the control holder and the sensing equipment and is used for respectively supplying power to the control holder and the sensing equipment;
the control cradle head is electrically connected with the sensing equipment and is used for acquiring a sensing signal of the sensing equipment and sending a movement and acquisition control instruction to the sensing equipment according to the sensing signal;
and the sensing equipment is used for moving and acquiring biological images according to the movement and acquisition control instructions of the control cradle head.
Preferably, in the above monitoring system, the sensing device includes:
the device comprises an infrared sensor, a high-definition camera, a sensor array and a wireless data transmission module; wherein,
the infrared sensor is used for sensing biological infrared signals in the monitoring area;
the high-definition camera is used for capturing high-definition biological images in the monitoring area;
the sensor array is internally provided with a plurality of signal interfaces for externally connecting a plurality of types of sensors;
the wireless data transmission module is respectively and electrically connected with the infrared sensor, the high-definition camera and the sensor array and is used for wirelessly uploading biological infrared signals, high-definition biological images and sensor signals obtained by various sensors.
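The wireless upload described above can be sketched as a simple payload assembler that bundles the infrared signal, a reference to the captured image, and the sensor-array readings into one message. The field names and JSON framing here are illustrative assumptions, not part of the application:

```python
import json
import time

def build_upload_payload(ir_signal, image_ref, extra_sensors):
    """Package one round of readings from the sensing device for
    wireless upload (field names are illustrative assumptions)."""
    return json.dumps({
        "timestamp": time.time(),       # acquisition time
        "ir_signal": ir_signal,         # raw infrared amplitude samples
        "image_ref": image_ref,         # handle of the captured HD image
        "sensor_array": extra_sensors,  # readings from externally attached sensors
    })

# Example: one IR burst plus two auxiliary sensors on the array.
payload = build_upload_payload(
    ir_signal=[0.1, 0.8, 0.9],
    image_ref="img_000123.jpg",
    extra_sensors={"smoke": 0.02, "water_temp_c": 18.5},
)
decoded = json.loads(payload)
```

Keeping the payload self-describing in this way lets the platform side accept readings from whichever sensors are plugged into the array's signal interfaces.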
Preferably, in the above monitoring system, the terminal edge computing device includes:
the biological infrared signal preprocessing module is used for performing feature processing on the biological infrared signal sensed by the infrared sensor to obtain a feature-processed biological infrared signal;
the signal data characteristic extraction module is used for extracting biological characteristics in the biological infrared signals;
the signal feedback capturing control module is used for controlling the high-definition camera to capture a high-definition biological image according to the biological characteristics;
and the biological information acquisition module is used for acquiring biological information from the high-definition biological image.
Preferably, in the above monitoring system, the biological species intelligent identification processor includes:
the image data preprocessing module is used for performing image enhancement and resolution processing on the high-definition biological image by using image enhancement techniques and a generative adversarial network to obtain preprocessed image data;
the image feature extraction module is used for performing feature extraction on the preprocessed image data by using a densely connected convolutional network to obtain image features;
the image feature modeling module is used for selecting model network parameters according to the transfer learning technology and constructing a biological species identification model by using the model network parameters and the image features;
the biological species identification module is used for inputting the image characteristics into the biological species identification model and labeling to obtain the biological species category.
Preferably, in the above monitoring system, the image data preprocessing module includes:
the image clipping unit is used for extracting and clipping the interested target area in the high-definition biological image;
the image filtering unit is used for filtering the interested target area by using a filtering algorithm to obtain image enhancement data;
and the generative adversarial network unit is used for training a generative adversarial network with the image enhancement data to obtain the preprocessed image data.
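Fig. 8 of the drawings references the Retinex algorithm for image enhancement. A minimal single-scale Retinex sketch on a grayscale image is shown below; the box blur used to estimate illumination is a simplifying assumption in place of the usual Gaussian surround:

```python
import math

def box_blur(img, radius=1):
    """Estimate the illumination component with a simple box blur
    (an assumption; Retinex normally uses a Gaussian surround)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def single_scale_retinex(img):
    """Single-scale Retinex: reflectance = log(image) - log(illumination)."""
    illum = box_blur(img)
    return [[math.log(img[y][x] + 1e-6) - math.log(illum[y][x] + 1e-6)
             for x in range(len(img[0]))] for y in range(len(img))]

# A uniform patch carries no reflectance detail, so SSR returns ~0 everywhere.
flat = [[100.0] * 4 for _ in range(4)]
enhanced = single_scale_retinex(flat)
```

Subtracting the log-illumination suppresses uneven lighting in field images, which is why Retinex-style enhancement is a natural fit before species recognition.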
Preferably, in the above monitoring system, the image feature modeling module includes:
a migration learning unit for learning migration knowledge and migration patterns from a source domain or a multi-network framework using a migration learning technique;
the parameter selection unit is used for selecting model network parameters of the deep convolutional neural network by using migration knowledge and migration modes;
the network construction unit is used for constructing a deep convolutional neural network according to a preset biological species category by using model network parameters;
the model building unit is used for inputting the image characteristics into the deep convolutional neural network for training and building a biological species identification model.
Preferably, in the above monitoring system, the biodiversity visual monitoring platform includes:
the image model building module is used for constructing a three-dimensional image model by using geographic information technology;
the information management fusion module is used for associating and fusing the three-dimensional image model and the distribution information of the biological species by using a data management tool, and establishing a comprehensive information database;
and the visual display module is used for extracting and displaying the distribution information of the biological species from the comprehensive information database.
According to a second aspect of the present application, there is also provided a method for monitoring biodiversity, applied in the biodiversity monitoring system provided in any of the above aspects, the method comprising:
sensing a biological infrared signal in the monitoring area by using an infrared sensor;
according to the biological infrared signals in the monitoring area, the high-definition camera is controlled to capture high-definition biological images in the monitoring area;
performing intelligent recognition and classification on the high-definition biological image according to a species recognition algorithm to obtain the distribution information of biological species;
and using the biological infrared signals and the high-definition biological images to carry out three-dimensional reconstruction on the distribution information of the biological species, and displaying the distribution information of the biological species.
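The four steps above can be strung together as a pipeline. The stub functions below are placeholders for the components described elsewhere in the application; their return values are illustrative assumptions:

```python
def sense_ir(area):
    """Step 1: infrared sensor senses biological IR signals (stubbed)."""
    return {"area": area, "signal": 0.9}

def capture_hd_image(ir):
    """Step 2: trigger the HD camera when a signal is present (stubbed)."""
    return {"image": "frame", "trigger": ir["signal"] > 0.5}

def identify_species(image):
    """Step 3: species recognition and classification (stubbed)."""
    return {"species": "bird", "count": 1, "source": image["image"]}

def reconstruct_and_display(ir, image, distribution):
    """Step 4: fuse IR signal and image for 3D reconstruction/display (stubbed)."""
    return {"display": distribution, "sources": [ir["signal"], image["image"]]}

ir = sense_ir("zone-1")
img = capture_hd_image(ir)
dist = identify_species(img)
view = reconstruct_and_display(ir, img, dist)
```

Each stage consumes the previous stage's output, mirroring the data flow from sensor to visual monitoring platform.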
Preferably, in the monitoring method, the step of intelligently identifying and classifying the high-definition biological image according to a species identification algorithm to obtain the distribution information of the biological species includes:
performing image enhancement and resolution processing on the high-definition biological image by using image enhancement techniques and a generative adversarial network to obtain preprocessed image data;
performing feature extraction on the preprocessed image data by using a densely connected convolutional network to obtain image features;
selecting model network parameters according to a transfer learning technology, and constructing a biological species identification model by using the model network parameters and image characteristics;
inputting the image features into a biological species identification model, and labeling to obtain the biological species category.
According to the biodiversity monitoring scheme provided by the application, the infrared sensor senses the biological infrared signals in the monitoring area, the high-definition camera captures high-definition biological images in the monitoring area, and the terminal edge computing device controls the high-definition camera to capture the images according to the biological infrared signals sensed by the infrared sensor. The biological species intelligent recognition processor recognizes and classifies the high-definition biological images with a species recognition algorithm to obtain the distribution information of biological species; the species recognition algorithm is an image-AI-based intelligent recognition algorithm that can recognize and classify the high-definition biological images efficiently and accurately, and the distribution information of biological species includes the types, positions, and numbers of the species. After the distribution information is obtained, the biodiversity visual monitoring platform uses the biological infrared signals and high-definition biological images to reconstruct it in three dimensions, so that the distribution information of biological species can be displayed visually.
In summary, the biodiversity monitoring scheme provided by this technical solution uses image-AI intelligent recognition technology to identify different biological species, which promotes biodiversity monitoring and protection work and facilitates ecological environment quality assessment of protected areas. In addition, the scheme adopts an AI intelligent recognition algorithm based on machine vision, has a wide monitoring range, and can intelligently monitor large animals, small animals, and plants, thereby enabling automatic monitoring of species diversity. It provides a solution for automatic monitoring of biological species and of biodiversity, and can solve the problems in the prior art that infrared-triggered cameras have a small monitoring range and are difficult to adapt to biodiversity monitoring requirements.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a bio-diversity monitoring system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the structure of a bio-diversity monitoring apparatus provided by the embodiment shown in FIG. 1;
FIG. 3 is a schematic diagram of an image data preprocessing module according to the embodiment shown in FIG. 1;
FIG. 4 is a schematic diagram of an image feature modeling module provided by the embodiment of FIG. 1;
FIG. 5 is a schematic structural view of a bio-diversity visual monitoring platform provided by the embodiment shown in FIG. 1;
FIG. 6 is a schematic flow chart of a method for monitoring biodiversity according to an embodiment of the present application;
FIG. 7 is a flow chart of a method for intelligently identifying and classifying high-definition biological images according to the embodiment shown in FIG. 6;
fig. 8 is a schematic diagram of a Retinex algorithm according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
The relevant reference numerals are as follows:
100-biodiversity monitoring equipment, 101-infrared sensor, 102-high definition camera, 103-power supply equipment, 1031-storage battery, 1032-solar charging controller, 1033-solar panel, 104-control cradle head, 1041-Linux core board, 1042-power module, 1043-4G communication module, 1044-EEPROM module, 1045-clock module, 1046-AD acquisition module, 1047-high definition cradle head camera module, 1048-infrared sensor detection module, 105-sensing equipment, 1051-sensor array, 1052-wireless data transmission module, 200-terminal edge computing equipment, 201-biological infrared signal preprocessing module, 202-signal data feature extraction module, 203-signal feedback capture control module, 204-biological information acquisition module, 300-biological species intelligent recognition processor, 301-image data preprocessing module, 3011-image clipping unit, 3012-image filtering unit, 3013-generative adversarial network unit, 302-image feature extraction module, 303-image feature modeling module, 3031-transfer learning unit, 3032-parameter selection unit, 3033-network construction unit, 3034-model building unit, 400-biodiversity visual monitoring platform, 4033-plant species visualization unit, 4034-fish species visualization unit.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The technical scheme in the prior art has the following technical problems:
in the prior art, camera monitoring systems are mostly used for intelligent automatic monitoring of biodiversity, and infrared-triggered cameras are mostly used to monitor various organisms in the natural environment. The monitoring range of an infrared-triggered camera is usually small, so the monitoring range of most such camera monitoring systems only reaches about 10 meters and can only sense large animals; small animals such as hedgehogs, lynxes, and rabbits are difficult to sense and monitor. In addition, society's requirements for biodiversity monitoring have gradually risen, and demands on biodiversity in natural environments are also increasing, so real-time monitoring and timely feedback of image information are also required for both animals and plants.
To solve the problems in the prior art, the biodiversity monitoring scheme provided by the embodiments of the application adopts Internet-of-Things technology, intelligent sensing technology, wireless communication technology, and high-definition automatic tracking camera technology, combined with intelligent recognition algorithms such as artificial intelligence, deep learning, and big data processing, to realize intelligent real-time recognition and monitoring of land animals, birds, fish, and vegetation in an ecosystem. It can perform remote wireless transmission, accurately process the transmitted information and feed it back to the client's operating system, and perform real-time biological monitoring of the monitoring area. The embodiments of the application build the biodiversity monitoring system by developing a 'one-recognition, multi-sensing' system using intelligent monitoring technologies such as the Internet of Things, artificial intelligence, cloud platforms, big data, and edge computing, which can realize all-weather, real-time dynamic monitoring and visual display of small animals, birds, and fish at key locations in the monitoring area; at the same time, time-lapse photography and video stitching are used to record the succession of vegetation communities in the surrounding space and analyze plant succession factors (fire succession, climatic succession, animal succession, and manual succession). The biodiversity monitoring system provided by the following embodiments of the application mainly comprises:
(1) Hardware layer: a biological species monitoring system based on the infrared sensor 101 and the high-definition camera is developed to realize real-time monitoring of terrestrial animals, plants, birds, and fish in a protected area.
(2) Algorithm layer: the biological species intelligent recognition processor 300 based on the Linux operating system is designed, with a species recognition algorithm based on the high-definition camera built into the processor.
(3) Platform layer: Web technology is used to realize real-time display of the monitored biological species, namely land animals, plants, birds, and aquatic fish.
In order to achieve the above objective, referring to fig. 1, fig. 1 is a schematic structural diagram of a bio-diversity monitoring system according to an embodiment of the present application. As shown in fig. 1, the bio-diversity monitoring system includes:
the biodiversity monitoring device 100. As shown in fig. 1, the biodiversity monitoring device 100 includes an infrared sensor 101 and a high-definition camera 102; the infrared sensor 101 is used for sensing biological infrared signals in the monitoring area, and the high-definition camera 102 is used for capturing high-definition biological images in the monitoring area. In addition, the biodiversity monitoring device 100 has built-in heat sensors, smoke sensors, and underwater sonar object monitoring sensors. It can thus perform image monitoring of most biological species in the monitored area, monitor fires that may occur in the area and raise alarms, and carry out real-time information acquisition and image transmission processing for monitored fish shoals.
The terminal edge computing device 200 is electrically connected with the biodiversity monitoring device 100 and is used for controlling the high-definition camera 102 to capture high-definition biological images according to the biological infrared signals sensed by the infrared sensor 101. The high-definition camera 102 mounted on the biodiversity monitoring device 100 can perform real-time video monitoring over the whole scene and transmits image data to the terminal edge computing device 200. The terminal edge computing device 200 identifies moving objects in the video stream in real time; when the infrared sensor 101 senses a nearby organism passing and picks up its biological infrared signal, the cradle head of the high-definition camera 102 and the camera focal length are quickly adjusted to track and shoot the organism in the sensing area, obtaining a high-definition biological image. The image is then transmitted to the biological species intelligent recognition processor 300 via 4G wireless transmission.
The biological species intelligent recognition processor 300 is electrically connected with the terminal edge computing device 200 and is used for recognizing and classifying the high-definition biological images with a species recognition algorithm to obtain the distribution information of biological species, where the distribution information includes the density, type, quantity, and location of the species. Training the convolutional-neural-network machine learning algorithm on the captured high-definition biological images and the characteristic information of various organisms improves the algorithm's recognition accuracy. Intelligent recognition of the received images allows the types and quantities of biological species to be recorded automatically and the organisms to be classified as animals, plants, birds, or fish, yielding the distribution information of biological species. The processor 300 is also capable of autonomous learning: it builds a new biological species label database from manually identified pictures and feature data of different species, realizes automatic recognition and monitoring of new species in the monitoring area, and gradually improves the biological recognition label database, thereby achieving intelligent monitoring of biological species.
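The extensible label database described above can be sketched with a nearest-centroid classifier: manually identified feature vectors define a centroid per species, and new species can be added at any time. This is a hedged stand-in for the convolutional neural network, chosen only to make the label-database mechanics concrete; the feature values are invented:

```python
def centroid(vectors):
    """Mean feature vector of one species' labeled examples."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

class SpeciesLabelDB:
    """Extensible label database: new species are added from manually
    identified examples, mirroring the autonomous-learning behavior
    described above (nearest-centroid stands in for the CNN)."""

    def __init__(self):
        self.centroids = {}

    def add_species(self, name, examples):
        self.centroids[name] = centroid(examples)

    def classify(self, feature):
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(feature, c))
        return min(self.centroids, key=lambda n: dist(self.centroids[n]))

db = SpeciesLabelDB()
db.add_species("hedgehog", [[0.9, 0.1], [1.0, 0.2]])
db.add_species("rabbit", [[0.1, 0.9], [0.2, 1.0]])
label = db.classify([0.15, 0.95])
```

Because `add_species` can be called after deployment, the database grows as new organisms are manually identified, which is the gradual-improvement behavior the processor relies on.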
The biodiversity visual monitoring platform 400 electrically connected with the biological species intelligent recognition processor 300 is used for three-dimensionally reconstructing the distribution information of the biological species by using biological infrared signals and high-definition biological images and displaying the distribution information of the biological species. By fusing the biological infrared signals sensed by the infrared sensor 101 and the multi-source data such as the high-definition image data shot by the high-definition camera 102, the three-dimensional reconstruction of the biological distribution in the monitoring area can be realized, and the types, the biological amounts and the real-time biological positions of the biological species in the monitoring area can be displayed. By the method, a visual display system for the regional biodiversity data can be constructed, so that the distribution information of the biological species is displayed.
The specific scheme for three-dimensional reconstruction of the distribution information of biological species by using biological infrared signals and high-definition biological images is as follows:
the embodiment of the application can adopt a point-cloud-based three-dimensional reconstruction technology. Because few three-dimensional models of fish are currently publicly available, the embodiment of the application attempts to represent a three-dimensional object or scene through acquired point cloud data without a known three-dimensional model, mainly using a combined active-and-passive vision method built around an RGB-D camera (namely the high-definition camera in the embodiment of the application); this approach is also called the Fusion family of methods. An RGB-D camera, also known as a depth camera, combines the advantages of active and passive sensors, consisting of a passive RGB camera and an active depth sensor. Point cloud data in the camera coordinate system are calculated from the information of the high-definition biological image acquired by the RGB-D camera and the internal parameters of the camera. Because three-dimensional reconstruction in most application scenes has the dual requirements of restoring both spatial structure and color texture, the combined active-and-passive vision approach is currently the most effective solution.
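As a hedged illustration of the back-projection step just described (computing camera-frame point cloud data from the depth image and the camera's internal parameters), the following minimal numpy sketch assumes a standard pinhole model; the intrinsic parameter names (fx, fy, cx, cy) are conventional, not values given in the application:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres) into camera-frame 3-D points.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    Pixels with zero depth (no return from the depth sensor) are dropped.
    """
    v, u = np.indices(depth.shape)          # pixel row/column grids
    z = depth.astype(np.float64)
    valid = z > 0
    x = (u[valid] - cx) * z[valid] / fx
    y = (v[valid] - cy) * z[valid] / fy
    return np.stack([x, y, z[valid]], axis=1)   # (N, 3) point cloud

# A flat wall 2 m away, seen by a toy 4x4 depth camera:
depth = np.full((4, 4), 2.0)
pts = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
print(pts.shape)   # (16, 3)
```

Each RGB pixel's color can then be attached to the corresponding 3-D point to form the colored point cloud used in the later fusion steps.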
Three-dimensional reconstruction of the distribution information of biological species based on the Fusion family of methods falls into two types in practice: three-dimensional reconstruction of static scenes, typically represented by KinectFusion, and three-dimensional reconstruction of dynamic scenes, typically represented by DynamicFusion. The most prominent feature of Fusion-family reconstruction, for either static or dynamic scenes, is the use of a TSDF model (truncated signed distance function model); some works instead use a surface-element (Surfel) representation (a surfel is simply a collection of attributes such as point position, normal, color, weight, radius, and time stamp). It should be noted that three-dimensional reconstruction of a dynamic scene is far more difficult than that of a static scene, although the difficulty drops considerably for reconstruction in which the topology does not change (such as driving a three-dimensional mesh template model).
Here the TSDF (Truncated Signed Distance Function) approach typically first selects the three-dimensional space to be modeled, for example a 2 m x 2 m x 2 m volume, and then divides this space into many small cubes at a resolution of typically 256 x 256 x 256 or 128 x 128 x 128, where each small cube is called a voxel.
Each voxel in the TSDF model stores the distance from that voxel to the nearest object surface. If the voxel lies in front of the object surface, it stores a positive value; if the voxel lies behind the object surface, it stores a negative value. Further, since the surface only needs to be localized within a narrow band around it, values that are too large or too small are truncated to 1 or -1, giving the truncated distance of the so-called TSDF model. Finally, by definition, the reconstructed surface is located where the TSDF equals 0, stated another way, where the TSDF value changes sign.
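The voxel values and the zero-crossing rule can be illustrated with a one-dimensional slice of a TSDF volume along a single camera ray (a simplified sketch; real TSDF fusion also weights and averages observations across frames, which is omitted here, and the voxel spacing and truncation band below are illustrative):

```python
import numpy as np

# 1-D slice of a TSDF volume along one camera ray: voxels every 1 cm,
# object surface measured at depth 0.105 m, truncation band 0.03 m.
voxel_depths = np.arange(0.0, 0.20, 0.01)
surface_depth = 0.105
trunc = 0.03

# Signed distance: positive in front of the surface, negative behind,
# then clamped ("truncated") to [-1, 1] after dividing by the band width.
sdf = surface_depth - voxel_depths
tsdf = np.clip(sdf / trunc, -1.0, 1.0)

# The reconstructed surface sits where the TSDF changes sign:
crossing = np.where(np.diff(np.sign(tsdf)) < 0)[0]
print(voxel_depths[crossing])   # surface localized near 0.10 m
```

In the full pipeline an algorithm such as marching cubes extracts this zero-level set over all three dimensions to produce the surface mesh.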
The main flow of the RGB-D camera for collecting point cloud data is as follows:
Step one, integrating the spatial point cloud data and color intensity data acquired by the RGB-D camera, and managing and outputting them in a project-oriented manner;
Step two, preprocessing the point cloud data using PCL (the Point Cloud Library, a modular cross-platform open-source C++ programming library for three-dimensional point cloud processing), including denoising, segmentation, filtering, registration, sampling and other operations, and outputting a feature-highlighted, data-simplified point cloud.
Step three, meshing the point cloud data, that is, approximating and fitting the point cloud with a series of meshes, generally triangular and quadrilateral meshes, thereby converting the three-dimensional representation from point cloud to mesh (Mesh).
And fourthly, mapping the color and texture information acquired by the RGB-D camera onto the grid model, and carrying out fine modification and beautification to output a vivid three-dimensional model.
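Step two's sampling operation can be illustrated without PCL itself; the sketch below is a library-free numpy analog of PCL's VoxelGrid downsampling filter (the function name and voxel size are illustrative, not from the application):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Collapse all points falling in the same cubic voxel to their centroid.

    Same idea as PCL's VoxelGrid filter: it simplifies a dense cloud
    while preserving its overall shape.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel key and average each group.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(np.float64)
    out = np.zeros((inverse.max() + 1, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(1000, 3))     # dense synthetic cloud
small = voxel_downsample(cloud, voxel_size=0.25)  # at most 4^3 = 64 voxels
print(len(cloud), "->", len(small))
```

Denoising, segmentation, and registration follow the same pattern of per-point or per-group operations, but with neighborhood statistics or correspondence search in place of the simple voxel averaging shown here.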
The design is mainly aimed at three-dimensional reconstruction of static scenes, so that three-dimensional reconstruction of fish shoals is mainly carried out by adopting the following algorithms:
kinectfusion: the system requires only one mobile low cost depth camera to be able to reconstruct arbitrary and more complex indoor scenes. The method comprises the following steps of integrating depth data streams acquired by a Kinect sensor into a global implicit surface model (TSDF model) corresponding to a current scene in real time, and tracking the relative relation between currently acquired depth frame data and the global implicit surface model by using an Iterative Closest Point (ICP) algorithm from coarse to fine, so as to obtain pose change of the Kinect sensor. But since the system can only reconstruct less than 7 cubic meters of volume, most of the small-scale reconstruction designs used in the early stages of the system are previewed.
Elastic fusion: elastfusion uses a representation of surface elements (surfels) and is used for small scene reconstruction. The local closed loops of a plurality of model-to-model are combined with the global closed loops of a larger scale, so that the distribution of the reconstructed map can be ensured to be as close as possible, and the global consistency of the reconstructed result is ensured; in addition, the algorithm is more efficient in detecting the discrete multi-point light source environment, and can obtain a better reconstruction result under the condition.
BundleFusion: an aligned color + depth data stream acquired by an RGB-D camera is used as input; inter-frame correspondences are matched first, global pose optimization is then performed to correct the overall drift, and the model is continuously and dynamically updated throughout the reconstruction process.
In the specific flow of the algorithm, for matching, a parallel global optimization method of sparse-then-dense is used. That is, sparse SIFT feature points are first used for coarse registration, because the sparse feature points themselves can be used for loop closure detection and relocalization. A finer registration is then performed using dense geometric and photometric consistency.
For pose optimization, a hierarchical local-to-global optimization method with two levels is used. At the lowest (first) level, every 10 consecutive frames form a chunk, with the first frame serving as the key frame, and pose optimization is performed on all frames within the chunk. At the second level, only the key frames of all chunks are correlated and then globally optimized. This layering has the following benefits: because non-key frames can be stripped out, the data stored and to be processed are reduced; and the hierarchical optimization reduces the number of unknowns in each optimization, ensuring that the method scales to large scenes with small drift.
In dense scene reconstruction, one key point is symmetric updating of the model: to integrate an updated pose estimate for a frame, the frame's old contribution is first removed (de-integrated) and the frame is then re-integrated at the new pose. In theory, the model becomes more accurate as more frames are integrated.
In summary, the biological diversity monitoring system provided by the embodiment of the application uses image AI intelligent identification technology to identify different biological species, which promotes biodiversity monitoring and protection work and benefits the ecological environment quality assessment of an ecological protection area. In addition, the biodiversity monitoring scheme of the present technical solution adopts a machine-vision-based AI intelligent recognition algorithm with a wide monitoring range; it can intelligently monitor large animals, small animals, and plants, thereby automating the monitoring of biological species diversity. It provides a solution for automatic monitoring of biological species and of biodiversity, and addresses the problems of the prior art, in which infrared-triggered cameras have a small monitoring range and struggle to meet the requirements of biodiversity monitoring.
Specifically, as a preferred embodiment, as shown in fig. 2, in the above-mentioned monitoring system, the biodiversity monitoring device 100 further includes:
A power supply device 103, a control cradle head 104 and a sensing device 105; wherein,
the power supply device 103 is electrically connected with the control holder 104 and the sensing device 105, and is used for supplying power to the control holder 104 and the sensing device 105, respectively. As shown in fig. 2, the power supply apparatus 103 includes a storage battery 1031, a solar charge controller 1032, and a solar panel 1033; the solar energy and lithium battery pack combined power supply mode is adopted to respectively provide power for the control cradle head 104 and the sensing equipment 105.
The control cradle head 104 is electrically connected with the sensing equipment 105 and is used for acquiring the sensing signals of the sensing equipment 105 and sending movement and acquisition control instructions to the sensing equipment 105 according to those signals. The core of the sensing device 105 is the data acquisition/controller 1053, which plays the central control role in receiving the sensor system's signals and controlling the camera, and has the various functional programs of an operating system built in. As shown in fig. 2, the control pan-tilt 104 includes a Linux core board 1041, a power module 1042 electrically connected to the Linux core board 1041, a 4G communication module 1043, an EEPROM module 1044, a clock module 1045, an AD acquisition module 1046, a high-definition pan-tilt camera module 1047, and an infrared sensor detection module 1048 electrically connected to the AD acquisition module 1046. The Linux core board 1041 is the core component of the control pan-tilt 104; it runs an operating system and can receive the data sent by the 4G communication module 1043, the AD acquisition module 1046, and the high-definition pan-tilt camera module 1047. Most components of the control pan-tilt 104 can be centrally housed in an equipment box for protection.
The sensing device 105 is used for moving and acquiring biological images according to the movement and acquisition control instructions of the control cradle head 104. The sensing device 105 comprises functional modules such as the infrared sensor 101, the high-definition camera 102, the sensor array 1051, and wireless data transmission. The infrared sensor 101 is arranged in the monitoring area; when it captures an organism appearing in the current area, it sends the capture information to the control cradle head 104, and the high-definition camera 102 promptly turns toward the direction from which the monitoring signal was sent to collect biological images, gathering raw data for the subsequent accurate identification of biological species. The image acquisition system can also collect and compare information on the various organisms in the current monitoring area at times set by the system. When data not yet stored in the data set appears, feedback is provided to the user, and the data are automatically generated and saved into a new data set.
As a preferred embodiment, as shown in fig. 2, the sensing device 105 includes:
the infrared sensor 101, the high-definition camera 102, the sensor array 1051 and the wireless data transmission module 1052; wherein,
an infrared sensor 101 for sensing a biological infrared signal in the monitoring area; the infrared sensor 101 can capture the biological infrared signal of living beings in the current area, upload the biological infrared signal to the control cradle head 104, send to the terminal edge computing device 200 through the control cradle head 104, and control the high-definition camera 102 to capture the high-definition biological image through the computation of the terminal edge computing device 200.
The high-definition camera 102 is used for capturing a high-definition biological image in the monitoring area; after receiving the biological infrared signal sent by the control cradle head 104, the terminal edge computing device 200 sends a motion control instruction to the control cradle head 104, and the control cradle head 104 rotates the high-definition camera 102 according to the motion control instruction, so as to control the high-definition camera 102 to capture a high-definition biological image.
The sensor array 1051 is internally provided with a plurality of signal interfaces for externally connecting multiple types of sensors. The sensor array 1051 reserves interfaces for various signals and can be configured with various air quality detection sensors (such as smoke concentration and negative oxygen ion concentration sensors) and weather detection sensors (such as temperature and humidity, wind speed, and wind direction sensors). As can be seen from the above technical content, the technical solution provided by the embodiments of the present application generally includes two types of sensors. One type serves biological species image detection: the infrared sensor 101 and the high-definition camera 102, whose data are used for biological species image detection. The other type serves fire detection and alarm: for example, a smoke concentration sensor and a heat sensing module (an infrared sensing module may also be used) connected externally to the sensor array 1051 provide the corresponding smoke sensing data. Alarms are issued via an alarm bell and alarm messages on the user's visual page.
The wireless data transmission module 1052 is electrically connected with the data acquisition/controller 1053, and is used for wireless uploading of biological infrared signals, high-definition biological images and sensor signals obtained by various types of sensors.
The data acquisition/controller 1053 is configured to acquire the signals of the infrared sensor 101, the high-definition camera 102, and the sensor array 1051, and to send the biological infrared signals, the high-definition biological images, and the sensor signals obtained from the various types of sensors to the wireless data transmission module 1052 for wireless uploading.
According to the technical scheme provided by the embodiment of the application, various signal interfaces are reserved through the sensor array 1051, various air quality detection sensors and weather detection sensors can be configured, wherein air quality detection sensing data can provide accurate judgment basis for the air quality of the current ecological environment, sensing data of smoke can be used for timely monitoring the occurrence of fire and the like, and weather sensing data can provide auxiliary decision information for the stable operation of a hardware system. The wireless data transmission module 1052 transmits the acquired data in real time and stores the acquired data in a server side, and can realize real-time visualization of various sensing data by combining with platform special software, and can also realize remote control of the hardware system by the server side.
In addition, in order to efficiently process biological species image information and infrared sensing information, the embodiment of the application can integrate the modules of the terminal edge computing equipment 200 onto one functional platform and design the biodiversity monitoring system on a Linux operating system. Because Linux is open source, powerful, reliable, stable, and flexible, and additionally supports a large number of microprocessor architectures, hardware devices, graphics stacks, and communication protocols, it is a good choice of operating system for this project.
Specifically, as a preferred embodiment, as shown in fig. 1, the terminal edge computing device 200 in the above-mentioned monitoring system includes:
the biological infrared signal preprocessing module 201 is configured to perform signal feature processing on the biological infrared signal sensed by the infrared sensor 101, and obtain a biological infrared signal after feature processing.
The signal data feature extraction module 202 is configured to extract biological features from the biological infrared signal. The biological features comprise multidimensional characteristics of animals and plants, such as size, morphology, and function, from which the finely distinguishable characteristics of different animal features and plant growth features can be screened.
The signal feedback capturing control module 203 is configured to control the high-definition camera 102 to capture a high-definition biological image according to the biological characteristics. Through the above biological characteristics, the high-definition camera 102 is controlled to rotate and the biodiversity monitoring device 100 is controlled to move, so that the high-definition camera 102 is controlled to focus the living beings, and high-definition biological images of the living beings are captured.
The biological information acquisition module 204 is configured to acquire biological information from the high-definition biological image. The biological information acquisition module 204 can acquire biological information from high-definition biological images, realizing high-precision automatic identification of different biological species in real scenes. The biological recognition algorithms adopted for acquiring the biological information include the target detection algorithms Faster RCNN and YOLOv5.
According to the technical scheme provided by the embodiment of the application, the operating system can run multithreaded work tasks simultaneously and allows multiple users to log in and operate functions at the same time. When the infrared sensor 101 senses that a new species is present, the living track of the new species is monitored in real time to obtain biological infrared signals and thus biological features, and the signal feedback capture control module 203 controls the high-definition camera 102 to capture high-definition biological images so as to acquire the biological information in them. When a species is judged capable of negatively affecting the local ecological system, the local ecological protection facilities are started automatically and the user terminal is contacted for handling; the fire-fighting facilities of the ecological area can be intelligently controlled through processing of the smoke and heat sensor detection signals, and remote equipment control can be performed from the user terminal.
In addition, in the technical scheme of the application, after the high-definition camera 102 captures the biological image of the monitoring area, the problems of image blurring, unclear edges, low background contrast and the like may exist. To solve the above-mentioned problems, as a preferred embodiment, as shown in fig. 1, in a monitoring system, a biological species intelligent recognition processor 300 includes:
the image data preprocessing module 301 is configured to perform image enhancement and resolution processing on the high-definition biological image using image enhancement processing technology and a generative adversarial network, so as to obtain preprocessed image data. In the embodiment of the application, the biological images captured in the monitoring area by the high-definition camera installed on the control cradle head 104 serve as the research object, and image enhancement processing technologies such as Gaussian filtering, adaptive median filtering, and image cropping are adopted to highlight the target region of interest, ensuring high-definition processing of biological images in motion. Considering the influence of camera distance on the pixels of captured biological images and the influence of image brightness differences on target detection, image super-resolution methods such as the generative adversarial network (Generative Adversarial Net, GAN) in deep learning can be combined to solve the blurring and low background contrast caused by biological motion. To ensure high-definition processing of biological images in motion, the moving-object recognition algorithms adopted in the embodiment of the application include a video segmentation algorithm and a Kalman filtering tracking algorithm based on gray images.
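The Kalman filtering tracking algorithm mentioned above can be sketched for a single image coordinate with a constant-velocity model; the matrices, noise levels, and target trajectory below are illustrative assumptions, since the application does not specify its filter design:

```python
import numpy as np

# Constant-velocity Kalman filter for one image coordinate (a hypothetical
# sketch of the tracking idea, not the patent's exact model).
# State x = [position, velocity]; only the position is observed.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (dt = 1 frame)
H = np.array([[1.0, 0.0]])               # measurement: position only
Q = np.eye(2) * 1e-3                     # process noise covariance
R = np.array([[0.5]])                    # measurement noise covariance

x = np.array([0.0, 0.0])                 # initial state estimate
P = np.eye(2)                            # initial state covariance

rng = np.random.default_rng(1)
truth = np.arange(20) * 2.0              # target moves 2 px per frame
for z in truth + rng.normal(0, 0.5, 20): # noisy detections
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the new measurement
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(round(x[1], 1))   # estimated velocity, close to the true 2.0 px/frame
```

In the tracking application, the filter's one-step prediction is what lets the cradle head be steered ahead of the moving organism between detections.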
The image enhancement processing technology mainly adopts the following steps:
1. Gray world algorithm: the gray world algorithm is based on the gray world assumption, which holds that for an image with many color variations, the averages of its R, G, and B components tend toward the same gray value Gray. Physically, the gray world method assumes that the average reflectance of light by a natural scene is, overall, a constant value that is approximately "gray". The color balance algorithm selected by the image enhancement processing technology forcibly applies this assumption to the image to be processed, eliminating the influence of ambient light and recovering the original scene image. There are generally two methods for determining the Gray value.
1) Using a fixed value, 128 is typically taken as a gray value for 8-bit images (0 to 255).
2) Calculating gain coefficients, and respectively calculating average values avgR, avgG and avgB of three channels, wherein:
Avg = (avgR + avgG + avgB)/3
kr = Avg/avgR
kg = Avg/avgG
kb = Avg/avgB
and recalculating each pixel value by using the calculated gain coefficient to form a new picture.
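The gain-coefficient variant above can be condensed into a short numpy sketch (the synthetic red-cast test image is illustrative):

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: scale each channel so that its mean moves
    toward the common average of the three channel means (the
    gain-coefficient variant: kr = Avg/avgR, etc.)."""
    img = img.astype(np.float64)
    avg_r = img[..., 0].mean()
    avg_g = img[..., 1].mean()
    avg_b = img[..., 2].mean()
    avg = (avg_r + avg_g + avg_b) / 3.0
    gains = np.array([avg / avg_r, avg / avg_g, avg / avg_b])
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# A synthetic image with a strong red cast:
img = np.zeros((8, 8, 3), np.uint8)
img[..., 0], img[..., 1], img[..., 2] = 200, 100, 100
balanced = gray_world(img)
print(balanced[0, 0])   # [133 133 133] -- the cast is removed
```

On a uniformly tinted image like this one, all three channels land exactly on the common average; on natural images the correction is only as good as the gray-world assumption itself.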
2. Retinex algorithm: the retina-cortex (Retinex) theory holds that the world is intrinsically colorless, and the world seen by the human eye is the result of light interacting with matter; that is, the image mapped into the human eye is related to the long-wave (R), medium-wave (G), and short-wave (B) light and to the reflective properties of the object. The ambient illumination component, i.e., the incident light L, strikes the reflecting object R and then enters the observer's eyes, producing the image I seen by the observer. The image I seen by the human eye is then calculated as follows:
I(x,y)=R(x,y)L(x,y)
where I is the image seen by the human eye, R is the reflectance component of the object, L is the ambient illumination component, and (x, y) is the corresponding position in the two-dimensional image. The algorithm recovers R by estimating L, which can be obtained by convolving I with a Gaussian blur kernel, expressed by the formulas:
log(R)=log(I)-log(L)
L=F*I
where F is a Gaussian blur filter and * denotes the convolution operation.
Here σ is called the Gaussian surround space constant (Gaussian Surround Space Constant), the so-called scale in the algorithm, and it has a relatively large influence on the result of the image processing; for a two-dimensional image, r^2 equals x^2 + y^2 at the corresponding position. That is, the illumination component is generally taken to be the result of Gaussian-filtering the original image.
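A minimal single-scale Retinex sketch following the formulas above (log R = log I − log L, with L estimated by Gaussian filtering); the Gaussian blur is implemented directly in numpy to stay self-contained, and the scale σ = 2 is an illustrative choice:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur (the surround function F in the formulas)."""
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-(xs ** 2) / (2 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="edge")
    # blur rows, then columns (separability of the Gaussian kernel)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def single_scale_retinex(img, sigma=2.0):
    """log R = log I - log L, with the illumination L estimated as a
    Gaussian blur of the image I."""
    img = img.astype(np.float64) + 1.0        # avoid log(0)
    illumination = gaussian_blur(img, sigma)
    return np.log(img) - np.log(illumination)

# An illumination gradient over a flat grey scene:
scene = np.linspace(50, 200, 64).reshape(8, 8)
reflectance = single_scale_retinex(scene)
print(reflectance.shape)   # (8, 8)
```

Because the smooth gradient is absorbed into the estimated illumination L, the recovered log-reflectance is nearly flat, which is exactly the illumination-invariance the Retinex step contributes to the preprocessing pipeline.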
3. Automatic Color Equalization (ACE)
The ACE algorithm is derived from the Retinex algorithm. It can adjust image contrast to achieve the human eye's color constancy and brightness constancy. Taking into account the spatial relationships of color and brightness within the image, it performs adaptive filtering of local features, achieving local, nonlinear adjustment of the image's brightness, color, and contrast while simultaneously satisfying the gray world assumption and the white patch assumption.
The first step: perform color/spatial-domain adjustment on the image to complete its chromatic aberration correction and obtain a spatial-domain reconstructed image;
where R_c(p) is an intermediate result computed for each channel c as
R_c(p) = Σ_{j≠p} r(I_c(p) − I_c(j)) / d(p, j)
Here I_c(p) − I_c(j) is the difference in channel values between pixels p and j, d(p, j) is a distance metric function, and r(x) is a luminance performance (slope) function and is odd. This step adapts to local image contrast: r(x) amplifies small differences and saturates large ones, expanding or compressing the dynamic range according to local content. Generally, r(x) takes the saturation form r(x) = min(max(α·x, −1), 1) for some slope α.
The second step: dynamically expand the corrected image. The ACE algorithm operates on a single color channel; a color picture must be processed separately for each channel.
There is a simple linear expansion:
R(x) = round[127.5 + ω·R_c(p)]
where ω is the slope of the line segment [(0, m_c), (255, M_c)], and:
m_c = min[R_c(p)], M_c = max[R_c(p)]
The third step: the enhanced channel is obtained by linearly stretching R_c(p) into [0, 1]:
O_c(p) = (R_c(p) − m_c)/(M_c − m_c)
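The ACE steps can be sketched naively for one channel of a tiny image; the full-image neighborhood, the slope α = 5, and the clipped-linear r(x) are illustrative assumptions not fixed by the text, and the O(N²) loop is only practical at toy sizes:

```python
import numpy as np

def ace_channel(ch, alpha=5.0):
    """Naive Automatic Color Equalization for one channel.

    r(x) = clip(alpha * x, -1, 1) is the slope/saturation function and
    d(p, j) the Euclidean pixel distance; every other pixel is used as
    the neighborhood (O(N^2), fine only for a toy image)."""
    h, w = ch.shape
    ys, xs = np.indices((h, w))
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(np.float64)
    vals = ch.ravel().astype(np.float64) / 255.0

    rc = np.zeros_like(vals)
    for i in range(len(vals)):
        d = np.linalg.norm(coords - coords[i], axis=1)
        d[i] = np.inf                                   # skip j == p
        rc[i] = np.sum(np.clip(alpha * (vals[i] - vals), -1, 1) / d)

    # dynamic expansion of R_c onto the full output range
    mc, Mc = rc.min(), rc.max()
    out = (rc - mc) / (Mc - mc) * 255.0
    return out.reshape(h, w).astype(np.uint8)

rng = np.random.default_rng(2)
tiny = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
enhanced = ace_channel(tiny)
print(enhanced.min(), enhanced.max())   # full use of the 0-255 range
```

Practical ACE implementations approximate the inner sum (e.g. by subsampling or polynomial approximation of r) precisely to escape this quadratic cost.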
the image feature extraction module 302 is configured to perform feature extraction on the preprocessed image data by using the densely connected convolutional network, so as to obtain image features.
On the basis of the image enhancement and resolution processing, in order to overcome interference factors such as the body-shape bending of different species in the ecological area, environmental occlusion, and shooting angle when identifying biological species, data-driven methods from artificial intelligence such as the densely connected convolutional network (Densely Connected Convolutional Networks, DenseNet) are adopted to deeply mine multidimensional information such as the size, shape, and function of animals and plants and to screen out the finely distinguishable characteristics of different animal features and plant growth features.
On the basis of identifying the biological species from the preprocessed image data, the Yolov7 algorithm is mainly adopted as the practical target detection algorithm. The YOLO algorithm is the most typical representative of one-stage target detection algorithms; it identifies and localizes objects with a deep neural network, runs fast, and can be used in real-time systems. Yolo uses a single CNN model to achieve end-to-end target detection. The input picture is first resized to 448x448 and fed into the CNN network, and the network's prediction is then post-processed to obtain the detected targets. Compared with the R-CNN algorithm, it is a unified framework, is faster, and its training process is end-to-end. The Yolo CNN network divides the input picture into SxS grid cells; each cell is responsible for detecting objects whose center points fall within it, and predicts B bounding boxes together with a confidence score for each box. The so-called confidence actually covers two aspects: the likelihood that the bounding box contains an object, and the accuracy of the bounding box. The former is denoted Pr(object); when the bounding box is background (contains no object), Pr(object) = 0, and when the bounding box contains a target, Pr(object) = 1. The accuracy of a bounding box is characterized by the IOU (intersection over union) of the predicted box with the actual box (ground truth), denoted IOU(pred, truth). Confidence can therefore be defined as Pr(object) x IOU(pred, truth). The size and location of a bounding box are characterized by 4 values: (x, y, w, h), where (x, y) is the center coordinate of the bounding box and w and h are its width and height.
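The IOU and confidence definitions discussed above can be made concrete with a small sketch (corner-format (x1, y1, x2, y2) boxes are assumed here for simplicity; YOLO itself parameterizes boxes as center (x, y) plus width and height):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def confidence(pr_object, pred_box, truth_box):
    """YOLO-style box confidence: Pr(object) * IOU(pred, truth)."""
    return pr_object * iou(pred_box, truth_box)

# A prediction offset by 2 px from a 10x10 ground-truth box:
pred, truth = (2, 2, 12, 12), (0, 0, 10, 10)
print(round(iou(pred, truth), 3))   # 64 / 136 = 0.471
print(confidence(1.0, pred, truth))
```

During training the confidence target is exactly this product, so cells with no object are pushed toward zero while object-bearing cells learn to predict their own localization quality.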
In this design, 10000 to 20000 independently taken pictures are used as the data set, which is divided into three parts: a training set, a validation set, and a test set. By training on our own training set, the final image recognition accuracy is expected to exceed 90%.
The image feature modeling module 303 is configured to select model network parameters according to the transfer learning technique and to construct a biological species identification model using the model network parameters and the image features. Through transfer learning methods such as deep transfer learning (Deep Transfer Learning) and domain adaptation (Domain Adaptation), relevant knowledge and patterns are learned autonomously from the source domain and multi-network frameworks, and the model network parameter selection is transferred to the recognition model for the biological target domain, enabling unsupervised training of biological species recognition. On this basis, different standards are calibrated according to the classification criteria of different species, and a biological species identification model based on a deep convolutional network is constructed by fusing the global and local features of the organisms, realizing high-precision automatic identification of different species in real scenes.
The biological species identification module 304 is configured to input the image features into the biological species identification model and label them to obtain the biological species category. The image feature modeling module 303 constructs the biological species identification model; after the image features are input into the model, they are learned and classified to obtain the biological species category corresponding to the image features, and the image is then labeled with that category. The biological recognition algorithms employed by the biological species identification module 304 here include the target detection algorithms Faster RCNN and YOLOv5.
As a preferred embodiment, as shown in fig. 3, the image data preprocessing module 301 includes:
An image clipping unit 3011, used for extracting and clipping the target region of interest in the high-definition biological image.
An image filtering unit 3012, used for filtering the target region of interest with a filtering algorithm to obtain image enhancement data.
A generative adversarial network (GAN) unit 3013, used for feeding the image enhancement data into a generative adversarial network for training to obtain the preprocessed image data.
According to the technical scheme provided by this embodiment of the application, the image clipping unit 3011 extracts and clips the target region of interest in the high-definition biological image; a filtering algorithm then filters the target region of interest to obtain image enhancement data; finally, the generative adversarial network unit 3013 trains a generative adversarial network on the image enhancement data to obtain the preprocessed image data. Taking the high-definition biological image as the research object, image enhancement techniques such as Gaussian filtering, adaptive median filtering, and image clipping highlight the target region of interest and enable high-definition processing of organisms in motion. In addition, combining super-resolution methods based on deep-learning generative adversarial networks alleviates the blurring and low background contrast caused by biological motion, and reduces the influence of camera pixel resolution on near and far subjects and of image brightness differences on target detection.
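As an illustration of the filtering step above, a plain 3x3 median filter (one of the enhancement techniques named; the adaptive variant adds a window-growing rule not shown here) removes isolated salt noise from an image patch:

```python
def median_filter3(img):
    """3x3 median filter on a 2D grid of pixel values; edges replicated."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny = min(max(y + dy, 0), h - 1)   # clamp at image border
                    nx = min(max(x + dx, 0), w - 1)
                    window.append(img[ny][nx])
            window.sort()
            out[y][x] = window[4]   # median of the 9 window values
    return out

# A flat patch with one salt-noise pixel: the filter removes the spike.
patch = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter3(patch)[1][1])  # 10
```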
As a preferred embodiment, as shown in fig. 4, in the above-mentioned monitoring system, the image feature modeling module 303 includes:
The transfer learning unit 3031 is configured to learn transferable knowledge and patterns from a source domain or a multi-network framework using transfer learning techniques.
A parameter selection unit 3032, configured to select model network parameters of the deep convolutional neural network using the transferred knowledge and patterns.
The network construction unit 3033 is configured to construct a deep convolutional neural network for the preset biological species categories using the model network parameters.
The model construction unit 3034 is used for inputting the image features into the deep convolutional neural network for training to construct a biological species identification model.
According to the technical scheme provided by this embodiment of the application, the transfer learning unit 3031 autonomously learns relevant knowledge and patterns from the source domain and the multi-network framework using deep transfer learning, domain adaptation, and other transfer learning techniques; the parameter selection unit 3032 then uses this transferred knowledge to select the network parameters of the biological target-domain identification model, and the network construction unit 3033 uses those parameters to realize unsupervised training of the deep convolutional neural network. On this basis, different standards are calibrated according to the classification criteria of different species, and a biological species identification model based on a deep convolutional network is constructed by fusing the global and local features of organisms, realizing high-precision automatic identification of different species in real scenes.
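The parameter-transfer idea can be sketched abstractly: copy the source network's lower-layer ("conv") parameters into the target species-identification model and freeze them, while the classification head is re-initialised for the target domain. All layer names and weight values below are illustrative stand-ins, not a real framework API:

```python
# Hypothetical parameter dictionary standing in for a trained source network.
source_params = {
    "conv1.weight": [0.1, 0.2],   # generic low-level feature extractor
    "conv2.weight": [0.3, 0.4],
    "fc.weight":    [0.9, 0.8],   # source-domain classifier head
}

def transfer_parameters(source, target_layers, freeze_prefixes=("conv",)):
    """Copy matching layers from the source model into a new target model.

    Layers whose name starts with a freeze prefix are marked frozen
    (reused as-is); layers absent from the source are re-initialised
    and left trainable on the target domain.
    """
    target, frozen = {}, set()
    for name in target_layers:
        if name in source:
            target[name] = list(source[name])   # transferred weights
            if name.startswith(freeze_prefixes):
                frozen.add(name)
        else:
            target[name] = [0.0, 0.0]           # new head, to be trained
    return target, frozen

target, frozen = transfer_parameters(
    source_params, ["conv1.weight", "conv2.weight", "species_head.weight"])
print(sorted(frozen))                  # ['conv1.weight', 'conv2.weight']
print(target["species_head.weight"])   # [0.0, 0.0] -- new head to be trained
```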
In addition, as a preferred embodiment, as shown in fig. 5, in the above-mentioned monitoring system, the bio-diversity visual monitoring platform 400 includes:
the image model building module 401 is configured to build a three-dimensional image model using geographic information technology. The biodiversity visual monitoring platform 400 is developed based on the web and deployed on a remote server, and the biodiversity visual monitoring platform 400 can be used for ecological monitoring state evaluation. The image model building module 401 of the biodiversity visual monitoring platform 400 builds based on high-definition images and vector map data, builds a three-dimensional image model of a monitored area by means of a two-dimensional and three-dimensional geographic information technology (GIS) technology, can provide scientific, rapid, dynamic and visual management tools, and manages and displays basic data including geological features of a water area, environment, a three-dimensional water space model and the like and thematic data of water quality, fish, birds, plant types and the like.
The information management fusion module 402 is configured to associate and fuse the three-dimensional image model with the distribution information of the biological species using a data management tool, building a comprehensive information database. Taking the stored water-area data (the basic data and thematic data above) as the core, a data management tool is developed; with the geospatial information of the three-dimensional image model as the base carrier, thematic information such as fish data, plant resources, bird data, water quality data, and meteorological data is associated, fused, and managed to construct the comprehensive information database.
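A toy sketch of such a comprehensive information database, using Python's built-in sqlite3 module (the table and column names are illustrative assumptions; the platform's actual schema is not specified in the text):

```python
import sqlite3

# In-memory sketch: thematic records (fish, birds, plants, water quality...)
# keyed to grid cells of the three-dimensional image model.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE observations (
    grid_cell TEXT,      -- location in the 3-D image model
    theme     TEXT,      -- 'fish', 'bird', 'plant', 'water_quality', ...
    species   TEXT,
    count     INTEGER)""")
rows = [
    ("A1", "bird", "egret", 4),
    ("A1", "fish", "carp", 12),
    ("B2", "bird", "egret", 2),
]
db.executemany("INSERT INTO observations VALUES (?, ?, ?, ?)", rows)

# Fuse thematic data per grid cell, as the display module might query it.
per_cell = db.execute(
    "SELECT grid_cell, SUM(count) FROM observations "
    "GROUP BY grid_cell ORDER BY grid_cell").fetchall()
print(per_cell)  # [('A1', 16), ('B2', 2)]
```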
The visual display module 403 is configured to extract and display distribution information of biological species from the comprehensive information database. As shown in fig. 1, the visual display module 403 includes a terrestrial species visualization unit 4031, an avian species visualization unit 4032, a plant species visualization unit 4033, and a fish species visualization unit 4034.
In addition, during biological species identification, the pictures received by the remote identification computer are analyzed with the intelligent biological recognition algorithm to obtain the types and corresponding numbers of organisms in the high-definition biological image, and a lidar then reconstructs a three-dimensional image of the organisms' surroundings. The identification results are stored in the comprehensive information database; unknown species that cannot be identified are specially marked in the picture by the software to facilitate subsequent manual identification. Meanwhile, animal and plant data, air quality data, and water quality data are analyzed to evaluate the biodiversity and ecological conditions of the current ecological area.
In addition, based on the same concept as the above system embodiments, an embodiment of the present application further provides a method for monitoring biodiversity, used to implement the above system of the present application. Because the problem-solving principle of the method is similar to that of the system embodiment, the method has at least all the beneficial effects of the above embodiments, which are not described in detail here.
Referring to fig. 6, fig. 6 is a flow chart of a method for monitoring biodiversity according to an embodiment of the present application, which is used in the biodiversity monitoring system provided in any of the above embodiments, as shown in fig. 6, the method for monitoring biodiversity includes:
S110: An infrared sensor is used to sense biological infrared signals within the monitoring area.
S120: and controlling the high-definition camera to capture the high-definition biological image in the monitoring area according to the biological infrared signal in the monitoring area.
S130: and carrying out intelligent recognition and classification on the high-definition biological images according to a species recognition algorithm to obtain the distribution information of biological species.
S140: and using the biological infrared signals and the high-definition biological images to carry out three-dimensional reconstruction on the distribution information of the biological species, and displaying the distribution information of the biological species.
In summary, the biodiversity monitoring method provided by the technical scheme of the application uses image AI intelligent recognition technology to identify different biological species, which promotes biodiversity monitoring and protection work and benefits the ecological environment quality assessment of ecological protection areas. In addition, the scheme adopts a machine-vision-based AI intelligent recognition algorithm with a wide monitoring range, enabling intelligent monitoring of large animals, small animals, and plants, and thus automatic monitoring of biological species diversity. It provides a solution for automatic species monitoring and biodiversity monitoring, and addresses the problems that infrared-triggered cameras in the prior art have a small monitoring range and struggle to meet biodiversity monitoring requirements.
As a preferred embodiment, as shown in fig. 7, in the above-mentioned monitoring method, step S130 (intelligently identifying and classifying the high-definition biological images according to a species identification algorithm to obtain the distribution information of biological species) comprises the following steps:
S131: And performing image enhancement and resolution processing on the high-definition biological image according to an image enhancement processing technology and a generative adversarial network to obtain preprocessed image data.
S132: And performing feature extraction on the preprocessed image data by using a densely connected convolutional network to obtain image features.
S133: and selecting model network parameters according to the transfer learning technology, and constructing a biological species identification model by using the model network parameters and the image characteristics.
S134: inputting the image features into a biological species identification model, and labeling to obtain the biological species category.
As a preferred embodiment, the method for monitoring biodiversity provided by the embodiments of the present application mainly includes the following steps:
(1) Image acquisition: before image processing, a high-definition camera acquires a two-dimensional image of the three-dimensional object. When acquiring the two-dimensional image, the influence of parameters such as illumination conditions and the geometric characteristics of the camera on subsequent image processing must be considered.
(2) Camera calibration: the object in space is restored from the image shot by the high-definition camera. Here, a simple linear relation is assumed between the image shot by the high-definition camera and the object in three-dimensional space: image = M × object, where the matrix M can be seen as the geometric model of camera imaging. The parameters in M are the camera parameters, typically obtained by experiment and calculation. This process of solving for the parameters is known as camera calibration.
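The relation image = M × object can be illustrated with a hand-rolled homogeneous-coordinate projection; the focal length and principal point below are assumed example values, not calibrated parameters:

```python
def project(M, point3d):
    """Apply a 3x4 projection matrix to a 3-D point (homogeneous maths)."""
    X = list(point3d) + [1.0]                       # homogeneous world point
    u, v, w = (sum(M[r][c] * X[c] for c in range(4)) for r in range(3))
    return (u / w, v / w)                           # perspective divide

f = 800.0                 # focal length in pixels (assumed)
cx, cy = 320.0, 240.0     # principal point (assumed)
M = [[f, 0, cx, 0],
     [0, f, cy, 0],
     [0, 0, 1,  0]]       # K.[I|0]: no rotation/translation in this sketch

print(project(M, (0.5, 0.25, 2.0)))   # (520.0, 340.0)
```

Camera calibration amounts to recovering the entries of M (or of its intrinsic/extrinsic factors) from images of known reference points.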
(3) Feature extraction: features mainly comprise feature points, feature lines, and regions. In most cases, feature points serve as matching primitives, and the form of feature point extraction is closely tied to the matching strategy, so the matching method must be decided before extracting feature points. Feature point extraction algorithms can be summarized as methods based on directional derivatives, methods based on image brightness contrast, and methods based on mathematical morphology.
(4) Stereo matching: stereo matching establishes correspondences between the image pair according to the extracted features, i.e., putting the imaging points of the same physical point in the two different images into one-to-one correspondence. Care must be taken with scene disturbances caused by factors such as illumination conditions, noise, geometric distortion, surface physical properties, and camera characteristics.
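A minimal illustration of stereo matching on a single rectified scanline, using sum-of-squared-differences (SSD) window matching over synthetic intensity values (real matchers add epipolar rectification and robustness against the disturbances noted above):

```python
def match_scanline(left, right, x, win=1, max_disp=4):
    """Find the disparity minimising SSD between windows on one scanline."""
    def ssd(d):
        return sum((left[x + k] - right[x - d + k]) ** 2
                   for k in range(-win, win + 1))
    # Only disparities whose windows stay inside both scanlines.
    candidates = [d for d in range(max_disp + 1)
                  if 0 <= x - d - win and x + win < len(left)]
    return min(candidates, key=ssd)

# Synthetic scanlines: the right image is the left shifted by 2 pixels.
left  = [0, 0, 9, 7, 5, 0, 0, 0, 0]
right = [9, 7, 5, 0, 0, 0, 0, 0, 0]

print(match_scanline(left, right, x=3))  # 2
```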
(5) Three-dimensional reconstruction: the three-dimensional scene information can be recovered by combining the calibrated intrinsic and extrinsic camera parameters with a sufficiently accurate matching result. Because reconstruction precision is affected by matching precision and by errors in the camera's intrinsic and extrinsic parameters, the preceding steps must be performed carefully so that every link achieves high precision and low error; only then can a reasonably accurate stereo vision system be designed.
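Once a disparity has been matched, depth recovery for a rectified stereo rig reduces to the classic relation Z = f·B/d; the rig parameters below are assumed for illustration only:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic stereo relation Z = f * B / d (f and d in the same pixel units)."""
    return focal_px * baseline_m / disparity_px

# Assumed rig: 800 px focal length, 0.1 m baseline, matched disparity of 2 px.
z = depth_from_disparity(2.0, 800.0, 0.1)
print(z)  # 40.0 (metres)
```

Repeating this over every matched point yields the point cloud from which the three-dimensional scene is reconstructed.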
In summary, in the biodiversity monitoring scheme provided by this embodiment of the application, the infrared sensor and the high-definition camera are jointly applied to the biodiversity detection process to obtain the biological infrared signal and the high-definition biological image; image AI intelligent recognition technology then recognizes and classifies the different biological species to obtain their distribution information, promoting biodiversity monitoring and protection work and benefiting the ecological environment quality assessment of ecological protection areas. Meanwhile, a biodiversity monitoring system is built by adopting a machine-vision-based AI intelligent recognition method, providing a solution for automatic species monitoring and biodiversity monitoring. In addition, the scheme overcomes the traditional monitoring method's over-reliance on expert experience and its heavy consumption of manpower and material resources.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, et cetera does not indicate any ordering; these words may be interpreted as names.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (10)
1. A biodiversity monitoring system, comprising:
the biological diversity monitoring device (100), wherein the biological diversity monitoring device (100) comprises an infrared sensor (101) and a high-definition camera (102), the infrared sensor (101) is used for sensing biological infrared signals in a monitoring area, and the high-definition camera (102) is used for capturing high-definition biological images in the monitoring area;
a terminal edge computing device (200) electrically connected with the biodiversity monitoring device (100), wherein the terminal edge computing device (200) is used for controlling the high-definition camera (102) to capture a high-definition biological image according to biological infrared signals sensed by the infrared sensor (101);
The biological species intelligent recognition processor (300) is electrically connected with the terminal edge computing equipment (200) and is used for intelligently recognizing and classifying the high-definition biological images according to a species recognition algorithm to obtain the distribution information of biological species;
and the biological diversity visual monitoring platform (400) is electrically connected with the biological species intelligent identification processor (300) and is used for three-dimensionally reconstructing the distribution information of the biological species by using the biological infrared signals and the high-definition biological images and displaying the distribution information after three-dimensional reconstruction.
2. The monitoring system according to claim 1, wherein the biodiversity monitoring device (100) further comprises:
a power supply device (103), a control cradle head (104) and a sensing device (105); wherein,
the power supply equipment (103) is respectively and electrically connected with the control holder (104) and the sensing equipment (105) and is used for respectively supplying power to the control holder (104) and the sensing equipment (105);
the control cradle head (104) is electrically connected with the sensing equipment (105) and is used for acquiring a sensing signal of the sensing equipment (105) and sending a movement and acquisition control instruction to the sensing equipment (105) according to the sensing signal;
The sensing equipment (105) is used for moving and acquiring biological images according to the movement and acquisition control instructions of the control cradle head (104).
3. The monitoring system according to claim 2, wherein the sensing device (105) comprises:
the infrared sensor (101), the high-definition camera (102), the sensor array (1051) and the wireless data transmission module (1052); wherein,
the infrared sensor (101) is used for sensing biological infrared signals in the monitoring area;
the high-definition camera (102) is used for capturing high-definition biological images in the monitoring area;
the sensor array (1051) is internally provided with a plurality of signal interfaces for externally connecting a plurality of types of sensors;
the wireless data transmission module (1052) is respectively and electrically connected with the infrared sensor (101), the high-definition camera (102) and the sensor array (1051) and is used for wirelessly uploading the biological infrared signals, the high-definition biological images and sensor signals obtained by the sensors of various types.
4. The monitoring system of claim 1, wherein the terminal edge computing device (200) comprises:
the biological infrared signal preprocessing module (201) is used for performing signal characteristic processing on the biological infrared signals sensed by the infrared sensors to obtain the biological infrared signals after the characteristic processing;
A signal data feature extraction module (202) for extracting biological features in the biological infrared signal;
the signal feedback capturing control module (203) is used for controlling the high-definition camera to capture the high-definition biological image according to the biological characteristics;
and the biological information acquisition module (204) is used for acquiring biological information from the high-definition biological image.
5. The monitoring system of claim 1, wherein the biological species intelligent recognition processor (300) comprises:
an image data preprocessing module (301) for performing image enhancement and resolution processing on the high-definition biological image according to an image enhancement processing technology and a generative adversarial network to obtain preprocessed image data;
the image feature extraction module (302) is used for carrying out feature extraction on the preprocessed image data by using a densely connected convolutional network to obtain image features;
an image feature modeling module (303) for selecting model network parameters according to a transfer learning technique, constructing a biological species identification model using the model network parameters and the image features;
and the biological species identification module (304) is used for inputting the image characteristics into the biological species identification model and labeling to obtain the biological species category.
6. The monitoring system according to claim 5, wherein the image data preprocessing module (301) comprises:
an image clipping unit (3011) for extracting and clipping a target region of interest in the high-definition biological image;
an image filtering unit (3012) for filtering the target region of interest by using a filtering algorithm to obtain image enhancement data;
a generative adversarial network unit (3013) for feeding the image enhancement data into a generative adversarial network for training, resulting in the preprocessed image data.
7. The monitoring system according to claim 5, wherein the image feature modeling module (303) comprises:
a migration learning unit (3031) for learning migration knowledge and migration patterns from a source domain or a multi-network framework using a migration learning technique;
a parameter selection unit (3032) for selecting model network parameters of the deep convolutional neural network using the migration knowledge and the migration pattern;
a network construction unit (3033) for constructing a deep convolutional neural network according to a preset biological species category by using the model network parameters;
and the model construction unit (3034) is used for inputting the image characteristics into the deep convolutional neural network for training and constructing a biological species identification model.
8. The monitoring system according to claim 1, wherein the biodiversity visualization monitoring platform (400) comprises:
an image model building module (401) for building a three-dimensional image model using geographic information technology;
an information management fusion module (402) for associating and fusing the three-dimensional image model and the distribution information of the biological species by using a data management tool, and establishing a comprehensive information database;
and the visual display module (403) is used for extracting and displaying the distribution information of the biological species from the comprehensive information database.
9. A method for monitoring biodiversity according to any of claims 1 to 8, characterized in that the method for monitoring biodiversity comprises:
sensing a biological infrared signal in the monitoring area by using an infrared sensor;
according to the biological infrared signals in the monitoring area, a high-definition camera is controlled to capture a high-definition biological image in the monitoring area;
performing intelligent recognition and classification on the high-definition biological image according to a species recognition algorithm to obtain the distribution information of biological species;
and carrying out three-dimensional reconstruction on the distribution information of the biological species by using the biological infrared signals and the high-definition biological image, and displaying the distribution information of the biological species.
10. The method according to claim 9, wherein the step of intelligently identifying and classifying the high-definition biological image according to a species identification algorithm to obtain distribution information of biological species comprises:
performing image enhancement and resolution processing on the high-definition biological image according to an image enhancement processing technology and a generative adversarial network to obtain preprocessed image data;
performing feature extraction on the preprocessed image data by using a densely connected convolutional network to obtain image features;
selecting model network parameters according to a transfer learning technology, and constructing a biological species identification model by using the model network parameters and the image characteristics;
inputting the image features into the biological species identification model, and labeling to obtain the biological species category.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311145503.3A CN117173631A (en) | 2023-09-06 | 2023-09-06 | Method and system for monitoring biodiversity |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311145503.3A CN117173631A (en) | 2023-09-06 | 2023-09-06 | Method and system for monitoring biodiversity |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117173631A true CN117173631A (en) | 2023-12-05 |
Family
ID=88931423
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311145503.3A Pending CN117173631A (en) | 2023-09-06 | 2023-09-06 | Method and system for monitoring biodiversity |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117173631A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118111987A (en) * | 2024-02-29 | 2024-05-31 | 生态环境部南京环境科学研究所 | Partition detection device for biodiversity protection achievement analysis based on big data |
CN118111987B (en) * | 2024-02-29 | 2024-09-27 | 生态环境部南京环境科学研究所 | Partition detection device for biodiversity protection achievement analysis based on big data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||